text
string | source
string |
|---|---|
A Fourier-based inference method for learning interaction kernels in particle systems Grigorios A. Pavliotis∗Andrea Zanoni† Abstract We consider the problem of inferring the interaction kernel of stochastic interacting particle systems from observations of a single particle. We adopt a semi-parametric approach and represent the interaction kernel in terms of a generalized Fourier series. The basis functions in this expansion are tailored to the problem at hand and are chosen to be orthogonal polynomials with respect to the invariant measure of the mean-field dynamics. The generalized Fourier coefficients are obtained as the solution of an appropriate linear system whose coefficients depend on the moments of the invariant measure, and which are approximated from the particle trajectory that we observe. We quantify the approximation error in the Lebesgue space weighted by the invariant measure and study the asymptotic properties of the estimator in the joint limit as the observation interval and the number of particles tend to infinity, i.e. the joint large time-mean field limit. We also explore the regime where an increasing number of generalized Fourier coefficients is needed to represent the interaction kernel. Our theoretical results are supported by extensive numerical simulations. AMS subject classifications. 35Q70, 35Q84, 42C10, 60J60, 62M20. Key words. Interacting particle systems, statistical inference, interaction kernel, invariant measure, mean-field limit, McKean-Vlasov PDE, orthogonal polynomials. 1 Introduction Stochastic interacting particle systems find applications in many areas related to social sciences [18], collective behavior [35], pedestrian dynamics [24], physics [22], biology [46], and machine learning [45]. Recently, tremendous progress has been made in the qualitative and quantitative understanding of such systems, from different perspectives: modeling, analysis, quantitative propagation of chaos [15,28], optimal control [8], as well as the development of numerical methods for solving the nonlinear, nonlocal SDEs and PDEs that we obtain in the mean-field limit [11,23,30]. We refer to [9,10] for a recent review and references to the literature. In addition, several important contributions have been made to the development of efficient inference methodologies for interacting particle systems and their mean-field limit. Statistical inference, learning, data assimilation, and inverse problems for McKean SDEs and McKean–Vlasov PDEs are topics of great current interest. ∗Department of Mathematics, Imperial College London, London SW7 2AZ, UK, g.pavliotis@imperial.ac.uk. †Centro di Ricerca Matematica Ennio De Giorgi, Scuola Normale Superiore, Pisa, Italy, an- drea.zanoni@sns.it. 1arXiv:2505.05207v1 [math.ST] 8 May 2025 In particular, the wealth of data in real-world problems pushes towards data-driven models, and therefore learning parameters or functions in mathematical models from observed data is fundamental. A partial list of recent activities on inference for mean-field SDEs and PDEs includes kernel methods [29,31], constrast functions based on a pseudo-likelihood [3], maximum likelihood estimation [17], stochastic gradient descent [38,44], approximate likelihood based on an empirical approximation of the invariant measure [19], method of moments [40], and eigenfunction martingale estimating functions [39,41]. An important observation upon which our previous work on inference for McKean SDEs is based is that, under the assumption of uniform propagation of chaos, the nonlinear mean-field SDE can be replaced by a linear (in the sense of Mckean) SDE, obtained by calculating the convolution
|
https://arxiv.org/abs/2505.05207v1
|
term in the drift with respect to the (unique) invariant measure of the process. A detailed analysis of this approach can be found in [41]. Furthermore, a fully nonparametric estimation of the interaction potential has been studied using different methodologies based on deconvolution [2], data-driven kernel estimators [16], the method of moments [13], least squares [29], and projection techniques [12]. Nonparametric inference for the McKean–Vlasov PDE can also be formulated as an inverse PDE problem and a Bayesian approach can be applied [37]. The goal of this paper is to extend the parametric inference methodologies that were developed recently in [39,40] to a semiparametric setting, combining the method of moments [13,40] with a spectral-theoretic approach [20,21,36,39] based on generalized Fourier expansions. The main idea behind our approach is to expand the interaction kernel into an appropriate orthonormal basis that is tailored to the problem at hand. We will consider a system of Ninteracting particles moving in one dimension dX(n) t=−V′(X(n) t) dt−1 NN/summationdisplay i=1W′(X(n) t−X(i) t) dt+√ 2σdB(n) t, X(n) 0∼ν, n = 1,...,N,(1.1) wheret∈[0,T],V,W :R→Rare the confining and interaction potentials, respectively, andσ > 0is the diffusion coefficient. Moreover, {B(n) t}N n=1are standard independent one-dimensional Brownian motions. We note that the method developed in this paper can be, in principle, extended to the multidimensional case, as we discuss in Section 5. We assume chaotic initial conditions with distribution ν, which is, of course, independent of the Brownian motions {B(n) t}N n=1. Then, in the limit as N→∞, and under appropriate assumptions on the confining and interaction potentials, each particle converges in law to the solution of the mean-field McKean SDE dXt=−V′(Xt) dt−(W′∗u(t,·))(Xt) dt+√ 2σdBt, X0∼ν,(1.2) whereu(t,·)is the density of the law of the process Xtwith respect to the Lebesgue measure, and solves the McKean–Vlasov PDE ∂u ∂t(t,x) =∂ ∂x/parenleftbig(V′(x) + (W′∗u(t,·))(x))u(t,x)/parenrightbig+σ∂2u ∂x2(t,x), u(x,0) dx=ν(dx).(1.3) Moreover, the density ρof the invariant measure satisfies the stationary McKean–Vlasov equation d dx/parenleftig V′(x) + (W′∗ρ)(x))ρ/parenrightig +σd2ρ dx2= 0, (1.4) 2 or, equivalently, the self-consistency equation ρ(x) =1 Ze−1 σ(V(x)+(W∗ρ)(x))withZ=/integraldisplay Re−1 σ(V(x)+(W∗ρ)(x))dx. (1.5) As is well known, without convexity assumptions on interaction and / or confining potentials, the McKean SDE (1.2)can have multiple stationary states [14,48]. However, in this work, we place ourselves in the (strict) convexity/uniform propagation of chaos setting [28,32] that ensures the uniqueness of stationary states for the McKean-Vlasov dynamics. We also need the initial distribution to have bounded moments of any order. In particular, we make the following assumption. Assumption 1.1.The process Xtthat solves the McKean SDE (1.2)is ergodic and admits a unique invariant measure with density ρthat satisfies equations (1.4)and(1.5). Moreover, there exists a constant/tildewideC > 0, independent of TandN, such that for all t∈[0,T], n= 1,...,N, andp≥1 /parenleftig E/bracketleftig (X(n) t−Xt)2/bracketrightig/parenrightig1/2≤/tildewideC√ N,/parenleftig E/bracketleftig/vextendsingle/vextendsingle/vextendsingleX(n) t/vextendsingle/vextendsingle/vextendsinglep/bracketrightig/parenrightig1/p ≤/tildewideCand (E[|Xt|p])1/p≤C, whereXtis given by setting the Brownian motion Bt=B(n) tin equation (1.2). Remark 1.2.Assumption 1.1 is satisfied, for example, in the setting of [32], where the confining and interaction potentials are strictly convex and convex, respectively. However, we believe that uniform propagation of chaos is not needed for the inference methodology presented in this work and
|
https://arxiv.org/abs/2505.05207v1
|
that the analysis can be extended to the setting considered in [34]. This is confirmed in the numerical experiments that are presented in Section 4. The analysis of our inference methodology in the presence of phase transitions will be presented elsewhere. In this work, we consider the problem of learning the interaction kernel W′(or equivalently the interaction potential W) by observing a single particle from the interacting particle system(1.1). The confining potential is assumed to be known. It is important to note that it is not possible, in general, to identify both the confining and interaction potentials from a single-particle observation. This was already noted in our previous work [39,40]. In this paper, we make this remark more precise; see Proposition 3.4. We can consider the case where we either observe a continuous trajectory (Yt)t∈[0,T]or discrete-time samples from it {/tildewideYi}I i=0, where Yt=X(¯n) tand/tildewideYi=X(¯n) ∆i, for a particle ¯nin(1.1)and sampling rate ∆ =T/I. Clearly, because of the exchangeability of the interacting particle system, it does not matter which particle we are observing. Assuming now that W′∈L2(ρ)withρthe unique invariant measure of the mean-field system, we consider the truncated generalized Fourier series expansion W′(x)≃K/summationdisplay k=0βkψk(x), (1.6) where{ψk}∞ k=0are orthogonal polynomials with respect to the invariant measure ρthat, in practice, will be approximated using the available observations. We then propose to estimate the coefficients {βk}K k=0in the expansion by solving a linear system that is obtained by imposing appropriate constraints on the moments of ρ. These conditions, in turn, are derived from the stationary Fokker–Planck equation (1.4). An outline of 3 the main steps of this approach is provided in Algorithm 1 in Section 3. We note here the link between this part of the proposed statistical inference methodology and the problem of identifying all generators of reversible diffusions–or, equivalently, all Gibbs measures–whose eigenfunctions are orthogonal polynomials with respect to the invariant measure; see [5, Section 2.7] and, in particular, [6] for details. Our method builds upon and connects our two previous papers [13,39] on the application of the method of moments and of eigenfunction expansions to inference problems for interacting particle systems, respectively. In [39] the main limitation is that the drift, interaction, and diffusion functions are assumed to be polynomials, and, therefore, the inference problem is fully parametric. In an interesting recent paper [13], this hypothesis is eliminated, and no assumptions are made about the functional form of potentials. However, for their methodology to work, multiple stationary particle trajectories, for example, 4, of the mean-field SDE must be observed. In addition, the basis functions in the generalized Fourier series expansion in (1.6)have to be chosen appropriately a priori. The two main innovations of the work reported in this paper are: (1) we only need to observe a singlenon-stationary particle trajectory of the interacting particle system and not of the mean-field SDE; and (2) we consider basis functions that are purpose-built and tailored to the stochastic interacting particle system. These basis functions are orthogonal polynomials with respect to the invariant measure of the mean-field dynamics. In the following, we summarize the main contributions of
|
https://arxiv.org/abs/2505.05207v1
|
this paper. •We extend our previous work on the application of the method of moments to inference problems for stochastic interacting particle systems. In particular, we introduce a semi-parametric methodology for learning the interaction kernel from the observation of a single-particle trajectory. •For the semi-parametric representation of the interaction kernel, we use orthogonal polynomials with respect to the invariant measure of the McKean SDE as basis functions for the Fourier series expansion. In particular, we construct purpose-built orthonormal basis functions that are tailored to the problem at hand. These basis functions are calculated numerically using the available particle trajectory observations. •We present a detailed convergence analysis of the proposed methodology, computing estimates of the approximation error as a function of the number of data, particles, and basis functions in the Fourier expansion. We provide thus theoretical guarantees for the convergence and accuracy of our method. •We present several numerical experiments that highlight the effectiveness of our inference methodology. Outline. The rest of the paper is organized as follows. In Section 2 we consider orthogonal polynomials with respect to the invariant measure of the mean-field dynamics, which we approximate using the available observations from a single interacting particle, and in Section 3 we present a generalized method of moments to infer the interaction kernel. The convergence analysis in the limit of infinite data and particles for both orthogonal polynomials and the kernel estimator is provided in Sections 2.1 and 3.2, respectively. Finally, we present different numerical experiments to test our method in Section 4, and we suggest possible developments in Section 5. 4 2 Orthonormal polynomials w.r.t. the invariant measure We consider the orthonormal basis consisting of orthogonal polynomials for the weighted spaceL2(ρ), whereρis the invariant measure of the mean-field dynamics. Starting from the set of monomials {xk}∞ k=0, the orthogonal polynomials {ψk}∞ k=0can be derived using the Gram–Schmidt orthonormalization procedure. ψk(x) = 1 ifk= 0, xk−/summationtextk−1 j=0ψj(x)/integraltext Rykψj(y)ρ(y) dy/vextenddouble/vextenddouble/vextenddoublexk−/summationtextk−1 j=0ψj(x)/integraltext Rykψj(y)ρ(y) dy/vextenddouble/vextenddouble/vextenddouble L2(ρ)ifk≥1. We mention that it is always possible to construct orthogonal polynomials with respect to a given probability measure on the real line, provided that the measure has finite moments of all orders. We refer to [47] for more details. Notice that, since {ψk}∞ k=0are polynomials, the Gram–Schmidt procedure is only dependent on the moments {M(r)}∞ r=0of the measure ρ M(r)=Eρ[Xr] =/integraldisplay Rxrρ(x) dx, where the superscript ρdenotes the fact that the expectation is computed with respect to the measure ρ. In particular, the k-th basis function is given by the determinant of a Hankel matrix with an additional row made of monomials ψk(x) =1 ckdet M(0)M(1)M(2)··· M(k) M(1)M(2)M(3)···M(k+1) ............... M(k−1)M(k)M(k+1)···M(2k−1) 1x x2···xk =:1 ckdet(Mk),(2.1) whereckis the normalization constant that ensures ∥ψk∥L2(ρ)= 1. Using the Laplace expansion for the determinants, we then write ψk(x) =1 ckk/summationdisplay j=0λkjxj, (2.2) where λkj= (−1)k+jdet M(0)···M(j−1)M(j+1)··· M(k) M(1)··· M(j)M(j+2)···M(k+1) .................. M(k−1)···M(k+j−2)M(k+j)···M(2k−1) =:(−1)k+jdet(Λkj).(2.3) Therefore, the normalization constant satisfies c2 k=/integraldisplay Rd k/summationdisplay j=0λkjxj 2 ρ(x) dx=k/summationdisplay i=0k/summationdisplay j=0λkiλkj/integraldisplay Rdxi+jρ(x) dx=k/summationdisplay i=0k/summationdisplay j=0λkiλkjM(i+j). (2.4) 5 We now aim to approximate the basis functions using the available observations (Yt)t∈[0,T] or{/tildewideYi}I i=0.
|
https://arxiv.org/abs/2505.05207v1
|
Since for the construction of ψkwe only need the moments {M(r)}2k−1 r=0, it is sufficient to estimate them using the empirical moments /tildewiderM(r) T,N=1 T/integraldisplayT 0Yr tdt,/tildewiderM(r) T,I=1 II/summationdisplay i=1/tildewideYr i, (2.5) depending on whether we have continuous-time or discrete-time observations. We can then build an approximation {/tildewideψk}∞ k=0of the orthonormal basis using formula (2.1), where the exact moments are replaced by the empirical moments. Notice that we denote by/tildewideλkjand /tildewideckthe coefficients and the normalization constant of the polynomial/tildewideψk, respectively. Remark 2.1.From now on, for the sake of simplicity, we will only consider the case of continuous-time observations. Nevertheless, all the analysis presented here still holds in the case of discrete-time observations, and therefore the methodology introduced in the next section can still be applied. We emphasize the fact that our approach only relies on the approximation of the moments of the invariant measure of the mean-field dynamics, and, due to the ergodic theorem, it does not matter whether we use discrete-time or continuous-time observations to estimate the moments. To simplify the notation, we will also remove the subscripts TandNwhen it is clear from the context that the referred quantities depend on the observation time and the number of particles. We finally recall the result from [40, Lemma 4.3], which, due to Assumption 1.1, using ergodicity and propagation of chaos, and assuming the particles to be initially distributed accordingly to the invariant measure of the mean-field dynamics, i.e. ν(dx) =ρ(x) dx, quantifies the error given by the approximation of the moments. In particular, there exists a constant/tildewideC > 0independent of TandN, such that for all q∈[1,2) /parenleftig E/bracketleftig/vextendsingle/vextendsingle/vextendsingle/tildewiderM(r)−M(r)/vextendsingle/vextendsingle/vextendsingleq/bracketrightig/parenrightig1 q≤/tildewideC/parenleftbigg1√ T+1√ N/parenrightbigg , (2.6) and this estimate will be useful throughout the paper. We note that, as explained in [40, Remark 4.4], the stationarity assumption is made only to simplify the analysis. Due to geometric ergodicity and uniform propagation of chaos, it should be sufficient to start either at a deterministic initial condition or at a distribution that has, for example, finite relative entropy with respect to the invariant measure and finite moments of all orders. In fact, in all the numerical experiments in Section 4, we do not start at stationarity, and the particle trajectory that we observe and that we use to estimate the moments is not stationary. In the next section, we show the convergence of the approximated orthogonal polynomials /tildewideψkto the corresponding ψkinL2(ρ)asT,N→∞. 2.1Convergence analysis for the approximated orthogonal polynomials Before presenting the main result of this section, that is, the convergence of/tildewideψktoψk, we need the following estimates regarding the coefficients/tildewideλkjof the polynomials and the normalization constants /tildewideck. Lemma 2.2. Under Assumption 1.1, for all k,j≥0and for all q∈[1,2)there exists a constantC=C(k)>0, independent of TandN, such that /parenleftig E/bracketleftig/vextendsingle/vextendsingle/vextendsingleλkj−/tildewideλkj/vextendsingle/vextendsingle/vextendsingleq/bracketrightig/parenrightig1 q≤C(k)/parenleftbigg1√ T+1√ N/parenrightbigg . 6 Proof.First, by [27, Theorem 2.12], we have /vextendsingle/vextendsingle/vextendsingleλkj−/tildewideλkj/vextendsingle/vextendsingle/vextendsingleq=/vextendsingle/vextendsingle/vextendsingledet(Λkj)−det(/tildewideΛkj)/vextendsingle/vextendsingle/vextendsingleq≤kq/vextenddouble/vextenddouble/vextenddoubleΛkj−/tildewideΛkj/vextenddouble/vextenddouble/vextenddoubleq 2max/braceleftig ∥Λkj∥2,/vextenddouble/vextenddouble/vextenddouble/tildewideΛkj/vextenddouble/vextenddouble/vextenddouble 2/bracerightigq(k−1), which, due to the Hölder’s inequality with exponent 1/q+ 1/2∈(1,3/2], implies E/bracketleftig/vextendsingle/vextendsingle/vextendsingleλkj−/tildewideλkj/vextendsingle/vextendsingle/vextendsingleq/bracketrightig ≤kqE/bracketleftbigg/vextenddouble/vextenddouble/vextenddoubleΛkj−/tildewideΛkj/vextenddouble/vextenddouble/vextenddoubleq 2+1 2/bracketrightbigg2q q+2E/bracketleftigg max/braceleftig ∥Λkj∥2,/vextenddouble/vextenddouble/vextenddouble/tildewideΛkj/vextenddouble/vextenddouble/vextenddouble 2/bracerightigq(k−1)(2+q) 2−q/bracketrightigg2−q 2+q , where we remark that q/2 + 1∈[3/2,2). Then, since the spectral norm is bounded by the Frobenius norm and due to the
|
https://arxiv.org/abs/2505.05207v1
|
uniform boundedness of the moments and the estimate (2.6), following the proof of [40, Lemma 4.5(i)] we obtain the desired result. We finally remark that the dependence of Conjis not explicitly emphasized since j≤k. Lemma 2.3. Under Assumption 1.1, for all k≥0and for all q∈[1,2)there exists a constantC=C(k)>0, independent of TandN, such that (i)/parenleftig E/bracketleftig/vextendsingle/vextendsingle/vextendsinglec2 k−/tildewidec2 k/vextendsingle/vextendsingle/vextendsingleq/bracketrightig/parenrightig1 q≤C(k)/parenleftbigg1√ T+1√ N/parenrightbigg , (ii) ( E[|ck−/tildewideck|q])1 q≤C(k)/parenleftbigg1√ T+1√ N/parenrightbigg . Proof.From the definitions of ckin equation (2.4)and of its empirical counterpart /tildewideck, using Jensen’s inequality we deduce that E/bracketleftig/vextendsingle/vextendsingle/vextendsinglec2 k−/tildewidec2 k/vextendsingle/vextendsingle/vextendsingleq/bracketrightig ≤Ck/summationdisplay i=0k/summationdisplay j=0E/bracketleftig/vextendsingle/vextendsingle/vextendsingleλkiλkjM(i+j)−/tildewideλki/tildewideλkj/tildewiderM(i+j)/vextendsingle/vextendsingle/vextendsingleq/bracketrightig , which, due to the triangle inequality, implies E/bracketleftig/vextendsingle/vextendsingle/vextendsinglec2 k−/tildewidec2 k/vextendsingle/vextendsingle/vextendsingleq/bracketrightig ≤k/summationdisplay i=0k/summationdisplay j=0E/bracketleftig/vextendsingle/vextendsingle/vextendsingleλkiλkj/parenleftig M(i+j)−/tildewiderM(i+j)/parenrightig/vextendsingle/vextendsingle/vextendsingleq/bracketrightig +k/summationdisplay i=0k/summationdisplay j=0E/bracketleftig/vextendsingle/vextendsingle/vextendsingleλki/parenleftig λkj−/tildewideλkj/parenrightig /tildewiderM(i+j)/vextendsingle/vextendsingle/vextendsingleq/bracketrightig +k/summationdisplay i=0k/summationdisplay j=0E/bracketleftig/vextendsingle/vextendsingle/vextendsingle/parenleftig λki−/tildewideλki/parenrightig /tildewideλkj/tildewiderM(i+j)/vextendsingle/vextendsingle/vextendsingleq/bracketrightig . Then, applying Hölder’s inequality, the boundedness of the moments, the estimates (2.6), and Lemma 2.2, we obtain (i). Notice now that we have |ck−/tildewideck|=/vextendsingle/vextendsinglec2 k−/tildewidec2 k/vextendsingle/vextendsingle ck+/tildewideck≤/vextendsingle/vextendsinglec2 k−/tildewidec2 k/vextendsingle/vextendsingle ck, which due to (i) gives (ii) and completes the proof. Notice that Lemma 2.3 gives bounds on the difference between the normalization constants. However,ckand/tildewideckappear in the denominator of orthogonal polynomials. Therefore, we need to obtain an estimate in L2of the difference of their inverse, for which the following condition on the boundedness of negative moments for /tildewideckis required. 7 Assumption 2.4.For allk≥0there exists a constant C=C(k)>0, independent of Tand N, such that E/bracketleftigg 1 /tildewidecr k/bracketrightigg ≤C(k), for somer>4. Remark 2.5.Assumption 2.4 is always satisfied as long as /tildewideckis bounded away from zero independently of TandN, e.g./tildewideck≥ζckfor someζ >0. Moreover, in all the numerical experiments that we performed, we observed that Assumption 2.4 holds in practice. We remark that, without this condition, the convergence analysis presented in the following still goes through. However, we could only achieve convergence in probability and without quantitative convergence rates. In the next result, we derive the estimate for the inverse of /tildewideck. Lemma 2.6. Under Assumptions 1.1 and 2.4, for all k≥0and for all q∈[1,2r/(r+ 2)) there exists a constant C=C(k)>0, independent of TandN, such that /parenleftbigg E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 ck−1 /tildewideck/vextendsingle/vextendsingle/vextendsingle/vextendsingleq/bracketrightbigg/parenrightbigg 1 q≤C(k)/parenleftbigg1√ T+1√ N/parenrightbigg . Proof.Using Hölder’s inequality with exponents r/qandr/(r−q), we obtain E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 ck−1 /tildewideck/vextendsingle/vextendsingle/vextendsingle/vextendsingleq/bracketrightbigg =E/bracketleftigg |ck−/tildewideck|q cq k/tildewidecq k/bracketrightigg ≤1 cq k/parenleftig E/bracketleftig |ck−/tildewideck|rq r−q/bracketrightig/parenrightigr−q r/parenleftigg E/bracketleftigg 1 /tildewidecr k/bracketrightigg/parenrightiggq r , where we note that rq/(r−q)∈[1,2), and which implies /parenleftbigg E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 ck−1 /tildewideck/vextendsingle/vextendsingle/vextendsingle/vextendsingleq/bracketrightbigg/parenrightbigg 1 q≤1 ck/parenleftig E/bracketleftig |ck−/tildewideck|rq r−q/bracketrightig/parenrightigr−q rq/parenleftigg E/bracketleftigg 1 /tildewidecr k/bracketrightigg/parenrightigg 1 r . The desired result then follows from Assumption 2.4 and Lemma 2.3(ii). Using the previous estimates, we can finally prove the convergence of the approximated orthogonal polynomials in the weighted space L2(ρ). Proposition 2.7. Under Assumptions 1.1 and 2.4, for all k≥0there exists a constant C=C(k)>0, independent of TandN, such that E/bracketleftbigg/vextenddouble/vextenddouble/vextenddoubleψk−/tildewideψk/vextenddouble/vextenddouble/vextenddouble L2(ρ)/bracketrightbigg ≤C(k)/parenleftbigg1√ T+1√ N/parenrightbigg . Proof.By definition of ψkand/tildewideψkwe have /vextenddouble/vextenddouble/vextenddoubleψk−/tildewideψk/vextenddouble/vextenddouble/vextenddouble2 L2(ρ)=/integraldisplay Rd 1 ckk/summationdisplay j=0λkjxj−1 /tildewideckk/summationdisplay j=0/tildewideλkjxj 2 ρ(x) dx, which due to the triangle inequality gives /vextenddouble/vextenddouble/vextenddoubleψk−/tildewideψk/vextenddouble/vextenddouble/vextenddouble2 L2(ρ)≤2 /tildewidec2 k/integraldisplay Rd k/summationdisplay j=0(λkj−/tildewideλkj)xj 2 ρ(x) dx + 2/parenleftbigg1 ck−1 /tildewideck/parenrightbigg2/integraldisplay Rd k/summationdisplay j=0λkjxj 2 ρ(x) dx. 8 By expanding the square we get /vextenddouble/vextenddouble/vextenddoubleψk−/tildewideψk/vextenddouble/vextenddouble/vextenddouble2 L2(ρ)≤2 /tildewidec2 kk/summationdisplay i=0k/summationdisplay j=0(λki−/tildewideλki)(λkj−/tildewideλkj)M(i+j) + 2/parenleftbigg1 ck−1 /tildewideck/parenrightbigg2k/summationdisplay i=0k/summationdisplay j=0λkiλkjM(i+j),
|
https://arxiv.org/abs/2505.05207v1
|
which implies /vextenddouble/vextenddouble/vextenddoubleψk−/tildewideψk/vextenddouble/vextenddouble/vextenddouble2 L2(ρ)≤C1 /tildewidec2 k k/summationdisplay j=0/vextendsingle/vextendsingle/vextendsingleλkj−/tildewideλkj/vextendsingle/vextendsingle/vextendsingle 2 +C/parenleftbigg1 ck−1 /tildewideck/parenrightbigg2 k/summationdisplay j=0|λkj| 2 . Therefore, using Hölder’s inequality with exponents randq=r/(r−1)∈(1,2), we have E/bracketleftbigg/vextenddouble/vextenddouble/vextenddoubleψk−/tildewideψk/vextenddouble/vextenddouble/vextenddouble L2(ρ)/bracketrightbigg ≤CE 1 /tildewideckk/summationdisplay j=0/vextendsingle/vextendsingle/vextendsingleλkj−/tildewideλkj/vextendsingle/vextendsingle/vextendsingle +CE/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 ck−1 /tildewideck/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg ≤CE/parenleftigg/bracketleftigg 1 /tildewidecr k/bracketrightigg/parenrightigg1/r k/summationdisplay j=0E/bracketleftig/vextendsingle/vextendsingle/vextendsingleλkj−/tildewideλkj/vextendsingle/vextendsingle/vextendsingle/bracketrightigq 1/q +CE/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 ck−1 /tildewideck/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg , which, due to Assumption 2.4 and Lemmas 2.2 and 2.6, yields the desired result. 3 Generalized method of moments In this section, we present our inference methodology for learning the interaction kernel W′in(1.1)from a single particle observation. Let {ψk}∞ k=0be the orthonormal basis made of orthogonal polynomials introduced in the previous section, and let {/tildewideψk}∞ k=0be its approximation obtained using empirical moments. We now introduce the integrability (with respect to the invariant measure of the mean-field dynamics) and regularity assumptions on the confining and interaction potentials, as well as the approximability property of the weightedL2space. Assumption 3.1.The confining and interaction potentials satisfy V, W∈C2(R)∩H1(ρ), whereH1(ρ)denotes the standard weighted Sobolev space, and the set of polynomials is dense inL2(ρ). Moreover, W′∗ρ∈L2(ρ)andV′andV′′are polynomially bounded. Remark 3.2.The fact that the set of polynomials is dense together with the orthogonality of the polynomials by construction implies that the set {ψk}∞ k=0forms an orthonormal basis ofL2(ρ). The problem of finding conditions that guarantee that the set of polynomials is dense in weighted L2spaces has been thoroughly investigated and is directly related to the Hamburger moment problem. Sufficient conditions based on the definition of Nevanlinna extremal (N-extremal) solutions are given in [1, Section 2.3] and [7]. Moreover, in [43] the analysis is extended to the general case of weighted Sobolev spaces Wk,p(ρ), and conditions on the weight ρare provided. Consider the Fourier expansions of both V′andW′with respect to the basis {ψk}∞ k=0made of orthogonal polynomials V′(x) =∞/summationdisplay k=0αkψk(x), W′(x) =∞/summationdisplay k=0βkψk(x). 9 We emphasize that the confining potential is assumed to be known. Our goal is to infer the coefficients βkfrom approximations of the coefficients αk. In particular, consider the McKean–Vlasov PDE (1.4), and replace V′andW′with their Fourier expansions d dx/parenleftigg∞/summationdisplay k=0(αkψk(x) +βk(ψk∗ρ)(x))ρ(x)/parenrightigg +σd2ρ dx2(x) = 0. Then, multiply the equation by the test function/integraltextψi, integrate over Rand then by parts to obtain ∞/summationdisplay k=0αk/integraldisplay Rψi(x)ψk(x)ρ(x) dx+∞/summationdisplay k=0βk/integraldisplay Rψi(x)(ψk∗ρ)(x)ρ(x) dx=σ/integraldisplay Rψ′ i(x)ρ(x) dx, which, due to the fact that the functions {ψk}∞ k=0are orthonormal, yields αi+∞/summationdisplay k=0βkEρ[ψi(X)(ψk∗ρ)(X)] =σEρ[ψ′ i(X)]. (3.1) Let us now introduce the notation Bik=Eρ[ψi(X)(ψk∗ρ)(X)], γi=Eρ[ψ′ i(Y)], and notice that the quantities αiandγican be computed as follows αi=/integraldisplay RV′(x)ψi(x)ρ(x) dx=1 cii/summationdisplay j=0λij/integraldisplay RV′(x)xjρ(x) dx=1 cii/summationdisplay j=0λijEρ/bracketleftig V′(X)Xj/bracketrightig , γi=/integraldisplay Rψ′ i(x)ρ(x) dx=1 cii/summationdisplay j=1jλij/integraldisplay Rxj−1ρ(x) dx=1 cii/summationdisplay j=1jλijM(j−1). Moreover, by the binomial theorem, we have Bik=/integraldisplay Rψi(x)/integraldisplay Rψk(x−y)ρ(y) dyρ(x) dx =1 cicki/summationdisplay ℓ=0k/summationdisplay j=0λiℓλkj/integraldisplay R/integraldisplay Rxℓ(x−y)jρ(y)ρ(x) dydx =1 cicki/summationdisplay ℓ=0k/summationdisplay j=0λiℓλkj/integraldisplay R/integraldisplay Rxℓj/summationdisplay m=0/parenleftigg j m/parenrightigg (−1)j−mxmyj−mρ(y)ρ(x) dydx =1 cicki/summationdisplay ℓ=0k/summationdisplay j=0j/summationdisplay m=0(−1)j−m/parenleftigg j m/parenrightigg λiℓλkjM(m+ℓ)M(j−m).(3.2) If we then truncate the Fourier series up to order Kand therefore consider the indices i,k= 0,...,K, from equation (3.1), we obtain the linear system of dimension K+ 1 B(K)β(K)=σγ(K)−α(K)−e(K), (3.3) where α(K)=/bracketleftig α0···αK/bracketrightig⊤∈RK+1, β(K)=/bracketleftig β0···βK/bracketrightig⊤∈RK+1, γ(K)=/bracketleftig γ0···γK/bracketrightig⊤∈RK+1, e(K)=/bracketleftig ε(K) 0···ε(K) K/bracketrightig⊤∈RK+1withε(K) i=∞/summationdisplay k=K+1Bikβk, 10 andB∈R(K+1)×(K+1)is the matrix whose entries are given in equation (3.2).
|
https://arxiv.org/abs/2505.05207v1
|
It now remains to approximate the entries of the matrix and the right-hand side in the system (3.3), since they cannot be computed exactly. Due to ergodicity and propagation of chaos, using the empirical moments, and neglecting the reminder e(K), we define the linear system whose solution/tildewideβ(K) T,Nis the estimator of the coefficients of the Fourier expansion of the interaction kernel as /tildewideB(K) T,N/tildewideβ(K) T,N=σ/tildewideγ(K) T,N−/tildewideα(K) T,N, (3.4) where (/tildewideα(K) T,N)i=1 /tildewidecii/summationdisplay j=0/tildewideλij1 T/integraldisplayT 0V′(Yt)Yj tdt, (/tildewideγ(K) T,N)i=1 /tildewidecii/summationdisplay j=1j/tildewideλij/tildewiderM(j−1), (/tildewideB(K) T,N)ik=1 /tildewideci/tildewidecki/summationdisplay ℓ=0k/summationdisplay j=0j/summationdisplay m=0(−1)j−m/parenleftigg j m/parenrightigg /tildewideλiℓ/tildewideλkj/tildewiderM(m+ℓ)/tildewiderM(j−m).(3.5) Ifdet(/tildewideB(K) T,N)̸= 0, then this definition is sufficient from the practical point of view, as we will observe in the following numerical experiments. However, in order to analyze the convergence properties of the estimator of the interaction kernel, similarly to what was done in [40], we need to introduce a sequence of convex and compact sets {BK}∞ K=0,BK⊂RK+1, for the admissible coefficients. Then, let/hatwideβ(K) T,Nbe the projection of the solution/tildewideβ(K) T,Nof the linear system (3.4) onto the convex and compact set BK /hatwideβ(K) T,N= arg min β∈BK/vextenddouble/vextenddouble/vextenddoubleβ−/tildewideβ(K) T,N/vextenddouble/vextenddouble/vextenddouble. (3.6) Finally, the estimator of the interaction kernel is given by (/hatwiderW′)(K) T,N(x) =K/summationdisplay k=0(/hatwideβ(K) T,N)k/tildewideψk(x) =K/summationdisplay k=0(/hatwideβ(K) T,N)k /tildewideckk/summationdisplay j=0/tildewideλkjxj, (3.7) and in Algorithm 1 we summarize the main steps needed for its construction. Before studying the dependence of the estimator (/hatwiderW′)(K) T,Non its parameters T,N,andK, in the next section we discuss the problem of inferring the drift term V′. Remark3.3.We believe that one of the main advantages of the proposed inference method- ology is that it requires the observation of a single-particle trajectory of the interacting particle system. On the other hand, if multiple trajectories are available, it would be possible to use them to obtain a more precise approximation of the interaction kernel. In particular, a more accurate estimate of the empirical moments could be obtained by averaging over all available particle trajectories. Alternatively, we could first compute an estimator of the Fourier coefficients of the interaction kernel for each particle and then average. We expect that averaging over many particle observations will reduce the variance of the resulting estimator. However, the study of the best averaging strategy and the variance of the estimator is beyond the scope of this work. 11 Algorithm 1: Estimation of W′∈L2(ρ) Input:Drift function V′∈L2(ρ). Observed trajectory (Yt)t∈[0,T]. Number of Fourier coefficients K. Set of admissible coefficients BK⊂RK+1. Diffusion coefficient σ>0. Output: Estimator (/hatwiderW′)(K) T,NofW′. 1: Compute the approximated moments/tildewiderM(r) T,Nforr= 0,..., 2K from equation (2.5). 2: Construct the approximated orthogonal polynomials {/tildewideψk}K k=0 from equations (2.2), (2.3), (2.4). 3: Construct the matrix/tildewideB(K) T,N∈R(K+1)×(K+1)and the vectors /tildewideα(K) T,N,/tildewideγ(K) T,N∈RK+1from equation (3.5). 4: Compute the projection/hatwideβK T,NontoBKof the solution/tildewideβ(K) T,N of the linear system (3.4). 5: Construct the estimator (/hatwiderW′)(K) T,Nfrom equation (3.7). 3.1 Inference of the drift term In case also the drift term V′needs to be estimated, and therefore the vector /tildewideα(K) T,Nin the linear system (3.4) cannot be computed, but it is an unknown, then we would have /bracketleftig IK+1/tildewideB(K) T,N/bracketrightig /tildewideα(K) T,N /tildewideβ(K) T,N =σ/tildewideγ(K) T,N, whereIK+1∈R(K+1)×(K+1)denotes the identity matrix, and which is an
|
https://arxiv.org/abs/2505.05207v1
|
underdetermined system. This can also be seen formally from equation (3.1), where we have “infinitely many” linear equations for “ 2×infinitely many” unknowns. Therefore, our methodology does not allow us to simultaneously infer both the drift term and the interaction kernel. We remark that this property is common to all the methods that rely on the observation of a single particle and consequently leverage the stationary Fokker–Planck equation to derive estimators. This is detailed in the next result. Proposition 3.4. Starting from the McKean–Vlasov PDE (1.3), it is not possible to uniquely determine both the drift term and the interaction kernel in the interacting particle system(1.1)from the observation of a single trajectory. Proof.We show that there exist infinite combinations of V′andW′that give the same stationaryFokker–Planckequationandthereforethesameinvariantmeasure. Let f:R→R and define/tildewideV,/tildewiderW:R→Ras /tildewideV(x) =V(x)−(f∗ρ)(x)and/tildewiderW(x) =W(x) +f(x). 12 Then, we have /tildewideV′+ (/tildewiderW′∗ρ) =V′−(f′∗ρ) + (W′∗ρ) + (f′∗ρ) =V′+ (W′∗ρ), which implies that the stationary Fokker–Planck equation for/tildewideV′and/tildewiderW′coincides with equation (1.4) for V′andW′. Nevertheless, if one knows the interaction kernel W′and is only interested in estimating the drift term V′, then the problem reduces to the inference problem of a scalar diffusion process and we have a closed-form expression for the Fourier coefficients of the drift term /tildewideα(K) T,N=σ/tildewideγ(K) T,N−/tildewideB(K) T,N/tildewideβ(K) T,N. We also mention that in the absence of interaction between the particles, the estimator /tildewideα(K) T,N=σ/tildewideγ(K) T,N is not affected by the additional approximation error of ignoring e(K)in the exact system (3.3), sincee(K)= 0in this case. Then, similarly to the previous section, the estimator of the drift term is given by (/hatwideV′)(K) T,N(x) =K/summationdisplay k=0(/hatwideα(K) T,N)k/tildewideψk(x) =K/summationdisplay k=0(/hatwideα(K) T,N)k /tildewideckk/summationdisplay j=0/tildewideλkjxj, where/hatwideα(K) T,Nis the projection of /tildewideα(K) T,Nonto the set of admissible Fourier coefficients. 3.2 Convergence analysis for the interaction kernel estimator In this section, we study the convergence properties of the estimator (/hatwiderW′)(K) T,Ndefined in equation (3.7)asT,N,K→∞. We note that Kis the number of Fourier coefficients that we use to approximate the interaction kernel. Therefore, fixing Kgives the best possible approximation that can be reached in the ideal setting in which we observe an infinite trajectory from a system of infinite particles. On the other hand, we will observe that increasingKresults in a worse conditioning of the linear system to be solved, which in turn implies that larger values of TandNare necessary in order to obtain an accurate approximation of the interaction kernel. Our study follows the approach in [40], and is based on a forward error stability analysis for the linear system (3.4), which can be seen as a perturbation of equation (3.3). In the next lemma, whose proof is inspired by the proof of [40, Lemma 4.5], we quantify the error that we commit in replacing the exact moments with the empirical moments on both the left-hand side and the right-hand side of the linear system. Lemma 3.5. Under Assumptions 1.1, 2.4, and 3.1, for all K≥0there exists a constant C=C(K)>0, independent of TandN, such that (i)E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideγ(K) T,N−γ(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg , (ii)E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideα(K) T,N−α(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg , (iii) E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideB(K) T,N−B(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg . 13 Proof.Let us start from (i).
|
https://arxiv.org/abs/2505.05207v1
|
By definition of /tildewideγ(K) T,Nandγ(K), we have E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideγ(K) T,N−γ(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig =E /radicaltp/radicalvertex/radicalvertex/radicalvertex/radicalbtK/summationdisplay i=0 1 /tildewidecii/summationdisplay j=1j/tildewideλij/tildewiderM(j−1)−1 cii/summationdisplay j=1jλijM(j−1) 2 ≤K/summationdisplay i=0E /vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewidecii/summationdisplay j=1j/tildewideλij/tildewiderM(j−1)−1 cii/summationdisplay j=1jλijM(j−1)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle , and using the triangle inequality we get E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideγ(K) T,N−γ(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤K/summationdisplay i=0i/summationdisplay j=1j/parenleftbigg E/bracketleftbigg1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/tildewideλij−λij/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−1)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +|λij|E/bracketleftbigg1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−1)−M(j−1)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg/parenrightbigg +K/summationdisplay i=0E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewideci−1 ci/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg i/summationdisplay j=1j/vextendsingle/vextendsingle/vextendsingleλijM(j−1)/vextendsingle/vextendsingle/vextendsingle . (3.8) Then, using Hölder’s inequality we obtain for rgiven by Assumption 2.4 E/bracketleftbigg1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−1)−M(j−1)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg ≤/parenleftigg E/bracketleftigg 1 /tildewidecr i/bracketrightigg/parenrightigg 1 r/parenleftbigg E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−1)−M(j−1)/vextendsingle/vextendsingle/vextendsingler r−1/bracketrightbigg/parenrightbiggr−1 r,(3.9) and E/bracketleftbigg1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/tildewideλij−λij/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−1)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg ≤/parenleftigg E/bracketleftigg 1 /tildewidecr i/bracketrightigg/parenrightigg 1 r/parenleftbigg E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/tildewideλij−λij/vextendsingle/vextendsingle/vextendsingle3r−2 2(r−1)/bracketrightbigg/parenrightbigg2(r−1) 3r−2 ×/parenleftigg E/bracketleftigg/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−1)/vextendsingle/vextendsingle/vextendsingler(3r−2) (r−2)(r−1)/bracketrightigg/parenrightigg (r−2)(r−1) r(3r−2) ,(3.10) where we notice that 1 r+2(r−1) 3r−2+(r−2)(r−1) r(3r−2)= 1,3r−2 2(r−1)∈[1,2),r(3r−2) (r−2)(r−1)≥1. Therefore, using bounds (3.9)and(3.10)in equation (3.8), due to the boundedness of the moments, equation (2.6), Assumption 2.4, and Lemmas 2.2 and 2.6, we deduce (i). We proceed similarly for (ii) and we get E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideα(K) T,N−α(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤K/summationdisplay i=0i/summationdisplay j=0E/bracketleftigg 1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/tildewideλij−λij/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 T/integraldisplayT 0V′(Yt)Yj tdt/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightigg +K/summationdisplay i=0i/summationdisplay j=0|λij|E/bracketleftigg 1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 T/integraldisplayT 0V′(Yt)Yj tdt−Eρ/bracketleftig V′(X)Xj/bracketrightig/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightigg +K/summationdisplay i=0E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewideci−1 ci/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg i/summationdisplay j=0/vextendsingle/vextendsingle/vextendsingleλijEρ/bracketleftig V′(X)Xj/bracketrightig/vextendsingle/vextendsingle/vextendsingle . The next steps are still analogous to the ones for (i), and therefore (ii) follows if we show that for all p≥1and for allq∈[1,2)there exists a constant/tildewideC > 0, independent of Tand 14 N, such that /parenleftigg E/bracketleftigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 T/integraldisplayT 0V′(Yt)Yj tdt/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsinglep/bracketrightigg/parenrightigg1/p ≤/tildewideC, (3.11) E:=/parenleftigg E/bracketleftigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 T/integraldisplayT 0V′(Yt)Yj tdt−Eρ/bracketleftig V′(X)Xj/bracketrightig/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleq/bracketrightigg/parenrightigg1/q ≤/tildewideC/parenleftbigg1√ T+1√ N/parenrightbigg .(3.12) First, by the Hölder’s inequality, we have E/bracketleftigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 T/integraldisplayT 0V′(Yt)Yj tdt/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsinglep/bracketrightigg ≤1 T/integraldisplayT 0E/bracketleftig/vextendsingle/vextendsingle/vextendsingleV′(Yt)Yj t/vextendsingle/vextendsingle/vextendsinglep/bracketrightig dt, and the bound (3.11)follows since V′is polynomially bounded and due to the boundedness of the moments. For equation (3.12), we proceed as in the proof of [40, Lemma 4.3] and we obtain E≤ E /vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 T/integraldisplayT 0V′(Xt)Xj tdt−Eρ/bracketleftig V′(X)Xj/bracketrightig/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle2 1/2 +/parenleftigg 1 T/integraldisplayT 0E/bracketleftig/vextendsingle/vextendsingleV′(Yt)−V′(Xt)/vextendsingle/vextendsingleq/vextendsingle/vextendsingle/vextendsingleYj t/vextendsingle/vextendsingle/vextendsingleq/bracketrightig dt/parenrightigg1/q +/parenleftigg 1 T/integraldisplayT 0E/bracketleftig/vextendsingle/vextendsingleV′(Xt)/vextendsingle/vextendsingleq/vextendsingle/vextendsingle/vextendsingleYj t−Xj t/vextendsingle/vextendsingle/vextendsingleq/bracketrightig dt/parenrightigg1/q , =:E1+E2+E3. Then, forE1andE3, still following the proof of [40, Lemma 4.3], we apply the mean ergodic theorem in [33, Section 4], Hölder’s and Jensen’s inequality, the boundedness of the moments, and the uniform propagation of chaos property to get E1≤/tildewideC/√ Tand E3≤/tildewideC/√ N. Regarding E2, using the mean value theorem, we have V′(Yt)−V′(Xt) =V′′(Zt)(Yt−Xt), whereZttakes values between YtandXt, and applying again Hölder’s and Jensen’s inequality, the boundedness of the moments, and the uniform propagation of chaos property we deduce E2≤/tildewideC/√ N, which yields (ii). Let us now consider (iii). Since the spectral norm is bounded by the Frobenius norm, we have E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideB(K) T,N−B(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤K/summationdisplay i=0K/summationdisplay k=0E /vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewideci/tildewidecki/summationdisplay ℓ=0k/summationdisplay j=0j/summationdisplay m=0(−1)j−m/parenleftigg j m/parenrightigg /tildewideλiℓ/tildewideλkj/tildewiderM(m+ℓ)/tildewiderM(j−m) −1 cicki/summationdisplay ℓ=0k/summationdisplay j=0j/summationdisplay m=0(−1)j−m/parenleftigg j m/parenrightigg λiℓλkjM(m+ℓ)M(j−m)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle ≤K/summationdisplay i=0K/summationdisplay k=0i/summationdisplay ℓ=0k/summationdisplay j=0j/summationdisplay m=0/parenleftigg j m/parenrightigg Eikljm, where Eikljm =E/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewideci/tildewideck/tildewideλiℓ/tildewideλkj/tildewiderM(m+ℓ)/tildewiderM(j−m)−1 cickλiℓλkjM(m+ℓ)M(j−m)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg . 15 Then, applying multiple triangle inequalities to isolate the differences of all the terms appearing in the right-hand side, we obtain Eikljm≤E/bracketleftbigg1 /tildewideci/tildewideck/vextendsingle/vextendsingle/vextendsingle/tildewideλiℓ/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewideλkj/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewiderM(m+ℓ)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewiderM(j−m)−M(j−m)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +/vextendsingle/vextendsingle/vextendsingleM(j−m)/vextendsingle/vextendsingle/vextendsingleE/bracketleftbigg1 /tildewideci/tildewideck/vextendsingle/vextendsingle/vextendsingle/tildewideλiℓ/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewideλkj/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewiderM(m+ℓ)−M(m+ℓ)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +/vextendsingle/vextendsingle/vextendsingleM(m+ℓ)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleM(j−m)/vextendsingle/vextendsingle/vextendsingleE/bracketleftbigg1 /tildewideci/tildewideck/vextendsingle/vextendsingle/vextendsingle/tildewideλiℓ/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/tildewideλkj−λkj/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +|λkj|/vextendsingle/vextendsingle/vextendsingleM(m+ℓ)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleM(j−m)/vextendsingle/vextendsingle/vextendsingleE/bracketleftbigg1 /tildewideci/tildewideck/vextendsingle/vextendsingle/vextendsingle/tildewideλiℓ−λiℓ/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +|λiℓ||λkj|/vextendsingle/vextendsingle/vextendsingleM(m+ℓ)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleM(j−m)/vextendsingle/vextendsingle/vextendsingleE/bracketleftbigg1 /tildewideci/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewideck−1 ck/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1 ck|λiℓ||λkj|/vextendsingle/vextendsingle/vextendsingleM(m+ℓ)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleM(j−m)/vextendsingle/vextendsingle/vextendsingleE/bracketleftbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 /tildewideci−1 ci/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg . Finally, we estimate all the rows in the right-hand side applying Hölder’s inequality to the expectations with exponents p1=r,p2=r,p3=3r(3r−4) (r−2)(r−4),p4=3r(3r−4) (r−2)(r−4),p5=3r(3r−4) (r−2)(r−4),p6=3r−4 2r−4, p1=r,p2=r,p3=2r(3r−4) (r−2)(r−4),p4=2r(3r−4) (r−2)(r−4),p5=3r−4 2r−4, p1=r,p2=r,p3=r(3r−4) (r−2)(r−4),p4=3r−4 2r−4, p1=r,p2=r,p3=r r−2, p1=r,p2=r r−1, p1= 1, respectively, and then using the boundedness of the moments, equation (2.6), Assump-
|
https://arxiv.org/abs/2505.05207v1
|
tion 2.4, and Lemmas 2.2 and 2.6, we obtain (iii), which concludes the proof. The second approximation we make by passing from the exact system (3.3)to the final system(3.4)is to remove the term e(K). In the next result, we justify this step by showing thate(K)vanishes for large values of K. Lemma 3.6. Under Assumptions 1.1 and 3.1, it holds lim K→∞/vextenddouble/vextenddouble/vextenddoublee(K)/vextenddouble/vextenddouble/vextenddouble= 0. Proof.Let us first notice that since W′∗ρ∈L2(ρ), then we can write (W′∗ρ)(x) =∞/summationdisplay i=0θiψi(x), where θi=/integraldisplay Rψi(x)(W′∗ρ)(x)ρ(x) dx=∞/summationdisplay k=0βk/integraldisplay Rψi(x)(ψk∗ρ)(x)ρ(x) dx=∞/summationdisplay k=0Bikβk.(3.13) Moreover, by Parseval’s theorem, we have /vextenddouble/vextenddoubleW′∗ρ/vextenddouble/vextenddouble2 L2(ρ)=∞/summationdisplay i=0θ2 i=∞/summationdisplay i=0/parenleftigg∞/summationdisplay k=0Bikβk/parenrightigg2 . (3.14) 16 Then, by definition of e(K)we get /vextenddouble/vextenddouble/vextenddoublee(K)/vextenddouble/vextenddouble/vextenddouble2=K/summationdisplay i=0 ∞/summationdisplay k=K+1Bikβk 2 =K/summationdisplay i=0/parenleftigg∞/summationdisplay k=0Bikβk−K/summationdisplay k=0Bikβk/parenrightigg2 , which implies /vextenddouble/vextenddouble/vextenddoublee(K)/vextenddouble/vextenddouble/vextenddouble2=K/summationdisplay i=0/parenleftigg∞/summationdisplay k=0Bikβk/parenrightigg2 +K/summationdisplay i=0/parenleftiggK/summationdisplay k=0Bikβk/parenrightigg2 −2K/summationdisplay i=0/parenleftigg∞/summationdisplay k=0Bikβk/parenrightigg/parenleftiggK/summationdisplay k=0Bikβk/parenrightigg . Therefore, taking the limit as K→∞and using equation (3.14)we obtain the desired result. We can now estimate the error of the estimator/hatwideβ(K) T,Nin equation (3.6)with respect to the true Fourier coefficients β(K). The proof of the next result is based on [42, Section 3.1.2] and [40, Theorem 4.6]. Proposition 3.7. Under Assumptions 1.1, 2.4, and 3.1, for all K≥0, ifdet(B(K))̸= 0, then there exists a constant C=C(K)>0, independent of TandN, such that E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/hatwideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg +δ(K), where δ(K) = 2/vextenddouble/vextenddouble/vextenddouble(B(K))−1e(K)/vextenddouble/vextenddouble/vextenddouble. Proof.First, define the event AKas AK=/braceleftbigg/vextenddouble/vextenddouble/vextenddouble(B(K))−1/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/tildewideB(K) T,N−B(K)/vextenddouble/vextenddouble/vextenddouble<1 2/bracerightbigg , and due to Markov’s inequality and Lemma 3.5(iii) we have P(AC K)≤2/vextenddouble/vextenddouble/vextenddouble(B(K))−1/vextenddouble/vextenddouble/vextenddoubleE/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideB(K) T,N−B(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg .(3.15) Therefore, using the law of total expectation and the fact that/hatwideβ(K) T,Nis the projection of /tildewideβ(K) T,Nonto the convex and compact set BKwe obtain E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/hatwideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddouble/bracketrightig =E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/hatwideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddouble|AK/bracketrightig P(AK) +E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/hatwideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddoubleAC K/bracketrightig P(AC K) ≤E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddouble|AK/bracketrightig +C(K)/parenleftbigg1√ T+1√ N/parenrightbigg . (3.16) Then, using equations (3.3) and (3.4) we can write /tildewideβ(K) T,N−β(K)=/parenleftig IK+1+ (B(K))−1(/tildewideB(K) T,N−B(K))/parenrightig−1 ×/bracketleftig (B(K))−1/parenleftig σ(/tildewideγ(K) T,N−γ(K))−(/tildewideα(K) T,N−α(K))/parenrightig + (B(K))−1e(K)/bracketrightig , which, following the proof of [42, Theorem 3.1], implies E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddouble|AK/bracketrightig ≤2/vextenddouble/vextenddouble/vextenddouble(B(K))−1/vextenddouble/vextenddouble/vextenddoubleE/bracketleftig σ/vextenddouble/vextenddouble/vextenddoubleγ(K) T,N−γ(K)/vextenddouble/vextenddouble/vextenddouble+/vextenddouble/vextenddouble/vextenddoubleα(K) T,N−α(K)/vextenddouble/vextenddouble/vextenddouble|AK/bracketrightig + 2/vextenddouble/vextenddouble/vextenddouble(B(K))−1e(K)/vextenddouble/vextenddouble/vextenddouble. 17 Using the fact that E[Z|AK]≤E[Z]/P(AK)for a positive random variable Z, applying Lemma 3.5, and due to equation (3.15), we get for TandNsufficiently large E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/tildewideβ(K) T,N−β(K)/vextenddouble/vextenddouble/vextenddouble|AK/bracketrightig ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg + 2/vextenddouble/vextenddouble/vextenddouble(B(K))−1e(K)/vextenddouble/vextenddouble/vextenddouble, which, together with equation (3.16), gives the desired result. Finally, in the next result we consider the estimator (/hatwiderW′)(K) T,Nin equation (3.7), and, employing Proposition 3.7, we analyze the approximation error with respect to the true interaction kernel W′. Theorem 3.8. Under Assumptions 1.1, 2.4, and 3.1, for all K≥0, ifdet(B(K))̸= 0, then there exists a constant C=C(K)>0, independent of TandN, such that E/bracketleftbigg/vextenddouble/vextenddouble/vextenddouble(/hatwiderW′)(K) T,N−W′/vextenddouble/vextenddouble/vextenddouble L2(ρ)/bracketrightbigg ≤C(K)/parenleftbigg1√ T+1√ N/parenrightbigg +δ(K) +ϵ(K), where δ(K) = 2/vextenddouble/vextenddouble/vextenddouble(B(K))−1e(K)/vextenddouble/vextenddouble/vextenddoubleandϵ(K) =/radicaltp/radicalvertex/radicalvertex/radicalbt∞/summationdisplay k=K+1β2 k. Proof.By definition of (/hatwiderW′)(K) T,Nand considering the Fourier expansion of W′, we have /vextenddouble/vextenddouble/vextenddouble(/hatwiderW′)(K) T,N−W′/vextenddouble/vextenddouble/vextenddouble L2(ρ)=/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleK/summationdisplay k=0(/hatwideβ(K) T,N)k/tildewideψk−∞/summationdisplay k=0βkψk/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble L2(ρ), which, due to the triangle inequality, implies /vextenddouble/vextenddouble/vextenddouble(/hatwiderW′)(K) T,N−W′/vextenddouble/vextenddouble/vextenddouble L2(ρ)≤/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleK/summationdisplay k=0(/hatwideβ(K) T,N)k(/tildewideψk−ψk)/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble L2(ρ) +/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddoubleK/summationdisplay k=0((/hatwideβ(K) T,N)k−(β(K))kψk/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble L2(ρ)+/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble∞/summationdisplay k=K+1βkψk/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble/vextenddouble L2(ρ). Then, since/hatwideβ(K) T,Nbelongs to the compact set BKand{ψk}∞ k=0is orthonormal in L2(ρ), we have E/bracketleftbigg/vextenddouble/vextenddouble/vextenddouble(/hatwiderW′)(K) T,N−W′/vextenddouble/vextenddouble/vextenddouble L2(ρ)/bracketrightbigg ≤C(K)K/summationdisplay k=0E/bracketleftbigg/vextenddouble/vextenddouble/vextenddouble/tildewideψk−ψk/vextenddouble/vextenddouble/vextenddouble L2(ρ)/bracketrightbigg +E/bracketleftig/vextenddouble/vextenddouble/vextenddouble/hatwideβ(K) T,N−β/vextenddouble/vextenddouble/vextenddouble/bracketrightig +/radicaltp/radicalvertex/radicalvertex/radicalbt∞/summationdisplay k=K+1β2 k, which, due to Propositions 2.7 and 3.7, gives the desired result. Remark3.9.In order for the estimator to converge, the additional terms δ(K)andϵ(K)in Theorem
|
https://arxiv.org/abs/2505.05207v1
|
3.8 must disappear as Kincreases. First, notice that ϵ(K)→0asK→∞because W′∈L2(ρ). In principle, we should be able to relate the decay of the generalized Fourier coefficients of the interaction kernel (or its derivative) with the regularity of W, using results from approximation theory such as Jackson’s inequality, as was done in [21]. In our setting, we would need to study this problem in weighted Sobolev spaces on the whole real line, but the study of this interesting question is beyond the scope of this paper. Regarding 18 δ(K), since, by Lemma 3.6, ∥e(K)∥vanishes, it suffices to require that ∥(B(K))−1∥do not blow up faster than the rate at which ∥e(K)∥converges to zero. The proof of this fact is nontrivial, as it depends on the unknown interaction kernel W′. However, we observed that this was indeed the case in all of the numerical experiments that we considered. In particular, this is straightforward to verify for all polynomial interactions, since e(K)= 0 (and alsoϵ(K) = 0) for allK≥rwithrbeing the degree of the polynomial, which in turn impliesδ(K) = 0. Alternatively, notice that we can write (B(K))−1e(K)= (B(K))−1/parenleftig θ(K)−B(K)β(K)/parenrightig = (B(K))−1θ(K)−β(K), whereθ(K)∈RK+1is the vector of the Fourier coefficients of W′∗ρ, whose components θi,i= 0,...,K, are given in equation (3.13). Therefore, requiring that δ(K)vanishes is equivalent to requiring that the solution b(K)of the linear system B(K)b(K)=θ(K) converges to β(K)asK→∞, which is reasonable to assume since we already know that θi=/summationtext∞ k=0Bikβkstill by equation (3.13). 4 Numerical experiments In this section, we employ the methodology introduced above to infer interaction kernels in particle systems, and verify numerically the estimates predicted by the theory. We first consider the mean-field Ornstein–Uhlenbeck process, for which a basis of orthogonal polynomials with respect to the invariant measure can be computed analytically. Even though the Ornstein–Uhlenbeck process is a simple test case, it is still an interesting example because it allows us to assess the performance of our method, since both the invariant measure and the corresponding orthogonal polynomials are given in closed form. Then, we consider more complex interaction kernels, including examples where the assumptions of our theoretical analysis are not necessarily satisfied. Even in this case, numerical experiments demonstrate that our methodology can still be used to learn the interaction kernel from a single trajectory. We generate synthetic observations by numerically solving the interacting particle system (1.1)with deterministic initial conditions, X(n) 0= 0for alln= 1,...,N. The SDE system is discretized employing the Euler–Maruyama scheme with a time step h= 0.01. Then, we assume to observe only the first particle in the system to infer the interaction kernel, so that Yt=X(1) tfor allt∈[0,T]. 4.1 Mean-field Ornstein–Uhlenbeck process We consider the interacting particle system (1.1), with quadratic confining and interaction potentialsV(x) =W(x) =x2/2, and set the diffusion coefficient σ= 1. Then, in the mean-field limit, the particle system converges to the McKean Ornstein–Uhlenbeck SDE dXt=−Xtdt−(Xt−E[Xt]) dt+√ 2dBt, which has unique invariant measure N(0,1/2)with density ρ(x) =1√πe−x2, whose moments are given by M(k)= 0 ifkis odd, /parenleftig 1√ 2/parenrightigk(k−1)!!ifkis even. 19 T= 10 000 N= 500 Figure 1: Comparison between the first four (excluding
|
https://arxiv.org/abs/2505.05207v1
|
the constant function) exact ( ψk) and approximated (/tildewideψk) orthogonal polynomials with respect to the invariant measure ρof the mean-field Ornstein–Uhlenbeck process. Top: we fix T= 10 000 and varyN= 5,50,500. Bottom: we fix N= 500and varyT= 100,1 000,10 000. Notice that, in this case, the orthogonal polynomials with respect to ρhave the closed-form expression ψk(x) =1√ 2kk!Hk(x), (4.1) for allk∈N, whereHkdenotes the standard Hermite polynomial of degree k. In Figures 1 and 2 we compare the exact orthogonal polynomials of equation (4.1)with the approximated polynomials obtained using the single-particle trajectory observation, for different values of the number of particles Nin the system and the final observation time T. In particular, in Figure 1 we plot the first four polynomials (starting at k= 1) varying N= 5,50,500withT= 10 000 fixed and then varying T= 100,1 000,10 000withN= 500 fixed. We observe that, as expected, the approximation error improves when TandNare larger. Moreover, in Figure 2 we compute the approximation error in the space L2(ρ)and verify the convergence rate provided by the theory. We remark that, even if the error is computed for a single observation and the theoretical rate holds in expectation, the two rates match. We also notice that the approximation error is greater for polynomials with higher degree k, and this is due to the constant C(k)in Proposition 2.7 which grows for larger values of k. We then apply the proposed methodology to learn the interaction kernel W′(x) =x, and we consider two cases: “few” observations ( T= 1 000) in a small system ( N= 50) and “many” observations ( T= 10 000) in a large system ( N= 500). In Figure 3 we compare the results for different Fourier series truncations K= 1,..., 8used in the expansion of the interaction kernel. We note that, in this simple setting, two Fourier coefficients ( β0and 20 Figure 2: Comparison between the theoretical and empirical rate of convergence of the first four (excluding the constant function) orthogonal polynomials with respect to the invariant measure ρof the mean-field Ornstein–Uhlenbeck process in L2(ρ), for both the number of particles N(left) and the final time T(right). β1) are enough to approximate W′andδ(K) =ϵ(K) = 0in the statement of Theorem 3.8, sinceβk= 0for allk≥2. Therefore, we notice that increasing Konly worsens the results due to the constant C(K), appearing in front of 1/√ Tand1/√ N, which blows up for larger values of K. On the other hand, we observe that if the time of observation and the number of particles in the system are larger, then polynomials of higher degree still provide accurate approximations of the interaction kernel, and this is in agreement with the theoretical result in Theorem 3.8. We emphasize that this numerical experiment shows the importance of the choice of Kin the method, in case we have limited observations and/or small interacting particle systems. Therefore, it would be interesting to determine the criteria for automatically adjusting K, and we will return to this problem in future work. 4.2 Inference of interaction kernels We now consider more complex interaction kernels. To ensure erogidicity,
|
https://arxiv.org/abs/2505.05207v1
|
we fix the quadratic confining potential V(x) =x2/2and set the diffusion coefficient σ= 1. In the numerical experiments we will consider the following interaction potentials: W1(x) =x4 4−x2 2, W2(x) = cosh(x), W3(x) =D/parenleftig 1−e−a(x2−r2)/parenrightig2, W4(x) =−A√ 2πe−x2 2.(4.2) All of these interaction potentials give rise to multiple stationary states at low temperatures. We note that W3is somewhat similar to the Morse potential, where we replaced the radius withx2, and it is still the difference between an attractive and a repulsive potential, and W4represents a Gaussian attractive interaction. Moreover, we set D= 5,a= 0.5,r= 1 inW3, andA= 5inW4. We consider N= 250particles and set the final observation time atT= 5000for the first two kernels, while we choose N= 500andT= 10 000 for 21 Figure 3: Comparison between the true interaction kernel W′and the estimators (/hatwiderW′)(K) T,N in two different cases ( T= 1 000,N= 50andT= 10 000,N= 500), for different numbers of Fourier coefficients K= 1,..., 8for the Ornstein-Uhlenbeck interaction kernel. W3andW4. We then compute the estimators (/hatwiderW′ i)(K) T,Nfor alli= 1,..., 4, and report the results in Figure 4. In the first test case, where the interaction kernel W′ 1is polynomial, the approximation obtained with the Fourier coefficients K= 5is accurate, especially in the interval [−2,2]where there are more observations. Similar considerations can be made for the second test case, where we still use K= 5Fourier coefficients, even if the function W′ 2to be inferred is not polynomial. The third and fourth examples are more challenging, as they behave differently at infinity. In fact, both W′ 3andW′ 4vanish forx→±∞, and therefore they cannot be well approximated by any polynomial of finite degree. We observe that even if the estimators, which are computed with the K= 11andK= 9Fourier coefficients, respectively, are not as accurate as in the previous test cases, in particular for larger values of x, they are still able to match the overall shape of the interaction kernels. These numerical experiments show the potentiality of the approach presented in this work, which allows us to obtain a reasonable reconstruction of the interaction kernel using only a finite single trajectory from the interacting particle system, even for more complex scenarios that do not fit in the theoretical analysis. 5 Conclusion In this work, we proposed a methodology for learning the interaction kernel in interacting particle systems that relies only on the observation of a single particle. Our approach is based on a Fourier expansion of the interaction kernel, where the basis is made of orthogonal polynomials with respect to the invariant measure of the mean-field dynamics. We first approximated the moments of the invariant measure using the available observations, and then we employed the empirical moments to estimate the orthogonal polynomials. Finally, the Fourier coefficients are inferred by solving a linear system which still depends on the empirical moments and whose equations are derived from the stationary Fokker–Planck 22 Figure 4: Comparison between the true interaction kernels W′ iand the estimators (/hatwiderW′ i)(K) T,N for the different test cases i= 1,..., 4in equation (4.2). equation. Our approach is easy to implement,
|
https://arxiv.org/abs/2505.05207v1
|
since it only requires the approximation of the moments of the invariant measure and the solution of a low-dimensional linear system, and computationally cheap. Moreover, because of its versatility, it can infer complex interaction kernels. On the other hand, its main limitation, illustrated in the convergence analysis, is the dependence of the approximation error on the number Kof Fourier coefficients used in the expansion of the interaction kernel. In fact, larger values of K, which in principle should provide better estimates, can potentially lead to worse results if the observed trajectory or the number of particles in the system are not sufficiently large. The work presented in this paper can be extended in several directions. First, we would like toimprovetheconvergenceresultbyquantifyingthedependenceonthenumber KofFourier coefficients, as this could help in finding techniques to reduce the approximation error. Moreover, we believe that it should be possible to define large classes of diffusion processes for which Assumption 2.4 holds and δ(K)→0in Theorem 3.8, such that the asymptotic unbiasedness of the estimator is guaranteed a priori by a theoretical result. Another interesting development would be lifting the regularity hypotheses in Assumption 3.1 on the confining and interaction potentials and considering the less regular functions to be estimated. In this paper, we consider the problem in one dimension in space. Similarly to the method of moments studied in [40], we expect that the methodology developed in this paper could be extended to the multidimensional case, considering the appropriate tensor products of orthogonal polynomials with respect to the invariant measure. The detailed analysis of the multidimensional problem will be presented elsewhere. Finally, the numerical experiments suggest that Assumption 1.1 on the uniqueness of the invariant 23 measure of the mean-field dynamics, e.g. our assumption that we have uniform propagation of chaos, is not necessary and that our semiparametric method is applicable even in the presence of multiple stationary states. This was also demonstrated numerically for the eigenfunction martingale estimator in [39]. We expect that the results from [34], combined with the linearization approach from [41] are sufficient to provide a rigorous justification of our methodology to the case where the mean-field dynamics exhibits phase transitions. Finally, it would be interesting to apply our method to the kinetic Langevin / hyperelliptic setting [4,25,26]. All these interesting problems will be studied in future work. Acknowledgements GAP is partially supported by an ERC-EPSRC Frontier Research Guarantee through Grant No. EP/X038645, ERC Advanced Grant No. 247031 and a Leverhulme Trust Senior Research Fellowship, SRF \R1\241055. AZ is supported by “Centro di Ricerca Matematica Ennio De Giorgi” and the “Emma e Giovanni Sansone” Foundation and is a member of INdAM-GNCS. References [1]N. I. Akhiezer ,The classical moment problem and some related questions in anal- ysis, vol. 82 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, [2021] ©2021. Reprint of the 1965 edition [0184042], Translated by N. Kemmer, With a foreword by H. J. Landau. [2]C. Amorino, D. Belomestny, V. Pilipauskait ˙e, M. Podolskij, and S.-Y. Zhou,Polynomial rates via deconvolution for nonparametric estimation in McKean- Vlasov SDEs . Preprint arXiv:2401.04667,
|
https://arxiv.org/abs/2505.05207v1
|
2024. [3]C. Amorino, A. Heidari, V. Pilipauskait ˙e, and M. Podolskij ,Parameter estimation of discretely observed interacting particle systems , Stochastic Process. Appl., 163 (2023), pp. 350–386. [4]C. Amorino and V. Pilipauskait ˙e,Kinetic interacting particle system: parameter estimation from complete and partial discrete observations . Preprint arxiv:2410.10226, 2024. [5]D. Bakry, I. Gentil, and M. Ledoux ,Analysis and geometry of Markov diffusion operators , vol. 348 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer, Cham, 2014. [6]D. Bakry, S. Orevkov, and M. Zani ,Orthogonal polynomials and diffusion operators , Ann. Fac. Sci. Toulouse Math. (6), 30 (2021), pp. 985–1073. [7]C. Berg ,Moment problems and polynomial approximation , 1996, pp. 9–32. 100 ans après Th.-J. Stieltjes. [8]S. Bicego, D. Kalise, and G. A. Pavliotis ,Computation and control of unstable steady states for mean field multiagent systems , in Proc. Roy. Soc. A, vol. 481, The Royal Society, 2025, p. 20240476. 24 [9]L.-P. Chaintron and A. Diez ,Propagation of chaos: a review of models, methods and applications. I. Models and methods , Kinet. Relat. Models, 15 (2022), pp. 895–1015. [10]L.-P. Chaintron and A. Diez ,Propagation of chaos: a review of models, methods and applications. II. Applications , Kinet. Relat. Models, 15 (2022), pp. 1017–1173. [11]X. Chen and G. c. dos Reis ,Euler simulation of interacting particle systems and McKean-Vlasov SDEs with fully super-linear growth drifts in space and interaction , IMA J. Numer. Anal., 44 (2024), pp. 751–796. [12]F. Comte and V. Genon-Catalot ,Nonparametric adaptive estimation for inter- acting particle systems , Scand. J. Stat., 50 (2023), pp. 1716–1755. [13]F. Comte, V. Genon-Catalot, and C. Larédo ,Nonparametric moment method for scalar McKean-Vlasov stochastic differential equations . Preprint hal-04460327v2, 2024. [14]D. A. Dawson ,Critical dynamics and fluctuations for a mean-field model of coopera- tive behavior , J. Statist. Phys., 31 (1983), pp. 29–85. [15]M. G. Delgadino, R. S. Gvalani, G. A. Pavliotis, and S. A. Smith ,Phase Transitions, Logarithmic Sobolev Inequalities, and Uniform-in-Time Propagation of Chaos for Weakly Interacting Diffusions , Comm. Math. Phys., 401 (2023), pp. 275–323. [16]L. Della Maestra and M. Hoffmann ,Nonparametric estimation for interacting particle systems: McKean-Vlasov models , Probab. Theory Related Fields, 182 (2022), pp. 551–613. [17]L. Della Maestra and M. Hoffmann ,The LAN property for McKean-Vlasov models in a mean-field regime , Stochastic Process. Appl., 155 (2023), pp. 109–146. [18]J. Garnier, G. Papanicolaou, and T.-W. Yang ,Consensus convergence with stochastic effects , Vietnam J. Math., 45 (2017), pp. 51–75. [19]V. Genon-Catalot and C. Larédo ,Inference for ergodic McKean-Vlasov stochastic differential equations with polynomial interactions , Ann. Inst. Henri Poincaré Probab. Stat., 60 (2024), pp. 2668–2693. [20]M. Giordano and S. Wang ,Statistical algorithms for low-frequency diffusion data: A pde approach . Preprint arxiv:2405.01372, 2025. [21]E. Gobet, M. Hoffmann, and M. Reiß ,Nonparametric estimation of scalar diffusions based on low frequency data , Ann. Statist., 32 (2004), pp. 2223–2253. [22]F. Golse ,On the dynamics of large particle systems in the mean field limit , in Macroscopic and large scale phenomena: coarse graining, mean field limits and ergodicity, vol. 3 of Lect. Notes Appl. Math. Mech., Springer, [Cham], 2016, pp.
|
https://arxiv.org/abs/2505.05207v1
|
1–144. [23]S. N. Gomes, G. A. Pavliotis, and U. Vaes ,Mean Field Limits for Interacting Diffusions with Colored Noise: Phase Transitions and Spectral Numerical Methods , Multiscale Model. Simul., 18 (2020), pp. 1343–1370. [24]S. N. Gomes, A. M. Stuart, and M.-T. Wolfram ,Parameter estimation for macroscopic pedestrian dynamics models from microscopic data , SIAM J. Appl. Math., 79 (2019), pp. 1475–1500. 25 [25]Y. Iguchi and A. Beskos ,Parameter inference for hypo-elliptic diffusions under a weak design condition , Electron. J. Stat., 19 (2025), pp. –. [26]Y. Iguchi, A. Beskos, and M. Graham ,Parameter estimation with increased precision for elliptic and hypo-elliptic diffusions , Bernoulli, 31 (2025), pp. 333–358. [27]I. C. F. Ipsen and R. Rehman ,Perturbation bounds for determinants and charac- teristic polynomials , SIAM J. Matrix Anal. Appl., 30 (2008), pp. 762–776. [28]D. Lacker and L. Le Flem ,Sharp uniform-in-time propagation of chaos , Probab. Theory Related Fields, 187 (2023), pp. 443–480. [29]Q. Lang and F. Lu ,Learning interaction kernels in mean-field equations of first-order systems of interacting particles , SIAM Journal on Scientific Computing, 44 (2022), pp. A260–A285. [30]L. Li, Y. Tang, and J. Zhang ,Solving stationary nonlinear Fokker-Planck equations via sampling , SIAM J. Appl. Math., 85 (2025), pp. 249–277. [31]F. Lu, M. Maggioni, and S. Tang ,Learning interaction kernels in heterogeneous systems of agents from multiple trajectories , J. Mach. Learn. Res., 22 (2021), pp. Paper No. 32, 67. [32]F. Malrieu ,Logarithmic Sobolev inequalities for some nonlinear PDE’s , Stochastic Process. Appl., 95 (2001), pp. 109–132. [33]J. C. Mattingly, A. M. Stuart, and M. V. Tretyakov ,Convergence of numerical time-averaging and stationary measures via Poisson equations , SIAM J. Numer. Anal., 48 (2010), pp. 552–577. [34]P. Monmarché and J. Reygner ,Local convergence rates for wasserstein gradi- ent flows and mckean-vlasov equations with multiple stationary solutions . Preprint arxiv:2404.15725, 2024. [35]G. Naldi, L. Pareschi, and G. Toscani , eds.,Mathematical modeling of collective behavior in socio-economic and life sciences , Modeling and Simulation in Science, Engineering and Technology, Birkhäuser Boston, Ltd., Boston, MA, 2010. [36]R. Nickl ,Consistent inference for diffusions from low frequency measurements , Ann. Statist., 52 (2024), pp. 519–549. [37]R. Nickl, G. A. Pavliotis, and K. Ray ,Bayesian nonparametric inference in McKean-Vlasov models , Ann. Statist., 53 (2025), pp. 170–193. [38]G. A. Pavliotis, S. Reich, and A. Zanoni ,Filtered data based estimators for stochastic processes driven by colored noise , Stochastic Process. Appl., 181 (2025), pp. Paper No. 104558, 31. [39]G. A. Pavliotis and A. Zanoni ,Eigenfunction martingale estimators for interacting particle systems and their mean field limit , SIAM J. Appl. Dyn. Syst., 21 (2022), pp. 2338–2370. [40]G. A. Pavliotis and A. Zanoni ,A method of moments estimator for interacting particle systems and their mean field limit , SIAM/ASA J. Uncertain. Quantif., 12 (2024), pp. 262–288. 26 [41]G. A. Pavliotis and A. Zanoni ,Linearization of ergodic mckean sdes and applica- tions. Preprint arXiv:2501.13655, 2025. [42]A. Quarteroni, R. Sacco, and F. Saleri ,Matematica numerica , Springer-Verlag Italia, Milan, 1998. [43]J. M. Rodríguez ,Approximation by polynomials and smooth functions in Sobolev spaces with respect to measures , J. Approx.
|
https://arxiv.org/abs/2505.05207v1
|
Theory, 120 (2003), pp. 185–216. [44]L. Sharrock, N. Kantas, P. Parpas, and G. A. Pavliotis ,Online parameter estimation for the McKean-Vlasov stochastic differential equation , Stochastic Process. Appl., 162 (2023), pp. 481–546. [45]J. Sirignano and K. Spiliopoulos ,Mean field analysis of neural networks: a law of large numbers , SIAM J. Appl. Math., 80 (2020), pp. 725–752. [46]T. Suzuki ,Free energy and self-interacting particles , vol. 62 of Progress in Nonlinear Differential Equations and their Applications, Birkhäuser Boston, Inc., Boston, MA, 2005. [47]G. Szegő ,Orthogonal polynomials , American Mathematical Society Colloquium Publications, Vol. XXIII, American Mathematical Society, Providence, RI, fourth ed., 1975. [48]J. Tugaut ,Phase transitions of McKean-Vlasov processes in double-wells landscape , Stochastics, 86 (2014), pp. 257–284. 27
|
https://arxiv.org/abs/2505.05207v1
|
arXiv:2505.05670v1 [econ.EM] 8 May 2025Estimation and Inference in Boundary Discontinuity Design s∗ Matias D. Cattaneo†Rocio Titiunik‡Ruiqi (Rae) Yu§ May 22, 2025 Abstract Boundary Discontinuity Designs are used to learn about treatment effec ts along a continu- ous boundary that splits units into control and treatment groups accordin g to a bivariate score variable. These research designs are also called Multi-Score Regress ion Discontinuity Designs, a leading special case being Geographic Regression Discontinuity Des igns. We study the sta- tistical properties of commonly used local polynomial treatment effect s estimators along the continuous treatment assignment boundary. We consider two distinct approaches: one based explicitly on the bivariate score variable for each unit, and the other b ased on their univariate distance to the boundary. For each approach, we present pointwise and un iform estimation and inference methods for the treatment effect function over the assi gnment boundary. Notably, we show that methods based on univariate distance to the boundary exhibi t an irreducible large misspecification bias when the assignment boundary has kinks or other ir regularities, making the distance-based approach unsuitable for empirical work in those set tings. In contrast, meth- ods based on the bivariate score variable do not suffer from that drawback. We illustrate our methods with an empirical application. Companion general-purpose softw are is provided. Keywords : regression discontinuity, uniform estimation and inference, caus al inference. ∗We thank Eric Gautier, Boris Hanin, Jason Klusowski, Juliana Lo ndo˜ no-V´ elez, Xinwei Ma, Boris Shigida, Mykhaylo Shkolnikov, and Jeff Wooldridge for comments. Cattan eo and Titiunik gratefully acknowledge financial support from the National Science Foundation (SES-2019432, DMS -2210561, and SES-2241575). †Department of Operations Research and Financial Engineering, Pri nceton University. ‡Department of Politics, Princeton University. §Department of Operations Research and Financial Engineering, Pri nceton University. 1 Introduction We study estimation and inference in Boundary Discontinuity Design s (Black,1999;Dell,2010; Jardim et al. ,2024), where the goal is to learn about causal treatment effects along a continuou s boundary that splits units into control and treatment groups according t o the value of a bivari- ate location score variable. This setup is also known as a Multi-Score R egression Discontinuity (RD) Design ( Papay et al. ,2011;Reardon and Robinson ,2012), a leading special case being the Geographic RD Design and variations thereof ( Keele and Titiunik ,2015,2016;Keele et al. ,2017; Galiani et al. ,2017;Diaz and Zubizarreta ,2023). SeeCattaneo and Titiunik (2022, Section 2.3) for an overview of the literature on Multi-Dimensional RD Designs, and Cattaneo et al. (2024, Section 5) for a practical introduction. To describe the setup formally, suppose that ( Yi(0),Yi(1),X¦ i)¦,i= 1,2,...,n, is a random sample, where Yi(0) andYi(1) denote the scalar potential outcomes for unit iunder control and treatment assignment, respectively, and the score Xi= (X1i,X2i)¦is a continuous bivariate vector with support X¦R2. Units are assigned to either the control group or treatment group according to their location Xirelative to a known one-dimensional boundary curve Bsplitting the support X in two disjoint regions: X=A0∪A1withA0andA1the control and treatment disjoint (connected) regions, respectively, and B=bd(A0)∩bd(A1), wherebd(At) denotes the boundary of the set
|
https://arxiv.org/abs/2505.05670v1
|
At. The observed response variable is Yi=1(Xi∈A0)·Yi(0) + 1(Xi∈A1)·Yi(1). Without loss of generality, we assume that the boundary belongs to the treatment group, t hat is,bd(A1)¢A1and B∩A0=∅. Boundarydiscontinuitydesignsarecommonlyusedinquantitativesoc ial,behavioral,andbiomed- ical sciences. For example, consider the substantive application in Londo˜ no-V´ elez, Rodr´ ıguez and S´ anchez (2020). The authors studied the effects of a governmental subsidy for post- secondary education in Colombia, called Ser Pilo Paga (SPP), an anti-poverty policy providing tuition sup- port for four-year or five-year undergraduate college students in any gover nment-certified higher education institution with high-quality status. Eligibility to th e program SSP was based on both merit and economic need: in order to qualify for the program, students h ad to obtain a high grade in Colombia’s national standardized high school exit exam, SABER 11, and t hey also had to come from economically disadvantaged families, as measured by the surv ey-based wealth index 1 (a) Scatterplot and Boundary (SPP data). (b) Estimation and Inference (Simulations). Figure 1: Scatterplot, Treatment Boundary, and Estimation and Inferenc e Methods. Note: Panel (a) presents a scatterplot of the bivariate score Xiusing the SPP data, and also plots the treatment boundary Bwith 40 marked grid points. Panel (b) presents estimation and in ference results over the 40 boundary grid points depicted in Panel (a) based on one simulated datas et calibrated using the real SPP data, where the population treatment effect curve τ(x) is assumed linear to remove smoothing bias. Point estimates are implemented using a MSE-optimal bandwidth, and averaged over 1 ,000 Monte Carlo simulations. The companion Rsoftware packagerd2dis used for implementation, and further details are available in the replication files and Cattaneo et al. (2025). SISBEN. Eligibility followed a deterministic rule with a fixed b ivariate cutoff: students had to obtain a SABER 11 score in the top 9 percent of scores or better, and they had to come from a household with SISBEN index below a region-specific threshold. For mally, each student was as- signed a bivariate score Xi= (SABER11 i,SISBEN i)¦, whereX1i=SABER11 irecorded the SABER11 score and X2i=SISBEN irecorded the SISBEN wealth score. After recentering each variable at its corresponding threshold, the treatment assignment boundary becomes B={(SABER11,SISBEN) : (SABER11,SISBEN)∈ {SABER11 g0 andSISBEN= 0}∪{SABER11 = 0 andSISBENg0}}. Figure 1a presents a scatterplot of the bivariate score of the data of students in the 2014 cohort ( n= 363,096 observations), and also plots the bivariate assignment boundary Btogether with 40 evenly-spaced cutoff points along the boundary. Section2presents the core assumption underlying the causal inference fram ework, which gener- alizes the standard univariate RD design to boundary discontinuity de signs. Given the standard (continuity and finite-moments) conditions in Assumption 1(below), the goal is to conduct esti- 2 mation and inference for the average treatment effect curve along the boundary : τ(x) =E[Yi(1)−Yi(0)|Xi=x],x∈B, both pointwise for each x∈B, and uniformly over B. For example, in the SPP application, the outcome variable is Yi= 1 if student iattended college or Yi= 0 otherwise, and thus the causal parameter τ(x) captures the treatment effect of SPP on the probability of college edu- cation
|
https://arxiv.org/abs/2505.05670v1
|
for students at the margin of program eligibility, as determined by their bivariate score Xi= (SABER11 i,SISBEN i)¦∈B. The parameter τ(x) captures policy-relevant heterogeneous treatment effects along the boundary B: for example, in Figure 1a,τ(b1) corresponds to a (local) average treatment effect for students with high SISBEN score (wealth ) and low SABER11 score (academic), while τ(b40) corresponds to a (local) average treatment effect for students with low SISBEN score and higher SABER11 score. Identification of these boundary t reatment effects par- allels standard continuity-based univariate RD design arguments ( Hahn et al. ,2001): treatment assignment changes abruptly along the boundary B, which implies that conditional expectations on each side of the assignment boundary can be used to identify τ(x) whenever there is no system- atic “sorting” of units across the boundary, that is, whenever E[Yi(0)|Xi=x] andE[Yi(1)|Xi=x] are continuous for all x∈B(Assumption 1below). For more discussion, see Papay et al. (2011), Reardon and Robinson (2012),Keele and Titiunik (2015), and references therein. Motivated by the local-to-the-assignment-boundary identifiability ofτ(x), researchers employ flexible regression methods using only observations with their scor e near the boundary B. Local polynomial methods are the preferred choice for estimation and infere nce because they are simple (weighted) linear regression estimates that intuitively incorporat e localization to the assignment boundary, while also retaining most of the familiar features of least squ ares regression methods. Two distinct implementations can be considered in this setting: (i) regression analysis based on the univariate distance to the boundary B, or (ii) regression analysis based on the bivariate location relative to the boundary B. The first approach is more commonly used in practice because it is perce ived as simpler, while the second approach is less often encountered in applications. Despite th eir widespread use, however, there is no foundational understanding of the statistical properties and relative merits of these 3 alternative implementation methods. Our paper fills this gap in the l iterature by providing com- prehensive large sample statistical results for each of the two approach es, which we then use to offer specific practical recommendations for the analysis and interpretation of boundary discontinuity designs. In particular, we provide pointwise and uniform (over B) estimation and inference meth- ods for both local polynomial regression approaches, and demonstrate theore tical and practical advantages of the approach based on bivariate location over the approach based on d istance (albeit both are shown to be valid under appropriate assumptions). Using simulated data calibrated with the real SPP data, Figure 1billustrates graphically the two local polynomial approaches for estimation of τ(x). The solid black line corresponds to the popu- lation treatment effect curve τ(x), taken to be linear in order to eliminate smoothing bias issues. The red crosses and blue dots correspond to local polynomial estimators b ased on, respectively, univariate distance to the boundary and bivariate location relative to t he boundary, for the 40 grid points depicted on the assignment boundary Bin Figure 1a. These treatment effect estimators are implemented using a MSE-optimal bandwidth choice (developed bel ow), and averaged over 1 ,000 Monte Carlo simulations.
|
https://arxiv.org/abs/2505.05670v1
|
The estimator based on distance exhibits a not iceable higher bias near (but not at) the boundary kink point x=b21, relative to the bivariate local polynomial estimator based on the bivariate score. A contribution of our paper is to provide a t heory-based explanation of this empirical phenomena, which turns out to be a general limitation of distance-based methods. 1.1 Paper Organization and Main Contributions After introducing the formal causal inference framework in Section 2, Section 3presents pointwise anduniformestimationandinferencemethodsforlocalpolynomialregre ssionmethodsbasedonthe univariate distance to the boundary B. We begin by providing interpretable sufficient conditions (Assumption 2below) for identification of τ(x), which restrict the distance function in conjunction with the univariate kernel function used and the shape of the boundary B. We then present an important negative result: near kinks of the boundary B, apth order distance-based local polyno- mial estimator exhibits an irreducible bias of order h, the bandwidth used for implementation, no matter the polynomial order pused. This drawback is due to the fact that the underlying popula- tion regression function is at most Lipschitz continuous near kinks of th e boundary B. Figure 1b illustrated the phenomenon numerically, and our paper provides a th eoretical explanation for the 4 large bias of the distance-based treatment effect estimator. In contrast , when the boundary Bis smooth, we show that the pth order distance-based local polynomial estimator exhibits the usual bias of order hp+1. Thus, our results show that the standard distance-based treatment effect esti- mator, while consistent for τ(x), both pointwise and uniformly over B, it can exhibit a large bias affecting bandwidth selection, point estimation, and statistical inf erence in applications, whenever Bhas kinks or other irregularities. For uncertainty quantification, Sec tion3presents pointwise and uniform (over B) large sample distribution theory, which is used to propose both confid ence intervals for τ(x) and confidence bands for the entire treatment effect curve ( τ(x) :x∈B). The last part of Section 3discusses implementation, and also compares our results with relate d methods for standard univariate RD designs. The analysis based on univariate distance to the boundary is not as straight forward as it may seem, and can in fact lead to invalid empirical findings in applications. More precisely, our results show that standard methods in the literature for univariate RD estim ation and inference will not be valid when deployed directly to a bivariate RD design, upon constru cting distance to the boundary, because the specific features of the assignment boundary can lead to a lar ge misspecification bias near kinks or other irregularities of B. The large misspecification bias we highlight when the assignment boundary is non-smooth is particularly worrisome in Geographi c RD designs ( Keele and Titiunik ,2015), and similar settings, where the boundary naturally exhibits many k inks or other irregularities. An alternative is to employ the bivariate location score directly. Se ction4studies this approach, and offers pointwise and uniform over Bestimation and inference methods. The pointwise results follow directly from the literature, provided an additional regulari ty condition on the bivariate ker- nel function and boundary Bhold (Assumption
|
https://arxiv.org/abs/2505.05670v1
|
3below). On the other hand, the uniform inference results require some additional technical care, and are new to the lit erature. The main potential issue is that a uniform distributional approximation is established over the lower-dimensional man- ifoldB, and thus its shape can affect the validity of the results. Section 4also discusses new bandwidth selection methods based on mean square error (MSE) expansi ons, robust bias-corrected inference, and related implementation details. Our results prov ide natural generalizations of well- established results for univariate RD designs; see Calonico et al. (2020) for bandwidth selection, andCalonico et al. (2014,2018,2022) for robust bias correction. 5 Section5deploys our theoretical and methodological results to the SPP data, rev ising the main results reported in Londo˜ no-V´ elez, Rodr´ ıguez and S´ anchez (2020). In addition to providing further empirical evidence in favor of their empirical findings, we also find some evidence of treatment effect heterogeneity along the assignment boundary B. All empirical results are obtained using the companion Rpackagerd2d, and we provide complete replication files. Section6concludes with specific recommendations for practice. The supplem ental appendix presents generalizations of our theoretical results, reports their p roofs, and gives other theoretical results that may be of independent interest. In particular, it pr esents a new strong approxima- tion result for empirical process with bounded moments, extendin g recent work by Cattaneo and Yu(2025). Our companion software article ( Cattaneo et al. ,2025) discusses the general-purpose software Rpackagerd2d(https://rdpackages.github.io/rd2d ) implementing the methods de- veloped in this paper. 2 Setup We impose the following basic conditions on the underlying data gener ating process. Assumption 1 (Data Generating Process) .Lett∈ {0,1}. (i) (Y1(t),X1)¦,...,(Yn(t),Xn)¦are independent and identically distributed random vectors. (ii)The distribution of Xihas a Lebesgue density fX(x)that is continuous and bounded away from zero on its support X= [a1,b1]×[a2,b2], for−∞< al< bl<∞withl∈ {1,2}. (iii)µt(x) =E[Yi(t)|Xi=x]is(p+1)-times continuously differentiable on X. (iv)σ2 t(x) =V[Yi(t)|Xi=x]is bounded away from zero and continuous on X. (v) supx∈XE[|Yi(t)|2+v|Xi=x]<∞for some vg2. This assumption is on par with the usual assumptions encountered in th e classical RD literature with an univariate score. In particular, part (ii) goes beyond the usual compact support restriction and further assumes a tensor product structure on Xto avoid technicalities underlying our strong approximation results used for uniform distribution theory: the ass umption is not practically re- strictive because all methods considered in this paper localize t o the boundary. Part (iii) imposes standard smoothness conditions on the bivariate conditional expectation s of interest, which will 6 play an important role in misspecification (smoothing bias) reduction in our upcoming results. Identification of τ(x) follows directly from Assumption 1: seePapay et al. (2011),Reardon and Robinson (2012),Keele and Titiunik (2015), and references therein. 2.1 Notation We employ standard concepts and notations from empirical process theor y (van der Vaart and Wellner,1996;Gin´ e and Nickl ,2016) and geometric measure theory ( Simon et al. ,1984;Federer, 2014). For a random variable Vi, we write En[g(Vi)] =n−1/summationtextn i=1g(Vi). For a vector v∈Rk, the Euclidean norm is ∥v∥= (/summationtextk i=1v2 i)1/2. For a matrix A∈Rm×n, the operator norm is ∥A∥= sup∥x∥=1∥Ax∥.Ck(X,Y) denotes
|
https://arxiv.org/abs/2505.05670v1
|
the class of k-times continuously differentiable functions from X toY, andCk(X) is a shorthand for Ck(X,R). For a Borel set S¦X, the De Giorgi perimeter of S isperim(S) = supg∈D2(X)/integraltext R21(x∈S)divg(x)dx/∥g∥∞, where div is the divergence operator, and D2(X) denotes the space of C∞functions with values in R2and with compact support included in X. WhenSis connected, and the boundary bd(S) is a smooth simple closed curve, Ssimplifies to thecurvelengthof bd(S). Acurve B¦R2isarectifiable curve ifthereexistsaLipschitzcontinuous function γ: [0,1]/ma√sto→R2such that B=γ([0,1]). For a function f:R2/ma√sto→R, Supp(f) denotes closure of the set {x∈R2:f(x)̸= 0}. For reals sequences an=o(bn) if limsupn→∞|an| |bn|= 0,|an|≲|bn| if there exists some constant CandN >0 such that n > Nimplies|an| fC|bn|. For sequences of random variables an=oP(bn) if plimn→∞|an| |bn|= 0,|an|≲P|bn|if limsupM→∞limsupn→∞P[|an bn| g M] = 0. Finally, Φ( x) denotes the standard Gaussian cumulative distribution function. 3 Analysis based on Univariate Distance For each unit i= 1,...,n, define their scalar distance-based score Di(x) =d(Xi,x)(1(Xi∈A1)− 1(Xi∈A0)) to the point x∈B, whered(·,·) denotes a distance function. It is customary to use the Euclidean distance d(Xi,x) =∥Xi−x∥=/radicalbig (X1i−x1)2+(X2i−x2)2forx= (x1,x2)¦∈B in applications, but other choices are sometimes encountered. For each x∈B, the setup thus reduces to a standard univariate RD design with Di(x)∈Rthe score variable and c= 0 the cutoff, whereDi(x)g0 if unit iis assigned to treatment status and Di(x)<0 if unit iis assigned to 7 control status. The local polynomial treatment effect curve estimator b ased on distance is /hatwideτdis(x) =e¦ 1/hatwideγ1(x)−e¦ 1/hatwideγ0(x),x∈B, where, for t∈ {0,1}, /hatwideγt(x) = argmin γ∈Rp+1En/bracketleftBig/parenleftbig Yi−rp(Di(x))¦γ/parenrightbig2kh(Di(x))1(Di(x)∈It)/bracketrightBig , withrp(u) = (1,u,u2,···,up)¦the usual univariate polynomial basis, kh(u) =k(u/h)/h2for univariatekernelfunction k(·)andbandwidthparameter h, andI0= (−∞,0)andI1= [0,∞). See Cattaneo and Titiunik (2022) for a literature review of RD designs, and Cattaneo et al. (2020,2024) for practical introductions. The univariate kernel function typic ally down-weights observations as the distance to x∈Bincreases, while the bandwidth determines the level of localizat ion to each point on the boundary B. In this case, observations contribute relative to their distance d(Xi,x) to the point x∈B. We impose the following conditions on the underlying features of the distance-based local poly- nomial estimator /hatwideτdis(x). Assumption 2 (Univariate Distance-Based Kernel) .Lett∈ {0,1}. (i)d:R2/ma√sto→[0,∞)satisfies ∥x1−x2∥≲d(x1,x2)≲∥x1−x2∥for allx1,x2∈X. (ii)k:R→[0,∞)is compact supported and Lipschitz continuous, or k(u) =1(u∈[−1,1]). (iii) liminf h³0infx∈B/integraltext Atkh(d(u,x))du≳1. Part (i) of this assumption requires the distance function be equiv alent (up to constants) to the Euclidean distance, while Assumption 2(ii) imposes standard conditions on the (univariate) kernel function. The last part of this assumption is novel to the lit erature: it implicitly restricts the geometry of the boundary Brelative to the kernel and distance functions. More precisely, it rules out settings where highly irregular boundary shapes will lead to regions with too “few” data: a necessary condition for the estimator /hatwideγt(x) to be well-defined in large samples is that E[kh(Di(x))1(Di(x)∈It)] =/integraltext Atkh(d(u,x))fX(u)du>0. Our theoretical results show that Assumption 2(iii) is a simple sufficient condition, due to Assumption 1(ii). Commonly encountered treatment assignment boundaries and distance functions typically sat isfy Assumption 2; a potential 8 exception being highly irregular geographic boundaries with low densi ty of observations. 3.1 Identification and Interpretation
|
https://arxiv.org/abs/2505.05670v1
|
For each treatment group t∈ {0,1}, boundary point x∈B, and distance score Di(x), the uni- variate distance-based local polynomial estimator /hatwideγt(x) is the sample analog of the coefficients associated with the best (weighted, local) mean square approximation of the conditional expecta- tionE[Yi|Di(x)] based on rp(Di(x)): γ∗ t(x) = argmin γ∈Rp+1E/bracketleftBig/parenleftbig Yi−rp(Di(x))¦γ/parenrightbig2kh(Di(x))1(Di(x)∈It)/bracketrightBig . Therefore, letting /hatwideθt,x(0) =e¦ 1/hatwideγt(x),θ∗ t,x(0) =e¦ 1γ∗ t(x), andθt,x(r) =E[Yi|Di(x) =r,Di(x)∈It] forr∈R, we have the standard least squares (best linear prediction) decomp osition /hatwideθt,x(0)−θt,x(0) =e¦ 1Ψ−1 t,xOt,x+/bracketleftbig θ∗ t,x(0)−θt,x(0)/bracketrightbig +/bracketleftbig e¦ 1(/hatwideΨ−1 t,x−Ψ−1 t,x)Ot,x/bracketrightbig (1) for each group t∈ {0,1}, and where Ψt,x=E/bracketleftBig rp/parenleftBigDi(x) h/parenrightBig rp/parenleftBigDi(x) h/parenrightBig¦ kh(Di(x))1(Di(x)∈It)/bracketrightBig , /hatwideΨt,x=En/bracketleftBig rp/parenleftBigDi(x) h/parenrightBig rp/parenleftBigDi(x) h/parenrightBig¦ kh(Di(x))1(Di(x)∈It)/bracketrightBig ,and Ot,x=En/bracketleftBig rp/parenleftBigDi(x) h/parenrightBig kh(Di(x))(Yi−θ∗ t,x(Di(x)))1(Di(x)∈It)/bracketrightBig . In the decomposition ( 1), the first term is the stochastic linear representation of the cen tered estimator, /hatwideθt,x(0)−θt,x(0), since it is an average of (unconditional) mean-zero random variables, the second term is the mean square approximation bias, and the third te rm is a non-linearity error arising from the convergence of the Gram matrix associated with th e (weighted) least squares estimator. Since/hatwideτdis(x) =/hatwideθ1,x(0)−/hatwideθ0,x(0), the following lemma characterizes the target estimand of the distance-based local polynomial estimator. Lemma 1 (Identification) .Suppose Assumptions 1(i)–(iii) and 2(i) hold. Then, τ(x) = lim r³0θ1,x(r)− limr↑0θ0,x(r)for allx∈B. 9 This lemma is established by noting that, for each group t∈ {0,1}andr∈R, θt,x(r) =E/bracketleftbig Yi/vextendsingle/vextendsingleDi(x) =r,Di(x)∈It/bracketrightbig =E/bracketleftbig Yi/vextendsingle/vextendsingled(Xi,x) =|r|,Xi∈At/bracketrightbig =E/bracketleftbig Yi(t)/vextendsingle/vextendsingled(Xi,x) =|r|/bracketrightbig , and then verifying that lim u³0E[Yi(t)|d(Xi,x) =u] =µt(x) under the conditions imposed. With- out restricting the data generating process, as well as the assignmen t boundary and distance func- tion,θt,x(0) andµt(x) need not agree. Employing Lemma 1and the decomposition ( 1), we obtain: /hatwideτdis(x)−τ(x) =Ln(x)+Bn(x)+Qn(x),x∈B, whereLn(x) =e¦ 1Ψ−1 1,xO1,x−e¦ 1Ψ−1 0,xO0,xis a mean-zero linear statistic, Bn(x) =θ∗ 1,x(0)−θ∗ 0,x(0)− τ(x) is the bias of the estimator, and Qn(x) =e¦ 1(/hatwideΨ−1 1,x−Ψ−1 1,x)O1,x−e¦ 1(/hatwideΨ−1 0,x−Ψ−1 0,x)O0,xis the higher-order linearization error. In standard local polynomial regression settings, Ln(x) is approximately Gaussian, Bn(x) is of orderhp+1, andQn(x) is neglected. However, because estimation is conducted along the bou ndary B, not all of those standard results remain valid in the context of boundary d iscontinuity designs. 3.2 Bias Along the Boundary Unlike the case of standard univariate local polynomial estimation, in boun dary discontinuity de- signs the smoothness of θt,x(r) =E[Yi|Di(x) =r,Di(x)∈It] depends on the smoothness of the boundary B. More specifically, the bias of the distance-based local polynomial est imator can be affected by the shape of the boundary B, regardless of the polynomial order pused when con- structing the estimator. Figure 2demonstrates the problem graphically: for a point x∈Bthat is close enough to a kink point (0 ,0) on the boundary, the conditional expectation θ1,x(r) is not differentiable for all rg0. This problem arises because, given the distance function d(·,·), for any pointx∈Bnear a kink, a “small” rgives a complete arc {x∈A1:d(Xi,x) =r}, while for a “large”rthe arc is truncated by the boundary. As a result, for the example in Fi gure2,θ1,x(r) is smooth for all rfr3andr > r3, but the function is not differentiable at r=r3. Furthermore, at r=r3, the
|
https://arxiv.org/abs/2505.05670v1
|
left derivative is constant, but the right derivative is equal to infinity. The supplemental 10 (a) Distance to b∈B. (b) Distance-based Conditional Expectation. Figure 2: Lack of Smoothness of Distance-Based Conditional Expectation near a Kink. Note: Analytic example of θ1,b(r) =E[Y(1)|Di(b) =r],rg0, for distance transformation Di(b) =d(Xi,b) = ∥Xi−b∥to point b∈Bnear a kink point on the boundary, based on location Xi= (X1i,X2i)¦. The induced univariate conditional expectation r/mapsto→θ1,b(r) is continuous but not differentiable at r=r3. appendix gives details on this analytic example. Although smoothness of the boundary Bcan affect the smoothness of θt,x(r), the fact that the distance-based estimator is “local” means that the approximation error will be no greater than that of a local constant estimator, regardless of the choice of polynomial orde rp. The following lemma formalizes this result, and also shows that this bias order cann ot be improved by increasing pg0. Lemma 2 (Approximation Error: Minimal Guarantee) .For some L >0, letPbe the class of data generating processes satisfying Assumptions 1(i)-(iii) with X¦[−L,L]2, Assumption 2, and the following three additional conditions: (i)L−1finfx∈XfX(x)fsupx∈XfX(x)fL, (ii) max 0f|ν|fpsupx∈X|∂νµ(x)|+max 0f|ν|fpsupx,y∈X|∂νµ(x)−∂νµ(y)| ∥x−y∥fL, and (iii) liminf h³0infx∈B/integraltext Atkh(d(u,x))dugL−1. For any pg1, ifnh2→ ∞, then 1≲liminf n→∞sup P∈Psup x∈BBn(x) hflimsup n→∞sup P∈Psup x∈BBn(x) h≲1. This lemma characterizes precisely the uniform over B(and data generating processes) bias of 11 the distance-based local polynomial estimator /hatwideτdis(x). The upper bound in Lemma 2is established uniformly over the class of data generating processes because we can s how that |θt(0)−θt(r)|≲ rfort∈ {0,1}in general. The lower bound is shown using the following example. S uppose Xi∼Uniform([−2,2]2),µ0(x1,x2) = 0,µ1(x1,x2) =x2for all (x1,x2)∈[−2,2]2, andYi(0)|Xi∼ Normal(µ0(Xi),1) andYi(1)|Xi∼Normal(µ1(Xi),1). Letd(·,·) be the Euclidean distance, and suppose that the control and treatment regions are A1={(x,y)∈R2:xf0,yg0}and A0=R2/A1, respectively, and hence B={(x,y)∈R:−2fxf0,y= 0 orx= 0,0fyf2}. This boundary is as in Figure 2a, having a 90◦kink atx= (0,0). It follows that the conditions of Lemma2hold, and hence this is an allowed data generating process. The suppl emental appendix establishes the lower bound by careful analysis of the resulting app roximation bias. As a point of contrast, note that Bn(x)≲Php+1pointwise in x∈B, for small enough h, provided that the kinks on the boundary Bare sufficiently far apart of each other relative to the bandwidth. However, Lemma 2demonstrates that, no matter how large the sample size is (i.e., how small the bandwidth is), there will always be a region ne ar a kink of the boundary B where the misspecification bias of the distance-based local polynomial estimator /hatwideτdis(x) is at most of order h(i.e., not of order hp+1as expected), regardless of the polynomial order pemployed. The problem arises when the boundary Bchanges non-smoothly, leading to a non-differentiable induced conditional expectation θt,x(r) =E[Yi|Di(x) =r,Di(x)∈It],t∈ {0,1}, as illustrated in Figure2b. On the other hand, if the boundary Bis smooth enough, then a better smoothing bias can be established. Lemma 3 (Approximation Error: Smooth Boundary) .Suppose Assumptions 1(i)-(iii) and 2hold, withd(·,·)the Euclidean distance. (i)Forx∈B, and for some δ,ε >0, suppose that B∩ {y:∥y−x∥ fε}=γ([−δ,δ]), whereγ:R→R2is a one-to-one function in Cκ+2([−δ,δ],R2). Then, θ0,x(·)andθ1,x(·) are(κ'(p+1))-times continuously differentiable on [0,ε]. Therefore, there exists a positive constant Csuch
|
https://arxiv.org/abs/2505.05670v1
|
that |Bn(x)| fChκ'(p+1)for allh∈[0,ε]andx∈B. (ii)Suppose B=γ([0,L])whereγis a one-to-one function in Cι+2([0,L],R2)for some L >0. Suppose there exists δ,ε >0such that for all x∈Bo=γ([δ,L−δ]),r∈[0,ε], andt= 0,1, the set{u∈R2:d(u,x) =r}intersects with bd(At)at only two points in B. Then, for all 12 x∈γ([δ,L−δ]),θ0,x(·)andθ1,x(·)are(ι'(p+1))-times continuously differentiable on [0,ε], andlimr³0dv drvθt,x(r)exists and is finite for all 0fvfp+1andt∈ {0,1}. Therefore, there exists a positive constant Csuch that supx∈Bo|Bn(x)| fChι'(p+1)for allh∈[0,r]. This lemma gives sufficient conditions in terms of smoothness of the b oundary Bto achieve a smaller misspecification bias of /hatwideτdis(x), when compared to the minimal guarantee given by Lemma 2. In particular, for a uniform bound on misspecification bias, the suffici ent condition in Lemma 3requires the boundary Bbeuniformly smooth in that it can be parameterized by onesmooth function. The sufficient condition ensuring a smooth boundary every where is crucial because, as shown in Lemma 2, even a piecewise smooth Bwith only one kink will have detrimental effects on the convergence rate of the bias (Figure 2). In Appendix A, we also show that if we consider the class of rectifiable boundaries, t hen no estimator based on univariate distance, not just local polynomial estimat ors, can achieve a uniform convergence rate better than n−1/4, that is, the non-parametric mean square minimax rate for esti- mating bivariate Lipschitz functions ( Stone,1982). Since we show bellow that the local polynomial estimator based on univariate distance has a uniform convergence no larger than (n/logn)−1/4, it isnearlyminimax optimal (up to the log1/4nterm). This minimax result means that no matter whichunivariatedistance-basedestimatoris used, itisalways poss ible tofindarectifiableboundary and a data generating process such that the estimation error is no bett er thann−1/4. 3.3 Treatment Effect Estimation and Inference Using technical results established in the supplemental appendi x, we obtain the following conver- gence rates for the univariate distance-based local polynomial treatmen t effect estimator. Theorem 1 (Rates of Convergence) .Suppose Assumptions 1and2hold. Ifnh2/log(1/h)→ ∞, then (i)|/hatwideτdis(x)−τ(x)|≲P1√ nh2+1 n1+v 2+vh2+|Bn(x)|forx∈B, and (ii) supx∈B|/hatwideτdis(x)−τ(x)|≲P/radicalBig log(1/h) nh2+log(1/h) n1+v 2+vh2+supx∈B|Bn(x)|. This theorem establishes the poinwise and uniform (over B) convergence rates for the distance- based treatment effect estimator. By Lemma 2,/hatwideτdis(x)→Pτ(x), both pointwise and uniformly, 13 ifh→0. However, /hatwideτdis(x) has a variance convergence rate of order n−1h−2despite being a “uni- variate” local polynomial estimator, which would have na¨ ıvely suggest ed a variance convergence of ordern−1h−1instead. In addition to exhibiting a bivariate curse of dimensionali ty, the convergence rate of the treatment effect estimator /hatwideτdis(x) along the boundary Bis determined by the smooth- ness of the B, as discussed in the previous subsection. Finally, it is not difficu lt to establish valid pointwise, or integrated over B, MSE convergence rates matching those in Theorem 1(i). See the supplemental appendix for omitted details. Section 3.4discusses the implications of these results for implementation, leveraging standard methods from univariate RD de signs. To develop companion pointwise and uniform inference procedures alon g the treatment assign- ment boundary B, we consider the feasible t-statistic at each boundary point x∈B(for given a bandwidth choice): /hatwideTdis(x) =/hatwideτdis(x)−τ(x)/radicalBig /hatwideΞx,x, where, using standard least squares algebra, for all x1,x2∈Bandt∈ {0,1}, we define /hatwideΞx1,x2=/hatwideΞ0,x1,x2+/hatwideΞ1,x1,x2,/hatwideΞt,x1,x2=1 nh2e¦
|
https://arxiv.org/abs/2505.05670v1
|
1/hatwideΨ−1 t,x1/hatwideΥt,x1,x2/hatwideΨ−1 t,x2e1, and /hatwideΥt,x1,x2=h2En/bracketleftBig rp/parenleftBigDi(x1) h/parenrightBig kh(Di(x1))/parenleftbig Yi−rp(Di(x1))¦/hatwideγt(x1)/parenrightbig 1(Di(x1)∈It) ×rp/parenleftBigDi(x2) h/parenrightBig¦ kh(Di(x2))/parenleftbig Yi−rp(Di(x2))¦/hatwideγt(x2)/parenrightbig 1(Di(x2)∈It)/bracketrightBig . Thus, feasible confidence intervals and confidence bands over Btake the form: /hatwideIdis(x;α) =/bracketleftbigg /hatwideτdis(x)−qα/radicalBig /hatwideΞx,x,/hatwideτdis(x)+qα/radicalBig /hatwideΞx,x/bracketrightbigg ,x∈B, foranyα∈[0,1], andwhere qαdenotestheappropriatequantiledependingonthedesiredinferen ce procedure. For pointwise inference, it is straightforward to show that supt∈R|P[/hatwideTdis(x)ft]−Φ(t)| →0 for eachx∈B, under standard regularity conditions, and provided that√ nh2|Bn(x)| →0. Thus, 14 in this case, qα= Φ−1(1−α/2) is an asymptotically valid choice. For uniform inference, we first establish a novel strong approximation for the entire stochastic pr ocess (/hatwideTdis(x) :x∈B), assuming that√ nh2supx∈B|Bn(x)| →0. With some technical work, we can then choose the appropriate qαfor uniform inference because P/bracketleftbig τ(x)∈/hatwideIdis(x;α),for allx∈B/bracketrightbig =P/bracketleftBig sup x∈B/vextendsingle/vextendsingle/hatwideTdis(x)/vextendsingle/vextendsinglefqα/bracketrightBig , and the distribution of supx∈B/vextendsingle/vextendsingle/hatwideTdis(x)/vextendsingle/vextendsinglecan be deduced from the strong approximation of the stochastic process ( /hatwideTdis(x) :x∈B). Theorem 2 (Statistical Inference) .Suppose Assumptions 1and2hold. Let Unbe theσ-algebra generated by ((Yi,(Di(x) :x∈B)) : 1fifn). (i)For allx∈B, ifnv 2+vh2→ ∞andnh2B2 n(x)→0, then P/bracketleftbig τ(x)∈/hatwideIdis(x;α)/bracketrightbig →1−α, forqα= Φ−1(1−α/2). (ii)Ifnv 2+vh2/logn→ ∞,liminf n→∞logh logn>−∞,nh2supx∈BB2 n(x)→0, andperim({y∈At: d(y,x)/h∈Supp(k)})≲hfor allx∈Bandt∈ {0,1}, then P/bracketleftbig τ(x)∈/hatwideIdis(x;α),for allx∈B/bracketrightbig →1−α, forqα= inf{c >0 :P[supx∈B|/hatwideZn(x)| gc|Un]fα}, where(/hatwideZn:x∈B)is a Gaussian pro- cess conditional on Un, withE[/hatwideZn(x1)|Un] = 0andE[/hatwideZn(x1)/hatwideZn(x2)|Un] =/hatwideΞx1,x2//radicalBig /hatwideΞx1,x1/hatwideΞx2,x2, for allx1,x2∈B. This theorem establishes asymptotically valid inference procedu res using the distance-based local polynomial treatment effect estimator /hatwideτdis(x). For uniform inference, an additional restriction on the assignment boundary Bis imposed: the De-Giorgi perimeter condition can be verified when the boundary of {y∈At:d(y,x)/h∈Supp(k)}is a curve of length no greater than hup to a constant; since the set is contained in a h-ball centered at x, the curve length condition holds as long as the curve B∩{y∈R2:d(y,x)fh}is not “too wiggly”. 15 3.4 Discussion and Implementation Although the distance-based estimator /hatwideτdis(x) looks like an univariate local polynomial estimation procedure, basedonthescalarscorevariable Di(x), Theorem 1showsthatitspointwiseanduniform variance convergence rates are equal to that of a bivariate nonparametric es timator (which are unimprovable). Theorem 2shows that inference results derived using univariate local poly nomial regression methods can be deployed directly in distance-based se ttings, provided the side conditions are satisfied. This result follows from the fact that /hatwideTdis(x) is constructed as a self-normalizing statistic, and therefore it is adaptive to the fact that the univariate covariate Di(x) is actually based on the bivariate covariate Xi. This finding documents another advantage of employing pre- asymptotic variance estimators and self-normalizing statistics for di stributional approximation and inference ( Calonico et al. ,2018). It follows that standard estimation and inference methods from the univariate RD design literature could be deployed to boundary di scontinuity designs employing univariate distance to the boundary, provided an appropriate bandwidt h is chosen to ensure that the bias due to the shape of the assignment boundary B(documented in Section 3.2) is small. For implementation, consider first the case that Bn(x)≲hp+1, that is, the assignment bound- aryBis smooth (in the sense of Lemma 3). Establishing a precise MSE expansion for /hatwideτdis(x) is cumbersome due to the added complexity introduced by the distanc e transformation, but the con- vergenceratescanbededucedfromTheorem 1. Theincorrect univariateMSE-optimalbandwidthis h1d≍n−1/(3+2p), while the correctMSE-optimal bandwidth is hdis≍n−1/(4+2p), implying that tak- ing the
|
https://arxiv.org/abs/2505.05670v1
|
distance variable as the univariate covariate, and thus ignoring its intrinsic bivariate dimen- sion, implies that the resulting bandwidth choice will be smaller n−1/(3+2p)< n−1/(4+2p), thereby undersmoothing the point estimator (relative to the correctoptimal MSE bandwidth choice). As a consequence, the point estimator will not be MSE-optimal but rather exhibit more variance and less bias, and the associated inference procedures will be more con servative. An obvious solution is to rescale the incorrect univariate bandwidth,n1/(3+2p) n1/(4+2p)h1d, but this may not be necessary if the empirical implementation of h1demploys a pre-asymptotic variance estimator, as in the software packagerdrobust (https://rdpackages.github.io/rdrobust/ ). SeeCalonico et al. (2020), and references therein, for details on bandwidth selection for standard univariate RD designs. When the assignment boundary Bexhibits kinks (or other irregularities), no matter how large 16 the sample size or the polynomial order, there will always be a region ne ar each kink where the bias will be large, that is, of irreducible order h, as shown by Lemma 2. In this case, near each kink the correctMSE-optimal bandwidth is hdis≍n−1/4. Away from each kink, the MSE-optimal bandwidthsareasdiscussedinthepreceedingparagraph. Thisphenom enonisgeneratedbythelack of smoothness of B, and leads to different MSE-optimal bandwidths for each x∈B, thus making automatic implementation more difficult. A simple solution to this prob lem is to set hdis≍n−1/4 for allx∈B, which is generically suboptimal but always no larger than the pointwi se MSE-optimal bandwidth, thus delivering a more variable (than optimal) point esti mator and more conservative (than possible) associated inference procedure. Given a choice of bandwidth h, valid statistical inference can be developed by controlling the remaining misspecification bias. When the boundary Bis smooth, robust bias correction for standard univariate RD designs continues to be valid in the context of d istance-based estimation (Calonico et al. ,2014,2018,2022). On the other hand, when Bexhibits kinks, undersmoothing relative to the MSE-optimal bandwidth for p= 0 is needed due to the fact that increasing pdoes not necessarily imply a reduction in misspecification bias, and thus bias correction techniques are ineffective (uniformly over x∈B). It remains to explain how uniform inference is implemented based on Theorem 5(ii). In practice, we discretize along the boundary with point of evaluations b1,···,bM∈B, as in Figure 1a, and hence the feasible (conditional) Gaussian process ( /hatwideZn(x) :x∈B) is reduced to the M-dimensional (conditional) Gaussian random vector /hatwideZn= (/hatwideZn(b1),...,/hatwideZn(bM)) with covariance matrix having a typicalelement E[/hatwideZn(b1)/hatwideZn(b2)|U]. Finding qαreducestofindingthe α-quantileofthedistribution of max 1flfM|/hatwideZn(bl)|, which can be easily simulated. In the companion software package rd2d, we employ a simple rule-of-thumb bandwidth selector targeting hdis≍n−1/4to implement the univariate distance-based estimator /hatwideτdis(x) as default for allx∈B, which is valid for estimation whether Bis smooth or not. For inference, following Calonico et al. (2018), the package employs the undersmoothed choice of order n−1/3. IfBis known to be smooth, then the package implements a rule-of-thumb ban dwidth selector targeting hdis≍n−1/(4+2p)for point estimation, and them employs robust bias corrected inferen ce based on that bandwidth choice. Section 5illustrates the performance of these methods with the SPP empirical application, and also considers the software package rdrobust for bandwidth selection 17 withp= 0
|
https://arxiv.org/abs/2505.05670v1
|
as another simple rule-of-thumb bandwidth implementation. Furt her implementation details and simulation evidence are given in Cattaneo et al. (2025). 4 Analysis based on Bivariate Location The previous section demonstrated the potential detrimental aspect s of employing distance-based local polynomial regression to analyze boundary discontinuity designs. When the boundary has kinks, as it is common in many applications based on sharp or geographic assignme nt rules, uni- variate methods based on distance to the boundary can lead to large biases n ear kinks or other irregularities on the assignment boundary. This section shows that an e asy way to avoid the distance-based bias is to employ bivariate local polynomial regression methods. The location-based treatment effect curve estimator of τ(x) is /hatwideτ(x) =e¦ 1/hatwideβ1(x)−e¦ 1/hatwideβ0(x),x∈B, where, for t∈ {0,1}, /hatwideβt(x) = argmin β∈Rpp+1En/bracketleftBig/parenleftbig Yi−Rp(Xi−x)¦β/parenrightbig2Kh(Xi−x)1(Xi∈At)/bracketrightBig , withpp= (2+p)(1+p)/2−1,Rp(u) = (1,u1,u2,u2 1,u2 2,u1u2,···,up 1,up 2)¦denotes the pth order polynomial expansion of the bivariate vector u= (u1,u2)¦,Kh(u) =K(u1/h,u2/h)/h2for a bivariate kernel function K(·) and a bandwidth parameter h. We employ the same bandwidth for both dimensions of Xionly for simplicity, and because it is common practice to first standard ize each dimension of the bivariate score. We impose the following assumption on the bivariate kernel function an d assignment boundary. Assumption 3 (Kernel Function) .Lett∈ {0,1}. (i)K:R2→[0,∞)is compact supported and Lipschitz continuous, or K(u) =1(u∈[−1,1]2). (ii) liminf h³0infx∈B/integraltext AtKh(u−x)du≳1. The first condition is standard in the literature, while the second c ondition is new. Similar to the case of distance-based estimation (Assumption 2(iii)), the goal of Assumption 3(ii) is to ensure 18 enough data is available in large samples for each point x∈BbecauseE[Kh(Xi−x)1(Xi∈At)]≳ /integraltext AtKh(u−x)duunder Assumption 1. Given these conditions, pointwise and uniform estimation results, aswellasvalidMSEexpansions, followfromstandardlocalpoly nomialcalculationsandem- pirical process theory. On the other hand, uniform distribution th eory needs to be established with some extra technical care: the supplemental appendix establishes a key new strong approximation result that takes into account the specific features of the estimator an d manifold B. 4.1 Treatment Effect Estimation Using standard concentration techniques from empirical process the ory, we obtain the pointwise and uniform convergence rate of /hatwideτ(x). Theorem 3 (Rate of Convergence) .Suppose Assumptions 1and3hold. Ifnh2/log(n)→ ∞, then (i)|/hatwideτ(x)−τ(x)|≲P1√ nh2+1 n1+v 2+vh2+hp+1forx∈B, and (ii) supx∈B|/hatwideτ(x)−τ(x)|≲P/radicalBig log(1/h) nh2+log(1/h) n1+v 2+vh2+hp+1. This theorem immediately establishes consistency of the treatmen t effect estimator based on the bivariate location score, provided that h→0. More importantly, the theorem shows that the bias is of order hp+1regardless of whether there are kinks or other irregularities in B. Comparing Theorems 1and3, it follows that both estimators can achieve the same (optimal) convergen ce rate when the boundary Bis smooth (Lemma 3), but otherwise /hatwideτ(x) can achieve a faster (indeed, optimal) convergence rate, while /hatwideτdis(x) cannot because of the large order- hbias when Bis not smooth (Lemma 2). Furthermore, it is interesting to note that the radial (univariate) kernelK(u) =k(∥u∥) satisfies Assumption 3(b). Therefore, the large (order- h) bias of the distance-based estimator /hatwideτdis(x) due to a non-smooth assignment boundary, as demonstrated in Lemma 2, is not necessarily due to the choice of distance-based kernel smoothing,
|
https://arxiv.org/abs/2505.05670v1
|
but rather due to the choice of distance-based (univariate) local polynomial approximation: e.g., rp(∥u∥) (in/hatwideτdis(x)) vs.Rp(u) (in/hatwideτ(x)). In other words, Theorem 3demonstrates that a bivariate local polynomial approximation does not suffer from the bias documented in in Lemma 2even when the radial kernel is used for localization. Given its more standard structure, it is possible to establish poin twise and integrated (condi- tional) MSE expansions for the estimator /hatwideτ(x). Using standard multi-index notation, define the 19 leading conditional bias Bx=B1,x−B0,xwith Bt,x=e¦ 1/hatwideΓ−1 t,x/summationdisplay |k|=p+1µ(k) t(x) k!En/bracketleftBig rp/parenleftBigXi−x h/parenrightBig/parenleftBigXi−x h/parenrightBigk Kh(Xi−x)1(Xi∈At)/bracketrightBig and/hatwideΓt,x=En/bracketleftbig rp/parenleftbigXi−x h/parenrightbig rp/parenleftbigXi−x h/parenrightbig¦Kh(Xi−x)1(Xi∈At)/bracketrightbig , fort∈ {0,1}. Similarly, the leading conditional variance is Vx=V1,x+V0,xwithVt,x=e¦ 1/hatwideΓ−1 t,xΣt,x,x/hatwideΓ−1 t,xe1and Σt,x,x=h2En/bracketleftBig rp/parenleftBigXi−x h/parenrightBig rp/parenleftBigXi−x h/parenrightBig¦ Kh(Xi−x)2σ2 t(Xi)1(Xi∈At)/bracketrightBig , fort∈ {0,1}. The following theorem gives the MSE expansions. Theorem 4 (MSEExpansions) .Suppose Assumptions 1and3hold, and let w(x)be a non-negative continuous function on Bsuch that/integraltext Bw(x)dx<∞. Ifnh2/log(n)→ ∞, then (i)E[(/hatwideτ(x)−τ(x))2|X] =h2(p+1)B2 x+1 nh2Vx+oP(rn), and (ii)/integraltext BE[(/hatwideτ(x)−τ(x))2|X]w(x)dx=h2(p+1)/integraltext BB2 xdw(x)+1 nh2/integraltext BVxw(x)dx+oP(rn), withrn=h2p+2+n−1h−2+n−2(1+v) 2+vh−4. Ignoring the asymptotically constant and higher-order terms, approxim ate MSE-optimal and IMSE-optimal bandwidth choices are hMSE,x=/parenleftBig2Vx (2p+2)B2x1 n/parenrightBig1/(2p+4) andhIMSE=/parenleftBig2/integraltext BVxw(x)dx (2p+2)/integraltext BB2xw(x)dx1 n/parenrightBig1/(2p+4) , provided that Bx̸= 0 and/integraltext BB2 xω(x)dx̸= 0, respectively. These choices are infeasible because a preliminary bandwidth, as well as estimates of the the conditional var iances and higher-order derivatives of the conditional mean, are needed. We discuss impleme ntation below and in our companion software article ( Cattaneo et al. ,2025). 4.2 Uncertainty Quantification Given a bandwidth choice, the feasible t-statistic at each boundary p ointx∈Bis /hatwideT(x) =/hatwideτ(x)−τ(x)/radicalBig /hatwideΩx,x, 20 where, using standard least squares algebra, for all x1,x2∈Bandt∈ {0,1}, /hatwideΩx1,x2=/hatwideΩ0,x1,x2+/hatwideΩ1,x1,x2,/hatwideΩt,x1,x2=1 nh2e¦ 1/hatwideΓ−1 t,x1/hatwideΣt,x1,x2/hatwideΓ−1 t,x2e1 with /hatwideΣt,x1,x2=h2En/bracketleftBig rp/parenleftBigXi−x1 h/parenrightBig rp/parenleftBigXi−x2 h/parenrightBig¦ Kh(Xi−x1)Kh(Xi−x2)εi(x1)εi(x2)1(Xi∈At)/bracketrightBig andεi(x) =Yi−1(Xi∈A0)Rp(Xi−x)¦/hatwideβ0(x)−1(Xi∈A1)Rp(Xi−x)¦/hatwideβ1(x). Feasible confidence intervals and confidence bands over the treatmen t boundary Btake the form: /hatwideI(x;α) =/bracketleftbigg /hatwideτ(x)−qα/radicalBig /hatwideΩx,x,/hatwideτ(x)+qα/radicalBig /hatwideΩx,x/bracketrightbigg ,x∈B, for anyα∈[0,1], and where qαdenotes the appropriate quantile depending on the targeted inference procedure. For pointwise inference, as in the previou s section, it is a textbook exercise to showthatsupt∈R|P[/hatwideT(x)ft]−Φ(t)| →0foreach x∈B, understandardregularityconditions, and provided that nh2p+4→0. For uniform inference, as in the previous section, we approximate t he distribution of the entire stochastic process ( /hatwideT(x) :x∈B) first, and then deduce an approximation for supx∈B|/hatwideT(x)|. This approach enable us to construct asymptotically valid confidence b ands because, as noted previously, P/bracketleftbig τ(x)∈/hatwideI(x;α),for allx∈B/bracketrightbig =P/bracketleftBig sup x∈B/vextendsingle/vextendsingle/hatwideT(x)/vextendsingle/vextendsinglefqα/bracketrightBig . See the supplemental appendix for omitted technical details. Theorem 5 (Inferences) .Suppose Assumptions 1and3hold. Let Wn= ((Y1,X¦ 1)¦,...,(Yn,Xn)). (i)For allx∈B, ifnv 2+vh2→ ∞andnh2p+4→0, then P/bracketleftbig τ(x)∈/hatwideI(x;α)/bracketrightbig →1−α, forqα= Φ−1(1−α/2). (ii)Ifnv 2+vh2/logn→ ∞,liminf n→∞logh logn>−∞,nh2p+4→0, andperim({y∈At: (y−x)/h∈ 21 Supp(K)})≲hfor allx∈Bandt∈ {0,1}, then P/bracketleftbig τ(x)∈/hatwideI(x;α),for allx∈B/bracketrightbig →1−α, forqα= inf{c >0 :P[supx∈B|/hatwideZn(x)| gc|Wn]fα}, where(/hatwideZn:x∈B)is a Gaussian pro- cess conditional on WnwithE[/hatwideZn(x1)|Wn] = 0andE[/hatwideZn(x1)/hatwideZn(x2)|Wn] =/hatwideΩx1,x2//radicalBig /hatwideΩx1,x1/hatwideΩx2,x2, for allx1,x2∈B. This theorem gives valid pointwise and inference procedures for th e boundary treatment effect τ(x) based on the bivariate local polynomial estimator. Unlike the case of dist ance-based esti- mation, the bias condition is the same for x∈B, making implementation substantially easier. However, as in the case of Theorem 2, and additional technical restriction on Bis needed in order to avoid overly “wiggly”
|
https://arxiv.org/abs/2505.05670v1
|
boundary designs that would lead to invalid stati stical inference. 4.3 Implementation The bivariate local polynomial estimator /hatwideτ(x) and associated t-statistic /hatwideT(x) is fully adaptive to kinksandotherirregularitiesoftheboundary B. Therefore, itisstraightforwardtoimplementlocal and global bandwidth selectors based on Theorem 4and the discussion given above. In particular, replacing the (asymptotic) bias and variance constants, BxandVx, with preliminary estimators, we obtain the feasible plug-in bandwidth selectors /hatwidehMSE=/parenleftBig2/hatwideVx (2p+2)/hatwideB2x1 n/parenrightBig1/(2p+4) and/hatwidehIMSE=/parenleftBig2/integraltext B/hatwideVxw(x)dx (2p+2)/integraltext B/hatwideB2xw(x)dx1 n/parenrightBig1/(2p+4) , where/hatwideBxis an appropriate estimator of Bx, and/hatwideVx=nh2/hatwideΩx,x. Omitted details are given in the supplemental appendix. These bandwidth choices can now be used to implement (I)MSE-optimal /hatwideτ(x) point treatment effect estimators pointwise and uniformly along the b oundary B. Further- more, leveraging the results in Theorem 5, a simple application of robust bias-corrected inference proceeds by employing the same (I)MSE-optimal bandwidth (for pth order point estimation), but then constructing the t-statistic /hatwideT(x) with a choice of p+1 (instead of p). This inference approach hasseveraltheoreticaladvantages( Calonicoetal. ,2014,2018,2022), andhasbeenvalidatedempiri- cally (Hyytinen et al. ,2018;De Magalh˜ aes et al. ,2025). Our companion software, rd2d, implements 22 the procedures described above. See Cattaneo et al. (2025) for more details. 5 Empirical Applications We illustrate our proposed methodology with the SPP application. We cons ider both univariate distance-basedandbivariatelocation-basedestimationandinference . Thedatasethas n= 363,096 complete observations for the first cohort of the program (2014). Each observation corresponds to one student, and the bivariate score Xi= (X1i,X2i)¦= (SABER11 i,SISBEN i)¦is composed by the student’s SABER11 test score (ranging from −310 to 172) and SISBEN wealth index (ranging from −103.41 to 127 .21). As discussed in the introduction, and without loss of generality, t he score is recentered at their corresponding cutoff for program eligibility, so t hat the treatment assignment boundary is as shown in Figure 1a. Furthermore, also without loss of generality, each dimension of Xiis standardized in order to more naturally accommodate a common bandwidth h. Recall that Yiis an indicator variable equal to one if the student enrolled in a higher education institution after receiving the SPP subsidy, or equal to zero otherwise. All th e results in this section are implemented using our companion Rsoftware package rd2d, and omitted details are given in the replication files. Figure3presents the empirical results using our proposed methods, both p ointwise across the 40 boundary points depicted in 1aand uniformly over B. Figure 3acompares the two point estimation approaches, and demonstrates empirically the presence of some bias asso ciated with the distance- based approach: the blue dots correspond to /hatwideτ(x) implemented using the data-driven MSE-optimal bandwidth; the red crosses correspond to /hatwideτdisimplemented a rule-of-thumb bandwidth ignoring the presence of the kink at x= (0,0)∈B; and the gray circles correspond to /hatwideτ(x) implemented using a rule-of-thumb bandwidth of order n−1/4. Figure 3bdemonstrates empirically the pointwise and uniform inference results from Section 4based on /hatwideτ(x) implemented using the data-driven MSE- optimal bandwidth. The treatment effects are highly statistically si gnificant (different from zero) along the assignment boundary, indicating mostly homogeneous treatment e ffects along poverty (b1–b21) and some degree of heterogeneity in treatment effects along academic per formance ( b21– b40). Specifically,
|
https://arxiv.org/abs/2505.05670v1
|
college attendance remains constant as students becom e wealthier ( /hatwideτ(x)≈0.3 forx∈ {b1,...,b21}), but the treatment effects appear to decrease as students exhibit higher 23 (a) Point Estimation (b) Confidence Interval and Bands (c) Treatment Effects Heatmap (d) p-values Heatmap Figure 3: Boundary Treatment Effect Estimation and Inference (SPP appl ication). Outcome: College enrollment. academic performance (from /hatwideτ(b21)≈0.3 to/hatwideτ(b40)≈0.2). Finally, Figures 3cand3doffer “heat maps” for the point estimates and associated RBC p-values alogn the assignm ent boundary B. To demonstrate the credibility of the boundary discontinuity desi gn, we repeat the empirical analysis using a pre-intervention covariate: mother’s education of t he student. This corresponds to a standard “placebo” analysis, where the treatment effect is expected to be statistically indistin- guishable from zero. Figure 4shows that this is indeed the case. 24 (a) Point Estimation (b) Confidence Interval and Bands (c) Heatmap of Treatment Effect (d) Heatmap of P-value Figure 4: Boundary Treatment Effect Estimation and Inference (SPP appl ication). Outcome: Mother’s education (placebo). 25 6 Conclusion Westudiedthepointwiseanduniformstatisticalpropertiesofthet womostpopularlocalpolynomial methods for treatment effect estimation and inference in boundary di scontinuity designs. Our theoretical and numerical results demonstrated that methods based on the univariate distance to the assignment boundary can exhibit a large bias in the presence of kink s or other irregularities ofB. In contrast, methods based on the bivariate score do not suffer from thos e problems, and thus can perform better in applications. We thus recommend employi ng bivariate local polynomial analysis whenever possible in boundary discontinuity designs. Our companion software package rd2d(https://rdpackages.github.io/rd2d ) offers general-purpose implementations of all the estimation and inference methods developed. Building on the framework and results presented, there are (at least ) two directions for future research. First, it would be of interest to incorporate predetermi ned covariates for either efficiency improvements in estimation and inference, or for treatment effect h eterogeneity analysis. Second, it would be of interest to consider functionals of the average treatmen t effect curve along the boundary, τ(x), which can help summarize empirical findings. For example, intere sting scalar parameters include the weighted average treatment effect along the boundary τp=/integraltext Rτ(x)w(x)dx, and the largest average treatment effect along the boundary τ∞= supx∈Rτ(x), for some region R¦B. In research underway we are investigating how our estimation and inf erence results can be extended to study the properties of covariate-adjusted estimat ors ofτ(x), and of the plug-in estimators /hatwideτp=/integraltext R/hatwideτ(x)w(x)dxand/hatwideτ∞= supx∈R/hatwideτ(x). (For the latter estimator, the results in the supplemental appendix already apply directly.) A Minimax Convergence Rate The following theorem presents a minimax uniform convergence rate r esult for estimation of bivari- ate compact supported nonparametric functions employing a class of est imators based on distance to the evaluation point. Theorem 6 (Distance-based Minimax Convergence Rate) .For a constants qg1andL >0, let PNP=PNP(L,q)be the class of (joint) probability laws Pof(Y1,X1),···,(Yn,Xn)satisfying the 26 following: (i) ((Yi,Xi) : 1fifn)are i.i.d. taking values in R×R2. (ii)Xiadmits a Lebesgue density fthat is continuous on its compact support X¦[−L,L]2, with L−1finfx∈Xf(x)fsupx∈Xf(x)fL, andB=bd(X)is a rectifiable curve. (iii)µ(x) =E[Yi|Xi=x]isq-times continuously differentiable on Xwith max 0f|ν|f+q,sup x∈X|∂νµ(x)|+max |ν|=qsup x1,x2∈X|∂νµ(x1)−∂νµ(x2)|
|
https://arxiv.org/abs/2505.05670v1
|
∥x1−x2∥q−+q,fL. (iv)σ2(x) =V[Yi|Xi=x]is continuous on XwithL−1finfx∈Xσ2(x)fsupx∈Xσ2(x)fL. In addition, let Tbe the class of all distance-based estimators Tn(Un(x))withUn(x) = ((Yi,∥Xi−x∥)¦: 1fifn)for each x∈X. Then, liminf n→∞n1/4inf Tn∈Tsup P∈PNPEP/bracketleftBig sup x∈B/vextendsingle/vextendsingleTn(Un(x))−µ(x)/vextendsingle/vextendsingle/bracketrightBig ≳1, whereEP[·]denotes an expectation taken under the data generating process P. In sharp contrast to Theorem 6, under the assumptions imposed, a classical result of Stone(1982) shows that the uniform minimax convergence rate is liminf n→∞/parenleftBign logn/parenrightBigq 2q+2inf Sn∈Ssup P∈PNPEP/bracketleftBig sup x∈B/vextendsingle/vextendsingleSn(x;Wn)−µ(x)/vextendsingle/vextendsingle/bracketrightBig ≳1, whereSis the (unrestricted class) of all estimators based on ( Wn= (Yi,X¦ i)¦: 1fifn). There- fore, Theorem 6shows that if we restrict the class of estimators to only using the di stance∥Xi−x∥ to the evaluation point x(as opposed to using Xidirectly) for estimation of µ(x), then for any such estimator there is a data generating process such that the larges t estimation error uniformly along the boundary is at least of order n−1/4, when the boundary can have possibly countably many kinks. Notably, increasing smoothness of the underlying bivari ate regression function µ(x) does not improve the uniform estimation accuracy of estimators in Tdue to the possible lack of smoothness introduced by a non-smooth boundary Bof the support X. Finally, by Theorem 1and Lemma 2, the univariate distance-based p-th order local polynomial 27 nonparametric regression estimator /hatwideµdis(x) =e¦ 1/hatwideγ(x),/hatwideγ(x) = argmin γ∈Rp+1En/bracketleftBig/parenleftbig Yi−rp(Di(x))¦γ/parenrightbig2kh(Di(x))/bracketrightBig ,x∈B, satisfies limsup M→∞limsup n→∞sup P∈PP/bracketleftBig/parenleftBign logn/parenrightBig1/4 sup x∈B/vextendsingle/vextendsingle/hatwideµdis(x)−µ(x)/vextendsingle/vextendsinglegM/bracketrightBig = 0. Since/hatwideµdis(x)∈Tin Theorem 6, it follows that the distance-based local polynomial regression estimator is minimax optimal in the sense of Theorem 6, up to the log1/4nfactor, which we conjecture is unimprovable (i.e., the lower bound in Theorem 6is only loose by the factor log1/4n). References Black, S. E. (1999), “Do Better Schools Matter? Parental Valuation of Elemen tary Education,” Quarterly Journal of Economics , 114, 577–599. Calonico, S., Cattaneo, M. D., and Farrell, M. H. (2018), “On the Effect of Bias Es timation on CoverageAccuracyinNonparametricInference,” Journal of the American Statistical Association , 113, 767–779. (2020), “Optimal Bandwidth Choice for Robust Bias Corrected Inference in Regression Discontinuity Designs,” Econometrics Journal , 23, 192–210. (2022), “Coverage Error Optimal Confidence Intervals for Local Polynomial Regre ssion,” Bernoulli , 28, 2998–3022. Calonico, S., Cattaneo, M. D., and Titiunik, R. (2014), “Robust Nonparametric C onfidence Inter- vals for Regression-Discontinuity Designs,” Econometrica , 82, 2295–2326. Cattaneo, M. D., Idrobo, N., and Titiunik, R. (2020), A Practical Introduction to Regression Discontinuity Designs: Foundations , Cambridge University Press. (2024),A Practical Introduction to Regression Discontinuity Designs: Exten sions, Cam- bridge University Press. 28 Cattaneo, M. D., and Titiunik, R. (2022), “Regression Discontinuity Desi gns,”Annual Review of Economics , 14, 821–851. Cattaneo, M. D., Titiunik, R., and Yu, R. R. (2025), “ rd2d: Causal Inference in Boundary Discon- tinuity Designs,” arXiv preprint arXiv:2505.xxxxx . Cattaneo, M. D., and Yu, R. R. (2025), “Strong Approximations for Empirical Pr ocesses Indexed by Lipschitz Functions,” Annals of Statistics . De Magalh˜ aes, L., Hangartner, D., Hirvonen, S., Meril¨ ainen, J., Ruiz, N. A., and Tukiainen, J. (2025), “When Can We Trust Regression Discontinuity Design Estimates fr om Close Elections? Evidence from Experimental Benchmarks,” Political Analysis . Dell, M. (2010), “The
|
https://arxiv.org/abs/2505.05670v1
|
Persistent Effects of Peru’s Mining Mita,” Econometrica , 78, 1863–1903. Diaz, J. D., and Zubizarreta, J. R. (2023), “Complex Discontinuity Design s Using Covariates for Policy Impact Evaluation,” Annals of Applied Statistics , 17, 67–88. Federer, H. (2014), Geometric measure theory , Springer. Galiani,S.,McEwan,P.J.,andQuistorff,B.(2017),“ExternalandInternal ValidityofaGeographic Quasi-ExperimentEmbeddedinaCluster-RandomizedExperimen t,” inRegression Discontinuity Designs: Theory and Applications (Advances in Econometrics, volume 38), eds. M. D. Cattaneo and J. C. Escanciano, Emerald Group Publishing, pp. 195–236. Gin´ e, E., andNickl, R.(2016), Mathematical Foundations of Infinite-dimensional Statistica l Models, New York: Cambridge University Press. Hahn, J., Todd, P., and van der Klaauw, W. (2001), “Identification and Estimati on of Treatment Effects with a Regression-Discontinuity Design,” Econometrica , 69, 201–209. Hyytinen, A., Meril¨ ainen, J., Saarimaa, T., Toivanen, O., and Tukiaine n, J. (2018), “When does regression discontinuity design work? Evidence from random election outcomes,” Quantitative Economics , 9, 1019–1051. 29 Jardim, E., Long, M. C., Plotnick, R., Vigdor, J., and Wiles, E. (2024), “Local m inimum wage laws, boundary discontinuity methods, and policy spillovers,” Journal of Public Economics , 234, 105131. Keele, L., and Titiunik, R. (2016), “Natural experiments based on geography,” Political Science Research and Methods , 4, 65–95. Keele, L. J., Lorch, S., Passarella, M., Small, D., and Titiunik, R. (2017) , “An Overview of Ge- ographically Discontinuous Treatment Assignments with an Application to Children’s Health Insurance,” in Regression Discontinuity Designs: Theory and Applications (Advan ces in Econo- metrics, volume 38) , eds. M. D. Cattaneo and J. C. Escanciano, Emerald Group Publishing, pp . 147–194. Keele, L. J., and Titiunik, R. (2015), “Geographic Boundaries as Regression D iscontinuities,” Political Analysis , 23, 127–155. Londo˜ no-V´ elez, J., Rodr´ ıguez, C., and S´ anchez, F. (2020), “Upstream an d downstream impacts of college merit-based financial aid for low-income students: Ser Pilo P aga in Colombia,” American Economic Journal: Economic Policy , 12, 193–227. Papay, J. P., Willett, J. B., and Murnane, R. J. (2011), “Extending the regression-discontinuity approach to multiple assignment variables,” Journal of Econometrics , 161, 203–207. Reardon,S.F.,andRobinson,J.P.(2012),“RegressionDiscontinuityDesi gnswithMultipleRating- Score Variables,” Journal of Research on Educational Effectiveness , 5, 83–104. Simon, L. et al. (1984), Lectures on geometric measure theory , Centre for Mathematical Analysis, Australian National University Canberra. Stone, C. J. (1982), “Optimal Global Rates of Convergence for Nonparametric Regre ssion,”Annals of Statistics , 10, 1040–1053. van der Vaart, A. W., and Wellner, J. A. (1996), Weak Convergence and Empirical Processes , Springer. 30 Estimation and Inference in Boundary Discontinuity Design s Supplemental Appendix∗ Matias D. Cattaneo†Rocio Titiunik‡Ruiqi (Rae) Yu§ May 22, 2025 Abstract This supplemental appendix presents more general theoretical re sults encompassing those discussed in the main paper, and their proofs. ∗Cattaneo and Titiunik gratefully acknowledge financial sup port from the National Science Foundation (SES-2019432, DMS-2210561, and SES-2241575). †Department of Operations Research and Financial Engineeri ng, Princeton University. ‡Department of Politics, Princeton University. §Department of Operations Research and Financial Engineeri ng, Princeton University. Contents SA-1 Setup 3 SA-1.1 Notation and Definitions . . . . . . . . . . . . . . . . . . . . .
|
https://arxiv.org/abs/2505.05670v1
|
. . . . . . . . . . . . . . . 3 SA-1.2 Mapping between Main Paper and Supplement . . . . . . . . . . . . . . . . . . . . . . . . 4 SA-2 Location-Based Methods 5 SA-2.1 Preliminary Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 SA-2.2 Point Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 SA-2.3 Pointwise Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 SA-2.4 Uniform Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 SA-3 Distance-Based Methods 10 SA-3.1 Preliminary Lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 SA-3.2 Pointwise Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 SA-3.3 Uniform Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 SA-4 Gaussian Strong Approximation Lemmas 16 SA-4.1 Definitions for Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 SA-4.2 Residual-based Empirical Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 SA-4.3 Multiplicative-Separable Empirical Process . . . . . . . . . . . . . . . . . . . . . . . . . . 18 SA-5 Proofs for Section SA-2 19 SA-5.1 Proof of Lemma SA-2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
|
https://arxiv.org/abs/2505.05670v1
|
. . . . . 19 SA-5.2 Proof of Lemma SA-2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 SA-5.3 Proof of Lemma SA-2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 SA-5.4 Proof of Lemma SA-2.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 SA-5.5 Proof of Theorem SA-2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 SA-5.6 Proof of Theorem SA-2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 SA-5.7 Proof of Theorem SA-2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 SA-5.8 Proof of Theorem SA-2.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 SA-5.9 Proof of Theorem SA-2.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 SA-5.10 Proof of Theorem SA-2.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 SA-5.11 Proof of Theorem SA-2.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 SA-5.12 Proof of Theorem SA-2.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 SA-5.13 Proof of Theorem SA-2.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
|
https://arxiv.org/abs/2505.05670v1
|
40 SA-6 Proofs for Section SA-3 41 SA-6.1 Proof of Lemma SA-3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 SA-6.2 Proof of Lemma SA-3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 SA-6.3 Proof of Lemma SA-3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 SA-6.4 Proof of Lemma SA-3.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 SA-6.5 Proof of Lemma SA-3.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 SA-6.6 Proof of Theorem SA-3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 SA-6.7 Proof of Theorem SA-3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 1 SA-6.8 Proof of Theorem SA-3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 SA-6.9 Proof of Theorem SA-3.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 SA-6.10 Proof of Theorem SA-3.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 SA-6.11 Proof of Theorem SA-3.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 SA-7 Proofs of Distance-Based Bias Results 50 SA-7.1 Proof of Lemma 2 . . . . . . . . . . . . . . . . . . . . . . . . .
|
https://arxiv.org/abs/2505.05670v1
|
. . . . . . . . . . . . . . . 50 SA-7.2 Proof of Lemma 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 SA-7.3 Proof of Theorem 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 SA-8 Proofs for Section SA-4 61 SA-8.1 Proof of Lemma SA-4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 SA-8.2 Proof of Lemma SA-4.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 2 SA-1 Setup This supplemental appendix collects all the technical work underl ying the results presented in the main paper. It considers a generalized version of the problems studied in the main paper: the location variable Xiisd-dimensional with dg1 (its support is X¦Rd), and the boundary region Bis a low dimensional manifold with “effective dimension” d−1. The special case considered in the main paper is d= 2, that is, Xi is bivariate and Bis a one-dimensional (boundary) curve. Assumption 1 from the main paper is generalized to the following: Assumption SA–1 (Data Generating Process) Lett∈ {0,1}. (i)(Y1(t),X¦ 1)¦,...,(Yn(t),X¦ n)¦are independent and identically distributed random vectors with X=/producttextd l=1[al,bl]for−∞< al< bl<∞forl= 1,···,d. (ii)The distribution of Xihas a Lebesgue density fX(x)that is continuous and bounded away from zero on X. (iii)µt(x) =E[Yi(t)|Xi=x]is(p+1)-times continuously differentiable on X. (iv)Ã2 t(x) =V[Yi(t)|Xi=x]is bounded away from zero and continuous on X. (v) supx∈XE[|Yi(t)|2+v|Xi=x]<∞for some vg2. We partition Xinto two areas, At¢Rdwitht∈ {0,1}, which represent the control and treatment regions, respectively. That is, X=A0∪A1, whereA0andA1are two disjoint regions in Rd, andcl(At) denotes the closure of At,t∈ {0,1}. The observed outcome is Yi= 1(Xi∈A0)Yi(0)+ 1(Xi∈A1)Yi(1). B=bd(A0)∩bd(A1) denotes the boundary determined by the assignment regions At,t∈ {0,1}, where bd(At) denotes the topological boundary of At. Thetreatment effect curve along the boundary is Ä(x) =E[Yi(1)−Yi(0)|Xi=x],x∈B. SA-1.1 Notation and Definitions For textbook references on empirical process, see van der Vaart and Wellner (1996),Dudley(2014), and Gin´ e and Nickl (2016). For textbook reference on geometric measure theory, see Simon et al. (1984),Federer (2014), andFolland(2002). (i)Multi-index Notations . For a multi-index u= (u1,...,u d)∈Nd, denote |u|=/summationtextd i=1ud,u! = Πd i=1ud. DenoteRp(u) = (1,u1,...,u d,u2 1,...,u2 d,...,up 1,...,up d), that is, all monomials u³1 1···u³d dsuch that ³i∈Nand/summationtextd i=1³ifp. Define e1+νto be the pd=(d+p)! d!p!-dimensional vector such that e¦ 1+νRp(u) = uνfor allu∈Rd. (ii)Norms. For a vector v∈Rk,∥v∥= (/summationtextk i=1v2 i)1/2,∥v∥∞=max1fifk|vi|. For a matrix A∈Rm×n, ∥A∥p=sup∥x∥p=1∥Ax∥p,p∈N∪{∞}. For
|
https://arxiv.org/abs/2505.05670v1
|
a function fon a metric space ( S,d),∥f∥∞=supx∈X|f|, ∥f∥Lip,∞=supx,x′∈S|x−x′| d(x,x′). For a probability measure Qon (S,S) andpg1, define ∥f∥Q,p= (/integraltext S|f|pdQ)1/p. For a set E¦Rd, denote by m(E) the Lebesgue measure of E. (iii)Empirical Process . We use sta ndard empirical process notations: En[g(vi)] =1 n/summationtextn i=1g(vi) and Gn[g(vi)] =1√n/summationtextn i=1(g(vi)−E[g(vi)]). Let ( S,d) be a semi-metric space. The covering num- berN(S,d,ε) is the minimal number of balls Bs(ε) ={t:d(t,s)< ε}needed to cover S. A 3 P-Brownian bridge is a mean-zero Gaussian random function Wn(f),f∈L2(X,P) with the covariance E[WP(f)WP(g)] =P(fg)−P(f)P(g), forf,g∈L2(X,P). A class F¦L2(X,P) isP-pregaussian if there is a version of P-Brownian bridge WPsuch that WP∈C(F;ÄP) almost surely, where ÄPis the semi-metric on L2(X,P) is defined by ÄP(f,g) = (∥f−g∥2 P,2−(/integraltext f dP−/integraltext gdP)2)1/2, forf,g∈L2(X,P). (iv)Geometric Measure Theory . For a set E¦X, theDe Giorgi perimeter of Erelated to XisL(E) = TV{ 1E},X.Bis arectifiable curve if there exists a Lipschitz continuous function µ: [0,1]→Rsuch thatB=µ([0,1]). We define the curve length function of Bto beL(B) =supÃ∈Πs(Ã,µ), where Π ={(t0,t1,...,tN) :N∈N,0ft0< t1< ...ftNf1}ands(Ã,µ) =/summationtextN i=0∥µ(ti)−µ(ti+1)∥2for Ã= (t0,t1,...,tN). (v)Bounds and Asymptotics . For reals sequences |an|=o(|bn|) iflimsupan bn= 0,|an|≲|bn|if there exists some constant CandN >0 such that n > Nimplies|an| fC|bn|. For sequences of random variables an=oP(bn) if plimn→∞an bn= 0,|an|≲P|bn|if limsupM→∞limsupn→∞P[|an bn| gM] = 0. (vi)Distributions and Statistical Distances . Forµ∈RkandΣak×kpositive definite matrix, N(µ,Σ) denotes the Gaussian distribution with mean µand covariance Σ. For−∞< a < b < ∞,Unif([a,b]) denotes the uniform distribution on [ a,b].Bern(p) denotes the Bernoulli distribution with success probability p. Φ(·)denotesthestandardGaussiancumulativedistributionfunction. F ortwodistributions PandQ,dKL(P,Q) denotes the KL-distance between PandQ, anddÇ2(P,Q) denotes the Ç2distance between PandQ. SA-1.2 Mapping between Main Paper and Supplement The results in the main paper are special cases of the results in this supplemental appendix as follows. •Lemma 1 in the paper corresponds to Lemma SA-3.1withd= 2. •Lemma 2 in is proven in Section SA-7.1. •Lemma 3 in is proven in Section SA-7.2. •Theorem 1(i) in the paper corresponds to Theorem SA-3.1withd= 2. •Theorem 1(ii) in the paper corresponds to Theorem SA-3.3withd= 2. •Theorem 2(i) in the paper corresponds to Theorem SA-3.2withd= 2. •Theorem 2(ii) in the paper corresponds to Theorem SA-3.6withd= 2. •Theorem 3(i) in the paper corresponds to Theorem SA-2.1withd= 2. •Theorem 3(ii) in the paper corresponds to Theorem SA-2.5withd= 2. •Theorem 4 in the paper corresponds to Theorem SA-2.2withd= 2. •Theorem 5(i) in the paper corresponds to Theorem SA-2.4withd= 2. •Theorem 5(ii) in the paper corresponds to Theorem SA-2.9withd= 2. •Theorem 6 is proven in Section SA-7.3. 4 SA-2 Location-Based Methods We consider a more general setting compared to the main paper, where th e parameter of interest is Ä(ν)(x) =µ(ν) 1(x)−µ(ν) 0(x),x∈B, whereνis a multi-index with |ν| fp. Thus, the treatment effect curve estimator is ( /hatwideÄ(ν)(x) :x∈B), where /hatwideÄ(ν)(x) =/hatwideµ(ν) 1(x)−/hatwideµ(ν) 0(x),x∈B, where, for t∈ {0,1},/hatwideµ(ν) t(x) =e¦ 1+ν/hatwideβt(x) with /hatwideβt(x) = argmin β∈Rpp+1En/bracketleftBig/parenleftbig Yi−Rp(Xi−x)¦β/parenrightbig2Kh(Xi−x) 1(Xi∈At)/bracketrightBig ,x∈B, withpp=(d+p)! d!p!,Rp(u) = (1,u1,u2···,ud,u2 1,u1u2,u1u2,···u2 d,···,up 1,up−1 1u2,···,up 2)¦denotes the pth order polynomial expansion of the d-variate vector u= (u1,···,ud)¦,Kh(u) =K(u1/h,···,ud/h)/hdfor a d-variate kernel function K(·) and a bandwidth parameter h. We impose the following
|
https://arxiv.org/abs/2505.05670v1
|
assumption on d-variate kernel function and assignment boundary. Assumption SA–2 (Kernel Function and Bandwidth) Lett∈ {0,1}. (i)K:Rd→[0,∞)is compact supported and Lipschitz continuous, or K(u) = 1(u∈[−1,1]d). (ii) liminf h³0infx∈B/integraltext AtKh(u−x)du≳1. Under the assumptions imposed, for t∈ {0,1}, we have /hatwideβt(x) =H−1/hatwideΓ−1 t,xEn/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x)Yi 1(Xi∈At)/bracketrightbigg , whereH= diag((h|v|)0f|v|fp) withvrunning through alld+p d!p!multi-indices such that |v| fp, and /hatwideΓt,x=En/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−x h/parenrightbigg¦ Kh(Xi−x) 1(Xi∈At)/bracketrightbigg . In particular, ∥e¦ 1+νH−1∥2=∥e¦ 1+νH−1∥∞=h−|ν|. Forx,x1,x2∈Bandt∈ {0,1}, we introduce the following quantities: Γt,x=E/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−x h/parenrightbigg¦ Kh(Xi−x) 1(Xi∈At)/bracketrightbigg , /hatwideΣt,x1,x2=hdEn/bracketleftbigg Rp/parenleftbiggXi−x1 h/parenrightbigg Rp/parenleftbiggXi−x2 h/parenrightbigg¦ Kh(Xi−x1)Kh(Xi−x2)ε2 i 1(Xi∈At)/bracketrightbigg , Σt,x1,x2=hdE/bracketleftbigg Rp/parenleftbiggXi−x1 h/parenrightbigg Rp/parenleftbiggXi−x2 h/parenrightbigg¦ Kh(Xi−x1)Kh(Xi−x2)Ã2 t(Xi) 1(Xi∈At)/bracketrightbigg , /hatwideΩ(ν) t,x1,x2=1 nhd+2|ν|e¦ 1+ν/hatwideΓ−1 t,x1/hatwideΣt,x1,x2/hatwideΓ−1 t,x2e1+ν,/hatwideΩ(ν) x1,x2=/hatwideΩ(ν) 0,x1,x2+/hatwideΩ(ν) 1,x1,x2, Ω(ν) t,x1,x2=1 nhd+2|ν|e¦ 1+νΓ−1 t,x1Σt,x1,x2Γ−1 t,x2e1+ν,Ω(ν) x1,x2= Ω(ν) 0,x1,x2+Ω(ν) 1,x1,x2, 5 whereεi=Yi−/summationtext t∈{0,1} 1(Xi∈At)/hatwideβt(x)¦Rp(Xi−x) andÃ2 t(x) =V[Yi(t)|Xi=x]. Denote /hatwideB(ν) t,x=e¦ 1+ν/hatwideΓ−1 t,x/summationdisplay |ω|=p+1µ(ω) t(x) ω!En/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg/parenleftbiggXi−x h/parenrightbiggω Kh(Xi−x)/bracketrightbigg ,/hatwideB(ν) x=/hatwideB(ν) 1,x−/hatwideB(ν) 0,x, B(ν) t,x=e¦ 1+νΓ−1 t,x/summationdisplay |ω|=p+1µ(ω) t(x) ω!E/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg/parenleftbiggXi−x h/parenrightbiggω Kh(Xi−x)/bracketrightbigg , B(ν) x=B(ν) 1,x−B(ν) 0,x, Qt,x=En/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)εi/bracketrightbigg , /hatwideV(ν) t,x=e¦ 1+ν/hatwideΓ−1 t,x/hatwideΣt,x,x/hatwideΓ−1 t,xe1+ν,/hatwideV(ν) x=/hatwideV(ν) 0,x+/hatwideV(ν) 1,x, V(ν) t,x=e¦ 1+νΓ−1 t,xΣt,x,xΓ−1 t,xe1+ν, V(ν) x=V(ν) 0,x+V(ν) 1,x, whereui=Yi−/summationtext t∈{0,1}1(Xi∈At)µt(Xi). SA-2.1 Preliminary Lemmas In what follows, we denote X= (X¦ 1,···,X¦ n) andWn= ((X¦ 1,Y1),···,(X¦ n,Yn))¦. Lemma SA-2.1 (Gram) Suppose Assumption SA–1(i)(ii) and Assumption SA–2hold. Iflog(1/h) nhd=o(1), then sup x∈B∥/hatwideΓt,x−Γt,x∥≲P/radicalbigg log(1/h) nhd,1≲Pinf x∈B∥/hatwideΓt,x∥≲sup x∈B∥/hatwideΓt,x∥≲P1, sup x∈B∥/hatwideΓ−1 t,x−Γ−1 t,x∥≲P/radicalbigg log(1/h) nhd, fort∈ {0,1}. Lemma SA-2.2 (Bias) Suppose Assumption SA–1(i)(ii)(iii) and Assumption SA–2hold. Iflog(1/h) nhd=o(1), then sup x∈B/vextendsingle/vextendsingle/vextendsingleE[/hatwideµ(ν) t(x)|X]−µ(ν) t(x)/vextendsingle/vextendsingle/vextendsingle≲Php+1−|ν|, fort∈ {0,1}. If, in addition, h=o(1), then sup x∈B/vextendsingle/vextendsingle/vextendsingleE[/hatwideµ(ν) t(x)|X]−µ(ν) t(x)−hp+1−|ν|/hatwideB(ν) t,x/vextendsingle/vextendsingle/vextendsingle=oP(hp+1−|ν|), fort∈ {0,1}. Moreover, supx∈B|/hatwideB(ν) t,x−B(ν) t,x|≲P/radicalBig log(1/h) nhd, which implies supx∈B|/hatwideB(ν) t,x|≲P1fort∈ {0,1}. Lemma SA-2.3 (Stochastic Linear Approximation) Suppose Assumption SA–1(i)(ii)(iv)(v) and Assumption SA–2hold. Supposelog(1/h) nhd=o(1), then sup x∈B∥Qt,x∥≲P/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd, sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideµ(ν) t(x)−E/bracketleftbig /hatwideµ(ν) t(x)/vextendsingle/vextendsingleX/bracketrightbig −e¦ 1+νH−1Γ−1 t,xQt,x/vextendsingle/vextendsingle/vextendsingle≲Ph−|ν|/radicalbigg log(1/h) nhd/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightbigg , 6 fort∈ {0,1}. Lemma SA-2.4 (Covariance) Suppose Assumptions SA–1andSA–2hold. Iflog(1/h) nhd=o(1), then sup x1,x2∈B∥/hatwideΣt,x1,x2−Σt,x1,x2∥≲P/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd+hp+1, sup x1,x2∈B/vextendsingle/vextendsingle/vextendsingle/hatwideΩ(ν) x1,x2−Ω(ν) x1,x2/vextendsingle/vextendsingle/vextendsingle≲P(nhd+2|ν|)−1/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd+hp+1/parenrightbigg , sup x∈B/vextendsingle/vextendsingle/vextendsingle(/hatwideΩ(ν) x,x)−1 2−(Ω(ν) x,x)−1 2/vextendsingle/vextendsingle/vextendsingle≲P/radicalbig nhd+2|ν|/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd+hp+1/parenrightbigg , fort∈ {0,1}. SA-2.2 Point Estimation Theorem SA-2.1 (Pointwise Convergence Rate) Suppose Assumptions SA–1andSA–2hold. Ifnhd→ ∞, then sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideÄ(ν)(x)−Ä(ν)(x)/vextendsingle/vextendsingle/vextendsingle≲Ph−|ν|/parenleftbigg hp+1+1√ nhd+1 n1+v 2+vhd/parenrightbigg . The conditional mean-squared error (MSE) is MSEν(x) =E/bracketleftBig (/hatwideÄ(ν)(x)−Ä(ν)(x))2/vextendsingle/vextendsingle/vextendsingleX/bracketrightBig ,x∈B, and, for some non-negative weighting function Ésatisfying/integraltext BÉ(x)dx<∞, the conditional integrated mean-squared error (IMSE) is defined to be IMSEν=/integraldisplay BMSEν(x)É(x)dHd−1(x), whereHd−1is the (d−1) dimensional Hausdorff measure, also known as “area” element on B(Folland,2002; Federer,2014). Theorem SA-2.2 (MSE Expansions) Suppose Assumptions SA–1andSA–2hold. Iflog(1/h) nhd=o(1)andh=o(1), then MSEν(x) = (hp+1−|ν|B(ν) x)2+n−1h−d−2|ν|V(ν) x+oP/parenleftbig h2p+2−2|ν|+n−1h−d−2|ν|/parenrightbig ,x∈B, IMSEν=/integraldisplay B/bracketleftBig (hp+1−|νB(ν) x)2+n−1h−d−2|ν|V(ν) x/bracketrightBig É(x)dHd−1(x)+oP/parenleftbig h2p+2−2|ν|+n−1h−d−2|ν|/parenrightbig . With the estimated /hatwideB(ν) xand/hatwideV(ν) x, supposelog(1/h) nv 2+vhd=o(1)andh=o(1), then MSEν(x) = (hp+1−|ν|/hatwideB(ν) x)2+n−1h−d−2|ν|/hatwideV(ν) x+oP/parenleftbig h2p+2−2|ν|+n−1h−d−2|ν|/parenrightbig ,x∈B, IMSEν=/integraldisplay B/bracketleftBig (hp+1−|ν/hatwideB(ν) x)2+n−1h−d−2|ν|/hatwideV(ν) x/bracketrightBig É(x)dHd−1(x)+oP/parenleftbig h2p+2−2|ν|+n−1h−d−2|ν|/parenrightbig . 7 If/hatwideB(ν) x̸= 0, the asymptotic MSE-optimal bandwidth is hMSE,ν,p(x) =/parenleftBigg (d+2|ν|)/hatwideV(ν) x (2p+2−2|ν|)n(/hatwideB(ν) x)2/parenrightBigg 1 2p+d+2 ,x∈B. If/integraltext B(B(ν) x)2É(x)dHd−1(x)̸= 0, the asymptotic IMSE-optimal bandwidth is hIMSE,ν,p=/parenleftBigg (d+2|ν|)/integraltext B/hatwideV(ν) xÉ(x)dHd−1(x) (2p+2−2|ν|)n/integraltext B(/hatwideB(ν) x)2É(x)dHd−1(x)/parenrightBigg 1 2p+d+2 . SA-2.3 Pointwise Inference For|ν| fp, define the feasible t-statistics: /hatwideT(ν)(x) =/hatwideÄ(ν)(x)−Ä(ν)(x)/radicalBig /hatwideΩ(ν) x,x,x∈B. Theorem
|
https://arxiv.org/abs/2505.05670v1
|
SA-2.3 (Asymptotic Normality) Suppose Assumptions SA–1andSA–2hold. Ifnhd→ ∞andnhdh2(p+1)→0, then sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbig/hatwideT(ν)(x)fu/parenrightbig −Φ(u)/vextendsingle/vextendsingle/vextendsingle/vextendsingle=o(1),x∈B. For any 0 < ³ <1, define the confidence interval: /hatwideI(ν) ³(x) =/bracketleftbigg /hatwideÄ(ν)(x)−c³/radicalBig /hatwideΩ(ν) x,x,/hatwideÄ(ν)(x)+c³/radicalBig /hatwideΩ(ν) x,x/bracketrightbigg , wherec³= inf{c >0 :P(|/hatwideZ| gc|Wn)f³}with/hatwideZ|X∼Normal(0,/hatwideΩ(ν) x,x), for each x∈B. Theorem SA-2.4 (Confidence Intervals) Suppose Assumptions SA–1andSA–2hold. Ifnhd→ ∞andnhdh2(p+1)→0, then P/bracketleftbig µ(ν)(x)∈/hatwideI(ν) ³(x)/bracketrightbig = 1−³+o(1),x∈B. SA-2.4 Uniform Inference Theorem SA-2.5 (Uniform Convergence Rate) Suppose Assumptions SA–1andSA–2hold. Iflog(1/h) nhd=o(1), then sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideÄ(ν)(x)−Ä(ν)(x)/vextendsingle/vextendsingle/vextendsingle≲Ph−|ν|/parenleftbigg hp+1+/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightbigg . /hatwideT(ν)is not directly a sum of i.i.d terms. For x∈B, we define the stochastic linearization of/hatwideT(ν)(x) to be T(ν)(x) =En/bracketleftbigg e¦ 1+νH−1/parenleftbig 1(Xi∈A1)Γ−1 1,x− 1(Xi∈A0)Γ−1 0,x/parenrightbig Rp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x)ui(Ω(ν) x,x)−1/2/bracketrightbigg , 8 withui=Yi−/summationtext t∈{0,1} 1(Xi∈At)µt(Xi). Theorem SA-2.6 (Stochastic Linearization) Suppose Assumptions SA–1andSA–2hold. Iflog(1/h) nv 2+vhd=o(1), then sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/hatwideT(ν)(x)−T(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲Php+1√ nhd+/radicalbig log(1/h)/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightbigg . Next, we exploit a structure of ( T(ν)(x) :x∈B). Define the following function indexed by x∈B. gx(u) = 1(u∈A1)K(ν) 1(u;x)− 1(u∈A0)K(ν) 0(u;x), u∈X, K(ν) t(u;x) =n−1/2(Ω(ν) x,x)−1/2e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggu−x h/parenrightbigg Kh(u−x),u∈X,t∈ {0,1}, and define the class of functions G={gx:x∈B}andR={Id}, whereId(x) =x, for allx∈R. Define the residual-based empirical process by Rn(g,r) =n−1/2n/summationdisplay i=1/bracketleftBig g(Xi)r(Yi)−g(Xi)E[r(Yi)|Xi]/bracketrightBig , g∈G,r∈R. Then, T(ν)(x) =Rn(gx,Id),x∈B. In Lemma SA-4.1, we provide a generic bound on the rate of Gaussian strong approximation f orresidual-based empirical process . This lemma generalizes Cattaneo and Yu (2025, Theorem 3) to allow for polynomial moment bound on the conditional distribution of YigivenXi. Theorem SA-2.7 (Strong Approximation of T(ν)) Suppose Assumptions SA–1andSA–2hold. Suppose there exists a constant C >0such that for t∈ {0,1} and for any x∈B, the De Giorgi perimeter of the set Et,x={y∈At: (y−x)/h∈Supp(K)}satisfies L(Et,x)fChd−1. Suppose liminf n→∞logh logn>−∞andnhd→ ∞asn→ ∞. Then, on a possibly enlarged probability space, there exists a mean-zero Gaussian process Z(ν)indexed by Bwith almost surely continuous sample path such that E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−Z(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg ≲(logn)3 2/parenleftbigg1 nhd/parenrightbigg1 2d+2·v v+2 +log(n)/parenleftbigg1 nv 2+vhd/parenrightbigg1 2 , where≲is up to a universal constant, and Z(ν)has the same covariance structure as T(ν); that is, Cov[T(ν)(x1),T(ν)(x2)] =Cov[Z(ν)(x1),Z(ν)(x2)]for allx1,x2∈B. For confidence bands, let /hatwideZ(ν)(x),x∈B, be a mean-zero Gaussian process with feasible (conditional) covariance function given by Cov/bracketleftBig /hatwideZ(ν)(x1),/hatwideZ(ν)(x2)/vextendsingle/vextendsingle/vextendsingleWn/bracketrightBig =/hatwideΩ(ν) x1,x2/radicalBig /hatwideΩ(ν) x1,x1/hatwideΩ(ν) x2,x2,x1,x2∈B. 9 Theorem SA-2.8 (Distributional Approximation for Suprema) Suppose Assumptions SA–1andSA–2hold. Suppose liminf n→∞logh logn>−∞,hp+1√ nhd→0andnv 2+vhd (logn)3→ ∞, then sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/hatwideT(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/vextendsingle/vextendsingle/vextendsingle/vextendsingleWn/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle=oP(1), whereWn= ((X¦ 1,Y1),···,(X¦ n,Yn))¦. For any 0 < ³ <1, define the confidence bands by /hatwideI(ν) ³(x) =/bracketleftbigg /hatwideÄ(ν)(x)−c³/radicalBig /hatwideΩ(ν) x,x,/hatwideÄ(ν)(x)+c³/radicalBig /hatwideΩ(ν) x,x/bracketrightbigg ,x∈B, wherec³= inf/braceleftBig c >0 :P/parenleftBig supx∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglegc/vextendsingle/vextendsingle/vextendsingleWn/parenrightBig f³/bracerightBig . Theorem SA-2.9 (Confidence bands) Suppose Assumptions SA–1andSA–2hold. Suppose liminf n→∞logh logn>−∞,hp+1√ nhd→0andnv 2+vhd (logn)3→ ∞, then P/bracketleftbig µ(ν)(x)∈/hatwideI(ν) ³(x),∀x∈B/bracketrightbig = 1−³−o(1). SA-3 Distance-Based Methods The treatment effect curve estimator for ( Ä(x) :x∈B) is /hatwideÄdis(x) =/hatwide¹1,x(0)−/hatwide¹0,x(0),x∈B, where, for t∈ {0,1},/hatwide¹t,x(0) =e¦ 1/hatwideγt(x) with /hatwideγt(x) = argmin γ∈Rp+1En/bracketleftBig/parenleftbig Yi−rp(Di(x))¦γ/parenrightbig2kh(Di(x)) 1It(Di(x))/bracketrightBig , where the univariate distance score is Di(x) = 1(Xi∈A1)d(Xi,x)− 1(Xi∈A0)d(Xi,x),x∈B, rp(u) = (1,u,···,up)¦,kh(u) =k(u/h)/h2for a univariate kernel k(·) and a bandwidth parameter h, and 1It(Di(x)) = 1(Di(x)∈It) withI0= (−∞,0) andI1= [0,∞). More generally, /hatwide¹t,x(Di(x)) =rp(Di(x))¦/hatwideγt(x), t∈ {0,1},x∈B. We impose the following assumptions on the distance function, kerne l function, and assignment boundary. Assumption SA–3
|
https://arxiv.org/abs/2505.05670v1
|
(Regularity Conditions for Distance) d:Rd×Rd→Ris a metric on Rdequivalent to the Euclidean distance, that is, there exists positive constants CuandClsuch that Cl∥x−x′∥ fd(x,x′)fCu∥x−x′∥for allx,x′∈X. Assumption SA–4 (Kernel Function) Lett∈ {0,1}. 10 (i)k:R→[0,∞)is compact supported and Lipschitz continuous, or k(u) = 1(u∈[−1,1]). (ii) liminf h³0infx∈B/integraltext Atkh(d(u,x))du≳1. For each t∈ {0,1}, the induced conditional expectation based on univariate distance is ¹t,x(r) =E[Yi|Di(x) =r] =E[Yi|d(Xi,x) =|r|,Xi∈At], r∈It,x∈B. More rigorously, for each t∈ {0,1}, letSt,x(r) ={v∈X:d(v,x) =r,v∈At}forrg0 andx∈B. LettingHd−1denote the ( d−1)-dimensional Hausdorff measure, then our definition means ¹t,x(r) =E[Yi|d(Xi,x) =|r|,Xi∈At] =/integraltext St,x(|r|)µt(v)fX(v)Hd−1(dv) /integraltext St,x(|r|)fX(v)Hd−1(dv), for|r|>0,x∈B,t∈ {0,1}. Forr= 0,x∈B,t∈ {0,1}, then ¹t,x(0) = lim r→0E[Yi|d(Xi,x) =|r|,Xi∈At] = lim r→0/integraltext St,x(|r|)µt(v)fX(v)Hd−1(dv) /integraltext St,x(|r|)fX(v)Hd−1(dv). Under our assumptions, the above limit exists, and thus we obtain the f ollowing identification result. Lemma SA-3.1 (Distance-Based Identification) Suppose Assumption SA–1(i)-(iii), and Assumption SA–3hold. Then, ¹t,x(0) =µt(x), for allt∈ {0,1}and x∈B. Fort∈ {0,1}, define the best mean square approximation ¹∗ t,x(Di(x)) =rp(Di(x))¦γ∗ t(x), where γ∗ t(x) = argmin γ∈Rp+1E/bracketleftBig/parenleftbig Yi−rp(Di(x))¦γ/parenrightbig2kh(Di(x)) 1It(Di(x))/bracketrightBig . The estimation error decomposes into linear error ,approximation error , andnon-linear error : /hatwide¹t,x(0)−¹t,x(0) =e¦ 1/hatwideΨ−1 t,xEn/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))Yi/bracketrightbigg −¹t,x(0) =e¦ 1/hatwideΨ−1 t,xEn/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))(Yi−¹∗ t,x(Di))/bracketrightbigg +¹∗ t,x(0)−¹t,x(0) =e¦ 1Ψ−1 t,xOt,x/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright linear error+¹∗ t,x(0)−¹t,x(0) /bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright approximation error+e¦ 1(/hatwideΨ−1 t,x−Ψ−1 t,x)Ot,x/bracehtipupleft/bracehtipdownright/bracehtipdownleft/bracehtipupright non-linear error, (SA-3.1) 11 for allt∈ {0,1}andx∈B, where /hatwideΨt,x=En/bracketleftBigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(x) h/parenrightbigg¦ kh(Di(x)) 1It(Di(x))/bracketrightBigg , Ψt,x=E/bracketleftBigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(x) h/parenrightbigg¦ kh(Di(x)) 1It(Di(x))/bracketrightBigg , Ot,x=En/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))(Yi−¹∗ t,x(Di(x))) 1It(Di(x))/bracketrightbigg , and the misspecification bias is Bn,t(x) =¹∗ t,x(0)−¹t,x(0). (SA-3.2) In the main text, Bn(x) =Bn,1(x)−Bn,0(x). Define the following for variance analysis: For t∈ {0,1}, x1,x2∈B, /hatwideΥt,x1,x2=hdEn/bracketleftbigg rp/parenleftbiggDi(x1) h/parenrightbigg rp/parenleftbiggDi(x2) h/parenrightbigg¦ kh(Di(x1))kh(Di(x2))(Yi−/hatwide¹t,x1(Di(x1))) (Yi−/hatwide¹t,x2(Di(x2))) 1It(Di(x1))/bracketrightbigg , Υt,x1,x2=hdE/bracketleftbigg rp/parenleftbiggDi(x1) h/parenrightbigg rp/parenleftbiggDi(x2) h/parenrightbigg¦ kh(Di(x1))kh(Di(x2))(Yi−¹∗ t,x1(Di(x1))) (Yi−¹∗ t,x2(Di(x2))) 1It(Di(x1))/bracketrightbigg , /hatwideΞt,x1,x2=1 nhde¦ 1/hatwideΨ−1 t,x1/hatwideΥt,x1,x2/hatwideΨ−1 t,x2e1,/hatwideΞx1,x2=/hatwideΞ0,x1,x2+/hatwideΞ1,x1,x2, Ξt,x1,x2=1 nhde¦ 1Ψ−1 t,x1Υt,x1,x2Ψ−1 t,x2e1,Ξx1,x2= Ξ0,x1,x2+Ξ1,x1,x2. SA-3.1 Preliminary Lemmas Lemma SA-3.2 (Gram) Suppose Assumption SA–1(i)(ii), Assumption SA–3and Assumption SA–4hold. Ifnhd log(1/h)→ ∞, then sup x∈B∥/hatwideΨt,x−Ψt,x∥≲P/radicalbigg log(1/h) nhd, 1≲Pinf x∈B∥/hatwideΨt,x∥≲sup x∈B∥/hatwideΨt,x∥≲P1, sup x∈B∥/hatwideΨ−1 t,x−Ψ−1 t,x∥≲P/radicalbigg log(1/h) nhd, fort∈ {0,1}. Lemma SA-3.3 (Stochastic Linear Approximation) Suppose Assumption SA–1(i)(ii)(iii)(v), Assumption SA–3and Assumption SA–4hold. Ifnhd log(1/h)→ ∞, 12 then sup x∈B∥Ot,x∥≲P/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd, sup x∈B/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xOt,x/vextendsingle/vextendsingle≲P/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd, sup x∈B/vextendsingle/vextendsingle/vextendsinglee¦ 1(/hatwideΨ−1 t,x−Ψ−1 t,x)Ot,x/vextendsingle/vextendsingle/vextendsingle≲P/radicalbigg log(1/h) nhd/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightbigg , fort∈ {0,1}. Lemma SA-3.4 (Approximation Error: Minimal Guarantee) Suppose Assumption SA–1(i)(ii)(iii), Assumption SA–3and Assumption SA–4hold. Then, sup x∈B|Bn(x)|≲h. Lemma SA-3.5 (Covariance) Suppose Assumptions SA–1,SA–3andSA–4hold. Ifnhd log(1/h)→ ∞, then max t∈{0,1}sup x1,x2∈B∥/hatwideΥt,x1,x2−Υt,x1,x2∥≲P/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd, max t∈{0,1}sup x1,x2∈Bnhd|/hatwideΞt,x1,x2−Ξt,x1,x2|≲P/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd. If, in addition,nv 2+vhd log(1/h)→ ∞, then inf x∈B¼min(/hatwideΥt,x,x)≳P1, inf x∈B/hatwideΞt,x,x≳P(nhd)−1, sup x1,x2∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/hatwideΞt,x1,x2/radicalBig /hatwideΞt,x1,x2/hatwideΞt,x2,x2−Ξt,x1,x2/radicalbig Ξt,x2,x2Ξt,x2,x2/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲P/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd. Since we consider a covariance estimator based on the best linear appro ximation, instead of the population conditional mean functions, no bias condition appears in the estimates ab ove. SA-3.2 Pointwise Inference Theorem SA-3.1 (Convergence Rate) Suppose Assumptions SA–1,SA–3andSA–4hold. Ifnhd→ ∞, then |/hatwideÄdis(x)−Ä(x)|≲P1√ nhd+1 n1+v 2+vhd+|Bn(x)|, for allx∈B. 13 Define the feasible t-statistics by /hatwideTdis(x) =/hatwideÄdis(x)−Ä(x)/radicalBig /hatwideΞx,x,x∈B. Theorem SA-3.2 (Asymptotic Normality) Suppose Assumptions SA–1,SA–3andSA–4hold. Ifnv 2+vhd→ ∞and√ nhdsupx∈B|Bn(x)| →0, then sup u∈R/vextendsingle/vextendsingle/vextendsingleP/parenleftBig /hatwideTdis(x)fu/parenrightBig −Φ(u)/vextendsingle/vextendsingle/vextendsingle=o(1),∀x∈B. For any 0< ³ <1, takec³=inf{c >0 :P(|Z| gc)f³}whereZ∼N(0,1), and define/hatwideIdis(x,³)
|
https://arxiv.org/abs/2505.05670v1
|
=/parenleftbigg /hatwideÄdis(x)−c³/radicalBig /hatwideΞx,x,/hatwideÄdis(x)+c³/radicalBig /hatwideΞx,x/parenrightbigg . Then, P/parenleftBig Ä(x)∈/hatwideIdis(x,³)/parenrightBig →1−³,x∈B. SA-3.3 Uniform Inference Theorem SA-3.3 (Uniform Convergence Rate) Suppose Assumptions SA–1,SA–3andSA–4hold. Ifnhd log(1/h)→ ∞, then sup x∈B|/hatwideÄdis(x)−Ä(x)|≲P/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd+ sup x∈B|Bn(x)|. DefineTdis(x) to be the stochastic linearization of /hatwideTdis(x), that is, we define Tdis(x) = Ξ−1/2 x,x(e¦ 1Ψ−1 1,xO1,x−e¦ 1Ψ−1 0,xO0,x),x∈B. Theorem SA-3.4 (Stochastic Linearization) Suppose Assumptions SA–1,SA–3andSA–4hold. Supposenhd log(1/h)→ ∞. Then, sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideTdis(x)−Tdis(x)/vextendsingle/vextendsingle/vextendsingle≲P/radicalbig log(1/h)/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightbigg +√ nhdsup x∈B|Bn(x)|. ToestablishaGaussianstrongapproximationfor Tdis(x), considertheclassoffunctions G={gx:x∈B} andH={hx:x∈B}, where gx(u) = 1A1(u)K1(u;x)− 1A0(u)K0(u;x), u∈X, Kt(u;x) =1/radicalbig nΞx,xe¦ 1Ψ−1 t,xrp/parenleftbiggd(u,x) h/parenrightbigg kh(d(u,x)), u∈X,x∈B,t∈ {0,1}, hx(u) =− 1A1(u)K1(u;x)¹∗ 1,x(d(u,x))+ 1A0(u)K0(u;x)¹∗ 0,x(d(u,x)),u∈X,x∈B,(SA-3.3) andRis the singleton of identity function Id:R/ma√sto→R,Id(x) =x. For classes of functions G,HfromRdto 14 RandRfromRtoR. Then, for x∈B,Tdis(x) can be represented by Tdis(x) =1√nn/summationdisplay i=1/bracketleftBig gx(Xi)Id(yi)+hx(Xi)−E[gx(Xi)Id(yi)+hx(Xi)]/bracketrightBig . Define the multiplicative separable empirical processes by Mn(g,r) =1√nn/summationdisplay i=1/bracketleftbig g(xi)r(yi)−E[g(xi)r(yi)]/bracketrightbig , g ∈G,r∈R. Then,Tdis(,x) has the representation Tdis(x) =Mn(gx,Id)+Mn(hx,1),x∈B. In Lemma SA-4.2, we give upper bounds for Gaussian strong approximation of additive empirical process of the form ( Mn(g,r)+Mn(h,s) :g∈G,r∈R,h∈H,s∈S). Since upper bounds for empirical processes of the form ( Mn(g,r) :g∈G,r∈R) has already been studied in ( Cattaneo and Yu ,2025, Theorem SA.1), LemmaSA-4.2is given as its simple extension, considering the worse case betwee nGandH, and between RandS. Applying Lemma SA-4.2, we get the following theorem on Gaussian strong approximation of (Tdis(x) :x∈B). Theorem SA-3.5 (Strong Approximation of t-statistics) Suppose Assumption SA–1,SA–3andSA–4hold. Suppose there exists a constant C >0such that for t∈ {0,1}and for any x∈B, the De Giorgi perimeter of the set Et,x={y∈At: (y−x)/h∈Supp(K)} satisfies L(Et,x)fChd−1. Suppose liminf n→∞logh logn>−∞andnhd→ ∞asn→ ∞. Then, on a possibly enlarged probability space there exists a mean-zero Gaussian proc esszindexed by Bwith almost surely continuous sample path such that E/bracketleftbigg sup x∈B/vextendsingle/vextendsingleTdis(x)−z(x)/vextendsingle/vextendsingle/bracketrightbigg ≲(logn)3 2/parenleftbigg1 nhd/parenrightbigg1 2d+2·v v+2 +log(n)/parenleftbigg1 nv 2+vhd/parenrightbigg1 2 , where≲is up to a universal constant. Moreover, zhas the same covariance structure as Tdis, that is, Cov/bracketleftbig Tdis(x1),Tdis(x2)/bracketrightbig =Cov[z(x1),z(x2)],∀x,y∈B. Theorem SA-3.6 (Confidence Bands) Suppose Assumption SA–1,SA–3andSA–4hold. Suppose liminf n→∞logh logn>−∞,nv 2+vhd (logn)3→ ∞, and√ nhdsupx∈B/summationtext t∈{0,1}|Bn,t(x)| →0. Suppose/hatwidezis a mean-zero Gaussian process indexed by Bs.t. Cov[/hatwidez(x1),/hatwidez(x2)] =/hatwideΞx1,x2/radicalBig /hatwideΞx1,x1/hatwideΞx2,x2,x1,x2∈B. LetUnbe theÃ-algebra generated by ((Yi,(Di(x) :x∈B)) : 1fifn). Then sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideTdis(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B|/hatwidez(x)| fu/vextendsingle/vextendsingle/vextendsingle/vextendsingleUn/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle=oP(1). 15 For any 0< ³ < 1, if we define c³=inf{c >0 :P(supx∈B|/hatwidez(x)| gc|Un)f³}and define/hatwideI³(x) =/parenleftbigg /hatwideÄdis(x)−c³/radicalBig /hatwideΞx,x,/hatwideÄdis(x)+c³/radicalBig /hatwideΞx,x/parenrightbigg for allx∈B, then P/parenleftBig Ä(x)∈/hatwideI³(x),∀x∈B/parenrightBig = 1−³−o(1). SA-4 Gaussian Strong Approximation Lemmas We present two Gaussian strong approximation lemmas that are the key te chnical tools behind Theorem SA-2.7 and Theorem SA-3.5, building on and generalizing the results in Cattaneo and Yu (2025). Consider the residual-based empirical process given by Rn[g,r] =1√nn/summationdisplay i=1/bracketleftBig g(xi)r(yi)−E[g(xi)r(yi)|xi]/bracketrightBig , g∈G,r∈R, whereGandRare classes of functions satisfying certain regularity conditions. In addition, consider the multiplicative-separable empirical process given by Mn[g,r] =1√nn/summationdisplay i=1/bracketleftBig g(xi)r(yi)−E[g(xi)r(yi)]/bracketrightBig , g∈G,r∈R. SA-4.1 Definitions for Function Spaces LetFbe a class of measurable functions from a probability space ( Rq,B(Rq),P) toR. We introduce several definitions that capture properties of F. (i)Fis pointwise measurable if it contains a countable subset Gsuch that for any f∈F, there exists a sequence ( gm:mg1)¦Gsuch that lim m→∞gm(u) =f(u) for
|
https://arxiv.org/abs/2505.05670v1
|
allu∈Rq. (ii)LetSupp(F) =∪f∈FSupp(f). A probability measure QFon (Rq,B(Rq)) is a surrogate measure for P with respect to Fif (i)QFagrees with Pon Supp( P)∩Supp(F). (ii)QF(Supp(F)\Supp(P)) = 0. LetQF= Supp(QF). (iii) For q= 1 and an interval I¦R, the pointwise total variation of FoverIis pTVF,I= sup f∈Fsup Pg1sup PP∈IP−1/summationdisplay i=1|f(ai+1)−f(ai)|, wherePP={(a1,...,a P) :a1f ··· faP}denotes the collection of all partitions of I. (iv) For a non-empty C¦Rq, the total variation of FoverCis TVF,C= inf U∈O(C)sup f∈Fsup ϕ∈Dq(U)/integraldisplay Rqf(u)div(ϕ)(u)du/∥∥ϕ∥2∥∞, 16 whereO(C) denotes the collection of all open sets that contains C, and Dq(U) denotes the space of infinitely differentiable functions from RqtoRqwith compact support contained in U. (v)For a non-empty C¦Rq, the local total variation constant of FoverC, is a positive number KF,C such that for any cube D¦Rqwith edges of length ℓparallel to the coordinate axises, TVF,D∩CfKF,Cℓd−1. (vi) For a non-empty C¦Rq, the envelopes of FoverCare MF,C= sup u∈CMF,C(u), M F,C(u) = sup f∈F|f(u)|,u∈C. (vii) For a non-empty C¦Rq, the Lipschitz constant of FoverCis LF,C= sup f∈Fsup u1,u2∈C|f(u1)−f(u2)| ∥u1−u2∥∞. (viii) For a non-empty C¦Rq, theL1bound of FoverCis EF,C= sup f∈F/integraldisplay C|f|dP. (ix) For a non-empty C¦Rq, the uniform covering number of Fwith envelope MF,CoverCis NF,C(¶,MF,C) = sup µN(F,∥·∥µ,2,¶∥MF,C∥µ,2), ¶∈(0,∞), where the supremum is taken over all finite discrete measures on ( C,B(C)). We assume that MF,C(u) is finite for every u∈C. (x) For a non-empty C¦Rq, the uniform entropy integral of Fwith envelope MF,CoverCis JC(¶,F,MF,C) =/integraldisplay¶ 0/radicalBig 1+logNF,C(ε,MF,C)dε, where it is assumed that MF,C(u) is finite for every u∈C. (xi)For a non-empty C¦Rq,Fis a VC-type class with envelope MF,CoverCif (i)MF,Cis measurable andMF,C(u) is finite for every u∈C, and (ii) there exist cF,C>0 anddF,C>0 such that NF,C(ε,MF,C)fcF,Cε−dF,C, ε∈(0,1). If a surrogate measure QFforPwith respect to Fhas been assumed, and it is clear from the context, we drop the dependence on C=QFfor all quantities in the previous definitions. That is, to save notati on, we setTVF=TVF,QF,KF=KF,QF,MF=MF,QF,MF(u) =MF,QF(u),LF=LF,QF, and so on, whenever there is no confusion. 17 SA-4.2 Residual-based Empirical Process The following Lemma SA-4.1generalizes Cattaneo and Yu (2025, Theorem 2) by allowing yito have bounded moments conditional on xi. Lemma SA-4.1 (Strong Approximation for Residual-based Empirical Pr ocesses) Suppose (zi= (xi,yi) : 1fifn)are i.i.d. random vectors taking values in (Rd+1,B(Rd+1))with common lawPZ, wherexihas distribution PXsupported on X¦Rd,yihas distribution PYsupported on Y¦R, supx∈XE[|yi|2+v|xi=x]f2for some v >0, and the following conditions hold. (i)Gis a real-valued pointwise measurable class of functions on (Rd,B(Rd),PX). (ii)There exists a surrogate measure QGforPXwith respect to Gsuch that QG=m◦ϕG, where the normalizing transformation ϕG:QG/ma√sto→[0,1]dis a diffeomorphism. (iii)Gis a VC-type class with envelope MGoverQGwithcGgeanddGg1. (iv)Ris a real-valued pointwise measurable class of functions on (R,B(R),PY). (v)Ris a VC-type class with envelope MR,YoverYwithcR,YgeanddR,Yg1, whereMR,Y(y) + pTVR,(−|y|,|y|)fv(1+|y|)for ally∈Y, for some v>0. (vi)There exists a constant ksuch that |log2EG|+|log2TV|+|log2MG| fklog2n, whereTV=max{TVG,TVG×VR,QG} withVR={¹(·,r) :r∈R}, and¹(x,r) =E[r(yi)|xi=x]. Define the residual based empirical process Rn(g,r) =1√nn/summationdisplay i=1g(xi)(r(yi)−E[r(yi)|xi]), g∈G,r∈R. Then, on a possibly enlarged probability space, there exists a sequence of mean-zero Gaussian processes (ZR n(g,r) :g∈G,r∈R)with almost sure continuous trajectories such that: •E[Rn(g1,r1)Rn(g2,r2)] =E[ZR n(g1,r1)ZR n(g2,r2)]for all(g1,r1),(g2,r2)∈G×R, and •E/bracketleftbig ∥Rn−ZR n∥G×R/bracketrightbig fCv((dlog(cn))3 2rv v+2n(√MGEG)2 v+2+dlog(cn)MGn−v/2 2+v+dlog(cn)MGn−1 2/parenleftBig√MGEG rn/parenrightBig2 v+2), whereCis a universal
|
https://arxiv.org/abs/2505.05670v1
|
constant, c=cG+cR,Y+k,d=dGdR,Yk, and rn= min/braceleftBig(cd 1Md+1 GTVdEG)1/(2d+2) n1/(2d+2),(cd/2 1cd/2 2MGTVd/2EGLd/2)1/(d+2) n1/(d+2)/bracerightBig , c1=dsup x∈QGd−1/productdisplay j=1Ãj(∇ϕG(x)),c2= sup x∈QG1 Ãd(∇ϕG(x)). SA-4.3 Multiplicative-Separable Empirical Process The following Lemma SA-4.2generalizes Cattaneo and Yu (2025, Theorem SA.1) by allowing yito have bounded moments conditional on xi. 18 Lemma SA-4.2 (Strong Approximation for (Mn(g,r)+Mn(h,s) :g∈G,r∈R,h∈H,s∈S)) Suppose (zi= (xi,yi) : 1fifn)are i.i.d. random vectors taking values in (Rd+1,B(Rd+1))with common lawPZ, wherexihas distribution PXsupported on X¦Rd,yihas distribution PYsupported on Y¦R, supx∈XE[|yi|2+v|xi=x]f2for some v >0, and the following conditions hold. (i)GandHare real-valued pointwise measurable classes of functions on (Rd,B(Rd),PX). (ii)There exists a surrogate measure QG∪HforPXwith respect to G∪Hsuch that QG∪H=m◦ϕG∪H, where the normalizing transformation ϕG∪H:QG∪H/ma√sto→[0,1]dis a diffeomorphism. (iii)Gis a VC-type class with envelope MG,QG∪HoverQG∪HwithcG,QG∪HgeanddG,QG∪Hg1.His a VC-type class with envelope MH,QG∪HoverQG∪HwithcH,QG∪HgeanddH,QG∪Hg1. (iv)RandSare real-valued pointwise measurable classes of functions on (R,B(R),PY). (v)Ris a VC-type class with envelope MR,YoverYwithcR,YgeanddR,Yg1, whereMR,Y(y) + pTVR,(−|y|,|y|)fv(1+|y|)for ally∈Y, for some v>0.Sis a VC-type class with envelope MS,Yover YwithcS,YgeanddS,Yg1, whereMS,Y(y)+pTVS,(−|y|,|y|)fv(1+|y|)for ally∈Y, for some v>0. (vi)There exists a constant ksuch that |log2E|+|log2TV|+|log2M| fklog2(n), whereE=max{EG,QG∪H,EH,QG∪H}, TV= max{TVG,QG∪H,TVH,QG∪H}andM= max{MG,QG∪H,MH,QG∪H}. Consider the empirical process An(g,h,r,s) =Mn(g,r)+Mn(h,s), g∈G,r∈R,h∈H,s∈S. Then, on a possibly enlarged probability space, there exists a sequence of mean-zero Gaussian processes (ZA n(g,h,r,s) :g∈G,h∈H,r∈R,s∈S)with almost sure continuous trajectories such that: •E[An(g1,h1,r1,s1)An(g2,h2,r2,s2)] =E[ZA n(g1,h1,r1,s1)ZA n(g2,h2,r2,s2)]holds for all (g1,h1,r1,s1), (g2,h2,r2,s2)∈G×H×R×S, and •E/bracketleftbig ∥An−ZA n∥G×H×R×S/bracketrightbig fCv((dlog(cn))3 2rv v+2n(√ ME)2 v+2+dlog(cn)Mn−v/2 2+v+dlog(cn)Mn−1 2/parenleftBig√ ME rn/parenrightBig2 v+2), whereCis a universal constant, c=cG,QG∪H+cH,QG∪H+cR,Y+cS,Y+k,d=dG,QG∪HdH,QG∪HdR,YdS,Yk, rn= min/braceleftBig(cd 1Md+1TVdE)1/(2d+2) n1/(2d+2),(cd 2 1cd 2 2MTVd 2ELd 2)1/(d+2) n1/(d+2)/bracerightBig , c1=dsup x∈QG∪Hd−1/productdisplay j=1Ãj(∇ϕG∪H(x)),c2= sup x∈QG∪H1 Ãd(∇ϕG∪H(x)). SA-5 Proofs for Section SA-2 SA-5.1 Proof of Lemma SA-2.1 Since/hatwideΓt,xis a finite dimensional matrix, it suffices to show the stated rate of con vergence for each entry. Let vbe a multi-index such that |v| f |2p|. Define gn(À,x) =/parenleftbiggÀ−x h/parenrightbiggv1 hdK/parenleftbiggÀ−x h/parenrightbigg 1(À∈At), À∈X,x∈B. 19 DefineF={gn(·,x) :x∈B}. We will show Fis a VC-type of class. In order to do this, we study the following quantities. Constant Envelope Function: We assume Kis continuous and has compact support, or K= 1(· ∈ [−1,1]d). Hence there exists a constant C1such that for all l∈F, for allx∈B, |l(x)| fC1h−d=F. Diameter of FinL2:supl∈F∥l∥P,2=supx∈B(/integraltext At−x h1 hdy2vK(y)2fX(x+hy)dy)1/2fC2h−d/2for some constant C2. We can take C1large enough so that Ã=C2h−d/2fF=C1h−d. Ratio:For some constant C3, ¶=Ã F=C3√ hd. Covering Numbers: Case 1: When Kis Lipschitz. Letx,x′∈B. Then, sup À∈X|gn(À,x)−gn(À,x′)| f/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenleftbiggÀ1−x1 h/parenrightbiggv1 ···/parenleftbiggÀ−xd h/parenrightbiggvd −/parenleftbiggÀ1−x′ 1 h/parenrightbiggv1 ···/parenleftbiggÀ−x′ d h/parenrightbiggvd/vextendsingle/vextendsingle/vextendsingle/vextendsingleKh(À−x) +/parenleftbiggÀ1−x′ 1 h/parenrightbiggv1 ···/parenleftbiggÀ−x′ d h/parenrightbiggvd |Kh(À−x)−Kh(À−x′)| ≲h−d−1 n∥x−x′∥∞, since we have assumed that Khas compact support and is Lipschitz continuous. Hence for any ε∈(0,1] and for any finitely supported measure Qand metric ∥·∥Q,2based on L2(Q), N(F,∥·∥Q,2,ε∥F∥Q,2)fN(X,∥·∥∞,ε∥F∥Q,2hd+1)(1) ≲/parenleftbiggdiam(X) ε∥F∥Q,2hd+1/parenrightbiggd ≲/parenleftbiggdiam(X) εh/parenrightbiggd , where in (1) we used the fact that ε∥F∥Q,2hd+1≲εh≲1. Hence Fforms a VC-type class, and taking A1= diam(X)/handA2=d, sup QN(F,∥·∥Q,2,ε∥F∥Q,2)≲(A1/ε)A2, ε∈(0,1], where the supremum is over all finite discrete measure. Case 2: When K= 1(· ∈[−1,1]d).Consider hn(À,x) =/parenleftbiggÀ−x h/parenrightbiggv1 hd1(À∈At), À,x∈X, H={hn(·,x) :x∈B}and the constant envelope function H=C4h−|v|−d, for some constant C4only depending on diameter of X. The same argument as before shows that for any discrete measure Q, we have N(H,∥·∥Q,2,ε∥H∥Q,2)fN(X,∥·∥∞,ε∥H∥Q,2hd+|ν|+1)≲/parenleftbiggdiam(X) ε∥H∥Q,2hd+|ν|+1/parenrightbiggd ≲/parenleftbiggdiam(X) εh/parenrightbiggd . 20 The class G={ 1(·−x∈[−1,1]d) :x∈B}has VC dimension no
|
https://arxiv.org/abs/2505.05670v1
|
greater than 2 d(van der Vaart and Wellner , 1996, Example 2.6.1), and by van der Vaart and Wellner (1996, Theorem 2.6.4), for any discrete measure Q, we have N(G,∥·∥Q,2,ε)f2d(4e)2dε−4d,0< εf1. It then follows that for any discrete measure Q, N(F,∥·∥Q,2,ε∥H∥Q,2)≲N(H,∥·∥Q,2,ε/2∥H∥Q,2)+N(G,∥·∥Q,2,ε/2)≲2dh−dε−d+2d(32e)dε−4d. Hence taking A1= (2dh−d+2d(32e)d)h−|v|andA2= 4d, sup QN(F,∥·∥Q,2,ε∥F∥Q,2)≲(A1/ε)A2, ε∈(0,1], the supremum is over all finite discrete measure. Maximal Inequality: Using Corollary 5.1 in Chernozhukov et al. (2014b) for the empirical process on classF, E/bracketleftbigg sup x∈B|En[gn(Xi,x)]−E[gn(Xi,x)]|/bracketrightbigg ≲Ã√n/radicalbig A2log(A1/¶)+∥F∥P,2A2log(A1/¶) n ≲/radicalbigg log(1/h) nhd+log(1/h) nhd, whereA1,A2,Ã,F,¶are all given previously. Assuminglog(h−1) nhd→0 asn→ ∞, we conclude that supx∈B∥/hatwideΓt,x−Γt,x∥≲P/radicalBig log(1/h) nhd. Hence, 1 ≲Pinfx∈B∥/hatwideΓt,x∥≲Psupx∈B∥/hatwideΓt,x∥≲P1. By Weyl’s The- orem,supx∈B|¼min(/hatwideΓt,x)−¼min(Γt,x)| fsupx∈B∥/hatwideΓt,x−Γt,x∥≲P/radicalBig log(1/h) nhd. Therefore, we can lower bound the minimum eigenvalue by inf x∈B¼min(/hatwideΓt,x)ginf x∈B¼min(Γt,x)−sup x∈B|¼min(/hatwideΓt,x)−¼min(Γt,x)|≳P1. It follows that supx∈B∥/hatwideΓ−1 t,x∥≲P1 and hence sup x∈B∥/hatwideΓ−1 t,x−Γ−1 t,x∥ fsup x∈B∥Γ−1 t,x∥∥Γt,x−/hatwideΓt,x∥∥/hatwideΓ−1 t,x∥≲P/radicalbigg log(1/h) nhd. This completes the proof. ■ SA-5.2 Proof of Lemma SA-2.2 We introduce the following notation for an approximation error and an empi rical average: rt(À;x) =µt(À)−/summationdisplay 0f|ω|fpµ(ω) t(x) ω!(À−x)ω, χt,x=En/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)rt(Xi;x)/bracketrightbigg . 21 Since we have assumed µtis (p+1) times continuously differentiable, there exists αx,Xi,t∈Rp+1such that ∥χt,x∥2=/vextenddouble/vextenddouble/vextenddouble1 nn/summationdisplay i=1Rp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)Rp/parenleftbiggXi−x h/parenrightbigg¦ (0¦,α¦ x,Xi,t)¦/vextenddouble/vextenddouble/vextenddouble2 h2(p+1) f/parenleftBig1 nn/summationdisplay i=1/vextenddouble/vextenddouble/vextenddoubleRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)Rp/parenleftbiggXi−x h/parenrightbigg¦/vextenddouble/vextenddouble/vextenddouble2/parenrightBig/parenleftBig1 nn/summationdisplay i=1∥αx,Xi,t∥2/parenrightBig h2(p+1), wheresupx∈Bmaxt∈{0,1}max1fifn∥αx,Xi,t∥≲1. Assumelog(1/h) nhd=o(1), the same argument as the proof of Lemma SA-2.1shows 1 nn/summationdisplay i=1/vextenddouble/vextenddouble/vextenddoubleRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)Rp/parenleftbiggXi−x h/parenrightbigg¦/vextenddouble/vextenddouble/vextenddouble2 ≲P1. It then follows from Lemma SA-2.1that sup x∈B/vextendsingle/vextendsingle/vextendsingleE[/hatwideµ(ν) t(x)|X]−µ(ν) t(x)/vextendsingle/vextendsingle/vextendsingle= sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsinglee¦ 1+νH−1/hatwideΓ−1 t,xχt,x/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲Php+1−|ν|. Now assume further that h=o(1). Since µv(À;x) =|v| v!/integraltext1 0(1−t)|v|−1∂vµt(x+t(À−x))dt, then for all x∈B,À∈X, 1(Kh(À−x)̸= 0)/vextendsingle/vextendsingle/vextendsingle/vextendsingleµv(À;x)−|v| v!∂vµt(x)/vextendsingle/vextendsingle/vextendsingle/vextendsinglef|v| v!sup ∥u−u′∥fh|∂vµt(u)−∂vµt(u′)|=Mn. By Assumption SA–1(iii),∂vµtis uniformly continuous on the compact set X. This implies that when h=o(1),Mn=o(1). Denote /tildewideχt,x=En/bracketleftbigg Rp/parenleftbiggXi−x hn/parenrightbigg Kh(Xi−x) 1(Xi∈At)(/summationdisplay |v|=p+1|v| v!∂vµt(x)(Xi−x)v)/bracketrightbigg , then sup x∈B∥χt,x−/tildewideχt,x∥≲Mnsup x∈B/vextenddouble/vextenddouble/vextenddouble/vextenddoubleEn/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)(/summationdisplay |v|=p+1|v| v!|Xi−x|v)/bracketrightbigg/vextenddouble/vextenddouble/vextenddouble/vextenddouble =oP(hp+1), where in the last equality, we have used the same as in the proof of Lemm aSA-2.1maximal inequality to bound the deviation of the term on the left hand side from its expectati on. Hence sup x∈B/vextendsingle/vextendsingle/vextendsingleE[/hatwideµ(ν) t(x)|X]−µ(ν) t(x)−hp+1−|ν|/hatwideB(ν) t,x/vextendsingle/vextendsingle/vextendsingle= sup x∈B/vextendsingle/vextendsingle/vextendsinglee¦ 1+νH−1/hatwideΓ−1 t,xχt,x−e¦ 1+νH−1/hatwideΓ−1 t,x/tildewideχt,x/vextendsingle/vextendsingle/vextendsingle=oP(hp+1−|ν|). Using Lemma SA-2.1and maximal inequality as in the proof of Lemma SA-2.1, we can show max t∈{0,1}sup x∈B|/hatwideB(ν) t,x−B(ν) t,x|≲P/radicalbigg log(1/h) nhd. 22 Since max t∈{0,1}supx∈B|B(ν) t,x|≲1, the inequality above implies max t∈{0,1}sup x∈B|/hatwideB(ν) t,x|≲P1. This completes the proof. ■ SA-5.3 Proof of Lemma SA-2.3 The proof will be similar to the proof of Lemma SA-2.1. Letvbe a multi-index such that 0 f |v| fp. Denote gn(À,x) =/parenleftbiggÀ−x h/parenrightbiggv Kh/parenleftbiggÀ−x h/parenrightbigg 1(À∈At), À,x∈X. Define the class of functions F={(À,u)∈X×R/ma√sto→gn(À,x) :x∈B}. Consider the following quantities. Envelope Function: SinceKis continuous on its compact support, there exists a constant C1>0 such that|gn(À,x)u| fC1|u| hd∀À,x∈X,u∈R. We define the envelope function F(À,u) =C1h−d|u|,À∈X,u∈R. Moreover, by (v) in Assumption SA–1, denote M= max 1fifnF(Xi,ui), then E[M2]1/2≲h−dE/bracketleftbigg max 1fifn|ui|2/bracketrightbigg1/2 ≲h−dE/bracketleftbigg max 1fifn|ui|2+v/bracketrightbigg1/(2+v) ≲n1/(2+v)h−d. Diameter of FinL2:By (v) in Assumption SA–1, recall we denote ui=Yi−E[Yi|Xi], sup l∈FE[l(Xi,ui)2]1/2fsup À∈XE[u2 i|Xi=À]1/2sup À∈XE[gn(Xi,À)2]1/2fC3h−d/2=Ã. Ratio:¶=à ∥F∥P,2≲hd/2. Covering Numbers: Case 1:Kis Lipschitz. LetQbe a finite distribution on ( X×R,B(X)¹B(R)). Letx,x′∈X. In the proof of Lemma SA-2.1, we have shown supÀ∈Xsupx,x′∈X|gn(À,x)−gn(À,x′)| ∥x−x′∥∞≲h−d−1. Hence ∥gn(Xi,x)ui−gn(Xi,x′)ui∥Q,2f ∥gn(·,x)−gn(·,x′)∥∞∥ui∥Q,2≲h−1∥F∥Q,2∥x−x′∥∞. It
|
https://arxiv.org/abs/2505.05670v1
|
follows that supQN(F,∥·∥Q,2,ϵ∥F∥Q,2)≲(diam(X) ϵh))d, wheresupis over all finite probability distribution on (X×R,B(X)¹B(R)). Denote A1=diam(X) h,A2=d. We have sup QN(F,∥·∥Q,2,ϵ∥F∥Q,2)≲(A1/ϵ)A2, ϵ∈(0,1]. Case 2:Kis the uniform kernel. Consider hn(À,x) =/parenleftbiggÀ−x h/parenrightbiggv1 hd1(À∈At), À,x∈X, andH={(À,u)∈X×R→hn(À,x)u:x∈B}. By similar arguments as Case 1and the proof of LemmaSA-2.1, we can show sup QN(H,∥·∥Q,2,ε∥H∥Q,2)≲/parenleftbiggdiam(X) εh/parenrightbiggd , 23 where the supremum is taken over all finite discrete measures. Tak ingG={ 1(·−x∈[−1,1]d) :x∈B}, the proof of Lemma SA-2.1shows sup QN(G,∥·∥Q,2,ε)f2d(4e)2dε−4d,0< εf1, where the supremum is taken over all finite discrete measures. Tak ingA1= (2dh−d+2d(32e)d)h−|v|and A2= 4d, we have sup QN(F,∥·∥Q,2,ε∥F∥Q,2)≲(A1/ε)A2, ε∈(0,1], the supremum is over all finite discrete measure. Maximal Inequality: By Corollary 5.1 in Chernozhukov et al. (2014b), E/bracketleftBigg sup x∈X/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle1 nn/summationdisplay i=1gn(Xi,x)ui/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/bracketrightBigg ≲Ã√n/radicalbig A2log(A1/¶)+∥M∥P,2A2log(A1/¶) n ≲/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd. SinceQt,xis finite-dimensional, entrywise-convergence implies convergen ce in norm in the same rate. Hence supx∈X∥Qt,x∥≲P/radicalBig log(1/h) nhd+log(1/h) n1+v 2+vhd. By Lemma SA-2.1, sup x∈X/vextendsingle/vextendsingle/vextendsingle/hatwideµ(ν) t(x)−E/bracketleftBig /hatwideµ(ν) t(x)/vextendsingle/vextendsingleX/bracketrightBig −e¦ 1+νH−1Γ−1 t,xQt,x/vextendsingle/vextendsingle/vextendsingle= sup x∈X/vextendsingle/vextendsingle/vextendsinglee¦ 1+νH−1/parenleftBig /hatwideΓ−1 t,x−Γ−1 t,x/parenrightBig Qt,x/vextendsingle/vextendsingle/vextendsingle ≲Ph−|ν|/radicalbigg log(1/h) nhd/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightBigg , sup x∈X/vextendsingle/vextendsingle/vextendsingle/hatwideµ(ν) t(x)−E/bracketleftBig /hatwideµ(ν) t(x)/vextendsingle/vextendsingleX/bracketrightBig/vextendsingle/vextendsingle/vextendsingle≲Ph−|ν|/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightBigg . This completes the proof. ■ SA-5.4 Proof of Lemma SA-2.4 Recall we denote εi=Yi−/summationtext t∈{0,1}1(Xi∈At)/hatwideβt(x)¦Rp(Xi−x), andui=Yi−/summationtext t∈{0,1}1(Xi∈At)µt(Xi). Denote¸i=/summationtext t∈{0,1}1(Xi∈At)(µt(Xi)−/hatwideβt(x)¦Rp(Xi−x)). Then, for all x,y∈B, the difference betwen estimated and true sandwich matrix can be decomposed by /hatwideΣt,x,y−Σt,x,y=M1,x,y+M2,x,y+M3,x,y+M4,x,y 24 where M1,x,y=En/bracketleftBigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−y h/parenrightbigg¦1 hdK/parenleftbiggXi−x h/parenrightbigg K/parenleftbiggXi−y h/parenrightbigg ¸2 i 1(Xi∈At)/bracketrightBigg , M2,x,y= 2En/bracketleftBigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−y h/parenrightbigg¦1 hdK/parenleftbiggXi−x h/parenrightbigg K/parenleftbiggXi−y h/parenrightbigg ¸iui 1(Xi∈At)/bracketrightBigg , M3,x,y=En/bracketleftBigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−y h/parenrightbigg¦1 hdK/parenleftbiggXi−x h/parenrightbigg K/parenleftbiggXi−y h/parenrightbigg (u2 i−Ãt(Xi)2) 1(Xi∈At)/bracketrightBigg , M4,x,y=En/bracketleftBigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−y h/parenrightbigg¦1 hdK/parenleftbiggXi−x h/parenrightbigg K/parenleftbiggXi−y h/parenrightbigg Ãt(Xi)21(Xi∈At)/bracketrightBigg −E/bracketleftBigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−y h/parenrightbigg¦1 hdK/parenleftbiggXi−x h/parenrightbigg K/parenleftbiggXi−y h/parenrightbigg Ãt(Xi)21(Xi∈At)/bracketrightBigg . Letu,vbe multi-indices. Denote gn(Xi;x,y) =1 hd(Xi−x h)u(Xi−y h)vK(Xi−x h)K(Xi−y h) 1(Xi∈At). For notational simplicity, denote in what follows ³n=/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd. First, we present a bound on max1fifn|¸i| 1((Xi−x)/h∈Supp(K)). By Lemma SA-2.2and Lemma SA-2.3, and multi-index νsuch that |ν| fp, sup x∈B|e¦ 1+ν/hatwideµt(x)−e¦ 1+νµt(x)|≲Ph−|ν|(hp+1+³n). SinceKis compactly supported, we have max 1fifn|/summationdisplay t∈{0,1}1(Xi∈At)(/hatwideβt(x)−βt(x))¦Rp(Xi−x) 1((Xi−x)/h∈Supp(K))|≲Php+1+³n. Sinceµtisp+1 times continuously differentiable, max 1fifn|/summationdisplay t∈{0,1}1(Xi∈At)(µt(Xi)−βt(x)¦Rp(Xi−x)) 1((Xi−x)/h∈Supp(K))|≲hp+1. It follows that max 1fifn|¸i| 1((Xi−x)/h∈Supp(K))≲Php+1+³n. Term M 1,x,y. From the proof for Lemma SA-2.1,supx,y∈X|En[gn(Xi;x,y)]−E[gn(Xi;x,y)]|≲P/radicalBig log(1/h) nhd. Moreover, supx,y∈X|E[gn(Xi;x,y)]|≲P1. Hence supx,y∈X|En[gn(Xi;x,y)]|≲P1. So sup x,y∈B/vextendsingle/vextendsingleEn/bracketleftbig gn(Xi;x,y)¸2 i/bracketrightbig/vextendsingle/vextendsinglefmax 1fifn|¸i| 1((Xi−x)/h∈Supp(K))·sup x,y∈X|En[gn(Xi;x,y)]| ≲P/parenleftbig hp+1+³n/parenrightbig2, where we have use Theorem SA-2.5, which does not depend on this lemma, for supx∈B|/hatwideµt(x)−µt(x)|≲P 25 hp+1+³n. Finite dimensionality of M1,x,ythen implies sup x,y∈B∥M1,x,y∥≲P/parenleftbig hp+1+³n/parenrightbig2. Term M 2,x,y. From the proof of Lemma SA-2.3,supx,y∈X|En[gn(Xi;x,y)ui]−E[gn(Xi;x,y)ui]|≲P³n. Moreover, supx,y∈X|E[gn(Xi;x,y)ui]|≲P1. Hence supx,y∈X|En[gn(Xi;x,y)ui]|≲P1. sup x,y∈B|En[gn(Xi;x,y)¸iui]| fsup x,y∈B|/hatwideµt(x)−µt(x)|sup x,y∈BEn[|gn(Xi;x,y)ui|]≲Php+1+³n, implying sup x,y∈B∥M2,x,y∥≲Php+1+³n. Term M 3,x,y. Define ln(·,·;x,y) :X×R→Rby ln(À,ε;x,y) =1 hd/parenleftbiggÀ−x h/parenrightbiggu/parenleftbiggÀ−y h/parenrightbiggv K/parenleftbiggÀ−x h/parenrightbigg K/parenleftbiggÀ−y h/parenrightbigg 1(À∈At)(ε2−Ãt(À)2), and consider L={ln(·,·;x,y) :x,y∈X}. Define L:X×R→RbyL(À,ε) =c hd|ε2−Ãt(À)2|where c=supx,y∈B/vextendsingle/vextendsingle/vextendsingle/parenleftBig À−x h/parenrightBigu/parenleftBig À−y h/parenrightBigv K/parenleftBig À−x h/parenrightBig K/parenleftBig À−y h/parenrightBig/vextendsingle/vextendsingle/vextendsingle. By similar argument as in the proof for Lemma SA-2.3, we can show Lis a VC-type class such that E[ln(Xi,ui;x,y)] = 0,∀x,y∈Xand sup x,y∈XE/bracketleftbig ln(Xi,ε;x,y)2/bracketrightbig1 2≲sup x,y∈BE/bracketleftbig gn(Xi,ui;x,y)2/bracketrightbig1 2sup À∈XV[u2 i|Xi=À]≲h−d/2, E/bracketleftbigg max 1fifnL(Xi,ui)2/bracketrightbigg1 2 ≲h−dE/bracketleftbigg max 1fifnu4 i/bracketrightbigg1/2 ≲h−dE/bracketleftbigg max 1fifnu2+v i/bracketrightbigg2 2+v ≲h−dn2 2+v. Apply Corollary 5.1 in Chernozhukov et al. (2014b), we get sup x,y∈B|En[ln(Xi,ui;x,y)]|≲P/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd,sup
|
https://arxiv.org/abs/2505.05670v1
|
x,y∈B∥M3,x,y∥≲P/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd. Term M 4,x,y. Notice that {gn(·;x,y)Ã2 t(·) :x,y∈B}is a VC-type of class with constant envelope function Ch−dfor some suitable Cand sup x,y∈Bsup À∈X/vextendsingle/vextendsinglegn(À;x,y)Ã2(À)/vextendsingle/vextendsingle≲h−d, sup x,y∈BE/bracketleftbig gn(Xi;x,y)2Ãt(Xi)2/bracketrightbig1 2≲h−d/2. Then, similar to the proof of M1,x,ywe can get sup x,y∈B|En[gn(Xi;x,y)]−E[gn(Xi;x,y)]|≲/radicalbigg log(1/h) nhd,sup x,y∈B∥M4,x,y∥≲/radicalbigg log(1/h) nhd. 26 Putting Together . Combining the the upper bounds of the four terms, we get sup x,y∈B∥/hatwideΣ1,x,y−Σ1,x,y∥≲Php+1+/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd, which implies supx,y∈B∥/hatwideΣ1,x,y∥≲P1. It follows that sup x,y∈B|/hatwideΩ(ν) 1,x,y−Ω(ν) 1,x,y| f1 nhd+2|ν|/parenleftbigg sup x,y∈B∥/hatwideΓ−1 1,x−Γ−1 1,x∥∥/hatwideΣ1,x,y∥∥/hatwideΓ−1 1,y∥+ sup x,y∈B∥Γ−1 1,x∥∥/hatwideΣ1,x,y−Σ1,x,y∥∥/hatwideΓ−1 1,y∥ + sup x,y∈B∥Γ−1 1,x∥∥Σ1,x,y∥∥/hatwideΓ−1 1,y−Γ−1 1,y∥/parenrightbigg f1 nhd+2|ν|/parenleftBigg hp+1+/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg . By Assumption SA–1(iv) and Assumption SA–2(ii),infx∈BΩ(ν) x,x≳P(nhd+2|ν|)−1. Hence infx∈B/hatwideΩ(ν) x,x≳ (nhd+2|ν|)−1. sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/radicalBig /hatwideΩ(ν) x,x−/radicalBig Ω(ν) x,x/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲Psup x∈B/radicalbig nhd+2|ν|/vextendsingle/vextendsingle/vextendsingle/hatwideΩ(ν) x,x−Ω(ν) x,x/vextendsingle/vextendsingle/vextendsingle≲P1√ nhd+2|ν|/parenleftBigg hp+1+/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg , sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleh−|ν| /radicalBig /hatwideΩ(ν) x,x−h−|ν| /radicalBig Ω(ν) x,x/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle=h−|ν|sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/radicalBig /hatwideΩ(ν) x,x−/radicalBig Ω(ν) x,x/radicalBig /hatwideΩ(ν) x,xΩ(ν) x,x/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲P√ nhd/parenleftBigg hp+1+/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg . This completes the proof. ■ SA-5.5 Proof of Theorem SA-2.1 We can use the same argument as in the proof for Lemma SA-2.1,SA-2.2andSA-2.3, withB={x}, to get that under the conditions specified we have |E[/hatwideµ(ν) t(x)|X]−µ(ν) t(x)|≲Php+1−|ν|, and |/hatwideµ(ν) t(x)−E/bracketleftbig /hatwideµ(ν) t(x)/vextendsingle/vextendsingleX/bracketrightbig |≲Ph−|ν|/parenleftbigg1√ nhd+1 n1+v 2+vhd/parenrightbigg . In particular, when applying concentration inequalities, we always apply the singleton class of functions that correspond to the point of evaluation x. Putting together, we get the claimed result. ■ 27 For the proof of Theorem SA-2.2, we define the following matrices: For x,y∈B, Σt,x,y=En/bracketleftbigg Rp/parenleftbiggXi−x h/parenrightbigg Rp/parenleftbiggXi−y h/parenrightbigg¦ hdKh(Xi−x1)Kh(Xi−y)Ã2 t(Xi) 1(Xi∈At)/bracketrightbigg , Ω(ν) t,x,y=1 nhd+2|ν|e¦ 1+νΓ−1 t,xΣt,x,yΓ−1 t,ye1+ν,Ω(ν) x,y=Ω(ν) 0,x,y+Ω(ν) 1,x,y, V(ν) t,x=e¦ 1+νΓ−1 t,xΣt,x,xΓ−1 t,ye1+ν, V(ν) x=V(ν) 0,x+V(ν) 1,x. The following lemma is used for the convergence of Ω(ν) t,x,y. Lemma SA-5.1 (Conditional Variance) Suppose Assumption SA–1(i), (ii), (iv) and Assumption SA–2hold. Iflog(1/h) nhd=o(1), then sup x∈B∥Σt,x,x−Σt,x,x∥≲P/radicalbigg log(1/h) nhd, t∈ {0,1},andsup x∈B/vextendsingle/vextendsingle/vextendsingleV(ν) x−V(ν) x/vextendsingle/vextendsingle/vextendsingle≲P/radicalbigg log(1/h) nhd. Proof of Lemma SA-5.1.The proof will be similar to the proof of Lemma SA-2.1. Letu,vbe multi-indices such that |u| fpand|v| fp. Fixt∈ {0,1}. ForÀ∈Xandx∈B, define gn(À,x) =/parenleftbiggÀ−x h/parenrightbiggu+v1 hdK2/parenleftbiggÀ−x h/parenrightbigg Ã2 t(À) 1(À∈At). Consider the class of functions F={gn(·,x) :x∈B}. Then, by the same maximal inequality argument as in the proof of Lemma SA-2.1, E/bracketleftbigg sup x∈B|En[gn(Xi,x)]−E[gn(Xi,x)]|/bracketrightbigg ≲P/radicalbigg log(1/h) nhd. SinceΣt,x,xis finite dimensional, supx∈B∥Σt,x,x−Σt,x,x∥≲P/radicalBig log(1/h) nhdand hence supx∈B∥Vx−Vx∥≲P/radicalBig log(1/h) nhd. ■ SA-5.6 Proof of Theorem SA-2.2 For conditional bias, by Lemma SA-2.2, sup x∈B/vextendsingle/vextendsingleE/bracketleftbig /hatwideÄ(ν)(x)−Ä(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig2−(hp+1−|ν|/hatwideB(ν) x)2/vextendsingle/vextendsingle fsup x∈B/vextendsingle/vextendsingleE/bracketleftbig /hatwideÄ(ν)(x)−Ä(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig −hp+1−|ν|/hatwideB(ν) x/vextendsingle/vextendsingle·sup x∈B/vextendsingle/vextendsingleE/bracketleftbig /hatwideÄ(ν)(x)−Ä(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig +hp+1−|ν|/hatwideB(ν) x/vextendsingle/vextendsingle =oP(hp+1−|ν|). Since we know supx∈B|/hatwideB(ν) t,x−B(ν) t,x|≲P/radicalBig log(1/h) nhdfrom Lemma SA-2.2, sup x∈B/vextendsingle/vextendsingleE/bracketleftbig /hatwideÄ(ν)(x)−Ä(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig2−(hp+1−|ν|B(ν) x)2/vextendsingle/vextendsingle=oP(hp+1−|ν|). 28 For conditional variance, by Lemma SA-5.1, sup x∈B/vextendsingle/vextendsingleV/bracketleftbig /hatwideÄ(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig −(nhd+2|ν|)−1V(ν) x/vextendsingle/vextendsingle= sup x∈B/vextendsingle/vextendsingle(nhd+2|ν|)−1V(ν) x−(nhd+2|ν|)−1V(ν) x/vextendsingle/vextendsingle=oP((nhd+2|ν|)−1). Since (nhd+2|ν|)−1supx∈B|V(ν) x−/hatwideV(ν) x|= supx∈B|Ω(ν) x,x−/hatwideΩ(ν) x,x|=oP((nhd+2|ν|)−1) from Lemma SA-2.4, sup x∈B/vextendsingle/vextendsingleV/bracketleftbig /hatwideÄ(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig −(nhd+2|ν|)−1/hatwideV(ν) x/vextendsingle/vextendsingle= sup x∈B/vextendsingle/vextendsingle(nhd+2|ν|)−1V(ν) x−(nhd+2|ν|)−1/hatwideV(ν) x/vextendsingle/vextendsingle=oP((nhd+2|ν|)−1). Putting together we get the two MSE results. And /vextendsingle/vextendsingleIMSEν−/integraldisplay B/bracketleftbig (hp+1−|ν|B(ν) x)2+(nhd+2|ν|)−1V(ν) x/bracketrightbig É(x)dHd−1(x)/vextendsingle/vextendsingle f/integraldisplay BÉ(x)dHd−1(x)·sup x∈B/vextendsingle/vextendsingleMSEν(x)−(hp+1−|ν|B(ν) x)2−(nhd+2|ν|)−1V(ν) x/vextendsingle/vextendsingle =oP/parenleftbig h2p+2−2|ν|+(nhd+2|ν|)−1/parenrightbig . Similarly, we can get /vextendsingle/vextendsingleIMSEν−/integraldisplay B/bracketleftbig (hp+1−|ν|/hatwideB(ν) x)2+(nhd+2|ν|)−1/hatwideV(ν) x/bracketrightbig É(x)dHd−1(x)/vextendsingle/vextendsingle=oP/parenleftbig h2p+2−2|ν|+(nhd+2|ν|)−1/parenrightbig . ■ SA-5.7 Proof of Theorem SA-2.3 Consider T(ν)(x) = (Ω(ν) x,x)−1/2e¦ 1+νH−1Γ−1 t,xQt,xandui=Yi−/summationtext t∈{0,1} 1(Xi∈At)µt(Xi). Define Zi=/summationdisplay t∈{0,1}n−1(Ω(ν) x,x)−1/2e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x)
|
https://arxiv.org/abs/2505.05670v1
|
1(Xi∈At)ui. Then,T(ν)(x) =/summationtextn i=1ZiandE[Zi] = 0 and V[Zi] =n−1. By Berry-Essen Theorem, sup u∈R/vextendsingle/vextendsingle/vextendsingleP/parenleftBig T(ν)(x)fu/parenrightBig −Φ(u)/vextendsingle/vextendsingle/vextendsingle≲B−1 nn/summationdisplay i=1E[|Zi|3], whereBn=/summationtextn i=1V[Zi] = 1. Moreover, n/summationdisplay i=1E[|Zi|3] =n−3(Ω(ν) x,x)−3/2n/summationdisplay i=1E/bracketleftBig/vextendsingle/vextendsingle/vextendsingle/summationdisplay t∈{0,1}e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)ui/vextendsingle/vextendsingle/vextendsingle3/bracketrightBig ≲n−3(Ω(ν) x,x)−3/2n/summationdisplay i=1E/bracketleftBig/vextendsingle/vextendsingle/vextendsingle/summationdisplay t∈{0,1}e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)/vextendsingle/vextendsingle/vextendsingle3/bracketrightBig ≲n−2h−|ν|−d(Ω(ν) x,x)−3/2E/bracketleftBig/vextendsingle/vextendsingle/vextendsingle/summationdisplay t∈{0,1}e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)/vextendsingle/vextendsingle/vextendsingle2/bracketrightBig ≲n−1h−|ν|−d(Ω(ν) x,x)−1/2 ≲(nhd)−1/2, 29 where in the second line we used Assumption SA–1(v), in the third line we used /vextendsingle/vextendsingle/vextendsingle/vextendsingle/summationdisplay t∈{0,1}e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggXi−x h/parenrightbigg Kh(Xi−x) 1(Xi∈At)/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲h−|ν|−d, in the fourth line we used the definition of Ω(ν) x,x. Hence the Berry-Esseen inequality gives sn= sup u∈R/vextendsingle/vextendsingle/vextendsingleP/parenleftBig T(ν)(x)fu/parenrightBig −Φ(u)/vextendsingle/vextendsingle/vextendsingle=o(1). (SA-5.1) Although Lemma SA-2.1to Lemma SA-2.4provides convergence results uniformly in x, for pointwise result with fixx∈B, we can replace the class of functions in the proof by one singleton corresponding to x, and get: Ifhp+1√ nhd→0 andnv 2+vhd→0, then /vextendsingle/vextendsingle/vextendsingle/hatwideT(ν)(x)−T(ν)(x)/vextendsingle/vextendsingle/vextendsingle≲Prn, (SA-5.2) wherern=hp+1√ nhd+1/√ nhd+1/(nv 2+vhd). TakeZto be a standard Gaussian random variable and using anti-concentration arguments, for any t∈R, P(/hatwideT(ν)(x)ft) =P(/hatwideT(ν)(x)ft,|/hatwideT(ν)(x)−T(ν)(x)| frn)+P(/hatwideT(ν)(x)ft,|/hatwideT(ν)(x)−T(ν)(x)| grn) fP(T(ν)(x)ft+rn)+P(|/hatwideT(ν)(x)−T(ν)(x)| grn) fP(Zft+rn)+P(|/hatwideT(ν)(x)−T(ν)(x)| grn)+sn = Φ(t)+sup t∈R|P(tfZft+rn)|+P(|/hatwideT(ν)(x)−T(ν)(x)| grn)+sn, where in the third line we used Equation (SA-5.1), in the fourth line we used Equation (SA-5.2)and P(tfZft+rn) =o(1). Similarly, for any t∈R, P(/hatwideT(ν)(x)ft) =P(/hatwideT(ν)(x)ft,|/hatwideT(ν)(x)−T(ν)(x)| frn)+P(/hatwideT(ν)(x)ft,|/hatwideT(ν)(x)−T(ν)(x)| grn) gP(T(ν)(x)ft−rn)−P(|/hatwideT(ν)(x)−T(ν)(x)| grn) gP(Zft−rn)−P(|/hatwideT(ν)(x)−T(ν)(x)| grn)−sn = Φ(t)−sup t∈R|P(t−rnfZft)|−P(|/hatwideT(ν)(x)−T(ν)(x)| grn)−sn. It follows that sup t∈R/vextendsingle/vextendsingle/vextendsingleP(/hatwideT(ν)(x)ft)−Φ(t)/vextendsingle/vextendsingle/vextendsinglefsup t∈R|P(t−rnfZft+rn)|+P(|/hatwideT(ν)(x)−T(ν)(x)| grn)+sn=o(1). ■ SA-5.8 Proof of Theorem SA-2.4 The proof follows directly from Theorem SA-2.3. ■ 30 SA-5.9 Proof of Theorem SA-2.5 The result follows from Lemma SA-2.2and Lemma SA-2.3. ■ SA-5.10 Proof of Theorem SA-2.6 The feasible t-statistic can be decomposed as follows: /hatwideT(ν)(x) =/hatwideÄ(ν)(x)−Ä(ν)(x)/radicalBig /hatwideΩ(ν) x,x=T(ν)(x)+G(ν) 1(x)+G(ν) 2(x),x∈B, where G(ν) 1(x) =/parenleftBig E/bracketleftbig /hatwideÄ(ν)(x)/vextendsingle/vextendsingleX/bracketrightbig −Ä(ν)(x)/parenrightBig (/hatwideΩ(ν) x,x)−1/2, G(ν) 2(x) =e¦ 1+νH−1/bracketleftbigg/parenleftBig /hatwideΓ−1 1,xQ1,x−/hatwideΓ−1 0,xQ0,x/parenrightBig (/hatwideΩ(ν) x,x)−1 2−/parenleftbig Γ−1 1,xQ1,x−Γ−1 0,xQ0,x/parenrightbig (Ω(ν) x,x)−1 2/bracketrightbigg . By Lemma SA-2.2and Lemma SA-2.4, sup x∈B/vextendsingle/vextendsingle/vextendsingleG(ν) 1(x)/vextendsingle/vextendsingle/vextendsingle≲Php+1−|ν|(nhd+2|ν|)1/2≲hp+1√ nhd. By Lemma SA-2.1, Lemma SA-2.3and Lemma SA-2.4, fort∈ {0,1}we have sup x∈B|e¦ 1+νH−1/bracketleftBig /hatwideΓ−1 t,x−Γ−1 t,x/bracketrightBig Qt,x(/hatwideΩ(ν) x,x)−1/2|≲/radicalbig logn/parenleftbigg/radicalbigg logn nhd+logn n1+v 2+vhd/parenrightbigg By Lemma SA-2.1, Lemma SA-2.3and Lemma SA-2.4, fort∈ {0,1}we have sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsinglee¦ 1+νH−1Γ−1 t,xQt,x/bracketleftbigg (/hatwideΩ(ν) x,x)−1/2−(Ω(ν) x,x)−1/2/bracketrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle ≲Ph−|ν|·/parenleftbigg/radicalbigg logn nhd+logn n1+v 2+vhd/parenrightbigg ·√ nhd+2ν/parenleftbigg/radicalbigg logn nhd+logn nv 2+vhd+hp+1/parenrightbigg ≲logn√ nhd+(logn)3/2 nv 2+vhd. Combining the previous two displays, we get sup x∈B|G(ν) 2(x)|≲P/radicalbig logn/parenleftbigg/radicalbigg logn nhd+logn nv 2+vhd/parenrightbigg . It follows from the decomposition of /hatwideT(ν)(x)−T(ν)(x) that sup x∈B|/hatwideT(ν)(x)−T(ν)(x)|≲Php+1√ nhd+/radicalbig logn/parenleftbigg/radicalbigg logn nhd+logn nv 2+vhd/parenrightbigg . ■ The following lemma is used in the proof of Theorem SA-3.5. Lemma SA-5.2 (VC Class to VC2 Class) 31 Assume Fis a VC class on a measure space (X,B)in the sense that there exists an envelope function F and positive constants c(F),d(F)such that for all 0< ε <1, sup Q∈A(X)N(F,∥·∥Q,1,ε∥F∥Q,1)fc(F)ε−d(F). Then,Fis also VC2 in the sense that for all 0< ε <1, sup Q∈A(X)N(F,∥·∥Q,2,ε∥F∥Q,2)fc(F)(ε2/2)−d(F). Proof of Lemma SA-5.2.LetQbe a finite discrete probability measure. Let f,g∈F. Then /integraldisplay |f−g|2dQf2/integraldisplay |f−g||F|dQ. Suppose Qis supported on {c1,...,c p}. Define another probability measure ˜Q(ck) =F(ck)Q(ck)/∥F∥Q,1. Then, /integraldisplay |f−g|2dQf2∥F∥Q,1/integraldisplay |f−g|d˜Q f2∥F∥Q,1∥f−g∥˜Q,1. Hence if we take an ε2/2-net in ( F,∥·∥˜Q,1) with cardinality no greater than c(F)ε−d(F), then for any f∈F, there exists a g∈Fsuch that ∥h−h1∥˜Q,1fε2/2∥F∥˜Q,1, and hence ∥f−g∥2 Q,2f2ε2/2∥F∥Q,1∥F∥˜Q,1fε2∥F∥2 Q,2. Hence supQ∈A(X)N(F,∥·∥Q,2,ε∥F∥Q,2)fc(F)(ε2/2)−d(F). ■
|
https://arxiv.org/abs/2505.05670v1
|
SA-5.11 Proof of Theorem SA-2.7 First, we consider the class of functions Ft={K(ν) t(·;x) :x∈B},t∈ {0,1}. W.l.o.g., we can assume X= [0,1]d, andQFt=PXis a valid surrogate measure for PXwith respect to Ft, andϕFt=Idis a valid normalizing transformation (as in Lemma SA-4.1). This implies the constants c1andc2from Lemma SA-4.1 are all 1. I. Properties of Ft Envelope Function: By Lemma SA-2.1and Lemma SA-2.4and the fact that Supp( K) is compact, sup x∈Bsup À∈X/vextendsingle/vextendsingle/vextendsingleK(ν) t(À;x)/vextendsingle/vextendsingle/vextendsingle≲1√nhd+|ν|sup x∈B/parenleftbig ∥Γ−1 1,x∥+∥Γ−1 0,x∥/parenrightbig sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenleftBig Ω(ν) x,x/parenrightBig−1 2/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲h−d/2. Hence there exists a constant C1>0 such that MFt=C1h−d/2is a constant envelope function of G. L1Bound: EFt= sup x∈BE/bracketleftBig |K(ν) t(Xi;x)|/bracketrightBig ≲hd/2. 32 Uniform Variation: Case 1: Suppose Kis Lipschitz. By (iv) in Assumption SA–1and Assumption SA–2, LFt= sup x∈Bsup À,À′∈X|K(ν) t(À;x)−K(ν) t(À′;x)| ∥À−À′∥∞≲h−d/2−1. Each entry of Γt,xandΣt,xare of the form/integraltext/parenleftBig À−x h/parenrightBigu+v Kh(À−x) 1(À∈At)f(À)dÀand/integraltext/parenleftBig À−x h/parenrightBigu+v Kh(À− x)Ãt(À)21(À∈At)dÀfor some multi-index uandv, respectively. Hence by Assumption SA–2, each entry of Γt,xandΣt,xareh−1-Lipschitz in x. Hence there exists a constant C2such that for all x,x′∈B, ∥Γ−1 t,x−Γ−1 t,x′∥ f ∥Γ−1 t,x∥∥Γt,x−Γt,x∥∥Γ−1 t,x′∥ fC2h−1∥x−x′∥. Also by definition of Ω t,xand (iv) in Assumption SA–2, there exists C3such that for all x,x′∈X, /vextendsingle/vextendsingle/vextendsingleΩ(ν) t,x−Ω(ν) t,x′/vextendsingle/vextendsingle/vextendsinglefC3(nhd+2|ν|+1)−1∥x−x′∥∞, /vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenleftBig Ω(ν) t,x/parenrightBig−1/2 −/parenleftBig Ω(ν) t,x′/parenrightBig−1/2/vextendsingle/vextendsingle/vextendsingle/vextendsinglef1 2inf z∈X/parenleftBig Ω(ν) t,z/parenrightBig−3/2/vextendsingle/vextendsingle/vextendsingleΩ(ν) t,x−Ω(ν) t,x′/vextendsingle/vextendsingle/vextendsinglef1 2C3h−1(nhd+2|ν|)1/2∥x−x′∥∞. It then follows that we have a uniform Lipschitz property with resp ect to the point of evaluation: lFt= sup À∈Xsup x,x′∈B/vextendsingle/vextendsingle/vextendsingleK(ν) t(À;x)−K(ν) t(À;x′)/vextendsingle/vextendsingle/vextendsingle ∥x−x′∥∞≲h−d/2−1. Letx∈B. Then, K(ν) t(·;x) is supported on x+c[−h,h]d. Then, TVFt≲m/parenleftbig c[−h,h]d/parenrightbig LFt≲hd/2−1 Case 2: Suppose K= 1(· ∈[−1,1]d).Consider ˜K(ν) t(u;x) =n−1/2(Ω(ν) x,x)−1/2e¦ 1+νH−1Γ−1 t,xRp/parenleftbiggu−x h/parenrightbigg h−d, u∈X,t∈ {0,1}. Then, K(ν)(u;x) =˜K(ν)(u;x) 1(u−x∈[−1,1]d) for allu∈X,x∈B. Consider ˜Ft={˜K(ν)(·;x) :x∈B}, t∈ {0,1}. Then, the argument above implies TV˜Ft≲m/parenleftbig c[−h,h]d/parenrightbig LFt≲hd/2−1. Consider L={ 1((·−x)/h∈[−1,1]d) :x∈B}. Then, using a product rule, we have TVFtfTV˜FtML+M˜FtTVL≲hd/2−1·1+h−d/2hd−1≲hd/2−1. VC-type Class: Case 1: Suppose Kis Lipschitz. We will use Cattaneo et al. (2024, Lemma 7). To make the notation consistent, define fx(·) =1/radicalBig nΩ(ν) x,xe¦ 1+νH−1Γ−1 tRp(·)K(·),x∈B, 33 andH={gx/parenleftbig·−x h/parenrightbig :x∈B}. Notice that fx/parenleftbig·−x h/parenrightbig =hd1/radicalBig nΩ(ν) x,xe¦ 1+νH−1Γ−1Rp/parenleftbig·−x h/parenrightbig Kh(·−x). Then, the following conditions for Lemma 7 in Cattaneo et al. (2024) hold: (i) boundedness sup zsup z′|fz(z′)| fc, (ii) compact support supp(fz(·))¦[−c,c]d,∀z∈X, (iii) Lipschitz continuity sup z|fz(z′)−fz(z′′)| fc|z′−z′′| sup z|fz′(z)−fz′′(z)| fch−1|z′−z′′|. Then, by Cattaneo et al. (2024, Lemma 7), there exists a constant c′only depending on canddthat for any 0fεf1, sup Q∈A(X)N/parenleftbig H,∥·∥Q,1,(2c+1)d+1ε/parenrightbig fc′ε−d−1+1, whereA(X) denotes the collections of all finite discrete measures on X= [0,1]d. It then follows from LemmaSA-5.2that with the constant envelope function MFt=h−d/2, for any 0 fεf1, sup Q∈A(X)N/parenleftbig Ft,∥·∥Q,2,(2c+1)d+1εMFt/parenrightbig fc′ε−d−1+1. Case 2: Suppose K= 1(· ∈[−1,1]d).Recall˜FtandLdefined in the Uniform Variation section. The same argument as before shows sup Q∈A(X)N/parenleftBig ˜Ft,∥·∥Q,2,(2c+1)d+1εM˜Ft/parenrightBig fc′ε−d−1+1, ε∈(0,1], where˜Ft=h−d/2. Byvan der Vaart and Wellner (1996, Example 2.6.1), the class L={ 1((· −x)/h∈ [−1,1]d) :x∈B}has VC dimension no greater than 2 d, and by van der Vaart and Wellner (1996, Theorem 2.6.4), sup Q∈A(X)N(L,∥·∥Q,2,ε)f2d(4e)2dε−4d,0< εf1. Putting together, we have sup Q∈A(X)N(Ft,∥·∥Q,2,εC1M˜Ft)fC2ε−4d, whereC1,C2are constants only depending on d. II. Properties of G Recall for each x∈B, gx(u) = 1A1(u)K(ν) t(u;x)− 1A0(u)K(ν) t(u;x),u∈X, 34 andG={gx:x∈B}. Hence MG≲h−d/2,EG≲hd/2,sup QN(G,∥·∥Q,2,ε(2c+1)d+1MG)f2c′ε−d−1+2. Total Variation: Observe that 1A1(u)K(ν)
|
https://arxiv.org/abs/2505.05670v1
|
t(u;x)̸= 0 implies Et,x=u∈ {y∈At: (y−x)/h∈Supp(K)}, and 1(u∈At)K(ν) t(u;x) = 1(u∈Et,x)K(ν) t(u;x),∀u∈X. By the assumption that the De Giorgi perimeter of Et,xsatisfies L(Et,x)fChd−1and using TV{gh}f M{g}TV{h}+M{h}TV{g}, we have TVG= sup x∈BTV{gx}fsup x∈B/summationdisplay t∈{0,1}TV{ 1AtK(ν) t(·;x)}fsup x∈B/summationdisplay t∈{0,1}TV{K(ν) t(·;x)}+MFtTV{ 1Et,x}≲hd/2−1. Then, by Lemma SA-4.1, on a possibly enlarged probability space, there exists a mean-zero G aussian process Z(ν)with the same covariance structure such that E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−Z(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg ≲(logn)3 2/parenleftbigg1 nhd/parenrightbigg1 d+2·v v+2 +log(n)/parenleftbigg1 nv 2+vhd/parenrightbigg1 2 . ■ To build up the proof for confidence bands, we need the following lem mas. Lemma SA-5.3 (Distance Between Infeasible Gaussian and Bahadur Repr esentation) Suppose the conditions of Theorem SA-2.7hold. Then, for any multi-index |ν| fp, we have sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲/bracketleftBigg (logn)3 2/parenleftbigg1 nhd/parenrightbigg1 d+2·v v+2 +log(n)/radicalbigg 1 nv v+2hd/bracketrightBigg1/2 . Proof of Lemma SA-5.3.DenoteRn= (logn)3 2/parenleftbig1 nhd/parenrightbig1 d+2·v v+2+log(n)/radicalBig 1 nv 2+vhd. Let³nto be determined. For anyu >0, P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefsup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−Z(ν)(x)/vextendsingle/vextendsingle/vextendsingle+u/parenrightbigg fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu+³n/parenrightbigg +P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)−T(ν)(x)/vextendsingle/vextendsingle/vextendsingle> ³n/parenrightbigg fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg +4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg +P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)−T(ν)(x)/vextendsingle/vextendsingle/vextendsingle> ³n/parenrightbigg fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg +4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg +CRn ³n, where in the fourth line we have used the Gaussian Anti-concentrati on Inequality in ( Chernozhukov et al. , 2014a, Theorem 2.1), and in the last line we have used the tail bound in Theore mSA-2.7. Similarly, for any 35 u >0, we have the lower bound P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu−sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−Z(ν)(x)/vextendsingle/vextendsingle/vextendsingle/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu−³n/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)−T(ν)(x)/vextendsingle/vextendsingle/vextendsingle> ³n/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)−T(ν)(x)/vextendsingle/vextendsingle/vextendsingle> ³n/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg −CRn ³n. Notice that Z(ν)(x),x∈Bis a mean-zero Gaussian process such that d/parenleftBig Z(ν)(x),Z(ν)(y)/parenrightBig =E/bracketleftbigg/parenleftBig Z(ν)(x)−Z(ν)(y)/parenrightBig2/bracketrightbigg1 2 =E/bracketleftbigg/parenleftBig T(ν)(x)−G(ν) 0(y)/parenrightBig2/bracketrightbigg1 2 =E/bracketleftBig (K(Xi,x)−K(Xi,y))2Ã(Xi)2/bracketrightBig1 2fC′ln,2∥x−y∥∞, sup x∈Bd(Z(ν)(x),Z(ν)(x)) = sup x∈BE/bracketleftbig K(Xi,x)2Ã2(Xi)/bracketrightbig ≲1. whereC′is a constant and ln,2≍h−1 n. Then, by Corollary 2.2.8 in van der Vaart and Wellner (1996), we have E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg fE[|Zn(x0)|]+/integraldisplay2supx∈Bd(Z(ν)(x),Z(ν)(x)) 0/radicalBigg dlog/parenleftbiggC′′ln,2 ε/parenrightbigg ≲1. Hence by choosing ³∗ n≍√Rn, we have sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲/radicalbig Rn. ■ Lemma SA-5.4 (Distance Between Bahadur Representation and t-statis tics) Suppose the conditions in Theorem SA-2.7hold. Then sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/hatwideT(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle=o(1). For notational simplicity, define rnand³nto be sequences such that rn= (logn)3 2/parenleftbigg1 nhd/parenrightbigg1 d+2·v v+2 +/radicalBigg (logn)2 nv v+2hd 1/2 , ³nj/radicalbig log(1/h)/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg +hp+1 n/radicalBig nhdn. 36 Then, supx∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−/hatwideT(x)/vextendsingle/vextendsingle/vextendsingle=oP(³n). Hence for any u >0, P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideT(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu+³n/parenrightbigg +P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−/hatwideT(x)/vextendsingle/vextendsingle/vextendsingleg³n/parenrightbigg fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu+³n/parenrightbigg +rn+o(1) fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg +4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg +rn+o(1) fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg +4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg +2rn+o(1), where in the third line we have used Lemma SA-5.3andsupx∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−/hatwideT(x)/vextendsingle/vextendsingle/vextendsingle=oP(³n), in the fourth line we use the ( Chernozhukov et al. ,2014a, Theorem 2.1), and in the last line we have used Lemma SA-5.3 again. Similarly, P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideT(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu−³n/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)−/hatwideT(x)/vextendsingle/vextendsingle/vextendsingleg³n/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu−³n/parenrightbigg −rn+o(1) gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg −rn+o(1) gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/bracketrightbigg +1/parenrightbigg −2rn+o(1). From the proof of Lemma SA-5.3,E/bracketleftbig supx∈B/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/bracketrightbig ≲1. Hence under the rate restrictions
|
https://arxiv.org/abs/2505.05670v1
|
in this lemma, sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/hatwideT(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleT(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle=o(1). ■ Lemma SA-5.5 (Distance Between Feasible Gaussian and Infeasible Gau ssian) Suppose the conditions for Theorem SA-2.7hold. Then, for any multi-index |ν| fp, sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/vextendsingle/vextendsingle/vextendsingle/vextendsingleWn/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲Plogn/parenleftBigg/radicalbigg logn nhd+logn nv 2+vhd+hp+1/parenrightBigg1 2 . Proof of Lemma SA-5.5.First, using Lemma SA-2.4, we provide an upper bound between covariance 37 functions of the feasible Gaussian process and the infeasible Gaussi an process. sup x,y∈X/vextendsingle/vextendsingle/vextendsingleΠx,y−/hatwideΠx,y/vextendsingle/vextendsingle/vextendsingle = sup x,y∈X/vextendsingle/vextendsingle/vextendsingle/vextendsingleΩx,y//radicalbig Ωx,xΩy−/hatwideΩx1,x2//radicalBig /hatwideΩx,x/hatwideΩy/vextendsingle/vextendsingle/vextendsingle/vextendsingle = sup x,y∈X/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenleftBig Ωx,y−/hatwideΩx,y/parenrightBig //radicalbig Ωx,xΩy+/hatwideΩx,y/radicalBig /hatwideΩx,x/hatwideΩy /radicalBigg /hatwideΩx,x/hatwideΩy Ωx,xΩy−1 /vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle From Lemma SA-2.4and the fact that |√x−√y| f(x'y)−1/2|x−y|/2 forx,y >0, sup x,y∈X/vextendsingle/vextendsingle/vextendsingle/vextendsingle/parenleftBig /hatwideΩx,x/hatwideΩy/parenrightBig1/2 −(Ωx,xΩy)1/2/vextendsingle/vextendsingle/vextendsingle/vextendsingle (Ωx,xΩy)1/2≲supx,y∈X/vextendsingle/vextendsingle/vextendsingle/hatwideΩx,x/hatwideΩy−Ωx,xΩy/vextendsingle/vextendsingle/vextendsingle infx,y/hatwideΩx,x/hatwideΩy'infx,yΩx,xΩy≲Php+1+/radicalbigg logn nhd+logn nv 2+vhd, sup x,y∈X/vextendsingle/vextendsingle/vextendsingleΩx1,x2−/hatwideΩx1,x2/vextendsingle/vextendsingle/vextendsingle /radicalbig Ωx,xΩy≲Php+1+/radicalbigg logn nhd+logn nv 2+vhd. For simplicity, denote an=/radicalBig logn nhd+logn nv 2+vhd. Then, it follows that sup x,y∈X/vextendsingle/vextendsingle/vextendsingleΠx,y−/hatwideΠx,y/vextendsingle/vextendsingle/vextendsingle≲Php+1+an. Then, we bound the Kolmogorov-Smirnov distance between the maximum ofZnand/hatwideZ(ν)on a¶n-net of X, denoted by X¶n, i.e. for all x∈B, there exists z∈X¶nsuch that ∥x−z∥∞f¶n. SinceXis compact, we can assume M:=Card(X¶n)≲¶−d n. Denote Z¶nnand/hatwideZ¶nnto the process Znand/hatwideZ(ν)restricted on X¶n, respectively. Then, by the Gaussian Comparison Inequality Theorem 2.1 from Chernozhuokov et al. (2022), sup y∈RM/vextendsingle/vextendsingle/vextendsingleP/parenleftbig Z¶n nfy/parenrightbig −P/parenleftBig /hatwideZ¶n nfy/vextendsingle/vextendsingle/vextendsingleX/parenrightBig/vextendsingle/vextendsingle/vextendsingle≲logM/parenleftbigg sup x,y∈X/vextendsingle/vextendsingle/vextendsingleΠx,y−/hatwideΠx,y/vextendsingle/vextendsingle/vextendsingle/parenrightbigg1 2 ≲PlogM/parenleftbig an+hp+1 n/parenrightbig1 2. Consequently, sup x∈R/vextendsingle/vextendsingle/vextendsingleP/parenleftbig ∥Z¶n n∥∞fx/parenrightbig −P/parenleftBig ∥/hatwideZ¶n n∥∞fx/vextendsingle/vextendsingle/vextendsingleX/parenrightBig/vextendsingle/vextendsingle/vextendsinglefsup x∈R/vextendsingle/vextendsingle/vextendsingleP/parenleftbig −x1fZ¶n nfx1/parenrightbig −P/parenleftBig −x1f/hatwideZ¶n nfx1/vextendsingle/vextendsingle/vextendsingleX/parenrightBig/vextendsingle/vextendsingle/vextendsingle ≲PlogM/parenleftbig an+hp+1 n/parenrightbig1 2=RM. Then, we bound the Kolmogorov-Smirnov distance on the whole Xwith the help of some ³n>0 to be determined. For simplicity, denote Φ¶n(³n) =P/parenleftBigg sup ∥x−y∥∞f¶n/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)−Z(ν)(y)/vextendsingle/vextendsingle/vextendsingleg³n/parenrightBigg , /hatwideΦ¶n(³n) =P/parenleftBigg sup ∥x−y∥∞f¶n/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)−/hatwideZ(ν)(y)/vextendsingle/vextendsingle/vextendsingleg³n/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/parenrightBigg , 38 then for all t >0, P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft/parenrightbigg fP/parenleftBigg sup x∈Bδn/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft+³n/parenrightBigg +Φ¶n(³n) fP/parenleftBigg sup x∈Bδn/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft+³n/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/parenrightBigg +Φ¶n(³n)+RM fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft+³n/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/parenrightbigg +Φ¶n(³n)+/hatwideΦ¶n(³n)+RM fP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/parenrightbigg +4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/bracketrightbigg +1/parenrightbigg +Φ¶n(³n)+/hatwideΦ¶n(³n)+RM. Similary, we get for all t >0, P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft/parenrightbigg gP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingleft/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/parenrightbigg −4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/bracketrightbigg +1/parenrightbigg −Φ¶n(³n)−/hatwideΦ¶n(³n)−RM. Heuristically, RMdepends on ¶nthrough logM≍log(¶−d n). By choosing ¶n=n−sfor large enough s, the RMterm will dominates the terms Φ ¶n(³n) and/hatwideΦ¶n(³n). Precisely, for any ¶, sup ∥x−y∥∞f¶E/bracketleftbigg/parenleftBig /hatwideZ(ν)(x)−/hatwideZ(ν)(y)/parenrightBig2/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/bracketrightbigg = sup ∥x−y∥∞f¶/parenleftBig /hatwideΩx,x/hatwideΩy/parenrightBig−1 2/parenleftbigg1 nhdn/parenrightbigg2n/summationdisplay i=1/hatwideεi21(Xi∈A1)· /parenleftbigg eT 1(/hatwideΓ1,x)−1Rp/parenleftbiggXi−x hn/parenrightbigg K/parenleftbiggXi−x hn/parenrightbigg −eT 1(/hatwideΓ1,x)−1Rp/parenleftbiggXi−y hn/parenrightbigg K/parenleftbiggXi−y hn/parenrightbigg/parenrightbigg2 + sup ∥x−y∥∞f¶/parenleftBig /hatwideΩx,x/hatwideΩy/parenrightBig−1 2/parenleftbigg1 nhdn/parenrightbigg2n/summationdisplay i=1/hatwideεi21(Xi∈A0)· /parenleftbigg eT 1(/hatwideΓ0,x)−1Rp/parenleftbiggXi−x hn/parenrightbigg K/parenleftbiggXi−x hn/parenrightbigg −eT 1(/hatwideΓ0,x)−1Rp/parenleftbiggXi−y hn/parenrightbigg K/parenleftbiggXi−y hn/parenrightbigg/parenrightbigg2 ≲Ph−d−2 n¶2, where in the last line we have used the scale of covariance matrices f rom Lemma SA-2.4, the scale of Gram matrices from Lemma SA-2.1, and the almost sure bound on the Lipschitz constant from the proof of Theorem SA-2.7andC >0 is a constant. Similarly, for any ¶ >0, sup ∥x−y∥∞f¶E/bracketleftbigg/parenleftBig Z(ν)(x)−Z(ν)(y)/parenrightBig2/bracketrightbigg = sup ∥x−y∥∞f¶E/bracketleftBig (K(Xi,x)−K(Xi,y))2ε2 i/bracketrightBig fC′h−2 n¶2, 39 Then, by Corollary 2.2.5 from van der Vaart and Wellner (1996), E/bracketleftBigg sup ∥x−y∥∞f¶n/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)−/hatwideZ(ν)(y)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/bracketrightBigg ≲P/integraldisplayCh−d/2−1 n¶n 0/radicalBigg dlog/parenleftbigg1 εhd/2+1 n/parenrightbigg dε≲/radicalbig lognh−d/2−1 n¶n, E/bracketleftBigg sup ∥x−y∥∞f¶n/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)−Z(ν)(y)/vextendsingle/vextendsingle/vextendsingle/bracketrightBigg ≲/integraldisplayCh−1 n¶n 0/radicalBigg dlog/parenleftbigg1 εhn/parenrightbigg dε≲/radicalbig lognh−1 n¶n. Also using the fact that E/bracketleftBig supx∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/bracketrightBig ≲1, by choosing ³∗ n≍/parenleftBig√lognh−d/2−1 n¶n/parenrightBig1 2and¶n≍n−s for some large constant s >0, we have 4³n/parenleftbigg E/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/bracketrightbigg +1/parenrightbigg +Φ¶n(³n)+/hatwideΦ¶n(³n)+RM ≲P/parenleftBig/radicalbig lognh−d/2−1 n¶n/parenrightBig1 2+dlog/parenleftbig ¶−1 n/parenrightbig/parenleftbig an+hp+1 n/parenrightbig1 2 ≲Pdlogn/parenleftbig an+hp+1 n/parenrightbig1 2. Putting together, we have sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingleZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/vextendsingle/vextendsingle/vextendsingle/vextendsingleX/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle≲logn/parenleftbig an+hp+1 n/parenrightbig1
|
https://arxiv.org/abs/2505.05670v1
|
2. ■ SA-5.12 Proof of Theorem SA-2.8 The result follows from Lemma SA-5.3, Lemma SA-5.4and Lemma SA-5.5. ■ SA-5.13 Proof of Theorem SA-2.9 LemmaSA-2.8and Dominated Convergence Theorem give sup u∈R/vextendsingle/vextendsingle/vextendsingle/vextendsingleP/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/vextendsingle/hatwideT(ν)(x)/vextendsingle/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg −P/parenleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefu/parenrightbigg/vextendsingle/vextendsingle/vextendsingle/vextendsingle=o(1). Then, by definition of /hatwideB(ν) ³(x), P/bracketleftbig µ(ν)(x)∈/hatwideB(ν) ³(x),∀x∈B/bracketrightbig =P/bracketleftbigg sup x∈B|/hatwideT(ν)(x)| fc³/bracketrightbigg =P/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefc³/bracketrightbigg +o(1) =E/bracketleftbigg P/bracketleftbigg sup x∈B/vextendsingle/vextendsingle/vextendsingle/hatwideZ(ν)(x)/vextendsingle/vextendsingle/vextendsinglefc³/vextendsingle/vextendsingle/vextendsingle/vextendsingleWn/bracketrightbigg/bracketrightbigg +o(1) = 1−³+o(1). ■ 40 SA-6 Proofs for Section SA-3 SA-6.1 Proof of Lemma SA-3.1 By Assumption SA–1(iii) and Assumption SA–3, for any r̸= 0, for any x∈Bandy∈St,x(r), |µt(y)−µt(x)|≲|r|. Hence for any r̸= 0, for any x∈B,t∈ {0,1}, |¹t,x(r)−µt(x)| f/integraltext St,x(|r|)|µt(y)−µt(x)|fX(y)Hd−1(dy) /integraltext St,x(|r|)fX(y)Hd−1(dy)≲r. implying |¹t,x(0)−µt(x)| flim r→0|¹t,x(r)−µt(x)|= 0. ■ SA-6.2 Proof of Lemma SA-3.2 The proof will be similar to the proof of Lemma SA-2.1. Let 0fvfp. Instead of gn, we study the function kndefined by kn(À,x) =/parenleftbiggd(À,x) h/parenrightbiggv1 hdK/parenleftbiggd(À,x) h/parenrightbigg ,À,x∈X. DefineH={kn(·,x) 1(· ∈At) :x∈X}. We will show His a VC-type of class. Constant Envelope Function We assume Kis continuous and has compact support. Hence there exists a constant C1such that supx∈X∥kn(·,x)∥∞fC1h−d=H. Diameter in L2For each x∈X,kn(·,x) is supported in {À:d(À,x)fh}. By Assumption SA–1(ii) and Assumption SA–3(i),supx∈XP(d(Xi,x)fh)≲hd. It follows that supx∈X∥kn(·,x)∥P,2fC2h−d/2for some constant C2. We can take C1large enough so that Ã=C2h−d/2fF=C1h−d. Ratio¶=à F=C3√ hd, for some constant C3. Covering Numbers Case 1:kis Lipschitz. Letx,x′∈X. By Assumption SA–3and Assumption SA–2, sup À∈X|kn(À,x)−kn(À,x′)| fsup À∈X/bracketleftbigg/parenleftbiggd(À,x) h/parenrightbiggv −/parenleftbiggd(À,x′) h/parenrightbiggv/bracketrightbigg kh(d(À,x))+/parenleftbiggd(À,x′) h/parenrightbiggv [kh(d(À,x))−kh(d(À,x′))] ≲h−d−1 n∥x−x′∥∞, By Lipschitz continuity property of F, for any ε∈(0,1] and for any finitely supported measure Qand metric ∥·∥Q,2based on L2(Q), N({kn(·,x) :x∈X},∥·∥Q,2,ε∥H∥Q,2)fN(X,∥·∥∞,ε∥H∥Q,2hd+1)(1) ≲/parenleftbiggdiam(X) ε∥H∥Q,2hd+1/parenrightbiggd ≲/parenleftbiggdiam(X) εh/parenrightbiggd , 41 where in (1) we used the fact that ε∥H∥Q,2hd+1≲εh≲1. Hence Hforms a VC-type class in that supQN(H,∥·∥Q,2,ε∥H∥Q,2)≲(C1/ϵ)C2for allϵ∈(0,1] withC1=diam(X) handC2=d. Moreover, for any discretemeasure Q, andforany x,x′∈X,∥kn(·,x) 1(· ∈At)−kn(·,x′) 1(· ∈At)∥Q,2f ∥kn(·,x)−kn(·,x′)∥Q,2. Hence sup Q∈A(X)N(H,∥·∥Q,2,ε∥H∥Q,2)fN({kn(·,x) :x∈X},∥·∥Q,2,ε∥H∥Q,2)f(C1/ε)C2, ε∈(0,1], whereA(X) denotes the collection of all finite discrete measures on X. Case 2: k= 1(· ∈[−1,1]).The same argument as in the proof of Lemma SA-4.1and the fact that L={ 1((·−x)/h∈[−1,1]d) :x∈B}has VC dimension no greater than 2 dimplies again we have, sup Q∈A(X)N(H,∥·∥Q,2,ε∥H∥Q,2)fN({kn(·,x) :x∈X},∥·∥Q,2,ε∥H∥Q,2)f(C1/ε)C2,ε∈(0,1]. Hence, by Chernozhukov et al. (2014b, Corollary 5.1) E/bracketleftbigg sup l∈H|En[l(Xi)]−E[l(Xi)]|/bracketrightbigg ≲Ã√n/radicalbig C2log(C1/¶)+∥M∥P,2C2log(C1/¶) n ≲1√ nhd/radicalBigg dlog/parenleftbiggdiam(X) h1+d/2/parenrightbigg +1 nhddlog/parenleftbiggdiam(X) h1+d/2/parenrightbigg ≲/radicalbigg logn nhd. We conclude that supx∈X∥/hatwideΨt,x−Ψt,x∥≲P/radicalBig logn nhd. By Weyl’s Theorem, supx∈X|¼min(/hatwideΨt,x)−¼min(Ψt,x)| f supx∈X∥/hatwideΨt,x−Ψt,x∥≲P/radicalBig logn nhd. Thereforewecanlowerboundtheminimumeigenvalueby infx∈X¼min(/hatwideΨt,x)g infx∈X¼min(Ψt,x)−supx∈X|¼min(/hatwideΨt,x)−¼min(Ψt,x)|≳P1. It follows that supx∈X∥/hatwideΨ−1 t,x∥≲P1 and hence sup x∈X∥/hatwideΨ−1 t,x−Ψ−1 t,x∥ fsup x∈X∥Ψ−1 t,x∥∥Ψt,x−/hatwideΨt,x∥∥/hatwideΨ−1 t,x∥≲P/radicalbigg logn nhd. ■ SA-6.3 Proof of Lemma SA-3.3 Consider the class F={(z,u)/ma√sto→e¦ ¿gx(z)(u−hx(z)) :x∈B}, 0fνfp, where for z∈X, gx(z) =rp/parenleftbiggd(z,x) h/parenrightbigg kh(d(z,x)), h x(z) =γ∗ t(x)¦rp(d(z,x)). By definition of γ∗ t(x), γ∗ t(x) =H−1Ψ−1 t,xSt,x,St,x=E/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))Yi 1(Xi∈At)/bracketrightbigg .(SA-6.1) Assumption SA–1impliesSt,xis continuous in x, hencesupx∈X∥St,x∥≲1. And by Assumption SA–2(ii), infx∈X¼min(Ψt,x)≳1. Hence sup x∈B∥Ψ−1 t,xSt,x∥≲1. (SA-6.2) 42 Now, consider properties of F. Definition of γ∗ t(x) implies E[f(Xi,Yi)] = 0 for all f∈F. SinceKis compactly supported, there exists C1,C2>0 such that F(z,u) =C1h−d(|u|+C2) is an envelope function for F. Denote M= max 1fifnF(Xi,Yi), then E[M2]1/2≲h−dE/bracketleftbigg max 1fifn|Yi|2+1/bracketrightbigg1/2 ≲h−dE/bracketleftbigg max 1fifn|Yi|2+v/bracketrightbigg1/(2+v) ≲h−d/bracketleftbiggn/summationdisplay i=1E[|εi+/summationdisplay t∈{0,1}1(x∈At)µt(x)|2+v]/bracketrightbigg1/(2+v) ≲h−dn1/(2+v), where we have used Xis compact and µtis continuous, hence supx∈X|/summationtext t∈{0,1} 1(x∈At)µt(x)|≲1. Denote
|
https://arxiv.org/abs/2505.05670v1
|
Ã= supf∈FE[f(Xi,Yi)2]1/2. Then, Ã2≲sup x∈BE[∥e¦ ¿gx∥2 ∞(|Yi|+∥e¦ ¿hx∥∞)21(kh(Di(x))̸= 0)]≲h−d. To check for the covering number of F, notice that compare to the proof of Lemma SA-2.1, we have one more terme¦ νgxhx=rp/parenleftBig d(z,x) h/parenrightBig kh(d(z,x))γ∗ t(x)¦rp(d(z,x)). All terms except for γ∗ t(x) can be handled as in the proof of Lemma SA-2.1. Recall Equation (SA-6.1), and consider lt,x=e¦ v[R(d(·,x)/h)kh(d(·,x))µt 1(· ∈ At) and Lt={lt,x:x∈B},vis a any multi-index. Then, for any x1,x2∈B, |St,x1−St,x2| f ∥lt,x1−lt,x2∥PX,2, and hence N({e¦ vSt,x:x∈B},|·|,εh−d)fN(Lt,∥·∥PX,2,εh−d)fsup QN(Lt,∥·∥Q,2,εh−d), Same argument as paragraph Covering Numbers in the proof of Lemma SA-3.2then shows sup QN({gx:x∈B},∥·∥Q,2,εC1h−d)f/parenleftbiggdiam(X) hε/parenrightbiggd ,0< εf1, sup QN({gxhx:x∈B},∥·∥Q,2,εC1h−d)f/parenleftbiggdiam(X) hε/parenrightbiggd ,0< εf1, wheresupis taken over all discrete measures on X. Product {gx:x∈B}with the singleton of identity function {u/ma√sto→u,u∈R}, and adding {gxhx:x∈B}, sup QN(F,∥·∥Q,2,ε∥F∥Q,2)f2/parenleftbigg2diam(X) hε/parenrightbiggd ,0< εf1, wheresupis taken over all discrete measures on X×R. Denote C1=d,C2=2(2diam( X))d hd. Hence, by 43 Chernozhukov et al. (2014b, Corollary 5.1) E/bracketleftbigg sup x∈B|e¦ νOt,x|/bracketrightbigg =E/bracketleftbigg sup f∈F|En[f(Xi,Yi)]−E[f(Xi,Yi)]|/bracketrightbigg ≲Ã√n/radicalBig C2log(C1∥M∥P,2/Ã)+∥M∥P,2C2log(C1∥M∥P,2/Ã) n ≲1√ nhd/radicalBigg dlog/parenleftbiggdiam(X) h1+d/2/parenrightbigg +1 n1+v 2+vhddlog/parenleftbiggdiam(X) h1+d/2/parenrightbigg ≲/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd. The rest follows from finite dimensionality of Ot,x, and Lemma SA-3.2. ■ SA-6.4 Proof of Lemma SA-3.4 By Lemma SA-3.1and Equation ( SA-6.1), we have sup x∈B|Bn,t(x)|= sup x∈B/vextendsingle/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xSt,x−µt(x)/vextendsingle/vextendsingle/vextendsingle = sup x∈B/vextendsingle/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xE/bracketleftBig rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))Rp(Di(x))¦(µt(Xi)−µt(x),0,···,0)) 1(Xi∈At)/bracketrightBig/vextendsingle/vextendsingle/vextendsingle ≲sup x∈Bsup z∈X|µt(x)−µt(z)| 1(kh(d(z,x))>0) ≲h. ■ SA-6.5 Proof of Lemma SA-3.5 Denote¸i,t,x=Yi−¹∗ t,x(Di(x)) andÀi,t,x=¹∗ t,x(Di(x))−/hatwide¹t,x(Di(x)). Then /hatwideΥt,x,y=En/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(y) h/parenrightbigg¦ hdkh(Di(x))kh(Di(y))(¸i,t,x+Ài,t,x)21It(Di(x))/bracketrightbigg , and we decompose the error into /hatwideΥt,x,y−Υt,x,y= ∆1,t,x,y+∆2,t,x,y+∆3,t,x,y, ∆1,t,x,y=En/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(y) h/parenrightbigg¦ hdkh(Di(x))kh(Di(y))À2 i,t,x 1It(Di(x))/bracketrightbigg , ∆2,t,x,y= 2En/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(y) h/parenrightbigg¦ hdkh(Di(x))kh(Di(y))¸i,t,xÀi,t,x 1It(Di(x))/bracketrightbigg , ∆3,t,x,y=En/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(y) h/parenrightbigg¦ hdkh(Di(x))kh(Di(y))¸2 i,t,x 1It(Di(x))/bracketrightbigg −E/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(y) h/parenrightbigg¦ hdkh(Di(x))kh(Di(y))¸2 i,t,x 1It(Di(x))/bracketrightbigg . 44 By Assumption SA–2,kh(Di(x))̸= 0 implies ∥rp(Di(x)/h)∥2≲1. Hence by Lemma SA-3.2andSA-3.3, max t∈{0,1}max 1fifnsup x∈B|Ài,t,x| = max t∈{0,1}max 1fifnsup x∈B|rp(Di(x))¦(/hatwideγt,x−γ∗ t,x)| 1(kh(Di(x))g0) = max t∈{0,1}max 1fifnsup x∈B|rp(Di(x))¦H−1(/hatwideΨ−1 t,xOt,x+(/hatwideΨ−1 t,x−Ψ−1 t,x)Ut,x)| 1(kh(Di(x))g0) fmax t∈{0,1}sup x∈B∥/hatwideΨ−1 t,xOt,x∥2+ max t∈{0,1}sup x∈B∥(/hatwideΨ−1 t,x−Ψ−1 t,x)Ut,x∥2 ≲/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd, where Ut,x=En/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))¹∗ t,x(Xi) 1It(Di(x))/bracketrightbigg . Assuminglog(1/h) n1+v 2+vhd→ ∞, similar maximal inequality as in the proof of Lemma SA-3.2shows sup x,y∈X∥∆1,t,x,y∥≲Pmax t∈{0,1}max 1fifnsup x∈B|Ài,t,x|2≲/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightbigg2 , sup x,y∈X∥∆2,t,x,y∥≲Pmax t∈{0,1}max 1fifnsup x∈B|Ài,t,x|≲/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd. (SA-6.3) Consider the ( µ,¿) entry of ∆ 3,t,x,y. Consider the class F=/braceleftbigg (z,u)/ma√sto→/parenleftbiggd(z,x) h/parenrightbiggµ+¿ hdkh(d(z,x))kh(d(z,y))(u−rp(d(z,x))¦γ∗ t,x)2:x,y∈X/bracerightbigg . By Assumption SA–2andSA–1(v), we have supf∈FE[f(Xi,Yi)2]1/2≲h−d/2. Moreover, Assumption SA–2 and Equation (SA-6.2)imply there exists C1,C2>0 such that F(z,u) =C1h−d(u2+C2) is an envelope function for F, with E/bracketleftbig max 1fifnF(Xi,Yi)2/bracketrightbig1 2≲C1h−d(E/bracketleftbig max 1fifnY4 i/bracketrightbig1 2+C2)≲C1h−d(E/bracketleftbig max 1fifnY2+v i/bracketrightbig2 2+v+C2)≲h−dn2 2+v. ApplyChernozhukov et al. (2014b, Corollary 5.1) similarly as in Lemma SA-3.3gives E/bracketleftBigg sup f∈F|En[f(Xi,Yi)]−E[f(Xi,Yi)]|/bracketrightBigg ≲/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd. Finite dimensionality of ∆ 3,t,x,ythen implies E/bracketleftbigg sup x,y∈X∥∆3,t,x,y∥/bracketrightbigg ≲/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd. (SA-6.4) Putting together Equations ( SA-6.3), (SA-6.4) and Lemma SA-3.2gives the result. ■ 45 SA-6.6 Proof of Theorem SA-3.1 All analysis in Lemma SA-3.2and Lemma SA-3.3can be done when the index set is the singleton {x}instead ofB, replacing ( Chernozhukov et al. ,2014b, Corollary 5.1) by Bernstein inequality, and gives for any x∈B, /vextendsingle/vextendsinglee¦ 1Ψ−1 t,xOt,x/vextendsingle/vextendsingle≲P/radicalbigg 1 nhd+1 n1+v 2+vhd,
|
https://arxiv.org/abs/2505.05670v1
|
/vextendsingle/vextendsingle/vextendsinglee¦ 1(/hatwideΨ−1 t,x−Ψ−1 t,x)Ot,x/vextendsingle/vextendsingle/vextendsingle≲P/radicalbigg 1 nhd/parenleftBigg/radicalbigg 1 nhd+1 n1+v 2+vhd/parenrightBigg . The decomposition Equation ( SA-3.1) then gives the result. ■ SA-6.7 Proof of Theorem SA-3.2 DefineTdis(x) = Ξ−1/2 x,xe¦ 1Ψ−1 t,xOt,x. Notice that if we define Zn,i=1 nΞ−1/2 x,xe¦ 1Ψ−1 t,xrp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))(Yi−¹∗ t,x(Di(x))) 1(Xi∈At), thenTdis(x) =/summationtextn i=1Zn,i. Moreover, E[Zn,i] = 0 and V[Zn,i] =n−1. By Berry-Essen Theorem, sup u∈R/vextendsingle/vextendsingleP/parenleftbig Tdis(x)fu/parenrightbig −Φ(u)/vextendsingle/vextendsingle≲n/summationdisplay i=1E/bracketleftBig |Zn,i|3/bracketrightBig =n/summationdisplay i=1n−3Ξ−3/2 x,xE/bracketleftbigg |e¦ 1Ψ−1 t,xrp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x)) 1(Xi∈At)(Yi−¹∗ t,x(Di(x))|3/bracketrightbigg ≲n−2Ξ−3/2 x,xE[|kh(Di(x))(Yi−¹∗ t,x(Di(x)))|3] ≲n−2Ξ−3/2 x,xE[|kh(Di(x))(E[|Yi|3|Xi]+|¹∗ t,x(Di(x))|3)|] ≲(nhd)−1/2, where in the third line we used supx∈B∥rp/parenleftBig Di(x) h/parenrightBig kh(Di(x))∥≲1 holds almost surely in Xi, and in the last line we used Ξ x,x≳(nhd)−1/2from Lemma SA-3.5, Assumption SA–1(v) so that E[|Yi|3|Xi]≲1 and ¹∗ t,x(Di(x)) =µ∗ t(x)¦rp(Di(x)) = (Ψt,xSt,x)−1rp/parenleftBigDi(x) h/parenrightBig , implying max 1fifnsupx∈B|¹∗ t,x(Di(x))|≲1 fort∈ {0,1}. The counterpart of Theorem SA-3.4gives |/hatwideTdis(x)−Tdis(x)|≲P1√ nhd+1 nv 2+vhd+√ nhd/summationdisplay t∈{0,1}|Bn,t(x)|. Putting together we have P(Ä∈/hatwideIdis(x,³)) =P(|/hatwideTdis(x)| fc³) =P(|Tdis(x)| fc³)+o(1) = 2(1 −Φ(c³))+o(1) = 1−³+o(1). ■ 46 SA-6.8 Proof of Theorem SA-3.3 The statement follows from Lemma SA-3.2, Lemma SA-3.3and the decomposition Equation ( SA-3.1).■ SA-6.9 Proof of Theorem SA-3.4 We make the decomposition based on Equation ( SA-3.1) and convergence of /hatwideΞx,x, /hatwideTdis(x)−Tdis(x) =/hatwideΞ−1/2 x,x/parenleftbigg/summationdisplay t∈{0,1}(−1)t+1 2(/hatwide¹t,x(0)−¹t,x(0))/parenrightbigg −Ξ−1/2 x,x/parenleftbigg/summationdisplay t∈{0,1}(−1)t+1 2e¦ 1Ψ−1 t,xOt,x/parenrightbigg =/hatwideΞ−1/2 x,x/parenleftbigg/summationdisplay t∈{0,1}(−1)t+1 2(/hatwide¹t,x(0)−¹t,x(0))−/summationdisplay t∈{0,1}(−1)t+1 2e¦ 1Ψ−1 t,xOt,x/parenrightbigg (= ∆1,x) +(/hatwideΞ−1/2 x,x−Ξ−1/2 x,x)/summationdisplay t∈{0,1}(−1)t+1 2e¦ 1Ψ−1 t,xOt,x (= ∆2,x) By Lemma SA-3.2andSA-3.3, and the decomposition Equation ( SA-3.1), sup x∈X/vextendsingle/vextendsingle/vextendsingle/vextendsingle/summationdisplay t∈{0,1}(−1)t+1 2(/hatwide¹t,x(0)−¹t,x(0))−/summationdisplay t∈{0,1}(−1)t+1 2e¦ 1Ψ−1 t,xOt,x/vextendsingle/vextendsingle/vextendsingle/vextendsingle ≲P/radicalbigg log(1/h) nhd/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightBigg + sup x∈B/summationdisplay t∈{0,1}|¹∗ t,x(0)−¹t,x(0)|. Together with Lemma SA-3.5, sup x∈B|∆1,x|≲Plog(1/h)√ nhd+(log(1/h))3 2 n1+v 2+vhd+√ nhdsup x∈B/summationdisplay t∈{0,1}|¹∗ t,x(0)−¹t,x(0)|.(SA-6.5) By Lemma SA-3.2, Lemma SA-3.3and Lemma SA-3.5, and assumenv 2+vhd log(1/h)→ ∞, then sup x∈X/vextendsingle/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xOt,x/parenleftBig Ξ−1/2 x,x−/hatwideΞ−1/2 x,x/parenrightBig/vextendsingle/vextendsingle/vextendsingle≲P√ nhd/parenleftbigg/radicalbigg log(1/h) nhd+log(1/h) n1+v 2+vhd/parenrightbigg/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg =/radicalbig log(1/h)/parenleftbigg 1+/radicalBigg log(1/h) nv 2+vhd/parenrightbigg/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg ≲/radicalbig log(1/h)/parenleftBigg/radicalbigg log(1/h) nhd+log(1/h) nv 2+vhd/parenrightBigg . Hence sup x∈B|∆2,x|≲Plog(1/h)√ nhd+(log(1/h))3 2 n1+v 2+vhd. (SA-6.6) Putting together Equations ( SA-6.5), (SA-6.6) give the result. ■ 47 SA-6.10 Proof of Theorem SA-3.5 Without loss of generality, we can assume X= [0,1]d, andQFt=PXis a valid surrogate measure for PX with respect to G, andϕG=Idis a valid normalizing transformation (as in Lemma SA-4.1). This implies the constants c1andc2from Lemma SA-4.1are all 1. By similar arguments as in the proof of Theorem SA-2.7, we get properties of Gas follows: MG≲h−d/2,EG≲hd/2,TVG≲hd/2−1,sup QN(G,∥·∥Q,2,ε(2c+1)d+1MG)f2c′ε−d−1+2. By definition of ¹∗(·), for each x∈B,t∈ {0,1}, ¹∗ t,x(d(u,x)) =γ∗ t(x)¦rp(d(u,x)) = (H−1Ψ−1 t,xSt,x)¦rp(d(u,x)) = (Ψ−1 t,xSt,x)¦rp/parenleftBigd(u,x) h/parenrightBig , recalling Ψt,x=E/bracketleftBigg rp/parenleftbiggDi(x) h/parenrightbigg rp/parenleftbiggDi(x) h/parenrightbigg¦ kh(Di(x)) 1It(Di(x))/bracketrightBigg ,St,x=E/bracketleftbigg rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))Yi 1(Xi∈At)/bracketrightbigg . We can check that ∥Ψ−1 t,x∥≲1,∥St,x∥≲1 and MHt≲h−d/2,EHt≲h−d/2, t∈ {0,1}. In what follows, we verify the entropy and total variation properties of H. Using product rule we can verify sup u∈Xsup x,x′∈B|¹∗ t,x(d(u,x))−¹∗ t,x(d(u,x′))| ∥x−x′∥≲h−1. Defineft,x(·) =h−d/2√ nΞx,xe¦ 1Ψ−1 t,xrp(·)K(·)(Ψ−1 t,xSt,x)¦rp(·). Then, Kt(u;x)¹∗ t,x(d(u,x)) =h−d/2ft,x/parenleftBigd(u,x) h/parenrightBig ,u∈X,x∈B,t∈ {0,1}. TakeHt={Kt(·;x)¹∗ t,x(d(·,x)) :x∈B},t∈ {0,1}. Fort∈ {0,1},ft,xsatisfies: (i) boundedness sup x∈Bsup u∈X|ft,x(u)| fc, (ii) compact support supp(ft,x(·))¦[−c,c]d,∀x∈B, (iii) Lipschitz continuity sup x∈Bsup u,u′∈X|fx(u)−fx(u′)| ∥u−u′∥fc sup u∈Xsup x,x′∈B|fx(u)−fx′(u)| ∥x−x′∥fch−1, for some constant cnot depending on n. Then, by an argument similar
|
https://arxiv.org/abs/2505.05670v1
|
to Cattaneo et al. (2024, Lemma 7), there exists a constant c′only depending on canddthat for any 0 fεf1, sup QN/parenleftBig hd/2Ht,∥·∥Q,1,(2c+1)d+1ε/parenrightBig fc′ε−d−1+1, 48 where supremum is taken over all finite discrete measures. Taking a constant envelope function MHt= (2c+1)d+1h−d/2, we have for any 0 < εf1, sup QN(Ht,∥·∥Q,1,εMFt)fc′ε−d−1+1. By Lemma SA-5.2, above implies the uniform covering number for Htsatisfies NHt(ε)f4c′(ε/2)−d−1,0< εf1. Since H¦H0+H1, here + denotes the Minkowski sum, with MHtaken to be MH0+MH1, a bound on the uniform covering number of Hcan be given by NH(ε)f16(c′)2(ε/2)−2d−2,0< εf1. With the assumption that L(Et,x)fChd−1forEt,x={y∈At: (y−x)/h∈Supp(K)}for allt∈ {0,1}, x∈B, and the fact that TVHt≲hd/2−1fort∈ {0,1}, the same argument as in the paragraph Total Variation in the proof of Theorem SA-2.7shows TVH≲hd/2−1. Now apply Lemma SA-4.2withG,Hdefined in Equation ( SA-3.3) andR={Id}, noticing that (Tdis:x∈B) = (An(g,h,r) : (g,h,r)∈F×R),F={(gx,hx) :x∈B} ¦G×H, the result then follows. ■ SA-6.11 Proof of Theorem SA-3.6 The result follows from Theorem SA-3.5, Theorem SA-3.4, Lemma SA-3.5and similar arguments as the proof of Theorem SA-2.9. ■ 49 SA-7 Proofs of Distance-Based Bias Results SA-7.1 Proof of Lemma 2 SA-7.1.1 Upper Bound The proof is essentially the proof for Lemma SA-3.4with the data generating process ranging over P. By LemmaSA-3.1and Equation ( SA-6.1), we have sup P∈Psup x∈B|Bn,t(x)| = sup P∈Psup x∈B/vextendsingle/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xSt,x−µt(x)/vextendsingle/vextendsingle/vextendsingle = sup P∈Psup x∈B/vextendsingle/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xE/bracketleftBig rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))rp(Di(x))¦(µt(Xi)−µt(x),0,···,0)) 1(Xi∈At)/bracketrightBig/vextendsingle/vextendsingle/vextendsingle ≲sup P∈Psup x∈Bsup z∈X/vextendsingle/vextendsingle/vextendsinglee¦ 1Ψ−1 t,xE/bracketleftBig rp/parenleftbiggDi(x) h/parenrightbigg kh(Di(x))rp/parenleftbiggDi(x) h/parenrightbigg¦/vextendsingle/vextendsingle/vextendsingle ·sup P∈Psup x∈Bsup z∈X|µt(x)−µt(z)| 1(kh(d(z,x))>0) ≲h. SA-7.1.2 Lower Bound The lower bound is proved by considering the following data generat ing process. Suppose Xi∼Unif([−2,2]2), andµ0(x1,x2) = 0 and µ1(x1,x2) =x2for all (x1,x2)∈X= [−2,2]2. Suppose Yi(0)∼N(µ0(Xi),1) andYi(1)∼N(µ1(Xi),1). Define the treatment and control region by A1={(x,y)∈X:xg0,yg0}, A0=X/A1,B={(x,y)∈R: 0fxf2,y= 0orx= 0,0fyf2}. Suppose Yi= 1(Xi∈ A0)Yi(0)+ 1(Xi∈A1)Yi(1). Suppose we choose dto be the Euclidean distance and Di(x) =∥Xi−x∥. In this case, although the underlying conditional mean functions µt,t∈ {0,1}are smooth, the conditional mean given distance ¹t,xmay not even be differentiable. In this example, ¹1,(s,0)(r) = 2 Ãr, if 0frfs, r+s Ã−arccos(s/r),ifr > s. FigureSA-1plotsr/ma√sto→¹1,(3/4,0)(r) with the notation xs= (s,0). Under this data generating process, we can show inf 0<h<1sup x∈B|Bn,1(x)−Bn,0(x)| h>0. The proof proceeds in two steps. First, we show a scaling propert y of the asymptotic bias under our example, which gives a reduction to fixed- hbias calculation. Second, we prove the lower bound via the reduction from previous step. Step 1: A Scaling Property . Let 0< h <1,0< s <1,0< C <1. Define h′=Chands′=Cs. Here 50 Figure SA-1. Conditional Mean Given Distance with One Kink Cis the scaling factor and denote xs= (s,0) andxs′= (s′,0). Denote bias for xs′under bandwidth h′to be biasn,1(h′,s′) =e¦ 1E/bracketleftbigg rp/parenleftbiggDi((s′,0)) h′/parenrightbigg rp/parenleftbiggDi((s′,0)) h′/parenrightbigg¦ kh′(Di((s′,0))) 1(Xi∈A1)/bracketrightbigg−1 E/bracketleftbigg rp/parenleftbiggDi((s′,0)) h′/parenrightbigg kh′(Di((s′,0)))(µ1(Xi−(s′,0))) 1(Xi∈A1)/bracketrightbigg , (SA-7.1) where we have used the fact that µ1is linear in our example, hence µ1(Xi)−µ1((s′,0)) =µ1(Xi−(s′,0)). We reserve the notation Bn,t,t= 0,1, to the bias when bandwidth is h, that is, Bn,t(xs)≡biasn,t(h,s), h∈(0,1),s∈(0,1),t= 0,1. Inspecting each element of the last vector, for all l∈N, E/bracketleftbigg/parenleftbigg∥Xi−(s′,0)∥ h′/parenrightbiggl kh′(∥Xi−(s′,0)∥)(µ1(Xi−(s′,0))) 1(Xi∈A1)/bracketrightbigg =/integraldisplay2 0/integraldisplay2 0/parenleftbigg1 h′/parenrightbigg2/parenleftbigg∥(u′−s′,v′)∥ h′/parenrightbiggl k/parenleftbigg∥(u′−s′,v′)∥ h′/parenrightbigg µ1((u′,v′)−(s′,0))1 4du′dv′ (1)=/integraldisplay2/C 0/integraldisplay2/C 0/parenleftbigg1 Ch/parenrightbigg2/parenleftbigg∥(Cu−Cs,Cv)∥
|
https://arxiv.org/abs/2505.05670v1
|
Ch/parenrightbiggl k/parenleftbigg∥(Cu−Cs,Cv)∥ Ch/parenrightbigg µ1(C(u−s,v))C2 4dudv =/integraldisplay2/C 0/integraldisplay2/C 0/parenleftbigg1 h/parenrightbigg2/parenleftbigg∥(u−s,v)∥ h/parenrightbiggl k/parenleftbigg∥(u−s,v)∥ h/parenrightbigg Cµ1((u−s,v))1 4dudv (2)=/integraldisplay2 0/integraldisplay2 0/parenleftbigg1 h/parenrightbigg2/parenleftbigg∥(u−s,v)∥ h/parenrightbiggl k/parenleftbigg∥(u,v)−(s,0)∥ h/parenrightbigg Cµ1((u,v)−(s,0))1 4dudv =CE/bracketleftbigg/parenleftbigg∥Xi−(s,0)∥ h/parenrightbiggl kh(∥Xi−(s,0)∥)µ1(Xi−(s,0)) 1(Xi∈A1)/bracketrightbigg , where in (1) we have used a change of variable ( u,v) =1 C(u′,v′), and (2) holds since k/parenleftBig ∥·−(s,0)∥ h/parenrightBig is supported in (s,0)+hB(0,1), which is contained in [0 ,2]×[0,2]¦[0,2/C]×[0,2/C] for all 0 < h <1, 0< s <1, 51 0< C <1. This means E/bracketleftbigg rp/parenleftbiggDi((s′,0)) h′/parenrightbigg kh′(Di((s′,0)))(µ1(Xi−(s′,0))) 1(Xi∈A1)/bracketrightbigg =CE/bracketleftbigg rp/parenleftbiggDi((s,0)) h/parenrightbigg kh(Di((s,0)))(µ1(Xi−(s,0))) 1(Xi∈A1)/bracketrightbigg . Similarly, for all l∈Nand 0< h <1, 0< s <1, 0< C <1, E/bracketleftbigg/parenleftbiggDi((s′,0))) h′/parenrightbiggl kh′(Di((s′,0))) 1(Xi∈A1)/bracketrightbigg =E/bracketleftbigg/parenleftbiggDi((s,0)) h/parenrightbiggl kh(Di((s,0))) 1(Xi∈A1)/bracketrightbigg , implying E/bracketleftbigg rp/parenleftbiggDi((s′,0)) h′/parenrightbigg rp/parenleftbiggDi((s′,0)) h′/parenrightbigg¦ kh′(Di((s′,0))) 1(Xi∈A1)/bracketrightbigg =E/bracketleftbigg rp/parenleftbiggDi((s,0)) h/parenrightbigg rp/parenleftbiggDi((s,0)) h/parenrightbigg¦ kh(Di((s,0))) 1(Xi∈A1)/bracketrightbigg . It then follows that for all 0 < h <1, 0< s <1, 0< C <1, biasn,1(h′,s′) =Cbiasn,1(h,s). Moreover, for all 0 < h <1, 0< s < h, Bn,1(xs) = bias n,1(h,s) =hbiasn,1/parenleftBig 1,s h/parenrightBig . (SA-7.2) Sinceµ0≡0, it is easy to check that Bn,0(xs) = bias n,0(h,s)≡0,0< h <1,0< s < h. Step 2: Lower Bound on Bias . Now we want to show sup0fsf1|biasn,1(1,s)−biasn,0(1,s)|>0. By Equation ( SA-7.1), biasn,1(1,s)−biasn,0(1,s) =e¦ 1Ψ−1 sSs−µ1(xs)−0 =e¦ 1Ψ−1 sSs, Ψs=E/bracketleftBig rp(Di(xs))rp(Di(xs))¦k(Di(xs)) 1(Xi∈A1)/bracketrightBig , Ss=E[rp(Di(xs))k(Di(xs))µ1(Xi) 1(Xi∈A1)]. Changing to polar coordinates, we have Ψs=/integraldisplay∞ 0/integraldisplayà Θs(r)rp(r)rp(r)¦K(r)rd¹dr, Ss=/integraldisplay∞ 0/integraldisplayà Θs(r)rp(r)K(r)rsin(¹)rd¹dr, 52 with Θs(r) = 0, if 0frfs, arccos(s/r),ifr > s. For notation simplicity, denote A(s) =/integraldisplay∞ 0/integraldisplayà Θs(u)rp(u)rp(u)¦k(u)ud¹du=A1(s)+A2(s), B(s) =/integraldisplay∞ 0/integraldisplayà Θs(u)rp(u)k(u)usin(¹)ud¹du=B1(s)+B2(s), where A1(s) =/integraldisplays 0/integraldisplayà 0rp(u)rp(u)¦k(u)ud¹du=Ã/integraldisplays 0rp(u)rp(u)¦k(u)udu, A2(s) =/integraldisplay∞ s/integraldisplayà arccos(s/u)rp(u)rp(u)¦k(u)ud¹du=/integraldisplay∞ s(Ã−arccos(s/u))rp(u)rp(u)¦k(u)udu, B1(s) =/integraldisplays 0/integraldisplayà 0rp(u)k(u)usin(¹)ud¹du= 2/integraldisplays 0rp(u)k(u)u2du, B2(s) =/integraldisplay∞ s/integraldisplayà arccos(s/u)rp(u)k(u)usin(¹)ud¹du=/integraldisplay∞ s(1+s u)rp(u)k(u)u2du. Evaluating the above at zero gives A(0) =à 2/integraldisplay∞ 0urp(u)rp(u)¦k(u)du,B(0) =/integraldisplay∞ 0u2rp(u)k(u)du. Hence biasn,1(1,0)−biasn,0(1,0) =e¦ 1A(0)−1B(0) =e¦ 1A(0)−1/bracketleftbigg2 ÃA(0)e2/bracketrightbigg = 0. (SA-7.3) Taking derivatives with respect to s, we have ˙A1(s) =Ãrp(s)rp(s)¦k(s)s, ˙A2(s) =−Ãrp(s)rp(s)¦k(s)s+/integraldisplay∞ s1√ u2−s2urp(u)rp(u)¦k(u)du, ˙B1(s) = 2rp(s)k(s)s2, ˙B2(s) =−2rp(s)k(s)s2+/integraldisplay∞ surp(u)k(u)du. Evaluating the above at zero gives ˙A(0) =/integraldisplay∞ 0rp(u)rp(u)¦k(u)du,˙B(0) =/integraldisplay∞ 0urp(u)k(u)du. 53 Using matrix calculus, we know d dsbiasn,1(1,s)−biasn,0(1,s)/vextendsingle/vextendsingle/vextendsingle/vextendsingle s=0 =d dse¦ 1A(s)−1B(s)/vextendsingle/vextendsingle/vextendsingle/vextendsingle s=0(SA-7.4) =−e¦ 1A(0)−1˙A(0)/bracketleftbig A(0)−1B(0)/bracketrightbig +e¦ 1A(0)−1˙B(0) (SA-7.5) =−e¦ 1A(0)−1˙A(0)/bracketleftbigg2 Ãe2/bracketrightbigg +e¦ 1/bracketleftbigg2 Ãe1/bracketrightbigg =−2 Ãe¦ 1A(0)−1/integraldisplay∞ 0 u u2 ··· up+1 k(u)du+e¦ 1/bracketleftbigg2 Ãe1/bracketrightbigg =−4 Ã2+2 Ã. (SA-7.6) Combining Equations (SA-7.3)and(SA-7.4), and the fact thatd dsbiasn,1(1,s)−biasn,0(1,s) is continuous in s, we can show sup0fsf1|biasn,1(1,s)−biasn,0(1,s)|>0. Combining with Equation ( SA-7.2), we have inf 0<h<1sup x∈B|Bn,1(x)−Bn,0(x)| hginf 0<h<1sup 0<s<h|biasn,1(s,h)−biasn,0(s,h)| h = inf 0<h<1sup 0<s<h/vextendsingle/vextendsingle/vextendsinglebiasn,1/parenleftBig 1,s h/parenrightBig/vextendsingle/vextendsingle/vextendsingle >0. ■ SA-7.2 Proof of Lemma 3 The proof of part (i) follows from part (ii) with B∩B(x,ε) as the boundary. To prove part (ii), without loss of generality, we assume that º=p+1, and want to show supx∈Bo|Bn,t(x)|≲hp+1. This means we have assumed that Bhas a one-to-one curve length parametrization µthat isCp+3with curve length L, there existsε,¶ >0 such that for all x∈µ([¶,L−¶]) and 0< r < ε,S(x,r) intersects Bwith two points, s(x,r) andt(x,r). Define a(x,r) andb(x,r) to be the number in [0 ,2Ã] such that [a(x,r),b(x,r)] ={¹:x+r(cos¹,sin¹)∈A1}. Then, for x∈Band 0< r < ε,¹1,x(r) has the following explicit representation: ¹1,x(r) =/integraltextb(x,r) a(x,r)µ1(x+r(cos¹,sin¹))fX(x+r(cos¹,sin¹))d¹ /integraltextb(x,r) a(x,r)fX(x+r(cos¹,sin¹))d¹. 54 Step 1: Curve length v.s. Distance to µ(0) W.l.o.g., assume µ(0) =xandµ′(0)
|
https://arxiv.org/abs/2505.05670v1
|
= (1,0). LetT: [0,∞)→[0,∞) to be a continuous increasing function that satisfies ∥µ◦T(r)∥2=r2,∀r∈[0,h]. Initial Case: l= 1,2,3. We will show that TisClon (0,h). For notational simplicity, define another function ϕ: [0,∞)→[0,∞) byϕ(t) =∥µ(t)∥2. Using implicit derivations iteratively, ϕ◦T(r) =r2, ϕ′(T(r))T′(r) = 2r, ϕ′′(T(r))(T′(r))2+ϕ′(T(r))T′′(r) = 2, ϕ′′′(T(r))(T′(r))3+3ϕ′′(T(r))T′(r)T′′(r)+ϕ′(T(r))T′′′(r) = 0. (1) From the above equalities, we get T′(r) =2r ϕ′(T(r)), T′′(r) =2−ϕ′′(T(r))(T′(r))2 ϕ′(T(r)), T′′′(r) =−ϕ′′′(T(r))(T′(r))3+3ϕ′′(T(r))T′(r)T′′(r) ϕ′(T(r)). Since we have assumed µisCp+3on (0,h),ϕis alsoCp+1on (0,h). It follows from the above calculation thatTisCp+3on (0,h). In order to find the limit of derivatives of Tat 0, we need ϕ(t) =µ1(t)2+µ2(t)2, ϕ (0) = 0, ϕ′(t) = 2µ1(t)µ′ 1(t)+2µ2(t)µ′ 2(t), ϕ′(0) = 0, ϕ′′(t) = 2µ′ 1(t)µ′ 1(t)+2µ1(t)µ′′ 1(t)+2µ′ 2(t)µ′ 2(t)+2µ2(t)µ′′ 2(t), ϕ′′(0) = 2, ϕ′′′(t) = 6µ′ 1(t)µ′′ 1(t)+2µ1(t)µ′′′ 1(t)+6µ′ 2(t)µ′′ 2(t)+2µ2(t)µ′′′ 2(t). Using L’Hˆ opital’s rule lim r³0T′(r) = lim r³02 ϕ′′(T(r))T′(r)=2 2limr³0T′(r)=⇒lim r³0T′(r) = 1, lim r³0T′′(r) = lim r³0−ϕ′′′(T(r))(T′(r))3−ϕ′′(T(r))2T′(r)T′′(r) ϕ′′(T(r))T′(r) =−ϕ(3)(0)−4limr³0T′′(r) 2 =−ϕ(3)(0) 6 55 lim r³0T(3)(r) =−lim r³0ϕ(4)(T(r))(T′(r))4+ϕ(3)(T(r))3(T′(r))2T′′(r)+3ϕ(3)(T(r))(T′(r))2T′′(r) ϕ′′(T(r))T′(r) +lim r³03ϕ′′(T(r))T′(r)T(3)(r) ϕ′′(T(r))T′(r) =−ϕ(4)(0)−(ϕ(3)(0))2+6lim r³0T(3)(r) 2 =−ϕ(4)(0)−(ϕ(3)(0))2 8. Induction Step: lg4. Assume limr³0T(i)(r) exists and is finite for 0 fifl−2 and there exists a function q(r) such that (i) q(r) is a polynomial of ϕ(j)(T(r)),1fjfl−1 andT(k)(r),1fkfl−2, (ii) limr³0q(r) = 0 and (iii) q(r)+ϕ′(T(r))T(l−1)(r) = 0. (2) Forl= 4, this assumption can be verified from Equation (1). Using L’hopital’s ru le, lim r³0T(l−1)(r) = lim r³0−q(r) ϕ′(T(r) L′h= lim r³0−q′(r) ϕ′′(T(r))T′(r). From the previous paragraph, limr³0ϕ′′(T(r))T′(r) exists and is finite. And q′(r) is a polynomial of ϕ(j)(T(r)),1fjflandT(k)(r),1fkfl−1. Hence limr³0T(l−1)(r) can be solved from the following equation and is finite: lim r³0q′(r)+lim r³0ϕ′′(T(r))T′(r)·lim r³0T(l−1)(r) = 0. (3) Taking derivatives on both sides of Equation (2), q′(r)+ϕ′′(T(r))T′(r)T(l−1)(r)+ϕ′(T(r))T(l)(r) = 0. Takeq2(r) =q′(r)+ϕ′′(T(r))T′(r)T(l−1)(r). Then, (i) q2(r) is a polynomial of ϕ(j)(T(r)),1fjfland T(k)(r),1fkfl−1, (ii) lim r³0q2(r) = 0, and (iii) q2(r)+ϕ′(T(r))T(l)(r) = 0. Continue this argument till l=p+3,limr³0T(j)(r) exists and is a polynomial of ϕ(0)(0),...,ϕ(j+1)(0), which implies that it is bounded by a constant only depending on µ. Step 2: (p+1)-times continuously differentiable Sr We use the notation µ(t) = (µ1(t),µ2(t)). Define A(t) ="µ(t)−µ(0),µ′(0) = arcsin/parenleftbiggµ2(t) ∥µ(t)∥/parenrightbigg . 56 SinceµisCp+3, we can Taylor expand µat 0 to get µ(t) =/parenleftBigg 0 0/parenrightBigg +/parenleftBigg 1 0/parenrightBigg t+/parenleftBigg u2 v2/parenrightBigg t2+···/parenleftBigg up+2 vp+2/parenrightBigg tp+2+/parenleftBigg R1(t) R2(t)/parenrightBigg , where we have used the fact that µ′ 2(0) = 0 and ∥µ′(0)∥= 1 and R1(t) =/integraldisplayt 0µ(p+3) 1(s)(t−s)p+2 (p+2)!ds, R 2(t) =/integraldisplayt 0µ(p+3) 2(s)(t−s)p+2 (p+2)!ds. SinceµisCp+3,R1(t)/tandR2(t)/tareCp+3on (0,∞). Weclaimthatlimt³0dv dtv(R1(t)/t) exists and is uniformly bounded for all x∈B, for all 0 fvfp+1. Define φ(t) =R1(t)/t. Then φ′(t) =−R1(t) t2+R′ 1(t) t, φ′′(t) =2R1(t) t3−2R′ 1(t) t2+R′′ 1(t) t, φ(3)(t) =−6R1(t) t4+6R′ 1(t) t3−3R(2) 1(t) t2+R(3) 1(t) t··· where R′ 1(t) =/integraldisplayt 0µ(p+1) 1(s)(t−s)p−1 (p−1)!ds, R′′ 1(t) =/integraldisplayt 0µ(p+1) 1(s)(t−s)p−2 (p−2)!ds,··· Sinceµ1isCp+3, there exists C1>0onlydependingon µsuchthatforall0 fvfp+3,/vextendsingle/vextendsingledv dtvR1(t)/vextendsingle/vextendsinglefC1tp+1−v. Hence lim r³0φ(j)(r) = 0,∀0fjfp+1. Similarly, lim r³0dv dtv(R2(t)/t) exists and is uniformly bounded for all 0 fvfp+1. Then µ2(t) ∥µ(t)∥=v2t+···+vp+2tp+2+R2(t)/t/radicalbig (1+u2t+···up+2tp+2+R1(t)/t)2+(v2t+···+vp+2tp+2+R2(t)/t)2,t >0. Notice that µ2(t)/∥µ(t)∥is of the form p(t)(1+q(t))³, where³ <0 andp(t),q(t) areCp+1on (0,∞) withlimr³0dv/dtvp(t) andlimr³0dv/dtvq(t) finite. Since the derivative of p(t)(1+q(t))³is p′(t)(1+q(t))³+p(t)³(1+q(t))³−1q′(t), which is the sum of two terms of the form p2(t)(1+q2(t))³withp2andq2functions that are Cpwith
|
https://arxiv.org/abs/2505.05670v1
|
finite limits at 0. Continue this argument, we see thatµ2(·) ∥µ(·)∥isCp+1on (0,∞) and lim r³0dv dtv(µ2(t)/∥µ(t)∥) exist and are uniformly bounded for all x∈Band for all 0 fvfp+1. SincearcsinisCp+1with bounded (higher order derivatives) on [ −1/2,1/2],AisCp+1on (0,¶) and for all 0fvfp+1, lim r³0A(v)(t) exist and are uniformly bounded for all x∈B. 57 Step 3: (p+1)-times continuously differentiable conditional density By the previous two steps, a(x,r) =A◦T(r) isCp+1on (0,∞) with|limr³0dv drva(x,r)|<∞. Similarly, we can show that b(x,r) isCp+1inrwith finite limits at r= 0. By the assumption that fXisCp+1and bounded below by f,¹1,xisCp+1with lim r³0dv drv¹1,x(r) uniformly bounded for all x∈Band for all 0 fvfp+1. This completes the proof. ■ SA-7.3 Proof of Theorem 6 Lets >0 be a parameter that is chosen later. Consider the following two data ge nerating processes. Data Generating Process P0. LetX={r(cos¹,sin¹) : 0frf1,0f¹fΘ(r)}, where Θ(r) = Ã,0fr < s, ¹k, s+ks2fr < s+(k+1)s2,0fk < K, ¹K, s+Ks2fr <1, withK=+1−s s2,and¹kis the unique zero of sin(¹) ¹=(k+1 2)s2 s+(k+1 2)s2 over¹∈[0,Ã], and¹Kis the unique zero of sin(¹) ¹=Ks2+1−s s+Ks2+1 over¹∈[0,Ã]. Suppose Xihas density fXgiven by fX(r(cos¹,sin¹)) =1 2Θ(r),0frf1,0f¹fΘ(r). Suppose µ0(x1,x2) =1 2+1 100x1,(x1,x2)∈R2. Suppose Yi= 1(¸ifµ(Xi)) where ( ¸i:i: 1,···,n) are i.i.d. random variables independent of ( Xi: 1,···,n). Let¸0(r) =EP0[Yi|∥Xi−(0,0)∥=r], forrg0. In particular, bd(X) has length Ã+2. Hence, bd(X) is a rectifiable curve. Data Generating Process P1. LetX={r(cos¹,sin¹) : 0frf1,0f¹fÃ/2},Xiis uniformly distributed on X, and µ1(x1,x2) =1 2+1 100(x1−s),(x1,x2)∈R2. Suppose Yi= 1(¸ifµ(Xi)) where ( ¸i: 1,···,n) are i.i.d random variables independent to ( Xi: 1,···,n). Let¸1(r) =EP1[Yi|∥Xi−(0,0)∥=r], forrg0. In particular, bd(X) has length Ã/2+2. Hence, bd(X) is a rectifiable curve. 58 Figure SA-2. Xfrom DGP P0 Figure SA-3. Xfrom DGP P1 59 Minimax Lower Bound . First, we show under the previous two models, P0(∥Xi∥ fr) =P1(∥Xi∥ fr) for allrg0. Since in P1,Xiis uniform distributed on R, we know P1(∥Xi∥ fr) =r2, 0frf1. P0(∥Xi∥ fr) =/integraldisplayr 0/integraldisplayΘ(s) 01 2Θ(s)sd¹ds=r2,0frf1. Hence, choosing (0 ,0) as the point of evaluation in both P0andP1, we have dKL(P0(∥Xi−(0,0)∥,Yi),P1(∥Xi−(0,0)∥,Yi)) =/integraldisplay∞ 0/integraldisplay∞ −∞dP0(r,y)logdP0(r,y) dP1(r,y) =/integraldisplay∞ 0/integraldisplay∞ −∞dP0(r)dP0(y|r)logdP0(r)dP0(y|r) dP1(r)dP1(y|r) =/integraldisplay∞ 0dP0(r)/integraldisplay∞ −∞dP0(y|r)logdP0(y|r) dP1(y|r) = 2/integraldisplay1 0dKL(Bern(¸0(r)),Bern(¸1(r)))rdr. UnderP0,Xiis uniformly distributed on {r(cos¹,sin¹) : 0f¹fΘ(r)}for each 0 < rf1. Hence ¸0(r) =1 2+1 1001 Θ(r)/integraldisplayΘ(r) 0rcos(u)du−s 100=1 2+1 100rsin(Θ(r)) Θ(r). Thus, for 0 fk < K, ¸0/parenleftBig s+(k+1 2)s2/parenrightBig =1 2+1 100/parenleftBig (s+(k+1 2)s2)sin(Θk) Θk/parenrightBig =1 2+1 100/parenleftBig (s+(k+1 2)s2)(k+1 2)s2 s+(k+1 2)s2/parenrightBig =¸1/parenleftBig s+(k+1 2)s2/parenrightBig . Since both ¸0and¸1are 1-Lipschitz on all intervals [ s+ks2,s+ (k+ 1)s2] for all 0 fk < K, we know |¸0(r)−¸1(r)| f2s2for allr∈[s,1]. Moreover, ¸0(r) =1 2for all 0 frfsand¸1(r) =1 2+1 100(r2 Ã−s). Hence|¸0(r)−¸1(r)| fsfor all 0frfs. Hence, /integraldisplay1 0dKL(Bern(¸0(r)),Bern(¸1(r)))rdrf/integraldisplay1 0dÇ2(Bern(¸0(r)),Bern(¸1(r)))rdr =/integraldisplay1 0(¸1(r)/parenleftBig¸0(r)−¸1(r) ¸1(r)/parenrightBig2 +(1−¸1(r))/parenleftBig¸0(r)−¸1(r) 1−¸1(r)/parenrightBig2 )rdr f1 1 2−3 100/integraldisplay1 0(¸0(r)−¸1(r))2rdr f1 1 2−3 100/integraldisplays 0s2rdr+1 1 2−3 100/integraldisplay1 s(2s2)2rdr f5 1 2−3 100s4. Moreover, |µ0(0,0)−µ1(0,0)|=1 100s. Hence, by Tsybakov (2008, Theorem 2.2 (iii)), take5 1 2−3 100s4 ∗=log2 n, 60 and conclude that inf Tn∈Tsup P∈Psup x∈B(P)EP[|Tn(Un(x))−µ(x)|]g1 1600s∗≳n−1 4. This concludes the proof. ■ SA-8 Proofs for Section SA-4 SA-8.1 Proof of Lemma SA-4.1 We will use a truncation argument. Let Än≳1 be the level of truncation.
|
https://arxiv.org/abs/2505.05670v1
|
For each r∈R, define ˜r(y) =r(y) 1(|y| fÄn), y∈R, and define the class ˜R={˜r:r∈R}. For an overview of our argument, suppose ZR nis a mean-zero Gaussian process indexed by G×R∪GטR, whose existence will be shown below, then we can decompose by: Rn(g,r)−ZR n(g,r) =/bracketleftbig Rn(g,˜r)−ZR n(g,˜r)/bracketrightbig +/bracketleftbig Rn(g,r)−Rn(g,˜r)/bracketrightbig +/bracketleftbig ZR n(g,r)−ZR n(g,˜r)/bracketrightbig . Part 1: Gaussian strong approximation for the truncated process — ∥Rn(g,˜r)−ZR n(g,˜r)∥G×R Observe that M˜R,Y≲ÄnandpTV˜R,Y≲Än, and˜Ris a VC-type class with envelope M˜R,Y=MR,Y 1(|·| fÄn) overYwith constants cR,YanddR,Y. Then,Cattaneo and Yu (2025, Theorem 2) with v=Änand³= 0 for the class of functions Gand˜Rimplies on a possibly enlarged probability space, there exists a seq uence of mean-zero Gaussian processes ( ZR n(g,r) : (g,r)∈GטR) with almost sure continuous trajectories on (GטR,ÄP) such that E[Rn(g1,r1)Rn(g2,r2)] =E[ZR n(g1,r1)ZR n(g2,r2)] for all ( g1,r1),(g2,r2)∈GטR, and E[∥Rn(g,˜r)−ZR n(g,˜r)∥G×R] fC1Än/parenleftbigg√ dmin/braceleftBig(cd 1Md+1 GTVdEG)1 2d+2 n1/(2d+2),(cd 2 1cd 2 2MGTVd 2EGLd 2)1 d+2 n1/(d+2)/bracerightBig ((d+k)log(cn))3/2+(d+k)log(cn)√nMG/parenrightbigg =C1Än(√ drn((d+k)log(cn))3 2+(d+k)log(cn)√nMG). Part 2: Truncation error for the empirical process — ∥Rn(g,r)−Rn(g,/tildewider)∥G×RConsider the class of differences due to truncation, that is, ∆ R={r−˜r:r∈R}. Our assumptions imply G×∆Ris VC-type in the sense that for all 0 < ε <1, sup QN(G×∆R,∥·∥Q,2,ε∥MG(MR,Y−M˜R,Y)∥Q,2)fcGcR,Y(ε2/4)−dG−dR,Y, wheresupis over all finite discrete measure on Rd+1, andM˜R,Y(y) =MR,Y(y) 1(|y| fÄn). We can check thatMG(MR,Y−M˜R,Y) is an envelope function for G×∆R, since all functions in ∆ Rare evaluated to zero 61 on [−Än,Än]. Denote X= (xi)1fifn, E/bracketleftBig max 1fifnM2 G(MR,Y(yi)−M˜R,Y(yi))2/vextendsingle/vextendsingle/vextendsingleX/bracketrightBig1 2≲MGE/bracketleftBig/parenleftbig max 1fifnMR,Y(yi)/parenrightbig2/vextendsingle/vextendsingle/vextendsingleX/bracketrightBig1 2≲MGn1 2+v, sup (g,r)∈G×RE[g(xi)2r(yi)21(|yi| gÄ1/³ n)]1 2≲sup (g,r)∈G×RE/bracketleftBig g(xi)2E[r(yi)2+v|xi]2 2+vP(|yi| gÄn|xi)v 2+v/bracketrightBig ≲√MGEGÄn. By Jensen’s inequality, we also have E/bracketleftBig max 1fifnM2 G(E[MR,Y(yi)−M˜R,Y(yi)|xi])2/vextendsingle/vextendsingle/vextendsingleX/bracketrightBig1 2≲MGn1 2+v, sup (g,r)∈G×RE[g(xi)2E[r(yi)−˜r(yi)|xi]2]1 2≲/radicalBig MGEGÄ−vn, E[M2 G(MR,Y(yi)−M˜R,Y(yi))2]1/2≲MGÄ−v/2 n. DenoteA= (cGcR)1 2dG+2dR/4 andD= 2dG+2dR,Chernozhukov et al. (2014b, Corollary 5.1) gives E[∥Rn(g,r)−Rn(g/tildewider)∥G×R]≲E/bracketleftBigg sup g∈Gsup h∈∆R1√nn/summationdisplay i=1g(xi)(h(yi)−E[h(yi)|xi])/bracketrightBigg ≲/radicalBig DMGEGÄ−vnlog(A/radicalbig MG/EG)+DMGn1 2+V √nlog(A/radicalbig MG/EG) ≲/radicalBig Dlog(A/radicalbig MG/EG)√MGEGÄ−v/2 n+Dlog(A/radicalbig MG/EG)MG/radicalbig nv 2+v. Part 3: Truncation error for the Gaussian process — ∥ZR n(g,r)−ZR n(g,˜r)∥G×ROur assumptions implyGטR∪G×Ris VC-type w.r.p envelope function 2 MGMR,Yin the sense that for all 0 < ε <1, sup QN(G×R∪GטR,∥·∥Q,2,2ε∥MGMR,Y∥Q,2)fcGcR(ε2/4)−dG−dR, wheresupis over all finite discrete measure on Rd+1. HenceGטR∪G×Ris pre-Gaussian, and on some probability space, there exists a mean-zero Gaussian process ¯ZR nindexed by F=GטR∪G×Rwith the same covariance structure as Rn, and has almost sure continuous path w.r.p the metric Ä, given by Ä((g1,r1),(g2,r2)) =E[(ZR n(g1,r1)−ZR n(g2,r2))2]1 2=E[(Rn(g1,r1)−Rn(g2,r2))2]1 2,(g1,r1),(g2,r2)∈F. Recall the definition of G×∆Rin Part 2. Then, we have shown previously that Ã≡sup f∈G×∆RÄ(f,f)f/radicalBig MGEGÄ−vn, Our assumptions imply for all 0 < ε <1, N(G×R∪GטR,Ä,Ä(2εMGMR,Y,2ε∥MGMR,Y)1/2)fcGcR(ε2/4)−dG−dR 62 DenoteA= (cGcR)1 2dG+2dR/4 andD= 2dG+2dR. Then, by van der Vaart and Wellner (1996, Corollary 2.2.8), choose any ( g0,r0)∈G×R, we have E/bracketleftbigg ∥¯ZR n(g,r)−¯ZR n(g,˜r)∥G×R/bracketrightbigg ≲E/bracketleftbig/vextendsingle/vextendsingle¯ZR n(g0,r0)−¯ZR n(g0,˜r0)/vextendsingle/vextendsingle/bracketrightbig +/integraldisplayà 0/radicalbigg log/parenleftBig cGcR/parenleftbigMG ε/parenrightbigdG+dR/parenrightBig dε f/radicalBig Dlog(A/radicalbig MG/EG)√MGEGÄ−v/2 n ≲/radicalBig (dG+dR,Y)log(cGcR,Ykn)√MGEGÄ−v/2 n. Since (¯ZR n(g,r) :g∈G,r∈R) has the same distribution as ( ZR n(g,r) :g∈G,r∈R), we know from Vorob’ev–Berkes–Philipp theorem ( Dudley,2014, Theorem 1.31) that ¯ZR ncan be constructed on the same probability space as ( xi,yi)1fifnandZR n, such that ¯ZR nandZR ncoincide on G×R. By an abuse of notation, call¯ZR nnowZR n, the outputted Gaussian process. Part 4: Putting Together If follows from the definition of ˜Rand the previous
|
https://arxiv.org/abs/2505.05670v1
|
three parts that if we chooseÄnsuch that rnÄn≍√MGEGÄ−v/2 n, then the approximation error can be bounded by E/bracketleftbig ∥Rn−ZR n∥G×R/bracketrightbig ≲(dlog(cn))3/2rv v+2n(√MGEG)2 v+2+dlog(cn)MGn−v/2 2+v +dlog(cn)MGn−1/2/parenleftBig√MGEG rn/parenrightBig2 v+2, whered=dG+dR,Y+k, andc=cGcR,Yk. ■ SA-8.2 Proof of Lemma SA-4.2 SinceAnis the addition of two Mnprocesses, indexed by G×RandH×Srespectively, the Gaussian strong approximation error essentially depends on the worst case scenario between GandH, and between RandS. Hence (1) taking maximums E=max{EG,EH},M=max{MG,MH}andTV=max{TVG,TVH}; (2) noticing that Anis still indexed by a VC-type class of functions, we can get the claim ed result. For a more rigor proof, we can not apply Cattaneo and Yu (2025, Theorem SA.1) on ( Mn(g,r) :g∈ G,r∈R) and (Mn(h,s) :h∈H,s∈S) directly, since this ignores the dependence structure betwe en the two empirical processes. However, we can still project the func tions onto a Haar basis, and control the strong approximation error for projected process and theprojection error as in the proof of Cattaneo and Yu (2025, Theorem SA.1) and show both errors can be controlled via worst case scenario between GandH, and between RandS. Reductions: Here we present some reductions to our problem. By the same argument as i n Section SA-II.3 (Proofs of Theorem 1) in the supplemental appendix of Cattaneo and Yu (2025), we can show there exists ui,1fifni.i.dUniform([0,1]d) on a possibly enlarged probability space, such that f(xi) =f(ϕ−1 G∪H(ui)),∀f∈G∪H,∀1fifn. 63 With the help of Cattaneo and Yu (2025, Lemma SA.10), we can assume w.l.o.g. that xi’s are i.i.d Uniform(X) withX= [0,1]d, andϕG∪H: [0,1]d→[0,1]dis the identity function. Although we assume supx∈XE[|Yi|2+v|Xi=x]<∞, we first present the result under the assumption supx∈XE[exp(|yi|)|xi= x]f2, which is the same as in Cattaneo and Yu (2025, Theorem 2). Also in correspondence to the notations inCattaneo and Yu (2025, Theorem 2), we set ³= 1 throughout this proof. Cell Constructions and Projections: The constructions here are the same as those in Cattaneo and Yu(2025), and we present them here for completeness. Let AM,N(P,1) ={Cj,k: 0fk <2M+N−j,0fjf M+N}be an axis-aligned cylindered quasi-dyadic expansion of Rd+1, with depth Mfor the main subspace Rdand depth Nfor the multiplier subspace R, with respect to P, the joint distribution of ( xi,yi) taking values in Rd×R, as inCattaneo and Yu (2025, Definition SA.4). To see what AM,N(P,1) is, it can be given by the following iterative partition procedure: 1.Initialization ( q= 0): TakeCM+N−q,0=X×RwhereX= [0,1]d. 2.Iteration ( q= 1,...,M):GivenCK−l,kfor 0flfq−1,0fk <2l, takes= (qmodd) + 1, and construct CK−q,2k=CK−q+1,k∩ {(x,y)∈[0,1]d×R:e¦ sxfcK−q+1,k}andCK−q,2k+1= CK−q+1,k∩{(x,y)∈[0,1]d×R:e¦ sx> cK−j+1,k}such that P(CK−q,2k)/P(CK−q+1,k)∈[1 1+Ä,Ä 1+Ä] for all 0 fk <2q−1. Continue until ( CN,k: 0fk <2M) has been constructed. By construction, for each 0fl < M,CN,l=X0,l×Y0,N,0, withY0,N,0=R. 3.Iteration ( q=M+1,···,M+N):GivenCK−l,kfor 0flfq−1,0fk <2l, eachCM+N−q,kcan be written as X0,l×Yl,M+N−q,mwithk= 2q−Ml+m. Construct CM+N−q−1,2k=X0,l×Yl,M+N−q−1,2m andCM+N−q−1,2k+1=X0,l×Yl,M+N−q−1,2m+1, such that there exists some cM+N−q,k∈Rwith Yl,M+N−q−1,2m=Yl,M+N−q,m∩(−∞,cM+N−q,k)andYl,M+N−q−1,2m+1=Yl,M+N−q,m∩(cM+N−q,k,∞), P(yi∈Yl,M+N−q−1,2m|xi∈X0,l) =P(yi∈Yl,M+N−q−1,2m+1|xi∈X0,l) =1 2P(yi∈Yl,M+N−q−1,m|xi∈ X0,l). Consider the projection Π1(AM,n(P,1)) given in Equation ( SA-7) inCattaneo and Yu (2025), noticing that AM,N(P,1) is one special instance of CM,N(P,Ä). That is, define ej,k= 1Cj,kand/tildewideej,k=ej−1,2k−ej−1,2k+1, Π1(CM,N(P,Ä))[g,r] =µM+N,0(g,r)eM+N,0+/summationdisplay 1fjfM+N/summationdisplay 0fk<2M+N−j/tildewideµj,k(g,r)/tildewideej,k, (SA-8.1) whereej,k= 1(Cj,k) and/tildewideej,k= 1(Cj−1,2k)− 1(Cj−1,2k+1), and µj,k(g,r) = E[g(X)r(Y)|X∈Xj−N,k], ifNfjfM+N, E[g(X)|X∈X0,l]·E[r(Y)|X∈X0,l,Y∈Yl,0,m],ifj < N,k = 2N−jl+m, and/tildewideµj,k(g,r) =µj−1,2k(g,r)−µj−1,2k+1(g,r). We will use Π1as a shorthand for Π1(CM,N(P,Ä)). For
|
https://arxiv.org/abs/2505.05670v1
|
simplicity, we denote Π1(AM,n(P,1)) byΠ1instead. Now define the projected empirical process Π1An(g,h,r,s) =Π1Mn(g,r)+Π1Mn(h,s), g∈G,h∈H,r∈R,s∈S, 64 whereΠ1Mn(g,r) andΠ1Mn(h,s) are given in Equation ( SA-10) inCattaneo and Yu (2025), that is, Π1Mn(g,r) =1√nn/summationdisplay i=1/parenleftbig Π1[g,r](xi,yi)−E[Π1[g,r](xi,yi)]/parenrightbig , Π1Mn(h,s) =1√nn/summationdisplay i=1/parenleftbig Π1[h,s](xi,yi)−E[Π1[h,s](xi,yi)]/parenrightbig . Construction of Gaussian Process Suppose (/tildewideÀj,k: 0fk <2M+N−j,1fjfM+N) are i.i.d. standard Gaussian random variables. Take F(j,k),mto be the cumulative distribution function of ( Sj,k− mpj,k)//radicalbig mpj,k(1−pj,k), wherepj,k=P(Cj−1,2k)/P(Cj,k) andSj,kis aBin(m,pj,k) random variable, and G(j,k),m(t) = sup{x:F(j,k),m(x)ft}. We define Uj,k,/tildewideUj,k’s via the following iterative scheme: 1.Initialization: TakeUM+N,0=n. 2.Iteration: Suppose we’ve defined Ul,kforj < lfM+N,0fk <2M+N−l, then solve for Uj,k’s s.t. /tildewideUj,k=/radicalBig Uj,kpj,k(1−pj,k)G(j,k),Uj,k◦Φ(/tildewideÀj,k), /tildewideUj,k= (1−pj,k)Uj−1,2k−pj,kUj−1,2k+1=Uj−1,2k−pj,kUj,k, Uj−1,2k+Uj−1,2k+1=Uj,k,0fk <2M+N−j. Continue till we have defined U0,kfor 0fk <2M+N. Then,{Uj,k: 0fjfK,0fk <2M+N−j}have the same joint distribution as {/summationtextn i=1ej,k(xi,yi) : 0fjf K,0fk <2M+N−j}. By Vorob’ev–Berkes–Philipp theorem ( Dudley,2014, Theorem 1.31), {/tildewideÀj,k: 0fk < 2M+N−j,1fjfM+N}can be constructed on a possibly enlarged probability space such that th e previously constructed Uj,ksatisfies Uj,k=/summationtextn i=1ej,k(xi) almost surely for all 0 fjfM+N,0fk <2M+N−j. We will show/tildewideÀj,k’s can be given as a Brownian bridge indexed by /tildewideej,k’s. Since all of G,H,RandSare VC-type, we can show G×H+R×Sis also VC-type, here + is the Minkowski sum. Hence F=G×H+R×S∪Π1[G×H+R×S] is pre-Gaussian. Then, by Skorohod Embedding lemma ( Dudley,2014, Lemma 3.35), on a possibly enlarged probability space, we can construct a Brownian bridge ( Zn(f) :f∈F) that satisfies /tildewideÀj,k=P(Cj,k)/radicalbig P(Cj−1,2k)P(Cj−1,2k+1)Zn(/tildewideej,k), for 0fk <2M+N−j,1fjfM+N. Moreover, call Vj,k=√nZn(ej,k),/tildewideVj,k=√nZn(/tildewideej,k),/tildewideÀj,k=P(Cj,k)/radicalbig nP(Cj−1,2k)P(Cj−1,2k+1)/tildewideVj,k. for 0fk <2K−j,1fjfK. We have for g∈G,h∈H,r∈R,s∈S, √nΠ1An(g,h,r,s) =M+N/summationdisplay j=1/summationdisplay 0fk<2M+N−j(/tildewideµj,k[g,r]+/tildewideµj,k[h,s])/tildewideUj,k, √nΠ1Zn(g,h,r,s) =M+N/summationdisplay j=1/summationdisplay 0fk<2M+N−j(/tildewideµj,k[g,r]+/tildewideµj,k[h,s])/tildewideVj,k. 65 Decomposition Fix one ( g,h,r,s)∈G×H×R×S, we decompose by An(g,h,r,s)−Zn(g,h,r,s) =Π1An(g,h,r,s)−Π1Zn(g,h,r,s)/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright strong approximation (SA) error for projected+An(g,h,r,s)−Π1An(g,h,r,s)+Π1Zn(g,h,r,s)−Zn(g,h,r,s)/bracehtipupleft /bracehtipdownright/bracehtipdownleft /bracehtipupright projection error. SA error for Projected Process The strong approximation error essentially depends on the Hilbertian pseudo norm M+N/summationdisplay j=1/summationdisplay 0fk<2M+N−j(/tildewideµj,k[g,r]+/tildewideµj,k[h,s])2f2M+N/summationdisplay j=1/summationdisplay 0fk<2M+N−j(/tildewideµj,k[g,r])2+2M+N/summationdisplay j=1/summationdisplay 0fk<2M+N−j(/tildewideµj,k[h,s])2. Hence,Cattaneo and Yu (2025, Lemma SA.19) gives with probability at least 1 −2e−t, |Π1An(g,h,r,s)−Π1Zn(g,h,r,s)| fC1C³/radicalbigg N2³+12MEM nt+C1C³/radicalbigg (∥Π1[g,r]∥∞+∥Π1[h,s]∥∞)2(M+N) nt, whereC1>0 is a universal constant and C³= 1+(2³)³/2. Projection Error For the projection error, we use the simple observation that |An(g,h,r,s)−Π1An(g,h,r,s)| f |Mn(g,r)−Π1Mn(g,r)|+|Mn(h,s)−Π1Mn(h,s)|, andCattaneo and Yu (2025, Lemma SA.23) to get for all t > N, P/bracketleftBig |An(g,h,r,s)−Π1An(g,h,r,s)|> C2/radicalbig C2³/radicalbig N2V+2−NM2t³+1 2+C2C³M√nt³+1/bracketrightBig f4ne−t, P/bracketleftBig |Zn(g,h,r,s)−Π1Zn(g,h,r,s)|> C2/radicalbig C2³/radicalbig N2V+C2C³2−NM2t1 2+C2C³M√nt/bracketrightBig f4ne−t, whereC³= 1+(2³)α 2andC2³= 1+(4³)³andC2is a constant that only depends on the distribution of (x1,y1), with V= min{2M,√ dL2−M/d}2−M/dTVH. Uniform SA Error: Since all of G,H,RandSare VC-type class, from a union bound argument and the same control over fluctuation error as in Cattaneo and Yu (2025, Lemma SA.18), denoting F=G×H×R×S, we get for all t >0 and 0< ¶ <1, P/bracketleftbig ∥An−An◦ÃFδ∥F+∥Zn−Zn◦ÃFδ∥F> C1C³Fn(t,¶)/bracketrightbig fexp(−t), whereC³= 1+(2³)α 2and Fn(t,¶) =J(¶)M+(logn)³/2MJ2(¶) ¶2√n+M√nt+(logn)³M√nt³. 66 where J(¶) = 3¶/parenleftbig/radicalbigg dGlog(2cG ¶)+/radicalbigg dHlog(2cH ¶)+/radicalbigg dRlog(2cR ¶)+/radicalbigg dSlog(2cS ¶)/parenrightbig ≲/radicalbig dlog(c/¶), recallingc=cG,QG∪H+cH,QG∪H+cR,Y+cS,Y+k,d=dG,QG∪HdH,QG∪HdR,YdS,Yk. Choosing the optimal M∗,N∗givesP/bracketleftbig ∥An−ZA n∥F> C1vTn(t)/bracketrightbig fC2e−tfor allt >0, where Tn(t) = min ¶∈(0,1){An(t,¶)+Fn(t,¶)}, with An(t,¶) =√ dmin/braceleftBig/parenleftBigcd 1ETVdMd+1 n/parenrightBig1 2(d+1),/parenleftBigcd 1cd 2E2M2TVdLd n2/parenrightBig1 2(d+2)/bracerightBig (t+log(nN(¶)N∗))³+1 +/radicalbigg M2(M∗+N∗) n(logn)³(t+log(nN(¶)N∗))³+1, Fn(t,¶) =J(¶)M+(logn)³/2MJ2(¶) ¶2√n+M√n√ t+(logn)³M√nt³, where VR={¹(·,r) :r∈R}, N(¶) =NG,QG∪H(¶/2,MG,QG∪H)NH,QG∪H(¶/2,MH,QG∪H)NR,Y(¶/2,MR)NS,Y(¶/2,MS,Y), J(¶) = 2JQG∪H(G,MG,QG∪H,¶/2)+2JQG∪H(H,MH,QG∪H,¶/2)+2JY(R,MR,Y,¶/2)+2JY(S,MS,Y,¶/2), M∗=/floorleftBig log2min/braceleftBig/parenleftBigc1nTV E/parenrightBigd d+1,/parenleftBigc1c2nLTV EM/parenrightBigd d+2/bracerightBig/floorrightBig , N∗=/ceilingleftBig log2max/braceleftBig/parenleftBignMd+1 cd 1ETVd/parenrightBig1 d+1,/parenleftBign2M2d+2 cd 1cd 2TVdLdE2/parenrightBig1 d+2/bracerightBig/ceilingrightBig . Truncation Argument for yi’s with Finite Moments The above
|
https://arxiv.org/abs/2505.05670v1
|
result is derived under the assumption thatsupx∈XE[exp(|yi|)|xi=x]<∞. For the result under the condition supx∈XE[|yi|2+v|xi=x]<∞, we can use the same truncation argument as in Section SA-8.1(proof of Lemma SA-4.1) and the VC-type conditions for G,H,R,Sto get the stated conclusions. ■ 67 References Cattaneo, M. D., Chandak, R., Jansson, M., and Ma, X. (2024), “Boundary Adaptive Loc al Polynomial Conditional Density Estimators,” Bernoulli , 30, 3193–3223. Cattaneo, M. D., and Yu, R. R. (2025), “Strong Approximations for Empirical Pr ocesses Indexed by Lipschitz Functions,” Annals of Statistics . Chernozhukov, V., Chetverikov, D., and Kato, K. (2014a), “Anti-Concentrat ion and Honest, Adaptive Confidence Bands,” Annals of Statistics , 42, 1787–1818. (2014b), “Gaussian Approximation of Suprema of Empirical Processes,” Annals of Statistics , 42, 1564–1597. Chernozhuokov, V., Chetverikov, D., Kato, K., and Koike, Y. (2022), “Impro ved central limit theorem and bootstrap approximations in high dimensions,” Annals of Statistics , 50, 2562–2586. Dudley, R. M. (2014), Uniform central limit theorems , Vol. 142, Cambridge university press. Federer, H. (2014), Geometric measure theory , Springer. Folland, G. (2002), Advanced Calculus , Featured Titles for Advanced Calculus Series, Prentice Hall. Gin´ e, E., and Nickl, R. (2016), Mathematical Foundations of Infinite-dimensional Statistica l Models, New York: Cambridge University Press. Simon, L. et al. (1984), Lectures on geometric measure theory , Centre for Mathematical Analysis, Australian National University Canberra. Tsybakov, A. (2008), Introduction to Nonparametric Estimation , Springer. van der Vaart, A. W., and Wellner, J. A. (1996), Weak Convergence and Empirical Processes , Springer. 68
|
https://arxiv.org/abs/2505.05670v1
|
arXiv:2505.06088v1 [math.PR] 9 May 2025Approximations for the number of maxima and near-maxima in independent data Fraser Daly∗ May 12, 2025 Abstract In the setting where we have nindependent observations of a random variable X, we derive explicit error bounds in total variation distance when approximating the number of obser- vations equal to the maximum of the sample (in the case where Xis discrete) or the number of observations within a given distance of an order statistic of the sample (in the case where Xis absolutely continuous). The logarithmic and Poisson distributions are used as approximations in the discrete case, with proofs which include the development of Stein’s method for a logarithmic target distribution. In the absolutely continuous case our approximations are by the negative bi- nomial distribution, and are established by considering negative binomial approximation for mixed binomials. The cases where Xis geometric, Gumbel and uniform are used as illustrative examples. Key words and phrases: logarithmic distribution; negative binomial distribution; Poisson dis- tribution; order statistics; Stein’s method; size-biasing MSC 2020 subject classification: 62E17; 60E05; 60E15 1 Introduction and main results LetX, X 1, X2, . . .be independent and identically distributed (i .i.d.) random variables, and denote byMn= max {X1, . . . , X n}the maximum of nof these random variables. In the discrete setting where the Xitake positive integer values, our aim is to establish explicit error bounds in approxi- mations for the number of these Xithat are equal to their maximum. We denote this quantity by Kn. That is, Kn=|{i∈ {1, . . . , n }:Xi=Mn}|. (1) In this we are motivated by the work of Brands et al.[3], who show that, while Kndoes not in general converge to a distributional limit as n→ ∞ , there are nevertheless good approximations to the distribution of Kn. Motivated by a coin-tossing problem of R¨ ade [12], Brands et al.pay particular attention to the case where Xis geometrically distributed, and observe that in this case if the parameter pis constant (i.e., independent of n) the distribution of Knis close to logarithmic, ∗Department of Actuarial Mathematics and Statistics, and the Maxwell Institute for Mathematical Sciences, Heriot–Watt University, Edinburgh EH14 4AS, UK. E-mail: F.Daly@hw.ac.uk 1 while for other choices of pit may be close to Poisson. Our first principal aim here is to complement these observations with explicit error bounds in total variation distance for the approximation of the distribution of Knby either a logarithmic or Poisson distribution. The number of maxima in the case where Xis geometrically distributed has also been paid particular attention by Kirschenhofer and Prodinger [8] and Olofsson [9], and we refer the interested reader to the work of Eisenberg [7] and Bruss and Gr¨ ubel [5] and references therein for a discussion of related results in a more general discrete setting. We will treat the general case here, but use the geometric distribution as a motivating and illustrative example. In the setting where Xis absolutely continuous, the situation is somewhat different. Here we consider instead the number of observations within some threshold afrom the maximum. Pakes and
|
https://arxiv.org/abs/2505.06088v1
|
Steutel [11] establish a limiting (shifted) mixed Poisson distribution for this quantity as n→ ∞ , and note that this mixed Poisson simplifies to a geometric distribution in the case where Xis in the maximum domain of attraction of a Gumbel distribution. This geometric limit for observations close to the maximum is generalised to a negative binomial limit for observations close to one of the order statistics of our data by Pakes and Li [10]. Our second principal aim in this note is to provide explicit error bounds (again in total variation distance) in the negative binomial approximation of the number of data points close to the order statistics of our sample in this absolutely continuous setting. We will formulate this problem more precisely later in this section, after first stating and illustrating our main results in the discrete case. All of our approximations and error bounds will be given in the total variation distance, defined for random variables KandLsupported on Z+={0,1, . . .}by dTV(K, L) = sup E⊆Z+|P(K∈E)−P(L∈E)|=1 2X j∈Z+|P(K=j)−P(L=j)|. We use the remainder of this section to present our main results in the discrete setting (Theorems 1 and 3 below) and absolutely continuous case (in Theorem 5), and to further provide examples to illustrate these results. Proofs of these main results are deferred to Sections 2–4. These proofs make use of Stein’s method, adapted here for use with a logarithmic target distribution for the first time and also used to establish new results in the negative binomial approximation of mixed binomial distributions. We will also make use of well-known results on Poisson approximation. 1.1 The discrete case We begin in the discrete setting, where Xtakes positive integer values, with mass function p(j) = P(X=j) and distribution function F(j) =P(X≤j) for j= 1,2, . . .. We recall that Lemma 2.1 of Brands et al.[3] gives the probability mass function of the random variable Kndefined in (1): P(Kn=k) =n k∞X j=1p(j)kF(j−1)n−k, (2) fork= 1, . . . , n . From this, expressions for the factorial moments of Knfollow easily. In particular, letting ( k)ℓ=k(k−1)···(k−ℓ+ 1) denote the falling factorial, we have that E[(Kn)ℓ] = (n)ℓ∞X j=1p(j)ℓF(j)n−ℓ, (3) 2 forℓ= 1, . . . , n . We will consider the approximation of Knby both a logarithmic distribution and a Poisson distribution, where we recall that L∼L(α) has a logarithmic distribution with parameter α∈(0,1) if P(L=k) =−αk klog(1−α), fork= 1,2, . . ., and Y∼Pois( λ) has a Poisson distribution with parameter λ >0 if P(Y=k) =e−λλk k!, fork= 0,1, . . .. We begin by considering logarithmic approximation, in which case our main result is the following. Theorem 1. LetX1, . . . , X nbe positive, integer-valued, i .i.d.random variables with maximum Mn. LetKnbe defined as in (1), with probability mass function as in (2). (a) Let L∼L(α), where 1−α=P(Kn=1) E[Kn]. Then, dTV(Kn, L)≤ −2nlog(1−α)∞X j=1p(j) F(j)n−1−(1−α)(n−1) αp(j)F(j−1)n−2 . (b) Let L∼L(β), where 1−β=E[Kn] E[K2n]. Then, dTV(Kn, L)≤ −2(1 + β) log(1 −β)E[K2 n] β+ (1−β)E[(Kn)3] E[(Kn)2]−(n−3)E[(Kn)2] (n−1)E[Kn] . The proof of Theorem 1(a) is given in Section
|
https://arxiv.org/abs/2505.06088v1
|
2 below, while part (b) is proved in Section 3. Before moving on to state our Poisson approximation results, we illustrate the upper bounds of Theorem 1 in our motivating example of geometrically distributed data. Example 2. Suppose that P(X=j) = p(1−p)j−1forj= 1,2, . . .and some p∈(0,1), so that X∼Geom( p) has a geometric distribution with parameter p. In this example we can check numerically that the upper bound of part (a) of Theorem 1 is superior to that of part (b), so we focus the former result here. We nevertheless include part (b) in the statement of Theorem 1, since there may be other examples where it is useful and since its proof uses tools very similar to those needed for Theorems 3 and 5 below. In the geometric setting we can easily check that the parameter αof our approximating loga- rithmic distribution is given by α=p: using (2) and (3) we have 1−α=P∞ j=1p(1−p)j−1[1−(1−p)j−1]n−1 P∞ j=1p(1−p)j−1[1−(1−p)j]n−1= (1−p)P∞ j=2p(1−p)j−1[1−(1−p)j−1]n−1 P∞ j=1p(1−p)j[1−(1−p)j]n−1= 1−p . Numerical illustration of the upper bound of Theorem 1(a) in this geometric example is given in Figure 1 for the case n= 20 and various small-to-moderate values of p. We focus on such values of phere since the proof of Theorem 1(a) is geared towards this case: see Remark 10 below. Other values of ngive upper bounds which are very similar to those shown on Figure 1. 3 Figure 1: The upper bound of Theorem 1(a) in the case where Xhas a geometric distribution, evaluated for n= 20 and various values of p Our main Poisson approximation result in the discrete case is the following. Theorem 3. LetX1, . . . , X nbe positive, integer-valued, i .i.d.random variables with maximum Mn. LetKnbe defined as in (1), with probability mass function given by (2). Let Y∼Pois(λ), where λ=E[(Kn)2]/E[Kn]. Then, dTV(Kn, Y)≤p E[(Kn)2]−E[Kn](E[Kn]−1) 2E[Kn]+s E[Kn] 4E[(Kn)2]+(n−1)E[(Kn)3] (n−2)E[(Kn)2]−(n−2)E[(Kn)2] (n−1)E[Kn]. The proof of Theorem 3 is given in Section 4. Before moving on to the absolutely continuous case, we again use the example in which Xis geometric to illustrate our main result here. Example 4. We again let X∼Geom( p) have a geometric distribution, where we take p= 1−µ/n for some suitable µ >0. In this case, Brands et al.[3] (see their Remark 3) show that, as n→ ∞ , Knhas a defective Poisson limit, with mass e−µat infinity. It is therefore reasonable to expect that for (large) fixed µand large nwe will obtain a good Poisson approximation in this case. We illustrate the upper bound we obtain from Theorem 3 in this setting for selected values of µandn in Table 1. 1.2 The absolutely continuous case We turn now to the case where Xis a real-valued random variable with probability density function f(x) and distribution function F(x) =Rx −∞f(y) dyforx∈R. Following Pakes and Li [10], we consider the more general setting of negative binomial approximation for the number of observations 4 n 105106107108109 µ100 0.330 0.131 0.103 0.100 0.100 300 — 0.283 0.094 0.062 0.058 500 — 0.610 0.131 0.058 0.046 700 — — 0.184 0.064 0.041 900 —
|
https://arxiv.org/abs/2505.06088v1
|
— 0.251 0.073 0.039 Table 1: The upper bound of Theorem 3 in the case where X∼Geom(1 −µ/n) for selected values of µandn. The symbol ‘–’ indicates that the upper bound is larger than 1, and therefore uninformative within a certain distance of one of the order statistics of our sample X1, . . . , X n. To that end, we letX1:n<···< X n:ndenote the order statistics of our sample, and for ℓ= 1, . . . , n we define Kn(a, ℓ) =|{i∈ {1, . . . , n }:Xn−ℓ+1:n−a < X i< X n−ℓ+1:n}|, (4) the number of our nobservations that are within a distance aof the ℓth order statistic. Pakes and Li (see Theorem 1 of [10]) use a mixed binomial representation of Kn(a, ℓ) to show that if Xis in the maximum domain of attraction of a Gumbel law, then Kn(a, ℓ) may be well approximated by a negative binomial random variable. We aim to complement this result with explicit error bounds in total variation distance. We will write that Z∼NB(ℓ,1−β) has a negative binomial distribution with parameters ℓ >0 andβ∈(0,1) ifZsatisfies P(Z=k) =Γ(ℓ+k) Γ(ℓ)k!(1−β)ℓβk, fork= 0,1, . . ., where Γ( ·) is the gamma function. Our main result in the absolutely continuous setting is the following error bound in the approx- imation of Kn(a, ℓ)−1 by a negative binomial distribution with the same mean. Theorem 5. LetX1, . . . , X nbe real-valued, i .i.d.random variables as above, and Kn(a, ℓ)be as defined in (4). Let Z∼NB(ℓ,1−β)withβ=E[Kn(a,ℓ)]−1 E[Kn(a,ℓ)]+ℓ−1. Then dTV(Kn(a, ℓ)−1, Z) ≤1−(1−β)ℓ βℓE[Kn(a, ℓ)−1] β+ (1−β) (n−ℓ−1)M2 M1−(n−ℓ−2)M1 , where Mj=nn−1 ℓ−1Z∞ −∞(1−F(x))ℓ−1F(x)n−ℓ 1−F(x−a) F(x)j f(x)dx , forj= 1,2. We conclude this section by considering two applications of Theorem 5. Example 6. LetXhave a Gumbel distribution with density function f(x) =e−x+e−xand distribu- tion function F(x) =e−e−xforx∈R, which has mean equal to 0 .57721 . . ., the Euler–Mascheroni 5 constant, and standard deviation π/√ 6. We consider geometric approximation for the number of ni.i.d.samples from Xwhich are within distance aof their maximum; that is, we take ℓ= 1 in Theorem 5. Note that the maximum of our ndata points has itself a Gumbel distribution, shifted log(n) units to the right. With this choice of Xwe have that M1=nZ∞ −∞(e−e−x)n−1 1−e−e−(x−a) e−e−x! e−x+e−xdx=ea−1 n+ea−1,and M2=nZ∞ −∞(e−e−x)n−1 1−e−e−(x−a) e−e−x!2 e−x+e−xdx=2(ea−1)2 (n+ea−1)(n+ 2ea−2). Using the mixed binomial representation of Kn(a,1)−1 (see Lemma 1 of [10] and Section 3.2 below) we have that E[Kn(a,1)]−1 = ( n−1)M1=(n−1)(ea−1) n+ea−1, so that β=(n−1)(ea−1) nea. With Z∼NB(1 ,1−β), Theorem 5 then gives dTV(Kn(a,1)−1, Z)≤(n−1)(ea−1) n+ea−1(n−1)(ea−1) nea+ 1−(n−1)(ea−1) nea2(n−2)(ea−1) n+ 2ea−2−(n−3)(ea−1) n+ea−1 =(n−1)(ea−1)2 ea(n+ea−1) 1 +n−2 n+ 2ea−2 . (5) We can see that unfortunately our upper bound is not strong enough to converge to zero as n→ ∞ for any fixed a, in which setting we know we have a geometric limit thanks to Theorem 1 of [10]. The coupling used in the proof of Theorem 5 does not seem to be strong enough to establish this. Nevertheless, our upper bound does tend to zero if a=a(n)→0 as n→
|
https://arxiv.org/abs/2505.06088v1
|
∞ , and also gives reasonable numerical bounds on dTV(Kn(a,1)−1, Z), even for moderate values of nanda. This is illustrated in Figure 2, which gives the upper bound (5) in the cases n= 20 and n= 100 for various values of a. Example 7. LetX∼U(0, b) have a uniform distribution on the interval (0 , b), so that f(x) = 1 /b andF(x) =x/bfor 0≤x≤b. The uniform distribution is in the maximum domain of attraction of a Weibull (not a Gumbel) distribution, so although we do not expect a negative binomial limit for Kn(a, ℓ)−1 in the case of fixed bhere, we may nevertheless apply our theorem to give explicit upper bounds in the approximation of Kn(a, ℓ) which may be useful both numerically and in determining conditions on a=a(n),b=b(n) and ℓ=ℓ(n) under which we may see a negative binomial limit asn→ ∞ . With our choice of X, M1=an bn−ℓ+1n−1 ℓ−1Zb 0xn−ℓ−1 1−x bℓ−1 dx=an b(n−ℓ),and M2=a2n bn−ℓ+1n−1 ℓ−1Zb 0xn−ℓ−2 1−x bℓ−1 dx=a2n(n−1) b2(n−ℓ)(n−ℓ−1). 6 Figure 2: The upper bound (5) for the case where Xhas a Gumbel distribution, evaluated for various values of ain the cases n= 20 and n= 100 As in the Gumbel example above, we use the mixed binomial representation of Kn(a, ℓ)−1 to get that E[Kn(a, ℓ)]−1 = ( n−ℓ)M1=an b, so that our approximating negative binomial distribution has β=an an+bℓ. With Z∼NB(ℓ,1−β), our Theorem 5 then gives the upper bound dTV(Kn(a, ℓ)−1, Z)≤a bℓ(1−(1−β)ℓ) n+ℓ n−1−n(n−ℓ−2) n−ℓ =a bℓ(1−(1−β)ℓ) n+ℓ(n+ℓ) n−ℓ . Then, for example, for a fixed ℓwe may obtain a negative binomial limit for Kn(a, ℓ)−1 ifa/b→0 fast enough as n→ ∞ . In the examples above we are able to explicitly evaluate the integrals Mjneeded in the upper bound of Theorem 5. In other examples where analytic expressions for these integrals are not available, they may nevertheless be evaluated numerically. 2 Stein’s method for the logarithmic distribution In this section we will prove Theorem 1(a) by developing the tools required to apply Stein’s method for probability approximation to a logarithmic target distribution; these tools are also used as part 7 of the proof of Theorem 1(b) in Section 3 below, and may be of independent interest. To the best of our knowledge, this is the first use of Stein’s technique for approximation by a logarithmic distribution. As a further illustration of our framework, we also give, in Section 2.1, an error bound in the logarithmic approximation of a negative binomial distribution. We refer the reader to the survey by Ross [13] for an introduction to Stein’s method, though we will make the exposition here self-contained. For a non-negative random variable Ywith positive mean, we let Y⋆denote the size-biased version of Y, defined by E[f(Y⋆)] =E[Y f(Y)] E[Y](6) for all functions f:R+→Rfor which the expectation on the right-hand side exists. In the special case where Ytakes non-negative integer values, Y⋆may equivalently be defined by writing P(Y⋆=k) =kP(Y=k) E[Y] fork= 0,1, . . .. For a logarithmic random variable L∼L(α) with mean E[L] =−α (1−α) log(1 −α)and a bounded function f:Z+→Rwe may write Ef(L⋆−1)
|
https://arxiv.org/abs/2505.06088v1
|
= (1 −α)∞X k=0αkf(k) = (1 −α)f(0)−(1−α) log(1 −α)∞X k=1kf(k)P(L=k) = (1−α)f(0) +α E[L]∞X k=1kf(k)P(L=k) = (1 −α)f(0) + αE[f(L⋆)]. That is, L⋆−1 is equal in distribution to IαL⋆, where Iαhas a Bernoulli distribution with mean α and is independent of L⋆. This leads us to define the following Stein equation for a logarithmic target distribution: for a given function h:Z+→RwithE[h(L)] = 0 we let fh:Z+→Rbe such that fh(0) = 0 and h(k) =kfh(k−1)−αkf h(k) (7) fork= 1,2, . . .. This is motivated by noting that replacing kby our logarithmic random variable Land taking expectations on the right-hand side gives Efh(L⋆−1)−αE[fh(L⋆)]. By replacing kby a random variable Kand taking expectations in our Stein equation (7), we may then write dTV(K, L) = sup h∈H|E[h(K)]|= sup h∈H|E[Kfh(K−1)−αKf h(K)]|, (8) where H={I(· ∈E)−P(L∈E) :E⊆Z+} (9) andIdenotes an indicator function. In order to bound this final expression, we will need to control the behaviour of fhforh∈ H. It is straightforward to check that the solution to (7) is given by fh(k) =−1 αkX j=1h(j)αj−1 j=log(1−α) αk+1kX j=1h(j)P(L=j) =−log(1−α) αk+1∞X j=k+1h(j)P(L=j) =1 α∞X j=k+1h(j)αj−k j=1 α∞X j=1h(j+k)αj j+k, 8 where we use the fact that E[h(L)] = 0 for h∈ H. Hence, since |h(x)| ≤1 for h∈ H, |fh(k)| ≤1 α∞X j=1αj j+k≤1 α∞X j=1αj j=−log(1−α) α. (10) 2.1 Logarithmic approximation for the negative binomial distribution Before proceeding to the proof of Theorem 1(a), we illustrate the framework we have set up with a simple example: a bound on the total variation distance between a negative binomial distribution and a logarithmic distribution. Example 8. LetL∼L(α) for some α∈(0,1), and Z∼NB(ℓ,1−β) for some ℓ > 0 and β∈(0,1). Following (8), in order to bound the total variation distance between ZandLwe bound E[Zfh(Z−1)−αZf h(Z)] for h∈ H. We begin by noting that for all bounded functions g:Z+→R, βE[(ℓ+Z)g(Z+ 1)] = E[Zg(Z)] ; see Lemma 1 of [4]. Hence, E[Zfh(Z−1)−αZf h(Z)] =E[(βℓ−(α−β)Z)fh(Z)] =(1−α)βℓ 1−βE[fh(Z)] + ( α−β)E[(E[Z]−Z)fh(Z)], sinceE[Z] =βℓ 1−β. Hence, using the bound (10) we have that, for h∈ H, |E[Zfh(Z−1)−αZf h(Z)]| ≤ −(1−α)βℓ α(1−β)log(1−α)−(α−β) αlog(1−α)p Var(Z) =−log(1−α)√βℓ α(1−β) (1−α)p βℓ+α−β , since Var( Z) =βℓ (1−β)2. Hence, dTV(Z, L)≤ −log(1−α)√βℓ α(1−β) (1−α)p βℓ+α−β . In particular, with the choice α=βwe obtain dTV(Z, L)≤ − log(1−α)ℓ→0 as ℓ→0, as expected. 2.2 Proof of Theorem 1(a) To establish Theorem 1(a), we combine the mass function and moments given in (2) and (3), respectively, with the following result. Theorem 9. LetKbe a positive, integer-valued random variable with finite mean and L∼L(α), where α=P(K⋆>1). Then, dTV(K, L)≤ −2 log(1 −α) E[K]−2(1−α) αP(K= 2) . 9 Proof. We again use (8) and thus need to bound E[Kfh(K−1)−αKf h(K)] =E[K]E[fh(eK⋆−1)−fh(IαK⋆)], where, as above, Iαis a Bernoulli random variable with mean αindependent of K⋆, andeK⋆is an independent copy of K⋆. Using the bound (10), for h∈ H we may then write |E[fh(eK⋆−1)−fh(IαK⋆)]| ≤ −2 log(1 −α) αP(eK⋆−1̸=IαK⋆) for any coupling of eK⋆andIαK⋆, so that dTV(K, L)≤ −2 log(1 −α) αE[K]P(eK⋆−1̸=IαK⋆). (11) Recalling that α=P(eK⋆>1), we may construct Iαas the indicator of the event that eK⋆>1. With this choice, P(eK⋆−1̸=IαK⋆) =αP(eK⋆−1̸=K⋆|eK⋆>1) and P(eK⋆−1 =K⋆|eK⋆>1)≥P(eK⋆=
|
https://arxiv.org/abs/2505.06088v1
|
2, K⋆= 1|eK⋆>1) =P(eK⋆= 2|eK⋆>1)P(K⋆= 1) =1−α αP(K⋆= 2) =2(1−α) αE[K]P(K= 2), (12) so that P(eK⋆−1̸=IαK⋆)≤α 1−2(1−α) αE[K]P(K= 2) . (13) Our result then follows on combining (11) and (13). Remark 10.When bounding the probability P(eK⋆−1 =K⋆|eK⋆>1) in (12) we could, of course, include further terms of the form P(eK⋆=j+ 1, K⋆=j|eK⋆>1) for j >1 to get a more precise lower bound. However, in our coupling construction we have in mind the setting in which αis small, so that both eK⋆−1 and IαK⋆are equal to zero with high probability. In this setting, further terms of this sort for j >1 are significantly smaller than the j= 1 term we have included in the bound (12), so add complexity to the resulting bound on total variation distance for no significant gain in accuracy. 3 Negative binomial approximation for mixed binomials Our main aim in this section is to prove Theorems 1(b) and 5. To do this, we begin by considering the more general setting of negative binomial approximation for mixed binomial random variables. We let Whave a mixed binomial distribution. In particular, for some n∈ {1,2, . . .},ℓ∈ {1, . . . , n } and for some random variable Qsupported on [0 ,1], we let the conditional distribution W|Q∼ Bin(n−ℓ, Q) have a binomial distribution with parameters n−ℓandQ. We then write that W∼MBin( n−ℓ, Q). We consider the approximation of Wby the negative binomial random variable Z∼NB(ℓ,1−β), where we choose β=E[W] E[W]+ℓso that E[Z] =E[W]. Our main result here is the following. 10 Theorem 11. LetW|Q∼Bin(n−ℓ, Q)andZ∼NB(ℓ,1−β)be as above, with β=E[W] E[W]+ℓ. Then dTV(W, Z)≤1−(1−β)ℓ βℓE[W] β+ (1−β) (n−ℓ−1)E[Q2] E[Q]−(n−ℓ−2)E[Q] . Proof. We use the framework of Stein’s method for negative binomial approximation established by Brown and Phillips [4]; see also [14] for more recent developments. To that end, with h∈ H as defined in (9), we let gh:Z+→Rsatisfy gh(0) = 0 and h(k) =β(ℓ+k)gh(k+ 1)−kgh(k), fork∈Z+, so that we may write dTV(W, Z) = sup h∈H|E[β(ℓ+W)gh(W+ 1)−Wgh(W)]|. (14) From Lemma 5 of [4] we have that sup h∈Hsup k∈Z+|∆gh(k)| ≤1−(1−β)ℓ βℓ, (15) where ∆ gh(k) =gh(k+ 1)−gh(k). Now, with our choice of βwe have E[W] =βℓ 1−βand βE[(ℓ+W)gh(W+ 1)] E[W]=(1−β)(ℓ−1) ℓE[gh(W+ 1)] + 1−(1−β)(ℓ−1) ℓE[(W+ 1)gh(W+ 1)] E[W] + 1 =(1−β)(ℓ−1) ℓE[gh(W+ 1)] + 1−(1−β)(ℓ−1) ℓ E[gh((W+ 1)⋆)], where the final equality uses the definition (6) of size-biasing. Again using this same definition, we may therefore write E[β(ℓ+W)gh(W+ 1)−Wgh(W)] =E[W]E[gh(W′)−gh(W⋆)], (16) where W′is equal in distribution to Iγ(W+ 1)⋆+ (1−Iγ)(W+ 1) and Iγis a Bernoulli random variable with mean γ= 1−(1−β)(ℓ−1) ℓindependent of all other random variables with which it appears. It is well-known that we may size-bias a sum of independent random variables by selecting one of these random variables with probabilities proportional to their means, and replacing the chosen random variable with its size-biased version; see Section 2.4 of [1]. Combining this with Lemma 2.4 of [1] on size-biasing a mixture, we have that W⋆= 1 + A, where A∼MBin( n−ℓ−1, Q⋆). Similarly, and noting thatE[W] E[W]+1=β γ, we may write (W+ 1)⋆=Iβ/γ(W⋆+ 1) + (1 −Iβ/γ)(W+ 1)
|
https://arxiv.org/abs/2505.06088v1
|
= 1 + Iβ/γW⋆+ (1−Iβ/γ)W , 11 where Iβ/γis a Bernoulli random variable with mean β/γwhich is independent of all else. It then follows that we may write W′= 1 + 1−IγIβ/γ W+IγIβ/γW⋆= 1 + IγIβ/γ+B+C , where B∼MBin( n−ℓ−1, Q+), Q+= 1−IγIβ/γ Q+IγIβ/γQ⋆, andC∼MBin(1 ,(1−IγIβ/γ)Q), and where, conditional on Q,Q⋆and the Bernoulli random variables IγandIβ/γ,BandCare independent. Since Q⋆is stochastically larger than Q, the closure of the usual stochastic order under mixtures (see Theorem 1.A.3(d) of [15]) implies that Q⋆is stochastically larger than Q+. Since the binomial distribution Bin( n, q) is stochastically increasing in q(see Example 1.A.25 of [15]), and again using closure of stochastic ordering under mixtures, we then have that Ais stochastically larger than B. Hence, for h∈ H, the bound (15) gives |E[gh(W′)−gh(W⋆)]| ≤1−(1−β)ℓ βℓE|W′−W⋆| ≤1−(1−β)ℓ βℓ(β+E[A−B+C]) =1−(1−β)ℓ βℓ β+ (n−ℓ−1)E[Q⋆−Q+] + (1−β)E[Q] =1−(1−β)ℓ βℓ(β+ (1−β){(n−ℓ−1)E[Q⋆−Q] +E[Q]}) =1−(1−β)ℓ βℓ(β+ (1−β){(n−ℓ−1)E[Q⋆]−(n−ℓ−2)E[Q]}).(17) From the definition (6) we have that E[Q⋆] =E[Q2] E[Q]. The desired result then follows by combining this with the representation (14) and (16)–(17). 3.1 Proof of Theorem 1(b) We now apply Theorem 11 to establish the upper bound in Theorem 1(b). To that end, we will first transform the problem of logarithmic approximation for a positive, integer-valued random variable Kto the problem of geometric approximation for the size-biased version K⋆, as defined in (6). Lemma 12. LetKbe a positive, integer-valued random variable with finite mean, L∼L(β)for some β∈(0,1), and G∼Geom (1−β)have a geometric distribution with parameter 1−β. Then dTV(K, L)≤ −2(1 + β) log(1 −β) βE[K]dTV(K⋆, G). Proof. We use the framework for Stein’s method for logarithmic approximation developed in Section 2. Arguing as we did for (11), we have that for any coupling of K⋆−1 and IβK⋆, where Iβis a Bernoulli random variable with mean βindependent of the K⋆it multiplies here, dTV(K, L)≤ −2 log(1 −β) βE[K]P(K⋆−1̸=IβK⋆). 12 Recalling that we may equivalently define the total variation distance as dTV(K⋆−1, IβK⋆) = infP(K⋆−1̸=IβK⋆), where the infimum is taken over all couplings of K⋆−1 and IβK⋆, we may thus write dTV(K, L)≤ −2 log(1 −β) βE[K]dTV(K⋆−1, IβK⋆). (18) From the definition of the geometric distribution we can easily check that G−1 is equal in distri- bution to IβG. We may thus use the triangle inequality for total variation distance to write dTV(K⋆−1, IβK⋆)≤dTV(K⋆−1, G−1) +dTV(IβK⋆, IβG) =dTV(K⋆−1, G−1) +βdTV(K⋆, G) = (1 + β)dTV(K⋆, G). Combining this with (18) yields the desired result. To establish Theorem 1(b) we now take K=Knin the setting of Lemma 12. With notation as in that latter result, we note that G−1∼NB(1 ,1−β) and so we may apply Theorem 11 with ℓ= 1 to bound dTV(K⋆ n, G) once we have shown that K⋆ n−1 is a mixed binomial random variable. To that end, we give the following explicit construction of K⋆ n−1: •Sample the random variable Maccording to the distribution P(M=m) =P(X=m)P(X≤m)n−1 P∞ j=1P(X=j)P(X≤j)n−1, form= 1,2, . . .. •SetXn=M, and sample X′ 1, . . . , X′ n−1independently, each according to the distribution P(X′ 1=k|M=m) =P(X=k) P(X≤m), fork= 1, . . . , m . •SetK⋆ n−1 =|{i∈ {1,
|
https://arxiv.org/abs/2505.06088v1
|
. . . , n −1}:X′ i=M}|. Using an argument analogous to that of Lemma 2.1 of Brands et al.[3], we then have that P(K⋆ n=k|M=m) =n−1 k−1P(X=m)k−1P(X≤m−1)n−k P(X≤m)n−1, so that K⋆ n−1|M∼Bin(n−1, q(M)), (19) where q(m) =P(X=m) P(X≤m). We will return to this mixed binomial representation of K⋆ n−1 in the proof of Theorem 3 in Section 4 below. Removing the conditioning we further have that P(K⋆ n=k) =∞X m=1P(M=m)P(K⋆ n=k|M=m) =n−1 k−1P∞ m=1P(X=m)kP(X≤m−1)n−k P∞ j=1P(X=j)P(X≤j)n−1=kP(Kn=k) E[Kn], 13 using the mass function and first moment given in (2) and (3), respectively, for the final inequality, confirming that K⋆ nis indeed a size-biased version of Kn. With K⋆ n−1|M∼Bin(n−1, q(M)), and noting that dTV(K⋆ n, G) =dTV(K⋆ n−1, G−1), Theorem 11 then gives us that dTV(K⋆ n, G)≤E[K⋆ n−1] β+ (1−β) (n−2)E[q(M)2] E[q(M)]−(n−3)E[q(M)] , where we note that the choice 1 −β=1 E[K⋆n]=E[Kn] E[K2n]in Theorem 11 matches that in Theorem 1(b). Combining this with Lemma 12 we then have that dTV(K, L) ≤ −2(1 + β) log(1 −β) βE[Kn]E[K⋆ n−1] β+ (1−β) (n−2)E[q(M)2] E[q(M)]−(n−3)E[q(M)] . The proof of Theorem 1(b) is completed upon using the choice of βto note that 1 βE[Kn]E[K∗ n−1] =E[Kn]E[K⋆ n] =E[K2 n], and using (3) to note that E[q(M)] =P∞ m=1P(X=m)2P(X≤m)n−2 P∞ j=1P(X=j)P(X≤j)n−1=E[(Kn)2] (n−1)E[Kn],and E[q(M)2] =P∞ m=1P(X=m)3P(X≤m)n−3 P∞ j=1P(X=j)P(X≤j)n−1=E[(Kn)3] (n−1)(n−2)E[Kn]. (20) 3.2 Proof of Theorem 5 Finally in this section, we use Theorem 11 to establish Theorem 5. By Lemma 1 of Pakes and Li [10], we have that Kn(a, ℓ)−1∼MBin( n−ℓ, ra(Xn−ℓ+1:n)), where ra(x) = 1−F(x−a) F(x)forx∈R. We note that Xn−ℓ+1:nhas distribution function P(Xn−ℓ+1:n≤x) =nn−1 ℓ−1Zx −∞(1−F(y))ℓ−1F(y)n−ℓf(y) dy and density function fℓ(x) =nn−1 ℓ−1 (1−F(x))ℓ−1F(x)n−ℓf(x), forx∈R, from which it follows that E[ra(Xn−ℓ+1:n)j] =nn−1 ℓ−1Z∞ −∞(1−F(x))ℓ−1F(x)n−ℓ 1−F(x−a) F(x)j f(x) dx=Mj, forj= 1,2. Noting also that the choice of the parameter βin Theorem 11 matches that in Theorem 5, Theorem 11 thus gives us that dTV(Kn(a, ℓ)−1, Z) ≤1−(1−β)ℓ βℓE[Kn(a, ℓ)−1] β+ (1−β) (n−ℓ−1)M2 M1−(n−ℓ−2)M1 , as required. 14 4 Poisson approximation for Knin the discrete case In this section we establish the Poisson approximation upper bound stated in Theorem 3. Letting Y∼Pois( λ) as in the statement of the theorem, we use the triangle inequality to write dTV(Kn, Y)≤dTV(Kn, K⋆ n) +dTV(Y, Y+ 1) + dTV(K⋆ n−1, Y). (21) Using equation (5) of [6], for the first term on the right-hand side of (21) we have dTV(Kn, K⋆ n)≤p Var(Kn) 2E[Kn]=p E[(Kn)2]−E[Kn](E[Kn]−1) 2E[Kn]. (22) By Proposition 3 of [6] we have dTV(Y, Y+ 1)≤1 2√ λ=s E[Kn] 4E[(Kn)2]. (23) It remains only to bound the final term on the right-hand side of (21). To that end we note the mixed binomial representation (19) of K⋆ n−1 used earlier in the proof of Theorem 1. Recalling that the total variation distance between a binomial Bin( n, q) distribution and a Poisson distribution of the same mean may be bounded by q(see, for example, equation (1.23) of [2]), a conditioning argument using this mixed binomial representation of K⋆ n−1 gives us that dTV(K⋆ n−1, Y†)≤E[q(M)], where Y†has the mixed Poisson distribution Y†|M∼Pois(( n−1)q(M)). Noting that our choice ofλis such
|
https://arxiv.org/abs/2505.06088v1
|
that E[Y†] =E[Y] =λ, from Theorem 1.C(ii) of [2] we have that dTV(Y†, Y)≤Var(( n−1)q(M)) λ=(n−1)2Var(q(M)) λ. Hence, dTV(K⋆ n−1, Y)≤E[q(M)] +(n−1)2Var(q(M)) λ=(n−1)E[(Kn)3] (n−2)E[(Kn)2]−(n−2)E[(Kn)2] (n−1)E[Kn],(24) where the final equality follows from the expressions (20) for the first two moments of q(M). The conclusion of Theorem 3 then follows by combining (21)–(24). References [1] R. Arratia, L. Goldstein and F. Kochman (2019). Size bias for one and all. Probab. Surv. 16: 1–61. [2] A. D. Barbour, L. Holst and S. Janson (1992). Poisson Approximation . Oxford University Press, Oxford. [3] J. J. A. M. Brands, F. W. Steutel and R. J. G. Wilms (1994). On the number of maxima in a discrete sample. Statist. Probab. Lett. 20(3): 209–217. 15 [4] T. C. Brown and M. J. Phillips (1999). Negative binomial approximation with Stein’s method. Methodol. Comput. Appl. Probab. 1(4): 407–421. [5] F. T. Bruss and R. Gr¨ ubel (2003). On the multiplicity of the maximum in a discrete random sample. Ann. Appl. Probab. 13(4): 1252–1263. [6] F. Daly (2011). On Stein’s method, smoothing estimates in total variation distance and mixture distributions. J. Statist. Plann. Inference 141(7): 2228–2237. [7] B. Eisenberg (2009). The number of players tied for the record. Statist. Probab. Lett. 79(3): 283–288. [8] P. Kirschenhofer and H. Prodinger (1996). The number of winners in a discrete geometrically distributed sample. Ann. Appl. Probab. 6(2): 687–694. [9] P. Olofsson (1999). A Poisson approximation with applications to the number of maxima in a discrete sample. Statist. Probab. Lett. 44(1): 23–27. [10] A. G. Pakes and Y. Li (1998). Limit laws for the number of near maxima via the Poisson approximation. Statist. Probab. Lett. 40(4): 395–401. [11] A. G. Pakes and F. W. Steutel (1997). On the number of records near the maximum. Austral. J. Statist. 32(2): 179–192. [12] L. R¨ ade (1991). Problem E3436. Amer. Math. Monthly 98(4): 366. [13] N. Ross (2011). Fundamentals of Stein’s method. Probab. Surv. 8: 210–293. [14] N. Ross (2013). Power laws in preferential attachment graphs and Stein’s method for the negative binomial distribution. Adv. in Appl. Probab. 45(3): 876–893. [15] M. Shaked and J. G. Shanthikumar (2007). Stochastic Orders . Springer, New York. 16
|
https://arxiv.org/abs/2505.06088v1
|
arXiv:2505.06190v1 [econ.EM] 9 May 2025Beyond the Mean: Limit Theory and Tests for Infinite-Mean Autoregressive Conditional Durations Giuseppe Cavalierea, Thomas Mikoschb,Anders Rahbekc and Frederik Vilandtc May 9, 2025 Abstract Integrated autoregressive conditional duration (ACD) models serve as natural counterparts to the well-known integrated GARCH models used for financial returns. However, despite their resemblance, asymptotic theory for ACD is challenging and also not complete, in particular for integrated ACD. Central challenges arise from the facts that (i) integrated ACD processes imply durations with infinite expectation, and (ii) even in the non-integrated case, conventional asymptotic approaches break down due to the randomness in the number of durations within a fixed observation period. Addressing these challenges, we provide here unified asymptotic theory for the (quasi-) maximum likelihood estimator for ACD models; a unified theory which includes integrated ACD models. Based on the new results, we also provide a novel framework for hypothesis testing in duration models, enabling inference on a key empirical question: whether durations possess a finite or infinite expectation. We apply our results to high-frequency cryptocurrency ETF trading data. Motivated by parame- ter estimates near the integrated ACD boundary, we assess whether durations between trades in these markets have finite expectation, an assumption often made implicitly in the literature on point process models. Our empirical findings indicate infinite-mean durations for all the five cryptocurrencies exam- ined, with the integrated ACD hypothesis rejected – against alternatives with tail index less than one – for four out of the five cryptocurrencies considered. Keywords : autoregressive conditional duration (ACD); integrated ACD; testing infinite mean; quasi maximum likelihood; mixed normal; tail index. aDepartment of Economics, University of Bologna, Italy and Department of Economics, University of Exeter, UK. bDepartment of Mathematical Sciences, University of Copenhagen, Denmark. cDepartment of Economics, University of Copenhagen, Denmark. A. Rahbek and G. Cavaliere gratefully acknowledge support from the Independent Research Fund Denmark (DFF Grant 7015-00028) and the Italian Ministry of University and Research (PRIN 2020 Grant 2020B2AKFW). The paper was presented at the Zaragoza time series workshop, April 2025, and we thank participants there for comments. We also thank Stefan Voigt for providing the code used to obtain the duration data analyzed in our empirical application; see also https://www.tidy-finance.org/ for more information. Correspondence to: Anders Rahbek, Department of Economics, University of Copenhagen, email anders.rahbek@econ.ku.dk. 1 1 Introduction The recent work by Cavaliere, Mikosch, Rahbek, and Vilandt (2024, 2025) introduce a novel non-standard asymptotic theory for (quasi-) maximum likelihood estimators ((Q)MLE) for sta- tionary and ergodic autoregressive conditional duration (ACD) models. Prior to these contri- butions, estimation and inference in ACD models were assumed to follow from standard asymp- totic theory; see, e.g., Engle and Russell (1998), Bhogal and Variyam Thekke (2019), Fernan- des, Medeiros and Veiga (2016), Hautsch (2011) and Saulo, Pal, Souza, Vila and Dasilva (2025). A key challenge for the estimation theory in ACD models is that the number of durations, and hence observations n(t), for a given time span [0 , t],t >0, is random. The randomness of the number of observations n(t) implies that classical limit results, including standard laws
|
https://arxiv.org/abs/2505.06190v1
|
of large numbers and central limit theorems, cannot be applied directly, or even applied at all, as they rely on the assumption of a deterministically increasing number of observations. A main implication of the randomness of n(t) is that a crucial role is played by the tail index κ∈(0,∞) of the marginal distribution of the stationary and ergodic durations xi, defined by the condition P(xi> z)∼cκz−κasz→ ∞ for some cκ>0. To deal with this, new non-standard theory was developed in Cavaliere et al. (2024, 2025). In short, the tail index κof the durations determines both (i)the rate of convergence of the likelihood-based estimators, as well as (ii)the limiting distribution of these. More precisely, Cavaliere, Mikosch, Rahbek, and Vilandt (2025), henceforth CMRV, demonstrate that for 0 < κ < 1, durations have infinite mean and the limiting distribution of the estimators is mixed normal with convergence rate√ tκ. On the other hand, when κ >1, the durations have finite expectation, and asymptotic normality holds for the estimators at the standard√ t-rate. Notably, the empirically relevant case of κ= 1 is excluded in CMRV; as stated in their Remark 5, for κ= 1 ‘ the limiting behavior [of the estimators] is unknown ’. In this paper we extend the theory to include the case of κ= 1 as well, and enabling us to provide a unifying framework for estimation and asymptotic inference in ACD models. In terms of the classical ACD process for durations, or waiting times, xi>0, formally introduced in Section 2 below, the points above can be summarized as follows. Consider xi= ψiεi,εii.i.d. with E[εi] = 1, and conditional duration ψigiven by ψi=ω+αxi−1+βψi−1, i= 1,2, . . . , n (t). (1.1) In terms of the parameters α, β > 0 the results in CMRV include α+β >1 (i.e. κ <1) and α+ β <1 (i.e. κ >1), while the empirically relevant case of α+β= 1, and hence κ= 1, is excluded. Note that for the well-known classical generalized autoregressive heteroskedastic model (GARCH), the conditional variance has a functional form in terms of the parameters ω, αand β, identical to that of the conditional duration, or (inverse) intensity, ψiof the ACD process in (1.1). Hence, by analogy with the typically witnessed ‘integrated GARCH’ case of α+β= 1 when modelling financial returns, we label this case integrated ACD (IACD). While the con- ditional duration in an ACD model resembles that of the conditional variance in the GARCH model, and hence their likelihood functions share key structures, the asymptotic theory for the likelihood estimators are very different. Thus, in contrast to the ACD, GARCH likelihood es- timators are asymptotically Gaussian at a standard rate of convergence, regardless of whether 2 α+β̸= 1 or α+β= 1. We complete here the estimation theory for ACD models by showing that for κ= 1, the rate of convergence for the quasi maximum likelihood estimator (QMLE) is the non-standardp t/logt-rate, and the limiting distribution Gaussian. The novel result and theory, in combi- nation with the results in CMRV allow us to develop formal testing of the hypothesis of IACD
|
https://arxiv.org/abs/2505.06190v1
|
in combination with testing infinite mean against finite mean. The latter may be stated as test- ingα+β≥1, against the alternative α+β <1 (κ≤1 against κ >1). Notice that this idea is reminiscent of testing for finite variance in strictly stationary GARCH processes as in Francq and Zako¨ ıan (2022), in double-autoregressive models as in Ling (2004) and in non-causal sta- tionary autoregressions as in Gourierox and Zako¨ ıan (2017). As mentioned, this is of key inter- est in applications as most often the sum of the estimators of αandβis close to one, similar to estimation in GARCH models. The results are illustrated by likelihood analyses on recent high-frequency trade data for exchange traded funds (ETFs) on cryptocurrencies. We find that the ACD model provides estimators of αandβsumming approximately to one. However, while α+β≃1, based on implementation of the testing results, we do not reject α+β≥1, implying a tail index κ≤1, in line with what might be expected for cryptocurrencies due to their highly irregular trading patterns. The paper is structured as follows. In Section 2 we introduce the ACD model and derive its key feature in the IACD case. In Section 3 we present the asymptotic theory for the estimators and the associated test statistics. In Section 4 we analyze the finite sample properties of estimators and tests by Monte Carlo simulation. The empirical analysis of cryptocurrencies is presented in Section 5. Section 6 concludes. All proofs are provided in the Appendix. 2 The ACD Model Engle and Russell (1998) proposed the ACD model in order to analyze n(t) observations of waiting times, or durations, {xi}n(t) i=1, within a time span [0 , t],t >0, which can be a day, a year or some other pre-specified observation period. The ACD has a multiplicative form and is given by xi=ψi(θ)εi,i= 1, ..., n (t), (2.1) where εiis an i.i.d. sequence of strictly positive random variables with E[εi] = 1 and V[εi] = σ2 ε<∞, and with density bounded away from zero on compact subsets of R+. Moreover, the conditional duration is given by (1.1), or ψi(θ) =ω+αxi−1+βψi−1(θ), (2.2) in terms of the parameter vector θ= (ω, α, β )′∈R3 +. With Θ a compact subset of R3 +, the QMLE ˆθtis defined by ˆθt= arg max θ∈ΘLn(t)(θ), (2.3) where Ln(t)(θ) is the exponential log-likelihood function, Ln(t)(θ) =n(t)X i=1ℓi(θ),ℓi(θ) =−(logψi(θ) +xi/ψi(θ)) , (2.4) 3 with initial values x0andψ0(θ). Note that, as reflected by the definition of the log-likelihood in (2.4), properties of both the random waiting times xiand, importantly, the corresponding random number of observations n(t), are key to the analysis of the QMLE, and these are therefore considered next. 2.1 Properties of waiting times and number of observations With θ0denoting the true parameter, it follows by CMRV that {xi}in (2.1) is strictly stationary and ergodic provided E[log (α0εi+β0)]<0 holds. Moreover, it holds that xihas tail index κ0∈(0,∞) given by the unique and positive solution to E[(α0εi+β0)κ] = 1. (2.5) Recall here that if xihas tail index κ0, then E[xs i]<∞, for positive s < κ 0, while E[xs i] =∞ fors≥κ0. By CMRV,
|
https://arxiv.org/abs/2505.06190v1
|
if 0 < α 0+β0<1,xiis stationary and ergodic with tail index κ0>1,such that xihas finite mean, E[xi] =µ0=ω0(1−(α0+β0))−1<∞. On the other hand, if α0+β0>1 andE[log (α0εi+β0)]<0,xiis stationary and ergodic but with tail index κ0<1, and hence infinite mean, E[xi] =∞. Our focus here is on the yet unexplored case of integrated ACD where α0+β0= 1 and E[log (α0εi+β0)]<0, and thus the case where xihas tail index κ0= 1 and infinite mean. It is central to the derivations of the asymptotic behavior of the QMLE ˆθtthat not only does the number of observations n(t) increase as the observation span [0 , t] increases, or, equivalently ast→ ∞ , but also at which rate. That is, the results depend on whether the number of observations in [0 , t],n(t), appropriately normalized by some positive increasing deterministic function of t, is constant, or random, in the limit. Note in this respect that for a given time span [0 , t], by definition n(t) = arg max k≥1{Pk i=1xi≤t}, and hence, by construction, n(t)X i=1xi≃t. (2.6) CMRV establish that for κ0>1, the number of observations n(t) and tare proportional in the sense that, n(t)/tp→1/µ0ast→ ∞ . In contrast, n(t)/tp→0 for κ0<1, and instead n(t)/tκ0→dλκ0, (2.7) where λκ0is a strictly positive random variable. These rates are reflected in Theorems 2 and 3 in CMRV which provide the limiting distributions of ˆθtforκ0>1 and κ0<1, respectively. Specifically, for κ0>1, the QMLE satisfies that√ t(ˆθt−θ0) is asymptotically Gaussian, while forκ0<1, with ˆθtthe MLE,√ tκ0(ˆθt−θ0) is asymptotically mixed Gaussian. For the case κ0= 1, as for the κ0<1 case, it follows by Lemma 2.1 below that n(t)/tp→0. That is, when κ0≤1, the number of events n(t) increase, but at a slower pace than t. To see this intuitively, use (2.6) such that, as t→ ∞ , n(t)/t≃n(t)X i=1xi/n(t)−1 →p0, 4 which follows by using that for deterministic sequence n,1 nPn i=1xidiverges, since E[xi] =∞ when κ0≤1. As demonstrated in Lemma 2.1, n(t) normalized by t/logthas a constant limit in the integrated case. The proof of Lemma 2.1 is given in the appendix and is based on results in Buraczewski, Damek and Mikosch (2016) and Jakubowski and Szewczak (2021), together with arguments from CMRV. Lemma 2.1 Consider the ACD process for xi>0as given by (2.1)-(2.2) with θ0= (ω0, α0, β0)′ satisfying ω0, α0>0and the IACD hypothesis β0= 1−α0≥0.It follows that xiis stationary and ergodic with tail index κ0= 1. Moreover, n(t) logt tp→1/c0, ast→ ∞ , with the constant c0∈(0,∞)given by c0=ω0(E[(1 + α0(εi−1)) log (1 + α0(εi−1))])−1. (2.8) Remark 2.1 Note that for the simple case of the ACD of order one where ψi(θ) =ω+αxi−1, it follows that c0=ω0/E[ε1log (ε1)]when α0= 1. Moreover, for exponentially distributed εi, E[ε1log (ε1)] = 1 −γe, where γe≃0.577is Euler’s constant and hence c0≃2.36×ω0. 3 Asymptotic distribution of the QMLE for IACD Using the central result in Lemma 2.1 that the number of observations n(t) is proportional to t/logtfor the integrated ACD process, we establish in Section 3.1 below that the unrestricted (Q)MLE ˆθtfor the ACD model is asymptotically Gaussian when α0+β0= 1. In Section 3.2
|
https://arxiv.org/abs/2505.06190v1
|
we discuss (Q)ML estimation under the IACD restriction α0+β0= 1 and discuss t- and LR-based test statistics for this restriction. Finally, in Section 3.3 we revisit these tests and show how to implement tests of infinite expectation of durations. 3.1 Asymptotics for the unrestricted QMLE The main result about QMLE of the ACD model is given in Theorem 3.1 and shows that, while the limiting distribution is standard (i.e. Gaussian), the rate of convergencep t/logt,is indeed non-standard. As is common for asymptotic likelihood theory, the asymptotic behavior of the score and the information determine the limiting results. In terms of the likelihood function Ln(t)(θ) =Pn(t) i=1ℓi(θ) in (2.4), introduce the notation Sn(t)=Sn(t)(θ0) and In(t)=In(t)(θ0) for the score and information respectively evaluated at the true parameter θ0, where Sn(t)(θ) =n(t)X i=1si(θ), and In(t)(θ) =n(t)X i=1ιi(θ), (3.1) with si(θ) =∂ℓi(θ)/∂θandιi(θ) =−∂2ℓi(θ)/∂θ∂θ′. With the ACD model given by (2.1)-(2.2) we have the following result for the QMLE ˆθt. 5 Theorem 3.1 Consider the QMLE estimator ˆθtdefined in (2.3) for the ACD model for xi>0 as given by (2.1)-(2.2). With θ0= (ω0, α0, β0)′satisfying ω0, α0>0and the IACD hypothesis β0= 1−α0>0, ast→ ∞ , p t/logt(ˆθt−θ0)→dN (0,Σ), (3.2) where Σ =c0σ2 εΩ−1, with c0is defined in (2.8) and Ω =E[ιt(θ0)], cf. (3.1). Note in particular, as already emphasized, that while ˆθtconverges at the lower non-standard ratep t/logt, the limiting distribution is Gaussian. Remark 3.1 Note that for the case of exponentially distributed innovations, σ2 ε= 1 and the asymptotic variance simplifies to c0Ω−1. For the general QMLE case, a consistent estimator ˆΣt ofΣin (3.2) is ˆΣt=logt tˆσ2 εh In(t)(ˆθt)i−1 , (3.3) where ˆσ2 ε=n(t)−1Pn(t) i=1 ˆεi−¯εn(t)2, with ˆεi=xi/ψi(ˆθt)and¯εn(t)=n(t)−1Pn(t) i=1ˆεi, is the sample variance of the standardized (unrestricted) residuals. As an alternative to ˆΣtas defined in (3.3), one may use the well-known asymptotically equivalent estimator ˆΣt=logt th In(t)(ˆθt)i−1 n(t)X i=1si(ˆθt)si(ˆθt)′ h In(t)(ˆθt)i−1 , withsi(θ)defined in (3.1). Remark 3.2 The results in Theorem 3.1 also applies to estimation of the simple ACD model of order one where ψi(θ) =ω+αxi−1. That is, with θ= (ω, α)′, and ˆθtthe QMLE for the order one ACD, thenp t/logt(ˆθt−θ0)converges as in (3.2). Theorem 3.1 is stated in terms of the deterministic normalization (log t)/t. Alternatively, one may state convergence results in terms of either of the following two random statistics: the number of observations, n(t), or the information, In(t). That is, as an immediate implication of Theorem 3.1 we have the following corollary. Corollary 3.1 Consider the QMLE estimator ˆθtdefined in (2.3). Under the assumptions of Theorem 3.1, as t→ ∞ ,p n(t)(ˆθt−θ0)→dN 0, σ2 εΩ−1 , and I1/2 n(t)(ˆθt−θ0)→dN 0, σ2 εI3 , withI3denoting the (3×3)-dimensional identity matrix. The latter result in Corollary 3.1 means in particular that for the MLE, where σ2 ε= 1, I1/2 n(t)(ˆθt−θ0) is asymptotically standard Gaussian distributed. A further immediate implication of the results in Theorem 3.1 is that we can state the limiting distribution of the t-statistic for testing the hypothesis of integrated ACD. Corollary 3.2 Consider the t-statistic as defined by τt=p t/logtˆαt+ˆβt−1 (g′ˆΣtg)1/2(3.4) where g= (0,1,1)′and ˆΣtis defined in (3.3). Under the assumptions
|
https://arxiv.org/abs/2505.06190v1
|
of Theorem 3.1, as t→ ∞ ,τt→dN (0,1). 6 3.2 Asymptotics for the restricted QMLE Next, turn to QML estimation under the restriction α+β= 1, corresponding to the IACD model. Introduce the parameter vector ϕ∈R2, where ϕ= (ϕ1, ϕ2)′= (ω, α)′∈Φ⊂[0,∞)2, such that θ=θ(ϕ) = (ω, α, 1−α)′∈Θ for ϕ∈Φ. The restricted QML estimator ˜θt= (˜ωt,˜αt,˜βt)′is then given by ˜θt=θ(˜ϕt), where ˜ϕt= arg max ϕ∈ΦLn(t)(θ(ϕ)) , withLn(t)(θ) defined in (2.3). For this estimator, we have the following result. Theorem 3.2 Under the assumptions of Theorem 3.1, with ϕ= (ϕ1, ϕ2)′= (ω, α)′,ω0>0, and0< α 0<1, it follows that for the QMLE ˜ϕtofϕ, ast→ ∞ , p t/logt(˜ϕt−ϕ0)→dN(0,Σϕ), (3.5) where Σϕ=c0σ2 ε(γ′Ωγ)−1, γ=∂θ(ϕ)/∂ϕ′is given by (B.1), and c0is given by (2.8). Moreover, the quasi-likelihood ratio statistic QLRtsatisfies, as t→ ∞ , QLRt= 2h Ln(t)(ˆθt)− L n(t)(θ(˜ϕt))i →dσ2 εχ2 1. (3.6) Remark 3.3 In line with the QLRtstatistic for the hypothesis of integrated ACD in Theorem 3.2, we note that the analogous statistic is considered by simulations for the GARCH model in Busch (2005) and Lumsdaine (1995). Moreover, whereas restricted estimation is to our knowl- edge not covered in existing literature, unrestricted estimation theory which allows integrated GARCH, is considered in Berkes, Horvath and Kokoszka (2003), Lee and Hansen (1994) and Lumsdaine (1996). We emphasize that the theory in the mentioned papers does not apply to the case of ACD models due to the random number of observations n(t). 3.3 Testing IACD and infinite expectation of durations Consider initially testing the null hypothesis of IACD, i.e., HIACD :α+β= 1, against the alternative α+β̸= 1. To this aim, it is natural to use the QLRtstatistic in (3.6), which, normalized by ˆ σ2 ε, is asymptotically χ2 1-distributed. Alternatively, one may run a two-sided test based on the τtstatistic in (3.4), which is asymptotically standard normal under the null. As an alternative, one may do one-sided testing based on τt, similar to the tests for finite moments in GARCH models discussed in Francq and Zakoian (2022). Specifically, consider testing the null of infinite expectation, that is, H∞:α+β≥1 (or E[xi] =∞) against the alternative α+β < 1 (or E[xi]<∞). In this case, at nominal level η, with critical value q(η) = Φ−1(η), Φ (·) being the standard normal distribution function, the null H∞is rejected provided τt< q(η). This implies that the asymptotic size of the test is less than or equal to η, and in this sense size is controlled as t→ ∞ . Although of less interest here, one may also test the null of α+β≤1 against the alternative ofα+β >1, in which case one rejects when τt> q(1−η). This is of less interest as there is no direct interpretation in terms of (in)finite expectation; in particular, it could be viewed as a test for the null of finite expectation, E[xi]<∞, but such a test would have power only against alternatives with α0+β0>1, while having power equal to size in the IACD case α0+β0= 1. 7 As for the previous test, also for this test the asymptotic size is less than or equal to the nominal
|
https://arxiv.org/abs/2505.06190v1
|
level η. These different testing scenarios are investigated using Monte Carlo simulation in Section 4, and applied to cryptocurrency data in the Section 5. 4 Simulations Using Monte Carlo simulation, in this section we assess the finite sample performance and accuracy of the asymptotic properties of testing based on the τtand QLRtstatistics as discussed in the previous section. In particular, we want to analyze both the finite-sample behavior of these statistics under the IACD null hypothesis and their behavior under alternative hypotheses featuring both finite and infinite expected durations. In Section 4.1 we introduce the Monte Carlo set up, including computational details for the reference tests. In Section 4.2 we present the behavior of the test statistics under the null hypothesis (size), while in Section 4.3 we discuss results under the alternative (power). 4.1 Set up The data generating processes (DGPs) for the simulations are given by the ACD process as defined in equations (2.1)-(2.2), with the intercept parameter fixed at ω0= 1. We vary the parameters α0andβ0in the strict stationarity region, such that either the key IACD condition α0+β0= 1 holds, or the parameters satisfy the inequality conditions α0+β0>1 (such that E[xi] = + ∞and the tail index κ0is below unity) or α0+β0<1 (such that E[xi]<+∞ and the tail index κ0is above unity). Thus, we consider scenarios reflecting both IACD and non-integrated ACD processes to evaluate the behavior of the test statistics under different tail indices. Specifically, for a given time span [0 , t],t >0, durations {xi}n(t) i=0are simulated using a ‘burn- in’ sample of b= 1000 observations. That is, the xi’s are generated recursively as xi=ψiεi,i=−(b−1), . . . ,−1,0,1, . . . , n (t) ψi=ω0+α0xi−1+β0ψi−1, with{εi}is an i.i.d. sequence of strictly positive random variables with E[εi] = 1 and initial values x−b=ψ−b= 0. Estimators and test statistics are based on the likelihood function in (2.4). The observations entering the likelihood function are the simulated {xi}n(t) i=1with, as also done in the empirical illustration in Section 5, initial values ( x0, ψ0(θ)) = ( x0, x0). One may alternatively use ψ0(θ) =ωin the likelihood function (see, e.g., Francq and Zako¨ ıan, 2019, Ch.7); however, we found no discernible differences in applying either of the two choices. In order to evaluate the impact of the shape of the distribution of the innovations εi, we consider Weibull distributed random variables with shape parameter ν >0, scaled such that E[εi] = 1. The associated probability density function (pdf) is given by fε,v(x) =νΓ(1 + 1 /ν)νxν−1exp(−(xΓ(1 + 1 /ν))ν), x≥0, where Γ ( ·) is the Gamma function. Note that fε,ν(·) reduces to the pdf of the standard expo- 8 Table 1: Empirical Rejection Probabilities under the Null Hypothesis – two-sided tests. α0= 0.15 α0= 0.50 α0= 0.85 σ2 ε(ν) med {n(t)} τt QLRtτt QLRtτt QLRt 0.5 100 0 .226 0 .378 0 .079 0 .121 0 .060 0 .070 500 0 .160 0 .196 0 .048 0 .049 0 .046 0 .056 2500 0 .067 0 .069 0 .054 0 .059 0 .058 0 .061 12500 0 .053 0 .053
|
https://arxiv.org/abs/2505.06190v1
|
0 .056 0 .059 0 .057 0 .059 62500 0 .063 0 .065 0 .054 0 .056 0 .051 0 .053 1.0 100 0 .145 0 .340 0 .063 0 .103 0 .056 0 .070 500 0 .130 0 .165 0 .044 0 .052 0 .046 0 .059 2500 0 .061 0 .064 0 .051 0 .059 0 .056 0 .061 12500 0 .059 0 .062 0 .057 0 .060 0 .056 0 .058 62500 0 .060 0 .063 0 .054 0 .056 0 .050 0 .052 2.0 100 0 .082 0 .294 0 .054 0 .097 0 .053 0 .075 500 0 .094 0 .131 0 .045 0 .056 0 .049 0 .060 2500 0 .057 0 .064 0 .048 0 .058 0 .052 0 .061 12500 0 .060 0 .066 0 .055 0 .059 0 .050 0 .052 62500 0 .057 0 .059 0 .054 0 .056 0 .050 0 .052 Notes: The table reports empirical rejection probabilities for the QLRtandτtstatistics under the null hypothesis α0+β0= 1 (IACD). Results are based on M= 10,000 Monte Carlo replications. nential distribution for ν= 1. Moreover, the variance of εias a function of ν,σ2 ε(ν), is given by σ2 ε(ν) =Γ(1+2 /ν) Γ(1+1 /ν)2−1, implying that σ2 ε(ν) decreases monotonically with respect to νand achieves the value σ2 ε(1) = 1 (the exponential distribution). Moreover, lim ν→∞σ2 ε(ν) = 0 and lim ν→0σ2 ε(ν) =∞. By varying ν, we include under-dispersion for ν >1 (σ2 ε(v)<1), the exponential case of ν= 1, and over- dispersion for ν <1 (σ2 ε(v)>1). For all designs, the number of Monte Carlo replications is M= 10,000 and the nominal level of tests is η= 0.05, cf. Section 3.3. 4.2 Properties of tests under the null In line with the discussion in Section 3.3, we consider first results for two-sided tests for the IACD null hypothesis, based on the statistics τtand QLRt. Later, we turn to one-sided tests based on τt. We consider parameter settings where α0∈ {0.15,0.50,0.85}andβ0= 1−α0. The shape parameter νof the Weibull distribution takes values in the set {1.435,1.000,0.721}such that σ2 ε(ν)∈ {0.5,1.0,2.0}, representing under- and over-dispersion, as well as the exponential case. For each combination of ( v, α0), we consider five different values of the time span length t, selected by calibrating the (simulated) median number of events med {n(t)}to be in the set{100,500,2500,12500 ,62500}. The latter two are close to the number of durations in the empirical illustration. 4.2.1 Testing IACD In Table 1 we report the (simulated) empirical rejection probabilities (ERPs) under the null hypothesis of IACD, HIACD, using both statistics ( τtand QLRt). We observe that for moderate 9 Figure 1: QQ-plot. Quantiles of the τtstatistic in (3.4) against the N(0,1)-distribution. For each α0∈ {0.15,0.5,0.85}, values of tare such that median number of observations, med{n(t)} ∈ { 100,500,2500,12500 ,62500}. Simulations based on εiexponentially distributed. Number of Monte Carlo-replications M= 10000 . to large sample sizes (med {n(t)} ≥ 2500), both the τtand QLRtstatistics show rejection frequencies close to the nominal level across all values of ( α0, β0) and of
|
https://arxiv.org/abs/2505.06190v1
|
the shape parameter ν. Some size distortions are present for shorter samples, especially in the case of over-dispersion (σ2 ε(v) = 2 >1) and for stronger persistence (e.g., α0= 0.15), where the QLRt-based test is oversized, in particular relatively to τt. Nonetheless, both tests demonstrate good finite-sample size control in all reasonable settings. To further illustrate the validity of the asymptotic results, we consider the τtstatistic which by Corollary 3.2 is asymptotically standard normal. Figure 1 provides QQ-plots of the τtstatistic against the standard normal distribution for the case of ν= 1 and α0∈ {0.15,0.5,0.85}. The QQ-plots confirm the findings reported in Table 1. In particular, the quality of the standard Gaussian approximation improves markedly as the median number of observations increases. For small sample sizes (e.g., med {n(t)}= 100), the quantiles of τtdeviate substantially from the standard normal, particularly in the tails and for α0= 0.15. As the median increases to 2500 and beyond, the empirical quantiles align much more closely with the Gaussian quantiles, as predicted by the asymptotic theory. This pattern holds across all values of α0, though convergence is slower for α0= 0.15. Overall, the figure confirms our theoretical finds, also highlighting the importance of sufficiently large sample sizes for reliable inference in practice, probably due to the slower convergence rate of estimators when α0+β0≥1. 4.2.2 One-sided testing In Table 2 we report the ERPs for the one-sided tests. Here, with q=q(0.95)≃1.64, the columns τt<−qreport the ERPs for the case of testing H∞:α+β≥1 against the alternative α+β <1, while the columns with τt> q, report the ERPs for testing α+β≤1 against the alternative α+β >1. As for the two-sided tests, the size appears well controlled for sufficiently 10 Table 2: Empirical Rejection Probabilities under the Null Hypothesis – one-sided tests. α0= 0.15 α0= 0.50 α0= 0.85 σ2 ε(ν) med {n(t)}τt<−q τ t> q τ t<−q τ t> q τ t<−q τ t> q 0.5 100 0 .304 0 .006 0 .131 0 .016 0 .082 0 .024 500 0 .239 0 .010 0 .073 0 .025 0 .056 0 .045 2500 0 .099 0 .018 0 .054 0 .055 0 .048 0 .061 12500 0 .054 0 .050 0 .050 0 .064 0 .047 0 .062 62500 0 .051 0 .062 0 .046 0 .060 0 .045 0 .053 1.0 100 0 .221 0 .005 0 .108 0 .013 0 .079 0 .022 500 0 .196 0 .010 0 .067 0 .022 0 .058 0 .040 2500 0 .091 0 .020 0 .053 0 .053 0 .049 0 .059 12500 0 .058 0 .053 0 .048 0 .060 0 .049 0 .057 62500 0 .050 0 .062 0 .047 0 .055 0 .046 0 .052 2.0 100 0 .143 0 .006 0 .096 0 .012 0 .079 0 .018 500 0 .151 0 .008 0 .069 0 .020 0 .071 0 .030 2500 0 .088 0 .023 0 .056 0 .046 0 .055 0 .053 12500 0 .064 0 .054 0 .048 0 .055 0 .051 0 .053 62500 0 .052 0 .058 0 .049 0 .056 0
|
https://arxiv.org/abs/2505.06190v1
|
.049 0 .054 Notes: The table reports empirical rejection probabilities for the one-sided t-tests based on τt, with q= 1.64. See also Table 1. large samples. A noticeable difference between the two tests is that the (left sided) test for the infinite expected duration hypothesis, H∞:α+β≥1, the ERPs are above the nominal level for the smaller sample sizes, while the right-sided test is undersized in small samples. For both tests, the ERPs tend to the nominal level η= 0.05 as the median number of observations, med{n(t)}, increases. Figure 2: Rejection frequencies under alternative. Case of α0= 0.85. Rejection frequencies for τt> qR(right hand side of c= 0) and τt< qL(left hand side), with qR[qL] the size-adjusted 0 .95 [0.05] quantiles. Solid line: median number of observations med {n(t)}= 62500 for c= 0; dashed, dotted-dashed and dotted lines: med {n(t)}equal to 12500, 2500 and 500, respectively. Number of Monte Carlo-replications M= 10000 . 11 Figure 3: Rejection frequencies under alternative. Case of α0= 0.5. Rejection fre- quencies for τt> qR(right hand side of c= 0) and τt< qL(left hand side), with qR[qL] the size- adjusted 0 .95 [0.05] quantiles. Solid line: median number of observations med {n(t)}= 62500 forc= 0; dashed, dotted-dashed and dotted lines: med {n(t)}equal to 12500, 2500 and 500, respectively. Number of Monte Carlo-replications M= 10000 . Figure 4: Rejection frequencies under alternative. Case of α0= 0.15. Rejection frequencies for τt> qR(right hand side of c= 0) and τt< qL(left hand side), with qR[qL] the size-adjusted 0 .95 [0.05] quantiles. Solid line: median number of observations med {n(t)}= 62500 for c= 0; dashed, dotted-dashed and dotted lines: med {n(t)}equal to 12500, 2500 and 500, respectively. Number of Monte Carlo-replications M= 10000 . 4.3 Properties under the alternative To investigate the behavior of the ERPs under the alternative, we focus in this section on one- sided tests based on τt; see Section 3.3. In terms of parameters αandβunder the alternatives, we consider the following design. 12 With α0andβ0chosen as under the null, that is, α0∈ {0.15,0.5,0.85}andβ0= 1−α0, we set α=α0+c,β= 1−α0 (4.1) where c∈I(α0) = [−c(α0), c(α0)], with c(α0)>0 – and hence I(α0) – selected such that the stationarity condition E[log (αεi+β)]<0 holds for all c∈I(α0). Specifically, with c(0.15) = 0.011,c(0.5) = 0 .128 and c(0.85) = 0 .149, we consider an equidistant grid of p= 100 points in the intervals I(α0). In terms of t, we consider different values of tand hence time spans [0 , t], such that for c= 0 and each α0value, the median number of observations across Monte Carlo (MC) replications takes values med {n(t)} ∈ { 500,2500,12500 ,62500}. The corresponding ERPs for α0= 0.85,α0= 0.50 and α0= 0.15 are reported in Figures 2, 3 and 4, respectively. These ERPs are size-adjusted; that is, they are based on critical values computed using quantiles from the empirical distribution of the test statistic τtunder c= 0. Consider the (left-sided) test for the null α+β≥1, or E[xi] =∞against the finite expectation alternative α+β <1; the ERPs for this test correspond to negative values ofcin the figures. We note the
|
https://arxiv.org/abs/2505.06190v1
|
following facts. First, as expected, the ERPs increase as t(hence, med {n(t)}) gets larger. Second, the ERPs increase monotonically as c <0 moves away from the null of c= 0. Apart from the fact that the distance from the null increases in −c, this also reflects that by eq.(8) in CMRV, n(t)/t→a.s.1/E[xi] = (1 −(α+β))/ω0=−c, using (4.1). Hence for c <0,n(t) is proportional to ( −c)t, enhancing the observed increase in ERPs as −cincreases. Third, for a given value of c <0, the ERPs increase as α0becomes smaller. For instance, when c=−0.01 and med {n(t)}= 2500, the ERP for α0= 0.15 is close to 90%, while for α0= 0.50 (α0= 0.85) it is approximately 20% (10%). This indicates that the power of the test is highest in regions of the parameter space associated with small α0. Since small values of αare frequently encountered in applied work, this suggests that the test performs well where it is most likely to be used in practice. Next, consider the right-sided test for the null α+β≤1 against α+β >1, or c >0 in (4.1). As for the left-sided test, the ERPs increase with t. Moreover, for a given value of c, the ERPs are highest when α0attains its smallest value, i.e. α0= 0.15. We note that by the implicit definition of κ(see (2.5)), it holds that for α1= 1−β1< α 2= 1−β2, then κas a function of αi, κ(αi, c),i= 1,2 and cfixed, satisfies κ(α1, c)< κ(α2, c). That is, as is well-known also from the integrated GARCH literature, for smaller values of α0(and hence larger β0= 1−α0) the tail index varies more as cis varying, resulting in the larger ERPs. A further notable difference from the left-sided test is that for a fixed time span [0 , t] the ERPs are not monotone in c. To explain this, let θc= (ω0, α, β) with α, βdefined in (4.1) such that θ0= (ω0, α0, β0). It follows thatψi(θc)> ψ i(θ0), with (for the stationary solution) ψi(θc) =ω0h 1 +∞X j=0jY k=0(αεi−1−k+β)i , implying durations are increasing in casxi(c) = ψi(θc)εi> x i=ψi(θ0)εi.That is, the observed number of observations n(t) is decreasing in c, which leads to the observed loss in 13 rejection probabilities for fixed [0 , t]. As noted above, this effect is not present for the left-sided test, where, for c <0, we have the opposite effect: as cdecreases, durations decrease and n(t) increases, leading to an increase in ERPs. 5 Empirical illustration In the seminal work by Engle and Russell (1998), the ACD model was applied to analyze durations between intra-day trades of the IBM stock over a three-month period. Since then, it has been widely used in applications involving high-frequency trade-durations for various financial assets; see, e.g., Aquilina, Budish and O’Neill (2022), Hamilton and Jorda (2002) and Saulo, Pal, Souza, Vila and Dasilva (2025). We illustrate our results by applying ACD models to intra-day, diurnally-adjusted dura- tions{xi}for five different exchange-traded funds (ETFs) tracking cryptocurrency prices from January 2 to February 28, 2025 (or, 35 trading days). The ETFs considered are the Grayscale
|
https://arxiv.org/abs/2505.06190v1
|
Bitcoin Mini Trust (ticker: BTC), Grayscale Ethereum Mini Trust (ETH), Grayscale Bitcoin Trust (GBTC), Grayscale Ethereum Trust (ETHE) and Bitwise Bitcoin (BITB). Intra-day durations for the observed ETFs are measured in seconds (with decimal precision down to nano-seconds) and are obtained from the limit order book records on the NASDAQ stock exchange using the LOBSTER database ( https://lobsterdata.com/index.php) . As detailed in Hautsch (2012, Ch.3), the original, or ‘raw’ intra-day durations obtained from the limit order book are corrected for intradaily patterns using here cubic splines (with knots placed every 30 minutes). Figure 5 shows the obtained diurnally adjusted durations {xi}for each of the ETFs, together with the estimated intradaily patterns for the different ETFs; as expected, more frequent trading (and hence shorter durations) is observed at the market open and close, relative to the mid-day period. The observation period of 35 trading days during regular trading hours (9:30 AM to 4:00 PM EST) corresponds to the time span [0 , t], where t= 35·23400 = 819 ,000 seconds. As to the number of trades n(t) for each of the ETFs these are respectively: 19 ,366 for BTC, 35 ,492 for ETH, 157 ,620 for GBTC, 120 ,104 for ETHE, and n(t) = 51 ,917 for BITB. Although the number of trades may appear comparatively low, this reflects moderate intra-day liquidity exhibited by the ETFs, which is typical of exchange-traded products and contrasts with the high-frequency trading activity commonly observed on cryptocurrency exchanges. For each of the five series, we estimate the ACD model in (2.1) with QMLEs obtained by maximization of the log-likelihood function in (2.4) with initial values x0andψ0(θ) =x0. Note that Engle and Russell (1998) reset the initial value of ψi(θ) on every new trading day; adopting their approach instead of the one used here yields virtually identical empirical results. Parameter estimates of θ:= (ω, α, β )′over the selected time span [0 , t], denoted by ˆθt:= (ˆωt,ˆαt,ˆβt)′, are reported in Table 3 along with robust standard errors computed as in (3.3). The table also reports the corresponding tstatistics τtfrom (3.4) and the quasi-likelihood ratio statistics QLRtfrom (3.6) for testing the null hypothesis HIACD :α+β= 1; these statistics can be used to perform (one-sided or two-sided) tests for the IACD specification, as well as tests for the null hypothesis of infinite expected duration, E[xi] =∞against the alternative of finite expected duration, E[xi]<∞; see the discussion below. The model appears to be reasonably well specified for all five series. In particular, as 14 Figure 5: Durations and Diurnal Pattern. Right column: Diurnally adjusted durations xi (in seconds) as a function of calender time. Left column: Estimated diurnal intra-daily pattern in durations xias a function of time (corresponding to 9:30am-4pm). indicated in Figure 6, some autocorrelation remains in the standardized residuals, ˆ εi=xi/ψi(ˆθt), although their squared values exhibit no significant autocorrelation. This type of dependence in the residuals ˆ εiis consistent with previous findings in the ACD literature, where it is well documented that fully eliminating all serial correlation from the residuals can be challenging; see e.g. Pacurar (2008). Importantly,
|
https://arxiv.org/abs/2505.06190v1
|
the empirical distribution of the ˆ εi’s does not appear to be exponential, again consistent with findings commonly reported in the financial durations literature; see, for example, Section 5.3.1 of Hautsch (2012) and the references therein. Returning to the results in Table 3, we first test the null hypothesis of infinite expected duration E[xi] =∞against the alternative E[xi]<∞. This hypothesis can be assessed by testing the null hypothesis α+β≥1 against the one-sided alternative α+β < 1, using the tstatistics τtfrom (3.4). The τtstatistics are all positive, and thus we do not reject the null hypothesis of infinite mean ( E[xi] =∞) for any of the five series. We next consider the null hypothesis of integrated ACD, HIACD :α+β= 1. This can be tested against the two-sided alternative, α+β̸= 1, or against one sided alternatives, such asα+β > 1 or α+β < 1. Using the t-statistic τtdefined in (3.4) and the QLRtstatistics defined in (3.6), the null hypothesis is rejected at the 5% nominal level for four out of the five cryptocurrency ETF considered: BTC, ETH, GBTC and ETHE. The IACD specification is supported for the BITB cryptocurrency ETF only. Taken together, our results show that diurnally adjusted trade durations for cryptocurrency ETFs are heavy-tailed, with infinite expectation and an implied tail index κless than (or equal to) one. These findings underscore the importance of using statistical models that accommodate infinite expected durations and tail indexes at or below one when analyzing and modeling high- frequency financial durations in cryptocurrency markets. 15 Table 3: ACD(1,1) estimates and IACD test statistics ω α β α +β t α+β=1 QLRt BTC 6 .663 (0.662)0.186 (0.010)0.829 (0.007)1.015 (0.004)3.72 16 .84 ETH 0 .007 (0.004)0.123 (0.007)0.896 (0.004)1.018 (0.004)4.86 76 .88 GBTC 1 .974 (0.221)0.119 (0.003)0.896 (0.002)1.015 (0.001)13.77 235 .92 ETHE 0 .394 (0.041)0.083 (0.003)0.927 (0.002)1.010 (0.001)8.27 105 .90 BITB 4 .836 (0.536)0.095 (0.004)0.906 (0.003)1.002 (0.001)1.43 1 .58 Notes: Parameter estimates (three decimal points) with standard errors in parentheses, together with theτtand QLRtstatistics. Note that ˆ ωthas been scaled by 103. Figure 6: ACF plots. Sample autocorrelation function (ACF) for the estimated residuals, ˆεi=xi/ψi(ˆθt) (left column) and the squared estimated residuals, ˆ ε2 i(right column). Plots include (dashes) standard 0.95-confidence intervals. 6 Conclusions In this paper, we have completed the asymptotic theory for (quasi) likelihood-based estimation in autoregressive conditional duration (ACD) models, specifically addressing the previously un- resolved ‘integrated ACD’ case where the parameters satisfy the critical condition α+β= 1. We have established three main results. First, the rate of convergence of the QML estimators dif- fers from both the α+β >1 case and the α+β <1 the case: interestingly, we find a discontinu- ity, with the rate beingp t/logtwhen α+β= 1, where tdenotes the length of the observation period. Second, despite this nonstandard rate, the QMLE remains asymptotically Gaussian. Third, standard inference procedures – based on t-statistics and likelihood ratio tests – remain valid under the integrated ACD setting. We have characterized by Monte Carlo simulation the quality of the asymptotic approximations, finding that empirical rejection frequencies of tests are close to the selected nominal levels, albeit
|
https://arxiv.org/abs/2505.06190v1
|
large samples are required for some parameter config- urations. Finally, we have applied our results to recent high-frequency trading data on various 16 cryptocurrency ETFs. The empirical evidence indicates heavy-tailed duration distributions, and in most cases, the integrated ACD hypothesis is not rejected in favor of the alternative α+β <1. An important extension of our work concerns the development of bootstrap inference meth- ods within this framework. Bootstrap theory exists for the case of deterministic n(t) as for the class of multiplicative error (MEM) models (see, e.g., Perera, Hidalgo and Silvapulle, 2016, and Hidalgo and Zaffaroni, 2007), and for point processes, such as ACD models, with finite ex- pected durations (see, e.g., Cavaliere, Lu, Rahbek and Stærk-Østergaard, 2023). To the best of our knowledge, no bootstrap theory currently accommodates cases where α+β≥1. This sig- nificant and open research question is currently being investigated by the authors. References Aquilina, Matteo, Eric Budish, and Peter O’Neill, P. (2022) “Quantifying the High- Frequency Trading Arms Race,” The Quarterly Journal of Economics , 137, 493–564. Berkes, Istvan, Lajos Horvath and Piotr Kokoszka (2003) “GARCH Processes: Struc- ture and Estimation,” Bernoulli , 9(2), 201–227. Bhogal, Saranjeet K., and Ramanathan Thekke Variyam (2019) “Conditional Dura- tion Models for Highfrequency Data: A Review on Recent Developments,” Journal of Economic Surveys, 33(1), 252–273. Busch, Thomas (2005) “A Robust LR Test for the GARCH Model,” Economics Letters, 88, 358–364. Buraczewski, Dariusz, Ewa Damek, E. and Thomas Mikosch, T. (2016) Stochastic Models with Power-Law Tails . NY: Springer. Cavaliere, Giuseppe, Ye Lu, Anders Rahbek and Jacob Stærk-Østergaard (2023) “Bootstrap Inference for Hawkes and General Point Processes,” Journal of Econometrics , 235, 133–165. Cavaliere, Giuseppe, Thomas Mikosch, Anders Rahbek, and Frederik Vilandt (2024) “Tail Behavior of ACD Models and Consequences for Likelihood-based Estimation,” Journal of Econometrics , 238(2), 105613. Cavaliere, Giuseppe, Thomas Mikosch, Anders Rahbek, and Frederik Vilandt (2025) “A Comment on: “Autoregressive Conditional Duration: A New Model for Irregularly Spaced Transaction Data,” Econometrica , 93(2), 719-729. Engle, Robert F. and Jeffry R. Russell (1998) “Autoregressive Conditional Duration: A New Model for Irregularly Spaced Transaction Data,” Econometrica , 66(5), 1127–1162. Fernandes, Marcelo, Marcelo C. Medeiros and Alvaro Veiga (2016) “The (Semi-) Parametric Functional Coefficient Autoregressive Conditional Duration Model,” Econometric Reviews , 35, 1221–1250. Francq, Christian and Jean-Michel Zakoian (2019) GARCH Models: Structure, Statis- tical Inference and Financial Applications . NY: Wiley. 17 Francq, Christian and Jean-Michel Zakoian (2022) “Testing the existence of moments for GARCH processes,” Journal of Econometrics, 227, 47–64. Gourieroux, Christian and Jean-Michel Zako ¨ıan(2017) “Local Explosion Modelling by Non-Causal Process,” Journal of the Royal Statistical Society B , 79, 737–756. Hamilton, James .D., and Oscar Jord `a(2002) “A Model of the Federal Funds Rate Target,” Journal of Political Economy, 110, 1135–1167. Hautsch, Nikolaus (2012) Econometrics of Financial High-Frequency Data , Berlin: Springer . Hidalgo, Javier and Paolo Zaffaroni (2007) “A goodness-of-fit test for ARCH( ∞) mod- els,” Journal of Econometrics , 141, 835–875. Jakubowski, Adam and Zbigniew S. Szewczak (2021) “Truncated Moments of Perpetu- ities and a New Central Limit Theorem for GARCH Processes without Kesten’s Regularity,” Stochastic Processes and
|
https://arxiv.org/abs/2505.06190v1
|
their Applications , 131, 151-171. Jensen, Søren T. and Anders Rahbek (2004) “Asymptotic Inference for Nonstationary GARCH, ” Econometric Theory , 20(6), 1203–1226 . Lee, SanG-Won and Bruce E. Hansen (1994) “Asymptotic Theory for the GARCH(1, 1) Quasi-Maximum Likelihood Estimator”, Econometric Theory, 10, 29–52. Ling, Shiqing (2004) “Estimation and testing stationarity for double-autoregressive models,” Journal of the Royal Statistical Society B , 66, 63–78. Lumsdaine, Robin L. (1995) “Finite-Sample Properties of the Maximum Likelihood Estimator in GARCH(1,1) and IGARCH(1,1) Models: A Monte Carlo Investigation,” Journal of Business & Economic Statistics , 13(1), 1-10. Lumsdaine, Robin L. (1996) “Consistency and Asymptotic Normality of the Quasi-Maximum Likelihood Estimator in IGARCH(1,1) and Covariance Stationary GARCH(1,1) Models,” Econo- metrica , 64(3), 575–96. Pacurar, Maria (2008) “Autoregressive Conditional Duration Models in Finance: a Survey of the Theoretical and Empirical Literature,” Journal of Economic Surveys , 22, 711–751. Pedersen, Rasmus S. and Anders Rahbek (2019) “Testing GARCH-X Type Models,” Econometric Theory, 35, 1012–1047. Perera, Indeewara, Javier Hidalgo and Mervyn J. Silvapulle (2016) “A Goodness of-Fit Test for a Class of Autoregressive Conditional Duration Models,” Econometric Reviews , 35(6), 1111-1141. Saulo, Helton, Pal Suvra, Souza Rubens, Roberto Vila and Alan Dasilva (2025) “Parametric Quantile Autoregressive Conditional Duration Models With Application to Intra- day Value-at-Risk Forecasting,” Journal of Forecasting , 44: 589-605. 18 Appendix A.1 Proof of Lemma 2.1 With sn=Pn i=1xi,andndeterministic, we first establish that sn nlogn→pc0, asn→ ∞ . (A.1) To see this, note that sn=s1n+s2n,with s1n=Pn i=1ψiands2n=Pn i=1ψi(εi−1), and the result holds by establishing (i)s1n/(nlogn)→pc0and(ii)s2n/(nlogn)→p0. It follows by Lemma 4 in CMRV that ψi=ψi(θ0) satisfies the stochastic recurrence equation, ψi=ω0+ (α0εi−1+ 1−α0)ψi−1=Aiψi−1+Bi (A.2) with Ai= 1+ α0(εi−1−1) and Bi=ω0. Moreover, ψi, and hence also xi=ψiεi, have tail index κ0= 1>0, such that in particular, P(ψi> x)∼c0x−1,asx→ ∞ , (A.3) with c0given by (2.8). Next, by (A.3) and L’Hˆ opital’s rule, E[ψiI(ψi≤n)] =Zn 0P(ψi> x)dx−nP(ψi> n)∼c0log (n),asn→ ∞ . (A.4) Using (A.4) and (12) and (15) in Theorem 1.1 in Jakubowski and Szewczak (2021) (henceforth JS), it follows by Theorem 2.1 in JS, that (i)holds. For (ii)decompose s2nas follows, s2n=nX i=1ψiI(ψi≤nlog (n))(εi−1) +nX i=1ψiI(ψi> nlog (n))(εi−1) =s21n+s22n. For the first term s21nwe have V[s21n/(nlog (n))] = nP(ψi> nlog (n))V[εi]E[ψ2 iI(ψi≤nlog (n))] (nlog (n))2P(ψi> nlog (n)). Using (A.3), nP(ψ1> nlog (n))→0, while E[ψ2 iI(ψi≤nlog (n))] (nlog (n))2P(ψi> nlog (n))∼1,asn→ ∞ by Karamata’s theorem (see e.g. pages 26-27 in Bingham, Goldie and Teugels, (1987)). Hence s21n→0 as desired. For the second term s22nwe have for any δ >0 P(|s22n|> δn log (n))≤P n[ i=1{ψi> nlog (n)}! ≤nP(ψ1> nlog (n))→0, using (A.3) for the convergence. Hence s22n/(nlog (n))→0 in probability, and hence (ii),such that (A.1) holds. 19 Finally, as by definition n(t) = maxn k:Pk i=1xi≤to , using (A.1) and g(t) =t/logt, P(n(t)/g(t)≤z) =P(n(t)≤zg(t)) =P zg(t)X i=1xi≥t = 1−P log (zg(t)) logt (zg(t) log ( zg(t)))−1zg(t)X i=1xi< z−1 →1−I(c0< z−1) =I(z≥c−1 0), which establishes the desired result, n(t)/g(t)→pc−1 0. □ B Proof of Theorems 3.1 and 3.2 B.1 Proof of Theorem 3.1 We apply Lemma 2.1 in CMRV with T=treplaced there by g(t) :=t/logtandµ=c0. With θ= (θ1, θ2, θ3)′=
|
https://arxiv.org/abs/2505.06190v1
|
(ω, α, β )′, conditions (C.1)-(C.3) in CMRV hold by the proof of Theorem 2 there .To see this, for ndeterministic, define Ln(θ) by replacing n(t) bynin (2.4), and define Sn(θ) and In(θ) similarly. Then, by CMRV, as n→ ∞ , 1√nS[n·](θ0)→wΩ1/2 SB(·),1 nIn(θ0)→a.sΩI, where B(·) is a three dimensional Brownian motion on [0 ,∞), Ω S=E[si(θ0)si(θ0)′] and Ω I= E[ιi(θ0)]. Moreover, supθ∈Θ 1 n∂3Ln(θ)/∂θi∂θj∂θk ≤cn→a.s.c <∞,i, j, k = 1,2,3, and finally, as (C.4) holds by Lemma 2.1, we conclude by Lemma 2.1 in CMRV p g(t)(ˆθt−θ0)′→dN 0, c0Ω−1 IΩSΩ−1 I ast→ ∞ , holds. By standard results from GARCH models, see e.g. Jensen and Rahbek (2004), Ω S= V[εi] ΩI, using that by assumption E[εi] = 1, and the desired result in Theorem 3.1 holds. □ B.2 Proof of Theorem 3.2 The result in (3.5) follows by the proof of Theorem 3.1. Thus with γ=∂θ(ϕ)/∂ϕ′= 1 0 0 0 1 −1!′ , (B.1) ∂Ln(t)(θ(ϕ))/∂ϕ =γ′Sn(t)(θ(ϕ0)) and ∂2Ln(t)(θ(ϕ0))/∂ϕ∂ϕ′=−γ′In(t)(θ(ϕ0))γby the chain rule. In particular, (C.1) and (C.2) in Lemma 2.1 of CMRV hold as, 1√nγ′S[n·](θ(ϕ0))→wγ′Ω1/2 SB(·),1 nγ′In(θ0)γ→a.sγ′ΩIγ. Similarly for (C.3), while (C.4) as before holds by Lemma 2.1. Hence (3.5) holds. The asymp- totic distribution of the QLR statistic follows by arguments as in the proof of Lemma 2.1 in Pedersen and Rahbek, 2019, using the identity Ω S=V[εi] ΩIandn(t) log ( t)/t→p1/c0.□ 20
|
https://arxiv.org/abs/2505.06190v1
|
arXiv:2505.06405v1 [math.CO] 9 May 2025Mixing and Merging Metric Spaces using Directed Graphs Mahir Bilen Can1and Shantanu Chakrabartty2 1Department of Mathematics, Tulane University, USA, mahirbilencan@gmail.com 2Department of Electrical and Systems Engineering, Washington University in St. Louis, USA, shantanu@wustl.edu Abstract Let (X1, d1), . . . , (XN, dN) be metric spaces with respective distance functions di: Xi×Xi→[0,1],i= 1, . . . , N . LetXdenote the set theoretic product X1× ··· × XN and let g∈ X andh∈ X denote two elements in this product space. Let G= (V,E) be a directed graph with vertices V={1, . . . , N }and with a positive weight P= {pij}, pij∈(0,1], i, j= 1, .., N associated with each edge ( i, j)∈ EofG. We define the function dX,G,P(g,h) := 1−1 NNX j=1NY i=1[1−di(gi, hi)]1 pji . In this paper we show that dX,G,Pdefines a metric space over Xand we investigate the properties of this distance under graph operations, which includes disjoint unions and cartesian products. We show two limiting cases: (a) where dX,G,Pdefined over a finite field leads to a broad generalization of graph-based distances that is widely studied in the theory of error-correcting codes; and (b) where dX,G,Pis extended to measuring distances over graphons. Keywords: metric spaces, directed graphs, error-correcting codes, random graphs, graphons, machine learning, artificial intelligence 1 Introduction The main goal of this article is to investigate the function dX,G,P, introduced in the abstract, as a new tool to merge various metric spaces. The framework could be useful and applicable to several domains ranging from machine learning [14, 12, 6] to biology and ecology [15, 3, 7]. The first corollary of the main result is an interpretation of dX,G,Pas a generalization of graph-based distances used in the theory of error-correcting codes [8, 2, 4]. 1 Corollary 1.1. We assume that the underlying sets in the main result, X1, . . . , X Nare finite and that the corresponding metrics d1, . . . , d Ntake values in {0,1}. Then for every g= (g1, . . . , g N)∈ X andh= (h1, . . . , h N)∈ X, we have dX,G,P(g,h) =|⟨supp( g,h)⟩|, where |.|denotes the size of the set and supp( g,h) ={j∈ {1, . . . , N } |gj̸=hj}, with ⟨supp( g,h)⟩being the set of indices i∈ {1, . . . , N }such that there is a directed path from i to an element of supp( g,h). Previous work has explored the use of directed graphs or posets for defining metrics for error-correcting codes (ECCs) [2, 4], however, these approaches do not directly address the combination of various metrics using graphs. The proposed metric dX,G,Pin Corollary 1.1 provides a broader generalization and not only extends existing graph-based metrics, but also allows us to work with mixed channels and modulation schemes. These include phase-shift keying modulation [13], or channels that are susceptible to synchronization errors [11]. The next corollary extends dX,G,Pto the setting of graphons .Graphons , also known as graph limits, are symmetric measurable functions that serve as analytic representations for sequences of large,
|
https://arxiv.org/abs/2505.06405v1
|
dense graphs. Introduced in the foundational work by Lov´ asz and Szegedy [10], they provide a powerful framework for studying the properties of large graphs and their limiting behavior, analogous to how probability distributions describe the limits of sequences of random variables [1]. Corollary 1.2. In the limit N→ ∞ , where the graph Gin the main result approaches a graphon W: [0,1]2→(0,1]that is defined over the random variables x, y∈[0,1]. Then dG,W(g, h) := 1 −Ex exp Ey|xlog (1−d(g(y), h(y))) W(x, y) is a metric defined on the set of all continuous functions g, h: [0,1]→C, where Ex(.)denotes the expectation with respect to xandEy|x(.)denotes the conditional expectation of ygiven x, whenever it is defined. Note that Ey|x(.)<0 hence dG,Wis well defined in the limit. Note also that the continuity requirement can be relaxed depending on the graphon used. The next corollary uses the property that dX,G,Prespects the structure of the semiring of directed graphs. A semiring is an algebraic structure ( S,+,·), where addition, denoted +, and multiplication, denoted ·, are associative binary operations on the set S. Furthermore, (S,+) is required to be a commutative monoid with 0, and ( S,·) is required to be a monoid with identity 1. Finally, multiplication distributes over addition both on the left and the right. Semirings are one of the most commonly studied structures in algebra. It is not difficult to see that the set of all directed graphs is a semiring, where the addition is defined as the disjoin union of directed graphs, and the multiplication is defined as the Cartesian product of directed graphs. In this context, our metric has the following properties: 2 Corollary 1.3. Let(X1, d1), . . . , (XN, dN)be metric spaces as in the main result. Let G(1) (resp. G(2)) be a directed graph on the set {1, . . . , N 1}(resp. on the set {N1+ 1, . . . , N }). We denote by X(1)(resp. by X(2)) the product metric space X1×···× Xr(resp. by (Xr+1× ··· × XN)). Let P(1)(resp. P(2)) be the matrix of probability scores on the edges of G(1) (resp. on the edges of G(2)). If the directed graph Gis a disjoint union of two sub-directed graphs, G=G(1)⊔ G(2), then we have dX,G,P=N1 NdX(1),G(1),P(1)+N2 NdX(2),G(2),P(2), where X=X(1)× X(2)andN2:=N−N1. As an example of Corollary 1.3, if G1= (V1, E1) andG2= (V2, E2) be two finite directed graphs. The Cartesian product ofG1andG2, denoted by G1□G2, is a directed graph whose vertex set is given by V(G1□G2) :=V1×V2. The edge set of G1□G2, denoted E(G1□G2) is defined so that there is a directed edge from (u1, u2) to ( v1, v2) inG1□G2if and only if •u1=v1and ( u2, v2)∈E2, or •u2=v2and ( u1, v1)∈E1. In other words, edges in G1□G2connect vertices where one coordinate is adjacent in one graph, while the other coordinate remains unchanged. This dichotomy allows us to associate a well-defined probability score to each edge of G1□G2as long as we have probability scores on the edges of the graphs G1andG2separately. Let pu1,v1(resp. pu2,v2) denote the probability score associated with
|
https://arxiv.org/abs/2505.06405v1
|
the edge ( u1, v1) inG1(resp. ( u2, v2) inG2). If (( u1, u2),(v1, v2)) is an edge of G1□G2, then we define p(u1,u2),(v1,v2):=( pu1,v1ifu2=v2, pu2,v2ifu1=v1. The last corollary uses dX,G,Pto understand the nature of a product metric space. Let X1, . . . , X Nbe metric spaces. Let N1∈ {1, . . . , N −1}and let N2:=N−N1. For i∈ {1, . . . , N 1}andj∈ {N1+ 1, . . . , N }, we denote by F(Xi, Xj) the set of all functions from XitoXj. The corresponding uniform metric onF(Xi, Xj) is defined by di,j(f, g) := sup x∈XidXj(f(x), g(x)), where f, g∈F(Xi, Xj), and sup denotes the supremum. We let Fdenote the product metric space: F:=N1Y i=1NY j=N1+1F(Xi, Xj). 3 It is notationally convenient to view Fas a collection of N1×N2matrices f:= (fij)i=1,...,N 1,j=1,...,N 2, where the ( i, j)-th entry of the matrix fis a function fijinF(Xi, Xj+N1). Our goal is to understand the nature of the joint metric on F. Let dF1,G1,P1anddF2,G2,P2denote the joint-metrics on the disjoint union N2Y j=1F(X1, Xj)⊔ ··· ⊔N2Y j=1F(XN1, Xj) andN1Y i=1F(Xi, XN1+1)⊔ ··· ⊔N1Y i=1F(Xi, XN), respectively. Corollary 1.4. We maintain the notation from above to define dF1,G1,P1anddF2,G2,P2. Fur- thermore, we denote by P1(resp. P2) the matrix of probability scores at the edges of G1 (resp. of G2). Then we have dF,G1□G2,P(g,h) = 1−N1 N|V(G1)| −dF1,G1,P1N2 N|V(G2)| −dF2,G2,P2 . We now outline the structure of our paper. In Section 2, we provide the necessary background on directed graph metrics, and introduce graphons. Section 3 is dedicated to proving the main result of the paper which is preceded by several preparatory results. Then in Sections 4 5 and 6, we present the proofs of Corollaries 1.1, 1.2, 1.3and 1.4. 2 Preliminaries In this section, we introduce our notation, discuss preliminary results, and make comments on the usefulness of our proposed metrics. Throughout this article, Z+stands for the semigroup of positive integers. For N∈Z+, we use the notation [ N] to denote the set {1, . . . , N }. For a group ( V,·), where ·denotes the group operation, we use eV(or simply eif confusion is unlikely) to denote the identity element of V. 2.1 Graphs and Posets. AgraphGis an ordered pair ( V, E), where Vis a non-empty set of vertices (or nodes), and Eis a set of edges, where each edge connects two vertices. In this work, we use graphs where both VandEare finite sets. A simple graph is a graph that has no loops and no multiple edges. A directed graph (digraph) is a graph in which each edge has a direction. Apath in a directed graph ( V, E) is a sequence of vertices v0, v1, . . . , v ksuch that ( vi−1, vi)∈ Efor all 1 ≤i≤k. In simpler terms, a path is a sequence of edges that can be traversed continuously. A cycle is a path v0, v1, . . . , v ksuch that vi̸=vjfor all {i, j} ⊆ { 0,1, . . . , k } buti= 0 and
|
https://arxiv.org/abs/2505.06405v1
|
j=k. Adirected acyclic graph (DAG) is a directed graph that contains no cycles. Here, a cycle is a path that starts and ends at the same vertex. 4 LetG1= (V1, E1) and G2= (V2, E2) be directed graphs, where E1⊆V1×V1and E2⊆V2×V2. The Cartesian product of directed graphs G1□G2is a directed graph with the following properties: •The vertex set of G1□G2is the Cartesian product V1×V2. •There is a directed edge from ( u1, u2) to ( v1, v2) inG1□G2if and only if one of the following conditions holds: 1.u1=v1and ( u2, v2)∈E2(a directed edge in G2), or 2.u2=v2and ( u1, v1)∈E1(a directed edge in G1). Apartially ordered set (poset) is a pair P= (S,⪯), where Sis a set and ⪯is a binary relation on Sthat is reflexive ( x⪯xfor all x∈S), antisymmetric (if x⪯yandy⪯x, then x=y), and transitive (if x⪯yandy⪯z, then x⪯z). Similarly to our assumption on the finiteness of graphs, unless otherwise stated, our posets are assumed to be finite. Asubposet ofPis a subset T⊆Stogether with the restriction of ⪯toT. For elements x, y∈S, the interval between xandyis defined as [ x, y] ={z∈S|x⪯z⪯y}. Ifx⪯̸y, then [ x, y] =∅. If [x, y] ={x, y}, we say that y covers x . A poset P= (S,⪯) can be conveniently visualized by its Hasse diagram , which is a directed graph ( V, E) = (S, C), where Cis the set of all cover relations. A DAG Gnaturally defines a poset. If there is a directed edge from vertex xto vertex yinG, we write x⋖y. The transitive closure of the relation ⋖, denoted by ≤, defines a partial order on the vertices ofG, thus forming a poset. For a poset ( R,⪯), alower order ideal is a subset S⊆Rsuch that for every r∈Rand s∈S, ifr⪯s, then r∈S. A closely related concept appears in graph theory. Let ( V, E) be a directed graph. A subset W⊆Vis called a hereditary subset if every vertex v∈Vwith a path to some vertex in Walso belongs to W. In terms of posets, hereditary subsets correspond precisely to lower order ideals. To see this, consider the poset ( V,⪯) where v⪯v′if and only if there exists a directed path from vtov′in the graph ( V, E). Under this construction, the hereditary subsets of Vare exactly the lower order ideals of the associated poset, and vice versa. LetAbe a subset of the vertex set of a directed graph ( V, E). We denote by ⟨A⟩the smallest hereditary subset Wof (V, E) such that A⊆W. 2.1.1 Metric Spaces. Ametric space is an ordered pair ( X, d), where Xis a set and d:X×X→[0,∞) is a function called a metric (ordistance function ) that satisfies the following axioms for all x, y, z ∈X: 1.Non-negativity: d(x, y)≥0, and d(x, y) = 0 if and only if x=y. 2.Symmetry: d(x, y) =d(y, x). 5 3.Triangle inequality: d(x, z)≤d(x, y) +d(y, z). Let ( X, d X) and ( Y, d Y) be metric spaces, where XandYare sets and dXanddYare distance functions on XandY, respectively. A function f:X→Yis called an isometry if for
|
https://arxiv.org/abs/2505.06405v1
|
all points a, b∈X, the distance between aandbinXis equal to the distance between their images f(a) and f(b) inY. In other words: dY(f(a), f(b)) =dX(a, b). (2.1) Notice that an isometry is necessarily injective. Unless otherwise noted, we assume that isometries are surjective as well. When we write ‘ fis an isometry of ( X, d)’ we mean that f:X→Xis an isometry. A metric disnormalized if its range is contained within the unit interval: 0≤d(x, y)≤1 for all x, y∈X. Note that this is equivalent to supx,y∈Xd(x, y)≤1. If supx,y∈Xd(x, y) = 1, we say the metric isstrictly normalized. As we mentioned in the introduction, every metric can be normalized to take values in the interval [0 ,1]. Lemma 2.2. Letdbe a distance function on a set X. We assume that dtakes values that are greater than or equal to 1. Let d′:X×X→Rbe the function defined by d′(x, y) = d(x,y) sups,t∈Xd(s,t)ifXis finite or dis bounded , d(x,y) 1+d(x,y)otherwise . Then d′is a normalized metric. Proof. Ifd′is defined by d′(x, y) :=d(x,y) sups,t∈Xd(s,t)forx, y∈X, then it is straightforward to check that d′is a normalized metric since it is obtained from dby scaling it by its supremum. We proceed with the second case. The verification of the following properties are easy to check: 1.d′(x, y)≥0 for every {x, y} ⊂V, with equality if and only if x=y. 2.d′(x, y) =d′(y, x) for every {x, y} ⊂V. To check the triangle inequality, we apply the following observation: For three nonnegative integers a, b, and csuch that a≤b+c, we have a 1 +a≤b 1 +b+c 1 +c. (2.3) 6 Indeed, it is easy to check that a+a(b+c) +abc≤(b+c+ 2bc) +a(b+c+ 2bc) which is equivalent to the inequality a(1 +b)(1 + c)≤(1 +a)(b(1 +c) +c(1 +b)). Dividing the both sides of this inequality by (1 + a)(1 + b)(1 + c) we obtain the inequality in (2.3). This shows that the triangle inequality holds. Hence, the proof follows. 2.2 Metrics from directed graphs. If the underlying set of a metric space ( X, d) is a group such that the group operations are compatible with the metric (specifically, translations are isometries), then ( X, d) is called a metric group . LetKbe a group. The N-fold direct product KN=K× ··· × Kadmits a natural translation-invariant metric, the Hamming metric , defined as follows. Let edenote the identity element of K. The support of an N-tuple v= (v1, . . . , v N)∈KNis defined by supp( v) ={i∈[N]|vi̸=e}. The Hamming metric on KNis then given by dH(v, w) =|supp( vw−1)|, where vw−1= (v1w−1 1, . . . , v Nw−1 N). It is easy to check that this metric is translation- invariant. The Hamming weight ofv, denoted by ωH(v), is the Hamming distance from vto the identity element ( e, . . . , e )∈KN: ωH(v) =dH(v,(e, . . . , e )) =|supp( v)|. Conversely, the Hamming metric can be defined in terms of the Hamming weight as dH(v, w) =ωH(vw−1). Now, let us assume that Kis the additive group of the finite
|
https://arxiv.org/abs/2505.06405v1
|
field Fq. In [2], Brualdi, Gravis, and Lawrance introduced metrics on the Fq-vector space KN(which is isomorphic toFN q) defined by posets. Let Pbe a poset on the set [ N] ={1, . . . , N }. The P-weight of v= (v1, . . . , v N)∈FN qis defined by ωP(v) =|{i∈[N]|i⪯jfor some j∈supp( v)}|. Brualdi, Gravis, and Lawrance showed that dP(v, w) =ωP(v−w) defines a metric on FN q. This concept was generalized to digraphs in [4]. If Gis a digraph on [ N], theG-weight of a vector vis defined as ωG(v) =|{i∈[N]|there is a directed path from ito some j∈supp( v)}|. 7 2.3 Graphons. In precise terms, a graphon is a function W: [0,1]×[0,1]→[0,1] which satisfies the following two properties: (1) For all x, y∈[0,1], W(x, y) =W(y, x). (2) The function Wis measurable with respect to the Lebesgue measure on [0 ,1]×[0,1]. The motivation behind this definition can be explained as follows. Suppose we have a finite graph G(n) with nvertices, labeled as v1, v2, . . . , v n. The adjacency matrix AofG(n) is an n×nmatrix where the ( i, j)-th entry is given by Aij=( 1 if there is an edge between viandvj, 0 otherwise. To connect the adjacency matrix to a continuous function on the unit square [0 ,1]×[0,1], we divide [0 ,1] into nequal subintervals defined by Ik:=k−1 n,k n , k = 1,2, . . . , n. Each vertex vkis mapped to the corresponding interval Ik. Over the unit square [0 ,1]×[0,1], we construct a piecewise constant function Wn(x, y) based on the adjacency matrix Aby setting Wn(x, y) :=Aijifx∈Iiandy∈Ij. In other words, the value of Wn(x, y) depends on which subintervals xandybelong to, and it is constant within each Ii×Ijblock. As the number of vertices nincreases, the subintervals Ikbecome smaller, and the step function Wn(x, y) becomes finer. If the graphs G(n) are part of a converging sequence of dense graphs, the step functions Wn(x, y) converge (in a suitable sense, such as the L1norm) to a continuous limiting function W(x, y), which is graphon . For a comprehensive introduction to graphons, we recommend the book [9]. 3 Proof of the main result We begin with an elementary lemma. Lemma 3.1. Letf(t)denote the real-valued function defined by f(t) = 1−e−t. Then f(t) is subadditive on the interval [0,∞). In other words, we have f(a+b)≤f(a) +f(b) for every a, b≥0 8 Proof. The inequality we want to prove is given by 1−e−(a+b)≤(1−e−a) + (1 −e−b). To get a clearer form of this inequality we rearrange the terms: 0≤1−e−a−e−b+e−ae−b. But the right hand side of this inequality is equal to (1 −e−a)(1−e−b). Hence, the inequality we wish to prove is equivalent to the inequality 0 ≤(1−e−a)(1−e−b). Now, since aandbare non-negative, e−aande−bare both in the interval (0 ,1]. Therefore, (1 −e−a) and (1 −e−b) are both in the interval [0 ,1]. Thus, the inequality (1 −e−a)(1−e−b)≥0 always holds for a, b≥0. This finishes the proof of the fact that f(t) = 1−e−tis a subadditive function on the entire interval [0 ,∞). The
|
https://arxiv.org/abs/2505.06405v1
|
following lemma will be useful for our next theorem. Lemma 3.2. Leta:X×X→[0,1)be a distance function on a set X. Then d:X×X→R defined by d(x, y) =−log(1−a(x, y)) where x, y∈Xis also a distance function. Proof. The non-negativity, identity of indiscernibles, and the symmetry properties of dare straightforward to verify. It remains to show that dsatisfies the triangle inequality. For every x, y, z ∈X, we want to show that −log(1−a(x, z))≤ −log(1−a(x, y))−log(1−a(y, z)) holds. Since the triangle inequality a(x, z)≤a(x, y) +a(y, z) holds for the metric a, we always have 1−a(x, z)≥1−a(x, y)−a(y, z). In fact, since a(x, y)a(y, z) is nonnegative, we have 1−a(x, z)≥1−a(x, y)−a(y, z)−a(x, y)a(y, z). But the expression on the right hand side is equal to (1 −a(x, y))(1−a(y, z)), which is a positive real number. Hence, we showed that 1−a(x, z)≥(1−a(x, y))(1−a(y, z)). Taking the natural logarithm of both sides (and since the natural logarithm is a monotoni- cally increasing function, the inequality is preserved), and multiplying by -1, we obtain the inequality −log(1−a(x, z))≤ −log(1−a(x, y))−log(1−a(y, z)). This completes the proof of our assertion. 9 Let (X1, d1), . . . , (XN, dN) be metric spaces, where each of the distance functions d1, . . . , d N takes values in [0 ,1). For positive real numbers a1, . . . , a N, we define a function da1,...,aNon the product X1× ··· × XNby setting da1,...,aN(x,y) := 1 −NY i=1(1−di(xi, yi))ai, (3.3) where x= (x1, . . . , x N) and y= (y1, . . . , y N) are from X1× ··· × XN. Theorem 3.4. Let(a1, . . . , a N)be a list of real numbers such that ai≥1fori= 1, . . . , N . Letda1,...,aN:X1× ··· × XN→Rbe the function defined above. Then da1,...,aNis a metric that takes values in [0,1). Proof. The fact that da1,...,aNtakes values in [0 ,1) is easily verified. Additionally, the non- negativity, identity of indiscernibles, and the symmetry properties of da1,...,aNare straight- forward to verify. Thus, it remains to show that da1,...,aNsatisfies the triangle inequality. To this end, we define an auxiliary function Da1,...,aNby Da1,...,aN(x,y) =−log(1−da1,...,aN(x,y)). We will then show that Da1,...,aNis itself a metric, which will imply that da1,...,aNis also a metric. Notice that Da1,...,aNtakes nonnegative values since da1,...,aNtakes values in [0 ,1). Sim- ilarly to the definition of Da1,...,aN, we introduce auxiliary functions for each ( Xi, di),i= 1, . . . , N , as follows: Dai(xi, yi) :=−ailog(1−di(xi, yi)). It is easy to check, using Lemma 3.2, that Daiis a metric on Xi. Now, using the definition of da1,...,aN, we have 1−da1,...,aN(x,y) =NY i=1(1−di(xi, yi))ai. Taking the negative logarithm of both sides, we get −log(1−da1,...,aN(x,y)) =NX i=1−ailog(1−di(xi, yi)). Substituting the definitions of Da1,...,aNandDai, fori∈ {1, . . . , N }, we find that Da1,...,aN(x,y) =Da1(x1, y1) +···+DaN(xN, yN). (3.5) Since Da1, . . . , D aNare metrics, they satisfy the triangle inequality: Dai(xi, zi)≤Dai(xi, yi) +Dai(yi, zi) 10 fori∈ {1, . . . , N }. Adding these inequalities, we obtain NX i=1Dai(xi,
|
https://arxiv.org/abs/2505.06405v1
|
zi)≤NX i=1Dai(xi, yi) +NX i=1Dai(yi, zi). Using (3.5), this inequality becomes Da1,...,aN(x,z)≤Da1,...,aN(x,y) + Da1,...,aN(y,z), where x= (x1, . . . , x N),y= (y1, . . . , y N), and z= (z1, . . . , z N). This shows that Da1,...,aN satisfies the triangle inequality. The other metric properties of Da1,...,aN(non-negativity, identity of indiscernibles, and symmetry) are straightforward to verify. Hence, Da1,...,aNis a metric on X1× ··· × XN. We now apply the subadditive function f(t) = 1−e−t,t≥0, considered in Lemma 3.1, which establishes a relationship between Da1,...,aNtoda1,...,aNvia composition: da1,...,aN(x,y) =f(Da1,...,aN(x,y)). We notice also that f(t) is an increasing function. Therefore, since Da1,...,aNsatisfies the triangle inequality and f(t) is subadditive, the function da1,...,aNalso satisfies the triangle inequality. Therefore, da1,...,aNis a metric on X1× ··· × XN. We will strengthen Theorem 3.4 by relaxing boundaries. More precisely, we will allow the distance functions d1, . . . , d Ntake values in [0 ,1] instead of [0 ,1). Theorem 3.6. Let(a1, . . . , a N)be a list of real numbers such that ai≥1fori= 1, . . . , N . Letda1,...,aN:X1× ··· × XN→Rbe the function defined in (3.3). Then da1,...,aNis a metric that takes values in [0,1]. Proof. The fact that da1,...,aNtakes values in [0 ,1] is evident. The non-negativity, identity of indiscernibles, and symmetry properties of the function da1,...,aNare straightforward to check. It remains to prove that the function da1,...,aNsatisfies the triangle inequality. Letx= (x1, . . . , x N),y= (y1, . . . , y N), and z= (z1, . . . , z N) be three N-tuples from X1× ··· × XN. If none of the distances di(xi, yi), di(yi, zi), and di(xi, zi), where i= 1, . . . , N , is 1, then the proof of Theorem 3.4 shows that da1,...,aN(x,y)≤da1,...,aN(x,z) +da1,...,aN(z,y). We proceed with the assumption that, for some i∈ {1, . . . , N }, one of the numbers di(xi, yi), di(yi, zi), ordi(xi, zi) is 1. Case 1. We assume that di(xi, yi) = 1 for some i∈ {1, . . . , N }. We set i= 1 for simplicity. Under this assumption, we will use mathematical induction on Nto prove that da1,...,aNsatisfies the triangle inequality. We begin with the base case, where N= 1. Then our goal is to show the following inequality 1−(1−a)a1≤(1−(1−b)a1) + (1 −(1−c)a1), 11 where a:=d1(x1, y1),b=d1(x1, z1),c=d1(z1, y1) are such that a= 1, 1 ≤b+c, and 0≤b, c≤1. Equivalently, we will show that (1−b)a1+ (1−c)a1≤1, (3.7) where we have 1 ≤b+cand 0 ≤b, c≤1. But these constraints describe the upper right triangle, denoted by B, in the unit square in Figure 3.1. Since we are interested in the 0 11 (1,1) bc b+c= 1 AB Figure 3.1: The shaded triangle Bis where 0 ≤b, c≤1 and b+c≤1 hold. inequality (3.7), we apply the transformation ( b, c)7→(1−b,1−c) to triangle B. The image of Bunder this transformation is precisely the non-shaded triangle, denoted by A, in Figure 3.1. Clearly, the triangle Ais contained in the unit disc x2+y2≤1 inR2. This finishes
|
https://arxiv.org/abs/2505.06405v1
|
the proof of the base case of our induction. We now assume that our claim holds for N−1, and proceed to prove it for N. Notice the decomposition da1,...,aN(x,y) = 1−(1−d1(x1, y1))a1)| {z } α(x,y)+ (1−d1(x1, y1))a1)| {z } β(x,y) 1−NY i=2(1−di(xi, yi))ai! | {z } γ(x,y), (3.8) where α(x,y),β(x,y), and γ(x,y) are nonnegative numbers. In fact, under our assumptions, we have α(x,y) = 1, β(x,y) = 0, and γ(x,y)≥0. We have similar decompositions for da1,...,aN(x,z) and da1,...,aN(z,y). Our goal is to prove the triangle inequality, 1≤(α(x,z) +β(x,z)γ(x,z)) + ( α(z,y) +β(z,y)γ(z,y)). (3.9) But we already proved in the base case earlier that the inequality 1 ≤α(x,z)+α(z,y) holds. Since β(x,z)γ(x,z) +β(z,y)γ(z,y) is nonnegative, the inequality in (3.9) follows. This finishes the proof of our assertion that if d1(x1, y1) = 1, then da1,...,aN(x,y)≤da1,...,aN(x,z)+ da1,...,aN(z,y) holds. Case 2. We assume that one of the numbers di(xi, zi) ordi(zi, yi) is 1 for some i∈ {1, . . . , N }. Since there is no difference between our arguments for di(xi, zi) = 1 or di(xi, zi) = 12 1, we proceed with the assumption that d1(x1, z1) = 1. Hence, we know that da1,...,aN(x,z) = 1. But we already pointed out at the beginning of our proof that da1,...,aNtakes values in [0,1]. Therefore, the inequality da1,...,aN(x,y)≤1 +da1,...,aN(z,y) holds. This finishes the proof of our theorem. We are now ready to prove the main result which we restate here for the sake of conve- nience. Let ( X1, d1), . . . , (XN, dN) be metric spaces whose distance functions di,i= 1, . . . , N take values in the interval [0 ,1]. Let Xdenote the set theoretic product X1× ··· × XN. LetGbe a directed graph with vertex set {1, . . . , N }. If every edge ( i, j) ofGis assigned a probability score pij∈(0,1], then the mapping dX,G,P(g,h) := 1−1 NNX j=1NY i=1[1−di(gi, hi)]1 pji! (3.10) forg= (g1, . . . , g N) and h= (h1, . . . , h N) defines a metric on X. Proof of the Main Theorem. Forj= 1, . . . , N , letdP,jdenote the function defined by dP,j(x,y) := 1 −Y i∈[N](1−di(xi, yi))1/pji, where x= (x1, . . . , x N) and y= (y1, . . . , y N) are from X1× ··· × XN. Since pij∈(0,1], we have 1 /pji≥1. It is easy to check that dP,j(x,y)∈[0,1]. We notice also that dX,G,P(x,y) =1 NNX j=1dP,j(x,y). Hence, the proof will follow once we show that each of the functions dP,j, where i∈ ⟨supp( x,y)⟩, satisfies the triangle inequality. But this is a consequence of our Theorem 3.4 thatdP,jis a metric on the appropriate subproduct of X1× ··· × XN. Hence, it satisfies the triangle inequality where it is defined. This finishes the proof of our theorem. 4 Proof of Corollary 1.1 Let ( X1, d1), . . . , (XN, dN) be a list of metric spaces as before. We notice that the metric dX,G,Pcan be equivalently defined as follows dX,G,P(x,y) =X j∈⟨supp( x,y)⟩ 1−Y
|
https://arxiv.org/abs/2505.06405v1
|
i∈[N](1−di(xi, yi))1/pji , (4.1) where x= (x1, . . . , x N) and y= (y1, . . . , y N) are from X. 13 We are now ready to prove our Corollary 1.1. Let us recall its statement for convenience. We assume that the underlying sets in the main result, X1, . . . , X Nare finite and that the corresponding metrics d1, . . . , d Ntake values in {0,1}. Then for every g= (g1, . . . , g N)∈ X andh= (h1, . . . , h N)∈ X, we have dX,G,P(g,h) =|⟨supp( g,h)⟩|, where |.|denotes the size of the set and supp( g,h) ={j∈ {1, . . . , N } |gj̸=hj}, with ⟨supp( g,h)⟩being the set of indices i∈ {1, . . . , N }such that there is a directed path from ito an element of supp( g,h). Proof of Corollary 1.1. Since dX,G,Pis defined by dX,G,P(x,y) :=N−NX j=1NY i=1, pji̸=0(1−di(xi, yi))1/pji, and since we have X j∈⟨supp( x,y)⟩ 1−Y i∈[N](1−di(xi, yi))1/pji = |⟨supp( x,y)⟩| −X j∈⟨supp( x,y)⟩Y i∈[N](1−di(xi, yi))1/pji, it suffices to show that N− |⟨supp( x,y)⟩|=NX j=1NY i=1(1−di(xi, yi))1/pji−X j∈⟨supp( x,y)⟩Y i∈[N](1−di(xi, yi))1/pji.(4.2) Let us focus on the summands of the first summation on the right. For j∈ ⟨supp( x,y)⟩, we can re-express it as follows: NY i=1(1−di(xi, yi))1/pji=Y i∈[N], pji̸=0(1−di(xi, yi))1/pji. This term cancels with the term in the corresponding summand of the second summation on the right. Hence, the right hand side of (4.2) has a simpler expression in the form X j /∈⟨supp( x,y)⟩NY i=1(1−di(xi, yi))1/pji. (4.3) In fact, we notice that the products in (4.3) can be taken over i∈[N] where pji>0. We are now ready to compare the number on the left hand side of (4.2) and (4.3). Since 14 N− |⟨supp( x,y)⟩|=|{1, . . . , N } \ ⟨supp( x,y)⟩|, the left hand side of (4.2) gives the count ofj∈[N] such that there is no path from jto an element of supp( x,y). At the same time, the summation in (4.3) is taken over all such j’s. Let i∈[N] be such that pji̸= 0. Notice in this case, if it were true that i∈ ⟨supp( x,y)⟩, then we would have j∈ ⟨supp( x,y)⟩, which is absurd. Hence, we know that i /∈ ⟨supp( x,y)⟩showing that di(xi, yi) = 0. It follows that for every j∈[N] such that j /∈ ⟨supp( x,y)⟩, we haveQ i∈[N](1−di(xi, yi))1/pji= 1. We conclude from these counts that the equality in (4.2) holds true. This finishes the proof of the first part of our result. It remains to show that if the component metrics d1, . . . , d Ntake values in {0,1}, then we have dX,G,P(x,y) =|⟨supp( x,y)⟩|/N. To this end we use the arguments of the previous paragraph and the formula (4.1) we just proved. Let j∈ ⟨supp( x,y)⟩. This means that there exists i∈supp( x,y) such that pji>0. Hence, the productQ i∈[N](1−di(xi, yi))1/pji is equal to 1. Therefore, the right hand side of (4.1) is equal toP j∈⟨supp( x,y)⟩1, which is exactly the number |⟨supp(
|
https://arxiv.org/abs/2505.06405v1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.