have been previously considered in the literature. To motivate our proposal, we return to the key idea, mentioned in Section 1, that our regularization should be such that the bias term is linear in the regularization parameter. We develop this idea for the simplest linear program: given a vector c with positive coordinates, consider

min_{x∈R^m} ⟨c,x⟩  s.t. x ≥ 0,   (18)

the optimal solution to which is clearly x⋆ = 0. We aim to design a regularized version of the same problem:

min_{x∈R^m} f_r(x),   (19)

where f_r(x) = ⟨c,x⟩ + ψ_r(x) for a strictly convex function ψ_r to be chosen. First-order optimality conditions imply that the solution x_r to (19) satisfies c + ∇ψ_r(x_r) = 0, leading to the explicit expression

x_r = (∇ψ_r)^{-1}(−c),   (20)

where the invertibility of ∇ψ_r is guaranteed by the strict convexity of ψ_r. Since ψ_r is convex, its inverse gradient satisfies (∇ψ_r)^{-1} = ∇ϕ_r, where we have defined ϕ_r = ψ_r^* to be the convex conjugate of ψ_r. The desired linearity of the bias term is therefore equivalent to the requirement that, as r → 0,

x_r − x⋆ = ∇ϕ_r(−c) = r d⋆ + o(r),   (21)

where d⋆ is some vector depending on c but not on r. The simplest means of guaranteeing this expansion is to assume that ϕ_r is a linear function of r; that is, that

ϕ_r(−c) = r φ(c)   (22)

for some fixed convex function φ, so that (21) holds with d⋆ = −∇φ(c). Since convex duality implies that ψ_r = ϕ_r^*, we therefore obtain

ψ_r(x) = sup_y ⟨x,y⟩ − ϕ_r(y) = sup_y ⟨x,y⟩ − r φ(−y) = r sup_y ⟨−x/r, y⟩ − φ(y) = r φ^*(−x/r).

The above ansatz is the heart of our approach. We will assume that our regularizer takes the form r φ^*(−x/r) for a convex function φ. For computational simplicity, we will further assume that φ is separable, so that φ(y) = Σ_{i=1}^m q(y_i) for some univariate convex q. Under these assumptions, the objective of our regularized program (11) ultimately reads

x(r,b) = arg min_{x∈R^m} f_r(x)  s.t. Ax = b,  where  f_r(x) := ⟨c,x⟩ + r Σ_{i=1}^m p(−x_i/r),   (23)

for a convex function p := q^*. This is the form of regularization that we study in what follows.

Remark 1.
Prior work on regularization for optimal transport (Klatt et al., 2020) has focused almost exclusively on regularizers of the form λφ(π) for some convex φ, as in (6). Recalling (22), we observe that the regularizations we study are in fact the convex conjugates of those typically considered in the optimal transport literature. Our results reveal that the effect of this "dual regularization" is very different from that of standard regularizers, and necessitates the development of new proof techniques. In fact, our new techniques also yield insight into traditional regularizers such as the entropy function; see Proposition 2.1.

3 Technical preliminaries

Throughout the paper, we make certain mild assumptions on the random vector b_n, the linear program, and the choice of penalty function. We detail these assumptions below.

3.1 Assumption on the random vector

Assumption 1. The limiting random variable in (9) satisfies P(G = 0) = 0.

This assumption is automatically satisfied in our primary setting of interest, when G is a non-degenerate Gaussian. In particular, Assumption 1 guarantees that q_n is the proper scaling for convergence of b_n to b.

3.2 Assumptions on the linear program

Assumption 2. The linear program satisfies the following conditions:
1. The solution to (8) exists and is unique.
2. (Slater's condition) The feasible region D = {x : Ax = b, x ≥ 0} of (8) has non-empty relative interior, and there exists x ∈ relint D such that x > 0.
3. The constraint matrix A in (8) has full row rank.

These assumptions are not restrictive and are natural in this setting. The assumption that (8) has a unique solution is true of most linear programs that arise in practice¹ and is necessary to formulate standard distributional limit results for the stochastic analogue of (8). Likewise, Slater's condition guarantees that the solution to (8) remains feasible under small perturbations of b, which guarantees that (12) is well defined. Finally, the assumption that A has full row rank is without loss of generality, as redundant constraints can always be removed without changing the program.

3.3 Valid penalty functions

The penalty function p, which we always assume to be convex and lower semi-continuous, plays a central role in this work. As Section 2.1 makes clear, it is natural to make assumptions on q, where p = q^*.

Assumption 3. The penalty function p : R → R ∪ {+∞} is given by p = q^*, where q : (0,∞) → R is twice differentiable, has locally Lipschitz and strictly positive second derivative, and satisfies

lim_{x→0} q′(x) = −∞,  lim_{x→∞} q′(x) ≥ 0.

In words, we require that q is a strictly convex barrier function for the set (0,∞). As we shall see, this property of q implies that the dual of (11) can be written without inequality constraints, which significantly simplifies the analysis. This assumption implies the following properties of p.

Lemma 3.1. Under Assumption 3, the following properties hold:
1. The domain of p contains (−∞, 0).
2. p′(x) is unbounded above.
3. lim_{x→−∞} p′(x) = 0.
4. p′′ is strictly positive and locally Lipschitz on its domain.

¹Indeed, uniqueness holds for all but a measure-zero set of costs c.

https://arxiv.org/abs/2505.04312v1
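As a concrete check of the linear-bias expansion (21): for the exponential penalty p(x) = exp(x), the toy program (19) with ψ_r(x) = r Σ_i exp(−x_i/r) has first-order conditions c_i = exp(−x_i/r), so x_r = −r log c exactly, and the bias is linear in r with d⋆ = −log c. The following minimal sketch verifies this numerically; it assumes NumPy and SciPy are available, and the cost vector c is a hypothetical example.

```python
import numpy as np
from scipy.optimize import minimize

# Toy program (19) with the exponential penalty: psi_r(x) = r * sum_i exp(-x_i / r).
c = np.array([0.5, 1.0, 2.0])   # hypothetical positive costs; x* = 0 in (18)

def solve_penalized(r):
    f = lambda x: c @ x + r * np.sum(np.exp(-x / r))   # f_r(x) = <c,x> + psi_r(x)
    jac = lambda x: c - np.exp(-x / r)                 # exact gradient
    return minimize(f, np.zeros_like(c), jac=jac, method="BFGS").x

for r in [0.1, 0.01]:
    x_r = solve_penalized(r)
    # first-order optimality gives x_r = -r * log(c), so x_r / r = d* = -log(c)
    print(r, x_r / r)
```

Here the bias is not merely asymptotically linear in r but exactly linear, which is the idealized behavior the ansatz (22) is designed to capture.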
In the following sections, we will need to choose the sequence of penalty coefficients r_n based on how fast the derivative of the penalty function decays to zero at −∞. To facilitate the discussion, we therefore define the decay rate of a function at −∞:

Definition 1 (Decay rate of a function at −∞). Let β : R → R be any continuous function such that lim_{r→0} β(r) = 0. We say that a function f decays to zero at −∞ at rate β(r) if

lim_{r↓0} f(−1/r)/β(r) < ∞.   (24)

Table 1 lists several examples of penalty functions that satisfy the assumptions: the log-barrier function, inverse polynomial functions, the smoothed quadratic penalty function, and the exponential function. The admissibility of the log-barrier function is particularly notable, since penalties of this type are already widely used in interior point methods for linear programming (Boyd and Vandenberghe, 2004).

Name | p(x) | Domain | Decay rate of p′(x)
log-barrier function | p(x) = −ln(−x) | (−∞, 0) | r
inverse polynomial functions | p(x) = 1/|x|^α, α > 0 | (−∞, 0) | r^{α+1}
smoothed quadratic penalty function | p(x) = (log(1 + exp(x)))² | (−∞, ∞) | any polynomial of r
exponential function | p(x) = exp(x) | (−∞, ∞) | any polynomial of r

Figure 1: Examples of proper penalty functions (table on the left) and plot of these functions (figure on the right): log-barrier function (blue), inverse polynomial function with α = 1 (yellow), smoothed quadratic penalty function (green), exponential function (red).

4 Convergence of trajectory of the penalized linear program

In this section, we focus on the "population"
version of the penalized linear program, with b_n = b. Our goal is to precisely characterize the bias induced by the regularization term as a stepping stone to (13). Specifically, we study the relationship between (8) and (11) as r → 0, and the main result of this section is that x(r,b) − x⋆ is asymptotically linear in r. This extends a result of Cominetti and Martin (1994), who showed an analogous claim in the special case of the exponential penalty p(x) = exp(x). An important feature of our result, as compared with that of Cominetti and Martin (1994), is that we identify the magnitude of the error term, and show that it depends on the rate of decay of p′ at −∞ given by Definition 1.

We first show that the solution to (11) is unique.

Proposition 4.1. Under Assumptions 2 and 3, the penalized program (23) has a unique solution, which satisfies x(r,b) ∈ int dom f_r.

The following theorem provides a first-order expansion of the solution to the penalized program (11) around the solution x⋆ to (8) of the form x(r,b) = x⋆ + r d⋆ + o(r), where d⋆ is the solution to an auxiliary optimization problem. Denote by I_0 := {i ∈ [m] | x⋆_i = 0} the set of zero entries in the optimal solution x⋆.

Theorem 4.2 (Convergence of trajectory of the penalized program). Under Assumptions 2 and 3, the solution to (11) can be written as

x(r,b) = x⋆ + r d⋆ + O(r β(r))  as r → 0,   (25)

where x⋆ is the true optimal LP solution, β(r) is the decay rate of p′ at −∞, and d⋆ is the unique solution to

min_{d∈R^m} ⟨c,d⟩ + Σ_{i∈I_0} p(−d_i)  s.t. Ad = 0.   (26)

The proof is based on the following observation. Write d(r) = (x(r,b) − x⋆)/r. Then, since x(r,b) minimizes (11) and Ax⋆ = b, we obtain that

d(r) = arg min_{d∈R^m} g_r(d)  s.t. Ad = 0,   (27)

where we define

g_r(d) = ⟨c,d⟩ + Σ_{i=1}^m p(−x⋆_i/r − d_i).   (28)

Suppose that we adopt the additional assumption that p(−x) → 0 as x → ∞. Then lim_{r→0} g_r(d) = ⟨c,d⟩ + Σ_{i∈I_0} p(−d_i). Therefore, comparing (27) with (26), we observe that (26) is the r → 0 limit of (27). This suggests that d(r) → d⋆, so that x(r,b) = x⋆ + r d⋆ + o(r).
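To see Theorem 4.2 at work on a small hypothetical instance (not from the paper): take m = 2, c = (1, 1/2), and the single constraint x₁ + x₂ = 1, so that x⋆ = (0, 1) and I_0 = {1}. With the exponential penalty, the auxiliary program (26) reduces to minimizing 0.5 d₁ + exp(−d₁) over d₁ (with d₂ = −d₁), giving d⋆ = (log 2, −log 2). A sketch assuming SciPy:

```python
import numpy as np
from scipy.optimize import minimize

# Toy LP: min x1 + 0.5*x2  s.t.  x1 + x2 = 1, x >= 0  =>  x* = (0, 1), I0 = {1}.
c = np.array([1.0, 0.5])
x_star = np.array([0.0, 1.0])

def x_of_r(r):
    # penalized program (23) with the exponential penalty p(x) = exp(x)
    f = lambda x: c @ x + r * np.sum(np.exp(-x / r))
    jac = lambda x: c - np.exp(-x / r)
    cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]
    return minimize(f, x_star + 0.1, jac=jac, constraints=cons, method="SLSQP").x

d_star = np.array([np.log(2), -np.log(2)])   # solution to the auxiliary program (26)
for r in [0.1, 0.01]:
    print(r, (x_of_r(r) - x_star) / r)       # approaches d* as r decreases
```

The printed ratios approach (log 2, −log 2) ≈ (0.693, −0.693), matching the limit program (26).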
The full proof of this theorem in Appendix B.2 shows that this intuition holds even if p(x) ̸→ 0 as x → ∞, and shows that the error term depends on the behavior of p′ at −∞, quantified by the decay rate β.

5 Asymptotic expansion of the solution to the random penalized program

The goal of this section is to extend Theorem 4.2 to the setting where the vector b is replaced by a random counterpart b_n. The following theorem is deterministic, and gives an asymptotic expansion for the solution x(r,b′), where b′ is a small perturbation of b such that b′ → b as r → 0. We then use this result to obtain a limit law for x(r_n, b_n), where b_n satisfies (9) and r_n is a sequence of penalization parameters with prescribed decay. We first establish that the penalized program remains feasible under small perturbations.

Proposition 5.1. There exists a neighborhood U of b such that for any r > 0, the penalized program (11) is feasible and has a unique solution when b is replaced by any b′ ∈ U.

To state our result, we need another auxiliary program. Define Σ ∈ R^{m×m} to be a diagonal matrix satisfying

Σ_ii = p′′(−d⋆_i) 1{i ∈ I_0},   (29)

where d⋆ and I_0 are defined in (26). Consider
the quadratic program

min_{x∈R^m} x^⊤ Σ x  s.t. Ax = y.   (30)

Proposition 5.2. There exists a unique solution to (30), which is a linear function of y and can therefore be written M⋆y for some M⋆ ∈ R^{m×k}. Moreover, the matrix M⋆ has the following properties:
• M⋆ is a right-inverse of A, i.e., AM⋆ = I_k.
• ker(A) ⊆ ker(M⋆⊤ Σ).

The following result is our main technical result: a rigorous version of the asymptotic expansion promised in (13).

Theorem 5.3. Under Assumptions 2 and 3, if r β(r) ≪ ∥b − b′∥ ≪ r, then

x(r,b′) = x⋆ + r d⋆ + M⋆(b′ − b) + O(∥b′ − b∥²/r + r β(r))  as r → 0,   (31)

where β(r) and d⋆ are as in Theorem 4.2 and M⋆ is defined as in Proposition 5.2.

Comparing Theorem 5.3 to Theorem 4.2, we see that x(r,b′) has the same first-order "bias" term r d⋆ as x(r,b), and that the second-order term is asymptotically linear in the perturbation b′ − b. In particular, if b′ − b is asymptotically Gaussian, then the fluctuations of x(r,b′) will have the same property.

Like Theorem 4.2, Theorem 5.3 is based on analysis of (27). If we write g(d) = lim_{r→0} g_r(d), for g_r defined in (28), then the matrix Σ is the Hessian of g at d⋆. It is therefore natural to conjecture that solutions to perturbations of the equation (26) around d⋆ will be minimizers of a quadratic form involving Σ. The proof of Theorem 5.3 establishes a quantitative form of this argument.

Remark 2. To be explicit, Theorem 5.3 holds in the asymptotic framework where r → 0 and b′ → b simultaneously, at a specified rate. The requirement that ∥b − b′∥ ≪ r is necessary for the expansion in (31) to hold. Indeed, following the proof of Theorem 4.2, it can be shown that if r ≪ ∥b − b′∥, the solution to the penalized program can be written as

x(r,b′) = x⋆_{b′} + r d⋆_{b′} + O(r β(r)),   (32)

where x⋆_{b′} is the solution to the linear program with equality constraints Ax = b′ and d⋆_{b′} is the solution to (26) with I_0 replaced by the set of zero entries of x⋆_{b′}.
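Concretely, M⋆ can be computed by solving the KKT system of (30): stationarity 2Σx = A^⊤λ together with the feasibility condition Ax = y. The sketch below uses a hypothetical instance with A = [1 1] and Σ = diag(1/2, 0) (so one coordinate lies in I_0) and checks the right-inverse property from Proposition 5.2; it assumes NumPy.

```python
import numpy as np

# KKT sketch for the quadratic program (30): min x^T Sigma x  s.t.  A x = y.
# Hypothetical instance: A = [1 1], Sigma = diag(1/2, 0).
A = np.array([[1.0, 1.0]])
Sigma = np.diag([0.5, 0.0])
m, k = 2, 1

# Stationarity 2*Sigma*x = A^T lam and feasibility A x = y, stacked as one linear system.
K = np.block([[2 * Sigma, -A.T], [A, np.zeros((k, k))]])

def M_star(y):
    sol = np.linalg.solve(K, np.concatenate([np.zeros(m), y]))
    return sol[:m]            # the x-block of the KKT solution, i.e. M* y

y = np.array([1.0])
x = M_star(y)
print(x, A @ x)               # x = (0, 1); A @ x = y, so M* acts as a right-inverse of A
```

The perturbation y is absorbed entirely by the coordinate outside I_0, which is exactly the behavior the quadratic form x^⊤Σx encodes.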
Prior work (Klatt et al., 2022; Liu et al., 2023) shows that x⋆_{b′} is asymptotically non-linear in b′ − b, so no expansion similar to (31) holds in this regime.

With Theorem 5.3 in hand, we can obtain a distributional limit result for x(r_n, b_n) under the assumption that b_n is random. In light of Remark 2, the following assumption is natural.

Assumption 4. The sequence r_n is chosen to satisfy the relation r_n β(r_n) ≪ q_n^{-1} ≪ r_n, where β(r) is the decay rate of p′ at −∞.

Since b_n − b = O_p(q_n^{-1}) and Theorem 4.2 shows that the effect of the penalty term is proportional to r, this assumption is equivalent to requiring that the random fluctuations in b_n are smaller than the penalty term but larger than the residual term, which has order O(r β(r)). Under this assumption, the following is a direct corollary of Theorem 5.3.

Theorem 5.4. Under Assumptions 2, 3 and 4,

q_n (x(r_n, b_n) − r_n d⋆ − x⋆) →^d M⋆G,   (33)

where d⋆ and M⋆ are as in Theorem 5.3.

Note that the limit law in Theorem 5.4 involves the bias term r_n d⋆, which depends on the true linear program solution x⋆. In Section 6, we introduce a debiasing procedure which removes this additional term.

6 Debiasing and distributional limits of random linear program solutions with penalties

In this section, we exploit the linearity of the leading-order bias term to construct
an asymptotically unbiased estimator of x⋆. Our main insight is that the linearity of the bias makes it possible to construct a linear combination of the solutions to (12) with two different values of r_n so that the term involving d⋆ exactly cancels. Indeed, the following theorem is immediate from Theorem 5.4.

Theorem 6.1. Let x(r_n, b_n) and x(r_n/2, b_n) be the solutions to the problem (12) with penalties r_n and r_n/2, respectively. Define ˆx_n as

ˆx_n = 2 x(r_n/2, b_n) − x(r_n, b_n).   (34)

Then under Assumptions 2, 3 and 4,

q_n (ˆx_n − x⋆) →^d M⋆G,   (35)

where M⋆ is as in Theorem 5.3.

In addition to being asymptotically unbiased, the estimator ˆx_n sometimes gives rise to estimates of the objective value that converge strictly faster than the plug-in estimate. The following result develops this phenomenon in the particular context of optimal transport problems when s = t. (See Corollary B.8.1 in the appendix for the extension to general linear programs.)

Corollary 6.1.1 (Special case of Corollary B.8.1). Consider the discrete optimal transport problem (1). Suppose that s = t, c_{i,i} = 0, c_{i,j} > 0 for i ̸= j, and c = c^⊤. Let s_n and t_n be such that √n(s_n − s) and √n(t_n − t) converge to non-degenerate random variables. Let ˆπ_n be the debiased estimator constructed as in Theorem 6.1 with the exponential penalty function p(x) = exp(x). Then

√n (⟨c, ˆπ_n⟩ − ⟨c, π⋆⟩) →^d 0.   (36)

The fast convergence identified in Corollary 6.1.1 occurs at the "null", when the two marginal distributions are equal to each other. The Sinkhorn divergence (Genevay et al., 2018), a loss connected to entropic penalization, is known to evince a similar phenomenon (Bigot et al., 2019, Theorem 2.7).

7 Bootstrap

As emphasized above, the fact that the unique solution x⋆ to (8) is not a smooth function of b (when viewing A and c as fixed) is the source of the asymptotic bias in the plug-in estimator x_n obtained by solving (8) with b replaced by its empirical counterpart b_n. More precisely, Klatt et al.
(2022) show that x⋆ is in general only directionally Hadamard differentiable with respect to b. An implication of this fact is that the naïve bootstrap is not consistent (Dümbgen, 1993; Fang and Santos, 2018) and does not lead to asymptotically valid inference for x⋆ when applied with the plug-in estimator. However, we show in this section that our debiased estimator from Theorem 6.1 avoids this problem and can be bootstrapped without issue. The resulting procedure is a practical and fully data-driven method for asymptotic inference on x⋆.

To be more precise, we consider the setting where the random vector b_n is an average of n i.i.d. vectors Z_1, …, Z_n with expectation b. In particular, this is the case for the discrete optimal transport problem. We denote by ˜b_n a bootstrap copy of b_n obtained by resampling with replacement from {Z_1, …, Z_n}. Finally, we form the estimator ˆx_n as in Theorem 6.1 from b_n, and use the same procedure to obtain a bootstrap version ˜x_n computed with ˜b_n in place of b_n. The following theorem shows that ˜x_n is consistent for inference on x⋆.

Theorem 7.1. Under the same assumptions as in Theorem 5.4, let ˆx_n and ˜x_n be the estimators defined in Theorem 6.1 with vectors b_n and ˜b_n, respectively. Then

sup_{h∈BL₁(R^k)} | E[ h(√n(˜x_n − ˆx_n)) | Z_1, …, Z_n ] − E[ h(√n(ˆx_n − x⋆)) ] | →^p 0,   (37)

where
BL₁(R^k) denotes the set of bounded Lipschitz functions on R^k whose L∞ norm and Lipschitz constant are both bounded by 1.

As is well known, such conditional weak convergence results lead to consistency for quantile estimators and confidence sets obtained from ˜x_n. For example, the following result is a direct application of (van der Vaart, 1998, Lemma 23.3), showing that bootstrap estimates readily yield valid confidence intervals for arbitrary coordinates of the optimal solution. For any i ∈ [m] and t ∈ (0,1), let F̂^{-1}_{n,i}(t) be the corresponding quantile of √n((˜x_n)_i − (ˆx_n)_i) given Z_1, …, Z_n.

Corollary 7.1.1. Under the same assumptions as Theorem 7.1, for any α ∈ (0,1),

lim inf_{n→∞} P( (ˆx_n)_i − F̂^{-1}_{n,i}(1 − α/2)/√n ≤ x⋆_i ≤ (ˆx_n)_i − F̂^{-1}_{n,i}(α/2)/√n ) = 1 − α.   (38)

8 Simulation results for the optimal transport problem

A central application of our work is to providing asymptotically unbiased (and asymptotically Gaussian) estimators for the optimal plan in the discrete optimal transport problem. Motivated by this application, we perform simulations to assess the performance of our estimator on optimal transport problems, mimicking an experimental setup for the entropic penalty due to Klatt et al. (2020).

8.1 The 2×2 OT problem revisited

We consider a 2-by-2 optimal transport problem:

w⋆ = min_{π∈R^{2×2}} π₁₂ + 2π₂₁  s.t. π1 = t, π^⊤1 = s, π ≥ 0,   (39)

with t = s = (1/2, 1/2)^⊤ and the unique solution

π⋆ = [ 1/2  0 ; 0  1/2 ],

in the statistical setting where t and s are only available via empirical frequencies t_n and s_n. This is the same setting as in (3), though we have used an asymmetric cost to avoid the degenerate limiting situation described in Corollary 6.1.1. It is easy to check that the asymptotic characterization of the plug-in estimator given in Proposition 1.1 holds for this asymmetric cost as well. In particular, the limiting distribution of π_n is non-Gaussian.
Our approach in this paper suggests solving a penalized optimal transport problem:

π((t_n, s_n), r_n) = arg min_{π∈R^{2×2}} π₁₂ + 2π₂₁ + r_n Σ_{i,j=1}^2 p(−π_{i,j}/r_n)  s.t. π1 = t_n, π^⊤1 = s_n,   (40)

and using ˆπ_n = 2π((t_n, s_n), r_n/2) − π((t_n, s_n), r_n) instead of π_n as an estimator for π⋆.

We first consider the case of the log penalty p(x) = −log(−x). Since p′ decays at the rate β(r) = r at −∞, Theorems 5.3 and 6.1 imply that as long as the sequence r_n satisfies r_n² ≪ n^{-1/2} ≪ r_n, then

ˆπ_n = π⋆ + M⋆((t_n, s_n) − (t, s)) + O_p(max(r_n², 1/(n r_n))).   (41)

The error term is minimized when r_n is of the order n^{-1/3}, so we choose r_n = r_0/n^{1/3} for a positive constant r_0, and we write ˆπ_{n,r_0} to emphasize that the estimator depends on the choice of r_0. By Theorem 6.1, the estimator of the transportation cost ˆw_{n,r_0} := ⟨c, ˆπ_{n,r_0}⟩ satisfies

√n(ˆw_{n,r_0} − w⋆) →^d ⟨c, M⋆(G_t, G_s)⟩ =: G_w,   (42)

where w⋆ = ⟨c, π⋆⟩ and √n((t_n, s_n) − (t, s)) →^d (G_t, G_s). Using (2) and the definition of M⋆ in (30), one can show that G_w ∼ N(0, 1/8).

We conduct experiments with varying n and varying r_0 and compute the rescaled cost

∆w_{n,r_0} := √(n/var(G_w)) (ˆw_{n,r_0} − w⋆)   (43)

for 1000 independent trials. Our theoretical results indicate that ˆw_{n,r_0} converges to w⋆ and that ∆w_{n,r_0} converges in distribution to a standard Gaussian. To assess this convergence, we plot both the mean squared error ∥ˆw_{n,r_0} − w⋆∥² and the Kolmogorov–Smirnov distance between the distribution of ∆w_{n,r_0} and the standard Gaussian (Figure 2a, top two rows). The simulation reveals that the distributional behavior of the estimator is sensitive to the choice of
r_0, with substantial deviations from Gaussianity when r_0 is too small or too large. This phenomenon can be understood by recalling the condition r_n² ≪ n^{-1/2} ≪ r_n or, equivalently, n^{-1/2} ≪ r_n ≪ n^{-1/4}. When n is small, the window of valid penalization parameters is relatively narrow, implying that r_0 should be chosen with caution. By contrast, with the exponential penalty p(x) = exp(x), since β(r) in this case can be any polynomial in r, the range of admissible values of r_n is much wider. Performing the same experiment with the exponential penalty (Figure 2b, top two rows) shows that it is much less sensitive to the choice of r_0, with good distributional convergence as soon as r_0 is large enough. In the bottom two rows, we fix r_0 = 1 and assess the qualitative convergence of ∆w_{n,r_0} to a standard Gaussian for small and large sample sizes. Overall, the performance of the exponential penalty function appears more reliable.

8.2 Large-scale simulation

To illustrate the finite-sample behavior of our estimator in high-dimensional settings, we employ our method on the same example as in (Klatt et al., 2020, Section 5). Specifically, we examine an optimal transportation problem between two measures supported on an L×L equispaced grid over [0,1]×[0,1], where the transportation cost is proportional to the Euclidean distance between grid points. We take L = 10, so that the marginal distributions each have support size 10² and the optimal plan is an element of R^{10²×10²}.

For our first simulation, we draw t, s independently and uniformly from the simplex ∆_{L×L}, then generate empirical distributions t_n and s_n using n i.i.d. samples from the two measures, and then compute our estimator of the optimal plan ˆπ_{n,r_0} and estimator of the cost ˆw_{n,r_0} using the exponential penalty function with coefficient r_n = r_0/(L⁴ n^{1/3}), for r_0 a positive constant. Our theoretical results indicate that the rescaled cost ∆w_{n,r_0} defined in (43) converges to a standard Gaussian.
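To make the 2×2 construction (39)-(40) concrete: with exact marginals t = s = (1/2, 1/2) (a simplification with no sampling noise), feasibility forces π = [[a, 1/2 − a], [1/2 − a, a]], so the penalized program is one-dimensional in a. The sketch below, assuming SciPy, solves it with the log penalty p(x) = −log(−x) and applies the extrapolation of Theorem 6.1; for this instance the off-diagonal bias is roughly (2/3) r.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# 2x2 penalized OT (40) with exact marginals t = s = (1/2, 1/2):
# the plan must be pi = [[a, 1/2 - a], [1/2 - a, a]] for a in (0, 1/2).
def a_of(r):
    # cost pi12 + 2*pi21 = 3*(1/2 - a); penalty r*p(-pi_ij/r) = -r*log(pi_ij/r)
    h = lambda a: 3 * (0.5 - a) - 2 * r * (np.log(a / r) + np.log((0.5 - a) / r))
    return minimize_scalar(h, bounds=(1e-9, 0.5 - 1e-9), method="bounded",
                           options={"xatol": 1e-10}).x

r = 0.01
a_deb = 2 * a_of(r / 2) - a_of(r)     # debiased diagonal entry, as in Theorem 6.1
print(0.5 - a_of(r), 0.5 - a_deb)     # raw bias ~ (2/3)*r; debiased bias is O(r^2)
```

The raw solution undershoots the diagonal of π⋆ by an amount linear in r, while the extrapolated combination cancels that linear term, mirroring (34).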
We took 2000 independent replicates with various sample sizes and choices of r_0. The results are plotted in Figure 3.

We also consider the case where t = s. As discussed in Corollary 6.1.1, with the exponential penalty function, we anticipate that √n(ˆw_{n,r_0} − w⋆) will converge in probability to 0. Since higher-order asymptotics in this case lie beyond the scope of this paper, we simply visualize the speed of convergence. In this experiment, we take r_n = r_0/(L⁴ n^{1/4}) with r_0 = 1. We took 2000 independent replicates of the empirical cost ˆw_{n,r_0} and plot a kernel density estimate of √n(ˆw_{n,r_0} − w⋆) for n = 5000 and n = 10⁹. We also plot the mean squared error of ˆw_{n,r_0} as a function of n on a log–log plot and estimate the rate of decay via linear regression. The results are plotted in Figure 4. The experiment shows that E(ˆw_{n,1} − w⋆)² = o(n^{-1}), which is consistent with the finding that ˆw_{n,r_0} − w⋆ = o_p(n^{-1/2}).

Figure 2: Comparison between the log and exponential penalty functions. Subfigures (a) and (b) show the same experiment conducted with the logarithmic and exponential penalties, respectively, with 1000 independent replicates in each case. The first row plots the mean squared error E∥ˆπ_{n,r_0} − π⋆∥² as the sample size n and regularization parameter r_0 vary. The second shows the Kolmogorov–Smirnov distance between ∆w_{n,r_0} and the standard Gaussian in the same setting. In the last two rows, we fix r_0 = 1 and consider a small sample size n = 10² and a large sample size n = 10⁶. The dashed line is a kernel density estimate of the density of ∆w_{n,r_0} and the solid line is the standard Gaussian density as a reference. To the right of each density plot is the corresponding Q–Q plot with a 45-degree reference line in red.

Figure 3: Large-scale simulation: optimal transportation between two independent Dirichlet distributions on an equispaced 10×10 grid (2000 independent replicates in each case). With varying regularization parameter r_0 and sample size n, the left two plots show the mean squared error of the estimated plan, E∥ˆπ_{n,r_0} − π⋆∥², and the Kolmogorov–Smirnov distance between ∆w_{n,r_0} and the standard Gaussian. The middle two plots compare the standard Gaussian density (solid lines) with a kernel density estimate of ∆w_{n,r_0} (dashed lines) for r_0 = 1 and n = 200, 5000, respectively, with the corresponding Q–Q plots on their right.

Figure 4: Fast convergence with the exponential penalty function in the t = s case: the left two plots compare the standard Gaussian density (solid lines) with a kernel density estimate of √n(ˆw_{n,1} − w⋆) (dashed lines) for n = 5000 and n = 10⁹, respectively. The right log–log plot shows the rate of decay of E(ˆw_{n,1} − w⋆)² with respect to n in the large-n regime.

9 Numerical Examples

In this section, we apply our method to two real-data examples. The first is an analysis of the bike reallocation problem for the Citi-Bike program in New York City. In the second, we revisit the protein colocalization analysis of Klatt et al. (2020) and illustrate the benefits of our method in comparison to entropic regularization.
In both examples, we adopt the log-penalty function due to its computational advantages: (1) it is widely supported by convex optimization solvers, and (2) it offers better numerical stability compared to the exponential penalty function. 9.1 Citi-Bike reallocation problem Citi-Bike is a bike-sharing program in New York City which allows residents and tourists to rent bikes from various docking stations across the city. The Citi-Bike program faces a persistent challenge with bike imbalance, where some docking stations experience high demand while others have surplus bikes. This imbalance, common in busy urban areas, means that in peak locations, riders may struggle to find available bikes or empty docks for returns. To alleviate this problem, Citi-Bike implements daily rebalancing by transporting bikes from stations with a daily surplus to stations with daily deficits. Effective rebalancing improves user satisfaction and increases the system’s overall efficiency, making the bike-sharing service more reliable and convenient for daily commuters and tourists alike. We formulate the bike rebalancing problem as a linear program, and use open-sourced daily trip data provided by Citi-Bike (Lyft Bikes and Scooters, LLC, 2024) to estimate the average daily need for rebalancing across the Citi-Bike system. We take daily trip data for n= 84 weekdays between June and September 2023 in Manhattan, the New York City borough where Citi-Bike primarily operates.
We record the net bike flow each day for all N = 678 stations in a vector d^{(i)} ∈ Z^N, for i = 1, …, n. A positive entry in d^{(i)} means that more bikes docked at that station than were taken away from it during the i-th day, and vice versa. We assume that the vectors d^{(i)} are i.i.d. samples of an underlying random variable, so that the central limit theorem gives √n(d̄_n − d) →^D N(0, Σ) for some deterministic mean d and covariance Σ, where d̄_n = (1/n) Σ_{i=1}^n d^{(i)}. The problem of rebalancing the bikes among stations while reducing the total cost of rebalancing can be formulated as a linear program:

π⋆ = arg min_{π∈R^{N×N}} ⟨c, π⟩  s.t. π_{i,j} ≥ 0,  Σ_{j=1}^N (π_{i,j} − π_{j,i}) = d_i,   (44)

where π_{i,j} is the number of bikes transported from station i to station j under a bike-imbalance vector d, and where c_{i,j} denotes the cost of transporting bikes from station i to station j. Understanding the optimal rebalancing strategy π⋆ can provide the company with a strategy for hiring and assigning workers and trucks. We use Theorem 6.1 with the log-penalty function to obtain an estimator ˆπ_n of the optimal rebalancing strategy π⋆, and we use B = 100 bootstrap samples to generate entrywise 95% confidence intervals as in Corollary 7.1.1. We plot our results on a map of Manhattan in Figure 5.

Figure 5: Empirical bike transportation plan among all stations in Manhattan. The estimated quantities of bikes to be transported between station pairs are shown in purple lines. The two ends of a line are the locations of two stations; a thicker and darker line indicates that more bikes need to be transported between the two stations. Middle: overall plan. Left: details of the plan near Penn Station and the Port Authority Bus Terminal. Upper right: details of the plan around Central Park. Lower right: details of the plan around Lower Manhattan and the East Village.
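The structure of the rebalancing LP (44) can be sketched on a tiny hypothetical instance (N = 3 stations, made-up surpluses and costs) with an off-the-shelf solver, assuming SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny instance of (44): N = 3 stations, hypothetical net surpluses d (summing
# to zero) and transport costs c. Station 0 must ship out 2 bikes net.
N = 3
d = np.array([2.0, -1.0, -1.0])
c = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0],
              [3.0, 1.0, 0.0]])

# Net-flow constraints: sum_j (pi[i, j] - pi[j, i]) = d[i], with pi >= 0.
A_eq = np.zeros((N, N * N))
for i in range(N):
    for j in range(N):
        A_eq[i, i * N + j] += 1.0    # outgoing pi[i, j]
        A_eq[i, j * N + i] -= 1.0    # incoming pi[j, i]

res = linprog(c.ravel(), A_eq=A_eq, b_eq=d, bounds=(0, None))
print(res.fun, res.x.reshape(N, N))  # optimal cost 3: route 0 -> 1 -> 2 is cheaper than 0 -> 2
```

Routing through the intermediate station (2 bikes from 0 to 1, then 1 bike from 1 to 2, cost 3) beats direct shipment to station 2 (cost 4), which is the kind of structure the estimated plan ˆπ_n recovers at city scale.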
Purple lines on the map indicate pairs of stations for which the corresponding entry of ˆπ_n is at least 1 and for which the corresponding confidence interval excludes zero. Our results recover observed patterns in bike rebalancing obtained from the monthly operational report provided by Citi-Bike. Citi-Bike provides incentives for bike rebalancing at certain locations, which are revealed by our estimator. We also found a significant need for rebalancing near Central Park, especially around Columbus Circle, a major transportation hub where tourists usually exit the subway to visit the park.

9.2 Colocalization Analysis

In this section, we compare the performance of our estimator with that of the entropic penalization approach (6) on the same data analyzed by Klatt et al. (2020), a protein colocalization study consisting of fluorescence microscopy images of cells. In this data set, pixel intensities reflect the location and amount of proteins in each image. Analyzing the spatial correlations of different proteins gives insight into cellular behaviors. Klatt et al. (2020) introduce a colocalization measure to quantify the spatial correlation of two proteins. The normalized gray-scale fluorescence microscopy images can be viewed as discrete probability distributions, supported on the equidistant grid of pixels, indicating the concentration of certain proteins. Let t and s be the distributions of two different
proteins. The optimal transport plan between these two distributions, with cost proportional to the Euclidean distance between pixels, is denoted by π⋆. The colocalization measure, as a function of distance ξ, quantifies the fraction of proteins which are "colocalized", i.e., which are transported only a short distance under the optimal plan. Specifically, the colocalization measure is defined as

Col⋆ := Col(π⋆)(ξ) = Σ_{i,j=1}^N π⋆_{ij} 1{c_{ij} ≤ ξ},   (45)

where N = N_x N_y is the total number of pixels in the (N_y × N_x)-size images and c ∈ R^{N×N} stores the Euclidean distance between each pair of pixels on the (N_y × N_x) grid. More generally, for any joint distribution π := π(u,v) with marginals u, v, we define the colocalization measure of the plan π(u,v) as

Col(π(u,v))(ξ) := Σ_{i,j=1}^N π_{ij} 1{c_{ij} ≤ ξ}.

Computing the colocalization measure directly is computationally hard in practice. For example, even 128×128-pixel images generate a transportation plan with 2²⁸ variables. To reduce the storage requirement and to save computational time, Klatt et al. (2020) propose subsampling from t and s to obtain empirical measures t_n and s_n with n < N, and using these to estimate the true colocalization measure. Although both empirical measures lie in the probability simplex in R^N, the sizes of their supports are at most n after the subsampling procedure. Due to the nonnegativity constraint on the transportation plan, mass can only be transported between the supports of the two measures. This reduces the computation of the optimal plan to the space R^{|supp(t_n)|×|supp(s_n)|}. Klatt et al. (2020) use the entropic regularization discussed in Section 2 to compute the empirical regularized plan π_{λ,n} := π_{λ,n}(t_n, s_n), which they show enjoys a central limit theorem centered at the population-level regularized plan π⋆_λ := π_λ(t, s).

Figure 6: Zoomed-in 128×128 pixels of STED images. Pixel size = 15 nm. Left: ATP Synthase in the green channel; right: MIC60 in the red channel; middle: overlay of the green and red channels.
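The colocalization curve (45) is a simple thresholded sum over the plan. A minimal sketch with a hypothetical 3-point plan and one-dimensional support (all numbers made up for illustration), assuming NumPy:

```python
import numpy as np

# Colocalization curve (45): Col(pi)(xi) = sum_{ij} pi_ij * 1{c_ij <= xi}.
pi = np.array([[0.3, 0.1, 0.0],
               [0.0, 0.4, 0.1],
               [0.0, 0.0, 0.1]])          # a hypothetical joint distribution (sums to 1)
pts = np.array([0.0, 1.0, 2.0])           # support points
c = np.abs(pts[:, None] - pts[None, :])   # pairwise-distance cost |x_i - x_j|

def col(pi, c, xi):
    # total mass transported a distance of at most xi
    return pi[c <= xi].sum()

print([round(col(pi, c, xi), 2) for xi in [0.0, 1.0, 2.0]])  # -> [0.8, 1.0, 1.0]
```

The curve is nondecreasing in ξ and starts at the diagonal mass (here 0.8), the fraction of mass that does not move at all.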
Consequently, their proposed estimator \widehat{RCol}_{n,n} := Col(π_{λ,n})(ξ) enjoys a central limit theorem when centered at RCol⋆_λ := Col(π⋆_λ)(ξ). However, they do not compare the regularized colocalization measure RCol⋆_λ with the true colocalization measure Col⋆ to assess the effect of the bias introduced by entropic regularization.

We apply the method developed in this paper to compute the debiased estimator ˆπ_n using the log-penalty function, and subsequently obtain the corresponding colocalization estimator. To distinguish our approach from the entropic regularization method proposed by Klatt et al. (2020), we refer to it as the penalization method and denote the resulting colocalization estimator by \widehat{PCol}_{n,n} := Col(ˆπ_n)(ξ). In both methods, empirical transport plans are restricted to the positive orthant, effectively reducing the optimization domain to R^{|supp(t_n)|×|supp(s_n)|}.

For a 128×128-pixel image, we can compute (at considerable computational cost) the "population-level" colocalization Col⋆. We can therefore compare the estimators \widehat{PCol}_{n,n} and \widehat{RCol}_{n,n} with Col⋆. To make a fair comparison, we use the same 2-colour STED images of ATP Synthase and MIC60 from Tameling et al. (2021b) (shown in Figure 6) and apply the same parameter settings (n = 2000 samples with regularization parameter λ = 2) as Klatt et al. (2020). We compute \widehat{PCol}_{n,n} and \widehat{RCol}_{n,n} as well as 95% confidence bands for each estimator with B = 100 bootstrap replicates.
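The bootstrap quantile machinery behind these bands (Section 7 and Corollary 7.1.1) can be sketched generically. The example below uses the sample mean of synthetic data as a scalar stand-in for the debiased estimator (all numbers hypothetical), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bootstrap quantile interval in the form of (38), for a scalar functional.
# Stand-in estimator: the sample mean of n i.i.d. draws (true mean = 1).
n = 500
Z = rng.exponential(size=n)
x_hat = Z.mean()

B = 1000
roots = np.empty(B)
for b in range(B):
    Zb = rng.choice(Z, size=n, replace=True)      # resample with replacement
    roots[b] = np.sqrt(n) * (Zb.mean() - x_hat)   # bootstrap root sqrt(n)*(x_tilde - x_hat)

alpha = 0.05
lo = x_hat - np.quantile(roots, 1 - alpha / 2) / np.sqrt(n)
hi = x_hat - np.quantile(roots, alpha / 2) / np.sqrt(n)
print(lo, hi)   # asymptotic 95% interval for the true mean, mirroring (38)
```

Note the quantiles enter with reversed tails, exactly as in (38): the upper quantile of the root forms the lower endpoint and vice versa.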
https://arxiv.org/abs/2505.04312v1
Since Col(π), viewed as a mapping from plans to the space of càdlàg functions, is linear and Lipschitz, Theorem 6.1 implies that
√n(\widehat{PCol}_{n,n} − Col⋆) →_D Col(M⋆G), (46)
where G is the asymptotic limit of √n(t_n − t, s_n − s). We construct uniform confidence bands by letting u_{1−α} be the 1−α quantile of ∥Col(M⋆G)∥_∞ and defining
I_n := [\widehat{PCol}_{n,n} − u_{1−α}/√n, \widehat{PCol}_{n,n} + u_{1−α}/√n], (47)
which is an asymptotic 1−α confidence band for Col⋆. By the results of Section 7, we can estimate u_{1−α} by bootstrapping t_n and s_n. We plot our results in Figure 7a.

[Figure 7: Colocalization estimation with subsampled images. (a) 95% confidence intervals for the true colocalization function, computed with the regularized Col estimator (\widehat{RCol}_{n,n}) and the penalized Col estimator (\widehat{PCol}_{n,n}). (b) Estimated colocalization \widehat{PCol}_{n,n} with varying numbers of samples.]

We observe that our approach yields an estimator that is close to Col⋆ for all sufficiently large ξ, and that our confidence set captures Col⋆ for a large range of ξ. By contrast, \widehat{RCol}_{n,n} substantially deviates from Col⋆, and the corresponding confidence set is not accurate. This observation is consistent with our discussion in Section 2: using entropic optimal transport yields substantially biased estimates of the true optimal plan. Neither method succeeds in estimating Col⋆(ξ) when ξ is small. This is an unavoidable feature of the subsampling method: the original distributions t and s have support of size 16384, but the subsamples have much smaller support, of size at most 2000. This extremely sparse sampling means that t_n and s_n fail to capture the small-scale structure of the optimal plan. We show in Figure 7b an example on a smaller image (of size 32×32), where it is clear that the errors present for small ξ substantially decrease as the sample size increases.

10 Conclusion

In this work, we propose and analyze a new inference technique for optimal solutions of optimal transport problems and other linear programs.
Unlike the widely applied entropic regularization, our approach yields asymptotically unbiased estimators, centered at the true population-level quantities. Our main insight is to construct a penalized linear program whose bias is linear in the penalty parameter, which allows us to apply the Richardson extrapolation technique to eliminate the bias. In future work, it would be valuable to investigate whether similar techniques could be used to develop new inference methods for non-parametric settings, such as for the solution to optimal transport problems with continuous marginals (Gonzalez-Sanz et al., 2022; Manole et al., 2023). Acknowledgments Florentina Bunea was supported in part by NSF–DMS 2210563. Jonathan Niles-Weed was supported in part by NSF–DMS 2210583 and 2339829. A Preliminaries We first introduce some general preliminary results that will be applied in the proof of our main theorems. A.1 Random linear programs Linear programs like (8) have polytope feasible sets, whose vertices are determined by the feasible hyperplane Ax=b. If solutions exist, the linear program must have a solution at one of the vertices of the feasible polytope. For any subset I⊆ {1, . . . , m }, we denote by AIthek×|I|submatrix of Aformed by taking the columns of A corresponding to the elements of I. Analogously, for x∈Rm, we write xIfor the vector of length |I|consisting of
the coordinates of x corresponding to I.

Definition 2. A set I ⊆ [m] is a basis if
|I| = k, rank(A_I) = k. (48)

Given a basis I, we define the basic solution x(I; b) to be the vector x satisfying
x_I = A_I^{-1} b, x_{I^C} = 0. (49)

If this basic solution x(I; b) is also feasible for (8), i.e., x_I is nonnegative, we say that x(I; b) is a feasible basic solution. We define the optimal set of bases for (8) as I⋆(b), which contains all the bases corresponding to optimal solutions. We say that the linear program has a unique non-degenerate solution if x⋆ is the unique solution to the linear program and |supp(x⋆)| = k. In this situation, I⋆(b) has only one element. Let I be a basis of A; we show that if x lies in the hyperplane Ax = b_0, then x is uniquely determined by its entries on I^C.

Lemma A.1. Suppose the linear program (8) has a unique solution x⋆_LP with index set for the zero entries I_0 := {i ∈ {1, ..., m} | x⋆_{LP,i} = 0}. Then the linear system in x,
Ax = b_0, x_i = α_i, i ∈ I_0,
cannot have multiple solutions, where A is the same matrix as in the equality constraints of the linear program and the α_i are constants. If the system has a solution x⋆, then x⋆_{I_0^c} is a linear function of (b_0, α), and there exists a constant C, depending on the matrix A and I_0, such that
∥x⋆∥ ≤ C(max_{i∈I_0} |α_i| + ∥b_0∥).

Proof. Since the linear program (8) has a unique solution, by the properties of solutions to linear programs (Bertsimas and Tsitsiklis, 1997, Theorem 2.2), there is a basis I such that I^c ⊆ I_0 and the constraints x_i ≥ 0, i ∈ I^c, and Ax = b are linearly independent. Therefore, the system below, which has reduced constraints, has a unique solution; this implies that the original system has at most one solution:
Ax = b_0, x_i = α_i, i ∈ I^c.
The solution x⋆ to the reduced system can be written as
x⋆_I = A_I^{-1}(b_0 − A_{I^c} x⋆_{I^c}), x_i = α_i, i ∈ I^c,
where A_{I^c} and A_I are the submatrices of A formed by taking the columns indexed by I^c and I, respectively.
Therefore, if the equation in the lemma has a solution x⋆, ∥x⋆∥ ≤(m−k)(∥A−1 IAIc∥+ 1) max i∈I0|αi|+∥A−1 I∥∥b0∥ (50) Dual programs are utilized a lot in the discussion of convex programs. The dual linear program for (8) is max λ∈Rk⟨b,λ⟩, s.t.c−A⊤λ≥0. (51) The primal and dual linear programs achieve optima at ( x⋆(b),λ⋆(b)) if and only if ∃η∈Rmsuch that: A⊤λ⋆(b) +η=c,Ax⋆(b) =b,x⋆(b)≥0,η≥0,x⋆(b)⊤η= 0. (52) Thecomplementary slackness condition x⋆(b)⊤η= 0 is equivalent to the condition that supp( x⋆(b))∩supp(η) = ∅. If it also holds that supp( x⋆(b))∪supp(η) = [m], we say that the solutions satisfy the strict complementary slackness condition. Linear programs have their optimal solutions at vertices or on optimal faces of the feasible polytope. When the Slater’s condition holds for both dual and primal problems, the optimal faces are bounded (Schrijver, 1998, Corollary 7.1k) and the strict complementary slackness condition can be achieved at the centroids of the primal and dual optimal faces. 19 When we consider a perturbed linear program where the vector bis replaced by a nearby vector b′, according to Proposition 2 in (Liu et al., 2023), we have the inclusion of the optimal basis set I⋆(b′)⊆ I⋆(b) if∥b′−b∥ ≤ δ(A,b). Under the assumption for
the linear programs 2 which ensures that the primal problem (8) has a unique solution, the feasible basic solution x⋆(b′) can be expressed as x⋆(b′) =x⋆(b) +x(I;b′−b) for I∈ I⋆(b′). (53) In the case where x(I;b′−b) is not unique, the optimal solution to the linear program with parameter b′is achieved at an optimal face with the points x⋆(b′) as its vertices. In the situation where bin the linear program (8) is replaced by a random vector bn(such as a plug- in estimator for bwith distributional limit given in (9)), our previous result demonstrates that the random solution shares the same basis as the true solution. Furthermore, the limit distribution of the random solution has a nonlinear relationship with G. Theorem A.2 (Theorem 3, Liu et al. (2023)) .Suppose that (8)satisfies 2. If bnsatisfies the distributional limit (9), then qn(x⋆(bn)−x⋆(b))D→p⋆ b(G), (54) where p⋆ b(G)is the set of optimal solutions to the following linear program: min⟨c,p⟩:Ap=G,pi≥0∀i /∈S(x⋆(b)). (55) Here x⋆(bn)andx⋆(b)are the solutions to the linear programs with bnandbin the equality constraints separately, and S(x⋆(b))is the support of the x⋆(b). We have provided a 2 ×2 optimal transport problem in Section 1, Proposition 1.1 as a small example for such nonlinearity. We give a brief proof here. Proof of Proposition 1.1. Since tnandsnrepresent empirical frequencies, they are naturally nonnegative and satisfy tn,1+tn,2= 1 and sn,1+sn,2= 1. Then, the feasibility of πncan be easily checked by substituting (4) into the constraints of the program (3). To show the optimality of πn, we consider an arbitrary alternative feasible plan π′and their difference ∆π=π′−πn. Since tn,2−sn,2=−(tn,1−sn,1), we have min {πn,12, πn,21}= 0.Since both πnandπ′obey the constraints in (3), ∆ πtakes the form: ∆π=−δ δ δ−δ , (56) forδ >0. Thus, we have the comparison of transport cost π′ 12+π′ 21> πn,12+πn,21. (57) This indicates that πnis the only minimal cost transport plan. 
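For small instances, the limiting law in Theorem A.2 can be computed numerically by solving the linear program (55) directly. A sketch with scipy, on an illustrative toy instance:

```python
import numpy as np
from scipy.optimize import linprog

def limit_direction(c, A, G, support):
    """Solve the limiting program (55): min <c,p> s.t. Ap = G, with p_i >= 0
    only for i outside supp(x*(b)); supported coordinates are unconstrained."""
    bounds = [(None, None) if i in support else (0, None) for i in range(len(c))]
    res = linprog(c, A_eq=A, b_eq=np.atleast_1d(G), bounds=bounds, method="highs")
    return res.x

# Toy instance (illustrative): x* = (1, 0, 0) solves min <c,x>, 1^T x = 1, x >= 0,
# so only coordinate 0 is free in the limiting program.
c = np.array([1.0, 2.0, 3.0])
A = np.ones((1, 3))
p = limit_direction(c, A, G=0.5, support={0})
assert np.allclose(p, [0.5, 0.0, 0.0])
```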
We have the convergence law for the empirical frequency:
√n(t_n − t) →_d (1/2)G, (58)
where G represents a standard Gaussian random variable, and the same law applies to s_n. Therefore, the convergence law (5) is a direct result of applying the continuous mapping theorem to π_n.

A.2 Properties of convex functions and convex optimization

First, we introduce the key lemma we will apply during the proof of convergence of the penalized program trajectories. Intuitively, this lemma establishes a lower bound on how quickly a locally strongly convex function grows compared to its tangent hyperplane. It guarantees that when moving from a point to any other point in the domain (both within an affine subspace), the function increases by at least a constant times either the distance between the points or the squared distance, whichever is smaller. This bound captures both the quadratic growth behavior near the reference point and the potentially faster growth far from it.

Lemma A.3 (Lower bound on convex function growth). Let A = {α + w : w ∈ V} ⊂ R^m be an affine subspace of R^m, where V is a vector space. Let g : R^m → R be a lower semi-continuous convex function. Assume that g is C^2 on int dom g and that g is locally strongly convex on A. Let K be a compact subset of relint(dom g ∩ A). There exists a constant C = C(K) > 0 such that for
any d1∈Kand any d2∈domg∩A, g(d2)−g(d1)− ⟨∇ g(d1),d2−d1⟩ ≥Cmin{∥d2−d1∥,∥d2−d1∥2}. 20 Proof. Define s(d,d′) :=g(d′)−g(d)−⟨∇g(d),d′−d⟩. The locally strong convexity of gwithin space Aimplies thatsis strictly positive for all d,d′∈Aandd̸=d′. We first show that for C0>0 sufficiently small, the set {(d1,d2)∈K×(dom g∩A) :s(d1,d2)≤C0∥d1−d2∥}is compact. Indeed, if not, we can find a sequence (d1,n,d2,n) with ∥d2,n∥ → ∞ such that lim n→∞s(d1,n,d2,n) ∥d1,n−d2,n∥= 0. The convexity of gimplies that for any d∈domg∩A,v∈ Vandt≥1 s(d,d+tv) t=g(d+tv)−g(d) t− ⟨∇ g(d),v⟩ ≥g(d+v)−g(d)− ⟨∇ g(d),v⟩=s(d,d+v). Hence 0 = lim n→∞s(d1,n,d2,n) ∥d1,n−d2,n∥≥lim sup n→∞s(d1,n,d1,n+ (d2,n−d1,n)/∥d1,n−d2,n∥). (59) Passing to a subsequence, we may assume that d1,n→d1∈Kand (d2,n−d1,n)/∥d1,n−d2,n∥ →vwith ∥v∥= 1. Since d1∈K⊆relint(dom g∩A), both gand∇gare continuous at d1. The lower semi-continuity of gthen implies lim sup n→∞s(d1,n,d1,n+ (d2,n−d1,n)/∥d1,n−d2,n∥)≥s(d1,d1+v), which contradicts (59). LetC0be such that S:={(d1,d2)∈K×(dom g∩A) :g(d2)−g(d1)− ⟨∇ g(d1),d2−d1⟩ ≤C0∥d1−d2∥} is compact. Since gisC2with inf w∈V,∥w∥2=1⟨w,(∇2g)w⟩>0 on dom g∩A, there exists λ >0 such that gis λ-strongly convex on the convex hull of S. In particular, for ( d1,d2)∈S, it holds λ 2∥d1−d2∥2≤g(d2)−g(d1)− ⟨∇ g(d1),d2−d1⟩. Therefore either g(d2)−g(d1)− ⟨∇ g(d1),d2−d1⟩ ≥C0∥d1−d2∥or (d1,d2)∈S, sog(d2)−g(d1)− ⟨∇g(d1),d2−d1⟩ ≥λ 2∥d1−d2∥2. Choosing C= min {C0, λ/2}proves the claim. Below is another lemma related to a barrier-type convex optimization problem. This lemma establishes an upper bound on the gradient magnitudes at the optimal solution when minimizing a barrier function over a compact polytope. This lemma indicates that if Pnis a sequence of shrinking polytopes such that vn max≪1 and vn max vn min= Θ(1), then with fsatisfying conditions in the lemma, we have max i∈I′|f′(x0,i+x⋆ i)|=O(f′(vn max)). Lemma A.4. 
Let polytope PinRmbe defined as: P={x∈Rm|Ax=b,xi≥0fori∈I}, (60) where the A∈Rk×mhas full row rank and Iis a index set. Suppose that Pis compact with a centroid xc. Letf(x) : (0 ,+∞)→Rbe a convex function in C1that has the properties: 1) limx↓0+f′(x) =−∞ 2) limx↓0+−xf′(x)<∞and 3) there exits δ >0such that xf′(x)decreasing in (0, δ). Consider the convex optimization problem: x⋆= arg min x∈PmX i=1f(x0+x)i, (61) where x0∈Rm ≥0and supp x0=Ic. Suppose that m=| ∪x∈Psupp(x+x0)|. Let vmax= max x∈P|x|∞andvmin= min i∈[m]xc,i. Let xmax= max i∈ICx0,iandxmin= min i∈ICx0,i. Ifvmax< δandxmin>2vmax, we have max i∈[m]|f′(x0,i+x⋆ i)| ≤mvmax vmin(|f′(vmax)|+ 2(|f′(xmax+vmax)| ∨ |f′(xmin−vmax)|)). Proof. Since every point on the boundary of the polytope Phas at least one zero entry and f′(x) diverges to−∞ at zero, the first order derivative of the cost function in (61) diverges on the boundary of the feasible domain. Hence, the optimal solution achieves in the relative interior of P, i.e. x⋆ i>0 for i∈I. We consider the first order optimal condition of the minimization problem: f′(x0+x⋆) =A⊤ξ, (62) 21 with some dual optimal vector ξ∈Rk. Letxcbe the centroid of P. Multiplying the optimal condition with xc andx⋆, we obtain: x⊤ cf′(x0+x⋆) =b⊤ξ=x⋆⊤f′(x0+x⋆) (63) Since x0has support only on IC, we have x⊤ cf′(x0+x⋆) =X j∈ICxc,jf′(x0,j+x⋆ j) +X j∈Ixc,jf′(x⋆ j), (64) and (x⋆)⊤f′(x0+x⋆) =X j∈ICx⋆ jf′(x0,j+x⋆ j) +X j∈Ix⋆ jf′(x⋆ j). (65) Since xmin>2vmax, we have max j∈IC f′(x0,j+x⋆ j) ≤ |f′(xmax+vmax)| ∨ |f′(xmin−vmax)|, (66) hence X j∈ICx⋆ jf′(x0,j+x⋆ j) ∨ X j∈ICxc,jf′(x0,j+x⋆ j) ≤mvmax(|f′(xmax+vmax)| ∨ |f′(xmin−vmax)|). (67) Since the function xf′(x) is non-positive and is decreasing when x≤δ, 0≥X j∈Ix⋆
jf′(x⋆ j)≥mvmaxf′(vmax). (68) Combining equation (64) to (68), we have 0≤ −X j∈Ixc,jf′(x⋆ j)≤m(−vmaxf′(vmax) + 2vmax(|f′(xmax+vmax)| ∨ |f′(xmin−vmax)|)). (69) Therefore, max j∈I f′(x⋆ j) ≤mvmax vmin(|f′(vmax)|+ 2(|f′(xmax+vmax)| ∨ |f′(xmin−vmax)|)) (70) A.3 Penalty functions We have introduced our assumptions on the penalty function through a conjugate expression. To understand properties of the proper penalty functions, we list several results of the Legendre transformation (Rockafellar, 1997). Definition 3 (Corollary 26.3.1 in Rockafellar (1997)) .A closed proper convex fis a convex function of Legendre type if ∂fis one-to-one. The theorem characterizes the duality properties of convex functions of Legendre type. Theorem A.5 (Theorem 26.5 in Rockafellar (1997)) .Letfbe a closed convex function. Let C= int(dom f) andC∗= int(dom f∗). Then (C, f)is a convex function of Legendre type if and only if (C∗, f∗)is a convex function of Legendre type. When these conditions hold, (C∗, f∗)is the Legendre conjugate of (C, f), and (C, f) is in turn the Legendre conjugate of (C∗, f∗). The gradient mapping ∇fis then one-to-one from the open convex setConto the open convex set C∗, continuous in both directions, and ∇f∗= (∇f)−1. Applying those results to the function q(x) and p(x) in assumption 3, we prove the lemma 3.1. Proof of Lemma 3.1. Since q(x)∈C2andq′′(x)>0,q′(x) is increasing in its domain and cl qis a convex function of Legendre type. Theorem A.5 indicates that p(x) is a convex function of Legendre type and that p′= (q′)−1. Thus we have (−∞,0)⊆imq′= int(dom p),imp′= int(dom(cl q)) = (0 ,+∞). (71) Also, by taking second derivative to p(x) and q(x), we have when y=p′(x) and y∈int(dom(cl q)), p′′(x) =1 q′′(y). (72) This indicates that p′′(x) is strict positive in its domain and locally Lipschitz by the Lipschitzness of q′′(x). Hence, p′(x) is increasing in its domain and combined with (71), we have lim x→−∞ p′(x) = 0. 
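The inverse-gradient identity p′ = (q′)^{-1} from Theorem A.5 can be checked numerically on the entropy pair f(x) = x log x − x, f∗(y) = e^y, the conjugate pair used for the entropic regularizer in the next section:

```python
import math

# Legendre pair: f(x) = x*log(x) - x on (0, inf) has conjugate f*(y) = exp(y),
# so Theorem A.5's identity grad f* = (grad f)^{-1} reads exp = (log)^{-1}.
f_prime = math.log        # f'(x) = log x
f_star_prime = math.exp   # (f*)'(y) = e^y
for x in (0.1, 1.0, 3.7):
    assert abs(f_star_prime(f_prime(x)) - x) < 1e-12

# The conjugate value itself: f*(y) = sup_x { x*y - f(x) } is attained at x = e^y.
y = 0.5
x_at = math.exp(y)
assert abs((x_at * y - (x_at * math.log(x_at) - x_at)) - math.exp(y)) < 1e-12
```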
B Proof of main theorems

B.1 Bias in entropic optimal transport: proof of Proposition 2.1

First, let us restate the regularized optimal transport setting of Klatt et al. (2020). The optimal transport problem is written as:
π⋆ = argmin_{π∈R^{N×N}} ⟨c, π⟩, s.t. Σ_{i=1}^N π_{i,j} = s_j, Σ_{j=1}^N π_{i,j} = t_i, π_{i,j} ≥ 0. (73)
The entropic regularized optimal transport problem, with entropic function f(x) = x log x − x and regularization parameter λ, is formulated as:
π_λ = argmin_{π∈R^{N×N}} ⟨c, π⟩ + λ Σ_{i,j=1}^N f(π_{i,j}), s.t. Σ_{i=1}^N π_{i,j} = s_j, Σ_{j=1}^N π_{i,j} = t_i. (74)
We restrict our analysis of the empirical problems to the unique-solution case, i.e., π⋆ is a singleton, and to the one-sided case where only t is replaced by its empirical density t_n. Let π_{λ,n} denote the empirical regularized optimal transport plan and π(t_n, s) the empirical true optimal plan. Let λ = λ_n be a sequence decreasing in n. We first examine the small regularization regime, i.e., λ log(√n) → 0. We prove (17) by showing that the intrinsic distance d(π_{λ,n}, π(t_n, s)) = O_p(ϵ(λ)) = o_p(1/√n). Note that when π⋆ is a degenerate vertex, π(t_n, s) can be a convex polytope, and d(π_{λ,n}, π(t_n, s)) denotes the minimal distance between π_{λ,n} and the vectors in the set π(t_n, s). Consequently, the regularized optimal transport plan inherits the non-Gaussianity from π(t_n, s). In the large regularization regime, where λ log(√n) → +∞, we prove our results in the special case where π⋆ is non-degenerate. Under this condition, we demonstrate that (π_{λ,n} − π⋆) = O_p(E_λ),
implying that √n(π_{λ,n} − π⋆) diverges with probability 1. Such divergence behavior should be generalizable to the general degenerate case.

B.1.1 The small regularization regime: λ log(√n) → 0

We introduce several lemmas that describe the properties of the solutions of (73) and (74) and are key to the proof of Proposition 2.1.

Lemma B.1. Let π⋆_0 be one of the optimal plans for (73) and let π_λ be the unique optimal plan defined in (74). Let ∆π = π_λ − π⋆_0. Then
Σ_{(i,j)∈supp(π⋆_0)^c} f(∆π_{i,j}) ≤ −(η_0/λ) Σ_{(i,j)∈supp(η⋆)} ∆π_{i,j} − Σ_{(i,j)∈supp(π⋆_0)} f′((π⋆_0)_{i,j}) ∆π_{i,j}, (75)
where η_0 > 0 is a constant depending only on c, and η⋆ is the centroid of the dual optimal face (defined in (52)) for (73).

Proof. Since π⋆_0 is also feasible for (74), we have
⟨c, ∆π⟩ + λ Σ_{i,j=1}^N (f((π_λ)_{i,j}) − f((π⋆_0)_{i,j})) ≤ 0, Σ_{i=1}^N ∆π_{i,j} = 0, Σ_{j=1}^N ∆π_{i,j} = 0. (76)
When (i, j) ∈ supp(π⋆_0)^c, we have
f((π_λ)_{i,j}) − f((π⋆_0)_{i,j}) = f(∆π_{i,j}). (77)
When (i, j) ∈ supp(π⋆_0), by the convexity of f on [0, +∞) we have
f((π_λ)_{i,j}) − f((π⋆_0)_{i,j}) ≥ f′((π⋆_0)_{i,j}) ∆π_{i,j}. (78)
By the definition of the dual variable η in (52) for problem (73), for every feasible η we have
⟨c, ∆π⟩ = ⟨η, ∆π⟩ = Σ_{(i,j)∈supp(η)} η_{i,j} ∆π_{i,j}. (79)
Let {v_i, i ∈ [N]} be the vertices of the feasible polytope for η. Since η⋆ is the centroid of a face or is itself a vertex, we have
min_{i∈supp(η⋆)} η⋆_i ≥ (1/N) min_i min_{j∈supp(v_i)} v_{i,j} =: η_0. (80)
Note that η_0 depends only on the geometry of the dual feasible polytope, which is independent of t and s. Combining the facts above proves the inequality in the lemma.

Another technique employed in our analysis involves establishing bounds for Legendre-type convex functions.

Lemma B.2. Let f(x) : C → R be a convex function of Legendre type and g = f∗. Let x_0 ∈ C. If f(x_0) ≤ ⟨α, x_0⟩, then for every y_0 ∈ C∗ we have
⟨y_0 − α, x_0⟩ ≤ g(y_0). (81)

Proof. By Theorem A.5, g is a convex function of Legendre type and f = g∗. Therefore, for every y_0 ∈ C∗, we have
⟨α, x_0⟩ ≥ f(x_0) = max_{y∈C∗} ⟨x_0, y⟩ − g(y) ≥ ⟨x_0, y_0⟩ − g(y_0). (82)
This concludes the proof.

Lemma B.3. Let π⋆_n ∈ π(t_n, s) be defined as
π⋆_n = argmin_{π∈π(t_n,s)} Σ_{i,j=1}^N f(π_{i,j}).
(83) Let∆π∈RN×Nsatisfy NX i=1∆πi,j= 0,NX j=1∆πi,j= 0. (84) For every such vector ∆π, we have X (i,j)∈supp(π⋆n)f′((π⋆ n)i,j)∆πi,j=X (i,j)∈supp(π⋆n)cαi,j∆πi,j, (85) with a uniform vector αthat only depends on π⋆ nand∥α∥ ≤Op(logn). Proof. We perform separate proofs for two distinct cases:when π(tn,s) is a singleton and when it is a polytope. 1.π(tn,s)is a singleton: In this case, π⋆ nis the unique empirical optimal transport plan. Since π⋆ nand ∆ πare constrained with the same linear operator, we can apply Lemma A.1 to ∆ πand conclude that there exists a tensor L′ i,j,k,l with (i, j)∈supp(π⋆ n) and ( k, l)∈supp(π⋆ n)csuch that X (i,j)∈supp(π⋆n)f′((π⋆ n)i,j)∆πi,j=X (k,l)∈supp(π⋆n)c X (i,j)∈supp(π⋆n)f′((π⋆ n)i,j)L′ i,j,k,l ∆πk,l. (86) Therefore, αk,l=P (i,j)∈supp(π⋆n)f′((π⋆ n)i,j)L′ i,j,k,l and there exists a constant C, ∥α∥ ≤C max (i,j)∈supp(π⋆n)∥f′((π⋆ n)i,j)∥. Since ( π⋆ n)i,j≥C′∥tn−t∥with a constant C′, we have ∥f′((π⋆ n)i,j)∥=Op(logn). 2.π(tn,s)is a non-degenerate polytope: LetS=∪π∈π(tn,s)supp(π). Since f′(x) diverges on the boundary of π(tn, s), we have supp( π⋆ n) =S. The first-order optimality condition for (83) writes f′((π⋆ n)i,j) =ϕi+ψjfor (i, j)∈S, (87) with some vector ϕ,ψ∈RNlinearly with respect to f′((π⋆ n)). Therefore, X (i,j)∈supp(π⋆n)f′((π⋆ n)i,j)∆πi,j=X (i,j)∈supp(π⋆n)(ϕi+ψj)∆πi,j=−X (i,j)∈supp(π⋆n)c(ϕi+ψj)∆πi,j. (88) This is because ofPN i,j=1(ϕi+ψj)∆πi,j= 0 by the linear constraints for ∆ π.
As given in equation (53), the vertices of π(t_n, s) are feasible basic solutions. Applying Lemma A.4 with P = π(t_n, s) − π⋆, m = |S|, and x_0 = π⋆, we conclude that ∥f′((π⋆_n)_{i,j})∥ = O_p(log n), so that |α_{i,j}| = |ϕ_i + ψ_j| = O_p(log n). Utilizing the lemmas above, we can prove the small-λ regime of Proposition 2.1.

Proposition B.4. In the regime λ log(√n) → 0, we have d(π_{λ,n}, π(t_n, s)) = O_p(ϵ(λ)).

Proof. Let π⋆_n be defined in Lemma B.3 as the solution to a convex minimization problem over π(t_n, s). Let η⋆_n be a dual variable defined in equation (52) that is also the centroid of the dual optimal face for π(t_n, s). When π(t_n, s) forms a non-degenerate polytope, Lemma B.3 establishes that π⋆_n is achieved within the relative interior of this polytope. Consequently, strict complementary slackness is satisfied between π⋆_n and η⋆_n, i.e.,
supp(η⋆_n) = supp(π⋆_n)^c. (89)
Let ∆π = π_{λ,n} − π⋆_n. Applying Lemma B.1 to π⋆_n and π_{λ,n}, we have
Σ_{(i,j)∈supp(η⋆_n)} f(∆π_{i,j}) ≤ −(η_0/λ) Σ_{(i,j)∈supp(η⋆_n)} ∆π_{i,j} − Σ_{(i,j)∈supp(π⋆_n)} f′((π⋆_n)_{i,j}) ∆π_{i,j}. (90)
Applying Lemma B.3 to f′((π⋆_n)_{i,j}) in the inequality above, we have:
Σ_{(i,j)∈supp(η⋆_n)} f(∆π_{i,j}) ≤ −Σ_{(i,j)∈supp(η⋆_n)} (η_0/λ + α_{i,j}) ∆π_{i,j}, (91)
where ∥α∥ ≤ O_p(log n) = o_p(1/λ). Applying Lemma B.2 to the entropic regularizer f with dual function f⋆(y) = exp(y), by taking y_{0,i,j} = 1 − α_{i,j} − η_0/λ, we have
Σ_{(i,j)∈supp(η⋆_n)} ∆π_{i,j} ≤ Σ_{(i,j)∈supp(η⋆_n)} exp(1 − α_{i,j} − η_0/λ) ≤ exp(−C_0 η_0/λ). (92)
Here C_0 is a constant independent of n, λ, and η_0. Let D_n be the feasible set of program (73) with t_n. We can write the optimal face π(t_n, s) as
π(t_n, s) = {π ∈ D_n | π_{i,j} = 0, (i, j) ∈ supp(η⋆_n)}. (93)
And π_{λ,n} belongs to the convex set:
π′(t_n, s) := {π ∈ D_n | π_{i,j} = ∆π_{i,j}, (i, j) ∈ supp(η⋆_n)}. (94)
Therefore, due to the Lipschitzian property of polytopes proved in Walkup and Wets (1969),
d(π_{λ,n}, π(t_n, s)) ≤ d_H(π′(t_n, s), π(t_n, s)) ≤ C′ exp(−C_0 η_0/λ), (95)
where d_H(·, ·) denotes the Hausdorff distance between two convex polytopes.
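For reference, the entropic problem (74) studied above is computed in practice with Sinkhorn's algorithm, since its optimizer factors as diag(u) K diag(v) with K = exp(−c/λ). A minimal sketch (not the code used for the experiments):

```python
import numpy as np

def sinkhorn(c, t, s, lam, n_iter=500):
    """Entropic OT (74) via Sinkhorn iterations: the optimizer has the form
    pi = diag(u) K diag(v) with K = exp(-c/lam); u, v enforce the marginals."""
    K = np.exp(-c / lam)
    u = np.ones_like(t)
    for _ in range(n_iter):
        v = s / (K.T @ u)   # match column marginals: sum_i pi_ij = s_j
        u = t / (K @ v)     # match row marginals:    sum_j pi_ij = t_i
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
c = rng.random((4, 4))
t = np.full(4, 0.25)
s = np.full(4, 0.25)
pi = sinkhorn(c, t, s, lam=0.5)
assert np.allclose(pi.sum(axis=1), t, atol=1e-6)
assert np.allclose(pi.sum(axis=0), s, atol=1e-6)
```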
B.1.2 The large regularization regime: λ log(√n) → +∞

For simplicity of the proof, we compactly write the equality constraints in (73) as
A_0 π = (t, s), (96)
with a matrix A_0 ∈ R^{2N×N^2}. We consider the special case in which π⋆ is a non-degenerate basic solution (see the definition in Section A.1) with a unique basis I⋆, hence supp(π⋆) = I⋆. We proceed in two steps: 1) give an expression for E(λ) and show that λ log∥E(λ)∥ = Θ(1); 2) give an expression for the matrix M and show that the residual term ϵ(λ) in (16) is exponentially small.

Proposition B.5. Let E(λ) = π_λ − π⋆. We have λ log∥E(λ)∥ = Θ(1).

Proof. We consider the dual program of (74):
ξ_λ = argmin_{ξ∈R^{2N}} −⟨ξ, (t, s)⟩ + λ Σ_{i,j} exp(−(c − A_0^⊤ξ)_{i,j}/λ). (97)
This dual minimization problem can be transformed into the formulation of Theorem 4.2 through the linear change of variables η = c − A_0^⊤ξ. Our theorem indicates that the minimizer satisfies
c − A_0^⊤ξ_λ := η_λ = η⋆ + λd⋆ + β(λ), (98)
where β(λ) ≪ λ and η⋆ is the dual optimal solution of (73). Since π⋆ is the unique non-degenerate solution, the primal–dual solution pair (π⋆, η⋆) satisfies strict complementary slackness. We can express the solution π_λ using the dual solution:
π_λ = exp(−η_λ/λ). (99)
Since A_0 E(λ) = 0 and the unique basis is I⋆ = supp(π⋆), by Lemma A.1 we know that E(λ)_{I⋆} is a linear function of E(λ)_{I⋆c}. Thus, we only need to show that E(λ)_{I⋆c} is exponentially small. Since supp(π⋆)^c = supp(η⋆), E(λ)_{I⋆c} = (π_λ)_{I⋆c} and
λ log(E(λ)_{I⋆c}) = −(η⋆ + λd⋆ + β(λ))_{I⋆c} = Θ(1). (100)

Proposition B.6. Let M∆t := M(t_n − t) be defined by
(M∆t)_{I⋆} = A_{0,I⋆}^{-1}(t_n − t, 0), (M∆t)_{I⋆c} = 0, (101)
and let π′_{n,λ} = π_λ + M∆t. We have ∆π := π_{λ,n} − π′_{n,λ} = O_p(∥t_n − t∥ ϵ_λ), where ϵ_λ converges exponentially fast
to 0. Proof. By the definition of ∆ π, we have A0∆π= 0. In the empirical situation with treplaced by tn, we evaluate the difference in the cost function in equation (74) at two different vectors: the optimal plan πλ,nand a feasible plan π′ λ,n. Since πλ,nis optimal for the program, we have ⟨c,∆π⟩+λNX i,j=1f((πλ,n)i,j)−f (π′ λ,n)i,j ≤0 (102) The first order optimality condition for (74) writes: c+λf′(πλ) =A⊤ 0ξ. (103) Therefore, we have ⟨c,∆π⟩=⟨−λf′(πλ),∆π⟩, (104) and the governing inequality (102) can be written as NX i,j=1hi,j:=NX i,j=1f((πλ,n)i,j)−f (π′ λ,n)i,j −f′((πλ)i,j) ∆πi,j≤0 (105) We separately simplify the summations over the set I⋆and its complement. For (i, j)∈I⋆, by the convexity of fwe have hi,j≥(f′ (π′ λ,n)i,j −f′((πλ)i,j))∆πi,j= ∆πi,jlog(π′ λ,n)i,j (πλ)i,j For (i, j)∈I⋆c, since supp( M∆t)⊆I⋆, we have ( π′ λ,n)i,j= (πλ)i,jand hi,j=f((πλ+ ∆π)i,j)−f((πλ)i,j)−f′((πλ)i,j) ∆πi,j = (πλ)i,j f 1 +∆πi,j (πλ)i,j −f(1) ≥C(πλ)i,jmin ∆πi,j (πλ)i,j ,∆πi,j (πλ)i,j2! , where Cis a fixed constant. The second equality is obtained utilizing properties of the entropy function that f(x) =xlogx−x,f′(x) = log x, and f(1) =−1. The last inequality is a direct result of Lemma A.3. Finally, we have CX (i,j)∈I⋆cmin |∆πi,j|,(∆πi,j)2 (πλ)i,j ≤ −X (i,j)∈I⋆∆πi,jlog(π′ λ,n)i,j (πλ)i,j . (106) We then generate a bound for the right hand side of the inequality. When ( i, j)∈I⋆, (πλ)i,j= Θ(1), thus log(π′ λ,n)i,j (πλ)i,j = log 1 +(M∆t)i,j (πλ)i,j =Op(∥M∆t∥)≪1. 26 By Lemma A.1, we have ∥(∆π)I⋆∥ ≤C′∥(∆π)I⋆c∥. Consequently, X (i,j)∈I⋆cmin |∆πi,j|,(∆πi,j)2 (πλ)i,j ≤C′′∥M∆t∥∥(∆π)I⋆c∥, (107) which indicates that min ∆πmax,(∆πmax)2 (πλ)i,j ≤C′′N2∥M∆t∥∆πmax (108) with ∆ πmax= max (i,j)∈I⋆c|∆πi,j|. Since ∥M∆t∥=op(1), we conclude that ∥(∆π)I⋆c∥≲∥(πλ)I⋆c∥∥M∆t∥=Op((tn−t)E(λ)). (109) B.2 Convergence of penalized linear program: proof of Theorem 4.2 Theorem 4.2 establishes the convergence behavior of solutions to the penalized program as the penalty parameter approaches zero. 
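As a concrete illustration of the penalized program (23), one can solve a tiny instance numerically before turning to the proofs. The sketch below uses the exponential penalty p(x) = e^x (an illustrative choice, not the paper's prescribed penalty) on the simplex constraint; the penalized solution sits near the LP solution x⋆ = (1, 0) with a bias of order r:

```python
import numpy as np
from scipy.optimize import minimize

# Penalized program (23) on a tiny instance: m = 2, A = 1^T, b = 1, and the
# illustrative penalty p(x) = exp(x), so f_r(x) = <c,x> + r * sum_i exp(-x_i/r).
c = np.array([1.0, 2.0])
r = 0.1

def f_r(x):
    return c @ x + r * np.sum(np.exp(-x / r))

res = minimize(f_r, x0=np.array([0.5, 0.5]), method="SLSQP",
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0})
x_r = res.x
assert abs(x_r.sum() - 1.0) < 1e-6              # equality constraint holds
assert np.allclose(x_r, [1.0, 0.0], atol=0.05)  # near the LP solution, O(r) bias
```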
The asymptotic expansion in the theorem is primarily characterized by the solution of an auxiliary convex program (26). Before we prove this theorem, we first establish the existence and uniqueness of the solutions of the penalized program and of the auxiliary program.

Proof of the existence and uniqueness of the penalized program solution (Proposition 4.1). By Assumption 2, there is a feasible vector x_0 for (8) with all positive entries. In particular, x_0 ∈ int dom f_r, so x_0 is feasible for (23) as well. Lemma 3.1 implies that f_r(x) is strictly convex, so, if a solution exists, it is unique. It therefore suffices to show that a solution exists. This follows from convex duality: the penalized program (11) is the convex dual of
max_{λ∈R^k} ⟨b, λ⟩ − r Σ_i q(c_i − (A^⊤λ)_i). (110)
Fenchel duality (Bertsekas, 2009, Proposition 5.3.8) implies that a primal solution is guaranteed to exist so long as Slater's condition for the dual problem holds, that is, so long as there exists λ⋆ such that c − A^⊤λ⋆ > 0. But this follows from the fact that the set of solutions to the original linear program (8) is bounded (Schrijver, 1998, Corollary 7.1k). Finally, we show that the solution satisfies x(r, b) ∈ int dom f_r. Let t = sup{s : s ∈ dom p}. Since dom f_r = (dom p)^m, if x(r, b) does not lie in int dom f_r, then there exists a coordinate i such that x(r, b)_i = t. Lemma 3.1 shows that t ≥ 0 and
p′(x) → +∞ as x ↑ t; in particular, this implies that the directional derivative of f_r at x(r, b) in the direction x_0 − x(r, b) is −∞, contradicting the claimed optimality of x(r, b). To prove that the auxiliary program has a unique solution, we show that it is locally strongly convex. Here, I_0 is defined as in (26).

Lemma B.7 (Local strong convexity of the auxiliary program). Let g(x) = ⟨c, x⟩ + Σ_{i∈I_0} p(−x_i), and let A = {x ∈ R^m | Ax = α_0} be an affine space. Then g is locally strongly convex and C^2 on A, i.e., for every x ∈ int(dom g),
inf_{w∈Nul A, ∥w∥_2=1} ⟨w, (∇²g)w⟩ > 0.

Proof. Differentiating g directly, we have
⟨w, (∇²g)w⟩ = Σ_{i∈I_0} w_i² p′′(−x_i) ≥ min_{i∈I_0} p′′(−x_i) Σ_{i∈I_0} w_i².
Consider the quadratic program
v_min = min_{∥w∥_2=1, Aw=0} Σ_{i∈I_0} w_i². (111)
The program attains a solution on the set ∥w∥_2 = 1 by the compactness of the feasible set. By Lemma A.1, w_i = 0 for all i ∈ I_0 implies that w = 0, which is not feasible for the quadratic program. Therefore, v_min > 0, and by the local strong convexity of p we have
⟨w, (∇²g)w⟩ ≥ v_min min_{i∈I_0} p′′(−x_i) > 0.

Lemma B.8. The minimization problem in (26) is feasible and has a unique solution.

Proof. Feasibility: By Slater's condition for the original linear program (8), there is a vector x_s with all positive entries that is feasible for the linear program. It is easy to verify that d_0 = x_s − x⋆ is feasible for (26), since (d_0)_{I_0} ≥ 0.

Existence and uniqueness: Following a similar procedure as in the proof of Proposition 4.1, we write the dual program of (26):
max_{λ∈R^k} −Σ_{i∈I_0} q(c_i − (A^⊤λ)_i), s.t. c_i − (A^⊤λ)_i = 0 for i ∈ I_0^c. (112)
Feasibility of the dual program, i.e., the existence of λ such that c_i − (A^⊤λ)_i > 0 for i ∈ I_0 and c_i − (A^⊤λ)_i = 0 for i ∈ I_0^c, is a direct result of the complementary slackness of the linear program solutions (Schrijver, 1998, Section 7.9). Therefore, under the same arguments as in the proof of Proposition 4.1, we have the existence of a solution of (26).
Also, since the cost function of the primal program is strictly convex on the affine space {d ∈ R^m | Ad = 0} by Lemma B.7, we conclude that such a solution is unique. We now proceed with proving the theorem. Our approach leverages two key facts: both x⋆ + rd⋆ and x(r, b) are feasible solutions of the penalized program, and x(r, b) achieves optimality. The proof employs an important property of locally strongly convex functions established in Lemma A.3.

Proof of Theorem 4.2. Define d(r) = (x(r, b) − x⋆)/r. It suffices to show that ∥d⋆ − d(r)∥ = O(β(r)) as r → 0. Define
g_r(d) = ⟨c, d⟩ + Σ_{i=1}^m p(−x⋆_i/r − d_i), and let g(d) = ⟨c, d⟩ + Σ_{i∈I_0} p(−d_i).
The convexity of p implies that
h_r(d) := g_r(d) − g(d) = Σ_{i∉I_0} p(−x⋆_i/r − d_i)
is a convex function of d. Hence
h_r(d⋆) − h_r(d(r)) ≤ ⟨∇h_r(d⋆), d⋆ − d(r)⟩.
On the other hand, since d(r) minimizes g_r, we also have
g(d(r)) − g(d⋆) ≤ h_r(d⋆) − h_r(d(r)).
Combining these facts yields
g(d(r)) − g(d⋆) ≤ ⟨∇h_r(d⋆), d⋆ − d(r)⟩.
For any v ∈ R^m, we have
|⟨∇h_r(d⋆), v⟩| ≤ Σ_{i∉I_0} p′(−x⋆_i/r − d⋆_i)|v_i| = O(β(r)∥v∥) as r → 0, (113)
where the implicit constant depends on x⋆ but can be taken independent of r and v. We obtain that there exists a positive constant C′ such that
g(d(r)) − g(d⋆) ≤ C′β(r)∥d⋆ − d(r)∥.
Applying Lemma A.3 and Lemma B.7 with K = {d⋆} and A = {x ∈ R^m | Ax = 0}, and using the fact that d⋆ is a minimizer of g over ker A, we obtain
C min{∥d⋆ − d(r)∥, ∥d⋆ − d(r)∥²} ≤ C′β(r)∥d⋆ − d(r)∥, (114)
which implies, for r small enough, that ∥d⋆ − d(r)∥ ≤ (C′/C)β(r) = O(β(r)), as desired.

B.3
Asymptotic law of random penalized program solutions: proofs of the theorems in Sections 5, 6, and 7

This section provides the complete proofs of the main theoretical results of this paper. We first establish the asymptotic behavior of the penalized program under small non-random perturbations of the equality constraints. We then extend these results to scenarios involving random constraints and prove the convergence law of the extrapolated estimator.

B.3.1 Asymptotic behavior of the perturbed penalized program

We first show that the solution of the perturbed penalized program (31) exists and is unique when the perturbation of the vector b is small enough.

Proof of Proposition 5.1. We only prove feasibility; the existence and uniqueness of the solution follow from arguments identical to those presented in Proposition 4.1. By Slater's condition for the linear program in Assumption 2, there exists a vector x_0 such that
Ax_0 = b, x_{0,i} > 0. (115)
Let A_0 ∈ R^{k×k} be a full-rank submatrix of A obtained by selecting the columns corresponding to an index set I. Let ∆x ∈ R^m be constructed such that (∆x)_I = A_0^{-1}(b′ − b) and (∆x)_{I^C} = 0. If ∥b′ − b∥ ≤ ∥A_0^{-1}∥^{-1} min_{i∈{1,...,m}} x_{0,i}, then ∥∆x∥ ≤ min_{i∈{1,...,m}} x_{0,i}, ensuring that x′ = x_0 + ∆x has all nonnegative entries and satisfies Ax′ = b′, thereby remaining feasible with b′.

The matrix M⋆ serves as the key component in the expansion trajectory of the perturbed program (31). We prove its properties stated in Proposition 5.2, which are crucial in the proof of Theorem 5.3.

Proof of Proposition 5.2. Existence: Feasibility of the program and the nonnegativity of p′′(−d⋆) imply the existence of solutions. Uniqueness: Assume there are two different vectors x_1 and x_2 both solving the program (30). Let ∆x = x_1 − x_2. Then ∆x lies in the kernel of A. The strict convexity of the quadratic functions f(x_i) = p′′(−d⋆_i)x_i² for i ∈ I_0 implies that ∆x_i = 0 for i ∈ I_0. Moreover, by Lemma A.1, the entries on I_0 uniquely determine ∆x, and we hence conclude ∆x = 0.
The two properties are a direct consequence of the first-order optimality conditions for this quadratic program. Let μ⋆ ∈ R^k and x⋆ ∈ R^m be the optimal dual and primal variables, respectively. The first-order conditions read

Ax⋆ = y,   diag(p′′(−d⋆)_{I0}) x⋆ − A⊤μ⋆ = 0.   (116)

Hence (x⋆, μ⋆) is a linear function of (y, 0), so there is a matrix representation M⋆ ∈ R^{m×k} such that x⋆ = M⋆y. Since for every y ∈ R^k and every q ∈ ker(A),

Ax⋆ = AM⋆y = y,   0 = q⊤(Σ⊤x⋆ − A⊤μ⋆) = q⊤Σ⊤x⋆ = q⊤Σ⊤M⋆y,

we have AM⋆ = I_k and q ∈ ker(M⋆⊤Σ⊤). Therefore, we conclude that M⋆ is a right inverse of A and that ker(A) ⊆ ker(M⋆⊤Σ⊤).

Building on these results, we now characterize the trajectory of the penalized program under a small perturbation of b. The techniques employed are similar to those in the proof of Theorem 4.2. We use the fact that x⋆ + rd⋆ + M⋆(b′ − b) is feasible for the perturbed penalized program and apply Lemma A.3.

Proof of Theorem 5.3. We proceed as in the proof of Theorem 4.2. Define e(r,b′) = (x(r,b′) − x⋆ − rd⋆)/r, and let e⋆ = M⋆(b′ − b)/r. Our aim is to show that ‖e(r,b′) − e⋆‖ = O(β(r) + ‖b′ − b‖²/r²). Since M⋆ is a right inverse of A by Proposition 5.2, we have Ae(r,b′) = Ae⋆ = (b′ − b)/r. Therefore, as in the proof of Theorem 4.2, we have g_r(d⋆ + e(r,b′)) ≤ g_r(d⋆ + e⋆). Following the steps of that proof, we obtain

g(d⋆ + e(r,b′)) − g(d⋆ + e⋆) ≤ ⟨∇h_r(d⋆ + e⋆), e⋆ − e(r,b′)⟩ = O(β(r)‖e⋆ − e(r,b′)‖),

and therefore

g(d⋆ + e(r,b′)) − g(d⋆ + e⋆) − ⟨∇g(d⋆ + e⋆), e(r,b′) − e⋆⟩ ≤ ⟨∇g(d⋆ + e⋆), e⋆ − e(r,b′)⟩ + O(β(r)‖e⋆ − e(r,b′)‖).

Since p′′ is locally Lipschitz by Lemma 3.1,
Taylor's theorem implies

⟨∇g(d⋆+e⋆), e(r,b′)−e⋆⟩ = ⟨∇g(d⋆), e(r,b′)−e⋆⟩ + ⟨∇²g(d⋆)e⋆, e(r,b′)−e⋆⟩ + O(‖e⋆‖²‖e(r,b′)−e⋆‖).

The first-order optimality conditions for d⋆ show that the first term vanishes. Moreover, since ∇²g(d⋆)e⋆ = r^{−1}ΣM⋆(b′−b) and e(r,b′)−e⋆ lies in the kernel of A, Proposition 5.2 implies that the second term vanishes as well. We obtain

g(d⋆+e(r,b′)) − g(d⋆+e⋆) − ⟨∇g(d⋆+e⋆), e(r,b′)−e⋆⟩ = O((β(r) + ‖e⋆‖²)‖e(r,b′)−e⋆‖).

Since ‖e⋆‖ = O(‖b′−b‖/r) = o(1), we can choose a compact set K ⊆ dom g such that d⋆+e⋆ remains in K. Applying Lemma A.3 and Lemma B.7 with A = {x ∈ R^m | Ax = b′−b}, we conclude as in the proof of Theorem 4.2.

B.3.2 Convergence law for the random penalized program

The convergence law in equation (9) for the random variable b_n indicates that, with high probability, ‖b_n − b‖ falls within the interval (r_nβ(r_n), r_n). We leverage our understanding of the convergence trajectory of the non-random perturbed problem, characterized in Theorem 5.3, to establish the convergence law of x(r_n, b_n).

Proof of Theorem 5.4. Since r_nβ(r_n) ≪ q_n^{−1} ≪ r_n, we can choose sequences u_n and l_n such that r_nβ(r_n) ≪ l_n ≪ q_n^{−1} ≪ u_n ≪ √(r_n q_n^{−1}). Therefore, we have

P(‖b_n − b‖ ∈ [l_n, u_n]) → 1.   (117)

Let η(r_n, b_n − b) := x(r_n, b_n) − r_nd⋆ − x⋆ − M⋆(b_n − b). When ‖b_n − b‖ ∈ [l_n, u_n], Theorem 5.3 implies that q_n η(r_n, b_n − b) 1{‖b_n − b‖ ∈ [l_n, u_n]} = o_p(1). Therefore, by Slutsky's theorem and Theorem 5.3,

q_n(x(r_n, b_n) − r_nd⋆ − x⋆) = q_nM⋆(b_n − b) + q_n η(r_n, b_n − b) 1{‖b_n − b‖ ∈ [l_n, u_n]} + q_n η(r_n, b_n − b) 1{{l_n ≥ ‖b_n − b‖} ∪ {‖b_n − b‖ ≥ u_n}} = q_nM⋆(b_n − b) + o_p(1) →_D M⋆(G).   (118)

Now we establish the asymptotic law of the debiased estimator. Our proof demonstrates that, by utilizing the two solutions under different penalty strengths, we can construct an effective estimator of the bias vector d⋆.

Proof of Theorem 6.1. Define an estimator of d⋆ as

ˆd_n = (2/r_n)(x(r_n, b_n) − x(r_n/2, b_n)).   (119)

Consider the event H_n := {l_n ≤ ‖b_n − b‖ ≤ u_n}, where l_n and u_n are sequences satisfying r_nβ(r_n) ≪ l_n ≪ q_n^{−1} ≪ u_n ≪ √(r_n q_n^{−1}). We have lim_{n→∞} P(H_n) = 1.
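Before continuing, note that the estimator (119) is a Richardson-extrapolation step in the penalty parameter: since x(r,b) = x⋆ + rd⋆ + o(r), the combination x(r_n,b_n) − r_n ˆd_n = 2x(r_n/2,b_n) − x(r_n,b_n) cancels the first-order bias. A toy numerical sketch with the exponential penalty p(t) = e^t on a simplex-constrained problem (assuming NumPy/SciPy; the data c, A, b are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0, 3.0])                 # unique LP minimizer on the simplex: x* = e_1
A, b = np.ones((1, 3)), np.array([1.0])
x_star = np.array([1.0, 0.0, 0.0])

def solve_penalized(r):
    # min <c,x> + r * sum_i exp(-x_i/r)  subject to  sum(x) = 1
    obj = lambda x: c @ x + r * np.sum(np.exp(-x / r))
    cons = {"type": "eq", "fun": lambda x: A @ x - b}
    return minimize(obj, np.full(3, 1 / 3), constraints=cons, method="SLSQP").x

r = 0.1
x_r, x_half = solve_penalized(r), solve_penalized(r / 2)
x_hat = 2 * x_half - x_r                      # = x(r) - r * d_hat, the extrapolated point

# the extrapolated point is much closer to x* than the penalized solution itself
assert np.linalg.norm(x_hat - x_star) < 0.5 * np.linalg.norm(x_r - x_star)
```

For the exponential penalty the remainder β(r) is very small, so in this toy example the extrapolation removes essentially all of the visible bias.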
On the event H_n, we have

‖r_n(ˆd_n − d⋆)‖ ≤ 2‖η(r_n, b_n − b)‖ + 2‖η(r_n/2, b_n − b)‖,

where η(r_n, b_n − b) = x(r_n, b_n) − (x⋆ + r_nd⋆ + M⋆(b_n − b)) = O_p(max(‖b_n − b‖²/r_n, r_nβ(r_n))). Therefore, we have

‖ˆd_n − d⋆‖ ≤ O_p(max(‖b_n − b‖²/r_n², β(r_n))) = o_p(1),

so ˆd_n is an asymptotically consistent estimator of d⋆. Furthermore,

q_nr_n‖ˆd_n − d⋆‖ ≤ O_p(max(q_n‖b_n − b‖²/r_n, q_nr_nβ(r_n))) = o_p(1).

Combining this result with Theorem 5.4, we have

q_n(ˆx_n − x⋆) →_D M⋆(G).   (120)

The following corollary establishes a remarkable convergence property of the objective value: under specific conditions, the debiased estimator ˆx_n achieves an accelerated convergence to the optimal value of the linear program (8).

Corollary B.8.1 (fast convergence of the optimal value). Let x⋆(b) be the solution of the linear program (8) with parameter b. Suppose c ≥ 0, ⟨c, x⋆⟩ = 0, and there exists ϵ > 0 such that supp(x⋆(b + ϵAρ)) = supp(x⋆(b)) both for ρ = ln(c)·1{x⋆=0} and for ρ = 1{x⋆=0}. Then, with the exponential penalty function,

q_n(⟨c, ˆx_n⟩ − ⟨c, x⋆⟩) →_D 0.   (121)

Proof. It is enough to show that, under the given conditions, ⟨c, M⋆(G)⟩ = 0. The condition supp(x⋆(b + ϵAρ)) = supp(x⋆(b)) implies that the system {Ax = 0, x·1{x⋆=0} = −ρ} has a solution

x_ρ = −ρ + (x⋆(b + ϵAρ) − x⋆(b))/ϵ.

Therefore, taking ρ0 = ln(c)·1{x⋆=0}, the vector x_{ρ0} satisfies

c = exp(−x_{ρ0})·1{x⋆=0},   Ax_{ρ0} = 0.   (122)

This implies that x_{ρ0} satisfies the optimality condition of (26) with the exponential penalty function; hence d⋆ = x_{ρ0}. Next, take ρ1 = 1{x⋆=0}. By the definition of M⋆, we have

⟨x_{ρ1}, diag(p′′(−d⋆)·1{x⋆=0}) M⋆(G)⟩ = 0.

Since x_{ρ1}·1{x⋆=0} = 1{x⋆=0} and p′′(−d⋆) = exp(−x_{ρ0}), combined with (122), this further implies that

⟨x_{ρ1}, diag(p′′(−d⋆)·1{x⋆=0}) M⋆(G)⟩ = ⟨p′′(−d⋆)·1{x⋆=0}, M⋆(G)⟩ = ⟨c, M⋆(G)⟩ = 0.

Corollary 6.1.1 applies the results of Corollary B.8.1 to the special case of optimal transport problems in which the source and target distributions are identical.

Proof of Corollary 6.1.1. The proof consists of verifying that all conditions specified in Corollary B.8.1 are
satisfied. Since t = s and c_{i,i} = 0, the optimal transport plan π⋆ has support only on the diagonal entries. Because the cost matrix is symmetric, ρ is a symmetric matrix in both scenarios. Therefore, for small enough ϵ, the transport plan π⋆(t + ϵΣ_{j=1}^N ρ_{i,j}, s + ϵΣ_{i=1}^N ρ_{i,j}) also has support only on the diagonal entries.

B.3.3 Bootstrap consistency

Finally, we demonstrate that the naive bootstrap estimator ˜x_n, generated from a bootstrap copy of b_n, is consistent for conducting inference on x⋆.

Proof of Theorem 7.1. The proof mainly follows that of (van der Vaart and Wellner, 1996, Theorem 3.9.11). Since ˜b_n is a bootstrap copy of b_n obtained by resampling, and h(M⋆(·)) is a bounded Lipschitz function for every h ∈ BL_1(R^k), we have

sup_{h∈BL_1(R^k)} | E[h(√n M⋆(˜b_n − b_n)) | Z_1, …, Z_n] − E[h(√n M⋆(b_n − b))] | →_P 0.   (123)

Since the random vector ˜b_n has the same convergence law as b_n, Theorems 5.4 and 6.1 also hold for x(r_n, ˜b_n) and for ˜x_n. Therefore, we have

√n(˜x_n − x⋆) = √n M⋆(˜b_n − b) + o_p(1),   (124)

and

√n(ˆx_n − x⋆) = √n M⋆(b_n − b) + o_p(1).   (125)

Thus, subtracting the two equations above, we have

√n(˜x_n − ˆx_n) − √n M⋆(˜b_n − b_n) →_P 0.   (126)

Furthermore, for every ϵ > 0,

sup_{h∈BL_1(R^k)} | E[h(√n(˜x_n − ˆx_n)) | Z_1, …, Z_n] − E[h(√n M⋆(˜b_n − b_n)) | Z_1, …, Z_n] | ≤ ϵ + 2P(‖√n(˜x_n − ˆx_n) − √n M⋆(˜b_n − b_n)‖ ≥ ϵ | Z_1, …, Z_n).

Combining this with the convergence law for ˆx_n given in Theorem 6.1, we obtain the desired statement (37).

References

R. K. Ahuja, T. L. Magnanti, J. B. Orlin, et al. Network flows: theory, algorithms, and applications, volume 1. Prentice Hall, Englewood Cliffs, NJ, 1993.

J. Aitchison and S. D. Silvey. Maximum-likelihood estimation of parameters subject to restraints. The Annals of Mathematical Statistics, 29(3):813–828, 1958. ISSN 00034851, 21688990. URL http://www.jstor.org/stable/2237265.

S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society: Series B (Methodological), 28(1):131–142, 1966.

D. W. K. Andrews.
Generalized method of moments estimation when a parameter is on a boundary. Volume 20, pages 530–544, 2002. doi: 10.1198/073500102288618667. URL https://doi.org/10.1198/073500102288618667. Twentieth anniversary GMM issue.

A. Auslender, R. Cominetti, and M. Haddou. Asymptotic analysis for penalty and barrier methods in convex and linear programming. Mathematics of Operations Research, 22(1):43–62, 1997. doi: 10.1287/moor.22.1.43. URL https://doi.org/10.1287/moor.22.1.43.

F. Bach. On the effectiveness of Richardson extrapolation in data science. SIAM Journal on Mathematics of Data Science, 3(4):1251–1277, Jan. 2021. ISSN 2577-0187. doi: 10.1137/21m1397349. URL http://dx.doi.org/10.1137/21M1397349.

D. Bertsekas. Convex optimization theory, volume 1. Athena Scientific, 2009.

D. Bertsimas and J. N. Tsitsiklis. Introduction to linear optimization, volume 6. Athena Scientific, Belmont, MA, 1997.

A. Bhattacharyya. On a measure of divergence between two statistical populations defined by their probability distribution. Bulletin of the Calcutta Mathematical Society, 35:99–110, 1943.

J. Bigot, E. Cazelles, and N. Papadakis. Central limit theorems for entropy-regularized optimal transport on finite spaces and statistical applications. Electronic Journal of Statistics, 13(2):5120–5150, 2019. doi: 10.1214/19-EJS1637. URL https://doi.org/10.1214/19-EJS1637.

S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Mar. 2004. ISBN 9780511804441. doi: 10.1017/cbo9780511804441. URL http://dx.doi.org/10.1017/CBO9780511804441.

C. Bunne, S. G.
Stark, G. Gut, J. S. Del Castillo, M. Levesque, K.-V. Lehmann, L. Pelkmans, A. Krause, and G. Rätsch. Learning single-cell perturbation responses using neural optimal transport. Nature Methods, 20(11):1759–1768, 2023.

T. Cai, J. Cheng, N. Craig, and K. Craig. Linearized optimal transport for collider events. Physical Review D, 102(11):116019, 2020.

A. Charpentier, E. Flachaire, and E. Gallic. Optimal transport for counterfactual estimation: A method for causal inference. In Optimal Transport Statistics for Economics and Related Topics, pages 45–89. Springer, 2023.

H. Chernoff. On the distribution of the likelihood ratio. Ann. Math. Statistics, 25:573–578, 1954. ISSN 0003-4851. doi: 10.1214/aoms/1177728725. URL https://doi.org/10.1214/aoms/1177728725.

L. Chizat, P. Roussillon, F. Léger, F.-X. Vialard, and G. Peyré. Faster Wasserstein distance estimation with the Sinkhorn divergence. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 2257–2269. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Paper.pdf.

R. Cominetti and J. S. Martin. Asymptotic analysis of the exponential penalty trajectory in linear programming. Mathematical Programming, 67:169–187, 1994.

N. Courty, R. Flamary, and D. Tuia. Domain adaptation with regularized optimal transport. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part I 14, pages 274–289. Springer, 2014.

N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9):1853–1865, 2016.

D. Davis, D. Drusvyatskiy, and L. Jiang. Asymptotic normality and optimality in nonsmooth stochastic approximation. The Annals of Statistics, 52(4), Aug. 2024.
ISSN 0090-5364. doi: 10.1214/24-aos2401. URL http://dx.doi.org/10.1214/24-AOS2401.

N. Deb and B. Sen. Multivariate rank-based distribution-free nonparametric testing using measure transportation. Journal of the American Statistical Association, 118(541):192–207, 2023.

J. C. Duchi and F. Ruan. Asymptotic optimality in stochastic optimization. The Annals of Statistics, 49(1), Feb. 2021. ISSN 0090-5364. doi: 10.1214/19-aos1831. URL http://dx.doi.org/10.1214/19-AOS1831.

L. Dümbgen. On nondifferentiable functions and the bootstrap. Probability Theory and Related Fields, 95(1):125–140, Mar. 1993. ISSN 1432-2064. doi: 10.1007/bf01197342. URL http://dx.doi.org/10.1007/BF01197342.

J. Dupacova and R. Wets. Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems. The Annals of Statistics, 16(4), Dec. 1988. ISSN 0090-5364. doi: 10.1214/aos/1176351052. URL http://dx.doi.org/10.1214/aos/1176351052.

M. Essid and J. Solomon. Quadratically regularized optimal transport on graphs. SIAM Journal on Scientific Computing, 40(4):A1961–A1986, 2018.

Z. Fang and A. Santos. Inference on directionally differentiable functions. The Review of Economic Studies, 86(1):377–412, 09 2018. ISSN 0034-6527. doi: 10.1093/restud/rdy049. URL https://doi.org/10.1093/restud/rdy049.

S. Ferradans, N. Papadakis, G. Peyré, and J.-F. Aujol. Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3):1853–1882, 2014.

A. Genevay, G. Peyre, and M. Cuturi. Learning generative models with Sinkhorn divergences. In A. Storkey and F. Perez-Cruz, editors, Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 1608–1617. PMLR, 09–11 Apr 2018. URL https://proceedings.mlr.press/v84/genevay18a.html.

P. Ghosal and
B. Sen. Multivariate ranks and quantiles using optimal transport: Consistency, rates and nonparametric testing. The Annals of Statistics, 50(2):1012–1037, 2022.

Z. Goldfeld, K. Kato, G. Rioux, and R. Sadhu. Limit theorems for entropic optimal transport maps and Sinkhorn divergence. Electronic Journal of Statistics, 18(1), Jan. 2024. ISSN 1935-7524. doi: 10.1214/24-ejs2217. URL http://dx.doi.org/10.1214/24-EJS2217.

A. González-Sanz and S. Hundrieser. Weak limits for empirical entropic optimal transport: Beyond smooth costs. 05 2023. URL https://arxiv.org/pdf/2305.09745.pdf.

A. Gonzalez-Sanz, J.-M. Loubes, and J. Niles-Weed. Weak limits of entropy regularized optimal transport; potentials, plans and divergences. 07 2022. URL https://arxiv.org/pdf/2207.07427.pdf.

A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample-problem. Advances in Neural Information Processing Systems, 19, 2006.

Z. Harchaoui, L. Liu, and S. Pal. Asymptotics of discrete Schrödinger bridges via chaos decomposition. Bernoulli, 30(3), Aug. 2024. ISSN 1350-7265. doi: 10.3150/23-bej1659. URL http://dx.doi.org/10.3150/23-BEJ1659.

P. J. Huber. The behavior of maximum likelihood estimates under nonstandard conditions. In Proc. Fifth Berkeley Sympos. Math. Statist. and Probability (Berkeley, Calif., 1965/66), Vol. I: Statistics, pages 221–233. Univ. California Press, Berkeley, CA, 1967.

G.-J. Huizing, G. Peyré, and L. Cantini. Optimal transport improves cell–cell similarity inference in single-cell omics data. Bioinformatics, 38(8):2169–2177, 2022.

D. C. Joyce. Survey of extrapolation processes in numerical analysis. SIAM Review, 13(4):435–490, Oct. 1971. ISSN 1095-7200. doi: 10.1137/1013092. URL http://dx.doi.org/10.1137/1013092.

A. J. King and R. T. Rockafellar. Asymptotic theory for solutions in statistical estimation and stochastic programming. Mathematics of Operations Research, 18(1):148–162, 1993. doi: 10.1287/moor.18.1.148.
URL https://doi.org/10.1287/moor.18.1.148.

M. Klatt, C. Tameling, and A. Munk. Empirical regularized optimal transport: Statistical theory and applications. SIAM Journal on Mathematics of Data Science, 2(2):419–443, Jan. 2020. ISSN 2577-0187. doi: 10.1137/19m1278788. URL http://dx.doi.org/10.1137/19M1278788.

M. Klatt, A. Munk, and Y. Zemel. Limit laws for empirical optimal solutions in random linear programs. Annals of Operations Research, 315(1):251–278, Aug 2022. ISSN 1572-9338. doi: 10.1007/s10479-022-04698-0. URL https://link.springer.com/content/pdf/10.1007/s10479-022-04698-0.pdf.

P. T. Komiske, E. M. Metodiev, and J. Thaler. Metric space of collider events. Physical Review Letters, 123(4):041801, 2019.

S. Kullback and R. A. Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951.

S. Liu, F. Bunea, and J. Niles-Weed. Asymptotic confidence sets for random linear programs. In G. Neu and L. Rosasco, editors, Proceedings of Thirty Sixth Conference on Learning Theory, volume 195 of Proceedings of Machine Learning Research, pages 3919–3940. PMLR, 12–15 Jul 2023. URL https://proceedings.mlr.press/v195/liu23d.html.

Lyft Bikes and Scooters, LLC. Citi Bike system data. https://s3.amazonaws.com/tripdata/index.html, 2024.

T. Manole, S. Balakrishnan, J. Niles-Weed, and L. Wasserman. Central limit theorems for smooth optimal transport maps. 12 2023. URL https://arxiv.org/pdf/2312.12407.pdf.

G. Mordant. The entropic optimal (self-)transport problem: Limit distributions for decreasing regularization with application to score function estimation. 12 2024. URL https://arxiv.org/pdf/2412.12007.pdf.

A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429–443, 1997.

W. K. Newey, F. Hsieh, and J. M. Robins. Twicing kernels and a small
bias property of semiparametric estimators. Econometrica, 72(3):947–962, 2004.

S. E. Park, P. Harris, and B. Ostdiek. Neural embedding: learning the embedding of the manifold of physics data. Journal of High Energy Physics, 2023(7):1–38, 2023.

G. Peyré, M. Cuturi, et al. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355–607, 2019.

B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992. doi: 10.1137/0330046. URL https://doi.org/10.1137/0330046.

A. Rakotomamonjy, R. Flamary, G. Gasso, M. E. Alaya, M. Berar, and N. Courty. Optimal transport for conditional domain matching and label shift. Machine Learning, pages 1–20, 2022.

I. Redko, N. Courty, R. Flamary, and D. Tuia. Optimal transport for multi-source domain adaptation under target shift. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 849–858. PMLR, 2019.

L. F. Richardson. The approximate arithmetical solution by finite differences of physical problems involving differential equations, with an application to the stresses in a masonry dam. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 210:307–357, 1911.

R. T. Rockafellar. Convex analysis, volume 28. Princeton University Press, 1997.

G. Schiebinger, J. Shu, M. Tabaka, B. Cleary, V. Subramanian, A. Solomon, J. Gould, S. Liu, S. Lin, P. Berube, et al. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. Cell, 176(4):928–943, 2019.

A. Schrijver. Theory of linear and integer programming. John Wiley & Sons, 1998.

A. Shapiro. Asymptotic properties of statistical estimators in stochastic programming. The Annals of Statistics, 17(2):841–858, 1989. ISSN 00905364, 21688966. URL http://www.jstor.org/stable/2241591.
A. Shapiro. Asymptotic analysis of stochastic programs. Annals of Operations Research, 30(1):169–186, Dec 1991. ISSN 1572-9338. doi: 10.1007/BF02204815. URL https://link.springer.com/content/pdf/10.1007/BF02204815.pdf.

A. Shapiro. Asymptotic behavior of optimal solutions in stochastic programming. Mathematics of Operations Research, 18(4):829–845, 1993. doi: 10.1287/moor.18.4.829. URL https://doi.org/10.1287/moor.18.4.829.

H. Shi, M. Drton, and F. Han. Distribution-free consistent independence tests via center-outward ranks and signs. Journal of the American Statistical Association, 117(537):395–410, 2022.

B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. Lanckriet. On integral probability metrics, φ-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009.

W. Stuetzle and Y. Mittal. Some comments on the asymptotic behavior of robust smoothers, pages 191–195. Springer Berlin Heidelberg, 1979. ISBN 9783540384755. doi: 10.1007/bfb0098497. URL http://dx.doi.org/10.1007/BFb0098497.

C. Tameling, S. Stoldt, T. Stephan, J. Naas, S. Jakobs, and A. Munk. Colocalization for super-resolution microscopy via optimal transport. Nature Computational Science, 1(3):199–211, 2021a.

C. Tameling, S. Stoldt, T. Stephan, J. Naas, S. Jakobs, and A. Munk. Simulated and real data. Zenodo, https://doi.org/10.5281/zenodo.4553856, 2021b.

L. Tang. Optimal transport for single-cell genomics. Nature Methods, 22(3):452–452, 2025.

W. Torous, F. Gunsilius, and P. Rigollet. An optimal transport approach to estimating causal effects via nonlinear difference-in-differences. Journal of Causal Inference, 12(1):20230004, 2024.

J. W. Tukey. Exploratory data analysis. Addison–Wesley, 1977.

A. W. van der Vaart. Asymptotic Statistics. Cambridge
Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.

A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer New York, 1996. ISBN 9781475725452. doi: 10.1007/978-1-4757-2545-2. URL http://dx.doi.org/10.1007/978-1-4757-2545-2.

A. Wald. Tests of statistical hypotheses concerning several parameters when the number of observations is large. Transactions of the American Mathematical Society, 54(3):426–482, 1943. ISSN 00029947, 10886850. URL http://www.jstor.org/stable/1990256.

A. Wald. Note on the consistency of the maximum likelihood estimate. The Annals of Mathematical Statistics, 20(4):595–601, 1949. ISSN 00034851. URL http://www.jstor.org/stable/2236315.

D. W. Walkup and R. J.-B. Wets. A Lipschitzian characterization of convex polyhedra. Proceedings of the American Mathematical Society, pages 167–173, 1969.

H. Wang, J. Fan, Z. Chen, H. Li, W. Liu, T. Liu, Q. Dai, Y. Wang, Z. Dong, and R. Tang. Optimal transport for treatment effect estimation. Advances in Neural Information Processing Systems, 36:5404–5418, 2023.

C.-H. Zhang and S. S. Zhang. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):217–242, 07 2013. ISSN 1369-7412. doi: 10.1111/rssb.12026. URL https://doi.org/10.1111/rssb.12026.

W. Zhang and Y. Xia. Twicing local linear kernel regression smoothers. Journal of Nonparametric Statistics, 24(2):399–417, 2012.
SPARSE REGULARIZED OPTIMAL TRANSPORT WITHOUT CURSE OF DIMENSIONALITY

BY ALBERTO GONZÁLEZ-SANZ¹, STEPHAN ECKSTEIN² AND MARCEL NUTZ³

¹Department of Statistics, Columbia University, ag4855@columbia.edu
²Department of Mathematics, University of Tübingen, stephan.eckstein@uni-tuebingen.de
³Departments of Statistics and Mathematics, Columbia University, mnutz@columbia.edu

Entropic optimal transport—the optimal transport problem regularized by KL divergence—is highly successful in statistical applications. Thanks to the smoothness of the entropic coupling, its sample complexity avoids the curse of dimensionality suffered by unregularized optimal transport. The flip side of smoothness is overspreading: the entropic coupling always has full support, whereas the unregularized coupling that it approximates is usually sparse, even given by a map. Regularizing optimal transport by less-smooth f-divergences such as the Tsallis divergence (i.e., Lp-regularization) is known to allow for sparse approximations, but is often thought to suffer from the curse of dimensionality because the couplings have limited differentiability and the dual is not strongly concave. We refute this conventional wisdom and show, for a broad family of divergences, that the key empirical quantities converge at the parametric rate, independently of the dimension. More precisely, we provide central limit theorems for the optimal cost, the optimal coupling, and the dual potentials induced by i.i.d. samples from the marginals. These results are obtained by a powerful yet elementary approach that is of broader interest for Z-estimation in function classes that are not Donsker.

1. Introduction. Optimal transport has become ubiquitous following computational advances enabling applications in statistics, machine learning, image processing, and other domains where distributions or data sets need to be compared (e.g., [37, 43]).
Given probability measures P and Q on R^d, and a cost function c: R^d × R^d → R, the Monge–Kantorovich optimal transport problem is

(1)   OT(P,Q) = inf_{π∈Π(P,Q)} ∫ c(x,y) dπ(x,y)

where Π(P,Q) denotes the set of couplings, i.e., joint distributions π on R^d × R^d with marginals P and Q. A key bottleneck for statistical applications is that optimal transport suffers from the curse of dimensionality in terms of sample complexity (see [4, 12, 16] among many others). More specifically, let X_1,…,X_n and Y_1,…,Y_n be i.i.d. samples from P and Q, respectively, and consider their empirical measures P_n = (1/n)Σ_{i=1}^n δ_{X_i} and Q_n = (1/n)Σ_{i=1}^n δ_{Y_i}. Then OT(P_n,Q_n), the value of (1) computed with marginals (P_n,Q_n) instead of (P,Q), converges to the population value OT(P,Q) at a rate that deteriorates exponentially with the dimension d. For instance, for the important cost c(x,y) = ‖x−y‖² defining the 2-Wasserstein distance, E[|OT(P_n,Q_n) − OT(P,Q)|] ∼ n^{−2/d} for dimension d ≥ 5, under regularity conditions on the population marginals P and Q (see [33] for this particular result).

MSC2020 subject classifications: Primary: 62G05; Secondary: 62R10, 62G30.
Keywords and phrases: Optimal Transport, Regularization, Sample Complexity, Central Limit Theorem.

arXiv:2505.04721v1 [math.ST] 7 May 2025

By contrast, the celebrated entropy-regularized optimal transport (EOT) avoids the curse of dimensionality. The EOT problem penalizes (1) with the Kullback–Leibler (KL) divergence KL(π | P⊗Q) between the coupling π and the product P⊗Q of the marginals,

(2)   EOT_ε(P,Q) := inf_{π∈Π(P,Q)} ∫ c dπ + ε KL(π | P⊗Q)

where ε > 0 is a fixed parameter determining the strength of regularization. Indeed, a series of works starting with [18] has shown that EOT_ε(P_n,Q_n) converges to EOT_ε(P,Q) at the parametric rate n^{−1/2}, so the dimension d (and the parameter ε) only affect the constants. The literature is reviewed in Section 1.1 below.
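For intuition about the empirical quantity OT(P_n,Q_n), note that it is itself a finite linear program over n×n coupling matrices. A minimal sketch (assuming NumPy/SciPy; the data are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, d = 12, 2
X, Y = rng.normal(size=(n, d)), rng.normal(size=(n, d))   # samples from P and Q
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)        # cost c(x,y) = |x - y|^2

# couplings pi (flattened row-major) with row sums and column sums equal to 1/n
A_eq = np.vstack([np.kron(np.eye(n), np.ones(n)),         # row-sum constraints
                  np.kron(np.ones(n), np.eye(n))])        # column-sum constraints
b_eq = np.full(2 * n, 1 / n)
res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
pi = res.x.reshape(n, n)

assert res.success
assert np.allclose(pi.sum(axis=1), 1 / n, atol=1e-6)
assert np.allclose(pi.sum(axis=0), 1 / n, atol=1e-6)
assert np.count_nonzero(pi > 1e-9) <= 2 * n - 1           # a vertex solution is sparse
```

The last assertion illustrates the sparsity of the unregularized plan: a basic (vertex) solution of this LP, as returned by the HiGHS simplex solver, has at most 2n−1 nonzero entries out of n².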
https://arxiv.org/abs/2505.04721v1
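The entropic problem (2) is commonly solved by Sinkhorn iterations, and the resulting coupling u_i K_{ij} v_j with K = e^{−C/ε} is strictly positive in every entry for any ε > 0. A minimal sketch of this well-known scheme (assuming NumPy; uniform marginals and dimensions are illustrative):

```python
import numpy as np

def sinkhorn(C, eps, iters=2000):
    # Sinkhorn iterations for entropic OT with uniform marginals
    n, m = C.shape
    K = np.exp(-C / eps)
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    v = np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
C = rng.uniform(size=(8, 8))
pi = sinkhorn(C, eps=0.2)

assert np.all(pi > 0)                         # full support, for any eps > 0
assert np.allclose(pi.sum(axis=0), 1 / 8, atol=1e-8)
assert np.allclose(pi.sum(axis=1), 1 / 8, atol=1e-6)
```

The first assertion makes concrete the "overspreading" contrast drawn in the text: every entry of the entropic plan is strictly positive, unlike the vertex solutions of the unregularized LP.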
The prevailing thinking is that EOT overcomes the curse of dimensionality thanks to its smoothness. Indeed, the entropic penalty leads to optimal couplings whose density is smooth (or at least as smooth as the cost c), and this regularity holds independently of the marginals. In particular, it holds uniformly over the empirical measures (P_n,Q_n), and this fact drives the aforementioned result of [18].

The flip side of smoothness is that the optimal coupling of EOT always has full support (i.e., the same support as P⊗Q), for any value of the regularization parameter ε > 0. By contrast, the unregularized optimal transport coupling that it approximates typically has sparse support (the graph of a function, namely the Monge map). This disconnect can be undesirable depending on the application; for instance, the large support (or "overspreading") can amount to blurrier images in an image processing task as shown in [2], bias in a manifold learning task as in [45], or unfaithful approximation of barycenters [29]. In such cases, sparse approximations are desirable.

It is known that less-smooth f-divergence penalties give rise to sparse approximations. Indeed, consider the divergence-regularized optimal transport problem with the divergence D_φ(π | P⊗Q) defined by a function φ,

(3)   ROT_ε(P,Q) := inf_{π∈Π(P,Q)} ∫ c dπ + ε D_φ(π | P⊗Q),

(4)   D_φ(π | P⊗Q) := ∫ φ(dπ/d(P⊗Q)) d(P⊗Q),

where φ: R_+ → R is convex and satisfies certain conditions (see Assumption 2.1). While the KL divergence of EOT is recovered for φ(t) = t log t, replacing this by a power φ(t) = (p−1)^{−1}(t^p − 1) for p > 1 (Tsallis divergence, including the quadratic or χ² divergence for p = 2) was studied starting with [35, 2, 15] and empirically seen to have sparse solutions. Recent investigations underline this finding with theoretical results; see the literature review below. The main objection against such regularizations is that they lack the smoothness of EOT.
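To make the limited smoothness and the sparsity mechanism concrete, one can work out the convex conjugate for the quadratic case p = 2, i.e. φ(t) = t² − 1 (a standard computation, not taken from the text):

```latex
\psi(s) \;=\; \sup_{t \ge 0}\,\{\, st - t^2 + 1 \,\}
        \;=\; 1 + \frac{(s_+)^2}{4},
\qquad
\psi'(s) \;=\; \frac{s_+}{2},
\qquad s_+ := \max(s, 0).
```

Here ψ′ is Lipschitz but not differentiable at s = 0, so ψ is C¹ but not C², consistent with the threshold k < p/(p−1) = 2 recalled below; and because the optimal density is obtained by applying ψ′, the coupling vanishes wherever the argument of ψ′ is negative, which is the source of sparsity.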
Indeed, the density of the optimal coupling is only as smooth as the derivative of the convex conjugate ψ(s) = sup_{t≥0} {st − φ(t)} of φ, which for φ(t) = (p−1)^{−1}(t^p − 1) is k-times differentiable only for k < p/(p−1). In particular, it does not enjoy the C∞-smoothness crucially used in EOT, where the proof of sample complexity uses derivatives of higher and higher order as the dimension d grows. Following the aforementioned prevailing thinking, it is often assumed that (3) thus suffers from the curse of dimensionality when it is not smooth. This idea is in line with the recent work [1], which gives an upper bound for the sample complexity that deteriorates exponentially with the dimension d (see Section 1.1).

In the present paper, our goal is to refute this prevailing thinking. For a broad class of regularizations φ, we show that (3) overcomes the curse of dimensionality and converges at the parametric rate, despite the lack of smoothness (and also of a PL inequality, cf. Section 1.1). In fact, our results are much more detailed: we establish central limit theorems for all key objects. As a consequence, (3) provides approximate solutions to optimal transport with precise statistical guarantees.

Synopsis of main results. We consider marginals P, Q with bounded supports, at least one of them connected. The transport cost c is
a general C¹ function. The main assumption on the divergence is that the convex conjugate ψ of φ is C². This includes the KL divergence, where ψ is C∞, but also Tsallis divergences that yield sparse approximations of unregularized optimal transport. There are three main objects under consideration. The first is the optimal cost ROT_ε(P_n,Q_n); here the result is simple to state as the object is scalar. In Theorem 3.5, we show that the optimal cost is asymptotically normal at rate √n. More precisely, √n(ROT(P_n,Q_n) − ROT(P,Q)) → N(0,σ²), where N(0,σ²) is the centered normal distribution with a variance σ² detailed in Theorem 3.5. The second key object is the optimal coupling π_n ∈ Π(P_n,Q_n). Here the asymptotic normality can be stated by integrating a test function: Theorem 3.6 shows that for any bounded measurable function η, we have √n ∫ η d(π_n − π) → N(0,σ²(η)), where again the variance is detailed in the theorem. The third object is the pair of dual potentials; that is, functions (f∗,g∗) solving the dual problem of (3) in the sense of convex analysis,

(5)   sup_{(f,g)∈L∞(P)×L∞(Q)} ∫ [ f(x) + g(y) − ε·ψ((f(x) + g(y) − c(x,y))/ε) ] d(P⊗Q)(x,y).

The potentials determine the optimal coupling via

dπ = ψ′((f∗(x) + g∗(y) − c(x,y))/ε) d(P⊗Q),

where ψ′ is the derivative of the conjugate of φ. Unlike in the special case of EOT, ψ′ is generally not invertible, and hence the potentials are the main object in our study. Denoting by (f∗,g∗) the population potentials and by (f_n,g_n) their empirical counterparts, our central limit theorem for the potentials states that √n(f_n − f∗, g_n − g∗) converges (weakly wrt. the uniform norm) to the Gaussian random process detailed in Theorem 3.4. Of course this result can only hold if (f∗,g∗) is unique. Indeed, we obtain uniqueness (up to additive constants) as a by-product of our analysis; this is a new result of its own interest and holds even for merely continuous transport costs c (Theorem 3.2).

1.1. Related literature. Next, we review the literature on optimal transport regularized by an f-divergence.
Following the main focus of the present work, we begin with divergences other than the entropic one. After that, we review the existing sample complexity results for EOT.

1.1.1. Non-entropic divergences. Optimal transport problems regularized by the Tsallis divergence were first considered in the discrete setting, starting with [35] for applications in ecological inference. Focusing on the particular case of quadratic (i.e., χ²) divergence, [2] highlighted the sparsity of the solution for use in image processing, and [15] studied minimum-cost flow problems on graphs. The earlier work [10] considered optimal transport with a more general convex regularization. In the continuous setting, divergence-regularized optimal transport was first explored in the computational literature. Several works including [13, 18, 26, 29, 40] approach the dual problem by optimization techniques. For instance, [29] computes regularized Wasserstein barycenters using neural networks. From a computational point of view, Tsallis divergences are attractive to mitigate a well-known issue of EOT for small values of the regularization parameter ε, namely that optimization methods struggle with the exponentially small values of the density (e.g., [39]). On the theoretical side, [31] established duality results for the case of quadratic divergence. Also for quadratic divergence, [30] showed the convergence ROT_ε → OT as the regularization parameter ε tends to zero. More recently, [14] derives
https://arxiv.org/abs/2505.04721v1
A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ

a rate for this convergence, for general $f$-divergences. For the special case of quadratic divergence, [17] shows that this rate corresponds to the exact leading order and identifies the multiplicative constant, whereas [24] focuses on the discrete case where convergence is stationary. The recent works [25, 44] study the size of the optimal support in the regime $\varepsilon\to 0$, thus quantifying the sparsity of the solution (qualitatively proved in [36] but empirically known long before, as mentioned above).

The only previous work on sample complexity for non-entropic divergences is [1]. The authors bound the expected absolute difference between the population and empirical optimal costs. The bound depends on the smoothness and the dimension of the problem. Recall that $\psi$ denotes the conjugate of the convex function $\varphi$ defining the divergence in (4). Let $k\in\mathbb{N}$ be such that $\psi\in C^k$ as well as $c\in C^k$. Moreover, let $d$ be the dimension of the marginals. The main result [1, Theorem 5.3] states that for $d>2k$,
$$\mathbb{E}\big[\,|\mathrm{ROT}_\varepsilon(P,Q)-\mathrm{ROT}_\varepsilon(P_n,Q_n)|\,\big] \lesssim n^{-k/d}. \tag{6}$$
For most applications, the bottleneck in this bound is the smoothness of $\psi$, rather than $c$. Assuming merely $\psi\in C^2$ as in the present work, the rate (6) is $n^{-2/d}$ for $d\ge 5$, suggesting that the curse of dimensionality is exactly the same as for the unregularized optimal transport problem mentioned at the beginning of this introduction. The bound (6) is obtained by adapting the classical approach going back to [18] for EOT: estimating the regularity of the empirical potentials to define a function space with controlled covering number and then applying empirical process theory. Because the potentials are only as smooth as $\psi'$, this yields a bound (6) that in general deteriorates exponentially with the dimension (whereas when $\psi\in C^\infty$, as for EOT, the dimension can be eliminated).
The present work indicates that the bound (6) is loose, even to the extent that its implicit message regarding the curse of dimensionality is misleading. Indeed, we show that the actual rate is $n^{-1/2}$, independently of the dimension $d$, and in particular that the influence of the dimension is more similar to EOT than to unregularized optimal transport. One may consider this result highly surprising given the lack of smoothness (and failure of strong concavity, cf. Section 1.1.2). As we derive central limit theorems with the usual $\sqrt{n}$ scaling for all key quantities, this rate is sharp. (Except possibly in degenerate examples where the Gaussian limit has vanishing variance; then, the actual rate is likely faster.)

1.1.2. Entropic divergence. The literature on EOT is extremely large and fairly well known. We only review the literature on sample complexity, focusing on continuous population marginals (and with a view towards explaining why the existing approaches fail in our setting). As mentioned above, parametric rates for the optimal EOT cost (2) were first obtained by [18]. Assuming a smooth (i.e., $C^\infty$) transport cost $c$ and compactly supported marginals, the authors show that the potentials are smooth with $C^k$ norm bounded independently of the marginal measures, for any $k\in\mathbb{N}$. Thus the empirical potentials all belong to a function
class with controlled covering number. The result then follows by applying empirical process theory to the dual problem of EOT. A similar bound with improved constants and more general (sub-Gaussian) marginals was obtained in [34] using a refinement of the same approach. Moreover, [34] provided the first central limit-type theorem on EOT, namely for the optimal cost. In this result, the centering is by the mean of the empirical cost, instead of the population cost as in a usual central limit theorem. The proof follows an approach based on the Efron–Stein inequality that is adapted from (unregularized) optimal transport; cf. [7]. A central limit theorem for the EOT cost in the classical sense (centered at the population value) was first derived in [9]. The proof combines the result of [34] with an analysis of the bias–variance decomposition showing that the bias tends to zero faster than $n^{-1/2}$. This is based on the fact that the empirical EOT potentials belong to a Donsker class. Namely, the $C^k$-norm of the potentials is uniformly bounded for any $k$, and choosing $k>d/2$ implies the Donsker property [42, p. 157].

SPARSE REGULARIZED OPTIMAL TRANSPORT

The authors also show that the potentials converge at the parametric rate (in the norm of $C^k$, for any $k\in\mathbb{N}$); this is based on the strong convexity of the dual problem (see [9, Lemma 4.6]). In concurrent work, [20] provided a very similar central limit theorem for the EOT cost, using a different proof technique based on the functional delta method for supremum-type functionals (see [6]). This argument again rests on the potentials belonging to a Donsker class. While the aforementioned central limit theorems were all about the optimal cost, central limit theorems for the potentials and couplings were first derived in [23] and [21]. Both works use the same methodology of Z-estimation and the delta method (see also Section 1.2 below).
Specifically, [21, Theorem 3] shows that the potentials are Hadamard differentiable as functions of the marginal measures, tangentially to perturbations in $C^k(\Omega)'$ for any $k\in\mathbb{N}$. As the unit ball in $C^k(\Omega)$ is a Donsker class for $k>d/2$, the delta method then yields the central limit theorem for the potentials, and the theorems for the couplings are derived from there. The aforementioned works all deal with a transport cost function $c\in C^\infty$, and exploit the fact that the EOT potentials are as smooth as that cost. A substantially different approach was taken in [38]. Assuming only that $c$ is bounded, the approach is based on the fact that the dual problem of EOT is strongly concave, and hence satisfies a PL inequality, uniformly over the empirical marginals (see [38, Lemma 16]). The authors then obtain the parametric rate for the convergence of the empirical EOT cost; more precisely, the mean squared error and the bias are both bounded by $1/n$, with constants that are fully dimension-independent. In the setting of non-smooth cost, central limit theorems for the optimal costs and the optimal couplings were established in [22]. The authors linearize the potentials in the empirical $L^2$ norms and, unlike
[21, 23], do not use empirical process theory but instead approximate the optimal couplings by infinite-order U-statistics. This approximation holds by a uniform contraction argument over the linearized Schrödinger system, which is deeply related to the PL inequality in [38]. All of the above approaches fail in our setting because the empirical potentials are not smooth (and do not belong to a Donsker class), while the dual problem is not strongly concave and does not satisfy a uniform PL inequality.

While our main interest is in divergences other than the entropic one, let us mention that we add to the literature even in the special case of EOT. Namely, we provide a central limit theorem for the potentials under the assumption that the cost $c$ is $C^1$, where such a result previously existed only for $c\in C^\infty$ (see [21, 23]). Our proof technique is substantially different, as we explain next.

1.2. Methodology. We follow the approach of Z-estimation in deriving central limit theorems; see [42, Chapter 3.3]. While the usual argument via Donsker classes is doomed to fail due to the missing regularity of the empirical potentials, one key methodological innovation is to overcome this issue with a novel line of argument. As our approach may be useful for Z-estimation problems in other areas, we sketch the approach in general terms. In Z-estimation, the empirical quantities $\theta_n$ of interest are described by an equation of the form $\Phi_n(\theta_n)=0$ with a random operator $\Phi_n$, while the population counterpart $\theta^*$ is described by $\Phi(\theta^*)=0$ with a deterministic $\Phi$. The goal is to show a central limit theorem for the convergence $\theta_n\to\theta^*$. More specifically, $\theta_n,\theta^*$ are elements of the parameter set $\Theta$, which is contained in a Banach space $(\mathcal{B},\|\cdot\|)$. We assume for simplicity that $\Phi_n,\Phi:\mathcal{B}\to\mathcal{B}$; in general the image may be in another Banach space. In our particular problem, $\theta_n$ are the empirical potentials $(f_n,g_n)$ and $\theta^*$ is the population counterpart $(f^*,g^*)$.
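The Z-estimation scheme can be illustrated in a toy finite-dimensional case. The sketch below (ours, not the paper's infinite-dimensional setting) takes the sample median, which solves $\Phi_n(\theta)=\frac{1}{n}\sum_i(\mathbf{1}\{X_i\le\theta\}-\tfrac12)\approx 0$ with a non-smooth $\phi$, echoing the missing smoothness discussed above; for $X\sim N(0,1)$ the classical Z-estimation theory gives $\sqrt{n}(\theta_n-\theta^*)\to N(0,\pi/2)$, which we check by Monte Carlo:

```python
import numpy as np

# Toy Z-estimation example (ours): the sample median as a Z-estimator with
# non-smooth score phi(theta, x) = 1{x <= theta} - 1/2. For X ~ N(0,1) the
# asymptotic variance is 1/(4 f(0)^2) = pi/2, where f is the normal density.
rng = np.random.default_rng(0)
n, reps = 500, 4000
thetas = np.median(rng.standard_normal((reps, n)), axis=1)  # Z-estimators
z = np.sqrt(n) * thetas                                     # sqrt(n)(theta_n - theta*), theta* = 0

assert abs(z.mean()) < 0.1                 # centered
assert abs(z.var() - np.pi / 2) < 0.15     # variance close to pi/2 ~ 1.571
```

The point of the paper's method is precisely that in the infinite-dimensional case, where $\theta_n$ is a pair of functions, the analogous limit requires compactness arguments rather than Donsker classes.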
The nonlinear operators $\Phi_n,\Phi$ represent the first-order conditions of optimality in the dual problem. As in many applications of Z-estimation, these operators have the integral form
$$\Phi_n(\theta)=\int \phi(\theta)\,dP_n, \qquad \Phi(\theta)=\int \phi(\theta)\,dP, \tag{7}$$
where $\phi$ is deterministic and $P_n$ are empirical measures derived from i.i.d. samples of $P$. The basic theorem of Z-estimation (see [42, Theorem 3.3.1]) separates the conditions for the desired asymptotic normality of $\sqrt{n}(\theta_n-\theta^*)$ into analytical conditions on the population quantities $\theta^*,\Phi$ and a stochastic condition stating that the remainder in a Taylor expansion is negligible. Roughly, these conditions are:

(i) $[\Phi_n-\Phi](\theta^*)$ satisfies a central limit theorem in $\mathcal{B}$,
(ii) $\Phi$ is Fréchet differentiable at $\theta^*$ with invertible derivative $L:=D\Phi(\theta^*)\in\mathcal{L}(\mathcal{B},\mathcal{B})$,
(iii) $\|\theta_n-\theta^*\| \xrightarrow{P} 0$,
(iv) the following expansion holds:
$$\Delta_n := [\Phi_n-\Phi](\theta_n)-[\Phi_n-\Phi](\theta^*) = o_P\big(n^{-1/2}+\|\theta_n-\theta^*\|\big). \tag{8}$$

While obtaining invertibility has its own challenges in our setting, we defer that discussion to Section 3.1. Like in most applications, the main issue is to establish (8). The standard approach (see [42, Lemma 3.3.5]) is to use a sufficient condition whose main part is that the random functions $\phi(\theta_n)-\phi(\theta^*)$ form a $P$-Donsker class (for $\|\theta_n-\theta^*\|$ sufficiently small). In the setting of EOT, the smoothness of the empirical
potentials indeed yields the Donsker property, and together with the aforementioned Hadamard differentiability, this allowed [21, 23] to infer the desired central limit theorem for the potentials. In the present case, this approach is a nonstarter because the regularity of the potentials is too poor to give a Donsker class.

We take a different route. While our aim is, as before, a central limit theorem in $\mathcal{B}$, we also use an auxiliary Banach space $\mathcal{B}_s\subset\mathcal{B}$ equipped with a stronger norm $\|\cdot\|_s$ such that the unit ball of $\mathcal{B}_s$ is compactly embedded in $\mathcal{B}$. In our case, $\mathcal{B}$ is (up to some details) the space of continuous functions on a compact set endowed with the uniform norm, whereas $\mathcal{B}_s$ is (up to an isomorphism) the space of Hölder continuous functions with a fixed Hölder exponent $\beta\in(0,1)$. The stronger Hölder norm guarantees the compact embedding. While we keep the conditions (ii), (iii) above, we strengthen (i) to

(i') $[\Phi_n-\Phi](\theta^*)$ satisfies a central limit theorem in $\mathcal{B}_s$,

as the stronger topology will be used to help establish (8). We verify (i') by applying a general central limit theorem for Hölder spaces detailed in Appendix B. The key intermediate step towards (8) is that

(iv') there are a compact $\mathcal{K}\subset\mathcal{B}$, random elements $U_n\in\mathcal{K}$, and random elements $V_n\in\mathcal{B}$ with $\|V_n\|\xrightarrow{P}0$, such that
$$\Delta_n := [\Phi_n-\Phi](\theta_n)-[\Phi_n-\Phi](\theta^*) = (U_n+V_n)\,\|\theta_n-\theta^*\|.$$

This is verified using the specifics of the problem at hand (cf. Lemma 5.4), without requiring smoothness. Next, we outline how these conditions lead to (8). Note that the differentiability condition (ii) implies
$$\theta^*-\theta_n = L^{-1}\big(\Phi(\theta^*)-\Phi(\theta_n)\big) + R_n$$
with $\|R_n\| = o_P(\|\theta^*-\theta_n\|)$. Using $\Phi(\theta^*) = 0 = \Phi_n(\theta_n)$, this can be written as
$$\theta^*-\theta_n = L^{-1}\big(\Phi_n(\theta_n)-\Phi(\theta_n)\big) + R_n = L^{-1}\big(\Delta_n + [\Phi_n-\Phi](\theta^*)\big) + R_n.$$
By (i'), the sequence $\sqrt{n}\,[\Phi_n-\Phi](\theta^*)$ is tight in $\mathcal{B}_s$. Using also (iv') and the continuity of $L^{-1}$, we deduce a key decomposition (cf.
Proposition 5.3)
$$\theta^*-\theta_n = \|\theta_n-\theta^*\|\,(U_n'+V_n') + n^{-1/2}\,W_n \tag{9}$$
with random functions $U_n'$ taking values in a compact $\mathcal{K}'\subset\mathcal{B}$, random functions $\|V_n'\|\xrightarrow{P}0$, and $(LW_n)$ tight in $\mathcal{B}_s$. From here, we make (mild) use of the particular properties of the functionals (7) at hand. Namely, $\phi$ is differentiable, hence the fundamental theorem of calculus allows us to write
$$\Delta_n = \int \big(\phi(\theta^*)-\phi(\theta_n)\big)\,d(P-P_n) = \int (\theta^*-\theta_n)\,\phi_n\,d(P-P_n)$$
for $\phi_n := \int_0^1 \phi'\big(\lambda\theta^*+(1-\lambda)\theta_n\big)\,d\lambda$. Inserting (9) yields
$$\Delta_n = \|\theta^*-\theta_n\| \int (U_n'+V_n')\,\phi_n\,d(P-P_n) + n^{-1/2}\int W_n\,\phi_n\,d(P-P_n).$$
The stated properties of $U_n', V_n'$ enable us to verify that the first term is $o_P(\|\theta_n-\theta^*\|)$. Similarly, the tightness of $(LW_n)$ in $\mathcal{B}_s$ and the fact that $\|\cdot\|_s$-bounded sets are relatively compact in $\mathcal{B}$ allow us to verify that the second term is $o_P(n^{-1/2})$. This completes the derivation of (8). The central limit theorem for the potentials then follows by standard arguments. The proofs of the central limit theorems for the optimal cost and the optimal couplings are based on the result for the potentials, (8), and the central limit theorem for two-sample U-statistics.

Finally, we comment on the uniqueness of the population potentials, shown in Theorem 3.2 when the cost $c$ is continuous and one marginal has connected support. The usual technique to obtain uniqueness—familiar in unregularized optimal transport and adapted in [36]
for the regularized problem with quadratic divergence—is to argue that the gradient of the potential equals the partial derivative of the cost $c$ on the support of the optimal coupling. This argument is a nonstarter if the cost $c$ is not differentiable. In the present work, uniqueness is instead deduced from the invertibility of $L$. Clearly, uniqueness of the population potentials is a precondition for any result on the convergence to them. On the other hand, we mention that the empirical potentials are in general not unique. They are unique in the special case of EOT, and more generally when the optimal coupling has sufficiently large support, but not in general. The family of all potentials can be parametrized by the components (in the sense of connectedness of graphs) of the support, as shown in [36].

Organization. The remainder of this paper is organized as follows. Section 2 details the problem formulation and notation, and summarizes basic facts about the regularized optimal transport problem. Section 3 first states the main results, namely the uniqueness of the population potentials and the three central limit theorems, then continues with an overview of the proof strategy. The proofs start in Section 4 by analyzing the linearization of the first-order condition of the potentials; here the key result is the invertibility of the derivative $L$ (Proposition 4.4). The proofs of the central limit theorems are presented in Section 5. We first show in Section 5.1 that the empirical potentials are a consistent estimator. Section 5.2 contains the technical core of the proof, namely the aforementioned decomposition into three terms, each with a different compactness property. Section 5.3 concludes with the proofs of the central limit theorems. In Appendix A we provide a numerical analysis of a particular example where the population problem can be solved in closed form and hence the convergence rate can be accurately observed.
Appendix B provides an abstract central limit theorem for the Banach space of Hölder functions that is used in the proof of our main results. Omitted proofs are collected in Appendix C.

2. Problem statement and preliminaries.

2.1. Notation. Let $\Omega$ be a compact subset of $\mathbb{R}^d$. For $f:\Omega\to\mathbb{R}$, we write $\|f\|_\infty = \sup_{x\in\Omega}|f(x)|$. The space of continuous functions $\Omega\to\mathbb{R}$ is denoted $C(\Omega)$ and endowed with $\|\cdot\|_\infty$. Moreover, $C^k(\Omega)$ denotes the $k$-times continuously differentiable functions. For $\alpha\in(0,1]$, the $\alpha$-Hölder semi-norm of $f:\Omega\to\mathbb{R}$ is
$$[f]_\alpha = \sup_{x\neq y} \frac{|f(x)-f(y)|}{\|x-y\|^\alpha}.$$
We set $C^{0,\alpha}(\Omega) = \{f\in C(\Omega) : [f]_\alpha<\infty\}$, a Banach space under the norm $\|f\|_{0,\alpha} := \|f\|_\infty + [f]_\alpha$. If $\mathcal{B}_1$ and $\mathcal{B}_2$ are Banach spaces, the space of bounded linear operators $F:\mathcal{B}_1\to\mathcal{B}_2$ is denoted $\mathcal{L}(\mathcal{B}_1,\mathcal{B}_2)$ and endowed with the norm topology. The open unit ball in $\mathbb{R}^d$ is denoted by $B$ and its closure by $\overline{B}$. Thus $x+r\overline{B}$ is the ball centered at $x$ with radius $r$. The gradient of $f:\mathbb{R}^d\to\mathbb{R}$ is denoted $\nabla f$, and the gradients of a function $c:\mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ with respect to the first and second coordinates are denoted $\nabla_x c$ and $\nabla_y c$, respectively. The derivative of a univariate function $\psi:\mathbb{R}\to\mathbb{R}$ is denoted $\psi'$. Given functions $x\mapsto f(x)$ and $y\mapsto g(y)$, we denote by $f\oplus g$ the function $(x,y)\mapsto f(x)+g(y)$. We fix a probability space $(\Omega,\mathcal{A},\mathbb{P})$ where all random
variables are defined. For a measurable function $f:\mathbb{R}^d\to\mathbb{R}$ and a random vector $X:\Omega\to\mathbb{R}^d$ with distribution $P$, the expectation of $f(X)$ is denoted $\mathbb{E}[f(X)] = \int f(x)\,dP(x) = \int f\,dP$. Almost-sure convergence is denoted by $\xrightarrow{a.s.}$, convergence in probability by $\xrightarrow{P}$, and convergence in distribution (or weak convergence) by $\xrightarrow{w}$. The latter refers to convergence induced by continuous and bounded test functions. For scalar random variables $X_n,Y_n$, we write $X_n=O_P(Y_n)$ if $X_n/Y_n$ is stochastically bounded and $X_n=o_P(Y_n)$ if $X_n/Y_n\xrightarrow{P}0$. More terminology, related to probability in Banach spaces, can be found in Appendix B.

2.2. Divergence. Given measures $\mu,\nu$ on the same space, the divergence $D_\varphi(\mu|\nu)$ is determined by the function $\varphi:[0,\infty)\to\mathbb{R}$ via
$$D_\varphi(\mu|\nu) = \int \varphi\Big(\frac{d\mu}{d\nu}\Big)\,d\nu,$$
with the convention that $D_\varphi(\mu|\nu)=\infty$ if $\mu\not\ll\nu$. The following assumption on $\varphi$ is in force throughout the paper.

ASSUMPTION 2.1 (Divergence). The function $\varphi:[0,\infty)\to\mathbb{R}$ is strictly convex with $\varphi(1)=0$, $\lim_{x\to\infty}\varphi(x)/x=+\infty$, and such that the conjugate
$$y\mapsto \psi(y) := \varphi^*(y) := \sup_{x\ge 0}\{xy-\varphi(x)\}$$
is in $C^2(\mathbb{R})$. Moreover, there exists $C>0$ such that $\psi'(x)\ge x$ for $x\ge C$, and there exist $t_0>0$ and $\delta>0$ such that $\psi'(t_0)=1$ and $\psi$ is strictly convex on $[t_0-\delta,\infty)$.

The detailed assumptions about the shape of $\psi$ are useful to derive basic estimates for the potentials; cf. Proposition 2.3. While they are a bit clumsy, they are verified in all examples of our interest. The more restrictive assumption is that $\psi$ is $C^2$.

EXAMPLE 2.2. For the Kullback–Leibler divergence of EOT, we take $\varphi(x)=x\log(x)$, which yields $\psi(y)=e^{y-1}$. Note that $\psi\in C^\infty$ and $\psi'(y)>0$ for all $y$, corresponding to the fact that the optimal coupling always has full support. Next, consider the polynomial (Tsallis) divergence
$$\varphi(x) = \frac{x^\alpha-1}{\alpha}$$
for $\alpha\in(1,\infty)$. Then $\psi'(y) = y_+^{\beta-1}$, where $\frac{1}{\alpha}+\frac{1}{\beta}=1$ and $y_+=\max\{y,0\}$. This truncation at zero corresponds to the fact that the optimal coupling does not have full support in general. Note that $\psi$ is strictly convex on $\mathbb{R}_+$ and in $C^1$. Moreover, $\psi$ is $C^2$ for $\beta>2$; i.e., for any $\alpha\in(1,2)$.
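The closed-form conjugates of Example 2.2 can be verified numerically. A minimal sketch (ours, not from the paper): compute $\psi(y)=\sup_{x\ge 0}\{xy-\varphi(x)\}$ on a grid and compare with $\psi(y)=e^{y-1}$ for the KL case and $\psi(y)=y_+^\beta/\beta + 1/\alpha$ for the Tsallis case (the latter closed form follows by plugging the maximizer $x^*=y_+^{1/(\alpha-1)}$ into the definition):

```python
import numpy as np

# Numerical check of the conjugates in Example 2.2:
# psi(y) = sup_{x >= 0} { x*y - phi(x) }, approximated by a sup over a grid.
def conjugate_on_grid(phi, y, x_max=50.0, n=400_000):
    x = np.linspace(1e-12, x_max, n)
    return np.max(np.outer(y, x) - phi(x), axis=1)  # rowwise sup over the x-grid

y = np.linspace(-1.0, 2.0, 7)

# KL: phi(x) = x log x  ->  psi(y) = exp(y - 1)
psi_kl = conjugate_on_grid(lambda x: x * np.log(x), y)
assert np.allclose(psi_kl, np.exp(y - 1.0), atol=1e-3)

# Tsallis, alpha = 1.5 (so beta = 3): phi(x) = (x^alpha - 1)/alpha
alpha = 1.5
beta = alpha / (alpha - 1.0)
psi_ts = conjugate_on_grid(lambda x: (x**alpha - 1.0) / alpha, y)
closed = np.maximum(y, 0.0)**beta / beta + 1.0 / alpha
assert np.allclose(psi_ts, closed, atol=1e-3)
```

In particular, for $y\le 0$ the Tsallis supremum is attained at $x=0$, so $\psi$ is constant there and $\psi'$ vanishes, which is the truncation responsible for sparse couplings.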
We note that the quadratic case $\alpha=2$ is not covered. For this boundary case, we expect to provide similar results as in the present paper in separate work, assuming that $c$ is the quadratic cost and the marginals have additional regularity (see also Appendix A for numerical hints). That setting implies additional convexity properties for the potentials. In the present work, we focus on giving a general result for a broad class of divergences, costs and marginals, thus showing that the conclusions do not hinge on a particular algebraic structure.

2.3. Regularized optimal transport. For brevity, we treat the regularized optimal transport problem for regularization parameter $\varepsilon=1$. The general case is recovered by a simple scaling argument detailed in Remark 2.5 below. Thus the primal problem is
$$\mathrm{ROT}(P,Q) := \inf_{\pi\in\Pi(P,Q)} \int c\,d\pi + \int \varphi\Big(\frac{d\pi}{d(P\otimes Q)}\Big)\,d(P\otimes Q). \tag{10}$$
The associated dual problem is
$$\mathrm{DUAL}(P,Q) := \sup_{(f,g)\in L^\infty(P)\times L^\infty(Q)} \int f\oplus g - \psi(f\oplus g - c)\, d(P\otimes Q) \tag{11}$$
and its first-order condition for optimality is
$$\begin{cases} \displaystyle\int \psi'\big(f^*(\cdot)+g^*(y)-c(\cdot,y)\big)\,dQ(y) = 1 & P\text{-a.s.},\\[1ex] \displaystyle\int \psi'\big(f^*(x)+g^*(\cdot)-c(x,\cdot)\big)\,dP(x) = 1 & Q\text{-a.s.} \end{cases} \tag{12}$$
The following proposition summarizes some basic facts to be used throughout the paper. While the proofs are essentially known and largely contained in [1], we give a self-contained and simpler proof in Appendix C.1, for
the convenience of the reader and also to rectify minor inaccuracies in [1].

PROPOSITION 2.3. Let $P,Q$ be probability measures on $\mathbb{R}^d$ with compact supports $\Omega,\Omega'$. Moreover, let $c\in C(\Omega\times\Omega')$.

(i) The strong duality $\mathrm{ROT}(P,Q) = \mathrm{DUAL}(P,Q)$ holds.
(ii) The primal problem (10) admits a unique optimizer $\pi\in\Pi(P,Q)$.
(iii) The dual problem (11) admits a (non-unique) optimizer $(f^*,g^*)\in L^\infty(P)\times L^\infty(Q)$.
(iv) A pair $(f^*,g^*)\in L^\infty(P)\times L^\infty(Q)$ is an optimizer of the dual problem (11) if and only if it satisfies the first-order condition (12).
(v) Any dual optimizer $(f^*,g^*)$ yields a primal optimizer via $d\pi := \psi'(f^*\oplus g^* - c)\,d(P\otimes Q)$.
(vi) Let $(f^*,g^*)\in L^\infty(P)\times L^\infty(Q)$ satisfy (12). Then one can choose a version of $(f^*,g^*)$ satisfying (12) everywhere; that is,
$$\begin{cases} \displaystyle\int \psi'\big(f^*(x)+g^*(y)-c(x,y)\big)\,dQ(y) = 1 & \text{for all } x\in\Omega,\\[1ex] \displaystyle\int \psi'\big(f^*(x)+g^*(y)-c(x,y)\big)\,dP(x) = 1 & \text{for all } y\in\Omega'. \end{cases} \tag{13}$$
For such a version $(f^*,g^*)$, the uniform bound $\|f^*\oplus g^*\|_\infty \le 5\|c\|_\infty + (\psi')^{-1}(1) < \infty$ holds. Moreover, if $\rho$ is a modulus of uniform continuity for $c$ on $\Omega\times\Omega'$, then $f^*$ and $g^*$ are also uniformly continuous with modulus $\rho$.
(vii) Let $(f^*,g^*)$ solve (13) and abbreviate $\xi^*(x,y) := f^*(x)+g^*(y)-c(x,y)$. Then there exists $\delta>0$ such that
$$\int \psi''(\xi^*(\cdot,y))\,dQ(y) \ge \delta \ \text{ on } \Omega, \qquad \int \psi''(\xi^*(x,\cdot))\,dP(x) \ge \delta \ \text{ on } \Omega'.$$

Functions $(f^*,g^*)$ that are optimal for the dual problem (11), or equivalently solve the first-order condition (12), will be called potentials of the regularized optimal transport problem.

REMARK 2.4 (Limited Smoothness). Proposition 2.3(vi) may suggest that the potentials $f^*$ and $g^*$ are "as smooth as the cost $c$." While this holds for the KL divergence of EOT, $\psi$ is in general only $C^2$ for the polynomial divergences included in our setting, and this implies that $f^*$ and $g^*$ are only $C^1$ in general. Indeed, the formula
$$\nabla f^*(\cdot) = \frac{\int \psi''\big(f^*(\cdot)+g^*(y)-c(\cdot,y)\big)\,\nabla_x c(\cdot,y)\,dQ(y)}{\int \psi''\big(f^*(\cdot)+g^*(y)-c(\cdot,y)\big)\,dQ(y)}$$
highlights how the smoothness of both $\psi$ and $c$ affects the regularity of $f^*$. (See also [36] for a concrete counterexample.)
As mentioned in the introduction, the limited smoothness of the empirical potentials is a major obstacle for deriving our main results.

REMARK 2.5 (Regularization parameter). The regularized optimal transport problem is often considered with a regularization parameter $\varepsilon>0$, which was taken to be $\varepsilon=1$ above. In general,
$$\mathrm{ROT}_\varepsilon(P,Q) := \inf_{\pi\in\Pi(P,Q)} \int c\,d\pi + \varepsilon\int \varphi\Big(\frac{d\pi}{d(P\otimes Q)}\Big)\,d(P\otimes Q),$$
the associated dual problem is
$$\mathrm{DUAL}_\varepsilon(P,Q) := \sup_{(f,g)\in L^\infty(P)\times L^\infty(Q)} \int f\oplus g - \varepsilon\cdot\psi\Big(\frac{f\oplus g - c}{\varepsilon}\Big)\, d(P\otimes Q) \tag{14}$$
and the first-order condition for optimality reads
$$\begin{cases} \displaystyle\int \psi'\Big(\frac{f(\cdot)+g(y)-c(\cdot,y)}{\varepsilon}\Big)\,dQ(y) = 1,\\[1ex] \displaystyle\int \psi'\Big(\frac{f(x)+g(\cdot)-c(x,\cdot)}{\varepsilon}\Big)\,dP(x) = 1. \end{cases}$$
The results for the general problem can be deduced from the special case $\varepsilon=1$ by a simple scaling. Namely, defining $\bar{c}=c/\varepsilon$, we have $\mathrm{ROT}_\varepsilon(P,Q) = \varepsilon\,\mathrm{ROT}(P,Q,\bar{c})$, where $\mathrm{ROT}(P,Q,\bar{c})$ is the problem with $\varepsilon=1$ and cost $\bar{c}$. Moreover, the optimal couplings of these two problems coincide. Similarly for the dual: $(f_\varepsilon,g_\varepsilon)$ is optimal for (14) if and only if $(f_\varepsilon/\varepsilon, g_\varepsilon/\varepsilon)$ is optimal for the problem $\mathrm{DUAL}(P,Q,\bar{c})$ with $\varepsilon=1$ and cost $\bar{c}$. For convenience, we detail in Remark 3.7 below how the main results translate to a general parameter $\varepsilon>0$.

3. Main results. Our main results are central limit theorems for the potentials, the optimal costs, and the optimal couplings. The following condition on the population marginals $P$ and $Q$ is imposed throughout this section.

ASSUMPTION 3.1 (Marginals). The probability measures $P$ and $Q$ on $\mathbb{R}^d$ have compact supports $\Omega$ and $\Omega'$, respectively, and $\Omega$ is connected.

A central limit
theorem for the potentials can only hold if the limiting (population) potentials $(f^*,g^*)$ are unique in a suitable sense. Indeed, we obtain a uniqueness result under quite general conditions as a by-product of our approach to the central limit theorem. Note that potentials always admit a degree of freedom: if $(f^*,g^*)$ are potentials, then so are $(f^*+a, g^*-a)$ for any $a\in\mathbb{R}$. Additionally, there is some freedom in changing potentials on nullsets, which can be mended by using continuous versions as in Proposition 2.3. To eliminate those degrees of freedom, we define the Banach space
$$\mathcal{C}_\oplus = \big(C(\Omega)\times C(\Omega')\big)/\!\sim_\oplus,$$
where $(f,g)\sim_\oplus(u,v)$ if $f\oplus g = u\oplus v$, endowed with the quotient norm
$$\|(f,g)\|_\oplus = \inf_{a\in\mathbb{R}} \big\{\|f-a\|_\infty + \|g+a\|_\infty\big\}.$$
The equivalence class of $(f,g)$ will be denoted by $[(f,g)]_\oplus$ when we want to emphasize the identification; however, we often just write $(f,g)$ for the sake of brevity. We can now state the uniqueness of the population potentials. We emphasize that this result holds for costs $c$ that are merely continuous, in contrast to related results in the literature that rely on differentiating $c$ (cf. Section 1.2).

THEOREM 3.2. Let $c\in C(\Omega\times\Omega')$ and let $(P,Q)$ satisfy Assumption 3.1. Then the associated potentials $(f^*,g^*)$ are unique in $\mathcal{C}_\oplus$. That is, there exist $(f^*,g^*)\in C(\Omega)\times C(\Omega')$ solving (13), and any pair solving (13) is of the form $(f^*+a, g^*-a)$ for some $a\in\mathbb{R}$.

Next, we present the central limit theorems. Given (population) marginals $P,Q$ satisfying Assumption 3.1, we consider the two-sample case. Let $X_1,\dots,X_n$ be i.i.d. samples from $P$ and let $Y_1,\dots,Y_n$ be i.i.d. samples from $Q$; more precisely, the two sequences are independent and defined on the common probability space $(\Omega,\mathcal{A},\mathbb{P})$. The random samples give rise to the empirical measures $P_n = \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$ and $Q_n = \frac{1}{n}\sum_{i=1}^n \delta_{Y_i}$, which in turn lead to empirical potentials $(f_n,g_n)$ by Proposition 2.3. More precisely, Proposition 2.3 yields $(f_n,g_n)$ satisfying (12) on the supports $\mathrm{supp}(P_n)$ and $\mathrm{supp}(Q_n)$, respectively, but we can extend $(f_n,g_n)$ to continuous functions on $\Omega$ and $\Omega'$.
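In the EOT special case $\psi'(t)=e^{t-1}$ (cf. Example 2.2), the first-order system (12) on discrete empirical marginals can be solved by Sinkhorn-style alternating updates, since each equation can be solved for $f$ (resp. $g$) in closed form. A minimal sketch with synthetic data (our illustration, not the paper's implementation):

```python
import numpy as np

# Sinkhorn-style solver for the system (12) in the EOT case psi'(t) = e^{t-1},
# epsilon = 1, on discrete marginals. From the first equation,
#   sum_j q_j exp(f_i + g_j - C_ij - 1) = 1  <=>  f_i = -log sum_j q_j exp(g_j - C_ij - 1),
# and symmetrically for g.
rng = np.random.default_rng(0)
n = 60
X, Y = rng.uniform(size=(n, 1)), rng.uniform(size=(n, 1))  # samples in [0, 1]
p = q = np.full(n, 1.0 / n)                                # empirical weights
C = np.abs(X - Y.T)                                        # cost c(x, y) = |x - y|

f, g = np.zeros(n), np.zeros(n)
for _ in range(500):
    f = -np.log(np.exp(g - C - 1.0) @ q)          # first equation of (12)
    g = -np.log(p @ np.exp(f[:, None] - C - 1.0))  # second equation of (12)

# d(pi) = psi'(f + g - c) d(P x Q), cf. Proposition 2.3(v)
pi = np.exp(f[:, None] + g[None, :] - C - 1.0) * np.outer(p, q)
assert np.allclose(pi.sum(axis=1), p, atol=1e-8)  # marginal constraints hold
assert np.allclose(pi.sum(axis=0), q, atol=1e-8)
```

For the Tsallis case $\psi'(y)=y_+^{\beta-1}$ the updates have no closed form and each coordinate requires a one-dimensional root-find, and the resulting coupling may be sparse, as discussed in Section 1.1.1.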
While $(f_n,g_n)$ are not unique, we choose and fix them. Note that all these objects are random; that is, they depend on the parameter $\omega\in\Omega$ determining the realization of the random samples. We choose $(f_n,g_n)$ such that the dependence on $\omega$ is measurable. To simplify the notation we also introduce
$$\xi^*(x,y) := f^*(x)+g^*(y)-c(x,y) \quad\text{and}\quad \xi_n(x,y) := f_n(x)+g_n(y)-c(x,y). \tag{15}$$

ASSUMPTION 3.3 (Cost). The transport cost $c$ is in $C^1(\Omega\times\Omega')$.

This assumption is in force for the three central limit theorems below, in addition to Assumption 2.1 on the divergence and Assumption 3.1 on the population marginals $(P,Q)$. We can now state our main result.

THEOREM 3.4 (CLT for potentials). Let $G_P=(G_P(y))_{y\in\Omega'}$ be a centered Gaussian process in $C(\Omega')$ with covariance function
$$\mathbb{E}[G_P(y)G_P(y')] = \mathbb{E}\Big[\big(\psi'(\xi^*(X,y)) - \mathbb{E}\big[\psi'(\xi^*(X,y))\big]\big)\big(\psi'(\xi^*(X,y')) - \mathbb{E}\big[\psi'(\xi^*(X,y'))\big]\big)\Big]$$
for $(y,y')\in\Omega'\times\Omega'$, where $X\sim P$. Let $G_Q=(G_Q(x))_{x\in\Omega}$ be a centered Gaussian process in $C(\Omega)$ with analogous covariance, independent of $G_P$. Then the weak limit
$$\sqrt{n}\begin{pmatrix} f_n-f^* \\ g_n-g^* \end{pmatrix} \xrightarrow{w} -L^{-1}\left[\begin{pmatrix} \dfrac{G_Q}{\int \psi''(\xi^*(\cdot,y))\,dQ(y)} \\[2ex] \dfrac{G_P}{\int \psi''(\xi^*(x,\cdot))\,dP(x)} \end{pmatrix}\right]_\oplus$$
holds in $\mathcal{C}_\oplus$, where $L^{-1}\in\mathcal{L}(\mathcal{C}_\oplus,\mathcal{C}_\oplus)$ is the inverse of the (bounded, bijective) linear operator
$$L:\mathcal{C}_\oplus\to\mathcal{C}_\oplus, \qquad (f,g)\mapsto \begin{pmatrix} f(\cdot) + \dfrac{\int \psi''(\xi^*(\cdot,y))\,g(y)\,dQ(y)}{\int \psi''(\xi^*(\cdot,y))\,dQ(y)} \\[2ex] g(\cdot) + \dfrac{\int \psi''(\xi^*(x,\cdot))\,f(x)\,dP(x)}{\int \psi''(\xi^*(x,\cdot))\,dP(x)} \end{pmatrix}.$$

THEOREM 3.5 (CLT for costs). We have
$$\sqrt{n}\,\big(\mathrm{ROT}(P_n,Q_n)-\mathrm{ROT}(P,Q)\big) \xrightarrow{w} N(0,\sigma^2),$$
where the variance $\sigma^2$ is that of the random variable
$$f^*(X) + g^*(Y) - \Big(\int \psi(\xi^*(x,Y))\,dP(x) + \int \psi(\xi^*(X,y))\,dQ(y)\Big) \tag{16}$$
for $(X,Y)\sim P\otimes Q$.

Let $\pi\in\Pi(P,Q)$ denote the
optimal coupling for the population marginals and let $\pi_n\in\Pi(P_n,Q_n)$ denote the empirical counterpart. To state the central limit theorem for $\pi_n\to\pi$, recall that $[(f,g)]_\oplus$ denotes the equivalence class of $(f,g)\in C(\Omega)\times C(\Omega')$ in $\mathcal{C}_\oplus$. To streamline the notation, we also define the operator
$$[(f,g)]_\oplus \mapsto \oplus(f,g) := f\oplus g \in C(\Omega\times\Omega').$$

THEOREM 3.6 (CLT for couplings). For any bounded and measurable $\eta:\Omega\times\Omega'\to\mathbb{R}$,
$$\sqrt{n}\int \eta\, d(\pi_n-\pi) \xrightarrow{w} N(0,\sigma^2(\eta))$$
with $\sigma^2(\eta) = \mathrm{Var}(V_X+V_Y)$ where, for $Z\in\{X,Y\}$,
$$V_Z := \mathbb{E}\left[\int U(x,y,X,Y)\,\bar{\eta}(x,y)\,\psi'(\xi^*(x,y))\,d(P\otimes Q)(x,y) - \bar{\eta}(X,Y)\,\psi'(\xi^*(X,Y)) \,\middle|\, Z\right]$$
for $(X,Y)\sim P\otimes Q$ and $\bar{\eta} := \eta - \int \eta\,\psi''(\xi^*)\,d(P\otimes Q)$ and
$$U(\cdot,\cdot,X,Y) := \oplus\, L^{-1}\left[\begin{pmatrix} \dfrac{\psi'(\xi^*(\cdot,Y))}{\int \psi''(\xi^*(\cdot,y))\,dQ(y)} \\[2ex] \dfrac{\psi'(\xi^*(X,\cdot))}{\int \psi''(\xi^*(x,\cdot))\,dP(x)} \end{pmatrix}\right]_\oplus.$$

REMARK 3.7 (Regularization parameter). Let us detail how the main results, stated above for $\varepsilon=1$, read in the case of a general regularization parameter $\varepsilon>0$. The following can be deduced by the scaling argument in Remark 2.5. First, $(f^*,g^*)$ are replaced by the rescaled potentials $(f_\varepsilon,g_\varepsilon)$ as defined in Remark 2.5, and $\xi^*$ is replaced by
$$\xi_\varepsilon(x,y) = \frac{f_\varepsilon(x)+g_\varepsilon(y)-c(x,y)}{\varepsilon}.$$
Apart from those changes, the formula for $L$ remains unchanged. In the statement of Theorem 3.4, we further add a multiplicative factor $\varepsilon$ in front of $L^{-1}$, while in Theorem 3.5 we replace $\mathrm{ROT}$ by $\mathrm{ROT}_\varepsilon$ on the left-hand side and add a multiplicative factor $\varepsilon$ in front of the bracket in (16). Finally, the statement of Theorem 3.6 requires no further changes, with the understanding that $\pi,\pi_n$ are now the optimal couplings for $\mathrm{ROT}_\varepsilon$.

3.1. Proof strategy. Our arguments center on the potentials $(f^*,g^*)$ and their empirical counterparts $(f_n,g_n)$. The first-order condition of optimality (13) can be written in operator form as $\Gamma(f^*,g^*)=(1,1)$ and $\Gamma_n(f_n,g_n)=(1,1)$. In fact, for reasons discussed later, we work with closely related operators $\tilde\Gamma$ and $\tilde\Gamma_n$. Linearizing the first-order condition yields an identity of the form
$$\tilde\Gamma(f_n,g_n) - \tilde\Gamma_n(f_n,g_n) = L(f_n-f^*, g_n-g^*) + (\dots) \tag{17}$$
where $L$ is a linear operator and $(\dots)$ are terms that do not cause difficulties. The first step is to show the invertibility of $L$ (Proposition 4.4). To this end, we show that $L=\mathrm{Id}+A$ where $A$ is a compact operator. Thus, by the Fredholm alternative, $L$ is invertible if (and only if) $-1$ is not an eigenvalue of $A$. To prove the latter, suppose that $(f,g)$ is an eigenvector corresponding to the eigenvalue $-1$ of $A$. We first show that
$$\|(f,g)\|_\oplus = \|A(f,g)\|_\oplus, \tag{18}$$
which implies that $f$ is constant on the set $\cup_{y\in S_Q(x)} S_P(y)$ for all $x\in\arg\max|f|$, where $S_Q(x)$ and $S_P(y)$ are the sections of the set $S=\{(x,y)\in\Omega\times\Omega' : \psi''(\xi^*(x,y))>0\}$. In the case of EOT, $S$ is the full space and one immediately concludes that $f$ is constant, completing the proof. In the general case, $S$ can be smaller (even sparse, see Remark 4.3) and then $\cup_{y\in S_Q(x)} S_P(y)$ does not cover the whole space $\Omega$ in general. Instead, we prove that (18) implies that $f$ is constant in a neighborhood of fixed radius $\alpha$ around the set $\arg\max|f|$. We then iterate this reasoning to show that $f$ is constant on the whole space $\Omega$.

Once the invertibility of $L$ is shown, we focus on the central limit theorem. A high-level perspective on the proof was already given in Section 1.2 based on Z-estimation. Here, we give a more pedestrian sketch. The basic idea is that if we had a central limit theorem for the left-hand side of (17), then we could apply $L^{-1}$ on both sides
and deduce the desired result for $(f_n-f^*, g_n-g^*)$. To follow this strategy, one would like to replace the left-hand side $\tilde\Gamma(f_n,g_n)-\tilde\Gamma_n(f_n,g_n)$ by the more tractable expression $\tilde\Gamma(f^*,g^*)-\tilde\Gamma_n(f^*,g^*)$. Indeed, general arguments show that the latter expression satisfies a central limit theorem (Lemma 5.2). Let us proceed with that replacement, by defining the difference
$$\Delta_n := [\tilde\Gamma-\tilde\Gamma_n](f^*,g^*) - [\tilde\Gamma-\tilde\Gamma_n](f_n,g_n)$$
and hence
$$\tilde\Gamma(f^*,g^*)-\tilde\Gamma_n(f^*,g^*) = L(f_n-f^*, g_n-g^*) + \Delta_n + (\dots).$$
At this point we would like an estimate along the lines of $\|\Delta_n\|_\oplus = o(\|(f^*-f_n, g^*-g_n)\|_\oplus)$. A first attempt may be to bound $\|\Delta_n\|_\oplus$ in terms of
$$\|(f^*-f_n, g^*-g_n)\|_\oplus \sup_{\|(f,g)\|_\oplus\le 1} \|[\tilde\Gamma_n-\tilde\Gamma](f,g)\|_\infty. \tag{19}$$
However, this expression does not behave like $o(\|(f^*-f_n, g^*-g_n)\|_\oplus)$. Indeed, the empirical measures would have to converge in total variation for the supremum over continuous functions to be of order $o(1)$. Loosely speaking, this failure arises because the unit ball of $\mathcal{C}_\oplus$ is not compact. We resolve this with a finer analysis decomposing $\Delta_n$ into two sequences, one that is valued in a compact subset of $\mathcal{C}_\oplus$ and one whose uniform norm converges to zero almost surely (Lemma 5.4).

This leads to a key decomposition of $(f^*-f_n, g^*-g_n)$ into three sequences (Proposition 5.3): two as just described, and a third arising from the aforementioned central limit theorem in Lemma 5.2. A crucial detail is that we establish this central limit theorem not only in $\mathcal{C}_\oplus$ but for a stronger norm whose unit ball is compactly embedded in the uniform topology and hence satisfies the universal Glivenko–Cantelli property, avoiding the roadblock (19). On the strength of the decomposition, we derive in Corollary 5.7 that
$$\|\Delta_n\|_\oplus = o_P\big(\|(f^*-f_n, g^*-g_n)\|_\oplus + n^{-1/2}\big).$$
Using this, the three-sequence decomposition, and the central limit theorem for two-sample U-statistics, Section 5.3 completes the proofs of our main results.

4. Linearization of the Optimality Condition.
Throughout this section, the transport cost $c\in C(\Omega\times\Omega')$ is fixed and $(P,Q)$ are fixed marginals satisfying Assumption 3.1. (Differentiability of $c$ is not assumed.) Note that Proposition 2.3 applies not only to those population marginals $(P,Q)$ but also to the empirical measures $(P_n,Q_n)$ of their i.i.d. samples. We choose continuous versions of the potentials as detailed in Proposition 2.3. While at this point we do not yet know the uniqueness of the population potentials, we may choose and fix one such pair $(f^*,g^*)$. By (13), the potentials $(f^*,g^*)\in C(\Omega)\times C(\Omega')$ are a solution of the equation $\Gamma(f^*,g^*)=(1,1)$, where
$$\Gamma:\mathcal{C}_\oplus\to C(\Omega)\times C(\Omega'), \qquad (f,g)\mapsto \begin{pmatrix} \Gamma^{(1)}(f,g) \\ \Gamma^{(2)}(f,g) \end{pmatrix} = \begin{pmatrix} \int \psi'\big(f(\cdot)+g(y)-c(\cdot,y)\big)\,dQ(y) \\ \int \psi'\big(f(x)+g(\cdot)-c(x,\cdot)\big)\,dP(x) \end{pmatrix}.$$
Similarly, the empirical potentials $(f_n,g_n)$ are a solution of $\Gamma_n(f_n,g_n)=(1,1)$, where
$$\Gamma_n:\mathcal{C}_\oplus\to C(\Omega)\times C(\Omega'), \qquad (f,g)\mapsto \begin{pmatrix} \Gamma_n^{(1)}(f,g) \\ \Gamma_n^{(2)}(f,g) \end{pmatrix} = \begin{pmatrix} \int \psi'\big(f(\cdot)+g(y)-c(\cdot,y)\big)\,dQ_n(y) \\ \int \psi'\big(f(x)+g(\cdot)-c(x,\cdot)\big)\,dP_n(x) \end{pmatrix}.$$
As mentioned in the preceding section, we choose and fix empirical potentials $(f_n,g_n)$ that are continuous on $\Omega$ and $\Omega'$ and depend measurably on $\omega\in\Omega$. Recall the shorthand $\xi^*(x,y)=f^*(x)+g^*(y)-c(x,y)$ from (15). The following lemma details the first-order development of the operator
$$\tilde\Gamma:\mathcal{C}_\oplus\to C(\Omega)\times C(\Omega'), \qquad (f,g)\mapsto \tilde\Gamma(f,g) := \begin{pmatrix} \dfrac{\Gamma^{(1)}(f,g)}{\int \psi''(\xi^*(\cdot,y))\,dQ(y)} \\[2ex] \dfrac{\Gamma^{(2)}(f,g)}{\int \psi''(\xi^*(x,\cdot))\,dP(x)} \end{pmatrix}.$$
Here the denominators are bounded away from zero by Proposition 2.3. We introduce those denominators so that the derivative $L$ (see (20) below) becomes an operator $\mathcal{C}_\oplus\to\mathcal{C}_\oplus$ that is a perturbation of the identity. (A priori, these operators could depend on the chosen and fixed
https://arxiv.org/abs/2505.04721v1
potentials (f∗,g∗), which causes no harm. Once the proof of uniqueness is complete, it will be clear that there was in fact no ambiguity.)

LEMMA 4.1. The operator ˜Γ : C⊕ → C(Ω)×C(Ω′) is continuously Fréchet differentiable; we denote its derivative at the point (f,g) ∈ C⊕ by L(f,g). That is, we have

lim_{∥(u,v)∥⊕ → 0}  ∥˜Γ(f+u, g+v) − ˜Γ(f,g) − L(f,g)(u,v)∥_{C(Ω)×C(Ω′)} / ∥(u,v)∥⊕ = 0

and the function C⊕ ∋ (f,g) ↦ L(f,g) ∈ L(C⊕, C(Ω)×C(Ω′)) is continuous. The derivative at (f∗,g∗) is given by

L(f∗,g∗) : C⊕ → C(Ω)×C(Ω′),
(u,v) ↦ ( u(·) + [∫ ψ′′(f∗(·)+g∗(y)−c(·,y)) v(y) dQ(y)] / [∫ ψ′′(ξ∗(·,y)) dQ(y)] ,
          v(·) + [∫ ψ′′(f∗(x)+g∗(·)−c(x,·)) u(x) dP(x)] / [∫ ψ′′(ξ∗(x,·)) dP(x)] ).

PROOF. Using that ψ′ is C1, the proof is a direct calculation.

Composing with the quotient map, we define the operator L ∈ L(C⊕, C⊕) by

(20)  L : (u,v) ↦ [ L(f∗,g∗)(u,v) ]⊕.

The main goal of this section is to show that L is invertible. We first state an auxiliary result on the sections of the set S := {(x,y) ∈ Ω×Ω′ : ψ′′(ξ∗(x,y)) > 0}, whose interpretation is discussed in Remark 4.3 below. The proof is reported in Appendix C.2.

LEMMA 4.2. Consider the sections of S = {(x,y) ∈ Ω×Ω′ : ψ′′(ξ∗(x,y)) > 0},

SQ(x) := {y ∈ Ω′ : ψ′′(ξ∗(x,y)) > 0},   SP(y) := {x ∈ Ω : ψ′′(ξ∗(x,y)) > 0}.

There exists α > 0 such that (x+αB)∩Ω ⊂ ∪_{y∈SQ(x)} SP(y) for all x ∈ Ω.

REMARK 4.3. The set S = {(x,y) ∈ Ω×Ω′ : ψ′′(ξ∗(x,y)) > 0} is closely related to the support of the optimal coupling π. Indeed, ψ′∘ξ∗ is the density of π by Proposition 2.3. For the KL and polynomial divergences detailed in Example 2.2, {t : ψ′(t) > 0} = {t : ψ′′(t) > 0}, hence S is exactly the set where the density is positive, and its closure is the support of π. For the KL divergence of EOT, ψ′′ > 0 on all of R and thus S = Ω×Ω′, so that Lemma 4.2 is trivial. For the polynomial divergences, however, it is known that the support is often sparse (see Section 1.1), and then Lemma 4.2 is relevant.

The formula in Lemma 4.1 suggests viewing L = Id + A as a perturbation of the identity Id ∈ L(C⊕, C⊕).
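Before the functional-analytic proof, the perturbation structure L = Id + A can be probed numerically (a sketch under assumptions: synthetic grids, cost and potentials, and ψ′′ = exp as a stand-in for the strictly positive entropic case). Discretizing the averaging operators from Lemma 4.1 yields row-stochastic matrices, and the kernel of Id + A is exactly the direction of constants (a, −a) that is quotiented out in C⊕, so L is invertible on the quotient.

```python
import numpy as np

# Discretized sanity check of L = Id + A (illustrative, not the proof).
rng = np.random.default_rng(1)
m, k = 40, 50                                # grid sizes for Omega, Omega'
f = rng.normal(size=m)
g = rng.normal(size=k)
c = (np.linspace(0, 1, m)[:, None] - np.linspace(0, 1, k)[None, :]) ** 2
w = np.exp(f[:, None] + g[None, :] - c)      # stand-in for psi''(xi*) > 0
p = np.full(m, 1.0 / m)                      # uniform discrete marginals
q = np.full(k, 1.0 / k)

# A(u, v) = (A1 v, A2 u): the averaging operators of Lemma 4.1,
# discretized as row-stochastic matrices
A1 = (w * q) / (w @ q)[:, None]
A2 = (w.T * p) / (w.T @ p)[:, None]
A = np.block([[np.zeros((m, m)), A1], [A2, np.zeros((k, k))]])
L_mat = np.eye(m + k) + A

# null space of Id + A: one-dimensional, spanned by (1, ..., 1, -1, ..., -1)
sv = np.linalg.svd(L_mat, compute_uv=False)
null_dim = int(np.sum(sv < 1e-8))
_, _, Vt = np.linalg.svd(L_mat)
null_vec = Vt[-1]
direction = np.concatenate([np.ones(m), -np.ones(k)]) / np.sqrt(m + k)
overlap = abs(null_vec @ direction)
```

Since A1 and A2 are row-stochastic with strictly positive entries here, A1A2 has a simple Perron eigenvalue 1 with constant eigenvector, so the only solution of (u,v) = −A(u,v) is the constant pair (a, −a). This mirrors the Fredholm argument below; the point of the actual proof is that the same conclusion holds even when ψ′′(ξ∗) vanishes on part of Ω×Ω′.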
To prove the invertibility of L, we will show that the operator

(21)  A = (A1, A2) : C⊕ → C⊕,  (f,g) ↦ ( [∫ ψ′′(ξ∗(·,y)) g(y) dQ(y)] / [∫ ψ′′(ξ∗(·,y)) dQ(y)] , [∫ ψ′′(ξ∗(x,·)) f(x) dP(x)] / [∫ ψ′′(ξ∗(x,·)) dP(x)] )

is compact and that −1 is not an eigenvalue. Then, we conclude by the Fredholm alternative. While similar results are known for EOT [3, 23], we require a substantially different proof because our optimal coupling need not have full support.

PROPOSITION 4.4. The operator L ∈ L(C⊕,C⊕) admits an inverse L−1 ∈ L(C⊕,C⊕).

PROOF. We first show that the operator A of (21) is compact. We need to show that if {(un,vn)}n∈N is bounded in C⊕ then {(A1vn, A2un)}n∈N is relatively compact in C⊕. Since {(un,vn)} is bounded in C⊕, there exists a sequence {an} of real numbers such that ∥un−an∥∞ and ∥vn+an∥∞ are bounded. A sufficient condition for the relative compactness of {(A1vn, A2un)} in C⊕ is that {A1(vn+an)} and {A2(un−an)} are relatively compact in C(Ω) and C(Ω′), respectively. We only prove that {A2(un−an)} is relatively compact; the former is analogous. By the Arzelà–Ascoli theorem, it suffices to show equicontinuity. As the function [∫ ψ′′(ξ∗(x,·)) dP(x)]^(−1) is continuous by Proposition 2.3, and since the product of an equicontinuous sequence with a continuous function (on the compact space Ω′) remains equicontinuous, it moreover suffices to show that

(22)  ∫ ψ′′(ξ∗(x,·)) (un(x)−an) dP(x),  n ∈ N,

is equicontinuous. Indeed, let ρ be a modulus of continuity for ∫ ψ′′(ξ∗(x,·)) dP(x) and C = sup_n ∥un−an∥∞; then Cρ is a modulus of continuity for the functions in (22). This completes the proof that A is compact.

Next, we show that −1 is not an eigenvalue of A. Suppose that

(23)
f+a=−A1(g) and g−a=−A2(f) for some (f,g)∈ C(Ω)× C(Ω′)anda∈R; we need to prove that [(f,g)]⊕= [(0,0)]⊕. As a first step, we show that a= 0must hold in (23). Using (23) together with the definition of A2, we have (24)Z ψ′′(ξ∗(x,y))(f(x) +g(y))dQ(y) =−aZ ψ′′(ξ∗(x,y))dQ(y),for all x∈Ω and similarly by the definition of A1, (25)Z ψ′′(ξ∗(x,y))(f(x) +g(y))dP(x) =aZ ψ′′(ξ∗(x,y))dP(x),for all y∈Ω′. Integrating in (24) and (25) w.r.t. PandQ, respectively, we get aZ ψ′′(ξ∗(x,y))d(P⊗Q)(y) = 0, which implies a= 0. Hence, (23) simplifies to f=−A1(g)andg=−A2(f),which we can concatenate to deduce (26) f=A1A2(f). Next, we claim that for any x∈argmax |f|,fis constant on the set ∪y∈SQ(x)SP(y)⊂Ω. Indeed, taking norms on both sides of (26) yields ∥f∥∞=∥A1A2(f)∥∞. It then suffices to show that, for any h∈ C(Ω), ∥A1A2(h)∥∞<∥h∥∞ unless his constant on the set ∪y∈SQ(x)SP(y)for all x∈argmax |h|. To see the latter, fix x∗∈argmax |h|and define the probability measure A7→µx∗(A) =R ψ′′(ξ∗(x∗,y))R Aψ′′(ξ∗(x,y))dP(x)Rψ′′(ξ∗(x,y))dP(x)dQ(y) R ψ′′(ξ∗(x∗,y))dQ(y) onΩ. ThenR h(x)dµx∗(x) =A1A2(h)(x∗)for any h∈ C(Ω). In particular, using also (26), ∥f∥∞=|f(x∗)|=|A1A2(f)(x∗)|= Z f(x)dµx∗(x) . Asfis continuous, this implies that f(x) =∥f∥∞for all x∈supp( µx∗), the support of µx∗. Observing that supp( µx∗)contains the setS y∈SQ(x∗)SP(y)completes the proof of the claim thatfis constant on the set ∪y∈SQ(x)SP(y)for all x∈argmax |f|. Next, we show that f=∥f∥∞onΩ. By Lemma 4.2 there is α >0such that (x+αB)∩Ω⊂ ∪y∈SQ(x)SP(y)for all x∈Ω. Fixx∗∈argmax |f|. The above then implies f=∥f∥∞on(x∗+αB)∩Ω. We iterate this argument to prove that f=∥f∥∞onΩ. Indeed, let ¯x∈Ωbe arbitrary; we show that f(¯x) = ∥f∥∞. AsΩis compact and connected, there exist N∈Nandx1,x2,...,x N∈Ωwithx1= x∗,xN= ¯xand|xi−xi+1| ≤α/2(e.g., such xiarise from an open cover of Ωby balls of radius α/2). We know f(x1) =∥f∥∞and the above argument shows that if f(xi) = ∥f∥∞then also f=∥f∥∞on(xi+αB)∩Ω, which in particular yields f(xi+1) =∥f∥∞. 
Inductively, we obtain f(¯x) = f(xN) = f(x1) = ∥f∥∞. This completes the proof that f = ∥f∥∞ on Ω. Using the fact that g = −A2(f) and that f is constant, we derive that g ≡ −∥f∥∞, implying that (f,g) ∼⊕ (0,0). In summary, we have shown that −1 is not an eigenvalue of A, and hence that the linear operator L = Id + A is injective. By Fredholm's alternative, it follows that L : C⊕ → C⊕ is also surjective. As C⊕ is a Banach space, the inverse is necessarily continuous, completing the proof.

As a by-product of the invertibility of L, we obtain the uniqueness of the potentials.

PROOF OF THEOREM 3.2. Let (f∗,g∗) be as above and let (˜f∗,˜g∗) be another solution of the first-order condition (13). We first note that any convex combination (λ˜f∗+(1−λ)f∗, λ˜g∗+(1−λ)g∗), for λ ∈ (0,1), is again a solution of (13). Indeed, by Proposition 2.3, the set of solutions of (13) is the set of (continuous) optimizers of the dual problem (11), and the latter set is convex since (11) is a concave maximization problem. We can now use Lemma 4.1 to derive

0 = (d/dλ)|_{λ=0} ˜Γ(λ˜f∗+(1−λ)f∗, λ˜g∗+(1−λ)g∗) = L(˜f∗−f∗, ˜g∗−g∗).

As L is invertible by Proposition 4.4, it follows that (˜f∗−f∗, ˜g∗−g∗) = 0 in C⊕.

5. Proofs of the Central Limit Theorems.

5.1. Consistency of the Potentials. We first state the consistency of the empirical potentials towards their population counterparts.

LEMMA 5.1. For each n ∈ N, let (fn,gn) ∈ C⊕ be any solution of
Γn(fn,gn) = (1 ,1). Then ∥(fn,gn)−(f∗,g∗)∥⊕a.s.− − →0. PROOF . Recall that empirical quantities such as Pn,fn,... implicitly depend on the re- alization X1(ω),...,X n(ω),Y1(ω),...,Y n(ω)of the sample. To make this dependence ex- plicit, we add a superscript ω. AsX1,...,X n,Y1,...,Y nare i.i.d., the Glivenko–Cantelli theorem implies that there is a set Ω1⊂ΩwithP(Ω1) = 1 such that for all ω∈Ω1, sup ∥f∥0,1≤1 Z fd(Pω n−P) →0 and sup ∥g∥0,1≤1 Z gd(Qω n−Q) →0. By Proposition 2.3 there is a constant K > 0such that ∥(fω n⊕gω n)∥0,1≤Kfor all n≥ 1andω∈Ω. Recalling the notation ξn(x,y) :=fn(x) +gn(y)−c(x,y), it follows that ∥ψ′(ξω n)∥0,1≤K′for a constant K′. We conclude that for all (x0,y0)∈Ω×Ω′and all ω∈Ω1, (1,1)−Γ(fω n,gω n)(x0,y0) = Γω n(fω n,gω n)−Γ(fω n,gω n) (x0,y0) =R ψ′(ξω n(x0,y))d(Qω n−Q)(y)R ψ′(ξω n(x,y0))d(Pω n−P)(x) →0. (27) Fixω∈Ω1. After passing to a subsequence, (fω n,gω n)converges in C⊕to a limit (fω 0,gω 0). We have Γ(fω n,gω n)→Γ(fω 0,gω 0)by the continuity of Γ, and thus (27) shows that Γ(fω 0,gω 0) = (1,1). The uniqueness in Theorem 3.2 now implies that (fω 0,gω 0) = (f∗,g∗)as elements ofC⊕. More precisely, this argument shows that any convergent subsequence converges to(f∗,g∗), and hence that the whole sequence (fω n,gω n)converges to (f∗,g∗)inC⊕. As P(Ω1) = 1 , this amounts to the claimed a.s. convergence. 18 A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ 5.2. Key Decomposition. To facilitate the reading we will assume from now on that (fn,gn)are appropriately shifted so that ∥(fn−f∗,gn−g∗)∥⊕=∥fn−f∗∥∞+∥gn−g∗∥∞. We also fix a Hölder exponent β∈(0,1). AsR ψ′′(ξ∗(·,y))dQ(y)is bounded away from zero and continuous (Proposition 2.3), the linear operator F:C(Ω)× C(Ω′)∋(f,g)7→ fRψ′′(ξ∗(·,y))dQ(y) gRψ′′(ξ∗(x,·))dP(x)! ∈ C(Ω)× C(Ω′) is bounded and bijective. We define the Banach space BF,βof all functions (f,g)∈ C(Ω)× C(Ω′)with finite norm ∥(f,g)∥F,β:=∥F−1(f,g)∥0,β. This norm is stronger than the uniform norm. 
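The norm hierarchy can be made concrete with a small numerical illustration (not from the paper): functions bounded in a Hölder-type norm are equicontinuous, hence relatively compact in the sup norm by Arzelà–Ascoli, whereas sup-norm boundedness alone gives no such control. The oscillating family below stays in the sup-norm unit ball while its discrete Hölder seminorm blows up, so it eventually leaves every Hölder ball.

```python
import numpy as np

def holder_seminorm(vals, xs, beta):
    """Discrete Hölder-beta seminorm: max |f(x)-f(y)| / |x-y|^beta over grid pairs."""
    dv = np.abs(vals[:, None] - vals[None, :])
    dx = np.abs(xs[:, None] - xs[None, :]) ** beta
    off = ~np.eye(len(xs), dtype=bool)
    return float(np.max(dv[off] / dx[off]))

xs = np.linspace(0.0, 1.0, 201)
beta = 0.5
freqs = [1, 4, 16, 64]
family = [np.sin(2 * np.pi * n * xs) for n in freqs]  # bounded in sup norm
sup_norms = [float(np.max(np.abs(v))) for v in family]
hol_norms = [holder_seminorm(v, xs, beta) for v in family]
```

The sup norms stay at 1 while the Hölder seminorms grow with the frequency; this is exactly the phenomenon that makes the unit ball of C⊕ non-compact and forces the passage to the stronger norm ∥·∥F,β in what follows.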
Indeed, a crucial fact for our subsequent ar- guments is that the unit ball of BF,βis compactly embedded in C(Ω)× C(Ω′)(by the same reasoning as in [19, Lemma 6.33]). It follows from Theorem B.4 of Appendix B that the central limit theorem √n(Γ(f∗,g∗)−Γn(f∗,g∗)) =√nR ψ′(ξ∗(·,y))d(Q−Qn)(y)R ψ′(ξ∗(x,·))d(P−Pn)(x) w− → GQ GP holds in C0,β(Ω)× C0,β(Ω′), where GQ,GPare Gaussian processes as detailed in Theo- rem 3.4. As a consequence, the isometric image ˜Γn=F(Γn)satisfies the following central limit theorem in the space BF,β. LEMMA 5.2. The following weak limit holds in BF,β, √n[˜Γ−˜Γn](f∗,g∗) :=√n ˜Γ(f∗,g∗)−˜Γn(f∗,g∗)w− → GQ Rψ′′(ξ∗(·,y))dQ(y) GP Rψ′′(ξ∗(x,·))dP(x)! . In particular, the sequence√n[˜Γ−˜Γn](f∗,g∗)is tight in BF,β. The goal of this subsection, and indeed the key technical result, is the following decompo- sition of (fn−f∗,gn−g∗)into three terms. PROPOSITION 5.3. There exist random functions Un,Vn,Wn∈ C(Ω)andU′ n,V′ n,W′ n∈ C(Ω′), and compact sets K ⊂ C (Ω)andK′⊂ C(Ω′), such that (i)(Un,U′ n)takes values in K × K′for all n, (ii)∥(Vn,V′ n)∥∞a.s.− − →0, (iii)L(Wn,W′ n)is tight in BF,β, (iv) for all n, (28) fn−f∗ gn−g∗ =∥fn−f∗,gn−g∗∥⊕ Un+Vn U′ n+V′ n +n−1 2 Wn W′ n . Next, we collect the higher-level steps of the proof, while outsourcing the more technical parts to the subsequent lemmas. SPARSE REGULARIZED OPTIMAL TRANSPORT 19 PROOF OF PROPOSITION 5.3. By Lemma
4.1 there are random functions (Vn,L,V′ n,L)∈ C(Ω)× C(Ω′)with∥(Vn,L,V′ n,L)∥∞a.s.− →0such that ˜Γ(fn,gn)−˜Γ(f∗,g∗) =L(fn−f∗,gn−g∗)− ∥(fn−f∗,gn−g∗)∥⊕(Vn,L,V′ n,L). Using the fact that Γ(f∗,g∗) = (1 ,1) = Γ n(fn,gn)and setting ˜Γn:= Γ(1) n Rψ′′(ξ∗(·,y))dQ(y) Γ(2) n Rψ′′(ξ∗(x,·))dP(x) , this yields (29) ˜Γ(fn,gn)−˜Γn(fn,gn) =L(fn−f∗,gn−g∗)− ∥(fn−f∗,gn−g∗)∥⊕(Vn,L,V′ n,L). Recall from Proposition 4.4 that Lis a bounded bijection. Moreover, ∥˜Γ(f∗−fn,g∗− gn)∥C⊕→0a.s. by Lemma 5.1 and the continuity of ˜Γ. Setting also (30) ∆n:= [˜Γ−˜Γn](f∗,g∗)−[˜Γ−˜Γn](fn,gn), we deduce (fn−f∗,gn−g∗) =L−1([˜Γ−˜Γn](f∗,g∗))−L−1(∆n) +∥(fn−f∗,gn−g∗)∥⊕L−1(Vn,L,V′ n,L). Set(Wn,W′ n) =n1 2L−1([˜Γ−˜Γn](f∗,g∗)). Then L(Wn,W′ n)is tight by Lemma 5.2 and L−1([˜Γ−˜Γn](f∗,g∗)) =n−1 2(Wn,W′ n). Therefore, (fn−f∗,gn−g∗) =n−1 2(Wn,W′ n)−L−1(∆n) +∥(fn−f∗,gn−g∗)∥⊕L−1(Vn,L,V′ n,L). Lemma 5.4 below derives a decomposition for the remaining term ∆nwhich completes the proof of Proposition 5.3 after recalling that L−1is continuous. Specifically, the func- tion(Vn,V′ n)in the assertion of Proposition 5.3 arises from combining the function L−1(Vn,L,V′ n,L)above with the function L−1(˜Vn,˜V′ n)from Lemma 5.4. The following lemma encapsulates the main step of the preceding proof of Proposition 5.3 and uses the notation ∆nfrom (30). LEMMA 5.4. There exist compact sets ˜K ⊂ C (Ω)and˜K′⊂ C(Ω′), and random functions ˜Un,˜Vn∈ C(Ω)and˜U′ n,˜V′ n∈ C(Ω′), such that (i)(˜Un,˜U′ n)takes values in ˜K טK′for all n, (ii)∥˜Vn,˜V′ n∥∞a.s.− − →0, (iii) for all n, ∆n= ∥fn−f∗∥∞˜Vn ∥gn−g∗∥∞˜V′ n + ∥gn−g∗∥∞˜Un ∥fn−f∗∥∞˜U′ n . PROOF . We prove the claims concerning the functions on Ω; the functions on Ω′are obtained analogously. We start by recalling that Γ(1)(fn,gn)−Γ(1) n(fn,gn) =Z ψ′(ξn(·,y))d(Q−Qn)(y) 20 A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ and Γ(1)(f∗,g∗)−Γ(1) n(f∗,g∗) =Z ψ′(ξ∗(·,y))d(Q−Qn)(y). Set ∆(1) n:= Γ(1)(f∗,g∗)−Γ(1) n(f∗,g∗)− Γ(1)(fn,gn)−Γ(1) n(fn,gn) . 
Note that writing ∆(1) nis a slight abuse of notation: defining analogously ∆(2) n, we have (31) ∆n= ∆(1) nR ψ′′(ξ∗(·,y))dQ(y),∆(2) nR ψ′′(ξ∗(x,·))dP(x)! . The fundamental theorem of calculus implies that ∆(1) n=Z {ψ′(ξ∗(·,y))−ψ′(ξn(·,y))}d(Q−Qn)(y) =Z Z1 0ψ′′ (1−λ)ξn(·,y) +λξ∗(·,y) (ξ∗(·,y)−ξn(·,y))dλd(Q−Qn)(y). We set ψn(·,y) :=R1 0ψ′′ (1−λ)ξn(·,y) +λξ∗(·,y) dλand use the definitions of ξ∗,ξnto deduce ∆(1) n=Z ψn(·,y)(ξ∗(·,y)−ξn(·,y))d(Q−Qn)(y) = (f∗−fn)Z ψn(·,y)d(Q−Qn)(y) +Z ψn(·,y)(g∗(y)−gn(y))d(Q−Qn)(y). (32) Considering the first term in (32) and adding back the denominator from (31), set ˜Vn:=1R ψ′′(ξ∗(·,y))dQ(y)Z ψn(·,y)d(Q−Qn)(y). Then ˜Vnis a random variable with values in C(Ω). Asξn,ξ∗have uniformly bounded Lip- schitz norms by Proposition 2.3, it follows from Lemma 5.5 below that ∥˜Vn∥∞→0a.s. Regarding the second term in (32), set (using the convention 0/0 := 0 if necessary) ˜Un:=1R ψ′′(ξ∗(x,·))dP(x)Z ψn(·,y)g∗(y)−gn(y) ∥gn−g∗∥∞d(Q−Qn)(y). Then Lemma 5.6 below, applied with dµ(y) :=g∗(y)−gn(y) ∥gn−g∗∥∞d(Q−Qn)(y), shows that ˜Unis a random variable with values in a compact set ˜K ⊂ C (Ω). The next two technical lemmas were used in the preceding proof of Lemma 5.4. LEMMA 5.5. For any C∈R, the class of functions F:= Ω′∋y7→Z1 0ψ′′(h(x,y,λ ))dλ:x∈Ω,∥h∥0,1≤C is relatively compact in C(Ω). As a consequence, sup f∈F Z fd(Qn−Q) a.s.− − →0. SPARSE REGULARIZED OPTIMAL TRANSPORT 21 PROOF . Let ρbe a modulus of continuity of ψ′′on[−C,C], i.e., ψ′′(t1)−ψ′′(t2)≤ ρ(|t1−t2|)for all t1,t2∈[−C,C], where ρis monotone increasing and ρ(s)→0ass→0. Then the modulus of continuity ofR1 0ψ′′(h(x,·,λ))dλis bounded as follows, Z1 0ψ′′(h(x,y1,λ))dλ−Z1 0ψ′′(h(x,y2,λ))dλ ≤Z1 0|ψ′′(h(x,y1,λ))−ψ′′(h(x,y2,λ))|dλ ≤Z1 0ρ(|h(x,y1,λ)−h(x,y2,λ)|)dλ ≤ρ(C∥y1−y2∥). In view of the Arzelà–Ascoli theorem, this implies that Fis relatively compact in C(Ω). The second claim follows from this
compactness; see [42, Theorem 2.4.1]. LEMMA 5.6. Setψn(·,y) :=R1 0ψ′′((1−λ)ξn(·,y) +λξ∗(·,y))dλ. The family of func- tionsZ ψn(·,y)dµ(y), n≥1 where µ∈ C(Ω)′is any signed measure with ∥µ∥TV≤2, is relatively compact in C(Ω′). PROOF . Similarly as in the proof of Lemma 5.5, these functions are uniformly bounded and admit a common modulus of continuity. We conclude this subsection with a corollary of Proposition 5.3 that will be used to prove the central limit theorem, Theorem 3.4. COROLLARY 5.7. The sequence ∆n:= [˜Γ−˜Γn](f∗,g∗)−[˜Γ−˜Γn](fn,gn)satisfies ∥∆n∥⊕=oP ∥(f∗−fn,g∗−gn)∥⊕+n−1 2 . PROOF . We write ∆nas in (31) and detail the proof for the first component only. As in (32), ∆(1) n=Z ψn(·,y)(f∗−fn+g∗(y)−gn(y))d(Q−Qn)(y). (33) Recall the notation from Proposition 5.3, in particular Un,Vn,Wn∈ C(Ω)andU′ n,V′ n,W′ n∈ C(Ω′), and the compact sets KandK′. Inserting (28) into (33), we have ∆(1) n=∥(f∗−fn,g∗−gn)∥⊕Z ψn(·,y)(Un+U′ n(y))d(Q−Qn)(y) | {z } =:An +∥(f∗−fn,g∗−gn)∥⊕Z ψn(·,y)(Vn+V′ n(y))d(Q−Qn)(y) | {z } =:Bn +n−1 2Z ψn(·,y)(Wn+W′ n(y))d(Q−Qn)(y) | {z } =:Cn. 22 A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ Asψn(x,·)is an equicontinuous and bounded family, F:= y7→ψn(x,y)(f(x) +g(y)) : (f,g)∈ K × K′, x∈Ω, n≥1 is relatively compact in C(Ω′). Thus Fis uniformly Glivenko–Cantelli and we get ∥An∥∞≤ ∥(f∗−fn,g∗−gn)∥⊕sup g∈F Z g(y)d(Q−Qn)(y) =oP(∥(f∗−fn,g∗−gn)∥⊕).(34) Next, recalling ∥(Vn,V′ n)∥∞a.s.− →0, the bound ∥Bn∥∞≤ ∥(f∗−fn,g∗−gn)∥⊕∥ψn∥∞∥(Vn,V′ n)∥∞sup ∥g∥∞≤1 Z g(y)d(Q−Qn)(y) ≤2∥(f∗−fn,g∗−gn)∥⊕∥ψn∥∞∥(Vn,V′ n)∥∞ yields ∥Bn∥∞=oP(∥(f∗−fn,g∗−gn)∥⊕). (35) Finally, as the unit ball wrt. ∥ · ∥F,βis compact in C(Ω)× C(Ω′)andLis a continuous bijection, the class F′:={y7→ψn(x,y)(f(x) +g(y)) :∥L(f,g)∥F,β≤1, x∈Ω, n≥1} is uniformly Glivenko–Cantelli. Since L(Wn,W′ n)is tight in BF,β,∥L(Wn,W′ n)∥F,βis stochastically bounded and we deduce ∥Cn∥∞≤n−1 2∥L(Wn,W′ n)∥F,βsup g∈F′ Z g(y)d(Q−Qn)(y) ≤oP n−1 2 . 
(36) Combining the bounds (34), (35) and (36), we conclude ∥∆(1) n∥∞=oP ∥(f∗−fn,g∗−gn)∥⊕+n−1 2 . The result follows since the denominator in (31) is continuous and bounded away from zero. 5.3. Deriving the central limit theorems. We can now establish the central limit theo- rems. We begin with the one for the potentials. PROOF OF THEOREM 3.4. Armed with the results of Sections 5.1 and 5.2, the central limit theorem for the potentials follows by the standard reasoning in Z-estimation; see [42, Theorem 3.3.1]. We give the details for the sake of completeness. We can write (29) as L(fn−f∗,gn−g∗) = [˜Γ−˜Γn](fn,gn) +∥(fn−f∗,gn−g∗)∥⊕(Vn,L,V′ n,L) = [˜Γ−˜Γn](f∗,g∗)−∆n+∥(fn−f∗,gn−g∗)∥⊕(Vn,L,V′ n,L). In view of Lemma 5.2 and Corollary 5.7, we deduce that ∥L(fn−f∗,gn−g∗)∥⊕=oP(∥(f∗−fn,g∗−gn)∥⊕) +OP n−1 2 . Recalling that Lhas a bounded inverse (Proposition 4.4), it follows that ∥(fn−f∗,gn−g∗)∥⊕≤oP(∥(f∗−fn,g∗−gn)∥⊕) +OP n−1 2 . SPARSE REGULARIZED OPTIMAL TRANSPORT 23 As we also have ∥(fn−f∗,gn−g∗)∥⊕=oP(1)by Lemma 5.1, this implies the rate ∥(fn− f∗,gn−g∗)∥⊕=OP n−1 2 . Substituting this into (29), we obtain ˜Γ(fn,gn)−˜Γn(fn,gn)−L(fn−f∗,gn−g∗) ⊕=oP n−1 2 which, using again Corollary 5.7 and ∥(fn−f∗,gn−g∗)∥⊕=oP(1), leads to ˜Γ(f∗,g∗)−˜Γn(f∗,g∗)−L(fn−f∗,gn−g∗) ⊕=oP n−1 2 . (37) Denote by ˜Gthe right-hand side in Lemma 5.2. As n1/2˜Γ(f∗,g∗)−˜Γn(f∗,g∗)w− →˜Gin C⊕by Lemma 5.2, (37) implies that n1/2L(fn−f∗,gn−g∗)w− →˜G inC⊕as well. Using that L−1is continuous (Proposition 4.4) and applying the continuous mapping theorem, we conclude that n1/2(fn−f∗,gn−g∗)w− →L−1˜G, which was the claim. We continue with the central limit theorem for the optimal cost. PROOF OF THEOREM 3.5. From the fact that (f∗,g∗)and(fn,gn)are optimizers of
the dual problem (11) for (P,Q)and(Pn,Qn), respectively, we deduce the two inequalities ROT( Pn,Qn)−ROT( P,Q) ≥Z f∗(x) +g∗(y)−ψ(f∗(x) +g∗(y)−c(x,y)) d((Pn⊗Qn)−(P⊗Q))(x,y) and ROT( Pn,Qn)−ROT( P,Q) ≤Z fn(x) +gn(y)−ψ(fn(x) +gn(y)−c(x,y)) d((Pn⊗Qn)−(P⊗Q))(x,y). Using our shorthands (15) and dropping the integration variables from the notation, the dif- ference between the upper and the lower bound is (38)Z fn(x) +gn(y)−ψ(fn(x) +gn(y)−c(x,y)) −(f∗(x) +g∗(y)−ψ(f∗(x) +g∗(y)−c(x,y))) d((Pn⊗Qn)−(P⊗Q))(x,y) =Z (ξn−ξ∗)−(ψ(ξn)−ψ(ξ∗)) d((Pn⊗Qn)−(P⊗Q)). Using the fundamental theorem of calculus in the same way as below (31), and setting ˜ψn:= 1 +R1 0ψ′ (1−λ)ξn+λξ∗ dλ, the right-hand side of (38) can be written as Z ˜ψn·(ξn−ξ∗)d((Pn⊗Qn)−(P⊗Q)) 24 A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ which has the same structure as (33), except that now the integration is over both variables. We proceed as below (33): insert the decomposition (28), split the integral into three terms, and treat them with the same arguments as in Eqs. (34) to (36). The result is that (38)=oP ∥(f∗−fn,g∗−gn)∥⊕+n−1 2 . From Theorem 3.4 we further know that ∥(f∗−fn,g∗−gn)∥⊕=OP n−1 2 , whence we conclude that (38) =oP n−1 2 . Recall that (38) is the difference of the upper and lower bounds for ROT( Pn,Qn)−ROT( P,Q). As a consequence, ROT( Pn,Qn)−ROT( P,Q) equals the lower bound up to oP n−1 2 ; to wit, ROT( Pn,Qn)−ROT( P,Q) +oP n−1 2 =Z (f∗(x) +g∗(y)−ψ(f∗(x) +g∗(y)−c(x,y))) d((Pn⊗Qn)−(P⊗Q))(x,y). We can now apply the central limit theorem for U-statistics [41, Theorem 12.6] to derive the result. (In fact, to obtain the formula for the variance σ2, it is easier to directly use the formula of the Hájek projection stated just before [41, Theorem 12.6].) Finally, we show the central limit theorem for the couplings. PROOF OF THEOREM 3.6. Note thatZ ηd(πn−π) =Z ¯ηd(πn−π). 
Recalling from Proposition 2.3 thatdπ dP⊗Q=ψ′(ξ∗)anddπn dPn⊗Qn=ψ′(ξn), we obtain Z ηd(πn−π) =Z ¯η{ψ′(ξn)−ψ′(ξ∗)}d((Pn⊗Qn)−(P⊗Q)) | {z } =:An +Z ¯η{ψ′(ξn)−ψ′(ξ∗)}d(P⊗Q) | {z } =:Bn +Z ¯ηψ′(ξ∗)d((Pn⊗Qn)−(P⊗Q)) | {z } =:Cn. Arguing as in (34) and then using Theorem 3.4, we have An=oP(∥(f∗−fn,g∗−gn)∥⊕) =oP n−1 2 . Turning to Bn, a first-order Taylor development of ψ′yields Bn=Z (ξn−ξ∗)¯ηψ′′(ξ∗)d(P⊗Q) +oP(∥(fn−f∗,gn−g∗)∥⊕) =Z (ξn−ξ∗)¯ηψ′′(ξ∗)d(P⊗Q) +oP n−1 2 . Moreover, we see from (37) that (fn−f∗,gn−g∗)−L−1 [˜Γ−˜Γn](f∗,g∗) ⊕=oP n−1 2 . SPARSE REGULARIZED OPTIMAL TRANSPORT 25 After inserting the definitions of ˜Γand˜Γn, this yields the third equality in Bn=Z (ξn−ξ∗)¯ηψ′′(ξ∗)d(P⊗Q) +oP n−1 2 =Z (⊕(fn−f∗,gn−g∗))¯ηψ′′(ξ∗)d(P⊗Q) +oP n−1 2 =−Z ⊕ L−1  Rψ′(ξ∗(·,y))d(Qn−Q)(y)Rψ′′(ξ∗(·,y))dQ(y) Rψ′(ξ∗(x,·))d(Pn−P)(x)Rψ′′(ξ∗(x,·))dP(x)   ⊕ ¯ηψ′′(ξ∗)d(P⊗Q) +oP n−1 2 =−1 n2nX i,j=1Z ⊕ L−1  ψ′(ξ∗(·,Yj))−Rψ′(ξ∗(·,y))dQ(y)Rψ′′(ξ∗(·,y))dQ(y) ψ′(ξ∗(Xi,·)−Rψ′(ξ∗(x,·))dP(x))Rψ′′(ξ∗(x,·))dP(x)   ⊕ ¯ηψ′′(ξ∗) d(P⊗Q) +oP n−1 2 =:¯Bn+oP n−1 2 . Note also that ¯Bnis centered (w.r.t. P) because the precomposition of a centered random variable with a bounded linear operator is again centered. Turning to Cn, we can write Cn=1 n2nX i,j=1¯η(Xi,Yj)ψ′(ξ∗(Xi,Yj))−Z ¯ηψ′(ξ∗)d(P⊗Q) which is also centered. Moreover, the variances of ¯BnandCnare bounded by a constant depending only on ∥ξ∗∥∞,∥η∥∞, Z ψ′′(ξ∗(·,y))dQ(y)−1 ∞and Z ψ′′(ξ∗(x,·))dP(x)−1 ∞. In summary, up to negligible terms,R ηd(πn−π)is aU-statistic with finite variance. We can then apply the central limit theorem for U-statistics (see [41, Theorem 12.6]) to conclude the result. (Once again, to obtain the formula for the
variance σ2(η), it is easier to directly use the formula of the Hájek projection stated just before [41, Theorem 12.6].) Funding. SE is grateful for support by the German Research Foundation through Project 553088969 as well as the Cluster of Excellence “Machine Learning — New Perspectives for Science” (EXC 2064/1 number 390727645). MN acknowledges funding by NSF Grants DMS-2407074 and DMS-2106056. REFERENCES [1] B AYRAKTAR , E., E CKSTEIN , S. and Z HANG , X. (2025). Stability and sample complexity of divergence regularized optimal transport. Bernoulli 31213–239. [2] B LONDEL , M., S EGUY , V. and R OLET , A. (2018). Smooth and Sparse Optimal Transport. Proceedings of Machine Learning Research 84880–889. [3] C ARLIER , G. and L ABORDE , M. (2020). A Differential Approach to the Multi-Marginal Schrödinger Sys- tem. SIAM Journal on Mathematical Analysis 52709-717. https://doi.org/10.1137/19M1253800 [4] C HEWI , S., N ILES -WEED, J. and R IGOLLET , P. (2025). Statistical Optimal Transport . Springer. [5] C HOW , Y. S. and T EICHER , H. (1997). Probability Theory . Springer New York. https://doi.org/10.1007/978-1-4612-1950-7 [6] C ÁRCAMO , J., C UEVAS , A. and R ODRÍGUEZ , L.-A. (2020). Directional differentiability for supremum- type functionals: Statistical applications. Bernoulli 26. https://doi.org/10.3150/19-bej1188 [7] DEL BARRIO , E. and L OUBES , J.-M. (2019). Central limit theorems for empirical transportation cost in general dimension. The Annals of Probability 47926 – 951. https://doi.org/10.1214/18-AOP1275 26 A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ [8] DEL BARRIO , E. and L OUBES , J.-M. (2020). The statistical effect of entropic regularization in optimal transportation. arXiv:2006.05199 . [9] DEL BARRIO , E., S ANZ, A. G., L OUBES , J.-M. and N ILES -WEED, J. (2023). An Improved Central Limit Theorem and Fast Convergence Rates for Entropic Transportation Costs. SIAM Journal on Mathematics of Data Science 5639-669. 
https://doi.org/10.1137/22M149260X [10] D ESSEIN , A., P APADAKIS , N. and R OUAS , J.-L. (2018). Regularized Optimal Transport and the rot mover’s Distance. J. Mach. Learn. Res. 191–53. [11] D IMARINO , S. and G EROLIN , A. (2020). Optimal Transport losses and Sinkhorn algorithm with general convex regularization. Preprint arXiv:2007.00976v1 . [12] D UDLEY , R. M. (1968). The speed of mean Glivenko-Cantelli convergence. Ann. Math. Statist. 4040–50. MR236977 [13] E CKSTEIN , S. and K UPPER , M. (2021). Computation of optimal transport and related hedging problems via penalization and neural networks. Appl. Math. Optim. 83639–667. MR4239795 [14] E CKSTEIN , S. and N UTZ, M. (2024). Convergence Rates for Regularized Optimal Transport via Quantiza- tion. Math. Oper. Res. 491223-1240. https://doi.org/10.1287/moor.2022.0245 [15] E SSID , M. and S OLOMON , J. (2018). Quadratically regularized optimal transport on graphs. SIAM J. Sci. Comput. 40A1961–A1986. MR3820384 [16] F OURNIER , N. and G UILLIN , A. (2014). On the rate of convergence in Wasserstein distance of the empirical measure. Probability Theory and Related Fields 162707–738. https://doi.org/10.1007/s00440-014-0583-7 [17] G ARRIZ -MOLINA , A., G ONZÁLEZ -SANZ, A. and M ORDANT , G. (2024). Infinitesimal behavior of Quadratically Regularized Optimal Transport and its relation with the Porous Medium
Equation. arXiv:2407.21528 . [18] G ENEVAY , A., C HIZAT , L., B ACH, F., C UTURI , M. and P EYRÉ , G. (2019). Sample Complexity of Sinkhorn Divergences. In Proceedings of the Twenty-Second International Conference on Artificial Intel- ligence and Statistics (K. C HAUDHURI and M. S UGIYAMA , eds.). Proceedings of Machine Learning Re- search 891574–1583. PMLR. [19] G ILBARG , D. and T RUDINGER , N. S. (1983). Elliptic partial differential equations of second order , second ed.Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 224. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-61798-0 MR737190 [20] G OLDFELD , Z., K ATO, K., R IOUX , G. and S ADHU , R. (2024). Statistical inference with regularized opti- mal transport. Inf. Inference 13Paper No. 13, 68. https://doi.org/10.1093/imaiai/iaad056 [21] G OLDFELD , Z., K ATO, K., R IOUX , G. and S ADHU , R. (2024). Limit theorems for entropic optimal trans- port maps and Sinkhorn divergence. Electronic Journal of Statistics 18. https://doi.org/10.1214/24-ejs2217 [22] G ONZÁLEZ -SANZ, A. and H UNDRIESER , S. (2023). Weak Limits for Empirical Entropic Optimal Trans- port: Beyond Smooth Costs. arXiv:2305.09745 . [23] G ONZÁLEZ -SANZ, A., L OUBES , J.-M. and N ILES -WEED, J. (2024). Weak limits of entropy regularized Optimal Transport; potentials, plans and divergences. arXiv:2207.07427 . [24] G ONZÁLEZ -SANZ, A. and N UTZ, M. (2024). Quantitative Convergence of Quadratically Regularized Lin- ear Programs. arXiv:2408.04088 . [25] G ONZÁLEZ -SANZ, A. and N UTZ, M. (2024). Sparsity of Quadratically Regularized Optimal Transport: Scalar Case. arXiv:2410.03353 . [26] G ULRAJANI , I., A HMED , F., A RJOVSKY , M., D UMOULIN , V. and C OURVILLE , A. (2017). Improved Training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems 5769–5779. [27] J AIN, N. C. (1976). Central limit theorem in a Banach space. 
In Probability in Banach Spaces (A. B ECK, ed.) 113–130. Springer, Berlin, Heidelberg. [28] L EDOUX , M. and T ALAGRAND , M. (1991). Probability in Banach Spaces . Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-20212-4 [29] L I, L., G ENEVAY , A., Y UROCHKIN , M. and S OLOMON , J. M. (2020). Continuous Regularized Wasserstein Barycenters. In Advances in Neural Information Processing Systems (H. L AROCHELLE , M. R ANZATO , R. H ADSELL , M. F. B ALCAN and H. L IN, eds.) 3317755–17765. Curran Associates, Inc. [30] L ORENZ , D. and M AHLER , H. (2022). Orlicz space regularization of continuous optimal transport prob- lems. Appl. Math. Optim. 85Paper No. 14, 33. MR4409806 [31] L ORENZ , D., M ANNS , P. and M EYER , C. (2021). Quadratically regularized optimal transport. Appl. Math. Optim. 831919–1949. MR4261277 [32] M ALLASTO , A., G EROLIN , A. and M INH, H. Q. (2021). Entropy-regularized 2-Wasserstein distance be- tween Gaussian measures. Information Geometry 5289–323. https://doi.org/10.1007/s41884-021-00052-8 [33] M ANOLE , T. and N ILES -WEED, J. (2024). Sharp convergence rates for empirical optimal transport with smooth costs. Ann. Appl. Probab. 341108–1135. https://doi.org/10.1214/23-aap1986 MR4700254 SPARSE REGULARIZED OPTIMAL TRANSPORT 27 [34] M ENA, G. and N
ILES -WEED, J. (2019). Statistical bounds for entropic optimal transport: sample complex- ity and the central limit theorem. In Advances in Neural Information Processing Systems (H. W ALLACH , H. L AROCHELLE , A. B EYGELZIMER , F. D' ALCHÉ -BUC, E. F OXand R. G ARNETT , eds.) 32. Curran Associates, Inc. [35] M UZELLEC , B., N OCK, R., P ATRINI , G. and N IELSEN , F. (2017). Tsallis Regularized Optimal Trans- port and Ecological Inference. Proceedings of the AAAI Conference on Artificial Intelligence 31. https://doi.org/10.1609/aaai.v31i1.10854 [36] N UTZ, M. (2024). Quadratically Regularized Optimal Transport: Existence and Multiplicity of Potentials. SIAM J. Math. Anal., to appear . [37] P EYRÉ , G. and C UTURI , M. (2019). Computational Optimal Transport: With Applications to Data Science. Foundations and Trends® in Machine Learning 11355–607. https://doi.org/10.1561/2200000073 [38] R IGOLLET , P. and S TROMME , A. J. (2025). On the sample complexity of entropic optimal transport. Ann. Statist. 5361–90. https://doi.org/10.1214/24-aos2455 MR4865008 [39] S CHMITZER , B. (2019). Stabilized sparse scaling algorithms for entropy regularized transport problems. SIAM J. Sci. Comput. 41A1443–A1481. https://doi.org/10.1137/16M1106018 MR3947294 [40] S EGUY , V., D AMODARAN , B. B., F LAMARY , R., C OURTY , N., R OLET , A. and B LONDEL , M. (2018). Large Scale Optimal Transport and Mapping Estimation. In International Conference on Learning Repre- sentations . [41] VAN DER VAART , A. W. (1998). Asymptotic Statistics . Cambridge University Press. https://doi.org/10.1017/cbo9780511802256 [42] VAN DER VAART , A. W. and W ELLNER , J. A. (1996). Weak Convergence and Empirical Processes . Springer. https://doi.org/10.1007/978-1-4757-2545-2 [43] V ILLANI , C. (2009). Optimal Transport: Old and New . Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-540-71050-9 MR2459454 [44] W IESEL , J. and X U, X. (2024). 
Sparsity of Quadratically Regularized Optimal Transport: Bounds on con- centration and bias. Preprint arXiv:2410.03425v1 . [45] Z HANG , S., M ORDANT , G., M ATSUMOTO , T. and S CHIEBINGER , G. (2023). Manifold Learning with Sparse Regularised Optimal Transport. arXiv:2307.09816 . 28 A. GONZÁLEZ-SANZ, S. ECKSTEIN, AND M. NUTZ APPENDIX A: SIMULATIONS In this section we provide numerical examples to illustrate our results, and circumstantial evidence that the obtained results may extend to quadratic regularization. We recall from Example 2.2 that quadratic regularization is a boundary case of Assumption 2.1. For our experiments we need to solve the population problem very accurately. In EOT, one typically uses quadratic cost with Gaussian marginals in such situations, as that example allows for a closed-form solution (e.g., [8, 32]). The first subsection below details the only example with semi-closed form solution for more general divergences that we are aware of. The second subsection uses that example to numerically study the convergence rate for two divergences. A.1. Example with semi-explicit population potentials. LetTd=Rd/Zdbe the flat torus and let µbe the volume measure on Td. Denote by [x]the equivalence class of a vector x∈Rd. The distance on Tdis given by d([x],[y]) = inf z∈Zd∥x−y−z∥. We consider the marginals P=Q=µand the transport cost c([x],[y]) =d2([x],[y]). In this symmetric “self-transport” problem, the population potentials (f∗,g∗)can be chosen
such that g∗ = f∗; this choice also removes the ambiguity about an additive constant. Indeed, f∗ is uniquely determined by the equation

1 = ∫ ψ′( (f∗([x]) + f∗([y]) − d²([x],[y])) / ε ) dµ([y]),  x ∈ Rd.

Define the constant Cε,d,ψ ∈ R via

1 = ∫ ψ′( (2Cε,d,ψ − d²([0],[y])) / ε ) dµ([y]).

For fixed x ∈ Rd, consider the translation y ↦ Tx(y) := x+y on Rd, which induces the map [y] ↦ Sx([y]) := [Tx(y)] = [x+y] on the quotient space Td = Rd/Zd. Note that Sx is an isometry and that the translation invariance of the volume measure implies (Sx)#µ = µ. Hence

1 = ∫ ψ′( (2Cε,d,ψ − d²([0],[y])) / ε ) dµ([y])
  = ∫ ψ′( (2Cε,d,ψ − d²([x], Sx([y]))) / ε ) dµ([y])
  = ∫ ψ′( (2Cε,d,ψ − d²([x],[y])) / ε ) d(Sx)#µ([y])
  = ∫ ψ′( (2Cε,d,ψ − d²([x],[y])) / ε ) dµ([y])

for all x ∈ Rd, showing that f∗ ≡ Cε,d,ψ.

A.2. Numerical illustrations. While the above example allows for explicit calculation, we emphasize that it actually leads to a degenerate regime, since the variance of the limits is zero. This is most easily seen in Theorem 3.5, where the variance is governed by the terms f∗(X) and ∫ ψ( (f∗(X) + g∗(y) − c(X,y)) / ε ) dµ(y) for X ∼ µ (as well as the symmetric terms for Y). We see that both terms are constant due to f∗ = g∗ ≡ Cε,d,ψ and the shift invariance of the cost c = d² and of µ. Hence, we expect to see rates which are even faster than n^(−1/2), irrespective of dimension. While ideally we would like to also illustrate the results in non-degenerate examples, we are not aware of further examples where explicit solutions exist, and the latter are required to correctly visualize the asymptotic regime.

FIG 1. Log-log plots of |OTε(Pn,Qn) − OTε(P,Q)| for dimensions d ∈ {1, 5, 10} (left, middle, right) and two different divergences, namely φ(x) = (2/3)(x^(3/2) − 1) leading to ψ(x) = (1/3)x₊³ + 2/3 (top), and φ(x) = (1/2)(x² − 1) leading to ψ(x) = (1/2)x₊² + 1/2 (bottom). We use ε = 0.5. We numerically observe approximate rates n^α with α ≈ −1 in all cases, irrespective of dimension.
The approximation is based on thirty independent evaluations of $|OT_\varepsilon(P_n,Q_n)-OT_\varepsilon(P,Q)|$, illustrated by the gray dots, while the black dots are the respective averages for each $n\in\{10,30,100,300,1000,3000\}$.

To obtain approximate solutions for the empirical marginals $P_n$ and $Q_n$, we use the Sinkhorn-like iterations described, for instance, in [30]. The results are illustrated in Figure A.2. As predicted by Theorem 3.5, we observe rates even faster than $n^{-1/2}$, and the slopes are similar for each dimension. (This is in contrast to the upper bounds in [1], which would only yield $\max\{n^{-1/2},n^{-2/d}\}$ in this case.) We observe similar slopes for $\varphi(x)=\frac{2}{3}(x^{3/2}-1)$ and $\varphi(x)=\frac{1}{2}(x^2-1)$. Note that the former divergence satisfies Assumption 2.1 as its conjugate is cubic, whereas the latter (quadratic) divergence does not.

APPENDIX B: CENTRAL LIMIT THEOREM IN HÖLDER SPACE

The aim of this section is to provide a central limit theorem in the Hölder space $\mathcal{C}^{0,\beta}(\Omega)$ with $0\le\beta<1$ for an i.i.d. sequence of random variables with values in $\mathcal{C}^{0,1}(\Omega)$ and finite second-order moments. While it is possible that this result is known, we have not found a suitable reference in the literature. We first recall some preliminaries on type II spaces, following the excellent survey [27].

DEFINITION B.1. Let $(E,\|\cdot\|_E)$ and $(F,\|\cdot\|_F)$ be separable Banach spaces such that $E\subset F$. The pair $(E,F)$ is said to be of type II if there exists $C\ge0$ such that
$$\mathrm{E}\Big[\Big\|\sum_{i=1}^{n}X_i\Big\|_F^2\Big]\le C\cdot\sum_{i=1}^{n}\mathrm{E}\big[\|X_i\|_E^2\big]$$
for any $n\in\mathbb{N}$ and any sequence $X_1,\dots,X_n$ of independent $E$-valued random variables with $\mathrm{E}[\|X_i\|_E]<\infty$ for all $i=1,\dots,n$.

The type II property is fundamental to derive central limit theorems in Banach spaces (see [27] and the references therein). To state the central limit theorem, recall that the expectation $\mathrm{E}[X]$ of a random variable $X$ with $\mathrm{E}[\|X\|_E]<\infty$ in a Banach space $E$ is defined as the following element of the bidual $E''$:
$$E'\ni g\mapsto \mathrm{E}[g(X)]\in\mathbb{R},$$
where $E'$ denotes the dual of $E$. A measure on $E$ is called Gaussian if its finite-dimensional distributions are Gaussian; i.e., given $g_1,\dots,g_n\in E'$, the distribution on $\mathbb{R}^n$ induced by $(g_1,\dots,g_n)$ is Gaussian. A random variable $G\in E$ is called Gaussian if its distribution is Gaussian.

THEOREM B.2 ([27, Theorem 3.1]). Let $(E,F)$ be of type II. Then every sequence $\{X_n\}_n\subset E$ of i.i.d. random vectors with $\mathrm{E}[\|X_i\|_E^2]<\infty$ and $\mathrm{E}[X_i]=0$ satisfies the central limit theorem in $E$; i.e., there exists a centered Gaussian random variable $G\in E$ such that
$$\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}X_i\Big)\xrightarrow{w}G\quad\text{in }E.$$
To establish the desired central limit theorem, we need to show that the Banach pair $(\mathcal{C}^{0,1}(\Omega),\mathcal{C}^{0,\beta}(\Omega))$ is of type II. To that end, we first state an alternative characterization and a sufficient condition for being of type II. Recall that a Rademacher sequence $\{\epsilon_n\}_n$ is a sequence of i.i.d. (scalar) random variables with $P(\epsilon_n=1)=P(\epsilon_n=-1)=\frac{1}{2}$.

PROPOSITION B.3 ([27, Theorems 3.1 and 2.3]). Let $(E,\|\cdot\|_E)$ and $(F,\|\cdot\|_F)$ be separable Banach spaces such that $E\subset F$. The following are equivalent:
(i) The pair $(E,F)$ is of type II,
(ii) for any sequence $\{x_n\}_n\subset E$ such that $\sum_{n=1}^{\infty}\|x_n\|_E^2<\infty$, and any Rademacher sequence $\{\epsilon_n\}_n$, the sum $\sum_{n=1}^{\infty}x_n\epsilon_n$ converges a.s. in $F$.
Moreover, the following is a sufficient condition for (i) and (ii):
(iii) for any sequence $\{x_n\}_n\subset E$ such that $\sum_{n=1}^{\infty}\|x_n\|_E^2<\infty$, and any i.i.d. sequence $\{\eta_n\}_n$ of standard Gaussian random variables, the sum $\sum_{n=1}^{\infty}x_n\eta_n$ converges a.s. in $F$.

We can now state and prove the desired central limit theorem in $\mathcal{C}^{0,\beta}(\Omega)$.
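Condition (iii) of Proposition B.3 lends itself to a quick numerical sanity check. The sketch below is not part of the argument; the functions $f_n$, the grid, and the exponents are hypothetical choices. It forms the Gaussian partial sums $G_N=\sum_{n\le N}\eta_n f_n$ for Lipschitz $f_n$ with $\sum_n\|f_n\|_{0,1}^2<\infty$ and observes that the tails shrink in a grid-based $\mathcal{C}^{0,\beta}$ norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Lipschitz functions f_n on [0,1]: f_n(x) = n^(-2.6) sin(n^2 x),
# so that Lip(f_n) is of order n^(-0.6) and sum_n ||f_n||_{0,1}^2 < infinity.
def f(n, x):
    return n ** -2.6 * np.sin(n ** 2 * x)

x = np.linspace(0.0, 1.0, 401)
beta = 0.5  # target Hoelder exponent beta < 1

def holder_norm(g, xs, beta):
    # sup-norm plus the beta-Hoelder seminorm, evaluated on the grid xs
    diffs = np.abs(g[:, None] - g[None, :])
    dists = np.abs(xs[:, None] - xs[None, :])
    mask = dists > 0
    return np.max(np.abs(g)) + np.max(diffs[mask] / dists[mask] ** beta)

eta = rng.standard_normal(2000)  # i.i.d. standard Gaussian coefficients eta_n
terms = np.array([eta[n - 1] * f(n, x) for n in range(1, 2001)])
G = np.cumsum(terms, axis=0)     # G[k] holds the partial sum over n <= k + 1

# The C^{0,beta} norm of the tail sum_{n > N} eta_n f_n shrinks as N grows,
# consistent with a.s. convergence of the random series in C^{0,beta}([0,1]).
tail = [holder_norm(G[-1] - G[N - 1], x, beta) for N in (10, 100, 1000)]
print(tail)
```

The grid-based seminorm only underestimates the true Hölder seminorm, but for this rapidly decaying choice of $f_n$ the qualitative decay of the tail norms is already visible.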
THEOREM B.4. Let $\beta\in[0,1)$ and let $\Omega$ be a compact subset of $\mathbb{R}^d$. Then the pair $(\mathcal{C}^{0,1}(\Omega),\mathcal{C}^{0,\beta}(\Omega))$ is of type II. As a consequence, any i.i.d. sequence of random functions $\{X_n\}_n$ in $\mathcal{C}^{0,1}(\Omega)$ with $\mathrm{E}[\|X_1\|_{0,1}^2]<\infty$ and $\mathrm{E}[X_1]=0$ satisfies the central limit theorem in $\mathcal{C}^{0,\beta}(\Omega)$; i.e., there exists a centered Gaussian random variable $G\in\mathcal{C}^{0,\beta}(\Omega)$ such that
$$\sqrt{n}\Big(\frac{1}{n}\sum_{i=1}^{n}X_i\Big)\xrightarrow{w}G\quad\text{in }\mathcal{C}^{0,\beta}(\Omega).$$
Note that, necessarily, $\mathrm{E}[G(x)G(x')]=\mathrm{E}[X_1(x)X_1(x')]$ for all $x,x'\in\Omega$.

PROOF. Let $\{f_n\}_n\subset\mathcal{C}^{0,1}(\Omega)$ be a sequence of functions with $\sum_{n=1}^{\infty}\|f_n\|_{0,1}^2<\infty$ and let $\{\eta_n\}_{n\in\mathbb{N}}$ be an i.i.d. sequence of (scalar) standard Gaussian random variables. In view of Theorem B.2 and Proposition B.3, our goal is to show that the partial sum $\sum_{n=1}^{N}f_n\eta_n$ converges a.s. in $\mathcal{C}^{0,\beta}(\Omega)$ to some limit $G$.

Note that for every $x\in\Omega$, the partial sum $G_N(x):=\sum_{n=1}^{N}\eta_n f_n(x)$ converges a.s. by Kolmogorov's Three-Series Theorem, hence we can define $G(x)$ as the a.s. limit $\sum_{n=1}^{\infty}\eta_n f_n(x)$. Thus $G$ is a scalar, centered Gaussian process indexed by $x\in\Omega$. To show that the a.s. convergence $G_N\to G$ also holds in the norm of $\mathcal{C}^{0,\beta}(\Omega)$, it suffices to show weak convergence in $\mathcal{C}^{0,\beta}(\Omega)$, because by the Lévy–Itô–Nisio theorem [28, Theorem 2.4], these two convergences are equivalent for the partial sums $G_N=\sum_{n=1}^{N}\eta_n f_n$ of the symmetric i.i.d. random variables $\{\eta_n f_n\}_n$. In view
of the pointwise convergence, weak convergence is implied by relative compactness (for the weak convergence topology); hence by Prokhorov's theorem [28, Theorem 2.1] it suffices to check that, given $\delta\in(0,1)$, there exists a compact $\mathcal{K}\subset\mathcal{C}^{0,\beta}(\Omega)$ such that
$$P(G_N\in\mathcal{K})\ge1-\delta\quad\text{for all }N\ge1.\tag{39}$$
Since $\mathcal{C}^{0,\beta'}(\Omega)$ is compactly embedded in $\mathcal{C}^{0,\beta}(\Omega)$ for $\beta'>\beta$ (e.g., [19, Lemma 6.33]), it is enough to show that
$$\sup_{N\ge1}\mathrm{E}[\|G_N\|_{0,\beta'}^{\gamma}]<\infty$$
for some $\beta'>\beta$ and $\gamma\ge1$. To that end, we shall use Kolmogorov's regularity theorem. Recall that two processes $X(x)$ and $\tilde{X}(x)$ indexed by $x\in\Omega$ are modifications of one another if $P\{\tilde{X}(x)=X(x)\}=1$ for all $x\in\Omega$, whereas they are called indistinguishable if $P\{\tilde{X}(x)=X(x)\text{ for all }x\in\Omega\}=1$. Kolmogorov's regularity theorem asserts that if $\{G(x)\}_{x\in\Omega}$ is a real-valued stochastic process satisfying
$$\mathrm{E}[|G(x)-G(y)|^{\gamma}]\le C\|x-y\|^{d+\epsilon}\tag{40}$$
for some $\gamma\ge1$ and $\epsilon>0$, then $G$ admits a modification $\tilde{G}$ with
$$\mathrm{E}\big[\|\tilde{G}\|_{0,\frac{\epsilon}{\gamma}}^{\gamma}\big]\le C_{d,\gamma,\epsilon,C},\tag{41}$$
where the constant $C_{d,\gamma,\epsilon,C}$ depends only on $d,\gamma,\epsilon$ and $C$. In particular, establishing (40) for $G=G_N$ with constants independent of $N$ implies (41) for a version $\tilde{G}_N$ with a right-hand side $C_{d,\gamma,\epsilon,C}$ independent of $N$. Moreover, as we already know that $x\mapsto G_N(x)$ is continuous, the modification $\tilde{G}_N$ is necessarily indistinguishable from $G_N$. In summary, establishing (40) for $G=G_N$ with constants independent of $N$ implies
$$\sup_{N\ge1}\mathrm{E}\big[\|G_N\|_{0,\frac{\epsilon}{\gamma}}^{\gamma}\big]<\infty,$$
which is the desired bound provided that $\beta':=\frac{\epsilon}{\gamma}\in(\beta,1]$. Fix $\alpha\ge1$ large enough such that, setting $\epsilon:=\alpha$ and $\gamma:=d+\alpha$, we have $\beta':=\frac{\epsilon}{\gamma}=\frac{\alpha}{d+\alpha}>\beta$. As explained above, it suffices to show
$$\mathrm{E}[|G_N(x)-G_N(y)|^{\gamma}]\le C\|x-y\|^{\gamma}\tag{42}$$
for some $C>0$ independent of $N$, as that will imply (39) and thus the claim.

By the Marcinkiewicz–Zygmund inequality (e.g., [5, Theorem 10.3.2]) there exists a constant $C>0$ independent of $N$ such that
$$\mathrm{E}[|G_N(x)-G_N(y)|^{\gamma}]=\mathrm{E}\Big[\Big|\sum_{n=1}^{N}\eta_n(f_n(x)-f_n(y))\Big|^{\gamma}\Big]\le C\,\mathrm{E}\Big[\Big(\sum_{n=1}^{N}\big(\eta_n(f_n(x)-f_n(y))\big)^2\Big)^{\frac{\gamma}{2}}\Big]\le C\|x-y\|^{\gamma}\,\mathrm{E}\Big[\Big(\sum_{n=1}^{N}\eta_n^2\|f_n\|_{0,1}^2\Big)^{\frac{\gamma}{2}}\Big].$$
We may assume without loss of generality that $s_N:=\sum_{n=1}^{N}\|f_n\|_{0,1}^2>0$.
Noting that $s_N\le s_\infty<\infty$ by the choice of $\{f_n\}_n$, we deduce
$$\mathrm{E}[|G_N(x)-G_N(y)|^{\gamma}]\le C(s_\infty)^{\frac{\gamma}{2}}\|x-y\|^{\gamma}\,\mathrm{E}\Big[\Big(\frac{1}{s_N}\sum_{n=1}^{N}\eta_n^2\|f_n\|_{0,1}^2\Big)^{\frac{\gamma}{2}}\Big].$$
Now Jensen's inequality yields
$$\mathrm{E}[|G_N(x)-G_N(y)|^{\gamma}]\le C(s_\infty)^{\frac{\gamma}{2}}\|x-y\|^{\gamma}\,\mathrm{E}\Big[\frac{1}{s_N}\sum_{n=1}^{N}|\eta_n|^{\gamma}\|f_n\|_{0,1}^2\Big]=C(s_\infty)^{\frac{\gamma}{2}}\|x-y\|^{\gamma}\frac{1}{s_N}\sum_{n=1}^{N}\mathrm{E}[|\eta_n|^{\gamma}]\|f_n\|_{0,1}^2=C\,\mathrm{E}[|\eta_1|^{\gamma}](s_\infty)^{\frac{\gamma}{2}}\|x-y\|^{\gamma},$$
which is the desired estimate (42).

APPENDIX C: OMITTED PROOFS

C.1. Proofs for Section 2. In this subsection we prove Proposition 2.3, summarizing background results that were used throughout the text. We first state a lemma recalling the analogue of the $c$-conjugate that is standard in optimal transport. For the setting of regularization by $f$-divergence, this notion was introduced by [11]. (Note, however, that some of the results in [11] are flawed because the conjugate of $\varphi$ was taken over $\mathbb{R}$ instead of $\mathbb{R}_+$, leading to signed measures instead of couplings.)

LEMMA C.1. Let $P,Q$ be probability measures on $\mathbb{R}^d$ with supports $\Omega,\Omega'$. Let $c\in\mathcal{C}(\Omega\times\Omega')$ be bounded and have modulus of continuity $\rho$. Given any bounded measurable function $g:\Omega'\to\mathbb{R}$, there exists a unique function $f:\Omega\to\mathbb{R}$ such that
$$\int \psi'(f(x)+g(y)-c(x,y))\,dQ(y)=1\quad\text{for all }x\in\Omega.\tag{43}$$
Moreover, $f$ is uniformly continuous with modulus $\rho$ and its oscillation is bounded as
$$\sup_{x\in\Omega}f(x)-\inf_{x\in\Omega}f(x)\le2\|c\|_\infty,\tag{44}$$
while $\inf_{x,y}\{f(x)+g(y)\}\le t_0+\|c\|_\infty$ and $\sup_{x,y}\{f(x)+g(y)\}\ge t_0-\|c\|_\infty$, where $t_0$ is defined in Assumption 2.1. Finally, $f$ solves the concave optimization
$$\sup_{f\in L^\infty(P)}\int f\oplus g-\psi(f\oplus g-c)\,d(P\otimes Q).$$
PROOF
OF LEMMA C.1. As $g$ and $c$ are bounded, $\lim_{s\to\infty}\psi'(s+g(y)-c(x,y))=\infty$ and $\lim_{s\to-\infty}\psi'(s+g(y)-c(x,y))<1$ by the properties of $\psi$, where the limits are uniform in $(x,y)$. As $\psi'$ is continuous, the intermediate value theorem yields the existence of $f$ solving (43). Let $x,\tilde{x}\in\Omega$ and assume without loss of generality that $f(\tilde{x})\le f(x)$. As $\psi'$ is nondecreasing, (43) yields
$$\int \psi'(f(x)+g(y)-c(x,y))\,dQ(y)=1=\int \psi'(f(\tilde{x})+g(y)-c(\tilde{x},y))\,dQ(y)\le\int \psi'(f(\tilde{x})+g(y)-c(x,y)+\rho(\|x-\tilde{x}\|))\,dQ(y).$$
As $\psi'$ is, in addition, strictly increasing on $[t_0-\delta,\infty)$, this implies $f(x)\le f(\tilde{x})+\rho(\|x-\tilde{x}\|)$, showing that $f$ has modulus of continuity $\rho$. The same argument, applied with $x=\tilde{x}$, also shows that $f(x)$ is uniquely determined by (43), and also that the oscillation of $f$ is bounded by that of $c$:
$$\sup_{x}f(x)-\inf_{x}f(x)\le\sup_{x,y}c(x,y)-\inf_{x,y}c(x,y)\le2\|c\|_\infty.\tag{45}$$
Furthermore, $\psi'(t_0)=1$ by Assumption 2.1. Hence (43) implies that
$$\inf_{x,y}\{f(x)+g(y)-c(x,y)\}\le t_0\le\sup_{x,y}\{f(x)+g(y)-c(x,y)\}.$$
Thus $\inf_{x,y}\{f(x)+g(y)\}\le t_0+\|c\|_\infty$ and $\sup_{x,y}\{f(x)+g(y)\}\ge t_0-\|c\|_\infty$.

PROOF OF PROPOSITION 2.3. (i) We first show ROT($P,Q$) $\ge$ DUAL($P,Q$), the so-called weak duality. The definition of $\psi$ implies that
$$\psi(f\oplus g-c)\ge\frac{d\pi}{d(P\otimes Q)}\cdot(f\oplus g-c)-\varphi\Big(\frac{d\pi}{d(P\otimes Q)}\Big)$$
for any $(f,g)\in L^\infty(P)\times L^\infty(Q)$ and any $\pi\in\Pi(P,Q)$ with $\pi\ll P\otimes Q$. Combining this with $\int f\oplus g\,d\pi=\int f\oplus g\,d(P\otimes Q)$ yields the claimed weak duality. The converse inequality will be shown in (v) below.

(ii) Primal existence follows directly from the weak compactness of $\Pi(P,Q)$ and the weak lower semi-continuity of the objective.

(iii) To show dual existence, let $(\tilde{f}_n,\tilde{g}_n)_n$ be a maximizing sequence for DUAL($P,Q$). Define $f_n$ as the "conjugate" of $\tilde{g}_n$ as provided by Lemma C.1, and similarly let $g_n$ be the conjugate of $f_n$ (as provided by the symmetric analogue of Lemma C.1 with interchanged roles of the marginals). By Lemma C.1, we see that $(f_n,g_n)$ improves upon $(\tilde{f}_n,\tilde{g}_n)$, so that it is again a maximizing sequence. Moreover, $f_n$ and $g_n$ are $\rho$-continuous. And as (44) holds for both $f$ and $g$, the inequalities below (44) yield the uniform bound $\|f_n\oplus g_n\|_\infty\le t_0+5\|c\|_\infty$.
We can shift $(f_n,g_n)$ by a constant such that $f_n(x_0)=0$ at some reference point $x_0\in\Omega$. It follows from Lemma C.1 that $f_n$ and $g_n$ are uniformly bounded. The Arzelà–Ascoli theorem then yields a subsequential uniform limit $(f^*,g^*)$. Using the uniform convergence, it is easy to see that $(f^*,g^*)$ maximizes DUAL($P,Q$).

(iv) If $(f^*,g^*)$ maximizes DUAL($P,Q$), taking the directional derivative in an arbitrary direction $(f,g)\in L^\infty(P)\times L^\infty(Q)$, as well as its negative $(-f,-g)$, yields the first-order condition (12). Conversely, let $(f^*,g^*)$ solve (12). Note that for any fixed $(x,y)$, the function $\mathbb{R}\ni s\mapsto s-\psi(s-c(x,y))$ is concave with gradient $1-\psi'(s-c(x,y))$. Given any functions $(f,g)\in L^\infty(P)\times L^\infty(Q)$, it follows that for $(x,y)\in\Omega\times\Omega'$,
$$f(x)+g(y)-\psi(f(x)+g(y)-c(x,y))\le f^*(x)+g^*(y)-\psi(f^*(x)+g^*(y)-c(x,y))$$
$$+\{1-\psi'(f^*(x)+g^*(y)-c(x,y))\}(f(x)-f^*(x))+\{1-\psi'(f^*(x)+g^*(y)-c(x,y))\}(g(y)-g^*(y)).\tag{46}$$
Integrating (46) with respect to $P\otimes Q$, the final two terms vanish due to (12), which shows that $(f^*,g^*)$ is an optimizer of (11).

(v) We have $\psi'\ge0$ as $\psi$ is increasing, hence $d\pi:=\psi'(f^*\oplus g^*-c)\,d(P\otimes Q)$ is a nonnegative measure. The first equation of (12) shows that its first marginal is $P$ and the second equation of (12) shows that its second marginal is $Q$. Hence $\pi\in\Pi(P,Q)$. Using the equality $\varphi(\psi'(z))=\psi'(z)z-\psi(z)$ with $z:=f^*\oplus g^*-c$, integrating, and rearranging, we find
$$\int f^*\oplus g^*-\psi(f^*\oplus g^*-c)\,d(P\otimes Q)=\int c\,d\pi+\int\varphi\Big(\frac{d\pi}{d(P\otimes Q)}\Big)\,d(P\otimes Q).$$
In view of the weak duality ROT($P,Q$) $\ge$ DUAL($P,Q$) shown at the beginning of the proof, this shows that $\pi$ is an optimizer of ROT($P,Q$) and also completes the proof of the strong duality ROT($P,Q$) = DUAL($P,Q$) stated in (i).
(vi) Given any solution $(f^*,g^*)$ of (12), applying Lemma C.1 twice yields versions that solve (13). If $f^*,g^*$ solve (13), then they are conjugates of one another, hence Lemma C.1 yields the modulus of continuity. As in the proof of (iii), the uniform bound follows from the inequalities below (44), using that (44) holds for both $f$ and $g$.

(vii) Let $C$ be such that $f^*\oplus g^*-c\le C$. By Assumption 2.1 there are $t_0>0$ and $\delta,\alpha>0$ such that $\psi'(t_0)=1$, and $\psi'(t)<1$ for $t\le t_0-\delta$, and $\psi''(t)\ge\alpha$ for $t\in[t_0-\delta,C]$. Let $x\in\Omega$. Consider the sets
$$A=\{y\in\Omega':f^*(x)+g^*(y)-c(x,y)<t_0-\delta\},\qquad B=\{y\in\Omega':t_0-\delta\le f^*(x)+g^*(y)-c(x,y)\le C\}$$
and note that $B=\Omega'\setminus A$. Set $p:=Q(A)$. As $\psi'$ is nondecreasing, (13) implies
$$1=\int\psi'(f^*(\cdot)+g^*(y)-c(\cdot,y))\,dQ(y)\le p\,\psi'(t_0-\delta)+(1-p)\,\psi'(C).$$
This yields the upper bound $p\le\frac{\psi'(C)-1}{\psi'(C)-\psi'(t_0-\delta)}<1$, which is uniform in $x$. As $\psi''\ge0$, we deduce the uniform lower bound
$$\int\psi''(f^*(\cdot)+g^*(y)-c(\cdot,y))\,dQ(y)\ge\alpha\,Q(B)=(1-p)\alpha.$$
The second claim is shown analogously.

C.2. Proofs for Section 4. In this subsection we prove the auxiliary Lemma 4.2 on the sections of the set $S=\{(x,y)\in\Omega\times\Omega':\psi''(\xi^*(x,y))>0\}$.

PROOF OF LEMMA 4.2. Recall from Proposition 2.3 (vii) that $S_Q(x)\neq\emptyset$ and $S_P(y)\neq\emptyset$ for all $(x,y)\in\Omega\times\Omega'$. This implies that $x\in O(x):=\cup_{y\in S_Q(x)}S_P(y)$ for all $x\in\Omega$. By continuity of $\zeta:=\psi''\circ\xi^*$, each set $S_P(y)$, and then also the union $O(x)$, is relatively open in $\Omega$. That is,
$$\text{for each }x\in\Omega\text{ there exists }r>0\text{ such that }(x+rB)\cap\Omega\subset O(x).\tag{47}$$
Consider the set $A=\{(x,z)\in\Omega\times\Omega:z\in\Omega\setminus O(x)\}$. If $z\in O(x)$, then there exists $y$ such that $\zeta(x,y)>0$ and $\zeta(z,y)>0$. By continuity of $\zeta$, for $(\tilde{x},\tilde{z})$ sufficiently close to $(x,z)$, we still have $\zeta(\tilde{x},y)>0$ and $\zeta(\tilde{z},y)>0$, showing that $\tilde{z}\in O(\tilde{x})$. This proves that $\{(x,z):z\in O(x)\}$ is open and hence that $A$ is closed. Consider also, for each $r>0$, the closed subset $A_r=A\cap\{(x,z)\in\Omega\times\Omega:z\in(x+rB)\}$. In view of the definition of $A$, the fact (47) translates to $\cap_{r>0}A_r=\emptyset$. Now the finite intersection property of the compact set $\Omega\times\Omega$ yields that $A_r=\emptyset$ for some $r>0$, which was the claim.
arXiv:2505.04884v1 [stat.ME] 8 May 2025

Model selection for unit-root time series with many predictors

SHUO-CHIEH HUANG1,a, CHING-KANG ING2,b and RUEY S. TSAY3,c
1Department of Statistics, Rutgers University, a sh1976@stat.rutgers.edu
2Institute of Statistics, National Tsing Hua University, b cking@stat.nthu.edu.tw
3Booth School of Business, University of Chicago, c ruey.tsay@chicagobooth.edu

This paper studies model selection for general unit-root time series, including the case with many exogenous predictors. We propose FHTD, a new model selection algorithm that leverages forward stepwise regression (FSR), a high-dimensional information criterion (HDIC), a backward elimination method based on HDIC, and a data-driven thresholding (DDT) approach. Under some mild assumptions that allow for unknown locations and multiplicities of the characteristic roots on the unit circle of the time series and conditional heteroscedasticity in the predictors and errors, we establish the sure screening property of FSR and the selection consistency of FHTD. Central to our analysis are two key technical contributions, a new functional central limit theorem for multivariate linear processes and a uniform lower bound for the minimum eigenvalue of the sample covariance matrices, both of which are of independent interest. Simulation results corroborate the theoretical properties and show the superior performance of FHTD in model selection. We showcase the application of the proposed FHTD by modeling U.S. monthly housing starts and unemployment data.

Keywords: ARX model; Conditional heteroscedasticity; High-dimensional information criterion; Nonstationarity; Forward selection

1. Introduction

With the widespread availability of large-scale fine-grained datasets, researchers analyzing time series data now have a plethora of predictors available for constructing informative and interpretable models.
Regularization techniques (Candes and Tao, 2007, Tibshirani, 1996, Zhang, 2010, Zheng, Fan and Lv, 2014, Zou, 2006), which select a few relevant features in a sparse model for prediction, have thus been adapted from the independent framework to time series data (Han and Tsay, 2020, Medeiros and Mendes, 2016). In addition, greedy forward selection algorithms (Bühlmann, 2006, Fan and Lv, 2008, Ing and Lai, 2011, Wang, 2009) have also proven useful for a similar task involving dependent data (Ing, 2020). However, the aforementioned methods are generally not applicable to unit-root nonstationary time series, which are prevalent in economics, finance, and environmental sciences. To apply these methods to unit-root time series, one must carefully transform the series under study into stationary ones. This step often involves multiple intricate unit-root tests since the underlying unit-root structure is typically unknown. In addition, it becomes even more challenging to take the correct difference transforms when the data are driven by complex unit roots, such as those exhibiting persistent cyclic behavior. Determining the order of integration and the frequency at which the series is integrated is far from straightforward and is sometimes sensitive to model specifications. Yet, persistent cyclic (or seasonal) time series are widely encountered in applications, such as the unemployment rate (Bierens, 2001), spot exchange rates (Al-Zoubi, 2008), entrepreneurship series (Faria, Cuestas and Gil-Alana, 2009), firms' capital structure (Al-Zoubi, O'Sullivan and Alwathnani, 2018), sunspot numbers (Gil-Alana, 2009, Maddanu and Proietti, 2022), oil prices (Gil-Alana and Gupta, 2014), tourist arrivals (del Barrio Castro, Cubadda and Osborn, 2022), and CO2 concentrations (Proietti and Maddanu, 2024), to name just a few examples.
https://arxiv.org/abs/2505.04884v1
In this paper, we study model selection for an autoregressive model with exogenous variables, known as the ARX model, when the dependent variable contains general unit roots and the number of exogenous variables is large. Specifically, the model employed is
$$(1-B)^a(1+B)^b\prod_{k=1}^{l}(1-2\cos\vartheta_k B+B^2)^{d_k}\,\psi_n(B)\,y_{t,n}=\sum_{j=1}^{p_n}\sum_{l=1}^{r_j^{(n)}}\beta_{l,n}^{(j)}x_{t-l,j}^{(n)}+\epsilon_{t,n},\qquad t=1,\dots,n,\tag{1}$$
where $n$ is the sample size, $B$ denotes the back-shift operator, $a,b$, and $d_k$ are nonnegative integers indicating the order of integration with respect to different (conjugate) unit roots, $l$ is the number of complex conjugate unit-root pairs, $\vartheta_k\in(0,\pi)$ denotes the location of a complex unit root, and $\psi_n(z)=1+\sum_{s=1}^{\iota_n}a_{s,n}z^s\neq0$ for all $|z|\le1$, with both $a_{s,n}\in\mathbb{R}$ and $\iota_n\in\mathbb{N}\cup\{0\}$ unknown. In particular, we assume no prior knowledge about the unit roots in the model, so $a,b,l,d_k$ and $\vartheta_k$ are all unknown. In model (1), $\{\epsilon_{t,n}\}$ denotes a sequence of random errors with mean zero, and $\{x_{t-l,j}^{(n)}\}$ and $\{\beta_{l,n}^{(j)}\}$, for $1\le l\le r_j^{(n)}$, $1\le j\le p_n$, are observable exogenous variables and their respective unknown coefficients. Let $d=a+b+2\sum_{k=1}^{l}d_k$. The number of AR lags in (1), $m_n=\iota_n+d$, is assumed to be smaller than $n$, whereas the number of exogenous predictors, $p_n^*=\sum_{j=1}^{p_n}r_j^{(n)}$, can be much greater than $n$. We adopt $y_{t,n}=0$ for $t\le0$ as the initial conditions, which are widely used in the literature for unit-root series (e.g., Chan and Wei, 1988). Last but not least, we allow $\{x_{t,j}^{(n)},1\le t\le n\}$, for $1\le j\le p_n$, and $\{\epsilon_{t,n}\}$ to be conditionally heteroscedastic.

Due to the practical importance of the ARX model (1), numerous authors have investigated its model selection for the special cases when $d=0$ or $p_n^*=0$. When $d=0$, $y_{t,n}$ is stationary. In this case, under some strong sparsity conditions, the LASSO (Tibshirani, 1996) and the adaptive LASSO (Zou, 2006) have been shown to achieve model selection consistency (Han and Tsay, 2020, Medeiros and Mendes, 2016).
In addition, Ing (2020) proved that the orthogonal greedy algorithm (OGA), used in conjunction with a high-dimensional information criterion (HDIC), is rate-optimal adaptive to unknown sparsity patterns. When $p_n^*=0$, model (1) reduces to a nonstationary AR model with unit roots. In this case, traditional information criteria such as AIC, BIC, and the Fisher information criterion (FIC) can be employed to perform model selection (Ing, Sin and Yu, 2012, Tsay, 1984, Wei, 1992). More recently, Kock (2016) applied the adaptive LASSO to the Dickey-Fuller regression of fixed AR order under the special case of a single unit root (i.e., $a=1$, $b=d_1=\dots=d_l=0$, and $\iota_n$ is a fixed positive integer).

Although there are methods available for the special cases when $d=0$ or $p_n^*=0$, applying them to model (1) in a general context remains a challenging task. As pointed out earlier, the existence of unknown $\vartheta_k$ makes it difficult to transform $\{y_{t,n}\}$ into an asymptotically stationary time series. Even worse, when applied to nonstationary time series, LASSO performs poorly due to its internal standardization of unit-root variables, which can "wash out the dependence of the stationary part" (Han and Tsay, 2020). In fact, due to the near-perfect correlation of some (or all) lagged variables in model (1) when $d>0$, the strong irrepresentable condition, which is almost necessary and sufficient for LASSO to achieve selection consistency in high-dimensional regression models (Zhao and Yu, 2006), is no longer valid. This issue also undermines the effectiveness of
other correlation-based feature selection methods, such as $L_2$-Boosting and OGA. Indeed, in Sections 3.2 and 3.3, we prove that both LASSO and OGA can fail to achieve variable selection consistency in the presence of unit roots. While AIC, BIC, and FIC are reliable methods for selecting the AR order when $d>0$ and $p_n^*=0$, they involve subset selection and are therefore not suitable for selecting exogenous variables when $p_n^*$ is large, especially when $p_n^*\gg n$.

We address these difficulties by combining the strengths of the least squares method in unit-root AR models with forward stepwise regression (FSR, defined in Section 2) in high-dimensional regression models, and work directly with the observed nonstationary series. Our procedure starts by rewriting (1) as
$$y_t=\sum_{i=1}^{q_n}\alpha_i y_{t-i}+\sum_{j=1}^{p_n}\sum_{l=1}^{r_j^{(n)}}\beta_l^{(j)}x_{t-l,j}+\epsilon_t,\tag{2}$$
where $q_n<n$ is a prescribed upper bound for $m_n$, $1-\sum_{i=1}^{q_n}\alpha_i z^i=(1-z)^a(1+z)^b\prod_{k=1}^{l}(1-2\cos\vartheta_k z+z^2)^{d_k}\psi_n(z)$, and the dependence of $y_t,\alpha_i,\beta_l^{(j)},x_{t-l,j}$, and $\epsilon_t$ on $n$ is suppressed for simplicity of notation. Then, FSR is used to sequentially choose the exogenous predictors after $y_{t-1},\dots,y_{t-q_n}$ are coerced into the model. By fitting an AR($q_n$) model by least squares in advance, this approach handles the nonstationarity of $\{y_t\}$ without recourse to any tests for (complex) unit roots, thereby facilitating the implementation of FSR without being encumbered by the highly correlated lagged dependent variables. Next, we use HDIC to guide the stopping rule of FSR, and use a backward elimination method also based on HDIC, which we call Trim, to remove redundant exogenous predictors that have been previously included by FSR. Finally, we introduce a data-driven thresholding (DDT) method to weed out irrelevant lagged dependent variables. Throughout the paper, the combined model selection procedure is called the FHTD algorithm.
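The rewriting of (1) as (2) amounts to expanding the characteristic polynomial into $1-\sum_i\alpha_i z^i$. The sketch below (with hypothetical orders $a,b,d_k$, root location $\vartheta_1$, and stationary part $\psi_n$, none taken from the paper) performs the expansion and simulates the resulting AR recursion under the zero initial conditions $y_t=0$ for $t\le0$:

```python
import numpy as np

# Expand (1 - z)^a (1 + z)^b prod_k (1 - 2 cos(theta_k) z + z^2)^{d_k} psi(z)
# into 1 - sum_i alpha_i z^i, i.e., the AR coefficients of model (2).
# The orders, the root location, and psi below are hypothetical choices.
a, b = 1, 0
thetas, ds = [np.pi / 3], [1]       # one complex conjugate unit-root pair
psi = np.array([1.0, 0.4])          # psi(z) = 1 + 0.4 z, no zeros in |z| <= 1

polymul = np.polynomial.polynomial.polymul
poly = np.array([1.0])              # coefficients in increasing powers of z
for _ in range(a):
    poly = polymul(poly, [1.0, -1.0])
for _ in range(b):
    poly = polymul(poly, [1.0, 1.0])
for theta, d_k in zip(thetas, ds):
    for _ in range(d_k):
        poly = polymul(poly, [1.0, -2.0 * np.cos(theta), 1.0])
poly = polymul(poly, psi)

alpha = -poly[1:]                   # poly = (1, -alpha_1, ..., -alpha_{m_n})

# Simulate the AR part of (2) with zero initial conditions y_t = 0 for t <= 0.
rng = np.random.default_rng(1)
n, m = 200, len(alpha)
eps = rng.standard_normal(n)
y = np.zeros(n + m)
for t in range(m, n + m):
    y[t] = alpha @ y[t - m:t][::-1] + eps[t - m]
print(alpha)
```

Here $m_n=\iota_n+d=1+3=4$, and because of the $(1-z)$ factor the polynomial vanishes at $z=1$, so the $\alpha_i$ sum to one; the lagged variables are therefore nearly perfectly correlated, which is exactly the difficulty discussed above.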
Under a strong sparsity condition, which assumes that the number of relevant predictors in model (2) is smaller than $n$, we establish the sure screening property of FSR and the selection consistency of FHTD. Since complex unit roots, conditional heteroscedasticity, and high dimensionality are allowed simultaneously, this is one of the most comprehensive results to date on model selection consistency established for the ARX model.

The rest of the paper is organized as follows. We detail the FSR and FHTD algorithms in Section 2. Relevant theoretical properties of these methods are given in Section 3; see Theorems 3.1–3.3. The finite-sample performance of the proposed methods is illustrated using simulations and two U.S. monthly macroeconomic datasets in Sections 4 and 5, respectively. Section 6 concludes. We have moved the proofs and auxiliary results to the supplementary material. Nevertheless, it is noteworthy that, to tackle the nonstationary series, we derive a novel functional central limit theorem (FCLT) for linear processes driven by $\{\sum_{j=1}^{p_n}\sum_{l=1}^{r_j^{(n)}}\beta_l^{(j)}x_{t-l,j}+\epsilon_t\}$ and a uniform lower bound for the minimum eigenvalue of the sample covariance matrices associated with model (2). These theoretical foundations, crucial for Theorems 3.1–3.3, can be found in Appendix A.

The following notation is used throughout the paper. For a matrix $\mathbf{A}$, $\lambda_{\min}(\mathbf{A})$, $\lambda_{\max}(\mathbf{A})$, $\|\mathbf{A}\|$, and $\mathbf{A}^\top$ denote its minimum eigenvalue, maximum eigenvalue, operator norm, and transpose, respectively. For a set $J$, $\sharp(J)$ is its cardinality. For two sequences
of positive numbers, $\{a_n\}$ and $\{b_n\}$, $a_n\asymp b_n$ means $L<a_n/b_n<U$ for some $0<L\le U<\infty$. For an event $E$, its complement and indicator function are denoted by $E^c$ and $\mathbb{I}_E$, respectively. For $k\in\{1,2,\dots\}$, $[k]=\{1,2,\dots,k\}$. For $r\in\mathbb{R}$, $\lfloor r\rfloor$ is the largest integer $\le r$. For two real numbers $a$ and $b$, $a\vee b=\max\{a,b\}$ and $a\wedge b=\min\{a,b\}$. For a vector $\mathbf{v}$, $\|\mathbf{v}\|$ denotes its Euclidean norm. For a random variable $X$, $\|X\|_q=(\mathrm{E}|X|^q)^{1/q}$. Generic absolute constants are denoted by $C$, whose value may vary at different places. In what follows, we use Sx to refer to Sections in the supplementary material.

2. The FHTD algorithm

Let $\mathbf{y}_n=(y_n,y_{n-1},\dots,y_{\bar{r}_n+1})^\top$ and $\boldsymbol{o}_i=(y_{n-i},y_{n-i-1},\dots,y_{\bar{r}_n-i+1})^\top$, where $i=1,2,\dots,q_n$ and $\bar{r}_n=\{\max_{1\le j\le p_n}r_j^{(n)}\}\vee q_n$. Define $\boldsymbol{x}_l^{(j)}=(x_{n-l,j},x_{n-l-1,j},\dots,x_{\bar{r}_n-l+1,j})^\top$, where $l=1,2,\dots,r_j^{(n)}$ and $j=1,2,\dots,p_n$. Then, it follows from (2) that $\mathbf{y}_n=\mathbf{O}_n\boldsymbol{\alpha}+\mathbf{X}_n\boldsymbol{\beta}+\boldsymbol{\varepsilon}_n:=\boldsymbol{\mu}_n+\boldsymbol{\varepsilon}_n$, where $\mathbf{O}_n=(\boldsymbol{o}_1,\dots,\boldsymbol{o}_{q_n})$, $\mathbf{X}_n=(\boldsymbol{x}_1^{(1)},\dots,\boldsymbol{x}_{r_1^{(n)}}^{(1)},\dots,\boldsymbol{x}_1^{(p_n)},\dots,\boldsymbol{x}_{r_{p_n}^{(n)}}^{(p_n)})$, $\boldsymbol{\alpha}=(\alpha_1,\dots,\alpha_{q_n})^\top$, $\boldsymbol{\varepsilon}_n=(\epsilon_n,\dots,\epsilon_{\bar{r}_n+1})^\top$, and $\boldsymbol{\beta}=(\beta_1^{(1)},\dots,\beta_{r_1^{(n)}}^{(1)},\dots,\beta_1^{(p_n)},\dots,\beta_{r_{p_n}^{(n)}}^{(p_n)})^\top$. Note that $\boldsymbol{\mu}_n$ can be expressed as $(\mu_n,\dots,\mu_{\bar{r}_n+1})^\top$, with $\mu_t=\sum_{i=1}^{q_n}\alpha_i y_{t-i}+v_{t,n}$ and
$$v_{t,n}=\sum_{j=1}^{p_n}\sum_{l=1}^{r_j^{(n)}}\beta_l^{(j)}x_{t-l,j}.\tag{3}$$
We index the candidate variable $x_{t-l,j}$ by the tuple $(j,l)$. FSR is an iterative algorithm that greedily chooses variables from $\bar{J}:=\{(j,l):j\in[p_n],l\in[r_j^{(n)}]\}$ after $y_{t-1},\dots,y_{t-q_n}$ are included in the regression model. Specifically, the algorithm begins with $\hat{J}_0=\emptyset$ and generates $\hat{J}_m\subset\bar{J}$ via $\hat{J}_m=\hat{J}_{m-1}\cup\{(\hat{j}_m,\hat{l}_m)\}$, where $m\ge1$ and
$$(\hat{j}_m,\hat{l}_m)=\arg\max_{(j,l)\in\bar{J}\setminus\hat{J}_{m-1}}\frac{n^{-1}\big|\mathbf{y}_n^\top(\mathbf{I}-\mathbf{H}_{[q_n]\oplus\hat{J}_{m-1}})\boldsymbol{x}_l^{(j)}\big|}{\big(n^{-1}\boldsymbol{x}_l^{(j)\top}(\mathbf{I}-\mathbf{H}_{[q_n]\oplus\hat{J}_{m-1}})\boldsymbol{x}_l^{(j)}\big)^{1/2}},\tag{4}$$
where $\mathbf{H}_{Q\oplus J}$ is the orthogonal projection matrix associated with the linear space spanned by $\{\boldsymbol{o}_l:l\in Q\subseteq[q_n]\}\cup\{\boldsymbol{x}_l^{(j)}:(j,l)\in J\subseteq\bar{J}\}$. In the sequel, we also use $Q\oplus J$ to denote a candidate model consisting of predictor variables $\{y_{t-i},i\in Q\}$ and $\{x_{t-l,j},(j,l)\in J\}$. When $m$ reaches a prescribed upper bound $K_n\le p_n^*$, the algorithm stops and outputs the index set $\hat{J}_{K_n}$.
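One step of criterion (4) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the dimensions, the coefficients, and the choice of the relevant column (column 3) are hypothetical. The forced-in lag block and any previously selected columns are projected out, and the candidate with the largest normalized residual correlation is chosen.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q_n, p = 120, 2, 8

# Hypothetical data: O holds the forced-in lagged dependent variables and X
# the candidate exogenous columns; column 3 is the truly relevant one.
O = rng.standard_normal((n, q_n))
X = rng.standard_normal((n, p))
y = O @ np.array([0.5, -0.3]) + 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)

def residualize(M, v):
    # (I - H_M) v: the component of v orthogonal to the column space of M
    coef, *_ = np.linalg.lstsq(M, v, rcond=None)
    return v - M @ coef

def fsr_step(y, O, X, selected):
    # one forward step in the spirit of criterion (4)
    M = np.column_stack([O] + [X[:, j] for j in selected]) if selected else O
    ry = residualize(M, y)
    best, best_val = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        rx = residualize(M, X[:, j])
        val = abs(ry @ rx) / n / np.sqrt(rx @ rx / n)
        if val > best_val:
            best, best_val = j, val
    return best

first = fsr_step(y, O, X, [])
print(first)  # selects the relevant column 3
```

Since the projection matrix is symmetric and idempotent, $\mathbf{y}^\top(\mathbf{I}-\mathbf{H})\boldsymbol{x}$ equals the inner product of the two residualized vectors, which the sketch exploits.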
Because the effects of the unit roots have been taken care of by employing the lagged dependent variables $y_{t-1},\dots,y_{t-q_n}$ beforehand, the algorithm is expected to exhibit reliable performance in including the set of relevant exogenous variables $\mathcal{J}_n=\{(j,l):\beta_l^{(j)}\neq0,l\in[r_j^{(n)}],j\in[p_n]\}$. However, $[q_n]\oplus\hat{J}_{K_n}$ may contain some irrelevant variables, in particular when $q_n$ or $K_n$ is large compared to $\sharp(\mathcal{Q}_n)$ or $\sharp(\mathcal{J}_n)$, where $\mathcal{Q}_n=\{q:\alpha_q\neq0,q\in[q_n]\}$ is the set of relevant lagged dependent variables. To alleviate this overfitting problem with FSR, we propose eliminating the irrelevant exogenous variables in $\hat{J}_{K_n}$ using HDIC and Trim, followed by DDT to remove the redundant lagged dependent variables in $[q_n]$.

Given a candidate model $Q\oplus J$, its HDIC value is given by
$$\mathrm{HDIC}(Q\oplus J)=n\log\hat{\sigma}^2_{Q\oplus J}+[\sharp(J)+\sharp(Q)]\,w_{n,p_n},\tag{5}$$
where $\hat{\sigma}^2_{Q\oplus J}=n^{-1}\mathbf{y}_n^\top(\mathbf{I}-\mathbf{H}_{Q\oplus J})\mathbf{y}_n$ and $w_{n,p_n}$, the penalty for the model complexity $\sharp(J)+\sharp(Q)$, depends on the sample size $n$ as well as the number of candidate exogenous variables $p_n^*$. Our approach is to first find a "promising" subset $\hat{J}_{\hat{k}_n}$ of $\hat{J}_{K_n}$ that minimizes the HDIC values along the FSR path $\{\hat{J}_1,\dots,\hat{J}_{K_n}\}$, where
$$\hat{k}_n=\arg\min_{1\le m\le K_n}\mathrm{HDIC}([q_n]\oplus\hat{J}_m)\tag{6}$$
is an early stopping rule. We then refine $\hat{J}_{\hat{k}_n}$ by comparing the HDIC values of $[q_n]\oplus\hat{J}_{\hat{k}_n}$ and $[q_n]\oplus(\hat{J}_{\hat{k}_n}\setminus\{(\hat{j}_i,\hat{l}_i)\})$, $1\le i\le\hat{k}_n$, to judge whether the marginal contribution of $(\hat{j}_i,\hat{l}_i)$ is sufficiently significant to warrant its inclusion in the final model. The resultant refinement of $\hat{J}_{\hat{k}_n}$ is
$$\hat{\mathcal{J}}_n=\{(\hat{j}_i,\hat{l}_i):1\le i\le\hat{k}_n,\ \mathrm{HDIC}([q_n]\oplus(\hat{J}_{\hat{k}_n}\setminus\{(\hat{j}_i,\hat{l}_i)\}))>\mathrm{HDIC}([q_n]\oplus\hat{J}_{\hat{k}_n})\},\tag{7}$$
and the method is called "Trim."

For $J\subseteq\bar{J}$, define $\mathbf{x}_t(J)=(x_{t-l,j}:(j,l)\in J)^\top$ and $\mathbf{w}_t(J)=(y_{t-1},\dots,y_{t-q_n},\mathbf{x}_t^\top(J))^\top$. The least squares estimates of the regression coefficients for the model $[q_n]\oplus\hat{\mathcal{J}}_n$ are
$$(\hat{\alpha}_1(\hat{\mathcal{J}}_n),\dots,\hat{\alpha}_{q_n}(\hat{\mathcal{J}}_n),\hat{\boldsymbol{\beta}}^\top(\hat{\mathcal{J}}_n))^\top=\Big(\sum_{t=\bar{r}_n+1}^{n}\mathbf{w}_t(\hat{\mathcal{J}}_n)\mathbf{w}_t^\top(\hat{\mathcal{J}}_n)\Big)^{-1}\sum_{t=\bar{r}_n+1}^{n}\mathbf{w}_t(\hat{\mathcal{J}}_n)y_t.$$
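The HDIC value (5), the stopping rule (6), and Trim (7) can be sketched as follows on synthetic data. The penalty $w_{n,p_n}$, the FSR path, and the thresholding value used for DDT below are hypothetical placeholders, not the prescriptions of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, q_n = 150, 2

# Hypothetical setting: two forced-in lag columns and an FSR path [4, 1, 7]
# among ten exogenous candidates, of which only columns 4 and 1 are relevant.
O = rng.standard_normal((n, q_n))
X = rng.standard_normal((n, 10))
y = 0.8 * O[:, 0] + 1.5 * X[:, 4] - 1.2 * X[:, 1] + 0.3 * rng.standard_normal(n)
path = [4, 1, 7]
w = np.log(n) * np.log(X.shape[1])   # hypothetical penalty w_{n, p_n}

def hdic(J):
    # HDIC(Q + J) = n log sigma_hat^2 + (#J + #Q) w with Q = [q_n], cf. (5)
    M = np.column_stack([O] + [X[:, j] for j in J]) if J else O
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    sigma2 = np.mean((y - M @ coef) ** 2)
    return n * np.log(sigma2) + (len(J) + q_n) * w

# Stopping rule (6): the best model along the FSR path.
k_hat = min(range(1, len(path) + 1), key=lambda m: hdic(path[:m]))
J_hat = path[:k_hat]

# Trim (7): keep a variable only if dropping it strictly worsens HDIC.
J_trim = [j for j in J_hat if hdic([i for i in J_hat if i != j]) > hdic(J_hat)]

# DDT (8) with a hypothetical threshold H_hat (not the paper's prescription).
M = np.column_stack([O] + [X[:, j] for j in J_trim])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
H_hat = np.sqrt(np.log(n) / n)
Q_hat = [q + 1 for q in range(q_n) if abs(coef[q]) >= H_hat]
print(k_hat, J_trim, Q_hat)
```

In this toy example the stopping rule halts at the two relevant exogenous columns, Trim retains both, and the thresholding step keeps only the first lag, whose coefficient is nonzero in the data-generating design.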
With the estimated AR coefficients $\hat{\alpha}_i(\hat{\mathcal{J}}_n)$, $1\le i\le q_n$, we suggest using a data-driven thresholding (DDT) method,
$$\hat{\mathcal{Q}}_n=\{1\le q\le q_n:|\hat{\alpha}_q(\hat{\mathcal{J}}_n)|\ge\hat{H}_n\},\tag{8}$$
to weed out redundant AR variables, where $\hat{H}_n$ is a data-driven thresholding value depending on $\hat{\mathcal{J}}_n$ and $q_n$; see Section 3.3. Note that identifying $\hat{\mathcal{Q}}_n$ is crucial for accurate prediction because an overfitted model tends to
have a larger mean squared prediction error, especially in tackling nonstationary time series, where the cost of overfitting is more prominent (see Example 3.3). The final estimated model is $\hat{N}_n=\hat{\mathcal{Q}}_n\oplus\hat{\mathcal{J}}_n$. The above procedure, which combines FSR, HDIC, Trim, and DDT, is referred to as FHTD.

FHTD is related to a number of algorithms in the literature. First, Chudik, Kapetanios and Pesaran (2018) have also employed a forward selection method similar to (4) in the One Covariate at a Time Multiple Testing (OCMT) procedure, which can control the false positive rate and the false discovery rate in high-dimensional linear regression models. However, their analysis of OCMT does not account for the scenario in which the pre-selected covariates exhibit near-perfect correlations. Note also that (4) simplifies to the forward regression algorithm in Wang (2009) when $[q_n]$ becomes an empty set, and it further simplifies to the OGA studied in Ing and Lai (2011) if the orthogonal projection matrix in the denominator is removed. Second, HDIC becomes BIC if $w_{n,p_n}=\log n$ and AIC if $w_{n,p_n}=2$. However, failing to account for potential spuriousness of the greedily chosen variables among $p_n^*$ candidate variables, AIC and BIC may result in serious overfitting in the case of $p_n^*\gg n$. Third, in the context of independent observations, (5) and (7) have been employed by Ing and Lai (2011) to eliminate the redundant variables introduced by OGA for high-dimensional regression models. This combined technique is called OGA+HDIC+Trim by the authors. Indeed, under an appropriate "beta-min" condition, it can be derived from an argument in Ing (2020) that, with probability approaching 1, OGA+HDIC+Trim is capable of directly selecting $\mathcal{J}_n$ and $\mathcal{Q}_n$ in stationary ARX models without having to fit an AR($q_n$) model beforehand. However, the effectiveness of OGA+HDIC+Trim in identifying $\mathcal{J}_n$ and $\mathcal{Q}_n$ is significantly compromised under model (1); see Examples 4.1 and 4.2 of Section 4.
This difficulty is also encountered by LASSO and adaptive LASSO, highlighting the inherent challenges in model selection for high-dimensional nonstationary ARX models with highly correlated lagged dependent variables. In light of the existing works, we argue that it is the innovative way in which FHTD combines the component techniques that successfully tackles the highly challenging model selection problem outlined in Section 1, which has been known as notoriously difficult for most high-dimensional methods. A comprehensive analysis in non-standard scenarios is necessary to provide a theoretical justification for FHTD, particularly when some predictors display near-perfect correlations and all of them are conditionally heteroscedastic. In the next section, we show that FSR boasts the sure screening property while $\hat{N}_n$ consistently estimates $N_n=\mathcal{J}_n\oplus\mathcal{Q}_n$.

3. Screening and selection consistency

In this section, we present the sure screening property of FSR and the model selection consistency of FHTD in Sections 3.2 and 3.3. To this end, we introduce in Section 3.1 Assumptions (A1)–(A6) concerning model (2).

3.1. Model assumptions

Consider model (2). Let $x_{t,j}$, for $1\le j\le p_n$, and $\epsilon_t$ be $\mathcal{F}_t$-measurable random variables, where $\{\mathcal{F}_t\}$ is an increasing sequence of $\sigma$-fields representing available information up to time $t$. We impose the following assumptions.

(A1) $\{\epsilon_t,\mathcal{F}_t\}$ is a martingale difference sequence (m.d.s.) with $\mathrm{E}\epsilon_t^2=\sigma^2$ and
$$\epsilon_t^2-\sigma^2=\sum_{j=0}^{\infty}\theta_j^\top e_{t-j},\tag{9}$$
where $\theta_j$ are $l_0$-dimensional real vectors such
https://arxiv.org/abs/2505.04884v1
that

$\sum_{j=0}^{\infty}\|\theta_j\|\le C,$ (10)

with 𝑙0 being a fixed positive integer, and {𝑒𝑡, F𝑡} is an 𝑙0-dimensional m.d.s. with

$\sup_t \mathrm{E}\|e_t\|^{\eta}\le C,\quad\text{for some }\eta\ge 2.$ (11)

(A2) For each 1 ≤ 𝑠 ≤ 𝑝𝑛, {𝑥𝑡,𝑠}−∞<𝑡<∞ is a covariance stationary time series with mean zero and admits a one-sided moving average representation,

$x_{t,s}=\sum_{k=0}^{\infty}p_{k,s}\pi_{t-k,s},$ (12)

where 𝑝0,𝑠 = 1, {𝜋𝑡,𝑠, F𝑡} is an m.d.s., and

$\sum_{k=0}^{\infty}\max_{1\le s\le p_n}\sqrt{k}\,|p_{k,s}|\le C.$ (13)

Moreover, for 0 ≤ 𝑠1 ≤ 𝑠2 ≤ 𝑝𝑛 and 𝑠1 + 𝑠2 ≥ 1,

$\pi_{t,s_1}\pi_{t,s_2}-\sigma_{s_1,s_2}=\sum_{j=0}^{\infty}\theta_{j,s_1,s_2}^{\top}e_{t-j,s_1,s_2},$ (14)

where 𝜋𝑡,0 = 𝜖𝑡, 𝜎𝑠1,𝑠2 = E(𝜋𝑡,𝑠1𝜋𝑡,𝑠2), and 𝜃𝑗,𝑠1,𝑠2 are 𝑙𝑠1,𝑠2-dimensional real vectors, with 𝑙𝑠1,𝑠2 being a fixed positive integer, such that

$\sum_{j=0}^{\infty}\|\theta_{j,s_1,s_2}\|\le C,$ (15)

and {𝑒𝑡,𝑠1,𝑠2, F𝑡} is an 𝑙𝑠1,𝑠2-dimensional m.d.s. satisfying, for some 𝑞0 > 2,

$\sup_t \mathrm{E}\|e_{t,s_1,s_2}\|^{q_0\eta}\le C,\quad\text{if }\min\{s_1,s_2\}>0,$ (16)

$\sup_t \mathrm{E}\|e_{t,s_1,s_2}\|^{2q_0\eta/(1+q_0)}\le C,\quad\text{if }\min\{s_1,s_2\}=0,$ (17)

where 𝜂 is defined in (11). Note that 𝐶 does not depend on 𝑠1 or 𝑠2 in the above.

(A3) There exists a positive definite sequence, {𝛾ℎ}−∞<ℎ<∞, of real numbers such that

$\lim_{n\to\infty}\sum_{h=0}^{\infty}|\gamma_{h,n}-\gamma_h|=0,$ (18)

where 𝛾ℎ,𝑛 = E(𝛿𝑡𝛿𝑡+ℎ) and 𝛿𝑡 = 𝛿𝑡,𝑛 = 𝑣𝑡,𝑛 + 𝜖𝑡, with 𝑣𝑡,𝑛 defined in (3).

(A4) $\sum_{j=1}^{p_n}\sum_{l=1}^{r^{(n)}_j}|\beta^{(j)}_l|\le C$ and $\sum_{j=1}^{\iota_n}j|a_j|\le C$, where 𝑎𝑗 = 𝑎𝑗,𝑛 is defined after (1).

(A5) $\max_{1\le j\le p_n}r^{(n)}_j=o(n^{1/2})$, $p^{*}_n\asymp n^{\nu}$, and $q_n=o(n^{1/2-\theta_o})$, where $\nu\in[1,\eta/2]$ and $\theta_o=\nu(1+q_0)/(2\eta q_0)$.

Assumption (A1), implying

$\sup_t \mathrm{E}|\epsilon_t|^{2\eta}<C,$ (19)

is satisfied by many conditionally heteroscedastic processes, such as the stationary GJR-GARCH model with a finite 2𝜂-th moment. Assumption (A2) assumes that {𝑥𝑡,𝑠} follows an MA(∞) process driven by the conditionally heteroscedastic innovations {𝜋𝑡,𝑠}, while also requiring

$\max_{1\le s\le p_n}\sup_t \mathrm{E}|x_{t,s}|^{2\eta q_0}<C.$ (20)

Note that it allows $(\epsilon_t,\pi_{t,1},\ldots,\pi_{t,p_n})^{\top}$ to constitute a multivariate GARCH process, with the diagonal VEC model of Bollerslev, Engle and Wooldridge (1988) being a particular instance.
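To illustrate the class of errors covered by (A1), the following sketch simulates a univariate GARCH(1,1) martingale difference sequence. The parameters (𝜔, 𝛼, 𝛽) = (0.4, 0.1, 0.5) are our hypothetical choices, selected so that the unconditional variance 𝜔/(1−𝛼−𝛽) equals 1 and the fourth moment is finite:

```python
import numpy as np

# Hypothetical GARCH(1,1) error process of the kind allowed by (A1);
# parameters are illustrative only.
omega, alpha, beta = 0.4, 0.1, 0.5
rng = np.random.default_rng(0)
n = 100_000
z = rng.standard_normal(n)
eps = np.empty(n)
sig2 = omega / (1.0 - alpha - beta)      # start from the stationary variance
for t in range(n):
    eps[t] = np.sqrt(sig2) * z[t]
    sig2 = omega + alpha * eps[t] ** 2 + beta * sig2
# eps itself is (empirically) serially uncorrelated, but eps**2 is
# autocorrelated: the signature of a conditionally heteroscedastic m.d.s.
```

Empirically, the levels of such a process look like white noise while their squares are positively autocorrelated, which is exactly the behavior that (A1) and the representation (9) accommodate.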
Assumption (A3) is used to derive the FCLT for the multivariate linear process driven by {𝛿𝑡}(Theorem A.1), and As- sumption (A4), known as the weak sparsity condition, is frequently employed in the high-dimensional statistics literature. Finally, Assumption (A5) allows that the covariate dimension, 𝑝∗ 𝑛, is at least of the same order as 𝑛, and can be much larger than 𝑛if𝜂>2. It also permits that 𝑞𝑛, the prescribed upper bound of the number of AR variables, increases to ∞at a rate slower than 𝑛1/2. For a more detailed and comprehensive exploration of (A1)–(A5), readers are referred to Section S1 of the supplementary material. Apart from (A1)–(A5), we also require (A6), which assumes the covariance structures of 𝑥𝑡,𝑗and the stationary component of 𝑦𝑡, i.e.,𝑧𝑡=[𝜓−1(𝐵)𝜙(𝐵)]𝑦𝑡, where 𝜙(𝑧)=(1−𝑧)𝑎(1+𝑧)𝑏𝑙Ö 𝑘=1(1−2 cos𝜗𝑘𝑧+𝑧2)𝑑𝑘𝜓(𝑧), with𝜓(𝑧)=𝜓𝑛(𝑧)defined after (1). Since 𝜓(𝑧)≠0 for all|𝑧|≤1, by the second part of (A4) and Theorem 3.8.4 of Brillinger (1975), 𝑧𝑡can be expressed as 𝑧𝑡=Í𝑡−1 𝑗=0𝑏𝑗𝛿𝑡−𝑗, with𝑏0=1,Í∞ 𝑗=0𝑏𝑗𝑧𝑗≠ 8 0,|𝑧|≤1, andÍ∞ 𝑗=0|𝑗𝑏𝑗|≤𝐶. Define𝑧𝑡,∞=Í∞ 𝑗=0𝑏𝑗𝛿𝑡−𝑗,z⊤ 𝑡,∞(𝑘)=(𝑧𝑡−1,∞,...,𝑧𝑡−𝑘,∞), 𝚪𝑛(𝐽)=Ez𝑡,∞(𝑞𝑛−𝑑) x𝑡(𝐽)z⊤ 𝑡,∞(𝑞𝑛−𝑑),x⊤ 𝑡(𝐽) , 𝐽⊆¯𝐽, and for(𝑖,𝑙)∉𝐽,g⊤ 𝐽(𝑖,𝑙)=(E(z⊤ 𝑡,∞(𝑞𝑛−𝑑)𝑥𝑡−𝑙,𝑖),E(x⊤ 𝑡(𝐽)𝑥𝑡−𝑙,𝑖)). Now, (A6) is presented as follows: (A6) max ♯(𝐽)≤𝐾𝑛𝜆−1 min(𝚪𝑛(𝐽))≤𝐶, (21) and 𝑞𝑛−𝑑∑︁ 𝑠=1max ♯(𝐽)≤𝐾𝑛,(𝑖,𝑙)∉𝐽|𝑎𝑠,𝐽(𝑖,𝑙)|+ max ♯(𝐽)≤𝐾𝑛,(𝑖,𝑙)∉𝐽∑︁ (𝑖∗,𝑙∗)∈𝐽|𝑎(𝑖∗,𝑙∗)(𝑖,𝑙)|≤𝐶, (22) where(𝑎1,𝐽(𝑖,𝑙),...,𝑎𝑞𝑛−𝑑,𝐽(𝑖,𝑙),(𝑎(𝑖∗,𝑙∗)(𝑖,𝑙):(𝑖∗,𝑙∗)∈𝐽))⊤=𝚪−1 𝑛(𝐽)g𝐽(𝑖,𝑙). In Section S1, we provide examples to demonstrate the applicability of (A6). Specifically, we establish that (22) remains valid, even when model (2) contains highly correlated lag-dependent variables and exogenous variables with strong correlations. 3.2. The sure screening property of FSR In addition to (A1)–(A6), we require a strong sparsity condition (SS X)on{𝛽(𝑗) 𝑙}to ensure the sure screening property of FSR. 
(SS X)𝑠0=♯(J𝑛)and min(𝑗,𝑙)∈J 𝑛|𝛽(𝑗) 𝑙|obey 𝑠1/2 0𝑝∗¯𝜃 𝑛 𝑛1/2=𝑜(min (𝑗,𝑙)∈J 𝑛|𝛽(𝑗) 𝑙|), (23) where ¯𝜃=max{2/(𝑞0𝜂),(𝑞0+1)/(2𝜂𝑞0)}, and𝜂and𝑞0are defined in Assumptions (A1) and (A2). Medeiros and Mendes (2016) used a similar condition to derive the selection consistency of adaptive LASSO when{𝑦𝑡}is stationary. However, (SS X)is less
stringent than their strong sparsity condition; see the discussion of Section S1. With this assumption, Theorem 3.1 below shows that FSR asymptoti- cally screens all relevant variables. Theorem 3.1. Assume that (A1)–(A6) and(SS X)hold. Then, for 𝐾𝑛≍(𝑛/𝑝∗ 𝑛2¯𝜃)𝜍, (24) where 1/3<𝜍< 1/2, lim𝑛→∞𝑃(J𝑛⊆ˆ𝐽𝐾𝑛)=1. (25) Unit root processes with many predictors 9 Note that forcing the lagged dependent variables in FSR is essential to the sure-screening property. As illustrated in the following example, in the presence of unit-roots the OGA may fail to include all relevant variables when applied to choose the lagged dependent variables and exogenous variables simultaneously. Example 3.1. Consider a special case of model (2), 𝑦𝑡=𝛼1𝑦𝑡−1+𝛼2𝑦𝑡−2+𝑝𝑛∑︁ 𝑗=1𝛽𝑗𝑥𝑡−1,𝑗+𝜖𝑡, (26) where𝛼1=1+𝑎,𝛼2=−𝑎,|𝑎|<1,𝛽𝑗=0, 1≤𝑗≤𝑝𝑛, and{(𝑥𝑡,1,...,𝑥𝑡,𝑝𝑛,𝜖𝑡)⊤}is a sequence of independent normal random vectors with mean zero and identity covariance matrix. It is easy to see that (26) is an AR(2) model whose characteristic polynomial (1−𝑎𝑧)(1−𝑧)has a unit root of 1. In addition, all 𝑥𝑡−1,𝑗,1≤𝑗≤𝑝𝑛, are redundant. Under this model, when OGA is directly applied to the set of candidate variables {𝑦𝑡−1,𝑦𝑡−2,𝑥𝑡−1,𝑗: 1≤𝑗≤𝑝𝑛}, one of the relevant variables {𝑦𝑡−1,𝑦𝑡−2} willnotbe included in the OGA path. To see this, let 𝐹1,𝑛=(y⊤ 𝑛𝒐1)2/∥𝒐1∥2,𝐹2,𝑛=(y⊤ 𝑛𝒐2)2/∥𝒐2∥2, and𝑄𝑗,𝑛=(y⊤ 𝑛𝒙(𝑗))2/∥𝒙(𝑗)∥2for 1≤𝑗≤𝑝𝑛, where, analogous to Section 2, we define y𝑛=(𝑦𝑛,...,𝑦 3)⊤,𝒐1=(𝑦𝑛−1,...,𝑦 2)⊤,𝒐2= (𝑦𝑛−2,...,𝑦 1)⊤,𝒙(𝑗)=(𝑥𝑛−1,𝑗,...,𝑥 2,𝑗)⊤, 1≤𝑗≤𝑝𝑛. Then 𝐹1,𝑛 𝑛2⇒(1−𝑎)−2∫1 0𝑤2(𝑡)𝑑𝑡,𝐹2,𝑛 𝑛2⇒(1−𝑎)−2∫1 0𝑤2(𝑡)𝑑𝑡, (27) where𝑤(𝑡)is the standard Brownian motion and ⇒denotes convergence in law. Moreover, it is shown in Section S4 of the supplement that 1 𝑛(𝐹1,𝑛−𝐹2,𝑛)→1+2𝑎 1−𝑎2in probability . (28) By Bernstein’s inequality, we have max 1≤𝑗≤𝑝𝑛𝑄𝑗,𝑛=𝑂𝑝(𝑛log𝑝𝑛). Hence, if𝑎>−0.5, then (27)– (28) imply that with probability tending to 1, 𝑦𝑡−1will be selected in the initial iteration of OGA, provided that log 𝑝𝑛=𝑜(𝑛). 
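The first step of this example can be checked numerically. The sketch below is our illustration, with hypothetical values 𝑎 = 0.3, 𝑛 = 5000, 𝑝 = 50: it computes 𝐹1,𝑛, 𝐹2,𝑛, and max𝑗 𝑄𝑗,𝑛 for one simulated path, and under the unit root the lag statistics (order 𝑛²) dominate the best exogenous statistic (order 𝑛 log 𝑝), matching (27)–(28):

```python
import numpy as np

# One simulated path of model (26): y_t = (1+a) y_{t-1} - a y_{t-2} + eps_t,
# with all exogenous candidates redundant (illustrative settings, ours).
rng = np.random.default_rng(1)
a, n, p = 0.3, 5000, 50
eps = rng.standard_normal(n + 2)
y = np.zeros(n + 2)
for t in range(2, n + 2):
    y[t] = (1 + a) * y[t - 1] - a * y[t - 2] + eps[t]
X = rng.standard_normal((n, p))        # redundant exogenous candidates
yv, o1, o2 = y[2:], y[1:-1], y[:-2]    # y_t and its first two lags
F1 = (yv @ o1) ** 2 / (o1 @ o1)        # statistic for y_{t-1}
F2 = (yv @ o2) ** 2 / (o2 @ o2)        # statistic for y_{t-2}
Qmax = ((yv @ X) ** 2 / (X * X).sum(axis=0)).max()
```

On a typical path both F1 and F2 dwarf Qmax, so a lagged dependent variable wins the first OGA iteration, as claimed.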
Assuming that 𝑦𝑡−1is already included by OGA, define 𝝐=(I−𝒐1𝒐⊤ 1/∥𝒐1∥2)y𝑛, which is the resid- ual vector obtained by regressing 𝑦𝑡on𝑦𝑡−1. It can be shown that (𝒐⊤ 2𝝐)2/∥𝒐2∥2=𝑂𝑝(1)and for some small𝑐>0,𝑃(max 1≤𝑗≤𝑝𝑛[(𝒙(𝑗))⊤𝝐]2/∥𝒙(𝑗)∥2>𝑐log𝑝𝑛)→ 1. Hence, the probability of choosing 𝑦𝑡−2in the second OGA iteration approaches 0 provided log 𝑝𝑛→∞ . By a similar argument, 𝑦𝑡−2will not be selected by OGA in the first 𝐾𝑛iterations when 𝑝𝑛≫𝑛≫𝐾𝑛with probability approaching 1. Thus, while 𝑦𝑡−1will be selected by OGA with probability tending to 1, it is very difficult for OGA to choose the other relevant lagged dependent variable in the presence of unit roots. If 𝑎<−0.5,𝑦𝑡−2will enter the model in the first iteration and 𝑦𝑡−1will then be neglected by OGA due to the same argument. The above example not only highlights the limitations of OGA in handling nonstationary time series but also suggests that proving Theorem 3.1 requires a different strategy compared to existing works on greedy-type methods (Bühlmann, 2006, Ing, 2020, Ing and Lai, 2011). In particular, these works typically rely on the convergence rates of the “population" OGA and its “semi-population" version (see Section 6 of Bühlmann (2006), Sections 2 and 3 of Ing and Lai (2011), or Section 2 and Appendix A of Ing (2020)). However, the population OGA can hardly be defined for nonstationary time series due to the varying covariances between the input variables and between the input and dependent variables over time. 10 To resolve this dilemma, we introduce the
weak “noiseless" FSR. Define 𝜓𝐽,(𝑖,𝑙)=𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙 (𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙)1/2and ˆ𝜓𝐽,(𝑖,𝑙)=𝑛−1y⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙 (𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙)1/2. The main distinction between 𝜓𝐽,(𝑖,𝑙)and ˆ𝜓𝐽,(𝑖,𝑙)is that the y𝑛in the latter is substituted with its noiseless counterpart 𝝁𝑛in the former. Recalling that ˆ𝐽0=∅, FSR chooses, according to (4), (ˆ𝑗𝑚,ˆ𝑙𝑚)=arg max (𝑗,𝑙)∈¯𝐽\ˆ𝐽𝑚−1ˆ𝜓ˆ𝐽𝑚−1,(𝑖,𝑙), 𝑚≥1, at the𝑚-th iteration, and then updates ˆ𝐽𝑚−1byˆ𝐽𝑚=ˆ𝐽𝑚−1∪{( ˆ𝑗𝑚,ˆ𝑙𝑚)}. Alternatively, we consider the weak noiseless FSR, which selects (𝑗𝑚,𝑙𝑚)satisfying |𝜓𝐽𝑚−1,(𝑗𝑚,𝑙𝑚)|≥𝜉 max (𝑗,𝑙)∈¯𝐽\𝐽𝑚−1|𝜓𝐽𝑚−1,(𝑗,𝑙)|, 𝑚≥1, (29) where𝐽0=∅and 0<𝜉≤1 is a constant. It subsequently updates 𝐽𝑚−1by𝐽𝑚=𝐽𝑚−1∪{(𝑗𝑚,𝑙𝑚)}. When𝜉=1, the algorithm is called the noiseless FSR. We derive in Section S3 of the supplement the rate of convergence of the “noiseless" mean squared error ˆ𝑎𝑚=𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽𝑚)𝝁𝑛, (30) as𝑚increases. The rate of convergence of ˆ 𝑎𝑚, together with a probability bound for max ♯(𝐽)≤𝐾𝑛𝜆−1 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(𝐽)w⊤ 𝑡(𝐽)ª® ¬, (31) developed in Theorem A.3, leads to a convergence rate of ˆ𝑠𝑚=𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕ˆ𝐽𝑚)𝝁𝑛. (32) Note that the sole distinction between ˆ 𝑎𝑚and ˆ𝑠𝑚is that the infeasible 𝐽𝑚in the former is replaced with the data-driven ˆ𝐽𝑚in the latter. The rate of convergence of ˆ 𝑠𝑚serves as the key vehicle to prove (25). In sharp contrast to conventional high-dimensional models where the sample covariance matrix of the explanatory variables can be accurately approximated by a non-random and positive definite matrix, the sample covariance matrix, 𝑛−1Í𝑛 𝑡=¯𝑟𝑛+1w𝑡(𝐽)w⊤ 𝑡(𝐽), in (31) lacks a non-random limit due to the presence of highly correlated lagged dependent variables. Our probability bound for (31) requires an intricate analysis based on an FCLT and moment bounds for linear processes driven by 𝛿𝑡=𝛿𝑡,𝑛= 𝑣𝑡,𝑛+𝜖𝑡, as detailed in Appendix A, where 𝑣𝑡,𝑛is defined in (3). 3.3. 
Selection Consistency This section starts by establishing the selection consistency of ˆJ𝑛defined in (7), which is a backward elimination method based on a refinement, ˆ𝐽ˆ𝑘𝑛, of ˆ𝐽𝐾𝑛; see (5) and (6). To this end, we impose a sparsity condition slightly stronger than (SS X). (SS) There exists 𝑑𝑛/log𝑛→∞ such that 𝑠1/2 0𝑝∗¯𝜃 𝑛𝑑1/2 𝑛 𝑛1/2=𝑜(min (𝑗,𝑙)∈J 𝑛|𝛽(𝑗) 𝑙|). (33) Unit root processes with many predictors 11 Note that the left-hand side in (33) is larger than that of (23) by a factor of about (log𝑛)1/2. Further discussion of (SS) is deferred to Section S1. Based on (SS), among other conditions, Theorem 3.2 ensures the consistency of Trim in selecting the exogenous variables. Theorem 3.2. Assume that the assumptions of Theorem 3.1 hold with (SS X)replaced by (SS). Let the 𝑤𝑛,𝑝𝑛in(5)satisfy 𝑤𝑛,𝑝𝑛 𝑝∗𝑛2¯𝜃→∞ and𝑤𝑛,𝑝𝑛 𝑝∗𝑛2¯𝜃=𝑂((𝑑𝑛/log𝑛)1−𝛿)for any 0<𝛿< 1. (34) Then, ˆ𝑘𝑛and ˆJ𝑛defined in (6)and(7)satisfy lim𝑛→∞𝑃(J𝑛⊆ˆ𝐽ˆ𝑘𝑛)=1, (35) lim𝑛→∞𝑃(ˆJ𝑛=J𝑛)=1. (36) As an early stopping rule for FSR, ˆ𝐽ˆ𝑘𝑛not only preserves ˆ𝐽𝐾𝑛’s sure screening property (35), but also substantially suppresses the impact of spurious variables greedily chosen by FSR, resulting in reliable performance of Trim. With the help of (36), we are now in a position to develop the consistency of DDT in selecting the AR variables. Likewise, we rely on a strong sparsity condition on the AR coefficients. (SS A)min𝑞∈Q𝑛|𝛼𝑞|,𝑠0, and𝑠0obey max{𝑞3/2 𝑛/√𝑛,[(𝑠0+𝑞𝑛)1/2∧(𝑠1/2 0𝑞1/𝜂 𝑛)]} 𝑛1/2=𝑜(min 𝑞∈Q𝑛|𝛼𝑞|), (37) where𝑠0=♯(D0)andD0={𝑗:(𝑗,𝑙)∈J𝑛}. Compared with (SS) or(SS X),(SS A)allows for a much smaller lower bound for the non-zero co- efficients, enabling detection of weaker
signals. In particular, since the spurious exogenous variables chosen by FSR among the 𝑝∗ 𝑛candidates have been (asymptotically) eliminated after the HDIC and Trim steps,𝑝∗ 𝑛is now replaced by the much smaller 𝑞𝑛in the lower bound for min 𝑞∈Q𝑛|𝛼𝑞|. Theorem 3.3. Assume that the assumptions of Theorem 3.2, (34), and (SS A)hold. Then, the DDT procedure, ˆQ𝑛, defined in (8), satisfies lim𝑛→∞𝑃(ˆQ𝑛=Q𝑛)=1, (38) provided that the data-driven threshold satisfies ˆ𝐻𝑛=max{𝑞3/2 𝑛/√𝑛,[(𝑞𝑛+ˆ𝑠0)1/2∧(ˆ𝑠1/2 0𝑞1/𝜂 𝑛)]} 𝑛1/2˜𝑑𝑛, (39) where ˜𝑑𝑛diverges to∞at an arbitrarily slow rate, ˆ𝑠0=♯(ˆJ𝑛), and ˆ𝑠0=♯({𝑖:(𝑖,𝑙)∈ˆJ𝑛}). DDT turns the infeasible thresholding value in (SS A)into the feasible one, ˆ𝐻𝑛, by replacing the unknown𝑠0and𝑠0with their consistent estimates ˆ 𝑠0and ˆ𝑠0. Combining Theorems 3.2 and 3.3 yields 12 that FHTD asymptotically captures exactly the relevant AR and exogenous covariates despite complex unit roots, conditional heteroscedasticity, and a large pool of candidate variables. We stress that model selection consistency for general unit-root ARX model like Theorems 3.2 and 3.3 are nontrivial theoretical contributions. Recall that as demonstrated in Example 3.1, OGA overlooks certain relevant variables when applied directly to model (2). The following example further shows, regardless of the penalty sequence {𝜆𝑛}, the LASSO method lacks selection consistency in the presence of a unit root. Example 3.2. Consider the model 𝑦𝑡=𝛽∗ 1𝑦𝑡−1+𝛽∗ 2𝑦𝑡−2+𝛽∗ 3𝑥𝑡−1+𝜖𝑡,𝑡=1,2,...,𝑛 , where(𝜖𝑡,𝑥𝑡)⊤ are i.i.d. Gaussian with zero mean and an identity covariance matrix, and 𝛽∗ 1=𝛽∗ 3=1 and𝛽∗ 2=0. If we apply LASSO to estimate (𝛽∗ 1,𝛽∗ 2,𝛽∗ 3)with ˆ𝜷(𝜆𝑛)=(ˆ𝛽(𝜆𝑛) 1,ˆ𝛽(𝜆𝑛) 2,ˆ𝛽(𝜆𝑛) 3)⊤∈arg min {𝛽𝑗}3 𝑗=1𝑛∑︁ 𝑡=3(𝑦𝑡−𝛽1𝑦𝑡−1−𝛽2𝑦𝑡−2−𝛽3𝑥𝑡−1)2+𝜆𝑛3∑︁ 𝑗=1|𝛽𝑗|, then LASSO will notexhibit model selection consistency, as described in equations (36) and (38). This lack of consistency holds true whether the sequence {𝜆𝑛}is chosen such that (a) lim sup𝑛→∞𝜆𝑛/𝑛=∞, (b) lim inf𝑛→∞𝜆𝑛/𝑛=0, or (c)𝜆𝑛≍𝑛. 
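For concreteness, the LASSO program of this example can be set up as below. The coordinate-descent solver is a generic sketch of ours, not the paper's implementation, and a single finite-sample run cannot verify the asymptotic statement (40); the sketch only checks the mechanics (monotone descent from zero, and that a sufficiently large penalty zeroes out all coefficients):

```python
import numpy as np

def soft(z, g):
    """Soft-thresholding operator."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=300):
    """Coordinate descent for sum((y - X b)^2) + lam * sum(|b|).
    A generic sketch, not the tuning scheme used in the paper."""
    n, p = X.shape
    beta = np.zeros(p)
    col2 = (X * X).sum(axis=0)
    r = y.copy()                       # residual for beta = 0
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * beta[j]     # add coordinate j back into the residual
            beta[j] = soft(X[:, j] @ r, lam / 2) / col2[j]
            r -= X[:, j] * beta[j]
    return beta

# data from the model of Example 3.2: true coefficients (1, 0, 1)
rng = np.random.default_rng(2)
n = 400
eps = rng.standard_normal(n + 2)
x = rng.standard_normal(n + 2)
y = np.zeros(n + 2)
for t in range(2, n + 2):
    y[t] = y[t - 1] + x[t - 1] + eps[t]
Y = y[2:]
D = np.column_stack([y[1:-1], y[:-2], x[1:-1]])   # (y_{t-1}, y_{t-2}, x_{t-1})
lam = 10.0
bhat = lasso_cd(D, Y, lam)
obj = lambda b: ((Y - D @ b) ** 2).sum() + lam * np.abs(b).sum()
bhat_big = lasso_cd(D, Y, 4 * np.abs(D.T @ Y).max())   # huge penalty kills all
```

The design columns y_{t-1} and y_{t-2} are nearly collinear under the unit root, which is precisely the regime in which (40) says no penalty sequence recovers the true support consistently.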
In fact, one can show that for any sequence {𝜆𝑛}satisfying (a), (b), or (c), we always have lim inf𝑛→∞𝑃(ˆ𝛽(𝜆𝑛) 1≠0,ˆ𝛽(𝜆𝑛) 2=0,ˆ𝛽(𝜆𝑛) 3≠0)≤1 2. (40) The proof of (40) can be found in Section S4. It is also worth noting that (38), achieving consistency for subset selection, is more desirable for pre- diction than order selection consistency, whose corresponding model may still contain redundant AR variables. To the best of our knowledge, this type of consistency has not been reported elsewhere, even when𝑞𝑛is bounded, and 𝑣𝑡,𝑛(see (3)) is dropped from model (2). The following example elucidates why achieving consistency in subset selection can offer significantly greater advantages compared to order selection from a predictive standpoint. Example 3.3. Consider the model 𝑦𝑡=𝑘∑︁ 𝑗=1𝛽𝑗𝑦𝑡−𝑗+𝜖𝑡, 𝑡=1,...,𝑛, (41) where𝑘≥1 is an integer, 𝛽1=···=𝛽𝑘−1=0,𝛽𝑘=1, and𝜖𝑡are i.i.d. random variables with a mean of zero and a constant variance of 0 <𝜎2<∞. Clearly, model (41) is a nonstationary AR( 𝑘) model containing𝑘unit roots. If 𝑘is known or can be consistently estimated by an order selection criterion such as BIC, then it is natural to predict 𝑦𝑛+1using the least squares predictor, ˆ 𝑦𝑛+1(𝑘)=y⊤ 𝑛(𝑘)ˆ𝜷(𝑘), where y𝑡(𝑘)=(𝑦𝑡,...,𝑦𝑡−𝑘+1)⊤and ˆ𝜷(𝑘)=(Í𝑛−1 𝑡=𝑘y𝑡(𝑘)y⊤ 𝑡(𝑘))−1Í𝑛−1 𝑡=𝑘y𝑡(𝑘)𝑦𝑡+1. The performance of ˆ𝑦𝑛+1(𝑘)can be evaluated using its mean squared prediction error (MSPE), defined as MSPE 𝑘= E(𝑦𝑛+1−ˆ𝑦𝑛+1(𝑘))2. Assume E|𝜖1|𝑠<∞for some𝑠>4, and a smoothness condition on 𝜖𝑡described in Section 2 of Ing, Sin and Yu (2010). Then, by
extending an argument used in Ing (2001), Ing, Sin and Yu (2010), and Ing and Yang (2014), it can be shown that lim𝑛→∞𝑛(MSPE𝑘−𝜎2)=𝜎2plim𝑛→∞log det(Í𝑛−1 𝑡=𝑘y𝑡(𝑘)y⊤ 𝑡(𝑘)) log𝑛=2𝑘𝜎2, (42) Unit root processes with many predictors 13 where the second equality is ensured by Theorem 5 of Wei (1987). Alternatively, if a method can consistently select the non-zero coefficient 𝛽𝑘while excluding the redundant ones, as is offered by the FHTD, the least squares predictor, ˜𝑦𝑛+1(𝑘)=𝑦𝑛+1−𝑘˜𝛽𝑘=𝑦𝑛+1−𝑘Í𝑛−1 𝑡=𝑘𝑦𝑡−𝑘+1𝑦𝑡+1 Í𝑛−1 𝑡=𝑘𝑦2 𝑡−𝑘+1, would emerge as another appropriate predictor for 𝑦𝑛+1, where ˜𝛽𝑘is the least squares estimate of 𝛽𝑘 obtained from regressing 𝑦𝑡on𝑦𝑡−𝑘. By an argument similar to that used to prove (42), it can be shown that^MSPE𝑘=E(𝑦𝑛+1−˜𝑦𝑛+1(𝑘))2obeys lim𝑛→∞𝑛(^MSPE𝑘−𝜎2)=𝜎2plim𝑛→∞log det(Í𝑛−𝑘 𝑡=1𝑦2 𝑡) log𝑛=2𝜎2. (43) Equations (42) and (43) reveal that the least squares predictor constructed from a consistent order selection method could indeed lead to significantly higher MSPE than the one derived from a consis- tent subset selection method, especially when the underlying unit-root model contains many irrelevant lagged dependent variables. Therefore, excessive overfitting in a unit-root time series could result in a notably larger MSPE than in a stationary series. In closing this section, we note that, while it appears possible to extend our argument to establish the selection consistency of Lasso or OGA based on the residuals of a long autoregression, our simulation results in the following section do not support the effectiveness of such an approach in finite-sample performance. This also underscores the distinctiveness of FHTD in dealing with high-dimensional unit-root time series data. 4. Simulation studies In this section, we examine the model selection performance of FHTD using synthetic data gener- ated from model (1), with coefficients, covariates, and error terms specified below. 
For the purpose of comparison, we employ several existing high-dimensional model selection methods, such as LASSO, adaptive LASSO (ALasso), and OGA+HDIC+Trim (OGA-3), where the names in the parentheses are shorthands used throughout the paper. Since FHTD first coerces all candidate AR variables into the model, we modify ALasso and OGA-3 accordingly and consider the analogous methods, AR-ALasso and AR-OGA-3. For AR-ALasso, the AR variables are not penalized in the first-stage LASSO and the resulting coefficients are used as the initial weights (weighted inversely) for the second-stage LASSO. Similarly, AR-OGA-3 forces the AR variables into the base model when implementing OGA-3. According to Theorem 3.2, the penalty term, 𝑤𝑛,𝑝𝑛, in HDIC can be taken to be 𝑡𝑛𝑝∗2¯𝜃 𝑛, where ¯𝜃is defined in (SS X)and{𝑡𝑛}diverges to∞arbitrarily slowly. Here, we approximate 2 ¯𝜃using 1/𝜂because the exogenous variables are often allowed to have finite higher-order moments. On the other hand, we set𝜂=2 to include GARCH-type errors with relatively heavy tails. As a result, for the FSR- and OGA-based methods, HDIC(𝑄⊕𝐽)=𝑛log ˆ𝜎2 𝑄⊕𝐽+𝑐𝑝∗1/𝜂(♯(𝑄)+♯(𝐽)), 𝜂=2, (44) is used throughout all simulations, where 𝑐>0 is a tuning parameter. In view of Theorem 3.3 and since ˆ𝐻𝑛in (39) simplifies to [(𝑞𝑛+ˆ𝑠0)1/2∧(ˆ𝑠1/2 0𝑞1/𝜂 𝑛)]˜𝑑𝑛/𝑛1/2if𝑞𝑛=𝑜(𝑛1/3), the threshold in DDT is 14 set to ˆ𝐻𝑛=[(𝑞𝑛+ˆ𝑠0)1/2∧(ˆ𝑠1/2 0𝑞1/2 𝑛)]𝑑 𝑛1/2, (45) where𝑑is also subject to fine-tuning. In practice, one may use a hold-out validation set to determine 𝑐and𝑑. To reduce the computational burden, we
set 𝑐=𝑑=0.5 in all simulation examples and only tune𝑐and𝑑more carefully in the real data analysis in Section 5. The number of iterations, 𝐾𝑛, of FSR and OGA is set to 40. The tuning parameters for LASSO-type methods are selected using BIC as in Medeiros and Mendes (2016). Finally, 𝑞𝑛and𝑟(𝑛) 𝑗are set to𝑞𝑛=⌊2𝑛0.25⌋and𝑟(𝑛) 𝑗=𝑟(𝑛)for all 1≤ 𝑗≤𝑝𝑛, where(𝑛,𝑝𝑛,𝑟(𝑛))=(200,100,4),(400,200,5), and (800, 500, 6). Note that 𝑝∗ 𝑛=𝑝𝑛𝑟(𝑛)>𝑛 in all cases. Let ˜Q𝑖and ˜J𝑖denote the sets of the AR and exogenous variables chosen by a model selection method in the 𝑖-th simulation. Then its performance is measured by the frequencies of selecting exactly the relevant variables (E) and including all relevant variables (SS) as well as the average numbers of true positives (TP) and false positives (FP), namely, E=1000∑︁ 𝑖=1I{˜Q𝑖=Q𝑛}I{˜J𝑖=J𝑛},SS=1000∑︁ 𝑖=1I{Q𝑛⊆˜Q𝑖}I{J𝑛⊆˜J𝑖}, TP=1 10001000∑︁ 𝑖=1(♯{˜Q𝑖∩Q𝑛}+♯{˜J𝑖∩J𝑛}),FP=1 10001000∑︁ 𝑖=1(♯{˜Q𝑖∩Q𝑐 𝑛}+♯{˜J𝑖∩J𝑐 𝑛}), whereQ𝑐=[𝑞𝑛]\Q𝑛andJ𝑐 𝑛=¯𝐽\J𝑛. All simulation results are based on 1,000 replicates. Example 4.1. In this example, we generate 𝑛observations from (1−0.45𝐵4−0.45𝐵5)(1−𝐵)𝑦𝑡=5∑︁ 𝑗=1𝛽(𝑗) 1𝑥𝑡−1,𝑗+10∑︁ 𝑗=6𝛽(𝑗) 2𝑥𝑡−2,𝑗+𝜖𝑡, (46) where𝜖𝑡is independently drawn from a 𝑡(6)distribution. The candidate covariates are generated ac- cording to the AR(1) model, 𝑥𝑡,𝑗=0.8𝑥𝑡−1,𝑗+2𝑤𝑡+𝑣𝑡,𝑗,𝑗=1,2,...,𝑝𝑛, where{𝑤𝑡}and{𝑣𝑡,𝑗}are independent standard Gaussian white noise processes and are independent of {𝜖𝑡}. The coefficients are given by(𝛽(1) 1,𝛽(2) 1,𝛽(3) 1,𝛽(4) 1,𝛽(5) 1)=(3,3.75,4.5,5.25,6), and(𝛽(6) 2,𝛽(7) 2,𝛽(8) 2,𝛽(9) 2,𝛽(10) 2)= (6.75,7.5,8.25,9,9.25). Since a unit-root is introduced in (46), {𝑦𝑡}is nonstationary and the model contains three lagged dependent variables, 𝑦𝑡−1,𝑦𝑡−4,𝑦𝑡−6, and ten exogenous variables. In addition, the candidates 𝑥𝑡−𝑙,𝑗are highly correlated because Corr (𝑥𝑡,𝑖,𝑥𝑡,𝑗)=0.8, for𝑖≠𝑗. Simulation results for Example 4.1 are summarized in Table 1. Clearly, the LASSO-type methods fail to identify the correct model. 
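The selection measures defined earlier in this section can be computed for one replication as follows (a sketch of ours; the reported E, SS, TP, and FP aggregate 1000 replications):

```python
# Performance measures for a single replication: exact selection (E),
# sure screening (SS), true positives (TP), and false positives (FP).
def selection_metrics(Q_hat, J_hat, Q_true, J_true):
    Q_hat, J_hat = set(Q_hat), set(J_hat)
    Q_true, J_true = set(Q_true), set(J_true)
    exact = int(Q_hat == Q_true and J_hat == J_true)       # contributes to E
    screened = int(Q_true <= Q_hat and J_true <= J_hat)    # contributes to SS
    tp = len(Q_hat & Q_true) + len(J_hat & J_true)         # true positives
    fp = len(Q_hat - Q_true) + len(J_hat - J_true)         # false positives
    return exact, screened, tp, fp

# toy call (hypothetical sets): one spurious AR lag (7) on top of the truth
res = selection_metrics({1, 4, 6, 7}, {(1, 1), (2, 2)}, {1, 4, 6}, {(1, 1), (2, 2)})
```

In this toy call the extra AR lag destroys exactness but not sure screening, and contributes one false positive.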
The TP values of the LASSO-type methods are only slightly larger than 1, meaning that on average they detect only one relevant variable. A closer look at the results reveals that 𝑦𝑡−1 is always included by these methods. However, they include at most one or two additional variables, which are usually irrelevant, resulting in a low FP value. OGA-3 performs equally poorly in terms of TP values, and tends to select more irrelevant variables. AR-OGA-3 has much higher TP values than OGA-3, though its performance in variable screening and selection is unsatisfactory. This inferior performance of AR-OGA-3 is mainly ascribed to OGA's relatively poor selection path, which falls short of including all relevant exogenous variables after adding all candidate AR variables to the model. By contrast, FSR successfully includes all relevant exogenous variables. Based on the reliable screening capability of FSR, HDIC, Trim, and DDT further remove all redundant variables and identify the true ARX model over 90% of the time when 𝑛 ≥ 400.

Table 1. Values of E, SS, TP, and FP in Example 4.1, where E denotes selecting exactly the relevant variables and SS including all relevant variables, and TP and FP are the average numbers of true positives and false positives. Results are based on 1000 replications.

            LASSO    ALasso   OGA-3    AR-ALasso  AR-OGA-3  FHTD
(𝑛, 𝑝∗𝑛, 𝑝𝑛, 𝑟(𝑛), 𝑞𝑛) = (200, 400, 100, 4, 7)
E           0        0        0        0          1         431
SS          0        0        0        0          1         1000
TP          1.02     1.02     1.16     1.12       6.67      13.00
FP          0.73     0.39     3.36     0.50       12.49     0.98
(𝑛, 𝑝∗𝑛, 𝑝𝑛, 𝑟(𝑛), 𝑞𝑛) = (400, 1000, 200, 5, 8)
E           0        0        0        0          78        919
SS          0        0        0        0          78        1000
TP          1.01     1.00     1.12     1.07       10.46     13.00
FP          0.22     0.09     4.39     0.63       11.12     0.09
(𝑛, 𝑝∗𝑛, 𝑝𝑛, 𝑟(𝑛), 𝑞𝑛) = (800, 3000, 500, 6, 10)
E           0        0        0        0          229       998
SS          0        0        0        0          229       1000
TP          1.03     1.00     1.32     1.06       11.87     13.00
FP          0.13     0.00     5.62     0.66       9.29      0.00

Example 4.2. In this example, we generate data from

$(1-0.3B)(1-2\cos(0.1)B+B^2)y_t=\sum_{j=1}^{5}\beta^{(j)}_1 x_{t-1,j}+\sum_{j=6}^{10}\beta^{(j)}_2 x_{t-2,j}+\epsilon_t,$ (47)

where {𝜖𝑡} is a GARCH(1,1) process satisfying

$\epsilon_t=\sigma_t Z_t,\qquad \sigma^2_t=5\times 10^{-2}+0.05\,\epsilon^2_{t-1}+0.9\,\sigma^2_{t-1},$

in which {𝑍𝑡} is a sequence of i.i.d. standard Gaussian random variables. By Theorem 2.2 of Ling and McAleer (2002), 𝜖𝑡 has a finite sixth moment. Let w𝑡 = A𝝅𝑡, where A = (𝑎𝑖𝑗)1≤𝑖,𝑗≤𝑝𝑛, with 𝑎𝑖𝑗 = 0.6^{|𝑖−𝑗|} if |𝑖−𝑗| ≤ 7 and 𝑎𝑖𝑗 = 0 otherwise, and {𝝅𝑡}, independent of {𝑍𝑡}, is a sequence of i.i.d. random vectors whose entries are independently drawn from a 𝑡(13) distribution. We then generate 𝑥𝑡,𝑗 by

$(1-0.1B+0.7B^2)x_{t,j}=(1+0.7B)w_{t,j},\quad 1\le j\le p_n,$

where 𝑤𝑡,𝑗 is the 𝑗-th component of w𝑡. Note that {𝑥𝑡,𝑗} is an ARMA(2,1) process. Moreover, the relevant coefficients are (𝛽(1)1, 𝛽(2)1, 𝛽(3)1, 𝛽(4)1, 𝛽(5)1) = (0.82, −1.03, 1.92, −2.21, 2.42) and (𝛽(6)2, 𝛽(7)2, 𝛽(8)2, 𝛽(9)2, 𝛽(10)2) = (−2.57, 3.28, −3.54, 3.72, −3.90).

Table 2 reports the performance of the same methods as those in Example 4.1. In addition to conditionally heteroscedastic errors, the major challenge in this example lies in the fact that the AR component on the left-hand side of (47) contains complex unit roots; thus, {𝑦𝑡} cannot be made stationary through simple difference transforms. As observed in Table 2, this challenge hinders the performance of the OGA- and LASSO-type methods, all of which have zero SS and E values and low TP values even when 𝑛 = 800. In contrast, FHTD still works well under the challenge. Specifically, it detects all relevant variables over 94% of the time for 𝑛 ≥ 200. In addition, its E value rapidly increases from 493 to over 840 when 𝑛 increases from 200 to 400 (or 800).

Table 2. Values of E, SS, TP, and FP in Example 4.2, where E, SS, TP, and FP are defined similarly to those of Table 1. Results are also based on 1000 replications.

            LASSO    ALasso   OGA-3    AR-ALasso  AR-OGA-3  FHTD
(𝑛, 𝑝∗𝑛, 𝑝𝑛, 𝑟(𝑛), 𝑞𝑛) = (200, 400, 100, 4, 7)
E           0        0        0        0          0         493
SS          0        0        0        0          1         943
TP          1.45     1.24     1.33     1.19       5.40      12.93
FP          3.04     2.22     2.09     1.73       6.33      0.80
(𝑛, 𝑝∗𝑛, 𝑝𝑛, 𝑟(𝑛), 𝑞𝑛) = (400, 1000, 200, 5, 8)
E           0        0        0        0          0         845
SS          0        0        0        0          0         999
TP          1.21     1.10     1.83     1.05       5.63      13.00
FP          1.93     1.58     3.19     1.50       6.40      0.24
(𝑛, 𝑝∗𝑛, 𝑝𝑛, 𝑟(𝑛), 𝑞𝑛) = (800, 3000, 500, 6, 10)
E           0        0        0        0          0         850
SS          0        0        0        0          0         1000
TP          1.04     1.01     1.94     1.00       6.19      13.00
FP          1.32     1.18     3.34     1.18       6.66      0.33

We also considered another challenging example, where the error term and all candidate exogenous variables are conditionally heteroscedastic, in addition to two unit roots in the AR component. FHTD still substantially outperforms the other methods in this example. Details are deferred to Section S5 of the supplement.

5. Applications

In this section, we apply the proposed FHTD to the U.S. monthly housing starts and unemployment series.

5.1. Housing starts in the U.S.

In this application, we are interested in modeling the logarithm of U.S. monthly housing starts. As depicted in Figure 1a, the series exhibits an apparent
seasonal pattern along with a drastic level change around the subprime financial crisis of 2008. For covariates, we collect the monthly new private housing units authorized by building permits for each state (for instance, data for Illinois are retrieved from https://fred.stlouisfed.org/series/ILBP1FH) and the 30-year fixed rate mortgage averages from the Economic Data of the St. Louis Federal Reserve (Freddie Mac, 30-Year Fixed Rate Mortgage Average in the United States [MORTGAGE30US], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/MORTGAGE30US, October 27, 2022).

[Figure 1: Time plots of the U.S. monthly housing starts and unemployment series. (a) Logarithm of housing starts. (b) Unemployment rates, seasonally adjusted.]

After removing series with missing values, we have 49 housing permits series {𝑥𝑡,𝑗, 𝑗 = 1, 2, ..., 49} and the mortgage rate series 𝑟𝑡 from January 1988 through August 2022. We also remove the seasonality and unit root by taking ˜𝑥𝑡,𝑗 = (1−𝐵¹²)(1−𝐵) log 𝑥𝑡,𝑗, 𝑗 = 1, 2, ..., 49, and ˜𝑟𝑡 = 𝑟𝑡 − 𝑟𝑡−1. Consequently, we have 403 observations for each series. Then we employ the following predictive model

$h_t=\sum_{l=1}^{18}\alpha_l h_{t-l}+\sum_{j=1}^{49}\sum_{k=1}^{18}\beta^{(j)}_k \tilde{x}_{t-k,j}+\sum_{k=1}^{18}\beta^{(50)}_k \tilde{r}_{t-k}+\epsilon_t,$ (48)

where ℎ𝑡 denotes the logarithm of U.S. housing starts at month 𝑡. Note that there are 918 potential predictors. We also consider the model with a drift,

$h_t=\beta_0+\sum_{l=1}^{18}\alpha_l h_{t-l}+\sum_{j=1}^{49}\sum_{k=1}^{18}\beta^{(j)}_k \tilde{x}_{t-k,j}+\sum_{k=1}^{18}\beta^{(50)}_k \tilde{r}_{t-k}+\epsilon_t.$ (49)

In implementing FHTD, we estimate (49) via the following procedure. First subtract from each variable (including the dependent variable) its own sample average, and then apply FHTD to the transformed data. Finally, (49) is estimated by OLS with the selected variables and an intercept.
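The deseasonalizing transform ˜𝑥𝑡,𝑗 = (1−𝐵¹²)(1−𝐵) log 𝑥𝑡,𝑗 can be written compactly. The sketch below (our code, applied to a synthetic series) uses a log-linear trend plus an exact period-12 component, both of which the transform annihilates; note the bookkeeping that each series loses 13 observations, consistent with the 416 monthly observations from January 1988 through August 2022 reducing to 403:

```python
import numpy as np

def deseasonal_diff(x, s=12):
    """Apply (1 - B^s)(1 - B) to log x, dropping the first s + 1 points."""
    lx = np.log(x)
    d1 = lx[1:] - lx[:-1]           # (1 - B) log x
    return d1[s:] - d1[:-s]         # then (1 - B^s)

# synthetic permit-like series: log-linear trend + exact period-12 seasonality
t = np.arange(100)
x = np.exp(0.01 * t + 0.5 * np.sin(2 * np.pi * t / 12))
tx = deseasonal_diff(x)             # length 100 - 13 = 87, all ~ 0
```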
We perform rolling-window one-step-ahead prediction using FHTD as well as the other methods described in Section 4. We reserve the last 18 years of data as the test set, resulting in 𝑊 = 216 windows. Each window contains 169 observations as training data. Figure 2 plots some selected windows. As shown in the figure, the methods are challenged to forecast the sharp dip around 2008 and the subsequent recovery. Since the true model is unknown, the performance of the methods under consideration is measured by the root mean squared prediction error (RMSE) and the median absolute prediction error (MAE), where

$\mathrm{RMSE}=\Big\{W^{-1}\sum_{w=1}^{W}(h_{T-w+1}-\hat{h}_{T-w+1})^2\Big\}^{1/2}$

and MAE is the median of {|ℎ𝑇−𝑤+1 − ˆℎ𝑇−𝑤+1| : 𝑤 = 1, 2, ..., 𝑊}, in which 𝑇 is the time index of the last data point and ˆℎ𝑇−𝑤+1 is the predicted value of ℎ𝑇−𝑤+1. In implementing FHTD and AR-OGA-3, we use HDIC in (44) and ˆ𝐻𝑛 in (45), and choose 𝑐 and 𝑑 therein over a grid of values between 0.1 and 0.7 via a hold-out validation set consisting of the last 20% of the training data in each window. The BIC is used to select the penalty parameters for LASSO-type methods.

[Figure 2: Time plots of the logarithm of monthly U.S. housing starts, ℎ𝑡, for selected rolling windows.]

The prediction results are recorded in Table 3. Note that LASSO-type methods are highly sensitive to the specification of the intercept.
They perform poorly when the intercept is omitted. In view of Figure 2, fitting the drift term to the upward trend in the first few windows may help alleviate the unit-root property of the data in finite samples; without the drift, LASSO-type methods are unable to adapt to the unit-root behavior of the data. On the contrary, FHTD remains stable whether or not an intercept is included, and its prediction errors are substantially lower than those of the other methods.

Table 3. Out-of-sample RMSEs and MAEs of competing methods applied to (48) and (49)

            FHTD   OGA-3   AR-OGA-3   LASSO   ALasso   AR-ALasso
Model (48)
RMSE (×10)  1.08   1.34    1.32       2.14    2.13     1.43
MAE (×10)   0.72   0.82    0.82       1.21    1.17     0.97
Model (49)
RMSE (×10)  1.11   1.31    1.29       1.24    1.21     1.14
MAE (×10)   0.71   0.82    0.91       0.86    0.88     0.81

5.2. U.S. unemployment rate

Next, we consider the U.S. monthly unemployment rate {𝑢𝑡}, shown in Figure 1b. In some empirical studies, the unemployment rate is considered difference-stationary. Nevertheless, Bierens (2001) has found some evidence that the fluctuations in {𝑢𝑡} may be due to complex unit roots. Montgomery et al. (1998) have also noted the possibility of complex unit roots. Regardless of such complications, we can directly apply FHTD and the other methods discussed in the previous section to select a model for {𝑢𝑡} and to predict its future values. The data used are from the FRED-MD dataset (available at https://research.stlouisfed.org/econ/mccracken/fred-databases/), which contains 128 U.S. monthly macroeconomic variables from January 1959 to July 2019. We use data from January 1973 to June 2019 and again consider rolling-window one-step-ahead predictions. After discarding the series with missing values during this time span, there remain 124 macroeconomic time series that can be used to forecast 𝑢𝑡.
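The rolling-window one-step-ahead evaluation used in both applications can be sketched generically. Here `fit_and_predict` is a placeholder for FHTD or any competitor, and the toy check uses a naive random-walk (last-value) forecast; the names and settings are ours, for illustration:

```python
import numpy as np

def rolling_one_step(series, window, fit_and_predict):
    """Rolling-window one-step-ahead evaluation: fit on each window of
    `window` observations, predict the next point, and summarize the errors
    by RMSE and the median absolute prediction error (MAE)."""
    errs = []
    for start in range(len(series) - window):
        train = series[start:start + window]
        errs.append(series[start + window] - fit_and_predict(train))
    errs = np.asarray(errs)
    return np.sqrt((errs ** 2).mean()), np.median(np.abs(errs))

# toy check on a simulated random walk with a naive last-value forecast
rng = np.random.default_rng(4)
h = np.cumsum(rng.standard_normal(300))
rmse, mae = rolling_one_step(h, 250, lambda tr: tr[-1])
```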
Following McCracken and Ng (2016), we transform some series by taking logs, differencing, or both, so that all 124 series are considered stationary after the transformations. Denote these series by {𝑥𝑡,𝑗},𝑗=1,..., 124. Then we apply FHTD to the following model, 𝑢𝑡=6∑︁ 𝑖=1𝛼𝑖𝑢𝑡−𝑖+124∑︁ 𝑗=16∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑥𝑡−𝑙,𝑗+𝜖𝑡, (50) which contains 750 candidate predictors. The last two years of data are reserved as test samples, result- ing in a window size of 310 observations. The results are reported in Table 4. In both performance measures, FHTD outperforms OGA-3, AR-OGA-3, AR-ALasso, and LASSO. Its RMSE improves by 6%, 7%, 12%, and 14% over OGA-3, LASSO, AR-OGA-3, and AR-ALasso, respectively. The results are similar when comparing the MAEs. Note that for LASSO, ALasso, and AR-ALasso, we only report their performance when an intercept is included, since, as observed in the previous application, these methods performed poorly when the intercept is omitted. In this particular application, without the intercept the RMSEs and MAEs of LASSO and ALasso can be more than 14 times as large as their counterparts for the AR-AIC. These results, combined with those in Table 3, show that FHTD is applicable to general unit-root time series, stable across specifications, and makes best use of the available predictors. Finally, we remark that the performance of ALasso (with intercept) is also quite competitive, implying that
both are the most recommendable approaches to forecast {𝑢𝑡}.

Table 4. Out-of-sample RMSEs and MAEs of competing methods applied to (50) for the U.S. monthly unemployment rate series.

            FHTD   OGA-3   AR-OGA-3   LASSO   ALasso   AR-ALasso
RMSE (×10)  1.34   1.42    1.50       1.44    1.38     1.52
MAE (×10)   0.88   0.91    0.95       0.96    0.88     1.00

6. Concluding remarks

This paper proposed the FHTD algorithm for variable selection in high-dimensional nonstationary ARX models with heteroscedastic covariates and errors. Under strong sparsity conditions, we established its selection consistency, valid even when the lagged dependent variables are highly correlated and the sample covariance matrices lack deterministic limits. Finally, we point out some potential research directions. First, shifting from (SS) and (SS X) to weak sparsity assumptions (where coefficients are mostly non-zero but only a few are significant) may prioritize optimal forecasting over variable selection consistency. Addressing this issue remains challenging, particularly in the presence of complex unit roots. Another intriguing avenue is model selection for cointegrated data, common in economics and environmental studies, within the realm of high-dimensional data analysis.

Appendix A: Key technical tools

As noted earlier, the analysis of FHTD relies on a number of key theoretical tools, which are of independent interest. In particular, Theorems A.1–A.3 presented in this section not only are indispensable to our analysis, but may also be of use in future research related to unit-root ARX models with high-dimensional inputs. Their proofs can be found in Section S2 of the supplementary material. Let

$B_n(t_1,t_2,\ldots,t_{2l+2})=\frac{1}{\sqrt{n}}\Big(\sum_{k=1}^{\lfloor nt_1\rfloor}\delta_k,\ \sum_{k=1}^{\lfloor nt_2\rfloor}(-1)^k\delta_k,\ \sum_{k=1}^{\lfloor nt_3\rfloor}\sqrt{2}\sin(k\vartheta_1)\delta_k,\ \sum_{k=1}^{\lfloor nt_4\rfloor}\sqrt{2}\cos(k\vartheta_1)\delta_k,\ \ldots,\ \sum_{k=1}^{\lfloor nt_{2l+2}\rfloor}\sqrt{2}\cos(k\vartheta_l)\delta_k\Big).$ (51)

Note that $B_n$ is a random element in $D^{2l+2}$, where $D$ is the Skorohod space $D=D[0,1]$ (Billingsley, 1999).
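The first two coordinates of 𝐵𝑛 can be simulated directly. In the sketch below (our illustration) the 𝛿𝑘 are i.i.d. standard normal, so 𝛾0 = 1 and 𝛾ℎ = 0 for ℎ ≠ 0, giving 𝑣1² = 𝑣2² = 1; the Monte Carlo variances of the endpoint values should then be close to 1, and the two coordinates close to uncorrelated, consistent with the FCLT stated in Theorem A.1 below:

```python
import numpy as np

# Monte Carlo sketch of the first two coordinates of B_n in (51), evaluated
# at t = 1, with i.i.d. standard normal delta_k.
rng = np.random.default_rng(5)
n, reps = 500, 2000
d = rng.standard_normal((reps, n))               # reps independent paths
signs = (-1.0) ** np.arange(1, n + 1)            # the (-1)^k weights
B1 = d.sum(axis=1) / np.sqrt(n)                  # first coordinate
B2 = (d * signs).sum(axis=1) / np.sqrt(n)        # second coordinate
```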
Our first theoretical apparatus is a novel FCLT for the multivariate linear processes (51) under a set of mild conditions.

Theorem A.1. Assume that (A1)–(A4) hold with η in (11), (16), and (17) replaced by some η₁ > 1. In addition, assume

$$\max_{1 \le j \le p_n} r_j^{(n)} = o(n^{\kappa}), \qquad (52)$$

where κ = min{1/2, 1 − η₁⁻¹}. Then

$$\mathbf{V}^{-1/2} \mathbf{B}_n \Rightarrow \mathbf{W}, \qquad (53)$$

where ⇒ denotes convergence in law, W is a (2l+2)-dimensional standard Brownian motion, and V = diag(v₁², v₂², ..., v²_{2l+2}) is a (2l+2)-dimensional diagonal matrix with

$$v_1^2 = \sum_{h=-\infty}^{\infty} \gamma_h, \quad v_2^2 = \sum_{h=-\infty}^{\infty} (-1)^h \gamma_h, \quad v_{2k+1}^2 = v_{2k+2}^2 = \sum_{h=-\infty}^{\infty} \cos(h\vartheta_k)\,\gamma_h, \quad k = 1, 2, \ldots, l.$$

Note that v_i² > 0, i = 1, ..., 2l+2, are ensured by (A3). If (2) reduces to an AR(p) model with a fixed p and a GARCH error ε_t, then v_m² = σ² for all m, and the weak limit in Theorem A.1 coincides with Theorem 3.3 of Ling and Li (1998). Furthermore, when the error ε_t reduces to an m.d.s. with constant conditional variance, Theorem A.1 reduces to Theorem 2.2 of Chan and Wei (1988). Here, owing to the presence of $v_{t,n} = \sum_{j=1}^{p_n} \sum_{l=1}^{r_j^{(n)}} \beta_l^{(j)} x_{t-l,j}$ in δ_t, the FCLT is quite different from the classical ones, in which the linear processes are driven by {ε_t} only.

Theorem A.2. Let

$$X_i = \sum_{l=-\infty}^{i} a_{i,l} \tilde{x}_l, \qquad Y_j = \sum_{k=-\infty}^{j} b_{j,k} \tilde{y}_k,$$

where $\sum_{l=-\infty}^{i} a_{i,l}^2 < \infty$ and $\sum_{k=-\infty}^{j} b_{j,k}^2 < \infty$ for all 1 ≤ i, j ≤ n, and {(x̃_t, ỹ_t), F_t} is an m.d.s. Suppose also

$$\tilde{x}_t \tilde{y}_t - \mathrm{E}(\tilde{x}_t \tilde{y}_t) = \sum_{j=0}^{\infty} \boldsymbol{\theta}_j^{\top} \tilde{\mathbf{z}}_{t-j}, \qquad (54)$$

where {z̃_t, F_t} is a multivariate m.d.s. and $\sum_{j=0}^{\infty} \|\boldsymbol{\theta}_j\| \le C$, and

$$\sup_t \big\{ \mathrm{E}|\tilde{x}_t|^p + \mathrm{E}|\tilde{y}_t|^q + \mathrm{E}\|\tilde{\mathbf{z}}_t\|^{2r} \big\} \le C, \qquad (55)$$

where 1/p + 1/q = 1/(2r) for some r ≥ 1. Define $Q_n = \sum_{i,j=1}^{n} q_{i,j} X_i Y_j$, where the q_{i,j} are real numbers. Then,

$$\mathrm{E}|Q_n - \mathrm{E}Q_n|^{2r} \le C_r \left\{ \mathrm{E}\left( \sum_{i,j=1}^{n} q_{i,j} X_i^* Y_j^* \right)^{\!2} \right\}^{r},$$

where C_r is a positive constant depending only on r,

$$X_i^* = \sum_{l=-\infty}^{i} a_{i,l} x_l^*, \qquad Y_j^* = \sum_{k=-\infty}^{j} b_{j,k} y_k^*,$$

and {x*_t} and {y*_t} are independent processes that are identically distributed with {x̃_t} and {ỹ_t}, respectively.

Theorem A.2 provides a novel moment bound for quadratic forms of linear processes driven by conditionally heteroscedastic m.d.s., and it resembles the First Moment Bound theorem of Findley and Wei (1993), with some essential improvements. First, their theorem assumes that the x̃_t and ỹ_t in (54) obey E(x̃_t ỹ_t | F_{t−1}) = E(x̃_t ỹ_t) a.s.; hence conditional heteroscedasticity is precluded. On the other hand, as discussed in Section S1 of the supplementary material, (54) permits many conditionally heteroscedastic processes. Finally, (55) allows x̃_t and ỹ_t to be integrable up to different orders, which helps deal with quadratic forms involving {ε_t} and {x_{t,j}} in (2), since they may exhibit different tail behaviors.

Theorems A.1 and A.2 are crucial for the following uniform lower bound for the minimum eigenvalues of the sample covariance matrices.

Theorem A.3. Assume (A1)–(A5) and (21). Then, for

$$K_n = o\big( n^{1/2} / p_n^{*\,\bar\theta} \big), \qquad (56)$$

where θ̄ is defined in (SS_X),

$$\max_{\sharp(J) \le K_n} \lambda_{\min}^{-1}\!\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{s}_t(J) \mathbf{s}_t^{\top}(J) \right) = O_p(1), \qquad (57)$$

where s_t(J) is defined in Section S2 of the supplementary material. Moreover,

$$\max_{\sharp(J) \le K_n} \lambda_{\min}^{-1}\!\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{w}_t(J) \mathbf{w}_t^{\top}(J) \right) = O_p(1). \qquad (58)$$

Theorem A.3 highlights one of the most intriguing subtleties of our analysis. Since our predictors contain highly correlated lagged dependent variables, it is not known a priori whether they lead to asymptotically ill-conditioned (or even singular) sample covariance matrices. This issue is resolved through the FCLT (Theorem A.1), and Theorem A.3 thus shows that model selection criteria based on least squares can differentiate between candidate predictors, even though the predictors contain q_n highly correlated AR variables.
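The near-collinearity that makes Theorem A.3 delicate is easy to exhibit numerically: lagged values of a unit-root process are almost perfectly correlated, so the raw sample covariance of (y_{t−1}, ..., y_{t−q}) is severely ill-conditioned before the normalization by G_n Q_n. A quick check on a simulated random walk (our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 5000, 4
y = np.cumsum(rng.standard_normal(n))  # random walk: one unit root at z = 1

# Design of lagged dependent variables (y_{t-1}, ..., y_{t-q}).
W = np.column_stack([y[q - j - 1:n - j - 1] for j in range(q)])

corr = np.corrcoef(W[:, 0], W[:, 1])[0, 1]   # adjacent lags nearly collinear
cov = W.T @ W / n                            # raw sample covariance
eig = np.linalg.eigvalsh(cov)
cond = eig[-1] / eig[0]                      # condition number

print(corr, cond)
```

The adjacent-lag correlation is essentially 1 and the condition number is huge, yet after the transformation to s_t(J) Theorem A.3 guarantees a minimum eigenvalue bounded away from zero in probability.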
Equations (58) and (57) are also aligned with (3.10) of Lai and Wei (1982) and (3.5.1) of Chan and Wei (1988), respectively, in which model (2) is simplified to a finite-order nonstationary AR model with a conditionally homogeneous error.

Supplementary Material

Supplement for "Model Selection for Unit-root Time Series with Many Predictors"
The supplementary material contains the theoretical proofs of Theorems 3.1–3.3 and Theorems A.1–A.3, and further technical discussions, including some remarks on Assumptions (A1)–(A6), (SS_X), and (SS), as well as further information regarding Examples 3.1 and 3.2. Additional simulation results are also presented.

References

AL-ZOUBI, H. A. (2008). The long swings in the spot exchange rates and the complex unit roots hypothesis. Journal of International Financial Markets, Institutions and Money 18, 236–244.
AL-ZOUBI, H. A., O'SULLIVAN, J. A. and ALWATHNANI, A. M. (2018). Business cycles, financial cycles and capital structure. Annals of Finance 14, 105–123.
BAXTER, G. (1962). An asymptotic result for the finite predictor. Mathematica Scandinavica 10, 137–144.
BICKEL, P. J., RITOV, Y. and TSYBAKOV, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics 37, 1705–1732.
BIERENS, H. J. (2001). Complex unit roots and business cycles: Are they real? Econometric Theory 17, 962–983.
BILLINGSLEY, P. (1999). Convergence of Probability Measures. Wiley.
BOLLERSLEV, T., ENGLE, R. F. and WOOLDRIDGE, J. M. (1988). A capital asset pricing model with time-varying covariances. Journal of Political Economy 96, 116–131.
BRILLINGER, D. R. (1975). Time Series: Data Analysis and Theory. Holt, Rinehart, and Winston, New York.
BÜHLMANN, P. (2006). Boosting for high-dimensional linear models. The Annals of Statistics 34, 559–583.
CANDES, E. and TAO, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics 35, 2313–2351.
CHAN, N. H. and WEI, C. Z. (1988). Limiting distributions of least squares estimates of unstable autoregressive processes. The Annals of Statistics 16, 367–401.
CHUDIK, A., KAPETANIOS, G. and PESARAN, M. H. (2018). A one covariate at a time, multiple testing approach to variable selection in high-dimensional linear regression models. Econometrica 86, 1479–1512.
DEL BARRIO CASTRO, T., CUBADDA, G. and OSBORN, D. R. (2022). On cointegration for processes integrated at different frequencies. Journal of Time Series Analysis 43, 412–435.
FAN, J. and LV, J. (2008). Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70, 849–911.
FARIA, J. R., CUESTAS, J. C. and GIL-ALANA, L. A. (2009). Unemployment and entrepreneurship: A cyclical relation? Economics Letters 105, 318–320.
FINDLEY, D. F. and WEI, C.-Z. (1993). Moment bounds for deriving time series CLT's and model selection procedures. Statistica Sinica 3, 453–480.
GIL-ALANA, L. A. (2009). Time series modeling of sunspot numbers using long-range cyclical dependence. Solar Physics 257, 371–381.
GIL-ALANA, L. A. and GUPTA, R. (2014). Persistence and cycles in historical oil price data. Energy Economics 45, 511–516.
GLOSTEN, L. R., JAGANNATHAN, R. and RUNKLE, D. E. (1993). On the relation between the expected value and the volatility of the nominal excess return on stocks. The Journal of Finance 48, 1779–1801.
HAN, Y. and TSAY, R. S. (2020). High-dimensional linear regression for dependent data with applications to nowcasting. Statistica Sinica 30, 1797–1827.
HELLAND, I.
S. (1982). Central limit theorems for martingales with discrete or continuous time. Scandinavian Journal of Statistics 9, 79–94.
HUANG, H.-H., CHAN, N. H., CHEN, K. and ING, C.-K. (2022). Consistent order selection for ARFIMA processes. The Annals of Statistics.
ING, C.-K. (2001). A note on mean-squared prediction errors of the least squares predictors in random walk models. Journal of Time Series Analysis 22, 711–724.
ING, C.-K. (2020). Model selection for high-dimensional linear regression with dependent observations. The Annals of Statistics 48, 1959–1980.
ING, C.-K., CHIOU, H.-T. and GUO, M. (2016). Estimation of inverse autocovariance matrices for long memory processes. Bernoulli 22, 1301–1330.
ING, C.-K. and LAI, T. L. (2011). A stepwise regression method and consistent model selection for high-dimensional sparse linear models. Statistica Sinica 21, 1473–1513.
ING, C.-K., SIN, C.-Y. and YU, S.-H. (2010). Prediction errors in nonstationary autoregressions of infinite order. Econometric Theory 26, 774–803.
ING, C.-K., SIN, C.-Y. and YU, S.-H. (2012). Model selection for integrated autoregressive processes of infinite order. Journal of Multivariate Analysis 106, 57–71.
ING, C.-K. and YANG, C.-Y. (2014). Predictor selection for positive autoregressive processes. Journal of the American Statistical Association 109, 243–253.
KOCK, A. B. (2016). Consistent and conservative model selection with the adaptive Lasso in stationary and nonstationary autoregressions. Econometric Theory 32, 243–259.
LAI, T. L. and WEI, C. Z. (1982). Asymptotic properties of projections with applications to stochastic regression problems. Journal of Multivariate Analysis 12, 346–370.
LING, S. and LI, W. K. (1998). Limiting distributions of maximum likelihood estimators for unstable autoregressive moving-average time series with general autoregressive heteroscedastic errors. The Annals of Statistics 26, 84–125.
LING, S. and MCALEER, M. (2002). Stationarity and the existence of moments of a family of GARCH processes. Journal of Econometrics 106, 109–117.
MADDANU, F. and PROIETTI, T. (2022). Modelling persistent cycles in solar activity. Solar Physics 297.
MCCRACKEN, M. W. and NG, S. (2016). FRED-MD: A monthly database for macroeconomic research. Journal of Business and Economic Statistics 34, 574–589.
MEDEIROS, M. C. and MENDES, E. F. (2016). ℓ1-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors. Journal of Econometrics 191, 255–271.
MONTGOMERY, A. L., ZARNOWITZ, V., TSAY, R. S. and TIAO, G. C. (1998). Forecasting the U.S. unemployment rate. Journal of the American Statistical Association 93, 478–493.
PROIETTI, T. and MADDANU, F. (2024). Modelling cycles in climate series: The fractional sinusoidal waveform process. Journal of Econometrics 239, 105299.
TIBSHIRANI, R. (1996). Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58, 267–288.
TROPP, J. A. (2004). Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory 50, 2231–2242.
TSAY, R. S. (1984). Order selection in nonstationary autoregressive models. The Annals of Statistics 12, 1425–1433.
WANG, H. (2009).
Forward regression for ultra-high dimensional variable screening. Journal of the American Statistical Association 104, 1512–1524.
WEI, C. Z. (1987). Adaptive prediction by least squares predictors in stochastic regression models with applications to time series. The Annals of Statistics 15, 1667–1682.
WEI, C. Z. (1992). On predictive least squares principles. The Annals of Statistics 20, 1–42.
ZHANG, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38, 894–942.
ZHAO, P. and YU, B. (2006). On model selection consistency of Lasso. Journal of Machine Learning Research 7, 2541–2563.
ZHENG, Z., FAN, Y. and LV, J. (2014). High dimensional thresholded regression and shrinkage effect. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76, 627–649.
ZOU, H. (2006). The adaptive Lasso and its oracle properties. Journal of the American Statistical Association 101, 1418–1429.

Supplement for "Model Selection for Unit-root Time Series with Many Predictors"

This supplemental material contains five sections. Section S1 provides comments on Assumptions (A1)–(A6) and discusses the sparsity conditions (SS_X) and (SS). Section S2 presents proofs for the main results, Theorems 3.1–3.3 of the paper, as well as the proofs of Theorems A.1–A.3. Further details related to Section S2 can be found in Section S3. Section S4 contains specific information regarding Examples 3.1 and 3.2. Finally, Section S5 offers additional simulation results.

S1. Comments on Assumptions (A1)–(A6), (SS_X), and (SS)

In this section, we provide some comments on Assumptions (A1)–(A6). Assumption (A1) is fulfilled by many conditionally heteroscedastic processes. For example, consider a stationary GJR-GARCH(p′₀, q′₀) model (Glosten, Jagannathan and Runkle, 1993),

$$\epsilon_t = \nu_t z_t, \qquad \nu_t^2 = \mathrm{E}(\epsilon_t^2 \mid \mathcal{F}_{t-1}) = \varphi_{0,0} + \sum_{i=1}^{p'_0} \varphi_{0,i}\, \epsilon_{t-i}^2 + \sum_{j=1}^{q'_0} \psi_{0,j}\, \nu_{t-j}^2 + \sum_{k=1}^{p'_0} \zeta_{0,k}\, \epsilon_{t-k}^2 \mathbb{I}\{\epsilon_{t-k} < 0\}, \qquad (S1.1)$$

where the z_t are independent and identically distributed (i.i.d.) symmetric random variables with zero mean, unit variance, and finite 2η-th moment, η ≥ 2, F_t = σ(z_s, s ≤ t), p′₀ and q′₀ are positive integers, φ_{0,0} > 0, and the φ_{0,i}, ψ_{0,j}, and ζ_{0,k} are non-negative constants that guarantee E|ε₁|^{2η} < ∞. Following Huang et al. (2022), it can be shown that (A1) is fulfilled by (S1.1) with l₀ = 2, e_t = (w_{1,t}, w_{2,t})^⊤, and θ_s = (b̃_s, c̃_s)^⊤, where w_{1,t} = ε_t² − ν_t², w_{2,t} = ε_t² I{ε_t < 0} − ½ε_t², and b̃_j and c̃_j, respectively, satisfy

$$\sum_{j=0}^{\infty} \tilde{b}_j z^j = \frac{1 - \sum_{j=1}^{q'_0} \psi_{0,j} z^j}{1 - \sum_{i=1}^{\max\{p'_0, q'_0\}} \big( \varphi_{0,i} + \psi_{0,i} + \tfrac{\zeta_{0,i}}{2} \big) z^i}$$

and

$$\sum_{j=0}^{\infty} \tilde{c}_j z^j = \frac{\sum_{j=1}^{p'_0} \zeta_{0,j} z^j}{1 - \sum_{i=1}^{\max\{p'_0, q'_0\}} \big( \varphi_{0,i} + \psi_{0,i} + \tfrac{\zeta_{0,i}}{2} \big) z^i},$$

in which ψ_{0,j} = 0 if j > q′₀, φ_{0,i} = ζ_{0,i} = 0 if i > p′₀, and $\sum_{i=1}^{p'_0} \varphi_{0,i} + \sum_{j=1}^{q'_0} \psi_{0,j} + \sum_{k=1}^{p'_0} \tfrac{\zeta_{0,k}}{2} < 1$. When ζ_{0,k} = 0 for all k, (S1.1) reduces to the well-known stationary GARCH(p′₀, q′₀) process; the same argument ensures that (A1) holds with l₀ = 1, e_t = w_{1,t}, and θ_s = b̃_s.

Assumption (A2) requires that {x_{t,s}} is an MA(∞) process driven by the conditionally heteroscedastic innovations {π_{t,s}}. This type of assumption is broadly adopted in time series analysis. In fact, (A2) allows (ε_t, π_{t,1}, ..., π_{t,p_n})^⊤ to be a multivariate GARCH process. By the same argument used in the previous paragraph, it can be shown that the diagonal VEC model of Bollerslev, Engle and Wooldridge (1988) is a special case of (14).
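A GJR-GARCH(1,1) special case of (S1.1) is easy to simulate, and one can sanity-check the stationarity restriction φ_{0,1} + ψ_{0,1} + ζ_{0,1}/2 < 1 together with the m.d.s. character of ε_t: the levels are serially uncorrelated while the squares are persistent. The parameter values below are our own choices for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# GJR-GARCH(1,1): phi + psi + zeta/2 = 0.93 < 1, so the process is stationary.
phi0, phi, psi, zeta = 0.05, 0.05, 0.80, 0.16
n_burn, n = 1000, 20000

eps = np.empty(n_burn + n)
nu2 = phi0 / (1 - phi - psi - zeta / 2)  # start at the unconditional variance
for t in range(n_burn + n):
    e = np.sqrt(nu2) * rng.standard_normal()
    eps[t] = e
    # Asymmetric recursion: negative shocks get the extra zeta term.
    nu2 = phi0 + phi * e**2 + psi * nu2 + zeta * e**2 * (e < 0)

eps = eps[n_burn:]
mean = eps.mean()
acf1 = np.corrcoef(eps[:-1], eps[1:])[0, 1]           # ~0: an m.d.s.
acf1_sq = np.corrcoef(eps[:-1]**2, eps[1:]**2)[0, 1]  # > 0: volatility clustering
print(mean, acf1, acf1_sq)
```

The vanishing first-order autocorrelation of ε_t and the clearly positive autocorrelation of ε_t² are exactly the combination that (A1) is designed to accommodate.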
Furthermore, in the notable special case where {π_{t,s}} is a sequence of independent and identically distributed random variables with E(π_{t,s}) = 0 and E(π_{t,s₁} π_{t,s₂}) = σ_{s₁,s₂}, (14) remains valid, with e_{t,s₁,s₂} = π_{t,s₁} π_{t,s₂} − σ_{s₁,s₂}, θ_{0,s₁,s₂} = 1, and θ_{j,s₁,s₂} = 0 for j > 0.

Moment conditions (16) and (17) are more stringent than (11). These stronger moment assumptions ensure a reliable screening performance of FSR when the number of exogenous covariates is larger than the sample size, as allowed by Assumption (A5). Assumption (A3) is used to derive Theorem A.1, leading to a uniform lower bound for the minimum eigenvalues of the sample covariance matrices of dimensions less than or equal to q_n + K_n; see the proof of Theorem A.3. Assumption (A4), referred to as the weak sparsity condition, is commonly used in the literature on high-dimensional data analysis. It follows from (12), (13), and (A4) that

$$\sup_{n \ge 1} \sum_{h=-\infty}^{\infty} |\gamma_{h,n}| \le C, \qquad (S1.2)$$

which, together with (18), leads to

$$\sum_{h=-\infty}^{\infty} |\gamma_h| \le C. \qquad (S1.3)$$

When the moment conditions are controlled, (A5) is more flexible than the assumptions on model dimensions in Medeiros and Mendes (2016), where {y_t} is assumed to be stationary, corresponding to the case a = b = d₁ = ··· = d_l = 0. To see this, note that (A1) and (A2) imply (19) and (20), respectively. Moreover, (A1), together with (A2), yields

$$\sup_t \mathrm{E}|y_t|^{2\eta} < C, \qquad (S1.4)$$

provided a = b = d₁ = ··· = d_l = 0. By (19), (20), (S1.4), and Hölder's inequality,

$$\sup_t \mathrm{E}|y_{t-i} \epsilon_t|^{\eta} < C \quad \text{and} \quad \sup_t \mathrm{E}|x_{t-l,j} \epsilon_t|^{2\eta q_0/(q_0+1)} < C,$$

for all 1 ≤ i ≤ q_n, 1 ≤ l ≤ r_j^{(n)}, and 1 ≤ j ≤ p_n. Therefore, the m in Assumption DGP(4) of Medeiros and Mendes (2016) obeys

$$m = \min\{\eta,\; 2\eta q_0/(q_0+1)\} = \eta. \qquad (S1.5)$$

Equation (S1.5) and the discussion after Assumption (REG) of Medeiros and Mendes (2016) lead to a restriction on the number of candidate variables such that

$$q_n + p_n^* = o\big( n^{\alpha \eta (\eta-2)/(2\eta + 4b)} \big), \qquad (S1.6)$$

where 0 < α < 1 and b > 0 are positive numbers defined therein. Equation (S1.6) requires p*_n to be much smaller than n unless η > 4. In contrast, (A5) allows p*_n > n even if η = 2.

We also make a few comments on (A6). For D ⊂ {1, ..., p_n}, let

$$\boldsymbol{\pi}_t(D) = (\pi_{t,s} : s \in D)^{\top}, \quad \boldsymbol{\mu}_t(D) = (\epsilon_t, \boldsymbol{\pi}_t^{\top}(D))^{\top}, \quad \boldsymbol{\Sigma}_n(D) = \mathrm{E}\big( \boldsymbol{\mu}_t(D) \boldsymbol{\mu}_t^{\top}(D) \big). \qquad (S1.7)$$

Then, it can be shown that (21) holds if {x_{t,j}} admits an infinite-order AR representation with absolutely summable coefficients and

$$\max_{\sharp(D) \le K_n + s_0} \lambda_{\min}^{-1}\big( \boldsymbol{\Sigma}_n(D) \big) \le C, \qquad (S1.8)$$

where s₀ is defined in (SS_A). When the AR components are deleted from model (2), (22) reduces to (3.2) of Ing and Lai (2011), which is closely related to the "exact recovery condition" introduced by Tropp (2004) in the analysis of the orthogonal matching pursuit, and plays a role similar to the "restricted eigenvalue assumption" introduced by Bickel, Ritov and Tsybakov (2009) in the study of LASSO. Condition (22) is a natural generalization of (3.2) of Ing and Lai (2011) when the (asymptotically) stationary AR component, z_t(q_n − d) = (z_{t−1}, ..., z_{t−q_n+d})^⊤, is taken into account.

We now present an example that illustrates the validity of (22) even when model (2) includes highly correlated lagged dependent variables and highly correlated exogenous variables. Assume in model (2) that r_j^{(n)} = 1 for all 1 ≤ j ≤ p_n and {(x_{t−1,1}, ..., x_{t−1,p_n}, ε_t)^⊤} is a sequence of white-noise vectors obeying E(ε_t²) = E(x_{t,j}²) = 1 for all 1 ≤ j ≤ p_n and 0 ≤ E(x_{t,i} x_{t,j}) = E(x_{t,l} ε_t) = λ < 1 for all 1 ≤ i ≠ j ≤ p_n and 1 ≤ l ≤ p_n. In this model specification, not only are the y_{t−j}, 1 ≤ j ≤ q_n, highly correlated, but so are the x_{t−1,j}, 1 ≤ j ≤ p_n, especially when λ is close to 1. Define G(q_n − d) = [G_{ij}]_{1 ≤ i,j ≤ q_n−d} = E⁻¹(z_{t,∞}(q_n − d) z_{t,∞}^⊤(q_n − d)) and c_J² = λ² ♯(J)/(1 − λ + ♯(J)λ). Since 0 < c_J² < λ and 0 < G₁₁ ≤ 1, it holds that

$$\max_{\substack{\sharp(J) \le K_n \\ (i,1) \notin J}} \sum_{(i^*,1) \in J} |a_{i^*,J}(i,1)| \le \max_{\sharp(J) \le K_n} \frac{(1 - \lambda G_{11})\, \lambda\, \sharp(J)}{(1 - c_J^2 G_{11})(1 - \lambda + \sharp(J)\lambda)} \le 1. \qquad (S1.9)$$

Moreover, one has

$$\sum_{s=1}^{q_n-d} \max_{\substack{\sharp(J) \le K_n \\ (i,1) \notin J}} |a_{s,J}(i,1)| \le \sum_{s=1}^{q_n-d} \max_{\sharp(J) \le K_n} \frac{\lambda - c_J^2}{1 - c_J^2 G_{11}}\, |G_{s1}| \le \sum_{s=1}^{q_n-d} |G_{s1}|.
(S1.10)

Define a² = 1 + λ(Σ_{j=1}^{p_n} β₁^{(j)})² + (1 − λ) Σ_{j=1}^{p_n} (β₁^{(j)})², b = λ Σ_{j=1}^{p_n} β₁^{(j)}, and h² = (a² + (a⁴ − 4b²)^{1/2})/2, noting that |b| < a²/2 and

$$h^2 > \max\Big\{ 1,\; \lambda\Big( \sum_{j=1}^{p_n} \beta_1^{(j)} \Big)^{\!2} + (1-\lambda) \sum_{j=1}^{p_n} \big( \beta_1^{(j)} \big)^2 \Big\}. \qquad (S1.11)$$

Then, it can be shown that {z_{t,∞}} admits an infinite-order AR representation,

$$z_{t,\infty} + \sum_{j=1}^{\infty} \phi_j z_{t-j,\infty} = \eta_t, \qquad (S1.12)$$

where 1 + Σ_{j=1}^∞ φ_j z^j ≠ 0 for |z| ≤ 1, Σ_{j=0}^∞ |φ_j| ≤ C, and {η_t} is a white-noise sequence with variance h². By using a modified Cholesky decomposition (e.g., Ing, Chiou and Guo (2016)), (S1.11), (S1.12), and Baxter's inequality (Baxter, 1962), one gets Σ_{s=1}^{q_n−d} |G_{s1}| ≤ C h⁻² Σ_{j=0}^∞ |φ_j| ≤ C, which, together with (S1.9) and (S1.10), leads to (22).

Finally, we provide a brief discussion of the strong sparsity conditions (SS_X) and (SS) in Sections 3.2 and 3.3. As mentioned earlier, a condition similar to (SS_X) has been utilized by Medeiros and Mendes (2016) to establish the selection consistency of the adaptive LASSO when {y_t} is stationary. Specifically, they assume

$$\frac{\lambda\, s_0^{1/2}}{n^{1-\xi/2}\, \phi_{\min}} = o\Big( \min_{(j,l) \in \mathcal{J}_n} |\beta_l^{(j)}| \Big), \qquad (S1.13)$$

where 0 < ξ < 1 is some constant defined in their Assumption (WEIGHTS) and 2φ_min is a lower bound for the minimum eigenvalue of the covariance matrix of the random vector formed by all relevant predictors. Assuming that φ_min is bounded away from 0 and choosing λ to be the value suggested after Assumption (REG) of Medeiros and Mendes (2016), (S1.13) becomes

$$\frac{s_0^{1/2}\, p_n^{*\,1/m}\, n^{\xi/m}}{n^{1/2}} = o\Big( \min_{(j,l) \in \mathcal{J}_n} |\beta_l^{(j)}| \Big). \qquad (S1.14)$$

In view of (S1.5) and the definitions of θ̄ and ξ, we conclude that (S1.14) is more stringent than (23) in (SS_X). While the left-hand side of (33) in (SS) is larger than that of (23) by a factor of about (log n)^{1/2}, it is still smaller than that of (S1.14).
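Returning to the equicorrelated example above, the bound 0 < c_J² < λ used in (S1.9) and (S1.10) is elementary arithmetic: c_J² = λ²m/(1 − λ + mλ) with m = ♯(J) ≥ 1, and λ − c_J² = λ(1 − λ)/(1 − λ + mλ) > 0, which tends to 0 as m grows. A quick numeric verification (our own illustration):

```python
import numpy as np

def c_squared(lam, m):
    # c_J^2 = λ^2 ♯(J) / (1 - λ + ♯(J) λ), as in the equicorrelated example
    return lam**2 * m / (1 - lam + m * lam)

lams = np.linspace(0.01, 0.99, 99)
ms = np.arange(1, 200)

# Check 0 < c_J^2 < λ over a grid, and that c_J^2 approaches λ as ♯(J) grows.
worst_gap = min(lam - c_squared(lam, m) for lam in lams for m in ms)
limit_gap = max(lam - c_squared(lam, 10**6) for lam in lams)
print(worst_gap, limit_gap)
```

The strictly positive gap is what keeps the denominators 1 − c_J² G₁₁ in (S1.9)–(S1.10) bounded away from zero, uniformly in the size of J.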
S2. Main proofs

In this section, we present the proofs of the main results in the paper, namely Theorems 3.1–3.3. The proofs, shown in Section S2.2, are preceded by the proofs of Theorems A.1–A.3. Some further details regarding the proofs are collected in Section S2.3.

S2.1. Proofs of Theorems A.1–A.3 and discussions

Proof of Theorem A.1. Let a_t^{(m)}, t ≥ 1 and 1 ≤ m ≤ 2l+2, be defined by a_t^{(1)} = 1, a_t^{(2)} = (−1)^t, a_t^{(3)} = √2 sin(tϑ₁), a_t^{(4)} = √2 cos(tϑ₁), ..., a_t^{(2l+1)} = √2 sin(tϑ_l), and a_t^{(2l+2)} = √2 cos(tϑ_l). Note first that each element of B_n is of the form

$$\frac{1}{\sqrt{n}} \sum_{t=1}^{\lfloor nu \rfloor} a_t \delta_t, \qquad (S2.1)$$

where u ∈ [0,1] and {a_t} is one of the {a_t^{(m)}}, 1 ≤ m ≤ 2l+2. In Section S2.3, we show that (S2.1) has the following decomposition:

$$\frac{1}{\sqrt{n}} \sum_{t=1}^{\lfloor nu \rfloor} a_t \delta_t = R_n(u) + \sum_{t=1}^{\lfloor nu \rfloor - 1} \left( X_{t,n} + \frac{a_t}{\sqrt{n}} \epsilon_t \right) + \frac{a_{\lfloor nu \rfloor}}{\sqrt{n}} \epsilon_{\lfloor nu \rfloor}, \qquad (S2.2)$$

where

$$X_{t,n} = \frac{1}{\sqrt{n}} \sum_{j=1}^{p_n} \sum_{s=t+1}^{n} a_s \left( \sum_{l=1}^{(s-t) \wedge r_j} \beta_l^{(j)}\, p_{s-l-t,j} \right) \pi_{t,j},$$

and R_n(u) is a remainder term whose explicit form is given in Section S2.3. By Assumption (A1),

$$\sup_t \mathrm{E}|\epsilon_t|^{2\eta_1} < C, \qquad (S2.3)$$

yielding

$$\sup_{u \in [0,1]} \frac{|a_{\lfloor nu \rfloor}\, \epsilon_{\lfloor nu \rfloor}|}{\sqrt{n}} = o_p(1) \qquad (S2.4)$$

and

$$\mathrm{E}\,\frac{a_{\lfloor nu \rfloor}^2\, \epsilon_{\lfloor nu \rfloor}^2}{n} = o(1), \quad u \in [0,1]. \qquad (S2.5)$$

By Assumption (A2),

$$\sup_t \max_{1 \le j \le p_n} \mathrm{E}|\pi_{t,j}|^{2\eta_1} \le C. \qquad (S2.6)$$

Using (S2.6), after some algebraic manipulations, we show in Section S2.3 that

$$\sup_{u \in [0,1]} |R_n(u)| = o_p(1), \qquad (S2.7)$$
$$\mathrm{E} R_n^2(u) = o(1), \quad u \in [0,1]. \qquad (S2.8)$$

By (S2.2), (S2.4), and (S2.7), (53) in the main paper holds if

$$\left( v_1^{-1} \sum_{t=1}^{\lfloor n u_1 \rfloor - 1} \Big( X_{t,n}^{(1)} + \frac{a_t^{(1)}}{\sqrt{n}} \epsilon_t \Big), \;\ldots,\; v_{2l+2}^{-1} \sum_{t=1}^{\lfloor n u_{2l+2} \rfloor - 1} \Big( X_{t,n}^{(2l+2)} + \frac{a_t^{(2l+2)}}{\sqrt{n}} \epsilon_t \Big) \right) \Rightarrow \mathbf{W}, \qquad (S2.9)$$

where X_{t,n}^{(m)} is X_{t,n} with a_s = a_s^{(m)}, and u_j ∈ [0,1] for 1 ≤ j ≤ 2l+2. Define

$$\xi_{t,n}^{(m)} = v_m^{-1}\Big( X_{t,n}^{(m)} + \frac{a_t^{(m)}}{\sqrt{n}} \epsilon_t \Big).$$

Note that {(ξ_{t,n}^{(1)}, ..., ξ_{t,n}^{(2l+2)})^⊤, F_t} is an m.d.s. By Theorem 3.3 of Helland (1982), (S2.9) follows if

$$\sum_{t=1}^{\lfloor nu \rfloor} \mathrm{E}\big( (\xi_{t,n}^{(m)})^2 \mid \mathcal{F}_{t-1} \big) \overset{p}{\to} u, \quad 1 \le m \le 2l+2, \qquad (S2.10)$$
$$\sum_{t=1}^{\lfloor nu \rfloor} \mathrm{E}\big( |\xi_{t,n}^{(m)}|^{2+c} \mid \mathcal{F}_{t-1} \big) \overset{p}{\to} 0, \quad 1 \le m \le 2l+2, \qquad (S2.11)$$

for some c > 0, and

$$\sum_{t=1}^{\lfloor nu \rfloor} \mathrm{E}\big( \xi_{t,n}^{(k)} \xi_{t,n}^{(m)} \mid \mathcal{F}_{t-1} \big) \overset{p}{\to} 0, \quad 1 \le k \ne m \le 2l+2.
(S2.12)

In the following, we only prove (S2.10), because (S2.12) can be proved similarly and (S2.11) follows directly from (S2.86) in Section S2.3. Let ζ_{t,n}^{(m)} = v_m ξ_{t,n}^{(m)}. Then,

$$v_m^2 \sum_{t=1}^{\lfloor nu \rfloor} \mathrm{E}\big( (\xi_{t,n}^{(m)})^2 \mid \mathcal{F}_{t-1} \big) = \sum_{t=1}^{\lfloor nu \rfloor} \mathrm{E}\big( (\zeta_{t,n}^{(m)})^2 \big) - \sum_{t=1}^{\lfloor nu \rfloor} \Big\{ (\zeta_{t,n}^{(m)})^2 - \mathrm{E}\big( (\zeta_{t,n}^{(m)})^2 \mid \mathcal{F}_{t-1} \big) \Big\} + \sum_{t=1}^{\lfloor nu \rfloor} \Big\{ (\zeta_{t,n}^{(m)})^2 - \mathrm{E}\big( (\zeta_{t,n}^{(m)})^2 \big) \Big\}. \qquad (S2.13)$$

We show in Section S2.3 that

$$\mathrm{E}\left( \sum_{t=1}^{\lfloor nu \rfloor} \zeta_{t,n}^{(m)} \right)^{\!2} \le C, \qquad (S2.14)$$

which, together with (S2.2), (S2.5), (S2.8), Assumption (A3), (S1.2), and (S1.3), gives, for 1 ≤ m ≤ 2l+2,

$$\lim_{n \to \infty} \sum_{t=1}^{\lfloor nu \rfloor} \mathrm{E}\big( (\zeta_{t,n}^{(m)})^2 \big) = \lim_{n \to \infty} \mathrm{E}\left( \frac{1}{\sqrt{n}} \sum_{t=1}^{\lfloor nu \rfloor} a_t^{(m)} \delta_t \right)^{\!2} = u\, v_m^2. \qquad (S2.15)$$

Moreover, by making use of (S2.3), (S2.6), and Assumptions (A1) and (A2), we prove in Section S2.3 that

$$\left| \sum_{t=1}^{\lfloor nu \rfloor} \Big\{ (\zeta_{t,n}^{(m)})^2 - \mathrm{E}\big( (\zeta_{t,n}^{(m)})^2 \mid \mathcal{F}_{t-1} \big) \Big\} \right| + \left| \sum_{t=1}^{\lfloor nu \rfloor} \Big\{ (\zeta_{t,n}^{(m)})^2 - \mathrm{E}\big( (\zeta_{t,n}^{(m)})^2 \big) \Big\} \right| = o_p(1). \qquad (S2.16)$$

Combining (S2.16) with (S2.15) and (S2.13), we have (S2.10). Thus, the proof is complete.

Proof of Theorem A.2. In the following, we denote by C_r any generic absolute constant depending only on r whose value may vary from place to place. By some straightforward algebra, it is not difficult to see that

$$\mathrm{E}|Q_n - \mathrm{E}Q_n|^{2r} = \mathrm{E}\left| \sum_{l=-\infty}^{n} \sum_{k=-\infty}^{n} c_{k,l}\, \tilde{x}_l \tilde{y}_k - \mathrm{E}Q_n \right|^{2r},$$

where $c_{k,l} = \sum_{i=l \vee 1}^{n} \sum_{j=k \vee 1}^{n} q_{i,j}\, a_{i,l}\, b_{j,k}$. Then observe that

$$\mathrm{E}\left| \sum_{l=-\infty}^{n} \sum_{k=-\infty}^{n} c_{k,l}\, \tilde{x}_l \tilde{y}_k - \mathrm{E}Q_n \right|^{2r} \le C_r \left\{ \mathrm{E}\left| \sum_{l=-\infty}^{n} c_{l,l}\, \big( \tilde{x}_l \tilde{y}_l - \mathrm{E}(\tilde{x}_l \tilde{y}_l) \big) \right|^{2r} + \mathrm{E}\left| \sum_{k=-\infty}^{n} \Big( \sum_{l=-\infty}^{k-1} c_{k,l}\, \tilde{x}_l \Big) \tilde{y}_k \right|^{2r} + \mathrm{E}\left| \sum_{l=-\infty}^{n} \Big( \sum_{k=-\infty}^{l-1} c_{k,l}\, \tilde{y}_k \Big) \tilde{x}_l \right|^{2r} \right\} := C_r \{ (\mathrm{I}) + (\mathrm{II}) + (\mathrm{III}) \}.$$

By Burkholder's inequality, Minkowski's inequality, the Cauchy–Schwarz inequality, and the fact that sup_t E‖z̃_t‖^{2r} ≤ C, we have

$$(\mathrm{I}) = \mathrm{E}\left| \sum_{j=-\infty}^{n} \Big( \sum_{l=j}^{n} c_{l,l}\, \boldsymbol{\theta}_{l-j}^{\top} \Big) \tilde{\mathbf{z}}_j \right|^{2r} \le C_r \left\{ \sum_{j=-\infty}^{n} \left\| \Big( \sum_{l=j}^{n} c_{l,l}\, \boldsymbol{\theta}_{l-j}^{\top} \Big) \tilde{\mathbf{z}}_j \right\|_{2r}^{2} \right\}^{r} \le C_r \left\{ \sum_{j=-\infty}^{n} \left\| \sum_{l=j}^{n} c_{l,l}\, \boldsymbol{\theta}_{l-j} \right\|^{2} \right\}^{r}. \qquad (S2.17)$$

Note that

$$\sum_{j=-\infty}^{n} \left\| \sum_{l=j}^{n} c_{l,l}\, \boldsymbol{\theta}_{l-j} \right\|^{2} = \sum_{l=-\infty}^{n} \sum_{k=-\infty}^{n} c_{l,l}\, c_{k,k} \sum_{j=0}^{\infty} \boldsymbol{\theta}_j^{\top} \boldsymbol{\theta}_{j+|l-k|} \le C \left( \sum_{j=0}^{\infty} \|\boldsymbol{\theta}_j\| \right)^{\!2} \left( \sum_{l=-\infty}^{n} c_{l,l}^{2} \right).$$

This, combined with (S2.17) and $\sum_{j=0}^{\infty} \|\boldsymbol{\theta}_j\| \le C$, yields

$$(\mathrm{I}) \le C_r \left( \sum_{l=-\infty}^{n} c_{l,l}^{2} \right)^{\!r}. \qquad (S2.18)$$

Next, by Burkholder's inequality, Minkowski's inequality, Hölder's inequality, and (55) in the main paper, we have

$$(\mathrm{II}) \le C_r \left\{ \sum_{k=-\infty}^{n} \left\| \Big( \sum_{l=-\infty}^{k-1} c_{k,l}\, \tilde{x}_l \Big)^{\!2} \tilde{y}_k^2 \right\|_{r} \right\}^{r} \le C_r \left\{ \sum_{k=-\infty}^{n} \left\| \sum_{l=-\infty}^{k-1} c_{k,l}\, \tilde{x}_l \right\|_{p}^{2} \|\tilde{y}_k\|_{q}^{2} \right\}^{r} \le C_r \left( \sum_{k=-\infty}^{n} \sum_{l=-\infty}^{k-1} c_{k,l}^{2} \right)^{\!r}. \qquad (S2.19)$$

By a similar argument, it can be shown that

$$(\mathrm{III}) \le C_r \left( \sum_{l=-\infty}^{n} \sum_{k=-\infty}^{l-1} c_{k,l}^{2} \right)^{\!r}.$$

This, in conjunction with (S2.18) and (S2.19), yields the desired conclusion.

Theorems A.1 and A.2 can be used to bound from below the minimum eigenvalues of the sample covariance matrices of the candidate models; see Theorem A.3. To state this result, we need to introduce some notation. Recall φ(B) in Section 3.1. Inspired by Chan and Wei (1988), we define

$$u_t(j) = [(1-B)^{-j} \phi(B)]\, y_t, \quad v_t(j) = [(1+B)^{-j} \phi(B)]\, y_t, \quad g_t(k,j) = [(1 - 2\cos\vartheta_k\, B + B^2)^{-j} \phi(B)]\, y_t, \qquad (S2.20)$$

where k = 1, ..., l. For k = 1, ..., l, it can be shown that

$$g_t(k,1) = \frac{1}{\sin\vartheta_k} \sum_{s=1}^{t} \sin[(t-s+1)\vartheta_k]\, \delta_s = \sum_{s=0}^{t-1} \frac{\sin[(s+1)\vartheta_k]}{\sin\vartheta_k}\, \delta_{t-s} := \sum_{s=0}^{t-1} \kappa_s(k,1)\, \delta_{t-s},$$

where |κ_s(k,1)| ≤ C for all s ≥ 0. By induction it follows that

$$g_t(k,j) = \sum_{s=0}^{t-1} \kappa_s(k,j)\, \delta_{t-s}, \quad \text{where } |\kappa_s(k,j)| \le C (s+1)^{j-1}, \qquad (S2.21)$$

for all s ≥ 0, 1 ≤ k ≤ l, and 1 ≤ j ≤ d_k. Similarly,

$$u_t(j_1) = \sum_{s=0}^{t-1} \iota_s(j_1)\, \delta_{t-s}, \qquad v_t(j_2) = \sum_{s=0}^{t-1} \vartheta_s(j_2)\, \delta_{t-s},$$

where

$$|\iota_s(j_1)| \le C (s+1)^{j_1 - 1}, \qquad |\vartheta_s(j_2)| \le C (s+1)^{j_2 - 1}, \qquad (S2.22)$$

for all s ≥ 0, 1 ≤ j₁ ≤ a, and 1 ≤ j₂ ≤ b. Let Q_n be defined implicitly by

$$\mathbf{Q}_n \mathbf{w}_t(J) = \big( \mathbf{u}_t^{\top}, \mathbf{v}_t^{\top}, \mathbf{g}_t^{\top}(1), \ldots, \mathbf{g}_t^{\top}(l), \mathbf{z}_t^{\top}(q_n - d), \mathbf{x}_t^{\top}(J) \big)^{\top},$$

where u_t = (u_{t−1}(a), ..., u_{t−1}(1))^⊤, v_t = (v_{t−1}(b), ..., v_{t−1}(1))^⊤,

$$\mathbf{g}_t(k) = \big( g_{t-1}(k,1),\, g_{t-2}(k,1),\, \ldots,\, g_{t-1}(k,d_k),\, g_{t-2}(k,d_k) \big)^{\top}, \quad 1 \le k \le l,$$

and recall that w_t(J) = (y_{t−1}, ..., y_{t−q_n}, x_t^⊤(J))^⊤, x_t(J) = (x_{t−l,j} : (j,l) ∈ J)^⊤, and z_t(k) = (z_{t−1}, ..., z_{t−k})^⊤. It is not difficult to show that Q_n is nonsingular and satisfies, for all J ⊂ {(j,l) : 1 ≤ j ≤ p_n, 1 ≤ l ≤ r_j^{(n)}},

$$\lambda_{\max}(\mathbf{Q}_n^{\top} \mathbf{Q}_n) \le C.
(S2.23)

It is well known that each element of Q_n w_t(J) has a certain stochastic order of magnitude (see, for example, Chan and Wei, 1988; Ing, Sin and Yu, 2010; Tsay, 1984). Thus, we may consider a normalized version of Q_n w_t(J),

$$\mathbf{s}_t(J) = (\tilde{y}_{t,1}, \ldots, \tilde{y}_{t,d}, \mathbf{z}_t^{\top}(q_n - d), \mathbf{x}_t^{\top}(J))^{\top} = \mathbf{G}_n \mathbf{Q}_n \mathbf{w}_t(J),$$

where

$$\mathbf{G}_n = \mathrm{diag}\big( \mathbf{G}_{n,u}, \mathbf{G}_{n,v}, \mathbf{G}_{n,g}(1), \ldots, \mathbf{G}_{n,g}(l), \mathbf{I}_{q_n + \sharp(J) - d} \big) \in \mathbb{R}^{(q_n + \sharp(J)) \times (q_n + \sharp(J))},$$

with

$$\mathbf{G}_{n,u} = \mathrm{diag}(n^{-a+1/2}, \ldots, n^{-1/2}), \qquad \mathbf{G}_{n,v} = \mathrm{diag}(n^{-b+1/2}, \ldots, n^{-1/2}),$$
$$\mathbf{G}_{n,g}(k) = \mathrm{diag}(\underbrace{n^{-1/2}, n^{-1/2}, n^{-3/2}, n^{-3/2}, \ldots, n^{-d_k+1/2}, n^{-d_k+1/2}}_{2d_k}), \quad k = 1, \ldots, l.$$

The following moment bounds imply useful concentration inequalities for quadratic forms involving the covariates and the lagged dependent variables, and they are used repeatedly throughout this paper. These bounds can be verified using Theorem A.2 and the definitions above.

Lemma SL2.1. Assume that (A1), (A2), and (A4) hold. Then,

$$\max_{\substack{1 \le j_1, j_2 \le p_n \\ 1 \le l_1 \le r_{j_1}^{(n)},\, 1 \le l_2 \le r_{j_2}^{(n)}}} \mathrm{E}\left| n^{-1/2} \sum_{t=\bar{r}_n+1}^{n} \big( x_{t-l_1,j_1} x_{t-l_2,j_2} - \mathrm{E}(x_{t-l_1,j_1} x_{t-l_2,j_2}) \big) \right|^{\eta q_0} = O(1), \qquad (S2.24)$$

$$\max_{1 \le i,j \le q_n - d} \mathrm{E}\left| n^{-1/2} \sum_{t=\bar{r}_n+1}^{n} \big( z_{t-i} z_{t-j} - \mathrm{E}(z_{t-i,\infty} z_{t-j,\infty}) \big) \right|^{\eta} = O(1), \qquad (S2.25)$$

$$\max_{\substack{1 \le j \le p_n,\, 1 \le l \le r_j^{(n)} \\ 1 \le k \le q_n - d}} \mathrm{E}\left| n^{-1/2} \sum_{t=\bar{r}_n+1}^{n} \big( x_{t-l,j} z_{t-k} - \mathrm{E}(x_{t-l,j} z_{t-k,\infty}) \big) \right|^{\frac{2\eta q_0}{q_0+1}} = O(1), \qquad (S2.26)$$

$$\max_{\substack{1 \le j \le p_n,\, 1 \le l \le r_j^{(n)} \\ 1 \le i \le d}} \mathrm{E}\left| n^{-1/2} \sum_{t=\bar{r}_n+1}^{n} x_{t-l,j}\, \tilde{y}_{t,i} \right|^{\frac{2\eta q_0}{q_0+1}} = O(1), \qquad (S2.27)$$

$$\max_{\substack{1 \le k \le q_n - d \\ 1 \le i \le d}} \mathrm{E}\left| n^{-1/2} \sum_{t=\bar{r}_n+1}^{n} z_{t-k}\, \tilde{y}_{t,i} \right|^{\eta} = O(1). \qquad (S2.28)$$

Now we can prove Theorem A.3.

Proof of Theorem A.3. Define ỹ_t = (ỹ_{t,1}, ..., ỹ_{t,d})^⊤. By Theorem A.1, it can be shown that

$$\mathbf{P}_n \equiv n^{-1} \sum_{t=\bar{r}_n+1}^{n} \tilde{\mathbf{y}}_t \tilde{\mathbf{y}}_t^{\top} \Rightarrow \mathrm{diag}\big( v_1^2 F,\; v_2^2 \tilde{F},\; v_3^2 H_1,\; \ldots,\; v_{2l+1}^2 H_l \big), \qquad (S2.29)$$

where F, F̃, H₁, ..., H_l are independent and almost surely nonsingular random matrices defined in Theorem 3.5.1 of Chan and Wei (1988). Since v_i² > 0, it follows from (S2.29) and the continuous mapping theorem that

$$\lambda_{\min}^{-1}(\mathbf{P}_n) = O_p(1). \qquad (S2.30)$$

Let

$$\mathbf{S}_n(J) = \begin{pmatrix} \mathbf{P}_n & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Gamma}_n(J) \end{pmatrix}.$$

Then,

$$\min_{\sharp(J) \le \bar{K}_n} \lambda_{\min}\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{s}_t(J) \mathbf{s}_t^{\top}(J) \right) \ge \min_{\sharp(J) \le \bar{K}_n} \lambda_{\min}(\mathbf{S}_n(J)) - \max_{\sharp(J) \le \bar{K}_n} \left\| n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{s}_t(J) \mathbf{s}_t^{\top}(J) - \mathbf{S}_n(J) \right\|.
(S2.31)

Note that if

$$\max_{\sharp(J) \le \bar{K}_n} \left\| n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{s}_t(J) \mathbf{s}_t^{\top}(J) - \mathbf{S}_n(J) \right\| = O_p\!\left( \frac{\bar{K}_n\, p_n^{*\,2/(\eta q_0)}}{n^{1/2}} + \frac{q_n^{1/2}\, \bar{K}_n^{1/2}\, p_n^{*\,(q_0+1)/(2\eta q_0)}}{n^{1/2}} + \frac{q_n}{n^{1/2}} \right) = o_p(1), \qquad (S2.32)$$

then (57) in the main paper follows from (S2.30)–(S2.32) and (A6). To show (S2.32), note first that, by (S2.24),

$$\max_{\sharp(J) \le \bar{K}_n} \sqrt{ \sum_{(l_2,j_2) \in J} \sum_{(l_1,j_1) \in J} \left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \big( x_{t-l_1,j_1} x_{t-l_2,j_2} - \mathrm{E}(x_{t-l_1,j_1} x_{t-l_2,j_2}) \big) \right)^{\!2} } \le \bar{K}_n \max_{\substack{1 \le j_1, j_2 \le p_n \\ 1 \le l_1 \le r_{j_1}^{(n)},\, 1 \le l_2 \le r_{j_2}^{(n)}}} \left| n^{-1} \sum_{t=\bar{r}_n+1}^{n} \big( x_{t-l_1,j_1} x_{t-l_2,j_2} - \mathrm{E}(x_{t-l_1,j_1} x_{t-l_2,j_2}) \big) \right| = O_p\!\left( \frac{\bar{K}_n\, p_n^{*\,2/(\eta q_0)}}{n^{1/2}} \right). \qquad (S2.33)$$

In addition, (S2.25) and (S2.26) ensure

$$\sqrt{ \sum_{i=1}^{q_n-d} \sum_{j=1}^{q_n-d} \left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \big( z_{t-i} z_{t-j} - \mathrm{E}(z_{t-i,\infty} z_{t-j,\infty}) \big) \right)^{\!2} } = O_p\!\left( \frac{q_n}{n^{1/2}} \right) \qquad (S2.34)$$

and

$$\sqrt{ \sum_{k=1}^{q_n-d} \max_{\sharp(J) \le \bar{K}_n} \sum_{(j,l) \in J} \left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \big( x_{t-l,j} z_{t-k} - \mathrm{E}(x_{t-l,j} z_{t-k,\infty}) \big) \right)^{\!2} } = O_p\!\left( \frac{q_n^{1/2}\, \bar{K}_n^{1/2}\, p_n^{*\,(q_0+1)/(2\eta q_0)}}{n^{1/2}} \right), \qquad (S2.35)$$

respectively. Similarly, (S2.27) and (S2.28) imply

$$\sqrt{ \sum_{i=1}^{d} \max_{\sharp(J) \le \bar{K}_n} \sum_{(j,l) \in J} \left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} x_{t-l,j}\, \tilde{y}_{t,i} \right)^{\!2} } = O_p\!\left( \frac{\bar{K}_n^{1/2}\, p_n^{*\,(q_0+1)/(2\eta q_0)}}{n^{1/2}} \right) \qquad (S2.36)$$

and

$$\sqrt{ \sum_{i=1}^{d} \sum_{k=1}^{q_n-d} \left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} z_{t-k}\, \tilde{y}_{t,i} \right)^{\!2} } = O_p\!\left( \frac{q_n^{1/2}}{n^{1/2}} \right). \qquad (S2.37)$$

Combining (S2.33)–(S2.37) yields the first equality of (S2.32). The second equality of (S2.32) is ensured by (A5) and the fact that K̄_n = o(n^{1/2}/p_n^{*θ̄}). Finally, by noticing that

$$\lambda_{\min}\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{w}_t(J) \mathbf{w}_t^{\top}(J) \right) \ge \lambda_{\min}\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{s}_t(J) \mathbf{s}_t^{\top}(J) \right) \lambda_{\max}^{-1}(\mathbf{Q}_n^{\top} \mathbf{Q}_n)\, \lambda_{\min}\big( \mathbf{G}_n^{-2} \big),$$

(58) in the main paper follows from (57), λ_min(G_n⁻²) = 1, and (S2.23).

As the above proof shows, the delicacy of Theorem A.3 lies in the fact that P_n := n⁻¹ Σ_{t=r̄_n+1}^{n} ỹ_t ỹ_t^⊤ does not converge in probability to a deterministic limit; hence the analysis of its minimum eigenvalue is much more involved. The problem is resolved through the novel FCLT developed in Theorem A.1, which ensures that the weak limit of P_n exists and is almost surely positive definite. Consequently, as long as the size of a candidate model is no larger than K_n + q_n, Theorem A.3 guarantees that the corresponding sample covariance matrix is well behaved.

S2.2. Proofs of Theorems 3.1–3.3

The performance of the weak noiseless FSR is evaluated by the "noiseless" mean squared error â_m defined in (30). In (S3.3) of Section S3, we derive a convergence rate for â_m as m increases.
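The two-stage logic analyzed in this section—greedy forward inclusion followed by trimming with a penalized criterion—can be sketched generically. The toy below is our own illustration with i.i.d. regressors and a generic penalty w_n; it is not the paper's FHTD implementation, its HDIC constants, or its unit-root setting:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, s0 = 200, 50, 3
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s0] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

# Stage 1: forward stepwise regression -- greedily add the predictor most
# correlated with the current residual (a stand-in for the screening step).
selected, resid = [], y.copy()
for _ in range(10):
    scores = np.abs(X.T @ resid)
    scores[selected] = -np.inf
    j = int(np.argmax(scores))
    selected.append(j)
    Xs = X[:, selected]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef

# Stage 2: HDIC-style trimming -- penalized log residual variance,
# minimized along the forward path (w_n mimics the role of w_{n,p_n}).
w_n = 3.0 * np.log(p)
crit = []
for k in range(1, len(selected) + 1):
    Xs = X[:, selected[:k]]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    sigma2 = np.mean((y - Xs @ coef) ** 2)
    crit.append(np.log(sigma2) + k * w_n / n)
k_hat = int(np.argmin(crit)) + 1
print(selected[:k_hat])
```

With strong signals the forward path picks up the three relevant predictors first, and the penalty stops the path shortly thereafter, mirroring the k̂ versus k̃ comparison in the proof of Theorem 3.2.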
The convergence of â_m, along with Theorem A.3, enables us to establish in (S2.45) the convergence rate of â_m's semi-noiseless counterpart, ŝ_m, defined in (32). As we will see later, (S2.45) serves as the key vehicle for developing the sure-screening property of Ĵ_m.

Proof of Theorem 3.1. By (23), there exists l_n → ∞ such that

$$\frac{l_n\, s_0\, p_n^{*\,2\bar\theta}}{n\, \min_{(j,l) \in \mathcal{J}_n} \big( \beta_l^{(j)} \big)^2} = o(1).$$

Define

$$\mathcal{A}_n(K_n) = \left\{ \max_{\substack{\sharp(J) \le K_n - 1 \\ (i,l) \notin J}} |\psi_{J,(i,l)} - \hat{\psi}_{J,(i,l)}| \le \frac{l_n^{1/2}\, p_n^{*\,(q_0+1)/(2\eta q_0)}}{n^{1/2}} \right\}$$

and

$$\mathcal{B}_n(K_n) = \left\{ \min_{0 \le m \le K_n - 1}\, \max_{(j,l) \notin \hat{J}_m} |\psi_{\hat{J}_m,(j,l)}| > \tilde{\xi}\, \frac{l_n^{1/2}\, p_n^{*\,(q_0+1)/(2\eta q_0)}}{n^{1/2}} \right\},$$

where ξ̃ > 2 is some constant. On A_n(K_n) ∩ B_n(K_n), it holds that for all 1 ≤ m ≤ K_n,

$$|\psi_{\hat{J}_{m-1},(\hat{j}_m,\hat{l}_m)}| \ge -|\hat{\psi}_{\hat{J}_{m-1},(\hat{j}_m,\hat{l}_m)} - \psi_{\hat{J}_{m-1},(\hat{j}_m,\hat{l}_m)}| + |\hat{\psi}_{\hat{J}_{m-1},(\hat{j}_m,\hat{l}_m)}| \ge -\max_{\substack{\sharp(J) \le m-1 \\ (j,l) \notin J}} |\hat{\psi}_{J,(j,l)} - \psi_{J,(j,l)}| + \max_{(j,l) \notin \hat{J}_{m-1}} |\hat{\psi}_{\hat{J}_{m-1},(j,l)}| \ge -2\, l_n^{1/2}\, p_n^{*\,(q_0+1)/(2\eta q_0)}\, n^{-1/2} + \max_{(j,l) \notin \hat{J}_{m-1}} |\psi_{\hat{J}_{m-1},(j,l)}| \ge \xi \max_{(j,l) \notin \hat{J}_{m-1}} |\psi_{\hat{J}_{m-1},(j,l)}|, \qquad (S2.38)$$

where 0 < ξ = 1 − 2/ξ̃ < 1. By (S2.38) and (S3.3), we show in Section S3 that for all 1 ≤ m ≤ K_n,

$$\hat{s}_m\, \mathbb{I}_{\mathcal{A}_n(K_n) \cap \mathcal{B}_n(K_n)} \le C_n \exp(-m \xi^2 D_n / s_0), \qquad (S2.39)$$

where

$$C_n = n^{-1} \sum_{t=\bar{r}_n+1}^{n} v_{t,n}^2, \qquad D_n = \frac{\min_{1 \le \sharp(J) \le K_n} \lambda_{\min}\big( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{w}_t(\mathcal{J}_n \cup J) \mathbf{w}_t^{\top}(\mathcal{J}_n \cup J) \big)}{\max_{(j,l) \in \mathcal{J}_n} n^{-1} \|\boldsymbol{x}_l^{(j)}\|^2}, \qquad (S2.40)$$

recalling that v_{t,n} is defined in (3). We also show in Section S3 that for all 1 ≤ m ≤ K_n,

$$\hat{s}_m\, \mathbb{I}_{\mathcal{B}_n^c(K_n)} \le \frac{s_0\, l_n\, \tilde{\xi}^2\, p_n^{*\,(q_0+1)/(\eta q_0)}}{n\, D_n}, \qquad (S2.41)$$

and

$$\lim_{n \to \infty} P\big( \mathcal{A}_n^c(K_n) \big) = 0. \qquad (S2.42)$$

By (A4) and (33),

$$s_0 = o\big( (n / p_n^{*\,2\bar\theta})^{1/3} \big), \qquad (S2.43)$$

which, together with (24), yields s₀ u_n log n = o(K_n) for some u_n → ∞. It follows from (58), (A2), (A4), and (S2.43) that

$$C_n = O_p(1) \quad \text{and} \quad D_n^{-1} = O_p(1). \qquad (S2.44)$$

According to (S2.39)–(S2.42) and (S2.44),

$$\max_{1 \le m \le K_n} \frac{\hat{s}_m}{\exp(-m \xi^2 D_n / s_0) + s_0\, l_n\, p_n^{*\,(q_0+1)/(\eta q_0)} / n} = O_p(1). \qquad (S2.45)$$

Let m̃_n = s₀ u_n log n. The second equation of (S2.44) implies that for any M̄ > 0,

$$\exp(-\tilde{m}_n \xi^2 D_n / s_0) = O_p(n^{-\bar{M}}),$$

and hence (S2.45) leads to

$$\hat{s}_{\tilde{m}_n} = O_p\big( s_0\, l_n\, p_n^{*\,(q_0+1)/(\eta q_0)} / n \big). \qquad (S2.46)$$

On the set {J_n ⊄ Ĵ_{m̃_n}}, one has

$$\hat{s}_{\tilde{m}_n} \ge \lambda_{\min}\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{w}_t(\mathcal{J}_n \cup \hat{J}_{\tilde{m}_n}) \mathbf{w}_t^{\top}(\mathcal{J}_n \cup \hat{J}_{\tilde{m}_n}) \right) \min_{(j,l) \in \mathcal{J}_n} |\beta_l^{(j)}|^2 \ge \min_{\sharp(J) \le K_n} \lambda_{\min}\left( n^{-1} \sum_{t=\bar{r}_n+1}^{n} \mathbf{w}_t(J) \mathbf{w}_t^{\top}(J) \right) \min_{(j,l) \in \mathcal{J}_n} |\beta_l^{(j)}|^2. \qquad (S2.47)$$

Combining (S2.47), (S2.46), (58), and (33) leads to

$$P(\mathcal{J}_n \nsubseteq \hat{J}_{K_n}) \le P(\mathcal{J}_n \nsubseteq \hat{J}_{\tilde{m}_n}) \le P\Big( O_p\big( s_0\, l_n\, p_n^{*\,(q_0+1)/(\eta q_0)} / n \big) \ge \min_{(j,l) \in \mathcal{J}_n} |\beta_l^{(j)}|^2 \Big) = o(1). \qquad (S2.48)$$

Thus, the desired conclusion (25) follows.

Proof of Theorem 3.2. Let k̃_n = min{1 ≤ k ≤ K_n : J_n ⊆ Ĵ_k} if J_n ⊆ Ĵ_{K_n}, and k̃_n = K_n if J_n ⊄ Ĵ_{K_n}. We start by showing that

$$\lim_{n \to \infty} P(\hat{k}_n = \tilde{k}_n) = 1. \qquad (S2.49)$$

By Theorem 3.1, (35) is an immediate consequence of (S2.49). In the rest of the proof, we suppress the dependence on n and write k̂ and k̃ instead of k̂_n and k̃_n. Let m̃*_n = s₀ log n · min{u_n, (d_n/log n)^δ}, where u_n is defined after (S2.43) and δ is defined in (34). By an argument similar to that used to prove (S2.48), it holds that

$$\lim_{n \to \infty} P(\mathcal{D}_n) = 1, \qquad (S2.50)$$

where D_n = {J_n ⊆ Ĵ_{m̃*_n}} = {k̃_n ≤ m̃*_n}. Therefore, (S2.49) is ensured by

$$P(\hat{k} < \tilde{k},\, \mathcal{D}_n) = o(1) \qquad (S2.51)$$

and

$$P(\tilde{k} < \hat{k},\, \mathcal{D}_n) = o(1). \qquad (S2.52)$$

By the definition of HDIC,

$$\hat{\sigma}^2_{M_{\tilde{k}-1}} - \hat{\sigma}^2_{M_{\tilde{k}}} \le \hat{\sigma}^2_{M_{\tilde{k}}} \left\{ \exp\!\left( \frac{(\tilde{k} - \hat{k})\, w_{n,p_n}}{n} \right) - 1 \right\} \quad \text{on } \{\hat{k} < \tilde{k}\}, \qquad (S2.53)$$

where M_k = [q_n] ⊕ Ĵ_k. Straightforward calculations give

$$\hat{\sigma}^2_{M_{\tilde{k}-1}} - \hat{\sigma}^2_{M_{\tilde{k}}} = n^{-1} \big( \beta^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \boldsymbol{x}^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} + \boldsymbol{\varepsilon}_n \big)^{\top} \big( \mathbf{H}_{M_{\tilde{k}}} - \mathbf{H}_{M_{\tilde{k}-1}} \big) \big( \beta^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \boldsymbol{x}^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} + \boldsymbol{\varepsilon}_n \big) = \big( \beta^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \big)^2 \hat{A}_n + 2 \beta^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \hat{B}_n + \hat{A}_n^{-1} \hat{B}_n^2 \quad \text{on } \mathcal{D}_n, \qquad (S2.54)$$

in which

$$\hat{A}_n = n^{-1} \big( \boldsymbol{x}^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \big)^{\top} \big( \mathbf{H}_{M_{\tilde{k}}} - \mathbf{H}_{M_{\tilde{k}-1}} \big) \boldsymbol{x}^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}}, \qquad \hat{B}_n = n^{-1} \big( \boldsymbol{x}^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \big)^{\top} \big( \mathbf{H}_{M_{\tilde{k}}} - \mathbf{H}_{M_{\tilde{k}-1}} \big) \boldsymbol{\varepsilon}_n.$$

In view of (S2.53), (S2.54), and

$$\frac{w_{n,p_n}\, \tilde{m}^*_n}{n} = o\Big( \min_{(j,l) \in \mathcal{J}_n} \big( \beta_l^{(j)} \big)^2 \Big) = o(1) \qquad (S2.55)$$

(which is ensured by (33), the second part of (34), and the definition of m̃*_n), we have for all large n,

$$\big( \beta^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \big)^2 \hat{A}_n + 2 \beta^{(\hat{j}_{\tilde{k}})}_{\hat{l}_{\tilde{k}}} \hat{B}_n + \hat{A}_n^{-1} \hat{B}_n^2 \le (\hat{C}_n + \sigma^2)\, \lambda_1\, \tilde{m}^*_n\, w_{n,p_n} / n \quad \text{on } \mathcal{D}_n \cap \{\hat{k} < \tilde{k}\}, \qquad (S2.56)$$

where Ĉ_n = σ̂²_{M_{k̃}} − σ² and λ₁ > 1 is some constant. In addition, by making use of Theorem A.3 and Lemma SL2.1, we show in Section S3 that

$$\hat{A}_n^{-1} = O_p(1), \qquad (S2.57)$$
$$|\hat{B}_n| = O_p\!\left( \frac{p_n^{*\,\frac{q_0+1}{2\eta q_0}}}{n^{1/2}} \right), \qquad (S2.58)$$
$$|\hat{C}_n| = O_p\!\left( \frac{n^{1/2} + q_n + \tilde{m}^*_n\, p_n^{*\,\frac{q_0+1}{\eta q_0}}}{n} \right) = o_p(1). \qquad (S2.59)$$

As a result, (S2.51) follows from (S2.55)–(S2.59). On the other hand,

$$\hat{\sigma}^2_{M_{\tilde{k}}} - \hat{\sigma}^2_{M_{\hat{k}}} \ge \hat{\sigma}^2_{M_{\tilde{k}}} \big\{ 1 - \exp\big( n^{-1} w_{n,p_n} (\tilde{k} - \hat{k}) \big) \big\} \quad \text{on } \{\tilde{k} < \hat{k}\}. \qquad (S2.60)$$

Let F_{k̂,k̃} = (x_l^{(j)} : (j,l) ∈ Ĵ_{k̂} ∩ Ĵ_{k̃}^c).
Then on{˜𝑘<ˆ𝑘}∩D𝑛, ˆ𝜎2 𝑀˜𝑘−ˆ𝜎2 𝑀ˆ𝑘=𝑛−1𝜺⊤ 𝑛(H𝑀ˆ𝑘−H𝑀˜𝑘)𝜺𝑛 ≤2 n 𝑛−1𝐹⊤ ˆ𝑘,˜𝑘(I−H𝑀˜𝑘)𝐹ˆ𝑘,˜𝑘o−1  𝑛−1𝐹⊤ ˆ𝑘,˜𝑘𝜺𝑛 2 + 𝑛−1𝐹⊤ ˆ𝑘,˜𝑘H𝑀˜𝑘𝜺𝑛 2 ≤2(ˆ𝑘−˜𝑘)(ˆ𝑎𝑛+ˆ𝑏𝑛),(S2.61) where ˆ𝑎𝑛=𝜆−1 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(ˆ𝐽ˆ𝑘)w⊤ 𝑡(ˆ𝐽ˆ𝑘)ª® ¬max 1≤𝑗≤𝑝𝑛 1≤𝑙≤𝑟(𝑛) 𝑗 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑥𝑡−𝑙,𝑗𝜖𝑡 2 , ˆ𝑏𝑛=𝜆−1 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(ˆ𝐽ˆ𝑘)w⊤ 𝑡(ˆ𝐽ˆ𝑘)ª® ¬max ♯(𝐽)≤˜𝑚∗𝑛 (𝑗,𝑙)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1ˆ𝑥𝑡−𝑙,𝑗;𝐽𝜖𝑡 2 , with ˆ𝑥𝑡−𝑙,𝑖;𝐽:=w⊤ 𝑡(𝐽)©­ «𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(𝐽)w⊤ 𝑡(𝐽)ª® ¬−1𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(𝐽)𝑥𝑡−𝑙,𝑖 =s⊤ 𝑡(𝐽)©­ «𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)s⊤ 𝑡(𝐽)ª® ¬−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)𝑥𝑡−𝑙,𝑖. Combining (S2.60) and (S2.61), we have 2(ˆ𝑘−˜𝑘)(ˆ𝑎𝑛+ˆ𝑏𝑛)≥ˆ𝜎2 𝑀˜𝑘{1−exp(𝑛−1𝑤𝑛,𝑝𝑛(˜𝑘−ˆ𝑘))}on{˜𝑘<ˆ𝑘}∩D𝑛. (S2.62) With the help of the first part of (34) and Theorem A.3, we also show in Section S3 that for any 𝛿>0, 𝑃{(ˆ𝑘−˜𝑘)(ˆ𝑎𝑛+ˆ𝑏𝑛)≥𝛿[1−exp(𝑛−1𝑤𝑛,𝑝𝑛(˜𝑘−ˆ𝑘))],˜𝑘<ˆ𝑘}=𝑜(1). (S2.63) 16 As a consequence of (S2.60)–(S2.63) and (S2.59), (S2.52) follows. Thus, the proof of (S2.49) is com- plete. Moreover, by an argument similar to that used to prove (S2.49), it can be shown that (36) holds true. The details are omitted here for brevity. Proof of Theorem 3.3. By Theorem 3.2, lim𝑛→∞𝑃(ˆ𝑠0=𝑠0)=lim𝑛→∞𝑃(ˆ𝑠0=𝑠0)=1. (S2.64) In view of (S2.64), Theorem 3.2, and (SS A), it suffices for Theorem 3.3 to show that lim𝑛→∞𝑃(max 1≤𝑖≤𝑞𝑛|ˆ𝛼𝑖(J𝑛)−𝛼𝑖|≥𝐻𝑛)=0, (S2.65) where𝐻𝑛isˆ𝐻𝑛with ˆ𝑠0and ˆ𝑠0replaced by𝑠0and𝑠0, respectively. Straightforward calculations yield max 1≤𝑖≤𝑞𝑛|ˆ𝛼𝑖(J𝑛)−𝛼𝑖|≤𝐶(𝐽1,𝑛+𝐽2,𝑛+𝐽3,𝑛), (S2.66) where 𝐽1,𝑛=∥(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(J𝑛)s⊤ 𝑡(J𝑛))−1−S−1 𝑛(J𝑛))∥∥𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(J𝑛)𝜖𝑡∥, 𝐽2,𝑛=𝑛−1/2∥(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1˜y𝑡˜y⊤ 𝑡)−1𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1˜y𝑡𝜖𝑡∥, 𝐽3,𝑛=max 1≤𝑖≤𝑞𝑛−𝑑|𝝂⊤ 𝑖𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1(z⊤ 𝑡(𝑞𝑛−𝑑),x⊤ 𝑡(J𝑛))⊤𝜖𝑡|, where 𝝂𝑖is the𝑖-th column (row) of 𝚪−1(J𝑛). By Lemma SL2.1, Theorem A.3, (S2.43), and (S3.6) and (S3.12) in Section S3, it can be shown that 𝐽1,𝑛=𝑂𝑝(𝑠0+𝑞𝑛)3/2 𝑛 =𝑂𝑝 𝑛1/2+𝑞3/2 𝑛 𝑛! and𝐽2,𝑛=𝑂𝑝(𝑛−1). (S2.67) Since (21) ensures max 1≤𝑖≤𝑞𝑛−𝑑∥𝝂𝑖∥≤𝐶, (S2.68) it follows from (S3.6) that 𝐽3,𝑛≤𝐶∥𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1(z𝑡(𝑞𝑛−𝑑),x⊤ 𝑡(J𝑛))⊤𝜖𝑡∥=𝑂𝑝(𝑠0+𝑞𝑛)1/2 𝑛1/2 . 
(S2.69) In addition, we write 𝝂⊤ 𝑖(z⊤ 𝑡(𝑞𝑛−𝑑),x⊤ 𝑡(J𝑛))⊤=∞∑︁ 𝑚=0(𝒄(𝑖) 𝑚)⊤𝝁𝑡−1−𝑚(D0),
https://arxiv.org/abs/2505.04884v1
Unit root processes with many predictors 17 whereD0and𝝁𝑡(·)are defined in (SS A)and (S1.7), respectively, and {𝒄(𝑖) 𝑚}is a sequence of(𝑠0+1)- dimensional vectors depending on 𝝂𝑖,{𝑝𝑚,𝑗},1≤𝑗≤𝑝𝑛,{𝑏𝑚}, and𝛽(𝑙) 𝑗,1≤𝑗≤𝑟(𝑛) 𝑗,1≤𝑗≤𝑝𝑛. By (21) and (S2.68), max 1≤𝑖≤𝑞𝑛−𝑑∞∑︁ 𝑚=0∥(𝑐(𝑖) 𝑚,1,...,𝑐(𝑖) 𝑚,𝑠0+1)⊤∥2≡max 1≤𝑖≤𝑞𝑛−𝑑∞∑︁ 𝑚=0∥𝒄(𝑖) 𝑚∥2≤𝐶, which, together with Theorem A.2, gives, max 1≤𝑖≤𝑞𝑛−𝑑E 𝑛−1/2𝑛∑︁ 𝑡=¯𝑟𝑛+1[∞∑︁ 𝑚=0(𝒄(𝑖) 𝑚)⊤𝝁𝑡−1−𝑚(D0)]𝜖𝑡 𝜂 ≤𝐶max 1≤𝑖≤𝑞𝑛−𝑑  𝑠0+1∑︁ 𝑗=1(∞∑︁ 𝑚=0𝑐(𝑖) 𝑚,𝑗2)1/2  𝜂 ≤𝐶(max 1≤𝑖≤𝑞𝑛−𝑑∞∑︁ 𝑚=0∥𝒄(𝑖) 𝑚∥2)𝜂/2(𝑠0+1)𝜂/2 ≤𝐶𝑠𝜂/2 0, yielding 𝐽3,𝑛≤ max 1≤𝑖≤𝑞𝑛−𝑑 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1[∞∑︁ 𝑚=0(𝒄(𝑖) 𝑚)⊤𝝁𝑡−1−𝑚(D0)]𝜖𝑡 =𝑂𝑝 𝑞1/𝜂 𝑛𝑠1/2 0 𝑛1/2! . (S2.70) Consequently, (S2.65) follows from (S2.66), (S2.67), (S2.69), and (S2.70). Thus, the proof of Theorem 3.3 is complete. S2.3. Proofs of (S2.2) ,(S2.7) ,(S2.8) ,(S2.14) , and (S2.16) Proof of (S2.2) .Let 𝑣∗ 𝑡,𝑛=𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑥∗ 𝑡−𝑙,𝑗, 𝑥∗ 𝑡,𝑗=𝑡∑︁ 𝑠=1𝑝𝑡−𝑠,𝑗𝜋𝑠,𝑗. Then, 1√𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡𝛿𝑡=𝐻𝑛(𝑢)+1√𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡𝑣∗ 𝑡,𝑛+1√𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡𝜖𝑡, where𝐻𝑛(𝑢)=𝑛−1/2Í⌊𝑛𝑢⌋ 𝑡=1𝑎𝑡(𝑣𝑡,𝑛−𝑣∗ 𝑡,𝑛). By changing the order of summation, we can write 1√𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡𝑣∗ 𝑡,𝑛=⌊𝑛𝑢⌋−1∑︁ 𝑠=1𝑝𝑛∑︁ 𝑗=11√𝑛⌊𝑛𝑢⌋∑︁ 𝑡=𝑠+1𝑎𝑡©­ «(𝑡−𝑠)∧𝑟𝑗∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑝𝑡−𝑙−𝑠,𝑗ª® ¬𝜋𝑠,𝑗 18 =⌊𝑛𝑢⌋−1∑︁ 𝑠=1𝑋𝑠,𝑛−⌊𝑛𝑢⌋−1∑︁ 𝑠=1𝑝𝑛∑︁ 𝑗=11√𝑛𝑛∑︁ 𝑡=⌊𝑛𝑢⌋+1𝑎𝑡©­ «(𝑡−𝑠)∧𝑟𝑗∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑝𝑡−𝑙−𝑠,𝑗ª® ¬𝜋𝑠,𝑗 :=⌊𝑛𝑢⌋−1∑︁ 𝑠=1𝑋𝑠,𝑛−𝐺𝑛(𝑢). Hence (S2.2) is proved with 𝑅𝑛(𝑢)=𝐻𝑛(𝑢)−𝐺𝑛(𝑢). Proofs of (S2.7) and(S2.8) .Since𝑅𝑛(𝑢)=𝐻𝑛(𝑢)−𝐺𝑛(𝑢), it is sufficient for (S2.7)–(S2.8) to show sup 0≤𝑢≤1|𝐻𝑛(𝑢)|=𝑜𝑝(1), (S2.71) E𝐻2 𝑛(𝑢)=𝑜(1),0≤𝑢≤1, (S2.72) E sup 0≤𝑢≤1|𝐺𝑛(𝑢)|2𝜂1 =𝑜(1). (S2.73) For (S2.71), with 𝑝𝑠,𝑗=0 for𝑠<0 notice that sup 𝑢∈[0,1]|𝐻𝑛(𝑢)|≤𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙|max 1≤𝑘≤𝑛 0∑︁ 𝑠=−∞ 1√𝑛𝑘∑︁ 𝑡=1𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗! 𝜋𝑠,𝑗 ≤√ 2𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙|0∑︁ 𝑠=−∞1√𝑛𝑛∑︁ 𝑡=1|𝑝𝑡−𝑙−𝑠,𝑗||𝜋𝑠,𝑗|. This and (S2.6) imply Esup 𝑢∈[0,1]|𝐻𝑛(𝑢)|≤𝐶𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙|1√𝑛0∑︁ 𝑠=−∞𝑛∑︁ 𝑡=1|𝑝𝑡−𝑙−𝑠,𝑗|. 
(S2.74) By (52) in the main paper, we can choose a sequence of positive integers {𝐶𝑛}such that𝐶𝑛→∞ with 𝐶𝑛=𝑜(𝑛1/2)and(max 1≤𝑗≤𝑝𝑛𝑟(𝑛) 𝑗)/𝐶𝑛→0. Then for each 1 ≤𝑗≤𝑝𝑛and 1≤𝑙≤𝑟(𝑛) 𝑗, it can be shown that 1√𝑛0∑︁ 𝑠=−∞𝑛∑︁ 𝑡=1|𝑝𝑡−𝑙−𝑠,𝑗|=1√𝑛∞∑︁ 𝑠=−𝑙𝑛∑︁ 𝑡=1|𝑝𝑡+𝑠,𝑗| ≤√𝑛∞∑︁ 𝑠=𝑛−𝑙|𝑝𝑠,𝑗|+1√𝑛𝐶𝑛−𝑙∑︁ 𝑠=1−𝑙|𝑝𝑠,𝑗|(𝑠+𝑙)+1√𝑛𝑛−𝑙−1∑︁ 𝑠=𝐶𝑛−𝑙+1|𝑝𝑠,𝑗|(𝑠+𝑙) ≤𝐶  max 1≤𝑗≤𝑝𝑛∞∑︁ 𝑠=𝑛−¯𝑟√𝑠|𝑝𝑠,𝑗|+𝐶𝑛√𝑛max 1≤𝑗≤𝑝𝑛∞∑︁ 𝑠=0|𝑝𝑠,𝑗|+∞∑︁ 𝑠=𝐶𝑛−𝑙+1max 1≤𝑗≤𝑝𝑛√𝑠|𝑝𝑠,𝑗|   =𝑜(1), where ¯𝑟=max 1≤𝑗≤𝑝𝑛𝑟(𝑛) 𝑗and the last equality uses (13) in the main paper. This, Assumption (A4), and (S2.74) prove (S2.71). Unit root processes with many predictors 19 Next we prove (S2.72). Assume, without loss of generality, that 𝑢>0. By Minkowski’s inequality and (S2.6), E𝐻2 𝑛(𝑢)=E©­ «𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1𝛽(𝑗) 𝑙0∑︁ 𝑠=−∞ 1√𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗! 𝜋𝑠,𝑗ª® ¬2 ≤  𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙|𝐶√𝑛0∑︁ 𝑠=−∞©­ «⌊√𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬2 +0∑︁ 𝑠=−∞©­ «⌊𝑛𝑢⌋∑︁ 𝑡=⌊√𝑛𝑢⌋+1𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬21/2  2 :=  𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙|𝐶√𝑛 𝐾1,𝑛,𝑗,𝑙(𝑢)+𝐾2,𝑛,𝑗,𝑙(𝑢)1/2  2 . (S2.75) Observe that if{ℎ𝑡:𝑡=1,2,...}is a sequence of nonnegative numbers, thenÍ∞ 𝑠=0(Í𝑛 𝑖=1ℎ𝑠+𝑖)2≤ 𝑛(Í∞ 𝑠=0ℎ𝑠)2. By applying this inequality to 𝐾1,𝑛,𝑗,𝑙(𝑢)and𝐾2,𝑛,𝑗,𝑙(𝑢), we have 𝐾1,𝑛,𝑗,𝑙(𝑢)≤𝐶⌊√𝑛𝑢⌋ max 1≤𝑗≤𝑝𝑛∞∑︁ 𝑠=0|𝑝𝑠,𝑗|!2 ≤𝐶√𝑛 (S2.76) and 𝐾2,𝑛,𝑗,𝑙(𝑢)=∞∑︁ 𝑠=0©­ «⌊𝑛𝑢⌋−⌊√𝑛𝑢⌋∑︁ 𝑡=1𝑎𝑡+⌊√𝑛𝑢⌋𝑝𝑠+𝑡+⌊√𝑛𝑢⌋−𝑙,𝑗ª® ¬2 ≤𝐶𝑛©­ «max 1≤𝑗≤𝑝𝑛∞∑︁ 𝑠=⌊√𝑛𝑢⌋−¯𝑟|𝑝𝑠,𝑗|ª® ¬2 =𝑜(𝑛). (S2.77) (S2.75)–(S2.77) and Assumption (A4) together prove (S2.72). Finally, we prove (S2.73). Notice that |𝐺𝑛(𝑢)|≤⌊𝑛𝑢⌋−1∑︁ 𝑠=1𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙| ©­ «1√𝑛𝑛∑︁ 𝑡=(𝑙+𝑠)∨(⌊𝑛𝑢⌋+1)𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬𝜋𝑠,𝑗 . Therefore, by Minkowski’s inequality, Esup 𝑢∈[0,1]|𝐺𝑛(𝑢)|2𝜂1≤  𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙| max 1≤𝑘≤𝑛−1 𝑘∑︁ 𝑠=1©­ «1√𝑛𝑛∑︁ 𝑡=(𝑙+𝑠)∨(𝑘+2)𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬𝜋𝑠,𝑗 2𝜂1  2𝜂1 :=  𝑝𝑛∑︁ 𝑗=1𝑟𝑗∑︁ 𝑙=1|𝛽(𝑗) 𝑙|𝑓𝑛(𝑗,𝑙)  2𝜂1 . 
(S2.78) 20 For 2≤𝑙≤𝑟𝑗and 1≤𝑗≤𝑝𝑛, we obtain from (S2.6), Burkholder’s inequality, and Minkowski’s in- equality that (𝑓𝑛(𝑗,𝑙))2𝜂1=𝑛−𝜂1Emax 1≤𝑘≤𝑛−1 𝑘∑︁ 𝑠=1©­ «𝑛∑︁ 𝑡=(𝑙+𝑠)∨(𝑘+2)𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬𝜋𝑠,𝑗 2𝜂1 ≤𝑛−𝜂1𝑛−1∑︁ 𝑘=1E 𝑘∑︁ 𝑠=1©­ «𝑛∑︁ 𝑡=(𝑙+𝑠)∨(𝑘+2)𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬𝜋𝑠,𝑗 2𝜂1 ≤𝐶𝑛−𝜂1𝑛−1∑︁ 𝑘=1  𝑘∑︁ 𝑠=1©­ «𝑛∑︁ 𝑡=(𝑙+𝑠)∨(𝑘+2)𝑎𝑡𝑝𝑡−𝑙−𝑠,𝑗ª® ¬2  𝜂1 ≤𝐶𝑛−𝜂1𝑙−1∑︁ 𝑘=1  𝑙−1∑︁ 𝑠=1 max 1≤𝑗≤𝑝𝑛𝑛∑︁ 𝑡=𝑙+𝑠|𝑝𝑡−𝑙−𝑠,𝑗|!2  𝜂1 +𝐶𝑛−𝜂1𝑛−1∑︁ 𝑘=𝑙 
𝑘−𝑙+1∑︁ 𝑠=11 𝑘+2−𝑙−𝑠 𝑛∑︁ 𝑡=𝑘+2max 1≤𝑗≤𝑝𝑛(𝑡−𝑙−𝑠)1/2|𝑝𝑡−𝑙−𝑠,𝑗|!2  𝜂1 +𝐶𝑛−𝜂1𝑛−1∑︁ 𝑘=𝑙  𝑘∑︁ 𝑠=𝑘−𝑙+2 max 1≤𝑗≤𝑝𝑛𝑛∑︁ 𝑡=𝑙+𝑠|𝑝𝑡−𝑙−𝑠,𝑗|!2  𝜂1 ≤𝐶¯𝑟𝜂1+1+𝑛log𝜂1𝑛+𝑛¯𝑟𝜂1 𝑛𝜂1. Hence, by (52) in the main paper, max 1≤𝑗≤𝑝𝑛,2≤𝑙≤𝑟𝑗𝑓𝑛(𝑗,𝑙)=𝑜(1). (S2.79) Similarly, it can be shown that max 1≤𝑗≤𝑝𝑛𝑓𝑛(𝑗,1)≤𝐶𝑛(log𝑛)𝜂1 𝑛𝜂1=𝑜(1). (S2.80) Consequently, (S2.73) is ensured by (S2.78)–(S2.80) and Assumption (A4). Unit root processes with many predictors 21 Proof of (S2.14) .By Minkowski’s inequality, (S2.6), (13) in the main paper, and (A4), E ⌊𝑛𝑢⌋∑︁ 𝑡=1𝑋(𝑚) 𝑡,𝑛!2 =⌊𝑛𝑢⌋∑︁ 𝑡=1E 𝑋(𝑚) 𝑡,𝑛2 ≤𝐶 𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1©­ «𝑝𝑛∑︁ 𝑗=1 𝑛∑︁ 𝑠=𝑡+1𝑎(𝑚) 𝑠(𝑠−𝑡)∧𝑟𝑗∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑝𝑠−𝑙−𝑡,𝑗 ª® ¬2 ≤𝐶 𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1©­ «𝑝𝑛∑︁ 𝑗=1𝑟𝑗∧(𝑛−𝑡)∑︁ 𝑙=1|𝛽(𝑗) 𝑙|𝑛∑︁ 𝑠=𝑡+𝑙|𝑝𝑠−𝑙−𝑡,𝑗|ª® ¬2 ≤𝐶.(S2.81) In addition, Assumption (A1) and (S2.3) ensure E(⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎(𝑚) 𝑡𝜖𝑡√𝑛)2≤𝐶, which, together with (S2.81), yields (S2.14). Proof of (S2.16) .Equation (S2.16) follows from ⌊𝑛𝑢⌋∑︁ 𝑡=1 𝜁(𝑚) 𝑡,𝑛2−E(𝜁(𝑚) 𝑡,𝑛2|F𝑡−1) =𝑜𝑝(1) (S2.82) and ⌊𝑛𝑢⌋∑︁ 𝑡=1 𝜁(𝑚) 𝑡,𝑛2−E(𝜁(𝑚) 𝑡,𝑛2) =𝑜𝑝(1). (S2.83) It suffices for (S2.82) to show that E ⌊𝑛𝑢⌋∑︁ 𝑡=1 𝜁(𝑚) 𝑡,𝑛2−E(𝜁(𝑚) 𝑡,𝑛2|F𝑡−1) 𝜂1=𝑜(1), (S2.84) where we may assume, without loss of generality, that 1 <𝜂1<2. By Burkholder’s inequality, Jensen’s inequality for conditional expectations, and 1 <𝜂1<2, it holds that E ⌊𝑛𝑢⌋∑︁ 𝑡=1 𝜁(𝑚) 𝑡,𝑛2−E(𝜁(𝑚) 𝑡,𝑛2|F𝑡−1) 𝜂1≤𝐶⌊𝑛𝑢⌋∑︁ 𝑡=1E|𝜁(𝑚) 𝑡,𝑛|2𝜂1. (S2.85) 22 Moreover, by (S2.3), (S2.6), and an argument used in (S2.81), ⌊𝑛𝑢⌋∑︁ 𝑡=1E|𝜁(𝑚) 𝑡,𝑛|2𝜂1≤𝐶⌊𝑛𝑢⌋∑︁ 𝑡=1E 𝑎(𝑚) 𝑡𝜖𝑡 𝑛1/2 2𝜂1 +⌊𝑛𝑢⌋∑︁ 𝑡=1E|𝑋(𝑚) 𝑡,𝑛|2𝜂1 ≤𝐶  𝑛−𝜂1+1+𝑛−𝜂1⌊𝑛𝑢⌋∑︁ 𝑡=1©­ «𝑝𝑛∑︁ 𝑗=1𝑟𝑗∧(𝑛−𝑡)∑︁ 𝑙=1|𝛽(𝑗) 𝑙|𝑛∑︁ 𝑠=𝑡+𝑙|𝑝𝑠−𝑙−𝑡,𝑗|ª® ¬2𝜂1   =𝑜(1).(S2.86) Consequently, (S2.84) (and hence (S2.82)) follows from (S2.85) and (S2.86). Since ⌊𝑛𝑢⌋∑︁ 𝑡=1 𝜁(𝑚) 𝑡,𝑛2−E(𝜁(𝑚) 𝑡,𝑛2) =⌊𝑛𝑢⌋∑︁ 𝑡=1 𝑋(𝑚) 𝑡,𝑛2−E(𝑋(𝑚) 𝑡,𝑛2) +2⌊𝑛𝑢⌋∑︁ 𝑡=1𝑛−1/2 𝑋(𝑚) 𝑡,𝑛𝑎(𝑚) 𝑡𝜖𝑡−E(𝑋(𝑚) 𝑡,𝑛𝑎(𝑚) 𝑡𝜖𝑡) +𝑛−1⌊𝑛𝑢⌋∑︁ 𝑡=1𝑎(𝑚) 𝑡2(𝜖2 𝑡−𝜎2) ≡(I)+(II)+(III), (S2.83) is ensured by (I)=𝑜𝑝(1),(II)=𝑜𝑝(1),(III)=𝑜𝑝(1). 
(S2.87) In the following, we only prove the first equation of (S2.87) because the other two can be obtained similarly. Write 𝑋(𝑚) 𝑡,𝑛=𝑛−1/2𝑝𝑛∑︁ 𝑗=1𝑉(𝑚) 𝑛,𝑗(𝑡)𝜋𝑡,𝑗, where 𝑉(𝑚) 𝑛,𝑗(𝑡)=𝑛∑︁ 𝑠=𝑡+1𝑎(𝑚) 𝑠(𝑠−𝑡)∧𝑟𝑗∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑝𝑠−𝑙−𝑡,𝑗. Let¯𝑉(𝑚) 𝑛,𝑗=max 1≤𝑡≤𝑛|𝑉(𝑚) 𝑛,𝑗(𝑡)|. By the same argument used in (S2.81), it is not difficult to show that Í𝑝𝑛 𝑗=1|¯𝑉(𝑚) 𝑛,𝑗|≤𝐶. Then, by Assumption (A2), Burkholder’s inequality, Minkowski’s inequality, and Cauchy-Schwarz inequality, E ⌊𝑛𝑢⌋∑︁ 𝑡=1{𝑋(𝑚) 𝑡,𝑛2−E(𝑋(𝑚) 𝑡,𝑛2)} 𝜂1 =E 1 𝑛⌊𝑛𝑢⌋∑︁ 𝑡=1𝑝𝑛∑︁ 𝑠1=1𝑝𝑛∑︁ 𝑠2=1𝑉𝑛,𝑠1(𝑡)𝑉𝑛,𝑠2(𝑡)(𝜋𝑡,𝑠1𝜋𝑡,𝑠2−𝜎𝑠1,𝑠2) 𝜂1 Unit root processes with many predictors 23 ≤  1 𝑛𝑝𝑛∑︁ 𝑠1,𝑠2=1 ⌊𝑛𝑢⌋∑︁ 𝑡=1𝑉𝑛,𝑠1(𝑡)𝑉𝑛,𝑠2(𝑡)∞∑︁ 𝑗=0𝜽⊤ 𝑗,𝑠1,𝑠2e𝑡−𝑗,𝑠1,𝑠2 𝜂1  𝜂1 =  1 𝑛𝑝𝑛∑︁ 𝑠1,𝑠2=1 ⌊𝑛𝑢⌋∑︁ 𝑗=−∞©­ «⌊𝑛𝑢⌋∑︁ 𝑡=1∨𝑗𝑉𝑛,𝑠1(𝑡)𝑉𝑛,𝑠2(𝑡)𝜽⊤ 𝑡−𝑗,𝑠1,𝑠2ª® ¬e𝑗,𝑠1,𝑠2 𝜂1  𝜂1 ≤𝐶  1 𝑛𝑝𝑛∑︁ 𝑠1,𝑠2=1⌊𝑛𝑢⌋∑︁ 𝑗=−∞ ©­ «⌊𝑛𝑢⌋∑︁ 𝑡=1∨𝑗𝑉𝑛,𝑠1(𝑡)𝑉𝑛,𝑠2(𝑡)𝜽⊤ 𝑡−𝑗,𝑠1,𝑠2ª® ¬e𝑗,𝑠1,𝑠2 2 𝜂11/2  𝜂1 ≤𝐶  1 𝑛𝑝𝑛∑︁ 𝑠1,𝑠2=1⌊𝑛𝑢⌋∑︁ 𝑗=−∞©­ «⌊𝑛𝑢⌋∑︁ 𝑡=1∨𝑗|𝑉𝑛,𝑠1(𝑡)||𝑉𝑛,𝑠2(𝑡)|∥𝜽𝑡−𝑗,𝑠1,𝑠2∥ª® ¬21/2  𝜂1 ≤𝐶  1 𝑛𝑝𝑛∑︁ 𝑠1,𝑠2=1|¯𝑉(𝑚) 𝑛,𝑠1||¯𝑉(𝑚) 𝑛,𝑠2|©­­ « ∞∑︁ 𝑡=0∥𝜽𝑡,𝑠1,𝑠2∥!⌊𝑛𝑢⌋∑︁ 𝑡=1𝑡∑︁ 𝑗=−∞∥𝜽𝑡−𝑗,𝑠1,𝑠2∥1/2 ª®® ¬2  𝜂1 ≤𝐶(√︁ ⌊𝑛𝑢⌋/𝑛)𝜂1=𝑜(1). Thus, the desired conclusion follows. S3. Proofs of (S2.39), (S2.41), (S2.42), (S2.57)–(S2.59), and (S2.63) in Section S2.2 PROOF OF (S2.39). Let’s recall ˆ 𝑎𝑚as defined in (30). It follows from (29) that ˆ𝑎𝑚=ˆ𝑎𝑚−1−𝜓2 𝐽𝑚−1,(𝑗𝑚,𝑙𝑚)≤ˆ𝑎𝑚−1−𝜉2max (𝑗,𝑙)∉𝐽𝑚−1𝜓2 𝐽𝑚−1,(𝑗,𝑙). (S3.1) Moreover, since for 1 ≤𝑚≤𝐾𝑛, ˆ𝑎𝑚−1=𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽𝑚−1)𝝁𝑛 ≥(∑︁ (𝑗,𝑙)∈J 𝑛−𝐽𝑚−1𝛽(𝑗) 𝑙2)min 1≤♯(𝐽)≤𝐾𝑛𝜆min(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(J𝑛∪𝐽)w⊤ 𝑡(J𝑛∪𝐽)), it holds that ˆ𝑎𝑚−1=∑︁ (𝑗,𝑙)∈J 𝑛𝛽(𝑗) 𝑙𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽𝑚−1)𝒙(𝑗) 𝑙 ≤ max (𝑗,𝑙)∈J 𝑛−𝐽𝑚−1𝑛−1|𝝁⊤ 𝒏(I−H[𝑞𝑛]⊕𝐽𝑚−1)𝒙(𝑗) 𝑙|𝑠1/2 0∑︁ (𝑗,𝑙)∈J 𝑛−𝐽𝑚−1𝛽(𝑗) 𝑙21/2 ≤ max (𝑗,𝑙)∉𝐽𝑚−1|𝜓𝐽𝑚−1,(𝑗,𝑙)|ˆ𝑎1/2 𝑚−1𝑠1/2 0𝐷−1/2 𝑛,(S3.2) 24 where𝐷𝑛is defined in (S2.40). Equations (S3.1) and (S3.2) imply for 1 ≤𝑚≤𝐾𝑛, ˆ𝑎𝑚≤ˆ𝑎𝑚−1 1−𝜉2𝐷𝑛 𝑠0 , noting that𝐷𝑛is bounded by
1. Thus, as long as a selection path obeying (29) is chosen, the resultant noiseless mean squared error satisfies ˆ𝑎𝑚≤ˆ𝑎0exp(−𝜉2𝑚𝐷𝑛/𝑠0)≤𝐶𝑛exp(−𝜉2𝑚𝐷𝑛/𝑠0),1≤𝑚≤𝐾𝑛, (S3.3) where𝐶𝑛is also defined (S2.40). Now since (S2.38) ensures that on A𝑛(𝐾𝑛)∩B𝑛(𝐾𝑛),{ˆ𝐽1,..., ˆ𝐽𝐾𝑛}obeys (29), with 0 <𝜉 < 1 defined after (S2.38), we conclude that (S3.3) holds with ˆ 𝑎𝑚replaced by ˆ𝑠𝑚onA𝑛(𝐾𝑛)∩B𝑛(𝐾𝑛). This completes the proof of (S2.39). PROOF OF (S2.41). By an argument similar to (S3.2), one has for 1 ≤𝑚≤𝐾𝑛, 𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕ˆ𝐽𝑚)𝝁𝑛≤min 0≤𝑘≤𝑚−1𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕ˆ𝐽𝑘)𝝁𝑛 ≤min 0≤𝑘≤𝑚−1max (𝑗,𝑙)∉ˆ𝐽𝑘𝜓2 ˆ𝐽𝑘,(𝑗,𝑙)𝑠0𝐷−1 𝑛.(S3.4) Consequently, (S2.41) follows from (S3.4) and min 0≤𝑘≤𝑚−1max (𝑗,𝑙)∉ˆ𝐽𝑘𝜓2 ˆ𝐽𝑘,(𝑗,𝑙)≤˜𝜉2𝑙𝑛𝑝∗(𝑞0+1)/(𝜂𝑞0) 𝑛 𝑛 onB𝑐 𝑛(𝑚). To prove (S2.42), we need an auxiliary lemma. Lemma SL3.1. Assume that (A1), (A2), (A4) , and (A5) hold. Then, max 1≤𝑙≤𝑟(𝑛) 𝑗,1≤𝑗≤𝑝𝑛 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡𝑥𝑡−𝑙,𝑗 +max 1≤𝑘≤𝑞𝑛 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡𝑧𝑡−𝑘 =𝑂𝑝 𝑝∗ 𝑛(𝑞0+1)/(2𝜂𝑞0) 𝑛1/2+𝑞1/𝜂 𝑛 𝑛1/2! =𝑂𝑝 𝑝∗ 𝑛(𝑞0+1)/(2𝜂𝑞0) 𝑛1/2! .(S3.5) PROOF . The first identity of (S3.5) is ensured by max 1≤𝑙≤𝑟(𝑛) 𝑗,1≤𝑗≤𝑝𝑛E 𝑛−1/2𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡𝑥𝑡−𝑙,𝑗 2𝜂𝑞0 𝑞0+1 + max 1≤𝑘≤𝑞𝑛−𝑑E 𝑛−1/2𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡𝑧𝑡−𝑘 𝜂 <𝐶, (S3.6) which can be proved using Burkholder’s inequality, Jensen’s inequality, Hölder’s inequality, E |𝑥𝑡−𝑙,𝑗|2𝜂𝑞0< 𝐶, and E|𝑧𝑡−𝑘|2𝜂<𝐶 for all−∞<𝑡<∞,1≤𝑙≤𝑟(𝑛) 𝑗,1≤𝑗≤𝑝𝑛, and 1≤𝑘≤𝑞𝑛−𝑑. The second identity of (S3.5) follows from 𝑞𝑛=𝑜(𝑛1/2)and𝑝∗ 𝑛≍𝑛𝜈with𝜈≥1. Unit root processes with many predictors 25 PROOF OF (S2.42). It suffices for (S2.42) to show that max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽|𝜓𝐽,(𝑖,𝑙)−ˆ𝜓𝐽,(𝑖,𝑙)|= max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽𝑛−1|𝜺⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙| (𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙)1/2 =𝑂𝑝 𝑝∗ 𝑛(𝑞0+1)/(2𝜂𝑞0) 𝑛1/2! ,(S3.7) which is, in turn, ensured by max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽𝑛−1|𝜺⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙|=𝑂𝑝 𝑝∗ 𝑛(𝑞0+1)/(2𝜂𝑞0) 𝑛1/2! (S3.8) and max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽(𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙)−1/2=𝑂𝑝(1). 
(S3.9) Note that (S3.9) is an immediate consequence of max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽|𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙|−1/2≤max ♯(𝐽)≤𝐾𝑛𝜆−1/2 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1w𝑡(𝐽)w⊤ 𝑡(𝐽)ª® ¬ and Theorem A.3. Hence, it remains to prove (S3.8). Since max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽1 𝑛|𝜺⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙|≤ max 1≤𝑖≤𝑝𝑛 1≤𝑙≤𝑟(𝑛) 𝑖 1 𝑛𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡𝑥𝑡−𝑙,𝑖 + max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑗)∉𝐽 1 𝑛𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡ˆ𝑥𝑡−𝑙,𝑖;𝐽 , (S3.8) follows from max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑙)∉𝐽 1 𝑛𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡ˆ𝑥𝑡−𝑙,𝑖;𝐽 =𝑂𝑝 𝑝∗ 𝑛(𝑞0+1)/(2𝜂𝑞0) 𝑛1/2! (S3.10) in light of Lemma SL3.1. For(𝑖,𝑙)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡ˆ𝑥𝑡−𝑙,𝑖;𝐽 ≤ 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡s𝑡(𝐽) × ©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)s⊤ 𝑡(𝐽)ª® ¬−1 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)𝑥⊥ 𝑡−𝑙,𝑖;𝐽 + 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡s⊤ 𝑡(𝐽)b𝐽(𝑖,𝑙) ,(S3.11) 26 where𝑥⊥ 𝑡−𝑙,𝑖;𝐽=𝑥𝑡−𝑙,𝑖−s⊤ 𝑡(𝐽)b𝐽(𝑖,𝑙)andb𝐽(𝑖,𝑙)=(0,..., 0| {z } 𝑑,g⊤ 𝐽(𝑖,𝑙)𝚪−1 𝑛(𝐽))⊤. By Lemma SL3.1, (22), (S3.6), and max 1≤𝑘≤𝑑E|𝑛−1/2𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡˜𝑦𝑡,𝑘|𝜂<𝐶, (S3.12) one obtains max ♯(𝐽)≤𝐾𝑛−1 (𝑖,𝑗)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡s⊤ 𝑡(𝐽)b𝐽(𝑖,𝑙) =𝑂𝑝©­ «𝑝∗ 𝑛𝑞0+1 2𝜂𝑞0 𝑛1/2ª® ¬(S3.13) and max ♯(𝐽)≤𝐾𝑛−1 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡s𝑡(𝐽) =𝑂𝑝©­ «𝐾1/2 𝑛𝑝∗ 𝑛𝑞0+1 2𝜂𝑞0+𝑞1/2 𝑛 𝑛1/2ª® ¬. (S3.14) Define (I)= 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)𝑥𝑡−𝑙,𝑖−(0,..., 0| {z } 𝑑,g⊤ 𝐽(𝑖,𝑙))⊤ , (II)= (𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)s⊤ 𝑡(𝐽)−S𝑛(𝐽))b𝐽(𝑖,𝑙) . Then, 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)𝑥⊥ 𝑡−𝑙,𝑖;𝐽 ≤(I)+(II). (S3.15) Unit root processes with many predictors 27 It follows from Lemma SL2.1 that max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽(I)≤√ 𝑑 max 1≤𝑙≤𝑟(𝑛) 𝑖 1≤𝑖≤𝑝𝑛,1≤𝑘≤𝑑|𝑛−1𝑏∑︁ 𝑡=¯𝑟𝑛+1˜𝑦𝑡,𝑘𝑥𝑡−𝑙,𝑖| +  𝑞𝑛−𝑑∑︁ 𝑘=1max 1≤𝑙≤𝑟(𝑛) 𝑖,1≤𝑖≤𝑝𝑛 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1{𝑧𝑡−𝑘𝑥𝑡−𝑙,𝑖−E(𝑧𝑡−𝑘𝑥𝑡−𝑙,𝑖)} 2  1/2 +  𝐾𝑛 max 1≤𝑙1≤𝑟(𝑛) 𝑖1,1≤𝑙≤𝑟(𝑛) 𝑖 1≤𝑖1,𝑖≤𝑝𝑛 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1{𝑥𝑡−𝑙1,𝑖1𝑥𝑡−𝑙,𝑖−E(𝑥𝑡−𝑙1,𝑖1𝑥𝑡−𝑙,𝑖)} 2  1/2 =𝑂𝑝©­­ «𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬+𝑂𝑝©­­ «𝑞𝑛𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬+𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬ =𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗2𝜂𝑞0 𝑛+𝑞𝑛𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬.(S3.16) By Lemma SL2.1 and (22), we also show below that max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽(II)=𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗2𝜂𝑞0 𝑛+(𝐾1/2 𝑛+𝑞1/2 𝑛)𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬. 
(S3.17) According to (S3.15)–(S3.17), max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)𝑥⊥ 𝑡−𝑙,𝑖;𝐽 =𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗2𝜂𝑞0 𝑛+(𝐾1/2 𝑛+𝑞1/2 𝑛)𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬.(S3.18)
Consequently, (S3.11), (S3.13), (S3.14), (S3.18), and (57) imply max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡ˆ𝑥𝑡−𝑙,𝑖;𝐽 =𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗2𝜂𝑞0 𝑛+(𝐾1/2 𝑛+𝑞1/2 𝑛)𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬ ×𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗𝑞0+1 2𝜂𝑞0 𝑛+𝑞1/2 𝑛 𝑛1/2ª®® ¬, 28 which, together with (24) and (A5), leads to (S3.10). Thus, the proof is complete. Proof of (S3.17) .Note first that max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽(II) ≤ max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽  𝑑∑︁ 𝑘=1𝑞𝑛−𝑑∑︁ 𝑠=1𝑎𝑠,𝐽(𝑖,𝑙)(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1˜𝑦𝑡,𝑘𝑧𝑡−𝑠) +∑︁ (𝑖∗,𝑙∗)∈𝐽𝑎(𝑖∗,𝑙∗)(𝑖,𝑙)(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1˜𝑦𝑡,𝑘𝑥𝑡−𝑙∗,𝑖∗)2  1/2 + max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽  𝑞𝑛−𝑑∑︁ 𝑘=1𝑞𝑛−𝑑∑︁ 𝑠=1𝑎𝑠,𝐽(𝑖,𝑙)(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑧𝑡−𝑘𝑧𝑡−𝑠−E(𝑧𝑡−𝑘𝑧𝑡−𝑠)) +∑︁ (𝑖∗,𝑙∗)∈𝐽𝑎(𝑖∗,𝑙∗)(𝑖,𝑙)(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑧𝑡−𝑘𝑥𝑡−𝑙∗,𝑖∗−E(𝑧𝑡−𝑘𝑥𝑡−𝑙∗,𝑖∗))2  1/2 + max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽  ∑︁ (˜𝑖,˜𝑙)∈𝐽𝑞𝑛−𝑑∑︁ 𝑠=1𝑎𝑠,𝐽(𝑖,𝑙)(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑥𝑡−˜𝑙,˜𝑖𝑧𝑡−𝑠−E(𝑥𝑡−˜𝑙,˜𝑖𝑧𝑡−𝑠)) +∑︁ (𝑖∗,𝑙∗)∈𝐽𝑎(𝑖∗,𝑙∗)(𝑖,𝑙)(𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑥𝑡−˜𝑙,˜𝑖𝑥𝑡−𝑙∗,𝑖∗−E(𝑥𝑡−˜𝑙,˜𝑖𝑥𝑡−𝑙∗,𝑖∗))2  1/2 ≡(III)+(IV)+(V).(S3.19) By Lemma SL2.1 and (22), it can be shown that (III)=𝑂𝑝©­­ «𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬(S3.20) and (IV)=𝑂𝑝©­­ «𝑞2−1+𝜂−1+𝛿𝐼{𝜂=2} 𝑛 𝑛1/2+𝑞1/2 𝑛𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬=𝑂𝑝©­­ «𝑞1/2 𝑛𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬, (S3.21) Unit root processes with many predictors 29 where𝛿>0 is arbitrarily small and the second equality is ensured by (A5). Moreover, it follows that (V)2≤𝐶(𝐾𝑛−1)𝑞𝑛−𝑑∑︁ 𝑠1=1𝑞𝑛−𝑑∑︁ 𝑠2=1𝑏𝑠1𝑏𝑠2𝐴𝑠1𝐴𝑠2 +𝐶 max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽∑︁ (𝑖∗,𝑙∗)∈𝐽|𝑎(𝑖∗,𝑙∗)(𝑖,𝑙)| × max 1≤˜𝑖,𝑖∗≤𝑝𝑛 1≤˜𝑙≤𝑟(𝑛) ˜𝑖,1≤𝑙∗≤𝑟(𝑛) 𝑖∗ 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑥𝑡−˜𝑙,˜𝑖𝑥𝑡−𝑙∗,𝑖∗−E(𝑥𝑡−˜𝑙,˜𝑖𝑥𝑡−𝑙∗,𝑖∗) 2 =𝑂𝑝©­­ «𝐾𝑛(𝑝∗𝑞0+1 𝜂𝑞0 𝑛+𝑝∗4𝜂𝑞0 𝑛) 𝑛ª®® ¬,(S3.22) where 𝑏𝑠= max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽|𝑎𝑠,𝐽(𝑖,𝑙)|, 𝐴𝑠= max 1≤𝑖≤𝑝𝑛,1≤𝑙≤𝑟(𝑛) 𝑖|𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑥𝑡−𝑙,𝑖𝑧𝑡−𝑠−E(𝑥𝑡−𝑙,𝑖𝑧𝑡−𝑠)|. Combining (S3.19)–(S3.22) yields (S3.17). Proof of (S2.57) –(S2.59) .Since ˆ𝐴−1 𝑛≤ max ♯(𝐽)≤𝐾𝑛−1 (𝑗,𝑙)∉𝐽{𝑛−1𝒙(𝑗) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑗) 𝑙}−1, |ˆ𝐵𝑛|≤ max ♯(𝐽)≤𝐾𝑛−1 (𝑗,𝑙)∉𝐽|𝑛−1𝒙(𝑗) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝜺𝑛|, (S2.57) and (S2.58) follow directly from (S3.8) and (S3.9), respectively. To show (S2.59), note first that |ˆ𝐶𝑛|≤ 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖2 𝑡−𝜎2 +𝑛−1𝜺⊤ 𝑛H[𝑞𝑛]⊕ˆ𝐽˜𝑘𝜺𝑛. 
(S3.23) By Assumption (A1), it is easy to show that |𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖2 𝑡−𝜎2|=𝑂𝑝(𝑛−1/2). (S3.24) 30 In addition, 𝑛−1𝜺⊤ 𝑛H[𝑞𝑛]⊕ˆ𝐽˜𝑘𝜺𝑛≤max ♯(𝐽)≤𝑚∗𝑛𝜆−1 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(𝐽)s⊤ 𝑡(𝐽)ª® ¬ ×max ♯(𝐽)≤𝑚∗𝑛 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡s𝑡(𝐽) 2 onD𝑛, which, together with (S3.23), (S3.24), (S3.14), (57), and (S2.50), gives (S2.59). Proof of (S2.63) .Note first that for some 𝑐1>0, 1−exp(−𝑤𝑛,𝑝𝑛𝑛−1(ˆ𝑘−˜𝑘)) ˆ𝑘−˜𝑘≥𝑐1n𝑤𝑛,𝑝𝑛 𝑛∧1 ˆ𝑘−˜𝑘o ≥𝑐1n𝑤𝑛,𝑝𝑛 𝑛∧𝐾−1 𝑛o on{˜𝑘<ˆ𝑘}. Define𝐵𝑛,𝑝𝑛=(𝑤𝑛,𝑝𝑛/𝑛)∧𝐾−1 𝑛. Then, it follows from (24) and the first part of (34) that 𝑝∗¯𝜃 𝑛/𝑛1/2=𝑜(𝐵1/2 𝑛,𝑝𝑛). (S3.25) Now, for any 𝛿>0, 𝑃{(ˆ𝑘−˜𝑘)(ˆ𝑎𝑛+ˆ𝑏𝑛)≥𝛿[1−exp(−𝑛−1𝑤𝑛,𝑝𝑛(ˆ𝑘−˜𝑘)]),˜𝑘<ˆ𝑘} ≤𝑃©­­­ «𝜆−1 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(ˆ𝐽ˆ𝑘)s⊤ 𝑡(ˆ𝐽ˆ𝑘)ª® ¬max 1≤𝑗≤𝑝𝑛 1≤𝑙≤𝑟(𝑛) 𝑗 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝑥𝑡−𝑙,𝑗𝜖𝑡 2 ≥𝑐1𝛿𝐵𝑛,𝑝𝑛ª®®® ¬ +𝑃©­­ «𝜆−1 min©­ «𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1s𝑡(ˆ𝐽ˆ𝑘)s⊤ 𝑡(ˆ𝐽ˆ𝑘)ª® ¬max ♯(𝐽)≤𝐾𝑛−1 (𝑗,𝑙)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1ˆ𝑥𝑡−𝑙,𝑗;𝐽𝜖𝑡 2 ≥𝑐1𝛿𝐵𝑛,𝑝𝑛ª®® ¬ :=(I)+(II). By (S3.5), (S3.10), Theorem A.3, and (S3.25), (I)+(II)=𝑜(1). Thus (S2.63) is proved. S4. Some technical details about Examples 3.1 and 3.2 S4.1. Proof of (28) in Example 3.1 In this subsection, all summations are understood as summing from 𝑡=3 to𝑡=𝑛. Let𝑧𝑡=𝑦𝑡−𝑦𝑡−1. Clearly,𝑧𝑡=𝑎𝑧𝑡−1+𝜖𝑡for𝑡=1,2,...,𝑛 . Note that with some algebraic manipulation and using the AR definition, we can express 𝐹2,𝑛=(Í𝑦𝑡𝑦𝑡−1)2 Í𝑦2 𝑡−1+(Í𝑦𝑡𝑦𝑡−1)2 Í𝑦2 𝑡−1 2Í𝑦𝑡−2𝑧𝑡−1+Í𝑧2 𝑡−1Í𝑦2 𝑡−2! Unit root processes with many predictors 31 −2(Í𝑦𝑡𝑦𝑡−1)(Í𝑦𝑡𝑧𝑡−1)Í𝑦2 𝑡−1 1+2Í𝑦𝑡−2𝑧𝑡−1+Í𝑧2 𝑡−1Í𝑦2 𝑡−2! +(Í𝑦𝑡𝑧𝑡−1)2 Í𝑦2 𝑡−1 1+2Í𝑦𝑡−2𝑧𝑡−1+Í𝑧2 𝑡−1Í𝑦2 𝑡−2! . By a similar argument used in Lemma SL2.1 and Theorem A.3, we have 1 𝑛(𝐹1,𝑛−𝐹2,𝑛)=−Í𝑦𝑡𝑦𝑡−1Í𝑦2 𝑡−1Í𝑦𝑡𝑦𝑡−1Í𝑦2 𝑡−2(2𝑛−1∑︁ 𝑦𝑡−2𝑧𝑡−1+𝑛−1∑︁ 𝑧2 𝑡−1) +2Í𝑦𝑡𝑦𝑡−1Í𝑦2 𝑡−1(𝑛−1∑︁ 𝑦𝑡𝑧𝑡−1) 1+2Í𝑦𝑡−2𝑧𝑡−1+Í𝑧2 𝑡−1Í𝑦2 𝑡−2! −(Í𝑦𝑡𝑧𝑡−1)2 Í𝑦2 𝑡−1 1 𝑛+2Í𝑦𝑡−2𝑧𝑡−1+Í𝑧2 𝑡−1 𝑛Í𝑦2 𝑡−2! =(−1+𝑂𝑝(𝑛−1)) 2𝑛−1∑︁ 𝑦𝑡−2𝑧𝑡−1+𝑛−1∑︁ 𝑧2 𝑡−1 +2(𝑛−1∑︁ 𝑦𝑡𝑧𝑡−1)(1+𝑂𝑝(𝑛−1))+𝑂𝑝(𝑛−1) =𝑛−1∑︁ 𝑧2 𝑡−1+2𝑛−1∑︁ 𝑧𝑡𝑧𝑡−1+𝑂𝑝(𝑛−1), which implies 1 𝑛(𝐹1,𝑛−𝐹2,𝑛)→1 1−𝑎2+2𝑎 1−𝑎2in probability . S4.2. 
Proof of (41) in Example 3.2 Note that A𝑛={ˆ𝜷(𝜆𝑛)selects the correct model }={ˆ𝛽(𝜆𝑛) 1≠0,ˆ𝛽(𝜆𝑛) 3≠0,ˆ𝛽(𝜆𝑛) 2=0} = 𝒔𝑛(1)=(sign(ˆ𝛽(𝜆𝑛) 1),sign(ˆ𝛽(𝜆𝑛) 3))⊤∈{𝒂1,...,𝒂4}and ˆ𝛽(𝜆𝑛) 2=0 ,(S4.1) where 𝒂⊤ 1=(1,1),𝒂⊤ 2=(1,−1),𝒂⊤ 3=(−1,1), and 𝒂⊤ 4=(−1,−1). Define C11=𝑛∑︁ 𝑡=3𝑦𝑡−1 𝑥𝑡−1𝑦𝑡−1𝑥𝑡−1,c21=𝑛∑︁ 𝑡=3𝑦𝑡−1𝑦𝑡−2𝑥𝑡−1𝑦𝑡−2, ˆu𝑛= ˆ𝛽(𝜆𝑛)
1−1 ˆ𝛽(𝜆𝑛) 3−1! ,w𝑛(1)=𝑛∑︁ 𝑡=3𝜖𝑡𝑦𝑡−1 𝑥𝑡−1 , 𝑤𝑛(2)=𝑛∑︁ 𝑡=3𝜖𝑡𝑦𝑡−2. 32 Then by an argument used in Zhao and Yu (2006), A𝑛⊆4Ø 𝑖=1E𝑛(𝑖), (S4.2) where E𝑛(𝑖)={C11ˆu𝑛−w𝑛(1)=−𝜆𝑛𝒂𝑖 2,−𝜆𝑛 2≤c21ˆu𝑛−𝑤𝑛(2)≤𝜆𝑛 2,𝒔𝑛(1)=𝒂𝑖}. In the following, we will show that regardless of whether {𝜆𝑛}satisfies (a’) 𝜆𝑛/𝑛→∞ , (b’)𝜆𝑛/𝑛→0, or (c’) 0<lim𝑛→∞𝜆𝑛/𝑛=𝑑∗<∞, lim sup 𝑛→∞𝑃(E𝑛(1))≤ 1/2, (S4.3) and lim𝑛→∞𝑃(E𝑛(𝑖))=0,𝑖=2,3,4. (S4.4) By (S4.1)–(S4.4), the desired conclusion (41) follows. We commence by proving (S4.3). Straightforward calculations give E𝑛(1)⊆ c21C−1 11w𝑛(1)−𝑤𝑛(2)≥−𝜆𝑛 2 1−c21C−1 11𝒂1 , (S4.5) c21C−1 11= 1+𝑂𝑝(𝑛−1) −𝑛−1𝑛∑︁ 𝑡=3𝑥𝑡−1𝛿𝑡−1+𝑂𝑝(𝑛−1)! , where𝛿𝑡=𝑥𝑡−1+𝜖𝑡, c21C−1 11w𝑛(1)−𝑤𝑛(2)=𝑛∑︁ 𝑡=3𝛿𝑡−1𝜖𝑡+𝑂𝑝(1), (S4.6) 1−c21C−1 11𝒂1=1 𝑛𝑛∑︁ 𝑡=3𝑥𝑡−1𝛿𝑡−1+𝑂𝑝(1/𝑛), (S4.7) and ©­­­­­ «1√𝑛𝑛∑︁ 𝑡=3𝛿𝑡−1𝜖𝑡 1√𝑛𝑛∑︁ 𝑡=3𝑥𝑡−1𝛿𝑡−1ª®®®®® ¬⇒𝑁1 𝑁2 , (S4.8) Unit root processes with many predictors 33 where𝑁1and𝑁2are two independent Gaussian random variables with mean zero and variance 2. By (S4.5)–(S4.8), it holds that 𝑃(E𝑛(1))≤𝑃 1√𝑛𝑛∑︁ 𝑡=3𝛿𝑡−1𝑥𝑡−1≥𝑜𝑝(1)! →1/2 in case(𝑎′), 𝑃(E𝑛(1))≤𝑃 1√𝑛𝑛∑︁ 𝑡=3𝛿𝑡−1𝜖𝑡≥𝑜𝑝(1)! →1/2 in case(𝑏′), 𝑃(E𝑛(1))≤𝑃 1√𝑛𝑛∑︁ 𝑡=3𝛿𝑡−1𝜖𝑡≥−𝜆𝑛 2𝑛1√𝑛𝑛∑︁ 𝑡=3𝛿𝑡−1𝑥𝑡−1+𝑜𝑝(1)! →𝑃 𝑁1≥−𝑑∗ 2𝑁2 =1/2 in case(𝑐′).(S4.9) Thus, (S4.3) follows. For𝑖=2,3, and 4, E𝑛(𝑖)⊆{ ˆu𝑛=C−1 11[w𝑛(1)−(𝜆𝑛𝒂𝑖)/2],𝒔𝑛(1)=𝒂𝑖} ={(ˆ𝛽(𝜆𝑛) 1,ˆ𝛽(𝜆𝑛) 3)⊤=(1,1)⊤+C−1 11[w𝑛(1)−(𝜆𝑛𝒂𝑖)/2],𝒔𝑛(1)=𝒂𝑖}.(S4.10) By an argument similar to that used to prove (S4.6) and (S4.7), C−1 11[w𝑛(1)−(𝜆𝑛𝒂𝑖)/2] = 𝑂𝑝𝑛−1+𝜆𝑛 𝑛2,𝑂𝑝(𝑛−1/2)+𝜆𝑛 2𝑛(𝐼{𝑖=2,4}−𝐼{𝑖=3})(1+𝑜𝑝(1)).⊤ (S4.11) Combining (S4.10) and (S4.11) yields that there exist an arbitrarily small positive constant 𝜀>0 and an arbitrarily large positive constant 𝑀<∞such that for all sufficiently large 𝑛, 𝑃(E𝑛(𝑗))≤𝑃(ˆ𝛽(𝜆𝑛) 3>1−𝜀,ˆ𝛽(𝜆𝑛) 3<0)+𝑜(1) =𝑜(1)in all cases of(𝑎′),(𝑏′),and(𝑐′),(S4.12) where𝑗=2,4, and 𝑃(E𝑛(3))≤𝑃(ˆ𝛽(𝜆𝑛) 3<−𝑀,ˆ𝛽(𝜆𝑛) 3>0)+𝑜(1)=𝑜(1)in case(𝑎′), 𝑃(E𝑛(3))≤𝑃(ˆ𝛽(𝜆𝑛) 1>1−𝜀,ˆ𝛽(𝜆𝑛) 1<0)+𝑜(1)=𝑜(1)in cases(𝑏′)and(𝑐′).(S4.13) Consequently, (S4.4) is ensured by (S4.12) and (S4.13). 
This completes the proof of (41). S5. Complementary simulation results We generate data from (1+0.4𝐵)(1−𝐵)2𝑦𝑡=2∑︁ 𝑗=14∑︁ 𝑙=1𝛽(𝑗) 𝑙𝑥𝑡−𝑙,𝑗+𝜖𝑡, (S5.1) 34 where{𝜖𝑡}is a GARCH(1,1) model, 𝜖𝑡=𝜎𝑡𝑍𝑡, 𝜎2 𝑡=5×10−2+0.5𝜖2 𝑡−1+0.1𝜎2 𝑡−1, in which{𝑍𝑡}is a sequence of i.i.d. standard Gaussian random variables. Let {𝜋𝑡,1}and{𝜋𝑡,2}be two independent ARCH(1) processes such that for 𝑗=1 and 2, 𝜋𝑡,𝑗=ℎ𝑡,𝑗𝐺𝑡,𝑗, ℎ2 𝑡,𝑗=1+0.2𝜋2 𝑡−1,𝑗, where{𝐺𝑡,1}and{𝐺𝑡,2}have the same probabilistic structure as that of {𝑍𝑡}and these three sequences are independent of each other. Also let 𝑣𝑡,𝑗, 1≤𝑡≤𝑛,1≤𝑗≤𝑝𝑛, be independent standard Gaussian random variables and independent of {𝐺𝑡,1},{𝐺𝑡,2}, and{𝑍𝑡}. Define𝑤𝑡,𝑗=𝜋𝑡,1+𝑣𝑡,𝑗if𝑗is odd, 𝑤𝑡,𝑗=𝜋𝑡,2+𝑣𝑡,𝑗if𝑗is even. Then, 𝑥𝑡,𝑗are MA(2) processes satisfying 𝑥𝑡,𝑗=0.8𝑤𝑡,𝑗+0.1𝑤𝑡−1,𝑗if𝑗 is odd and𝑥𝑡,𝑗=0.2𝑤𝑡,𝑗+0.6𝑤𝑡−1,𝑗otherwise. The coefficients are set to (𝛽(1) 1,𝛽(1) 2,𝛽(1) 3,𝛽(1) 4)=(−7.62,6.72,−5.55,3.77), (𝛽(2) 1,𝛽(2) 2,𝛽(2) 3,𝛽(2) 4)=(6.89,−6.18,4.47,−3.10). Using Theorem 2.2 of Ling and McAleer (2002) again, one can verify that 𝜖𝑡only has a finite fourth moment and 𝑥𝑡,𝑗has a finite twelfth moment. Moreover, it is easy to show that (A1) and (A2) in Section 3.1 are fulfilled by the above model specification. One distinct feature of this example is that the error term and all candidate covariates are condition- ally heteroscedastic. Table S5.1 records the performance of the methods introduced in Section 4 based on 1000 replications and (𝑛,𝑝𝑛,𝑟(𝑛))=(800,250,4),(1000,275,5), and (1500, 300, 6). The table re- veals that FHTD is the only method that efficiently identifies the correct ARX model. More specifically, it successfully chooses the correct ARX model over 89% of the time, in all cases of 𝑛considered in this example. Unit root processes with many predictors 35 Table S5.1. Values of E, SS, TP, and FP in Example S5.1 LASSO ALasso OGA-3 AR-ALasso AR-OGA-3 FHTD (𝑛,𝑝∗𝑛,𝑝𝑛,𝑟(𝑛),𝑞𝑛)=(800,1000,250,4,10) E 0 0 0 0 137 926 SS 0 0 0 0
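The simulation design (S5.1) above can be sketched in code. The following is a minimal, illustrative Python simulation of that data-generating process at reduced size (function and variable names are ours, not the authors' implementation; the filter expansion (1 + 0.4B)(1 − B)^2 = 1 − 1.6B + 0.2B^2 + 0.4B^3 gives the recursion for y_t):

```python
import numpy as np

def simulate_s51(n=800, p=250, burn=200, seed=0):
    """Sketch of the design (S5.1):
    (1 + 0.4B)(1 - B)^2 y_t = sum_{j=1}^2 sum_{l=1}^4 beta_l^(j) x_{t-l,j} + eps_t,
    with GARCH(1,1) errors, two ARCH(1) common factors, and MA-filtered covariates."""
    rng = np.random.default_rng(seed)
    T = n + burn
    # GARCH(1,1) errors: sigma_t^2 = 5e-2 + 0.5 eps_{t-1}^2 + 0.1 sigma_{t-1}^2
    eps = np.zeros(T)
    sig2 = np.full(T, 5e-2 / (1.0 - 0.6))  # start at the unconditional variance
    for t in range(1, T):
        sig2[t] = 5e-2 + 0.5 * eps[t - 1] ** 2 + 0.1 * sig2[t - 1]
        eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    # Two independent ARCH(1) factors: h_{t,j}^2 = 1 + 0.2 pi_{t-1,j}^2
    pi = np.zeros((T, 2))
    for j in range(2):
        for t in range(1, T):
            pi[t, j] = np.sqrt(1 + 0.2 * pi[t - 1, j] ** 2) * rng.standard_normal()
    # w_{t,j} = pi_{t,1} + v_{t,j} (j odd, i.e. 0-indexed even), else pi_{t,2} + v_{t,j}
    v = rng.standard_normal((T, p))
    w = np.empty((T, p))
    for j in range(p):
        w[:, j] = pi[:, 0] + v[:, j] if j % 2 == 0 else pi[:, 1] + v[:, j]
    # x_{t,j} = 0.8 w_{t,j} + 0.1 w_{t-1,j} for odd j (1-indexed); 0.2/0.6 otherwise
    x = np.zeros((T, p))
    for j in range(p):
        a, b = (0.8, 0.1) if j % 2 == 0 else (0.2, 0.6)
        x[1:, j] = a * w[1:, j] + b * w[:-1, j]
    # Relevant coefficients: first two covariate groups, lags 1..4
    beta = np.array([[-7.62, 6.72, -5.55, 3.77],
                     [6.89, -6.18, 4.47, -3.10]])
    # (1 + 0.4B)(1 - B)^2 = 1 - 1.6B + 0.2B^2 + 0.4B^3, so
    # y_t = 1.6 y_{t-1} - 0.2 y_{t-2} - 0.4 y_{t-3} + (regression part) + eps_t
    y = np.zeros(T)
    for t in range(4, T):
        rhs = eps[t] + sum(beta[j, l - 1] * x[t - l, j]
                           for j in range(2) for l in range(1, 5))
        y[t] = 1.6 * y[t - 1] - 0.2 * y[t - 2] - 0.4 * y[t - 3] + rhs
    return y[burn:], x[burn:], eps[burn:]

y, x, eps = simulate_s51(n=400, p=20)
```

A run of this kind (at the paper's sizes, e.g. n = 800 and p_n = 250) produces the kind of conditionally heteroscedastic, unit-root data on which Table S5.1 compares the selection methods.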
The Poisson tensor completion non-parametric differential entropy estimator

Daniel M. Dunlavy∗, Richard B. Lehoucq†, Carolyn D. Mayer‡, and Arvind Prasadan§
Sandia National Laboratories, Albuquerque, NM and Livermore, CA

Abstract

We introduce the Poisson tensor completion (PTC) estimator, a non-parametric differential entropy estimator. The PTC estimator leverages inter-sample relationships to compute a low-rank Poisson tensor decomposition of the frequency histogram. Our crucial observation is that the histogram bins are an instance of a space partitioning of counts and thus can be identified with a spatial Poisson process. The Poisson tensor decomposition leads to a completion of the intensity measure over all bins, including those containing few to no samples, and leads to our proposed PTC differential entropy estimator. A Poisson tensor decomposition models the underlying distribution of the count data and guarantees non-negative estimated values, and so can be safely used directly in entropy estimation. We believe our estimator is the first tensor-based estimator that explicitly exploits the underlying spatial Poisson process related to the histogram when estimating the probability density with low-rank tensor decompositions or tensor completion. Furthermore, we demonstrate that our PTC estimator is a substantial improvement over standard histogram-based estimators for sub-Gaussian probability distributions because of the concentration of norm phenomenon.

1 Introduction

The differential entropy of a multivariate random variable has practical use in a number of applications, in addition to the intrinsic mathematical and statistical desire to estimate entropy from sample observations.
Example applications in statistical modeling include goodness-of-fit tests and tests of uniformity [1, 2], alternatives to maximum likelihood estimators in settings where such estimators are not consistent [3], feature selection in biostatistics and machine learning models with tens of thousands of features [4], and independent component analysis applied to blind source separation problems [5, 6]. In addition, differential entropy has been used in quantifying the thermodynamics [7] of a computational process. A recent result explains that histogram-based estimators for the differential entropy of a multivariate random variable are asymptotically optimal [8]. Unfortunately, in practice, we are often limited to a finite sample of the unknown probability density upon which the differential entropy relies. It is also well understood that the accuracy of histogram-based probability density estimators depends upon an exponentially large number of bins as the number of variates increases (see, e.g., [9]). This raises the question of whether the histogram data can be better leveraged by exploiting inherent relational structure across the samples and/or bins. In previous work, Vandermeulen and Ledent demonstrate that leveraging low-rank models of the histogram counts can lead to improved estimates of the density over histogram-based estimators using the same samples and bins [10]. However, their approach assumes near independence of the variates and does not explicitly account for the underlying distribution of the histogram data, thus providing opportunities for estimator improvements and applicability to more general multivariate distributions.

∗dmdunla@sandia.gov, †rblehou@sandia.gov, ‡cdmayer@sandia.gov, §aprasad@sandia.gov
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2025-05664R
https://arxiv.org/abs/2505.04957v2
arXiv:2505.04957v2 [math.ST] 9 May 2025

Our contribution is the Poisson tensor completion (PTC) estimator, a non-parametric differential entropy estimator that expands on this previous work, explicitly modeling the underlying distribution of the histogram data by exploiting connections between frequency histograms, spatial Poisson processes, and low-rank Poisson tensor decompositions. The PTC estimator leverages inter-sample relationships by computing a low-rank decomposition of the frequency histogram, which is an instance of a tensor, or multi-dimensional array. However, we observe that the histogram bins are an instance of a space partitioning of counts and thus can be identified with a spatial Poisson process, enabling us to remove the assumption made by Vandermeulen and Ledent that the variates are nearly independent. Following this observation, we compute a low-rank Poisson tensor decomposition [11] of the histogram bin counts to approximate the Poisson intensity measure, which is the probability density integrated over the bins. A Poisson tensor decomposition models the underlying distribution of the count data and guarantees non-negative estimated values, and so can safely be used directly in entropy estimation. The Poisson tensor decomposition leads to a completion of the intensity measure over all bins, including those containing few to no samples, and leads to our proposed PTC differential entropy estimator. We believe our estimator is the first tensor-based estimator that explicitly exploits the underlying spatial Poisson process related to the histogram when estimating the probability density with low-rank tensor decompositions or tensor completion. Furthermore, we demonstrate that our PTC estimator is a substantial improvement over standard histogram-based estimators for sub-Gaussian probability distributions because of the concentration of norm phenomenon.
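Before detailing the PTC estimator, the baseline it improves upon, the histogram plug-in differential entropy of equation (1.5) in Section 1.1, can be sketched in a few lines (an illustrative Python sketch, not the authors' code; the function name `hist_entropy` is ours):

```python
import numpy as np

def hist_entropy(counts, bin_volumes):
    """Histogram differential entropy estimator, equation (1.5):
    ent = -sum_j (c_j/s) log(c_j/s) + sum_j (c_j/s) log|B_j|.
    Zero-count bins contribute nothing, since x log x -> 0 as x -> 0+."""
    counts = np.asarray(counts, dtype=float)
    vols = np.asarray(bin_volumes, dtype=float)
    s = counts.sum()
    p = counts / s
    nz = p > 0  # drop zero-count bins before taking logs
    return -(p[nz] * np.log(p[nz])).sum() + (p[nz] * np.log(vols[nz])).sum()

# Uniform density on [0, 1] split into four equal bins: the estimator
# recovers the true differential entropy of U(0, 1), which is 0.
print(hist_entropy([25, 25, 25, 25], [0.25] * 4))  # prints 0.0
```

The PTC estimator replaces the raw counts c_j in this formula with the completed, low-rank Poisson model of the expected counts, which is what repairs the many zero or near-zero bins that degrade this baseline in higher dimensions.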
The remainder of our paper is organized as follows: Sections 1.1 and 1.2 present the mathematical background and work related to the PTC estimator. Section 2 provides a detailed description of the PTC estimator and a preliminary error analysis. Section 3 compares and demonstrates the improvements of the PTC estimator over existing estimators via several experiments. Conclusions and potential future work are presented in Section 4.

1.1 Background: frequency histograms, spatial Poisson processes, and count tensors

We first review the frequency histogram approximation of a d-variate density p under which the samples are independently drawn. We then demonstrate that the histogram counts can be identified with a spatial Poisson process, and we can then conclude that the PTC estimator is a principled approach.

Frequency histograms. Suppose we have a collection of random points

x_1, \ldots, x_s \in \mathbb{R}^d \qquad (1.1)

independently distributed with respect to a multivariate density p, and then partition a finite volume

B = \bigcup_{j=1}^{n_1 n_2 \cdots n_d} B_j \subset \mathbb{R}^d, \qquad B_i \cap B_j = \emptyset \text{ when } i \neq j, \qquad (1.2)

where n_i is the number of bins along the i-th axis for i = 1, \ldots, d; denote n := n_1 n_2 \cdots n_d, so that |B| = \sum_{j=1}^n |B_j| is finite. Let c_j denote the count of the points (1.1) that lie in bin B_j. The resulting frequency histogram can be normalized so that the approximation

\hat{p}_{\mathrm{hist}} \, 1_B(x) := \sum_{j=1}^n \frac{c_j}{s\,|B_j|} \, 1_{B_j}(x) \approx p \, 1_B(x) \qquad (1.3)

where |B_j| = \int_{B_j} dV is the volume of B_j and the indicator function is defined by

1_{B_j}(x) := \begin{cases} 1 & x \in B_j; \\ 0 & \text{otherwise} \end{cases}

holds over \mathbb{R}^d and implies that the error depends upon the size of
\hat{p}_{\mathrm{hist}} - p over the finite volume region B and the size of p outside of B. We can then approximate the differential entropy of the random variable X over region B,

\mathrm{ent}(p \, 1_B) = -\int_{B \subset \mathbb{R}^d} p \log p \, dV = -\mathbb{E}\, \log p(X) \, 1_B(X), \qquad (1.4)

with

\mathrm{ent}(\hat{p}_{\mathrm{hist}} \, 1_B) = -\sum_{j=1}^n \int_{B_j} \hat{p}_{\mathrm{hist}} \log \hat{p}_{\mathrm{hist}} \, dV = -\sum_{j=1}^n \frac{c_j}{s} \log \frac{c_j}{s\,|B_j|} = -\sum_{j=1}^n \frac{c_j}{s} \log \frac{c_j}{s} + \sum_{j=1}^n \frac{c_j}{s} \log |B_j|. \qquad (1.5)

Note that ent(\hat{p}_{\mathrm{hist}} 1_B) remains well-defined for finite bin volumes |B_j| and zero counts because \lim_{x \to 0^+} x \log x = 0.

Spatial Poisson processes. We now show that a useful model for the random set of points (1.1) is a spatial Poisson process. Define the counting measure

N(A) := \sum_{k=1}^s 1_A(x_k)

that returns the number of points (1.1) in A \subset \mathbb{R}^d, so that

N(B_j) = \sum_{k=1}^s 1_{B_j}(x_k) = c_j.

In words, the number of points in the frequency histogram bin B_j is given by N(B_j). It is well known (see, e.g., [9, p. 53]) that the count c_j is binomially distributed with s events and success probability

\mu(B_j) = \int_{B_j} p \, dV. \qquad (1.6)

Approximation of the binomial distribution by the Poisson distribution is excellent when n is large and \mu(B_j) is small, so that the bin counts c_j can be approximated with a spatial Poisson process because the following two properties are satisfied:

1. For each bin B_j,

P[N(B_j) = k] = e^{-\mu(B_j)} \frac{\mu(B_j)^k}{k!} \qquad (1.7a)

where the success probability (1.6) is the mean measure for the spatial Poisson process and p can be identified with the Poisson intensity measure. This identification is the crucial relationship among the multivariate random variable density p, the Poisson intensity measure, and the spatial Poisson process that is leveraged by the PTC estimator.

2. Given the disjoint bins B_1, \ldots, B_n, then

N(B_1), \ldots, N(B_n) \qquad (1.7b)

are independent random variables. This holds because we assumed that the set of points (1.1) is randomly drawn under p, and we remark that the independence among the N(B_j) holds regardless of the independence of the variates.

Tensors. A tensor is a multi-dimensional array whose order is the number of dimensions d.
Examples include scalars (tensors of order zero), e.g., x; vectors (tensors of order one), e.g., v; and matrices (tensors of order two), e.g., A. Following the tensor notation in [12], we denote tensors of order three and higher with bold capital script letters, e.g., X. The frequency histogram for the probability density p can be represented as a tensor of count data of order d, which we call the histogram tensor, with the partitioning scheme specified in (1.2). Unless the number of points (1.1) is exponentially large in the number of variates, an unlikely scenario, the histogram-based density estimator \hat{p}_{\mathrm{hist}} is sparse with many zero or near-zero bins, and so does not provide an accurate approximation of the probability density, so that the differential entropy approximation suffers. Exploiting this sparsity, we compute a low-rank Poisson canonical polyadic (CP) tensor decomposition [11] of the histogram tensor and use tensor completion to provide a model of the expected counts in all bins, including those containing no sample points. We will show that for important classes of multivariate densities with appropriate tail behavior effecting concentration of norm, the density estimator based on the low-rank Poisson tensor decomposition and completion of the frequency histogram is
https://arxiv.org/abs/2505.04957v2
an improvement over the histogram-based density estimator (1.3). The ensuing differential-entropy estimator is an improvement, at times dramatically so, over the histogram-based differential-entropy approximation (1.5). As noted above, we develop our differential-entropy estimator using the Poisson CP tensor decomposition. In future work, we may consider other low-rank Poisson tensor decompositions (e.g., Tucker [13] or tensor-train [14] decompositions) when such models and the associated statistical theory are adapted for use with Poisson data.

1.2 Related work

For a general survey of non-parametric entropy estimation, we refer the interested reader to [15]. We reviewed in Section 1.1 that the differential entropy of a random variable is a function of the corresponding probability density. Classical approaches to estimating the multivariate probability density include histograms and kernel density estimators (KDEs), with the limitation that, given $s$ samples in $d$ dimensions, consistency generally requires $s h^d \to \infty$, where $h$ is the bin width of the histogram or the bandwidth parameter of the KDE; see, e.g., the textbook [9] for a review. The authors of [10] introduce the use of tensor decompositions as an improvement on the histogram and KDE for high-dimensional density estimation. In contrast to our approach, which only assumes that the counts are independent (1.7b), the approach of [10] assumes independence of the variates and is equivalent to modeling the histogram counts as Gaussian data by using least-squares loss functions in computing the low-rank tensor decompositions. Three other papers [16, 17, 18] also use tensor decompositions to estimate a multivariate density. The report [17] is similar in spirit to [10], where a transformation into the Fourier domain leads to the low-rank structure assumed in [19, 10].
Similarly, we note that our estimator is distinguished from these tensor-based estimators in that we model the histogram tensor values as counts and compute low-rank Poisson tensor factorizations to satisfy these modeling assumptions. The accuracy of the histogram-based differential-entropy estimator (1.5) depends upon an exponentially large number of bins as the number of variates increases. The plug-in estimator

$$-\frac{1}{s}\sum_{i}\log\widehat p_{\text{hist}}(x_i) \tag{1.8}$$

is simpler; however, its accuracy is also beholden to an exponential number of samples [20]. Our work directly improves upon the plug-in estimator (1.8) obtained with a histogram or KDE density estimate for a fixed sample size. A slight improvement to (1.8) might be obtained by splitting the data, that is, by using $s-1$ points to estimate the density and then using the held-out point to estimate the entropy, before averaging over all held-out points, as in [21]. In one dimension, the inverse of the spacing between samples yields a rough estimate of the density at a given point, which can be used (with some bias-correction terms) in (1.8) [22]. In higher dimensions, the natural analog of the spacing estimator is based on $k$-nearest-neighbor distances. The work in [23] is such an estimator, which uses 1-nearest-neighbor distances; later work, including [24] and [25], generalized the estimator to $k$-nearest-neighbor distances for $k > 1$. We refer the interested reader to [26] and [27, 28] for a more thorough characterization of the
spacing and $k$-NN estimators, respectively. In short, for $s$ samples, we require $k$ to be chosen so that $k/s \to 0$ for the estimator to be consistent, but $k \to \infty$ as a function of $s$ for the variance of the estimator to be minimized; a choice of $k_s \propto \log^6 s$ is sufficient. As a point of interest, the requirement that a density have at least one continuous derivative at all points (see [27, Theorem 1]) precludes the application of the above theory to the uniform distribution. A recent density estimator of non-trivial complexity is that in [29], which seeks to estimate a set of marginal distributions and an accompanying copula, or dependence structure, of said marginals. The performance of this estimator is generally worse than that of the estimator based on $k$-nearest-neighbor ($k$-NN) distances [30], which is easily adapted to yield an estimator for the entropy and is based upon pairwise distances between samples. As we show in Section 3, our new PTC differential-entropy estimator improves upon estimators based on these $k$-NN density approximations for several classes of probability distributions.

2 The Poisson tensor completion estimator for differential entropy

The histogram-based estimator $\operatorname{ent}(\widehat p\,\mathbb 1_B)$ of $\operatorname{ent}(p\,\mathbb 1_B)$ in (1.5) is poor when the number of sample points in (1.1) needed for an acceptable approximation is prohibitive. This is especially so as the dimension $d$ increases, because many bins will contain zero counts even though their contribution to the differential entropy is needed. In Section 1.1, we demonstrated the relationship among the frequency histogram, a spatial Poisson process modeling the sample points drawn from the $d$-variate distribution defined by the density $p$, and an order-$d$ tensor of counts. In this section, we leverage those relationships to define the Poisson tensor completion (PTC) estimator for differential entropy and demonstrate how it can be used for efficient estimation.
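To ground the objects defined in Section 1.1, the following minimal sketch builds the order-$d$ histogram tensor of counts from samples and evaluates the histogram entropy estimate (1.5); zero-count bins contribute nothing since $x\log x\to 0$. The sample size, bin counts, and the per-dimension range $[-4,4]$ are illustrative choices, not values from the paper.

```python
import numpy as np

# Build a d-way histogram tensor of counts and evaluate the histogram
# entropy estimate (1.5). Sizes and the range [-4, 4] are illustrative.
rng = np.random.default_rng(0)
d, s, n_i = 3, 5000, 20
X = rng.standard_normal((s, d))                 # s samples of a d-variate density

edges = [np.linspace(-4.0, 4.0, n_i + 1)] * d   # n_i bins per dimension
T, _ = np.histogramdd(X, bins=edges)            # histogram tensor of counts

h = 8.0 / n_i                                   # bin width per dimension
vol = h ** d                                    # common bin volume |B_j|
c = T[T > 0]                                    # nonzero counts; empty bins add 0
ent_hist = -np.sum(c / s * np.log(c / (s * vol)))

true_ent = 1.5 * np.log(2 * np.pi * np.e)       # exact entropy of N(0, I_3)
print(T.shape, float(ent_hist), float(true_ent))
```

With $s = 5000$ samples spread over $20^3 = 8000$ bins, most bins are empty: exactly the sparsity that motivates the tensor-completion approach developed next.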
Tensor completion is a method for computing a low-rank model of tensor data given a sample of that data; see [31] and the references therein for an introduction to tensor completion and a survey of the many approaches developed. A standard approach to tensor completion is to compute a low-rank decomposition of the sampled data and then estimate the unobserved values in the tensor from that low-rank model. Here, we use that approach and compute a Poisson canonical polyadic (CP) tensor decomposition [11] of the histogram tensor, an approach novel with this paper. We denote the histogram tensor for a $d$-variate density $p$ defined over $n = n_1 n_2\cdots n_d$ bins as $\mathcal T\in\mathbb Z_+^{n_1\times n_2\times\cdots\times n_d}$, where $n_i$ is the number of bins into which dimension $i$ is partitioned and $\mathbb Z_+$ is the set of nonnegative integers. Using multi-index notation to indicate a tensor element, i.e., $t_{\mathbf i}\equiv t_{i_1 i_2\cdots i_d}$ denotes the entry $\mathbf i = (i_1,\dots,i_d)\in[n_1]\otimes[n_2]\otimes\cdots\otimes[n_d]$ with $[n_i] = \{1,\dots,n_i\}$, we model the elements of the histogram tensor as independent Poisson random variables

$$t_{\mathbf i}\sim\operatorname{Poisson}(m_{\mathbf i}). \tag{2.1}$$

In other words, $t_{\mathbf i}$ is the number of samples (1.1) located in bin $B_\ell$ for a unique linear index $\ell\in[n]$, so that the Poisson tensor assumption (2.1) is equivalent to the spatial Poisson process (1.7) modeling the bin counts. The
linear index $\ell$ denotes a mapping between bin and tensor multi-indices; an example is the natural-ordering index notation for tensors introduced in [32], where the relationship is $\ell = L(\mathbf i)$ and $\mathbf i = T(\ell)$ with $L : [n_1]\otimes\cdots\otimes[n_d]\mapsto\mathbb Z_+$ and $T :\mathbb Z_+\mapsto[n_1]\otimes\cdots\otimes[n_d]$. To the best of our knowledge, we are the first to identify the relationship between a spatial Poisson process and a Poisson canonical polyadic (CP) tensor decomposition [11] to model histogram bin counts. We also remark that $\mathcal T$ will contain many entries corresponding to unobserved or low counts unless the number of samples $s$ is exponentially large in the number of dimensions (i.e., variates) $d$. Poisson tensor completion is a mechanism for imputing the expected values for these entries under the Poisson assumption (2.1). We complete $\mathcal T$ by imposing low-rank CP structure on the corresponding Poisson parameter tensor $\mathcal M\in\mathbb R_+^{n_1\times n_2\times\cdots\times n_d}$ as follows:

$$\mathcal M := \sum_{r=1}^{R}\lambda_r\, a_r^{(1)}\circ a_r^{(2)}\circ\cdots\circ a_r^{(d)}, \tag{2.2}$$

where $\circ$ denotes the outer product of two vectors, $\|a_r^{(i)}\|_1 = 1$ for all $i\in\{1,\dots,d\}$ and $r\in\{1,\dots,R\}$, and $\mathbb R_+$ is the set of nonnegative real values. Note that the entries $m_{\mathbf i}$ of the low-rank CP parameter tensor $\mathcal M$ are nonnegative because, by definition, they model the probability of a number of events occurring in a given space, in this context as defined by the spatial Poisson process associated with the frequency histogram. This approach improves upon the related work of Vandermeulen and Ledent [10], which relies on tensor decompositions designed for Gaussian data and does not guarantee nonnegative entries without imposing additional constraints. We estimate the mean measure (1.6) of the spatial Poisson process using the maximum Poisson likelihood estimation introduced in [11]. Then, with the natural-ordering index mapping $T$ defined above, each entry in $\widehat{\mathcal M}$ estimates the true Poisson parameter of (2.1), which in turn represents the expected count of the corresponding bin in the frequency histogram as follows:

$$\widehat m_{T(\ell)} = \widehat m_{\mathbf i}\approx m_{\mathbf i} = \mathbb E(t_{\mathbf i}).$$
(2.3)

We can therefore define the induced probability density

$$\widehat p_{\text{ptc}}\,\mathbb 1_B := \frac{1}{\|\widehat{\mathcal M}\|_1}\sum_{j=1}^{n}\frac{\widehat m_{T(j)}}{|B_j|}\,\mathbb 1_{B_j}, \tag{2.4a}$$

where $\|\widehat{\mathcal M}\|_1 := \sum_{j=1}^{n}\widehat m_{T(j)}$, so that

$$(\widehat p_{\text{ptc}}\,\mathbb 1_B)(x) = \begin{cases}\dfrac{\widehat m_{T(j)}}{\|\widehat{\mathcal M}\|_1\,|B_j|} & x\in B_j,\\[4pt] 0 & x\notin B.\end{cases} \tag{2.4b}$$

For a real-valued function $f:\mathbb R^d\to\mathbb R$, we then have that

$$\int_{\mathbb R^d} f\,\widehat p_{\text{ptc}}\,\mathbb 1_B\,dV = \int_B f\,\widehat p_{\text{ptc}}\,dV = \sum_{j=1}^{n}\int_{B_j} f\,\widehat p_{\text{ptc}}\,dV = \frac{1}{\|\widehat{\mathcal M}\|_1}\sum_{j=1}^{n}\widehat m_{T(j)}\,\bar f_{B_j},$$

where $\bar f_{B_j} := \frac{1}{|B_j|}\int_{B_j} f\,dV$ is the average value of $f$ over the bin $B_j$. In so many words, the frequency histogram for samples of a multivariate probability density $p$ with $d$ variates can be identified with an estimate $\widehat{\mathcal M}$ of the mean measure of the spatial Poisson process, computed via Poisson tensor completion. Setting $f = \log(\widehat p_{\text{ptc}}\,\mathbb 1_B)$ defines the PTC estimator for differential entropy as

$$\operatorname{ent}(\widehat p_{\text{ptc}}\,\mathbb 1_B) = -\sum_{j=1}^{n}\frac{\widehat m_{T(j)}}{\|\widehat{\mathcal M}\|_1}\log\!\left(\frac{\widehat m_{T(j)}}{\|\widehat{\mathcal M}\|_1\,|B_j|}\right) \tag{2.5}$$

because $\bar f_{B_j} = \log\big(\widehat m_{T(j)}/(\|\widehat{\mathcal M}\|_1\,|B_j|)\big)$. Our estimate of the underlying multivariate density is given by the dense tensor $\widehat{\mathcal M}$, which is the same size as the histogram tensor $\mathcal T$; it contains $\prod_{i=1}^{d} n_i$ entries and, subject to the available computational resources, may be prohibitively large to store. Thus, it is useful to store $\widehat{\mathcal M}$ in the decomposed form defined in (2.2), which is comprised of $(R+1)\sum_{i=1}^{d} n_i$ entries, computing $\|\widehat{\mathcal M}\|_1$ and $\widehat m_{T(j)}$ in (2.5) only as needed. Recall that the vectors of a Poisson CP tensor decomposition (2.2) are normalized to unit
length in the 1-norm, thus leading to the simple computation $\|\widehat{\mathcal M}\|_1 = \sum_{r=1}^{R}\lambda_r$. Computing each $\widehat m_{T(j)}$ requires $R(d+1)$ operations.

2.1 Error Analysis

The error of the PTC entropy estimator is

$$\operatorname{ent}(\widehat p_{\text{ptc}}) - \operatorname{ent}(p) = \operatorname{ent}(\widehat p_{\text{ptc}}\,\mathbb 1_B) - \operatorname{ent}(p\,\mathbb 1_B) - \operatorname{ent}(p\,\mathbb 1_{\mathbb R^d\setminus B}) = -\sum_{j=1}^{n}\int_{B_j}\big(\widehat p_{\text{ptc}}\log\widehat p_{\text{ptc}} - p\log p\big)\,\mathbb 1_{B_j}\,dV - \operatorname{ent}(p\,\mathbb 1_{\mathbb R^d\setminus B}) = -\sum_{j=1}^{n}\left(\frac{\widehat m_{T(j)}}{\|\widehat{\mathcal M}\|_1}\log\frac{\widehat m_{T(j)}}{\|\widehat{\mathcal M}\|_1\,|B_j|} - \int_{B_j} p\,\mathbb 1_{B_j}\log(p\,\mathbb 1_{B_j})\,dV\right) - \operatorname{ent}(p\,\mathbb 1_{\mathbb R^d\setminus B}) \tag{2.6a}$$

and the analogous estimate for the histogram error is

$$\operatorname{ent}(\widehat p_{\text{hist}}) - \operatorname{ent}(p) = -\sum_{j=1}^{n}\left(\frac{c_j}{s}\log\frac{c_j}{s\,|B_j|} - \int_{B_j} p\,\mathbb 1_{B_j}\log(p\,\mathbb 1_{B_j})\,dV\right) - \operatorname{ent}(p\,\mathbb 1_{\mathbb R^d\setminus B}), \tag{2.6b}$$

where we recall that $s$ is the number of samples and $c_j$ is the count of samples in bin $B_j$, which is binomially distributed and which we model with a spatial Poisson process; see the discussion following (1.6). Both estimates show that the error includes a term given by the size of $p$ on $\mathbb R^d\setminus B$, irrespective of the estimator. This, for example, suggests that the tail behavior of $p$ plays a crucial role independent of the estimator. The estimators differ in their bin-wise expressions, which consist of the error in density estimation and then in entropy estimation. The value of $\operatorname{ent}(\widehat p_{\text{hist}})$ over bins without counts is zero, whereas $\operatorname{ent}(\widehat p_{\text{ptc}})$ is non-zero because of tensor completion. The bias-variance tradeoff for the size of the bins of a histogram density estimator is well known; see, e.g., [9], which reviews that decreasing bin size increases the variance and increasing bin size increases the bias. However, the bias-variance tradeoff for bin sizes in PTC density estimation is nontrivial because, as we explained in the paragraph containing (2.3), the tensor $\widehat{\mathcal M}$ is determined by the maximum Poisson likelihood estimation introduced in [11]. This likelihood approach was introduced as a loss function for an optimization algorithm to approximate the Poisson parameters $m_{\mathbf i} = \mathbb E(t_{\mathbf i})$; the bias and variance of $\widehat{\mathcal M}$ were not considered. In addition, the propagation of the error in completion to the error in entropy approximation requires investigation.
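As a concrete sketch of the bin-wise evaluation (with illustrative names, not the pyttb interface), the PTC entropy (2.5) can be computed directly from the CP weights $\lambda_r$ and unit-1-norm factor vectors without forming the dense tensor $\widehat{\mathcal M}$: each entry $\widehat m_{T(j)}$ costs $R(d+1)$ operations and $\|\widehat{\mathcal M}\|_1 = \sum_r\lambda_r$. Equal bin volumes are assumed for simplicity.

```python
import numpy as np
from itertools import product

def ptc_entropy(lmbda, factors, bin_volume):
    """PTC differential-entropy estimator (2.5) from CP weights and factors.

    lmbda      : (R,) nonnegative CP weights
    factors    : list of d arrays of shape (n_i, R), columns with unit 1-norm
    bin_volume : common bin volume |B_j| (assumed equal for all bins here)
    """
    norm1 = lmbda.sum()                      # ||M||_1 = sum_r lambda_r
    ent = 0.0
    shape = [f.shape[0] for f in factors]
    for idx in product(*(range(n_k) for n_k in shape)):
        # m_i = sum_r lambda_r * prod_k a_r^(k)[i_k]  -- R(d+1) operations
        m = np.sum(lmbda * np.prod([f[i] for f, i in zip(factors, idx)], axis=0))
        if m > 0:
            q = m / norm1
            ent -= q * np.log(q / bin_volume)
    return ent

# Rank-1 example: uniform factors reproduce the entropy of U([0,1]^2).
n, d = 10, 2
factors = [np.full((n, 1), 1.0 / n) for _ in range(d)]
print(ptc_entropy(np.array([100.0]), factors, (1.0 / n) ** d))  # ≈ 0.0
```

Looping over all $\prod_i n_i$ multi-indices is exactly the cost the thresholding scheme of §3.3 is designed to avoid.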
The comparison of the bin-wise errors suggests that distributions with many zero- or near-zero-count bins are ripe to reap the rewards of completion (and the numerical results in §3.1 will confirm this): such distributions enable a tremendous amount of tensor completion. This trend, as it turns out, is generic for the important class of sub-gaussian distributions, as we now review. The random vector $X\in\mathbb R^d$ has sub-gaussian components $X_i$ when

$$P\{|X_i|\geq t\}\leq 2e^{-ct^2} \tag{2.7a}$$

for a constant $c$ and $i = 1,\dots,d$. In words, the tails of the multivariate distribution of $X$ decay as for a multivariate normal distribution. Examples of sub-gaussian distributions include Gaussian, uniform, and bounded distributions, as well as mixtures of sub-gaussian distributions. An important consequence is that for such distributions there is a concentration of norm, i.e.,

$$P\{\|X\|_2 - \sqrt d\geq t\}\leq 2e^{-\tilde c t^2}, \tag{2.7b}$$

where $\|X\|_2 = \sqrt{\sum_{i=1}^{d} X_i^2}$ and $\tilde c$ is a constant. In particular, a sample of a multivariate standard normal concentrates about a thin spherical shell of radius $\sqrt d$, so that as the number of variates $d$ increases, so does the distance of the spherical shell from the origin. This suggests a threshold on the size of the elements of $\widehat{\mathcal M}$ during its computation; such a scheme is introduced in §3.3.

3 Experiments

Our experiments were performed in Python and are summarized by the following three steps.

1. Sample the appropriate distribution $p$ to
generate $s$ random points (1.1).

2. Bin the random points according to a specified binning scheme. Use a sparse representation of the histogram, keeping track of the nonempty bins, counts, and bin volumes. Unless otherwise specified:

• Estimates of differential entropy from $s$ samples of a $d$-dimensional distribution made using a histogram directly used bins of width $3.5\,s^{-\frac{1}{d+2}}$ in each dimension, so that

$$|B_j| = (3.5)^d\, s^{-\frac{d}{d+2}}, \tag{3.1}$$

where $s$ is the number of samples. We selected this value to be close to the asymptotically optimal bin widths for the multivariate Gaussian $N(0_d, I_d)$ [9, Eq. (3.66)]. The bin edges can be found using, e.g., arange from numpy [33].

• Histogram tensors have $n_i = 20$ bins in each dimension for a total of $n = 20^d$ bins. The bin edges can be found using, e.g., histogram_bin_edges from numpy.

3. Use the routines sptensor.from_data and cp_apr in the Tensor Toolbox pyttb [34] to compute the low-rank Poisson CP tensor model $\widehat{\mathcal M}$ as in (2.2). Unless otherwise specified, the experiments used rank $R$ at most 5.

Our experiments investigate the effect of bin size when the entropy of the underlying density is known in §3.1; then, in §3.2, we use a Gaussian mixture model, which does not have a closed-form expression for the entropy, to understand the role of rank selection. Because the entropy is small where the density is small, we introduce in §3.3 a simple thresholding on the size of the entries of $\widehat{\mathcal M}$ to reduce the cost, which becomes prohibitive with an increasing number of variates $d$ because $\widehat{\mathcal M}$ cannot be formed without a significant amount of memory, nor can we compute the individual elements $\widehat m_{T(j)}$ without a significant amount of computation.

3.1 Bin width

In order to determine the influence of bin width, we select two multivariate normal distributions and multivariate uniform distributions with independent variates, each with closed-form formulas for the entropy, so that we can determine the relative error in the approximation.
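The bin-width rule (3.1), using numpy's arange for the edges as noted above, can be sketched as follows; the per-dimension sample range $[-4, 4]$ is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Sketch of step 2: bins of width h = 3.5 * s**(-1/(d+2)) per dimension,
# with edges spanning an assumed per-dimension sample range.
s, d = 2500, 6
h = 3.5 * s ** (-1.0 / (d + 2))          # per-dimension bin width
lo, hi = -4.0, 4.0                       # assumed sample range per dimension
edges = np.arange(lo, hi + h, h)         # bin edges of width h
volume = h ** d                          # |B_j| = (3.5)^d * s^{-d/(d+2)}, Eq. (3.1)
print(h, volume, len(edges) - 1)         # width, bin volume, bins per dimension
```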
The multivariate normal distribution has entropy $\frac d2\log(2\pi e) + \frac12\log\det\Sigma$, where $\Sigma$ is the covariance matrix of order $d$, and the entropy of a multivariate uniform distribution with independent variates is $\log\big((b_1-a_1)\cdots(b_d-a_d)\big)$, where the univariate distribution for the $i$-th variate is over the interval $(a_i, b_i)$. Figure 1 shows that larger bin sizes favor the histogram-based entropy estimates and smaller bin sizes favor the tensor-based entropy estimates, with the difference nearly two orders of magnitude for the smallest bin sizes. Figure 2 shows the proportion of nonempty bins for the binning schemes in Figure 1, with the fraction of nonempty bins differing by four orders of magnitude. The two figures suggest that increasing the number of bins decreases the fraction of bins with positive counts and so favors the tensor-based entropy estimates. The figures also imply that the tensor completion effected has a dramatic impact upon the relative accuracy. The three distributions are examples of the sub-gaussian distributions reviewed at the end of §2 and demonstrate that tensor completion favors a binning scheme with small bins, in contrast to the histogram estimator of the entropy.

Figure 1: Comparing estimates and bins used from
25 trials for dimension-6 distributions. The histogram is constructed by placing $s = 2500$ samples from the distributions into bins of width $c\,s^{-\frac18}$ in each dimension, for different values of $c$. Here, the tensor-based approximation uses the same bins as the histogram-based approximation.

Figure 2: The proportion of nonempty bins for the histograms used to estimate entropy in Figure 1. The distributions are dimension $d = 6$, and the histograms use bins of width $c\cdot(2500)^{-\frac18}$ in each dimension, for different values of $c$. As $c$ increases, the number of bins decreases.

Figure 3: Error in estimated entropy of a uniform distribution over (a) $[0,1]^5$ or (b) $[0, e^2]^5$ with independent dimensions. Estimates use a histogram directly, the tensor method, or the $k$-NN method. The results shown are for dimension 5 over 25 trials, using the $k\in\{1,2,3,\dots,10,25,50,100,200\}$ or rank $\leq 5$ leading to the smallest error.

Figure 4: Error in estimated entropy using a histogram directly, the tensor method, or the $k$-NN method. The results shown are for dimension 5, over 25 trials, using the $k\in\{1,2,3,\dots,10,25,50,100,200\}$ or rank $\leq 5$ leading to the smallest error. The distributions shown are (a) normal with independent dimensions, (b) normal with correlation between dimensions, and (c) $t$ with one degree of freedom and independent dimensions (equivalent to a Cauchy distribution).

Figure 5: Estimated entropy for Gaussian mixtures with 3, 4, and 5 components. Dotted lines: tensor estimates from 25 trials of $s = 2{,}500$ samples from the distribution. Dashed lines: average histogram estimate over 25 trials of 2,500 samples. Solid lines: average histogram estimate over 25 trials with $s = 1{,}000{,}000$ samples.

Figures 3 and 4 show a comparison of the error in estimating differential entropy using a histogram directly, using a low-rank tensor approximation to a histogram, or using the $k$-NN method for several multivariate distributions with dimension 5.
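For reference, the closed-form ground-truth entropies used in these error comparisons follow directly from the formulas stated at the start of §3.1; a minimal sketch, in nats:

```python
import numpy as np

# Closed-form entropies used as ground truth in Section 3.1 (in nats).
def normal_entropy(Sigma):
    """Entropy of N(0, Sigma): (d/2) log(2*pi*e) + (1/2) log det Sigma."""
    d = Sigma.shape[0]
    return 0.5 * d * np.log(2 * np.pi * np.e) + 0.5 * np.linalg.slogdet(Sigma)[1]

def uniform_entropy(a, b):
    """Entropy of the uniform box (a_1,b_1) x ... x (a_d,b_d): log of its volume."""
    return float(np.sum(np.log(np.asarray(b) - np.asarray(a))))

print(round(uniform_entropy([0] * 5, [np.e ** 2] * 5), 6))  # 5 * log(e^2) = 10.0
```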
In these plots we observe that the PTC estimator outperforms the $k$-NN estimator for the uniform distribution on $[0,1]^d$; the two methods are similar (within an order of magnitude) for normal distributions and the uniform distribution on $[0, e^2]^d$; and the $k$-NN estimator outperforms the tensor-based estimator for the heavy-tailed Cauchy distribution. The poor behavior of PTC on the heavy-tailed Cauchy distribution and the excellent behavior of PTC on sub-gaussian distributions suggest that an adequate number of samples in a small number of bins is an important attribute.

3.2 Gaussian Mixtures

Figure 5 shows the estimated entropy for different tensor ranks compared to estimates from histograms with the $s = 2{,}500$ samples and estimates from histograms with $s$ equal to one million samples, for three Gaussian mixtures with three, four, and five components. The bin sizes for the histograms are given by (3.1), and the binning for the tensor estimates uses 20 bins in each direction. The large-sample histogram provides a basis of comparison because there is no closed-form expression for the entropy of a Gaussian mixture with more than one component. We conclude that the tensor rank $R$ needs to equal the number of components and
that PTC increases in accuracy over the histogram estimate as the number of variates increases. The latter allows us to conclude that PTC leads to a more accurate estimate for a fixed sample size.

Figure 6: Plots of a histogram of 1000 samples from a two-dimensional Gaussian mixture with three equidistant modes and its rank 1, 2, and 3 tensor decompositions. Panels: (a) histogram, (b) rank-1 decomposition, (c) rank-2 decomposition, (d) rank-3 decomposition.

Figure 7: $\|[\operatorname{vec}(\widehat{\mathcal M}_1)\cdots\operatorname{vec}(\widehat{\mathcal M}_R)]\|_F / \|\widehat{\mathcal M}\|_F$ for components $\widehat{\mathcal M}_r$ of the tensor $\widehat{\mathcal M}$, from 25 trials of 2500 samples from Gaussian mixtures with 5 clusters.

A practical detail associated with PTC is the selection of the rank $R$ needed for $\widehat{\mathcal M}$. There is little theory to guide us, and determining $R$ depends upon the application. The number of components of the mixture provides a clue: we observe a relationship between the number of mixture components and the tensor rank used to estimate the entropy. Figure 6 shows pictures of the tensor decompositions of a histogram for a two-dimensional Gaussian mixture with three components. The sequence suggests that the rank $R$ needs to be at least as large as the number of components. Denote each of the $R$ rank-one tensors in (2.2) by $\widehat{\mathcal M}_r$, so that $\widehat{\mathcal M} = \sum_{r=1}^{R}\widehat{\mathcal M}_r$, and let $\widehat{\mathcal N}$ be the tensor of order $d+1$ with dimensions $R\times n_1\times n_2\times\cdots\times n_d$. Each subplot of Figure 7 plots the ratio

$$\rho_k := \frac{\big\|\big[\operatorname{vec}(\widehat{\mathcal M}_1)\ \cdots\ \operatorname{vec}(\widehat{\mathcal M}_k)\big]\big\|_F}{\|\widehat{\mathcal M}\|_F} = \frac{\big\|\big[\widehat N_{(1)}^\top e_1\ \cdots\ \widehat N_{(1)}^\top e_k\big]^\top\big\|_F}{\|\widehat{\mathcal N}\|_F} = \frac{\big\|\big[\widehat N_{(1)}^\top e_1\ \cdots\ \widehat N_{(1)}^\top e_k\big]\big\|_F}{\|\widehat{\mathcal N}\|_F}$$

for $k = 1,\dots,R$, where $\widehat N_{(1)}$ is the matricization of $\widehat{\mathcal N}$ along the first dimension. Note that the $r$-th row of the matrix $\widehat N_{(1)}$ is denoted by $e_r^\top\widehat N_{(1)} = (\widehat N_{(1)}^\top e_r)^\top$ and equals $\operatorname{vec}(\widehat{\mathcal M}_r)^\top$, so that the above ratio determines the amount of linear independence in the first $k$ rows of $\widehat N_{(1)}$. Equivalently, the ratio indicates whether there is a value of $k$ such that the tensor $\sum_{r=1}^{k}\widehat{\mathcal M}_r\approx\widehat{\mathcal M}$; note that the ratio is unity for $k = R$.
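The diagnostic $\rho_k$ can be sketched as follows; this illustrative code normalizes by the Frobenius norm of the stacked tensor $\widehat{\mathcal N}$, an assumption consistent with the ratio being unity for $k = R$ as stated above.

```python
import numpy as np

def rho(lmbda, factors):
    """Cumulative Frobenius-norm ratios rho_k of the CP components M_r.

    Row r of the stacked matrix is vec(M_r); rho_R = 1 by construction.
    """
    R = len(lmbda)
    rows = []
    for r in range(R):
        outer = lmbda[r] * factors[0][:, r]
        for f in factors[1:]:
            outer = np.multiply.outer(outer, f[:, r])
        rows.append(outer.ravel())            # vec(M_r)
    V = np.stack(rows)                        # mode-1 matricization of N-hat
    total = np.linalg.norm(V)                 # ||N-hat||_F
    return [float(np.linalg.norm(V[:k + 1]) / total) for k in range(R)]

rng = np.random.default_rng(3)
factors = [rng.random((6, 4)) for _ in range(3)]
factors = [f / f.sum(axis=0) for f in factors]   # unit 1-norm columns
r = rho(np.array([8.0, 4.0, 2.0, 1.0]), factors)
print(r[-1])  # 1.0: the ratio is unity for k = R
```

A plateau in $\rho_k$ before $k = R$ would indicate that the remaining components contribute little mass, mirroring the rank-selection signal seen in Figure 7.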
In Figure 7, we observe that a noticeable change occurs close to where the rank of the decomposition matches the number of components in the mixture.

3.3 Tensor Thresholding

A useful observation is that the $Rd$ vectors $\widehat a_r^{(i)}$ in the Poisson CP estimator $\widehat{\mathcal M}$ of the model defined in (2.2) are non-negative and can be identified with probability mass functions. In particular, the non-negative elements are no larger than one.

Figure 8: Tensor top-$t$ sampling. The first row shows the entropy estimates using a sample of a tensor. Solid lines show the true entropy and dashed lines show the (full) rank-5 tensor estimate. The second row shows the portion of the sum of values in the tensor accounted for through sampling the tensor.

Hence, small elements of $\widehat a_r^{(i)}$ imply that the corresponding elements in the rank-one tensor

$$\widehat{\mathcal M}_r := \lambda_r\,\widehat a_r^{(1)}\circ\widehat a_r^{(2)}\circ\cdots\circ\widehat a_r^{(d)}$$

are also small and have negligible contribution to $\operatorname{ent}(\widehat p_{\text{ptc}}\,\mathbb 1_B)$. For example, if the first element of $\widehat a_r^{(1)}$ is small, then $n_2 n_3\cdots n_d$ elements of the $r$-th rank-one tensor of $\widehat{\mathcal M}$ are also small; this corresponds to a sub-tensor of order $d-1$ of the $r$-th rank-one tensor. This suggests a simple thresholding algorithm that can be used to approximate the estimator in
(2.5) and thus lower the memory and computational requirements:

• Determine the $Rd$ sets $\Omega_{r,i}$ containing the indices of each of the $Rd$ vectors $\widehat a_r^{(i)}$ whose elements are less than a threshold $0 < \tau < 1$.

• Approximate $\operatorname{ent}(\widehat p_{\text{ptc}}\,\mathbb 1_B)$ using the elements of $\widehat{\mathcal M}$ not indexed by any of the $\sum_{r=1}^{R}\sum_{i=1}^{d}\Omega_{r,i}$ indices.

The number of elements in the sum of the $R$ rank-one tensors is $Rn$. If $|\Omega_{r,i}|$ denotes the number of indices in $\Omega_{r,i}$, then

$$Rn - \sum_{r=1}^{R}\sum_{i=1}^{d}|\Omega_{r,i}|\,\frac{n}{n_i}$$

is the number of elements needed to approximate $\operatorname{ent}(\widehat p_{\text{ptc}}\,\mathbb 1_B)$. The number of non-negligible elements can be as small as zero, when $|\Omega_{r,i}| = n_i$, and as large as $Rn$, when $|\Omega_{r,i}| = 0$. Note that $|\Omega_{r,i}| = n_i$ occurs when $\widehat a_r^{(i)}$ is a uniform distribution (or nearly so) and $\tau n_i > 1$. Figure 8 shows entropy estimates found by sampling a tensor for the uniform distribution on $[0,1]^d$ and the normal distributions $N(0, I_d)$ and $N\big(0, \tfrac12[\mathbf 1_d\mathbf 1_d^\top + I_d]\big)$. The estimate from the tensor formed by combining the top-$t$ indices approaches the estimate found using the full tensor as $t$ increases.

4 Conclusion

We introduced the non-parametric Poisson tensor completion (PTC) estimator (2.5), using the low-rank Poisson tensor $\widehat{\mathcal M}$ to approximate the entropy estimator for a multivariate random variable from a frequency histogram. The PTC estimator leverages the inter-sample relationships determined during the maximum Poisson likelihood estimation introduced in [11] to compute a low-rank Poisson tensor decomposition of the frequency histogram. Our crucial observation is that the histogram bins are an instance of a space partitioning of counts and thus can be identified with a spatial Poisson process. The Poisson tensor decomposition leads to a completion of the intensity measure over all bins, including those containing few to no samples, and leads to our proposed PTC differential-entropy estimator.
An error analysis underscored the role played by tensor completion in imputing values for the density, and numerical experiments examined the roles of sub-gaussian distributions, thresholding, and the determination of tensor rank. PTC appears to work well on sub-gaussian distributions because there is an adequate number of samples in a small number of bins. Our future work consists in determining the approximation provided by tensor completion and appropriate binning strategies. Our previous work on Poisson tensor completion demonstrated that a zero-truncated Poisson CP decomposition [35] can better estimate expected counts than a Poisson CP decomposition if the number of zero counts in the observed tensor is not too large relative to the sizes of the dimensions of the tensor. We plan to explore the conditions under which such a result, combined with the thresholding approach above, may lead to an even more efficient estimator for sub-gaussian distributions based on zero-truncated Poisson tensor completion.

Acknowledgments

We thank Derek Tucker of Sandia National Labs for several helpful discussions during the writing of this manuscript. The second author thanks Scott McKinley of Tulane University for several helpful discussions on spatial Poisson processes. This work was supported by the Laboratory Directed Research and Development program (Project 233076) at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary
of Honeywell International Inc. for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

References

[1] D. Gokhale, “On entropy-based goodness-of-fit tests,” Computational Statistics & Data Analysis, vol. 1, pp. 157–165, 1983.
[2] E. Parzen, “Goodness of fit tests and entropy,” Journal of Combinatorics, Information, and System Science, vol. 16, pp. 129–136, 1991.
[3] R. Cheng and N. Amin, “Estimating parameters in continuous univariate distributions with a shifted origin,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 45, no. 3, pp. 394–403, 1983.
[4] S. Zhu, D. Wang, K. Yu, T. Li, and Y. Gong, “Feature selection for gene expression using model-based entropy,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 7, no. 1, pp. 25–36, 2008.
[5] A. Hyvärinen, “New approximations of differential entropy for independent component analysis and projection pursuit,” Advances in Neural Information Processing Systems, vol. 10, 1997.
[6] L. Faivishevsky and J. Goldberger, “ICA based on a smooth estimation of the differential entropy,” Advances in Neural Information Processing Systems, vol. 21, 2008.
[7] S. Talukdar, S. Bhaban, and M. V. Salapaka, “Memory erasure using time-multiplexed potentials,” Physical Review E, vol. 95, no. 6, p. 062121, 2017.
[8] Y. Han, J. Jiao, T. Weissman, and Y. Wu, “Optimal rates of entropy estimation over Lipschitz balls,” The Annals of Statistics, vol. 48, no. 6, pp. 3228–3250, 2020.
[9] D. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization, ser. A Wiley-Interscience Publication. Wiley, 2015.
[10] R. A. Vandermeulen and A. Ledent, “Beyond smoothness: Incorporating low-rank analysis into nonparametric density estimation,” Advances in Neural Information Processing Systems, vol. 34, pp. 12180–12193, 2021.
[11] E. C. Chi and T. G.
Kolda, “On tensors, sparsity, and nonnegative factorizations,” SIAM Journal on Matrix Analysis and Applications, vol. 33, no. 4, pp. 1272–1299, 2012.
[12] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, Sep. 2009.
[13] L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, pp. 279–311, 1966.
[14] I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 2295–2317, 2011.
[15] J. Beirlant, E. J. Dudewicz, L. Györfi, E. C. Van der Meulen et al., “Nonparametric entropy estimation: An overview,” International Journal of Mathematical and Statistical Sciences, vol. 6, no. 1, pp. 17–39, 1997.
[16] M. Amiridi, N. Kargas, and N. D. Sidiropoulos, “Information-theoretic feature selection via tensor decomposition and submodularity,” IEEE Transactions on Signal Processing, vol. 69, pp. 6195–6205, Jan. 2021. [Online]. Available: https://doi.org/10.1109/TSP.2021.3125147
[17] ——, “Low-rank characteristic tensor density estimation part I: Foundations,” IEEE Transactions on Signal Processing, vol. 70, pp. 2654–2668, 2022.
[18] ——, “Low-rank characteristic tensor density estimation part II: Compression and latent density estimation,” IEEE Transactions on Signal Processing, vol. 70, pp. 2669–2680, 2022.
[19] R. A. Vandermeulen, “Improving nonparametric density estimation with tensor decompositions,” 2020. [Online]. Available: https://arxiv.org/abs/2010.02425
[20] H. Joe, “Estimation of entropy and other functionals of a multivariate
density,” Annals of the Institute of Statistical Mathematics, vol. 41, pp. 683–697, 1989.
[21] P. Hall and S. C. Morton, “On the estimation of entropy,” Annals of the Institute of Statistical Mathematics, vol. 45, pp. 69–88, 1993.
[22] F. Tarasenko, “On the evaluation of an unknown probability density function, the direct estimation of the entropy from independent observations of a continuous random variable, and the distribution-free entropy test of goodness-of-fit,” Proceedings of the IEEE, vol. 56, no. 11, pp. 2052–2053, 1968.
[23] L. F. Kozachenko and N. N. Leonenko, “Sample estimate of the entropy of a random vector,” Problemy Peredachi Informatsii, vol. 23, no. 2, pp. 9–16, 1987.
[24] H. Singh, N. Misra, V. Hnizdo, A. Fedorowicz, and E. Demchuk, “Nearest neighbor estimates of entropy,” American Journal of Mathematical and Management Sciences, vol. 23, no. 3-4, pp. 301–321, 2003.
[25] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi, “A new class of random vector entropy estimators and its applications in testing statistical hypotheses,” Journal of Nonparametric Statistics, vol. 17, no. 3, pp. 277–297, 2005.
[26] P. Hall, “Limit theorems for sums of general functions of m-spacings,” in Mathematical Proceedings of the Cambridge Philosophical Society, vol. 96, no. 3. Cambridge University Press, 1984, pp. 517–532.
[27] T. B. Berrett, R. J. Samworth, and M. Yuan, “Efficient multivariate entropy estimation via k-nearest neighbour distances,” The Annals of Statistics, vol. 47, no. 1, pp. 288–318, 2019.
[28] R. Mnatsakanov, N. Misra, S. Li, and E. Harner, “k_n-nearest neighbor estimators of entropy,” Mathematical Methods of Statistics, vol. 17, pp. 261–277, 2008.
[29] G. Ariel and Y. Louzoun, “Estimating differential entropy using recursive copula splitting,” Entropy, vol. 22, no. 2, p. 236, 2020.
[30] Y. Mack and M. Rosenblatt, “Multivariate k-nearest neighbor density estimates,” Journal of Multivariate Analysis, vol. 9, no. 1, pp.
1–15, 1979.
[31] A. Liu and A. Moitra, “Tensor completion made practical,” in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33, 2020, pp. 18905–18916.
[32] G. Ballard and T. G. Kolda, Tensor Decompositions for Data Science. Cambridge University Press, 2025.
[33] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant, “Array programming with NumPy,” Nature, vol. 585, no. 7825, pp. 357–362, Sep. 2020. [Online]. Available: https://doi.org/10.1038/s41586-020-2649-2
[34] D. M. Dunlavy, N. T. Johnson et al., “pyttb: Python Tensor Toolbox, v1.8.2,” Jan. 2025. [Online]. Available: https://github.com/sandialabs/pyttb
[35] O. F. López, D. M. Dunlavy, and R. B. Lehoucq, “Zero-truncated Poisson regression for sparse multiway count data corrupted by false zeros,” Information and Inference: A Journal of the IMA, vol. 12,
https://arxiv.org/abs/2505.04957v2
arXiv:2505.05080v1 [math.ST] 8 May 2025

EXPECTATIONS OF SOME RATIO-TYPE ESTIMATORS UNDER THE GAMMA DISTRIBUTION

JIA-HAN SHIH

Abstract. We study the expectations of some ratio-type estimators under the gamma distribution. Expectations of ratio-type estimators are often difficult to compute because such estimators are constructed by combining two separate estimators. With the aid of Lukacs' Theorem and the gamma-beta (gamma-Dirichlet) relationship, we provide alternative proofs for the expected values of some common ratio-type estimators, including the sample Gini index, the sample Theil index, and the sample Atkinson index, under the gamma distribution. Our proofs, which use the distributional properties of the gamma distribution, are much simpler than the existing ones. In addition, we derive the expected value of the sample variance-to-mean ratio under the gamma distribution.

1. Introduction

Ratio-type statistical measures arise when two quantities are combined to form a scale-free population characteristic. Since the numerator and denominator are measured in the same units, their ratio is scale-invariant, resulting in a standardized quantity that is easy to interpret. If the goal is to produce a scale-free measure, a natural choice for the denominator is the population mean. For instance, the well-known Gini index [4] is a measure of income inequality, defined as the Gini mean difference divided by the population mean. For more details about the Gini index, we refer to Baydil et al. [2] and the references therein.

Estimation of ratio-type measures is typically carried out by estimating the numerator and denominator separately. If both components are estimated consistently, their ratio also yields a consistent estimator of the population ratio, thanks to the continuous mapping theorem.
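The continuous-mapping argument can be illustrated with a short simulation that is not part of the original note. The sketch below draws gamma samples and evaluates the sample Gini index (defined in (3) below) for a small and a large sample size; the sample sizes, seed, and the sorted-data formula for the pairwise sum (a standard $O(n \log n)$ rewriting of $\sum_{i<j}|Y_i-Y_j|$) are implementation choices, not part of the note.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def gini_population(alpha):
    # Closed-form Gini index of a Gamma(alpha, lambda) variable; free of lambda.
    return gamma_fn(alpha + 0.5) / (np.sqrt(np.pi) * gamma_fn(alpha + 1.0))

def gini_sample(y):
    # G_n = (1/(n-1)) * sum_{i<j} |Y_i - Y_j| / sum_i Y_i, using the sorted-data
    # identity sum_{i<j} |y_i - y_j| = sum_k (2k - n - 1) * y_(k), k = 1..n.
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    k = np.arange(1, n + 1)
    pair_sum = np.sum((2 * k - n - 1) * y)
    return pair_sum / ((n - 1) * y.sum())

rng = np.random.default_rng(0)
alpha, lam = 2.0, 3.0
g_true = gini_population(alpha)  # equals 3/8 for alpha = 2
errors = [abs(gini_sample(rng.gamma(alpha, 1.0 / lam, size=n)) - g_true)
          for n in (100, 100_000)]
```

With a large sample the estimate sits very close to the population value, consistent with $G_n \to G$ in probability.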
However, the finite-sample properties of a ratio-type estimator, such as its bias, can be challenging to derive because the estimator is formed by combining two separate estimators. Recently, Baydil et al. [2] proved that the sample Gini index is in fact unbiased under the gamma distribution, using the identity (4) and the Laplace transform.

Date: May 23, 2025.
2010 Mathematics Subject Classification. 60E10, 62E10, 62F10.
Key words and phrases. Atkinson index, Dirichlet distribution, Theil index, Gini index, Proportion-sum independence theorem.

In this note, we give a simple alternative proof of the unbiasedness of the sample Gini index under the gamma distribution. The main tool of our proof is Lukacs' Theorem [6], stated below.

Theorem 1 (Lukacs [6]). Let $X_1$ and $X_2$ be nondegenerate, independent, and positive random variables. Then the proportion $X_1/(X_1+X_2)$ and the sum $X_1+X_2$ are independent if and only if $X_1$ and $X_2$ are gamma random variables with the same scale (or rate) parameter.

Lukacs' Theorem shows that the independence between a proportion and the corresponding sum uniquely characterizes the gamma distribution. Applying Lukacs' Theorem along with the gamma-beta relationship, one can readily verify the unbiasedness of the sample Gini index. To further illustrate the usefulness of Lukacs' Theorem, we derive the expectation of the sample variance-to-mean ratio (VMR), which is also a ratio-type estimator. Besides the Gini index, there are other measures of income inequality, such as the Theil index and the Atkinson index [8, 1], both of which are ratio-type measures. Recently, Vila and Saulo [9] employed the same approach as Baydil et al. [2] and computed the expected values of the sample Theil index and the sample Atkinson index under the gamma distribution. It turns out that the gamma-Dirichlet relationship can be utilized to compute these expectations.

The rest of this note is organized as follows: Section 2 presents an alternative proof of the unbiasedness of the sample Gini index. Section 3 gives alternative proofs for the expectations of the sample Theil index and the sample Atkinson index. Section 4 derives the expectation of the sample VMR. Section 5 concludes.

2. The Gini index under the gamma distribution

Recall that the gamma density is defined as
$$f(y) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)}\, y^{\alpha-1} e^{-\lambda y}, \quad y > 0, \qquad (1)$$
where $\Gamma(\cdot)$ is the gamma function, $\alpha > 0$ is the shape parameter, and $\lambda > 0$ is the rate parameter. Throughout this note, let $Y$ be a nondegenerate and positive random variable with mean $0 < \mu = \mathrm{E}(Y) < \infty$. The Gini index of $Y$ is defined as
$$G(Y) = \frac{\mathrm{E}|Y_1 - Y_2|}{2\mu},$$
where $Y_1$ and $Y_2$ are two independent and identically distributed (i.i.d.) copies of $Y$. If $Y$ follows the gamma density in (1), it has been shown that
$$G(Y) = \frac{\Gamma(\alpha + 1/2)}{\sqrt{\pi}\,\Gamma(\alpha + 1)}, \qquad (2)$$
which does not depend on the rate parameter $\lambda$; see [7].

Let $Y_1, \ldots, Y_n$ be i.i.d. random samples from the same population. The sample Gini index is given by
$$G_n = \frac{1}{n-1}\, \frac{\sum_{1 \le i < j \le n} |Y_i - Y_j|}{\sum_{i=1}^{n} Y_i}, \quad n \ge 2. \qquad (3)$$
It is clear that $G_n$ is a consistent estimator of $G$, i.e., $G_n \to G$ in probability as $n \to \infty$. Ratio-type estimators are generally expected to be biased, but Baydil et al. [2] showed that $G_n$ is, in fact, an unbiased estimator of $G$ under the gamma distribution.

Theorem 2 (Baydil et al. [2]). Let $Y_1, \ldots, Y_n$ be i.i.d. copies of $Y$, which follows the gamma density given in (1). Then $\mathrm{E}(G_n) = G(Y)$ in (2).

As mentioned in Baydil et al. [2], the main challenge in evaluating $\mathrm{E}(G_n)$ lies in the denominator in (3).
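Before turning to the proofs, the closed form (2) can be sanity-checked numerically. The sketch below (not part of the original note) evaluates $\mathrm{E}|Y_1 - Y_2|$ via the well-known representation $\mathrm{E}|Y_1 - Y_2| = 2\int_0^\infty F(y)\{1 - F(y)\}\,dy$, where $F$ is the gamma distribution function; this identity is an assumption of the check, not a tool used in the note itself.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

def gini_closed_form(alpha):
    # Right-hand side of (2): Gamma(alpha + 1/2) / (sqrt(pi) * Gamma(alpha + 1)).
    return gamma_fn(alpha + 0.5) / (np.sqrt(np.pi) * gamma_fn(alpha + 1.0))

def gini_by_integration(alpha, lam):
    # G(Y) = E|Y1 - Y2| / (2 mu), with E|Y1 - Y2| = 2 * int_0^inf F(y)(1 - F(y)) dy.
    F = lambda y: stats.gamma.cdf(y, a=alpha, scale=1.0 / lam)
    mean_abs_diff, _ = quad(lambda y: 2.0 * F(y) * (1.0 - F(y)), 0.0, np.inf)
    mu = alpha / lam
    return mean_abs_diff / (2.0 * mu)

# The numerical value matches (2) and is unchanged across rate parameters lambda.
vals = [gini_by_integration(2.0, lam) for lam in (0.5, 1.0, 4.0)]
```

For $\alpha = 2$ every entry of `vals` agrees with $\Gamma(2.5)/\{\sqrt{\pi}\,\Gamma(3)\} = 3/8$, illustrating both (2) and its independence of $\lambda$.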
To handle this term, they rewrite the expectation as follows:
$$\mathrm{E}(G_n) = \frac{1}{n-1}\, \mathrm{E}\left(\frac{\sum_{1 \le i < j \le n} |Y_i - Y_j|}{\sum_{i=1}^{n} Y_i}\right) = \frac{1}{n-1}\, \mathrm{E}\left(\sum_{1 \le i < j \le n} |Y_i - Y_j| \int_0^{\infty} e^{-z \sum_{i=1}^{n} Y_i}\, dz\right),$$
where the last equality follows from the identity
$$\frac{1}{s} = \int_0^{\infty} e^{-zs}\, dz, \quad s > 0. \qquad (4)$$
By using the Laplace transform of the gamma random variable along with some complicated integration, they managed to show that $\mathrm{E}(G_n) = G$, thus establishing that $G_n$ is an unbiased estimator of $G$ under the gamma distribution.

We now present an alternative proof of the unbiasedness of $G_n$ under the gamma distribution via Lukacs' Theorem and the gamma-beta relationship.

Proof. It suffices to consider
$$\mathrm{E}\left(\frac{|Y_1 - Y_2|}{\sum_{i=1}^{n} Y_i}\right) = \mathrm{E}\left\{\frac{Y_1 + Y_2}{Y_1 + Y_2 + \sum_{i=3}^{n} Y_i}\, \left|\frac{2Y_1}{Y_1 + Y_2} - 1\right|\right\}.$$
First, note that the pair $(Y_1 + Y_2,\, Y_1/(Y_1 + Y_2))$ and the vector $(Y_3, \ldots, Y_n)$ are independent. In addition, since $Y_1$ and $Y_2$ are i.i.d. gamma random variables, $Y_1 + Y_2$ and $Y_1/(Y_1 + Y_2)$ are independent according to Lukacs' Theorem [6]. Therefore, $Y_1 + Y_2$, $Y_1/(Y_1 + Y_2)$, and $(Y_3, \ldots, Y_n)$ are mutually independent. Consequently, the above expectation equals
$$\mathrm{E}\left(\frac{Y_1 + Y_2}{Y_1 + Y_2 + \sum_{i=3}^{n} Y_i}\right) \mathrm{E}\left(\left|\frac{2Y_1}{Y_1 + Y_2} - 1\right|\right).$$
We now compute the two expectations separately. Note that $(Y_1 + Y_2)/\sum_{i=1}^{n} Y_i$ follows the beta distribution with shape parameters $2\alpha$ and $(n-2)\alpha$. We have
$$\mathrm{E}\left(\frac{Y_1 + Y_2}{Y_1 + Y_2 + \sum_{i=3}^{n} Y_i}\right) = \frac{2}{n}.$$
On the other hand, the random variable $R = Y_1/(Y_1 + Y_2)$ follows the beta distribution with both shape parameters equal to $\alpha$. We have
$$\mathrm{E}|2R - 1| = \frac{1}{B(\alpha, \alpha)} \int_0^1 |2r - 1|\, r^{\alpha-1} (1-r)^{\alpha-1}\, dr = \frac{2}{B(\alpha, \alpha)} \int_{1/2}^{1} (2r - 1)\, r^{\alpha-1} (1-r)^{\alpha-1}\, dr,$$
where $B(\cdot,\cdot)$ is the beta function and the last equality follows from the symmetry of the integrand about $r = 1/2$. After the change of variable $u = (2r - 1)^2$, the integral becomes
$$\mathrm{E}|2R - 1| = \frac{1}{2^{2\alpha-1} B(\alpha, \alpha)} \int_0^1 (1-u)^{\alpha-1}\, du = \frac{\Gamma(2\alpha)}{2^{2\alpha-1} \alpha \Gamma(\alpha)^2} = \frac{\Gamma(\alpha + 1/2)}{\sqrt{\pi}\,\Gamma(\alpha + 1)},$$
where the last equality follows from the duplication formula [3, Eq. (5.5.5)], i.e., $\Gamma(\alpha)\Gamma(\alpha + 1/2) = 2^{1-2\alpha} \sqrt{\pi}\, \Gamma(2\alpha)$, and the identity $\alpha \Gamma(\alpha) = \Gamma(\alpha + 1)$. Combining all the results, we arrive at
$$\mathrm{E}(G_n) = \frac{1}{n-1}\, \mathrm{E}\left(\frac{\sum_{1 \le i < j \le n} |Y_i - Y_j|}{\sum_{i=1}^{n} Y_i}\right) = \frac{1}{n-1} \binom{n}{2}\, \mathrm{E}\left(\frac{|Y_1 - Y_2|}{\sum_{i=1}^{n} Y_i}\right) = \frac{\Gamma(\alpha + 1/2)}{\sqrt{\pi}\,\Gamma(\alpha + 1)}.$$
Hence $G_n$ is an unbiased estimator of $G$ under the gamma distribution. □

It is clear that our new proof relies on Lukacs' Theorem, i.e., the independence between $Y_1 + Y_2$ and $Y_1/(Y_1 + Y_2)$, which is a unique characterization of the gamma distribution. In fact, the same approach can be used to compute the population Gini index $G(Y)$. Indeed, under the gamma distribution,
$$\mathrm{E}|Y_1 - Y_2| = \mathrm{E}(Y_1 + Y_2)\, \mathrm{E}|2R - 1| = 2\mu\, \frac{\Gamma(\alpha + 1/2)}{\sqrt{\pi}\,\Gamma(\alpha + 1)}.$$
After cancelling $2\mu$, we obtain the population Gini index (2).

Remark 1. Our proof does not imply that $\mathrm{E}(G_n) = G(Y)$ if and only if $Y$ follows the gamma distribution. It would be unrealistic to expect a moment condition to force the underlying random variable to follow a specific distribution. For instance, one can directly verify that for a discrete random variable $Y$ satisfying $\Pr(Y = a) = \Pr(Y = b) = 1/2$, $0 < a < b < \infty$, the sample Gini index $G_n$ is unbiased for $G(Y)$ when $n = 2$.
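The key step of the proof, $\mathrm{E}|2R - 1| = \Gamma(\alpha + 1/2)/\{\sqrt{\pi}\,\Gamma(\alpha + 1)\}$ for $R \sim \mathrm{Beta}(\alpha, \alpha)$, together with the duplication formula it invokes, can be checked numerically. The sketch below (not part of the original note) integrates $|2r-1|$ against the beta density directly; the `points=[0.5]` hint for the kink of the integrand is an implementation detail.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

def e_abs_2r_minus_1(alpha):
    # E|2R - 1| for R ~ Beta(alpha, alpha), by direct numerical integration.
    val, _ = quad(lambda r: abs(2.0 * r - 1.0) * stats.beta.pdf(r, alpha, alpha),
                  0.0, 1.0, points=[0.5])
    return val

def closed_form(alpha):
    # Gamma(alpha + 1/2) / (sqrt(pi) * Gamma(alpha + 1)), as in the proof.
    return gamma_fn(alpha + 0.5) / (np.sqrt(np.pi) * gamma_fn(alpha + 1.0))

def duplication_gap(alpha):
    # Gamma(a) Gamma(a + 1/2) - 2^{1-2a} sqrt(pi) Gamma(2a); should be ~0.
    return (gamma_fn(alpha) * gamma_fn(alpha + 0.5)
            - 2.0 ** (1.0 - 2.0 * alpha) * np.sqrt(np.pi) * gamma_fn(2.0 * alpha))
```

For $\alpha = 1$ the beta density is uniform and $\mathrm{E}|2R - 1| = 1/2$, matching the closed form exactly.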
3. The Theil and Atkinson indices under the gamma distribution

In this section, we apply the gamma-beta and the gamma-Dirichlet relationships to compute the expectations of the sample Theil index and the sample Atkinson index.

3.1. The Theil index. The Theil T index of $Y$ is defined as
$$T(Y) = \mathrm{E}\left\{\frac{Y}{\mu} \log\left(\frac{Y}{\mu}\right)\right\}.$$
If $Y$ follows the gamma density in (1), it has been shown that
$$T(Y) = \psi(\alpha) + \frac{1}{\alpha} - \log(\alpha),$$
which does not depend on the rate parameter $\lambda$, where $\psi(\cdot)$ is the digamma function; see [7]. The sample Theil T index is given by
$$T_n = \frac{\sum_{i=1}^{n} Y_i \log(Y_i/\mu_n)}{\sum_{i=1}^{n} Y_i}, \quad n \ge 1,$$
where $\mu_n = \sum_{i=1}^{n} Y_i / n$ is the sample mean. Vila and Saulo [9] followed the same approach as Baydil et al. [2] to compute the expectation of $T_n$ under the gamma distribution. Here we state the result of Vila and Saulo [9] and provide a simple alternative proof.

Theorem 3 (Vila and Saulo [9]). Let $Y_1, \ldots, Y_n$ be i.i.d. copies of $Y$, which follows the gamma density given in (1). Then
$$\mathrm{E}(T_n) = \psi(\alpha) + \frac{1}{\alpha} + \log(n) - \psi(n\alpha) - \frac{1}{n\alpha}.$$

Proof. It suffices to consider
$$\mathrm{E}\left\{\frac{Y_1 \log(Y_1/\mu_n)}{\sum_{i=1}^{n} Y_i}\right\} = \mathrm{E}\left\{\frac{Y_1}{\sum_{i=1}^{n} Y_i} \log\left(\frac{Y_1}{\sum_{i=1}^{n} Y_i}\right)\right\} + \mathrm{E}\left(\frac{Y_1}{\sum_{i=1}^{n} Y_i}\right) \log(n).$$
Since the random variable $W = Y_1/\sum_{i=1}^{n} Y_i$ follows the beta distribution with shape parameters $\alpha$ and $(n-1)\alpha$, we only need to compute the expectation of $W \log(W)$. For a beta random variable $U$ with shape parameters $a > 0$ and $b > 0$, we have
$$\mathrm{E}\{U \log(U)\} = \frac{1}{B(a, b)} \int_0^1 u \log(u)\, u^{a-1} (1-u)^{b-1}\, du = \frac{1}{B(a, b)} \int_0^1 \frac{\partial}{\partial a}\, u^{a} (1-u)^{b-1}\, du = \frac{1}{B(a, b)} \frac{\partial}{\partial a} B(a+1, b) = \frac{a}{a+b}\{\psi(a+1) - \psi(a+b+1)\}, \qquad (5)$$
where the interchange of differentiation and integration is justified by the fact that the beta density belongs to the exponential family. Setting $a = \alpha$ and $b = (n-1)\alpha$ in (5), we obtain
$$\mathrm{E}\{W \log(W)\} = \frac{1}{n}\{\psi(\alpha + 1) - \psi(n\alpha + 1)\} = \frac{1}{n}\left\{\psi(\alpha) + \frac{1}{\alpha} - \psi(n\alpha) - \frac{1}{n\alpha}\right\},$$
using $\psi(\alpha + 1) = \psi(\alpha) + 1/\alpha$. Finally, we arrive at
$$\mathrm{E}(T_n) = n\, \mathrm{E}\left\{\frac{Y_1 \log(Y_1/\mu_n)}{\sum_{i=1}^{n} Y_i}\right\} = \psi(\alpha) + \frac{1}{\alpha} + \log(n) - \psi(n\alpha) - \frac{1}{n\alpha}.$$
Hence $T_n$ is biased under the gamma distribution. □

3.2. The Atkinson index. The Atkinson index of $Y$ is defined as
$$A(Y) = 1 - \exp\left[-\mathrm{E}\left\{\log\left(\frac{\mu}{Y}\right)\right\}\right],$$
which may also be defined through the so-called Theil L index [8, 9]. If $Y$ follows the gamma density in (1), it has been shown that
$$A(Y) = 1 - \frac{\exp\{\psi(\alpha)\}}{\alpha},$$
which does not depend on the rate parameter $\lambda$; see [9]. The sample Atkinson index is given by
$$A_n = 1 - \frac{(\prod_{i=1}^{n} Y_i)^{1/n}}{\mu_n}, \quad n \ge 1.$$
In a similar fashion, we state the result on $\mathrm{E}(A_n)$ under the gamma distribution from Vila and Saulo [9] and provide a simple alternative proof.

Theorem 4 (Vila and Saulo [9]). Let $Y_1, \ldots, Y_n$ be i.i.d. copies of $Y$, which follows the gamma density given in (1). Then
$$\mathrm{E}(A_n) = 1 - \frac{\Gamma^{n}(\alpha + 1/n)}{\alpha \Gamma^{n}(\alpha)}.$$

Proof. Define $Z_j = Y_j/\sum_{i=1}^{n} Y_i$ for $j = 1, \ldots, n$. We rewrite
$$A_n = 1 - \frac{(\prod_{i=1}^{n} Y_i)^{1/n}}{\mu_n} = 1 - n \left(\frac{\prod_{i=1}^{n} Y_i}{(\sum_{i=1}^{n} Y_i)^{n}}\right)^{1/n} = 1 - n \prod_{i=1}^{n} Z_i^{1/n}.$$
Since the random vector $(Z_1, \ldots, Z_n)$ follows the Dirichlet distribution with all shape parameters equal to $\alpha$, the product moment formula for the Dirichlet distribution [5, Eq. (49.7)] yields
$$\mathrm{E}\left(\prod_{i=1}^{n} Z_i^{1/n}\right) = \frac{\Gamma(n\alpha)}{\Gamma(n\alpha + 1)}\, \frac{\prod_{i=1}^{n} \Gamma(\alpha + 1/n)}{\prod_{i=1}^{n} \Gamma(\alpha)} = \frac{\Gamma^{n}(\alpha + 1/n)}{n\alpha \Gamma^{n}(\alpha)}.$$
Consequently,
$$\mathrm{E}(A_n) = 1 - \frac{\Gamma^{n}(\alpha + 1/n)}{\alpha \Gamma^{n}(\alpha)}.$$
Hence $A_n$ is biased under the gamma distribution. □
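The beta moment (5), the workhorse of the Theil computation, and the resulting bias formula of Theorem 3 can be checked numerically. The sketch below (not part of the original note) compares a direct integration of $u \log(u)$ against the beta density with the digamma expression in (5).

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad
from scipy.special import digamma

def u_log_u_moment_quad(a, b):
    # E{U log U} for U ~ Beta(a, b), by numerical integration.
    val, _ = quad(lambda u: u * np.log(u) * stats.beta.pdf(u, a, b), 0.0, 1.0)
    return val

def u_log_u_moment_formula(a, b):
    # Right-hand side of (5): a/(a+b) * (psi(a+1) - psi(a+b+1)).
    return a / (a + b) * (digamma(a + 1.0) - digamma(a + b + 1.0))

def expected_Tn(alpha, n):
    # Theorem 3: E(T_n) = psi(alpha) + 1/alpha + log(n) - psi(n*alpha) - 1/(n*alpha).
    return (digamma(alpha) + 1.0 / alpha + np.log(n)
            - digamma(n * alpha) - 1.0 / (n * alpha))
```

For $a = b = 1$ the moment is $\int_0^1 u \log(u)\,du = -1/4$, and for $\alpha = 1$, $n = 2$, Theorem 3 reduces to $\mathrm{E}(T_2) = \log 2 - 1/2$, both reproduced by the code.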
4. The variance-to-mean ratio under the gamma distribution

The variance-to-mean ratio (VMR) of $Y$ is defined as
$$\mathrm{VMR}(Y) = \frac{\sigma^2}{\mu},$$
provided that $\sigma^2 = \mathrm{Var}(Y) < \infty$. If $Y$ follows the gamma density in (1), it is straightforward to obtain
$$\mathrm{VMR}(Y) = \frac{1}{\lambda}.$$
The sample VMR is given by
$$\mathrm{VMR}_n = \frac{1}{n-1}\, \frac{\sum_{i=1}^{n} (Y_i - \mu_n)^2}{\mu_n}, \quad n \ge 2.$$
Below we demonstrate that Lukacs' Theorem can be applied to compute the expectation of $\mathrm{VMR}_n$.

Theorem 5. Let $Y_1, \ldots, Y_n$ be i.i.d. copies of $Y$, which follows the gamma density given in (1). Then
$$\mathrm{E}(\mathrm{VMR}_n) = \frac{n\alpha}{(n\alpha + 1)\lambda}.$$

Proof. Write $Z_j = Y_j/\sum_{i=1}^{n} Y_i$ for $j = 1, \ldots, n$. We begin by rewriting
$$\mathrm{VMR}_n = \frac{1}{n-1}\, \frac{\sum_{i=1}^{n} Y_i^2 - n\mu_n^2}{\mu_n} = \frac{n}{n-1} \left(\sum_{j=1}^{n} Z_j^2\right) \left(\sum_{i=1}^{n} Y_i\right) - \frac{1}{n-1} \sum_{i=1}^{n} Y_i.$$
By Lukacs' Theorem, $Z_j$ is independent of $\sum_{i=1}^{n} Y_i$. Moreover, $Z_j$ follows the beta distribution with shape parameters $\alpha$ and $(n-1)\alpha$ for all $j = 1, \ldots, n$. Therefore, we have
$$\mathrm{E}\left\{Z_1^2 \left(\sum_{i=1}^{n} Y_i\right)\right\} = \mathrm{E}(Z_1^2)\, \mathrm{E}\left(\sum_{i=1}^{n} Y_i\right) = \left\{\frac{n-1}{n^2(n\alpha + 1)} + \frac{1}{n^2}\right\} \frac{n\alpha}{\lambda}.$$
Consequently,
$$\mathrm{E}(\mathrm{VMR}_n) = \frac{n^2}{n-1} \left\{\frac{n-1}{n^2(n\alpha + 1)} + \frac{1}{n^2}\right\} \frac{n\alpha}{\lambda} - \frac{n}{n-1}\, \frac{\alpha}{\lambda} = \frac{n\alpha}{(n\alpha + 1)\lambda}. \qquad \Box$$

Theorem 5 implies that if the random samples are drawn from the gamma distribution, the sample VMR is biased downward:
$$\mathrm{E}(\mathrm{VMR}_n) - \mathrm{VMR}(Y) = \frac{n\alpha}{(n\alpha + 1)\lambda} - \frac{1}{\lambda} = -\frac{1}{(n\alpha + 1)\lambda} < 0.$$

5. Concluding remarks

This note illustrates that the distributional properties of the gamma distribution, namely Lukacs' Theorem and the gamma-beta (gamma-Dirichlet) relationship, are extremely powerful for deriving the expected values of ratio-type estimators. Although we present results only for measures of inequality, i.e., dispersion indices, the method may also be applied to other ratio-type statistical measures.

Acknowledgments

The author thanks Po-Han Hsu and Chee Han Tan for helpful discussions. J.-H. Shih is funded by the National Science and Technology Council of Taiwan (NSTC 112-2118-M-110-004-MY2).

Declaration

The author declares that there is no conflict of interest.

References

[1] Anthony B. Atkinson. On the measurement of inequality. Journal of Economic Theory, 2(3):244-263, 1970.
[2] Banu Baydil, Victor H. de la Peña, Haolin Zou, and Heyuan Yao. Unbiased estimation of the Gini coefficient. Statistics & Probability Letters, 222:110376, 2025.
[3] NIST Digital Library of Mathematical Functions. https://dlmf.nist.gov/, Release 1.2.4 of 2025-03-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
[4] Corrado Gini. Variabilità e mutabilità: contributo allo studio delle distribuzioni e delle relazioni statistiche. [Fasc. I.] Tipogr. di P. Cuppini, 1912.
[5] Samuel Kotz, Narayanaswamy Balakrishnan, and Norman L. Johnson. Continuous Multivariate Distributions, Volume 1: Models and Applications. John Wiley & Sons, 2019.
[6] Eugene Lukacs. A characterization of the gamma distribution. The Annals of Mathematical Statistics, 26(2):319-324, 1955.
[7] James B. McDonald and Bartell C. Jensen. An analysis of some properties of alternative measures of income inequality based on the gamma distribution function. Journal of the American Statistical Association, 74(368):856-860, 1979.
[8] H. Theil. Economics and Information Theory. Chicago: Rand McNally, 1967.
[9] Roberto Vila and Helton Saulo. Closed-form formulas for the biases of the Theil and Atkinson index estimators in gamma distributed populations, 2025.

Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan
Email address: jhshih@math.nsysu.edu.tw