have been previously considered in the literature. To motivate our proposal, we return to the key idea, mentioned in Section 1, that our regularization should be such that the bias term is linear in the regularization parameter. We develop this idea for the simplest linear program: given a vector c with positive coordin...
https://arxiv.org/abs/2505.04312v1
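The "simplest linear program" admits a closed form under one classical penalty choice. As a hedged illustration (this assumes the penalty is the negative entropy r·Σ_i x_i log x_i and the feasible set is the probability simplex, which is one concrete instance, not necessarily the paper's exact penalty class), the penalized solution is a softmax that concentrates on the unregularized minimizer as r → 0:

```python
import numpy as np

def penalized_lp_solution(c, r):
    """Solve min <c, x> + r * sum(x_i * log x_i) over {x >= 0, sum x = 1}.

    The first-order conditions give the closed-form softmax solution
    x_i(r) proportional to exp(-c_i / r).
    """
    z = np.exp(-(c - c.min()) / r)  # shift by min(c) for numerical stability
    return z / z.sum()

c = np.array([1.0, 2.0, 3.0])       # positive cost vector
for r in [1.0, 0.1, 0.01]:
    print(r, penalized_lp_solution(c, r))
# As r -> 0, the solution concentrates on the unregularized optimum e_1.
```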
solution to (8) exists and is unique. 2. (Slater's condition) The feasible region D = {x : Ax = b, x ≥ 0} of (8) has non-empty relative interior and there exists x ∈ relint D such that x > 0. 3. The constraint matrix A in (8) has full row rank. These assumptions are not restrictive and are natural in this setting. The assumption that (8) has...
version of the penalized linear program, with b_n = b. Our goal is to precisely characterize the bias induced by the regularization term as a stepping stone to (13). Specifically, we study the relationship between (8) and (11) as r → 0, and the main result of this section is that x(r,b) − x⋆ is asymptotically linear in r. Thi...
the quadratic program min_{x∈R^m} x⊤Σx, such that Ax = y, (30) Proposition 5.2. There exists a unique solution to (30), which is a linear function of y, and can therefore be written M⋆y for some M⋆ ∈ R^{m×k}. Moreover, the matrix M⋆ has the following properties: • M⋆ is a right-inverse of A, i.e., AM⋆ = I_k. • ker(A) ⊆ ker(M⋆⊤Σ). The follow...
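The two properties of M⋆ in Proposition 5.2 can be checked numerically. A minimal sketch, assuming Σ is positive definite (in that case the explicit formula M⋆ = Σ⁻¹A⊤(AΣ⁻¹A⊤)⁻¹ is the standard solution of this equality-constrained quadratic program; the paper's Σ may be more general):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 5, 2
A = rng.standard_normal((k, m))          # full row rank (almost surely)
S = rng.standard_normal((m, m))
Sigma = S @ S.T + np.eye(m)              # positive definite Sigma

Si = np.linalg.inv(Sigma)
# M* maps y to argmin x' Sigma x subject to Ax = y
M = Si @ A.T @ np.linalg.inv(A @ Si @ A.T)

# Property 1: M* is a right-inverse of A, i.e., A M* = I_k
assert np.allclose(A @ M, np.eye(k))

# Property 2: ker(A) is contained in ker(M*' Sigma)
_, _, Vt = np.linalg.svd(A)
kerA = Vt[k:].T                          # columns span ker(A)
assert np.allclose(M.T @ Sigma @ kerA, 0)
print("both properties verified")
```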
an asymptotically unbiased estimator of x⋆. Our main insight is that the linearity of the bias makes it possible to construct a linear combination of the solutions to (12) with two different values of r_n so that the term involving d⋆ exactly cancels. Indeed, the following theorem is immediate from Theorem 5.4. Theorem 6....
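The cancellation idea, combining two regularization levels so that the term linear in r drops out, can be sketched with a synthetic solution path. The weights (2, −1) at levels (r, 2r) are one concrete choice assumed here for illustration; the synthetic x(r), d, and e below are made up, standing in for the penalized solution, d⋆, and the higher-order remainder:

```python
import numpy as np

x_star = np.array([1.0, 0.0, 2.0])   # synthetic unregularized solution x*
d = np.array([0.5, -0.3, 0.1])       # synthetic first-order bias direction d*
e = np.array([1.0, 2.0, -1.0])       # synthetic higher-order coefficient

def x_of_r(r):
    # Stand-in for the penalized solution path: x(r) = x* + r d* + O(r^2).
    return x_star + r * d + r**2 * e

r = 0.01
debiased = 2 * x_of_r(r) - x_of_r(2 * r)   # the linear-in-r term cancels exactly

print(np.abs(x_of_r(r) - x_star).max(), np.abs(debiased - x_star).max())
```

With these weights, 2·x(r) − x(2r) = x⋆ − 2r²e, so the residual bias is O(r²) instead of O(r).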
BL1(R^k) denotes the set of bounded Lipschitz functions on R^k whose L∞ norm and Lipschitz constant are both bounded by 1. As is well known, such conditional weak convergence results lead to consistency for quantile estimators and confidence sets obtained from x̃_n. For example, the following result is a direct applicati...
r0, with substantial deviations from Gaussianity when r0 is too small or too large. This phenomenon can be understood by recalling the condition r_n^2 ≪ n^{-1/2} ≪ r_n, or, equivalently, n^{-1/2} ≪ r_n ≪ n^{-1/4}. When n is small, the window of valid penalization parameters is relatively narrow, implying that r0 should be chosen with caution...
equi-spaced 10 × 10 grid (2000 independent replicas in each case). For various values of the regularization parameter r0 and sample size n, the left two plots show the mean squared error of the estimated plan (E∥π̂_{n,r0} − π⋆∥^2) and the Kolmogorov–Smirnov distance between Δw_{n,r0} and the standard Gaussian. The middle two pl...
We record the net bike flow each day for all N= 678 stations in a vector d(i)∈ZN, for i= 1, . . . , n . A positive entry in d(i)means that there were more bikes docking at that station than bikes being taken away from the station during the ith day, and vice versa. We assume that the vectors d(i)are i.i.d. samples of a...
proteins. The optimal transport plan between these two distributions with cost proportional to the Euclidean distance between pixels is denoted by π⋆. The colocalization measure, as a function of distance ξ, quantifies the fraction of proteins which are “colocalized”—i.e., which are transported only a short distance un...
Since Col(π), viewed as a mapping from plans to the space of càdlàg functions, is linear and Lipschitz, Theorem 6.1 implies that √n(Ĉol_{n,n} − Col⋆) D→ Col(M⋆G), (46) where G is the asymptotic limit of √n(t_n − t, s_n − s). We construct uniform confidence bands by letting u_{1−α} be the 1−α quantile of ∥Col(M⋆G)∥∞ and defining I_n :=...
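The band construction can be sketched generically: given draws approximating the limit process on a grid, u_{1−α} is the empirical 1−α quantile of the sup norm. The synthetic standard-Gaussian draws below are a stand-in for samples of Col(M⋆G), which is paper-specific:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05
# Stand-in draws for Col(M*G) evaluated on a grid of 50 xi values;
# in practice these would come from the estimated limit distribution.
draws = rng.standard_normal((10_000, 50))

sup_norms = np.abs(draws).max(axis=1)     # ||draw||_inf for each Monte Carlo draw
u = np.quantile(sup_norms, 1 - alpha)     # u_{1-alpha}
# The band is then: estimate +/- u / sqrt(n) at every grid point.
print(u)
```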
the coordinates of x corresponding to I. Definition 2. A set I ⊆ [m] is a basis if |I| = k and rank(A_I) = k. (48) Given a basis I, we can define the basic solution x(I;b) to be the vector x satisfying x_I = A_I^{-1} b, x_{I^c} = 0. (49) If this basic solution x(I;b) is also feasible for (8), i.e., x_I is nonnegative, we say that x(I;b) is a feasi...
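Definition 2 and (49) translate directly into code. A small sketch (the matrix A, vector b, and basis I below are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])   # k = 2 constraints, m = 4 variables
b = np.array([3.0, 4.0])

def basic_solution(A, b, I):
    """Basic solution x(I; b): x_I = A_I^{-1} b and x_{I^c} = 0."""
    k, m = A.shape
    AI = A[:, I]
    assert len(I) == k and np.linalg.matrix_rank(AI) == k, "I must be a basis"
    x = np.zeros(m)
    x[list(I)] = np.linalg.solve(AI, b)
    return x

x = basic_solution(A, b, [0, 2])
print(x, "feasible:", bool((x >= 0).all()))   # feasible iff x_I is nonnegative
```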
the linear program (Assumption 2), which ensures that the primal problem (8) has a unique solution, the feasible basic solution x⋆(b′) can be expressed as x⋆(b′) = x⋆(b) + x(I; b′−b) for I ∈ I⋆(b′). (53) In the case where x(I; b′−b) is not unique, the optimal solution to the linear program with parameter b′ is achieved at an optimal face...
any d1 ∈ K and any d2 ∈ dom g ∩ A, g(d2) − g(d1) − ⟨∇g(d1), d2 − d1⟩ ≥ C min{∥d2 − d1∥, ∥d2 − d1∥^2}. Proof. Define s(d, d′) := g(d′) − g(d) − ⟨∇g(d), d′ − d⟩. The local strong convexity of g within the space A implies that s is strictly positive for all d, d′ ∈ A with d ≠ d′. We first show that for C0 > 0 sufficiently small, the set {(d1, d2) ∈ K × (dom g ∩ A) : s(d1...
f′(x⋆_j) ≥ m v_max f′(v_max). (68) Combining equations (64)–(68), we have 0 ≤ −Σ_{j∈I} x_{c,j} f′(x⋆_j) ≤ m(−v_max f′(v_max) + 2v_max(|f′(x_max + v_max)| ∨ |f′(x_min − v_max)|)). (69) Therefore, max_{j∈I} |f′(x⋆_j)| ≤ (m v_max / v_min)(|f′(v_max)| + 2(|f′(x_max + v_max)| ∨ |f′(x_min − v_max)|)). (70) A.3 Penalty functions. We have introduced our assumptions on the pe...
implying that √n(π_{λ,n} − π⋆) diverges with probability 1. Such divergence behavior should generalize to the general case. B.1.1 The small regularization regime: λ log(√n) → 0. We introduce several lemmas that describe the properties of the solutions of (73) and (74) and are key to the proof of Proposition 2.1. Lemma B.1....
As given in equation (53), the vertices of π(t_n, s) are feasible basic solutions. Applying A.4 with P = π(t_n, s) − π⋆, m = |S|, and x0 = π⋆, we conclude that ∥f′((π⋆_n)_{i,j})∥ = O_p(log n), so that |α_{i,j}| = |ϕ_i + ψ_j| = O_p(log n). Utilizing the lemmas above, we can prove the small λ regime in Proposition 2.1. Proposition B.4. In the regime λ log(√...
to 0. Proof. By the definition of Δπ, we have A0Δπ = 0. In the empirical situation with t replaced by t_n, we evaluate the difference in the cost function in equation (74) at two different vectors: the optimal plan π_{λ,n} and a feasible plan π′_{λ,n}. Since π_{λ,n} is optimal for the program, we have ⟨c, Δπ⟩ + λ Σ_{i,j=1}^N f((π_{λ,n})_{i,j}) − ...
p′(x) → +∞ as x ↑ t; in particular, this implies that the directional derivative of f_r at x(r,b) in the direction x0 − x(r,b) is −∞, contradicting the claimed optimality of x(r,b). To prove that the auxiliary program has a unique solution, we show that it is locally strongly convex. Here, I0 is defined as in (26). Lemma B.7...
Asymptotic law of random penalized program solutions: proofs of the theorems in Sections 5, 6, and 7. This section provides the complete proofs of the main theoretical results in this paper. We first establish the asymptotic behavior of the penalized program under small non-random perturbations of the equality constraints. We...
Taylor's theorem implies ⟨∇g(d⋆ + e⋆), e(r,b′) − e⋆⟩ = ⟨∇g(d⋆), e(r,b′) − e⋆⟩ + ⟨∇^2 g(d⋆)e⋆, e(r,b′) − e⋆⟩ + O(∥e⋆∥^2 ∥e(r,b′) − e⋆∥). The first-order optimality conditions for d⋆ show that the first term vanishes. Moreover, since ∇^2 g(d⋆)e⋆ = r^{-1} ΣM⋆(b′ − b) and e(r,b′) − e⋆ lies in the kernel of A, Proposition 5.2 implies that the second term vanishes...
satisfied. Since t = s and c_{i,i} = 0, the optimal transport plan π⋆ only has support on the diagonal entries. For the symmetric cost matrix, ρ is a symmetric matrix in both scenarios. Therefore, for small enough ϵ, the transportation plan π⋆(t + ϵΣ_{j=1}^N ρ_{i,j}, s + ϵΣ_{i=1}^N ρ_{i,j}) also has support on the diagonal entries. B.3.3 Bootstrap...
Stark, G. Gut, J. S. Del Castillo, M. Levesque, K.-V. Lehmann, L. Pelkmans, A. Krause, and G. Rätsch. Learning single-cell perturbation responses using neural optimal transport. Nature Methods, 20(11):1759–1768, 2023. T. Cai, J. Cheng, N. Craig, and K. Craig. Linearized optimal transport for collider events. Physic...
B. Sen. Multivariate ranks and quantiles using optimal transport: Consistency, rates and nonparametric testing. The Annals of Statistics, 50(2):1012–1037, 2022. Z. Goldfeld, K. Kato, G. Rioux, and R. Sadhu. Limit theorems for entropic optimal transport maps and Sinkhorn divergence. Electronic Journal of Statistics, 1...
bias property of semiparametric estimators. Econometrica, 72(3):947–962, 2004. S. E. Park, P. Harris, and B. Ostdiek. Neural embedding: learning the embedding of the manifold of physics data. Journal of High Energy Physics, 2023(7):1–38, 2023. G. Peyré, M. Cuturi, et al. Computational optimal transport: With applic...
Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998. A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes . Springer New York, 1996. ISBN 9781475725452. doi: 10.1007/978-1-4757-2545-2. URL http://dx.doi.org/10.1007/ 978-1-4757-2545-2 . A. Wald. Tests of stati...
SPARSE REGULARIZED OPTIMAL TRANSPORT WITHOUT CURSE OF DIMENSIONALITY. BY ALBERTO GONZÁLEZ-SANZ1, STEPHAN ECKSTEIN2, AND MARCEL NUTZ3. 1Department of Statistics, Columbia University, ag4855@columbia.edu. 2Department of Mathematics, University of Tübingen, stephan.eckstein@uni-tuebingen.de. 3Departments of Statistics and M...
https://arxiv.org/abs/2505.04721v1
The prevailing thinking is that EOT overcomes the curse of dimensionality thanks to its smoothness. Indeed, the entropic penalty leads to optimal couplings whose density is smooth (or at least as smooth as the cost c), and this regularity holds independently of the marginals. In particular, it holds uniformly over the ...
a general C^1 function. The main assumption on the divergence is that the convex conjugate ψ of φ is C^2. This includes the KL divergence, where ψ is C^∞, but also Tsallis divergences that yield sparse approximations of unregularized optimal transport. There are three main objects under consideration. The first is the optimal c...
a rate for this convergence, for general f-divergences. For the special case of quadratic divergence, [17] shows that this rate corresponds to the exact leading order and identifies the multiplicative constant, whereas [24] focuses on the discrete case where convergence is...
class with controlled covering number. The result then follows by applying empirical process theory to the dual problem of EOT. A similar bound with improved constants and more general (sub-Gaussian) marginals was obtained in [34] using a refinement of the same approach. Moreover, [34] provided the first central limi...
[21, 23], do not use empirical process theory but instead approximate the optimal couplings by infinite-order U-statistics. This approximation holds by a uniform contraction argument over the linearized Schrödinger system, which is deeply related to the PL inequality in [38]. All of the above approaches fail in our s...
potentials indeed yields the Donsker property, and together with the aforementioned Hadamard differentiability, this allowed [21, 23] to infer the desired central limit theorem for the potentials. In the present case, this approach is a nonstarter because the regularity of the potentials is too poor to give a Donsker...
for the regularized problem with quadratic divergence—is to argue that the gradient of the potential equals the partial derivative of the cost c on the support of the optimal coupling. This argument is a nonstarter if the cost c is not differentiable. In the present work, uniqueness is instead deduced from the invertib...
variables are defined. For a measurable function f : R^d → R and a random vector X : Ω → R^d with distribution P, the expectation of f(X) is denoted E[f(X)] = ∫ f(x) dP(x) = ∫ f dP. Almost-sure convergence is denoted by a.s.→, convergence in probability by P→, and convergence in distribution (or weak convergence) by w→. The lat...
the convenience of the reader and also to rectify minor inaccuracies in [1]. PROPOSITION 2.3. LetP,Q be probability measures on Rdwith compact supports Ω,Ω′. Moreover, let c∈ C(Ω×Ω′). (i)The strong duality ROT( P,Q) = DUAL( P,Q)holds. (ii)The primal problem (10) admits a unique optimizer π∈Π(P,Q). (iii) The dual proble...
theorem for the potentials can only hold if the limiting (population) potentials (f∗, g∗) are unique in a suitable sense. Indeed, we obtain a uniqueness result under quite general conditions as a by-product of our approach to the central limit theorem. Note that potentials always admit a degree of freedom: if (f∗, g∗) are...
optimal coupling for the population marginals and let πn∈ Π(Pn,Qn)denote the empirical counterpart. To state the central limit theorem for πn→ π, recall that [(f,g)]⊕denotes the equivalence class of (f,g)∈ C(Ω)× C(Ω′)inC⊕. To streamline the notation, we also define the operator [(f,g)]⊕7→ ⊕(f,g) :=f⊕g∈ C(Ω×Ω′). THEOREM...
and deduce the desired result for (f_n − f∗, g_n − g∗). To follow this strategy, one would like to replace the left-hand side ˜Γ(f_n, g_n) − ˜Γ_n(f_n, g_n) by the more tractable expression ˜Γ(f∗, g∗) − ˜Γ_n(f∗, g∗). Indeed, general arguments show that the latter expression satisfies a central limit theorem (Lemma 5.2). Let us proceed with...
potentials (f∗, g∗), which causes no harm. Once the proof of uniqueness is complete, it will be clear that there was in fact no ambiguity.) LEMMA 4.1. The operator ˜Γ : C⊕ → C(Ω) × C(Ω′) is continuously Fréchet differentiable; we denote its derivative at the point (f, g) ∈ C⊕ by L(f,g). That is, we have lim_{∥(u,v)∥⊕→0} ∥˜Γ(f+u,...
f + a = −A1(g) and g − a = −A2(f) for some (f, g) ∈ C(Ω) × C(Ω′) and a ∈ R; we need to prove that [(f, g)]⊕ = [(0, 0)]⊕. As a first step, we show that a = 0 must hold in (23). Using (23) together with the definition of A2, we have (24) ∫ ψ′′(ξ∗(x,y))(f(x) + g(y)) dQ(y) = −a ∫ ψ′′(ξ∗(x,y)) dQ(y), for all x ∈ Ω, and similarly by the definition of A1,...
Γ_n(f_n, g_n) = (1, 1). Then ∥(f_n, g_n) − (f∗, g∗)∥⊕ a.s.→ 0. PROOF. Recall that empirical quantities such as P_n, f_n, ... implicitly depend on the realization X1(ω),...,X_n(ω), Y1(ω),...,Y_n(ω) of the sample. To make this dependence explicit, we add a superscript ω. As X1,...,X_n, Y1,...,Y_n are i.i.d., the Glivenko–Cantelli th...
4.1 there are random functions (Vn,L,V′ n,L)∈ C(Ω)× C(Ω′)with∥(Vn,L,V′ n,L)∥∞a.s.− →0such that ˜Γ(fn,gn)−˜Γ(f∗,g∗) =L(fn−f∗,gn−g∗)− ∥(fn−f∗,gn−g∗)∥⊕(Vn,L,V′ n,L). Using the fact that Γ(f∗,g∗) = (1 ,1) = Γ n(fn,gn)and setting ˜Γn:= Γ(1) n Rψ′′(ξ∗(·,y))dQ(y) Γ(2) n Rψ′′(ξ∗(x,·))dP(x) , this yields (29) ˜Γ(fn,gn)−˜Γn(...
compactness; see [42, Theorem 2.4.1]. LEMMA 5.6. Set ψ_n(·, y) := ∫_0^1 ψ′′((1−λ)ξ_n(·,y) + λξ∗(·,y)) dλ. The family of functions ∫ ψ_n(·, y) dµ(y), n ≥ 1, where µ ∈ C(Ω)′ is any signed measure with ∥µ∥_TV ≤ 2, is relatively compact in C(Ω′). PROOF. Similarly as in the proof of Lemma 5.5, these functions are uniformly bounded and admit...
the dual problem (11) for (P, Q) and (P_n, Q_n), respectively, we deduce the two inequalities ROT(P_n, Q_n) − ROT(P, Q) ≥ ∫ [f∗(x) + g∗(y) − ψ(f∗(x) + g∗(y) − c(x,y))] d((P_n ⊗ Q_n) − (P ⊗ Q))(x,y) and ROT(P_n, Q_n) − ROT(P, Q) ≤ ∫ [f_n(x) + g_n(y) − ψ(f_n(x) + g_n(y) − c(x,y))] d((P_n ⊗ Q_n) − (P ⊗ Q))(x,y). Using our shorthands (15) and dropping the integration vari...
variance σ2(η), it is easier to directly use the formula of the Hájek projection stated just before [41, Theorem 12.6].) Funding. SE is grateful for support by the German Research Foundation through Project 553088969 as well as the Cluster of Excellence “Machine Learning — New Perspectives for Science” (EXC 2064/1 numb...
Equation. arXiv:2407.21528. [18] Genevay, A., Chizat, L., Bach, F., Cuturi, M. and Peyré, G. (2019). Sample Complexity of Sinkhorn Divergences. In Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics (K. Chaudhuri and M. Sugiyama, eds.). Proceedings of Machi...
ILES-WEED, J. (2019). Statistical bounds for entropic optimal transport: sample complexity and the central limit theorem. In Advances in Neural Information Processing Systems (H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox and R. Garnett, eds.) 32. Curran Associates, Inc. [35] Muzellec...
such that g∗ = f∗; this choice also removes the ambiguity about an additive constant. Indeed, f∗ is uniquely determined by the equation 1 = ∫ ψ′((f∗([x]) + f∗([y]) − d^2([x],[y]))/ε) dµ([y]), x ∈ R^d. Define the constant C_{ε,d,ψ} ∈ R via 1 = ∫ ψ′((2C_{ε,d,ψ} − d^2([0],[y]))/ε) dµ([y]). For fixed x ∈ R^d, consider the translation y ↦ T_x(y) := x + y on R^d, ...
for any n ∈ N and any sequence X1,...,X_n of independent E-valued random variables with E[∥X_i∥_E] < ∞ for all i = 1,...,n. The type II property is fundamental to derive central limit theorems in Banach spaces (see [27] and the references therein). To state the central limit theorem...
of the pointwise convergence, weak convergence is implied by relative compactness (for the weak convergence topology), hence by Prokhorov's theorem [28, Theorem 2.1] it suffices to check that given δ ∈ (0,1) there exists a compact K ⊂ C^{0,β}(Ω) such that P(G_N ∈ K) ≥ 1 − δ for all N ≥ 1. (39) Since C^{0,β′}(Ω) is compactly embedded in C...
OF LEMMA C.1. As g and c are bounded, lim_{s→∞} ψ′(s + g(y) − c(x,y)) = ∞ and lim_{s→0} ψ′(s + g(y) − c(x,y)) < 1 by the properties of ψ, where the limits are uniform in (x,y). As ψ′ is continuous, the intermediate value theorem yields the existence of f solving (43). Let x, x̃ ∈ Ω and assume without loss of generality that f(x̃) ≤ f(x). As ψ′ is nonde...
(vi) Given any solution (f∗, g∗) of (12), applying Lemma C.1 twice yields versions that solve (13). If f∗, g∗ solve (13), then they are conjugates of one another, hence Lemma C.1 yields the modulus of continuity. As in the proof of (iii), the uniform bound follows from the inequalities below (44), using that (44) holds for...
arXiv:2505.04884v1 [stat.ME] 8 May 2025. Model selection for unit-root time series with many predictors. SHUO-CHIEH HUANG1, CHING-KANG ING2, and RUEY S. TSAY3. 1Department of Statistics, Rutgers University, sh1976@stat.rutgers.edu. 2Institute of Statistics, National Tsing Hua University, cking@stat.nthu.edu.tw. 3Booth Sc...
https://arxiv.org/abs/2505.04884v1
In this paper, we study model selection for an autoregressive model with exogenous variables, known as the ARX model, when the dependent variable contains general unit roots and the number of exogenous variables is large. Specifically, the model employed is (1−B)^a (1+B)^b ∏_{k=1}^{l} (1 − 2cos(ϑ_k)B + B^2)^{d_k} ψ_n(B) y...
other correlation-based feature selection methods, such as 𝐿2-Boosting and OGA. Indeed, in Sections 3.2 and 3.3, we prove that both LASSO and OGA can fail to achieve variable selection consistency in the presence of unit roots. While AIC, BIC, and FIC are reliable methods for selecting the AR order when 𝑑>0 and𝑝∗ 𝑛...
of positive numbers, {a_n} and {b_n}, a_n ≍ b_n means L < a_n/b_n < U for some 0 < L ≤ U < ∞. For an event E, its complement and indicator function are denoted by E^c and I_E, respectively. For k ∈ {1, 2, ...}, [k] = {1, 2, ..., k}. For r ∈ R, ⌊r⌋ is the largest integer ≤ r. For two real numbers a and b, a ∨ b = max{a, b} and a ∧ b = min{a,...
have a larger mean squared prediction error, especially in tackling nonstationary time series where the cost of overfitting is more prominent (see Example 3.3). The final estimated model is ˆ𝑁𝑛=ˆQ𝑛⊕ˆJ𝑛. The above procedure, which combines FSR, HDIC, Trim, and DDT, is referred to as FHTD. FHTD is related to a number...
that ∞∑︁ 𝑗=0∥𝜃𝑗∥≤𝐶, (10) with𝑙0being a fixed positive integer, and {𝑒𝑡,F𝑡}is an𝑙0-dimensional m.d.s. with sup 𝑡E∥𝑒𝑡∥𝜂≤𝐶,for some𝜂≥2. (11) (A2) For each 1≤𝑠≤𝑝𝑛,{𝑥𝑡,𝑠}−∞<𝑡<∞is a covariance stationary time series with mean zero and admits a one-sided moving average representation, 𝑥𝑡,𝑠=∞∑︁ 𝑘=0𝑝�...
stringent than their strong sparsity condition; see the discussion of Section S1. With this assumption, Theorem 3.1 below shows that FSR asymptoti- cally screens all relevant variables. Theorem 3.1. Assume that (A1)–(A6) and(SS X)hold. Then, for 𝐾𝑛≍(𝑛/𝑝∗ 𝑛2¯𝜃)𝜍, (24) where 1/3<𝜍< 1/2, lim𝑛→∞𝑃(J𝑛⊆ˆ𝐽𝐾𝑛)=1. ...
weak “noiseless" FSR. Define 𝜓𝐽,(𝑖,𝑙)=𝑛−1𝝁⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙 (𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙)1/2and ˆ𝜓𝐽,(𝑖,𝑙)=𝑛−1y⊤ 𝑛(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙 (𝑛−1𝒙(𝑖) 𝑙⊤(I−H[𝑞𝑛]⊕𝐽)𝒙(𝑖) 𝑙)1/2. The main distinction between 𝜓𝐽,(𝑖,𝑙)and ˆ𝜓𝐽,(𝑖,𝑙)is that the y𝑛in the latter is substituted with ...
signals. In particular, since the spurious exogenous variables chosen by FSR among the 𝑝∗ 𝑛candidates have been (asymptotically) eliminated after the HDIC and Trim steps,𝑝∗ 𝑛is now replaced by the much smaller 𝑞𝑛in the lower bound for min 𝑞∈Q𝑛|𝛼𝑞|. Theorem 3.3. Assume that the assumptions of Theorem 3.2, (34)...
extending an argument used in Ing (2001), Ing, Sin and Yu (2010), and Ing and Yang (2014), it can be shown that lim_{n→∞} n(MSPE_k − σ^2) = σ^2 plim_{n→∞} [log det(Σ_{t=k}^{n−1} y_t(k) y_t^⊤(k)) / log n] = 2kσ^2, (42) where the second equality is ensured by Theorem 5 of Wei (1987). Alternativ...
set 𝑐=𝑑=0.5 in all simulation examples and only tune𝑐and𝑑more carefully in the real data analysis in Section 5. The number of iterations, 𝐾𝑛, of FSR and OGA is set to 40. The tuning parameters for LASSO-type methods are selected using BIC as in Medeiros and Mendes (2016). Finally, 𝑞𝑛and𝑟(𝑛) 𝑗are set to𝑞𝑛=⌊...
0 78 919 SS 0 0 0 0 78 1000 TP 1.01 1.00 1.12 1.07 10.46 13.00 FP 0.22 0.09 4.39 0.63 11.12 0.09 (𝑛,𝑝∗𝑛,𝑝𝑛,𝑟(𝑛),𝑞𝑛)=(800,3000,500,6,10) E 0 0 0 0 229 998 SS 0 0 0 0 229 1000 TP 1.03 1.00 1.32 1.06 11.87 13.00 FP 0.13 0.00 5.62 0.66 9.29 0.00 Example 4.2. In this example, we generate data from (1−0.3𝐵)(1−2 cos...
seasonal pattern along with a drastic level change around the subprime financial crisis of 2008. For covariates, we collect the monthly new private housing units authorized by building permits for each state (for instance, data for Illinois are retrieved from https://fred.stlouisfed.org/series/ILBP1FH) and the 30-yea...
They performed poorly when the intercept is omitted. In view of Figure 2, fitting the drift term to the upward trend in the first few windows may help alleviate the unit-root property in the data in finite samples, and without the drift, LASSO-type methods are unable to adapt to the unit-root behavior in the data. On t...
both are the most advisable approaches for forecasting {u_t}.

Table 4. Out-of-sample RMSEs and MAEs of competing methods applied to (50) for the U.S. monthly unemployment rate series.

            FHTD   OGA-3   AR-OGA-3   LASSO   ALasso   AR-ALasso
RMSE (×10)  1.34   1.42    1.50       1.44    1.38     1.52
MAE (×10)   0.88   0.91    0.95       0.96    0.88     1.00

6. Concluding r...
𝑖=𝑖∑︁ 𝑙=−∞𝑎𝑖,𝑙𝑥∗ 𝑙, 𝑌∗ 𝑗=𝑗∑︁ 𝑘=−∞𝑏𝑗,𝑘𝑦∗ 𝑘, and{𝑥∗ 𝑡}and{𝑦∗ 𝑡}are independent processes that are identically distributed with {˜𝑥𝑡}and{˜𝑦𝑡}, re- spectively. Theorem A.2 provides a novel moment bound for quadratic forms of linear processes driven by conditional heteroscedastic m.d.s., and it rese...
Data Analysis and Theory. Holt, Rinehart, and Winston, New York. Bühlmann, P. (2006). Boosting for high-dimensional linear models. The Annals of Statistics 34, 559–583. Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics 35, 2313–2351. Chan,...
autoregressive processes. Journal of the American Statistical Association 109, 243–253. Kock, A. B. (2016). Consistent and conservative model selection with the adaptive Lasso in stationary and nonstationary autoregressions. Econometric Theory 32, 243–259. Lai, T. L. and Wei, C. Z. (1982). Asymptotic properties of projec...
S5 offers additional simulation results. S1. Comments on Assumptions (A1)–(A6), (SS X), and (SS) In this section, we provide some comments on Assumptions (A1)–(A6). Assumption (A1) is fulfilled by many conditionally heteroscedastic processes. For example, consider a stationary GJR-GARCH (𝑝′ 0,𝑞′ 0) model (Glosten, Ja...
and𝑏>0 are positive numbers defined therein. Equation (S1.6) requires 𝑝∗ 𝑛to be much smaller than 𝑛unless𝜂>4. In contrast, (A5) allows 𝑝∗ 𝑛>𝑛even if𝜂=2. We also make a few comments on (A6). For 𝐷⊂{1,...,𝑝𝑛}, let 𝝅𝑡(𝐷)=(𝜋𝑡,𝑠:𝑠∈𝐷)⊤,𝝁𝑡(𝐷)=(𝜖𝑡,𝝅⊤ 𝑡(𝐷))⊤, 𝚺𝑛(𝐷)=E(𝝁𝑡(𝐷)𝝁⊤ 𝑡(𝐷)).(S1.7) Uni...
S2. Main proofs. In this section, we present the proofs of the main results in the paper, namely Theorems 3.1–3.3. The proofs, shown in Section S2.2, are preceded by the proofs of Theorems A.1–A.3. Some further details regarding the proofs are collected in Section S2.3. S2.1. Proofs of Theorems A.1–A.3 and discussions...
𝑘=−∞𝑐𝑙,𝑙𝑐𝑘,𝑘∞∑︁ 𝑗=0𝜽⊤ 𝑗𝜽𝑗+|𝑙−𝑘|≤𝐶©­ «∞∑︁ 𝑗=0∥𝜽𝑗∥ª® ¬2 𝑛∑︁ 𝑙=−∞𝑐2 𝑙𝑙! . This, combined with (S2.17) andÍ∞ 𝑗=0∥𝜽𝑗∥≤𝐶, yields (I)≤𝐶𝑟(𝑛∑︁ 𝑙=−∞𝑐2 𝑙,𝑙)𝑟 . (S2.18) Next, by Burkholder’s inequality, Minkowski’s inequality, Hölder’s inequality, and (55) in the main paper, we have (II)≤  ...
show (S2.32), note first that by (S2.24), max ♯(𝐽)≤¯𝐾𝑛vut∑︁ (𝑙2,𝑗2)∈𝐽∑︁ (𝑙1,𝑗1)∈𝐽𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1 𝑥𝑡−𝑙1,𝑗1𝑥𝑡−𝑙2,𝑗2−E(𝑥𝑡−𝑙1,𝑗1𝑥𝑡−𝑙2,𝑗2) 2 ≤¯𝐾𝑛 max 1≤𝑗1,𝑗2≤𝑝𝑛 1≤𝑙1≤𝑟(𝑛) 𝑗1,1≤𝑙2≤𝑟(𝑛) 𝑗2 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1 𝑥𝑡−𝑙1,𝑗1𝑥𝑡−𝑙2,𝑗2−E(𝑥𝑡−𝑙1,𝑗1𝑥𝑡−𝑙2,𝑗2) =𝑂𝑝¯𝐾𝑛𝑝∗2𝜂�...
𝑡=¯𝑟𝑛+1w𝑡(𝐽)w⊤ 𝑡(𝐽)ª® ¬min (𝑗,𝑙)∈J 𝑛|𝛽(𝑗) 𝑙|2.(S2.47) Combining (S2.47), (S2.46), (58), and (33) leads to 𝑃(J𝑛⊈ˆ𝐽𝐾𝑛)≤𝑃(J𝑛⊈ˆ𝐽˜𝑚𝑛)≤𝑃 𝑂𝑝(𝑠0𝑙𝑛𝑝∗(𝑞0+1)/(𝜂𝑞0) 𝑛/𝑛)≥ min (𝑗,𝑙)∈J 𝑛|𝛽(𝑗) 𝑙|2 =𝑜(1).(S2.48) Thus, the desired conclusion (25) follows. Proof of Theorem 3.2. Let˜𝑘𝑛=min{1≤...
where D0 and μ_t(·) are defined in (SS A) and (S1.7), respectively, and {c_m^{(i)}} is a sequence of (s0+1)-dimensional vectors depending on ν_i, {p_{m,j}}, 1 ≤ j ≤ p_n, {b_m}, and β_j^{(l)}, 1 ≤ j ≤ r_j^{(n)}, 1 ≤ j ≤ p_n. By (21) and (S2.68), max_{1≤i≤q_n−d} Σ_{m=0}^∞ ∥(c_{m,1}^{(i)},...,c_{(...
𝑘−𝑙+1∑︁ 𝑠=11 𝑘+2−𝑙−𝑠 𝑛∑︁ 𝑡=𝑘+2max 1≤𝑗≤𝑝𝑛(𝑡−𝑙−𝑠)1/2|𝑝𝑡−𝑙−𝑠,𝑗|!2  𝜂1 +𝐶𝑛−𝜂1𝑛−1∑︁ 𝑘=𝑙  𝑘∑︁ 𝑠=𝑘−𝑙+2 max 1≤𝑗≤𝑝𝑛𝑛∑︁ 𝑡=𝑙+𝑠|𝑝𝑡−𝑙−𝑠,𝑗|!2  𝜂1 ≤𝐶¯𝑟𝜂1+1+𝑛log𝜂1𝑛+𝑛¯𝑟𝜂1 𝑛𝜂1. Hence, by (52) in the main paper, max 1≤𝑗≤𝑝𝑛,2≤𝑙≤𝑟𝑗𝑓𝑛(𝑗,𝑙)=𝑜(1). (S2.7...
1. Thus, as long as a selection path obeying (29) is chosen, the resultant noiseless mean squared error satisfies ˆ𝑎𝑚≤ˆ𝑎0exp(−𝜉2𝑚𝐷𝑛/𝑠0)≤𝐶𝑛exp(−𝜉2𝑚𝐷𝑛/𝑠0),1≤𝑚≤𝐾𝑛, (S3.3) where𝐶𝑛is also defined (S2.40). Now since (S2.38) ensures that on A𝑛(𝐾𝑛)∩B𝑛(𝐾𝑛),{ˆ𝐽1,..., ˆ𝐽𝐾𝑛}obeys (29), with 0 <𝜉 < 1 ...
Consequently, (S3.11), (S3.13), (S3.14), (S3.18), and (57) imply max ♯(𝐽)≤𝐾𝑛−1,(𝑖,𝑙)∉𝐽 𝑛−1𝑛∑︁ 𝑡=¯𝑟𝑛+1𝜖𝑡ˆ𝑥𝑡−𝑙,𝑖;𝐽 =𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗2𝜂𝑞0 𝑛+(𝐾1/2 𝑛+𝑞1/2 𝑛)𝑝∗𝑞0+1 2𝜂𝑞0 𝑛 𝑛1/2ª®® ¬ ×𝑂𝑝©­­ «𝐾1/2 𝑛𝑝∗𝑞0+1 2𝜂𝑞0 𝑛+𝑞1/2 𝑛 𝑛1/2ª®® ¬, 28 which, together with (24) and (A5), leads to (S3...
1−1 ˆ𝛽(𝜆𝑛) 3−1! ,w𝑛(1)=𝑛∑︁ 𝑡=3𝜖𝑡𝑦𝑡−1 𝑥𝑡−1 , 𝑤𝑛(2)=𝑛∑︁ 𝑡=3𝜖𝑡𝑦𝑡−2. 32 Then by an argument used in Zhao and Yu (2006), A𝑛⊆4Ø 𝑖=1E𝑛(𝑖), (S4.2) where E𝑛(𝑖)={C11ˆu𝑛−w𝑛(1)=−𝜆𝑛𝒂𝑖 2,−𝜆𝑛 2≤c21ˆu𝑛−𝑤𝑛(2)≤𝜆𝑛 2,𝒔𝑛(1)=𝒂𝑖}. In the following, we will show that regardless of whether {𝜆𝑛}sat...
The Poisson tensor completion non-parametric differential entropy estimator. Daniel M. Dunlavy, Richard B. Lehoucq, Carolyn D. Mayer, and Arvind Prasadan. Sandia National Laboratories, Albuquerque, NM and Livermore, CA. Abstract. We introduce the Poisson tensor completion (PTC) estimator, a non-parametric differenti...
https://arxiv.org/abs/2505.04957v2
arXiv:2505.04957v2 [math.ST] 9 May 2025. Our contribution is the Poisson tensor completion (PTC) estimator, a non-parametric differential entropy estimator that expands on this previous work, explicitly modeling the underlying distribution of the histogram data by exploiting connections between frequency histograms,...
p̂hist − p over the finite-volume region B and the size of p outside of B. We can then approximate the differential entropy of the random variable X over the region B, ent(p 1_B) = −∫_{B⊂R^d} p log p dV = E[−log p(X) 1_B(X)], (1.4) with ent(p̂hist 1_B) = −Σ_{j=1}^n ∫_{B_j} p̂hist log p̂hist dV = −Σ_{j=1}^n (c_j/s) log(c_j/(s|B_j|)) = −Σ_{j=1}^n (c_j/s) log(c_j/s) + Σ_{j=1}^n (c_j/s) log|B_j|. (1....
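The histogram entropy approximation (1.5) is straightforward to compute from bin counts and volumes. A one-dimensional sketch (the N(0,1) test density, sample size, and bin grid are illustrative choices, not the paper's experimental setup):

```python
import numpy as np

def histogram_entropy(samples, edges):
    """Histogram differential entropy estimate, as in (1.5):
    -sum_j (c_j/s) log(c_j/s) + sum_j (c_j/s) log|B_j|,
    with bin counts c_j, sample size s, and bin volumes |B_j|."""
    c, _ = np.histogram(samples, bins=edges)
    s = samples.size
    vol = np.diff(edges)
    nz = c > 0                      # empty bins contribute nothing
    p = c[nz] / s
    return -np.sum(p * np.log(p)) + np.sum(p * np.log(vol[nz]))

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)
edges = np.linspace(-5.0, 5.0, 101)
print(histogram_entropy(x, edges))  # near 0.5*log(2*pi*e) for N(0,1)
```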
an improvement over the histogram-based density estimator (1.3). The ensuing differential entropy estimator is an improvement, at times dramatically so, over the histogram-based differential entropy approximation (1.5). As noted above, we develop our differential entropy estimator using the Poisson CP tensor decomposit...
spacing and k-NN estimators, respectively. In short, for s samples, we require k to be chosen so that k/s → 0 for the estimator to be consistent, but k → ∞ as a function of s for the variance to be minimized; a choice of k_s ∝ log^6 s is sufficient. As a point of interest, the requirement that a density have at least on...
linear index ℓ denotes a mapping between bin and tensor multi-indices; an example is the natural ordering index notation for tensors introduced in [32], where the relationship is ℓ = L(i) and i = T(ℓ), with L : [n1] ⊗ ··· ⊗ [nd] ↦ Z+ and T : Z+ ↦ [n1] ⊗ ··· ⊗ [nd]. To the best of our knowledge, we are the first to identify the relat...
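The maps L and T correspond, for one natural (row-major) ordering, to NumPy's ravel/unravel helpers; whether the natural ordering of [32] matches C order is an assumption here, made only to illustrate the bijection:

```python
import numpy as np

# Bin multi-index i = (i_1, ..., i_d) <-> linear index l, as in l = L(i), i = T(l).
shape = (4, 3, 5)                        # n_1 x n_2 x n_3 bins
i = (2, 1, 3)

ell = np.ravel_multi_index(i, shape)     # L(i), row-major (C) order
print(ell)                               # -> 38, since 2*15 + 1*5 + 3 = 38

# T inverts L: T(L(i)) = i
assert np.unravel_index(ell, shape) == i
```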
length in the 1-norm, thus leading to the simple computation ∥M̂∥_1 = Σ_{r=1}^R λ_r. Computing each m̂_{T(j)} requires R(d+1) operations. 2.1 Error Analysis. The error of the PTC entropy estimator is ent(p̂ptc) − ent(p) = ent(p̂ptc 1_B) − ent(p 1_B) − ent(p 1_{R^d\B}) = −Σ_{j=1}^n ∫_B [p̂ptc log(p̂ptc) − p log(p)] 1_{B_j} dV − ent(p 1_{R^d\B}) = −Σ_{j=1}^n [m̂_{T(j)}/∥M̂∥_1...
generate s random points (1.1). 2. Bin the random points according to a specified binning scheme. Use a sparse representation of the histogram, keeping track of the nonempty bins, counts, and bin volumes. Unless otherwise specified: • estimates of differential entropy from s samples of a d-dimensional distribution made us...
25 trials for dimension-6 distributions. The histogram is constructed by placing s = 2500 samples from the distributions into bins of width c·s^{-1/8} in each dimension, for different values of c. Here, the tensor-based approximation uses the same bins as the histogram-based approximation. Figure 2: The proportion of none...
that PTC increases in accuracy over the histogram estimate as the number of variates increases. The latter accuracy allows us to conclude that PTC leads to a more accurate estimate with a fixed sample size. (a) Histogram (b) Rank 1 decomposition (c) Rank 2 decomposition (d) Rank 3 decomposition. Figure 6: Plots of a h...
(2.5) and thus lower the memory and computational requirements: • Determine the R·d sets Ω_{r,i} containing the indices of each of the R·d vectors â_r^{(i)} whose elements are less than a threshold 0 < τ < 1. Approximate ent(p̂ptc 1_B) using the elements of M̂ not containing any of the Ω_{r,i} indices, r = 1,...,R, i = 1,...,d. The number of el...
of Honeywell International Inc. for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. References [1] D. Gokhale, “On entropy-based goodness-of-fit tests,” Computational Statistics & Data Analysis , vol. 1, pp. 157–165, 1983. [2] E. Parzen, “Goodness of fit tests and e...
density,” Annals of the Institute of Statistical Mathematics , vol. 41, pp. 683–697, 1989. [21] P. Hall and S. C. Morton, “On the estimation of entropy,” Annals of the Institute of Statistical Mathematics , vol. 45, pp. 69–88, 1993. [22] F. Tarasenko, “On the evaluation of an unknown probability density function, the d...
arXiv:2505.05080v1 [math.ST] 8 May 2025. EXPECTATIONS OF SOME RATIO-TYPE ESTIMATORS UNDER THE GAMMA DISTRIBUTION. JIA-HAN SHIH. Abstract. We study the expectations of some ratio-type estimators under the gamma distribution. Expectations of ratio-type estimators are often difficult to compute due to the fact that they are...
https://arxiv.org/abs/2505.05080v1
[8, 1], both of which are ratio-type measures. Recently, Vila and Saulo [9] employed the same approach as Baydil et al. [2] and computed the expected values of the sample Theil index and the sample Atkinson index under the gamma distribution. It turns out that the gamma-Dirichlet relationship can be utilized for computing...
the above expectation equals E[(Y1+Y2)/(Y1+Y2+Σ_{i=3}^n Y_i)] E[2Y1/(Y1+Y2) − 1]. We now compute the expectations separately. Note that (Y1+Y2)/Σ_{i=1}^n Y_i follows the beta distribution with shape parameters 2α and (n−2)α. We have E[(Y1+Y2)/(Y1+Y2+Σ_{i=3}^n Y_i)] = 2/n. On the other hand, the random variable R = Y1/(Y1+Y2)...
to compute the expectation of W log(W). For a beta random variable U with shape parameters a > 0 and b > 0, we have E{U log(U)} = (1/B(a,b)) ∫_0^1 u log(u) u^{a−1}(1−u)^{b−1} du = (1/B(a,b)) ∫_0^1 (∂/∂a) u^a (1−u)^{b−1} du = (1/B(a,b)) (∂/∂a) ∫_0^1 u^a (1−u)^{b−1} du = (1/B(a,b)) (∂/∂a) B(a+1, b) = [a/(a+b)] {ψ(a+1) − ψ(a+b+1)}, (5) where the third equalit...
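Identity (5) can be sanity-checked numerically. The sketch below picks an integer b so that the digamma difference reduces to a finite harmonic sum via ψ(x+n) = ψ(x) + Σ_{k=0}^{n−1} 1/(x+k) (a convenience for this check, not a restriction of the identity), and compares against direct numerical integration:

```python
import math
import numpy as np

a, b = 2.5, 4.0   # b integer, so psi(a+1) - psi(a+b+1) is a finite sum

# Closed form (5): E[U log U] = a/(a+b) * (psi(a+1) - psi(a+b+1)),
# and psi(a+1) - psi(a+b+1) = -sum_{k=0}^{b-1} 1/(a+1+k) for integer b.
psi_diff = -sum(1.0 / (a + 1 + k) for k in range(int(b)))
closed = a / (a + b) * psi_diff

# Direct trapezoidal integration of u*log(u) against the Beta(a, b) density.
u = np.linspace(1e-9, 1.0 - 1e-9, 2_000_001)
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
f = u * np.log(u) * u**(a - 1) * (1 - u)**(b - 1) / B
num = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))

print(closed, num)   # the two values agree; both are negative since log(U) < 0
```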
5 implies that if the random samples are drawn from the gamma distribution, the sample VMR tends to be biased downward: E(VMR_n) − VMR(Y) = nα/((nα+1)λ) − 1/λ = −1/((nα+1)λ) < 0. 5. Concluding remarks. This note illustrates that the distributional properties of the gamma distribution, namely Lukacs' Theorem and the gamma-beta (g...