https://arxiv.org/abs/2505.22034v1

Table 8: Estimated L2 risks for densities 1–8 (where applicable).

| Density | n | Wand | AIC | BIC | BR | Knuth | SC | RIH | RMG-B | RMG-R | TS | L2CV | KLCV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 50 | 0.165 | 0.194 | 0.195 | 0.19 | 0.191 | 0.189 | 0.202 | 0.32 | 0.217 | 0.234 | 0.209 | 0.226 |
| | 200 | 0.106 | 0.122 | 0.128 | 0.123 | 0.121 | 0.119 | 0.139 | 0.172 | 0.15 | 0.119 | 0.146 | 0.168 |
| | 1000 | 0.062 | 0.07 | 0.081 | 0.071 | 0.074 | 0.074 | 0.091 | 0.103 | 0.096 | 0.068 | 0.09 | 0.103 |
| | 5000 | 0.038 | 0.04 | 0.051 | 0.042 | 0.045 | 0.043 | 0.058 | 0.063 | 0.06 | 0.039 | 0.058 | 0.063 |
| | 25000 | 0.022 | 0.024 | 0.032 | 0.024 | 0.027 | 0.027 | 0.037 | 0.037 | 0.037 | 0.022 | 0.035 | 0.037 |
| 2 | 50 | 0.309 | 0.308 | 0.209 | 0.218 | 0.216 | 0.244 | 0.197 | 0.33 | 0.198 | 0.259 | 0.325 | 0.328 |
| | 200 | 0.197 | 0.154 | 0.097 | 0.108 | 0.099 | 0.104 | 0.097 | 0.134 | 0.096 | 0.117 | 0.232 | 0.273 |
| | 1000 | 0.117 | 0.069 | 0.043 | 0.05 | 0.044 | 0.045 | 0.044 | 0.052 | 0.044 | 0.053 | 0.205 | 0.239 |
| | 5000 | 0.071 | 0.029 | 0.019 | 0.021 | 0.019 | 0.019 | 0.018 | 0.024 | 0.018 | 0.024 | 0.121 | 0.136 |
| | 25000 | 0.044 | 0.014 | 0.009 | 0.01 | 0.009 | 0.009 | 0.008 | 0.008 | 0.008 | 0.01 | 0.064 | 0.069 |
| 4 | 50 | 0.195 | 0.225 | 0.252 | 0.253 | 0.227 | 0.232 | 0.235 | 0.325 | 0.248 | 0.204 | 0.337 | 0.38 |
| | 200 | 0.148 | 0.163 | 0.203 | 0.184 | 0.177 | 0.188 | 0.173 | 0.212 | 0.183 | 0.139 | 0.354 | 0.363 |
| | 1000 | 0.102 | 0.146 | 0.172 | 0.152 | 0.154 | 0.157 | 0.144 | 0.125 | 0.128 | 0.112 | 0.386 | 0.399 |
| | 5000 | 0.06 | 0.122 | 0.16 | 0.136 | 0.147 | 0.144 | 0.089 | 0.074 | 0.084 | 0.062 | 0.212 | 0.294 |
| | 25000 | 0.037 | 0.065 | 0.157 | 0.068 | 0.072 | 0.072 | 0.05 | 0.045 | 0.054 | 0.037 | 0.061 | 0.055 |
| 5 | 50 | 0.173 | 0.194 | 0.202 | 0.203 | 0.193 | 0.19 | 0.212 | 0.294 | 0.227 | 0.219 | 0.313 | 0.313 |
| | 200 | 0.114 | 0.128 | 0.154 | 0.143 | 0.138 | 0.139 | 0.155 | 0.19 | 0.166 | 0.131 | 0.167 | 0.18 |
| | 1000 | 0.069 | 0.079 | 0.109 | 0.089 | 0.094 | 0.096 | 0.102 | 0.116 | 0.108 | 0.076 | 0.08 | 0.087 |
| | 5000 | 0.041 | 0.049 | 0.074 | 0.055 | 0.06 | 0.058 | 0.066 | 0.068 | 0.068 | 0.044 | 0.053 | 0.055 |
| | 25000 | 0.024 | 0.03 | 0.05 | 0.032 | 0.035 | 0.029 | 0.042 | 0.041 | 0.043 | 0.025 | 0.033 | 0.034 |
| 7 | 50 | 0.276 | 0.295 | 0.289 | 0.286 | 0.287 | 0.284 | 0.27 | 0.358 | 0.276 | 0.354 | 0.422 | 0.388 |
| | 200 | 0.238 | 0.217 | 0.235 | 0.231 | 0.223 | 0.221 | 0.22 | 0.24 | 0.221 | 0.245 | 0.265 | 0.265 |
| | 1000 | 0.169 | 0.133 | 0.173 | 0.138 | 0.141 | 0.143 | 0.173 | 0.189 | 0.173 | 0.134 | 0.144 | 0.154 |
| | 5000 | 0.083 | 0.08 | 0.119 | 0.085 | 0.099 | 0.098 | 0.105 | 0.111 | 0.107 | 0.071 | 0.087 | 0.09 |
| | 25000 | 0.043 | 0.047 | 0.077 | 0.049 | 0.058 | 0.05 | 0.067 | 0.066 | 0.066 | 0.039 | 0.054 | 0.057 |
| 8 | 50 | 0.181 | 0.204 | 0.196 | 0.194 | 0.197 | 0.196 | 0.184 | 0.249 | 0.193 | 0.212 | 0.212 | 0.233 |
| | 200 | 0.118 | 0.134 | 0.134 | 0.132 | 0.13 | 0.127 | 0.132 | 0.178 | 0.137 | 0.131 | 0.15 | 0.175 |
| | 1000 | 0.07 | 0.077 | 0.09 | 0.079 | 0.082 | 0.083 | 0.091 | 0.099 | 0.095 | 0.083 | 0.092 | 0.105 |
| | 5000 | 0.042 | 0.045 | 0.058 | 0.046 | 0.051 | 0.047 | 0.057 | 0.059 | 0.058 | 0.039 | 0.06 | 0.063 |
| | 25000 | 0.025 | 0.026 | 0.036 | 0.027 | 0.031 | 0.03 | 0.037 | 0.037 | 0.037 | 0.022 | 0.035 | 0.038 |

Table 9: Estimated L2 risks for densities 9–16.

| Density | n | Wand | AIC | BIC | BR | Knuth | SC | RIH | RMG-B | RMG-R | TS | L2CV | KLCV |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 9 | 50 | 0.215 | 0.242 | 0.231 | 0.229 | 0.234 | 0.23 | 0.226 | 0.307 | 0.231 | 0.258 | 0.248 | 0.261 |
| | 200 | 0.168 | 0.174 | 0.177 | 0.172 | 0.17 | 0.167 | 0.175 | 0.214 | 0.184 | 0.184 | 0.191 | 0.19 |
| | 1000 | 0.117 | 0.11 | 0.127 | 0.11 | 0.112 | 0.111 | 0.124 | 0.137 | 0.127 | 0.115 | 0.119 | 0.126 |
| | 5000 | 0.073 | 0.065 | 0.081 | 0.066 | 0.071 | 0.072 | 0.078 | 0.083 | 0.079 | 0.057 | 0.072 | 0.077 |
| | 25000 | 0.043 | 0.041 | 0.053 | 0.041 | 0.046 | 0.042 | 0.05 | 0.05 | 0.05 | 0.031 | 0.043 | 0.045 |
| 10 | 50 | 0.145 | 0.15 | 0.136 | 0.137 | 0.138 | 0.142 | 0.136 | 0.176 | 0.138 | 0.205 | 0.142 | 0.162 |
| | 200 | 0.131 | 0.116 | 0.128 | 0.125 | 0.119 | 0.111 | 0.129 | 0.137 | 0.13 | 0.165 | 0.132 | 0.134 |
| | 1000 | 0.123 | 0.071 | 0.083 | 0.071 | 0.071 | 0.069 | 0.086 | 0.109 | 0.09 | 0.082 | 0.083 | 0.085 |
| | 5000 | 0.069 | 0.042 | 0.049 | 0.041 | 0.042 | 0.041 | 0.058 | 0.061 | 0.058 | 0.043 | 0.05 | 0.053 |
| | 25000 | 0.027 | 0.025 | 0.029 | 0.024 | 0.025 | 0.025 | 0.038 | 0.038 | 0.037 | 0.024 | 0.033 | 0.033 |
| 11 | 50 | 0.257 | 0.314 | 0.305 | 0.302 | 0.31 | 0.305 | 0.314 | 0.505 | 0.33 | 0.353 | 0.303 | 0.348 |
| | 200 | 0.16 | 0.189 | 0.191 | 0.186 | 0.185 | 0.182 | 0.213 | 0.281 | 0.224 | 0.198 | 0.233 | 0.267 |
| | 1000 | 0.097 | 0.111 | 0.117 | 0.108 | 0.108 | 0.108 | 0.145 | 0.162 | 0.153 | 0.117 | 0.158 | 0.184 |
| | 5000 | 0.057 | 0.063 | 0.069 | 0.062 | 0.063 | 0.063 | 0.092 | 0.099 | 0.095 | 0.07 | 0.104 | 0.114 |
| | 25000 | 0.034 | 0.038 | 0.041 | 0.036 | 0.037 | 0.037 | 0.058 | 0.059 | 0.058 | 0.043 | 0.063 | 0.067 |
| 12 | 50 | 0.335 | 0.302 | 0.322 | 0.32 | 0.3 | 0.305 | 0.32 | 0.339 | 0.237 | 0.258 | 0.408 | 0.406 |
| | 200 | 0.215 | 0.201 | 0.218 | 0.207 | 0.203 | 0.207 | 0.216 | 0.213 | 0.188 | 0.161 | 0.393 | 0.398 |
| | 1000 | 0.13 | 0.119 | 0.175 | 0.125 | 0.122 | 0.13 | 0.121 | 0.125 | 0.121 | 0.084 | 0.379 | 0.372 |
| | 5000 | 0.067 | 0.07 | 0.107 | 0.074 | 0.076 | 0.075 | 0.073 | 0.074 | 0.076 | 0.048 | 0.076 | 0.081 |
| | 25000 | 0.034 | 0.041 | 0.066 | 0.042 | 0.048 | 0.051 | 0.046 | 0.044 | 0.047 | 0.027 | 0.035 | 0.043 |
| 13 | 50 | 0.162 | 0.15 | 0.16 | 0.159 | 0.149 | 0.146 | 0.163 | 0.421 | 0.156 | 0.174 | 0.172 | 0.168 |
| | 200 | 0.145 | 0.121 | 0.138 | 0.127 | 0.122 | 0.121 | 0.132 | 1.302 | 0.116 | 0.097 | 0.171 | 0.167 |
| | 1000 | 0.098 | 0.07 | 0.096 | 0.073 | 0.077 | 0.075 | 0.084 | 0.621 | 0.081 | 0.05 | 0.114 | 0.109 |
| | 5000 | 0.057 | 0.038 | 0.056 | 0.041 | 0.047 | 0.047 | 0.049 | 0.349 | 0.049 | 0.027 | 0.046 | 0.052 |
| | 25000 | 0.024 | 0.022 | 0.039 | 0.023 | 0.031 | 0.03 | 0.031 | 0.043 | 0.031 | 0.015 | 0.025 | 0.026 |
| 14 | 50 | 1.13 | 1.124 | 1.124 | 1.124 | 1.124 | 1.125 | 1.154 | 0.873 | 0.741 | 0.702 | 1.201 | 1.162 |
| | 200 | 1.073 | 1.096 | 1.095 | 1.095 | 1.096 | 1.096 | 1.097 | 0.339 | 0.264 | 0.348 | 1.162 | 1.198 |
| | 1000 | 0.922 | 1.003 | 1.003 | 1.003 | 1.004 | 1.007 | 0.999 | 0.123 | 0.117 | 0.167 | 1.156 | 1.15 |
| | 5000 | 0.738 | 0.63 | 0.63 | 0.63 | 0.665 | 0.684 | 0.64 | 0.052 | 0.052 | 0.073 | 1.129 | 1.146 |
| | 25000 | 0.256 | 0.027 | 0.027 | 0.027 | 0.081 | 0.092 | 0.358 | 0.023 | 0.023 | 0.038 | 0.48 | 0.553 |
| 15 | 50 | 0.784 | 0.706 | 0.769 | 0.77 | 0.695 | 0.664 | 0.774 | 1.141 | 0.752 | 0.835 | 1.719 | 1.334 |
| | 200 | 0.558 | 0.474 | 0.495 | 0.473 | 0.463 | 0.457 | 0.503 | 0.505 | 0.401 | 0.423 | 0.906 | 0.829 |
| | 1000 | 0.39 | 0.272 | 0.278 | 0.263 | 0.265 | 0.265 | 0.287 | 0.197 | 0.177 | 0.226 | 0.375 | 0.305 |
| | 5000 | 0.275 | 0.136 | 0.133 | 0.131 | 0.134 | 0.136 | 0.152 | 0.093 | 0.091 | 0.115 | 0.203 | 0.153 |
| | 25000 | 0.189 | 0.061 | 0.06 | 0.06 | 0.063 | 0.065 | 0.079 | 0.042 | 0.041 | 0.054 | 0.101 | 0.085 |
| 16 | 50 | 0.857 | 0.77 | 0.8 | 0.799 | 0.743 | 0.728 | 0.812 | 1.333 | 0.757 | 0.835 | 1.139 | 1.141 |
| | 200 | 0.649 | 0.585 | 0.614 | 0.598 | 0.582 | 0.577 | 0.623 | 0.589 | 0.486 | 0.518 | 1.039 | 0.952 |
| | 1000 | 0.46 | 0.347 | 0.521 | 0.361 | 0.377 | 0.366 | 0.383 | 0.25 | 0.231 | 0.254 | 0.71 | 0.576 |
| | 5000 | 0.332 | 0.185 | 0.279 | 0.191 | 0.232 | 0.233 | 0.204 | 0.105 | 0.1 | 0.142 | 0.23 | 0.192 |
| | 25000 | 0.197 | 0.085 | 0.085 | 0.085 | 0.085 | 0.1 | 0.106 | 0.043 | 0.042 | 0.072 | 0.115 | 0.094 |

References

A. Barron, M. J. Schervish, and L. Wasserman. The consistency of posterior distributions in nonparametric problems. The Annals of Statistics, 27:536–561, 1999. doi:10.1214/aos/1018031206.

Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57:289–300, 1995. doi:10.1111/j.2517-6161.1995.tb02031.x.

L. Birgé and Y. Rozenholc. How many bins should be put in a regular histogram. ESAIM: Probability and Statistics, 10:24–45, 2006. doi:10.1051/ps:2006001.

A. Celisse and S. Robin. Nonparametric density estimation by exact leave-p-out cross-validation. Computational Statistics & Data Analysis, 52:2350–2368, 2008. doi:10.1016/j.csda.2007.10.002.

P. L. Davies and A. Kovac. Densities, spectral densities and modality. The Annals of Statistics, 32:1093–1136, 2004. doi:10.1214/009053604000000364.

P. L. Davies and A. Kovac. ftnonpar: Features and strings for nonparametric regression, 2012. URL https://cran.r-project.org/src/contrib/Archive/ftnonpar/. R package version 0.1-88.

P. L. Davies, U. Gather, D. Nordman, and H. Weinert. A comparison of automatic histogram constructions. ESAIM: Probability and Statistics, 13:181–196, 2009. doi:10.1051/ps:2008005.

L. Denby and C. Mallows. Variations on the histogram. Journal of Computational and Graphical Statistics, 18:21–31, 2009. doi:10.1198/jcgs.2009.0002.

C. R. Genovese and L. Wasserman. Rates of convergence for the Gaussian mixture sieve. The Annals of Statistics, 28:1105–1127, 2000. doi:10.1214/aos/1015956709.

S. Ghosal. Convergence rates for density estimation with Bernstein polynomials. The Annals of Statistics, 29:1264–1280, 2001. doi:10.1214/aos/1013203453.

S. Ghosal and A. W. van der Vaart. Fundamentals of Nonparametric Bayesian Inference. Cambridge University Press, 2017. doi:10.1017/9781139029834.

S. Ghosal, J. K. Ghosh, and R. V. Ramamoorthi. Posterior consistency of Dirichlet mixtures in density estimation. The Annals of Statistics, 27:143–158, 1999. doi:10.1214/aos/1018031105.

S. Ghosal, J. K. Ghosh, and A. W. van der Vaart. Convergence rates of posterior distributions. The Annals of Statistics, 28:500–531, 2000. doi:10.1214/aos/1016218228.

Z. Guan, B. Wu, and H. Zhao. Nonparametric estimator of false discovery rate based on Bernstein polynomials. Statistica Sinica, 18:905–923, 2008.

P. Hall. Akaike's information criterion and Kullback–Leibler loss for histogram density estimation. Probability Theory and Related Fields, 85:449–467, 1990. doi:10.1007/BF01203164.

P. Hall and E. J. Hannan. On stochastic complexity and nonparametric density estimation. Biometrika, 75:705–714, 1988. doi:10.1093/biomet/75.4.705.

W. Härdle. Smoothing Techniques: With Implementation in S. Springer New York, NY, 1991. doi:10.1007/978-1-4612-4432-5.

I. Hedenfalk, D. Duggan, Y. Chen, et al. Gene-expression profiles in hereditary breast cancer. New England Journal of Medicine, 344:539–548, 2001. doi:10.1056/NEJM200102223440801.

Y. Kanazawa. An optimal variable cell histogram. Communications in Statistics - Theory and Methods, 17:1401–1422, 1988. doi:10.1080/03610928808829688.

J. Klemelä. Bin smoother. Wiley Interdisciplinary Reviews: Computational Statistics, 4:384–393, 2012. doi:10.1002/wics.1214.

K. H. Knuth. Optimal data-based binning for histograms and histogram-based probability density models. Digital Signal Processing, 95, 2019. doi:10.1016/j.dsp.2019.102581.

W. Kruijer and A. W. van der Vaart. Posterior convergence rates for Dirichlet mixtures of beta densities. Journal of Statistical Planning and Inference, 138:1981–1992, 2008. doi:10.1016/j.jspi.2007.07.012.

F. Kwasniok. Semiparametric maximum likelihood probability density estimation. PLOS One, 16, 2021. doi:10.1371/journal.pone.0259111.

M. Langaas, B. H. Lindqvist, and E. Ferkingstad. Estimating the proportion of true null hypotheses, with application to DNA microarray data. Journal of the Royal Statistical Society: Series B (Methodological), 67:555–572, 2005. doi:10.1111/j.1467-9868.2005.00515.x.

E. L. Lehmann and G. Casella. Theory of Point Estimation. Springer New York, NY, 2nd edition, 1998. doi:10.1007/b98854.

H. Li, A. Munk, H. Sieling, and G. Walther. The essential histogram. Biometrika, 107:347–364, 2020. doi:10.1093/biomet/asz081.

J. S. Marron and M. P. Wand. Exact mean integrated squared error. The Annals of Statistics, 20:712–736, 1992. doi:10.1214/aos/1176348653.

P. Massart. Concentration Inequalities and Model Selection: École d'Été de Probabilités de Saint-Flour XXXIII - 2003. Springer Berlin, Heidelberg, 2007. doi:10.1007/978-3-540-48503-2.

V. Z. Mendizábal, M. Boullé, and F. Rossi. Fast and fully-automated histograms for large-scale data sets. Computational Statistics & Data Analysis, 180, 2023. doi:10.1016/j.csda.2022.107668.

T. Mildenberger, Y. Rozenholc, and D. Zasada. histogram: Construction of regular and irregular histograms with different options for automatic choice of bins, 2019. URL https://CRAN.R-project.org/package=histogram. R package version 0.0-25.

B. Phipson. Empirical Bayes modelling of expression profiles and their associations. PhD thesis, Department of Mathematics and Statistics, University of Melbourne, 2013.

Y. Rozenholc, T. Mildenberger, and U. Gather. Combining regular and irregular histograms by penalized likelihood. Computational Statistics & Data Analysis, 54:3313–3323, 2010. doi:10.1016/j.csda.2010.04.021.

M. Rudemo. Empirical choice of histograms and kernel density estimators. Scandinavian Journal of Statistics, 9:65–78, 1982.

R. L. Schilling. Measures, Integrals and Martingales. Cambridge University Press, 2005. doi:10.1017/CBO9780511810886.

D. W. Scott. On optimal and data-based histograms. Biometrika, 66:605–610, 1979. doi:10.1093/biomet/66.3.605.

D. W. Scott. Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley, 1992. doi:10.1002/9780470316849.

D. W. Scott. Histogram. Wiley Interdisciplinary Reviews: Computational Statistics, 2:44–48, 2010. doi:10.1002/wics.59.

C. Scricciolo. On rates of convergence for Bayesian density estimation. Scandinavian Journal of Statistics, 34:626–642, 2007. doi:10.1111/j.1467-9469.2006.00540.x.

S. J. Sheather and M. C. Jones. A reliable data-based bandwidth selection method for kernel density estimation. Journal of the Royal Statistical Society: Series B (Methodological), 53:683–690, 1991. doi:10.1111/j.2517-6161.1991.tb01857.x.

O. H. Simensen. AutoHist.jl: A pure Julia library for fast automatic histogram construction, 2025. URL https://github.com/oskarhs/AutoHist.jl.git.

J. D. Storey and R. Tibshirani. Statistical significance for genomewide studies. Proceedings of the National Academy of Sciences, 100:9440–9445, 2003. doi:10.1073/pnas.1530509100.

C. C. Taylor. Akaike's information criterion and the histogram. Biometrika, 74:636–639, 1987. doi:10.1093/biomet/74.3.636.

A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer New York, NY, 2009. doi:10.1007/b13794.

W. N. Venables and B. D. Ripley. Modern Applied Statistics with S-PLUS. Springer New York, NY, 3rd edition, 1999. doi:10.1007/978-1-4757-3121-7.

M. P. Wand. Data-based choice of histogram bin width. The American Statistician, 51:59–64, 1997. doi:10.2307/2684697.

M. P. Wand. KernSmooth: Functions for kernel smoothing supporting Wand & Jones (1995), 2021. URL https://CRAN.R-project.org/package=KernSmooth. R package version 2.23-20.

W. H. Wong and X. Shen. Probability inequalities for likelihood ratios and convergence rates of sieve MLES. The Annals of Statistics, 23:339–362, 1995. doi:10.1214/aos/1176324524.

G. Zhong. Efficient and robust density estimation using Bernstein type polynomials. Journal of Nonparametric Statistics, 28:250–271, 2016. doi:10.1080/10485252.2016.1163349.
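The entries of Tables 8 and 9 are estimated L2 risks of histogram density estimators. Such a risk can be approximated by simulation: repeatedly draw a sample, build the histogram, and numerically integrate the squared distance to the true density. The sketch below is illustrative only; the N(0,1) target density, the fixed grid of 24 equal-width bins on [-4, 4], and the replication count are our assumptions, whereas the methods compared in the tables select their bins automatically from the data.

```python
import numpy as np

def hist_density(data, edges):
    # piecewise-constant density estimate on the given bin edges
    counts, _ = np.histogram(data, bins=edges)
    return counts / (len(data) * np.diff(edges))

def l2_distance(heights, edges, f, pts_per_bin=40):
    # midpoint-rule integration of (f_hat - f)^2 over each bin
    err = 0.0
    for h, a, b in zip(heights, edges[:-1], edges[1:]):
        x = a + (np.arange(pts_per_bin) + 0.5) * (b - a) / pts_per_bin
        err += np.mean((h - f(x)) ** 2) * (b - a)
    return np.sqrt(err)

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
edges = np.linspace(-4.0, 4.0, 25)                    # 24 equal-width bins
rng = np.random.default_rng(0)

risks = []
for _ in range(100):                                  # Monte Carlo replications
    data = np.clip(rng.standard_normal(1000), -4.0, 4.0)
    risks.append(l2_distance(hist_density(data, edges), edges, f))
print("estimated L2 risk:", round(float(np.mean(risks)), 3))
```

Clipping the negligible tail mass into the boundary bins keeps the sketch self-contained; with a data-driven bin count the estimate would change accordingly.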
arXiv:2505.22206v1 [math.ST] 28 May 2025

Directional ρ-coefficients

Enrique de Amo (a), David García-Fernández (b, ∗), Manuel Úbeda-Flores (a)

(a) Department of Mathematics, University of Almería, 04120 Almería, Spain. edeamo@ual.es, mubeda@ual.es
(b) Research Group of Theory of Copulas and Applications, University of Almería, 04120 Almería, Spain. davidgfret@correo.ugr.es
(∗) Corresponding author.

May 29, 2025

Abstract. In this paper we obtain advances for the concept of directional ρ-coefficients, originally defined for the trivariate case in [Nelsen, R.B., Úbeda-Flores, M. (2011). "Directional dependence in multivariate distributions". Ann. Inst. Stat. Math. 64, 677–685], by extending it to encompass arbitrary dimensions and directions in multivariate space. We provide a generalized definition and establish its fundamental properties. Moreover, we resolve a conjecture from the aforementioned work by proving a more general result applicable to any dimension, and we correct a result in [García, J.E., González-López, V.A., Nelsen, R.B. (2013). "A new index to measure positive dependence in trivariate distributions". J. Multivariate Anal. 115, 481–495], an erratum in the current literature. Our findings contribute to a deeper understanding of multivariate dependence and association, offering novel tools for detecting directional dependencies in high-dimensional settings. Finally, we introduce nonparametric estimators, based on ranks, for estimating directional ρ-coefficients from a sample.

Keywords: copula, directional ρ-coefficients, multivariate dependence, nonparametric estimators, Spearman's rho.
MSC (2020): 62H05; 62H12.

1 Introduction

Fisher, in his article on copulas within the Encyclopedia of Statistical Science [5], notes: "Copulas [are] of interest to statisticians for two main reasons: First, as a way of studying scale-free measures of dependence; and secondly, as a starting point for constructing families of bivariate distributions . . ." On the other hand, as Jogdeo [8] points out: "Dependence relations between random variables is one of the most studied subjects in probability and statistics."

Dependence measures play a crucial role in multivariate statistics, allowing researchers to quantify the relationships between random variables beyond simple pairwise associations. Traditional measures, such as Spearman's rho and Kendall's tau, provide useful insights but may fail to detect complex dependency structures in higher dimensions. To address this issue, the directional ρ-coefficients were introduced in [14] as a tool to measure association in specific directions within trivariate distributions. These coefficients, based on copula theory, capture dependence structures that remain undetected by classical multivariate association measures. However, their study was limited to the three-dimensional case, leaving open the question of whether such coefficients in higher dimensions could be expressed as a linear combination of ρ-coefficients in lower dimensions.

This paper extends the concept of directional ρ-coefficients to encompass arbitrary dimensions and directions within a multivariate setting. We establish a robust mathematical framework for their definition and demonstrate key properties that validate their practical application. Furthermore, we address a conjecture posed in [14] by presenting a generalized proof, applicable across all dimensions, leveraging a result from [1].

The structure of this paper is as follows: Section 2 presents the necessary preliminaries on copulas, measures of association, and concepts of dependence. Section 3 introduces the generalization of directional ρ-coefficients to higher dimensions, where we establish their key properties and explore their
theoretical implications, alongside providing a more general response to the conjecture presented in [14] and correcting [7, Equation (10)]. In Section 4, we present nonparametric rank-based estimators with the objective of estimating the values of the directional ρ-coefficients from the observed data in a sample. Potential applications are given in Section 5. Finally, Section 6 is devoted to conclusions.

2 Preliminaries

In this section we outline some relevant known results. Specifically, we dedicate three subsections to: copulas, measures of association, and concepts of dependence.

2.1 Copulas and Sklar's theorem

Let $n\ge 2$ be a natural number. An n-dimensional copula (for short, n-copula) $C$ is the restriction to $\mathbb{I}^n=[0,1]^n$ of a continuous n-dimensional distribution function whose univariate margins are uniform on $\mathbb{I}$. Equivalently, we have the following definition:

Definition 1. An n-copula $C$ is a function $C\colon\mathbb{I}^n\to\mathbb{I}$ satisfying the following properties:

i) $C(\mathbf{u})=0$ for every $\mathbf{u}\in\mathbb{I}^n$ such that at least one coordinate of $\mathbf{u}$ is equal to 0.

ii) $C(\mathbf{u})=u_k$ whenever all coordinates of $\mathbf{u}$ are 1 except, maybe, $u_k$.

iii) For every $\mathbf{a}=(a_1,a_2,\dots,a_n)$ and $\mathbf{b}=(b_1,b_2,\dots,b_n)\in\mathbb{I}^n$ such that $a_k\le b_k$ for all $k=1,\dots,n$,
$$V_C([\mathbf{a},\mathbf{b}])=\sum_{\mathbf{c}}\operatorname{sgn}(\mathbf{c})\,C(\mathbf{c})\ge 0,$$
where $[\mathbf{a},\mathbf{b}]$ denotes the n-box $\times_{i=1}^n[a_i,b_i]$ and the sum is taken over the vertices $\mathbf{c}=(c_1,c_2,\dots,c_n)$ of $[\mathbf{a},\mathbf{b}]$, i.e., each $c_k$ is equal to either $a_k$ or $b_k$, and $\operatorname{sgn}(\mathbf{c})=1$ if $c_k=a_k$ for an even number of $k$'s, and $\operatorname{sgn}(\mathbf{c})=-1$ otherwise.

The importance of copula theory lies in Sklar's theorem [19]; for a complete proof of this result, see [20].

Theorem 1 (Sklar). Let $\mathbf{X}=(X_1,X_2,\dots,X_n)$ be a random n-vector with joint distribution function $H$ and one-dimensional marginal distributions $F_1,F_2,\dots,F_n$. Then there exists an n-copula $C$ (uniquely determined on $\times_{i=1}^n\operatorname{Range}(F_i)$) such that
$$H(\mathbf{x})=C(F_1(x_1),F_2(x_2),\dots,F_n(x_n))$$
for all $\mathbf{x}\in[-\infty,+\infty]^n$. If all the marginals $F_i$ are continuous, then the n-copula is unique.
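Sklar's theorem and the conditions of Definition 1 can be checked numerically in a small case. The sketch below is purely illustrative: the bivariate FGM copula, the exponential marginals, and all parameter values are our choices, not taken from the paper. It builds a joint distribution function $H$ from a copula and two marginals, and verifies the groundedness, uniform-margins, and 2-increasing properties on a sample box.

```python
import math

# Sklar: H(x, y) = C(F1(x), F2(y)).
# Illustrative choices: C = FGM 2-copula C(u,v) = u*v*(1 + lam*(1-u)*(1-v)),
# F1 = F2 = exponential(rate=1) distribution function.

def fgm(u, v, lam=0.5):
    return u * v * (1.0 + lam * (1.0 - u) * (1.0 - v))

def F_exp(x, rate=1.0):
    return 1.0 - math.exp(-rate * x) if x > 0 else 0.0

def H(x, y):
    return fgm(F_exp(x), F_exp(y))

# Definition 1, property i): C is grounded.
assert fgm(0.0, 0.7) == 0.0
# Property ii): uniform margins, C(1, v) = v.
assert abs(fgm(1.0, 0.7) - 0.7) < 1e-12
# Property iii): the C-volume of a box is nonnegative (2-increasing).
a1, b1, a2, b2 = 0.2, 0.6, 0.3, 0.9
vol = fgm(b1, b2) - fgm(a1, b2) - fgm(b1, a2) + fgm(a1, a2)
assert vol >= 0.0

print("H(1.0, 2.0) =", H(1.0, 2.0))
```

Since the marginals here are continuous, the copula recovered from $H$ would be unique, in line with the last statement of the theorem.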
For instance, some of the best-known copulas are the product copula $\Pi_n(\mathbf{u})=\prod_{i=1}^n u_i$ (which characterizes the independence of random variables; the subscript indicates the dimension), $M_n(\mathbf{u}):=\min(\mathbf{u})$, and $W_n(\mathbf{u})=\max\{\sum_{i=1}^n u_i-n+1,\,0\}$ for all $\mathbf{u}\in\mathbb{I}^n$. $W_n$ is an n-copula only when $n=2$; and for any n-copula $C$ we have $W_n\le C\le M_n$, so they are respectively known as the lower and upper Fréchet–Hoeffding bounds. Finally, let $\widehat{C}$ be the survival n-copula of $C$, i.e., $\widehat{C}(\mathbf{u})=P[\mathbf{U}>\mathbf{1}-\mathbf{u}]$, where $\mathbf{U}$ is a random n-vector whose components are uniform on $\mathbb{I}$ and whose associated n-copula is $C$. For further information about copulas, see [3, 13].

2.2 Measures of association

In the case of two variables, a measure of association quantifies both the strength and direction of their relationship. A value close to +1 indicates that high (or low) values of both variables tend to appear together, whereas a value near -1 suggests that large values of one variable are typically associated with small values of the other. One of the best-known measures of association for two random variables $(X,Y)$ is Spearman's rho. This measure relies solely on the copula $C$ associated with the pair $(X,Y)$. So we can denote
it by $\rho(C)$, and its expression is given by
$$\rho(C)=12\int_{\mathbb{I}^2}C(u,v)\,du\,dv-3.$$
Consequently, when analyzing its properties, we can assume that the random variables $X$ and $Y$ are uniformly distributed on $\mathbb{I}$. For a detailed exploration of its characteristics, refer to [13] and the sources cited therein.

In this paper, we focus on an extension of $\rho(C)$ to the context of multiple variables, where the analysis becomes more complex. To simplify and illustrate the problem, we first consider the case involving three variables. A common approach to measuring association in this setting is to take the average of the three pairwise association measures, denoted by $\rho^*_3$. However, this method is often insufficient for capturing the overall dependence among all three variables. It is well established that pairwise independence among a set of three random variables $(X,Y,Z)$ does not, in general, imply their mutual independence (see [14, Example 1]).

2.3 Directional dependence

Among the most widely recognized concepts describing potential dependence relationships between random variables are positive (respectively, negative) upper orthant dependence, PUOD (respectively, NUOD), and positive (respectively, negative) lower orthant dependence, PLOD (respectively, NLOD). The concepts of PUOD and NUOD intuitively describe how a set of random variables $X_1,X_2,\dots,X_n$ deviates from independence in terms of the likelihood of attaining large values simultaneously. Specifically, under PUOD, these variables are more prone to exhibit high values together compared to a collection of independent random variables with the same marginal distributions, whereas under NUOD, this tendency is weaker (for some results about positive dependence properties using copulas, see [10, 11, 13, 21]). However, dependence among random variables can be characterized in other ways. For instance, considering a subset $J\subset I=\{1,2,\dots,n\}$, one might observe a dependence structure where large (or small) values of the variables $\{X_j:j\in J\}$ are systematically associated with small (or large) values of the remaining variables $\{X_j:j\in I\setminus J\}$. This idea was explored in [17], leading to the following definition.

Definition 2 (PD(α) dependence concept). Let $\mathbf{X}$ be a continuous random vector with joint distribution function $H$, and let $\boldsymbol\alpha=(\alpha_1,\alpha_2,\dots,\alpha_n)\in\mathbb{R}^n$ be such that $|\alpha_i|=1$ for all $i=1,2,\dots,n$. We say that $\mathbf{X}$ (or $H$) is (orthant) positive (respectively, negative) dependent according to the direction $\boldsymbol\alpha$, denoted by PD(α) (respectively, ND(α)), if, for every $\mathbf{x}\in[-\infty,+\infty]^n$,
$$P\left[\bigcap_{i=1}^n(\alpha_iX_i>x_i)\right]\ge\prod_{i=1}^nP[\alpha_iX_i>x_i]\qquad(1)$$
(respectively, by reversing the sense of the inequality "≥" in (1)).

Following from the definition above, the concept PD(1) (respectively, PD(-1)), where $\mathbf{1}=(1,1,\dots,1)$, corresponds to PUOD (respectively, PLOD); and similarly for the corresponding negative concepts. For more details, see [17, 18]. Based on the PD(α) dependence concept, a dependence order is provided in [2] with the aim of comparing the strength of the positive dependence in a particular direction of two random vectors.

Definition 3 (PD(α) order). Let $\mathbf{X}$ and $\mathbf{Y}$ be two random n-vectors with respective distribution functions $F$ and $G$. Let $\boldsymbol\alpha\in\mathbb{R}^n$ be such that $|\alpha_i|=1$ for all $i=1,2,\dots,n$. $\mathbf{X}$ is said to be smaller than $\mathbf{Y}$ in the positive dependence order according to the direction $\boldsymbol\alpha$, denoted by $\mathbf{X}\le_{PD(\alpha)}\mathbf{Y}$, if, for every $\mathbf{x}\in\mathbb{R}^n$, we have
$$P\left[\bigcap_{i=1}^n(\alpha_iX_i>x_i)\right]\le P\left[\bigcap_{i=1}^n(\alpha_iY_i>x_i)\right].$$

3 Multivariate directional ρ-coefficients

The directional ρ-coefficients were first introduced in [14] in order to measure directional dependence in the case $n=3$, where $(X,Y,Z)$ are three random variables with associated copula $C$. For each direction $\boldsymbol\alpha\in\mathbb{R}^3$ such that $|\alpha_i|=1$ for $i=1,2,3$, they were defined by
$$\rho^{(\alpha_1,\alpha_2,\alpha_3)}_3(C)=8\int_{\mathbb{I}^3}Q^{(\alpha_1,\alpha_2,\alpha_3)}(u,v,w)\,du\,dv\,dw,$$
where
$$Q^{(\alpha_1,\alpha_2,\alpha_3)}(u,v,w)=P[\alpha_1X>\alpha_1u,\,\alpha_2Y>\alpha_2v,\,\alpha_3Z>\alpha_3w]-P[\alpha_1X>\alpha_1u]\,P[\alpha_2Y>\alpha_2v]\,P[\alpha_3Z>\alpha_3w]$$
for all $u,v,w\in\mathbb{I}$. As is noted in [14, Remark 3], $\rho^{\boldsymbol\alpha}_3$ is not, in general, a multivariate measure of association.

From now on, we focus on the multidimensional case. In what follows, $\mathbf{X}$ will be an n-dimensional random vector whose marginals are uniform on $[0,1]$. Inspired by [12, 14, 17], we introduce directional ρ-coefficients of association as in [7].

Definition 4. Let $\mathbf{X}$ be an n-dimensional random vector with associated n-copula $C$. The directional ρ-coefficients are defined for each direction $\boldsymbol\alpha\in\mathbb{R}^n$, with $|\alpha_i|=1$ for all $i=1,\dots,n$, as
$$\rho^{\boldsymbol\alpha}_n(C)=\frac{2^n(n+1)}{2^n-(n+1)}\int_{\mathbb{I}^n}Q^{\boldsymbol\alpha}(u_1,\dots,u_n)\,du_1\cdots du_n,\qquad(2)$$
where
$$Q^{\boldsymbol\alpha}(u_1,\dots,u_n)=P\left[\bigcap_{i=1}^n(\alpha_iX_i>\alpha_iu_i)\right]-\prod_{i=1}^nP[\alpha_iX_i>\alpha_iu_i].$$

It should be noted that the directional coefficients for the directions $\boldsymbol\alpha=-\mathbf{1}$ and $\boldsymbol\alpha=\mathbf{1}$ were defined for the first time in [12] in terms of copulas as follows:
$$\rho^-_n(C)=\frac{n+1}{2^n-(n+1)}\left(2^n\int_{\mathbb{I}^n}C(\mathbf{u})\,d\mathbf{u}-1\right)$$
and
$$\rho^+_n(C)=\frac{n+1}{2^n-(n+1)}\left(2^n\int_{\mathbb{I}^n}\Pi_n(\mathbf{u})\,dC(\mathbf{u})-1\right),$$
respectively.

We can rewrite the probabilities using indicator functions and expectations. Let $\mathbf{1}(A)$ denote the indicator function of the event $A$. Applying this, we get
$$\rho^{\boldsymbol\alpha}_n(C)=\frac{2^n(n+1)}{2^n-(n+1)}\int_{\mathbb{I}^n}\left(E\left[\prod_{i=1}^n\mathbf{1}(\alpha_iX_i>\alpha_ix_i)\right]-\prod_{i=1}^nE[\mathbf{1}(\alpha_iX_i>\alpha_ix_i)]\right)dx_1\cdots dx_n.$$
Here, the expectation $E$ is taken with respect to the joint distribution of the random vector $\mathbf{X}$ under the copula $C$.
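The integral above can be read as an expectation over an auxiliary uniform vector independent of $\mathbf{X}$, which suggests a simple Monte Carlo approximation of $\rho^{\boldsymbol\alpha}_n(C)$. The sketch below is ours (function names and the sampler are illustrative): it draws $\mathbf{X}$ from the copula together with an independent uniform $\mathbf{U}$ and averages the product of indicators. As a sanity check it uses the comonotone copula $M_3$ (all components equal), for which Example 2 below gives $\rho^{(1,1,1)}_3(M_3)=1$ exactly.

```python
import random

def rho_alpha_mc(draw_x, alpha, reps=200_000, seed=1):
    # Monte Carlo version of
    #   rho^alpha_n(C) = c_n * ( E[ prod_i 1(a_i X_i > a_i U_i) ] - 2^-n ),
    # with U uniform on [0,1]^n independent of X; draw_x(rng) samples X from C.
    rng = random.Random(seed)
    n = len(alpha)
    c_n = 2**n * (n + 1) / (2**n - (n + 1))
    hits = 0
    for _ in range(reps):
        x = draw_x(rng)
        if all(a * xi > a * rng.random() for a, xi in zip(alpha, x)):
            hits += 1
    return c_n * (hits / reps - 2.0 ** (-n))

# Comonotone copula M_3: X1 = X2 = X3 = V with V uniform on [0,1].
draw_m3 = lambda rng: [rng.random()] * 3
est = rho_alpha_mc(draw_m3, alpha=(1, 1, 1))
print(round(est, 2))  # should be close to the exact value 1
```

For a copula only available through data, `draw_x` would be replaced by resampling; the rank-based estimators of Section 4 serve that purpose without Monte Carlo error in the direction integral.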
For uniform marginals, $E[\mathbf{1}(\alpha_iX_i>\alpha_iU_i)]=1/2$, where $U_1,\dots,U_n$ are independent uniform random variables on $[0,1]$, independent of $\mathbf{X}$. Thus, the formula can be written as
$$\rho^{\boldsymbol\alpha}_n(C)=\frac{2^n(n+1)}{2^n-(n+1)}\left(E\left[\prod_{i=1}^n\mathbf{1}(\alpha_iX_i>\alpha_iU_i)\right]-\frac{1}{2^n}\right).$$

Some of the main properties of these coefficients are listed in the following result.

Corollary 2. Let $\mathbf{X}$ be an n-dimensional random vector with associated n-copula $C$, and let $\boldsymbol\alpha\in\mathbb{R}^n$ be such that $\alpha_i\in\{-1,1\}$ for all $i=1,\dots,n$. Then:

i) $\sum_{\boldsymbol\alpha}\rho^{\boldsymbol\alpha}_n(C)=0$.

ii) $\rho^{\boldsymbol\alpha}_n(C)=\rho^{-\boldsymbol\alpha}_n(\widehat{C})$.

Proof. Part i) is straightforward since we have $\sum_{\boldsymbol\alpha}P\left[\bigcap_{i=1}^n(\alpha_iX_i>\alpha_ix_i)\right]=\sum_{\boldsymbol\alpha}\prod_{i=1}^nP[\alpha_iX_i>\alpha_ix_i]=1$.

For part ii), note that $\widehat{C}(\mathbf{u})=P[\mathbf{U}>\mathbf{1}-\mathbf{u}]=P[\mathbf{1}-\mathbf{U}<\mathbf{u}]$, so $\widehat{C}$ is the joint distribution function of $\mathbf{1}-\mathbf{U}$, whose components are again uniform on $[0,1]$. Therefore,
$$\rho^{\boldsymbol\alpha}_n(C)=\frac{2^n(n+1)}{2^n-(n+1)}\int_{\mathbb{I}^n}\left(P\left[\bigcap_{i=1}^n(\alpha_iX_i>\alpha_ix_i)\right]-\prod_{i=1}^nP[\alpha_iX_i>\alpha_ix_i]\right)dx_1\cdots dx_n$$
$$=\frac{2^n(n+1)}{2^n-(n+1)}\int_{\mathbb{I}^n}\left(P\left[\bigcap_{i=1}^n\bigl(-\alpha_i(-X_i)>-\alpha_i(-x_i)\bigr)\right]-\prod_{i=1}^nP[-\alpha_i(-X_i)>-\alpha_i(-x_i)]\right)dx_1\cdots dx_n=\rho^{-\boldsymbol\alpha}_n(\widehat{C}),$$
as desired.

Next, we provide some examples of multivariate ρ-coefficients for some well-known
families of n-copulas.

Example 1 (Directional ρ-coefficients for $\Pi_n$). If $C=\Pi_n$, then we have $\rho^{\boldsymbol\alpha}_n(\Pi_n)=0$ since, in this case, $P\left[\bigcap_{i=1}^n(\alpha_iX_i>x_i)\right]=\prod_{i=1}^nP[\alpha_iX_i>x_i]$.

Example 2 (Directional ρ-coefficients for $M_n$). For the minimum copula $C=M_n$, we have $X_1=X_2=\cdots=X_n$ in distribution. Since the marginals of the copula are uniformly distributed on $[0,1]$, the associated random variables $X_i$ are also uniformly distributed on $[0,1]$; therefore, we consider
$$v_i(u_i,\alpha_i):=P[\alpha_iX_i>\alpha_iu_i]=\begin{cases}1-u_i&\text{if }\alpha_i=1,\\ u_i&\text{if }\alpha_i=-1.\end{cases}\qquad(3)$$
We consider the integral
$$I_n(\boldsymbol\alpha)=\int_{\mathbb{I}^n}\left(P\left[\bigcap_{i=1}^n(\alpha_iX>\alpha_iu_i)\right]-\prod_{i=1}^nv_i(u_i,\alpha_i)\right)du_1\cdots du_n$$
and study several cases depending on $k$, the number of components of $\boldsymbol\alpha$ equal to $-1$.

1. $\boldsymbol\alpha=(1,1,\dots,1)$ ($k=0$). The joint probability is $P\left[\bigcap_{i=1}^n(X>u_i)\right]=1-\max(u_1,\dots,u_n)$, and the product of marginal probabilities is $\prod_{i=1}^n(1-u_i)$. The integral becomes
$$I_n((1,\dots,1))=\int_{\mathbb{I}^n}\left(1-\max(u_1,\dots,u_n)-\prod_{i=1}^n(1-u_i)\right)du_1\cdots du_n.$$
Using the known results
$$\int_{\mathbb{I}^n}\bigl(1-\max(u_1,\dots,u_n)\bigr)\,du_1\cdots du_n=\frac{1}{n+1}\quad\text{and}\quad\int_{\mathbb{I}^n}\prod_{i=1}^n(1-u_i)\,du_1\cdots du_n=\frac{1}{2^n},$$
we get $I_n((1,\dots,1))=\frac{1}{n+1}-\frac{1}{2^n}$. Substituting into the definition of $\rho^{\boldsymbol\alpha}_n(M_n)$:
$$\rho^{(1,1,\dots,1)}_n(M_n)=\frac{2^n(n+1)}{2^n-(n+1)}\left(\frac{1}{n+1}-\frac{1}{2^n}\right)=\frac{2^n(n+1)}{2^n-(n+1)}\cdot\frac{2^n-(n+1)}{(n+1)2^n}=1.$$

2. $\boldsymbol\alpha=(-1,-1,\dots,-1)$ ($k=n$). The joint probability is $P\left[\bigcap_{i=1}^n(X<u_i)\right]=\min(u_1,\dots,u_n)$, and the product of marginal probabilities is $\prod_{i=1}^nu_i$. The integral becomes
$$I_n((-1,\dots,-1))=\int_{\mathbb{I}^n}\left(\min(u_1,\dots,u_n)-\prod_{i=1}^nu_i\right)du_1\cdots du_n.$$
Using the known result
$$\int_{\mathbb{I}^n}\min(u_1,\dots,u_n)\,du_1\cdots du_n=\frac{1}{n+1},$$
we get $I_n((-1,\dots,-1))=\frac{1}{n+1}-\frac{1}{2^n}$, and substituting into the definition of $\rho^{\boldsymbol\alpha}_n(M_n)$ gives, exactly as above, $\rho^{(-1,-1,\dots,-1)}_n(M_n)=1$.

3. $k$ even ($0<k<n$). When $k$ is even, we have $k$ variables for which we consider the probability that $-X_i>-u_i$ (i.e., $X_i<u_i$) and $n-k$ variables for which we consider $X_j>u_j$.
Due to the perfect positive dependence ($X_i=X\sim U[0,1]$), the joint probability is
$$P\left[X<\min_{i\in I}u_i\ \text{and}\ X>\max_{j\in J}u_j\right]=\max\left(0,\ \min_{i\in I}u_i-\max_{j\in J}u_j\right).$$
The integral $I_n(\boldsymbol\alpha)$ involves the integral of this joint probability minus the product of the marginal probabilities. The integration of $\max\bigl(0,\min_{i\in I}u_i-\max_{j\in J}u_j\bigr)$ over the unit hypercube yields
$$\frac{k!\,(n-k)!}{(n+1)!}.$$
Subtracting the integral of the product of marginals, which is $\frac{1}{2^n}$, we get
$$I_n(\boldsymbol\alpha)=\frac{k!\,(n-k)!}{(n+1)!}-\frac{1}{2^n}.$$
Multiplying by the normalization factor $\frac{2^n(n+1)}{2^n-(n+1)}$, we obtain
$$\rho^{\boldsymbol\alpha}_n(M_n)=\frac{2^n(n+1)}{2^n-(n+1)}\left(\frac{k!\,(n-k)!}{(n+1)!}-\frac{1}{2^n}\right)$$
for even $k$.

4. $k$ odd ($0<k<n$). When $k$ is odd, the joint probability is
$$P\left[\bigcap_{i=1}^n(\alpha_iX>\alpha_iu_i)\right]=\max\left(0,\ \min_{j\in J}u_j-\max_{i\in I}u_i\right),$$
where $|I|=k$ and $|J|=n-k$. The formula for $\rho^{\boldsymbol\alpha}_n(M_n)$ is the same as the one derived for the even case of $k$. The difference in the directional dependence for odd
$k$ arises from the structure of the joint probability involving $\min_{j\in J}u_j-\max_{i\in I}u_i$, which effectively reverses the roles of the minimum and maximum compared to the even-$k$ case, where we had $\min_{i\in I}u_i-\max_{j\in J}u_j$. However, the integral of $\max(0,a-b)$ is the same as the integral of $\max(0,b-a)$ over the appropriate domains, leading to the same formula for $I_n(\boldsymbol\alpha)$ and consequently for $\rho^{\boldsymbol\alpha}_n(M_n)$. In summary, where $k$ is the number of $-1$'s in $\boldsymbol\alpha$:
$$\rho^{\boldsymbol\alpha}_n(M_n)=\begin{cases}1&\text{if }k=0\text{ or }k=n,\\[4pt] \dfrac{2^n(n+1)}{2^n-(n+1)}\left(\dfrac{k!\,(n-k)!}{(n+1)!}-\dfrac{1}{2^n}\right)&\text{otherwise}.\end{cases}$$

Example 3 (Directional ρ-coefficients for the Farlie-Gumbel-Morgenstern (FGM) family of n-copulas). Let us consider the FGM family of n-copulas given by
$$C^{\lambda}_n(u_1,\dots,u_n)=\prod_{i=1}^nu_i\left(1+\lambda\prod_{i=1}^n(1-u_i)\right),\qquad\lambda\in[-1,1]$$
(see [3, 13]). From (3) we have
$$P[\alpha_iX_i>\alpha_iu_i]=1-\frac{1+\alpha_i}{2}u_i-\frac{1-\alpha_i}{2}(1-u_i).$$
The joint probability $P\left[\bigcap_{i=1}^n(\alpha_iX_i>\alpha_iu_i)\right]$ is given by the (directional) function
$$S^{\boldsymbol\alpha}(u_1,\dots,u_n):=\prod_{i=1}^n\left(1-\frac{1+\alpha_i}{2}u_i-\frac{1-\alpha_i}{2}(1-u_i)\right)\cdot\left(1+(-1)^{|J|}\lambda\prod_{i=1}^n\left(\frac{1+\alpha_i}{2}u_i+\frac{1-\alpha_i}{2}(1-u_i)\right)\right).$$
Then,
$$Q^{\boldsymbol\alpha}(u_1,\dots,u_n)=S^{\boldsymbol\alpha}(u_1,\dots,u_n)-\prod_{i=1}^nP[\alpha_iX_i>\alpha_iu_i]=(-1)^{|J|}\lambda\prod_{i=1}^nP[\alpha_iX_i>\alpha_iu_i]\bigl(1-P[\alpha_iX_i>\alpha_iu_i]\bigr).$$
Now, we calculate the integral of $Q^{\boldsymbol\alpha}$ over $\mathbb{I}^n$:
$$\int_{\mathbb{I}^n}Q^{\boldsymbol\alpha}(u_1,\dots,u_n)\,du_1\cdots du_n=(-1)^{|J|}\lambda\prod_{i=1}^n\int_0^1P[\alpha_iX_i>\alpha_iu_i]\bigl(1-P[\alpha_iX_i>\alpha_iu_i]\bigr)\,du_i.$$
After some elementary calculations, we obtain that, for every $\alpha_i\in\{-1,1\}$,
$$\int_0^1P[\alpha_iX_i>\alpha_iu_i]\bigl(1-P[\alpha_iX_i>\alpha_iu_i]\bigr)\,du_i=\frac{1}{6}.$$
Thus,
$$\int_{\mathbb{I}^n}Q^{\boldsymbol\alpha}(u_1,\dots,u_n)\,du_1\cdots du_n=(-1)^{|J|}\lambda\left(\frac{1}{6}\right)^n=\frac{(-1)^{|J|}\lambda}{6^n}.$$
Finally, it holds that
$$\rho^{\boldsymbol\alpha}_n(C^{\lambda}_n)=\frac{(-1)^{|J|}\,2^n(n+1)\,\lambda}{[2^n-(n+1)]\,6^n}.$$
To illustrate a simple case, let $\boldsymbol\alpha\in\mathbb{R}^4$ be a direction. Then, the respective directional ρ-coefficients are given by
$$\rho^{\boldsymbol\alpha}_4(C^{\lambda}_4)=\begin{cases}-5\lambda/891&\text{if }|J|\text{ is odd},\\ 5\lambda/891&\text{if }|J|\text{ is even},\end{cases}$$
where $J\subseteq\{1,2,\dots,n\}$ is such that $i\in J$ if $\alpha_i=1$, for $i=1,2,\dots,n$.

The following result can be trivially extracted from Definition 3 and the expression of the directional ρ-coefficient given by (2).

Corollary 3.
Let $\mathbf{X}$ and $\mathbf{Y}$ be two n-dimensional random vectors of continuous random variables uniformly distributed on $[0,1]$, whose respective joint distribution functions are the n-copulas $C_{\mathbf{X}}$ and $C_{\mathbf{Y}}$. Let $\boldsymbol\alpha=(\alpha_1,\dots,\alpha_n)\in\mathbb{R}^n$ be such that $|\alpha_i|=1$ for all $i=1,\dots,n$. If $\mathbf{X}\le_{PD(\alpha)}\mathbf{Y}$, then $\rho^{\boldsymbol\alpha}_n(C_{\mathbf{X}})\le\rho^{\boldsymbol\alpha}_n(C_{\mathbf{Y}})$.

For instance, if $C^{\lambda_1}_n$ and $C^{\lambda_2}_n$ are two members of the FGM family of n-copulas given in Example 3, it is known that $C^{\lambda_1}_n\le_{PD(\alpha)}C^{\lambda_2}_n$ if, and only if, $\lambda_1\le\lambda_2$ ([2]). Therefore, $\rho^{\boldsymbol\alpha}_n(C^{\lambda_1}_n)\le\rho^{\boldsymbol\alpha}_n(C^{\lambda_2}_n)$ if, and only if, $\lambda_1\le\lambda_2$.

In this work, we address a conjecture by Nelsen and Úbeda-Flores ([14]) which posits that the multidimensional directional ρ-coefficient can be expressed as a linear combination of lower-dimensional directional ρ-coefficients. Specifically, [14] suggests this involves a linear combination of $\rho^+$ and $\rho^-$ coefficients. Building on this, and noting the complexity highlighted by [7] regarding the representation of the $2^n$ directional coefficients $\rho^{\boldsymbol\alpha}_n$ for $n\ge 4$ in terms of lower-dimensional marginal ρ-coefficients, we prove that any directional ρ-coefficient can be expressed as a linear combination of $\rho^-$ coefficients of the constituent random variables (we adopt the convention that the sum, respectively product, over an empty set is 0, respectively 1), and we thereby correct [7, Equation
|
https://arxiv.org/abs/2505.22206v1
|
(10)], as well.

Theorem 4. Let $X$ be an $n$-dimensional random vector whose marginals are uniform on $[0,1]$, let $C$ be its associated $n$-copula, and let $\alpha\in\mathbb R^n$ be such that $\alpha_i\in\{-1,1\}$ for all $i=1,\dots,n$. Let $I\subseteq\{1,2,\dots,n\}$ be such that $\alpha_i=-1$ if $i\in I$ and $\alpha_i=1$ if $i\in J=\{1,2,\dots,n\}\setminus I$. Then
\[
\rho^\alpha_n(C)=\frac{2^n(n+1)}{2^n-(n+1)}\sum_{S\subseteq J}(-1)^{|S|}\,\frac{2^{|I|+|S|}-(|I|+|S|+1)}{2^{|I|+|S|}\,(|I|+|S|+1)}\,\rho^-_{X_{I\cup S}},
\]
where $X_K$ is the random vector whose components are $X_i$ for $i\in K$, and $\rho^-_{X_K}$ corresponds to the coefficient $\rho^-_k$ for $X_K$.

Proof. By using [1, Lemma 3.4 and Remark 3.5], we have
\begin{align*}
\rho^\alpha_n(C)&=\frac{2^n(n+1)}{2^n-(n+1)}\int_{I^n}\left(P\Big[\bigcap_{i=1}^n(\alpha_iX_i>\alpha_ix_i)\Big]-\prod_{i=1}^nP[\alpha_iX_i>\alpha_ix_i]\right)dx_1\cdots dx_n\\
&=\frac{2^n(n+1)}{2^n-(n+1)}\int_{I^n}\left(P\Big[\bigcap_{i\in I}\{X_i<x_i\},\ \bigcap_{i\in J}\{X_i>x_i\}\Big]-\prod_{i\in I}P[X_i<x_i]\prod_{i\in J}P[X_i>x_i]\right)dx_1\cdots dx_n\\
&=\frac{2^n(n+1)}{2^n-(n+1)}\int_{I^n}\Bigg(C(x_I,\mathbf 1)-\sum_{j\in J}C(x_{I\cup\{j\}},\mathbf 1)+\sum_{j_1\in J}\sum_{\substack{j_2\in J\\ j_2>j_1}}C(x_{I\cup\{j_1,j_2\}},\mathbf 1)-\cdots+(-1)^{|J|}C(x)-\prod_{i\in I}x_i\prod_{i\in J}(1-x_i)\Bigg)dx_1\cdots dx_n\\
&=\frac{2^n(n+1)}{2^n-(n+1)}\int_{I^n}\Bigg(\sum_{k=0}^{|J|}(-1)^k\sum_{\substack{S\subseteq J\\ |S|=k}}C(x_{I\cup S},\mathbf 1)-\prod_{i\in I}x_i\prod_{i\in J}(1-x_i)\Bigg)dx.
\end{align*}
Note that, adding and subtracting $1/2^{|I|+|S|}$ and recognizing the normalizing constant of $\rho^-_{X_{I\cup S}}$,
\begin{align*}
\int_{I^n}C(x_{I\cup S},\mathbf 1)\,dx&=\int_{I^n}\Big(C(x_{I\cup S},\mathbf 1)-\prod_{i\in I\cup S}x_i\Big)dx+\frac{1}{2^{|I|+|S|}}\\
&=\frac{2^{|I|+|S|}-(|I|+|S|+1)}{2^{|I|+|S|}\,(|I|+|S|+1)}\,\rho^-_{X_{I\cup S}}+\frac{1}{2^{|I|+|S|}},
\end{align*}
and
\[
\int_{I^n}\prod_{i\in I}x_i\prod_{i\in J}(1-x_i)\,dx_1\cdots dx_n=\frac{1}{2^n}.
\]
Considering the alternating sign and the number $\binom{|J|}{k}$ of such subsets, the expression for $\rho^\alpha_n(C)$ becomes
\begin{align*}
\rho^\alpha_n(C)&=\frac{2^n(n+1)}{2^n-(n+1)}\Bigg(\sum_{k=0}^{|J|}(-1)^k\sum_{\substack{S\subseteq J\\ |S|=k}}\Bigg[\frac{2^{|I|+k}-(|I|+k+1)}{2^{|I|+k}(|I|+k+1)}\,\rho^-_{X_{I\cup S}}+\frac{1}{2^{|I|+k}}\Bigg]-\frac{1}{2^n}\Bigg)\\
&=\frac{2^n(n+1)}{2^n-(n+1)}\Bigg(\sum_{S\subseteq J}(-1)^{|S|}\,\frac{2^{|I|+|S|}-(|I|+|S|+1)}{2^{|I|+|S|}(|I|+|S|+1)}\,\rho^-_{X_{I\cup S}}+\sum_{k=0}^{|J|}(-1)^k\binom{|J|}{k}\frac{1}{2^{|I|+k}}-\frac{1}{2^n}\Bigg).
\end{align*}
The term that does not involve the $\rho^-$ coefficients is
\[
\sum_{k=0}^{|J|}(-1)^k\binom{|J|}{k}\frac{1}{2^{|I|+k}}-\frac{1}{2^n}=\frac{1}{2^{|I|}}\sum_{k=0}^{|J|}\binom{|J|}{k}\Big(-\frac12\Big)^k-\frac{1}{2^n}=\frac{1}{2^{|I|}}\Big(1-\frac12\Big)^{|J|}-\frac{1}{2^n}=\frac{1}{2^{|I|+|J|}}-\frac{1}{2^n}=0.
\]
Therefore, the desired formula is obtained. □

From Theorem 4, it can be verified that the expressions for $n=3$ align with those presented in [7, Theorem 3.1]. Furthermore, Table 2 provides the sixteen $\rho$-coefficients for $n=4$.
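As an arithmetic sanity check (a sketch, not from the paper), the closed form of Example 3 can be evaluated with exact rational arithmetic; for $n=4$ its magnitude reduces to $5\lambda/891$, matching the values displayed in Example 3:

```python
from fractions import Fraction

def fgm_directional_rho(n, lam, J_size):
    """Closed form from Example 3 for the FGM n-copula:
    rho^alpha_n(C^lambda_n) = (-1)^{|J|} * 2^n (n+1) lam / ([2^n - (n+1)] 6^n)."""
    magnitude = Fraction(2**n * (n + 1), (2**n - (n + 1)) * 6**n)
    return (-1) ** J_size * magnitude * Fraction(lam)

# For n = 4 the magnitude reduces to 5*lambda/891, positive for |J| even.
assert fgm_directional_rho(4, 1, 0) == Fraction(5, 891)
assert fgm_directional_rho(4, 1, 1) == Fraction(-5, 891)
print(fgm_directional_rho(4, 1, 2))  # -> 5/891
```

Since the FGM dependence term vanishes under any proper marginalization, only the full-dimensional $\rho^-$ term survives in Theorem 4 for this family, which is why the directional coefficients are all equal up to the sign $(-1)^{|J|}$.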
Direction $\alpha$      Coefficient $\rho^\alpha_4(C)$
$(1,1,1,1)$             $\frac{20}{33}(\rho^-_{X_{12}}+\rho^-_{X_{13}}+\rho^-_{X_{14}}+\rho^-_{X_{23}}+\rho^-_{X_{24}}+\rho^-_{X_{34}})-\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{124}}+\rho^-_{X_{134}}+\rho^-_{X_{234}})+\rho^-_{X_{1234}}$
$(-1,1,1,1)$            $-\frac{20}{33}(\rho^-_{X_{12}}+\rho^-_{X_{13}}+\rho^-_{X_{14}})+\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{124}}+\rho^-_{X_{134}})-\rho^-_{X_{1234}}$
$(1,-1,1,1)$            $-\frac{20}{33}(\rho^-_{X_{12}}+\rho^-_{X_{23}}+\rho^-_{X_{24}})+\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{124}}+\rho^-_{X_{234}})-\rho^-_{X_{1234}}$
$(1,1,-1,1)$            $-\frac{20}{33}(\rho^-_{X_{13}}+\rho^-_{X_{23}}+\rho^-_{X_{34}})+\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{134}}+\rho^-_{X_{234}})-\rho^-_{X_{1234}}$
$(1,1,1,-1)$            $-\frac{20}{33}(\rho^-_{X_{14}}+\rho^-_{X_{24}}+\rho^-_{X_{34}})+\frac{10}{11}(\rho^-_{X_{124}}+\rho^-_{X_{134}}+\rho^-_{X_{234}})-\rho^-_{X_{1234}}$
$(-1,-1,1,1)$           $\frac{20}{33}\rho^-_{X_{12}}-\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{124}})+\rho^-_{X_{1234}}$
$(-1,1,-1,1)$           $\frac{20}{33}\rho^-_{X_{13}}-\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{134}})+\rho^-_{X_{1234}}$
$(-1,1,1,-1)$           $\frac{20}{33}\rho^-_{X_{14}}-\frac{10}{11}(\rho^-_{X_{124}}+\rho^-_{X_{134}})+\rho^-_{X_{1234}}$
$(1,-1,-1,1)$           $\frac{20}{33}\rho^-_{X_{23}}-\frac{10}{11}(\rho^-_{X_{123}}+\rho^-_{X_{234}})+\rho^-_{X_{1234}}$
$(1,-1,1,-1)$           $\frac{20}{33}\rho^-_{X_{24}}-\frac{10}{11}(\rho^-_{X_{124}}+\rho^-_{X_{234}})+\rho^-_{X_{1234}}$
$(1,1,-1,-1)$           $\frac{20}{33}\rho^-_{X_{34}}-\frac{10}{11}(\rho^-_{X_{134}}+\rho^-_{X_{234}})+\rho^-_{X_{1234}}$
$(-1,-1,-1,1)$          $\frac{10}{11}\rho^-_{X_{123}}-\rho^-_{X_{1234}}$
$(-1,-1,1,-1)$          $\frac{10}{11}\rho^-_{X_{124}}-\rho^-_{X_{1234}}$
$(-1,1,-1,-1)$          $\frac{10}{11}\rho^-_{X_{134}}-\rho^-_{X_{1234}}$
$(1,-1,-1,-1)$          $\frac{10}{11}\rho^-_{X_{234}}-\rho^-_{X_{1234}}$
$(-1,-1,-1,-1)$         $\rho^-_{X_{1234}}$

Table 2: The sixteen directional $\rho$-coefficients for $n=4$.

4 Nonparametric estimators for directional dependence

There are several possible approaches to defining an estimator for the directional $\rho$-coefficients since, in practice, the copula $C$ remains unknown and must be estimated from the available data. Perhaps the most immediate option would be to use the empirical copula; however, in [16], Pérez and Prieto-Alaiz
already warn of the lack of good properties of the estimators resulting from this method; for example, they may fall outside the parameter space ([16, Example 1]). That is why the estimators $\widehat\rho^\alpha_d$ proposed in this paper are based on ranks.

Inspired by [7], where a nonparametric estimator for tridimensional directional $\rho$-coefficients is defined, we introduce a nonparametric estimator for multivariate directional $\rho$-coefficients as follows. Consider a $d$-dimensional random sample of size $n$, $\{X_{1j},\dots,X_{dj}\}_{j=1}^n$, of a $d$-dimensional random vector $(X_1,\dots,X_d)$ with associated $d$-copula $C$. Let $R_{ij}$ be the rank of $X_{ij}$ in $\{X_{i1},\dots,X_{in}\}$, let $\overline R_{ij}=n+1-R_{ij}$, and let $R^{\alpha_i}_{ij}$ be $\overline R_{ij}=n+1-R_{ij}$ if $\alpha_i=-1$ and $R_{ij}$ if $\alpha_i=1$, for $i=1,\dots,d$. Then we define
\[
\widehat\rho^\alpha_d=\frac{\frac1n\sum_{j=1}^n\prod_{i=1}^dR^{\alpha_i}_{ij}-\left(\frac{n+1}{2}\right)^d}{\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d}.
\]
Note that in the case of perfect positive dependence, that is, when one random variable increases whenever the others do (i.e., the ranks in each dimension coincide), we have $\widehat\rho^+_d=\widehat\rho^-_d=1$. Every other directional estimator is equal to 1 provided that its directional ranks $R^{\alpha_i}_{ij}$ coincide for every $i=1,\dots,d$ and every $j=1,\dots,n$. It is also worth observing that for $d=3$ the formula matches the one presented in [7]:
\[
\widehat\rho^{(\alpha_1,\alpha_2,\alpha_3)}_3=\frac{8}{n(n-1)(n+1)^2}\sum_{j=1}^nR^{\alpha_1}_{1j}R^{\alpha_2}_{2j}R^{\alpha_3}_{3j}-\frac{n+1}{n-1}.
\]
For this dimension, $d=3$, we can define one more coefficient, the arithmetic mean of the three two-dimensional coefficients, $\rho^*_3=(\rho_{12}+\rho_{13}+\rho_{23})/3$. To estimate this coefficient, we can take the arithmetic mean of the estimators of the two-dimensional coefficients as in [7], i.e., $\widehat\rho^*_3=(\widehat\rho_{12}+\widehat\rho_{13}+\widehat\rho_{23})/3$. However, as with the coefficient itself, this estimator may fail to capture dependence relationships between the samples of the random variables, as we illustrate in the following example.

Example 4. For the FGM family of 3-copulas with $\lambda=0.6$, we performed a Monte Carlo simulation to generate 1000 samples, each of size 500.
For each of these samples, both the coefficient $\rho^*_3$ and the directional $\rho$-coefficients $\rho^-_3$ and $\rho^+_3$ have been calculated. The arithmetic mean of all the results obtained over the samples yielded the following values:
\[
\widehat\rho^*_3=0.0006,\qquad\widehat\rho^-_3=-0.0215,\qquad\widehat\rho^+_3=0.0217.
\]
So the value of $\widehat\rho^*_3$ tends to 0 as $n$ increases; therefore, there may be dependence among the samples of the random variables that remains undetected by the estimator $\widehat\rho^*_3$.

It is possible to express these estimators as a linear combination of the estimators of the lower-dimensional $\widehat\rho^-$-coefficients, analogously to the combination presented in Theorem 4. In order to prove this result, we first establish the following technical lemma.

Lemma 5. Let $U_1,\dots,U_d$ be random variables with marginal distributions uniform on $[0,1]$. For any partition of the set of indices $\{1,\dots,d\}$ into two disjoint sets $I$ and $J$ (i.e., $I\cup J=\{1,\dots,d\}$ and $I\cap J=\varnothing$), the following identity holds for the expectation of a mixed product:
\[
E\Bigg[\prod_{i\in I}(1-U_i)\prod_{i\in J}U_i\Bigg]=\sum_{S\subseteq J}(-1)^{|S|}\,E\Bigg[\prod_{l\in I\cup S}(1-U_l)\Bigg].
\]
The identity also applies to empirical averages of a sample $U_{1j},\dots,U_{dj}$, $j=1,\dots,n$:
\[
\frac1n\sum_{j=1}^n\Bigg(\prod_{i\in I}(1-U_{ij})\Bigg)\Bigg(\prod_{i\in J}U_{ij}\Bigg)=\sum_{S\subseteq J}(-1)^{|S|}\Bigg(\frac1n\sum_{j=1}^n\prod_{l\in I\cup S}(1-U_{lj})\Bigg).
\]

Proof. For each $i\in J$, rewrite $U_i$ as $1-(1-U_i)$. Applying the inclusion-exclusion principle for products (i.e., expanding the product of the terms $1-x_i$ with $x_i=1-U_i$), we get
\[
\prod_{i\in J}U_i=\prod_{i\in J}\bigl(1-(1-U_i)\bigr)=\sum_{S\subseteq J}(-1)^{|S|}\prod_{s\in S}(1-U_s).
\]
By the distributive law, and since $I$ and $J$ are disjoint (so $I$ and any $S\subseteq J$ are disjoint as well), multiplying by $\prod_{i\in I}(1-U_i)$ combines the products of the terms $1-U_l$ over the union of their index sets:
\[
\prod_{i\in I}(1-U_i)\prod_{i\in J}U_i=\sum_{S\subseteq J}(-1)^{|S|}\prod_{l\in I\cup S}(1-U_l).
\]
Taking the expectation of both sides and using its linearity yields the identity in the statement. The extension to empirical averages follows directly by replacing expectations with sample means. □

We are now ready to prove that the directional coefficients can be represented as a linear combination of lower-dimensional $\widehat\rho^-$ coefficients.

Theorem 6. Let $\{X_{1j},\dots,X_{dj}\}_{j=1}^n$ be a $d$-dimensional random sample of size $n$ of a $d$-dimensional random vector $(X_1,\dots,X_d)$ with associated $d$-copula $C$. Let $\alpha\in\mathbb R^d$ be such that $\alpha_i\in\{-1,1\}$ for all $i=1,\dots,d$, and let $I\subseteq\{1,2,\dots,d\}$ be such that $\alpha_i=-1$ if $i\in I$ and $\alpha_i=1$ if $i\in J=\{1,2,\dots,d\}\setminus I$. Then
\[
\widehat\rho^\alpha_d=\frac{(n+1)^d}{\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d}\sum_{k=0}^{|J|}(-1)^k\,\frac{2^{|I|+k}-(|I|+k+1)}{2^{|I|+k}(|I|+k+1)}\sum_{\substack{S\subseteq J\\ |S|=k}}\widehat\rho^-_{X_{I\cup S}}.\tag{4}
\]

Proof. Let $R^{\alpha_i}_{ij}$ denote the (directional) ranks defined above. The pseudo-observations are defined as $U_{ij}=R_{ij}/(n+1)$ for $i=1,\dots
,d$ and $j=1,\dots,n$. For a subset of indices $K\subseteq\{1,\dots,d\}$ with $|K|$ components, the empirical coefficient $\widehat\rho^-_K$ is defined as
\[
\widehat\rho^-_K=\frac{2^{|K|}(|K|+1)}{2^{|K|}-(|K|+1)}\Bigg[\frac1n\sum_{j=1}^n\prod_{i\in K}(1-U_{ij})-\frac{1}{2^{|K|}}\Bigg].
\]
Now, let $N_{\mathrm{rank}}$ denote the numerator of $\widehat\rho^\alpha_d$ and $D_{\mathrm{rank}}$ its denominator, i.e.,
\[
N_{\mathrm{rank}}=\frac1n\sum_{j=1}^n\prod_{i=1}^dR^{\alpha_i}_{ij}-\left(\frac{n+1}{2}\right)^d,\qquad D_{\mathrm{rank}}=\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d.
\]
By definition, $R^{\alpha_i}_{ij}=(n+1)(1-U_{ij})$ if $\alpha_i=-1$ (i.e., $i\in I$), and $R^{\alpha_i}_{ij}=(n+1)U_{ij}$ if $\alpha_i=1$ (i.e., $i\in J$). Thus, the product $\prod_{i=1}^dR^{\alpha_i}_{ij}$ becomes
\[
\prod_{i=1}^dR^{\alpha_i}_{ij}=\Bigg(\prod_{i\in I}(n+1)(1-U_{ij})\Bigg)\Bigg(\prod_{i\in J}(n+1)U_{ij}\Bigg)=(n+1)^d\Bigg(\prod_{i\in I}(1-U_{ij})\Bigg)\Bigg(\prod_{i\in J}U_{ij}\Bigg).
\]
Substituting this expression into $N_{\mathrm{rank}}$:
\[
N_{\mathrm{rank}}=(n+1)^d\Bigg[\frac1n\sum_{j=1}^n\Bigg(\prod_{i\in I}(1-U_{ij})\Bigg)\Bigg(\prod_{i\in J}U_{ij}\Bigg)-\frac{1}{2^d}\Bigg].
\]
Let $K^{\mathrm{emp}}_\alpha$ denote the term in square brackets, so that $N_{\mathrm{rank}}=(n+1)^dK^{\mathrm{emp}}_\alpha$. Lemma 5 (together with $\sum_{S\subseteq J}(-1)^{|S|}2^{-(|I|+|S|)}=2^{-d}$) states that $K^{\mathrm{emp}}_\alpha$ can be expressed as
\[
K^{\mathrm{emp}}_\alpha=\sum_{S\subseteq J}(-1)^{|S|}
\Bigg[\frac1n\sum_{j=1}^n\prod_{l\in I\cup S}(1-U_{lj})-\frac{1}{2^{|I|+|S|}}\Bigg].
\]
This identity allows us to decompose $K^{\mathrm{emp}}_\alpha$ into a sum of terms involving products of $(1-U_{lj})$. From the definition of $\widehat\rho^-_K$, we can isolate the bracketed term:
\[
\frac1n\sum_{j=1}^n\prod_{i\in K}(1-U_{ij})-\frac{1}{2^{|K|}}=\frac{2^{|K|}-(|K|+1)}{2^{|K|}(|K|+1)}\,\widehat\rho^-_K.
\]
Substituting this with $K=I\cup S$, we have
\[
K^{\mathrm{emp}}_\alpha=\sum_{S\subseteq J}(-1)^{|S|}\,\frac{2^{|I|+|S|}-(|I|+|S|+1)}{2^{|I|+|S|}(|I|+|S|+1)}\,\widehat\rho^-_{X_{I\cup S}}.
\]
This sum can be regrouped by the size $k=|S|$ of $S$:
\[
K^{\mathrm{emp}}_\alpha=\sum_{k=0}^{|J|}(-1)^k\,\frac{2^{|I|+k}-(|I|+k+1)}{2^{|I|+k}(|I|+k+1)}\sum_{\substack{S\subseteq J\\ |S|=k}}\widehat\rho^-_{X_{I\cup S}}.
\]
Finally, recall that $\widehat\rho^\alpha_d=N_{\mathrm{rank}}/D_{\mathrm{rank}}$ and substitute $N_{\mathrm{rank}}=(n+1)^dK^{\mathrm{emp}}_\alpha$, whence Equation (4) follows, completing the proof. □

Remark 1. Observe that, since
\begin{align*}
\lim_{n\to\infty}\frac{(n+1)^d}{\frac1n\sum_{l=1}^nl^d-\left(\frac{n+1}{2}\right)^d}
&=\lim_{n\to\infty}\frac{n^d\left(1+\frac1n\right)^d}{\frac{n^d}{d+1}+O(n^{d-1})-\frac{n^d}{2^d}+O(n^{d-1})}\\
&=\lim_{n\to\infty}\frac{n^d}{\left(\frac{1}{d+1}-\frac{1}{2^d}\right)n^d+O(n^{d-1})}
=\frac{1}{\frac{1}{d+1}-\frac{1}{2^d}}=\frac{2^d(d+1)}{2^d-(d+1)},
\end{align*}
from Theorem 6 we have
\[
\lim_{n\to\infty}\widehat\rho^\alpha_d=\frac{2^d(d+1)}{2^d-(d+1)}\sum_{k=0}^{|J|}(-1)^k\,\frac{2^{|I|+k}-(|I|+k+1)}{2^{|I|+k}(|I|+k+1)}\sum_{\substack{S\subseteq J\\ |S|=k}}\rho^-_{X_{I\cup S}}=\rho^\alpha_d(C).
\]

Remark 2. Note that, for instance, the estimators for dimensions 3 and 4 are given by
\[
\widehat\rho^\alpha_3=(-1)^{|I|}\,\frac{8(n+1)}{n-1}\Bigg(\sum_{\substack{S\subseteq J\\ |I|+|S|=2}}\frac{1}{12}\,\widehat\rho^-_{X_{I\cup S}}-\sum_{\substack{S\subseteq J\\ |I|+|S|=3}}\frac18\,\widehat\rho^-_{X_{I\cup S}}\Bigg)
\]
and
\[
\widehat\rho^\alpha_4=(-1)^{|I|}\,\frac{240(n+1)^3}{33n^3+27n^2-37n-23}\Bigg(\sum_{\substack{S\subseteq J\\ |I|+|S|=2}}\frac{1}{12}\,\widehat\rho^-_{X_{I\cup S}}-\sum_{\substack{S\subseteq J\\ |I|+|S|=3}}\frac18\,\widehat\rho^-_{X_{I\cup S}}+\sum_{\substack{S\subseteq J\\ |I|+|S|=4}}\frac{11}{80}\,\widehat\rho^-_{X_{I\cup S}}\Bigg),
\]
respectively, where the factor $(-1)^{|I|}$ accounts for the sign $(-1)^{|S|}=(-1)^{|I|}(-1)^{|I|+|S|}$.

4.1 Empirical processes and some properties of these estimators

4.1.1 Empirical process

In [7], an empirical process was introduced in order to estimate a generalization of a 3-copula, and it can be easily extended to the $d$-dimensional case. As those authors did, we begin by considering an index set $\beta\subseteq\{1,\dots,d\}$. We write $x_\beta=(x_{\beta_1},\dots,x_{\beta_k})$ for an arbitrary value of $X_\beta$, let $|\beta|$ denote the cardinality of $\beta$, and let $\alpha\in\mathbb R^d$ be a direction. Consider the function
\[
H_{\beta,\alpha}(x_\beta)=P[\alpha_iX_i\le\alpha_ix_i,\ i\in\beta],\tag{5}
\]
i.e.,
\[
H_{\beta,\alpha}(x_\beta)=P\Bigg[\bigcap_{\substack{i\in\beta\\ \alpha_i=1}}\{X_i\le x_i\},\ \bigcap_{\substack{i\in\beta\\ \alpha_i=-1}}\{X_i\ge x_i\}\Bigg].
\]
Let $u_\beta$ be the analogue of $x_\beta$, and let $F_i$ be the marginal cumulative distribution function of $X_i$.
Then
\[
C_{\beta,\alpha}(u_\beta)=H_{\beta,\alpha}\big((v_i)_{i\in\beta}\big),\tag{6}
\]
where
\[
v_i=\begin{cases}F_i^{-1}(u_i), & \text{if }\alpha_i=1,\\ F_i^{-1}(1-u_i), & \text{if }\alpha_i=-1,\end{cases}
\]
is a generalization of the $d$-copula; it reduces to the $d$-copula when $\alpha_i=1$ for all $i$ and $\beta=\{1,\dots,d\}$. From this expression, we can introduce, as in [7], the empirical process that estimates it:
\[
C_{\beta,\alpha,n}(u_\beta)=\frac{1}{n+1}\sum_{j=1}^n\prod_{i\in\beta}\mathbf 1\Big\{\alpha_i\,\frac{R_{ij}}{n+1}\le\alpha_iu_i\Big\}.\tag{7}
\]
When $\beta=\{1,\dots,d\}$ and $\alpha=\mathbf 1$, Expression (7) is the empirical estimator of the $d$-copula. To determine the weak convergence (denoted by "$\xrightarrow{w}$") of the empirical process
\[
\sqrt n\,\{C_{\beta,\alpha,n}(u_\beta)-C_{\beta,\alpha}(u_\beta)\},\qquad u_\beta\in[0,1]^{|\beta|},
\]
it suffices to proceed as in [7], where the following two conditions are considered.

Condition 1. For each $i\in\beta$, the $i$-th first-order partial derivative $\dot C_{i,\beta,\alpha}$ exists and is continuous on the set $\{u_\beta\in[0,1]^{|\beta|}:0<u_i<1\}$.

Condition 2. $\mathbb B_{C_{\beta,\alpha}}(u_\beta)$ is a $C_{\beta,\alpha}$-tight centered Gaussian process on $[0,1]^{|\beta|}$. Here $u^{(i)}_\beta$ is the vector whose $j$-th component, for $j\in\beta$, is
\[
\big(u^{(i)}_\beta\big)_j=\begin{cases}1, & \text{if }j\ne i\text{ and }\alpha_j=1,\\ 0, & \text{if }j\ne i\text{ and }\alpha_j=-1,\\ u_i, & \text{if }j=i.\end{cases}
\]
The covariance function is $E\big(\mathbb B_{C_{\beta,\alpha}}(u_\beta),\mathbb B_{C_{\beta,\alpha}}(v_\beta)\big)=C_{\beta,\alpha}(w_\beta)-C_{\beta,\alpha}(u_\beta)C_{\beta,\alpha}(v_\beta)$, where the $j$-th component $(w_\beta)_j$ is
\[
(w_\beta)_j=\begin{cases}\min(u_j,v_j), & \text{if }\alpha_j=1,\\ \max(u_j,v_j), & \text{if }\alpha_j=-1,\end{cases}
\]
for all $j\in\beta$. From Conditions 1 and 2, the following result is obtained, which is valid for any
dimension $d$ and, therefore, guarantees the weak convergence of our process.

Theorem 7. Let $H_{\beta,\alpha}$ be a $|\beta|$-dimensional distribution function given by (5), with continuous marginal distributions $F_i$, $i\in\beta$, and with $C_{\beta,\alpha}$ given by (6), where $\beta\subseteq\{1,\dots,d\}$ and $\alpha\in\mathbb R^d$ with $\alpha_i\in\{-1,1\}$. Under Condition 1 on the function $C_{\beta,\alpha}$, as $n\to\infty$,
\[
\sqrt n\,\{C_{\beta,\alpha,n}(u_\beta)-C_{\beta,\alpha}(u_\beta)\}\xrightarrow{w}\mathbb G_{C_{\beta,\alpha}}(u_\beta).
\]
Weak convergence takes place in $\ell^\infty([0,1]^{|\beta|})$ and
\[
\mathbb G_{C_{\beta,\alpha}}(u_\beta)=\mathbb B_{C_{\beta,\alpha}}(u_\beta)-\sum_{i\in\beta}\dot C_{i,\beta,\alpha}(u_\beta)\,\mathbb B_{C_{\beta,\alpha}}\big(u^{(i)}_\beta\big),
\]
where $\mathbb B_{C_{\beta,\alpha}}$ is a $C_{\beta,\alpha}$-tight centered Gaussian process on $[0,1]^{|\beta|}$ and $\mathbb G_{C_{\beta,\alpha}}$ follows Condition 2.

4.1.2 Properties of the estimators

Our main objective in this section is to prove that the estimators we have defined are asymptotically unbiased. To do so, first note that our estimators satisfy
\[
\widehat\rho^\alpha_d=\frac{(n+1)^{d+1}/n}{\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d}\int_{I^d}C_{\beta,\alpha,n}(u_\beta)\,du_\beta-\frac{\left(\frac{n+1}{2}\right)^d}{\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d},\qquad\text{for }\beta=\{1,\dots,d\},\tag{8}
\]
since
\[
\int_{[0,1]^{|\beta|}}C_{\beta,\alpha,n}(u_\beta)\,du_\beta=\frac{1}{(n+1)^{|\beta|+1}}\sum_{j=1}^n\prod_{i\in\beta}R^{\alpha_i}_{ij}.
\]
In order to prove that our estimators are asymptotically unbiased, we will use the following theorem from [7], which is an adaptation of [4, Theorem 6] and [6].

Theorem 8. Under the assumptions of Theorem 7, let $\{a_n\}_{n\ge1}$ and $\{b_n\}_{n\ge1}$ be two sequences of real numbers satisfying $\sqrt n(a_n-a_0)=O(n^{-1/2})$ and $\sqrt n(b_n-b_0)=O(n^{-1/2})$, respectively, where $a_0$ and $b_0$ are constant values. Let
\[
T_n(f)=a_n\int_{I^{|\beta|}}f(u_\beta)\,du_\beta+b_n,\qquad n\ge0,
\]
where $f$ is a $|\beta|$-integrable function. Then, as $n\to\infty$,
\[
\sqrt n\,\{T_n(C_{\beta,\alpha,n})-T_0(C_{\beta,\alpha})\}\xrightarrow{w}Z_{C_{\beta,\alpha}}\sim N\big(0,\sigma^2_{C_{\beta,\alpha}}\big),
\]
with
\[
\sigma^2_{C_{\beta,\alpha}}=a_0^2\int_{I^{|\beta|}}\int_{I^{|\beta|}}E\big[\mathbb G_{C_{\beta,\alpha}}(u_\beta)\,\mathbb G_{C_{\beta,\alpha}}(v_\beta)\big]\,du_\beta\,dv_\beta\qquad\text{and}\qquad Z_{C_{\beta,\alpha}}:=a_0\int_{I^{|\beta|}}\mathbb G_{C_{\beta,\alpha}}(u_\beta)\,du_\beta.
\]
As a consequence, we have the following result.

Corollary 9. Under the assumptions of Theorem 7, as $n\to\infty$,
\[
\sqrt n\,\{\widehat\rho^\alpha_d-\rho^\alpha_d\}\xrightarrow{w}Z_{C_{\beta,\alpha}}\sim N\big(0,\sigma^2_{C_{\beta,\alpha}}\big),
\]
where $\beta=\{1,2,\dots,d\}$ and $\alpha\in\mathbb R^d$ is a vector such that $\alpha_i\in\{-1,1\}$ for all $i=1,\dots,d$.

Proof. We consider the real number sequences in Theorem 8 as
\[
a_n=\frac{(n+1)^{d+1}/n}{\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d},\qquad a_0=\frac{2^d(d+1)}{2^d-(d+1)},
\]
\[
b_n=-\frac{\left(\frac{n+1}{2}\right)^d}{\frac1n\sum_{j=1}^nj^d-\left(\frac{n+1}{2}\right)^d},\qquad b_0=-\frac{d+1}{2^d-(d+1)}.
\]
We are going to check that the conditions $\sqrt n(a_n-a_0)=O(n^{-1/2})$ and $\sqrt n(b_n-b_0)=O(n^{-1/2})$ hold.
We start by using the fact that
\[
\sum_{j=1}^nj^d=\frac{n^{d+1}}{d+1}+O(n^d).
\]
Therefore, after some tedious calculations, we obtain
\[
\sqrt n\,(a_n-a_0)=\sqrt n\,\frac{O(n^{d-1})}{O(n^d)}=O(n^{-1/2}),
\]
and, analogously,
\[
\sqrt n\,(b_n-b_0)=\sqrt n\,\frac{O(n^{d-1})}{O(n^d)}=O(n^{-1/2}).
\]
So, using (8), the desired result is obtained. □

By Corollary 9, the estimators $\widehat\rho^\alpha_d$ are asymptotically unbiased and $\operatorname{Var}(\widehat\rho^\alpha_d)\to0$ as $n\to\infty$. Therefore, $\widehat\rho^\alpha_d$ is also a consistent estimator, since convergence in probability, $\widehat\rho^\alpha_d\xrightarrow{p}\rho^\alpha_d$, is guaranteed as a consequence of Chebyshev's inequality.

θ      ρ^α_3 (exact)   n=20      n=50      n=100     n=500
0.4    -0.0726         -0.0740   -0.0697   -0.0735   -0.0714
0.6    -0.0969         -0.0924   -0.0943   -0.0966   -0.0969
1      -0.1338         -0.1313   -0.1327   -0.1355   -0.1347
2      -0.1906         -0.1895   -0.1920   -0.1917   -0.1928
5      -0.2684         -0.2609   -0.2661   -0.2673   -0.2686

Table 3: Results obtained in Monte Carlo simulations in order to calculate the value of $\widehat\rho^{(-1,1,1)}_d$, based on a total of 1000 samples of size $n$ generated from the Clayton 3-copula for different values of the parameter $\theta$; displayed values are the means of the estimators.

4.2 Simulations

To test the performance of these estimators, we have carried out Monte Carlo simulations for $d$-copulas of the parametric Clayton family, whose expression,
with $\theta\in(0,\infty)$, is given by
\[
C_\theta(u_1,\dots,u_d)=\left(\sum_{i=1}^du_i^{-\theta}-d+1\right)^{-1/\theta},\qquad\forall u\in[0,1]^d
\]
(see [13]). We consider two dimensions, $d\in\{3,4\}$, and four sample sizes, $n\in\{20,50,100,500\}$. We have generated 1000 Monte Carlo replicates of size $n$ for each parameter value and for each direction. In each of these cases, we have calculated the estimators $\widehat\rho^\alpha_d$ of the directional $\rho$-coefficients $\rho^\alpha_d$. Once obtained, we have calculated their mean, which appears in the tables presented. The tables also show an approximate value of the corresponding directional $\rho$-coefficient, calculated by numerical integration.

Table 3 displays the estimations of $\rho^{(-1,1,1)}_3$ for the Clayton 3-copula with sample sizes $n\in\{20,50,100,500\}$ and parameter values $\theta\in\{0.4,0.6,1,2,5\}$, comparing them with their exact value. Table 4 displays the analogous estimations of $\rho^{(-1,-1,1)}_3$, and Table 5 those of $\rho^{(-1,1,1,-1)}_4$ for the Clayton 4-copula.

Likewise, the values of $\rho^{(-1,1,1,-1)}_4$ have been estimated via the expression of Theorem 6. Table 6 shows the obtained results: the displayed values correspond to the means of the estimators $\widehat\rho^-_{14}$, $\widehat\rho^-_{124}$, $\widehat\rho^-_{134}$, $\widehat\rho^-_{1234}$ and $\widehat\rho^{(-1,1,1,-1)}_4$ over 1000 Monte Carlo simulations.

Analyzing the tables, we can see that these estimators are a good approximation to the real value of the directional $\rho$-coefficients, increasingly so the larger the sample size considered. In all considered cases, negative values have been obtained for these coefficients. However, by changing the direction, positive values can also be obtained for the Clayton $d$-copula, closer to one the larger the value of the parameter $\theta$, as would be expected.
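To illustrate how such a table can be produced, here is a minimal, self-contained sketch (not the authors' code): a gamma-frailty sampler for the Clayton copula combined with an inline rank-based estimator. As a labeled assumption, we pair $\alpha_i=-1$ with the reversed rank $n+1-R_{ij}$, the convention under which the all-$(-1)$ direction uses products of $1-U_{ij}$, and we ignore ties.

```python
import numpy as np

def clayton_sample(n, d, theta, rng):
    # Marshall-Olkin / gamma-frailty sampler for the Clayton d-copula (theta > 0):
    # V ~ Gamma(1/theta), E_i ~ Exp(1), U_i = (1 + E_i/V)^(-1/theta).
    V = rng.gamma(1.0 / theta, 1.0, size=(n, 1))
    E = rng.exponential(1.0, size=(n, d))
    return (1.0 + E / V) ** (-1.0 / theta)

def rho_hat(U, alpha):
    # Rank-based directional rho estimator (assumed sign convention:
    # alpha_i = -1 pairs with the reversed rank n + 1 - R_ij).
    n, d = U.shape
    R = np.argsort(np.argsort(U, axis=0), axis=0) + 1  # ranks 1..n per column
    Ra = np.where(np.asarray(alpha) == -1, n + 1 - R, R)
    num = Ra.prod(axis=1).mean() - ((n + 1) / 2) ** d
    den = (np.arange(1, n + 1) ** d).mean() - ((n + 1) / 2) ** d
    return num / den

rng = np.random.default_rng(1)
reps = [rho_hat(clayton_sample(500, 3, 1.0, rng), (-1, 1, 1)) for _ in range(200)]
print(np.mean(reps))  # Table 3 reports -0.1338 for theta = 1; the mean should be near it

# sanity check: comonotone data gives every directional estimator equal to 1
z = rng.random(200)
print(rho_hat(np.column_stack([z, z, z]), (-1, -1, -1)))  # -> 1.0
```

The frailty construction is the standard way to sample Archimedean copulas with a completely monotone generator; only the sign convention for the directional ranks is an assumption here.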
It can also be seen that the values obtained by estimating $\widehat\rho^{(-1,1,1,-1)}_4$ through the expression of Theorem 6, reported in Table 6, are closer to the real value of the coefficient $\rho^{(-1,1,1,-1)}_4$, although the difference with the values estimated in Table 5 is not substantial.

θ      ρ^α_3 (exact)   n=20      n=50      n=100     n=500
0.4    -0.0919         -0.0848   -0.0906   -0.0918   -0.0919
0.6    -0.1287         -0.1203   -0.1284   -0.1277   -0.1278
1      -0.1850         -0.1778   -0.1826   -0.1827   -0.1849
2      -0.2621         -0.2478   -0.2580   -0.2591   -0.2612
5      -0.3212         -0.3105   -0.3162   -0.3195   -0.3207

Table 4: Results obtained in Monte Carlo simulations in order to calculate the value of $\widehat\rho^{(-1,-1,1)}_d$, based on a total of 1000 samples of size $n$ generated from the Clayton 3-copula for different values of the parameter $\theta$.

θ      ρ^α_4 (exact)   n=20      n=50      n=100     n=500
0.4    -0.0664         -0.0650   -0.0661   -0.0652   -0.0670
0.6    -0.0876         -0.0888   -0.0880   -0.0866   -0.0874
1      -0.1176         -0.1165   -0.1180   -0.1189   -0.1169
2      -0.1583         -0.1612   -0.1594   -0.1590   -0.1584
5      -0.1966         -0.2017   -0.1985   -0.1975   -0.1967

Table 5: Results obtained in Monte Carlo simulations in order to calculate the value of $\widehat\rho^{(-1,1,1,-1)}_d$, based on a total of 1000 samples of size $n$ generated from the Clayton 4-copula for different values of the parameter $\theta$.

5
Applications

In this final section, we explore potential applications of the proposed directional dependence estimators in various fields, with a particular focus on health, climate, and rainfall studies. By leveraging these estimators, researchers can gain deeper insights into the directional dependencies that exist between different components of multivariate data, which are often overlooked in traditional analyses. In the context of health, for example, understanding the directional relationships between risk factors and health outcomes can improve predictive models. Similarly, in climate and rainfall studies, these estimators can be instrumental in analyzing how different atmospheric variables influence each other in a directional manner, thereby enhancing forecasting and decision-making processes. Through these applications, we highlight the practical value of directional dependence measures in addressing complex, real-world problems, particularly relevant in finance and economics, as the following example illustrates.

Example 5. In this example we model the association of three companies: Intel (INTC), General Electric (GE) and Microsoft (MSFT). To do this, we use a database of daily log-returns of the three companies over five years (1996-2000), obtained from the QRM (Quantitative Risk Management) package of R [15]. In Figure 1 we can see the three-dimensional scatterplot of the data available for the three companies, as well as the two-dimensional scatterplots of the data involving only two companies at a time. To study the levels of directional association of these companies, the directional $\rho$-coefficients have been estimated for each of the possible directions of $\mathbb R^3$, obtaining Table 7.
θ (exact ρ^{(-1,1,1,-1)}_4)     n      ρ̂^-_{14}   ρ̂^-_{124}  ρ̂^-_{134}  ρ̂^-_{1234}  ρ̂^{(-1,1,1,-1)}_4
0.4 (-0.0664)                   20     0.2258     0.2244     0.2247     0.2068      -0.0650
                                50     0.2357     0.2336     0.2294     0.2122      -0.0658
                                100    0.2456     0.2393     0.2364     0.2163      -0.0672
                                200    0.2471     0.2388     0.2375     0.2169      -0.0664
0.6 (-0.0876)                   20     0.3220     0.31047    0.3062     0.2827      -0.0828
                                50     0.3376     0.3241     0.3190     0.2939      -0.0862
                                100    0.3373     0.3209     0.3199     0.2922      -0.0860
                                200    0.3376     0.3210     0.3223     0.2924      -0.0878
1 (-0.1176)                     20     0.4499     0.4312     0.4318     0.4006      -0.1113
                                50     0.4645     0.4441     0.4448     0.4116      -0.1149
                                100    0.4754     0.4510     0.4511     0.4140      -0.1179
                                200    0.4765     0.4520     0.4522     0.4156      -0.1176
2 (-0.1583)                     20     0.6593     0.6309     0.6312     0.5934      -0.1545
                                50     0.6680     0.6374     0.6350     0.5949      -0.1570
                                100    0.6743     0.6412     0.6418     0.6000      -0.1582
                                200    0.6773     0.6439     0.6443     0.6022      -0.1583
5 (-0.1966)                     20     0.8630     0.8360     0.8368     0.8054      -0.1923
                                50     0.8732     0.8470     0.8473     0.8167      -0.1944
                                100    0.8792     0.8535     0.8534     0.8234      -0.1954
                                200    0.8816     0.8558     0.8562     0.8256      -0.1965

Table 6: Results obtained in Monte Carlo simulations using the expression in Theorem 6 in order to calculate the value of $\widehat\rho^{(-1,1,1,-1)}_d$, based on a total of 1000 samples of size $n$ generated from the Clayton 4-copula for different values of the parameter $\theta$.

[Figure 1: Scatterplots of log-return data. (a) 3-dimensional scatterplot. (b) 2-dimensional scatterplots.]

α              ρ̂^α_3
(-1,-1,-1)     0.4400
(1,1,1)        0.4330
(-1,1,1)      -0.1640
(1,-1,1)      -0.2060
(1,1,-1)      -0.0525
(-1,-1,1)     -0.0605
(-1,1,-1)     -0.2160
(1,-1,-1)     -0.1741

Table 7: Different values obtained for the directional estimators according to the data.

The largest values for the estimators are in the directions $(1,1,1)$ and $(-1,-1,-1)$, which can be interpreted as "large"
values of all three variables tend to occur simultaneously, and the same with "small" values.

6 Conclusion

In this paper, we have explored the use of copulas and directional $\rho$-coefficients as effective tools for measuring directional association between random variables. We have presented the generalization of directional $\rho$-coefficients to higher dimensions, highlighting their key properties and theoretical implications. Additionally, we addressed the previously posed conjecture in a more generalized manner, providing new insights into the behavior of these coefficients in multivariate settings. Nonparametric rank-based estimators were also introduced as a means to estimate the values of the directional $\rho$-coefficients from observed data, offering a robust and flexible approach to capturing dependency patterns in empirical datasets.

Studying directional dependence offers several advantages, particularly in capturing the directional nature of relationships between random variables. Unlike traditional measures of association, which often treat dependency symmetrically, directional coefficients allow us to distinguish how the relationship between random variables may differ in magnitude depending on the direction of change. This may be especially valuable in fields such as finance, economics, and environmental science, where understanding the direction of dependency can provide more accurate predictions and decision-making tools.

In conclusion, the exploration of directional dependence through copulas and $\rho$-coefficients offers valuable insights into the relationships between variables, with a wide range of potential applications across various scientific disciplines. Future work in this area promises to expand our understanding of complex dependencies and provide new tools for analyzing high-dimensional data.
Acknowledgments

The first and third authors acknowledge the support of the research project PID2021-122657OB-I00 by the Ministry of Science, Innovation and Universities (Spain). The first author also thanks P FORT GRUPOS 2023/76, PPIT-UAL, Junta de Andalucía-ERDF 2021-2027, Programme 54.A, and is partially supported by the CDTIME of the University of Almería (Spain). The third author also acknowledges the support of P FORT GRUPOS 2023/104, PPIT-UAL, Junta de Andalucía-ERDF 2021-2027, Objective RS01.1, Programme 54.A.

References

[1] de Amo, E., García-Fernández, D., Quesada-Molina, J.J., Úbeda-Flores, M. (2025). Modeling directional monotonicity with copulas. Iranian J. Fuzzy Syst. 22, 135-146.
[2] de Amo, E., Rodríguez-Griñolo, M.R., Úbeda-Flores, M. (2024). Directional dependence orders of random vectors. Mathematics 12, article 419.
[3] Durante, F., Sempi, C. (2016). Principles of Copula Theory. Chapman & Hall/CRC, Boca Raton.
[4] Fermanian, J.D., Radulovic, D., Wegkamp, M. (2004). Weak convergence of empirical copula processes. Bernoulli 10, 847-860.
[5] Fisher, N.I. (1997). Copulas. In: Encyclopedia of Statistical Sciences, Vol. 1 (S. Kotz, C.B. Read, D.L. Banks, Eds.), Wiley, New York, pp. 159-163.
[6] Gaenssler, P., Stute, W. (1987). Seminar on Empirical Processes. Springer Basel, Basel.
[7] García, J.E., González-López, V.A., Nelsen, R.B. (2013). A new index to measure positive dependence in trivariate distributions. J. Multivariate Anal. 115, 481-495.
[8] Jogdeo, K. (1982). Concepts of dependence. In: Encyclopedia of Statistical Sciences, Vol. 1 (S. Kotz, N.L. Johnson, Eds.), Wiley, New York, pp. 324-334.
[9] Joe, H. (1990). Multivariate concordance. J. Multivariate Anal. 35, 12-30.
[10] Joe, H. (2014). Dependence modeling
with copulas. Chapman & Hall, New York.
[11] Müller, A., Scarsini, M. (2006). Archimedean copulae and positive dependence. J. Multivariate Anal. 93, 434-445.
[12] Nelsen, R.B. (1996). Nonparametric measures of multivariate association. In: Distributions with Fixed Marginals and Related Topics, Vol. 28 (L. Rüschendorf, B. Schweizer, M.D. Taylor, Eds.), Institute of Mathematical Statistics, Hayward, pp. 223-232.
[13] Nelsen, R.B. (2006). An Introduction to Copulas (2nd ed.). Springer, New York.
[14] Nelsen, R.B., Úbeda-Flores, M. (2011). Directional dependence in multivariate distributions. Ann. Inst. Stat. Math. 64, 677-685.
[15] Pfaff, B., Hofert, M., McNeil, A., Ulmann, S. (2025). Quantitative Risk Management Library, Version 0.4-35. https://CRAN.R-project.org/package=QRMlib
[16] Pérez, A., Prieto-Alaiz, M. (2016). A note on nonparametric estimation of copula-based multivariate extensions of Spearman's rho. Stat. Probab. Lett. 112, 41-50.
[17] Quesada-Molina, J.J., Úbeda-Flores, M. (2012). Directional dependence of random vectors. Inf. Sci. 215, 67-74.
[18] Quesada-Molina, J.J., Úbeda-Flores, M. (2024). Monotonic random variables according to a direction. Axioms 13, 275.
[19] Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris 8, 229-231.
[20] Úbeda-Flores, M., Fernández-Sánchez, J. (2017). Sklar's theorem: The cornerstone of the Theory of Copulas. In: Copulas and Dependence Models with Applications (M. Úbeda-Flores, E. de Amo Artero, F. Durante, J. Fernández-Sánchez, Eds.), Springer, Cham, pp. 241-258.
[21] Wei, Z., Wang, T., Panichkitkosolkul, W. (2014). Dependence and association concepts through copulas. In: Modeling Dependence in Econometrics - Advances in Intelligent Systems and Computing, Vol. 251 (V.N. Huynh, V. Kreinovich, S. Sriboonchitta, Eds.), Springer, Cham, pp. 113-126.
arXiv:2505.22371v1 [stat.OT] 28 May 2025

Adaptive tail index estimation: minimal assumptions and non-asymptotic guarantees

Johannes Lederer, Anne Sabourin, Mahsa Taheri

Abstract

A notoriously difficult challenge in extreme value theory is the choice of the number $k\ll n$, where $n$ is the total sample size, of extreme data points to consider for inference of tail quantities. Existing theoretical guarantees for adaptive methods typically require second-order assumptions or von Mises assumptions that are difficult to verify and often come with tuning parameters that are challenging to calibrate. This paper revisits the problem of adaptive selection of $k$ for the Hill estimator. Our goal is not an 'optimal' $k$ but one that is 'good enough', in the sense that we strive for non-asymptotic guarantees that might be sub-optimal but are explicit and require minimal conditions. We propose a transparent adaptive rule that does not require preliminary calibration of constants, inspired by 'adaptive validation' developed in high-dimensional statistics. A key feature of our approach is the consideration of a grid for $k$ of size $\ll n$, which aligns with common practice among practitioners but has remained unexplored in theoretical analysis. Our rule only involves an explicit expression of a variance-type term; in particular, it does not require controlling or estimating a bias term. Our theoretical analysis is valid for all heavy-tailed distributions, specifically for all regularly varying survival functions. Furthermore, when von Mises conditions hold, our method achieves 'almost' minimax optimality with a rate of $\sqrt{\log\log n}\;n^{-|\rho|/(1+2|\rho|)}$ when the grid size is of order $\log n$, in contrast to the $(\log\log(n)/n)^{|\rho|/(1+2|\rho|)}$ rate in existing work. Our simulations show that our approach performs particularly well for ill-behaved distributions.
Keywords: Heavy tails; Non-asymptotic guarantees; Adaptive validation; Hill estimator

1 Introduction

Extreme value theory (EVT) provides a statistical framework for the analysis of tail events. The results of EVT are essential in applications ranging from finance to environmental sciences. Statistical methods in EVT typically rely on choosing a small fraction of the available data to make inference about tail quantities. In univariate peaks-over-threshold methods, given a dataset $(X_i,\,i\le n)$ of independent observations with a common cumulative distribution function $F$, this amounts to choosing the number $k$ of the largest order statistics $(X_{(1)}\ge X_{(2)}\ge\dots\ge X_{(n)})$ retained for inference. Here and throughout, we order the samples by decreasing order of magnitude, and the number $k$ is referred to as the 'extreme sample size', as it represents the number of observations used in practice for inference. The minimal working assumption in EVT is that, for some sequences $a_n>0$, $b_n\in\mathbb R$ and for $x\in\mathbb R$, $F^n(a_nx+b_n)\to G(x)$ as $n\to\infty$, where $G$ is an extreme value distribution of the type $G(x)=\exp\big[-(1+\gamma x)_+^{-1/\gamma}\big]$. The distribution $F$ is then said to belong to the max-domain of attraction of $G$, and $\gamma\in\mathbb R$ is called the tail index. The Fréchet case $\gamma>0$ corresponds to heavy-tailed distributions, for which the survival function $\bar F=1-F$ is regularly varying, that is,
\[
\bar F(tx)/\bar F(t)\to x^{-1/\gamma}\qquad\text{as }t\to\infty,\tag{1.1}
\]
for any fixed $x>0$.

∗ Universität Hamburg, Germany. Email: johannes.lederer@uni-hamburg.de
† Université Paris Cité, CNRS, MAP5, Paris; Télécom Paris, Institut polytechnique de Paris. Email: anne.sabourin@math.cnrs.fr
‡ Universität Hamburg, Germany. Email: mahsa.taheri@uni-hamburg.de

Standard reference textbooks on EVT and its applications include Embrechts et al. (2013); Beirlant et al. (2006); Resnick
(2007a, 2008). Estimating the tail index in this context has been the subject of a wealth of works, one main reason being that estimating γ also allows one to estimate high quantiles with the help of the Weissman estimator (Weissman, 1978). A particularly popular estimator for γ is the Hill estimator proposed in Hill (1975),

γ̂(k) = (1/k) Σ_{i=1}^{k} log(X_(i)/X_(k+1)). (1.2)

Tail-index estimation has been the focus of an extensive body of research over the past decades. Providing a comprehensive review of the literature is out of our scope, and we refer the reader to the review paper Fedotenkov (2020). A major bottleneck in applications of EVT is that one must choose the threshold (or k) to hopefully reach a bias-variance compromise. Too small a k leads to a high variance, whereas too large a k may result in bias. This 'choice of k' issue is a pervasive challenge in EVT, of course not limited to the Hill estimator. Regarding tail index estimation, Scarrott & MacDonald (2012) provide an overview of proven methods for threshold selection, from rules of thumb and graphical diagnostics (Beirlant et al., 1996) to resampling-based approaches (Gomes & Oliveira, 2001), see also Gomes et al. (2012, 2016), and automatic adaptive procedures based on second order assumptions with asymptotic guarantees (Hall & Welsh, 1985; Drees & Kaufmann, 1998; Drees, 2001). As emphasized in Scarrott & MacDonald (2012), a significant limitation of the latter approaches is the second order assumptions and the associated parameter, which may be difficult to estimate. Grama & Spokoiny (2008) constitutes a notable exception insofar as the working assumption is an 'accompanying Pareto tail', which neither implies nor is implied by (1.1), although the rates of convergence obtained on concrete examples indeed require second order or von Mises assumptions. All the results mentioned thus far are of asymptotic nature.
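For concreteness, the Hill estimator (1.2) takes only a few lines to compute. The sketch below is purely illustrative and not part of any method discussed in this paper; the function name and the exact-Pareto test sample are our own choices.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator (1.2): average log-ratio of the k largest order
    statistics to the (k+1)-th largest."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]   # X_(1) >= ... >= X_(n)
    if not 1 <= k < len(xs):
        raise ValueError("need 1 <= k < n")
    return float(np.mean(np.log(xs[:k] / xs[k])))

# Exact Pareto sample with tail index gamma = 0.5, i.e. survival function x^(-2):
rng = np.random.default_rng(0)
sample = (1.0 - rng.uniform(size=100_000)) ** -0.5
print(hill_estimator(sample, k=2_000))   # should be close to 0.5
```

For an exact Pareto sample the bias term vanishes, so the estimate fluctuates around γ with standard deviation roughly γ/√k.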
Non-asymptotic guarantees in EVT are much scarcer than asymptotic ones, although this emerging topic has been developing fast over the past decade, see e.g. Carpentier & Kim (2015); Boucheron & Thomas (2015); Goix et al. (2015); Lhaut et al. (2021); Engelke et al. (2021); Drees & Sabourin (2021); Clémençon et al. (2023); Aghbalou et al. (2024) and the references therein. Beyond the principled benefit of being valid for finite (observable) sample sizes, the proof techniques involved in these analyses typically require fewer regularity conditions than asymptotic studies. Indeed, it is relatively easy in this framework to separate bias terms, stemming from the non-asymptotic nature of the data at hand, from deviation terms (which we refer to as 'variance' terms by a slight abuse of terminology) arising from the natural variability of the considered process above a fixed threshold. The variance term may then be analysed independently from the bias term, and most importantly, it is not technically required that the bias be negligible compared with the variance, mainly because Slutsky-type arguments are not needed. As an example, existence of partial derivatives of the standard tail-dependence function (a functional measure of dependence of extremes) is not required in Goix et al. (2015), and second-order assumptions are unnecessary in Clémençon et al. (2023). The primary goal of this work is to leverage the advantages of a non-asymptotic analysis to develop an adaptive selection rule for k. Our method provides certain 'optimality'
guarantees under the minimal Condition (1.1). In addition, it achieves strong (nearly minimax, up to a power of log log n) guarantees under the more restrictive von Mises Condition 3. A key feature of our approach is the consideration of a grid K for k of size |K| ≪ n, which aligns with common practice among practitioners but has remained unexplored in theoretical analysis. The core strength of this work lies in its robust, data-driven selection of k, enabled by expressing the error as an exact quantile of a known distribution, with the remainder absorbed into a bias term that does not need to be specified explicitly. To the best of our knowledge, no existing statistical method relying on EVT has achieved this. For simplicity, we focus on the flagship example of the Hill estimator in this paper. Our method does not replace classical asymptotic approaches in EVT, but rather offers a robust, assumption-light complement, an especially valuable addition given that strong parametric assumptions are often seen as a key limitation in practical applications of extreme value analysis. The only existing pieces of work regarding adaptive tail index estimation in a non-asymptotic framework are Carpentier & Kim (2015) and Boucheron & Thomas (2015). The latter authors derive tail bounds for the Hill estimator, while the former propose a novel estimator which may be seen as a generalization of Pickands' estimator (Pickands III, 1975). Both references propose an adaptive selection rule for k with minimax guarantees. Following in the footsteps of previous asymptotic studies, their theoretical analysis requires stronger assumptions than (1.1). Namely, Carpentier & Kim (2015) work under a second order Pareto assumption, |1 − F(x) − Cx^{−1/γ}| ≤ C′x^{−(1+β)/γ}, where β is the second order parameter.
On the other hand, Boucheron & Thomas (2015) impose that the survival function F̄ (or equivalently a quantile function) admits a standardized Karamata representation with a bias function (also called von Mises function) decreasing faster than a certain power of the quantile level (see Condition 3 in Section 4). This implies in particular the condition required by Carpentier & Kim (2015). Further discussion regarding different types of second-order assumptions required in the literature is deferred to Section 4. Additionally, it is important to note that the analysis carried out in Boucheron & Thomas (2015) involves critical tuning parameters, which are chosen in their experiments through preliminary calibration. With these carefully tuned parameters, the authors show through simulation studies that their adaptive rule attains performance similar to the one proposed by Drees & Kaufmann (1998), under weaker second order conditions. Our contribution in this work is to provide a simple and transparent adaptive method for tail index estimation, requiring neither calibration nor second order or von Mises assumptions that can be difficult to verify. Our approach is inspired by an adaptive strategy called 'Adaptive Validation' (AV) (Lederer, 2022, Chapter 4), which itself is inspired by Lepski's method in non-parametric regression (Lepski, 1990; Lepski et al., 1997; Goldenshluger & Lepski, 2011). Lepski's method has inspired a variety of adaptations to specific settings in statistics (Comte & Lacour, 2013; Lacour & Massart, 2016). The AV approach primarily requires a sharp control of a variance term within an error decomposition, without imposing
any particular condition on the bias term except that it can be made smaller than the variance term for certain values of the tuning parameter (here, for small k). To date, these general tools have not been considered within the field of EVT. Under the minimal regular variation assumption (1.1), we propose an 'Extreme Adaptive Validation' (EAV) rule for selecting k, denoted as k̂_EAV. We demonstrate that k̂_EAV performs comparably to a certain 'oracle' choice of k, denoted as k∗, for which we derive some optimality properties (see Section 2.1). Furthermore, under the additional von Mises Condition 3, we establish that k∗ achieves minimax optimal error rates, up to a power of log log n, and we show that the adaptive validation method k̂_EAV attains nearly minimax rates up to a nearly constant factor which is a power of log log n. Also, instead of considering all possible values of k, our approach restricts to a much smaller logarithmic grid, a strategy commonly used in practice. This significantly reduces computational complexity: while previous methods have worst-case complexity of O(n^2) and a posteriori complexity of O(k̂^2), our approach achieves O((log n)^2) complexity, and in practice, a posteriori complexity of only O((log k̂_EAV)^2). This reduction in complexity is not crucial for estimators that are not computationally intensive. However, it is anticipated that this simplification may prove beneficial in future works extending the EAV framework to more complex tasks, such as multivariate extreme value analysis. The paper is organized as follows: In Section 2, we describe the general principles underlying an EAV rule, along with its guarantees, at a rather high level of generality, as the described methodology is applicable to any estimator of the tail index that satisfies a certain bias-variance decomposition. In Section 3, we specialize to the case of the Hill estimator, for which we derive non-asymptotic error bounds, allowing us to leverage the results of Section 2.1.
In Section 4, we consider the case where additional second-order conditions hold in addition to Equation (1.1), and we provide additional guarantees in this setting. We support our theoretical findings with simulations in Section 5. Additional theoretical details and simulations are deferred to the Appendix.

2 Adaptive validation

We begin by introducing the Extreme Adaptive Validation (EAV) procedure that we promote, highlighting its broad applicability. Our analysis is applicable to any estimator γ̂(k) of γ that takes as input the k largest order statistics of an i.i.d. sample drawn from F, provided that non-asymptotic error bounds are known and take the form of a bias-variance decomposition satisfying the following condition.

Condition 1 (Bias-Variance Decomposition) There exist two functions V : N × (0, 1) → R+ and B : N × N × (0, 1) → R+ such that for any confidence level δ ∈ (0, 1) and any k ∈ {1, . . . , n}, with probability at least 1 − δ,

|γ̂(k) − γ| ≤ γ V(k, δ) + B(k, n, δ), (2.1)

where the functions V(·, ·), B(·, ·, ·) satisfy the following requirements: (a) The term V(k, δ) has an explicit expression, and V(·, δ) is a non-increasing function of its first argument k such that V(k, δ) → 0 as k → ∞. (b) The function B(·, ·, δ) is non-negative and non-decreasing in its first argument k. While it may not have an explicit expression, it satisfies B(k, n, δ) → 0 as k/n → 0.

In Condition 1, the function V should be seen as a deviation term associated with the variability of averages over the k largest order statistics. The larger k, the smaller V. The rationale
behind the factor γ in the required error decomposition (2.1) is that the standard deviation of the Hill estimator is γ/√k in the ideal case of a Pareto distribution. The function B should be seen as a bias term: the larger k, the less extreme the considered data are, and the larger B. We emphasize that explicit knowledge of B is not required. Note that the requirement in (b) that B is a non-decreasing function of k is automatically satisfied when replacing B(k, n, δ) with B̃(k, n, δ) = sup_{k′≤k} B(k′, n, δ). We shall prove in Section 3 that the Hill estimator γ̂ indeed satisfies Condition 1, with an explicit deviation term that is a quantile of a centered Gamma random variable of order V(k, δ) ≤ √(2 log(4/δ)/k) + log(4/δ)/k, and a bias term that is not explicit but satisfies requirement (b) in Condition 1.

2.1 Adaptive Validation framework

We are now prepared to introduce an Adaptive Validation (AV) approach for selecting the extreme sample size k with statistical guarantees, building upon the methodologies developed in Chichignoud et al. (2016); Li & Lederer (2019); Taheri et al. (2023); Laszkiewicz et al. (2021). These works have set forth algorithms for calibrating tuning parameters in diverse contexts, including standard linear and logistic regression, as well as graphical models, outside the EVT framework. In this section, we refrain from imposing additional assumptions (such as second-order or von Mises conditions). Consequently, our objective is not to identify an optimal k, but rather a 'good enough' k that comes with certain guarantees. Let γ̂ satisfy Condition 1. For any confidence level δ, we define an 'oracle' k∗(δ, n) as one that balances V(k, δ) and B(k, n, δ) in (2.1), among candidates k chosen from a grid K ⊂ {1, . . . , n} of size |K| ≤ n.
In this setting, we (naturally) need to assume that the grid is chosen 'wide' enough to contain a k reaching a bias-variance compromise, as encapsulated in the following condition:

Condition 2 (Sufficiently wide grid) The triplet (δ, n, K) is such that, with k_min = min(K) and k_max = max(K), B(k_min, n, δ) ≤ γV(k_min, δ) and B(k_max, n, δ) > γV(k_max, δ).

Note that Condition 2 is automatically satisfied for grids such that k_min = 1 and k_max = n, for n large enough, in view of Condition 1. For (δ, n, K) satisfying Condition 2, we define the oracle k∗(δ, n) as follows:

k∗(δ, n) = max{k ∈ K : B(k, n, δ) ≤ γV(k, δ)}. (2.2)

Note that under Condition 2, the oracle k∗(δ, n) is well-defined and k_min ≤ k∗(δ, n) < k_max. It may seem surprising and perhaps not ideal that the definition of the oracle depends on the choice of the grid. One might argue that this approach simply replaces the choice of k with the choice of K. However, there are universal default choices for K, such as geometric grids or linearly spaced grids. Moreover, with sufficiently fine grids that have a high enough maximum and logarithmic grid size, |K| ∝ log(n), we can show that the oracle error achieves known minimax rates under second-order conditions. Further discussion is deferred to Section 4. Informally, the oracle k∗(δ, n) ensures a compromise between bias and variance, in the sense that γV(k, δ) ≈ B(k, n, δ). Formally, we obtain immediately an upper bound for the error
of k∗(δ, n), without a bias term. The following result derives immediately from the very definition of k∗, from the upper bound (2.1), and from the monotonicity requirements on the functions V and B.

Proposition 1 (Explicit error bound for the oracle) Let γ̂ satisfy Condition 1. For any δ ∈ (0, 1), n ≥ 1 and K satisfying Condition 2, we have for all fixed k ≤ k∗(δ, n), with probability at least 1 − δ,

|γ − γ̂(k)| ≤ 2γV(k, δ). (2.3)

Note that the 'price' paid by the oracle k∗(δ, n) to benefit from a clear guarantee without a bias term (in the ideal case where the value of k∗(δ, n) were known) is relatively small. Indeed, the variance term in the upper bound is only multiplied by a factor 2. The upper bound on the oracle error, as stated in Proposition 1, is derived under Condition 1, where the function V is assumed to be explicit. However, since k∗(δ, n) itself is unknown, this bound is not fully explicit unless a lower bound on k∗(δ, n) can be derived. We conjecture that there is no universal method to achieve this without further assumptions. In Section 4, we obtain an explicit lower bound for k∗ for the Hill estimator under additional von Mises conditions. The following result shows that the oracle k∗ enjoys approximate optimality guarantees under minimal assumptions:

Proposition 2 (Optimality properties of k∗) For fixed δ > 0, n ≥ 1 and K satisfying Condition 2, let E(k) = γV(k, δ) + B(k, n, δ) denote the upper bound on the error of an estimator γ̂ satisfying Condition 1. Then for all 1 ≤ k ≤ k∗(δ, n),

E(k) ≤ min_{1≤j≤n} E(j) + γV(k, δ).

Proof. Let k ≤ k∗(δ, n). For simplicity of notation, we abbreviate V(k, δ) as V(k) and B(k, n, δ) as B(k). For j ≥ k, we have E(j) ≥ B(j) ≥ B(k) = E(k) − γV(k). On the other hand, notice that for any k ≤ k∗ we have γV(k) ≥ γV(k∗) ≥ B(k∗) ≥ B(k) by monotonicity of the functions V, B. Thus for 1 ≤ j ≤ k ≤ k∗,

E(j) = B(j) + γV(j) ≥ B(j) + γV(k) ≥ B(j) + B(k) ≥ B(k) = E(k) − γV(k).
□

Remark 1 (Interpretation) Proposition 2 implies that for k ≤ k∗(δ, n), the upper bound E(k) on the error of γ̂(k) is optimal, up to an additional 'regret' term V(k, δ). This regret term is minimized when k = k∗(δ, n). Another interpretation is as follows: k∗ minimizes the upper bound 2γV(k, δ) over the set of indices k for which this upper bound (which is the only possible explicit upper bound, given that B is unknown and only V is known) is valid:

k∗(δ, n) ∈ argmin_{k∈K : E(k)≤2γV(k,δ)} 2γV(k, δ).

We now consider an empirical version of k∗ using an Adaptive Validation rule, akin to those in Boucheron & Thomas (2015) and Drees & Kaufmann (1998). Our approach distinguishes itself by providing an explicit stopping rule, unlike Boucheron & Thomas (2015), and guarantees valid for any sample size, unlike Drees & Kaufmann (1998). These guarantees, derived in Section 2.2, ensure that the error from the adaptive rule is comparable to the oracle error induced by k∗. We first need to define minimal admissible sample sizes depending on the confidence level δ, for technical reasons which have an intuitive interpretation. First, because our proof techniques employ union bounds on the probability of adverse events defined for any k ∈ K, our analysis will repeatedly involve a tolerance level δ_K := δ/|K|. Alternative
approaches involving chaining techniques are discussed in Remark 6. Second, we need to consider only extreme sample sizes k such that the error due to variance is less than 1/2. This is because our rule involves division by 1 − 2V(k, δ_K). We thus restrict the search to k ≥ k0(δ), where k0(δ) satisfies

k0(δ) ≥ inf{k ≥ 1 : V(k, δ_K) < 1/2}. (2.4)

Finally, given a choice of k0(δ) satisfying (2.4), we must ensure that the bias term is indeed less than the variance term for k = k0(δ), so that k∗(δ_K, n) is not less than k0(δ). We thus define the minimum (full) sample size as

n0(δ) = inf{n : k0(δ) ≤ k∗(δ_K, n)}. (2.5)

Note that if the triplet (δ_K, n, K) satisfies Condition 2, then n0(δ) = inf{n : B(k0(δ), n, δ_K) ≤ γV(k0(δ), δ_K)}, and although n0(δ) is unknown, it is guaranteed to be a finite integer due to our requirement that B(k, n, δ) → 0 as n → ∞ for fixed k (Requirement (b) in Condition 1). In Section 4, we derive an explicit expression for n0(δ) under additional von Mises conditions in the context of Hill estimation. This expression can serve as a guideline to assess whether the available sample size is sufficient for our theory to apply (see Corollary 2). We are now ready to define an Extreme Adaptive Validation (EAV) sample size k̂_EAV chosen by adaptive validation as follows, for any pair (δ, n) ∈ (0, 1) × N such that n ≥ n0(δ):

k̂_EAV = max{ k ∈ K, k ≥ k0(δ), such that ∀j ∈ K with j ≤ k, |γ̂(k) − γ̂(j)| ≤ [γ̂(k)/(1 − 2V(k, δ_K))] (V(j, δ_K) + 3V(k, δ_K)) }. (2.6)

It will be useful in subsequent analysis to introduce the binary random variable associated with our stopping criterion, for k0(δ) ≤ k ≤ n,

S(k) = 1{ ∃j ∈ K, j ≤ k, such that |γ̂(k) − γ̂(j)| > [γ̂(k)/(1 − 2V(k, δ_K))] (V(j, δ_K) + 3V(k, δ_K)) }. (2.7)

Thus, if K = {k_m, m ≤ |K|} and k̂_EAV = k_{m̂} < max(K), then S(k_{m̂+1}) = 1 while S(k_{m̂}) = 0, with probability one. Implementing k̂_EAV can be achieved by a straightforward algorithm taking as input (n, δ, k0(δ), K) and computing S(k_m) for increasing values of k_m within the range K ∩ {k0(δ), . . . , n}.
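The scanning procedure just described can be sketched in a few lines. This is an illustrative implementation of rule (2.6) under our own naming conventions; as an assumption of this sketch (not a prescription of the paper), it uses the explicit bound √(2 log(4/δ_K)/k) + log(4/δ_K)/k in place of the exact Gamma quantile for V.

```python
import numpy as np

def hill(x, k):
    """Hill estimator (1.2) from the k largest order statistics."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    return float(np.mean(np.log(xs[:k] / xs[k])))

def eav_select(x, grid, delta):
    """Sketch of the EAV rule (2.6): scan the grid upward and return the
    largest k whose Hill estimate agrees with every smaller grid point's
    estimate within the variance-driven tolerance."""
    d_K = delta / len(grid)                          # union-bound level delta_K
    V = lambda k: np.sqrt(2 * np.log(4 / d_K) / k) + np.log(4 / d_K) / k
    ks = sorted(int(k) for k in grid if V(k) < 0.5)  # enforce k >= k0(delta)
    g = {k: hill(x, k) for k in ks}
    k_hat = None
    for i, k in enumerate(ks):
        tol = g[k] / (1.0 - 2.0 * V(k))
        if any(abs(g[k] - g[j]) > tol * (V(j) + 3.0 * V(k)) for j in ks[: i + 1]):
            break                                    # S(k) = 1: stop scanning
        k_hat = k                                    # S(k) = 0: keep going
    return k_hat

rng = np.random.default_rng(1)
x = (1.0 - rng.uniform(size=50_000)) ** -0.5         # exact Pareto, gamma = 0.5
grid = np.unique(np.geomspace(16, 25_000, num=15).round().astype(int))
k_hat = eav_select(x, grid, delta=0.05)
print(k_hat, hill(x, k_hat))
```

For an exact Pareto sample the bias vanishes, so the scan tends to run far up the grid, and the resulting estimate stays close to γ.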
The algorithm stops when S(k_m) = 1 and returns k̂_EAV = k_{m−1}, where m is the current index value on the grid K. Of course, there is the theoretical possibility that S(max(K)) = 0, in which case the algorithm should return k̂_EAV = max(K). This latter event is however unlikely if the upper end-point k_max of the grid is chosen large enough.

2.2 Statistical guarantees for EAV

We now derive guarantees for the adaptive index k̂_EAV proposed in (2.6). Our main result (Theorem 1 below) shows that the error of γ̂(k̂_EAV) is of the same magnitude as the oracle error |γ̂(k∗(δ_K, n)) − γ|. As a first step, we state convenient uniform bounds on the error over candidates k in K such that k ≤ k∗ (recall from (2.2) the definition of the oracle k∗).

Lemma 1 Let γ̂ be an estimator of γ satisfying Condition 1. Let δ ∈ (0, 1), n ∈ N and K be such that n ≥ n0(δ) and the triplet (δ_K, n, K) satisfies Condition 2. On an event E of probability at least 1 − δ, we have for all k ∈ K such that k ≤ k∗(δ_K, n),

|γ̂(k) − γ| ≤ γV(k, δ_K) + B(k, n, δ_K) ≤ 2γV(k, δ_K). (2.8)

On the same event, for all k ∈ K such that k0(δ) ≤ k ≤ k∗(δ_K, n), we have

γ ≤ γ̂(k)/(1 − 2V(k, δ_K)). (2.9)

Proof. Condition 1 combined with a union bound ensures that on an event E of probability at least 1 − δ, Inequality (2.1) is satisfied with δ replaced with δ_K for all k ∈
K. Thus, on the latter event E, it holds that for all k ∈ K ∩ {1, . . . , k∗(δ_K, n)}, |γ̂(k) − γ| ≤ γV(k, δ_K) + B(k, n, δ_K). This proves the first inequality in (2.8). The second inequality in (2.8) derives immediately from the definition of the oracle k∗ in (2.2). Also on E, it holds that for all k ∈ K such that k ≤ k∗(δ_K, n),

γ ≤ γ̂(k) + γV(k, δ_K) + B(k, n, δ_K) ≤ γ̂(k) + 2γV(k, δ_K).

Finally, for k ≥ k0(δ), we have V(k, δ_K) ≤ 1/2, and the latter display yields (2.9) by a straightforward inversion. □

The following result shows that the algorithm described above for computing k̂_EAV does not stop too early.

Proposition 3 (k̂_EAV ≥ k∗(δ_K, n) with high probability) In the setting of Lemma 1, and on the same event E, for all (j, k) ∈ {1, . . . , k∗(δ_K, n)} such that j ≤ k and k ≥ k0(δ), we have

|γ̂(k) − γ̂(j)| ≤ [γ̂(k)/(1 − 2V(k, δ_K))] (3V(k, δ_K) + V(j, δ_K)). (2.10)

As a consequence, for fixed (δ, n), the adaptive k̂_EAV defined in (2.6) satisfies k̂_EAV ≥ k∗(δ_K, n).

Proof of Proposition 3. On the event E from Lemma 1, we have for all j, k ∈ K such that j ≤ k ≤ k∗(δ_K, n),

|γ̂(k) − γ̂(j)| ≤ |γ̂(k) − γ| + |γ̂(j) − γ|
≤ γ(V(k, δ_K) + V(j, δ_K)) + B(k, n, δ_K) + B(j, n, δ_K) (from (2.8))
≤ γ(V(k, δ_K) + V(j, δ_K)) + 2B(k, n, δ_K) (from Condition 1(b))
≤ γ(3V(k, δ_K) + V(j, δ_K)) (from (2.2)). (2.11)

Combining (2.11) and (2.9) yields that on E, for j, k ∈ K such that k0(δ) ≤ k ≤ k∗(δ_K, n) and 1 ≤ j ≤ k, (2.10) holds. Writing k̂_EAV = k_{m̂} and k∗(δ_K, n) = k_{m∗}, we obtain that on E, S(k_m) = 0 for all m ≤ m∗. However, from the definitions, either k̂_EAV = max(K) or S(k_{m̂+1}) = 1; thus m̂ + 1 > m∗, from which it follows that m̂ ≥ m∗ and finally k̂_EAV ≥ k∗(δ_K, n). □

Theorem 1 (Error bounds for γ̂(k̂_EAV)) In the setting of Lemma 1 and on the same favourable event E of probability at least 1 − δ, the adaptive estimator resulting from the AV rule satisfies

|γ̂(k̂_EAV) − γ| ≤ [4γ̂(k̂_EAV)/(1 − 2V(k̂_EAV, δ_K)) + 2γ] V(k∗(δ_K, n), δ_K).

Assuming V(k∗(δ_K, n), δ_K) < 1/6, the above inequality implies the simplified bound

|γ̂(k̂_EAV) − γ| ≤ [6γ/(1 − 6V(k∗(δ_K, n), δ_K))] V(k∗(δ_K, n), δ_K).
We provide the proof below after a few comments. Theorem 1 establishes an explicit upper bound on the error of γ̂(k̂_EAV) in terms of k∗, under the minimal Condition 1. As with the oracle error (Proposition 1), the bound in Theorem 1 depends on the unknown magnitude of k∗. To address this, in Section 4, we establish a lower bound for k∗ under the additional von Mises Condition 3.

Remark 2 (Comparison With The Oracle Error) With reasonably large sample sizes, the variance term V(k∗(δ_K, n), δ_K) should be small. Also, from Proposition 3 and Condition 1, we have V(k̂_EAV, δ_K) ≤ V(k∗(δ_K, n), δ_K), so that V(k̂_EAV, δ_K) should also be small. In this setting, the upper bound in Theorem 1 becomes approximately |γ̂(k̂_EAV) − γ| ≲ 6γV(k∗(δ_K, n), δ_K), which is only three times the oracle error stated in Proposition 1, up to replacing δ with δ_K. For the Hill estimator, the leading term of V depends on δ only through a logarithmic term √(log(1/δ)). Consequently, the difference between the errors of the oracle k∗(δ, n) and k∗(δ_K, n) is
merely a factor of √(log |K|), meaning that for grids of logarithmic size, this difference is O(√(log log n)). Thus, the adaptive extreme sample size k̂_EAV is almost optimal in the sense that its associated error is bounded by a multiple of the error bound associated with the (unknown) oracle k∗, up to a quasi-constant factor. This oracle is itself optimal in a certain sense, as detailed in Proposition 2 and Remark 1.

Proof of Theorem 1. For simplicity of notation, we abbreviate k∗ = k∗(δ_K, n), V(k) = V(k, δ_K), and B(k) = B(k, n, δ_K) in this proof. We first decompose the error as

|γ̂(k̂_EAV) − γ| ≤ |γ̂(k̂_EAV) − γ̂(k∗)| + |γ̂(k∗) − γ|. (2.12)

By construction, S(k̂_EAV) = 0 (see (2.7) and the discussion below). Also, on the favourable event E introduced in Lemma 1, we have from Proposition 3 that k∗ ≤ k̂_EAV. In addition, both k∗ and k̂_EAV belong to the grid K where the stopping criterion in (2.6) is evaluated, so the first term in the right-hand side of (2.12) must satisfy

|γ̂(k̂_EAV) − γ̂(k∗)| ≤ [γ̂(k̂_EAV)/(1 − 2V(k̂_EAV))] (3V(k̂_EAV) + V(k∗)) ≤ 4γ̂(k̂_EAV)V(k∗)/(1 − 2V(k̂_EAV)), (2.13)

where we have used (see Condition 1(a)) that V is a non-increasing function of k to obtain the latter inequality. On the other hand, from Lemma 1, the second term in (2.12) satisfies on E,

|γ̂(k∗) − γ| ≤ γV(k∗) + B(k∗) ≤ 2γV(k∗). (2.14)

Combining (2.12), (2.13) and (2.14) ends the proof of the first claim. Now, using the fact that k̂_EAV ≥ k∗ on E, together with Condition 1(a) and the triangle inequality, the first claim implies that

|γ̂(k̂_EAV) − γ| ≤ [4γ̂(k̂_EAV)/(1 − 2V(k∗)) + 2γ] V(k∗) ≤ [(4|γ̂(k̂_EAV) − γ| + 4γ)/(1 − 2V(k∗)) + 2γ] V(k∗),

so that

[1 − 4V(k∗)/(1 − 2V(k∗))] |γ̂(k̂_EAV) − γ| ≤ [4γ/(1 − 2V(k∗)) + 2γ] V(k∗).

Using that V(k∗) < 1/6, we obtain

|γ̂(k̂_EAV) − γ| ≤ [4γ/(1 − 2V(k∗)) + 2γ] [1 − 4V(k∗)/(1 − 2V(k∗))]^{−1} V(k∗) = γ(6 − 4V(k∗))(1 − 6V(k∗))^{−1} V(k∗) ≤ [6γ/(1 − 6V(k∗))] V(k∗),

as desired. □

3 Bias-Variance decomposition for the Hill estimator

We now conduct a non-asymptotic analysis of the Hill estimator. We impose the (very weak) assumption that the observations X1, . . .
, Xn are independent samples with a survival function F̄ satisfying the regular variation condition (1.1). A key result is a bias-variance decomposition of the error that meets exactly the requirements of Condition 1 (Theorem 2). Our main result (Corollary 1) follows directly from this, demonstrating that our general analysis in Section 2.1 applies to the Hill estimator.

3.1 Karamata representation of the Hill estimator

We first recall basic facts about regularly varying functions (Bingham et al., 1987; De Haan & Resnick, 1987) and introduce some notation. A function H : R+ → R+ is called regularly varying with index r ∈ R if H(tx)/H(t) → x^r as t → ∞, for all x > 0, as in (1.1). The exponent r is the regular variation index, and H is called 'slowly varying' if r = 0. Now, any regularly varying function H with index r can be written as H(x) = L(x)x^r with L : R+ → R+ slowly varying. Finally, a key observation is that if F̄ is regularly varying with index −α < 0, then the quantile function Q defined as Q(t) = F←(1 − 1/t) is also regularly varying, with positive index γ = 1/α. Note that the usual notation for our function Q is U. We prefer not using U in order to avoid confusion with uniform order statistics, which play a central role in this section. The next representation of slowly varying functions is a famous consequence of Karamata's Tauberian theorem, and it is key to understanding the deviations of the Hill estimator. A function L is slowly varying
if and only if there exist two functions b : R+ → R and a : R+ → R+ with lim_{t→∞} b(t) = 0 and lim_{t→∞} a(t) = A ≥ 0, such that for some t0 ≥ 0 and all t ≥ t0,

L(t) = a(t) exp(∫_{t0}^{t} b(u)/u du). (3.1)

Summarizing, for F̄ satisfying (1.1) and Q the quantile function of F as above, we may write Q(t) = L(t)t^γ, where L is as in (3.1). Define

b̄(t) = sup_{x≥t} |b(x)|, ā(t) = sup_{x≥t} |a(x) − A|. (3.2)

Then both functions b̄ and ā are monotonically non-increasing, and both converge to 0.

Remark 3 (Comparison with Boucheron & Thomas (2015)) The von Mises assumption made in Boucheron & Thomas (2015) is that the quantile function Q writes Q(t) = t^γ L(t), where L satisfies (3.1) with a constant function a(t) = A, so that ā ≡ 0. In addition, lower bounds on the Hill estimator are obtained under the additional assumption that the so-called von Mises function b̄ (denoted by η̄ in the cited reference) satisfies |b̄(t)| ≤ Ct^ρ for some C > 0 and ρ < 0. Note that our guarantees on the Hill estimator in this section do not require such second order assumptions, although we consider them (Condition 3) in Section 4.

Let U_(1) ≥ · · · ≥ U_(n) denote the order statistics of an independent sample of standard uniform variables. Then, γ̂(k) =d (1/k) Σ_{i=1}^{k} { log Q[(1 − U_(i))^{−1}] − log Q[(1 − U_(k+1))^{−1}] }, which by (3.1) yields

γ̂(k) =d (γ/k) Σ_{i=1}^{k} [ −log(1 − U_(i)) + log(1 − U_(k+1)) ] + (3.3)
(1/k) Σ_{i=1}^{k} [ a((1 − U_(i))^{−1}) − a((1 − U_(k+1))^{−1}) ] + (3.4)
(1/k) Σ_{i=1}^{k} ∫_{(1−U_(k+1))^{−1}}^{(1−U_(i))^{−1}} b(u)/u du, (3.5)

where the functions a and b are the ones appearing in the Karamata representation of Q. The decomposition in the above display is well known and can be found, e.g., in Mason (1982, Proof of Proposition 1(E)). Introduce the random variable

Z_k = (1/k) Σ_{i=1}^{k} [ −log(1 − U_(i)) + log(1 − U_(k+1)) ], (3.6)

so that the first term (3.3) above writes γZ_k. A second important fact, also shown in Mason (1982) and relying on Rényi's representation of exponential spacings, is that

Z_k =d (1/k) Σ_{i=1}^{k} E_i, (3.7)

where (E_i, i ≤ n) are independent unit exponential random variables.
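The distributional identity (3.6)-(3.7) is easy to check numerically. The following Monte Carlo sketch (our own setup, not part of the paper's analysis) compares the log-spacing statistic Z_k built from uniform order statistics with a mean of unit exponentials, i.e. a Gamma(k, k) draw.

```python
import numpy as np

# Monte Carlo check of (3.6)-(3.7): the log-spacing statistic Z_k built from
# uniform order statistics has the law of a mean of k unit exponentials,
# i.e. a Gamma(k, k) distribution with mean 1 and variance 1/k.
rng = np.random.default_rng(2)
n, k, reps = 500, 50, 5_000

u = np.sort(rng.uniform(size=(reps, n)), axis=1)[:, ::-1]   # U_(1) >= ... >= U_(n)
z_spacings = np.mean(-np.log(1 - u[:, :k]) + np.log(1 - u[:, [k]]), axis=1)
z_gamma = rng.gamma(shape=k, scale=1.0 / k, size=reps)      # direct Gamma(k, k)

print(z_spacings.mean(), z_gamma.mean())   # both close to 1
print(z_spacings.var(), z_gamma.var())     # both close to 1/k = 0.02
```

Beyond the first two moments, quantile-quantile comparison of the two samples gives a sharper check of the identity.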
Thus, Z_k follows a Gamma distribution G(α, β) with shape and rate parameters α = k and β = k, respectively.

3.2 Error decomposition

Equipped with the Karamata representation of the Hill estimator from Section 3.1 and the notation above, notice first that the absolute value of the third term (3.5) is less than

b̄((1 − U_(k+1))^{−1}) Z_k. (3.8)

Regarding the second term (3.4), note that for t ≤ x ≤ y, it holds that |a(x) − a(y)| ≤ |a(x) − A| + |a(y) − A| ≤ 2ā(t), so that the absolute value of (3.4) is less than 2ā((1 − U_(k+1))^{−1}). We obtain the following key error decomposition.

Lemma 2 (An almost sure error decomposition of the Hill estimator) Let F̄ satisfy (1.1), let ā and b̄ denote the functions defined in (3.2) relative to the Karamata representation of the quantile function Q, and let Z_k be defined in (3.6). The error of the Hill estimator for fixed k ≤ n satisfies, almost surely,

|γ̂(k) − γ| ≤ γ|Z_k − 1| + 2ā((1 − U_(k+1))^{−1}) + b̄((1 − U_(k+1))^{−1}) Z_k. (3.9)

The first term in the right-hand side of (3.9) can be explicitly controlled with high probability because its distribution is known. It can be viewed as a 'variance' term. The second and third terms can be viewed as bias terms, as shown next.

3.3 Upper bounds in probability

As an intermediate step, we state below a standard result regarding concentration of order statistics. This key result is also used in the working paper Clémençon & Sabourin (2025). We provide the proof for completeness in the
Appendix, Section A.1.

Lemma 3 (Concentration of U_(k+1)) With probability at least 1 − δ,

1 − U_(k+1) ≤ ((k + 1)/n) (1 + R(k + 1, δ)), (3.10)

with R(k, δ) = √(3 log(1/δ)/k) + 3 log(1/δ)/k.

To treat the second term of (3.9) as a bias term, we need to bound Z_k from above with high probability. This can be done using a quantile of the Gamma distribution. However, we provide explicit bounds for readability. Note that the following bound may be useful for chaining (Remark 6).

Lemma 4 (Gamma upper bound) With probability at least 1 − δ,

|Z_k − 1| ≤ √(2 log(2/δ)/k) + log(2/δ)/k =: Ṽ(k, δ). (3.11)

Proof. The (recentered) Gamma distribution G(α, β) belongs to the class of sub-gamma distributions Γ(v, c) with v = α/β^2 and c = 1/β (see, e.g., Boucheron et al. (2013, Chapter 2)). Thus Z_k − 1 ∈ Γ(v = 1/k, c = 1/k). Recall also (see Boucheron et al. (2013, Page 29) for a proof) that a random variable Z ∈ Γ(v, c) satisfies the tail bounds

P(Z − E(Z) > ct + √(2vt)) ∨ P(Z − E(Z) < −ct − √(2vt)) ≤ e^{−t}. (3.12)

With E[Z_k] = 1, v = 1/k, c = 1/k, we obtain

P(Z_k − 1 > t/k + √(2t/k)) ∨ P(Z_k − 1 < −t/k − √(2t/k)) ≤ e^{−t}.

The result follows by a union bound. □

We are ready to state the main result of this section.

Theorem 2 (Bias-Variance probability upper bound on the Hill error) Let F̄ satisfy (1.1). For 1 ≤ k ≤ n − 1, with probability at least 1 − δ, the error of the Hill estimator γ̂(k) satisfies

|γ̂(k) − γ| ≤ γV(k, δ) + B(k, n, δ),

where V(k, δ) = V1(k, δ/2) is the (1 − δ/2)-quantile of |Z_k − 1| and satisfies

V(k, δ) ≤ Ṽ(k, δ/2) = √(2 log(4/δ)/k) + log(4/δ)/k, (3.13)

and

B(k, n, δ) = 2ā(n/((k + 1)(1 + R(1, δ/2)))) + (1 + Ṽ(1, δ/2)) b̄(n/((k + 1)(1 + R(1, δ/2)))), (3.14)

where R is as defined in Lemma 3. Thus, the Hill estimator satisfies Condition 1 with variance and bias functions V, B given by (3.13) and (3.14). The proof follows straightforwardly from combining the bounds established in Lemmas 2, 3, and 4. The details are deferred to Section A.2 in the Appendix.
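As a sanity check on Theorem 2, the explicit sub-gamma bound (3.13) should dominate the simulated (1 − δ/2)-quantile of |Z_k − 1|, where Z_k ~ Gamma(k, k). The sketch below (our own code and parameter choices) verifies this for a few values of k.

```python
import numpy as np

def v_explicit(k, delta):
    """Explicit bound (3.13): V(k, delta) <= sqrt(2 log(4/delta)/k) + log(4/delta)/k."""
    return np.sqrt(2.0 * np.log(4.0 / delta) / k) + np.log(4.0 / delta) / k

rng = np.random.default_rng(3)
delta, reps = 0.05, 200_000
results = {}
for k in (10, 100, 1000):
    z = rng.gamma(shape=k, scale=1.0 / k, size=reps)          # Z_k ~ Gamma(k, k)
    q = float(np.quantile(np.abs(z - 1.0), 1.0 - delta / 2))  # simulated quantile
    results[k] = (q, float(v_explicit(k, delta)))
    print(k, results[k])   # simulated quantile should sit below the bound
```

The gap between the simulated quantile and the explicit bound quantifies the price paid for having a closed-form expression instead of the exact Gamma quantile.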
Remark 4 (Explicit expression for the smallest admissible extreme sample size k0(δ)) The upper bound (3.13) on the deviation term V(k, δ), chosen as the (1 − δ/2)-quantile of |Z_k − 1| as in Theorem 2, yields a control of the quantity k0(δ) in (2.4):

k0(δ) ≤ 2^2 (1 + √2)^2 log(4|K|/δ) ≤ 36 log(4|K|/δ).

The main conclusion of this work, stated below, is an immediate corollary of Theorem 1 and Theorem 2.

Corollary 1 (Main result: Guarantees of the EAV Hill estimator) Let F̄ satisfy (1.1), let V(k, δ) be the (1 − δ/2)-quantile of |Z_k − 1|, where Z_k follows a G(k, k) distribution (see (3.7)), and let k̂_EAV be defined according to the latter deviation function V, through the adaptive rule (2.6), with a triplet (δ_K, n, K) satisfying Condition 2 and n ≥ n0(δ) defined in (2.5). On a favorable event E with probability at least 1 − δ, the absolute error |γ̂(k̂_EAV) − γ| of the adaptive Hill estimator is less than the upper bound stated in Theorem 1.

4 Additional guarantees under second order conditions

Our goal in this section is to derive explicit upper bounds on the oracle k∗(δ, n) and its adaptive version k̂_EAV, thereby reinforcing the guarantees established in Propositions 1, 2, and Corollary 1. By considering cases where an additional von Mises condition holds (Condition 3 below), we provide a sharper characterization of the oracle's behavior. This
|
https://arxiv.org/abs/2505.22371v1
|
analysis also facilitates direct comparisons with existing results in the literature under comparable conditions. Our main result demonstrates that the oracle indeed attains the optimal rate, and that for ‘well chosen’ grids (in a sense that shall be made precise shortly) of size O(logn)spanning the full interval [0, n], the adaptive estimator also achieves the known minimax rate for adaptive estimators, up to a moderate additional factor (log log n)1 2(1+2|ρ|), where ρis a second-order parameter defined below. This further validates its optimality in extreme value estimation. 4.1 Second order and von Mises conditions, associated exisitng results In the setting of Theorem 2, we follow Boucheron & Thomas (2015) in assuming that the follow- ing condition holds true in addition to the regular variation condition (1.1). We recall that the latter condition implies already that the quantile function below writes Q(t) =tγL(t)where L(t) satisfies (3.1). Condition 3 (Von Mises condition) The tail quantile function Q(t) =F←(1−1/t)has Kara- mata representation Q(t) = AtγexpZt t0b(u) udu , t ≥t0, (4.1) where A >0,t0>0, and where the function bsatisfies ¯b(t) := sup x≥t|b(x)| ≤Ctρ for some ρ <0andC >0. Compared with the minimal regular variation assumption on ¯F, the additional assumption encap- sulated in Condition 3 is that the function a(t)relative to the quantile function Q(t)in (3.1) is constant, so that ¯a≡0, and that the von Mises function b(t)vanishes fast enough. As emphasized in Boucheron & Thomas (2015), Condition 3 is weaker than the popular second order ‘Hall con- dition’ (Hall & Welsh (1985); Drees & Kaufmann (1998); Csorgo et al. (1985)), which stipulates that Q(t) = Aγtγ 1 +γDtρ+o(tρ) . (4.2) Indeed the Hall condition (4.2) implies that the function bis regularly varying, which facilitates adaptive estimation through estimation of the second order parameter ρ. 
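To make the link between (4.2) and Condition 3 concrete, consider the toy quantile function $Q(t) = t^\gamma(1 + D t^\rho)$, which satisfies the Hall condition exactly. Differentiating $\log Q$ with respect to $\log t$ gives the von Mises function $b(t) = D\rho\, t^\rho/(1 + D t^\rho)$, so $\bar b(t) \le |D\rho|\, t^\rho$, as Condition 3 requires. The sketch below (our own illustration; all constants are arbitrary) verifies both claims numerically:

```python
import numpy as np

gamma, D, rho = 1.0, 1.0, -0.5   # arbitrary illustrative constants

def Q(t):
    # Toy quantile function satisfying the Hall condition (4.2) with o(t^rho) = 0
    return t**gamma * (1.0 + D * t**rho)

def b_exact(t):
    # von Mises function: b(t) = d(log Q)/d(log t) - gamma
    return D * rho * t**rho / (1.0 + D * t**rho)

t = np.logspace(1, 6, 50)
h = 1e-6
# Numerical log-derivative of Q, minus gamma, should reproduce b_exact:
b_num = (np.log(Q(t * (1 + h))) - np.log(Q(t))) / np.log(1 + h) - gamma
ratio = np.abs(b_exact(t)) / t**rho   # stays below C = |D * rho| = 0.5
print(ratio.max())
```

Since $|b(t)|/t^\rho = |D\rho|/(1 + D t^\rho)$ increases towards $|D\rho|$, the constant $C = |D\rho|$ is sharp for this toy example.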
Note that (Csorgo et al., 1985) the condition in (4.2) is equivalent to
$$1 - F(x) = B_1 x^{-1/\gamma}\bigl(1 + B_2 x^{\rho/\gamma} + o(x^{\rho/\gamma})\bigr),$$
for some constants $B_1, B_2$. We refer to Segers (2002) for a thorough discussion of second-order conditions for tail-index estimation, equivalent statements in terms of the survival function, and their consequences on the bias of the Hill estimator. As noted in Boucheron & Thomas (2015), a consequence of Theorems 1 and 2 of Segers (2002) is the following: Condition 3 is stronger than (implies) the second order Pareto assumption made e.g. in Carpentier & Kim (2015), which is that for some constants $C > 0$, $D$,
$$|\bar F(x) - C x^{-1/\gamma}| \le D x^{(\rho-1)/\gamma}. \qquad (4.3)$$

We now provide a minimal overview of existing results on optimal selection of the number $k = k(n)$ (or $k(n)+1$) of upper order statistics for tail-index estimation, under second order assumptions (4.2), (4.3), or Condition 3. Our aim is simply to shed light on the quality of the guarantees obtained below regarding the oracle $k^*$ defined in (2.2) and its adaptive version $\hat k_{EAV}$ in (2.6) under additional second-order conditions. We do not intend to provide an exhaustive account of the mathematical statistics literature on this subject. For a thorough review, we refer the reader to Carpentier & Kim (2015) or Boucheron & Thomas (2015).

It is shown in Hall (1982) that if (4.2) is satisfied, then any sequence $k_{opt}(n)$ such that $k_{opt}(n) \sim \lambda n^{-2\rho/(1-2\rho)}$ is asymptotically optimal in terms of asymptotic mean squared error of the Hill estimator, and the associated asymptotic rate of convergence is $n^{\rho/(1-2\rho)}$, meaning that $n^{-\rho/(1-2\rho)}(\hat\gamma(k_{opt}(n)) - \gamma)$ converges in distribution to a non-degenerate limit. A lower bound derived in Hall & Welsh (1984) shows that the latter rate is minimax optimal among all possible estimators of the tail index, over a class of functions satisfying (4.3). An adaptive procedure is next proposed in Hall & Welsh (1985), based on estimating $\rho$ and plugging the estimate $\hat\rho$ into the expression of $k_{opt}$. It is shown that under the Hall condition (4.2), the resulting adaptive rule $\hat k$ is asymptotically equivalent to $k_{opt}$ and thus enjoys the same minimax error rate. A wealth of refinements of this adaptive method have been obtained under additional conditions such as third order ones (see e.g. Gomes et al. (2008)). An alternative selection rule based on a Lepski-type procedure is proposed in Drees & Kaufmann (1998), offering asymptotic optimality guarantees that similarly rely on (4.2) as well as the first display of Condition 3. Notably, this approach does not require prior knowledge of the possible range of the value of $\rho$, thereby somewhat relaxing the conditions required in Hall & Welsh (1985).

Significant breakthroughs towards relaxing the second order condition (4.2) have been achieved by Carpentier & Kim (2015) and Boucheron & Thomas (2015), who work instead respectively in the relaxed setting (4.3) or under Condition 3. Both analyses are conducted in a non-asymptotic setting. A lower bound on the minimax error of any adaptive estimator of the tail index is obtained in both references, which is of order $((\log\log n)/n)^{|\rho|/(1+2|\rho|)}$, based on the construction of an ill-behaved distribution satisfying however Condition 3.
These results confirm that adaptivity has a price, namely the error of any adaptive estimator suffers from a multiplicative factor $(\log\log n)^{|\rho|/(1+2|\rho|)}$ compared with the optimal achievable rate when $\rho$ is known. Regarding positive results, Carpentier & Kim (2015) focus on a specific tail-index estimator, different from the Hill estimator but also involving the top $k$ order statistics, and they propose an adaptive selection rule for $k$, for which they prove upper bounds on the error with rates matching the lower bound. On the other hand, Boucheron & Thomas (2015) consider a Lepski-type rule similar in spirit to Drees & Kaufmann (1998)'s preliminary selection rule for the Hill estimator, and they derive guarantees for an adaptive estimator (Equation 3.7 of the cited reference) with a stopping criterion involving a sequence of tolerance thresholds $r_n(\delta)$. The latter incorporate quantities that are arguably difficult to track. The authors recognize that their guarantees intend to serve as reassuring guidelines, and they investigate in their simulation study a different, simplified rule where all unknown constants and intermediate sequences are replaced with a term $2.1\log\log n$. In contrast, our stopping criterion is based solely on explicit constants from the very start, and our guarantees directly apply to the rule implemented in our simulation study.

We now analyze the error of the Hill estimator using $k = k^*(\delta,n)$ upper order statistics. We proceed in two steps: In Section 4.2 we establish a lower bound on $k^*$ under Condition 3, thereby obtaining an explicit expression for the minimum sample size $n_0(\delta)$ introduced in (2.5). In Section 4.3 we leverage this lower bound, together with previously proved upper bounds on the oracle error involving $k^*$, to establish an explicit bound on the oracle error. Finally we use Theorem 1 to control the error of $\hat\gamma(\hat k_{EAV})$ based on the oracle error, i.e. the error of $\hat\gamma(k^*)$.

4.2 A lower bound on $k^*$ and an explicit expression for $n_0(\delta)$

Under Condition 3, the bias function $B$ in the error bound stated in Theorem 2 satisfies
$$B(k,n,\delta) \le C_1(\delta,\rho)\left(\frac{n}{k+1}\right)^{\rho}, \qquad (4.4)$$
where
$$C_1(\delta,\rho) = C\bigl(1 + \tilde V(1,\delta/2)\bigr)\bigl(1 + R(1,\delta/2)\bigr)^{-\rho} = C\Bigl(1 + \sqrt{2\log(4/\delta)} + \log(4/\delta)\Bigr)\Bigl(1 + \sqrt{3\log(2/\delta)} + 3\log(2/\delta)\Bigr)^{-\rho} \le C\Bigl(1 + \sqrt{3\log(4/\delta)} + 3\log(4/\delta)\Bigr)^{1-\rho}. \qquad (4.5)$$

On the other hand, the deviation bound $V(k,\delta)$ is provably bounded from below by a sub-gamma-type quantile, as shown next.

Lemma 5 (Gamma's absolute deviations: lower bounds on the quantiles) Let $V(k,\delta)$ denote the $1-\delta/2$ quantile of $|Z_k - 1|$, as in Eq. (3.13) from the statement of Theorem 2, where we recall $Z_k \sim \mathrm{Gamma}(\alpha = k, \beta = k)$. There exist universal constants $0 < c_1 \le 1$ and $0 < c_2 \le 2$ such that for all $(k,\delta)$,
$$V(k,\delta) \ge c_1\left(\sqrt{\frac{0 \vee \log(c_2/\delta)}{k}} + \frac{\log(c_2/\delta)}{k}\right).$$
The proof is deferred to Appendix A.3.

Combining the upper bound (4.4) on $B(k,n,\delta)$ offered by Condition 3 with the lower bound on $V(k,\delta)$ from Lemma 5, we are ready to state a lower bound on $k^*(\delta,n)$. Assumptions on the grid, stated below, are necessary in addition to Condition 2.

Condition 4 (Fine enough grid) The grid $K = \{k_m,\ m \le |K|\}$ is chosen such that $k_{m+1}/k_m \le \beta$ for some $\beta > 1$ and for all $m < |K|$.

Remark 5 (Grid choice) For moderately large $n$ and a fixed $\delta \in (0,1)$, there are several natural choices for $K$ ensuring that both Conditions 2 and 4 are satisfied. Specifically, if $B(1,n,\delta) < \gamma V(1,\delta)$ and $B(n,n,\delta) > \gamma V(n,\delta)$, then the exhaustive grid $K = \{1,\dots,n\}$ and geometric grids $K = \{\lfloor\beta^m\rfloor : 0 \le m \le \log_\beta(n)\}$ for some $\beta > 1$ satisfy Conditions 2 and 4.
Additionally, for sufficiently large $M < n$ such that $B(n/M, n, \delta) < \gamma V(n/M, \delta)$, the uniform grid $K = \{\lfloor mn/M \rfloor,\ 1 \le m \le M\}$ also satisfies Condition 2 and Condition 4 with $\beta = 2$.

Proposition 4 (A lower bound on $k^*$ under Conditions 2, 3 and 4) Let $(\delta, n, K)$ satisfy Conditions 2 and 4, and let the data distribution satisfy Condition 3. For any $\delta$ such that
$$0 < \delta \le c_2^2/4, \qquad (4.6)$$
where $c_2$ is defined in Lemma 5, the oracle $k^*$ satisfies
$$k^*(\delta,n) \ge \beta^{-1}\left(\frac{C_2(\rho)\,\gamma^{2/(1-2\rho)}}{\log(4/\delta)}\, n^{-2\rho/(1-2\rho)} - 1\right),$$
for $n$ large enough so that $n/2$ is at least as large as the lower bound in the above expression. Here,
$$C_2(\rho) = \left(\frac{4}{21}\right)^2 \left(\frac{c_1}{\sqrt2\, C}\right)^{2/(1-2\rho)},$$
where $c_1$ is as defined in Lemma 5, and $C$ is as specified in Condition 3. The proof is deferred to Appendix A.4.

Remark 4 combined with the lower bound on $k^*$ in Proposition 4 immediately yields the following control of $n_0(\delta)$.

Corollary 2 (Control of minimum required sample sizes) Let $(\delta_K, n, K)$ satisfy Conditions 2 and 4. Under Condition 3, the minimum sample size $n_0(\delta)$ defined in (2.5), with $V(k,\delta)$ as in Theorem 2, is no greater than the first integer $n$ such that
$$36\log(4|K|/\delta) \le \beta^{-1}\left(\frac{C_2(\rho)\,\gamma^{2/(1-2\rho)}}{\log(4|K|/\delta)}\, n^{-2\rho/(1-2\rho)} - 1\right).$$

4.3 Oracle and adaptive error bounds

The following error bound is a consequence of the previously established lower bound on $k^*$ (Proposition 4), of the control of the oracle error via the deviation term alone (Proposition 1), and of the upper bound (3.13) on the deviation term in Theorem 2. It shows that under Condition 3, for grid choices such that Conditions 2 and 4 hold, the oracle $k^*(\delta,n)$ indeed meets the minimax rate of convergence for the Hill estimator obtained in Hall (1982) when considering a restricted class of distributions satisfying the (stronger) second order assumption (4.2). This brings further justification for considering $k^*(\delta,n)$ as an oracle rule, in addition to the optimality properties (Propositions 1, 2) stated in Section 2.1, which are valid under weaker assumptions.

Theorem 3 (Error bound for the oracle) Let $(\delta, n, K)$ satisfy Condition 2 and Condition 4 for some $\beta > 1$, and let the data distribution satisfy Condition 3. Assume in addition that $\delta$ satisfies (4.6), and that $k^*(\delta,n) \ge \log(4/\delta)$. Then the error for the oracle $k^*(\delta,n)$ satisfies, with probability at least $1-\delta$,
$$\bigl|\hat\gamma\bigl(k^*(\delta,n)\bigr) - \gamma\bigr| \le 2(1+\sqrt2)\sqrt\beta\; C_2(\rho)^{-1/2}\,\sqrt{1 + \log(4/\delta)}\; (n/\gamma^2)^{\frac{\rho}{1-2\rho}},$$
where $C_2(\rho)$ is as in Proposition 4. The proof is deferred to Appendix A.5.

The proof reveals that, under the assumptions of the statement, the deviation term $V(k^*(\delta,n),\delta)$ satisfies
$$V\bigl(k^*(\delta,n),\delta\bigr) \le (1+\sqrt2)\sqrt\beta\; C_2(\rho)^{-1/2}\,\sqrt{1 + \log(4/\delta)}\;\gamma^{-\frac{1}{1-2\rho}}\, n^{\frac{\rho}{1-2\rho}}.$$
Combining the above bound with Theorem 1 immediately yields a tail bound on the error of the adaptive rule $\hat k_{EAV}$, stated in Corollary 3 below. Note that from Proposition 4, the condition that $k^* \ge \log(4|K|/\delta)$ in the statement below is satisfied for sample sizes large enough that
$$\frac{C_2(\rho)\,\gamma^{2/(1-2\rho)}\, n^{-2\rho/(1-2\rho)}}{\log(4|K|/\delta)} - 1 \ge \log(4|K|/\delta).$$
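For intuition about the magnitudes involved, the inequality of Corollary 2 can be instantiated numerically. The sketch below searches for the first $n$ (up to doubling resolution) satisfying the displayed inequality; every constant ($C_2$, $\gamma$, $\rho$, $\beta$, $|K|$, $\delta$) is an arbitrary placeholder of our own choosing, not a value derived in the paper:

```python
import math

def n0_upper_bound(C2=1.0, gamma=1.0, rho=-0.5, beta=1.1, K_size=50, delta=0.1):
    """First integer n (up to doubling resolution) satisfying the Corollary-2
    inequality  36*L <= beta^{-1} * (C2 * gamma^{2/(1-2 rho)} * n^{-2 rho/(1-2 rho)} / L - 1),
    with L = log(4*|K|/delta).  All constants are illustrative assumptions."""
    L = math.log(4 * K_size / delta)

    def rhs(n):
        return (C2 * gamma ** (2 / (1 - 2 * rho)) * n ** (-2 * rho / (1 - 2 * rho)) / L - 1) / beta

    n = 2
    while rhs(n) < 36 * L:
        n *= 2          # coarse doubling search; bisect between n/2 and n to refine
    return n

print(n0_upper_bound())
```

With these placeholders ($\rho = -0.5$, so the exponent $-2\rho/(1-2\rho)$ equals $1/2$), the required sample size is in the millions, which illustrates that the non-asymptotic constants are conservative.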
Corollary 3 (Error bound for $\hat k_{EAV}$) Let Condition 3 be satisfied, and let $(\delta_K, n, K)$ satisfy Conditions 2 and 4. Assume in addition that $\delta_K$ satisfies (4.6), and that $k^*(\delta_K, n) \ge \log(4|K|/\delta)$. Then the error of the adaptive validation Hill estimator $\hat\gamma(\hat k_{EAV})$ satisfies, with probability at least $1-\delta$,
$$|\hat\gamma(\hat k_{EAV}) - \gamma| \le \frac{6\gamma V^*}{1 - 6V^*},$$
where $V^* = V\bigl(k^*(\delta/|K|, n),\ \delta/|K|\bigr)$ satisfies
$$V^* \le (1+\sqrt2)\sqrt\beta\; C_2(\rho)^{-1/2}\,\sqrt{1 + \log(4|K|/\delta)}\;\gamma^{-\frac{1}{1-2\rho}}\, n^{\frac{\rho}{1-2\rho}}.$$

Remark 6 (Minimax optimality, grids of logarithmic size, and chaining) For $|K| \le D\log n$ with some $D > 0$, $\hat k_{EAV}$ incurs an adaptivity cost, i.e., an additional multiplicative factor compared to the oracle error at $k^*(\delta,n)$ (see Theorem 3), of order $\sqrt{\log\log n}$. This is the case in particular for the geometric grids mentioned earlier in Remark 5, which also satisfy Conditions 2 and 4. The full grid $K = \{1,\dots,n\}$ incurs a multiplicative factor of order $\sqrt{\log n}$. Uniform grids $k_m = \lfloor mn/M \rfloor$, $1 \le m \le M$ with $M = D\log n$ also fit within this framework, under the condition that $n/(D\log n) < k^*(\delta_K, n)$. However, the latter condition is not guaranteed for small values of $|\rho|$, in view of the lower bound on $k^*$ in Proposition 4.

From a theoretical viewpoint, we recommend the use of a geometric grid. However, our experiments suggest that uniformly spaced grids have comparable performance. An informal explanation for this good behavior is as follows: inspection of the proof of Theorem 3 reveals that the only necessary condition on the grid is that $k_{m^*+1}/k_{m^*}$ is not too large, where $m^*$ is the grid index such that $k^* = k_{m^*}$. Therefore, the structure of the grid matters only in the neighborhood of $k^*$. For large enough $n$, $k^*$ is sufficiently large so that even with a uniformly spaced grid, $k_{m^*+1}/k_{m^*}$ is small enough to result in good performance.

In comparison, the adaptive methods proposed in Boucheron & Thomas (2015) and Carpentier & Kim (2015) pay the price of adaptivity with an additional multiplicative factor of order $(\log\log n)^{|\rho|/(1+2|\rho|)}$. Both cited references (and in particular Boucheron & Thomas (2015) under Assumption 3) show minimax optimality of this multiplicative factor among adaptive estimators. One can conclude that the adaptive validation method seems to benefit from a nearly free lunch: if the second-order conditions are not met, it still enjoys the guarantees stated in Proposition 1 and Proposition 2. Moreover, if the second-order Condition 3 is satisfied, it attains a nearly minimax adaptive rate, matching the lower bound established in Carpentier & Kim (2015) and Boucheron & Thomas (2015) up to an additional multiplicative factor
$$\frac{\sqrt{\log\log n}}{(\log\log n)^{|\rho|/(1+2|\rho|)}} = (\log\log n)^{\frac{1}{2(1+2|\rho|)}},$$
which is nearly constant, especially for large values of $|\rho|$.

Finally, as mentioned earlier, chaining techniques similar to those used, for example, in Boucheron & Thomas (2015) yield a finer uniform control of the fluctuations $(|\hat\gamma(k) - \gamma|,\ k \le k^*(\delta,n))$, at the price of larger constants in the deviation term $V(k,\delta)$. These techniques allow replacing the quantities $\delta/|K|$ by $\delta/\log(n)$ to obtain a $\sqrt{\log\log n}$ multiplicative factor even with a grid of size $n$. However, preliminary experiments showed no improvement over the basic union bound approach taken in Lemma 1 on a grid of logarithmic size, precisely because of the larger constants involved, leading to late stopping.
In fact, the 'geometric grid' approach may be understood as a concrete implementation of the theoretical chaining technique, leading to similar results while simplifying the computations. This approach also allows keeping explicit deviation controls in terms of the quantiles of the centered Gamma quantity $|Z_k - 1|$ (see (3.7)), which can be easily calibrated.

5 Numerical experiments

The code for our experiments is available at https://github.com/mahsa-taheri/EAV .

Compared methods and general objectives. Our experiments aim to compare the performance of the proposed Extreme Adaptive Validation method with two existing state-of-the-art approaches for adaptive Hill estimation under comparable assumptions, specifically those of Boucheron & Thomas (2015) and Drees & Kaufmann (1998). For a discussion of their respective underlying assumptions, refer to Section 4.1. Focusing on quantitative performance assessment, we restrict our analysis to simulated data experiments, which provide a ground truth for evaluation. Our results demonstrate that the proposed EAV method effectively outperforms the approaches of Boucheron & Thomas (2015) and Drees & Kaufmann (1998) when dealing with very ill-behaved distributions, while showing comparable performance for distributions with survival functions converging rapidly to their limiting power law. We confirm the finding of Boucheron & Thomas (2015) that Drees & Kaufmann (1998)'s asymptotic approach outperforms adaptive rules grounded on non-asymptotic guarantees (that is, EAV and Boucheron & Thomas (2015)'s method) for particularly well-behaved distributions with rapidly converging tail behavior.

The adaptive rules of Boucheron & Thomas (2015) and Drees & Kaufmann (1998) are implemented with the exact same parameters and calibrated constants as the ones proposed by Boucheron & Thomas (2015, Page 30; Equations (5.1) and (5.2)). For $\hat k_{BT}$ and $\hat k_{DK}$, we set the lower limit of the admissible range for $k$ to $l_n = 30$, following the recommendations and notation of Boucheron & Thomas (2015). Regarding Boucheron & Thomas (2015)'s approach, it is worth noting that these constants are slightly different from those for which they provide theoretical guarantees.

Regarding the EAV method, we use geometric grids of logarithmic size, of the form $K = \{k_m = \lfloor\beta^m\rfloor,\ 1 \le m \le \lfloor\log n/\log\beta\rfloor\}$, so that $|K| \approx \log n/\log\beta$, with $\beta = 1.1$. Additional results for other grid choices, including the uniformly spaced grid, are provided in Appendix A.8. We implement the stopping condition proposed in (2.6), that is,
$$\hat k_{EAV} = \max\Bigl\{k \in K,\ k \ge k_0(\delta),\ \text{such that}\ \forall j \in K\ \text{with}\ j \le k:\ |\hat\gamma(k) - \hat\gamma(j)| \le \frac{\hat\gamma(k)}{1 - 2V(k,\delta_K)}\bigl(V(j,\delta_K) + 3V(k,\delta_K)\bigr)\Bigr\},$$
where $V(k,\delta_K)$ is a quantile of order $1 - \delta_K/2$ of $|Z_k - 1|$, with $Z_k \sim G(\alpha = k, \beta = k)$. We fix $\delta = 0.9$ and compute numerically the $(1 - (\delta\log\beta)/(2\log n))$-quantile of $|Z_k - 1|$ (that is, the $1 - \delta_K/2$ quantile with $\delta_K = \delta/|K|$) by Monte-Carlo sampling with $N = 2000$ independent draws of $Z_k$. The confidence level $\delta$ is the sole free parameter of the proposed EAV method. We recommend selecting a relatively high value (e.g., 0.9) to mitigate the pessimistic upper bound on the variance term, obtained via the union bound technique over multiple candidate values of $k$. Smaller values of $\delta$ often result in delayed stopping and poorer performance.

Datasets. We assess the performance of the three methods using $n = 10\,000$ samples generated from the distributions listed below, to estimate the unknown parameter $\gamma$ with the Hill estimator. Smaller sample sizes ($n = 1\,000$) are considered in Table 2, with broadly similar conclusions. We consider the following data distributions:

1.
An ill-behaved distribution which is regularly varying but does not satisfy the standardized Karamata representation (4.1) (see Appendix A.7 for a proof). To the best of our knowledge, this counter-example is new. This distribution, parametrized by the tail index $\gamma > 0$ and an additional scaling parameter $s \in (0,1]$, is designed so that its density vanishes infinitely often, on intervals of the kind $\bigl[(n^{1/s} + (n+1)^{1/s})/2,\ (n+1)^{1/s}\bigr]$. Namely, we consider the distribution $F_{s,\alpha}$ of a random variable $X$ defined by
$$X = \lfloor Z^s \rfloor^{1/s} + \tfrac12\bigl(Z - \lfloor Z^s \rfloor^{1/s}\bigr),$$
where $Z$ follows a Pareto distribution with tail index $\gamma = 1/\alpha$. In our simulations we consider $\alpha = 2$, $s \in \{2/3, 1/2\}$.

2. Symmetric $\alpha$-stable distributions with $\alpha \in \{1.5, 1.7, 1.99\}$ ($\alpha = 1/\gamma$), denoted by $S_{1.5}$, $S_{1.7}$, and $S_{1.99}$. These distributions have characteristic function $E[e^{itX}] = e^{-|t|^\alpha}$. It is known (Csorgo et al., 1985) that for $\alpha > 1$, these distributions satisfy (4.2), and thus also Condition 3. Indeed (Ibragimov, 1975, Chapter 2, Sections 3, 4), their density can be expressed as the restriction to $\mathbb{R}$ of a complex function that is holomorphic everywhere in the complex plane. This allows for asymptotic expansions of the form $f(x) = \sum_{n=0}^{N} a_n x^{-n\alpha-1} + o(x^{-N\alpha-1})$ as $x \to \infty$. However, convergence is slow and the Hill estimator is generally ill-behaved as $\alpha \to 2$.

3. A distribution with survival function of the form $\bar F(x) = 1 - F(x) = c\,x^{-\alpha}(\log x)^\beta$, where $\beta \ne 0$, $\alpha, c > 0$, with $c = (e\alpha/\beta)^\beta$, defined on the domain $[x_0, \infty)$ where $x_0 = \exp(\beta/\alpha)$. This distribution is referred to as the 'Perturb' distribution in Resnick (2007a), p. 87. In our simulations we fix $\alpha = 2$ and $\beta = 1$, and we denote this distribution by $L_{2,1}$. This distribution has standardized Karamata representation (4.1), but it does not satisfy Condition 3, and thus it does not satisfy (4.2) either. We refer to Section A.6 for proofs of our claims about the properties of $L_{\alpha,\beta}$.

4. A Fréchet distribution with $\gamma = 1$ and a shift of 10, denoted by $F_{1,10}$, which satisfies (4.2) with $\gamma = 1$, $\rho = -1$, as is easily seen by an expansion of the survival function at infinity.

5. A Fréchet distribution with $\gamma = 1$, denoted by $F_1$. Again $F_1$ satisfies (4.2) with $\gamma = 1$, $\rho = -1$, although the convergence of the tail towards a power law is faster than for the shifted version.

6. A Pareto Change Point distribution (PCP) defined by
$$\bar F(x) = x^{-1/\gamma'}\,\mathbf{1}\{1 \le x \le \tau\} + \tau^{-1/\gamma'}(x/\tau)^{-1/\gamma}\,\mathbf{1}\{x > \tau\},$$
similar to the one considered in Boucheron & Thomas (2015). We consider three cases, $(\gamma',\gamma) = (1.0, 1.1)$, $(\gamma',\gamma) = (1.0, 1.25)$, and $(\gamma',\gamma) = (1.0, 1.5)$, with values of $\tau$ chosen such that the tail probabilities $\bar F(\tau)$ equal respectively $1/25$, $1/25$, and $1/15$. Since the PCP tail is exactly Pareto above a finite threshold, it satisfies the strongest second order assumption (4.2) considered here.

These distributions span a broad spectrum of tail behaviours. The distribution in Family 1 (counter-example) is a particularly ill-behaved example where no second order condition is satisfied, not even the standardized Karamata representation (4.1). The stable distributions in Family 2 are a typical example where good asymptotic properties do not prevent poor finite-sample behavior. When $\alpha \to 2$, the Hill estimator of $\gamma = 1/\alpha$ is notoriously ill-behaved because convergence in (4.2) is slow (Resnick, 2007b; Nolan, 2020). Distribution 3 (Perturb) satisfies (4.1), but with a von Mises function $b(t)$ that decreases too slowly for Condition 3 to hold. It is well known to produce 'Hill horror plots'.
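The counter-example of Family 1 is straightforward to simulate from its definition. The sketch below (our own code, not the authors' released implementation) draws $Z$ by inverse transform from a Pareto distribution with $P(Z > z) = z^{-\alpha}$ and applies the deterministic map defining $X$:

```python
import numpy as np

def sample_counter_example(n, alpha=2.0, s=0.5, seed=None):
    """Draw n samples of X = floor(Z^s)^(1/s) + (Z - floor(Z^s)^(1/s))/2,
    where Z is Pareto with tail index gamma = 1/alpha (Family 1)."""
    rng = np.random.default_rng(seed)
    Z = rng.uniform(size=n) ** (-1.0 / alpha)   # inverse transform: P(Z > z) = z^-alpha
    A = np.floor(Z ** s) ** (1.0 / s)           # left endpoint of the current block
    return A + 0.5 * (Z - A)

X = sample_counter_example(10_000, alpha=2.0, s=0.5, seed=0)
print(X.min(), X.max())
```

Since $X$ lies in $[A, (A+Z)/2]$, the right half of each block $[A, \text{next block}]$ receives no mass, which is the mechanism behind the infinitely-often vanishing density.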
Another example of 'horror plot' distributions is provided by the symmetric stable distributions (Family 2), although they satisfy the strongest second order assumption considered here, namely (4.2). Intermediate cases are provided by the shifted distributions (Family 4), which also satisfy (4.2), with a faster convergence to the limiting power law behaviour, although they are known to produce Hill horror plots as well. The Fréchet distribution (Family 5) is a typical example of a well-behaved distribution where Hill estimation is easy and $k$ is easily chosen, either by a visual inspection of Hill plots or by automatic adaptive procedures. Finally, Family 6 may be viewed as a counterexample to the relevance of second-order assumptions for practical (i.e., finite sample) purposes. Although it satisfies strong second-order assumptions, it results in poor behavior of the Hill estimator when the cut-off point is unknown. In Boucheron & Thomas (2015), it is presented as a typical example where asymptotic strategies, such as those in Drees & Kaufmann (1998), fail in comparison to a non-asymptotic approach.

Performance metric. As a performance metric for a fixed $k$ we consider the standardized mean squared error (MSE) of the Hill estimator,
$$\mathrm{MSE}(k) = E\Bigl[\bigl(\hat\gamma(k)/\gamma - 1\bigr)^2\Bigr],$$
where $\hat\gamma(k)$ is the Hill estimator for $k$ extreme order statistics. As an illustration, Figure 1 displays the standardized root mean squared error $\mathrm{RMSE}(k) = \sqrt{\mathrm{MSE}(k)}$, as a function of $k$, for datasets of sample size $n = 10\,000$ generated from one representative of each of the six distribution families described above. The expectation in the definition of MSE is approximated by Monte-Carlo sampling with $N = 500$ replications.

Results. Table 1 reports the estimated MSE and the standard error of this estimate (both multiplied by 100) associated with the different adaptive rules considered here, namely EAV, Boucheron & Thomas (2015)'s method and Drees & Kaufmann (1998)'s, denoted respectively by $\hat k_{EAV}$, $\hat k_{BT}$ and $\hat k_{DK}$, over $N = 500$ experiments. Namely, the quantity reported for $\hat k \in \{\hat k_{EAV}, \hat k_{BT}, \hat k_{DK}\}$ is
$$\widehat{\mathrm{MSE}}(\hat k) = N^{-1}\sum_{i=1}^{N}\bigl(\hat\gamma_i(\hat k_i)/\gamma - 1\bigr)^2,$$
where $\hat\gamma_i(\hat k_i)$ is the Hill estimator obtained at the $i$th replication, using the number of extreme order statistics $\hat k_i$ output by the adaptive rule $\hat k$ applied to the $i$th generated dataset. Additionally, we provide the standard error of this estimate of the expected MSE, namely
$$\mathrm{stderr}(\hat k) = N^{-1}\Bigl(\sum_{i=1}^{N}\bigl[(\hat\gamma_i(\hat k_i)/\gamma - 1)^2 - \widehat{\mathrm{MSE}}(\hat k)\bigr]^2\Bigr)^{1/2}.$$
The first five rows of Table 1 reveal that for very ill-behaved distributions, specifically those from Family 1 (counter-example) and Family 2 (stable), $\hat k_{EAV}$ outperforms both $\hat k_{BT}$ and $\hat k_{DK}$ in terms of standardized MSE. The 'Perturb' distribution (Family 3) stands apart, as it is relatively ill-behaved for Hill estimation; however, $\hat k_{DK}$ and $\hat k_{BT}$ demonstrate better performance than $\hat k_{EAV}$ in this case. For well-behaved distributions satisfying the classical second-order assumption (4.2), the asymptotic rule $\hat k_{DK}$ outperforms both $\hat k_{EAV}$ and $\hat k_{BT}$. The PCP case (Family 6) is a counter-example where the asymptotic $\hat k_{DK}$ method struggles compared with $\hat k_{EAV}$ and $\hat k_{BT}$, despite its tail regularity.
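For concreteness, the Hill estimator and a simplified reading of the stopping rule (2.6) can be sketched in a few lines. This is our own minimal illustration, not the released implementation (available at the GitHub repository above): the grid construction, the Monte-Carlo quantile $V(k,\delta_K)$, and a crude stand-in for the $k_0(\delta)$ cutoff are all simplified.

```python
import numpy as np

def hill(x_desc, k):
    """Hill estimator from the top k+1 order statistics (x_desc sorted descending)."""
    return float(np.mean(np.log(x_desc[:k] / x_desc[k])))

def V(k, delta_K, N=2000, rng=np.random.default_rng(1)):
    """Monte-Carlo (1 - delta_K/2)-quantile of |Z_k - 1|, with Z_k ~ Gamma(k, k)."""
    Z = rng.gamma(shape=k, scale=1.0 / k, size=N)
    return float(np.quantile(np.abs(Z - 1.0), 1.0 - delta_K / 2))

def k_eav(x, delta=0.9, beta=1.1):
    """Simplified EAV rule (2.6): the largest grid point k whose Hill estimate is
    compatible with all smaller grid points j, up to the deviation quantiles."""
    x = np.sort(x)[::-1]
    n = len(x)
    grid = sorted({int(beta ** m) for m in range(1, int(np.log(n) / np.log(beta)) + 1)})
    grid = [k for k in grid if 2 <= k < n]
    delta_K = delta / len(grid)
    v = {k: V(k, delta_K) for k in grid}
    grid = [k for k in grid if v[k] < 0.5]   # crude stand-in for the k >= k0(delta) cutoff
    g = {k: hill(x, k) for k in grid}
    best = grid[0]
    for k in grid:
        if all(abs(g[k] - g[j]) <= g[k] * (v[j] + 3 * v[k]) / (1 - 2 * v[k])
               for j in grid if j <= k):
            best = k
    return best, g[best]

rng = np.random.default_rng(0)
x = rng.uniform(size=10_000) ** (-1.0)   # exact Pareto sample with gamma = 1
k_hat, g_hat = k_eav(x)
print(k_hat, g_hat)
```

On an exact Pareto sample the rule typically retains a large fraction of the upper order statistics, since the bias term vanishes and only the deviation constraint binds.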
To better understand the differences in behavior between $\hat k_{EAV}$, $\hat k_{BT}$, and $\hat k_{DK}$, we present in Table 3 (in the Appendix), for $\hat k \in \{\hat k_{EAV}, \hat k_{BT}, \hat k_{DK}\}$, the range of values of $k$ produced by these three rules, along with the average values, denoted by $\bar k_{EAV}$, $\bar k_{BT}$, and $\bar k_{DK}$, respectively. The results indicate that $\hat k_{EAV}$ consistently yields larger values of $k$. In contrast, the range of $k$ values output by $\hat k_{BT}$ and $\hat k_{DK}$ is wider, often including indices near the lower limit $l_n = 30$.

Figure 1: Monte-Carlo estimates of the standardised RMSE of Hill estimators as a function of the number of order statistics $k$, for samples of size $10\,000$ from the sampling distributions.

| d.f. | $\gamma$ | $\widehat{\mathrm{MSE}}(\hat k_{EAV})$; (stderr) $\times 100$ | $\widehat{\mathrm{MSE}}(\hat k_{BT})$; (stderr) $\times 100$ | $\widehat{\mathrm{MSE}}(\hat k_{DK})$; (stderr) $\times 100$ |
|---|---|---|---|---|
| $C_{2,2/3}$ | 0.5 | 1.06; (0.03) | 3.05; (0.16) | 13.80; (0.88) |
| $C_{2,1/2}$ | 0.5 | 4.55; (0.11) | 5.14; (0.24) | 29.01; (5.99) |
| $S_{1.7}$ | 1/1.7 | 1.28; (0.08) | 4.01; (0.10) | 4.40; (0.06) |
| $S_{1.5}$ | 1/1.5 | 0.35; (0.01) | 1.33; (0.05) | 1.23; (0.03) |
| $S_{1.99}$ | 1/1.99 | 26.20; (0.03) | 47.70; (0.24) | 45.88; (0.48) |
| $L_{2,1}$ | 0.5 | 21.40; (0.14) | 4.47; (0.14) | 4.40; (0.08) |
| $F_{1,10}$ | 1.0 | 12.08; (0.13) | 3.49; (0.09) | 1.78; (0.12) |
| $F_1$ | 1.0 | 6.36; (0.06) | 0.69; (0.05) | 0.24; (0.03) |
| PCP 1.1 | 1.1 | 0.62; (0.01) | 0.70; (0.11) | 0.73; (0.04) |
| PCP 1.5 | 1.5 | 5.22; (0.05) | 0.58; (0.07) | 5.99; (0.36) |
| PCP 1.25 | 1.25 | 3.39; (0.02) | 1.13; (0.06) | 3.34; (0.02) |

Table 1: $\widehat{\mathrm{MSE}}(\hat k)$ ($\mathrm{stderr}(\hat k)$), both multiplied by 100, for $\hat k \in \{\hat k_{EAV}, \hat k_{BT}, \hat k_{DK}\}$, over $N = 500$ experiments with $n = 10\,000$.

| d.f. | $\gamma$ | $\widehat{\mathrm{MSE}}(\hat k_{EAV})$; (stderr) $\times 100$ | $\widehat{\mathrm{MSE}}(\hat k_{BT})$; (stderr) $\times 100$ | $\widehat{\mathrm{MSE}}(\hat k_{DK})$; (stderr) $\times 100$ |
|---|---|---|---|---|
| $C_{2,2/3}$ | 0.5 | 1.63; (0.04) | 4.50; (0.22) | 7.15; (0.18) |
| $C_{2,1/2}$ | 0.5 | 2.77; (0.07) | 7.60; (0.26) | 9.15; (0.19) |
| $S_{1.7}$ | 1/1.7 | 52.48; (0.71) | 4.62; (0.15) | 6.73; (0.21) |
| $S_{1.5}$ | 1/1.5 | 75.63; (0.92) | 1.17; (0.07) | 2.35; (0.11) |
| $S_{1.99}$ | 1/1.99 | 24.95; (0.42) | 37.70; (0.32) | 106.94; (37.95) |
| $L_{2,1}$ | 0.5 | 56.39; (0.30) | 9.28; (0.27) | 7.23; (0.22) |
| $F_{1,10}$ | 1.0 | 31.61; (0.29) | 14.67; (0.28) | 6.32; (0.25) |
| $F_1$ | 1.0 | 64.74; (0.35) | 1.92; (0.08) | 11.44; (0.07) |
| PCP 1.1 | 1.1 | 0.80; (0.02) | 0.95; (0.04) | 0.82; (0.03) |
| PCP 1.5 | 1.5 | 10.12; (0.07) | 7.71; (0.16) | 9.57; (0.09) |
| PCP 1.25 | 1.25 | 3.73; (0.04) | 3.46; (0.06) | 3.55; (0.06) |

Table 2: $\widehat{\mathrm{MSE}}(\hat k)$ ($\mathrm{stderr}(\hat k)$), both multiplied by 100, for $\hat k \in \{\hat k_{EAV}, \hat k_{BT}, \hat k_{DK}\}$, over $N = 500$ experiments with $n = 1\,000$.

6 Discussion

In this work, we propose a practical and computationally efficient method for selecting the tuning parameter $k$ of the Hill estimator in EVT. Rather than exploring all possible values of $k$, we limit our search to a smaller logarithmic grid, a common practice in real-world applications. This reduces computational complexity significantly. Theoretically, our AV method provides statistical guarantees under minimal assumptions, eliminating the need for second-order conditions or von Mises-type assumptions. To the best of our knowledge, this represents the first such result in EVT. Moreover, when von Mises conditions are met, our approach achieves nearly minimax optimality. Our simulations reveal that our approach outperforms Boucheron & Thomas (2015) and Drees & Kaufmann (1998) when dealing with ill-behaved distributions. Our framework is also flexible and could be extended beyond the Hill estimator to other EVT problems where error bounds take the form $\mathrm{error} \le V + B$. Unlike existing adaptive Hill-based methods, the AV method relies on weaker assumptions, paving the way for future research.
We plan to explore its applicability to other extreme value estimators, leveraging recent advances in the field.

Acknowledgement

J.L. and M.T. are grateful for partial funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project numbers 502906238, 543964668, and 520388526 (TRR391). A.S. acknowledges the funding of the French ANR grant EXSTA, ANR-23-CE40-0009-01. The authors thank Stéphane Boucheron, John Einmahl, and Holger Drees for constructive discussions at different stages of the project.

Supplementary material

A Appendix 1 (Proofs)

In this section, we provide proofs for some of the results stated in the main text.

A.1 Proof of Lemma 3

We reproduce below the proof given in Clémençon & Sabourin (2025) for the sake of completeness. The order statistics of a uniform random sample are sub-gamma. More precisely, it is shown in Reiss (2012, Lemma 3.1.1) that for every $k \le n$,
$$P\left(\frac{\sqrt n}{\sigma}\Bigl(1 - U_{(k)} - \frac{k}{n+1}\Bigr) \ge t\right) \le \exp\left(-\frac{t^2}{3\bigl(1 + t/(\sigma\sqrt n)\bigr)}\right),$$
with $\sigma^2 = \bigl(1 - k/(n+1)\bigr)\bigl(k/(n+1)\bigr) \le k/n$. The above bound derives immediately from the cited reference and the fact that $1 - U_{(k)} \overset{d}{=} U_{(n+1-k)}$. Rearranging, we obtain
$$P\bigl(1 - U_{(k)} - k/n > t\bigr) \le P\bigl(1 - U_{(k)} - k/(n+1) > t\bigr) \le \exp\left(-\frac{n t^2/\sigma^2}{3(1 + t/\sigma^2)}\right).$$
Inverting the above inequality yields that with probability greater than $1-\delta$,
$$1 - U_{(k)} \le \frac{k}{n} + \sqrt{\frac{3\sigma^2\log(1/\delta)}{n}} + \frac{3\log(1/\delta)}{n} \le \frac{k}{n}\left(1 + \sqrt{\frac{3\log(1/\delta)}{k}} + \frac{3\log(1/\delta)}{k}\right). \qquad \square$$

A.2 Proof of Theorem 2

In the setting of Lemma 2, combining Lemmas 2, 3, 4 we obtain the following upper bound on the error: with probability at least $1-\delta$, from a union bound over two adverse events of probability $\delta/2$ each,
$$|\hat\gamma(k) - \gamma| \le \gamma V_1(k,\delta/2) + 2\,\bar a\!\left(\frac{n}{(k+1)\bigl(1+R(k+1,\delta/2)\bigr)}\right) + \bigl(1 + \tilde V(k,\delta/2)\bigr)\,\bar b\!\left(\frac{n}{(k+1)\bigl(1+R(k+1,\delta/2)\bigr)}\right), \qquad (A.1)$$
where $\bar a, \bar b$ are as in Lemma 2, $R(k,\delta)$ is defined in Lemma 3, $V_1(k,\delta)$ is the $(1-\delta)$-quantile of $|Z_k - 1|$, and $\tilde V(k,\delta) \ge V_1(k,\delta)$ is defined in Lemma 4. The statement of the theorem now follows immediately from (A.1) and the fact that the functions $\tilde V$ and $R$ appearing there are non-increasing in $k$, so that $\tilde V(k,\delta/2) \le \tilde V(1,\delta/2)$ and $R(k+1,\delta/2) \le R(1,\delta/2)$.

A.3 Proof of Lemma 5

The random variable $kZ_k$ has distribution $\mathrm{Gamma}(k,1)$. Using Zhang & Zhou (2020, Theorem 5) gives universal constants $0 < c \le 1$, $\tilde C > 0$ such that for any $x > 0$,
$$P(kZ_k - k > x) \ge \max\bigl(c\,e^{-\tilde C x},\ c\,e^{-x^2/k}\bigr).$$
Thus
$$P(|Z_k - 1| > x) \ge P(kZ_k - k > kx) \ge \max\bigl(c\,e^{-\tilde C k x},\ c\,e^{-k x^2}\bigr).$$
Now,
$$V(k,\delta) = \inf\{y : P(|Z_k - 1| \le y) \ge 1 - \delta/2\} = \inf\{y : P(|Z_k - 1| > y) \le \delta/2\},$$
and the above argument yields the following inclusion:
$$\{y \ge 0 : P(|Z_k - 1| > y) \le \delta/2\} \subset \{y \ge 0 : \max(c\,e^{-\tilde C k y},\ c\,e^{-k y^2}) \le \delta/2\}.$$
This implies that
$$V(k,\delta) \ge \inf\bigl\{y \ge 0 : \max(c\,e^{-\tilde C k y},\ c\,e^{-k y^2}) \le \delta/2\bigr\} = \inf\left\{y \ge 0 : y \ge \frac{\log(2c/\delta)}{\tilde C k}\ \text{and}\ y \ge \sqrt{\frac{\log(2c/\delta)}{k}}\right\} = \max\left(\frac{\log(2c/\delta)}{\tilde C k},\ \sqrt{\frac{0 \vee \log(2c/\delta)}{k}}\right).$$
The statement follows with $c_2 = 2c$ and $c_1 = \min(1, 1/\tilde C)$. □

A.4 Proof of Proposition 4

Write $K = \{k_1 < k_2 < \dots < k_M\}$ with $M = |K|$ and let $m^*$ denote the grid index such that $k_{m^*} = k^*(\delta,n)$. Condition 2 guarantees that $m^* < M$.
From the non-increasing property of Vand Lemma 5 we may bound the quantity V(k, δ)from below as follows, for any 1≤k≤n, V(k, δ)≥V(k+ 1, δ)≥c1r 0∨log(c2/δ) k+ 1+log(c2/δ) k+ 1 ≥c1r 0∨log(c2/δ) k+ 1.(A.2) Combining the latter lower bound with the upper bound (4.4) on the bias term we obtain the following implications, m > m∗⇐⇒ B(km, n, δ)> γV (km, δ) ⇒C1(δ, ρ)km+ 1 n−ρ > γc 1s 0∨log(c2/δ) km+ 1 ⇐⇒ (km+ 1)−ρ+1/2>γc1p 0∨log(c2/δ) C1(δ, ρ)n−ρ ⇐⇒ (km+ 1) >γc1p 0∨log(c2/δ) C1(δ, ρ) 2 1−2ρ n−2ρ/(1−2ρ) ⇒km≥γc1p 0∨log(c2/δ) C1(δ, ρ) 2 1−2ρ n−2ρ/(1−2ρ)−1. (A.3) Hence, because km∗+1satisfies (A.3), we have km∗+1≥˜C2(δ, ρ)γ2 1−2ρn−2ρ 1−2ρ−1, 22 where ˜C2(δ, ρ) = c1p 0∨log(c2/δ)/C1(δ, ρ) 2 1−2ρ. We obtain k∗(δ, n) = km∗≥km∗ km∗+1 ˜C2(δ, ρ)γ2 1−2ρn−2ρ 1−2ρ−1 . (A.4) We now derive an upper bound for ˜C2(δ, ρ), leveraging the lower bound (4.5) on C1(δ, ρ). Under Condition (4.6) on (c2, δ)we get c2/δ≥p 4/δ, whence ˜C2(δ, ρ)1−2ρ 2≥c1p log(c2/δ) C 1 +p 3 log(4 /δ) + 3 log(4 /δ)1−ρ ≥c1p 2−1log(4/δ) C 3 log(4 /δ)1−ρ | {z } C′ 2×1 1 +1√ 3 log(4 /δ)+1 3 log(4 /δ)1−ρ | {z } D. Simplifying C′ 2we get C′ 2=c1 C√ 2p log(4/δ) 3 log(4 /δ1−ρ ≥c1 C31−ρ√ 2 log(4/δ)−(1−2ρ) 2, and under (4.6), we have that 3 log(4
https://arxiv.org/abs/2505.22371v1
$/\delta) > 4$, so that $D \ge (4/7)^{1-\rho}$. Combining the two latter bounds we obtain
$$\tilde C_2(\delta,\rho)^{\frac{1-2\rho}{2}} \;\ge\; \frac{c_1}{(21/4)^{1-\rho}\sqrt 2\, C}\,\log(4/\delta)^{-\frac{1-2\rho}{2}},$$
hence
$$\tilde C_2(\delta,\rho) \;\ge\; \Big(\frac{c_1^2}{2C^2}\Big)^{1/(1-2\rho)} \times \frac{1}{(21/4)^2} \times \frac{1}{\log(4/\delta)}.$$
The result follows from the above display combined with (A.4) and the assumption in the statement that $k_{m^*}/k_{m^*+1} \ge \beta^{-1}$. $\Box$

A.5 Proof of Theorem 3

From Proposition 1, for any $\delta>0$ and $n$ large enough so that $k^*(\delta,n) \ge \log(4/\delta)$, it holds that
$$\big|\hat\gamma(k^*(\delta,n)) - \gamma\big| \;\le\; 2\gamma V(k^*(\delta,n),\delta) \;\le\; 2\gamma\left(\sqrt{\frac{2\log(4/\delta)}{k^*(\delta,n)}} + \frac{\log(4/\delta)}{k^*(\delta,n)}\right) \quad \text{(from (3.13))}$$
$$\le\; 2(1+\sqrt 2)\,\gamma\sqrt{\frac{\log(4/\delta)}{k^*(\delta,n)}} \;\le\; 2(1+\sqrt 2)\,\gamma\sqrt\beta\,\sqrt{\frac{\log(4/\delta)}{\beta\, k^*(\delta,n)}} \;\le\; 2(1+\sqrt 2)\,\gamma\sqrt\beta\,\sqrt{\frac{1+\log(4/\delta)}{\beta\, k^*(\delta,n)+1}}$$
$$\le\; 2(1+\sqrt 2)\,\gamma\sqrt\beta\,\sqrt{1+\log(4/\delta)}\,\gamma^{-\frac{1}{1-2\rho}}C_2(\rho)^{-1/2}\,n^{\frac{\rho}{1-2\rho}} \quad \text{(from Proposition 4)}$$
$$=\; 2(1+\sqrt 2)\,\sqrt\beta\,\sqrt{1+\log(4/\delta)}\,C_2(\rho)^{-1/2}\,\gamma^{\frac{-2\rho}{1-2\rho}}\,n^{\frac{\rho}{1-2\rho}}.$$

Appendix 2 (Complementary results)

In this appendix, we provide additional discussion of our simulations and complementary experiments.

A.6 Perturbed Pareto distribution

We provide some additional details regarding the perturbed Pareto distribution used in our experiments (see also Resnick, 2007b), defined by its cumulative distribution function
$$F(x) = 1 - c\,x^{-\alpha}(\log x)^\beta, \qquad \text{for } x > x_0 = \exp(\beta/\alpha), \text{ with } \beta,\alpha>0 \text{ and } c = (e\alpha/\beta)^\beta.$$
We show that $F$ thus defined is a valid distribution function. That $F(x)\to 1$ as $x\to\infty$ derives immediately from the asymptotic comparison between powers of $x$ and of $\log x$. Now for $x\ge x_0$, the derivative of $F$ exists and is given by
$$f(x) = c\,(\log x)^{\beta-1}x^{-\alpha-1}(\alpha\log x - \beta) \;\ge\; 0.$$
Finally, straightforward computations show that $F(x_0) = 0$. We now show that the quantile function $Q$ associated with $F$ admits the standardized Karamata representation (4.1). We use the following characterization based on the so-called 'von Mises condition', as stated in standard reference books (de Haan, 1970; Resnick, 2007b; Beirlant et al., 2006). A distribution function $F$ with density $f$ satisfies the von Mises condition if
$$\exists\,\ell>0 \text{ such that } \frac{xf(x)}{1-F(x)} \to \ell. \tag{A.5}$$
From de Haan (1970, Theorem 2.7.1), (A.5) implies that $1-F$ is regularly varying with regular variation index $-\alpha = -\ell$, i.e.
the tail index is $\gamma = 1/\alpha = 1/\ell$. Also, from Csorgo et al. (1985, Lemma 6) (and as noted in Drees & Kaufmann, 1998), more is true: (A.5) is equivalent to the condition that the quantile function $Q(t) = F^\leftarrow(1-1/t)$ has Karamata representation (4.1), which is nearly as strong as Condition 3, although the additional requirement therein that $\bar b(t) \le Ct^\rho$ is not guaranteed. In the case of the 'Perturb' distribution defined above, the von Mises ratio satisfies
$$\frac{xf(x)}{1-F(x)} = \frac{x\,(\log x)^{\beta-1}x^{-\alpha-1}(\alpha\log x-\beta)}{x^{-\alpha}(\log x)^{\beta}} = \frac{\alpha\log x - \beta}{\log x} \xrightarrow[x\to\infty]{} \alpha,$$
which shows (A.5), thus ensuring that the Karamata representation (4.1) holds. We now show that the additional condition $\bar b(t)\le Ct^\rho$ does not hold for any $\rho<0$, so that Condition 3 is not satisfied. We use the fact (see the discussion in Drees & Kaufmann, 1998, following equation (5)) that in (4.1) one may choose $b(s) = q(Q(s)) - \gamma$, where $q(x) = \bar F(x)/(xf(x))$ is the inverse of the ratio in (A.5) and $Q(s) = F^\leftarrow(1-1/s)$. Now, with the 'Perturb' distribution,
$$q(Q(s)) - \gamma = \frac{\log Q(s)}{\alpha\log Q(s) - \beta} - \frac 1\alpha = \frac{\beta}{\alpha\,(\alpha\log Q(s) - \beta)} \;\ge\; \frac{\beta}{\alpha^2\log Q(s)}.$$
In addition, because the function $Q$ is regularly varying with regular variation index $1/\alpha$, we have for any $\epsilon>0$ and $s$ sufficiently large that $Q(s) \le cs^{1/\alpha+\epsilon}$ for some constant $c>0$, and we obtain
$$q(Q(s)) - \gamma \;\ge\; \frac{\beta}{\alpha^2\big(\log c + (1/\alpha+\epsilon)\log s\big)} \;\sim\; \frac{A}{\log s}, \qquad A>0.$$
The bound on $Q(s)$ used above is well known as
a Potter bound. The latter quantity converges to zero much more slowly than $s^\rho$ for any $\rho<0$. We conclude that for the Perturb distribution we cannot have $\bar b(t) \le Ct^\rho$ for any $\rho<0$; thus Condition 3 is not satisfied.

A.7 Counter-example distribution

The general idea behind the proposed counter-example distribution is to start from a well-behaved survival function $1-F$ and to modify it, 'drawing holes' in the support by pushing back some mass from some intervals to neighboring intervals. This strategy is suggested by well-known facts: from de Haan (1970, Theorem 2.7.1(b)), if $1-F$ is regularly varying and if, in addition, $F$ has a density which is ultimately non-increasing, then (A.5) holds true and thus, by the argument developed in Section A.6, the standardized Karamata representation (4.1) is also satisfied. As a consequence, to construct a distribution function $F$ such that (i) $1-F$ is regularly varying but (ii) the standardized Karamata representation does not hold, and if we would like to stay in the continuous case where a density exists (so as to avoid additional difficulties with potential ties in the order statistics), then the density $f$ must not be ultimately non-increasing.

Consider the random variable $X$ defined by
$$X = \lfloor Z^s\rfloor^{1/s} + \tfrac 12\big(Z - \lfloor Z^s\rfloor^{1/s}\big), \tag{A.6}$$
where $Z\sim$ Pareto$(1/\gamma)$ for some $\gamma>0$, i.e. $P(Z>t) = t^{-1/\gamma}$, $t\ge 1$, and $s\in(0,1]$ is a scaling parameter. For $z\ge 1$, define $r(z) = z - \lfloor z^s\rfloor^{1/s}$. Note that $r(z)\ge 0$ and that the discrepancy between $Z$ and $X$ writes $Z - X = \tfrac 12 r(Z)$. For $z\in[n^{1/s},(n+1)^{1/s})$, we have $0\le r(z)\le (n+1)^{1/s} - n^{1/s}$, and straightforward calculations show that
$$\liminf_{z\to\infty} \frac{r(z)}{s^{-1}z^{1-s}} = 0; \qquad \limsup_{z\to\infty} \frac{r(z)}{s^{-1}z^{1-s}} = 1.$$
Thus for $s<1$ the discrepancy between $X$ and $Z$ becomes more and more pronounced as $Z$ becomes large. Let us denote by $H$ the cumulative distribution function of $X$ in the remainder of the proof. Consider the intervals
$$I_n = \Big[n^{1/s},\; \frac{n^{1/s}+(n+1)^{1/s}}{2}\Big]; \qquad J_n = \Big(\frac{n^{1/s}+(n+1)^{1/s}}{2},\; (n+1)^{1/s}\Big).$$
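The 'holes' in the support can be observed directly by simulation before the formal argument. The sketch below is illustrative only; the values $\gamma = 1/2$, $s = 1/2$ and the sample size are arbitrary choices. It draws $Z$ from the Pareto$(1/\gamma)$ distribution, builds $X$ as in (A.6), and confirms that no draw of $X$ lies beyond the midpoint of $[n^{1/s},(n+1)^{1/s})$, i.e. inside the open interval $J_n$.

```python
import numpy as np

# Illustrative check of the construction (A.6): with Z ~ Pareto(1/gamma),
# X = floor(Z^s)^(1/s) + (Z - floor(Z^s)^(1/s)) / 2 should never land in the
# "hole" J_n = ((n^(1/s) + (n+1)^(1/s)) / 2, (n+1)^(1/s)).
rng = np.random.default_rng(1)
gamma_, s = 0.5, 0.5                      # arbitrary illustrative parameters
u = 1.0 - rng.random(100_000)             # uniform on (0, 1]
z = u ** (-gamma_)                        # P(Z > t) = t^(-1/gamma)
lower = np.floor(z ** s) ** (1.0 / s)     # left endpoint n^(1/s), n = floor(z^s)
x = lower + 0.5 * (z - lower)             # the variable X of (A.6)
n = np.floor(x ** s)                      # x falls in [n^(1/s), (n+1)^(1/s))
mid = (n ** (1.0 / s) + (n + 1.0) ** (1.0 / s)) / 2.0
assert np.all(x <= mid + 1e-9)            # no mass in the open interval J_n
```

Every sample of $X$ is the midpoint of $n^{1/s}$ and $Z$, so it stays in the left half of each block, which is exactly the property exploited below.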
Then the positive real line is the disjoint union of the $I_n, J_n$'s, and we have, whenever $Z\in I_n\cup J_n = [n^{1/s},(n+1)^{1/s})$,
$$X = n^{1/s} + \tfrac12 r(Z) \;\le\; \frac{n^{1/s}+(n+1)^{1/s}}{2},$$
so that $P\big(X\in\bigcup_{n\ge 0}J_n\big) = 0$. As a consequence, $X$ admits a density that vanishes on the intervals $J_n$ while being non-zero on the $I_n$'s. Writing $F(t) = P(Z\le t) = 1-t^{-1/\gamma}$, the identity in the above display shows that
$$H(x) = \begin{cases} F\big(\lfloor x^s\rfloor^{1/s} + 2r(x)\big) & \text{if } r(x)\le \dfrac{(\lfloor x^s\rfloor+1)^{1/s}-\lfloor x^s\rfloor^{1/s}}{2}, \\[6pt] F\Big(\dfrac{\lfloor x^s\rfloor^{1/s}+(\lfloor x^s\rfloor+1)^{1/s}}{2}\Big) & \text{otherwise}. \end{cases}$$
Because $r$ has derivative $r'(x)=1$ on $I_n$, we deduce that $H$ has density
$$h(x) = 2f\big(\lfloor x^s\rfloor^{1/s}+2r(x)\big) = 2\alpha\big(\lfloor x^s\rfloor^{1/s}+2r(x)\big)^{-\alpha-1},$$
where $f(x) = \alpha x^{-\alpha-1}$. Thus on such an interval $I_n$ we have
$$\frac{xh(x)}{1-H(x)} = \frac{2x\alpha\big(\lfloor x^s\rfloor^{1/s}+2r(x)\big)^{-\alpha-1}}{\big(\lfloor x^s\rfloor^{1/s}+2r(x)\big)^{-\alpha}} \;\underset{n\to\infty}{\sim}\; 2\alpha x\,\frac{x^{-\alpha-1}}{x^{-\alpha}} \to 2\alpha.$$
Thus the von Mises ratio on the left-hand side of (A.5) jumps from $0$ on $J_n$ to a level close to $2\alpha$ infinitely often; it therefore cannot have a limit, which proves that (4.1) cannot hold for the distribution of $X$.

We now check that $1-H$ is regularly varying with regular variation index $-\alpha$. Since $H$ and $F$ coincide on the set $\{n^{1/s}, n\in\mathbb N\}$, we have for $x>0$, $t>0$:
$$\frac{(\lfloor(tx)^s\rfloor+1)^{-\alpha/s}}{\lfloor t^s\rfloor^{-\alpha/s}} = \frac{1-F\big((\lfloor(tx)^s\rfloor+1)^{1/s}\big)}{1-F(\lfloor t^s\rfloor^{1/s})} \;\le\; \frac{1-H(tx)}{1-H(t)} \;\le\; \frac{1-F(\lfloor(tx)^s\rfloor^{1/s})}{1-F\big((\lfloor t^s\rfloor+1)^{1/s}\big)} = \frac{\lfloor(tx)^s\rfloor^{-\alpha/s}}{(\lfloor t^s\rfloor+1)^{-\alpha/s}},$$
and both sides of the sandwich converge to $x^{-\alpha}$ as $t\to\infty$, showing that $(1-H(tx))/(1-H(t)) \to x^{-\alpha}$ as $t\to\infty$, which concludes the proof.

A.8 Additional Simulations

We repeat the experiments from Section 5, this time using a linear grid defined
by $k_m = \lfloor mn/|\mathcal K|\rfloor$ for $1\le m\le |\mathcal K|$, where $|\mathcal K| = \log n/\log(\beta)$ and $\beta = 1.1$. The results, shown in Table 4 and Table 5, indicate that our EAV estimator is largely robust to changes in the grid.

d.f.      | γ      | EAV: [min,max]; mean | BT: [min,max]; mean | DK: [min,max]; mean
C 2,2/3   | 0.5    | [717,1051]; 922      | [31,127]; 66        | [1,11]; 7
C 2,1/2   | 0.5    | [368,1399]; 595      | [31,109]; 43        | [1,241]; 46
S 1.7     | 1/1.7  | [652,3991]; 3427     | [35,2253]; 515      | [323,729]; 495
S 1.5     | 1/1.5  | [717,4830]; 4356     | [33,3221]; 1154     | [534,1298]; 935
S 1.99    | 1/1.99 | [72,1862]; 1371      | [42,679]; 281       | [2,102]; 35
L 2,1     | 0.5    | [4830,7778]; 6605    | [35,3854]; 1230     | [250,1974]; 1132
F 1,10    | 1.0    | [717,1862]; 1334     | [33,1064]; 459      | [42,121]; 68
F 1       | 1.0    | [5313,7071]; 6216    | [38,4195]; 1430     | [853,2837]; 1665
PCP 1.1   | 1.1    | [717,8500]; 4663     | [50,9900]; 3727     | [3908,4605]; 4340
PCP 1.5   | 1.5    | [789,3991]; 2200     | [47,1057]; 694      | [73,5083]; 2890
PCP 1.25  | 1.25   | [717,9999]; 7111     | [33,9998]; 723      | [4048,5050]; 4519

Table 3: Range of values of $k$ output by $\hat k_{EAV}$, $\hat k_{BT}$, $\hat k_{DK}$, and mean values $\bar k_{EAV}$, $\bar k_{BT}$, $\bar k_{DK}$, over $N=500$ experiments.

$n = 10\,000$

d.f.      | γ      | EAV: MSE (stderr) | BT: MSE (stderr) | DK: MSE (stderr)
C 2,2/3   | 0.5    | 1.51 (0.04)       | 3.05 (0.16)      | 13.80 (0.88)
C 2,1/2   | 0.5    | 4.92 (0.09)       | 5.14 (0.24)      | 29.01 (5.99)
S 1.7     | 1/1.7  | 0.61 (0.03)       | 4.01 (0.10)      | 4.40 (0.06)
S 1.5     | 1/1.5  | 0.59 (0.01)       | 1.33 (0.05)      | 1.23 (0.03)
S 1.99    | 1/1.99 | 21.79 (0.09)      | 47.70 (0.24)     | 45.88 (0.48)
L 2,1     | 0.5    | 22.79 (0.15)      | 4.47 (0.14)      | 4.40 (0.08)
F 1,10    | 1.0    | 13.65 (0.12)      | 3.49 (0.09)      | 1.78 (0.12)
F 1       | 1.0    | 7.11 (0.07)       | 0.69 (0.05)      | 0.24 (0.03)
PCP 1.1   | 1.1    | 0.63 (0.01)       | 0.70 (0.11)      | 0.73 (0.04)
PCP 1.5   | 1.5    | 5.57 (0.05)       | 0.58 (0.07)      | 5.99 (0.36)
PCP 1.25  | 1.25   | 3.51 (0.02)       | 1.13 (0.06)      | 3.34 (0.02)

Table 4: MSE$(\hat k)$ (stderr$(\hat k)$), both multiplied by 100, for $\hat k\in\{\hat k_{EAV},\hat k_{BT},\hat k_{DK}\}$, over $N=500$ experiments.

d.f.      | γ      | EAV: [min,max]; mean | BT: [min,max]; mean | DK: [min,max]; mean
C 2,2/3   | 0.5    | [729,1145]; 967      | [31,127]; 66        | [1,11]; 7
C 2,1/2   | 0.5    | [312,1562]; 1118     | [31,109]; 43        | [1,241]; 46
S 1.7     | 1/1.7  | [937,4375]; 3724     | [35,2253]; 515      | [323,729]; 495
S 1.5     | 1/1.5  | [1354,5000]; 4591    | [33,3221]; 1154     | [534,1298]; 935
S 1.99    | 1/1.99 | [1250,2083]; 1653    | [42,679]; 281       | [2,102]; 35
L 2,1     | 0.5    | [5312,8437]; 6970    | [35,3854]; 1230     | [250,1974]; 1132
F 1,10    | 1.0    | [833,1979]; 1447     | [33,1064]; 459      | [42,121]; 68
F 1       | 1.0    | [5208,7395]; 6381    | [38,4195]; 1430     | [853,2837]; 1665
PCP 1.1   | 1.1    | [1770,9687]; 6429    | [50,9900]; 3727     | [3908,4605]; 4340
PCP 1.5   | 1.5    | [1354,4166]; 2362    | [47,1057]; 694      | [73,5083]; 2890
PCP 1.25  | 1.25   | [1562,9998]; 7831    | [33,9998]; 723      | [4048,5050]; 4519

Table 5: Range of values of $k$ output by $\hat k_{EAV}$, $\hat k_{BT}$, $\hat k_{DK}$, and mean values $\bar k_{EAV}$, $\bar k_{BT}$, $\bar k_{DK}$, over $N=500$ experiments.

References

Aghbalou, A., Bertail, P., Portier, F. & Sabourin, A. (2024). Cross-validation on extreme regions. Extremes, 1–51.
Beirlant, J., Goegebeur, Y., Segers, J. & Teugels, J. L. (2006). Statistics of Extremes: Theory and Applications. John Wiley & Sons.
Beirlant, J., Vynckier, P. & Teugels, J. L. (1996). Tail index estimation, Pareto quantile plots, regression diagnostics. Journal of the American Statistical Association 91, 1659–1667.
Bingham, N., Goldie, C. & Teugels, J. (1987). Regular Variation. Cambridge University Press.
Boucheron, S., Lugosi, G. & Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.
Boucheron, S. & Thomas, M. (2015). Tail index estimation, concentration and adaptivity. Electronic Journal of Statistics 9, 2751–2792.
Carpentier, A. & Kim, A. K. (2015). Adaptive and minimax optimal estimation of the tail coefficient. Statistica Sinica, 1133–1144.
Chichignoud, M., Lederer, J. & Wainwright, M. J. (2016). A practical scheme and fast algorithm to tune the lasso with optimality guarantees. Journal of Machine Learning Research 17, 1–20.
Clémençon, S., Jalalzai, H., Lhaut, S., Sabourin, A. & Segers, J. (2023).
Concentration bounds for the empirical angular measure with statistical learning applications. Bernoulli 29, 2797–2827. Clémençon, S.
& Sabourin, A. (2025). Weak signals and heavy tails: Machine-learning meets extreme value theory. arXiv:2504.06984.
Comte, F. & Lacour, C. (2013). Anisotropic adaptive kernel deconvolution. Annales de l'IHP Probabilités et Statistiques 49, 569–609.
Csorgo, S., Deheuvels, P. & Mason, D. (1985). Kernel estimates of the tail index of a distribution. The Annals of Statistics, 1050–1077.
De Haan, L. & Resnick, S. (1987). On regular variation of probability densities. Stochastic Processes and their Applications 25, 83–93.
de Haan, L. F. M. (1970). On Regular Variation and its Application to the Weak Convergence of Sample Extremes, vol. 32. Mathematisch Centrum.
Drees, H. (2001). Minimax risk bounds in extreme value theory. The Annals of Statistics 29, 266–294.
Drees, H. & Kaufmann, E. (1998). Selecting the optimal sample fraction in univariate extreme value estimation. Stochastic Processes and their Applications 75, 149–172.
Drees, H. & Sabourin, A. (2021). Principal component analysis for multivariate extremes. Electronic Journal of Statistics 15, 908–943.
Embrechts, P., Klüppelberg, C. & Mikosch, T. (2013). Modelling Extremal Events: for Insurance and Finance, vol. 33. Springer Science & Business Media.
Engelke, S., Lalancette, M. & Volgushev, S. (2021). Learning extremal graphical structures in high dimensions. arXiv:2111.00840.
Fedotenkov, I. (2020). A review of more than one hundred Pareto-tail index estimators. Statistica 80, 245–299.
Goix, N., Sabourin, A. & Clémençon, S. (2015). Learning the dependence structure of rare events: a non-asymptotic study. In Conference on Learning Theory.
Goldenshluger, A. & Lepski, O. (2011). Bandwidth selection in kernel density estimation: Oracle inequalities and adaptive minimax optimality. The Annals of Statistics 39, 1608–1632.
Gomes, I., De Haan, L. & Rodrigues, L. H. (2008). Tail index estimation for heavy-tailed models: accommodation of bias in weighted log-excesses.
Journal of the Royal Statistical Society Series B: Statistical Methodology 70, 31–52.
Gomes, M. I., Caeiro, F., Henriques-Rodrigues, L. & Manjunath, B. (2016). Bootstrap methods in statistics of extremes. In Handbook of Extreme Value Theory and Its Applications to Finance and Insurance. Handbook Series in Financial Engineering and Econometrics (Ruey Tsay Adv. Ed.). John Wiley and Sons, 117–138.
Gomes, M. I., Figueiredo, F. & Neves, M. M. (2012). Adaptive estimation of heavy right tails: resampling-based methods in action. Extremes 15, 463–489.
Gomes, M. I. & Oliveira, O. (2001). The bootstrap methodology in statistics of extremes—choice of the optimal sample fraction. Extremes 4, 331–358.
Grama, I. & Spokoiny, V. (2008). Statistics of extremes by oracle estimation. The Annals of Statistics 36, 1619–1648.
Hall, P. (1982). On some simple estimates of an exponent of regular variation. Journal of the Royal Statistical Society: Series B (Methodological) 44, 37–42.
Hall, P. & Welsh, A. H. (1984). Best attainable rates of convergence for estimates of parameters of regular variation. The Annals of Statistics, 1079–1084.
Hall, P. & Welsh, A. H. (1985). Adaptive estimates of parameters of regular variation. The Annals of Statistics, 331–341.
Hill, B. M. (1975). A simple general approach to inference about the tail of a distribution. The Annals of Statistics 3, 1163–1174.
Ibragimov, I. (1975). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff Publishing.
Lacour, C. & Massart, P. (2016). Minimal penalty for Goldenshluger–Lepski method. Stochastic Processes and their Applications 126, 3774–3789.
Laszkiewicz, M., Fischer, A. & Lederer, J. (2021). Thresholded adaptive validation: Tuning the graphical lasso for graph recovery. In International Conference
on Artificial Intelligence and Statistics. PMLR.
Lederer, J. (2022). Fundamentals of High-Dimensional Statistics: with Exercises and R Labs. Springer Texts in Statistics.
Lepski, O. V. (1990). A problem of adaptive estimation in Gaussian white noise. Teoriya Veroyatnostei i ee Primeneniya 35-3, 459–470.
Lepski, O. V., Mammen, E. & Spokoiny, V. G. (1997). Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. The Annals of Statistics, 929–947.
Lhaut, S., Sabourin, A. & Segers, J. (2021). Uniform concentration bounds for frequencies of rare events. arXiv:2110.05826.
Li, W. & Lederer, J. (2019). Tuning parameter calibration for ℓ1-regularized logistic regression. Journal of Statistical Planning and Inference 202, 80–98.
Mason, D. M. (1982). Laws of large numbers for sums of extreme values. The Annals of Probability, 754–764.
Nolan, J. P. (2020). Univariate Stable Distributions. Springer Series in Operations Research and Financial Engineering 10, 978-3.
Pickands III, J. (1975). Statistical inference using extreme order statistics. The Annals of Statistics, 119–131.
Reiss, R.-D. (2012). Approximate Distributions of Order Statistics: with Applications to Nonparametric Statistics. Springer Science & Business Media.
Resnick, S. (2007a). Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer Science & Business Media.
Resnick, S. (2007b). Heavy-Tail Phenomena: Probabilistic and Statistical Modeling, vol. 10. Springer Science & Business Media.
Resnick, S. I. (2008). Extreme Values, Regular Variation, and Point Processes, vol. 4. Springer Science & Business Media.
Scarrott, C. & MacDonald, A. (2012). A review of extreme value threshold estimation and uncertainty quantification. REVSTAT-Statistical Journal 10, 33–60.
Segers, J. (2002). Abelian and Tauberian theorems on the bias of the Hill estimator. Scandinavian Journal of Statistics 29, 461–483.
Taheri, M., Lim, N. & Lederer, J. (2023).
Balancing statistical and computational precision: A general theory and applications to sparse regression. IEEE Transactions on Information Theory 69, 316–333.
Weissman, I. (1978). Estimation of parameters and large quantiles based on the k largest observations. Journal of the American Statistical Association 73, 812–815.
Zhang, A. R. & Zhou, Y. (2020). On the non-asymptotic and sharp lower tail bounds of random variables. Stat 9, e314.
arXiv:2505.22417v1 [math.ST] 28 May 2025

High-Dimensional Binary Variates: Maximum Likelihood Estimation with Nonstationary Covariates and Factors

Xinbing Kong¹, Bin Wu*², and Wuyi Ye²

¹Southeast University, Nanjing 211189, China
²University of Science and Technology of China, Hefei 230026, China

Abstract

This paper introduces a high-dimensional binary variate model that accommodates nonstationary covariates and factors, and studies their asymptotic theory. This framework encompasses scenarios where single indices are nonstationary or cointegrated. For nonstationary single indices, the maximum likelihood estimator (MLE) of the coefficients has dual convergence rates and is collectively consistent under the condition $T^{1/2}/N\to 0$, as both the cross-sectional dimension $N$ and the time horizon $T$ approach infinity. The MLE of all nonstationary factors is consistent when $T^\delta/N\to 0$, where $\delta$ depends on the link function. The limiting distributions of the factors depend on time $t$, governed by the convergence of the Hessian matrix to zero. In the case of cointegrated single indices, the MLEs of both factors and coefficients converge at a higher rate of $\min(\sqrt N,\sqrt T)$. A distinct feature compared to nonstationary single indices is that the dual rate of convergence of the coefficients increases from $(T^{1/4}, T^{3/4})$ to $(T^{1/2}, T)$. Moreover, the limiting distributions of the factors do not depend on $t$ in the cointegrated case. Monte Carlo simulations verify the accuracy of the estimates. In an empirical application, we analyze jump arrivals in financial markets using this model, extract jump arrival factors, and demonstrate their efficacy in large-cross-section asset pricing.

Keywords: Binary Response; Non-stationary Factors; Maximum Likelihood Estimation; Jump Arrival Factors.

*Corresponding author. Email: bin.w@ustc.edu.cn

1 Introduction

Since Chamberlain and Rothschild (1983)'s introduction of the linear approximate factor model, factor models have been the focus of extensive research.
Seminal contributions include those by Bai (2003), Fan et al. (2013), Pelger (2019), and He et al. (2025). In recent years, attention has increasingly turned toward nonlinear factor models. For example, Chen et al. (2021) developed estimation methods and asymptotic theory for quantile factor models. In particular, considerable work has addressed nonlinear binary factor models, c.f., Chen et al. (2021), Ando et al. (2022), Wang (2022), Ma et al. (2023), and Gao et al. (2023). Binary factor models have diverse applications in fields such as engineering (data compression, visualization, pattern recognition, and machine learning), economics and finance (credit default analysis, macroeconomic forecasting, jump arrival analysis), and biology (gene sequence analysis). However, the aforementioned studies assume that both the factors and covariates are stationary processes, an assumption that may not hold in practice. For instance, daily jump frequencies in financial markets often exhibit aggregation in jump occurrence probabilities (see Bollerslev and Todorov 2011a,b), suggesting nonstationarity, which is also justified in our empirical studies. This paper addresses this gap by investigating a general class of binary factor models where both covariates and factors are generated by integrated processes.

Specifically, consider the following single-index general factor model:
$$y_{it} = \Psi(z_{0it}) + u_{it}, \quad \text{where } z_{0it} = \beta_{0i}'x_{it} + \lambda_{0i}'f_{0t}, \quad \text{for } i=1,\dots,N \text{ and } t=1,\dots,T. \tag{1}$$
Here, $x_{it}$ is a $q$-dimensional covariate with coefficient vector $\beta_{0i}$, and $f_{0t}$ is an $r$-dimensional factor with factor loading vector $\lambda_{0i}$. The binary outcome $y_{it}$ is modeled through a known nonlinear link function $\Psi(\cdot)$ (such as logit or
probit). Both $x_{it}$ and $f_{0t}$ are integrated of order one, i.e., I(1) processes. The outcome $y_{it}$ can certainly be extended to other types of variates, such as counts. We consider two cases for the single index $z_{0it}$: a nonstationary I(1) index and a cointegrated I(0) index. In the nonstationary univariate regression setting (i.e., $\lambda_{0i}'f_{0t} = 0$ and $N=1$), Park and Phillips (2000) provide the relevant asymptotic theory. However, in high-dimensional settings with a factor structure, the asymptotic properties remain unexplored. Furthermore, when $z_{0it}$ is cointegrated, the associated asymptotic theory is also missing.

This paper develops a theoretical framework for binary factor models with nonstationary covariates and factors, modeled as integrated time series. Specifically, we model the covariates
and factors as integrated time series and consider scenarios where the single index is either nonstationary or cointegrated. Our approach builds on earlier work on the asymptotics of nonlinear functions of integrated time series (c.f., Park and Phillips 1999, 2000, 2001; Dong et al. 2016; Zhou et al. 2024) and on MLE methodologies for high-dimensional stationary factor models (c.f., Bai and Li 2012; Chen et al. 2021; Gao et al. 2023; Yuan et al. 2023; Xu et al. 2025). There are also studies involving nonstationary time series in high-dimensional linear models, such as Zhang et al. (2018), Dong et al. (2021), Trapani (2021), and Barigozzi et al. (2024).

Our main theoretical contribution lies in establishing the asymptotic properties of the MLEs for both coefficients and factors within the class of general binary factor models. The findings of this paper are summarized below. In the model (1) with a nonstationary single index, the convergence rate of the estimated $\alpha_{0i}$ ($\alpha_{0i} = (\beta_{0i}',\lambda_{0i}')'$) is characterized by two distinct rates along different axes in a new coordinate system where $\alpha_{0i}/\|\alpha_{0i}\|$ defines one axis. Along the axis of $\alpha_{0i}/\|\alpha_{0i}\|$, the estimated $\alpha_{0i}$ (denoted by $\hat\alpha_i$) converges at a rate of $T^{1/4}$, while along axes orthogonal to $\alpha_{0i}/\|\alpha_{0i}\|$, it converges at a faster rate of $T^{3/4}$. This dual-rate convergence is consistent with findings in binary regression models, such as the univariate case documented in Park and Phillips (2000). Collectively, $\hat\alpha_i$ exhibits a convergence rate of $T^{1/4}$. The convergence rate of $\hat f_t$ to $f_{0t}$ is $t^{-\delta/2}N^{1/2}$, where $\delta$ is determined by the property of the link function. The normalized estimator $\hat\alpha_i/\|\hat\alpha_i\|$ converges to $\alpha_{0i}/\|\alpha_{0i}\|$ at a rate of $T^{3/4}$, faster than that of $\hat\alpha_i$. In the model (1) with a cointegrated single index, the convergence rates of $\hat\alpha_i$ are faster: along the axis of $\alpha_{0i}/\|\alpha_{0i}\|$, the convergence rate improves to $T^{1/2}$, and along axes orthogonal to $\alpha_{0i}/\|\alpha_{0i}\|$, it improves to $T$. Consequently, $\hat\alpha_i$ achieves an overall convergence rate of $T^{1/2}$. Moreover, the convergence rate of $\hat f_t$ to $f_{0t}$ is $N^{1/2}$, aligning with conventional estimation rates of factors, as discussed in prior studies (c.f., Bai 2003; Chen et al. 2021; Gao et al. 2023). The normalized estimator $\hat\alpha_i/\|\hat\alpha_i\|$ converges to $\alpha_{0i}/\|\alpha_{0i}\|$ at an accelerated rate of $T$, surpassing the convergence rate of $\hat\alpha_i$. It is worth noting that the asymptotic distributions of the coefficient estimates are significantly different under the cointegrated single index from those under the nonstationary single
index. The modeling framework of this paper has a wide range of applications; we apply it specifically to jump arrival events in financial markets. We find that the model captures well the potential jump arrival factor, which is nonstationary. Additionally, we find that the jump arrival factor effectively explains the asset panel data, which contains information not captured by the Fama–French–Carhart five factors. The jump arrival factor, which benefits from our nonstationary binary model, is effectively applicable in empirical financial asset pricing and is distinct from jump size factors (e.g., ?; Pelger 2020).

The rest of the paper is organized as follows. In Section 2, we introduce the model, outline the underlying assumptions, and describe the estimation procedure. Section 3 presents the asymptotic properties of the proposed estimators. In Section 4, we report Monte Carlo simulation results to assess the accuracy of estimation. Section 5 offers an empirical application of the model, and Section 6 concludes the paper.

Notation. $\|\cdot\|$ denotes the Euclidean norm of a vector or the Frobenius norm of a matrix. For any matrix $A$ with real eigenvalues, let $\rho_{\max}(A)$ be its largest eigenvalue. Convergence in probability and in distribution are denoted by $\to_P$ and $\to_D$, respectively. $MN(0,\Omega)$ denotes a mixture normal distribution with conditional covariance matrix $\Omega$. For any function $f(\cdot)$, the notation $\dot f(x)$, $\ddot f(x)$, and $\dddot f(x)$ refer to the first, second, and third derivatives of $f(\cdot)$ at $x$, respectively. The indicator function is written as $I\{\cdot\}$. We write $a_{NT}\asymp b_{NT}$ to mean that there exist positive constants $c$ and $C$, independent of $N$ and $T$, such that $c \le a_{NT}/b_{NT} \le C$. Similarly, $a_{NT} \lesssim b_{NT}$ means that $a_{NT} \le Cb_{NT}$ for some positive $C$. The symbol $\otimes$ denotes the Kronecker product. The phrase "w.p.a.1" stands for "with probability approaching 1".
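Before the formal setup, the data-generating process of model (1) can be made concrete with a short simulation. The sketch below is illustrative only: the logit link, the dimensions, and the parameter scales are arbitrary choices, not those used in the paper's experiments.

```python
import numpy as np

# Minimal simulation of model (1): y_it = 1 with probability
# Psi(beta_i' x_it + lambda_i' f_t), with a logit link Psi and
# I(1) (random-walk) covariate and factor processes.
# All sizes and parameter scales below are illustrative assumptions.
rng = np.random.default_rng(2)
N, T, q, r = 50, 300, 1, 1
x = np.cumsum(rng.normal(size=(N, T, q)), axis=1)   # x_it = x_{i,t-1} + e_it
f = np.cumsum(rng.normal(size=(T, r)), axis=0)      # f_t  = f_{t-1} + v_t
beta = rng.normal(scale=0.5, size=(N, q))
lam = rng.normal(scale=0.5, size=(N, r))
z = np.einsum("itq,iq->it", x, beta) + lam @ f.T    # single index z_it
prob = 1.0 / (1.0 + np.exp(-z))                     # logit link Psi(z)
y = (rng.random((N, T)) < prob).astype(int)         # binary outcome panel
assert y.shape == (N, T)
```

Because the single index inherits the random walk's growth, $\Psi(z_{0it})$ tends to saturate near 0 or 1 for large $t$, which is precisely the feature that makes the Hessian analysis below nonstandard.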
2 Models and Estimation Procedure

2.1 Model Setup

Model (1) can be rewritten in the equivalent form
$$y_{it}^* = \beta_{0i}'x_{it} + \lambda_{0i}'f_{0t} + \epsilon_{it}, \qquad y_{it} = I\{y_{it}^* \ge 0\}.$$
The term $\lambda_{0i}'f_{0t}$ captures unobserved components of individual $i$'s utility. The error term $\epsilon_{it}$ is independently and identically distributed (i.i.d.) with distribution function $\Psi:\mathbb R\to[0,1]$ (e.g., standard normal or standard logistic). The dependent variable $y_{it}^*$ is latent, and the observed outcome $y_{it}$ is binary, taking values of either 0 or 1.

In our framework, both the explanatory variable $x_{it}$ and the factor $f_{0t}$ are nonstationary processes, integrated of order one (denoted as I(1)). We first assume that there exist neighborhoods around the true parameters $\beta_{0i}$ and $\lambda_{0i}$ such that $\beta_i$ and $\lambda_i$ always lie within these neighborhoods, ensuring that $\beta_i'x_{it}$ and $\lambda_i'f_{0t}$ remain I(1) processes. In Section 3.1, we treat $\beta_{0i}'x_{it} + \lambda_{0i}'f_{0t}$ as an I(1) process. In Section 3.3, we allow for cointegration between $\beta_{0i}'x_{it}$ and $\lambda_{0i}'f_{0t}$, i.e., $\beta_{0i}'x_{it} + \lambda_{0i}'f_{0t} \sim I(0)$. These two settings encompass both stationary and nonstationary single indices and are broadly applicable. The assumptions regarding $x_{it}$ and $f_{0t}$ are outlined below for the subsequent development of the theory.

Assumption 1. (Integrated Processes) (i) The covariate series follow $x_{it} = x_{it-1} + e_{it}$ and the factor series follow $f_{0t} = f_{0t-1} + v_t$, with initial conditions $x_{i0} = O_P(1)$ and $f_{00} = O_P(1)$, where $\{e_{it}\}$ and $\{v_t\}$ are linear processes defined as $e_{it} = \Pi^e(L)\varepsilon^e_{it} = \sum_{k=0}^\infty \Pi^e_k\varepsilon^e_{it-k}$ and $v_t = \Pi^v(L)\varepsilon^v_t = \sum_{k=0}^\infty \Pi^v_k\varepsilon^v_{t-k}$, where $\Pi^j(L) = \sum_{k=0}^\infty \Pi^j_kL^k$ with the lag operator $L$ for $j\in\{e,v\}$, $\Pi^e(1)$ and $\Pi^v(1)$ are nonsingular, and $\Pi^e_0 = I_q$ and $\Pi^v_0 = I_r$. Additionally, we assume $\sum_{k=0}^\infty k\,(\|\Pi^e_k\| + \|\Pi^v_k\|) < \infty$.
The vectors $\{\varepsilon_{it} = (\varepsilon^{e\prime}_{it}, \varepsilon^{v\prime}_t)'\}$ are i.i.d. with mean zero and satisfy $E\|\varepsilon_{it}\|^\iota < \infty$ for some $\iota > 8$. The distribution of $\varepsilon_{it}$ is absolutely continuous with respect to the Lebesgue measure and has a characteristic function $\phi_i(t)$ such that $\phi_i(t) = o(\|t\|^{-\kappa})$ as $\|t\|\to\infty$ for some $\kappa>0$.
(ii) Define $U_{i,n}(s) = \frac{1}{\sqrt T}\sum_{t=1}^{[Ts]} u_{it}$, $E_{i,n}(s) = \frac{1}{\sqrt T}\sum_{t=1}^{[Ts]} e_{it}$, and $V_n(s) = \frac{1}{\sqrt T}\sum_{t=1}^{[Ts]} v_t$. We assume that there exists a $(q+r+1)$-dimensional Brownian motion $(U_i, E_i, V)$ such that $(U_{i,n}(s), E_{i,n}(s), V_n(s)) \to_D (U_i(s), E_i(s), V(s))$ in $D[0,1]^{q+r+1}$ for all $i\in\{1,\dots,N\}$.

These conditions are standard in nonstationary model estimation. In particular, Assumption 1(ii) is widely used in related studies, such as Park and Phillips (2000), Dong et al. (2016), and Trapani (2021). We define $H_i(s) = (E_i(s)', V(s)')'$.

Assumption 2. (Covariate Coefficients, Factors, and Factor Loadings) (i) There exists a positive constant $C$ such that $\|\beta_{0i}\| \le C$ and $\|\lambda_{0i}\| \le C$ for all $i=1,\dots,N$.
(ii) As $T\to\infty$, $\sum_{t=1}^T f_{0t}f_{0t}'/T^2 \to_D \int_0^1 W_sW_s'\,ds$, where $W_s$ is a vector of Brownian motions with a positive definite covariance matrix.
(iii) $\sum_{i=1}^N \lambda_{0i}\lambda_{0i}'/N = \mathrm{diag}(\sigma_{N1},\dots,\sigma_{Nr})$ with $\sigma_{N1}\ge\sigma_{N2}\ge\cdots\ge\sigma_{Nr}$, and $\sigma_{Ni}\to\sigma_i$ as $N\to\infty$ for $i=1,\dots,r$, where $\infty > \sigma_1 > \sigma_2 > \cdots > \sigma_r > 0$.

Assumption 2(i) means that the loadings lie in a compact set. Sufficient conditions for Assumption 2(ii) can be found in Hansen (1992). The assumption of positive definiteness in Assumption 2(ii) precludes cointegration among the components of $f_{0t}$. Similar assumptions are made in Bai (2004). Assumption 2(iii) is a version of the strong factors assumption, which is commonly used in the literature, such as in Bai (2003) and Chen et al. (2021). The requirement that $\sigma_1,\dots,\sigma_r$ are distinct is similar to the assumption in Bai (2003), which provides a convenient way to identify and order the factors.
2.2 Estimation Procedure

Let $z_{it} = \beta_i'x_{it} + \lambda_i'f_t$, $\alpha_i = (\beta_i',\lambda_i')'$, $g_{it} = (x_{it}', f_t')'$, $A = (\alpha_1,\dots,\alpha_N)'$, $B = (\beta_1,\dots,\beta_N)'$, $\Lambda = (\lambda_1,\dots,\lambda_N)'$, and $F = (f_1,\dots,f_T)'$. Let $g_{0it}$, $A_0$, $B_0$, $\Lambda_0$, and $F_0$ be the true parameters. Given the stochastic properties of $\{u_{it}\}$, the log-likelihood function is
$$\log L(B,\Lambda,F) = \sum_{i=1}^N\sum_{t=1}^T \big[y_{it}\log\Psi(z_{it}) + (1-y_{it})\log(1-\Psi(z_{it}))\big]. \tag{2}$$
Due to the rotational indeterminacy of the factor loadings $\lambda_i$ and the factors $f_t$, we impose identification constraints, following standard approaches:
$$\mathcal F = \Big\{F\in\mathbb R^{T\times r} : \frac{F'F}{T^2} = I_r\Big\}, \qquad \mathcal L = \Big\{\Lambda\in\mathbb R^{N\times r} : \frac{\Lambda'\Lambda}{N} \text{ is diagonal with non-increasing elements}\Big\}. \tag{3}$$
The estimators for $(B_0,\Lambda_0,F_0)$ are then defined as:
$$(\hat B,\hat\Lambda,\hat F) = \mathop{\mathrm{argmin}}_{B\in\mathbb R^{N\times q},\,\Lambda\in\mathcal L,\,F\in\mathcal F} \log L(B,\Lambda,F). \tag{4}$$
Direct optimization of this expression is challenging due to the absence of closed-form solutions for our estimators, unlike in principal component analysis (PCA). To address this, we propose an iterative optimization algorithm. Define the serial and cross-section averages as
$$L_{iT}(\alpha_i,F) = \sum_{t=1}^T l_{it}(z_{it}), \quad \text{and} \quad L_{Nt}(A,f_t) = \sum_{i=1}^N l_{it}(z_{it}),$$
where $l_{it}(z_{it}) = y_{it}\log\Psi(z_{it}) + (1-y_{it})\log(1-\Psi(z_{it}))$. The iterative optimization algorithm proceeds as follows:

Step 1: Randomly select the initial parameter $A^{(0)}$.
Step 2: Given $A^{(l-1)}$, solve $f_t^{(l-1)} = \mathop{\mathrm{argmin}}_f L_{Nt}(A^{(l-1)}, f)$ for $t=1,\dots,T$; given $F^{(l-1)}$, solve $\alpha_i^{(l)} = \mathop{\mathrm{argmin}}_\alpha L_{iT}(\alpha, F^{(l-1)})$ for $i=1,\dots,N$.
Step 3: Repeat Step 2 until a convergence criterion is met.
Step 4: Let $\Lambda^*$ and $F^*$ be the final estimators after the iteration process. Normalize $\Lambda^*$ and $F^*$ to satisfy the constraints in (3).

In Step 3, various tolerance conditions can be employed, such as those based on parameter changes or the objective function.
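A minimal sketch of Steps 1–3 for the logit case is given below, with the covariate component omitted for brevity, so that only $\Lambda$ and $F$ are updated. The damped, ridge-regularized Newton inner updates are our own illustrative choice, not the authors' implementation; the check at the end only confirms that the alternating scheme increases the log-likelihood from its starting value.

```python
import numpy as np

# Illustrative alternating scheme (logit link, no covariates): update each f_t
# given Lambda, then each lambda_i given F, with damped Newton steps.
rng = np.random.default_rng(3)
N, T, r = 40, 150, 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

f0 = np.cumsum(rng.normal(size=(T, r)), axis=0) / np.sqrt(T)  # scaled I(1) factor
lam0 = rng.normal(size=(N, r))
y = (rng.random((N, T)) < sigmoid(lam0 @ f0.T)).astype(float)

def loglik(Lam, F):
    p = np.clip(sigmoid(Lam @ F.T), 1e-9, 1.0 - 1e-9)
    return float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

Lam = 0.1 * rng.normal(size=(N, r))       # Step 1: random initialization
F = 0.1 * rng.normal(size=(T, r))
ll0 = loglik(Lam, F)
for _ in range(15):                        # Step 3: iterate Step 2
    for t in range(T):                     # update f_t given Lambda
        p = sigmoid(Lam @ F[t]); w = p * (1.0 - p)
        H = (Lam * w[:, None]).T @ Lam + 1e-3 * np.eye(r)   # ridge for stability
        F[t] += np.clip(0.5 * np.linalg.solve(H, Lam.T @ (y[:, t] - p)), -1.0, 1.0)
    for i in range(N):                     # update lambda_i given F
        p = sigmoid(F @ Lam[i]); w = p * (1.0 - p)
        H = (F * w[:, None]).T @ F + 1e-3 * np.eye(r)
        Lam[i] += np.clip(0.5 * np.linalg.solve(H, F.T @ (y[i] - p)), -1.0, 1.0)
ll1 = loglik(Lam, F)
assert ll1 > ll0                           # the alternating scheme made progress
```

The damping and step clipping guard against the near-singular Hessians that arise when $\Psi(z_{it})$ saturates, which, as discussed in Remark 1, happens systematically for large $t$ under an I(1) factor.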
In this paper, we terminate the iteration when the change in the objective function is below a threshold $\varrho$, i.e., $|L_{\text{new}} - L_{\text{last}}| < \varrho$, where $L_{\text{new}}$ and $L_{\text{last}}$ denote the current and previous values of the log-likelihood function, respectively. In Step 4, we compute
$$\Big(\frac{F^{*\prime}F^*}{T^2}\Big)^{1/2}\Big(\frac{\Lambda^{*\prime}\Lambda^*}{N}\Big)\Big(\frac{F^{*\prime}F^*}{T^2}\Big)^{1/2} = Q^*D^*Q^{*\prime},$$
where $D^*$ is a diagonal matrix. We then sort the diagonal elements of $\Lambda^*(F^{*\prime}F^*/T^2)^{1/2}Q^*$ in descending order to obtain $\hat\Lambda$. Similarly, we sort $F^*(F^{*\prime}F^*/T^2)^{-1/2}Q^*$ according to the same order to obtain $\hat F$.

Remark 1. If the covariate component $\beta_{0i}'x_{it}$ is absent, the binary probability simplifies to $\Psi(\lambda_{0i}'f_{0t})$, reducing the model to a pure binary factor model. Although the likelihood function remains non-convex in $(\Lambda,F)$, the limit of $L_{iT}(\lambda_i,F)$ becomes globally convex for each $\lambda_i$ given $F$, and the limit of $L_{Nt}(\Lambda,f_t)$ becomes globally convex for each $f_t$ given $\Lambda$. Consequently, these two optimization problems can be efficiently solved, as demonstrated in prior research, e.g., Chen et al. (2021). To illustrate this, consider the logit model. The leading terms in the Hessian matrices are:
$$\frac{\partial L_{iT}(\lambda,F)}{\partial\lambda\partial\lambda'} = -\sum_{t=1}^T \eta_{it}(1-\eta_{it})f_{0t}f_{0t}', \qquad \frac{\partial L_{Nt}(\Lambda,f)}{\partial f\partial f'} = -\sum_{i=1}^N \eta_{it}(1-\eta_{it})\lambda_{0i}\lambda_{0i}',$$
where $\eta_{it}(1-\eta_{it}) = \frac{e^{\lambda_{0i}'f_{0t}}}{(1+e^{\lambda_{0i}'f_{0t}})^2}$ is the logistic function, a nonlinear integrable function of $f_{0t}$. When appropriately normalized, the Hessian matrix $\frac{\partial L_{iT}(\lambda,F)}{\partial\lambda\partial\lambda'}$ converges weakly to a random limit matrix, rather than a constant matrix (see Park and Phillips 2000). Additionally, the Hessian matrix $\frac{\partial L_{Nt}(\Lambda,f)}{\partial f\partial f'}$ may converge to a neighborhood of zero when $t$ is large (because $x\mapsto e^x/(1+e^x)^2$ is integrable and $\frac{e^{\lambda_{0i}'f_{0t}}}{(1+e^{\lambda_{0i}'f_{0t}})^2}\approx 0$ outside the effective range of this function), making the consistency of $f_t$ more challenging to ensure. However, when $\frac{\partial L_{Nt}(\Lambda,f)}{\partial f\partial f'}$ is adjusted with respect to $t$ (see Section 3), it also converges weakly to a stochastic limit matrix. Both Hessian matrices are almost surely negative definite, ensuring that the limit functions are globally concave.

3 Main Results

In this section, we delve into the theoretical foundations of the maximum likelihood estimation presented in (4). To facilitate understanding, we introduce the class of regular functions.

Definition 1.
A function $f\in\mathcal F_R:\mathbb R\to\mathbb R$ is termed regular if it satisfies the following conditions: (i) $|f(x)|\le M$ for some $M>0$ and all $x\in\mathbb R$; (ii) $\int_{-\infty}^{\infty}|f(x)|dx<\infty$; (iii) $f$ is differentiable with a bounded derivative.

Let $\mathcal F_I$ be the set of functions that are bounded and integrable, and $\mathcal F_B$ be the set of bounded functions that vanish at infinity. Clearly, the inclusions $\mathcal F_R\subset\mathcal F_I\subset\mathcal F_B$ hold. Next, we define the leading terms in the score and Hessian as follows:
$$M=\frac{\dot\Psi}{\Psi(1-\Psi)},\qquad K=M\dot\Psi=M^{2}\Psi(1-\Psi).$$
For the logit case, $M(x)=1$ and $K(x)=e^{x}/(1+e^{x})^{2}$. For the probit case, $M(x)=\phi(x)/(\Phi(x)(1-\Phi(x)))$ and $K(x)=\phi^{2}(x)/(\Phi(x)(1-\Phi(x)))$, where $\phi$ is the probability density function of a standard normal variable, and $\Phi$ is the cumulative distribution function of the standard normal distribution.

3.1 Theoretical Results for Nonstationary Models

In this subsection, we assume that the process $\{g_{0it}\}$ is integrated and of full rank, indicating the absence of cointegrating relationships among its component time series. To develop an asymptotic theory for the estimators $(\hat A,\hat F)$ defined in (4), we impose several assumptions on the functions $\Psi$, $M$, and $K$.

Assumption 3. (Function Categories) The function $\Psi$ is three times differentiable on $\mathbb R$. Additionally, the following conditions hold: (i) $K^{2}\in\mathcal F_R$; (ii) $\dot\Psi$, $\dot M\dot\Psi$, $(M\ddot\Psi)^{2}$, $\ddot M\dot\Psi$, $(\ddot M\Psi^{1/2}(1-\Psi)^{1/2})^{2}\in\mathcal F_I$; (iii) $(\dot M\Psi^{1/2}(1-\Psi)^{1/2})^{2}$, $(M^{3}\dot\Psi)^{4}\in\mathcal F_B$.

These assumptions are mild and are satisfied by common models such as logit and probit. For notational convenience in the subsequent derivations, we define $F_v=(f_1',\ldots,f_T')'$ and $A_v=(\alpha_1',\ldots,\alpha_N')'$. To facilitate the analysis, we introduce block matrices for the score and Hessian. Define $S_{NT,1}(A,F)=\frac{\partial}{\partial A_v}\log L(A,F)$, $J_{NT,11}(A,F)=\frac{\partial^{2}}{\partial A_v\partial A_v'}\log L(A,F)$, $S_{NT,2}(A,F)=\frac{\partial}{\partial F_v}\log L(A,F)$, and $J_{NT,22}(A,F)=\frac{\partial^{2}}{\partial F_v\partial F_v'}\log L(A,F)$. The score function with respect to $A$ is expressed as
$$S_{NT,1}(A,F)=\left(S^{(1)\prime}_{NT,1}(\alpha_1,F),\ldots,S^{(N)\prime}_{NT,1}(\alpha_N,F)\right)'_{N(q+r)\times 1}.$$
The corresponding Hessian is
$$J_{NT,11}(A,F)=\mathrm{diag}\left(J^{(1)}_{NT,11}(\alpha_1,F),\ldots,J^{(N)}_{NT,11}(\alpha_N,F)\right)_{N(q+r)\times N(q+r)}.$$
Similarly, the score function with respect to $F$ is $S_{NT,2}(A,F)=\left(S^{(1)\prime}_{NT,2}(A,f_1),\ldots,S^{(T)\prime}_{NT,2}(A,f_T)\right)'_{Tr\times 1}$, and the Hessian is $J_{NT,22}(A,F)=\mathrm{diag}\left(J^{(1)}_{NT,22}(A,f_1),\ldots,J^{(T)}_{NT,22}(A,f_T)\right)_{Tr\times Tr}$. The diagonal structure of the Hessian is straightforward to verify. For $\alpha_i$,
$$S^{(i)}_{NT,1}(\alpha_i,F)=\sum_{t=1}^{T}M(z_{it})g_{it}(y_{it}-\Psi(z_{it})),$$
$$J^{(i)}_{NT,11}(\alpha_i,F)=-\sum_{t=1}^{T}K(z_{it})g_{it}g_{it}'+\sum_{t=1}^{T}\dot M(z_{it})g_{it}g_{it}'(y_{it}-\Psi(z_{it})).$$
For $f_t$,
$$S^{(t)}_{NT,2}(A,f_t)=\sum_{i=1}^{N}M(z_{it})\lambda_i(y_{it}-\Psi(z_{it})),$$
$$J^{(t)}_{NT,22}(A,f_t)=-\sum_{i=1}^{N}K(z_{it})\lambda_i\lambda_i'+\sum_{i=1}^{N}\dot M(z_{it})\lambda_i\lambda_i'(y_{it}-\Psi(z_{it})).$$
Due to the unit root behavior of $z_{0it}$, its probability mass spreads out in a manner similar to a Lebesgue type. Given that $K$ is integrable (as per Assumption 3), $K(z_{0it})\approx 0$ outside the effective range of $K$. This indicates that only moderate values of $z_{0it}$ prevent $K(z_{0it})$ from diminishing. Unlike $\sum_{t=1}^{T}K(z_{0it})g_{0it}g_{0it}'$, the sum $\sum_{i=1}^{N}K(z_{0it})\lambda_{0i}\lambda_{0i}'$ varies with time $t$, reflecting the spread of $z_{0it}$ at specific time points. Therefore, to normalize the Hessian appropriately, it is crucial to analyze the convergence behavior of $\sum_{i=1}^{N}K(z_{0it})$ and select a suitable normalizing sequence in $t$. Based on this analysis, we introduce the following assumptions.

Assumption 4. (Integrable Functions) (i) For a normally distributed random variable $z\sim N(0,\sigma^{2})$, we assume $E(K(z))\asymp\sigma^{-2\delta}$.
Additionally, $E(\dot K(z)z)\vee E(\ddot M(\Psi(1-\Psi))^{1/2}(z))\vee E(K^{2}(z)z^{2})\vee E(K^{2}(z))\lesssim\sigma^{-2\delta}$ for $\sigma^{-1}=o(1)$ and $\delta\in(1/4,3/4)$.
(ii) The variance $\mathrm{Var}\big(\frac{t^{\delta}}{N}\sum_{i=1}^{N}K(z_{0it})\big)\to 0$ as $N\to\infty$ for all $t=1,\ldots,T$.
(iii) For each $t$, as $N\to\infty$, $\frac{t^{\delta/2}}{\sqrt N}\sum_{i=1}^{N}M(z_{0it})\lambda_{0i}u_{it}\to_D N(0,\Omega_{f,t})$, where the covariance matrix $\Omega_{f,t}=\lim_{N\to\infty}\frac{t^{\delta}}{N}\sum_{i=1}^{N}E[K(z_{0it})]\lambda_{0i}\lambda_{0i}'$.
(iv) The sequences satisfy $T^{\delta'}/N=o(1)$ and $N^{\delta''}/T=o(1)$ for some $\delta''>0$, where $\delta'\ge\max(\delta,\delta/2+1/2)$.
(v) For functions $f$ defined in Assumptions 3(i) and (ii), $\frac{1}{\sqrt T}\sum_{t=1}^{T}t^{\delta}f\big(h^{(1)}_{0it}\big)=O_P(1)$, where $h^{(1)}_{0it}=z_{0it}/\|\alpha_{0i}\|$.

Assumption 4(i) is mild. To illustrate its plausibility, simulations (reported in the Supplementary Material) indicate that $\delta$ is approximately 0.5 in both the logit and probit models. Assumption 4(ii) ensures that the sum $\sum_{t=1}^{T}K(z_{0it})$ appropriately utilizes the properties outlined in Assumption 4(i). In the degenerate case, where $z_{0it}$ is independent across $i$ and $t$ takes moderate values, the variance satisfies $\mathrm{Var}\big(\frac{t^{\delta}}{N}\sum_{i=1}^{N}K(z_{0it})\big)=O(1/N)$. In the stationary case, the $\delta$ in Assumption 4(iii) would be 0. Assumption 4(iv) imposes a constraint on the sample size and the dimensionality. Assumption 4(v) addresses information overflow in the cross-section and therefore requires a slightly stronger condition than Assumption 3.

As is standard, we have the following Taylor expansions:
$$0=S^{(i)}_{NT,1}(\hat\alpha_i,\hat F)=S^{(i)}_{NT,1}(\alpha_{0i},F_0)+J^{(i)}_{NT,11}(\alpha_i,F)(\hat\alpha_i-\alpha_{0i})+\sum_{t=1}^{T}J^{(i,t)}_{NT,12}(\alpha_i,f_t)(\hat f_t-f_{0t}),$$
$$0=S^{(t)}_{NT,2}(\hat A,\hat f_t)=S^{(t)}_{NT,2}(A_0,f_{0t})+J^{(t)}_{NT,22}(A,f_t)(\hat f_t-f_{0t})+\sum_{i=1}^{N}J^{(t,i)}_{NT,21}(\alpha_i,f_t)(\hat\alpha_i-\alpha_{0i}),\quad(5)$$
where $J^{(i,t)}_{NT,12}$ denotes the $(i,t)$th block of the matrix $J_{NT,12}=\frac{\partial^{2}}{\partial A_v\partial F_v'}\log L(A,F)$, $J^{(t,i)}_{NT,21}$ denotes the $(t,i)$th block of the matrix $J_{NT,21}=\frac{\partial^{2}}{\partial F_v\partial A_v'}\log L(A,F)$, and $(\alpha_i',f_t')$ is some point between $(\hat\alpha_i',\hat f_t')$ and $(\alpha_{0i}',f_{0t}')$.
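As a quick sanity check on the score and leading Hessian term defined above, the sketch below (our own illustration on hypothetical toy data, with the regressors $g_{it}$ stacked in a matrix `G`) verifies the logit score $S^{(i)}_{NT,1}$ against a central finite difference of the Bernoulli log-likelihood, and confirms that the leading Hessian term is negative semidefinite; for logit, $M=1$, $\dot M=0$, and $K(z)=\Psi(z)(1-\Psi(z))$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loglik(alpha, G, y):
    # Bernoulli log-likelihood for one unit i: sum_t [y log Psi + (1-y) log(1-Psi)]
    p = sigmoid(G @ alpha)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def score(alpha, G, y):
    # S^(i)_{NT,1} = sum_t M(z_t) g_t (y_t - Psi(z_t)); M = 1 in the logit case
    return G.T @ (y - sigmoid(G @ alpha))

def hessian_lead(alpha, G, y):
    # leading term -sum_t K(z_t) g_t g_t' with K = Psi(1-Psi);
    # the dot-M term vanishes for logit since M is constant
    p = sigmoid(G @ alpha)
    return -(G * (p * (1 - p))[:, None]).T @ G

rng = np.random.default_rng(0)       # toy data: T = 50 periods, q + r = 3
T, d = 50, 3
G = rng.standard_normal((T, d))      # stand-ins for the g_it
alpha0 = np.array([0.5, -1.0, 0.25])
y = (rng.random(T) < sigmoid(G @ alpha0)).astype(float)

eps = 1e-6                           # central finite difference of the log-likelihood
num_grad = np.array([(loglik(alpha0 + eps * e, G, y)
                      - loglik(alpha0 - eps * e, G, y)) / (2 * eps)
                     for e in np.eye(d)])
print(np.max(np.abs(num_grad - score(alpha0, G, y))))  # close to 0
```

For logit the observed Hessian coincides with its leading term, so the negative semidefiniteness driving the concavity argument can be checked directly on `hessian_lead`.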
The asymptotic theory for $(\hat A,\hat F)$ can be derived from Equation (5). To aid in the development of this theory, we rotate the coordinate system based on the true parameter $A_0$ using an orthogonal matrix $Q_i\in\mathbb R^{(q+r)\times(q+r)}$, where $Q_i=(Q^{(1)}_i,Q^{(2)}_i)$ and $Q^{(1)}_i=\alpha_{0i}/\|\alpha_{0i}\|$.¹ This matrix $Q_i$ will be used to rotate all vectors in $\mathbb R^{q+r}$ for $i=1,\ldots,N$.

¹ Because the estimators converge at different rates in the directions parallel and orthogonal to $\alpha_{0i}$, performing a rotation enables a more comprehensive theoretical examination.
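In practice $Q_i$ can be any orthogonal completion of $\alpha_{0i}/\|\alpha_{0i}\|$. A minimal sketch (the `rotation` helper is our own, not from the paper) builds one via a QR factorization of a basis whose first vector is $\alpha_{0i}$:

```python
import numpy as np

def rotation(alpha0):
    """Orthogonal Q_i = (Q1, Q2) with first column alpha0/||alpha0||.
    Works for generic alpha0 (first entry nonzero); the particular
    orthogonal completion Q2 is irrelevant for the theory."""
    d = alpha0.size
    basis = np.column_stack([alpha0, np.eye(d)[:, 1:]])
    Q, _ = np.linalg.qr(basis)
    if Q[:, 0] @ alpha0 < 0:       # fix the sign so Q[:,0] = +alpha0/||alpha0||
        Q[:, 0] = -Q[:, 0]
    return Q

alpha0 = np.array([2.0, 1.0, -1.0])   # a hypothetical alpha_{0i}
Q = rotation(alpha0)
theta0 = Q.T @ alpha0                 # theta_{0i} = Q_i' alpha_{0i}
print(theta0)  # first entry ||alpha0||, remaining entries 0
```

Because $Q^{(2)}_i$ is identified only up to an orthogonal transformation, any completion yields the same rotated parameter: $\theta^{(1)}_{0i}=\|\alpha_{0i}\|$ and $\theta^{(2)}_{0i}=0$.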
Specifically, we define the following quantities:
$$\theta_{0i}:=Q_i'\alpha_{0i}=(\theta^{(1)}_{0i},\theta^{(2)\prime}_{0i})',\quad\text{where }\theta^{(1)}_{0i}=\|\alpha_{0i}\|,\ \theta^{(2)}_{0i}=Q^{(2)\prime}_i\alpha_{0i}=0,$$
$$h_{0it}:=Q_i'g_{0it}=(h^{(1)}_{0it},h^{(2)\prime}_{0it})',\quad\text{where }h^{(1)}_{0it}=\alpha_{0i}'g_{0it}/\|\alpha_{0i}\|=z_{0it}/\|\alpha_{0i}\|,\ h^{(2)}_{0it}=Q^{(2)\prime}_ig_{0it}.$$
In the general case, we define $\theta_i:=Q_i'\alpha_i$ and $h_{it}:=Q_i'g_{it}$. With these definitions, we can rewrite the model as $y_{it}=\Psi(\alpha_{0i}'Q_iQ_i'g_{0it})+u_{it}=\Psi(\theta_{0i}'h_{0it})+u_{it}$. By Assumption 1 and the continuous mapping theorem, we obtain the following convergence results for $s\in[0,1]$:
$$\frac{1}{\sqrt T}h^{(1)}_{0i[Ts]}\to_D H_{1i}(s)=Q^{(1)\prime}_iH_i(s)\quad\text{and}\quad\frac{1}{\sqrt T}h^{(2)}_{0i[Ts]}\to_D H_{2i}(s)=Q^{(2)\prime}_iH_i(s).$$
It is important to note that the rotation is not required in practice, and indeed is conceptually impossible since $A_0$ is unknown. The rotation serves only as a tool for deriving the asymptotic theory for the proposed estimators.

If $\hat\theta_i$ is the maximum likelihood estimator of $\theta_{0i}$, then $\hat\theta_i=Q_i'\hat\alpha_i$. The score function and Hessian for the parameter $\theta_i$ can be expressed in terms of $\alpha_i$ as follows: $S^{(i)}_{NT,1}(\theta_i,F)=Q_i'S^{(i)}_{NT,1}(\alpha_i,F)$ and $J^{(i)}_{NT,11}(\theta_i,F)=Q_i'J^{(i)}_{NT,11}(\alpha_i,F)Q_i$. Using this relationship, we can derive the following Taylor expansion:
$$0=S^{(i)}_{NT,1}(\hat\theta_i,\hat F)=S^{(i)}_{NT,1}(\theta_{0i},F_0)+J^{(i)}_{NT,11}(\theta_i,F)(\hat\theta_i-\theta_{0i})+\sum_{t=1}^{T}J^{(i,t)}_{NT,12}(\theta_i,f_t)(\hat f_t-f_{0t}).\quad(6)$$
Define $\Theta_0=(\theta_{01},\ldots,\theta_{0N})'$, $D_T=\mathrm{diag}(T^{1/4},T^{3/4}I_{q+r-1})$, $B_N=\mathrm{diag}(1^{\delta/2},2^{\delta/2},\ldots,T^{\delta/2})$, and $C_{NT}=\min\{\sqrt N,\sqrt T\}$.

Assumption 5. (Covariance) $\rho_{\max}(A_{11})$, $\rho_{\max}(A_{22})$, $\rho_{\max}(A^{-1}_{11})$, $\rho_{\max}(A^{-1}_{22})$, $\rho_{\max}(A_{12}A^{-1}_{22}A_{21})$, and $\rho_{\max}(A_{21}A^{-1}_{11}A_{12})$ are finite, where $A_{11}=(I\otimes D_T)^{-1}J_{NT,11}(1)(I\otimes D_T)^{-1}$, $A_{12}=(I\otimes D_T)^{-1}J_{NT,12}(1)\frac{(B_N\otimes I)}{\sqrt N}$, $A_{21}=A_{12}'$, and $A_{22}=\frac{(B_N\otimes I)}{\sqrt N}J_{NT,22}(1)\frac{(B_N\otimes I)}{\sqrt N}$, with the $i$th block of $J_{NT,11}(1)$ given by $J^{(i)}_{NT,11}(1)=-\sum_{t=1}^{T}K(z_{0it})h_{0it}h_{0it}'$, the $t$th block of $J_{NT,22}(1)$ given by $J^{(t)}_{NT,22}(1)=-\sum_{i=1}^{N}K(z_{0it})\lambda_{0i}\lambda_{0i}'$, and the $(i,t)$th block of $J_{NT,12}(1)$ given by $J^{(i,t)}_{NT,12}(1)=-K(z_{0it})h_{0it}\lambda_{0i}'$.

Assumption 5 is used to derive results related to the inverse of the Hessian matrix.
It is a mild assumption because both $A_{11}$ and $A_{22}$ are block-diagonal matrices.

We now present the average rate of convergence for $\hat\Theta$ and $\hat F$.

Theorem 3.1. Under Assumptions 1-5, the following results hold:
$$\frac{1}{\sqrt N}\big\|T^{-1/4}(\hat\Theta-\Theta_0)D_T\big\|=O_P\big(T^{1/4}C^{-1}_{NT}\big)\quad\text{and}\quad\frac{1}{\sqrt T}\big\|B^{-1}_N(\hat F-F_0)\big\|=O_P\big(C^{-1}_{NT}\big).$$

Theorem 3.1 shows that the estimator $\hat\Theta$ exhibits dual convergence rates. (i) Along the coordinates parallel to $\{\alpha_{0i},i=1,\ldots,N\}$ (i.e., $(\hat\theta^{(1)}_1,\ldots,\hat\theta^{(1)}_N)'/\sqrt N$), the average rate of convergence is $T^{1/4}C^{-1}_{NT}$. (ii) Along the coordinates orthogonal to $\{\alpha_{0i},i=1,\ldots,N\}$ (i.e., $(\hat\theta^{(2)}_1,\ldots,\hat\theta^{(2)}_N)'/\sqrt N$), the average rate of convergence is $T^{-1/4}C^{-1}_{NT}$.

For $\hat F$, the rate of convergence depends on $B_N$, with the difference $\hat f_t-f_{0t}$ scaled by $t^{\delta/2}$. If $T^{\delta}/N=o(1)$, the estimators $\{\hat f_t\}$ are consistent; otherwise, only a subset of $\{\hat f_t\}$ is consistent. It is noteworthy that the estimation of $f_{0t}$ becomes less accurate for larger $t$. This is intuitive, as $\mathrm{Var}(f_{0t})=O(t)$, meaning that the uncertainty increases as $t$ grows. To the best of our knowledge, this is the first time such a phenomenon has been observed in the context of factor estimators. The explanation lies in the fact that $J^{(t)}_{NT,22}(A_0,f_{0t})\approx 0$ for larger $t$.

After performing a rotation $A=(Q_1\theta_1,\ldots,Q_N\theta_N)$, we present the convergence rates for $\hat A$ in Corollary 3.1 below.

Corollary 3.1. Under Assumptions 1-5, $\frac{1}{\sqrt N}\|\hat A-A_0\|=O_P\big(T^{1/4}C^{-1}_{NT}\big)$.

Corollary 3.1 demonstrates the collective consistency of the estimator $\hat A$. The subsequent theorem provides the asymptotic distributions of $\hat\theta_i$ and $\hat f_t$.

Theorem 3.2. Under Assumptions 1-5, as $N,T\to\infty$,
$$D_T(\hat\theta_i-\theta_{0i})\to_D\Omega^{-1/2}_{\theta,i}W_i(1)\quad\text{and}\quad\sqrt N\,t^{-\delta/2}(\hat f_t-f_{0t})\to_D N(0,\Omega^{-1}_{f,t}),$$
where $W_i(1)$ is a $(q+r)$-dimensional vector of Brownian motion with covariance matrix $I_{q+r}$, independent of $H$.
Let
$$\Omega_{\theta,i}=\begin{pmatrix} L_{1i}(1,0)\int_{\mathbb R}m^{2}K(\|\alpha_{0i}\|m)dm & \int_0^1 H_{2i}(s)'dL_{1i}(s,0)\int_{\mathbb R}mK(\|\alpha_{0i}\|m)dm\\ \int_0^1 H_{2i}(s)dL_{1i}(s,0)\int_{\mathbb R}mK(\|\alpha_{0i}\|m)dm & \int_0^1 H_{2i}(s)H_{2i}(s)'dL_{1i}(s,0)\int_{\mathbb R}K(\|\alpha_{0i}\|m)dm\end{pmatrix}$$
and $\Omega_{f,t}=\lim_{N\to\infty}\frac{t^{\delta}}{N}\sum_{i=1}^{N}E[K(z_{0it})]\lambda_{0i}\lambda_{0i}'$, where $L_{1i}(s,0)=L_{H_{1i}}(s,0)\sigma_{H_{1i}}$, with $L_{H_{1i}}(s,0)$ being the local time of $H_{1i}$ and $\sigma_{H_{1i}}$ its variance.

The asymptotic behavior of the estimator $\hat f_t$ varies with $t$, influenced by the Hessian matrix. Recalling the notation $D_T$, two distinct limiting distributions for $\hat\theta_i$ emerge:
$$T^{1/4}\big(\hat\theta^{(1)}_i-\theta^{(1)}_{0i}\big)\to_D MN\big(0,\bar\omega^{\theta,i}_{11}\big)\quad\text{and}\quad T^{3/4}\big(\hat\theta^{(2)}_i-\theta^{(2)}_{0i}\big)\to_D MN\big(0,\bar\omega^{\theta,i}_{22}\big),\quad(7)$$
where $\bar\omega^{\theta,i}_{11}=\big(\omega^{\theta,i}_{11}-\omega^{\theta,i}_{12}(\omega^{\theta,i}_{22})^{-1}\omega^{\theta,i}_{21}\big)^{-1}$ and $\bar\omega^{\theta,i}_{22}=\big(\omega^{\theta,i}_{22}-\omega^{\theta,i}_{21}(\omega^{\theta,i}_{11})^{-1}\omega^{\theta,i}_{12}\big)^{-1}$, with
$$\Omega_{\theta,i}:=\begin{pmatrix}\omega^{\theta,i}_{11}&\omega^{\theta,i}_{12}\\ \omega^{\theta,i}_{21}&\omega^{\theta,i}_{22}\end{pmatrix}\quad\text{and}\quad\omega^{\theta,i}_{11}=L_{1i}(1,0)\int_{\mathbb R}m^{2}K(\|\alpha_{0i}\|m)dm.$$
The dual convergence rates presented in Equation (7) are not surprising; similar results have been observed in various problems involving nonlinear functions, such as Park and Phillips (2000) and Dong et al. (2016). This implies that, in multivariate cases ($q+r>1$), modest values of $\{g_{0it}\}$ significantly influence a nonlinear function along $\alpha_{0i}/\|\alpha_{0i}\|$. In contrast, there are no such restrictions on $\{g_{0it}\}$ in the direction orthogonal to $\alpha_{0i}/\|\alpha_{0i}\|$, allowing larger values of $\{g_{0it}\}$ to contribute.

We introduce the normalized estimators $\hat\alpha^{\circ}_i:=\hat\alpha_i/\|\hat\alpha_i\|$ and $\hat\theta^{\circ}_i:=\hat\theta_i/\|\hat\theta_i\|$, derived from $\hat\theta_i=Q_i'\hat\alpha_i$ and $\|\hat\alpha_i\|=\|\hat\theta_i\|$. Specifically, $\hat\theta^{\circ}_i=(\hat\theta^{(1)\circ}_i,\hat\theta^{(2)\circ\prime}_i)':=(\hat\theta^{(1)}_i/\|\hat\theta_i\|,\hat\theta^{(2)\prime}_i/\|\hat\theta_i\|)'$. The following corollary characterizes the asymptotic behavior of $\hat\theta^{(1)\circ}_i$ and $\hat\theta^{(2)\circ}_i$.

Corollary 3.2. Under Assumptions 1-5, as $T\to\infty$,
$$T^{3/2}\big(\hat\theta^{(1)\circ}_i-1\big)\to_D-\frac{1}{2\|\alpha_{0i}\|^{2}}\|\zeta_i\|^{2}\quad\text{and}\quad T^{3/4}\hat\theta^{(2)\circ}_i\to_D\frac{1}{\|\alpha_{0i}\|}\zeta_i,$$
where $\zeta_i\sim MN(0,\bar\omega^{\theta,i}_{22})$.

After normalization, the convergence rate along the $\alpha_{0i}/\|\alpha_{0i}\|$ direction increases to $T^{3/2}$, while in the orthogonal direction it remains $T^{3/4}$.
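One way to see the accelerated $T^{3/2}$ rate is a second-order expansion of the normalization map. The following algebra is our own sketch of the missing step, using $\theta^{(2)}_{0i}=0$ and the rates in (7):

```latex
\hat\theta^{(1)\circ}_i
  = \frac{\hat\theta^{(1)}_i}{\sqrt{(\hat\theta^{(1)}_i)^2+\|\hat\theta^{(2)}_i\|^2}}
  = \Big(1+\|\hat\theta^{(2)}_i\|^2/(\hat\theta^{(1)}_i)^2\Big)^{-1/2}
  = 1-\frac{\|\hat\theta^{(2)}_i\|^2}{2(\hat\theta^{(1)}_i)^2}+o_P(T^{-3/2}).
% Since \hat\theta^{(2)}_i = O_P(T^{-3/4}) by (7) and \hat\theta^{(1)}_i \to_P \|\alpha_{0i}\|,
T^{3/2}\big(\hat\theta^{(1)\circ}_i-1\big)
  = -\frac{\big\|T^{3/4}\hat\theta^{(2)}_i\big\|^2}{2\|\alpha_{0i}\|^2}+o_P(1)
  \to_D -\frac{1}{2\|\alpha_{0i}\|^2}\|\zeta_i\|^2 .
```

The first-order estimation error in the parallel direction is absorbed by the normalization, leaving only the quadratic contribution of the orthogonal error, which is why the rate doubles from $T^{3/4}$ to $T^{3/2}$.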
By leveraging the linear relationship between $\hat\alpha_i$ and $\hat\theta_i$ (and similarly between $\hat\alpha^{\circ}_i$ and $\hat\theta^{\circ}_i$), we can derive the following asymptotic distributions.

Theorem 3.3. Under Assumptions 1-5, as $T\to\infty$,
$$T^{1/4}(\hat\alpha_i-\alpha_{0i})\to_D MN\left(0,\bar\omega^{\theta,i}_{11}\frac{\alpha_{0i}\alpha_{0i}'}{\|\alpha_{0i}\|^{2}}\right)\quad\text{and}\quad T^{3/4}\left(\hat\alpha^{\circ}_i-\frac{\alpha_{0i}}{\|\alpha_{0i}\|}\right)\to_D MN\left(0,\frac{Q^{(2)}_i\bar\omega^{\theta,i}_{22}Q^{(2)\prime}_i}{\|\alpha_{0i}\|^{2}}\right).$$

Here, the normalization of $\hat\alpha_i$ scales it to the unit sphere, focusing on angular convergence rather than magnitude. Consequently, the convergence rate is accelerated due to the differing rates for $\hat\theta_i$. This suggests that imposing the constraint $\|\alpha_{0i}\|=1$ on the binary probability allows $\hat\alpha^{\circ}_i$ to serve as a more precise estimator of $\alpha_{0i}$.

Estimating binary event probabilities is a crucial aspect of statistical analysis. The following corollary presents the corresponding theoretical result.

Corollary 3.3. Under Assumptions 1-5, as $T,N\to\infty$,
$$\frac{C_{NT,t}\big(\Psi(\hat z_{it})-\Psi(z_{0it})\big)}{|\dot\Psi(z_{0it})|\sqrt{\dfrac{C^{2}_{NT,t}}{\sqrt T}\bar\omega^{\theta,i}_{11}(\alpha_{0i}'g_{0it})^{2}+\dfrac{C^{2}_{NT,t}t^{\delta}}{N}\lambda_{0i}'\Omega^{-1}_{f,t}\lambda_{0i}}}\to_D N(0,1),$$
where $C_{NT,t}=\min\{N^{1/2}t^{-\delta/2},T^{1/4}\}$.

Corollary 3.3 indicates that the convergence rate is $\min\{N^{1/2}t^{-\delta/2},T^{1/4}\}$. If $F_0$ is observable, the convergence rate for $\hat\alpha_i$ is $T^{1/4}$; if $A_0$ is observable, the rate for $\hat f_t$ is $N^{1/2}t^{-\delta/2}$. Therefore, when estimating both parameters simultaneously, the best rate for $\hat z_{it}$ is the minimum of $T^{1/4}$ and $N^{1/2}t^{-\delta/2}$.

To extend the previous results to the plug-in version, we first derive estimators for the quantities of interest. According to Theorems 3.2 and 3.3, approximately $\hat f_t\sim N\big(f_{0t},\frac{t^{\delta}}{N}\Omega^{-1}_{f,t}\big)$ and $\hat\alpha_i\sim MN\big(\alpha_{0i},\frac{1}{\sqrt T}\bar\omega^{\theta,i}_{11}\frac{\alpha_{0i}\alpha_{0i}'}{\|\alpha_{0i}\|^{2}}\big)$. Based on these distributions, we define two estimators for the inverse Hessian of $J^{(i)}_{NT,11}(\alpha_{0i},F_0)$: $\big[J^{(i)}_{NT,11}(\hat\alpha_i,\hat F)\big]^{-1}$ and $\big[\bar J^{(i)}_{NT,11}(\hat\alpha_i,\hat F)\big]^{-1}$.
Similarly, for the inverse Hessian of $J^{(t)}_{NT,22}(A_0,f_{0t})$, we define $\big[J^{(t)}_{NT,22}(\hat A,\hat f_t)\big]^{-1}$ and $\big[\bar J^{(t)}_{NT,22}(\hat A,\hat f_t)\big]^{-1}$. Specifically, we define the dominant terms of the Hessians as
$$\bar J^{(i)}_{NT,11}(\hat\alpha_i,\hat F)=-\sum_{t=1}^{T}K(\hat z_{it})\hat g_{it}\hat g_{it}'\quad\text{and}\quad\bar J^{(t)}_{NT,22}(\hat A,\hat f_t)=-\sum_{i=1}^{N}K(\hat z_{it})\hat\lambda_i\hat\lambda_i',$$
where $\hat z_{it}=\hat\beta_i'x_{it}+\hat\lambda_i'\hat f_t$ and $\hat g_{it}=(x_{it}',\hat f_t')'$. The following corollary establishes the consistency of the proposed estimators.

Corollary 3.4. Under Assumptions 1-5, as $T\to\infty$, the following results hold:
$$-\sqrt T\big[J^{(i)}_{NT,11}(\hat\alpha_i,\hat F)\big]^{-1},\ -\sqrt T\big[\bar J^{(i)}_{NT,11}(\hat\alpha_i,\hat F)\big]^{-1}\to_P\bar\omega^{\theta,i}_{11}\frac{\alpha_{0i}\alpha_{0i}'}{\|\alpha_{0i}\|^{2}},$$
$$-Nt^{-\delta}\big[J^{(t)}_{NT,22}(\hat A,\hat f_t)\big]^{-1},\ -Nt^{-\delta}\big[\bar J^{(t)}_{NT,22}(\hat A,\hat f_t)\big]^{-1}\to_P\Omega^{-1}_{f,t}.$$

Based on Corollary 3.4 and Slutsky's theorem, we replace the limiting distribution of $\hat\alpha_i$ in Theorem 3.3 and that of $\hat f_t$ in Theorem 3.2 to obtain the following plug-in version of the limiting distributions:
$$\big[-J^{(i)}_{NT,11}(\hat\alpha_i,\hat F)\big]^{1/2}(\hat\alpha_i-\alpha_{0i})\to_D N(0,I_{q+r})\quad\text{and}\quad\big[-J^{(t)}_{NT,22}(\hat A,\hat f_t)\big]^{1/2}(\hat f_t-f_{0t})\to_D N(0,I_r).$$

Next, we provide the local time estimator for $L_{1i}(1,0)$:
$$\hat L_{1i}(1,0)=\frac{\|\hat\alpha_i\|}{\sqrt T}\sum_{t=1}^{T}\dot\Psi(\hat z_{it}).\quad(8)$$
This estimator is intuitive because, by consistency, $\frac{\|\alpha_{0i}\|}{\sqrt T}\sum_{t=1}^{T}\dot\Psi(z_{0it})\to_D L_{1i}(1,0)\,\Psi(z)\big|_{-\infty}^{\infty}=L_{1i}(1,0)$. Finally, we define the mean squared error (MSE) for the observations to assess the accuracy of the model estimates:
$$\widehat{\mathrm{MSE}}=\frac{1}{N\sqrt T}\sum_{i=1}^{N}\sum_{t=1}^{T}\big[y_{it}-\Psi(\hat z_{it})\big]^{2}.$$
The following proposition establishes the consistency of the proposed estimators.

Proposition 1. Under Assumptions 1-5, as $T\to\infty$, the following results hold:
$$\hat L_{1i}(1,0)\to_P L_{1i}(1,0)\quad\text{and}\quad\widehat{\mathrm{MSE}}\to_P\int_{\mathbb R}\Psi(s)(1-\Psi(s))ds\left(\operatorname*{plim}_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\frac{L_{1i}(1,0)}{\|\alpha_{0i}\|}\right).$$

The local time estimator in Equation (8) is not unique; it suffices that the nonlinear function within it is integrable and integrates to 1 over $\mathbb R$. Additionally, $\widehat{\mathrm{MSE}}$ serves as an approximation of $\frac{1}{N\sqrt T}\log L(\hat B,\hat\Lambda,\hat F)$ in (2) (see Gao et al. 2023 for this insight), assessing the model's goodness of fit.

Remark 2. When allowing for partial cointegration in the series $\{g_{0it}\}$ (that is, a cointegration rank smaller than $q+r-1$) while maintaining the nonstationarity of $\{z_{0it}\}$, similar results can be obtained.
For instance, if $\lambda_i'f_{0t}$ is stationary for all $i=1,\ldots,N$, the primary characteristics of the model remain largely unchanged, except for $H_{1i}$ and $H_{2i}$, since $\{z_{0it}\}$ continues to be nonstationary. In this scenario, the local time corresponds to the Brownian motion $\beta_{0i}'E_i(s)/\|\alpha_{0i}\|$, rather than $(\beta_{0i}'E_i(s)+\lambda_{0i}'V(s))/\|\alpha_{0i}\|$, leading to a modification in $H_{2i}$. To uphold the asymptotic theory under these conditions, it is essential that $\{z_{0it}\}$ remains nonstationary and that the cointegration rank of each $\{g_{0it}\}$ is less than $q+r-1$, thereby ensuring the persistence of dual convergence rates.

3.2 Selecting the Number of Factors

In this section we introduce a rank minimization method to select the number of factors. Assume $k$ is a positive integer larger than $r$. We solve the optimization problem in Equation (4) using $k$ factors, resulting in estimators $\hat B^{k}$, $\hat\Lambda^{k}$, and $\hat F^{k}$. Define $(\hat\Lambda^{k})'\hat\Lambda^{k}/N=\mathrm{diag}(\hat\sigma^{k}_{N,1},\ldots,\hat\sigma^{k}_{N,k})$. The estimator for the number of factors is then given by
$$\hat r=\sum_{j=1}^{k}I\{\hat\sigma^{k}_{N,j}>\pi_{NT}\},$$
where $\pi_{NT}$ is a sequence approaching zero as $N,T\to\infty$. In other words, $\hat r$ counts the number of diagonal elements of $(\hat\Lambda^{k})'\hat\Lambda^{k}/N$ that exceed the threshold $\pi_{NT}$. To elucidate, decompose $\hat\Lambda^{k}$ into $\hat\Lambda^{k}=(\hat\Lambda^{k}_{r},\hat\Lambda^{k}_{-r})$, where $\hat\Lambda^{k}_{r}$ comprises the first $r$ columns of $\hat\Lambda^{k}$ and $\hat\Lambda^{k}_{-r}$ includes the remaining $k-r$ columns. It can be shown that $\hat\sigma^{k}_{N,j}=\sigma^{k}_{N,j}+o_P(1)\to_P\sigma_j$ for $j=1,\ldots,r$, and $\hat\sigma^{k}_{N,j}=o_P(1)$ for $j>r$. Consequently, $(\hat\Lambda^{k})'\hat\Lambda^{k}/N$ converges in probability, at certain rates, to a matrix of rank $r$. By selecting a suitable threshold $\pi_{NT}$, greater than this rate and less than $\sigma_r$, we can accurately determine $r$.

Theorem 3.4. Under Assumptions 1-5, as $N,T\to\infty$, if $k>r$, $\sqrt T/N=o(1)$, $\pi_{NT}\to 0$, and $\pi_{NT}C^{2}_{NT}T^{-1/2}\to\infty$, then $P(\hat r=r)\to 1$.

A threshold value of $\pi_{NT}=\hat\sigma^{k}_{N,1}\big(C^{2}_{NT}T^{-1/2}\big)^{-1/3}$ has been found to perform well in numerical simulations. Some methods of choosing the number of factors with the help of eigenvalue properties can also be generalized here, e.g., Trapani (2018) and Yu et al. (2024).

3.3 Cointegrated Single Index

In this subsection, we examine the case where linear cointegration occurs among the components of $g_{0it}$ for all $i=1,\ldots,N$; in other words, $\alpha_{0i}'g_{0it}\sim I(0)$ for every $i$.

Within this context, we solve the optimization problem in Equation (4) under the assumption that $g_{0it}$ is a $(q+r)$-dimensional $I(1)$ process and that the single index $\alpha_{0i}'g_{0it}$ is $I(0)$. In Remark 3 below, we relax this assumption to accommodate nonstationarity in the single indices $\{z_{0it}\}$ for some $i$.

To derive the asymptotic properties of the estimators $(\hat A,\hat F)$, we first rotate the coordinate system using a $(q+r)\times(q+r)$ orthogonal matrix $Q_i=(Q^{(1)}_i,Q^{(2)}_i)$, where $Q^{(1)}_i=\alpha_{0i}/\|\alpha_{0i}\|$ defines the primary axis. This transformation enables us to express the single index $z_{0it}$ as
$$z_{0it}=\alpha_{0i}'Q_iQ_i'g_{0it}=\theta^{(1)}_{0i}h^{(1)}_{0it}+\theta^{(2)\prime}_{0i}h^{(2)}_{0it},$$
where $\theta^{(1)}_{0i}=\|\alpha_{0i}\|$ and $\theta^{(2)}_{0i}=0$. In contrast to Section 3.1, here the component $h^{(1)}_{0it}=\alpha_{0i}'g_{0it}/\|\alpha_{0i}\|$ is a stationary scalar process, while $h^{(2)}_{0it}=Q^{(2)\prime}_ig_{0it}$ is a $(q+r-1)$-dimensional nonstationary process. Because the series $\{z_{0it}\}$ is concentrated within the effective range of $K$, some aspects of the original asymptotic theory must be revised, and we update the assumptions.

Assumption 6. (Cointegrated Single Indices) (i) There exists a set $\Xi=[\Xi_l,\Xi_u]$ such that $z_{0it}$ belongs to $\Xi$ w.p.a.1. Moreover, $\Psi(\Xi_l)>0$ and $\Psi(\Xi_u)<1$.
(ii) $K$, $\dot\Psi$, $\dot M\dot\Psi$, $M\ddot\Psi$, $\ddot M\Psi^{1/2}(1-\Psi)^{1/2}$, $\dot M\Psi^{1/2}(1-\Psi)^{1/2}$, and $M^{3}\dot\Psi$ all belong to the class $\mathcal F_B$.
(iii) $\Pi_e(1)$ and $\Pi_v(1)$ are nonsingular. $\Pi(1)$ has rank $q+r-1$ and $\alpha_{0i}'\Pi(1)=0_{1\times(q+r)}$, where $\Pi(L)=\sum_{k=0}^{\infty}\mathrm{diag}(\Pi^{e}_k,\Pi^{v}_k)L^{k}$. All other conditions remain as in Assumption 1.
(iv) Let $\{z_{0it}\}$ be a strictly stationary process that is $\alpha$-mixing over $t$ with mixing coefficients $\alpha_{ij}(\tau)$ satisfying $\max_{i\ge 1}\sum_{\tau=1}^{\infty}(\alpha_{ii}(\tau))^{\nu/(4+\nu)}<\infty$, $\frac{1}{N}\sum_{i,j=1}^{N}\sum_{\tau=1}^{\infty}(\alpha_{ij}(\tau))^{\nu/(4+\nu)}<\infty$, and $\frac{1}{N}\sum_{i,j=1}^{N}(\alpha_{ij}(0))^{\nu/(4+\nu)}<\infty$ for some $\nu>0$.
(v) Assume that $\max_{i\ge 1,t\ge 1}E\|g_{0it}\|^{4+\nu}<\infty$ and $\|g_{0it}\|\le C$ for some positive constant $C$.
(vi) For each $t$, as $N\to\infty$, $\frac{1}{\sqrt N}\sum_{i=1}^{N}M(z_{0it})\lambda_{0i}u_{it}\to_D N(0,\Omega^{*}_{f,t})$, where the covariance matrix $\Omega^{*}_{f,t}=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}E[K(z_{0it})]\lambda_{0i}\lambda_{0i}'$.

Assumption 6(i) ensures that the support of $\{z_{0it}\}$ is bounded, which is reasonable when $z_{0it}$ is a stationary scalar. Assumption 6(ii) relaxes the function classes in Assumption 4. Assumption 6(iii) reflects the requirement that only $\{z_{0it}\}$ is stationary. Assumptions 6(iv) and (v) ensure the $\alpha$-mixing of $z_{0it}$ and the boundedness of $g_{0it}$, as in Trapani (2021).

We first give results for the average rate of convergence and the number of factors in the cointegrated single-index case. Define $D_T=\mathrm{diag}(\sqrt T,TI_{q+r-1})$. The estimation of the number of factors is fully consistent with Section 3.2, except for the choice of the threshold $\pi_{NT}$.

Theorem 3.5. Under Assumptions 2, 5, and 6, the following results hold.
(i) $\frac{1}{\sqrt N}\|(\hat\Theta-\Theta_0)D_T\|=O_P\big(C^{-1}_{NT}\big)$ and $\frac{1}{\sqrt T}\|\hat F-F_0\|=O_P\big(C^{-1}_{NT}\big)$.
(ii) If $k>r$, $\pi_{NT}\to 0$, and $\pi_{NT}C^{2}_{NT}\to\infty$, then $P(\hat r=r)\to 1$.

We find that the convergence rates of both the coefficients and the factors improve, suggesting that estimating the model in the cointegrated single-index case is more accurate. In addition, the threshold selection for the number of estimated factors is less demanding.

We now study the asymptotic distributions of the estimators $(\hat\theta_i,\hat f_t)$. Unlike in Section 3.1, where $h^{(1)}_{0it}$ is a nonstationary process, here $h^{(1)}_{0it}$ is stationary. This change may affect the convergence rate of $\hat\theta^{(1)}_i$, which lies in the direction of $\alpha_{0i}$. Similarly, the convergence rate of $\hat\theta^{(2)}_i$ in the orthogonal direction may also differ. The following theorem addresses these questions.

Theorem 3.6. Under Assumptions 2, 5, and 6, as $N,T\to\infty$,
$$D_T(\hat\theta_i-\theta_{0i})\to_D\Omega^{*-1}_{\theta,i}\xi_i\quad\text{and}\quad\sqrt N(\hat f_t-f_{0t})\to_D N(0,\Omega^{*-1}_{f,t}),$$
where $\xi_i=(\xi_{1i},\xi_{2i}')'$ with
$$\xi_{1i}=\sqrt{E\big[M(\|\alpha_{0i}\|h^{(1)}_{0i1})h^{(1)}_{0i1}\big]^{2}}\int_0^1 dU_i(s)\quad\text{and}\quad\xi_{2i}=\sqrt{E\big[M(\|\alpha_{0i}\|h^{(1)}_{0i1})\big]^{2}}\int_0^1 H_{2i}(s)dU_i(s),$$
$$\Omega^{*}_{\theta,i}=\begin{pmatrix}E\big[K(\|\alpha_{0i}\|h^{(1)}_{0i1})(h^{(1)}_{0i1})^{2}\big] & E\big[K(\|\alpha_{0i}\|h^{(1)}_{0i1})h^{(1)}_{0i1}\big]\int_0^1 H_{2i}'(s)ds\\ E\big[K(\|\alpha_{0i}\|h^{(1)}_{0i1})h^{(1)}_{0i1}\big]\int_0^1 H_{2i}(s)ds & E\big[K(\|\alpha_{0i}\|h^{(1)}_{0i1})\big]\int_0^1 H_{2i}(s)H_{2i}'(s)ds\end{pmatrix},$$
and $\Omega^{*}_{f,t}=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}E\big(K\big(\|\alpha_{0i}\|h^{(1)}_{0it}\big)\big)\lambda_{0i}\lambda_{0i}'$, with $H_{2i}(s)=Q^{(2)\prime}_iH_i(s)$.

Theorem 3.6 shows that the convergence rates for $\hat\theta^{(1)}_i$ and $\hat\theta^{(2)}_i$ differ from those in Theorem 3.2. Specifically, when the single indices are cointegrated, the convergence rates of the parameter estimators improve, and the asymptotic results resemble those observed in linear models. For $\hat f_t$, the conventional asymptotics hold, owing to a constant lower bound on $K(z_{0it})$ for all $i=1,\ldots,N$ and $t=1,\ldots,T$. Rewrite
$$\Omega^{*}_{\theta,i}=\begin{pmatrix}\omega^{\theta,i*}_{11}&\omega^{\theta,i*}_{12}\\ \omega^{\theta,i*}_{21}&\omega^{\theta,i*}_{22}\end{pmatrix}\quad\text{with inverse}\quad\Omega^{*-1}_{\theta,i}=\begin{pmatrix}\bar\omega^{\theta,i*}_{11}&\bar\omega^{\theta,i*}_{12}\\ \bar\omega^{\theta,i*}_{21}&\bar\omega^{\theta,i*}_{22}\end{pmatrix},$$
where $\bar\omega^{\theta,i*}_{11}=\big(\omega^{\theta,i*}_{11}-\omega^{\theta,i*}_{12}(\omega^{\theta,i*}_{22})^{-1}\omega^{\theta,i*}_{21}\big)^{-1}$ and $\bar\omega^{\theta,i*}_{22}=\big(\omega^{\theta,i*}_{22}-\omega^{\theta,i*}_{21}(\omega^{\theta,i*}_{11})^{-1}\omega^{\theta,i*}_{12}\big)^{-1}$. Leveraging the linear relationship between $\hat\alpha_i$ and $\hat\theta_i$, we can derive the asymptotic results for $\hat\alpha_i$. The following theorem presents the asymptotic distributions of both $\hat\alpha_i$ and $\hat\alpha^{\circ}_i$.

Theorem 3.7.
Under Assumptions 2, 5, and 6, as $N,T\to\infty$,
$$T^{1/2}(\hat\alpha_i-\alpha_{0i})\to_D\frac{\alpha_{0i}}{\|\alpha_{0i}\|}\big[\bar\omega^{\theta,i*}_{11}\xi_{1i}+\bar\omega^{\theta,i*}_{12}\xi_{2i}\big],$$
$$T\left(\hat\alpha^{\circ}_i-\frac{\alpha_{0i}}{\|\alpha_{0i}\|}\right)\to_D\frac{Q^{(2)}_i}{\|\alpha_{0i}\|}\big[\bar\omega^{\theta,i*}_{21}\xi_{1i}+\bar\omega^{\theta,i*}_{22}\xi_{2i}\big],$$
where $\xi_{1i}$ and $\xi_{2i}$ are defined in Theorem 3.6.

The convergence rates of the estimators $\hat\alpha_i$ and $\hat\alpha^{\circ}_i$, as presented in Theorem 3.7, are governed by the convergence rates of $\hat\theta^{(1)}_i$ and $\hat\theta^{(2)}_i$, respectively, as detailed in Theorem 3.6. Notably, these convergence rates differ markedly from those in Theorem 3.3, primarily due to the impact of cointegration. Consequently, the asymptotic distributions undergo significant alterations.

Remark 3. In practice, the time series $\{z_{0it}\}$ may be stationary for some $i$ and nonstationary for others. We can partition the indices into two exclusive sets, $N_1$ and $N_2$, such that $z_{0it}$ is stationary for $i\in N_1$ and nonstationary for $i\in N_2$. In this scenario, it is necessary to integrate the asymptotic theories of Sections 3.1 and 3.3. For the estimators $\hat\theta_i$ and $\hat\alpha_i$, the asymptotic distributions remain as given within their respective sets. However, the estimation of $\hat f_t$ becomes more intricate. When $t$ is large, the asymptotic behavior of $\hat f_t$ is predominantly influenced by the stationary components in $N_1$, while for moderate values of $t$, the asymptotic properties are determined by the combined contributions of both $N_1$ and $N_2$. Identifying $N_1$ and $N_2$ is not easy; we leave this to future research.

4 Simulation

In this section, we perform simulations to verify the accuracy of the estimation results.

4.1 Simulation Design

To do so, we consider a model with the following data generating process (DGP).

Case 1. Nonstationary probabilities: $q=4$, $r=2$, $x_{it}=x_{it-1}+e_{it}$ with $e_{it}=0.1\times e_{it-1}+0.1\times N(0,I_4)$, $f_{0t}=f_{0t-1}+v_t$ with $v_t=0.01\times N(0,I_2)$, $\beta_{0i}\sim U[0,1]$, and $\lambda_{0i}\sim N(0,\mathrm{diag}(2,1))$. The error term $\epsilon_{it}$ is linked to the binary response via either the logit or probit function.
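The Case 1 recursions can be sketched as follows. The helper `simulate_case1` and its defaults are our own illustration of the stated DGP (with a logit link), not the authors' code:

```python
import numpy as np

def simulate_case1(N=50, T=100, q=4, r=2, seed=0):
    """Sketch of the Case 1 (nonstationary) DGP with a logit link.
    Returns covariates x (N,T,q), factors f (T,r), loadings, and outcomes y (N,T)."""
    rng = np.random.default_rng(seed)
    # covariates: x_it = x_{i,t-1} + e_it with e_it = 0.1 e_{i,t-1} + 0.1 N(0, I_q)
    e = np.zeros((N, T, q))
    x = np.zeros((N, T, q))
    for t in range(1, T):
        e[:, t] = 0.1 * e[:, t - 1] + 0.1 * rng.standard_normal((N, q))
        x[:, t] = x[:, t - 1] + e[:, t]
    # factors: f_t = f_{t-1} + v_t with v_t = 0.01 N(0, I_r)
    f = np.cumsum(0.01 * rng.standard_normal((T, r)), axis=0)
    beta = rng.uniform(0, 1, (N, q))                         # beta_0i ~ U[0,1]
    lam = rng.standard_normal((N, r)) * np.sqrt([2.0, 1.0])  # lambda_0i ~ N(0, diag(2,1))
    z = np.einsum('itq,iq->it', x, beta) + lam @ f.T         # z_it = beta'x_it + lambda'f_t
    p = 1.0 / (1.0 + np.exp(-z))                             # logit link
    y = (rng.random((N, T)) < p).astype(int)                 # binary response
    return x, f, beta, lam, y

x, f, beta, lam, y = simulate_case1()
print(y.shape)
```

Both $x_{it}$ and $f_{0t}$ are random walks here, so the single index $z_{0it}$ is nonstationary, matching the scenario of Section 3.1.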
The covariate $\{x_{it}\}$ is observable, while the remaining parameters are unobservable.

Case 2. Cointegrated probabilities: $q=4$, $r=2$, $f_{0t}=f_{0t-1}+v_t$ with $v_t=0.01\times N(0,I_2)$, $\beta_{0i}=(1,0.5,0.5,1)'$, $\lambda_{0i}\sim N(0,\mathrm{diag}(2,1))$, and
$$x_{it}=\big(-0.5\lambda_{0i}(1)f_{0t}(1)-0.25\lambda_{0i}(2)f_{0t}(2),\ -0.5\lambda_{0i}(1)f_{0t}(1),\ -0.5\lambda_{0i}(2)f_{0t}(2),\ -0.25\lambda_{0i}(1)f_{0t}(1)-0.5\lambda_{0i}(2)f_{0t}(2)\big)'+e_{it}$$
with $e_{it}=0.1\times e_{it-1}+N(0,I_4)$, where $\lambda_{0i}(j)$ and $f_{0t}(j)$ denote the $j$th elements of $\lambda_{0i}$ and $f_{0t}$, respectively. As in Case 1, the binary response is modeled using either the logit or probit function. The covariate $\{x_{it}\}$ is observable, while all other parameters remain unobservable.

For each generated dataset, we first determine the number of factors. Then, we evaluate the parameter estimates by measuring the error according to the following criteria, where $M$ denotes the number of replications:
$$\mathrm{MAE}_1=\frac{1}{NTM}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{j=1}^{M}\big|\hat z^{(j)}_{it}-z_{0it}\big|,\qquad\mathrm{MAE}_2=\frac{1}{NTM}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{j=1}^{M}\big|\hat\beta^{(j)\prime}_ix_{it}-\beta_{0i}'x_{it}\big|,$$
$$\mathrm{MAE}_3=\frac{1}{NTM}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{j=1}^{M}\big|\hat\lambda^{(j)\prime}_i\hat f^{(j)}_t-\lambda_{0i}'f_{0t}\big|,\qquad\mathrm{MAE}_4=\frac{1}{NTM}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{j=1}^{M}\big\|\hat\beta^{(j)}_i-\beta_{0i}\big\|,\quad(9)$$
where $X^{(j)}$ represents the $j$th replication for $X\in\{z_{0it},\beta_{0i},\lambda_{0i},f_{0t}\}$. We set the number of replications $M$ to 200.

4.2 Simulation Results

Table 1 presents the simulation outcomes. We observe that the estimated number of factors is consistent when both $N$ and $T$ are sufficiently large.

Table 1: Simulation results. Notes. "Logit" and "Probit" refer to the logit and probit link functions, respectively, while "Nonstationary" and "Cointegration" denote the single-index scenarios of being nonstationary and cointegrated. Column groups: Nonstationary-Logit ($T=100,300,500$) | Nonstationary-Probit | Cointegration-Logit | Cointegration-Probit.

$\hat r$:
N=100: 1.6985 1.9426 1.8013 | 1.2055 1.1410 1.2543 | 1.3652 1.4218 1.8149 | 1.1323 1.6886 1.7258
N=300: 1.7983 1.9894 1.9878 | 1.8142 1.9827 1.9352 | 1.7935 1.9209 1.9919 | 1.5960 1.9142 1.9827
N=500: 1.8030 1.9958 2.0065 | 1.8930 1.9948 2.0843 | 1.8428 1.9252 2.0013 | 1.7025 1.924- 2.0110

MAE$_1$:
N=100: 0.8970 0.5798 0.6709 | 5.3008 7.8874 3.3996 | 2.5923 1.7338 0.7315 | 1.4395 0.5834 0.3640
N=300: 0.6199 0.4216 0.4242 | 0.4841 0.3922 0.4350 | 0.8119 0.3842 0.3126 | 0.6141 0.2840 0.2278
N=500: 0.5752 0.4016 0.3749 | 0.4138 0.4053 0.3981 | 0.6687 0.3827 0.2943 | 0.5216 0.2721 0.2131

MAE$_2$:
N=100: 0.4975 0.3781 0.3739 | 0.4672 0.6724 0.7863 | 0.6519 0.3581 0.2545 | 0.8636 0.4060 0.2398
N=300: 0.4525 0.3511 0.4166 | 0.4120 0.4145 0.6068 | 0.5178 0.2584 0.1964 | 0.4985 0.2033 0.1537
N=500: 0.4367 0.3256 0.3347 | 0.3765 0.4612 0.4688 | 0.4912 0.2524 0.1909 | 0.4293 0.1972 0.1461

MAE$_3$:
N=100: 0.7428 0.4868 0.5682 | 5.2045 7.8320 3.4389 | 2.3198 1.6066 0.6587 | 1.0169 0.4231 0.2851
N=300: 0.4488 0.3740 0.4312 | 0.3126 0.3646 0.5679 | 0.5505 0.2768 0.2366 | 0.3416 0.1944 0.1645
N=500: 0.4236 0.3187 0.3430 | 0.3008 0.3901 0.4119 | 0.4247 0.2726 0.2163 | 0.2869 0.1763 0.1502

MAE$_4$:
N=100: 0.8308 0.3090 0.2095 | 0.6284 0.3405 0.2460 | 0.3611 0.1947 0.1355 | 0.4901 0.2273 0.1293
N=300: 0.7769 0.2802 0.2064 | 0.5800 0.2443 0.2104 | 0.2821 0.1367 0.1034 | 0.2746 0.1081 0.0813
N=500: 0.7459 0.2762 0.1880 | 0.5371 0.2526 0.1857 | 0.2682 0.1350 0.1010 | 0.2361 0.1062 0.0776

In the nonstationary scenario, when $N$ is relatively small, increasing $T$ can actually lead to larger errors and less stable results for parameter estimation. This finding aligns with Theorem 3.1, which states that the rate of convergence is $T^{1/4}\big(\min\{\sqrt N,\sqrt T\}\big)^{-1}$; thus, when $N$ is small, a larger $T$ slows convergence. In contrast, for the cointegration scenario, larger values of $N$ and $T$ not only reduce the error but also eliminate prior heterogeneity. This result is consistent with Theorems 3.6 and 3.7, which establish a convergence rate of $\big(\min\{\sqrt N,\sqrt T\}\big)^{-1}$.

5 Empirical Application

In this section, we apply our nonstationary binary factor model to high-frequency financial data.
By treating daily jumps (higher frequencies are also possible) as binary events and acknowledging that their dynamics are nonstationary (see, for example, Bollerslev and Todorov 2011a and Bollerslev and Todorov 2011b), we extract the corresponding jump arrival factors and incorporate them into our asset pricing framework:
$$\mathrm{Jump}_{it}=\Psi(\beta_{0i}'x_{it}+\lambda_{0i}'f_{0t})+u_{it},\qquad i=1,\ldots,N;\ t=1,\ldots,T,$$
where $\mathrm{Jump}_{it}$ indicates whether or not asset $i$ undergoes jumps on day $t$, with 1 representing the presence of jumps and 0 their absence.

5.1 Data

We collected intraday observations of S&P 500 index constituents from January 2004 to December 2016.² Using these high-frequency data, we identify daily jumps with robust detection methods. In particular, we employ the MinRV method proposed by Andersen et al. (2012). Additional results regarding the link function and jump detection methods are provided in the Supplementary Material. We set the confidence level for the jump test at 95%.

Moreover, the overall jump probability (or intensity) trajectory is strongly influenced by volatility (see, for example, Bollerslev and Todorov 2011b). For our covariates, we use the historical volatility of each stock, with daily volatility calculated from high-frequency data.

5.2 Estimation Results

For the period from 2004 to 2016, we estimate three jump arrival factors. To capture dynamic changes, we compute the number of factors for each year, as illustrated in Figure 1.

[Figure 1: Estimated number of factors for each year.]

Figure 1 shows that the number of factors is typically three, increases by one during the financial crisis, and then drops to one afterward.

Since our jump arrival factors capture the relationship between jump events (such as jump arrivals), akin to the mutually exciting jumps described in Dungey et al. 2018, they differ significantly from the high-frequency continuous factors in Pelger (2020). While Pelger (2020) also constructs jump arrival factors, they rely on sparse jump size data and typically yield only a single factor. In contrast, we extract jump arrival information using a nonlinear factor model that leverages the complete dataset, including both jump occurrence and occurrence rate.
^2 We used the publicly available database provided by Pelger (2020), which includes 332 constituents; see https://doi.org/10.1111/jofi.12898.

Figure 2: Box plots of factor loadings across different industries. Notes. The three graphs correspond to the three sets of factor loadings, with the horizontal axis representing various industries (Electricity, Finance, Food, Health, Machinery Manufacturing, Oil and gas, Pharma & Chemicals, Primary manufacturing, Services, Technology, Trade, Transportation). The red "+" symbols indicate outliers, and the plots display the confidence interval gaps.

Figure 2 shows the distribution of factor loadings across various industries. Our analysis reveals that the first factor's loadings are predominantly negative, with particularly large magnitudes for the oil industry. In contrast, the second and third factors fluctuate around zero. Unlike Pelger (2020), which finds that the first four continuous factors are dominated by the financial, oil, and electricity sectors, our results indicate that the industry factor loadings contribute more uniformly to the jump arrival factors.

In addition, we show estimation results for the three jump arrival factors, as well as first-order difference results for
the factors.

Figure 3: Estimated factors and their corresponding first-order differences. Note. The top panel displays the three estimated factors, while the bottom panel shows the corresponding first-order difference series.

The top panel of Figure 3 displays the three jump arrival factor sequences, which appear to be nonstationary. However, their first-order differences are nearly stationary, aligning well with our model assumptions. To further validate these findings, we conduct an ADF test on the factor series of each year, as presented in Table 2.

Table 2: ADF test p-values for factors and their differences. Notes. The table's first three rows represent the factors, and the last three rows show their first-order differences.

Year            2004   2005   2006   2007   2008   2009   2010   2011   2012   2013   2014   2015   2016
1st factor      0.6002 0.4890 0.4523 0.3966 0.4905 0.3886 0.3605 0.4488 0.4279 0.4736 0.4722 0.3644 0.3734
2nd factor      0.6082 0.4302 0.0527 0.3833 0.1840 0.4706 0.4682 0.5151 0.4812 0.2984 0.4239 0.3021 0.5256
3rd factor      0.0224 0.0252 0.2689 0.1107 0.0017 0.4234 0.2987 0.4244 0.5268 0.3136 0.5261 0.5718 0.5770
1st factor diff 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010
2nd factor diff 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010
3rd factor diff 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010 0.0010

Table 2 indicates that the first two factors are nonstationary in every year, while the third factor is nonstationary in almost all years.
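The levels-versus-differences pattern of such ADF tests can be reproduced on simulated data with a bare-bones Dickey-Fuller regression (a sketch only: no lag augmentation or MacKinnon p-values, which a full ADF implementation would add):

```python
import numpy as np

def df_tstat(y):
    # t-statistic on rho in the Dickey-Fuller regression
    # dy_t = c + rho * y_{t-1} + e_t; strongly negative values reject a unit root.
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = (e @ e) / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(size=500))  # random walk, mimicking a factor series
t_level = df_tstat(level)                # mild: unit root not rejected
t_diff = df_tstat(np.diff(level))        # strongly negative: stationary
print(t_level, t_diff)
```

The first-differenced series produces a far more negative statistic than the level series, matching the p-value contrast between the top and bottom rows of Table 2.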
This confirms that the jump arrival factors are predominantly nonstationary. Furthermore, after applying first-order differencing, all factors pass the stationarity test.

Our model has diverse applications. In finance, for instance, we extract jump arrival factors that can be used to screen portfolios, explain asset pricing, and more. The next section demonstrates how our estimated jump arrival factors help explain excess returns, using asset pricing as an example.

5.3 Applications in Asset Pricing

In this section, we investigate the impact of jump arrival factors on pricing models. We analyze each year separately and choose the maximum number of factors observed across all years (i.e., r = 4) to ensure that variations in the number of factors across periods do not affect the final results.

First, to assess whether the identified jump arrival factors can be explained by established financial factors, we compute the canonical correlations between the four jump arrival factors and the Fama–French–Carhart five factors over the entire sample period. The left panel of Figure 4 displays these canonical correlation coefficients.

Figure 4 reveals that most correlation coefficients are relatively low—except for the first coefficient, which shows a modest increase during the financial crisis. This observation suggests that the jump arrival factors are not fully
captured by the Fama–French–Carhart five factors.

Figure 4: Canonical correlations and asset pricing results. Notes. The left panel displays the canonical correlation coefficients between the four jump arrival factors and the Fama–French–Carhart five factors. The middle panel shows the incremental R^2 from adding jump arrival factors to the Fama–French–Carhart five-factor model, while the right panel displays the incremental improvement in the joint test statistic (GRS statistic) for alpha.

Motivated by the Fama–French–Carhart five factors model (e.g., Fama and French 2015), we incorporate the jump arrival factors to form the following six-factor model:

R_{it} − R_{f,t} = α*_i + β_{i,MKT} MKT_t + β_{i,SMB} SMB_t + β_{i,HML} HML_t + β_{i,RMW} RMW_t + β_{i,CMA} CMA_t + β'_{i,J} f_t + ε*_{i,t},   (10)

where R_{it} − R_{f,t} represents the excess return of asset i at time t (with R_{f,t} as the risk-free rate), MKT_t is the market excess return, SMB_t captures the size effect, HML_t represents the value factor, RMW_t reflects profitability, CMA_t measures investment, and f_t denotes the set of jump arrival factors.

We evaluate the contribution of the jump arrival factors from two perspectives. First, by comparing the R^2 values from regressions based on the six-factor model (10) and the original Fama–French–Carhart five factors model, we determine whether including the jump arrival factors improves the explanation of excess returns. Second, we test the null hypothesis H_0: α*_1 = ... = α*_N = 0 using the Gibbons–Ross–Shanken (GRS) test to assess the validity of the six-factor model relative to the Fama–French–Carhart five factors model.

The middle panel of Figure 4 plots the annual increases in R^2 for all assets. This panel displays the median, as well as the 5th/95th and 10th/90th percentiles of the R^2 increments. Larger R^2 improvements indicate that the jump arrival factors enhance the model's ability to explain asset pricing.
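The first comparison, the R^2 gain from adding jump factors to the five-factor regression, amounts to fitting two nested OLS models per asset. A minimal sketch on simulated data (all series and loadings below are made up for illustration; they are not the paper's factors):

```python
import numpy as np

def r2(y, X):
    # R^2 of an OLS regression of y on X plus an intercept.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    e = y - X1 @ beta
    return 1.0 - (e @ e) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(3)
T = 252
ff5 = rng.normal(size=(T, 5))   # stand-in for the five established factors
jf = rng.normal(size=(T, 2))    # stand-in for jump arrival factors
excess = ff5 @ rng.normal(size=5) + 0.5 * jf[:, 0] + rng.normal(size=T)
base = r2(excess, ff5)                                  # five-factor model
full = r2(excess, np.column_stack([ff5, jf]))           # six-factor model (10)
print(full - base)  # incremental R^2 from the jump factors
```

Because the models are nested, the increment is never negative; its size across assets is what the middle panel of Figure 4 summarizes.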
On average, the inclusion of jump arrival factors results in nearly a 5% improvement in R^2, with some assets exhibiting gains of more than 20%.

Similarly, the right panel of Figure 4 presents the annual changes in the GRS statistic. A lower GRS statistic suggests that the joint alphas are statistically indistinguishable from zero, implying that the model effectively explains excess returns. Negative increments in the GRS statistic indicate that the jump arrival factors enhance the model's performance. As observed, the GRS statistic decreased in almost every year, reinforcing the effectiveness of the jump arrival factors in explaining asset returns.

Figure 5: Time-variation in the percentage of explained variation for different factors. Note. This figure plots the percentage of explained variation calculated on a moving window of one year (252 trading days).

To further demonstrate that the jump arrival factors have incremental explanatory power, we examine how the proportion of variation explained by jump arrival factors varies over time. We adopt the two-stage regression framework of Fama and MacBeth (1973). Figure 5 illustrates the temporal dynamics using local-regression analysis over a rolling one-year window. The addition of jump arrival factors to the Fama–French–Carhart five factors significantly increases the explained variation by nearly 30%.

6 Conclusion

This paper considers a single-index general factor model with integrated covariates and factors, considering two distinct cases: nonstationary and cointegrated single indices. The estimators are obtained via maximum likelihood estimation, and new asymptotic
properties have been established. First, the convergence rates differ between the two cases, with an elevated rate when the single index is cointegrated. Second, the convergence rate for factor estimators depends on time t in the nonstationary case—necessitating a larger sample size N—but is independent of t in the cointegrated case. Third, in a transformed coordinate system, the coefficient estimates exhibit two distinct convergence components. Finally, the limiting distributions of the coefficient estimates are entirely different across the two single-index scenarios. Monte Carlo simulations validate these theoretical results, and empirical studies demonstrate that the extracted nonstationary jump arrival factors play a crucial role in asset pricing. Future research could extend our modelling framework to matrix factor structures, such as Yuan et al. (2023), He et al. (2024), and Xu et al. (2025), or to high-frequency econometrics, such as Pelger (2019) and Chen et al. (2024).

Supplementary Material

The Supplementary Material contains the proofs of the main theoretical results, additional numerical studies, and more details in the empirical analysis.

References

Andersen, T. G., D. Dobrev, and E. Schaumburg (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics 169(1), 75–93.

Ando, T., J. Bai, and K. Li (2022). Bayesian and maximum likelihood analysis of large-scale panel choice models with unobserved heterogeneity. Journal of Econometrics 230(1), 20–38.

Bai, J. (2003). Inferential theory for factor models of large dimensions. Econometrica 71(1), 135–171.

Bai, J. (2004). Estimating cross-section common stochastic trends in nonstationary panel data. Journal of Econometrics 122(1), 137–183.

Bai, J. and K. Li (2012). Statistical analysis of factor models of high dimension. The Annals of Statistics 40(1), 436–465.

Barigozzi, M., G. Cavaliere, and L. Trapani (2024).
Inference in heavy-tailed nonstationary multivariate time series. Journal of the American Statistical Association 119(545), 565–581.

Bollerslev, T. and V. Todorov (2011a). Estimation of jump tails. Econometrica 79(6), 1727–1783.

Bollerslev, T. and V. Todorov (2011b). Tails, fears, and risk premia. The Journal of Finance 66(6), 2165–2211.

Chamberlain, G. and M. Rothschild (1983). Arbitrage, factor structure, and mean-variance analysis on large asset markets. Econometrica 51(5), 1281–1304.

Chen, D., P. A. Mykland, and L. Zhang (2024). Realized regression with asynchronous and noisy high frequency and high dimensional data. Journal of Econometrics 239(2), 105446.

Chen, L., J. J. Dolado, and J. Gonzalo (2021). Quantile factor models. Econometrica 89(2), 875–910.

Chen, M., I. Fernández-Val, and M. Weidner (2021). Nonlinear factor models for network and panel data. Journal of Econometrics 220(2), 296–324.

Dong, C., J. Gao, and B. Peng (2021). Varying-coefficient panel data models with nonstationarity and partially observed factor structure. Journal of Business & Economic Statistics 39(3), 700–711.

Dong, C., J. Gao, and D. B. Tjostheim (2016). Estimation for single-index and partially linear single-index integrated models. Annals of Statistics 44(1), 425–453.

Dungey, M., D. Erdemlioglu, M. Matei, and X. Yang (2018). Testing for mutually exciting jumps and financial flights in high frequency data. Journal of
Econometrics 202(1), 18–44.

Fama, E. F. and K. R. French (2015). A five-factor asset pricing model. Journal of Financial Economics 116(1), 1–22.

Fama, E. F. and J. D. MacBeth (1973). Risk, return, and equilibrium: Empirical tests. Journal of Political Economy 81(3), 607–636.

Fan, J., Y. Liao, and M. Mincheva (2013). Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society Series B: Statistical Methodology 75(4), 603–680.

Gao, J., F. Liu, B. Peng, and Y. Yan (2023). Binary response models for heterogeneous panel data with interactive fixed effects. Journal of Econometrics 235(2), 1654–1679.

Hansen, B. E. (1992). Convergence to stochastic integrals for dependent heterogeneous processes. Econometric Theory 8(4), 489–500.

He, Y., Y. Hou, H. Liu, and Y. Wang (2024). Generalized principal component analysis for large-dimensional matrix factor model. arXiv preprint arXiv:2411.06423.

He, Y., L. Li, D. Liu, and W.-X. Zhou (2025). Huber principal component analysis for large-dimensional factor models. Journal of Econometrics 249, 105993.

Ma, T. F., F. Wang, and J. Zhu (2023). On generalized latent factor modeling and inference for high-dimensional binomial data. Biometrics 79(3), 2311–2320.

Park, J. Y. and P. C. Phillips (1999). Asymptotics for nonlinear transformations of integrated time series. Econometric Theory 15(3), 269–298.

Park, J. Y. and P. C. Phillips (2000). Nonstationary binary choice. Econometrica 68(5), 1249–1280.

Park, J. Y. and P. C. Phillips (2001). Nonlinear regressions with integrated time series. Econometrica 69(1), 117–161.

Pelger, M. (2019). Large-dimensional factor modeling based on high-frequency observations. Journal of Econometrics 208(1), 23–42.

Pelger, M. (2020). Understanding systematic risk: A high-frequency approach. The Journal of Finance 75(4), 2179–2220.

Trapani, L. (2018). A randomized sequential procedure to determine the number of factors.
Journal of the American Statistical Association 113(523), 1341–1349.

Trapani, L. (2021). Inferential theory for heterogeneity and cointegration in large panels. Journal of Econometrics 220(2), 474–503.

Wang, F. (2022). Maximum likelihood estimation and inference for high dimensional generalized factor models with application to factor-augmented regressions. Journal of Econometrics 229(1), 180–200.

Xu, S., C. Yuan, and J. Guo (2025). Quasi maximum likelihood estimation for large-dimensional matrix factor models. Journal of Business & Economic Statistics 43(2), 439–453.

Yu, L., P. Zhao, and W. Zhou (2024). Testing the number of common factors by bootstrapped sample covariance matrix in high-dimensional factor models. Journal of the American Statistical Association, 1–12.

Yuan, C., Z. Gao, X. He, W. Huang, and J. Guo (2023). Two-way dynamic factor models for high-dimensional matrix-valued time series. Journal of the Royal Statistical Society Series B: Statistical Methodology 85(5), 1517–1537.

Zhang, B., G. Pan, and J. Gao (2018). CLT for largest eigenvalues and unit root testing for high-dimensional nonstationary time series. The Annals of Statistics 46(5), 2186–2215.

Zhou, W., J. Gao, D. Harris, and H. Kew (2024). Semi-parametric single-index predictive regression models with cointegrated regressors. Journal of Econometrics 238(1), 105577.
arXiv:2505.22423v1 [math.ST] 28 May 2025

Max-laws of large numbers for weakly dependent high dimensional arrays with applications

Jonathan B. Hill*
Dept. of Economics, University of North Carolina, Chapel Hill, NC

This draft: May 29, 2025

Abstract

We derive so-called weak and strong max-laws of large numbers for max_{1≤i≤k_n} |1/n Σ_{t=1}^n x_{i,n,t}| for zero mean stochastic triangular arrays {x_{i,n,t}: 1 ≤ t ≤ n}_{n≥1}, with dimension counter i = 1, ..., k_n and dimension k_n → ∞. Rates of convergence are also analyzed based on feasible sequences {k_n}. We work in three dependence settings: independence, Dedecker and Prieur's (2004) τ-mixing and Wu's (2005) physical dependence. We initially ignore cross-coordinate i dependence as a benchmark. We then work with martingale, nearly martingale, and mixing coordinates to deliver improved bounds on k_n. Finally, we use the results in three applications, each representing a key novelty: we (i) bound k_n for a max-correlation statistic for regression residuals under α-mixing or physical dependence; (ii) extend correlation screening, or marginal regressions, to physical dependent data with diverging dimension k_n → ∞; and (iii) test a high dimensional parameter after partialling out a fixed dimensional nuisance parameter in a linear time series regression model under τ-mixing.

Key words and phrases: law of large numbers, high dimensional arrays, suprema, correlation screening, parametric tests.
AMS classifications: 62E99, 60F99, 60F10.
JEL classifications: C55.

1 Introduction

In this article we derive and compare laws of large numbers for the maximum sample mean of a triangular array {x_{n,t}: 1 ≤ t ≤ n}_{n≥1}, x_{n,t} = [x_{i,n,t}]_{i=1}^k ∈ R^k with dimension k ∈ N, and sample size n. When k = k_n >> n we have a high dimensional [HD] setting that may

*Department of Economics, University of North Carolina, Chapel Hill, North Carolina, E-mail: jbhill@email.unc.edu; https://tarheels.live/jbhill.
We are grateful for the comments from two anonymous referees that led to significant improvements of the manuscript.

be potentially huge relative to the sample size (e.g. ln(k_n) ~ a n^b for some a, b > 0, or k_n → ∞ arbitrarily fast, depending on available information). We are particularly interested in disparate settings of weak dependence and their impact on feasible sequences {k_n}.

High dimensionality is common due to the enormous amount of available data, survey techniques, and technology for data collection. Examples span social, communication, bio-genetic, electrical, and engineering sciences to name a few. See, for instance, Fan and Li (2006), Bühlmann and van de Geer (2011), Fan, Lv and Qi (2011), and Belloni, Chernozhukov and Hansen (2014) for examples and surveys. Our main results are then applied to three settings in econometrics and statistics detailed below.

Assuming E x_{n,t} = 0 for all (n, t), we derive what we call a max-Weak LLN (max-WLLN) or max-Strong LLN (max-SLLN) for certain integer sequences {k_n} by case,

M_n := max_{1≤i≤k_n} |1/n Σ_{t=1}^n x_{i,n,t}| →_p 0 or M_n →_{a.s.} 0.   (1.1)

Typically we obtain M_n →_p 0 by proving E|M_n|^p → 0 for p ≥ 1, and we establish {k_n} such that √n M_n = O_p(g(k_n)) for case-specific monotonic mappings g. We will call the weaker property E|M_n|^p → 0 a max-WLLN throughout as a convenience.

Although max-laws are implicitly used in many papers too numerous to cite, often under sub-exponential or sub-Gaussian tails and independence, we believe this is the
first attempt to derive and compare possible laws and their resulting bounds on k_n under various serial or cross-coordinate dependence and heterogeneity settings.

A very few examples where max-WLLN's appear include HD model inference under independence (Dezeure, Bühlmann and Zhang, 2017; Hill, 2025b) or weak dependence (e.g. Adamek, Smeekes and Wilms, 2023; Mies and Steland, 2023), and wavelet-like HD covariance stationary tests under linearity (Jin, Wang and Wang, 2015; Hill and Li, 2025). Hill (2025b) explores max-LLN's for standard least squares components in an iid linear regression setting. Jin et al. (2015) exploit HD theory for autocovariances dating to Hannan and Deistler (1988, Chapt. 7) and Keenan (1997). They require linearity with iid innovations, and only work with high dimensionality across autocovariance lags and so-called systematic samples (sub-sample counters). Hill and Li (2025) work in the same setting under a broader dependence concept. Thus neither systematically presents max-LLN's for heterogeneous high dimensional arrays.

Adamek et al. (2023) develop inference methods for debiased Lasso in a linear time series setting. Their Lemma A.4 presents an implicit max-WLLN by using a union bound and mixingale maximal inequality (for sub-samples). That result is quite close to what we present here. They require uniform L_p-boundedness for some p > 2, and near epoch dependence [NED]. We allow for trending higher moments and p > 1 under physical dependence yielding both max-WLLN and max-SLLN, while NED implies mixingale, and adapted mixingales are physical dependent (Davidson, 1994; Hill, 2025a). We also use cross-coordinate dependence to improve k_n. Thus our results are more general and broad in scope. See Remark 2.6 for details.

Mies and Steland (2023) exploit martingale theory in Pinelis (1994) to yield an L_q-maximal inequality under L_p-physical dependence, 2 ≤ p ≤ q.
Their upper bound appears sharper than the one we present in Lemma 2.4 and Theorem 2.5, also based on a martingale approximation. The improvement, however, does not yield a faster rate k_n → ∞, while the latter can only be deduced once p = q. Moreover, we allow for sub-exponential tails or L_p-boundedness, p > 1, we deliver weak and strong laws, and exploit cross-coordinate dependence, each new and ignored in Mies and Steland (2023).

Apparently only max-WLLN's exist: max-SLLN's have not been explored. Moreover, max-LLN's are not explicitly available for τ-mixing and physical dependent arrays under broad tail conditions, and to the best of our knowledge inter-coordinate dependence is universally ignored, where union bounds, Lyapunov's inequality, and log-exp bounds under sub-exponentiality are the standard for getting around max_{1≤i≤k_n} |·|, and bounding k_n.

We work under three broad dependence and heterogeneity settings:

(i) τ-mixing (Dedecker and Prieur, 2004)
(ii) L_p-physical dependent arrays, p > 1 (Wu, 2005; Wu and Min, 2005)
    a. unrestricted coordinates (across i); b. martingale coordinates;
    c. nearly martingale coordinates; d. mixing coordinates
(iii) independence

Under (i), (ii.a) and (iii) we do not restrict dependence coordinate-wise. This is the seemingly universal setting in the high dimensional literatures. A variety of mixing and related properties promote a Bernstein-type inequality that yield (1.1) and bounds on k_n qualitatively similar to the independence case. We treat a recent representative sub-exponential τ-mixing (Dedecker and Prieur,
2004, 2005). The latter construction along with other recent mixing concepts, like mixingale and related moment-based constructions (Gordin, 1969; McLeish, 1975), were proposed to handle stochastic processes that are not, e.g., uniform σ-field based α-, β-, or ϕ-mixing. This includes possibly infinite order functions of mixing processes, and Markovian dynamical systems and related expanding maps, covering simple autoregressions with Bernoulli shocks, and various attractors in mathematical physics with applications in atmospheric mapping, electrical components and artificial intelligence (e.g. Chernick, 1981; Andrews, 1984; Rio, 1996; Collet, Martinez and Schmitt, 2002; Dedecker and Prieur, 2005; Chazottes and Gouezel, 2012). Thus they fill certain key gaps in the field of processes that yield deviation or concentration bounds and central limits.

We include (ii.b)-(ii.d) to show that bounds on k_n can be improved when cross-coordinate dependence is available. We work under serial physical dependence to focus ideas, but the result appears to apply generally. Strong coordinate dependence (ii.b), where x_{i,n,t} is a martingale over i, yields unbounded k_n (the result is truly dimension-agnostic).^1 Under (ii.c) the condition is weakened such that x_{i,n,t} becomes a martingale as n → ∞: P(E[x_{i+1,n,t}|F_{i,n}] = x_{i,n,t}) → 1 for some filtration {F_{i,n}}.^2 We show that even in a Gaussian setting k_n must be restricted, but a better bound is yielded by using cross-coordinate information. We obtain the same result under cross-coordinate mixing (ii.d) where improvements are gained in Gaussian, sub-exponential and heavy-tailed cases.

As a third dependence setting (iii) we deliver max-LLN's under serial independence in the supplemental material Hill (2024, Appendix B). We prove a max-SLLN under L_1-boundedness and show that k_n is unrestricted when a cross-coordinate probability decay property holds. The proof exploits a new necessary and sufficient HD three-series theorem.
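For the independence benchmark, the behaviour of M_n in (1.1) is easy to see by simulation: with iid Gaussian coordinates, √n·M_n grows like √(ln k_n), so the normalized ratio below stays roughly flat even as the dimension explodes (a sketch with simulated data only):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
ratios = []
for k in [10, 100, 1000, 10000]:
    x = rng.normal(size=(k, n))         # k independent coordinates, n obs each
    Mn = np.abs(x.mean(axis=1)).max()   # M_n = max_i |n^{-1} sum_t x_{i,t}|
    ratios.append(np.sqrt(n) * Mn / np.sqrt(np.log(k)))
print(ratios)  # roughly stable in k: sqrt(n) * M_n = O_p(sqrt(ln k_n))
```

The dimension k can vastly exceed n here, which is the HD regime the paper targets.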
The cases are naturally nested: mixing includes independence, and physical dependence covers mixing and non-mixing cases. Moreover, τ-mixing and adapted mixingale properties are closely related (Hill, 2024, Appendix C), while adapted mixingale and physical dependence properties are asymmetrically related (Hill, 2025a). Mixingale-like constructs date at least to Gordin (1969), Hannan (1973, eq. (4)), and McLeish (1975), with expansions to L_p-arrays in, e.g., Andrews (1988) and Hansen (1991). In the L_p-physical dependence case if the coefficients grow in p at a polynomial rate then a Bernstein inequality promotes an exponential bound on k_n.

Key technical tools, depending on the dependence property, are: the log-exp (or "log-sum-exp") bound on the maximum of a sequence when a moment generating function exists; Bernstein, Fuk-Nagaev, and Nemirovski (2000) inequalities; and maximal inequalities, e.g. for physical dependent arrays. The log-exp transform yields a "smooth-max" approximation that has been broadly exploited when cross-coordinate dependence is not modeled (see, e.g., Talagrand, 2003; Bühlmann and van de Geer, 2011; Chernozhukov, Chetverikov and Kato, 2013).

Bernstein-type inequalities exist for iid and various mixing and related sequences, covering α-, β-, ϕ-, ϕ̃-, φ-, τ- and C-mixing random variables in array, random field and lattice forms (e.g. Rio, 1995; Samson, 2000; Merlevède, Peligrad and Rio, 2011; Hang and Steinwart, 2017), and physical dependent processes (Wu, 2005).^3 In most cases the random variables are assumed bounded or sub-exponential, and in many cases only 1-Lipschitz functions are treated. We generalize the τ-mixing L_1 metric to an L_p metric, p ≥ 1, and derive a Bernstein inequality under so-called τ(p)-mixing by closely following Merlevède et al. (2011). We do not attempt to use the sharpest available bounds within the Bernstein-Hoeffding class, or under physical dependence. This is both for clarity and ease of presenting proofs, and generally because sharp bounds will only lead to modest, or no, improvements for k_n. See Talagrand (1995a,b), Bentkus (2008) and Dümbgen, van de Geer, Veraar and Wellner (2010) for many results and suggested readings.

Bernstein and Fuk-Nagaev inequalities that can be used for max-LLN's have been expanded beyond classic settings, covering bounded or sub-exponential α- and β-mixing random variables (Viennet, 1997; Bosq, 1993; Krebs, 2018b) with exponential memory decay (e.g. Merlevède et al., 2011), or geometric or even hyperbolic decay (see Wintenberger, 2010, for bounded φ-mixing 1-Lipschitz functions). Results allowing for strong (or similar) mixing have gone much farther to include spatial lattices (Valenzuela-Dominguez, Krebs and Franke, 2017), random fields (Krebs, 2018a), and less conventional mixing properties (Hang and Steinwart, 2017). Seminal generic results are due to Talagrand (1995a,b), leading to inequalities for bounded stochastic objects (see, e.g., Samson, 2000, who work with bounded envelopes of f-mixing processes).

As a secondary contribution that will be of independent interest, we apply the max-LLN's to three settings in order to yield new results. In each case a bootstrap theory would complement the application but is ignored here for brevity.

^1 I would like to give a special thanks to an anonymous referee for pointing out this case.
^2 Nearly martingale in this paper is distinctly different than near-martingale (E_{F_{i-1,t}} x_{i,n,t} = E_{F_{i-1,t}} x_{i-1,n,t}), weak-martingale, or local-martingale (cf. Kallenberg, 2021).
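The log-exp "smooth-max" device mentioned above rests on the elementary sandwich max_i a_i ≤ λ^{-1} ln Σ_i e^{λ a_i} ≤ max_i a_i + ln(k)/λ for any λ > 0, which trades a maximum for a sum at a ln(k)/λ cost. A quick numerical check (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=1000)  # arbitrary real sequence, k = 1000
lam = 4.0
# smooth-max: (1/lam) * ln(sum_i exp(lam * a_i))
smooth_max = np.log(np.sum(np.exp(lam * a))) / lam
# the sandwich: max <= smooth-max <= max + ln(k)/lam
print(a.max() <= smooth_max <= a.max() + np.log(a.size) / lam)
```

Taking λ large tightens the bound, which is how expectations of maxima are controlled once a moment generating function exists.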
We first consider a serial max-correlation statistic derived from a model residual. Hill and Motegi (2020) exploit Ramsey theory in order to yield a complete bootstrap theory under a broad Near Epoch Dependence property, yet without being able to characterize an upper bound on the number of lags k_n. We provide new bounds on k_n under α-mixing and physical dependence.

The second application extends the marginal screening method to allow for an increasing number of covariates under weak dependence. Marginal regressions with "optimal" covariate selection is also called sure screening and correlation learning; see Genovese, Jin, Wasserman and Yao (2012) for references and historical details. In a recent contribution McKeague and Qian (2015) regress some y_t on each covariate (x_{i,t}: 1 ≤ i ≤ k) one at a time for fixed k that is allowed to be larger than n (note t = 1, ..., n). This yields marginal coefficients θ̂_{n,i} = ĉov(y, x_i)/v̂ar(x_i), max-index l̂_n = arg max_{1≤l≤k} |θ̂_{n,l}| ideally representing the most informative regressor, and therefore θ̂_{n,l̂_n}. Let θ_{0,i} = cov(y, x_i)/var(x_i). An implicit iid assumption is imposed in order to study θ̂_{n,l̂_n} as a vehicle for testing that no regressor is correlated with y_t, H_0: θ_{0,i*} = 0 where i* = arg max_{1≤l≤k} |cov(y, x_l)/var(x_l)|. See McKeague and Qian (2015) for discussion, and resulting non-standard

^3 Consult, e.g., Dedecker, Doukhan, Lang, Leon, Louhichi and Prieur (2007) for many mixing definitions, cf. Rio (1996), Dedecker and Prieur (2004, 2005), and Maume-Deschamps (2006).
asymptotics for √n(θ̂_{n,l̂_n} − θ_{0,i*}). We instead study max_{1≤i≤k_n} |√n θ̂_{n,i}| to test H_0: θ_{0,i} = 0 ∀i ⇔ H_0: θ_{0,i*} = 0, under weak dependence, allowing for non-stationarity, and high dimensionality k_n >> n, where k_n → ∞ and k_n/n → ∞ are allowed. We do not explore, nor do we need, an endogenously selected optimal covariate index l̂_n under weak dependence. This narrowly relates to work in Hill (2025b) where low dimensional models with a fixed dimension nuisance covariate are used to test a HD parameter in an iid regression setting.

The third application rests in the settings of Cattaneo, Jansson and Newey (2018) and Hill (2025b). Cattaneo et al. (2018) study post-estimation inference when there are many "nuisance" parameters δ_{n,0} in a linear regression model y_{n,t} = δ'_{n,0} w_{n,t} + θ'_{n,0} x_{n,t} + u_{n,t}. Allowing for arbitrary in-group dependence of finite group size, they deliver a heteroscedasticity-robust limit theory for an estimator of the low dimensional θ_{n,0} by partialling out δ_{n,0}. We extend their idea to weakly dependent and heterogeneous data, but focus instead on testing the HD parameter δ_{n,0}.

Finally, we focus on pointwise convergence throughout, ignoring uniform convergence for high dimensional measurable mappings x_{i,n,t}(θ) with finite or infinite dimensional θ. Generic results are well known in low dimensional settings: see, e.g., Andrews (1987) and Newey (1991) for weak laws, Pötscher and Prucha (1989) for a strong law, and van der Vaart and Wellner (1996) for classic results for low dimensional x_{i,n,t}(·) with infinite dimensional θ. Sufficient conditions generally reduce to pointwise convergence, plus stochastic equicontinuity (or related) conditions. The same generality likely extends to a high dimensional setting, but this is left for future work.

The remainder of the paper is as follows. In Section 2 we present max-LLN's for mixing and physical dependent arrays. Sections 3-5 contain applications, with concluding remarks in Section 6.
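The screening statistic max_{1≤i≤k_n} |√n θ̂_{n,i}| of the second application is cheap to form even for k_n >> n. A sketch on simulated data (illustration only: critical values under the paper's weak-dependence theory are not reproduced here, and the index 7 below is an arbitrary choice):

```python
import numpy as np

def max_marginal_stat(y, X):
    # max_i |sqrt(n) * theta_hat_{n,i}| with
    # theta_hat_{n,i} = cov_hat(y, x_i) / var_hat(x_i).
    n = len(y)
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    theta = (Xc * yc[:, None]).mean(axis=0) / Xc.var(axis=0)
    return np.sqrt(n) * np.abs(theta).max()

rng = np.random.default_rng(6)
n, k = 200, 1000                    # k >> n, as allowed in this setting
X = rng.normal(size=(n, k))
stat_null = max_marginal_stat(rng.normal(size=n), X)  # no informative regressor
y = 0.5 * X[:, 7] + rng.normal(size=n)                # one informative regressor
stat_alt = max_marginal_stat(y, X)
print(stat_null, stat_alt)  # the signal inflates the max statistic
```

Under the null every √n θ̂_{n,i} is roughly centered at zero, so the max grows only like √(ln k_n); a single correlated regressor pushes the statistic well above that level.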
Technical proofs of the main results are presented in Appendix A, and omitted content is relegated to Hill (2024).

We assume all random variables exist on the same complete measure space (Ω, F, P) in order to side-step any measurability issues concerning suprema (e.g. Pollard, 1984, Appendix C). |x| = Σ_{i,j} |x_{i,j}| is the l_1-norm, |x|_2 = (Σ_{i,j} x_{i,j}^2)^{1/2} is the Euclidean, Frobenius or l_2 norm; ||·|| is the spectral norm; ||·||_p denotes the L_p-norm (||x||_p := (Σ_{i=1}^k E|x_i|^p)^{1/p}). a.s. is P-almost surely. E is the expectations operator; E_A is expectations conditional on F-measurable A. →_p, →_{L_p} and →_{a.s.} denote convergence in probability, in L_p norm and almost surely. o_p(1) and o_{a.s.}(1) depict little "o" convergence in probability and almost surely. awp1 = "asymptotically with probability approaching one". κ-Lipschitz functions f: R^r → R satisfy |f(x) − f(y)| ≤ κ|x − y|. {k_n}_{n∈N} is monotonically increasing. K > 0 and tiny ι > 0 are constants that may change from line to line. O(m^{−λ}) for m ∈ Z_+ and λ > 0 implies O((m ∨ 1)^{−λ}).

2 High dimensional maximal inequalities and LLN's

Let E x_{n,t} = 0 throughout. We first work with mixing arrays.

2.1 Sub-Exponential τ-Mixing arrays

We discuss τ-mixing in this section, but any cited in the introduction with a sub-exponential condition will yield (1.1) under an exponential bound for k_n. Define the filtration F^t_{i,n,s} = σ(x_{i,n,τ}: 1 ≤ s ≤ τ ≤ t ≤ n).

The first result exploits a Fuk-Nagaev type Bernstein-inequality in Merlevède et al. (2011, Theorem 1) for geometric τ-mixing processes. τ-mixing nests some non-α-mixing processes
https://arxiv.org/abs/2505.22423v1
(cf. Dedecker and Prieur, 2004), and is implied by $\alpha$-mixing and nests $L_p$-mixingales (Hill, 2024). Let $\Lambda_1(\mathbb{R}^r)$ denote the class of 1-Lipschitz functions $f : \mathbb{R}^r \to \mathbb{R}$, let $\mathcal{A}$ be a $\sigma$-subfield of $\mathcal{F}$, and define for an $\mathbb{R}^r$-valued random variable $X$, as in Dedecker and Prieur (2004):

$$\tau^{(1)}(\mathcal{A}, X) := \Big\|\sup_{f\in\Lambda_1(\mathbb{R}^r)}|E_{\mathcal{A}}f(X) - Ef(X)|\Big\|_1.$$

If we write for any $l$-tuple $J_l := (j_1, \dots, j_l) \in \mathbb{N}^l$, $X_{i,n}(J_l) := \{x_{i,n,j_1}, \dots, x_{i,n,j_l}\}$, then we have a generalization of the $\tau$-mixing coefficient (see Dedecker and Prieur (2004, Defn. 2) and Merlevède et al. (2011, eq.'s (2.2)-(2.3)))

$$\tau^{(1)}_{i,n}(m) := \sup_{r\ge0}\max_{1\le l\le r}\max_{1\le t\le n}\max_{t\le j_1<\cdots<j_l}\frac{1}{l}\tau^{(1)}\big(\mathcal{F}^{t-m}_{i,n,-\infty}, X_{i,n}(J_l)\big).$$

We use a trivial shift $\sup_{t\le j_1<\cdots<j_l}\tau(\mathcal{F}^{t-m}_{i,n,-\infty}, \cdot)$ rather than $\sup_{t+m\le j_1<\cdots<j_l}\tau(\mathcal{F}^t_{i,n,-\infty}, \cdot)$ in Dedecker and Prieur (2004) in order to draw a direct comparison with mixingales in Hill (2024): the two versions are identical as $n \to \infty$, or under stationarity $\forall n$. Notice we do not restrict dependence across coordinates $(x_{i,n,s}, x_{j,n,t})$ for $i \ne j$. See Dedecker and Prieur (2004, 2005) and Dedecker et al. (2007) for examples and further theory relating $\tau$-mixing to other mixing properties.

Now use the $L_p$ metric to define in general for $p \ge 1$

$$\tau^{(p)}(\mathcal{A}, X) := E\Big(\sup_{f\in\Lambda_1(\mathbb{R}^r)}|E_{\mathcal{A}}f(X) - Ef(X)|\Big)^p$$

$$\tau^{(p)}_{i,n}(m) := \sup_{r\ge0}\max_{1\le l\le r}\max_{1\le t\le n}\max_{t\le j_1<\cdots<j_l}\frac{1}{l}\tau^{(p)}\big(\mathcal{F}^{t-m}_{i,n,-\infty}, X_{i,n}(J_l)\big).$$

Add and subtract $f(0)$ in $E_{\mathcal{A}}f(X) - Ef(X)$, and use the 1-Lipschitz property coupled with Minkowski and Jensen inequalities to deduce an upper bound $\tau^{(p)}(\mathcal{A}, X) \le 2E|X|^p$, hence $\limsup_{n\to\infty}\tau^{(p)}_{i,n}(m) \le 2\limsup_{n\to\infty}\max_{1\le t\le n}E|x_{i,n,t}|^p$. We say $x_{i,n,t}$ is $\tau^{(p)}$-mixing when $\lim_{m\to\infty}\tau^{(p)}_{i,n}(m) = 0$. Clearly $\tau^{(p)}_{i,n}(m) \le \tau^{(q)}_{i,n}(m)^{p/q}$ for $p \le q$. The $L_p$-variants $(\tau^{(p)}, \tau^{(p)}_{i,n})$ share the same properties as $(\tau^{(1)}, \tau^{(1)}_{i,n})$, typically by simple adjustments to existing proofs in Dedecker and Prieur (2004), cf. Peligrad (2002).
In particular, $\tau^{(p)}$ retains a useful coupling property: for any $\sigma$-field $\mathcal{A}$ of $\mathcal{F}$, there exists a random variable $X^*$ distributed as $X$ and independent of $\mathcal{A}$ such that $\tau^{(p)}(\mathcal{A}, X) = E|X - X^*|^p$.

Assume geometric mixing decay and a sub-exponential tail condition:

$$\max_{1\le i\le k_n}\tau^{(p)}_{i,n}(m) \le ae^{-bm^{\gamma_1}} \text{ for some } p \ge 1, \forall n \in \mathbb{N} \quad (2.1)$$

$$\max_{1\le i\le k_n, 1\le t\le n}P(|x_{i,n,t}| > \epsilon) \le d\exp\{-c\epsilon^{\gamma_2}\} \quad \forall\epsilon > 0, \forall n \in \mathbb{N} \quad (2.2)$$

where $(a, b, c, d, \gamma_1, \gamma_2) > 0$ are universal constants. Define $\gamma$ by

$$1/\gamma := 1/\gamma_1 + 1/\gamma_2 \le 1. \quad (2.3)$$

The latter imposes $\gamma_i > 1$, forcing a trade-off between tail decay and memory decay. When $\gamma_2 = 2$ we have the sub-Gaussian class (e.g. Vershynin, 2018, Chapt. 2.5). We now have the following Fuk-Nagaev type inequality under $\tau^{(p)}$.

Lemma 2.1. Under (2.1)-(2.3), for some $\{K_i\}_{i=1}^5 > 0$ depending only on $\{a, b, c, d, \gamma, \gamma_1, p\}$, and any $n \ge 4$,

$$\max_{1\le i\le k_n}P\Big(\max_{1\le l\le n}\Big|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\Big| \ge \epsilon\Big) \le n\exp\{-K_1\epsilon^\gamma n^\gamma\} + \exp\Big\{-\frac{K_2\epsilon^2 n^2}{1 + K_3 n}\Big\} + \exp\Big\{-K_4\epsilon^2 n\,e^{K_5\frac{(\epsilon n)^{\gamma(1-\gamma)}}{[\ln(\epsilon n)]^\gamma}}\Big\}. \quad (2.4)$$

The subsequent max-WLLN is proved using Lemma 2.1 and a log-exp bound.

Theorem 2.2 (max-WLLN: $\tau^{(p)}$-mixing). Let $\{x_{i,n,t} : 1 \le i \le k_n\}_{t=1}^n$ satisfy (2.1)-(2.3). Then $M_n \stackrel{L_1}{\to} 0$ provided $\ln(k_n) \le \mathcal{A}n$ for some $\mathcal{A} > 0$ that depends on $(K_1, K_2, K_3, \gamma)$. Moreover, $\sqrt{n}M_n = O_p(\sqrt{\ln(k_n)})$ if $\ln(k_n) = O(n)$.

Remark 2.1. The second result somewhat remarkably does not rely on the degree of dependence or tail decay $(\gamma_1, \gamma_2)$: the iid case $\sqrt{n}M_n = O_p(\sqrt{\ln(k_n)})$ is achieved generally. That said, it arguably becomes less remarkable in view of the coupling between $\tau^{(p)}$-mixing and independence, cf. Dedecker and Prieur (2004, Lemma 5) and Hill (2024, Lemma
C.2).

We next have a corresponding max-SLLN that exploits a maximal concentration inequality for $\max_{1\le l\le n}|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}|$. The proof is similar to the one for Theorem 2.6.b under physical dependence and is therefore presented in Hill (2024, Appendix G).

Theorem 2.3 (max-SLLN: $\tau^{(p)}$-mixing). Let $\{x_{i,n,t} : 1 \le i \le k_n\}_{t=1}^n$ satisfy (2.1)-(2.3) with $\gamma = 1$. Then $M_n \stackrel{a.s.}{\to} 0$ provided $\ln(k_n) = o(n)$.

Remark 2.2. We use $\gamma = 1$ to ease bounding an exponential integral. The requirement implies a tight tail-memory trade-off $\gamma_i = \gamma_j/(\gamma_j - 1)$ for $i \ne j$, with both $(\gamma_1, \gamma_2) > 1$. Thus if, e.g., $\gamma_1 \in (1, 2)$ (slower memory decay) then $\gamma_2 > 2$ (faster tail decay).

EXAMPLE 1 (Linear Processes). Let $x_{i,t} = \sum_{j=0}^\infty\psi_{i,j}\epsilon_{i,t-j}$, where $\{\epsilon_{i,t}\}$ are iid for each $i$, $P(|\epsilon_{i,t}| > u) \le d\exp\{-cu^{\gamma_2}\}$ $\forall i, t$ and constants $d, \gamma_2 > 0$, $\psi_{i,0} = 1$ and $\sum_{j=0}^\infty|\psi_{i,j}| < \infty$ for each $i$. By exploiting coupling results in Merlevède and Peligrad (2002), and in view of $\tau^{(p)}$-coupling (Hill, 2024, Lemma C.2), arguments in Dedecker and Prieur (2005, p. 214) yield $\tau^{(p)}_{i,n}(m) \le K(\sum_{j=m}^\infty|\psi_{i,j}|)^p$. Thus if $\max_{i\in\mathbb{N}}|\psi_{i,m}| = O(e^{-bm^{\gamma_1}})$ then by Theorems 2.2 and 2.3 respectively $\sqrt{n}M_n = O_p(\sqrt{\ln(k_n)})$ and $M_n \stackrel{a.s.}{\to} 0$ whenever $\ln(k_n) = o(n)$.

EXAMPLE 2 ($\rho$-Lipschitz Markov Chains). Let $x_{i,t} = f_i(x_{i,t-1}) + \epsilon_{i,t}$, with $\epsilon_{i,t}$ as above, where $f_i$ is $\rho_i$-Lipschitz, $\rho_i \in [0, 1)$. If $x_{i,t}$ is $L_p$-bounded then $\tau^{(p)}_{i,n}(m) \le K\rho_i^m$ (see Dedecker and Prieur (2005, p. 215)). Theorems 2.2 and 2.3 apply when $\rho_i \in (0, e^{-b}]$, $b > 0$.

2.2 Physical dependence

Next we augment the physical dependence measure in Wu (2005, Defn. 1) to cover non-stationary arrays, similar to Chang, Chen and Wu (2024, Section 2.1.3). We initially ignore dependence across coordinates, and then control cross-coordinate dependence to improve the bound on $k_n$. Strong laws are presented in most cases to conserve space. Suppose for measurable functions $g_{i,n,t}(\cdot)$ that may depend on $(i, n, t)$,

$$x_{i,n,t} = g_{i,n,t}(\epsilon_{i,t}, \epsilon_{i,t-1}, \dots) \quad (2.5)$$

where $\{\epsilon_{i,t}\}$ are for each $i$ iid sequences.
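The representation (2.5) can be simulated directly: replace the innovation $m$ periods back with an independent copy and measure the $L_2$ distance between the original and coupled outputs. A minimal sketch, assuming a hypothetical linear $g$ with geometric weights (not the paper's code):

```python
import numpy as np

def coupled_l2_distance(m, rho=0.5, lag_trunc=200, reps=20_000, seed=0):
    """Estimate theta^(2)(m) = ||x_t - x'_t(m)||_2 for the linear process
    x_t = sum_j rho^j eps_{t-j}: only the lag-m innovation is replaced."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((reps, lag_trunc))
    eps_prime_m = rng.standard_normal(reps)     # independent copy at lag m
    w = rho ** np.arange(lag_trunc)             # assumed weights of g
    x = eps @ w
    x_coupled = x + w[m] * (eps_prime_m - eps[:, m])
    return np.sqrt(np.mean((x - x_coupled) ** 2))

# Here theta^(2)(m) = rho^m * sqrt(2) exactly, so the accumulated
# coefficient Theta^(2) = sqrt(2)/(1 - rho) is finite.
print([round(coupled_l2_distance(m), 3) for m in (0, 2, 4)])
```

The geometric decay visible in the output is what makes the accumulated coefficient finite, i.e. the process physical dependent in the sense defined next.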
Examples include linear and nonlinear time series like switching, random coefficient and (non)linear Markov processes. Let $\{\epsilon'_{i,t}\}$ be an independent copy of $\{\epsilon_{i,t}\}$, and define the coupled process

$$x'_{i,n,t}(m) := g_{i,n,t}(\epsilon_{i,t}, \dots, \epsilon_{i,t-m+1}, \epsilon'_{i,t-m}, \epsilon_{i,t-m-1}, \dots), \quad m = 0, 1, 2, \dots$$

The (serial) $L_p$-physical dependence measure $\theta^{(p)}_{i,n,t}(m)$ and its accumulation are defined as

$$\theta^{(p)}_{i,n,t}(m) := \|x_{i,n,t} - x'_{i,n,t}(m)\|_p \quad\text{and}\quad \Theta^{(p)}_{i,n,t} := \sum_{m=0}^\infty\theta^{(p)}_{i,n,t}(m).$$

We say $x_{i,n,t}$ is $L_p$-physical dependent (over $t$, for each $i$) when $\Theta^{(p)}_{i,n,t} < \infty$. This covers $\alpha$-mixing, $\tau^{(p)}$-mixing, non-mixing and mixingale arrays (Wu, 2005; Hill, 2024, 2025a).

2.2.1 Unrestricted Coordinates

The following generalizes $L_p$-moment (i.e. Rosenthal-type) and Bernstein inequalities in Wu (2005, Theorem 2) to possibly non-stationary arrays, complementing the HD central limit theory in Chang et al. (2024, Section 2.1.3). See Liu, Xiao and Wu (2013, Theorem 1) for a modest improvement in the stationary case. We need

$$Z_{i,l} := \frac{1}{\sqrt{n}}\sum_{t=1}^l x_{i,n,t} \quad\text{and}\quad \gamma_i(\alpha) := \limsup_{p\to\infty}p^{1/2-1/\alpha}\Theta^{(p)}_i \text{ with } \alpha > 0 \text{ and } \Theta^{(p)}_i := \limsup_{n\to\infty}\max_{1\le t\le n}\Theta^{(p)}_{i,n,t}.$$

Lemma 2.4. Assume $\Theta^{(p)}_{i,n,t} \in (0, \infty)$ for $p > 1$, and each $1 \le i \le k_n$ and $1 \le t \le n$. Write $p' := p \wedge 2$.
a. $\|\max_{1\le l\le n}|Z_{i,l}|\|_p \le \mathcal{B}_p n^{1/p'-1/2}\max_{1\le t\le n}\Theta^{(p)}_{i,n,t}$, where $\mathcal{B}_p = 18[p^{5/2}/(p-1)^{3/2}]$ if $p \in (1, 2)$, else $\mathcal{B}_p = \sqrt{2}[p^{3/2}/(p-1)]$.
b. If $\max_{i\in\mathbb{N}}\gamma_i(\alpha) < \infty$ for some $1 < \alpha \le 2$, then $\max_{i\in\mathbb{N}}P(\max_{1\le l\le n}|Z_{i,l}| > u) \le \mathcal{C}\exp\{-\mathcal{K}u^\alpha\}$ for some $\mathcal{C}, \mathcal{K} \in (0, \infty)$ that depend on $\gamma_i(\alpha)$ (uniformly) and $\alpha$.

Remark 2.3. (b) exploits (a). (a) uses a martingale difference approximation (e.g. Wu, 2005, 2011), with Doob's inequality, and either Burkholder's inequality when
$p \in (1, 2)$, or a moment bound due to Dedecker and Doukhan (2003), cf. Rio (2017, Chapt. 2.5). See also Jirak and Köstenberger (2024, Lemma 21). We can evidently also set $p \in (0, 1]$ by using related general Doob-type bounds (e.g. Kühn and Schilling, 2023, Theorem 4.4).

Remark 2.4. $\max_{i\in\mathbb{N}}\gamma_i(\alpha) < \infty$ nests a well known polynomial moment growth property that is equivalent to sub-exponential tails. It holds, for example, if we first set $\theta^{(p)}_{i,n,t}(m) \le d^{(p)}_{i,n,t}\psi_{i,m}$ where $\max_{i\in\mathbb{N}}\psi_{i,m} = O(m^{-\lambda-\iota})$, $\lambda \ge 1$ and tiny $\iota > 0$. By construction $\theta^{(p)}_{i,n,t}(m) \le 2\|x_{i,n,t}\|_p$, hence $d^{(p)}_{i,n,t} \le 2\|x_{i,n,t}\|_p$. Then by a change of variable, $p^{-b}\|x_{i,n,t}\|_p < \infty$ uniformly in $(i, n, p, t)$ for some $b \in [0, \infty)$ yields $\max_{i\in\mathbb{N}}\gamma_i(\alpha) < \infty$ with $\alpha = 2/(1+2b)$. When $b \le 1$ we have classically defined sub-exponential tails (cf. Vershynin, 2018, Proposition 2.7.1(b)). The latter holds, for example, when $P(|x_{i,n,t}| > u) \le \mathcal{C}\exp\{-\mathcal{K}u^\alpha\}$ uniformly in $(i, t)$. Thus $\alpha = 1$ (i.e. $b = 1/2$) implies sub-Gaussian tails.

We now have a max-WLLN under physical dependence [pd]. The result allows for trending dependence coefficients $\Theta^{(p)}_n := \max_{1\le i\le k_n, 1\le t\le n}\Theta^{(p)}_{i,n,t} \to \infty$ as $n \to \infty$. We work under $L_p$-boundedness or sub-exponential tails. In the former case, without cross-coordinate dependence information we use Lyapunov's inequality and $\max_{1\le i\le k}|x_i| \le \sum_{i=1}^k|x_i|$ to obtain

$$EM_n \le \|M_n\|_p \le \Big(\sum_{i=1}^{k_n}E\Big|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\Big|^p\Big)^{1/p} \le k_n^{1/p}\max_{1\le i\le k_n}\Big\|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\Big\|_p. \quad (2.6)$$

A max-WLLN thus rests on a maximal inequality to bound $\|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\|_p$.

Theorem 2.5 (max-WLLN: pd). Let $x_{i,n,t}$ be $L_p$-physical dependent, $p > 1$, with $\Theta^{(p)}_{i,n,t} \in (0, \infty)$ for each $1 \le i \le k_n$ and $1 \le t \le n$. Write $p' := p \wedge 2$.
a. $M_n \stackrel{L_p}{\to} 0$ if $k_n = o(n^{p(1-1/p')}/\Theta^{(p)}_n)$, and $\sqrt{n}M_n = O_p(k_n^{1/p}n^{1/p'-1/2}\Theta^{(p)}_n)$.
b. If additionally $\max_{i\in\mathbb{N}}\gamma_i(\alpha) < \infty$ for some $1 < \alpha \le 2$, then $M_n \stackrel{L_1}{\to} 0$ for any $\ln(k_n) \le \mathcal{K}^{1/\alpha}\sqrt{n}/2$, and $\sqrt{n}M_n = O_p(\ln(k_n))$, where $\mathcal{K} > 0$ is defined in Lemma 2.4.b.

Remark 2.5. Under (b) we need $1 < \alpha \le 2$ in order to bound the moment generating function $E\exp\{\lambda|\frac{1}{\sqrt{n}}\sum_{t=1}^n x_{i,n,t}|\}$ following a log-exp bound and Bernstein inequality Lemma 2.4.b.
This is not inconsequential considering $\Theta^{(p)}_i$ is non-decreasing in $p$ by Lyapunov's inequality. This rules out $\limsup_{p\to\infty}\Theta^{(p)}_i/p^{1/\alpha-1/2} < \infty$ for small $\alpha < 1$, thus excluding "heavier tailed" cases where $\Theta^{(p)}_i \to \infty$ rapidly in $p$.

Remark 2.6. Adamek et al. (2023, Lemma A.4) derive a concentration inequality under an NED property with uniform $L_p$-boundedness and $p > 2$. Their result yields $M_n \stackrel{p}{\to} 0$ if $k_n = o(n^{p/2})$. We allow for trending higher moments and $p \in (1, 2]$, where physical dependence is implied by the adapted mixingale property (Hill, 2025a, Theorem 2.1), and mixingales nest NED (Davidson, 1994, Chap. 17). Thus our max-WLLN, and the related max-SLLN below, are broader in scope.

Remark 2.7. Using our notation and expanding terms, Mies and Steland (2023, Theorem 3.2) prove, under $L_q$-bounded $L_p$-physical dependence with coefficients $\theta^{(p)}_{i,n,t}(m) \le d^{(p)}_{i,n,t}(m \vee 1)^{-\beta-\iota}$, $\beta \ge 1$, where $d^{(p)}_{i,n,t} \le 2\|x_{i,n,t}\|_p$, and $2 \le p \le q$,

$$\Bigg(E\max_{1\le l\le n}\Big(\sum_{i=1}^{k_n}\Big|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\Big|^p\Big)^{q/p}\Bigg)^{1/q} \le K\frac{1}{n^{1/2}}D_n\sum_{m=1}^\infty\frac{1}{m^{\beta+\iota}},$$

where $D_n := 2[\max_{1\le t\le n}E(\sum_{i=1}^{k_n}|x_{i,n,t}|^p)^{q/p}]^{1/q}$. Cf. Mies and Steland (2023, Theorem 6.6) and Pinelis (1994, Theorem 4.1). Thus, they deliver an $L_q$-maximal inequality for the $l_p$-norm $(\sum_{i=1}^{k_n}|\sum_{t=1}^l x_{i,n,t}|^p)^{1/p}$. The bound depends implicitly on $k_n$ through $D_n$. Set $q = p$ to yield

$$\Bigg(E\max_{1\le l\le n}\sum_{i=1}^{k_n}\Big|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\Big|^p\Bigg)^{1/p} \le K\Big\{\frac{k_n}{n^{p/2}}\max_{1\le i\le k_n, 1\le t\le n}\|x_{i,n,t}\|_p^p\Big\}^{1/p}$$

for $p \ge 2$. Compare that with the implication of Lemma 2.4.a and (2.6),
$$\Bigg(E\max_{1\le i\le k_n, 1\le l\le n}\Big|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\Big|^p\Bigg)^{1/p} \le \Bigg(E\max_{1\le l\le n}\sum_{i=1}^{k_n}\Big|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\Big|^p\Bigg)^{1/p} \le \mathcal{B}_p\Big\{\frac{k_n}{n^{p(1-1/p')}}\max_{1\le i\le k_n, 1\le t\le n}\big\{\Theta^{(p)}_{i,n,t}\big\}^p\Big\}^{1/p}$$

for $p > 1$. If $p \ge 2$ then $n^{p(1-1/p')} = n^{p/2}$ and the upper bounds are virtually identical since cosmetically $\Theta^{(p)}_{i,n,t} \le 2\|x_{i,n,t}\|_p$. The major differences are that Mies and Steland (2023) (i) operate on the larger $\max_{1\le l\le n}(\sum_{i=1}^{k_n}|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}|^p)^{q/p}$ with $q \ge p$; (ii) require $p \ge 2$; (iii) only study $L_p$-bounds, while we also deliver a.s. convergence; (iv) use telescoping sums of approximating martingales based on arguments in Pinelis (1994). We also use martingale approximation theory, based on classic arguments, to prove Lemma 2.4.a, and therefore Theorem 2.5.a.

In order to prove a max-SLLN we use a standard subsequence argument. Notation is eased by working with sequences $\{x_t\}_{t=1}^n$ on $\mathbb{R}^{k_n}$, with $\theta^{(p)}_{i,t}(m) = \|x_{i,t} - x'_{i,t}(m)\|_p$ and $\Theta^{(p)}_{i,t} = \sum_{m=0}^\infty\theta^{(p)}_{i,t}(m)$. We further aid arguments by decomposing $\theta^{(p)}_{i,t}(m)$ à la mixingales. Suppose $\theta^{(p)}_{i,t}(m) \le d^{(p)}_{i,t}\psi_{i,m}$ where $d^{(p)}_{i,t}$ captures $L_p$ heterogeneity, and $\max_{i\in\mathbb{N}}\psi_{i,m} = O(m^{-\lambda-\iota})$ for some size $\lambda \ge 1$ (cf. McLeish, 1975; Andrews, 1988). We can always take $d^{(p)}_{i,t} \le 2\|x_{i,t}\|_p$, hence $\Theta^{(p)}_{i,t} \le K\|x_{i,t}\|_p$. Let $1 < \alpha \le 2$ and define

$$\mathring{\gamma}_i(\alpha) := \limsup_{p\to\infty}p^{1/2-1/\alpha}\bar{d}^{(\alpha p)}\sum_{m=0}^\infty\psi_{i,m} \quad\text{with}\quad \bar{d}^{(p)} := \limsup_{n\to\infty}\max_{1\le i\le k_n, 1\le t\le n}\|x_{i,t}\|_p.$$

We use classic Cauchy sequence and Kronecker lemma arguments. Recall $p' := p \wedge 2$.

Theorem 2.6 (max-SLLN: pd). Let $x_{i,t}$ be $L_p$-physical dependent, $p > 1$, with $\theta^{(p)}_{i,t}(m) \le d^{(p)}_{i,t}\psi_{i,m}$ and $\max_{i\in\mathbb{N}}\psi_{i,m} = O(m^{-\lambda-\iota})$ for some $\lambda \ge 1$.
a. If $\sum_{s=1}^\infty\max_{1\le i\le k_n}\{d^{(p)}_{i,s}/s^b\}^{p'} < \infty$ for some $b \in (1/p', 1)$, then $M_n \stackrel{a.s.}{\to} 0$ when $k_n = o(n^{p(1-b)}/\max_{1\le i\le k_n, 1\le t\le n}E|x_{i,t}|^p)$.
b. If $\max_{i\in\mathbb{N}}\mathring{\gamma}_i(\alpha) < \infty$ for some $1 < \alpha \le 2$ and $\ln(k_n) = o(n^{\alpha/2-\iota})$ then $M_n \stackrel{a.s.}{\to} 0$.

Remark 2.8. Strong convergence $M_n \stackrel{a.s.}{\to} 0$ comes at a small cost under (a): with $p'b > 1$ we require $k_n = o(n^{p-pb}/\cdots)$, compared to $k_n = o(n^{p-p/p'}/\cdots)$ from max-WLLN Theorem 2.5.a and $\Theta^{(p)}_{i,t} \le K\|x_{i,t}\|_p$. Ultimately this is due to the use of Borel-Cantelli and Kronecker lemma arguments.
The latter max-WLLN bound on $k_n$ reduces to $k_n = o(n^{p-1}/\cdots)$ if $p < 2$, else $k_n = o(n^{p/2}/\cdots)$, while for a strong law $n^{p-pb} < n^{p-1}$ when $p < 2$ and $n^{p-pb} < n^{p/2}$ when $p \ge 2$. Under stationarity or bounded trend $\max_{(i,t)\in\mathbb{N}}E|x_{i,t}|^p < \infty$, notice the max-SLLN bound $k_n = o(n^{p-1})$ follows from $\sum_{s=1}^\infty\max_{1\le i\le k_n}\{d^{(p)}_{i,s}/s^b\}^p \le \sum_{s=1}^\infty 1/s^{bp} < \infty$ $\forall b > 1/p$. This matches the max-WLLN bound only when $p < 2$.

EXAMPLE 3 (Iterated Random Functions). Consider $x_{i,n,t} = F_{i,t}(x_{i,n,t-1}, \epsilon_{i,t}) = F^{(\epsilon)}_{i,t}(x_{i,n,t-1})$ where $F_{i,t}$ are measurable bivariate functions, and $\epsilon_{i,t}$ are iid. Assume as in Liu et al. (2013, Example 1) the following fixed point and Lipschitz properties: there exist points $z_{i,0}$, $\max_{i\in\mathbb{N}}|z_{i,0}| < \infty$, and $p > 2$ such that, uniformly in $i$,

$$\kappa_i(p) := \limsup_{n\to\infty}\max_{1\le t\le n}\big\|z_{i,0} - F^{(\epsilon)}_{i,t}(z_{i,0})\big\|_p < \infty$$

$$\lambda_i(p) := \limsup_{n\to\infty}\max_{1\le t\le n}\Bigg\|\sup_{x\ne x'}\frac{|F^{(\epsilon)}_{i,t}(x) - F^{(\epsilon)}_{i,t}(x')|}{|x - x'|}\Bigg\|_p < 1.$$

Replicating arguments in Wu and Shao (2004, Theorem 2) and Liu et al. (2013, Example 1) yields a uniform functional dependence bound

$$\theta^{(p)}_i(m) := \limsup_{n\to\infty}\max_{1\le t\le n}\big\|x_{i,n,t} - x'_{i,n,t}(m)\big\|_p \le \mathcal{K}_p\lambda_i^m(p)$$

for some finite universal constant $\mathcal{K}_p > 0$ depending only on $\max_{i\in\mathbb{N}}|z_{i,0}|$, $\sup_{i\in\mathbb{N}}\kappa_i(p)$, and $\max_{i\in\mathbb{N}}\lambda_i(p)$. Therefore $\Theta^{(p)}_i \le \mathcal{K}_p/(1 - \lambda_i(p))$. Thus by Theorem 2.5.a, $M_n \stackrel{p}{\to} 0$ if $k_n = o(n^{p(1-1/p')}/\{\Theta^{(p)}_n\})$, and $\sqrt{n}M_n = O_p(k_n^{1/p}n^{1/p'-1/2}\{\Theta^{(p)}_n\}^p)$.

EXAMPLE 4 (max-SLLN with $L_p$-Trend). Let $\max_{1\le i\le k_n}E|x_{i,t}|^p \le Kt^a$ for some $p > 1$, $a < b - 1/p'$, and $b \in (1/p', 1)$, and $\forall n$. Then $\sum_{s=1}^\infty\max_{1\le i\le k_n}\{d^{(p)}_{i,s}/s^b\}^{p'} < \infty$ holds, and $\max_{1\le i\le k_n, 1\le t\le n}\|x_{i,t}\|_p \le Kn^a$. Hence we need $k_n = o(n^{p(1-b-a)})$ to yield $M_n \stackrel{a.s.}{\to} 0$.

We continue to work under serial $L_p$-physical dependence, but
now impose restrictions across coordinates $i$ to improve bounds on $k_n$.

2.2.2 Martingale Coordinates

Write $S_{i,n} := \frac{1}{n}\sum_{t=1}^n x_{i,n,t}$, and let the filtrations $\{\mathcal{F}_{i,n}\}_{i\in\mathbb{N}}$ be such that $\sigma(\{x_{i,n,t}\}_{t=1}^n) \subseteq \mathcal{F}_{i,n}$ and $E_{\mathcal{F}_{i,n}}[x_{i+1,n,t}] = x_{i,n,t}$ $\forall i, n, t$. Then $E_{\mathcal{F}_{i,n}}[S_{i+1,n}] = S_{i,n}$, hence the collection $\{S_{i,n}, \mathcal{F}_{i,n}\}_{i\ge1}$ forms a martingale. Doob's inequality applies for any $p > 1$ (Hall and Heyde, 1980, Theorem 2.2):

$$E\max_{1\le i\le k_n}|S_{i,n}|^p \le \Big(\frac{p}{p-1}\Big)^p E|S_{k_n,n}|^p.$$

Now apply Lemma 2.4.a under physical dependence to $E|S_{k_n,n}|^p$ to deduce

$$E\max_{1\le i\le k_n}|S_{i,n}|^p \le \mathcal{B}_p^p\Big(\frac{p}{p-1}\Big)^p n^{p/p'-p}\Big\{\max_{1\le t\le n}\Theta^{(p)}_{k_n,n,t}\Big\}^p = C_p n^{p/p'-p}\Big\{\max_{1\le t\le n}\Theta^{(p)}_{k_n,n,t}\Big\}^p,$$

with $C_p$ implicit. Since $p/p' < p$ we have $E\max_{1\le i\le k_n}|S_{i,n}|^p \to 0$ for any $\{k_n\}_{n\in\mathbb{N}}$ when $\max_{1\le t\le n}\Theta^{(p)}_{k_n,n,t} = o(n^{1-1/p'})$. The latter holds, for example, when $\theta^{(p)}_{i,n,t}(m) \le d^{(p)}_{i,n,t}\psi_{i,m}$ with bounded $L_p$-trend $d^{(p)}_{i,n,t} \le 2\|x_{i,n,t}\|_p \le Kt^{1-1/p'-\iota}$ for tiny $\iota > 0$. A trivial special case is perfect dependence $P(x_{i,n,t} = x_{j,n,t}) = 1$ $\forall(i, j)$, and a similar result extends to submartingales (e.g. Hall and Heyde, 1980, Theorem 2.1). The preceding discussion with Markov's inequality proves the next result.

Theorem 2.7 (max-WLLN: pd over $t$, martingale over $i$). Let $x_{i,n,t}$ be $L_p$-physical dependent over $t$, $p > 1$, with $\Theta^{(p)}_{i,n,t} \in (0, \infty)$ for each $1 \le i \le k_n$ and $1 \le t \le n$. If there exist filtrations $\{\mathcal{F}_{i,n}\}_{i\in\mathbb{N}}$ satisfying $\sigma(\{x_{i,n,t}\}_{t=1}^n) \subseteq \mathcal{F}_{i,n}$ and $E_{\mathcal{F}_{i,n}}x_{i+1,n,t} = x_{i,n,t}$ $\forall i, n, t$, then $M_n \stackrel{L_p}{\to} 0$ for any $\{k_n\}$ provided $\max_{1\le t\le n}\Theta^{(p)}_{k_n,n,t} = o(n^{1-1/p'})$.

A corresponding max-SLLN for sequences $\{x_t\}$ follows. Assume $\theta^{(p)}_{i,t}(m) = \|x_{i,t} - x'_{i,t}(m)\|_p \le d^{(p)}_{i,t}\psi_{i,m}$ as with Theorem 2.6, where $\max_{i\in\mathbb{N}}\psi_{i,m} = O(m^{-\lambda-\iota})$, $\lambda \ge 1$. We again find $k_n$ is unbounded.

Theorem 2.8 (max-SLLN: pd over $t$, martingale over $i$). Assume for some $b \in (1/p', 1)$, $\limsup_{n\to\infty}\sum_{s=1}^\infty\max_{1\le i\le k_n}\{d^{(p)}_{i,s}/s^b\}^p < \infty$. Under the conditions of Theorem 2.7, $M_n \stackrel{a.s.}{\to} 0$ for any $\{k_n\}$ provided $\max_{1\le i\le k_n, 1\le t\le n}E|x_{i,t}|^p = o(n^{p(1-b)})$.

EXAMPLE 5 (Serial Random Walk with PD Coordinates). Suppose $x_{i,t} = x_{i,t-1} + \epsilon_{i,t}$, $t \ge 1$, $x_{i,0} = 0$ a.s., where $\epsilon_{i,t}$ are zero mean iid, and $L_p$-bounded, $p > 1$. As always $Ex_{i,t} = 0$ $\forall i, t$.
Assume $x_{i,n,t}$ is $L_p$-physical dependent over $i$. Then $x_{i,t}$ is not serially physical dependent because $\sum_{m=0}^N\theta^{(p)}_{i,n,t}(m) \to \infty$ as $N \to \infty$, hence the above results cannot be used to study $\max_{1\le i\le k_n}|\frac{1}{n}\sum_{t=1}^n x_{i,t}|$. We can, however, study the converse problem of a max-LLN for the cross-coordinate mean $\tilde{M}_n := \max_{1\le t\le g_n}|\frac{1}{k_n}\sum_{i=1}^{k_n}x_{i,t}|$ where $\{g_n\}_{n\in\mathbb{N}}$ is a sequence of positive integers, $g_n \to \infty$. By Theorem 2.8, $\tilde{M}_n \stackrel{a.s.}{\to} 0$ for any $k_n$ or $g_n$ (e.g. $g_n = n$) provided $\max_{1\le i\le k_n, 1\le t\le g_n}\Theta^{(p)}_{i,t} = o(k_n^{1-b})$. The latter automatically holds if $\max_{1\le i\le k_n, 1\le t\le g_n}\Theta^{(p)}_{i,t} = o(g_n^{1-b})$ and $k_n/g_n \to \infty$ (e.g. $g_n = n$ and $k_n/n \to \infty$).

2.2.3 Nearly Martingale Gaussian Coordinates

We relax the martingale assumption to hold only as $n \to \infty$ at a sufficiently slow rate. In the remainder of this section we only develop weak max-LLN's to focus ideas. In the following we use maximum domain of attraction theory for a Gaussian array to explore how classic theory yields a better bound on $k_n$. Write

$$\tilde{Z}_{i,n} := \frac{1}{\mathcal{V}_{i,n}}\frac{1}{\sqrt{n}}\sum_{t=1}^n x_{i,n,t} = \frac{\sqrt{n}}{\mathcal{V}_{i,n}}S_{i,n} \quad\text{where}\quad \mathcal{V}^2_{i,n} := E\Big(\frac{1}{\sqrt{n}}\sum_{t=1}^n x_{i,n,t}\Big)^2,$$

and assume $\liminf_{n\to\infty}\inf_{1\le i\le k_n}\mathcal{V}^2_{i,n} > 0$. Notice $\max_{1\le i\le k_n}\mathcal{V}^2_{i,n} = O(1)$ by Lemma 2.4.a when $\max_{1\le i\le k_n, 1\le t\le n}\Theta^{(q)}_{i,n,t} = O(1)$. Assume $x_{i,n,t} \sim N(0, 1)$ is strictly stationary over $(i, t)$ to ease discussion, thus $\{\tilde{Z}_{i,n} : 1 \le i \le k_n\}_{n\in\mathbb{N}}$ is a stationary standard normal array. Define cross-coordinate correlations $\rho_{n,j} := E\tilde{Z}_{i,n}\tilde{Z}_{i+j,n}$. Then for some $\vartheta \in [0, 1]$, and sequences $a_n = \sqrt{2\ln(n)}$ and $b_n \sim \sqrt{2\ln(n)}$, under regularity conditions on $\rho_{n,j}$ that include $(1 - \rho_{n,j})\ln(k_n) \to \delta_j \in (0, \infty]$, we have (see Hsing, Hüsler and Reiss (1996, eq.'s (2.1)-(2.3)), cf. Berman (1964))

$$P\Bigg(\frac{\max_{1\le i\le k_n}|\tilde{Z}_{i,n}| - a_{k_n}}{b_{k_n}} \le u\Bigg) \to \exp\{-\vartheta\exp(-u)\} \quad \forall u \in (-\infty, \infty). \quad (2.7)$$

The latter permits strong dependence
with $|\rho_{n,j}| < 1$ and $|\rho_{n,j}| \to 1$ as $n \to \infty$ at a sufficiently slow rate. For example $\rho_{n,j} = (1 - \zeta/\ln(k_n))^j$, hence $\delta_j = j\zeta$ (see Example 6 below). By (2.7) it follows for $\sqrt{n}/b_{k_n} \to \infty$ and $a_{k_n}/\sqrt{n} \to 0$, thus $\ln(k_n) = o(n)$,

$$P(M_n > u) \le P\Bigg(\frac{\max_{1\le i\le k_n}|\tilde{Z}_{i,n}| - a_{k_n}}{b_{k_n}} > \frac{\sqrt{n}}{b_{k_n}}\Big\{\frac{u}{\max_{1\le i\le k_n}\mathcal{V}_{i,n}} - \frac{a_{k_n}}{\sqrt{n}}\Big\}\Bigg) \to 0 \quad \forall u \ge 0.$$

Compare this to $\ln(k_n) = O(\sqrt{n})$ for sub-Gaussian arrays in Theorem 2.5.b. This proves the next max-WLLN result.

Theorem 2.9 (max-WLLN: pd over $t$, nearly martingale over $i$). Let $x_{i,n,t}$ be stationary $L_p$-physical dependent, $p > 1$, over $t$. Assume $x_{i,n,t} \sim N(0, 1)$, $\rho_{n,j}$ satisfies (2.1)-(2.3) in Hsing et al. (1996), and $\liminf_{n\to\infty}\inf_{1\le i\le k_n}\mathcal{V}^2_{i,n} > 0$. Then $M_n \stackrel{p}{\to} 0$ if $\ln(k_n) = o(n)$.

EXAMPLE 6 (Gaussian AR(1) Array). Assume $x_{i,n,t}$ is serially $L_p$-physical dependent, $p > 1$, with $\Theta^{(q)}_{i,t} \in (0, \infty)$. Suppose coordinate-wise $x_{i+1,n,t} = d_n x_{i,n,t} + \sqrt{1 - d_n^2}\,\varepsilon_{i+1,t}$ $\forall(i, t)$ with mutually and serially iid $\varepsilon_{i,t} \sim N(0, 1)$, and $d_n := 1 - \zeta/\ln(k_n)$ for some $\zeta > 0$. Hence $(1 - \rho_{n,j})\ln(k_n) \to j\zeta$ (see Hsing et al., 1996, Section 3). Moreover, $Ex^2_{i+1,n,t} = 1$ hence $x_{i,t} \sim N(0, 1)$ and $\mathcal{V}^2_{i,n} = \frac{1}{n}\sum_{t=1}^n\sum_{j=0}^\infty d_n^{2j}(1 - d_n^2) = 1$ $\forall i$. Thus we have $\tilde{Z}_{i+1,n} = d_n\tilde{Z}_{i,n} + \mathcal{E}_{i+1,n}$ for iid $\mathcal{E}_{i,n} \sim N(0, 1)$. Then $M_n \stackrel{p}{\to} 0$ if $\ln(k_n) = o(n)$.

2.2.4 Mixing Coordinates

Finally, define serial and cross-coordinate $\alpha$-mixing coefficients under stationarity (to ease notation below):

$$\alpha_n(m) := \max_{1\le i\le k_n, 1\le t\le n}\sup_{A\in\mathcal{F}^t_{i,n,-\infty},\,B\in\mathcal{F}^\infty_{i,n,t+m}}|P(A \cap B) - P(A)P(B)|$$

$$\tilde{\alpha}_n(m) := \max_{1\le i\le k_n}\sup_{A\in\mathcal{G}^i_{n,-\infty},\,B\in\mathcal{G}^\infty_{n,i+m}}|P(A \cap B) - P(A)P(B)|,$$

where $\mathcal{F}^t_{i,n,s} := \sigma(x_{i,n,\tau} : 1 \le s \le \tau \le t)$ and $\mathcal{G}^j_{n,i} := \sigma(\{x_{l,n,t}\}_{t=1}^n : 1 \le i \le l \le j)$. Notice $\lim_{m\to\infty}\tilde{\alpha}_n(m) = 0$ dictates cross-coordinate independence between samples $\{x_{i,n,t}\}_{t=1}^n$ and $\{x_{i+m,n,t}\}_{t=1}^n$ as $m \to \infty$. We need uniform serial mixing $\alpha_n(m) \to 0$ fast enough to ensure the serial physical dependence property holds, supporting $\max_{1\le i\le k_n}\mathcal{V}^2_{i,n} = O(1)$. We need $\tilde{\alpha}_n(m) \to 0$ fast enough to ensure cross-coordinate tail-based $D$- and $D'$-mixing properties hold (Leadbetter, 1974, 1983), promoting a maximum domain of attraction condition akin to (2.7). Recall $S_{i,n} := \frac{1}{n}\sum_{t=1}^n x_{i,n,t}$ and $Z_{i,l} := \frac{1}{\sqrt{n}}\sum_{t=1}^l x_{i,n,t}$.
Theorem 2.10 (max-WLLN: mixing over $(i, t)$). Let $\{x_{i,n,t} : 1 \le t \le n\}_{n\in\mathbb{N}}$ be stationary $L_p$-bounded, $p > 1$, with $\limsup_{n\to\infty}\alpha_n(m) = O(m^{-\lambda-\iota})$ for some $\lambda > qp/(q - p)$ and $q > p$, and $\limsup_{n\to\infty}\tilde{\alpha}_n(m) = O(m^{-2-\iota})$. Let $\max_{1\le i\le k_n}k_n P(|\sqrt{n}S_{i,n}| > u_{k_n}) \to \tau \in [0, \infty)$ for $u_n = a_n + ub_n$ ($\forall u \in \mathbb{R}$, and some $a_n > 0$ and $b_n \in \mathbb{R}$). Then $M_n \stackrel{p}{\to} 0$ provided

$$\sqrt{n}/b_{k_n} \to \infty \quad\text{and}\quad a_{k_n}/\sqrt{n} \to 0. \quad (2.8)$$

It remains to ensure $\max_{1\le i\le k_n}k_n P(|\sqrt{n}S_{i,n}| > u_{k_n}) \to \tau \ge 0$ for $u_n = a_n + ub_n$, all $u$, such that (2.8) holds. We look at Gaussian, sub-exponential and stable domain cases, each yielding a distinct bound on $k_n$ that dominates Theorem 2.5. Moreover, in each case we can take $a_{k_n} \sim b_{k_n} > 0$, hence $a_{k_n}/b_{k_n} = O(1)$. Throughout, serial and coordinate mixing hold: $\limsup_{n\to\infty}\alpha_n(m) = O(m^{-\lambda-\iota})$ and $\limsup_{n\to\infty}\tilde{\alpha}_n(m) = O(m^{-2-\iota})$.

EXAMPLE 7 (Gaussian). Let $x_{i,n,t} \sim N(0, 1)$ $\forall i, n, t$, hence $\sqrt{n}S_{i,n} \sim N(0, \mathcal{V}^2_{i,n})$ where $\mathcal{V}^2_{i,n} = nE(S^2_{i,n})$. By the proof of Theorem 2.10 we may invoke Lemma 2.4.a to deduce $\mathcal{V}^2_{i,n} = O(1)$. Assume $\liminf_{n\to\infty}\min_{1\le i\le k_n}\mathcal{V}^2_{i,n} > 0$. We want $\{u_n\}_{n\in\mathbb{N}}$ such that $\max_{1\le i\le k_n}P(|\sqrt{n}S_{i,n}| > u_{k_n}) \sim \tau/k_n$. Well known Gaussian tail properties yield $u_{k_n} \simeq \sqrt{2\ln(k_n) - 2\ln(\tau\sqrt{2\pi})} \simeq \sqrt{2\ln(k_n)}$. Thus we need $\ln(k_n) = o(n)$ to yield (2.8). Compare that to $\ln(k_n) = O(\sqrt{n})$ from Theorem 2.5.a.

EXAMPLE 8 (Sub-Exponential). Let $\max_{1\le i\le k_n, 1\le t\le n}P(|x_{i,n,t}| > u) \le \mathcal{C}\exp\{-\mathcal{K}u^\alpha\}$ for some $1 < \alpha \le 2$. Then by Lemma 2.4.b and Remark 2.4, $\max_{i\in\mathbb{N}}P(\max_{1\le l\le n}|Z_{i,l}| > u) \le \mathcal{C}\exp\{-\mathcal{K}u^\alpha\}$. Thus $\tau/k_n \sim \max_{i\in\mathbb{N}}P(|\sqrt{n}S_{i,n}| > u_{k_n}) \le \mathcal{C}\exp\{-\mathcal{K}u^\alpha_{k_n}\}$ if $u_{k_n} \le (\mathcal{K}^{-1}\ln(k_n))^{1/\alpha}$. We therefore need $\ln(k_n) = o(n^{\alpha/2})$. The sub-Gaussian case holds when $\alpha = 2$,
reducing to Example 7. This is a mild improvement over Theorem 2.5.b where $\ln(k_n) = O(n^{1/2})$ for any $1 < \alpha \le 2$.

EXAMPLE 9 (Stable Domain). Suppose $\max_{1\le i\le k_n}P(\sum_{t=1}^n x_{i,n,t}/[n^{1/\varphi}h(n)] > u) \to \mathcal{S}_\varphi(u)$ $\forall u \in \mathbb{R}$, some $\varphi \in (1, 2)$, slowly varying $h(n)$ that may be different in different places, and some zero mean non-degenerate distribution $\mathcal{S}_\varphi(u)$. Then under the stated mixing conditions $\mathcal{S}_\varphi(u)$ is a stable law with infinite variance (symmetry and scale parameters are not shown here) (Ibragimov (1962, Theorem 1.1), cf. Nagaev (1957, Theorem 2.1)). Hence

$$\max_{1\le i\le k_n}k_n P\big(|\sqrt{n}S_{i,n}| > u\big) = \max_{1\le i\le k_n}k_n P\Bigg(\Big|\frac{1}{n^{1/\varphi}h(n)}\sum_{t=1}^n x_{i,n,t}\Big| > \frac{u_{k_n}}{n^{1/\varphi-1/2}h(n)}\Bigg) \sim k_n h(n)\Big\{\frac{u_{k_n}}{n^{1/\varphi-1/2}h(n)}\Big\}^{-\varphi},$$

yielding $u_{k_n} \sim n^{1/\varphi-1/2}h(n)(k_n/\tau)^{1/\varphi}$. We therefore need $k_n = o(n^{\varphi-1}/h(n))$ for some slowly varying $h(n)$. Compare this to $k_n = o(n^{q(1-1/q')})$ under Theorem 2.5.a with stationarity for $q > 1$. The index $\varphi$ is identically the moment supremum $\arg\sup\{r : E|Z_{i,n}|^r < \infty\}$, thus $q < \varphi < 2$. This implies $q(1 - 1/q') = q - 1 < \varphi - 1$, and we again yield a modest improvement.

3 Application #1: max-correlation

We now present three applications of the main results, pointing out max-LLN usage by case. The first is a max-correlation test under mixing and physical dependence settings. We do not develop a bootstrap theory in any application in order to focus ideas.

3.1 Residual max-correlation

Consider a linear regression model $y_t = \phi_0'x_{t-1} + \epsilon_t$, where $\phi_0 \in \mathbb{R}^{k_x}$, $k_x \ge 0$, $E\epsilon_t = 0$, with zero mean covariates $x_t = [x_{t,j}] \in \mathbb{R}^{k_x}$. We do not require $E[\epsilon_t|x_{t-1}] = 0$ a.s., thus mis-specification is allowed. At the expense of additional notation and assumptions, we can allow for a non-linear model and conditional volatility (see Hill and Motegi, 2020). We want to test whether the model error is white noise, $H_0 : E\epsilon_t\epsilon_{t-h} = 0$ $\forall h \in \mathbb{N}$ against $H_1 : E\epsilon_t\epsilon_{t-h} \ne 0$ for some $h \in \mathbb{N}$. Assume least squares estimation when $k_x > 0$: $\hat\phi_n = (\sum_{t=2}^n x_{t-1}x_{t-1}')^{-1}\sum_{t=2}^n x_{t-1}y_t$.
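The white-noise test built next rests on $\sqrt{n}\max_{1\le h\le k_n}|\hat\rho_n(h)|$, where $\hat\rho_n(h)$ is the lag-$h$ sample autocorrelation of the least squares residuals. A minimal Monte Carlo sketch under a hypothetical AR(1)-free design (illustrative only; critical values would come from the Gaussian approximation or a bootstrap, not implemented here):

```python
import numpy as np

def max_residual_correlation(y, x, k_n):
    """sqrt(n) * max_{1<=h<=k_n} |rho_hat_n(h)| from OLS residuals of y on x."""
    n = len(y)
    phi_hat = np.linalg.lstsq(x, y, rcond=None)[0]
    e = y - x @ phi_hat
    gamma0 = np.sum(e * e) / n
    rho = [np.sum(e[h:] * e[:-h]) / n / gamma0 for h in range(1, k_n + 1)]
    return np.sqrt(n) * np.max(np.abs(rho))

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal((n, 1))          # stands in for the lagged covariate x_{t-1}
eps = rng.standard_normal(n)             # white-noise error: H0 holds
y = 0.5 * x[:, 0] + eps
print(max_residual_correlation(y, x, k_n=30))
```

Under $H_0$ the statistic concentrates around $\sqrt{2\ln k_n}$; under serial error correlation it diverges at rate $\sqrt{n}$.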
Define the residual and its sample serial covariance and correlation at lag $h \ge 1$:

$$\epsilon_t(\hat\phi_n) := y_t - \hat\phi_n'x_{t-1}, \quad \hat\gamma_n(h) := \frac{1}{n}\sum_{t=1+h}^n\epsilon_t(\hat\phi_n)\epsilon_{t-h}(\hat\phi_n) \quad\text{and}\quad \hat\rho_n(h) := \frac{\hat\gamma_n(h)}{\hat\gamma_n(0)}.$$

The test statistic is $\sqrt{n}\max_{1\le h\le k_n}|\hat\rho_n(h)|$ for some sequence of positive lags $\{k_n\}$. A weighted version of $\hat\rho_n(h)$ is possible, allowing for standardization, or weights to account for lagging. Similarly, other estimators can be entertained, e.g. GMM, LAD, QML, and so on, although $\sqrt{n}$-asymptotics is assumed. Hill and Motegi (2020) use Ramsey theory to sidestep conventional HD approximations, ultimately using standard theory. They therefore cannot bound $\{k_n\}$, although $k_n = o(n)$ must hold for consistency of $\hat\rho_n(h)$. We now use HD max-LLN's in part to prove any $k_n = o(n)$ is valid.

3.2 Strong mixing

3.2.1 Assumptions

Let $\{\upsilon_t\}$ be an $\alpha$-mixing process with $\sigma$-fields $\mathcal{V}^t_s := \sigma(\upsilon_\tau : s \le \tau \le t)$ and $\mathcal{V}^t := \mathcal{V}^t_{-\infty}$, and coefficients $\alpha(m) = \sup_{t\in\mathbb{N}}\sup_{A\subset\mathcal{V}^\infty_t,\,B\subset\mathcal{V}^{t-m}_{-\infty}}|P(A \cap B) - P(A)P(B)| \to 0$ as $m \to \infty$. We impose second order stationarity to reduce notation, but otherwise allow for global
nonstationarity. Define $\hat{H}_n := \frac{1}{n}\sum_{t=2}^n x_{t-1}x_{t-1}'$, $H := Ex_tx_t'$, $\rho(h) := E\epsilon_t\epsilon_{t-h}/E\epsilon_t^2$,

$$\mathcal{D}_t(h) := Ex_{t-1}\epsilon_t\epsilon_{t-h} + E\epsilon_t x_{t-1-h}\epsilon_{t-h} \quad\text{and}\quad \mathcal{D}_n(h) := \frac{1}{n}\sum_{t=1+h}^n\{\mathcal{D}_t(h) + \mathcal{D}_t(-h) - 2\rho(h)\mathcal{D}_t(0)\}$$

$$z_{n,t}(h) := \frac{\{\epsilon_t\epsilon_{t-h} - E\epsilon_t\epsilon_{t-h}\} - \rho(h)\{\epsilon_t^2 - E\epsilon_t^2\} - \mathcal{D}_n(h)'H^{-1}x_{t-1}\epsilon_t}{E\epsilon_t^2}$$

$$\mathcal{Z}_n(h) := \frac{1}{\sqrt{n}}\sum_{t=1+h}^n z_{n,t}(h).$$

Assumption 1 (data generating process).
a. $(\epsilon_t, x_t)$ are zero-mean, $\mathcal{V}^t$-measurable, second order stationary, $E\epsilon_t^2 > 0$, and $E(\epsilon_t x_{t-1}) = 0$ for unique $\phi_0$, an interior point of compact $\Phi \subset \mathbb{R}^{k_x}$. Each $w_t \in \{\epsilon_t, x_t\}$ is governed by a non-degenerate distribution satisfying $\max_{t\in\mathbb{N}}P(|w_t| > z) \le b_1\exp\{-b_2z^{\gamma_1}\}$ $\forall z \ge 0$ for some universal constants $(b_1, b_2, \gamma_1) > 0$.
b. $\alpha(m) \le a_1\exp\{-a_2m^{\gamma_2}\}$ for constants $(a_1, a_2, \gamma_2) > 0$.
c. $H$ is positive
Chang, Jiang and Shao, 2023, Proposition 3). We will see below that use of physical dependence is a boon when the dimension involves such lags, since it need only hold for zn,t(h)uniformly overhrather than jointly for [zn,t(h)]kn h=1, and allows for slower-than-geometric memory decay. Lemmas 3.1 and 3.2 yield the desired sharpening of Hill and Motegi’s (2020) HD Gaus- sian approximation theory. Theorem 3.3. Under Assumption 1, for any {kn},kn=o(n1/9(ln(n))1/3), sup z≥0 P√nmax 1≤h≤kn|ˆρn(h)−ρ(h)| ≤z −P max 1≤h≤kn|Zn(h)| ≤z →0. 3.3 Physical dependence Now let {ut, vt}t∈Zbe iid sequences, and assume there exist measurable Rkx-valued and R-valued functions gt(·) and ht(·) satisfying xt=gt(ut, ut−1, ...) and ϵt=ht(vt, vt−1, ...). Let{u′ t, v′ t}t∈Zbe independent copies of {ut, vt}t∈Z, and ϵ′ t(m) and x′ t(m) be the coupled versions based on {u′ t, v′ t}t∈Z. Define Lp-dependence coefficients θ(p) t(m) :=||xt−x′ t(m)||p and˜θ(p) t(m) :=||ϵt−ϵ′ t(m)||p. Assumption 1 .b∗(physical dependence). (xi,t, ϵt)areLp-physical dependent: there exist constants d(p) t(h)and coefficients ψmsatisfying for some p≥4, and some size λ≥2, n θ(p) t(m)∨˜θ(p) t(m)∨˜θ(p) t−h(m)o ≤d(p) t(h)ψmandψm=O(m−λ−ι)andψ0= 1. The following result produces a significant improvement on knfor reasons given above (see Remark 3.2). See Hill (2024, Appendix D.2) for a proof. 20 Theorem 3.4. Under Assumption 1.a,b∗,c,d, for any {kn},kn=o(n), sup z≥0 P√nmax 1≤h≤kn|ˆρn(h)−ρ(h)| ≤z −P max 1≤h≤kn|Zn(h)| ≤z →0. 4 Application #2: marginal screening 4.1
Test statistic Consider a scalar outcome yand set of covariates x= [xi]kn i=1, with variances v(y), v(xi) ∈(0,∞). We want to test the hypothesis that no covariate is linearly related to y, H0:cov(y, xi) = 0∀i= 1, ..., k nand each n (4.1) H1:cov(y, xi)̸= 0 for some ∀i= 1, ..., k nasn→ ∞ , where the number of covariates kn→ ∞ , and kn/n→ ∞ is allowed. It is a simple generalization to permit kn→k∗for some k∗∈N∪ ∞. Now consider a sample of a covariance stationary process {yt, xt}n t=1, and marginal regression models yt=δi+ϕixi,t+vi,t=β′ i˜xi,t+vi,t, where Evi,t= 0 for each model i= 1, ..., k n, ˜xi,t= [1, xi,t]′, and the errors and covariates are orthogonal Exi,tvi,t= 0. The sub-script “ i” shows βimay be different for different regressors xi,t. Classically of course the (pseudo) true values are ϕi=cov(y, xi) v(xi)andδi=Eyt−ϕiExi,t. Notice yt=δi+vi,tunder H0for all i, thus δi:=Eyt∀i, and tautologically vi,t=vt:=yt −Eyt∀i. Define least squares estimators ˆβi= [ˆδi,ˆϕi]′:= (Pn t=1˜xi,t˜x′ i,t)−1×Pn t=1˜xi,tyt, and, e.g., ¯xi,n:= 1/nPn t=1xi,t, hence ˆϕi=1/nPn t=1(xi,t−¯xi,n) (yt−¯yn) 1/nPn t=1(xi,t−¯xi,n)2. McKeague and Qian (2015) study ˆϕ(ˆın),nas a mechanism for testing (4.1) based on an adaptively selected ˆ ın:= arg max 1≤i≤kn|ˆϕi|with iid {yt, xt}. They allow k > n but for fixed k. They present an adaptive resampling test in order to resolve non-uniform, and therefore non-standard, asymptotics implicit in√n|ˆϕ(ˆın),n|. See their introduction for 21 historical references. A test of H0, however, can also be based simply on√nmax 1≤i≤kn|ˆϕi|without an endogenously selected ˆ ın. This alleviates the need for adaptive re-sampling, while inference is easily gained by multiplier (block) bootstrap in high dimension, cf. Hill (2025b) and Hill and Li (2025). Historically, of course, there is interest in an endogenously selected “most informative” regressor, and generally post-model-selection inference. 
See, e.g., Leeb and Pötscher (2006), and consult McKeague and Qian (2015) for further reading.

The present theory is related to the HD parameter test in Hill (2025b). In that setting an iid linear regression model is explored, with a fixed (low) dimension nuisance parameter and a HD parameter to be tested. The present weak dependence setting allows for non-stationarity, with two tail settings, sub-exponentiality and $L_p$-boundedness, yielding respectively exponential and polynomial bounds on $k_n$. We impose second order stationarity to focus on the marginal regression setting itself.

4.2 Assumptions and main results

Define compact parameter spaces $\Phi_i, \mathcal{D} \subset \mathbb{R}$, and assume 0 and $\phi_i$ are interior points of $\Phi_i$. Define $\mathcal{B}_i := \Phi_i \otimes \mathcal{D}$. As long as there is no confusion we say uniformly to denote $\limsup_{n\to\infty}\max_{1\le i\le k_n, 1\le t\le n}$ or $\liminf_{n\to\infty}\min_{1\le i\le k_n}$, depending on the case.

Let $\{\epsilon_t\}_{t\in\mathbb{Z}}$ be an iid sequence, and assume there exists a measurable $\mathbb{R}^{k_n+1}$-valued function $g_t(\cdot)$ satisfying $[x_{1,t}, \dots, x_{k_n,t}, y_t]' = g_t(\epsilon_t, \epsilon_{t-1}, \dots)$. Define $L_p$-dependence coefficients $\theta^{(p)}_{i,t}(m) := \|x_{i,t} - x'_{i,t}(m)\|_p$ and $\tilde\theta^{(p)}_t(m) := \|y_t - y'_t(m)\|_p$, with accumulations $\Theta^{(p)}_{i,t} := \sum_{m=0}^\infty\theta^{(p)}_{i,t}(m)$ and $\tilde\Theta^{(p)}_t := \sum_{m=0}^\infty\tilde\theta^{(p)}_t(m)$. Write compactly $\|\check{z}\|_{n,p} := \max_{1\le i\le k_n, 1\le t\le n}\{\|x_{i,t}\|_p \vee \|y_t\|_p\}$. In order to yield a clear upper bound on $k_n$ that can be easily understood in terms of heterogeneity and dependence decay, assume as before $\theta^{(p)}_{i,t}(m) \vee \tilde\theta^{(p)}_t(m) \le d^{(p)}_{i,t}\psi_{i,m}$,
where $\max_{i\in\mathbb{N}}\psi_{i,m} = O(m^{-\lambda-\iota})$ for some size $\lambda \ge 1$, $\psi_{i,0} = 1$, and logically $d^{(p)}_{i,t} \le K\|\check{z}\|_{n,p}$. Then $\lambda \ge 1$ yields $\max_{1\le i\le k_n, 1\le t\le n}\{\Theta^{(p)}_{i,t} \vee \tilde\Theta^{(p)}_t\} \le K\|\check{z}\|_{n,p}\sum_{m=1}^\infty m^{-\lambda-\iota} = K\|\check{z}\|_{n,p}$. Define $H_i := E\tilde{x}_{i,t}\tilde{x}_{i,t}'$.

Assumption 2.
a. $(x_{i,t}, y_t)$ are covariance stationary, governed by non-degenerate distributions uniformly over $(i, t)$, $L_p$-physical dependent for some $p \ge 4$ with size $\lambda \ge 1$, and $\limsup_{n\to\infty}\|\check{z}\|_{n,p} \le ap^b$ for some finite $a > 0$ and $b \in [0, \infty)$.
b. $E(y_t - \beta_i'\tilde{x}_{i,t})\tilde{x}_{i,t} = 0$ for all $i$ and unique $\beta_i$ in the interior of $\mathcal{B}_i$.
c. $\liminf_{n\to\infty}\inf_{\lambda'\lambda=1}E(\frac{1}{\sqrt{n}}\sum_{t=1}^n\lambda'H_i^{-1}\tilde{x}_{i,t}v_{i,t})^2 > 0$ for each $i$; $\frac{1}{n}\sum_{t=1}^n(x_{i,t} - \bar{x}_{i,n})^2 > 0$ a.s. uniformly; $E(x_{i,t} - Ex_{i,t})^2 > 0$ uniformly.

Remark 4.1. (a) restricts tail thickness, prompting different exponential bounds on $k_n$ based on $b > 0$. Tails are sub-exponential when $b \le 1$, while moments grow too fast for sub-exponentiality when $b > 1$ (cf. Vershynin, 2018, Proposition 2.7.1). (b) is a standard identification condition. (c) implies $(E\tilde{x}_{i,t}\tilde{x}_{i,t}')^{-1}$ and $(\frac{1}{n}\sum_{t=1}^n\tilde{x}_{i,t}\tilde{x}_{i,t}')^{-1}$ exist uniformly in $i$ ($\liminf_{n\to\infty}\min_{1\le i\le k_n}\inf_{\lambda'\lambda=1}\lambda'H_i\lambda > 0$). The first item in (c), non-degeneracy, holds when $\mathcal{Z}_{n,i} = [x_{i,1}v_{i,1}, \dots, x_{i,n}v_{i,n}]'$ satisfies $\liminf_{n\to\infty}\inf_{\lambda'\lambda=1}E(\lambda'\mathcal{Z}_{n,i})^2 > 0$, a conventional positive definiteness property.

We now present the main results. First, a HD first order approximation that exploits Lemma 2.4 and max-WLLN Theorem 2.5. See Hill (2024, Appendix E) for proofs.

Lemma 4.1. Let Assumption 2 and $H_0$ hold, and assume $\{x_{i,t}, y_t\}$ are $L_p$-physical dependent, $p \ge 4$, with size $\lambda \ge 1$. Then for any $\ln(k_n) = o(n^{1/4})$,

$$\Bigg|\max_{1\le i\le k_n}\big|\sqrt{n}\hat\phi_i\big| - \max_{1\le i\le k_n}\Big|\frac{1}{n^{1/2}}\sum_{t=1}^n[0, 1]H_i^{-1}\tilde{x}_{i,t}v_t\Big|\Bigg| \stackrel{p}{\to} 0.$$

For a Gaussian approximation define $\sigma^2_{n,i} := E(\frac{1}{\sqrt{n}}\sum_{t=1}^n[0, 1]H_i^{-1}\tilde{x}_{i,t}v_t)^2$. Conditions imposed in Assumption 2 yield $\sigma^2_{n,i} \in (0, \infty)$ uniformly. The lower bound is (c). The upper bound is due to (a): by the proof of Lemma E.1 in Hill (2024), $\{x_{i,t}v_t\}$ is $L_2$-physical dependent when $\{x_{i,t}, y_t\}$ are $L_4$-physical dependent.
Hence by Lemma 2.4.a, $\max_{1\le i\le k_n}\|1/\sqrt{n}\sum_{t=1}^n x_{i,t}v_t\|_2 \le 2\max_{1\le i\le k_n,1\le t\le n}\{\Theta^{(4)}_{i,t} \vee \tilde\Theta^{(4)}_t\} = O(1)$ under uniform $L_4$-boundedness $\|\check z\|_{n,4} = O(1)$, ruling out unbounded fourth moment heterogeneity.

Now let $\{Z_{n,i} : 1 \le i \le k_n\}_{n\ge1}$ be a Gaussian array, $Z_{n,i} \sim N(0,\sigma^2_{n,i})$, define
$$\rho_n := \sup_{c\ge0}\left|P\left(\max_{1\le i\le k_n}\left|\frac{1}{\sqrt{n}}\sum_{t=1}^n [0,1]H_i^{-1}\tilde x_{i,t}v_t\right| > c\right) - P\left(\max_{1\le i\le k_n}|Z_{n,i}| > c\right)\right|,$$
and recall $b > 0$ in Assumption 2.a.

Lemma 4.2. Let Assumption 2 and $H_0$ hold. Assume $\{x_{i,t}, y_t\}$ are $L_p$-physical dependent, $p \ge 4$, with size $\lambda > 2$. Then for any $\{k_n\}$ satisfying $k_n \to \infty$ and $\ln(k_n) = o(n^{g(b,\lambda)})$ where
$$g(b,\lambda) := \frac{\lambda}{8+2\lambda}\frac{1}{(7/6)\vee(1+b)},$$
we have $\rho_n \to 0$. Moreover $\max_{1\le i\le k_n}|1/\sqrt{n}\sum_{t=1}^n [0,1]H_i^{-1}\tilde x_{i,t}v_t| \overset{d}{\to} \max_{i\in\mathbb{N}}|Z_i|$ for some Gaussian process $\{Z_i\}$, $Z_i \sim N(0,\sigma^2_i)$ with $\sigma^2_i = \lim_{n\to\infty}\sigma^2_{n,i} \in (0,\infty)$.

Remark 4.2. $k_n$ depends on tail conditions, memory decay, and heterogeneity. As $\lambda \searrow 2$ (far from independence) and $b = 4$ (non-sub-exponential tails), $\ln(k_n) = o(n^{1/30})$. Conversely, as $\lambda \to \infty$ (approaching geometric memory/independence) with sub-exponential tails $b = 1/6$, we have $g(b,\lambda) \to \frac{1}{2}\frac{1}{7/6}$, hence $\ln(k_n) = o(n^{3/7})$.

Remark 4.3. The proof exploits HD Gaussian approximation Theorem 3(ii) in Chang et al. (2024). They propose two results: the first, Theorem 3(i), supposedly imposing only their Condition 3 non-degeneracy, and the second, Theorem 3(ii), imposing also their Condition 1 sub-exponential tails. However, the dependence adjusted norms that they exploit to bound $k_n$, based on ideas in Wu and Wu (2016), only make sense when all moments exist, e.g. $\limsup_{n\to\infty}\|\check z\|_{n,p} \le a p^b$ for some $b > 0$. See especially Wu and Wu (2016, Section 2.3, cf. eq. (2.21)).

Lemmas 4.1 and 4.2 imply the main result for the
max-test statistic $\max_{1\le i\le k_n}|\sqrt{n}\hat\theta_{i,n}|$.

Theorem 4.3. Let Assumption 2 and $H_0$ hold. Assume $\{x_{i,t}, y_t\}$ are $L_p$-physical dependent, $p \ge 4$, with size $\lambda > 2$. Then $\max_{1\le i\le k_n}|\sqrt{n}\hat\theta_{i,n}| \overset{d}{\to} \max_{i\in\mathbb{N}}|Z_i|$ for any $\{k_n\}$ satisfying $\ln(k_n) = o(n^{s(b,\lambda)})$ where, by case:

if $b \in (0,1/6]$ then
$$s(b,\lambda) = \begin{cases} \frac{1}{4} & \text{if } \lambda \ge \frac{28}{5} \\ \frac{\lambda}{8+2\lambda}\frac{1}{(7/6)\vee(1+b)} & \text{if } \lambda < \frac{28}{5} \end{cases} \tag{4.2}$$

if $b \in (1/6,1)$ then
$$s(b,\lambda) = \begin{cases} \frac{1}{4} & \text{if } \lambda \ge \frac{4}{\frac{2}{1+b}-1} \\ \frac{\lambda}{8+2\lambda}\frac{1}{(7/6)\vee(1+b)} & \text{if } \lambda < \frac{4}{\frac{2}{1+b}-1} \end{cases}$$

if $b \ge 1$ then
$$s(b,\lambda) = \frac{\lambda}{8+2\lambda}\frac{1}{1+b}.$$

Remark 4.4. If $b < 1/6$ then tails are sub-exponential (cf. Vershynin, 2018, Proposition 2.7.1) and $\ln(k_n) = o(n^{1/4})$ if memory decay is fast enough, $\lambda \ge 28/5$. If, e.g., $b < 1/6$ with slower decay $\lambda = 4$ then $\ln(k_n) = o(n^{3/14})$, a slower rate. In the intermediate range $b \in (1/6,1)$ tails are still sub-exponential, but $\ln(k_n) = o(n^{1/4})$ when $\lambda \ge 4/(\frac{2}{1+b}-1) \searrow 28/5$ as $b \searrow 1/6$: thinner tails allow for slower memory decay. Finally, $b \ge 1$ allows for non-sub-exponential tails, yielding only $\ln(k_n) = o(n^{\frac{\lambda}{8+2\lambda}\frac{1}{1+b}})$. If, e.g., $b = 2$ then $\ln(k_n) = o(n^{\frac{\lambda}{4+\lambda}\frac{1}{6}})$ where $\frac{\lambda}{4+\lambda}\frac{1}{6} \searrow \frac{1}{18}$ as $\lambda \searrow 2$ (far from independence) and $\frac{\lambda}{4+\lambda}\frac{1}{6} \nearrow 1/6$ as $\lambda \to \infty$ (independence/geometric decay). In the hairline case $b = 1$ tails are sub-exponential, and $\ln(k_n) = o(n^{\frac{\lambda}{4+\lambda}\frac{1}{4}})$ where $\frac{\lambda}{4+\lambda}\frac{1}{4} \searrow \frac{1}{12}$ as $\lambda \searrow 2$ and $\frac{\lambda}{4+\lambda}\frac{1}{4} \nearrow 1/4$ as $\lambda \to \infty$.

5 Application #3: testing parametric restrictions

Our final application combines methods in Cattaneo et al. (2018) and Hill (2025b). Consider a triangular array of observations $\{w_{n,t}, x_{n,t}, y_{n,t} : 1 \le t \le n\}_{n\ge1}$ with dependent variable $y_{n,t}$, and covariates $(w_{n,t}, x_{n,t})$ of dimensions $(k_n, k_\theta)$. The model is
$$y_{n,t} = \delta'_{n,0}w_{n,t} + \theta'_{n,0}x_{n,t} + u_{n,t} \tag{5.1}$$
with error term $u_{n,t}$. Let $E(y_{n,t} - \delta'_{n,0}w_{n,t} - \theta'_{n,0}x_{n,t})[w'_{n,t}, x'_{n,t}]' = 0$ for unique $[\delta'_{n,0}, \theta'_{n,0}]'$. The model may be pseudo-true in the sense $P(E_{w_{nt},x_{nt}}(y_{n,t} - \delta'_n w_{n,t} - \theta'_n x_{n,t}) = 0) < 1$ $\forall(\delta_n,\theta_n)$, where, e.g., $w_{nt} := \{w_{n,1},\dots,w_{n,t}\}$.
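The case structure of Theorem 4.3 can be sanity-checked numerically. The following is an illustrative sketch (not from the paper): a small helper computing the exponent $s(b,\lambda)$ as stated above, checked against the worked rates in Remark 4.4; the function name `s_exponent` is ours.

```python
# Illustrative sketch (not from the paper): the dimension-growth exponent
# s(b, lam) of Theorem 4.3, so that ln(k_n) = o(n^{s(b, lam)}).
# b = moment growth parameter, lam = dependence decay "size".

def s_exponent(b, lam):
    """Piecewise rate exponent s(b, lam) from Theorem 4.3 (b > 0, lam > 2)."""
    # generic polynomial-type exponent: lam/(8+2lam) * 1/((7/6) v (1+b))
    base = (lam / (8.0 + 2.0 * lam)) / max(7.0 / 6.0, 1.0 + b)
    if b >= 1.0:
        # non-sub-exponential tails: no 1/4 cap is available
        return (lam / (8.0 + 2.0 * lam)) / (1.0 + b)
    # sub-exponential tails: the exponent caps at 1/4 once memory decay
    # is fast enough (threshold 28/5 if b <= 1/6, else 4/(2/(1+b) - 1))
    threshold = 28.0 / 5.0 if b <= 1.0 / 6.0 else 4.0 / (2.0 / (1.0 + b) - 1.0)
    return 0.25 if lam >= threshold else base

# Remark 4.4 examples: b < 1/6 with lam = 4 gives 3/14;
# lam >= 28/5 restores the 1/4 cap.
print(s_exponent(0.1, 4.0))  # 3/14
print(s_exponent(0.1, 6.0))  # 1/4
```

The two sub-exponential branches agree at the threshold (e.g. $s(1/2, 12) = 1/4$ up to rounding), matching the continuity implicit in Remark 4.4.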
The array representation covers many cases in social sciences and statistics, including (i) linear models with increasing dimension via $k_n$; (ii) models with basis expansions of flexible functional forms, like partially linear models $y_t = g(z_t) + \theta'_0 x_t + u_t$ for some unknown measurable function $g$ and regressor set $z_t$; and (iii) models with many dummy variables, e.g. panel models with multi-way fixed effects. Cf. Cattaneo et al. (2018, Section 3.3).

Cattaneo et al. (2018) partial out the HD $\delta_{n,0}$ in order to estimate the fixed low dimensional $\theta_{n,0}$, and propose HAC methods for robust inference with arbitrary in-group dependence with finite fixed group size. We consider the converse problem in a far broader setting. We test the HD parameter $H_0 : \delta_{n,0} = 0$ vs. $H_1 : \delta_{n,0} \ne 0$ by partialling out $\theta_{n,0}$, but exploit many low dimensional or parsimonious models under $H_0$ as in Hill (2025b) to yield $\hat\delta_{i,n}$. We then use a max-statistic $\max_{1\le i\le k_n}\sqrt{n}|\hat\delta_{i,n}|$ for testing $H_0$. Partialling out is useful when $k_\theta$ is large relative to $n$, or consistency of $\hat\theta_n$ is not guaranteed (e.g. in panel settings with many fixed effects). Although we do not allow for $x_{n,t}$ to be high dimensional, we anticipate the following will extend to that case. The parsimonious approach alleviates the need for regularization and therefore sparsity, as in de-biased Lasso, and is significantly (potentially massively) faster to compute than de-biased Lasso (see Hill, 2025b). Moreover, a
max-statistic sidesteps HAC estimation and therefore inversion of a large dimension matrix, both of which may lead to poor inference. See Hill and Motegi (2020), Hill, Ghysels and Motegi (2020) and Hill (2025b) for demonstrations of asymptotic max-test superiority in models with (potentially very) many parameters.

The partialled-out $\hat\delta_n$ is derived as follows. First, estimate parsimonious models
$$y_{n,t} = \delta^*_{i,n}w_{i,n,t} + \theta^{*\prime}_{i,n}x_{n,t} + e_{i,n,t}, \quad i = 1,\dots,k_n. \tag{5.2}$$
Define $\delta^*_n := [\delta^*_{i,n}]_{i=1}^{k_n}$. By Theorem 2.1 in Hill (2025b), $\delta^*_n = 0$ if and only if $\delta_{n,0} = 0$, hence $\theta^*_n = \theta_{n,0}$ and $e_{i,n,t} = u_{n,t}$ $\forall i$ under $H_0$. Thus, we need only estimate each model in (5.2) to yield some $\hat\delta_n = [\hat\delta_{i,n}]_{i=1}^{k_n}$ and thereby test $H_0$.

Define an $l_2$ orthogonal projection matrix $M_n := I_n - x_n(x'_n x_n)^{-1}x'_n \in \mathbb{R}^{n\times n}$ with identity matrix $I_n$, where $x_n := [x_{n,1},\dots,x_{n,n}]'$. After partialling out based on a projection onto the linear space spanned by $x_{n,t}$, yielding $M_n y_n = \delta^*_{i,n}\hat v_{i,n} + M_n e_{i,n}$ where $\hat v_{i,n} := M_n w_{i,n} \in \mathbb{R}^{n\times1}$, the estimator of $\delta^*_{i,n}$ reduces to $\hat\delta_{i,n} = (\hat v'_{i,n}\hat v_{i,n})^{-1}\hat v'_{i,n}y_n$.

The test statistic is $T_n := \max_{1\le i\le k_n}\sqrt{n}|\hat\delta_{i,n}|$. We assume below $E(\gamma'w_{n,t} + \delta'x_{n,t})^2 > 0$ uniformly in $(n,t,\gamma'\gamma = \delta'\delta = 1)$, hence $\inf_{1\le i\le k_n}\{\hat v'_{i,n}\hat v_{i,n}/n\} > 0$ awp1 (Hill, 2024, Lemma F.3). Thus logically $w_{n,t}$ and $x_{n,t}$ cannot be perfectly linearly related. We assume stochastic components $\{w_{n,t}, x_{n,t}, u_{n,t}\}$ are $\rho_i$-Lipschitz Markov processes in order to focus ideas, implying both $\tau^{(p)}$-mixing and $L_p$-physical dependence. Define
$$\hat Z_{n,i} := \left(\frac{1}{n}\sum_{t=1}^n Ev_{i,n,t}v'_{i,n,t}\right)^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^n v_{i,n,t}u_{n,t} \quad \text{and} \quad \sigma^2_{n,i} := E\hat Z^2_{n,i}.$$

Assumption 3. Let $z_{i,n,t} \in \{w_{i,n,t}, x_{i,n,t}, u_{n,t}\}$.
a. Each $z_{i,n,t} = f_{z_i}(z_{i,n,t-1}) + \epsilon_{i,t}$, for $\rho_{z_i}$-Lipschitz $f_{z_i}(\cdot)$, $\rho_{z_i} \in (0, e^{-a_{z_i}}]$ for some $a_{z_i} > 0$, serially iid $\epsilon_{i,t}$, and $L_p$-bounded $\{\epsilon_{i,t}, z_{i,n,t}\}$ for some $p \ge 4$.
b. $z_{i,n,t}$ are governed by non-degenerate distributions for all $(i,n,t)$, with $\max_{1\le i\le k_n,1\le t\le n}P(|z_{i,n,t}| > z) \le a_{z_i}\exp\{-b_{z_i}z^{\gamma_{z_i}}\}$ $\forall n$ for some $(a_{z_i}, b_{z_i}, \gamma_{z_i}) \in (0,\infty)$.
c. $\liminf_{n\to\infty}\inf_{\lambda'\lambda=1}\{\lambda'x'_n x_n\lambda/n \wedge \lambda'w'_{i,n}w_{i,n}\lambda/n\} > 0$ a.s.; and $E(\gamma'w_{n,t} + \delta'x_{n,t})^2 > 0$ uniformly over $(n,t,\delta'\delta = 1,\gamma'\gamma = 1)$.
d. $\sigma^2_{n,i} \in [K,\infty)$ for some $K > 0$ and each $(i,n)$.

Remark 5.1. (a) implies $z_{i,n,t} = g_{z_i}(\epsilon_{i,t},\epsilon_{i,t-1},\dots)$ for measurable $g_{z_i}$ (e.g. Diaconis and Freedman, 1999), and is geometrically $\tau^{(p)}$-mixing by Example 2, and (therefore) geometrically uniformly $L_p$-physical dependent by Lemma C.4 in Hill (2024) and linkages in Hill (2025a). See also Wu (2005, p. 14152). Thus intertemporal dependence decays geometrically fast. We can easily allow for arbitrary group-wise dependence for finite, heterogeneously sized groups by assuming $z_{i,n,t} = r_{i,n,t} + s_{i,n,t}$, where $\rho_{z_i}$-Lipschitz $r_{i,n,t} = f_{r_i}(r_{i,n,t-1}) + \epsilon_{i,t}$, and $s_{i,n,t}$ is $M_{s_i}$-dependent for finite heterogeneous $M_{s_i}$: $z_{i,n,t}$ is still geometrically $L_p$-physical dependent. Indeed, $M_{s_i}$-dependence can be replaced with arbitrary dependence in arbitrary groups (e.g. $t^i_1,\dots,t^i_{M_{s_i}}$), nesting the Assumption 1 independence setting in Cattaneo et al. (2018). We work under (a) instead to save notation.

Remark 5.2. (b) ensures both a max-LLN and HD central limit theorem apply, and implies $z_{i,n,t}$ are uniformly $L_p$-bounded $\forall p \ge 1$. In (c), uniformly $E(\gamma'w_{n,t} + \delta'x_{n,t})^2 > 0$ ensures positive definiteness $\inf_{\lambda'\lambda=1}\lambda'(1/n\sum_{t=1}^n Ew_{n,t}w'_{n,t})\lambda > 0$ $\forall n$ and $\inf_{\lambda'\lambda=1}\lambda'(1/n\sum_{t=1}^n Ex_{n,t}x'_{n,t})\lambda > 0$ $\forall n$, and rules out deviant cross-correlations, ensuring $\hat v'_{i,n}\hat v_{i,n}/n$ is positive definite awp1. Non-degeneracy (d) is standard: see remarks following Assumptions 1 and 2.

Remark 5.3. Our assumptions differ from Cattaneo et al. (2018, Assumptions 1-3). They impose cross-group independence with finite heterogeneous group sizes, and allow for heteroscedasticity. They need (5.1) to be very close to the true model by several measures (see their Assumption 3; e.g. $E(E_{w_{nt},x_{nt}}u_{n,t})^2 = o(1/n)$). We
allow for nonstationarity, (5.1) need not be the true model, and within-group dependence can be arbitrary as discussed above. Nonstationarity allows for heteroscedasticity and other forms of heterogeneity, and a max-test allows us to by-pass covariance matrix estimation entirely (it is ipso facto heteroscedasticity robust). Of course, they partial out the high dimensional term and estimate one model, while we (i) partial out the fixed (low) dimensional term, (ii) estimate many low dimension models, and therefore (iii) use an entirely different asymptotic theory.

Let $\{Z_{n,i} : 1 \le i \le k_n\}_{n\ge1}$ be Gaussian, $Z_{n,i} \sim N(0,\sigma^2_{n,i})$ with $\sigma^2_{n,i} := E\hat Z^2_{n,i}$, and define
$$\rho_n := \sup_{c\ge0}\left|P\left(\max_{1\le i\le k_n}|\hat Z_{n,i}| > c\right) - P\left(\max_{1\le i\le k_n}|Z_{n,i}| > c\right)\right|.$$
We require a moment growth parameter $b$ developed in Hill (2024, Appendix F), similar to Assumption 2.a. By Lemma F.4, each $z_{n,t}(i,j) \in \{w_{i,n,t}x_{j,n,t} - Ew_{i,n,t}x_{j,n,t},\ x_{i,n,t}x_{j,n,t} - Ex_{i,n,t}x_{j,n,t},\ x_{i,n,t}u_{n,t}\}$ satisfies
$$\sup_{p\ge2}p^{-b}\left\{\sup_{m\ge0}(m+1)\max_{1\le i,j\le k_n,1\le t\le n}\|z_{n,t}(i,j) - z'_{n,t}(i,j)\|_{p/2}\right\} \le K$$
for some $b > 0$ that depends only on the Assumption 3.b tail parameters. If $b \le 1$ then $z_{n,t}(i,j)$ have sub-exponential tails. The following omnibus result characterizes first order and Gaussian approximations, and the max-statistic limit. Max-WLLN Theorem 2.5 is utilized in the proof.

Theorem 5.1. Let Assumption 3 and $H_0$ hold.
a. (Non-Gaussian Approximation). $|\sqrt{n}\max_{1\le i\le k_n}|\hat\delta_{i,n}| - \max_{1\le i\le k_n}|\hat Z_{n,i}|| = o_p(1)$ for any $\{k_n\}$, $\ln(k_n) = o(n^{1/4})$.
b. (Gaussian Approximation). $\rho_n \to 0$ for any $\{k_n\}$, $\ln(k_n) = o(n^{1/[2(1+\varphi)]})$.
c. $T_n \overset{d}{\to} \max_{i\in\mathbb{N}}|Z_i|$ where $Z_i \sim N(0,\lim_{n\to\infty}\sigma^2_{n,i})$ for any $\{k_n\}$ satisfying $\ln(k_n) = o(n^{s(b)})$ where $s(b) = \lim_{\lambda\to\infty}s(b,\lambda)$, $s(b,\lambda)$ is depicted in (4.2), and $b$ is defined above. Thus $s(b) = 1/4$ if $b \in (0,1)$ and $s(b) = 1/[2(1+b)]$ if $b \ge 1$.

6 Conclusion

We present weak and strong laws of large numbers for the maximum sample average $\max_{1\le i\le k_n}|1/n\sum_{t=1}^n x_{i,n,t}|$ of a high dimensional array $\{x_{i,n,t} : 1 \le i \le k_n, 1 \le t \le n\}$. We work under updated $\tau$-mixing and physical dependence properties, while deriving new relational results.
Certain max-LLNs reveal a memory and dimension growth trade-off, depending on nuances of the underlying dependence property. We work with and without cross-coordinate dependence restrictions, where generally cross-coordinate dependence can be wielded to achieve an improvement on $k_n$. The results are applied to three settings: a max-correlation white noise test; correlation screening under dependence and $k_n/n \to \infty$; and a high dimensional regression parameter test under dependence.

As next steps, it would be interesting to (i) extend the results to near epoch dependent [NED] arrays, which are nested under mixingales, or a spatial setting; (ii) study cross-coordinate dependence further in an attempt to yield general results with applications; (iii) extend the results to high dimensional laws of iterated logarithm under dependence; (iv) extend results to uniform laws in high dimension. All such ideas are left for future consideration.

A Appendix: technical proofs

Proof of Lemma 2.1. Under $\tau^{(1)}$, and mixing and tail decay (2.1)-(2.3), we have uniformly over $(i,n,t)$ (Merlevède et al., 2011, Theorem 1),
$$P\left(\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\right| \ge \epsilon\right) \le n\exp\{-\mathcal K_1\epsilon^\gamma n^\gamma\} + \exp\left\{-\frac{\mathcal K_2\epsilon^2 n^2}{1+\mathcal K_3 n}\right\} + \exp\left\{-\frac{\mathcal K_4\epsilon^2 n^2}{n}e^{\mathcal K_5\frac{(\epsilon n)^{\gamma(1-\gamma)}}{[\ln(\epsilon n)]^\gamma}}\right\}, \tag{A.1}$$
for some $\gamma \in (0,1)$. Merlevède et al. (2011) assume $d = \exp\{1\}$ in (2.2), but this can be generalized to any $d > 0$. Their proof, with coupling result Lemma C.2 in Hill (2024), and arguments in Dedecker and
Prieur (2004, Lemma 5) and Merlevède et al. (2011, p. 460), directly imply (A.1) holds under $\tau^{(p)}$. Indeed $\max_{1\le i\le k_n}\tau^{(1)}_{i,n}(m) \le \{\max_{1\le i\le k_n}\tau^{(p)}_{i,n}(m)\}^{1/p} \le a^{1/p}e^{-(b/p)m^{\gamma_1}}$ by Lyapunov's inequality and (2.1). Hence the Merlevède et al. (2011, proof of Theorem 1) arguments go through with $(a,b)$ replaced with $(a^{1/p}, b/p)$. The upper bound in (A.1) is not a function of $i$, hence (2.4). QED.

Proof of Theorem 2.2. Jensen's inequality gives a log-exp bound $\forall\lambda > 0$,
$$E\max_{1\le i\le k_n}\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| \le \frac{1}{\lambda}\ln\left(E\exp\left\{\lambda\max_{1\le i\le k_n}\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\}\right) \le \frac{1}{\lambda}\ln\left(k_n E\exp\left\{\lambda\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\}\right). \tag{A.2}$$
Furthermore
$$E\exp\left\{\lambda\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\} = \int_0^\infty P\left(\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| > \frac{1}{\lambda}\ln(u)\right)du. \tag{A.3}$$
In (A.1), cf. (2.4) in Lemma 2.1, because $\gamma \in (0,1)$ the first term trivially dominates the third, and dominates the second for all $\epsilon \ge 1$ and $n \ge n_0$, for finite $n_0 \ge 1$ depending on $(\mathcal K_1,\mathcal K_2,\mathcal K_3,\gamma)$. Hence for some $\mathcal K$ depending on $(\mathcal K_1,\mathcal K_2,\mathcal K_3,\gamma)$ that may be different in different places,
$$P\left(\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,n,t}\right| \ge \epsilon\right) \le 3n\exp\{-\mathcal K\epsilon^\gamma n^\gamma\} \quad \forall\epsilon \ge 1 \text{ and } n \ge n_0.$$
Moreover, $3n\exp\{-\mathcal K\epsilon^\gamma n^\gamma\} \le \exp\{-(\mathcal K/2)\epsilon^\gamma n^\gamma\}$ $\forall\epsilon \ge 1$, $n \ge n_1$, for finite $n_1 \ge 1$. Therefore, $\forall n \ge n_0\vee n_1$ and any $\lambda > 0$,
$$E\exp\left\{\lambda\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\} \le e + \int_e^\infty P\left(\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| > \frac{1}{\lambda}\ln(u)\right)du \le e + \int_e^\infty\exp\left\{-\frac{\mathcal K n^\gamma}{2\lambda^\gamma}(\ln(u))^\gamma\right\}du$$
$$= e + \frac{1}{\gamma}\int_1^\infty\frac{1}{v^{(\gamma-1)/\gamma}}\exp\left\{v^{1/\gamma} - \frac{\mathcal K n^\gamma}{2\lambda^\gamma}v\right\}dv \le e + \frac{1}{\gamma}\int_1^\infty\frac{1}{v^{(\gamma-1)/\gamma}}\exp\left\{-\left(\frac{\mathcal K n^\gamma}{2\lambda^\gamma}-1\right)v\right\}dv \le e + \frac{1}{\gamma}\int_1^\infty\exp\left\{-\left(\frac{\mathcal K n^\gamma}{2\lambda^\gamma}-1\right)v\right\}dv.$$
The second equality uses a change of variables $v = (\ln(u))^\gamma \in [0,\infty)$, the third inequality uses $\gamma \ge 1$ from (2.3), and the fourth uses $v > 1$. Notice for all $v > 1$, all $n$, some $\tilde{\mathcal K} \in (0,\mathcal K/2)$ and any $\lambda \le (\mathcal K/2 - \tilde{\mathcal K})^{1/\gamma}n$,
$$\exp\left\{-\left(\frac{\mathcal K n^\gamma}{2\lambda^\gamma}-1\right)v\right\} \le \exp\left\{-\frac{\tilde{\mathcal K}n^\gamma}{\lambda^\gamma}v\right\}.$$
Therefore, $\forall n \ge n_0\vee n_1$ and any $\lambda \le (\mathcal K/2 - \tilde{\mathcal K})^{1/\gamma}n$,
$$E\exp\left\{\lambda\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\} \le e + \frac{1}{\gamma}\int_1^\infty\exp\left\{-\frac{\tilde{\mathcal K}n^\gamma}{\lambda^\gamma}v\right\}dv \le e + \frac{\lambda^\gamma}{\gamma\tilde{\mathcal K}n^\gamma}\exp\left\{-\frac{\tilde{\mathcal K}n^\gamma}{\lambda^\gamma}\right\} \le e + \frac{\lambda^\gamma}{\gamma\tilde{\mathcal K}n^\gamma}.$$
Now use (A.2) with $\lambda = \ln(k_n) + \ln(\ln(n)) \le (\mathcal K/2 - \tilde{\mathcal K})^{1/\gamma}n$ and $\ln(k_n) \le \mathcal A n$ for $\mathcal A = (\mathcal K/2 - \tilde{\mathcal K})^{1/\gamma}/2$ to yield
$$E\max_{1\le i\le k_n}\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| \le \frac{1}{\lambda}\left(\ln(k_n) + \ln\left(e + \frac{\lambda^\gamma}{\gamma\tilde{\mathcal K}n^\gamma}\right)\right) = \frac{\ln(k_n) + \ln\left(e + \frac{1}{\gamma\tilde{\mathcal K}}\left(\frac{\ln(k_n)+\ln(\ln(n))}{n}\right)^\gamma\right)}{\ln(k_n) + \ln(\ln(n))} \to 0.$$
Hence $M_n \overset{L_1}{\to} 0$ whenever $k_n \ge 1$ and $\ln(k_n) \le \mathcal A n$.
Finally, the above arguments with $\lambda = \sqrt{n\ln(k_n)}$ and $\ln(k_n) = O(n)$ imply identically
$$P\left(\max_{1\le i\le k_n}\left|\frac{1}{\sqrt{n\ln(k_n)}}\sum_{t=1}^n x_{i,n,t}\right| > c\right) \le \frac{1}{c}\sqrt{\frac{n}{\ln(k_n)}}\frac{1}{\lambda}\left(\ln(k_n) + \ln\left(e + \frac{1}{\gamma\tilde{\mathcal K}}\frac{\lambda^\gamma}{n^\gamma}\right)\right)$$
$$= \frac{1}{c}\frac{1}{\ln(k_n)}\left(\ln(k_n) + \ln\left(e + \frac{1}{\gamma\tilde{\mathcal K}}\left(\sqrt{\frac{\ln(k_n)}{n}}\right)^\gamma\right)\right) = \frac{1}{c}\left(1 + \frac{1}{\ln(k_n)}\ln(e + O(1))\right) = O(1) \quad \forall c > 0,$$
completing the proof. QED.

Proof of Lemma 2.4. Write $Z_{i,l} := 1/\sqrt{n}\sum_{t=1}^l x_{i,n,t}$.

Claim (a). For similar arguments see Jirak and Köstenberger (2024, Lemma 21) when $p > 1$ and Wu (2005, Theorem 2(i)) when $p \ge 2$. Recall $\xi_{i,t} := \{\epsilon_{i,t},\epsilon_{i,t-1},\dots\}$. Define $M_{r,m} := \sum_{l=1}^m y^{(r)}_{i,n,l}$ where $y^{(r)}_{i,n,l} := E(x_{i,n,l}|\xi_{i,l-r}) - E(x_{i,n,l}|\xi_{i,l-r-1})$. Then $\sum_{t=1}^n x_{i,n,t} = \sum_{r=0}^\infty M_{r,n}$, hence by triangle and Minkowski inequalities, and Doob's martingale inequality when $p > 1$ (e.g. Hall and Heyde, 1980, Theorem 2.2),
$$\left\|\max_{1\le l\le n}\left|\sum_{t=1}^l x_{i,n,t}\right|\right\|_p \le \sum_{r=0}^\infty\left\|\max_{1\le l\le n}\left|\sum_{t=1}^l y^{(r)}_{i,n,t}\right|\right\|_p \le \frac{p}{p-1}\sum_{r=0}^\infty\left\|\sum_{t=1}^n y^{(r)}_{i,n,t}\right\|_p. \tag{A.4}$$
Define $\mathcal A^{(r)}_{i,n,j} := \sigma(y^{(r)}_{i,n,1},\dots,y^{(r)}_{i,n,j})$, hence $\mathcal A^{(r)}_{i,n,j} = \sigma(\xi_{i,j-r})$. Define Burkholder (1973)'s
constant $C_p := 18p^{3/2}/(p-1)^{1/2}$, and $C'_p := pC_p/(p-1)$.

Case 1 ($p \in (1,2)$). Apply Lemma 2.2 in Li (2003) to $\|\sum_{l=1}^n y^{(r)}_{i,n,l}\|_p$, cf. Wu and Shao (2007, Lemma 1), to yield
$$\left\|\sum_{t=1}^n y^{(r)}_{i,n,t}\right\|_p \le C_p\left(\sum_{t=1}^n\left\|y^{(r)}_{i,n,t}\right\|_p^p\right)^{1/p} \le C_p n^{1/p}\max_{1\le t\le n}\left\|y^{(r)}_{i,n,t}\right\|_p.$$
Hence $\|\max_{1\le t\le n}|Z_{i,t}|\|_p \le C'_p n^{1/p-1/2}\max_{1\le t\le n}\sum_{r=0}^\infty\|y^{(r)}_{i,n,t}\|_p$. By definition $\|y^{(r)}_{i,n,t}\|_p = \|E(x_{i,n,t}|\xi_{i,t-r}) - E(x_{i,n,t}|\xi_{i,t-r-1})\|_p =: \rho^{(p)}_{i,n,t}(r)$, thus
$$\left\|\max_{1\le t\le n}|Z_{i,t}|\right\|_p \le C'_p n^{1/p-1/2}\sum_{m=0}^\infty\max_{1\le t\le n}\rho^{(p)}_{i,n,t}(m).$$
Hence $\|\max_{1\le t\le n}|Z_{i,t}|\|_p \le C'_p n^{1/p-1/2}\max_{1\le t\le n}\Theta^{(p)}_{i,n,t}$ by Theorem 2.1 in Hill (2025a).

Case 2 ($p \ge 2$). The above argument exploits Burkholder's inequality and carries over to any $p > 1$ (see Jirak and Köstenberger, 2024, Lemma 21). We get a better constant, however, when $p \ge 2$, based on arguments in Dedecker and Doukhan (2003), cf. Rio (2017, Chapt. 2.5). Apply Proposition 4 in Dedecker and Doukhan (2003) to $\|\sum_{l=1}^n y^{(r)}_{i,n,l}\|_p$ in (A.4) to yield
$$\left\|\sum_{l=1}^n y^{(r)}_{i,n,l}\right\|_p \le \left(2p\sum_{j=1}^n\max_{j\le l\le n}\left\|y^{(r)}_{i,n,j}\sum_{m=j}^l E\left(y^{(r)}_{i,n,m}|\mathcal A^{(r)}_{i,n,j}\right)\right\|_{p/2}\right)^{1/2} = \left(2p\sum_{j=1}^n\left\|y^{(r)}_{i,n,j}E\left(y^{(r)}_{i,n,j}|\mathcal A^{(r)}_{i,n,j}\right)\right\|_{p/2}\right)^{1/2} \le \sqrt{2p}\sqrt{n}\max_{1\le t\le n}\left\|y^{(r)}_{i,n,t}\right\|_p. \tag{A.5}$$
The equality follows from the martingale difference property of $y^{(r)}_{i,n,m}$, measurability, and iterated expectations, since
$$E\left(y^{(r)}_{i,n,m}|\mathcal A^{(r)}_{i,n,j}\right) = E\left[E\left(y^{(r)}_{i,n,m}|\sigma(\xi_{i,j-r})\right)|\mathcal A^{(r)}_{i,n,j}\right] = E\left[E\left\{E(x_{i,n,m}|\xi_{i,m-r}) - E(x_{i,n,m}|\xi_{i,m-r-1})|\sigma(\xi_{i,j-r})\right\}|\mathcal A^{(r)}_{i,n,j}\right] = 0 \quad \forall m \ge j+1.$$
The second inequality uses Cauchy-Schwartz and Lyapunov inequalities. Now use (A.4) and repeat the argument in Case 1 to complete the proof.

Claim (b). Recall $\Theta^{(p)}_i := \limsup_{n\to\infty}\max_{1\le t\le n}\Theta^{(p)}_{i,n,t}$ and $\gamma_i(\alpha) := \limsup_{p\to\infty}p^{1/2-1/\alpha}\Theta^{(p)}_i$, and by assumption $(\Theta^{(p)}_i,\gamma_i(\alpha)) \in (0,\infty)$ uniformly in $i$ for some $1 < \alpha \le 2$. Define $\bar\gamma := \max_{i\in\mathbb{N}}\gamma_i(\alpha) > 0$ and $\bar\lambda_0 := (e\alpha\bar\gamma^\alpha)^{-1}2^{-\alpha/2}$.

By Stirling's formula and $\max_{i\in\mathbb{N}}\gamma_i(\alpha) < \infty$, for any $0 < \lambda \le \bar\lambda_0$ (Wu, 2005, proof of Theorem 2(ii)),
$$\limsup_{p\to\infty}\frac{\lambda\{\sqrt{2\alpha p}\max_{i\in\mathbb{N}}\Theta^{(\alpha p)}_i\}^\alpha}{(p!)^{1/p}} = \limsup_{p\to\infty}\frac{\lambda\{\sqrt{2\alpha p}\max_{i\in\mathbb{N}}\Theta^{(\alpha p)}_i\}^\alpha}{p/e} = \limsup_{p\to\infty}\lambda e\alpha\left\{\sqrt{2}(\alpha p)^{1/2-1/\alpha}\max_{i\in\mathbb{N}}\Theta^{(\alpha p)}_i\right\}^\alpha = \lambda e\alpha 2^{\alpha/2}\gamma_i(\alpha)^\alpha < 1.$$
Thus from (a) and uniform boundedness
$$\max_{i\in\mathbb{N}}\sum_{p=[2/\alpha]+1}^\infty\frac{E(\lambda\max_{1\le l\le n}|Z_{i,l}|^\alpha)^p}{p!} \le \sum_{p=[2/\alpha]+1}^\infty\frac{\lambda^p\left\{\sqrt{2\alpha p}\frac{\alpha p}{\alpha p-1}\max_{i\in\mathbb{N}}\Theta^{(\alpha p)}_i\right\}^{\alpha p}}{p!} = O(1).$$
Hence by the Maclaurin series $\max_{i\in\mathbb{N}}E\exp\{\lambda\max_{1\le l\le n}|Z_{i,l}|^\alpha\} < \infty$. The proof now mimics Wu (2005, proof of Theorem 2(ii)) by choosing any $\lambda \in (0,\bar\lambda_0)$. QED.

Proof of Theorem 2.5.

Claim (a). Lemma 2.4.a and (2.6) yield for $p > 1$ and some $\mathcal B_p \in (0,\infty)$,
$$EM_n \le k_n^{1/p}\max_{1\le i\le k_n}\left\|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right\|_p \le \mathcal B_p k_n^{1/p}\frac{1}{n^{1-1/p'}}\max_{1\le i\le k_n,1\le t\le n}\Theta^{(p)}_{i,n,t}.$$
Therefore $\sqrt{n}M_n = O_p(k_n^{1/p}n^{1/p'-1/2}\max_{1\le i\le k_n,1\le t\le n}\Theta^{(p)}_{i,n,t})$. Thus $M_n \overset{p}{\to} 0$ if $k_n = o(n^{p(1-1/p')}/\max_{1\le i\le k_n,1\le t\le n}\{\Theta^{(p)}_{i,n,t}\}^p)$.

Claim (b). Use Lemma 2.4.b with $\mathcal C = 1$ (to reduce notation) together with (A.2) and (A.3). First, for some $1 < \alpha \le 2$ and any $\lambda > 0$, and by a change of variables $v = (\ln(u))^\alpha$,
$$E\exp\left\{\lambda\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\} \le e + \int_e^\infty P\left(\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| > \frac{1}{\lambda}\ln(u)\right)du \le e + \int_e^\infty\exp\left\{-\mathcal K\left(\frac{n^{1/2}}{\lambda}\ln(u)\right)^\alpha\right\}du$$
$$= e + \frac{1}{\alpha}\int_1^\infty\frac{1}{v^{1-1/\alpha}}\exp\left\{v^{1/\alpha} - \mathcal K\frac{n^{\alpha/2}}{\lambda^\alpha}v\right\}dv \le e + \int_1^\infty\exp\left\{v - \mathcal K\frac{n^{\alpha/2}}{\lambda^\alpha}v\right\}dv,$$
where the last inequality uses $(\alpha, v) \ge 1$. Hence for any $\lambda < \mathcal K^{1/\alpha}\sqrt{n}$,
$$\max_{1\le i\le k_n}E\exp\left\{\lambda\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right|\right\} \le e + \int_1^\infty\exp\left\{-\left(\frac{\mathcal K n^{\alpha/2}}{\lambda^\alpha}-1\right)v\right\}dv \le e + \frac{\exp\left\{-\left(\frac{\mathcal K n^{\alpha/2}}{\lambda^\alpha}-1\right)\right\}}{\frac{\mathcal K n^{\alpha/2}}{\lambda^\alpha}-1} \le e + \frac{1}{\frac{\mathcal K n^{\alpha/2}}{\lambda^\alpha}-1}. \tag{A.6}$$
Now use (A.2) and (A.6) to deduce for $\lambda = \ln(k_n) + \ln\ln(n)$ and $\ln(k_n) \le \mathcal K^{1/\alpha}\sqrt{n}/2$,
$$E\max_{1\le i\le k_n}\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| \le \frac{1}{\lambda}\ln\left(k_n\left(e + \frac{1}{\mathcal K n^{\alpha/2}/\lambda^\alpha - 1}\right)\right) = \frac{\ln(k_n)}{\ln(k_n) + \ln\ln(n)} + \frac{1}{\ln(k_n) + \ln\ln(n)}\ln\left(e + \frac{1}{\mathcal K\left(\frac{\sqrt{n}}{\ln(k_n)+\ln\ln(n)}\right)^\alpha - 1}\right) = o(1).$$
Finally, set $\lambda = \xi\sqrt{n}$ for any $\xi \in (0,\mathcal K^{1/\alpha})$ to yield
$$E\max_{1\le i\le k_n}\left|\frac{1}{n}\sum_{t=1}^n x_{i,n,t}\right| \le \frac{\ln(k_n)}{\xi\sqrt{n}} + \frac{1}{\xi\sqrt{n}}\ln\left(e + \frac{1}{\mathcal K(\sqrt{n}/[\xi\sqrt{n}])^\alpha - 1}\right) = \frac{\ln(k_n)}{\xi\sqrt{n}} + \frac{1}{\xi\sqrt{n}}\ln\left(e + \frac{1}{\mathcal K/\xi^\alpha - 1}\right) = \frac{\ln(k_n)}{\xi\sqrt{n}} + O\left(\frac{1}{\sqrt{n}}\right),$$
hence $\max_{1\le i\le k_n}|1/\sqrt{n}\sum_{t=1}^n x_{i,n,t}| = O_p(\ln(k_n))$ by Markov's inequality. QED.

Proof of Theorem 2.6. Write $\mathcal X_{i,l} := \sum_{t=1}^l x_{i,t}/t^b$ for any $b \in (1/p',1]$, $p' := p\wedge2$. Write compactly $\bar d^{(p)}_n := \max_{1\le i\le k_n,1\le t\le n}\|x_{i,t}\|_p$, hence with any $\tilde n$ we have $\bar d^{(p)}_{\tilde n} = \max_{1\le i\le k_{\tilde n},1\le t\le\tilde n}\|x_{i,t}\|_p$.

Claim (a). We prove the claim after we first prove
$$\max_{1\le i\le k_n,1\le l\le n}|\mathcal X_{i,l}| = o_p\left(k_n^{1/p}\bar d^{(p)}_n\right). \tag{A.7}$$

Step 1 ((A.7)). Recall $C_p := 18p^{3/2}/(p-1)^{1/2}$. Use the proof of Lemma 2.4.a with $\theta^{(p)}_{i,t}(m) \le d^{(p)}_{i,t}\psi_{i,m}$ and $\max_{i\in\mathbb{N}}\psi_{i,m} = O(m^{-1-\iota})$ to deduce for some $b > 1/p'$ and $p > 1$
$$\left\|\max_{1\le l\le n}\left|\sum_{t=1}^l\frac{x_{i,t}}{t^b}\right|\right\|_p \le \mathcal K_p\left(\sum_{t=1}^n\left\{\frac{d^{(p)}_{i,t}}{t^b}\right\}^{p'}\right)^{1/p'}, \tag{A.8}$$
where $\mathcal K_p := \mathcal B_p\sum_{m=0}^\infty\max_{1\le i\le k_n}\psi_{i,m} < \infty$ with $\mathcal B_p = 36\sqrt{2p}\,C_p$ if $p \in (1,2)$, or $\mathcal B_p = 2^{3/2}\sqrt{p}$ if $p \ge 2$. Use the same argument with triangle and Minkowski inequalities, and $a\vee b \le a+b$ $\forall a,b \ge 0$, to deduce for any integers $n > m > 0$,
$$\max_{1\le i\le k_n}\left\|\max_{1\le l\le n}|\mathcal X_{i,l}|\right\|_p - \max_{1\le i\le k_n}\left\|\max_{1\le l\le m}|\mathcal X_{i,l}|\right\|_p \le \max_{1\le i\le k_n}\left\|\max_{1\le l\le n}|\mathcal X_{i,l}| - \max_{1\le l\le m}|\mathcal X_{i,l}|\right\|_p$$
$$= \max_{1\le i\le k_n}\left\|\left(\max_{1\le l\le m}|\mathcal X_{i,l}|\right)\vee\left(\max_{m+1\le l\le n}|\mathcal X_{i,l}|\right) - \max_{1\le l\le m}|\mathcal X_{i,l}|\right\|_p \le \max_{1\le i\le k_n}\left\|\max_{m+1\le l\le n}|\mathcal X_{i,l}|\right\|_p$$
$$\le \mathcal K_p\left(\sum_{t=m+1}^n\max_{1\le i\le k_n}\left\{\frac{d^{(p)}_{i,t}}{t^b}\right\}^{p'}\right)^{1/p'} \le \mathcal K_p\bar d^{(p)}_n\left(\sum_{t=m+1}^n\frac{1}{t^{bp'}}\right)^{1/p'}.$$
Since $bp' > 1$ it follows $\{\max_{1\le i\le k_n}\|\max_{1\le l\le n}|\mathcal X_{i,l}|\|_p/[\mathcal K_p\bar d^{(p)}_n]\}_{n\ge1}$ is Cauchy, hence $\max_{1\le i\le k_n}\|\max_{1\le l\le n}|\mathcal X_{i,l}|\|_p = o(\mathcal K_p\bar d^{(p)}_n)$. Therefore, by Minkowski's inequality
$$\left\|\max_{1\le i\le k_n,1\le l\le n}|\mathcal X_{i,l}|\right\|_p \le k_n^{1/p}\max_{1\le i\le k_n}\left\|\max_{1\le l\le n}|\mathcal X_{i,l}|\right\|_p = o\left(k_n^{1/p}\mathcal K_p\bar d^{(p)}_n\right).$$
Now invoke Markov's inequality and $\mathcal K_p < \infty$ to conclude (A.7).

Step 2. We expand arguments in Meng and Lin (2009, p. 1544) to a high dimensional setting. By Step 1 $\max_{1\le i\le k_n,1\le l\le n}|\mathcal X_{i,l}|/[k_n^{1/p}\bar d^{(p)}_n] \overset{p}{\to} 0$, hence there exists a sequence of positive integers $\{n_r\}_{r\in\mathbb{N}}$ satisfying
$$\frac{\max_{1\le i\le k_{n_r}}|\mathcal X_{i,n_r}|}{k_{n_r}^{1/p}\bar d^{(p)}_{n_r}} \overset{a.s.}{\to} 0 \text{ as } r\to\infty. \tag{A.9}$$
Furthermore, with $\mathcal D_n(p) := \sum_{s=1}^\infty\max_{1\le i\le k_n}\{d^{(p)}_{i,s}/s^b\}^{p'} < \infty$ by supposition for some $b > 1/p'$, arguments in Step 1 yield for any $\varepsilon > 0$
$$E\max_{n_r<l\le n_{r+1}}\left|\sum_{s=n_r+1}^l\frac{x_{i,s}}{s^b}\right|^p \le \mathcal K^p_p\left(\sum_{s=n_r+1}^l\left\{\frac{d^{(p)}_{i,s}}{s^b}\right\}^{p'}\right)^{p/p'} = \mathcal K^p_p\mathcal D_n(p)^{p/p'}\left(\frac{1}{\mathcal D_n(p)}\sum_{s=n_r+1}^l\left\{\frac{d^{(p)}_{i,s}}{s^b}\right\}^{p'}\right)^{p/(p\wedge2)}$$
$$\le \mathcal K^p_p\mathcal D_n(p)^{p/p'}\frac{1}{\mathcal D_n(p)}\sum_{s=n_r+1}^l\left\{\frac{d^{(p)}_{i,s}}{s^b}\right\}^{p'} = \tilde{\mathcal K}_p\sum_{s=n_r+1}^l\left\{\frac{d^{(p)}_{i,s}}{s^b}\right\}^{p'}, \text{ say.}$$
The second inequality uses $\sum_{s=n_r+1}^l\{d^{(p)}_{i,s}/s^b\}^{p'} \le \mathcal D_n(p)$ and $p/(p\wedge2) \ge 1$. Thus
$$\sum_{r=1}^\infty P\left(\max_{n_r<l\le n_{r+1}}|\mathcal X_{i,l} - \mathcal X_{i,n_r}| > \varepsilon\right) \le \frac{1}{\varepsilon^p}\sum_{r=1}^\infty E\max_{n_r<l\le n_{r+1}}|\mathcal X_{i,l} - \mathcal X_{i,n_r}|^p = \frac{1}{\varepsilon^p}\sum_{r=1}^\infty E\max_{n_r<l\le n_{r+1}}\left|\sum_{s=n_r+1}^l\frac{x_{i,s}}{s^b}\right|^p$$
$$\le \frac{\tilde{\mathcal K}_p}{\varepsilon^p}\sum_{r=1}^\infty\sum_{s=n_r+1}^{n_{r+1}}\max_{1\le i\le k_n}\left\{\frac{d^{(p)}_{i,s}}{s^b}\right\}^{p'} \le \frac{\tilde{\mathcal K}_p}{\varepsilon^p}\sum_{s=1}^\infty\max_{1\le i\le k_n}\left\{\frac{d^{(p)}_{i,s}}{s^b}\right\}^{p'} < \infty.$$
Therefore by the Borel-Cantelli lemma
$$\max_{n_r<l\le n_{r+1}}|\mathcal X_{i,l} - \mathcal X_{i,n_r}| \overset{a.s.}{\to} 0 \text{ as } r\to\infty. \tag{A.10}$$
Combine (A.9) and (A.10) to deduce $\max_{1\le i\le k_n}|\sum_{t=1}^n x_{i,t}/t^b|/(k_n^{1/p}\bar d^{(p)}_n) \overset{a.s.}{\to} 0$, hence by Kronecker's lemma $\max_{1\le i\le k_n}|1/n^b\sum_{t=1}^n x_{i,t}|/(k_n^{1/p}\bar d^{(p)}_n) \overset{a.s.}{\to} 0$. Now deduce
$$\max_{1\le i\le k_n}\left|\frac{1}{n}\sum_{t=1}^n x_{i,t}\right| = o_{a.s.}\left(\frac{k_n^{1/p}\bar d^{(p)}_n}{n^{1-b}}\right) \overset{a.s.}{\to} 0 \text{ if } k_n = o\left(\frac{n^{p(1-b)}}{\{\bar d^{(p)}_n\}^p}\right). \tag{A.11}$$

Claim (b). Write $\bar d^{(p)} := \limsup_{n\to\infty}\max_{1\le i\le k_n,1\le t\le n}\{d^{(p)}_{i,t}\} < \infty$, and recall $\mathring\gamma_i(\alpha) := \limsup_{p\to\infty}p^{1/2-1/\alpha}\bar d^{(\alpha p)}\sum_{m=0}^\infty\psi_{i,m}$. Define $\mathring\gamma_i(\alpha,b) := \mathring\gamma_i(\alpha)\times(\sum_{l=1}^\infty 1/l^{2b})^{1/2}$ for any $b \in (1/2,1)$. Step 1 proves for some $\mathcal C > 0$ and any $\alpha \in (1,2]$ such
that $\max_{i\in\mathbb{N}}\mathring\gamma_i(\alpha) < \infty$,
$$\max_{1\le i\le k_n}P\left(\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,t}\right| > u\right) \le 2\exp\left\{-\mathcal C n^{\alpha(1-b)}u^\alpha\right\}. \tag{A.12}$$
Step 2 proves for some $\mathcal C > 0$, any $\xi \in (0,\mathcal C)$, and any positive $\lambda < (\mathcal C-\xi)n^{\alpha(1-b)}$,
$$\max_{1\le i\le k_n}E\left[\exp\left\{\lambda\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,t}\right|\right\}\right] \le e + \frac{2\lambda}{\xi n^{\alpha(1-b)}}. \tag{A.13}$$
We then prove the claim in Step 3.

Step 1 ((A.12)). By arguments in the proofs of (a) and Lemma 2.5.a it can be shown that when $p > [2/\alpha]$, then for any $b \in (1/2,1)$ and any $\alpha \in (1,2]$
$$E\left(\lambda\max_{1\le l\le n}\left|\sum_{t=1}^l\frac{x_{i,n,t}}{t^b}\right|^\alpha\right)^p \le \lambda^p\left\{2^{3/2}\sqrt{\alpha p}\sum_{m=0}^\infty\psi_{i,m}\bar d^{(\alpha p)}_n\left(\sum_{t=1}^n\frac{1}{t^{2b}}\right)^{1/2}\right\}^{\alpha p}.$$
Define $\mathring\gamma(\alpha,b) := \max_{i\in\mathbb{N}}\mathring\gamma_i(\alpha,b) > 0$. By Stirling's formula, for any $0 < \lambda \le \mathring\lambda_0 := (2^{3\alpha/2}\alpha e)^{-1}\mathring\gamma(\alpha,b)^{-\alpha}$,
$$\limsup_{p\to\infty}\frac{\lambda}{(p!)^{1/p}}\left\{2^{3/2}\sqrt{\alpha p}\,\bar d^{(\alpha p)}_n\sum_{m=0}^\infty\psi_{i,m}\left(\sum_{t=1}^n\frac{1}{t^{2b}}\right)^{1/2}\right\}^\alpha = 2^{3\alpha/2}\lambda\alpha e\limsup_{p\to\infty}\left\{(\alpha p)^{1/2-1/\alpha}\bar d^{(\alpha p)}_n\sum_{m=0}^\infty\psi_{i,m}\left(\sum_{t=1}^n\frac{1}{t^{2b}}\right)^{1/2}\right\}^\alpha = 2^{3\alpha/2}\lambda\alpha e\,\mathring\gamma(\alpha,b)^\alpha < 1.$$
Therefore, for any $0 < \lambda \le \mathring\lambda_0$,
$$\max_{i\in\mathbb{N}}\sum_{p=[2/\alpha]+1}^\infty\frac{E(\lambda\max_{1\le l\le n}|\mathcal X_{i,l}|^\alpha)^p}{p!} \le \sum_{p=[2/\alpha]+1}^\infty\frac{\lambda^p\left\{2^{3/2}\sqrt{\alpha p}\sum_{m=0}^\infty\psi_{i,m}\bar d^{(\alpha p)}_n\left(\sum_{t=1}^n\frac{1}{t^{2b}}\right)^{1/2}\right\}^{\alpha p}}{p!} < \infty. \tag{A.14}$$
A Taylor expansion thus yields $\limsup_{n\to\infty}\max_{1\le i\le k_n}E[\exp\{\mathring\lambda_0\max_{1\le l\le n}|\mathcal X_{i,l}|^\alpha\}] < \infty$.

Next, $\max_{1\le l\le n}|\mathcal X_{i,l}|$ is Cauchy as shown under (a). Indeed, (A.8) and arguments leading to (A.14) imply for any integers $n > m > 0$,
$$\max_{1\le i\le k_n}\sum_{p=[2/\alpha]+1}^\infty\frac{E(\lambda|\max_{1\le l\le n}|\mathcal X_{i,l}|^\alpha - \max_{1\le l\le m}|\mathcal X_{i,l}|^\alpha|)^p}{p!} \le \max_{1\le i\le k_n}\sum_{p=[2/\alpha]+1}^\infty\frac{E(\lambda\max_{m+1\le l\le n}|\mathcal X_{i,l}|^\alpha)^p}{p!}$$
$$\le \sum_{p=[2/\alpha]+1}^\infty\frac{\lambda^p\left\{2^{3/2}\sqrt{\alpha p}\max_{1\le i\le k_n}\sum_{m=0}^\infty\psi_{i,m}\right\}^{\alpha p}}{p!}\left(\sum_{t=m+1}^n\max_{1\le i\le k_n}\left\{\frac{d^{(\alpha p)}_{i,t}}{t^b}\right\}^2\right)^{\alpha p/2} \le \sum_{p=[2/\alpha]+1}^\infty\frac{\lambda^p\left\{2^{3/2}\sqrt{\alpha p}\max_{1\le i\le k_n}\sum_{m=0}^\infty\psi_{i,m}\bar d^{(\alpha p)}_n\right\}^{\alpha p}}{p!}\left(\sum_{t=m+1}^n\frac{1}{t^{2b}}\right)^{\alpha p/2}.$$
Hence by Kronecker's lemma and arguments above
$$\max_{1\le i\le k_n}E\left[\exp\left\{\mathring\lambda_0\max_{1\le l\le n}\left|\frac{1}{n^b}\sum_{t=1}^l x_{i,t}\right|^\alpha\right\}\right] \to 1.$$
This proves (A.12) by a change of variables since, by Chernoff's inequality with $\mathcal C := \mathring\lambda_0$, some $n_0$ and all $n \ge n_0$,
$$\max_{1\le i\le k_n}P\left(\max_{1\le l\le n}\left|\frac{1}{n^b}\sum_{t=1}^l x_{i,t}\right| > u\right) \le 2\exp\{-\mathcal C u^\alpha\}.$$

Step 2 ((A.13)). Use (A.12), $\alpha > 1$ and a change of variables to deduce for any $\xi \in (0,\mathcal C)$ and any $\lambda < (\mathcal C-\xi)n^{\alpha(1-b)}$,
$$E\left[\exp\left\{\lambda\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,t}\right|\right\}\right] \le e + \int_e^\infty P\left(\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,t}\right| > \frac{1}{\lambda}\ln(u)\right)du = e + \lambda\int_1^\infty P\left(\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,t}\right| > v\right)$$
$$\times\exp\{\lambda v\}dv \le e + 2\lambda\int_1^\infty\exp\left\{\lambda v - \mathcal C n^{\alpha(1-b)}v^\alpha\right\}dv \le e + 2\lambda\int_1^\infty\exp\left\{-\xi n^{\alpha(1-b)}v\right\}dv \le e + \frac{2\lambda}{\xi n^{\alpha(1-b)}}.$$

Step 3. By (A.13), Jensen's inequality and the usual log-exp bound, for $\lambda = \omega n^{\alpha(1-b)}$ and any $\omega \in (0,\mathcal C-\xi)$,
$$E\left(\max_{1\le i\le k_n}\max_{1\le l\le n}\left|\frac{1}{n}\sum_{t=1}^l x_{i,t}\right|\right) \le \frac{1}{\lambda}\ln\left(k_n\left(e + \frac{2\lambda}{\xi n^{\alpha(1-b)}}\right)\right) \le \frac{\ln(k_n)}{\omega n^{\alpha(1-b)}} + \frac{\ln(e + 2\omega/\xi)}{\omega n^{\alpha(1-b)}}.$$
Since $b \in (1/2,1)$ is arbitrary, put $b = 1/2 + \iota$ for infinitesimal $\iota > 0$. Thus if
$$\ln(k_n) = o\left(n^{\alpha/2-\iota}\right) \tag{A.15}$$
then $E\max_{1\le i\le k_n}\max_{1\le l\le n}|1/n\sum_{t=1}^l x_{i,t}| \to 0$. Hence under (A.15) there exists a sequence of positive integers $\{n_r\}_{r\in\mathbb{N}}$ satisfying
$$\max_{1\le i\le k_{n_r}}\left|\frac{1}{n_r}\sum_{t=1}^{n_r}x_{i,t}\right| \overset{a.s.}{\to} 0 \text{ as } r\to\infty. \tag{A.16}$$
Moreover, the same argument yielding (A.10) implies
$$\max_{n_r<l\le n_{r+1}}\left|\frac{1}{l}\sum_{t=1}^l x_{i,t} - \frac{1}{n_r}\sum_{t=1}^{n_r}x_{i,t}\right| \overset{a.s.}{\to} 0 \text{ as } r\to\infty. \tag{A.17}$$
Therefore, if $k_n$ satisfies (A.15) then combining (A.16) and (A.17) yields, as claimed, $\max_{1\le i\le k_n}|1/n\sum_{t=1}^n x_{i,t}| \overset{a.s.}{\to} 0$, which completes the proof. QED.

Proof of Theorem 2.8. We borrow notation and arguments from the proofs of Theorems 2.6.a and 2.7. Recall $\bar d^{(p)}_n := \max_{1\le i\le k_n,1\le t\le n}\{d^{(p)}_{i,t}\}$. First, $\max_{1\le i\le k_n}\|\max_{1\le l\le n}|\sum_{t=1}^l x_{i,t}/t^b|\|_p = o(\mathcal K_p\bar d^{(p)}_n)$ for some $b \in (1/p',1]$. Moreover, $\{\max_{1\le l\le n}|\sum_{t=1}^l x_{i,t}/t^b|, \mathcal F_{i,n}\}$ forms a (positive) submartingale under the martingale
supposition. Apply Doob's inequality to yield
$$\left\|\max_{1\le i\le k_n,1\le l\le n}\left|\sum_{t=1}^l\frac{x_{i,t}}{t^b}\right|\right\|_p \le \frac{p}{p-1}\left\|\max_{1\le l\le n}\left|\sum_{t=1}^l\frac{x_{k_n,t}}{t^b}\right|\right\|_p = o\left(\mathcal K_p\bar d^{(p)}_n\right)$$
for some $p > 1$. Thus $\max_{1\le i\le k_n,1\le l\le n}|\sum_{t=1}^l x_{i,t}/t^b| = o_p(\bar d^{(p)}_n)$. This implies $\max_{1\le i\le k_n}|\sum_{t=1}^{n_r}x_{i,t}/t^b|/\bar d^{(p)}_{n_r} \overset{a.s.}{\to} 0$ as $r\to\infty$ for some sequence of positive integers $\{n_r\}_{r\in\mathbb{N}}$. Now use (A.10) and Kronecker's lemma to deduce $\max_{1\le i\le k_n}|1/n^b\sum_{t=1}^n x_{i,t}|/\bar d^{(p)}_n \overset{a.s.}{\to} 0$, hence $\max_{1\le i\le k_n}|1/n\sum_{t=1}^n x_{i,t}| \overset{a.s.}{\to} 0$ if $\bar d^{(p)}_n = o(n^{1-b})$. Finally, $\Theta^{(p)}_{i,t} = \sum_{m=0}^\infty\theta^{(p)}_{i,t}(m) \le d^{(p)}_{i,n,t}\sum_{m=0}^\infty\psi_{i,m} \le Kd^{(p)}_{i,n,t} \le K\|x_{i,t}\|_p$, hence $\max_{1\le i\le k_n}|1/n\sum_{t=1}^n x_{i,t}| \overset{a.s.}{\to} 0$ if $\max_{1\le i\le k_n,1\le t\le n}\Theta^{(p)}_{i,t} = o(n^{1-b})$, which occurs if $\max_{1\le i\le k_n,1\le t\le n}E|x_{i,t}|^p = o(n^{p(1-b)})$. QED.

Proof of Theorem 2.10. Under $\alpha$-mixing $\limsup_{n\to\infty}\alpha_n(m) = O(m^{-\lambda-\iota})$, $\lambda > qp/(q-p)$ and $q > p$, it follows $x_{i,t}$ is an $L_q$-bounded $L_p$-mixingale for each $i$, $1 \le p \le q$, with size $\lambda(1/p - 1/q)$ (McLeish, 1975, Lemma 1.6). Thus $x_{i,t}$ is $L_p$-physical dependent given $\lambda > qp/(q-p)$ for each $i$ (Hill, 2025a, Theorem 2.1). Moreover, by measurability $\sqrt{n}S_{i,n}$ is mixing with coefficients $\limsup_{n\to\infty}\tilde\alpha_n(m) = O(m^{-2-\iota})$. Hence $\sqrt{n}S_{i,n}$ satisfies Leadbetter (1974, 1983)'s $D(u_n)$ property for $u_n = a_n + u\times b_n$, all $u \in \mathbb{R}$, and some $a_n > 0$ and $b_n \in \mathbb{R}$. Furthermore, Leadbetter (1974, 1983)'s $D'(u_n)$ property also holds since for any $l > 0$
$$k_n\sum_{m=2}^{k_n}P\left(\sqrt{n}S_{i,n} > u_{k_nl}, \sqrt{n}S_{i+m,n} > u_{k_nl}\right)$$
$$= k_n\sum_{m=2}^{k_n}\left\{P\left(\sqrt{n}S_{i,n} > u_{k_nl}, \sqrt{n}S_{i+m,n} > u_{k_nl}\right) - P\left(\sqrt{n}S_{i,n} > u_{k_nl}\right)P\left(\sqrt{n}S_{i+m,n} > u_{k_nl}\right)\right\} + k_n\sum_{m=2}^{k_n}P\left(\sqrt{n}S_{i,n} > u_{k_nl}\right)P\left(\sqrt{n}S_{i+m,n} > u_{k_nl}\right)$$
$$\le k_n\sum_{m=1}^{k_n-1}\tilde\alpha_n(m) + \frac{1}{l^2}\times lk_nP\left(\sqrt{n}S_{i,n} > u_{k_nl}\right)\times\frac{1}{k_n}\sum_{m=2}^{k_n}lk_nP\left(\sqrt{n}S_{i+m,n} > u_{k_nl}\right)$$
$$\le Kk_n\sum_{m=1}^{k_n-1}m^{-2-\iota} + \tau^2\frac{1}{l^2}(1+o(1)) \simeq Kk_n\frac{1}{k_n^{1+\iota}} + \tau^2\frac{1}{l^2}(1+o(1)) = o(1/l),$$
cf. Leadbetter (1974, eq. (3.2)). The second and third inequalities use $\max_{1\le i\le k_n}k_nP(\sqrt{n}S_{i,n} > u_{k_n}) \to \tau$. The first uses the $\alpha$-mixing coefficient construction implication
$$\left|P\left(\sqrt{n}S_{i,n} > u_{k_nl}, \sqrt{n}S_{i+m,n} > u_{k_nl}\right) - P\left(\sqrt{n}S_{i,n} > u_{k_nl}\right)P\left(\sqrt{n}S_{i+m,n} > u_{k_nl}\right)\right| \le \tilde\alpha_n(m).$$
The conditions of Theorem 1.2 in Leadbetter (1983) therefore hold: $P(\{\max_{1\le i\le k_n}|\sqrt{n}S_{i,n}| - a_{k_n}\}/b_{k_n} \le u) = P(\max_{1\le i\le k_n}|\sqrt{n}S_{i,n}| \le u_{k_n}) \to \exp\{-\tau\}$ $\forall u \in \mathbb{R}$.
Therefore $\forall u > 0$
$$P(M_n > u) \le P\left(\frac{1}{b_{k_n}}\left\{\max_{1\le i\le k_n}\left|\sqrt{n}S_{i,n}\right| - a_{k_n}\right\} > \frac{\sqrt{n}}{b_{k_n}}\left(u - \frac{a_{k_n}}{\sqrt{n}}\right)\right).$$
This suffices to prove $M_n \overset{p}{\to} 0$ if $\sqrt{n}/b_{k_n} \to \infty$ and $a_{k_n}/\sqrt{n} \to 0$, as required. QED.

References

Adamek, R., Smeekes, S., Wilms, I., 2023. Lasso inference for high-dimensional time series. J. Econometrics 235, 1114–1143.
Andrews, D.W.K., 1984. Nonstrong mixing autoregressive processes. J. Appl. Probab. 21, 930–934.
Andrews, D.W.K., 1987. Consistency in nonlinear econometric models: A generic uniform law of large numbers. Econometrica 55, 1465–1471.
Andrews, D.W.K., 1988. Laws of large numbers for dependent non-identically distributed random variables. Econometric Theory 4, 458–467.
Belloni, A., Chernozhukov, V., Hansen, C., 2014. High-dimensional methods and inference on structural and treatment effects. J. Econom. Perspect. 28, 29–50.
Bentkus, V., 2008. An extension of the Hoeffding inequality to unbounded random variables. Lith. Math. J. 48, 137–157.
Berman, S.M., 1964. Limit theorems for the maximum term in stationary sequences. Ann. Math. Stat. 35, 502–516.
Bosq, D., 1993. Bernstein-type large deviations inequalities for partial sums of strong mixing processes. Statistics 24, 59–70.
Bühlmann, P., van de Geer, S., 2011. Statistics for High-Dimensional Data. Springer, Berlin.
Burkholder, D.L., 1973. Distribution function inequalities for martingales. Ann. Probab. 1, 19–42.
Cattaneo, M.D., Jansson, M., Newey, W., 2018. Inference in linear regression models with many covariates and heteroscedasticity. J. Amer. Statist. Assoc. 113, 1350–1361.
Chang, J., Chen, X., Wu, M., 2024. Central limit theorems for high dimensional dependent data. Bernoulli 30, 712–742.
Chang, J., Jiang, Q., Shao, X.,
2023. Testing the martingale difference hypothesis in high dimension. J. Econometrics 235, 972–1000.
Chazottes, J.R., Gouezel, S., 2012. Optimal concentration inequalities for dynamical systems. Comm. Math. Phys. 316, 843–889.
Chernick, M.R., 1981. A limit theorem for the maximum of autoregressive processes with uniform marginal distribution. Ann. Probab. 9, 145–149.
Chernozhukov, V., Chetverikov, D., Kato, K., 2013. Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. Ann. Statist. 41, 2786–2819.
Collet, P., Martinez, S., Schmitt, B., 2002. Exponential inequalities for dynamical measures of expanding maps of the interval. Probab. Theory Rel. 123, 301–322.
Davidson, J., 1994. Stochastic Limit Theory. Oxford University Press, Oxford, U.K.
Dedecker, J., Doukhan, P., 2003. A new covariance inequality and applications. Stochastic Process. Appl. 106, 63–80.
Dedecker, J., Doukhan, P., Lang, G., Leon, J.R., Louhichi, S., Prieur, C., 2007. Weak Dependence: With Examples and Applications. Springer.
Dedecker, J., Prieur, C., 2004. Coupling for τ-dependent sequences and applications. J. Theoret. Probab. 17, 861–885.
Dedecker, J., Prieur, C., 2005. New dependence coefficients. Examples and applications to statistics. Probab. Theory Rel. 132, 203–235.
Dezeure, R., Bühlmann, P., Zhang, C.H., 2017. High-dimensional simultaneous inference with the bootstrap. Test 26, 685–719.
Diaconis, P., Freedman, D., 1999. Iterated random functions. SIAM Rev. 41, 45–76.
Dumbgen, L., van de Geer, S., Veraar, M.C., Wellner, J.A., 2010. Nemirovski's inequalities revisited. Amer. Math. Monthly 117, 138–160.
Fan, J., Li, R., 2006. Statistical challenges with high dimensionality: Feature selection in knowledge discovery, in: Sanz-Sole, M., Soria, J., Varona, J.L., Verdera, J. (Eds.), Proceedings of the International Congress of Mathematicians, European Mathematical Society, Zurich, pp. 595–622.
Fan, J., Lv, J., Qi, L., 2011.
Sparse high-dimensional models in economics. Annu. Rev. Economics 3, 291–317.
Genovese, C.R., Jin, J., Wasserman, L., Yao, Z., 2012. A comparison of the lasso and marginal regression. J. Mach. Learn. Res. 13, 2107–2143.
Gordin, M.I., 1969. The central limit theorem for stationary processes. Soviet Math. Dokl. 10, 1174–1176.
Hall, P., Heyde, C.C., 1980. Martingale Limit Theory and Its Application. Academic Press, New York.
Hang, H., Steinwart, I., 2017. A Bernstein-type inequality for some mixing processes and dynamical systems with an application to learning. Ann. Statist. 45, 708–743.
Hannan, E.J., 1973. Central limit theorems for time series regression. Z. Wahrscheinlichkeitstheorie 26, 157–170.
Hannan, E.J., Deistler, M., 1988. The Statistical Theory of Linear Systems. Wiley, New York.
Hansen, B.E., 1991. Strong laws for dependent heterogeneous processes. Econometric Theory 7, 213–221.
Hill, J.B., 2024. Supplemental material for 'Max-Laws of Large Numbers for High Dimensional Arrays with Applications'. Dept. of Economics, University of North Carolina - Chapel Hill.
Hill, J.B., 2025a. Mixingale and physical dependence equality with applications. Statist. Probab. Lett. 221, in press.
Hill, J.B., 2025b. Testing many zero restrictions in a high dimensional linear regression setting. J. Bus. Econom. Statist. 43, 55–67.
Hill, J.B., Ghysels, E., Motegi, K., 2020. Testing a large set of zero restrictions in regression models, with an application to mixed frequency Granger causality. J. Econometrics 218, 633–654.
Hill, J.B., Li, T., 2025. A bootstrapped test of covariance stationarity based on orthonormal transformations. Bernoulli
31, 1527–1551. Hill, J.B., Motegi, K., 2020. A max-correlation white noise test for weakly dependent time series. Econometric Theory 36, 907–960. 41 Hsing, T., H¨ usler, Reiss, R.D., 1996. The extremes of a triangular array of normal random variables. Ann. Appl. Probab. 6, 671–686. Ibragimov, I.A., 1962. Some limit theorems for stationary processes. Theory Probab. Appl. 7, 349–382. Jin, L., Wang, S., Wang, H., 2015. A new non-parametric stationarity test of time series in the time domain. J. Roy. Stat. Soc. Ser. B 77, 893–922. Jirak, M.J., K¨ ostenberger, G., 2024. Sharp oracle inequalities and universality of the aic and fpe. ArXiv:2406.13513v1. Kallenberg, O., 2021. Foundations of Modern Probability. 3rd ed., Springer Nature, Switzerland. Keenan, D.M., 1997. A central limit theorem for m(n) autocovariances. J. Time Series Anal. 18, 61–78. Krebs, J.T.N., 2018a. A Bernstein inequality for exponentially growing graphs. Comm. Statist. - Theory and Method 47, 5097–5106. Krebs, J.T.N., 2018b. A large deviation inequality for βmixing time series and its appli- cations to the functional kernel regression mode. Statist. Probab. Lett. 113, 50–58. K¨ uhn, F., Schilling, R.L., 2023. Maximal inequalities and some applications. Probab. Surveys 20, 382–485. Leadbetter, M.R., 1974. On extreme values in stationary sequences. Z. Wahrscheinlichkeit- stheorie verw. Geb. 28, 289–303. Leadbetter, M.R., 1983. Extremes and local dependence in stationary sequences. Z. Wahrscheinlichkeitstheorie verw. Gebiete 65, 291–306. Leeb, H., P¨ otscher, B.M., 2006. Can one estimate the conditional distribution of post- model-selection estimators. Ann. Statist. 34, 2554–2591. Li, Y.L., 2003. A martingale inequality and large deviations. Statist. Probab. Lett. 62, 317–321. Liu, W., Xiao, H., Wu, W.B., 2013. Probability and moment inequalities under depen- dence. Statist. Sinica 23, 257–1272. Maume-Deschamps, V., 2006. 
Exponential inequalities and functional estimations for weak dependent data; applications to dynamical systems. Stochastic Dyn. 6, 535–560. McKeague, I., Qian, M., 2015. An adaptive resampling test for detecting the presence of significant predictors. J. Amer. Statist. Assoc. 110, 1422–1433. McLeish, D.L., 1975. A maximal inequality and dependent strong laws. Ann. Probab. 3, 829–839. Meng, Y., Lin, Z., 2009. Maximal inequalities and laws of large numbers for lq-mixingale arrays. Statist. Probab. Lett. 79, 1539–1547. Merlev` ede, F., Peligrad, M., 2002. On the coupling of dependence random variables and applications, in: Empical Processes Techniques for Dependent Data. Birkh¨ auser, pp. 171–193. Merlev` ede, F., Peligrad, M., Rio, E., 2011. Bernstein inequality and moderate deviations for weakly dependent sequences. Probab. Theory Rel. 151, 435–474. Volume 5. 42 Mies, F., Steland, A., 2023. Sequential gaussian approximation for nonstationary time series in high dimensions. Bernoulli 29, 3114–3140. Nagaev, S.V., 1957. Some limit theorems for stationary markov chains. Theory Probab. Appl. 11, 378–406. Nemirovski, A.S., 2000. Topics in nonparametric statistics, in: Emery, M., Nemirovski, A., Voiculescu, D., Bernard, P. (Eds.), Ecole d’Ete´ e de Probabilitis de Saint-Flour XXVII. Springer, Berlin. volume 1738, pp. 87–285. Lectures Notes on Mathematics. Newey, W.K., 1991. Uniform convergence in probability and stochastic equicontinuity. Econometrica 59, 1161–1167. Peligrad, M., 2002. Some remarks on coupling of dependent random variables. Statist. Probab. Lett. 60, 201–209. Pinelis, I., 1994. Optimum bounds for the distributions of martingales in banach spaces. Ann. Probab.
|
https://arxiv.org/abs/2505.22423v1
|
22, 1679–1706. Pollard, D., 1984. Convergence of Stochastic Processes. Springer Verlag, New York. P¨ otscher, B.M., Prucha, I.R., 1989. A uniform law of large numbers for dependent and heterogeneous data processes. Econometrica 5, 675–683. Rio, E., 1995. The functional law of the iterated logarithm for stationary strongly mixing sequences. Ann. Probab. 23, 1188–1203. Rio, E., 1996. Sur le theoreme de Berry-Esseen pour les suites faiblement. Probab. Theory Rel. 104, 255–282. Rio, E., 2017. Asymptotic Theory of Weakly Dependent Random Processes. Springer. Samson, P.M., 2000. Concentration of measure inequalities for markov chains and ϕ-mixing processes. Ann. Probab. 28, 416–461. Talagrand, M., 1995a. Concentration of measure and isoperimetric inequalities in product spaces. Pub. Math. de l’IH ´ES 81, 73–205. Talagrand, M., 1995b. The missing factor in hoeffding’s inequalities,. Ann. Henri Poincare . Talagrand, M., 2003. Spin Glasses: A Challenge for Mathematicians: Cavity and Mean. Springer, Berlin. van der Vaart, A., Wellner, J., 1996. Weak Convergence and Empirical Processes. Springer, New York. Valenzuela-Dominguez, E., Krebs, J.T.N., Franke, J.E., 2017. A Bernstein inequality for spatial lattice processes. ArXiv preprint arXiv:1702.02023. Vershynin, R., 2018. High-Dimensional Probability. Cambridge University Press, Cam- bridge, UK. Viennet, G., 1997. Inequalities for absolutely regular sequences: Application to density estimation. Probab. Theory Rel. 107, 467–492. Wintenberger, O., 2010. Deviation inequalities for sums of weakly dependent time series. Electron. Commun. Probab. 15, 489–503. 43 Wu, W.B., 2005. Nonlinear system theory: Another look at dependence. Proc. Natl. Acad. Sci. 102, 14150–14154. Wu, W.B., 2011. Asymptotic theory for stationary processes. Statist. Interface 0, 1–20. Wu, W.B., Min, M., 2005. On linear processes with dependent innovations. Stochastic Process. Appl. 115, 939–958. Wu, W.B., Shao, X., 2004. 
Limit theorems for iterated random functions. J. Appl. Probab. 41, 425–436. Wu, W.B., Shao, X., 2007. A limit theorem for quadratic forms and its applications. Econometric Theory 23, 930–951. Wu, W.B., Wu, Y.N., 2016. Performance bounds for parameter estimates of high- dimensional linear models with correlated errors. Electron. J. Statist. 10, 352–379. 44
|
https://arxiv.org/abs/2505.22423v1
|
arXiv:2505.22646v1 [math.ST] 28 May 2025

Path-Dependent SDEs: Solutions and Parameter Estimation

Pardis Semnani, Vincent Guan, Elina Robeva, and Darrick Lee

May 29, 2025

Abstract. We develop a consistent method for estimating the parameters of a rich class of path-dependent SDEs, called signature SDEs, which can model general path-dependent phenomena. Path signatures are iterated integrals of a given path with the property that any sufficiently nice function of the path can be approximated by a linear functional of its signature. This is why we model the drift and diffusion of our signature SDE as linear functions of path signatures. We provide conditions that ensure the existence and uniqueness of solutions to a general signature SDE. We then introduce the Expected Signature Matching Method (ESMM) for linear signature SDEs, which enables inference of the signature-dependent drift and diffusion coefficients from observed trajectories. Furthermore, we prove that ESMM is consistent: given sufficiently many samples and Picard iterations used by the method, the parameters estimated by the ESMM approach the true parameters with arbitrary precision. Finally, we demonstrate on a variety of empirical simulations that our ESMM accurately infers the drift and diffusion parameters from observed trajectories. While parameter estimation is often restricted by the need for a suitable parametric model, this work makes progress toward a completely general framework for SDE parameter estimation, using signature terms to model arbitrary path-independent and path-dependent processes.

Keywords: Path signatures, Rough paths, Path-dependent, Stochastic differential equations, Consistent estimator

2020 Mathematics Subject Classification: 60L20, 60L90, 62M99, 62M09

1. Introduction

Stochastic differential equations (SDEs) capture both random and deterministic dynamics, making them powerful tools for modeling processes in a broad range of fields, including biology, chemistry, physics, economics, and computer vision [36]. Due to the need for analytical and computational tractability, the SDE is commonly assumed to be Markovian, and therefore path-independent, i.e., the process's evolution depends only on its current state. Formally, a path-independent SDE obeys the form
$$dY_t = a(Y_t)\,dt + b(Y_t)\,dW_t,$$
where $Y_t$ is an $m$-dimensional process and $W_t$ is an $n$-dimensional Brownian motion. Due to its importance in applications, estimating the drift $a$ and diffusion $b$ of an unknown SDE from sample observations has been an active area of research for several decades, and has been especially well-studied for path-independent SDEs. Once a parametric drift and diffusion family is specified, a number of approaches, including maximum likelihood estimation (MLE) [37, 2, 36, 23, 41], the method of moments [33, 32, 35], and kernel density estimation [31, 5] may be used to infer the underlying parameters.

However, real-world processes may be highly non-Markovian, since the process evolution may exhibit delays, cyclic patterns, dependence on trends, and other historical dependencies. Examples include biological processes (e.g., lung function [1]), psychological behavior [18], population dynamics [13], mechanics [20], and climate dynamics [21]. Non-Markovian dynamics are commonly known as path-dependent processes, and can thus be modeled by path-dependent SDEs, whose drift and diffusion coefficients
may depend on the entire history of the path,
$$dY_t = a(Y_{[0,t]})\,dt + b(Y_{[0,t]})\,dW_t.$$
A simple construction of a path-dependent SDE is given in [29]. First, consider the three-dimensional linear SDE
$$dY^{(1)}_t = dW^{(1)}_t, \qquad dY^{(2)}_t = Y^{(3)}_t\,dt + dW^{(2)}_t, \qquad dY^{(3)}_t = Y^{(1)}_t\,dt. \tag{1.1}$$
Although this three-dimensional SDE is Markovian, we note that if the last component $Y^{(3)}_t$ is not observed, then the second component becomes dependent on the path history of the first component, yielding
$$dY^{(1)}_t = dW^{(1)}_t, \qquad dY^{(2)}_t = \Big(\int_0^t Y^{(1)}_s\,ds\Big)\,dt + dW^{(2)}_t.$$
The above is a simple example of a distributed-delay SDE, which is a well-studied class of path-dependent SDEs. In particular, distributed-delay SDEs assume an integro-differential form, such that the drift and diffusion parameters integrate a kernel over a time interval:
$$dY_t = a\Big(Y_t, \int_{t-\tau}^{t} K(t,s,Y_s)\,ds\Big)\,dt + b\Big(Y_t, \int_{t-\tau}^{t} K(t,s,Y_s)\,ds\Big)\,dW_t,$$
where $\tau$ is the time lag, and $K$ is a bounded kernel from a parametric family. Conditions for the existence and uniqueness of solutions, as well as numerical approximations, have been studied under various assumptions [6, 40]. Although distributed-delay SDEs are natural examples, path-dependence encompasses a much broader class of dependencies beyond those captured by delay kernels. Furthermore, existing parameter estimation methods for the drift and diffusion are typically confined to narrowly defined models and rely on strong assumptions about the delay kernel [3, 43].

In this article, we develop a consistent method for estimating the parameters of a rich class of path-dependent SDEs, called signature SDEs, which can model general path-dependent phenomena. The path signature of a sufficiently smooth path $Y : [0,T] \to \mathbb{R}^n$ is derived from an infinite sequence of iterated integrals $\mathbf{Y}_t := (\mathbf{Y}^k_t)_{k=0}^{\infty}$, where
$$\mathbf{Y}^k_t := \int_{0 \le t_1 < \ldots < t_k \le t} dY_{t_1} \otimes \ldots \otimes dY_{t_k} \in (\mathbb{R}^n)^{\otimes k}. \tag{1.2}$$
The path signature $S(Y) := \mathbf{Y}_T$ completely characterizes the path $Y$ up to tree-like equivalence [8, 19, 4], i.e., it identifies the path up to reparametrizations and retracings.
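To make (1.2) concrete, here is a minimal numerical sketch (our own illustration, not code from the paper) for piecewise-linear paths in $\mathbb{R}^2$, truncated at level 2. For a straight segment with increment $\Delta$, the level-$k$ term is $\Delta^{\otimes k}/k!$, and segments are combined with Chen's identity; the helper names `segment_sig`, `chen`, and `piecewise_linear_sig` are ours.

```python
import numpy as np

def segment_sig(delta):
    """Levels 1 and 2 of the signature of a straight segment with increment
    `delta`: S^1 = delta and S^2 = delta (x) delta / 2 (iterated integrals
    of a linear path)."""
    delta = np.asarray(delta, dtype=float)
    return delta, np.outer(delta, delta) / 2.0

def chen(sig_a, sig_b):
    """Chen's identity truncated at level 2: concatenating paths multiplies
    their signatures in the tensor algebra."""
    a1, a2 = sig_a
    b1, b2 = sig_b
    return a1 + b1, a2 + b2 + np.outer(a1, b1)

def piecewise_linear_sig(points):
    """Truncated (level <= 2) signature of the piecewise-linear path
    through `points`."""
    pts = np.asarray(points, dtype=float)
    dim = pts.shape[1]
    sig = (np.zeros(dim), np.zeros((dim, dim)))
    for p, q in zip(pts[:-1], pts[1:]):
        sig = chen(sig, segment_sig(q - p))
    return sig
```

Inserting a collinear midpoint is a reparametrization of the same path, so it leaves the signature unchanged; the symmetrized level-2 term also satisfies the shuffle relation $\mathbf{Y}^{(i)}\mathbf{Y}^{(j)} = \mathbf{Y}^{(i,j)} + \mathbf{Y}^{(j,i)}$ discussed in Section 2.1.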
In fact, by appending the time parametrization and considering the signature of the augmented path $\overline{Y}_t = (t, Y_t)$, the signature becomes injective [10, Section 4.3]. We leverage two fundamental properties of the signature [9, 10, 11]:
•universal: linear functionals of the path signature can approximate sufficiently nice functions on the space of paths; and
•characteristic: the expected signature $\mu \mapsto \mathbb{E}_{Y\sim\mu}[S(Y)]$, where $\mu$ is a probability measure on the space of paths, is injective.
Linear signature SDEs, introduced in [12], are Stratonovich SDEs of the form
$$dY_t = A_\theta(\mathbf{Y}_t)\,dt + B_\theta(\mathbf{Y}_t)\circ dW_t, \tag{1.3}$$
where the drift $A_\theta$ and diffusion $B_\theta$ are linear functionals of the signature, parametrized by $\theta \in \Theta$. As the signature faithfully represents the history of the path, these are path-dependent SDEs. Moreover, by the universality of the signature, general continuous path-dependent drift and diffusion terms can be approximated by $A_\theta$ and $B_\theta$.

Our main contributions are twofold.

Existence and Uniqueness of Solutions to Signature SDEs in Short Time Intervals. The question of existence and uniqueness of solutions to signature SDEs is not addressed in [12], and our first contribution is to fill this gap in the restricted setting of short time intervals. Our approach uses the theory of rough paths [27, 15, 14].
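The Picard scheme underlying the existence result has a familiar deterministic analogue. As a hedged toy illustration (ours, not the rough-path construction used in the paper), the Picard iterates for the ODE $y' = y$, $y(0) = 1$, defined by $y^{(r+1)}(t) = 1 + \int_0^t y^{(r)}(s)\,ds$, are polynomials that converge to the solution $e^t$:

```python
import math

def picard_iterate(r_max):
    """Picard iterates for y'(t) = y(t), y(0) = 1, i.e.
    y^{(r+1)}(t) = 1 + int_0^t y^{(r)}(s) ds, stored as a coefficient
    list [c_0, c_1, ...] representing sum_k c_k t^k."""
    y = [1.0]  # y^{(0)}(t) = 1, the constant initial condition
    for _ in range(r_max):
        # integrate the polynomial term by term, then add the initial condition
        y = [1.0] + [c / (k + 1) for k, c in enumerate(y)]
    return y

def evaluate(coeffs, t):
    """Evaluate a coefficient list at t using Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * t + c
    return acc
```

After $r$ iterations the coefficients are exactly $1/k!$ for $k = 0,\ldots,r$, i.e., the degree-$r$ Taylor polynomial of $e^t$, mirroring the limit $\mathbf{Y} = \lim_r \mathbf{Y}(r)$ in the rough-path setting.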
While the integrals in (1.2) are well-defined if the path $Y$ is sufficiently smooth, such integrals may not exist if $Y$ is highly irregular. Instead, a rough path is a path $Y$, together with postulated signature terms $\mathbf{Y}^k$, for $k \le p$, where $p$ is determined by the regularity of the path. Equipped with this additional data, integrals can be defined with respect to rough paths; in fact, the Universal Limit Theorem determines the existence and uniqueness of solutions for path-independent rough differential equations [27]. After presenting preliminaries in Section 2, in Section 3 we consider a general signature controlled differential equation
$$dY_t = F(\mathbf{Y}_t)\,dX_t$$
along with its corresponding lifted equation
$$d\mathbf{Y}_t = \mathbf{F}(\mathbf{Y}_t)\,dX_t, \tag{1.4}$$
where $X$ is a deterministic rough driving signal. The latter equation extends the state space of the former equation by directly incorporating the signature $\mathbf{Y}$ of the solution $Y$ into the differential equation. This procedure is analogous to the example in (1.1), where an SDE becomes path-independent by considering a larger system. Then, by restricting our attention to sufficiently bounded driving noise, we can apply the Universal Limit Theorem [27] to obtain existence and uniqueness for the solutions to these equations (see Theorem 3.9), as stated informally here.

Theorem 1.1. (Informal) If $X$ is an appropriately bounded rough path, then there exists a unique solution $\mathbf{Y}$ to (1.4), and there exists a sequence of Picard iterations $\{\mathbf{Y}(r)\}_{r\in\mathbb{Z}_{\ge 0}}$, where $\mathbf{Y} = \lim_{r\to\infty}\mathbf{Y}(r)$.

Next, in Section 4, we specialize to the case of the linear signature SDE in (1.3) and show that for all parameters $\theta \in \Theta$, there exist uniform bounds on the driving noise such that both the solution $\mathbf{Y}$ and the Picard estimates $\mathbf{Y}(r)$ are well-defined.

Consistent Parameter Estimation. Our second contribution, in Section 5, is to develop a consistent method to estimate the parameters $\theta$ of the signature SDE in (1.3).
We leverage the characteristic property of the signature to extend the Expected Signature Matching Method, introduced in [35] for path-independent SDEs with polynomial vector fields, to linear signature SDEs. We begin in Theorem 5.4 by expressing the Picard iterations $\mathbf{Y}_\theta(r)$ of the differential equation in (1.3) as a polynomial in $\theta$, with coefficients determined by the signature $\mathbf{X}$ of an admissible deterministic driving signal. Then, given a stochastic driving signal, the restricted expectation of the $r$th Picard iteration is a polynomial in $\theta$, given by
$$P_r(\theta) := \mathbb{E}[\mathbf{Y}_\theta(r)\cdot\chi_{E_\xi}],$$
where $\chi_{E_\xi}$ is the indicator function on a subset $E_\xi \subset \mathcal{S}$ of the sample space with appropriately bounded sample driving signals. Now given a collection $\{Y_{\theta_0}(\sigma_i)\}_{i=1}^{N}$ of solutions sampled from (1.3) with respect to an unknown parameter $\theta_0 \in \Theta$, we can solve a system of polynomial equations
$$P_r(\theta) = \frac{1}{N}\sum_{i=1}^{N}\mathbf{Y}_{\theta_0}(\sigma_i)\cdot\chi_{E_\xi}(\sigma_i) \tag{1.5}$$
to estimate $\theta_0$. Following [35], we call this method the Expected Signature Matching Method. Our main result shows that this method is a consistent estimator for linear signature SDEs, stated with explicit rates, and proved in Theorem 5.11.

Theorem 1.2. (Informal) Let $P(\theta) := \mathbb{E}[\mathbf{Y}_\theta\cdot\chi_{E_\xi}]$ denote the restricted expected signature of the solution. Suppose $P$ is differentiable at $\theta_0 \in \Theta$ with an invertible Jacobian. Then, almost surely, for all $\varepsilon > 0$, there exist $N_0, r_0 \in \mathbb{N}$ such that
for $N \ge N_0$ and $r \ge r_0$, the polynomial system (1.5) has a solution $\theta_{r,N} \in B_\varepsilon(\theta_0)$.

In fact, we show in Proposition 5.12 that $P$ is locally Lipschitz, and thus differentiable almost everywhere. Our assumptions and proof of consistency are distinct from those of [35]. In particular, [35] assumes a priori that a unique solution to (1.5) exists, while we do not make this assumption. Furthermore, [35] requires invertibility and a uniform lower bound on the Jacobian of $P_r$ for all $r \in \mathbb{N}$ and $\theta \in \Theta$, while our result only requires invertibility of the Jacobian of $P$ at the true parameter value $\theta_0 \in \Theta$. We also note that linear signature SDEs can model path-independent SDEs with polynomial vector fields (see Experiment 6.2), so Theorem 1.2 can be viewed as a generalization of [35, Theorem 3.6] in this setting; see Remark 5.1 and Remark 5.5.

We demonstrate the efficacy of our algorithm on numerical simulations in Section 6. We then conclude in Section 7 by discussing the extent of identifiability of the parameters of the signature SDE from the observed trajectories. We also provide a table of notation in Appendix A.

Acknowledgments. We thank Emilio Ferrucci for discussions on the signature SDE, and James Foster for suggestions on simulations. We also thank Anastasia Papavasiliou for answering several questions about her work [35]. Pardis Semnani was supported by a Vanier Canada Graduate Scholarship. Vincent Guan was supported by an NSERC Graduate Fellowship. Elina Robeva, Pardis Semnani, and Vincent Guan were supported by a Canada CIFAR AI Chair and an NSERC Discovery Grant (DGECR-2020-00338). Part of this research was performed while Pardis Semnani was visiting the Institute for Mathematical and Statistical Innovation (IMSI), which is supported by the National Science Foundation (Grant No. DMS-1929348). Darrick Lee was supported by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA) during part of this work.
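The moment-matching idea behind (1.5) can be seen in its simplest instance. The following toy sketch (our own example with hypothetical helper names, not the general algorithm of Section 5) uses the fact that for $dY_t = \theta\,dt + dW_t$, $Y_0 = 0$, the expected level-1 signature is $\mathbb{E}[Y_T] = \theta T$, so matching the empirical mean of simulated endpoints to $\theta T$ gives a linear equation in $\theta$:

```python
import math
import random

def simulate_endpoints(theta, T=1.0, n_steps=100, n_paths=2000, seed=0):
    """Euler-Maruyama endpoints Y_T of dY_t = theta dt + dW_t, Y_0 = 0."""
    rng = random.Random(seed)
    dt = T / n_steps
    endpoints = []
    for _ in range(n_paths):
        y = 0.0
        for _ in range(n_steps):
            # drift increment plus Gaussian noise with variance dt
            y += theta * dt + rng.gauss(0.0, math.sqrt(dt))
        endpoints.append(y)
    return endpoints

def esmm_level1(endpoints, T=1.0):
    """Match the empirical mean of Y_T to the expected level-1 signature
    E[Y_T] = theta * T and solve the resulting linear equation for theta."""
    return sum(endpoints) / (len(endpoints) * T)
```

For this drift-only model the "polynomial system" is a single degree-one equation; the general ESMM of Section 5 instead matches higher signature levels and solves a genuinely polynomial system in $\theta$.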
2. Preliminaries

In this section, we provide some of the required background and notation on path signatures and rough paths. For further details, we refer the reader to [27, 15]. In general, we will be working with bounded $p$-variation paths.

Definition 2.1. Let $V$ be a Banach space with norm $\|\cdot\|$ and $p \ge 1$. Let $X : [0,T] \to V$ be continuous, i.e., $X \in C([0,T],V)$. For $0 \le s < t \le T$, we define the $p$-variation of $X$ on $[s,t]$ by
$$|X|_{p,[s,t]} := \sup_{D}\left(\sum_{i=1}^{r_D}\|X_{t_i} - X_{t_{i-1}}\|^p\right)^{1/p},$$
where the supremum is taken over all partitions $D = \{s = t_0 < t_1 < \cdots < t_{r_D} = t\}$ of $[s,t]$. We define the $p$-variation norm of $X$ to be $\|X\|_p := |X|_{p,[0,T]} + \|X_0\|$. The space of bounded $p$-variation paths is
$$C^{p\text{-var}}([0,T],V) := \big\{X \in C([0,T],V) : \|X\|_p < \infty\big\}.$$

2.1. Path Signatures in the Young Regime. We begin with background on path signatures for sufficiently regular paths. Throughout this article, we will primarily focus on finite-dimensional Hilbert spaces $V$. Suppose we have an orthonormal basis $(e_1, \ldots, e_n)$ of $V$. This induces an orthonormal basis (and thus an inner product) on $V^{\otimes k}$, where the basis elements are
$$e_I := e_{i_1} \otimes \ldots \otimes e_{i_k} \tag{2.1}$$
for all multi-indices $I = (i_1, \ldots, i_k)$, where $i_j \in [n]$. The tensor algebra $T(V)$ and its completion $T((V))$ are
respectively defined as the direct sum and product of all such tensor powers
$$T(V) := \bigoplus_{k=0}^{\infty} V^{\otimes k} \quad\text{and}\quad T((V)) := \prod_{k=0}^{\infty} V^{\otimes k}.$$
Note that the individual Hilbert space structure of $V^{\otimes k}$ does not induce a Hilbert space structure on $T((V))$, but we may restrict to finite-norm elements to obtain a Hilbert space,
$$H((V)) := \Big\{s = (s_k)_{k=0}^{\infty} \in T((V)) : \sum_{k\ge 0}\|s_k\| < \infty\Big\}.$$
We will also need to work with truncations of the tensor algebra, which we denote by
$$T^{(\le q)}(V) := \bigoplus_{k=0}^{q} V^{\otimes k}.$$
We can now define path signatures for paths in the Young regime, with bounded $p$-variation where $p < 2$.

Definition 2.2. Let $p \in [1,2)$ and $X \in C^{p\text{-var}}([0,T],V)$. The path signature is a map $S : C^{p\text{-var}}([0,T],V) \to T((V))$, defined by
$$S(X) := \big(S^k(X)\big)_{k=0}^{\infty} = \left(\int_{\Delta^k_{0,T}} dX_{t_1}\otimes\ldots\otimes dX_{t_k}\right)_{k=0}^{\infty},$$
where $\Delta^k_{s,t} := \{(t_1,\ldots,t_k) : s \le t_1 < \ldots < t_k \le t\}$ for all $s \le t$. The integral is defined as a Young integral, which is well-defined for $X \in C^{p\text{-var}}([0,T],V)$ with $p \in [1,2)$. The component $S^k(X) \in V^{\otimes k}$ is called the level $k$ component of the signature. The level 0 component is $S^0(X) = 1$.

In view of the rough paths setting in the following section, we will also consider the collection of signatures of a path $X \in C^{p\text{-var}}([0,T],V)$, restricted to all subintervals. In particular, we define the map $\mathbf{X} : \Delta_T \to T((V))$, where $\Delta_T := \Delta^2_{0,T}$, and for all $(s,t) \in \Delta_T$,
$$\mathbf{X}_{s,t} := S(X|_{[s,t]}) := \left(\int_{\Delta^k_{s,t}} dX_{t_1}\otimes\ldots\otimes dX_{t_k}\right)_{k=0}^{\infty}.$$
The signature preserves the underlying concatenation structure of paths and satisfies an algebraic property called Chen's identity,
$$\mathbf{X}_{s,t}\otimes\mathbf{X}_{t,u} = \mathbf{X}_{s,u} \quad\text{for all } 0 \le s < t < u \le T.$$
Following standard notation for the signature, we denote the level $k$ component by $\mathbf{X}^k$. Given a path $X = (X^1,\ldots,X^n) \in C^{p\text{-var}}([0,T],V)$, the path signature with respect to the multi-index $I = (i_1,\ldots,i_k) \in [n]^k$ is denoted by
$$\mathbf{X}^I_{s,t} := \int_{\Delta^k_{s,t}} dX^{i_1}_{t_1}\cdots dX^{i_k}_{t_k}.$$
The components of the path signature satisfy the shuffle product, defined as follows. The permutation group on $k$ elements is denoted by $\Sigma_k$.
For $k,\ell \in \mathbb{N}$, the set of $(k,\ell)$-shuffles is defined as
$$\mathrm{Sh}(k,\ell) := \{\sigma \in \Sigma_{k+\ell} : \sigma^{-1}(1) < \ldots < \sigma^{-1}(k),\ \sigma^{-1}(k+1) < \ldots < \sigma^{-1}(k+\ell)\}.$$
The shuffle of two multi-indices $I = (i_1,\ldots,i_k)$ and $J = (i_{k+1},\ldots,i_{k+\ell})$ is defined by the multi-set
$$I \mathbin{⧢} J := \{(i_{\sigma(1)},\ldots,i_{\sigma(k+\ell)}) : \sigma \in \mathrm{Sh}(k,\ell)\}.$$
The path signature satisfies the following shuffle identity,
$$\mathbf{X}^I\cdot\mathbf{X}^J = \sum_{K \in I \mathbin{⧢} J}\mathbf{X}^K.$$

Example 2.3. For $I = (1,2)$ and $J = (3,4)$, we have
$$I \mathbin{⧢} J = \{(1,2,3,4),(1,3,2,4),(1,3,4,2),(3,1,2,4),(3,1,4,2),(3,4,1,2)\}.$$
For $I = (1)$ and $J = (1)$, we have $I \mathbin{⧢} J = \{(1,1),(1,1)\}$.

2.2. Rough Paths. As we are primarily interested in differential equations driven by Brownian motion trajectories, which are almost surely bounded $p$-variation paths only for $p > 2$, we consider path signatures beyond the Young regime, where we will use the theory of rough paths [28]. While Young integration allows us to compute signatures (and more generally, integrals) of bounded $p$-variation paths with $p < 2$, we must enrich lower-regularity paths with additional data in order to properly define signatures and integrals. We begin with the notion of a control, which is
used to measure the regularity of paths.

Definition 2.4. A control is a continuous non-negative function $\omega : \Delta_T \to \mathbb{R}_{\ge 0}$ such that $\omega(t,t) = 0$ and $\omega(s,t) + \omega(t,u) \le \omega(s,u)$ for all $s \le t \le u$.

Now, we turn to the definition of a rough path.

Definition 2.5. Let $p \ge 1$. A $p$-rough path is a function $\mathbf{X} : \Delta_T \to T^{(\le\lfloor p\rfloor)}(V)$ such that for all $0 \le s \le t \le u \le T$, $\mathbf{X}^0_{s,t} = 1$, Chen's identity holds, i.e., $\mathbf{X}_{s,t}\otimes\mathbf{X}_{t,u} = \mathbf{X}_{s,u}$, and it satisfies the regularity conditions
$$\|\mathbf{X}^k_{s,t}\| \le \frac{\omega(s,t)^{k/p}}{\beta\,(k/p)!} \quad\text{for all } 1 \le k \le \lfloor p\rfloor \tag{2.2}$$
for some control $\omega$ and a constant $\beta$, which only depends on $p$.¹ Note that $(k/p)! := \Gamma(k/p+1)$, where $\Gamma$ is the Gamma function. The $p$-variation of the rough path $\mathbf{X}$ is said to be controlled by $\omega$ if (2.2) holds. The space of $p$-rough paths is equipped with the $p$-variation metric
$$d_p(\mathbf{X},\mathbf{Y}) := \max_{1\le k\le\lfloor p\rfloor}\ \sup_{D\subset[0,T]}\left(\sum_{i=1}^{r_D}\|\mathbf{X}^k_{t_{i-1},t_i} - \mathbf{Y}^k_{t_{i-1},t_i}\|^{p/k}\right)^{1/p}.$$
The following extension theorem shows that path signatures are well-defined for rough paths.

Theorem 2.6. [27, Theorem 3.7] For $p \ge 1$, let $\mathbf{X} : \Delta_T \to T^{(\le\lfloor p\rfloor)}(V)$ be a $p$-rough path whose $p$-variation is controlled by some control $\omega$. Then, there exists a unique extension $\widetilde{\mathbf{X}} : \Delta_T \to T((V))$ of $\mathbf{X}$ such that $\widetilde{\mathbf{X}}^k = \mathbf{X}^k$ for all $k \le \lfloor p\rfloor$, Chen's identity holds for $\widetilde{\mathbf{X}}$, i.e., $\widetilde{\mathbf{X}}_{s,t}\otimes\widetilde{\mathbf{X}}_{t,u} = \widetilde{\mathbf{X}}_{s,u}$ for all $0 \le s \le t \le u \le T$, and $\widetilde{\mathbf{X}}$ satisfies the regularity conditions
$$\|\widetilde{\mathbf{X}}^k_{s,t}\| \le \frac{\omega(s,t)^{k/p}}{\beta\,(k/p)!} \quad\text{for all } k \ge 1.$$
By uniqueness, we also denote the extension (to arbitrary levels) by $\mathbf{X}$, and the signature of a rough path is given by
$$S(\mathbf{X}) := \mathbf{X}_{0,T} \in T((V)). \tag{2.3}$$
In this article, we will work with a class of rough paths called geometric rough paths.

Definition 2.7. A geometric $p$-rough path is a $p$-rough path which is the limit of 1-rough paths in the $p$-variation metric. The space of geometric $p$-rough paths in $V$ is denoted by $G\Omega_p(V)$.

The extension of geometric rough paths still satisfies the shuffle identity.

Corollary 2.8. [7, Corollary 3.9] Let $\mathbf{X} : \Delta_T \to T((V))$ be the extension of a geometric $p$-rough path. Then, for multi-indices $I, J$, we have $\mathbf{X}^I\cdot\mathbf{X}^J = \sum_{K\in I \mathbin{⧢} J}\mathbf{X}^K$.

Remark 2.9.
Throughout this article, we will interchangeably view paths as functions $X : [0,T] \to V$ and as functions $X : \Delta_T \to V$ by $X_{s,t} := X_t - X_s$. Similarly, we can also view rough paths $\mathbf{X} : \Delta_T \to T(V)$ as functions $\mathbf{X} : [0,T] \to T(V)$ by $\mathbf{X}_t := \mathbf{X}_{0,t}$.

We end this section by defining a trivial rough path, which we will use in later sections.

Definition 2.10. For a Banach space $U$, the trivial rough path in $G\Omega_p(U)$ is denoted by $\mathbf{0}_U$, and defined to be $(\mathbf{0}_U)_{s,t} := (1,0,\ldots,0) \in T^{(\le\lfloor p\rfloor)}(U)$ for all $(s,t) \in \Delta_T$. Note that $(1,0,\ldots,0)$ is the multiplicative identity in $T^{(\le\lfloor p\rfloor)}(U)$. With abuse of notation, when the choice of $U$ is clear from the context, we may denote $\mathbf{0}_U$ by $\mathbf{0}$.

¹The presence or absence of the constant $\beta$ and factor $(k/p)!$ in (2.2) does not affect the definition of the class of $p$-rough paths. However, we choose to include them in (2.2) to be consistent with the notation in [27].

2.3. Universal Limit Theorem. The theory of rough paths allows us to study controlled differential equations driven by highly irregular signals. In particular, for Banach spaces $U, V$, given $\mathbf{X} \in G\Omega_p(U)$, $F : V \to L(U,V)$, and $\zeta \in V$, we wish to make sense of rough differential equations of the form
$$dY_t = F(Y_t)\,d\mathbf{X}_t, \qquad Y_0 = \zeta. \tag{2.4}$$
In order
to do so, we consider the notion of Lip$(\gamma)$ functions in the sense of [42, Section VI]. We state the definition for such functions on Banach spaces $U$; for the more general definition for closed sets $F \subset U$, see [42, Section VI.2.3] and [27, Definition 1.21].

Definition 2.11. [15, Definition 10.2] Let $\gamma > 0$, and $U, V$ be Banach spaces. A function $f : U \to V$ is Lip$(\gamma)$ if it is $\gamma_0$-times continuously differentiable, and there exists $M \ge 0$ such that
$$\sup_{u\in U}\|f^{(k)}(u)\| \le M \ \text{ for all } k = 0,\ldots,\gamma_0 \quad\text{and}\quad \sup_{u_1,u_2\in U}\frac{\|f^{(\gamma_0)}(u_1)-f^{(\gamma_0)}(u_2)\|}{\|u_1-u_2\|^{\gamma-\gamma_0}} \le M,$$
where $f^{(k)}$ denotes the $k$th derivative of $f$, and $\gamma_0$ denotes the largest integer that is strictly smaller than $\gamma$. The smallest such $M \ge 0$ is the Lip$(\gamma)$ norm of $f$, denoted $\|f\|_{\mathrm{Lip}(\gamma)}$.

We record a basic lemma which states that the product of two Lip$(\gamma)$ functions is still Lip$(\gamma)$.

Lemma 2.12. Suppose $U, V_1, V_2, V_3$ are Banach spaces, $U$ is finite-dimensional, and $f : U \to L(V_1,V_2)$ and $g : U \to L(V_2,V_3)$ are Lip$(\gamma)$ functions. Then, the product $h : U \to L(V_1,V_3)$, defined by $h(u) := g(u)\circ f(u)$ for all $u \in U$, is a Lip$(\gamma)$ function. Furthermore,
$$\|h\|_{\mathrm{Lip}(\gamma)} \le K\|f\|_{\mathrm{Lip}(\gamma)}\|g\|_{\mathrm{Lip}(\gamma)},$$
where $K > 0$ is a constant that only depends on $\gamma$ and the dimension of $U$.

Proof. The proof is given in Appendix B. □

A key property of rough paths is that it allows us to consider integrals of Lip$(\gamma)$ 1-forms, and we refer the reader to [27, Definition 4.9] for details on the construction.

Theorem 2.13. [27, Theorem 4.12] Let $p \ge 1$. Suppose $f : V \to L(V,U)$ is a Lip$(\gamma-1)$ function for $\gamma > p$. Then, there exists a continuous integration map $I_f : G\Omega_p(V) \to G\Omega_p(U)$ defined by
$$I_f(\mathbf{Z}) = \int f(Z)\,d\mathbf{Z}.$$
If $\omega : \Delta_T \to \mathbb{R}_{\ge 0}$ is a control, there exists a constant $K > 0$ dependent on $\|f\|_{\mathrm{Lip}(\gamma)}$, $p$, $\gamma$, and $\omega(0,T)$ such that for all $\mathbf{Z} \in G\Omega_p(V)$ with $p$-variation controlled by $\omega$, we have
$$\|\mathbf{Y}^k_{s,t}\| \le K\,\omega(s,t)^{k/p} \ \text{ for all } (s,t) \in \Delta_T \text{ and } k = 0,\ldots,\lfloor p\rfloor,$$
where $\mathbf{Y} = I_f(\mathbf{Z})$.

Now, we can use this definition of an integral to define the solution of the rough differential equation in (2.4).
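In the smooth regime ($p = 1$), the integration map of Theorem 2.13 reduces to an ordinary Riemann–Stieltjes integral, which can be sketched numerically (our own illustration; the genuinely rough case requires the postulated higher-level terms and is not captured by this sketch):

```python
import math

def riemann_stieltjes(f, Z, ts):
    """Left-point Riemann-Stieltjes sum approximating int f(Z_t) dZ_t.
    For a smooth driving path Z this converges as the mesh shrinks,
    mirroring the p = 1 case of the integration map I_f."""
    total = 0.0
    for s, t in zip(ts[:-1], ts[1:]):
        total += f(Z(s)) * (Z(t) - Z(s))
    return total
```

For example, with $f(z) = z$ and $Z_t = \sin t$ on $[0,1]$ (so $Z_0 = 0$), the sum approximates $\int_0^1 Z_t\,dZ_t = \tfrac{1}{2}\sin^2(1)$.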
We will denote the projection maps $\pi_U$ and $\pi_V$ from $U\oplus V$ to $U$ and $V$ respectively, and use the same symbol for their induced maps on (truncated) tensor algebras, $\pi_U : T((U\oplus V)) \to T((U))$ and $\pi_V : T((U\oplus V)) \to T((V))$.

Definition 2.14. Let $1 \le p < \gamma$, $F : V \to L(U,V)$ be a Lip$(\gamma-1)$ function, and $\zeta \in V$. Define the vector field $h_F : U\oplus V \to \mathrm{End}(U\oplus V)$ by
$$h_F(u,v) := \begin{pmatrix}\mathrm{Id}_U & 0\\ F(v+\zeta) & 0\end{pmatrix}.$$
We call $\mathbf{Z} \in G\Omega_p(U\oplus V)$ a coupled solution to the rough differential equation in (2.4) if
$$\mathbf{Z} = \int h_F(Z)\,d\mathbf{Z} \quad\text{and}\quad \pi_U(\mathbf{Z}) = \mathbf{X},$$
where the integral is understood in terms of Theorem 2.13. In this case, we call $\mathbf{Y} = \pi_V(\mathbf{Z}) \in G\Omega_p(V)$ the solution to the rough differential equation in (2.4).²

This definition couples together the driving rough path $\mathbf{X} = \pi_U(\mathbf{Z})$ with the solution rough path $\mathbf{Y} = \pi_V(\mathbf{Z})$. In order to obtain solutions to such rough differential equations, we turn to the familiar concept of Picard iterations (though in a generalized form). We define $\mathbf{Z}(0) := (\mathbf{X},\mathbf{0}_V) \in G\Omega_p(U\oplus V)$. We define the Picard iterations with respect to $h_F$ recursively as
$$\mathbf{Z}(r+1) := \int h_F(Z(r))\,d\mathbf{Z}(r). \tag{2.5}$$
The following Universal Limit Theorem shows the existence and uniqueness of solutions to rough differential equations.

Theorem 2.15. [27, Theorem 5.3] Let $1 \le p < \gamma$, $F : V \to L(U,V)$ be a Lip$(\gamma)$ function, and $\zeta \in V$. For $\mathbf{X} \in G\Omega_p(U)$, the following hold:
(1) The rough differential equation in (2.4) admits a unique coupled solution $\mathbf{Z} = (\mathbf{X},\mathbf{Y}) \in G\Omega_p(U\oplus V)$.
(2) The map $I_f : G\Omega_p(U)\times V \to G\Omega_p(V)$ which sends
$(\mathbf{X},\zeta)$ to $\mathbf{Y} = \pi_V(\mathbf{Z})$ is continuous in the $p$-variation topology.
(3) Let $\mathbf{Z}(r)$ be the sequence of Picard iterations defined in (2.5), and define $\mathbf{Y}(r) := \pi_V(\mathbf{Z}(r))$. The solution is given as the limit $\mathbf{Y} = \lim_{r\to\infty}\mathbf{Y}(r)$.
(4) Let $\omega$ be a control for the $p$-variation of $\mathbf{X}$. For all $\rho > 1$, there exists some $T_\rho \in (0,T]$ such that
$$\|\mathbf{Y}(r)^k_{s,t} - \mathbf{Y}(r+1)^k_{s,t}\| \le \frac{2^k\rho^{-r}\,\omega(s,t)^{k/p}}{\beta\,\big(\tfrac{k}{p}\big)!}$$
for all $(s,t) \in \Delta_{T_\rho}$ and $k = 0,\ldots,\lfloor p\rfloor$. The parameter $T_\rho$ depends only on $\|f\|_{\mathrm{Lip}(\gamma)}$, $p$, $\gamma$, and $\omega$.

²In [27], the rough path $\mathbf{Z}$ is called the solution. Here, we call $\mathbf{Y}$ the solution in order to differentiate between various notions in the following sections.

2.4. Universal and Characteristic Properties. While the signature of rough paths characterizes the paths up to tree-like equivalence [4], we wish to use the signature to characterize paths without this equivalence relation. Furthermore, while rough differential equations are formulated as in (2.4), we wish to study SDEs of the form (1.3), which have both drift and diffusion terms. In order to deal with both of these issues, we will consider (rough) paths equipped with time parametrization. We define time-parametrized rough paths to be
$$G\Omega^{\mathrm{tp}}_p(\mathbb{R}\times V) := \{\mathbf{X} \in G\Omega_p(\mathbb{R}\times V) : \mathbf{X}^1_t = (t, X_t)\}.$$

Remark 2.16. For a rough path $\mathbf{X} \in G\Omega_p(V)$, there is a canonical way to construct a rough path $\overline{\mathbf{X}} \in G\Omega_p(\mathbb{R}\times V)$ such that $\overline{\mathbf{X}}^1_t = (t,\mathbf{X}^1_t)$ and $\pi_V(\overline{\mathbf{X}}) = \mathbf{X}$; see [15, Section 9.4]. In particular,
$$\iota_{\mathrm{tp}} : G\Omega_p(V) \hookrightarrow G\Omega^{\mathrm{tp}}_p(\mathbb{R}\times V), \qquad \iota_{\mathrm{tp}}(\mathbf{X}) = \overline{\mathbf{X}},$$
is continuous [15, Theorem 9.33].

As stated in the introduction, the signature allows us to approximate functions and characterize measures on the path space [9, 10, 11]. The result from [10] performs a normalization on the signature such that the normalized signature is a bounded continuous map. Here, we adapt this result and remove the normalization procedure by restricting our attention to a bounded subset of the path space, which is sufficient for the purposes of this article.

Theorem 2.17.
Let $\xi > 0$, and $B_\xi := \{\mathbf{X} \in G\Omega^{\mathrm{tp}}_p(V) : d_p(\mathbf{X},\mathbf{0}) < \xi\}$.
(1) The signature $S : B_\xi \to H((\mathbb{R}\times V))$, defined in (2.3), is a bounded continuous map.
(2) (Universal) The space of linear functionals $\langle\ell, S(\cdot)\rangle : B_\xi \to \mathbb{R}$ of the signature, where $\ell \in T(\mathbb{R}\times V)$, is dense in continuous bounded functions $C_b(B_\xi,\mathbb{R})$ equipped with the strict topology³ [16].
(3) (Characteristic) Let $\mathcal{B}(B_\xi)$ denote finite regular Borel measures on $B_\xi$. The expected signature
$$\mathbb{E}[S] : \mathcal{B}(B_\xi) \to H((\mathbb{R}\times V)), \qquad \mu \mapsto \mathbb{E}_{\mathbf{X}\sim\mu}[S(\mathbf{X})]$$
is injective.

Proof. Because we consider bounded rough paths in $B_\xi$, each $\mathbf{X}$ has a control function $\omega$ such that $\omega(0,T) < C\xi^p$ for some constant $C > 0$. Then, by Theorem 2.6, the signature satisfies the bound
$$\|S(\mathbf{X})\| \le \sum_{k=0}^{\infty}\frac{C^{k/p}\xi^k}{\beta\,(k/p)!} < \infty,$$
which does not depend on $\mathbf{X}$. Thus $S : B_\xi \to H((\mathbb{R}\times V))$ is a bounded continuous map, and linear functionals are also bounded continuous functions $\langle\ell,S(\cdot)\rangle \in C_b(B_\xi,\mathbb{R})$. The remainder of the proof is identical to [10, Theorem 21]. □

Remark 2.18. We note that more general approximation results can be found in [11], which provide universality and characteristicness results on the entire path space. This is done by considering weighted topologies, which replaces the above boundedness conditions by sufficient decay conditions on functions and measures.

3. Path-Dependent Differential Equations

This section consists of the first step towards
https://arxiv.org/abs/2505.22646v1
understanding the path-dependent SDEs in (1.3), by considering a (deterministic) path-dependent rough differential equation. Throughout this article, assume U and V are Banach spaces whose tensor powers are endowed with norms which satisfy the usual requirements of symmetry and consistency [27, Definition 1.25]. Moreover, assume V is finite-dimensional. We fix p ≥ 1, γ > p, and a natural number q ≥ ⌊p⌋ throughout, and assume that all rough paths are defined on ∆_T for some T > 0.

[Footnote 3] For a topological space A, a function ψ : A → R vanishes at infinity if for all ε > 0, there exists a compact K ⊂ A such that sup_{x∈A∖K} |ψ(x)| < ε. The strict topology on C_b(A, R) is the topology generated by the family of seminorms p_ψ(f) = sup_{x∈A} |f(x) ψ(x)| for all functions ψ that vanish at infinity.

PATH-DEPENDENT SDES: SOLUTIONS AND PARAMETER ESTIMATION

We study path-dependent rough differential equations (RDEs) of the form

  dY_t = F(Y_t) dX_t,   (3.1)

where X ∈ GΩ_p(U) is the driving signal, a geometric p-rough path; Y : [0,T] → V is the solution, with signature denoted by Y; and F is a Lip(γ−1) vector field. Throughout this article, we consider vector fields F which depend only on the truncated signature of Y up to level q, and thus denote Ṽ := T^{(≤q)}(V) and F : Ṽ → L(U, V). In order to formally define solutions in this path-dependent context using the rough path theory discussed in the previous section, we reformulate this RDE as an ordinary path-independent RDE for the (truncated) signature of Y,

  dY_t = F(Y_t) dX_t,  Y_0 = 1,   (3.2)

where we call F : Ṽ → L(U, Ṽ) the lifted vector field of F, and 1 := (1, 0, …, 0) ∈ Ṽ is the multiplicative identity in Ṽ.

To begin, let us consider the case of a bounded 1-variation driving signal X. Our aim is to define the lifting of F to F. In particular, we express F = (F_0, F_1, …, F_q), where F_k : Ṽ → L(U, V^⊗k), and note that F_0 = 0 and F_1 = F. Then for k = 2, …
, q, the level-k path signature of Y satisfies

  dY^k_t = Y^{k−1}_t ⊗ dY^1_t = Y^{k−1}_t ⊗ F(Y_t) dX_t,

which suggests the definition

  F_k(s)(x) := s_{k−1} ⊗ F(s)(x) ∈ V^⊗k,   (3.3)

where s = (s_0, …, s_q) ∈ Ṽ and x ∈ U.

The aim is to define solutions of (3.1) as solutions of (3.2) using Definition 2.14. However, an immediate issue arises: while F may be a Lip(γ−1) function, the lifted vector field F is not Lip(γ−1) in general for any γ > 1; even if F has bounded derivatives in s, the vector field F might still have unbounded derivatives. In this section, we will consider a modification of the lifted vector field F in order to maintain the Lip(γ−1) condition. We can then apply the Universal Limit Theorem, Theorem 2.15, to obtain an existence and uniqueness result for such path-dependent RDEs.

3.1. Reformulation with a Modified Vector Field. The main idea in reformulating the vector field is to restrict F to a closed subset on which it is Lip(γ−1), and to use the following extension theorem to extend it back to the whole space as a Lip(γ−1) function.

Theorem 3.1. [42, Section VI.2, Theorem 4] Let α > 0. Let U, V be Banach spaces, where V is finite-dimensional, and let K ⊂ V be a closed subset. Let f : K → U be a Lip(α) function. Then there exists an extension f̄ : V → U of f such that f̄ is a Lip(α) function on V. Furthermore, there is a constant C ∈ R, independent of the choice of K,
such that for all Lip(α) functions f : K → U, the extension f̄ satisfies ∥f̄∥_{Lip(α)} ≤ C ∥f∥_{Lip(α)}.

We will apply this result by first factoring the vector field F : Ṽ → L(U, Ṽ), as defined in (3.3), into two components. We define the map tens : Ṽ → L(V, Ṽ) to be the truncated tensor product in Ṽ; in particular, for s = (s_0, …, s_q) ∈ Ṽ and v ∈ V, we have

  tens(s)(v) := (1, s_1, …, s_{q−1}) ⊗ v ∈ Ṽ.

Then, we can express the lifted vector field F as

  F(s) := tens(s) ∘ F(s)

for all s ∈ Ṽ. Note that tens is an unbounded function, but its restriction to a bounded subset is a Lip(α) function for any α > 0 (i.e. a bounded Lipschitz function). Thus, we use Theorem 3.1 to extend this restriction to a Lip(α) function on Ṽ.

Definition 3.2. Let M > 0. Set K_M := {s ∈ Ṽ : ∥s∥ ≤ M} ⊂ Ṽ. We define tens_M : Ṽ → L(V, Ṽ) to be the extension from Theorem 3.1 of the restriction tens|_{K_M} : K_M → L(V, Ṽ). The map tens_M is a Lip(α) function for any α > 0.

Now, by applying Lemma 2.12, we obtain a Lip(γ−1) modification of F.

Corollary 3.3. Let F : Ṽ → L(U, V) be a Lip(α) function for some α > 0. For M > 0, define F_M : Ṽ → L(U, Ṽ) by

  F_M(s) := tens_M(s) ∘ F(s)   (3.4)

for all s ∈ Ṽ. Then the vector field F_M is Lip(α).

We can now define one notion of a solution to the path-dependent RDE (3.1) in terms of the modified vector field.

Definition 3.4. Let M > 0, and consider F_M : Ṽ → L(U, Ṽ) from (3.4). Define the vector field h_M : U ⊕ Ṽ → End(U ⊕ Ṽ) by

  h_M(u, s) := [ Id_U        0 ]
               [ F_M(s + 1)  0 ]   (3.5)

for all u ∈ U and s ∈ Ṽ. We call Z ∈ GΩ_p(U ⊕ Ṽ) a coupled M-solution to the differential equation (3.1) if

  Z = ∫ h_M(Z) dZ  and  π_U(Z) = X.

In this case, we call Y := π_Ṽ(Z) ∈ GΩ_p(Ṽ) a lifted M-solution. Moreover, we define the M-solution to be Y := Y^1 + 1, where Y^1 is the level-1 component of Y. Note that Y_{0,·} : [0,T] → Ṽ is a bounded p-variation path, i.e. Y_{0,·} ∈ C^{p-var}([0,T], Ṽ).
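The construction in Definition 3.2 and Corollary 3.3 can be illustrated numerically for V ≅ R^d with each signature level stored as a dense array. This is only a rough sketch under illustrative assumptions: the function names `tens`, `sig_norm`, `tens_M`, and `F_M` are ours, and the Whitney-type Lip(α) extension of Theorem 3.1 is replaced by a cruder radial retraction of s onto K_M, which is merely Lipschitz rather than a true Lip(α) extension.

```python
import numpy as np

def tens(s, v):
    """tens(s)(v) = (1, s_1, ..., s_{q-1}) ⊗ v in T^{(<=q)}(R^d).
    `s` is a truncated signature [s_0, ..., s_q] with s_k of shape (d,)*k."""
    q = len(s) - 1
    out = [np.array(0.0)]                            # level 0 of the product vanishes
    for k in range(1, q + 1):
        out.append(np.multiply.outer(s[k - 1], v))   # level k is s_{k-1} ⊗ v
    return out

def sig_norm(s):
    """Euclidean norm across all levels of a truncated tensor-algebra element."""
    return np.sqrt(sum(float(np.sum(np.asarray(lvl) ** 2)) for lvl in s))

def tens_M(s, v, M):
    """Stand-in for tens_M of Definition 3.2: radially retract s onto
    K_M = {∥s∥ <= M} before applying tens.  (The paper uses a Whitney-type
    Lip(α) extension; this retraction is an illustrative simplification.)"""
    r = sig_norm(s)
    s_clipped = [lvl * min(1.0, M / r) for lvl in s] if r > 0 else s
    return tens(s_clipped, v)

def F_M(F, s, x, M):
    """Modified lifted vector field F_M(s)(x) = tens_M(s)(F(s)(x)) as in (3.4),
    where F(s, x) ∈ R^d is a user-supplied vector field on truncated signatures."""
    return tens_M(s, F(s, x), M)
```

On K_M the stand-in agrees with tens exactly, so it reproduces the defining identity: level k of `tens(s, v)` is s_{k−1} ⊗ v, and the level-1 component is v itself.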
We will further discuss the interpretation of these solutions in the following section, but we first note that the Universal Limit Theorem, Theorem 2.15, can be directly applied in this setting if F : Ṽ → L(U, V) is a Lip(γ) function. Therefore, we can define the function I_{F,M} as follows.

Definition 3.5. Let F : Ṽ → L(U, V) be a Lip(γ) function. For the path-dependent differential equation (3.1), define I_{F,M} : GΩ_p(U) → GΩ_p(Ṽ) to be the continuous function which takes the driving signal X ∈ GΩ_p(U) to the corresponding lifted M-solution Y ∈ GΩ_p(Ṽ). Note that by part (2) of Theorem 2.15, I_{F,M} is well-defined.

Definition 3.6 (Picard iterations). Let M > 0 and X ∈ GΩ_p(U). Define Z(0) := (X, 0_Ṽ) ∈ GΩ_p(U ⊕ Ṽ) (see Definition 2.10). For all r ∈ N, define

  Z(r) := ∫ h_M(Z(r−1)) dZ(r−1)

using the rough integral in Theorem 2.13. The sequence {Z(r)}_{r∈Z≥0} is called the sequence of M-Picard iterations of (3.1).

We note that the definition of the M-Picard iterations is exactly the definition used in the Universal Limit Theorem, Theorem 2.15. Thus, the Universal Limit Theorem can be directly applied to show that unique lifted M-solutions exist as the limit of the projections of the M-Picard iterations onto Ṽ.

3.2. The Solution as a Geometric Rough Path. The Universal Limit Theorem, Theorem 2.15, provides a lifted M-solution of (3.1) as a geometric p-rough path Y ∈ GΩ_p(Ṽ). In this section, we discuss when the underlying M-solution is a
geometric p-rough path in GΩ_p(V).

Before we continue, we briefly discuss some notational conventions.

Notation 3.7. We denote signatures of signatures, or rough paths valued in T(Ṽ), using calligraphic symbols Y. Integer superscripts will continue to denote the outer level of the signature; for instance, Y^k ∈ Ṽ^⊗k. Given a basis of V ≅ R^n, we obtain a basis of Ṽ indexed by multi-indices {I = (i_1, …, i_k)}_{k=0}^{q} with i_j ∈ [n], where e_I := e_{i_1} ⊗ … ⊗ e_{i_k}. We denote the outer tensor product of T(Ṽ) by ⊠, and we obtain a basis of T(Ṽ) indexed by multi-indices {I = (I_1, …, I_k)}_{k=0}^{∞}, where each I_j is a multi-index valued in [n], defined by

  e_I := e_{I_1} ⊠ … ⊠ e_{I_k}.

Superscripts using such multi-indices I will denote the component Y^I of Y. We denote the empty multi-index corresponding to k = 0 by ∅.

In order to show that Y is a geometric p-rough path, we begin by relating various components of Y and Y.

Proposition 3.8. Assume F : Ṽ → L(U, V) is a Lip(γ) function, M > 0, and X ∈ GΩ_p(U) is such that Y = I_{F,M}(X) is the lifted M-solution and Y is the underlying M-solution of (3.1). If ∥Y_{0,t}∥ < M for all t ∈ [0,T], then for any (s,t) ∈ ∆_T, Y^∅_{s,t} = 1, and for any i_1, …, i_k ∈ [dim V] with 1 ≤ k ≤ q,

  Y^{((i_1),…,(i_k))}_{s,t} = Y^{((i_1,…,i_k))}_{s,t} = Y^{(i_1,…,i_k)}_{s,t}.   (3.6)

In particular, Y = π_V(Y).

Proof. The proof is given in Appendix B. □

This result shows that under a certain condition on the M-solution Y, it coincides with the projection of the lifted M-solution Y onto V. In particular, this justifies interpreting Y as a geometric p-rough path. Now, we can state a reformulation of the Universal Limit Theorem, Theorem 2.15, in terms of M-solutions.

Theorem 3.9. Assume F : Ṽ → L(U, V) is a Lip(γ) function, and let M > 0. Define

  G_M := {X ∈ GΩ_p(U) : ∥I_{F,M}(X)^1_{0,t} + 1∥ < M for all t ∈ [0,T]}.

Then, the following hold:
(1) For any X ∈ G_M, the unique M-solution Y of (3.1) is a geometric p-rough path, i.e. Y ∈ GΩ_p(V). In particular, we will call Y a solution⁴ of (3.1).
[Footnote 4] Here, we omit the reference to M, as we have restricted the driving noise such that Proposition 3.8 holds, and the first level of Y is correctly interpreted as the signature of some underlying path Y^1. This definition of solution should be viewed as the path-dependent analogue of a solution to an ordinary RDE in Definition 2.14.

(2) The map J_{F,M} : G_M → GΩ_p(V), which sends X to Y, is continuous in the p-variation topology.
(3) For any X ∈ GΩ_p(U), let {Z(r)}_{r∈Z≥0} be the sequence of M-Picard iterations of (3.1). For all r ∈ Z≥0, define Y(r) := π_Ṽ(Z(r)) and Y(r) := Y(r)^1 + 1. Then Y = lim_{r→∞} Y(r) in p-variation norm, where Y(r)_{0,·} and Y_{0,·} are considered as paths in C^{p-var}([0,T], Ṽ).
(4) Let ω be a control of the p-variation of X ∈ GΩ_p(U). For all ρ > 1, there exists T_ρ ∈ (0,T] such that for all r ∈ Z≥0,

  ∥Y(r)_{s,t} − Y(r+1)_{s,t}∥ ≤ 2 ρ^{−r} ω(s,t)^{1/p} / (β (1/p)!)  for all (s,t) ∈ ∆_{T_ρ},

where T_ρ depends on M, ∥F∥_{Lip(γ)}, dim V, p, γ, and ω.

Proof. Part (1) is straightforward: by Proposition 3.8, Y = π_V(Y) is the projection of a geometric rough path Y ∈ GΩ_p(Ṽ). Part (2) is also immediate from the original Universal Limit Theorem, Theorem 2.15, and the fact that on G_M, we have J_{F,M} = π_V ∘ I_{F,M}. For part (3), we note that by Theorem 2.15, we get that
I_{F,M}(X) = lim_{r→∞} Y(r) in the p-variation rough path topology. Then, restricting this to the level-1 components of Y and Y(r), we obtain the desired result. Finally, part (4) also follows directly from the analogous result in part (4) of Theorem 2.15 by considering the level-1 component. □

Remark 3.10. An important point in this result is that Y(r) := Y(r)^1 + 1 in parts (3) and (4) is not a geometric p-rough path in general; in fact, it may not even be multiplicative. Part (1) shows that in the limit, Y coincides with π_V(Y) = π_V(Z), but this is not true at finite Picard iterations. Thus, we treat Y(r) as a bounded p-variation path in C^{p-var}([0,T], Ṽ).

Remark 3.11. Let (S, F, P) be a probability space, and equip GΩ_p(U) with the Borel σ-field. When the driving noise X : S → GΩ_p(U) of the RDE is a stochastic process, the M-solution is also a stochastic process. Indeed, by Remark 2.9, the map J_{F,M} can be extended to a continuous map J_{F,M} : GΩ_p(U) → C^{p-var}([0,T], Ṽ), where C^{p-var}([0,T], Ṽ) is equipped with the p-variation norm. Then, given the Borel σ-field on C^{p-var}([0,T], Ṽ), the map J_{F,M} ∘ X is measurable.

4. Parametrized Signature SDEs

In this section, we consider a specific model of path-dependent rough differential equations, where the underlying vector field F : Ṽ → L(U, V) is affine with respect to the signature of the solution. This choice of vector field is motivated by the universal approximation property of signatures; see Theorem 2.17 and Remark 2.18. To leverage this property, we design our RDE so that the underlying path in the solution is Ȳ_t = (t, Y_t) and the vector field is a linear functional of the truncated signatures of Ȳ. To simplify the notation, we will denote all parametrized paths without the overline in the remainder of this article.

Suppose that W ∈ GΩ_p(R^n) is a deterministic rough path, and A_θ : Ṽ → V and B_θ ∈ L(Ṽ, L(R^n, V)) are vector fields parametrized by θ ∈ Θ, where Θ is the parameter space.
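These affine fields admit a concrete finite-dimensional sketch, anticipating the explicit matrix form given next: the first component of A_θ is constantly 1 (the time direction) and the first row of B_θ vanishes. Everything here is illustrative rather than from the paper: we identify Ṽ with R^D via a fixed flattening of the truncated signature, store the coefficient functionals θ_{i,j} as an array `theta` of shape (m, n+1, D), and compute each pairing ⟨θ_{i,j}, Y_t⟩ as a dot product.

```python
import numpy as np

def affine_fields(theta, sig_flat):
    """Evaluate the affine vector fields A_θ(Y_t) and B_θ(Y_t) on a flattened
    truncated signature.  `theta` has shape (m, n+1, D), where theta[i, j] is
    the coefficient vector of the linear functional θ_{i+1,j} on Ṽ ≅ R^D.
    Returns A ∈ R^{m+1} (first entry 1, the time direction) and
    B ∈ R^{(m+1) × n} (first row zero)."""
    m, n_plus_1, D = theta.shape
    pairings = theta @ sig_flat                  # pairings[i, j] = ⟨θ_{i+1,j}, Y_t⟩
    A = np.concatenate(([1.0], pairings[:, 0]))  # drift: dt component + ⟨θ_{i,0}, ·⟩
    B = np.vstack([np.zeros((1, n_plus_1 - 1)), pairings[:, 1:]])
    return A, B
```

The single array `theta` realizes the identification Θ ≅ Mat_{m,n+1}(Ṽ): column j = 0 parametrizes the drift and columns j = 1, …, n the diffusion against the n components of W.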
We wish to consider the solution Y ∈ GΩ_p(V) to the rough differential equation

  dY_t = A_θ(Y_t) dt + B_θ(Y_t) dW_t.

The vector fields are affine functionals of the components of Y, and we assume V ≅ R^{m+1} and U ≅ R^{n+1}, equipped with their standard bases, for the rest of the section. We can express the vector fields as

  A_θ(Y_t) = ( 1, ⟨θ_{1,0}, Y_t⟩, …, ⟨θ_{m,0}, Y_t⟩ )^⊤

and

  B_θ(Y_t) = [ 0               ⋯  0               ]
             [ ⟨θ_{1,1}, Y_t⟩  ⋯  ⟨θ_{1,n}, Y_t⟩  ]
             [ ⋮                   ⋮               ]
             [ ⟨θ_{m,1}, Y_t⟩  ⋯  ⟨θ_{m,n}, Y_t⟩  ],

where θ_{i,j} ∈ Ṽ for i = 1, …, m and j = 0, …, n. Thus, the parameter space is

  Θ := {(θ_{i,j})_{i∈[m], j∈[n]_0} : θ_{i,j} ∈ Ṽ} = Mat_{m,n+1}(Ṽ).

To simplify our notation so that it coincides with the notation used in the previous section, we define X = (t, W) ∈ GΩ_p(U). More precisely, X is the canonical p-rough path lift of the path (t, W^1_{0,t}); see Remark 2.16. Therefore, we can express the above differential equation as

  dY_t = F_θ(Y_t) dX_t,  where F_θ(Y_t) := ( A_θ(Y_t)  B_θ(Y_t) ).   (4.1)

In this case, the vector field F_θ is unbounded because it is affine. We will modify this vector field in the same way as in Definition 3.2. In particular, for M > 0, we set K_M := {s ∈ Ṽ : ∥s∥ ≤ M} ⊂ Ṽ as before, and define F_{θ,M} : Ṽ → L(U, V) to be the extension from Theorem 3.1 of the restriction F_θ|_{K_M} : K_M → L(U, V). Thus, we in fact consider the path-dependent RDE

  dY_t = F_{θ,M}(Y_t) dX_t,

where we define