Thompson and Fox-Kean (2005)). The following theoretical results describe the asymptotic behavior of the success probability $P_{t,h}$ and of the number $S_{t,h}$ of successes along time, for each Bernoulli process $h$, with $h = 1, \ldots, N$. The rigorous proofs of all the theorems are collected in Appendix B.

Theorem 3.1. Under assumptions (A1) and (A2), denote by $\gamma^* \in (0,1]$ the Perron-Frobenius eigenvalue of $\Gamma$. Then, for each $h \in \mathcal{H}$, we have
$$t^{1-\gamma^*}\, P_{t,h} \overset{a.s.}{\longrightarrow} P_{\infty,h},$$
where $P_{\infty,h}$ is a finite, strictly positive random variable. Moreover, for each $h, j \in \mathcal{H}$, the ratio $P_{\infty,h}/P_{\infty,j}$ of the above limit random variables is almost surely equal to the deterministic quantity $u_h/u_j > 0$, where $u = (u_h)_h$ is the (unique up to a multiplicative non-zero constant) left eigenvector of $\Gamma$ associated to $\gamma^*$. Finally, the above convergence also holds in quadratic mean.

This result implies that:
• in the case $\gamma^* < 1$ (that is, when $\mathbf{1}^\top \Gamma \neq \mathbf{1}^\top$), the probability of observing a success in process $h$ converges almost surely to zero at the same rate $1/t^{1-\gamma^*}$ for every process;
• in the case $\gamma^* = 1$ (that is, when $\mathbf{1}^\top \Gamma = \mathbf{1}^\top$), the probability of observing a success in process $h$ converges almost surely to a finite, strictly positive random limit, which is the same for each process (note that in this case $u$ is, up to a multiplicative non-zero constant, equal to the vector $\mathbf{1}$); in other words, at the steady state, the probability of having a success is the same for each process of the system.

^7 This extension may be particularly relevant in contexts where the USPTO, or other patent offices, modify the structure of technological categories through reclassification (Lafond and Kim 2019, Chae and Gim 2019), thus changing the landscape of interaction among the latter.

G. ALETTI, I. CRIMALDI, A. GHIGLIETTI, AND F. NUTARELLI
In the innovation literature, the first case is supported by empirical findings, which note that the absence of autocatalytic structures in innovation networks can lead to a decline in innovative output (Napolitano et al. 2018). As we will see in the sequel, our data analysis also provides an estimated value $\gamma^* < 1$. The second case, instead, is well exemplified in Kim and Magee (2017), who empirically showed that cross-domain knowledge flows help maintain a globally balanced and sustained innovation ecosystem. Moreover, we recall that, according to the Perron-Frobenius theory, the components of the vector $u$ are all different from zero and have the same sign and, in graph theory, they correspond to the relative eigenvector centrality scores (with respect to $\Gamma^\top$). The eigenvector centrality is a measure of the importance of a node in a graph with respect to its out-links (if it is computed for the adjacency matrix) or its in-links (if it is computed for the transpose of the adjacency matrix). A high eigenvector centrality score means that the node points to, or respectively is pointed to by, many nodes with high scores. In the above theorems, $u$ is the vector of the relative eigenvector centrality scores with respect to $\Gamma^\top$ and, hence, by (3.1), a high value of $u_h$ means that the probability of having a success for process $h$ depends on
https://arxiv.org/abs/2505.13364v1
the number of past successes observed in many processes that themselves have high scores. The uncertainty, or volatility, of the innovation process within technological domains, with higher variance often associated with emerging or rapidly evolving fields, has been a subject of debate (Jalonen and Lehtonen 2011, Jalonen 2012, Allen 2013). Specifically, understanding the nature of uncertainty during the innovation process is particularly helpful to clarify how evolving technologies shape capability development within ecosystems, reinforcing or constraining firms' ability to transform skills into core competencies (Leonard-Barton 1992, O'Connor 2008, Teece and Leih 2016). In this context, Theorem 3.1 offers valuable insights. Indeed, we obtain the following corollary.

Corollary 3.2. Under assumptions (A1) and (A2), denote by $\gamma^* \in (0,1]$ the Perron-Frobenius eigenvalue of $\Gamma$ and by $u = (u_h)_h$ its corresponding (unique up to a multiplicative non-zero constant) left eigenvector. Then, for each $h \in \mathcal{H}$, we have
$$t^{1-\gamma^*}\, \mathrm{Var}[X_{t+1,h}] \longrightarrow |u_h|\, \alpha(u),$$
where $\alpha(u) > 0$ (and such that $\alpha(Cu) = \alpha(u)/|C|$ for each constant $C \neq 0$). Moreover, for each pair $h \neq j$ of different processes, we get
$$t^{2(1-\gamma^*)}\, \mathrm{cov}(X_{t+1,h}, X_{t+1,j}) \longrightarrow u_h u_j\, \sigma^2(u),$$
where $\sigma^2(u) > 0$ (and such that $\sigma^2(Cu) = \sigma^2(u)/C^2$ for each constant $C \neq 0$), so that
$$t^{1-\gamma^*}\, \rho(X_{t+1,h}, X_{t+1,j}) \longrightarrow \frac{\sqrt{u_h u_j}\, \sigma^2(u)}{\alpha(u)}.$$

Hence, the ratio of the variances $\mathrm{Var}[X_{t,h}]/\mathrm{Var}[X_{t,j}]$ converges to the ratio $u_h/u_j$ of the relative eigenvector centrality scores (with respect to $\Gamma^\top$) of the two nodes (processes) $h$ and $j$. Moreover, the correlation coefficient between the observations related to any pair of different processes converges to zero at the rate $1/t^{1-\gamma^*}$ when $\gamma^* < 1$ and converges to the same strictly positive value when $\gamma^* = 1$.

MODELING INNOVATION DYNAMICS

In short, a balanced innovation ecosystem ($\gamma^* = 1$) supports ongoing, shared uncertainty and correlated success across technologies, which characterizes connected innovation ecosystems (Jacobides et al. 2024).
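Given any interaction matrix, the quantities $\gamma^*$ and $u$ appearing in Theorem 3.1 and Corollary 3.2 can be computed numerically. A minimal NumPy sketch, using an illustrative $2 \times 2$ matrix (the same form used later in the simulations, not a matrix estimated from data):

```python
import numpy as np

# Illustrative interaction matrix (hypothetical values, not estimated from data)
Gamma = np.array([[0.50, 0.20],
                  [0.45, 0.20]])

# Left eigenvectors of Gamma are the right eigenvectors of Gamma^T
eigvals, eigvecs = np.linalg.eig(Gamma.T)
k = np.argmax(eigvals.real)          # Perron-Frobenius eigenvalue gamma*
gamma_star = eigvals[k].real
u = np.abs(eigvecs[:, k].real)       # Perron components share one sign; take it positive
u /= u.sum()                         # normalized eigenvector centrality scores

print(gamma_star)   # approximately 0.685 for this matrix
print(u[0] / u[1])  # approximately 2.43
```

The ratio `u[0] / u[1]` is exactly the quantity $u_h/u_j$ that governs the limiting ratios of success probabilities and variances in the results above.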
In contrast, an unbalanced one ($\gamma^* < 1$) leads to asymmetric, decoupled innovation dynamics over time. In other words, in well-connected systems, sustained uncertainty and interdependence across technological domains facilitate diverse learning paths and capability recombination. This interdependence fosters mutual reinforcement across domains, enabling firms to accumulate diverse, complementary capabilities. Conversely, in fragmented systems, limited spillovers may rigidify specialization patterns, potentially turning capabilities into rigidities over time. From Theorem 3.1, we can also deduce the following result for the processes $(S_{t,h})_t$, with $h \in \mathcal{H}$, which represent the number of successes observed along time in each Bernoulli process $h$. It is important to note that, differently from the previous quantities $P_{t,h}$, which are not observable (due to the presence of the unknown model parameters), the quantities $S_{t,h}$ can be directly observed and so used for data analyses.

Theorem 3.3. Under assumptions (A1) and (A2), denote by $\gamma^* \in (0,1]$ the Perron-Frobenius eigenvalue of $\Gamma$ and by $u = (u_h)_h$ its corresponding (unique up to a multiplicative non-zero constant) left eigenvector. Then, for each $h \in \mathcal{H}$, we have
$$\frac{S_{t,h}}{t^{\gamma^*}} \overset{a.s.}{\longrightarrow} S_{\infty,h},$$
where $S_{\infty,h}$ is a finite, strictly positive random variable. Moreover, for each $h, j \in \mathcal{H}$, we have
$$\frac{S_{t,h}}{S_{t,j}} \overset{a.s.}{\longrightarrow} \frac{u_h}{u_j} \quad\text{and}\quad \frac{S_{t,h}}{\sum_{j=1}^N S_{t,j}} \overset{a.s.}{\longrightarrow} \frac{u_h}{\sum_{j=1}^N u_j}.$$

This result states that the number $S_{t,h}$ of successes for all the processes $h$ grows with the same Heaps' exponent $\gamma^* \in (0,1]$: that is, $S_{t,h} \overset{a.s.}{\sim} S_{\infty,h}\, t^{\gamma^*}$. In addition, the ratio $S_{t,h}/S_{t,j}$ provides a strongly consistent estimator of the ratio $u_h/u_j$ of the relative
eigenvector centrality scores (with respect to $\Gamma^\top$) of the two nodes (processes) $h$ and $j$, and the share of successes observed for process $h$ converges almost surely to the absolute eigenvector centrality score of $h$. It is interesting to note that, in the innovation framework, Theorem 3.3 highlights how the long-run proportions of successful patents in each category reflect the connectedness and influence of that category within the broader innovation network. While this relationship is formally established in the present work, related insights have been suggested in the literature without formal proof. For example, Pichler et al. (2020) show that the innovation rate of a technological domain is influenced by the innovation rates of the domains it depends on. Similarly, Sampat and Ziedonis (2005) and Zhang et al. (2025) emphasize the importance of the quality of a patent's prior connections as well as their technological domain. In an effort to further deepen this latter aspect, we developed Theorem 3.4. As explained in the following, this theorem examines not only how many successful patents there are in each target category $h$, but also traces the source category $k$ to which each successful patent originally belonged. Namely, in our particular context, we can enrich the model by assuming that the number $S_{t,k,h}$ of successes until time-step $t$ in category (process) $h$ coming from belonging category $k$ is of the form
$$S_{t,k,h} = \sum_{n=1}^t X_{n,h}\, Y_{n,k},$$
where $Y_{n,k}$ takes value 1 if the category of patent $n$ is $k$ and 0 otherwise, and

(A3) $Y_n = (Y_{n,k})^\top_{k \in \mathcal{H}}$ is independent of $X_n = (X_{n,h})^\top_{h \in \mathcal{H}}$ and of all the past until time-step $n-1$, with $P(Y_{n,k} = 1) = \pi_k \in (0,1)$ (where $\sum_{k \in \mathcal{H}} \pi_k = 1$).

Under these assumptions, we obtain the following result.

Theorem 3.4. Under assumptions (A1), (A2) and (A3), denote by $\gamma^* \in (0,1]$ the Perron-Frobenius eigenvalue of $\Gamma$. Then, for each pair $h, k \in \mathcal{H}$, we have
$$\frac{S_{t,k,h}}{t^{\gamma^*}} \overset{a.s.}{\longrightarrow} \pi_k\, S_{\infty,h} \quad\text{and so}\quad \frac{S_{t,k,h}}{S_{t,j,h}} \overset{a.s.}{\longrightarrow} \frac{\pi_k}{\pi_j}.$$
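The Heaps-type growth in Theorem 3.3 can be illustrated by simulation. The paper's exact reinforcement rule (fully historical, with temporally decaying influence) is defined earlier in the text and is not reproduced here; purely as a simplified stand-in, the sketch below assumes a success probability of the form $P_{n+1,h} = (\theta_h + \sum_j \gamma_{j,h} S_{n,j})/(c_h + n)$, which yields the same qualitative sub-linear growth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-process system; this Gamma has Perron eigenvalue ~0.685.
Gamma = np.array([[0.50, 0.20],
                  [0.45, 0.20]])
theta = np.array([0.5, 0.5])    # parameters theta_h
c = np.array([1.0, 1.0])        # parameters c_h

T = 30_000
S = np.zeros(2)                 # running success counts S_{n,h}
path = np.zeros((T, 2))
for n in range(T):
    # Assumed simplified reinforcement: since theta_h <= c_h and every column
    # sum of Gamma is <= 1, P automatically stays in [0, 1].
    P = (theta + S @ Gamma) / (c + n)
    S += rng.random(2) < P
    path[n] = S

# Sub-linear (Heaps-type) growth: the log-log slope of S_{t,1} vs t
# should approach the Perron eigenvalue.
t = np.arange(1, T + 1)
mask = path[:, 0] > 0
slope = np.polyfit(np.log10(t[mask][1000:]),
                   np.log10(path[mask, 0][1000:]), 1)[0]
```

Under the theory, the fitted slope approaches $\gamma^* \approx 0.685$ and the ratio of the two counters stabilizes near $u_1/u_2$; the recursion above is an illustrative assumption, not the paper's estimated dynamics.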
To summarize, Theorem 3.4 provides a formal representation of how the structure of innovation inputs shapes the distribution of successful outcomes across technological domains. Specifically, it shows that, asymptotically, the share of successful innovations in a target category mirrors the origin distribution of patent activity. This result offers entrepreneurs a valuable mechanism for understanding how ecosystems embed and propagate specialization trajectories, which is a critical insight, given that the transformation of skills into core capabilities depends fundamentally on how technological domains interact and reinforce one another over time (Teece 2009, Jacobides et al. 2018, Kim and Magee 2017).

Figure 1. Linear behavior of the successes counter $S_{t,1}$ (red line) and $S_{t,2}$ (blue line) along $t$, in $\log_{10}$-$\log_{10}$ scale, in two different scenarios with a positive shock on process 1. The two scenarios are distinguished by the type of interaction, i.e. by the form of the interaction matrix $\Gamma$ used in the simulations. The shock on process 1 occurs at time $t_{shock} = 10^4$ and the system is observed until time-step $t = 10^7$. We can observe the sudden rise of $S_{t,1}$ (red line) caused by the shock and how the positive effect of the shock on process 1 propagates to process 2 (consequent progressive rise of the blue line). We can also
note that the slope of the two lines and the distance between their intercepts are the same before and after the shock, because the interaction matrix $\Gamma$ does not change: indeed, the slope is equal to $\gamma^*$ and the distance between the intercepts corresponds to the quantity $\log_{10}(u_1/u_2) = \log_{10}(u_1) - \log_{10}(u_2)$.

3.1. Model implication: simulations with a shock. To highlight the implications of a system following the proposed model, we show here how the effect of a shock on one process propagates to the others by means of the interaction in their dynamics. A vivid example of this phenomenon is the surge in pharmaceutical innovation during the COVID-19 pandemic, which triggered subsequent innovations in other fields such as logistics (Dovbischuk 2022) and IT (Li et al. 2022), and also prompted a wave of open innovation practices through which inventors widely disseminated their contributions (Brem et al. 2021, Lee and Trimi 2021, Ho 2023). Accounting for innovation shocks is essential. As the COVID-19 surge demonstrated, when a shock occurs, society may be compelled to transform latent capabilities, such as vaccine development or its digital capabilities, into core capabilities (Li et al. 2022, Dovbischuk 2022, Oladapo et al. 2023). Moreover, the timing and adaptability of firms in responding to such shocks (with the consequent propagation of innovative responses via dynamic interactions) are critical to ensuring that newly required capabilities do not merely displace former core capabilities, which, in the aftermath, risk becoming core rigidities (Atanassova and Bednar 2022). More precisely, we have considered a system with $N = 2$ processes starting with the same initial composition: $\theta_h = 1/2$ and $c_h = 1$.
We take into account two different scenarios:
• mean-field interaction, which corresponds to a symmetric interaction matrix $\Gamma$ of the form
(3.2) $\gamma_{j,h} = \begin{cases} \gamma^*\,(\iota/N + (1-\iota)) & \text{if } j = h,\\ \gamma^*\,\iota/N & \text{if } j \neq h,\end{cases}$ with $\gamma^*, \iota \in (0,1]$,
where we have chosen $\gamma^* = 0.7$ and $\iota = 0.9$, so that
$$\Gamma = \begin{pmatrix} 0.385 & 0.315 \\ 0.315 & 0.385 \end{pmatrix};$$
• non-symmetric interaction matrix $\Gamma$ with $\gamma^* = 0.685$ and $u = (1.394, 0.575)^\top$, i.e.
$$\Gamma = \begin{pmatrix} 0.50 & 0.20 \\ 0.45 & 0.20 \end{pmatrix}.$$

The shock occurs at time $t_{shock} = 10^4$ and it acts as follows: at time $t_{shock} = 10^4$, the probability of having a success for process 1 is increased (that is, we give a positive shock to process 1) by replacing the parameters $\theta_1 = 1/2$ and $c_1 = 1$ with the new ones $\theta_{shock,1} = 1/2 + 10^4$ and $c_{shock,1} = 1 + 10^4$. These new values of the parameters remain in the dynamics of process 1 for all the future time-steps until $t = 10^7$. The parameters for process 2 are unchanged: $\theta_2 = 1/2$ and $c_2 = 1$ remain the same along all the time-steps. The numbers of successes for the two processes, that is $S_{t,1}$ and $S_{t,2}$, are observed. In Figure 1, we can see how the positive shock causes the number of successes for process 1 to rise (red line) and how this positive effect propagates to the other process (blue line) by means of the interaction terms in their dynamics.

4. Data analysis

Our dataset consists of the GLOBAL PATSTAT
database^8. More specifically, we collected all the patents published in the period 1980-2018, with their exact (full) date of publication and their CPC-1 category. Moreover, for each patent $n$, we know whether it has been cited by subsequent patents (published in the considered period) and which are the citing patents. We consider the patents with publication year in the period 1980-2013 so that, for each of them, we are able to compute the index $I_{n,h}$. We limit ourselves to the CPC categories $h \in \mathcal{H} = \{A, B, C, D, E, F, G, H\}$ (i.e. we exclude category Y^9). We fix $T = 5$ and the threshold $\tau = 0.8$ (see Section D for some details on the choice of the threshold). We thus obtain a matrix with $N = \mathrm{card}(\mathcal{H}) = 8$ columns, where each row corresponds to a patent $n$. The patents (and so the rows) are ordered with respect to their publication (full) date. The total number of rows is $n_{tot} = 5\,004\,253$. The matrix entry $x_{n,h}$ is equal to 1 if the patent $n$ is a success for category $h$, i.e. if its value of the index is above the threshold $\tau$, or equal to 0 if $n$ is not a success. Hence, the constructed matrix can be seen as the realization of a finite system $\{X^{(h)} = (X_{n,h})_n : h \in \mathcal{H}\}$ of Bernoulli processes. Our analyses show that the real data $\{x_{n,h}\}$ exhibit a behavior along time in agreement with the theoretical results of the previous section.

^8 The dataset is maintained at the IMT School.
^9 This category has been excluded due to its broad scope and ambiguity, which are well-documented in the literature (Leydesdorff et al. 2017, Rainville et al. 2025). Although it is often broadly referred to as the class encompassing green patents, it is widely recognized in the green innovation literature that only the subclasses Y02 and Y04 accurately represent genuine green technologies (Corrocher and Mancusi 2021, Barbieri et al. 2023).
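The construction of the binary success matrix can be sketched as follows; the computation of the index $I_{n,h}$ itself is described earlier in the paper, so placeholder scores stand in for it here:

```python
import numpy as np

rng = np.random.default_rng(1)

# One row per patent, ordered by full publication date; one column per
# CPC category in H = {A, ..., H}. 'scores[n, h]' is a placeholder for the
# index I_{n,h} (its actual construction is not reproduced here).
categories = list("ABCDEFGH")
n_patents = 1_000
scores = rng.random((n_patents, len(categories)))

tau = 0.8                                  # success threshold
x = (scores > tau).astype(int)             # matrix entries x_{n,h}

# Each column of x realizes one Bernoulli process X^(h); cumulative sums
# give the success counters S_{t,h} used in the analyses below.
S = np.cumsum(x, axis=0)
```

With the real data, `scores` would hold the computed indices for the $n_{tot} = 5\,004\,253$ patents; the thresholding and cumulative-sum steps are unchanged.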
Figure 2 provides the asymptotic behavior, in $\log_{10}$-$\log_{10}$ scale, of every process $S_{t,h}$, which represents the number of successes observed in category $h$ until time-step (patent) $t$. We can appreciate how the lines exhibit the same slope, which indicates that the processes have the same Heaps' exponent. This is exactly in accordance with Theorem 3.3. The emergence of a common Heaps' exponent across all categories supports the fact that these innovation processes are not evolving independently but are shaped by shared systemic dynamics. We therefore provide evidence for the co-evolution of innovation processes, suggesting that the success trajectory of one technological domain is influenced by, and in turn influences, others, thereby reflecting the structural interdependencies and shared constraints within the broader innovation ecosystem.

Figure 2. Linear behavior of the successes counter $S_{t,h}$ along $t$, in $\log_{10}$-$\log_{10}$ scale, for each category $h$. The lines are obtained by a least-squares interpolation using a subsample of size $10^4$ of the data set. We can appreciate how the lines exhibit the same slope, which has been estimated to be equal to $\widehat{\gamma}^* = 0.689$.

The value of the common Heaps' exponent, estimated as the common slope of the lines in the $\log_{10}$-$\log_{10}$ plot, is $\widehat{\gamma}^* = 0.689$.^10 Indeed, the goodness-of-fit $R^2$ index obtained by imposing a common slope is 0.969, and so basically equal to the one obtained allowing the slopes to differ across categories, i.e. 0.972. Therefore, the number of successes in each category grows sub-linearly, i.e. we have a decreasing (as $t^{-(1-\widehat{\gamma}^*)}$) probability of having a success and vanishing (as $t^{-(1-\widehat{\gamma}^*)}$) correlation coefficients among the categories, mirroring increasing complexity, higher search costs, and diminishing returns to cumulative knowledge in each domain. This uniform sub-linear growth pattern across categories reflects a form of bounded scalability in innovation outputs: while innovation continues over time, the marginal probability of producing a new success in any category diminishes. This aligns with the recently emerging notion that innovation becomes harder as technological frontiers advance (Bloom et al. 2020).

^10 In order to measure the uncertainty of the slopes estimated for each category $h$ at a given time-step, we have performed a linear regression with random effects on both intercept and slope, using a subsample of size $10^4$ of the data set. The estimated values of the slopes, one for each category $h$, present a mean equal to $\widehat{\gamma}^* = 0.689$ and a standard deviation equal to 0.044.

Figure 3. Plot of the process $S_{t,h}/S_{t,H}$ along time $t$, for all the categories $h \neq H$. Category $H$ is arbitrarily chosen to be the baseline category.
The horizontal dashed red lines represent an estimation of the ratio $u_h/u_H$, obtained as $10^{d_{h,H}}$, where $d_{h,H}$ is the difference between the intercepts of the regression lines in Figure 2 for the categories $h$ and $H$. This long-run convergence across categories highlights a structural stabilization in innovation outcomes, despite increasing complexity and declining marginal returns to knowledge production. Moreover, note that all the estimated ratios $u_h/u_H$ are around 1. This means that in the long run the numbers of successes in the different categories tend to coincide. Figure 3 shows, for each category $h$, the convergence of the process $(S_{t,h}/S_{t,H})_t$ toward the quantity $u_h/u_H$, estimated as $10^{d_{h,H}}$, where $d_{h,H}$ is the difference between the intercepts of the regression lines in Figure 2 for the two processes related to the pair $(h, H)$ of categories.^11 This fact is also in accordance with Theorem 3.3 and it means that the relative number of successes across categories stabilizes along time. This effect reflects the fact that, in the long run, the rates at which capabilities evolve into core capabilities, and core capabilities into rigidities, within a technological domain, including the cross-sectoral influences, tend to settle. In other words, the temporal stabilization of innovation successes implies two distinct mechanisms: (i) the convergence of the catalytic effect of general-purpose technologies on innovation across diverse domains (Trajtenberg et al. 1997), as reflected in the
equilibrium configuration of the subset of $\Gamma$ related to such technologies; and (ii) the stabilization in the rate of recombination of existing knowledge, as captured by the behavior of $P_{t,h}$ over time. Moreover, the limit corresponds to the ratio of the respective components of the eigenvector-centrality score vector $u$ (with respect to $\Gamma^\top$). In Figure 3 we can note that the estimated quantities $u_h/u_H$ are all very near to 1 (more precisely, they are all within the interval $[1, 1.3]$). This means that in the long run the numbers of successes in the different categories tend to coincide, which carries significant implications for innovation policy. In the case of the green transition, for instance, it means that even if different technology areas seem to be producing similar results over time, this similarity can hide the fact that it is actually getting harder and more complex to create those innovations (Bloom et al. 2020).

^11 Category H in Figure 3 is arbitrarily chosen to be the baseline category. Any other category can be used as a baseline.

Figure 4. Linear behavior of the successes counter $S_{t,k,h}$ along $t$ in $\log_{10}$-$\log_{10}$ scale, for each pair $(k, h)$ of categories ($k$ = sub-figure, $h$ = color). The lines are obtained by a least-squares interpolation based on a subsample of the dataset, with a slope equal to the previously estimated value $\widehat{\gamma}^* = 0.689$. The goodness-of-fit $R^2$ index is 0.851, which is basically the same as the one obtained allowing the slopes to be different across the pairs $(k, h)$ of categories, i.e. 0.855. The common slope supports once again the presence of interaction across the categories. Moreover, the sub-linear growth reflects a consistent sub-linear scaling of cross-domain innovation flows, suggesting that while knowledge recombination persists, its marginal productivity diminishes over time across all category pairs.
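The least-squares fits behind Figures 2 and 4 (a single shared slope, one intercept per line) can be sketched as a plain joint regression, with the ratio $u_h/u_H$ then read off as 10 to the power of the intercept difference. The routine below is an illustrative reconstruction under those assumptions, not the authors' code:

```python
import numpy as np

def common_slope_fit(t, S):
    """Fit log10(S[:, h]) = a_h + g * log10(t) jointly over all categories:
    one intercept a_h per category, one shared slope g (the Heaps exponent)."""
    n, N = S.shape
    x = np.log10(t)
    y = np.log10(S).ravel(order="F")          # stack the categories column-wise
    # Design matrix: N intercept dummies plus one shared-slope column
    A = np.zeros((n * N, N + 1))
    for h in range(N):
        A[h * n:(h + 1) * n, h] = 1.0
        A[h * n:(h + 1) * n, N] = x
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:N], coef[N]                  # intercepts, common slope

# Synthetic check: S_{t,h} = C_h * t^0.689 must return slope 0.689 and
# intercept differences log10(C_h / C_j)
t = np.arange(1, 10_001, dtype=float)
C = np.array([2.0, 1.0, 0.5])
S = np.outer(t ** 0.689, C)
a, g = common_slope_fit(t, S)
```

Here $10^{a_h - a_j}$ recovers $C_h/C_j$ exactly, mirroring the estimate $10^{d_{h,H}}$ of $u_h/u_H$ used for Figure 3.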
Figure 4 provides the asymptotic behavior, in $\log_{10}$-$\log_{10}$ scale, of every process $S_{t,k,h}$, which represents the number of successes for category $h$ coming from category $k$ observed until time-step (patent) $t$. We can appreciate how the lines exhibit the same slope, equal to the previously estimated value $\widehat{\gamma}^* = 0.689$, in accordance with Theorem 3.4, supporting once again the presence of interaction across the categories. Finally, each panel of Figure 5 refers to the successes observed in a given category $h$ and shows, for each coming category $k$, the convergence of the process $S_{t,k,h}/S_{t,H,h}$ toward the quantity $\pi_k/\pi_H$, estimated as 10 to the power of the difference between the intercepts of the corresponding regression lines in Figure 4.^12 This fact is also coherent with Theorem 3.4. In the long run, therefore, the number of successful innovations in each category ends up matching how much patent activity started in each category. Hence, if a certain type of technology had greater patent activity early on, it will also tend to have more successful innovations over time.

^12 Category H in Figure 5 is arbitrarily chosen to be the baseline category. Any other category can be used as a baseline.

Figure 5. Plot of the process $S_{t,k,h}/S_{t,H,h}$ along time
$t$, for all the pairs $(k, h)$ of categories ($k$ = sub-figure, $h$ = color), with $k \neq H$. Category $H$ is arbitrarily chosen to be the baseline category. The horizontal dashed red lines represent the value 10 to the power of the difference between the intercepts of the regression lines in Figure 4 for the pairs of categories $(k, h)$ and $(H, h)$. This long-run proportionality links early patenting intensity to long-term innovation success, highlighting the path-dependent and self-reinforcing nature of technological trajectories.

4.1. Statistical inference on the interaction intensity under the mean-field assumption. Assuming the system is subject to a mean-field interaction (note that in this case we have $u_h/u_j = 1$ for each $h, j \in \mathcal{H}$), i.e. assuming the interaction matrix $\Gamma$ to be of the form (3.2), we can perform a suitable test on the parameter $\iota$, which rules the intensity of the interaction. Indeed, since the leading eigenvalue $\gamma^*$ can be estimated with high accuracy by a suitable regression on the vector process $S_t = (S_{t,h})^\top_{h \in \mathcal{H}}$ (see Aletti et al. (2023a) and references therein), we can assume this parameter to be known and make inference only on the intensity parameter $\iota$. For a two-sided test with $H_0: \iota = \iota_0$, where $\iota_0 \leq 1$ and $\iota_0 > 1/2$ (the condition required in the theoretical result for having the second eigenvalue of $\Gamma$ strictly smaller than $\gamma^*/2$), we can use the test statistic (see Appendix B for the technical details)
(4.1) $2\Delta_0\, \dfrac{\big\| S_t - \widetilde{S}_t \mathbf{1} \big\|^2}{\widetilde{S}_t} \overset{d}{\sim} \chi^2(N-1)$ under $H_0$,
where $\Delta_0 = \iota_0 - \tfrac{1}{2}$ and $\widetilde{S}_t = \frac{\mathbf{1}^\top S_t}{N} = \sum_{h \in \mathcal{H}} S_{t,h}/N$. Since this statistic is increasing in $\iota_0$, it also works well for the one-sided test with $H_0: \iota \geq \iota_0$, with $\iota_0 \in (1/2, 1]$. Applying this one-sided test to our real dataset of patents (with the estimated value $\widehat{\gamma}^* = 0.689$), we obtain different p-values, one for each tested value $\iota_0$, collected in Table 1. We can see that, if we choose the significance level $\alpha = 0.05$, then the minimum value of $\iota_0$ at which we can reject the null hypothesis is $\iota_0 = 0.75$.
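Statistic (4.1) is straightforward to compute from the observed success counts. A sketch under the reconstruction of (4.1) given above, using an illustrative vector of counts rather than the paper's data:

```python
import numpy as np

def interaction_test_statistic(S_t, iota0):
    """Test statistic (4.1) for H0: iota = iota0 under the mean-field form (3.2).
    Under H0 it is asymptotically chi-squared with N - 1 degrees of freedom."""
    S_t = np.asarray(S_t, dtype=float)
    N = S_t.size
    S_bar = S_t.mean()                      # \tilde S_t = 1^T S_t / N
    delta0 = iota0 - 0.5                    # \Delta_0
    stat = 2.0 * delta0 * np.sum((S_t - S_bar) ** 2) / S_bar
    return stat, N - 1

# Hypothetical success counts for the N = 8 categories (illustrative numbers)
S_t = np.array([1000., 1020., 980., 1010., 995., 1005., 990., 1000.])
stat, df = interaction_test_statistic(S_t, iota0=0.75)

# One-sided rejection at alpha = 0.05: compare with the 0.95 quantile of
# chi2(7), which is approximately 14.07.
reject = stat > 14.07
```

Because the statistic is increasing in $\iota_0$, rejecting $H_0: \iota = \iota_0$ in the right tail also rejects $H_0: \iota \geq \iota_0$, as used for Table 1.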
We have also computed an estimate of the parameter of interest $\iota$ by maximizing the (joint) likelihood of the processes $X^{(h)} = (X_{n,h})_n$, $h \in \mathcal{H}$, i.e.
$$L(\iota \mid x_{n,h},\, h \in \mathcal{H},\, n = 1, \ldots, n_{tot}) = \prod_{n=1}^{n_{tot}} \prod_{h \in \mathcal{H}} P_{h,n}^{x_{h,n}}\, (1 - P_{h,n})^{1 - x_{h,n}}.$$
The obtained estimate is in accordance with the results of Table 1. Indeed, the maximum-likelihood estimate of the interaction intensity $\iota$ is $\widehat{\iota} = 0.643$ and from Table 1 we get that, at the level $\alpha = 0.05$, we can reject the null hypothesis $H_0: \iota \geq 0.75$, while we do not have enough statistical evidence to reject for smaller values of $\iota_0$.

Table 1. p-values associated with the hypothesis test with $H_0: \iota \geq \iota_0$ under the mean-field interaction.
$\iota_0$:  0.55 | 0.60 | 0.65 | 0.70 | 0.75 | 0.80 | 0.85 | 0.90 | 0.95
p-value: 0.893 | 0.562 | 0.273 | 0.113 | 0.042 | 0.015 | 0.005 | 0.002 | <0.001

Our estimate $\widehat{\iota}$ of the intensity parameter $\iota$ shows, on the one hand, that the processes are interdependent, i.e. they share knowledge, influence, or innovation spillovers, and, on the other, that self-dynamics still matter. This reinforces the concept of dynamic competitive advantage as introduced by Teece et al. (1997) and Giannitsis
and Kager (2009). Broadly speaking, the transformation of general capabilities into core capabilities occurs through a process of smart specialization. In this process, technological capabilities must adapt to multidisciplinary stimuli originating from other technologies, translate them into their own technological language, and leverage them to explore new opportunities within their own domain.

5. Conclusions

Formalizing the mechanisms by which innovation trajectories emerge, persist, and evolve into specialized paths is essential for understanding how capabilities develop into either core competencies or core rigidities (Leonard-Barton 1992). This paper offers a formal framework to capture the dynamics of innovation ecosystems through a novel model of interacting reinforced Bernoulli processes. Our contribution advances existing models of interacting Bernoulli processes by introducing a fully historical reinforcement mechanism with temporally decaying influence. This formulation enables a unified mathematical treatment of innovation dynamics, jointly capturing, for the first time to our knowledge, a set of empirical regularities typically addressed in isolation: sublinear growth of the number of successes, convergence of relative success rates across technological domains, and a decline in cross-category correlations. Central to our approach is the explicit modeling of both self- and cross-category reinforcement, offering a rigorous explanation for how success propagates across interdependent innovation ecosystems. The theoretical results, all proven rigorously, show that success probabilities evolve under a Perron-Frobenius regime, generating stable long-run success distributions that reflect the centrality of each domain in the interaction network.
Empirical analysis of GLOBAL PATSTAT data (1980-2018) confirms these patterns, revealing a common Heaps' exponent across categories, time-stabilizing ratios of success, and directional propagation of innovation activity. Through simulation, we also demonstrate how shocks in one domain can systematically influence others, particularly under strong coupling conditions. Moreover, we perform a statistical inference analysis under a mean-field assumption, which reveals substantial but non-maximal interaction strength across domains. Lastly, it is worth mentioning that, due to the generality of the introduced model and the related theoretical results and statistical procedures, it can be fruitfully applied to other contexts. Beyond its theoretical contributions, the framework offers practical relevance for managers. Namely, it underscores the importance of early and well-targeted interventions, particularly before ecosystems fragment into highly specialized, path-dependent trajectories (David 1985, Mazzucato 2018). The model further suggests that supporting cross-domain connectivity (e.g., via interdisciplinary R&D programs or incentives for recombination across patent classes) may delay the onset of technological rigidities and foster broader capability evolution (Jacobides et al. 2018, Kim and Magee 2017). Finally, by identifying the statistical structure of innovation interactions, our model provides policymakers with a novel tool to detect when technological fields are becoming saturated, informing decisions on resource allocation and diversification strategies in national or regional innovation systems (Fleming 2001, Jaffe and Trajtenberg 2002). Overall, this work contributes to the growing body of research aiming to formalize the dynamic interplay between innovation and capability development (see e.g. Lawson and Samson (2001), Breznik and D.
Hisrich (2014)), offering a rigorous yet adaptable framework for analyzing the evolution of innovation processes in shaping innovation ecosystems. Author contributions. All the authors contributed equally to the present work.
Acknowledgments. Giacomo Aletti is a member of the Italian Group "Gruppo Nazionale per il Calcolo Scientifico" of the Italian Institute "Istituto Nazionale di Alta Matematica". Irene Crimaldi is a member of the Italian Group "Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni" of the Italian Institute "Istituto Nazionale di Alta Matematica".

Funding. Irene Crimaldi and Federico Nutarelli are partially supported by the European Union - NextGenerationEU through the Italian Ministry of University and Research under the National Recovery and Resilience Plan (PNRR) - M4C2 - Investment 1.3, title [Public Sector Indicators for Sustainability and Wellbeing (PUWELL)] - Program [Growing Resilient, INclusive and Sustainable (GRINS)] - PE18 - CUP J33C22002910001. This research was supported by the "Resilienza Economica e Digitale" (RED) project (CUP D67G23000060001) funded by the Italian Ministry of University and Research (MUR) as "Department of Excellence" (Dipartimenti di Eccellenza 2023-2027, Ministerial Decree no. 230/2022).

Modeling Innovation Ecosystem Dynamics through Interacting Reinforced Bernoulli Processes. Supplementary material

Appendix A. Definitions of novelty in the literature

For a patent to be defined as such, it must be "novel" at the time of its filing (Mueller 2024). In other words, a patent is novel by definition, and the concept of novelty is widely discussed in the literature (Abbas et al. 2014). Is novelty related to how impactful a patent is? Or how influential it becomes? Or perhaps how unexpected its release was? Thus, the papers that address "patent novelty" each adopt their own index. Each entry below reports the source, the novelty index with a quick explanation, and what the index is based on.

- Squicciarini et al. (2013), Originality: $\mathrm{Originality}_p = 1 - \sum_{j}^{n_p} s_{pj}^2$, where $s_{pj}$ is the percentage of citations made by patent $p$ to patent class $j$ out of the $n_p$ IPC 4-digit (or 7-digit) patent codes contained in the patents cited by patent $p$.
Citation measures are built on EPO patents and account for patent equivalents.Citations Squicciarini et al. (2013) Radicalness p=Pnp jCTj np; IPC pj̸= IPC pCTjdenotes the count of IPC-4 digit codes CPC pjof patent jcited in patent pthat is not allocated to patent p, out of nCPC classes in the backward citations counted at the most disaggregated level available (up to the 5thhierarchical level). The higher the ratio, the more diversified the array of technologies on which the patent relies upon.Citations Tech.Knowledge Own elaboration based on intuitions in Hall et al. (2000) and Lanjouw and Schankerman (2004)ANPCI i,t=Ci,t λt· 1 ntPnt j=1Cj,t where: - Ci,tis the number of citations received by patent iin year t, -ntis the number of patents filed in year t, -1 ntPnt j=1Cj,tis the average number of citations per patent in year t, -λtis a correction factor to account for citation inflation, which adjusts the average citation counts for inter-year comparability. Is the average number of citations per patent across all fields in year t, divided by a baseline year (e.g., t0)Intra-year comparison: index is normalized by the average citations within the same cohort (year and field). Inter-year comparison: adjusting for citation inflation λtwith , we should mitigate the bias of comparing older patents with more citations to newer ones that haven’t
https://arxiv.org/abs/2505.13364v1
had as much time to accumulate citations.Citations Trajtenberg et al. (1997) (Modified better version)GX= 1−PMi j=1 1 NPN i=1Tn ji Tn i2Xis the focal patent with Yipatents citing the focal patent X, with i= 1,·, N. Tn iis the total number of IPC n-digit classes in yi Tn jiis the total number of IPC n-digit classes in the jthIPC4 digit class in yi andj= 1. . . M iis the cardinal of all IPC4-digit classes in yiCitations Tech.Knowledge VariousPatent quality: Composite indicesTypically based on patent citations, claims, patent renewals and patent family size. Usually different compositions as follows: [i]Patent quality index 4 – 4 components: number of forward citations (up to 5 years after publication); patent family size; number of claims; and the patent generality index. Only granted patents are covered by the index. [ii]Patent quality index 4b – 4 components, bis: number of forward citations (up to 5 years after publication); patent family size; corrected claims; and the patent generality index. Only granted patents are covered by the index. [iii]Patent quality index 6 – 6 components: covers the same components as above, plus the number of backward citations and the grant lag index. Entropy based approaches: Defines a weighting scheme ( W) for all indicators of a patent (citations, claims...) based on entropy. With Mpatents, it then constructs M·W. A set of negative (patents with the lowest weighted scores across the majority of indicators) and positive patents is then identified and divided; a similarity across good and negative patents is then computed. The following steps are then performed: Maximum Similarity : For each patent in MR, the maximum similarity to any patent in MNis determined. If the maximum similarity S(MR i, MN) exceeds a predefined threshold τ, that remaining patent is marked as negative. The process repeats iteratively. Newly marked negative patents are moved to MN, and the model re-evaluates the remaining patents in MR. 
The iteration stops when the number of remaining patents MRis less than a threshold θ, or no new patents are marked as negative. The final output is a small set of patents in MRthat are not marked as negative, indicating their higher potential for technological innovation based on distinctiveness and low similarity to the negative patents.Various Table 2. This Table lists the various novelty indices brought by the literature. As noted in Lanjouw and Schankerman (2004), however, no single indicator can fully capture the quality or value of a patent and multiple indicators (citations, claims, oppositions, family size) provide a more comprehensive and nuanced understanding of both the technological importance and the economic potential of innovations. own definitions of the term. Consequently, if we attempt to classify the main streams of literature on patent novelty, we may identify the following: i) The older and more established literature is the one that defines novelty basing on the forward and backward citations (Trajtenberg et al. 1997, Squicciarini et al. 2013). Moreover, in the document about PATSTAT database we can read: “The number of citing patent documents can be an indicator of the importance of the patent. A frequently
cited patent can be an indication of a core technology". This literature has recently been criticized. In Abbas et al. (2014), for instance, the authors primarily discuss patent novelty in the context of citation analysis, classification systems, and keywords. They highlight how backward citations (patents cited by a new patent) are commonly used to infer novelty: specifically, fewer backward citations can signal higher novelty because the patent draws less from existing technologies. However, they criticize citation-based methods, noting that they don't always capture the full semantic or functional novelty of a patent.

ii) In Fleming (2001) novelty is primarily defined through the recombination of existing knowledge. The core idea is that innovations arise by combining different components in new ways (Fleming uses U.S. patent classification codes as a proxy for these components; by examining how frequently different combinations of these classifications appear together in patents, he can assess the novelty of the recombination, see footnote 13). A patent is considered novel when it brings together elements (or knowledge) that haven't been combined before, thus creating something new. Fleming emphasizes that recombinant novelty can vary in terms of uncertainty and outcomes: some combinations lead to breakthroughs, while others fail to generate impactful results. Fleming also makes a distinction between simple recombination, which involves putting together elements that are closely related or have been combined before, and more radical recombination, where the components come from different, often unrelated, technological fields. The latter leads to higher novelty and a greater chance of disruption, but also comes with more uncertainty in terms of success. Thus, novelty in this context is seen as the creation of new combinations of existing components, with the degree of novelty depending on how distant or different these components are from one another.

iii) In the early 2000s another stream of literature developed following the influential publication of Aghion et al. (2005), according to which novelties are characterized by a certain degree of creative destruction. This idea has been incorporated in some studies on patents, the most insightful of which is perhaps Autor et al. (2020). The idea, though not explicitly stated, is that a novel patent is one that "destroys" similar patents (e.g. by reducing the sales of products sold under patents in the same sector). Though this idea is very appealing, it cannot be easily implemented.

iv) The most recent stream is the one of Natural Language Processing (NLP) and text analyses of patents (Abbas et al. 2014, Gerken and Moehrle 2012). In essence, in Gerken and Moehrle (2012) the process of calculating patent novelty compares the text of a new patent with all previously filed patents. The similarity between the new patent and each prior one is measured, and the highest similarity score (the patent's "oldness") is subtracted from 1 to determine the novelty score. Essentially, the more distinct a new patent is from earlier patents, the higher its novelty. This approach emphasizes the unique content of a patent by minimizing similarities with existing technologies. In Abbas et al. (2014), semantic analysis is mentioned as
a developing method, with an emphasis on how linguistic analysis (e.g., analyzing keywords, abstracts) is becoming crucial in identifying novel patents. However, their definition of novelty primarily revolves around citation gaps (see footnote 14) and technological distance (see footnote 15), e.g., patents in new technological classes.

Footnote 13: Specifically, he looks at how often certain technological combinations have been used before. If a patent combines classifications that rarely appear together or haven't been combined before, it is considered more novel. This method helps to quantify novelty based on how "new" or "unusual" the combination of technological elements is, rather than just looking at the number of citations or backward links. Fleming also explores the uncertainty associated with novelty, showing that the more novel the combination (i.e., the less frequently those components have been combined in the past), the greater the uncertainty in terms of the patent's success.

Footnote 14: If a patent doesn't reference many previous patents (or none at all), it suggests that the new patent is introducing ideas not heavily reliant on earlier work. This is seen as a sign of novelty because it indicates that the patent isn't just building on existing knowledge, but possibly presenting something new.

Footnote 15: This refers to how different a patent is from those in similar fields. If a patent belongs to a new or less crowded technological class (based on how patents are categorized), it suggests that the patent is more novel because it explores a technological area that hasn't been fully developed yet.

More recently, Kelly et al. (2021) defined, again, patent novelty by focusing on textual similarity. They measure novelty by comparing the text content of patents to both previous and future patents. A patent is considered novel if it is distinct from earlier patents but also relevant to future innovations.
Their approach uses textual analysis to identify patents that contribute significantly to technological progress, and these are often predictive of future citations and market value. Table 2 summarizes the above indices. Note that different indices may capture different aspects of a patent. For example, the forward citations of a patent $n$ measure its impact on subsequent "production" or innovation, whereas the composition of patent $n$ across categories reflects its degree of "interdisciplinarity", which can be interpreted as a form of "originality". Therefore, it is not necessarily the case that all indices are positively correlated with one another. In the main text we adopt a category version of the Squicciarini et al. (2013) index as our primary measure of patent novelty due to its widespread recognition and methodological robustness within the economics of innovation literature (Higham et al. 2021, Dehghani et al. 2023). This index, endorsed and implemented by the OECD in its patent quality indicators framework, captures the diversity of technological fields cited by a patent, interpreted as a proxy for knowledge recombination and inventive originality. It has also been applied across a range of recent empirical studies (Dehghani et al. 2023, Gao and Lazarova 2023, Angori et al. 2024), reflecting its growing standardization and comparability across datasets and time. Compared to newer text-based or machine-learning approaches,
the Squicciarini et al. (2013) index offers better replicability and ease of modification in a cross-category setting such as ours. Notice, however, that our setting is general enough to allow for other measures of patent novelty as well.

Appendix B. Proofs of the main results and technical details for the statistical test

We here provide the rigorous proofs of the main results and the technical details that explain the performed statistical test. These proofs are based on the general theory we develop in Subsection B.1, which has the merit of providing a general mathematical framework that can be applied also in other settings.

Proof of Theorem 3.1. For each time step $t$, set $X_t = (X_{t,1}, \ldots, X_{t,N})^\top$, $P_t = (P_{t,1}, \ldots, P_{t,N})^\top$ and $\theta = (\theta_1, \ldots, \theta_N)^\top$. Hence, the vectorial dynamics for the random vector $P_t$ is, for $t \geq 0$,

(B.1)
$$P_0 = \theta/c \neq \mathbf{0} \quad a.s.,$$
$$P_{t+1} = \Big(1 - \frac{1}{t+1}\Big) P_t + \frac{1}{t+1}\,\Gamma^\top X_{t+1} + O(1/t^2)\mathbf{1} = P_t - \frac{1}{t+1}(I - \Gamma^\top)P_t + \frac{1}{t+1}\,\Gamma^\top \Delta M'_{t+1} + O(1/t^2)\mathbf{1},$$

where $\Delta M'_{t+1} = X_{t+1} - P_t$. Now, we fix $x > 0$ and set

(B.2) $$\zeta_0(x) = 1, \qquad \zeta_t(x) = \frac{\Gamma(t+x)}{\Gamma(t)} \sim t^x \quad \text{as } t \uparrow +\infty.$$

More precisely, from (Gouet 1993, Lemma 4.1) we have $\zeta_t(x) = t^x + O(t^{x-1})$ and $\frac{1}{\zeta_t(x)} = \frac{1}{t^x} + O(1/t^{x+1})$, and so

(B.3) $$\frac{1}{\zeta_{t+1}(x)\,\zeta_{t+1}(1-x)} = \Big(\frac{1}{(t+1)^x} + O(1/t^{x+1})\Big)\Big(\frac{1}{(t+1)^{1-x}} + O(1/t^{2-x})\Big) = \frac{1}{t+1} + O(1/t^2).$$

Hence, multiplying (B.1) by $\zeta_{t+1}(1-\gamma^*)$ and using the relation
$$\frac{\zeta_{t+1}(x)}{\zeta_t(x)} = \frac{\Gamma(t+x+1)}{\Gamma(t+1)}\,\frac{\Gamma(t)}{\Gamma(t+x)} = 1 + \frac{x}{t} = 1 + \frac{x}{t+1} + O(1/t^2),$$
with $x = 1-\gamma^*$, we get the following dynamics for $A_t = \zeta_t(1-\gamma^*)P_t$, where we set $\Delta M_{t+1} = \zeta_{t+1}(1-\gamma^*)\Delta M'_{t+1}$:
$$A_{t+1} = \Big(I - \frac{1}{t+1}(I-\Gamma^\top)\Big)\frac{\zeta_{t+1}(1-\gamma^*)}{\zeta_t(1-\gamma^*)}A_t + \frac{1}{t+1}\Gamma^\top \Delta M_{t+1} + O\Big(\frac{\zeta_{t+1}(1-\gamma^*)}{t^2}\Big)\mathbf{1}$$
$$= \Big(I - \frac{1}{t+1}(I-\Gamma^\top)\Big)\Big(1 + \frac{1-\gamma^*}{t+1} + O(1/t^2)\Big)A_t + \frac{1}{t+1}\Gamma^\top \Delta M_{t+1} + O\Big(\frac{\zeta_{t+1}(1-\gamma^*)}{t^2}\Big)\mathbf{1}$$
$$= A_t - \frac{1}{t+1}(\gamma^* I - \Gamma^\top)A_t + \frac{1}{t+1}\Gamma^\top \Delta M_{t+1} + O\Big(\frac{\zeta_{t+1}(1-\gamma^*)}{t^2}\Big)\mathbf{1},$$
with $O(\zeta_t(1-\gamma^*)/t^2) = O(1/t^{1+\gamma^*})$. Now, we are going to apply Theorem B.2 of Section B.1 with $\Phi = \Gamma$, $\phi^* = \gamma^*$ and $\mathcal{F} = (\mathcal{F}_t)_t$ the natural filtration associated to the model, i.e. $\mathcal{F}_t = \sigma(X_{n,h} : n \leq t,\ h = 1, \ldots, N)$.
To this purpose, we choose $u$ and $v$ as the left and the right eigenvectors of $\Gamma$ associated to $\gamma^*$, with strictly positive components and such that $v^\top \mathbf{1} = 1$ and $v^\top u = 1$. (Recall that $\gamma^*$ is simple and it is possible to choose the components of these vectors all strictly positive because of the Frobenius-Perron theory.) We set $\widetilde{A}_t = v^\top A_t = \zeta_t(1-\gamma^*)\,v^\top P_t$. First of all, we observe that assumptions (i) and (ii) of Theorem B.2 are satisfied because of (A1) and (A2), which imply $\gamma^* \in (0,1]$. Moreover, also assumption (iii) is verified: indeed, we have $\sup_t E[\widetilde{A}_t] < +\infty$ as $\widetilde{A}_t$ is non-negative (see Remark B.1). Finally, we have
$$W_t = \sum_{j=1}^N E[(\Delta M_{t+1,j})^2 \mid \mathcal{F}_t] = \zeta_{t+1}(1-\gamma^*)^2 \sum_{j=1}^N P_{t,j}(1-P_{t,j}) \leq \zeta_{t+1}(1-\gamma^*)^2 \sum_{j=1}^N P_{t,j}.$$
Then, denoting by $v_{\min} > 0$ the minimum element of $v$, we obtain, for $t$ large enough,
$$\zeta_{t+1}(1-\gamma^*)^2 \sum_{j=1}^N P_{t,j} \leq \zeta_{t+1}(1-\gamma^*)^2\, v^\top P_t / v_{\min} \leq 2\,\zeta_{t+1}(1-\gamma^*)\,\widetilde{A}_t / v_{\min},$$
and so $W_t \leq \frac{2}{v_{\min}}\,\zeta_{t+1}(1-\gamma^*)\,\widetilde{A}_t$. Therefore, recalling that $\sup_t E[\widetilde{A}_t] < +\infty$ as noted above, we get
$$\sum_t \frac{1}{(t+1)^2} E[W_t] \leq \frac{2}{v_{\min}}\, \sup_t E[\widetilde{A}_t]\, \sum_t \frac{\zeta_{t+1}(1-\gamma^*)}{(t+1)^2} < +\infty.$$
Hence, also assumption (iv) in Theorem B.2 is satisfied and we get
$$t^{1-\gamma^*} P_t \sim \zeta_t(1-\gamma^*) P_t = A_t \xrightarrow{a.s./L^2} P_\infty = \widetilde{P}_\infty u,$$
where $\widetilde{P}_\infty = \widetilde{A}_\infty$ is a square-integrable non-negative random variable, which is the almost sure limit of the above defined $\widetilde{A}_t$, or equivalently (by (B.2)), of $t^{1-\gamma^*} v^\top P_t$. It remains to prove that $P(\widetilde{P}_\infty > 0) = 1$. To this purpose, we
use (Aletti et al. 2023a, Theorem S1.3). Indeed, setting $\widetilde{P}_t = v^\top P_t$, we have

(B.4) $$\widetilde{P}_0 = v^\top P_0 = \frac{1}{c} v^\top \theta > 0, \qquad \widetilde{P}_{t+1} = \Big(1 - \frac{1}{t+1}\Big)\widetilde{P}_t + \frac{\gamma^*}{t+1}\widetilde{X}_{t+1} + O(1/t^2), \quad t \geq 0,$$

with $\widetilde{X}_{t+1} = v^\top X_{t+1}$. Hence, if we define the stochastic process $V = (V_t)_{t \geq 0}$, taking values in the interval $[0,1]$, as
$$V_0 = \widetilde{P}_0 > 0, \qquad V_{t+1} = \Big(1 - \frac{1}{t+2}\Big)V_t + \frac{1}{t+2}\,Y_{t+1}, \quad t \geq 0,$$
where $Y_{t+1} = \gamma^* \widetilde{X}_{t+1}$ (which takes values in $[0,1]$, since $0 < \gamma^* \leq 1$, $X_{t+1,j} \in \{0,1\}$ and $v^\top \mathbf{1} = 1$), then by the technical result in Subsection B.2 (applied to $W_t = \widetilde{P}_t$ with $\beta = 1$) we have $|\widetilde{P}_t - V_t| = O(\ln(t)/t) \to 0$ and also $t^{1-\gamma^*}|\widetilde{P}_t - V_t| = O(t^{1-\gamma^*}\ln(t)/t) = O(\ln(t)/t^{\gamma^*}) \to 0$. Hence, from (Aletti et al. 2023a, Theorem S1.3) applied to $V = (V_t)$ with $\delta = \gamma^*$ (note that $E[Y_{t+1} \mid \mathcal{F}_t] = \gamma^* \widetilde{P}_t = \gamma^* V_t + O(\ln(t)/t)$), we get that $t^{1-\gamma^*} V_t$ converges almost surely to a strictly positive finite random variable. This random variable is obviously also the almost sure limit of $t^{1-\gamma^*}\widetilde{P}_t$, and so we can conclude that $P(\widetilde{P}_\infty > 0) = 1$. □

Proof of Corollary 3.2. In the previous proof, we have proven that the limit random vector of $t^{1-\gamma^*} P_t$ is $P_\infty = \widetilde{P}_\infty u$, where here $\widetilde{P}_\infty$ refers to a precise choice for the vector $u$, that is $\widetilde{P}_\infty = \widetilde{P}_\infty(u)$. If we choose a different (left) eigenvector $u'$ of $\Gamma$ associated to $\gamma^*$, then we necessarily have $u' = Cu$ with $C \neq 0$, and so we can write the limit random vector as $P_\infty = \widetilde{P}_\infty(u')u'$ with $\widetilde{P}_\infty(u') = \widetilde{P}_\infty(u)/C$. Summing up, for any choice of the (left) eigenvector $u$, the limit random vector can be written as $P_\infty = \widetilde{P}_\infty(u)u$ with a suitable square-integrable random variable $\widetilde{P}_\infty(u)$ such that $P(\widetilde{P}_\infty(u) \neq 0) = 1$ and $\widetilde{P}_\infty(Cu) = \widetilde{P}_\infty(u)/C$. In other words, each random variable $P_{\infty,h}$ can be factorized into the product of a deterministic term specific to each $h$, i.e. $u_h$, and a common random term, i.e. $\widetilde{P}_\infty(u)$. Since $X_{t+1,h} \in \{0,1\}$ with $E[X_{t+1,h} \mid \mathcal{F}_t] = P_{t,h}$ and the convergence of $t^{1-\gamma^*}P_t$ toward $P_\infty$ is also in quadratic mean, we obtain
$$t^{1-\gamma^*}\,\mathrm{Var}[X_{t+1,h}] = t^{1-\gamma^*}\, E[P_{t,h}(1-P_{t,h})] \to \begin{cases} E[P_{\infty,h}] = u_h\, E[\widetilde{P}_\infty(u)] & \text{for } \gamma^* < 1, \\ E[P_{\infty,h}(1-P_{\infty,h})] = u_h\, E\big[\widetilde{P}_\infty(u)(1-u_h\widetilde{P}_\infty(u))\big] & \text{for } \gamma^* = 1. \end{cases}$$
Therefore, since the sign of the components of $u$ (which is the same for all of them) necessarily coincides with that of $\widetilde{P}_\infty(u)$, in order to obtain the first limit relation in Corollary 3.2 we can set $\alpha(u)$ equal to $|E[\widetilde{P}_\infty(u)]|$ when $\gamma^* < 1$ and equal to $\big|E\big[\widetilde{P}_\infty(u)(1-u_h\widetilde{P}_\infty(u))\big]\big|$ when $\gamma^* = 1$, so that the two possible cases in the above formula can be summarized as $|u_h|\,\alpha(u)$. Moreover, for each pair $h \neq j$, since $X_{t+1,h}$ and $X_{t+1,j}$ are conditionally independent given $\mathcal{F}_t$, we get
$$t^{2(1-\gamma^*)}\,\mathrm{cov}(X_{t+1,h}, X_{t+1,j}) = t^{2(1-\gamma^*)}\,\mathrm{cov}(P_{t,h}, P_{t,j}) \longrightarrow \mathrm{cov}(P_{\infty,h}, P_{\infty,j}) = u_h u_j\,\mathrm{Var}[\widetilde{P}_\infty(u)].$$
Hence, in order to obtain the second desired relation, we can set $\sigma^2(u) = \mathrm{Var}[\widetilde{P}_\infty(u)]$. Finally, as a consequence, for the correlation coefficients we have
$$t^{1-\gamma^*}\rho(X_{t+1,h}, X_{t+1,j}) = \frac{t^{2(1-\gamma^*)}\,\mathrm{cov}(X_{t+1,h}, X_{t+1,j})}{\sqrt{t^{1-\gamma^*}\,\mathrm{Var}[X_{t+1,h}]}\,\sqrt{t^{1-\gamma^*}\,\mathrm{Var}[X_{t+1,j}]}} \longrightarrow \frac{\sqrt{u_h u_j}\,\sigma^2(u)}{\alpha(u)}. \qquad \square$$

Proof of Theorem 3.3. From Theorem 3.1, using the same notation adopted in its proof, we get $S_{t,h} = \sum_{n=1}^t X_{n,h}$ with
$$E[X_{t+1,h} \mid \mathcal{F}_t] = P_{t,h} \overset{a.s.}{\sim} \frac{P_{\infty,h}}{t^{1-\gamma^*}}$$
and so, by (Williams 1991, sec. 12.15), we get

(B.5) $$S_{t,h} \overset{a.s.}{\sim} S_{\infty,h}\, t^{\gamma^*} \qquad \text{with } S_{\infty,h} = \frac{P_{\infty,h}}{\gamma^*}.$$

(We can also note that, similarly, since the convergence in Theorem 3.1 is also in mean, we also have $E[S_{t,h}] = \sum_{n=0}^t E[X_{n,h}] = \sum_{n=0}^t E[P_{n,h}] \sim t^{\gamma^*} E[S_{\infty,h}]$.) As a consequence of (B.5) and the fact that $P_{\infty,h}/P_{\infty,j} = u_h/u_j$, we obtain
$$\frac{S_{t,h}}{S_{t,j}} \xrightarrow{a.s.} \frac{S_{\infty,h}}{S_{\infty,j}} = \frac{u_h}{u_j}. \qquad \square$$

Proof of Theorem 3.4. From Theorem 3.1 and Theorem 3.3, using the same notation adopted in their
proofs, we have by (A3)
$$E[X_{t+1,h} Y_{t+1,k} \mid \mathcal{F}_t] = E[Y_{t+1,k}]\, E[X_{t+1,h} \mid \mathcal{F}_t] = \pi_k P_{t,h} \overset{a.s.}{\sim} t^{-(1-\gamma^*)}\pi_k P_{\infty,h} = t^{-(1-\gamma^*)}\pi_k \gamma^* S_{\infty,h}$$
and so it is enough to apply (Williams 1991, sec. 12.15) in order to obtain $S_{t,k,h} = \sum_{n=1}^t X_{n,h} Y_{n,k} \overset{a.s.}{\sim} \pi_k S_{\infty,h}\, t^{\gamma^*}$. □

Technical details for the statistical inference (Subsec. 4.1). We here use the same choice of the eigenvectors $v$ and $u$ and the same notation adopted in the proof of Theorem 3.1. We recall that the dynamics of the above defined process $A_t = \zeta_t(1-\gamma^*)P_t$, with $\zeta_t(\cdot)$ defined in (B.2), is
$$A_0 = \theta/c, \qquad A_{t+1} = A_t - \frac{1}{t+1}(\gamma^* I - \Gamma^\top)A_t + \frac{1}{t+1}\Gamma^\top \Delta M_{t+1} + O\Big(\frac{\zeta_{t+1}(1-\gamma^*)}{t^2}\Big)\mathbf{1},$$
where $O(\zeta_t(1-\gamma^*)/t^2) = O(1/t^{1+\gamma^*})$ and $\Delta M_{t+1} = \zeta_{t+1}(1-\gamma^*)\Delta M'_{t+1}$ with $\Delta M'_{t+1} = X_{t+1} - P_t$. Moreover, setting $S_t = (S_{t,1}, \ldots, S_{t,N})$ and $B_t = \frac{1}{\zeta_t(\gamma^*)}S_t$, we find the following vectorial dynamics:
$$B_0 = \mathbf{0}, \qquad B_{t+1} = \frac{\zeta_t(\gamma^*)}{\zeta_{t+1}(\gamma^*)}B_t + \frac{1}{\zeta_{t+1}(\gamma^*)}X_{t+1} = \Big(1 - \frac{\gamma^*}{t+1}\Big)B_t + \frac{1}{\zeta_{t+1}(\gamma^*)}\Delta M'_{t+1} + \frac{1}{\zeta_{t+1}(\gamma^*)}P_t$$
$$= \Big(1 - \frac{\gamma^*}{t+1}\Big)B_t + \frac{1}{\zeta_{t+1}(\gamma^*)\zeta_{t+1}(1-\gamma^*)}\Delta M_{t+1} + \frac{1}{\zeta_{t+1}(\gamma^*)\zeta_{t+1}(1-\gamma^*)}\,\frac{\zeta_{t+1}(1-\gamma^*)}{\zeta_t(1-\gamma^*)}A_t.$$
Using (B.3) and the relation $\zeta_{t+1}(x)/\zeta_t(x) = 1 + O(1/t)$, we obtain

(B.6) $$B_{t+1} = B_t - \frac{1}{t+1}(\gamma^* B_t - A_t) + \frac{1}{t+1}\Delta M_{t+1} + O(\zeta_t(1-\gamma^*)/t^2)\mathbf{1},$$

where again $O(\zeta_t(1-\gamma^*)/t^2) = O(1/t^{1+\gamma^*})$. Finally, we observe that, by (B.2), we have
$$t^{-(1-\gamma^*)}\, E[\Delta M_{t+1}\Delta M_{t+1}^\top \mid \mathcal{F}_t] \overset{a.s.}{\sim} t^{1-\gamma^*}\,\mathrm{diag}\big(P_{t,1}(1-P_{t,1}), \ldots, P_{t,N}(1-P_{t,N})\big)$$
$$\xrightarrow{a.s.} \begin{cases} \mathrm{diag}(P_\infty) & \text{for } \gamma^* < 1, \\ \mathrm{diag}\big(P_{\infty,1}(1-P_{\infty,1}), \ldots, P_{\infty,N}(1-P_{\infty,N})\big) & \text{for } \gamma^* = 1 \text{ (and so } u = \mathbf{1}) \end{cases} = \begin{cases} \widetilde{P}_\infty\,\mathrm{diag}(u) & \text{for } \gamma^* < 1, \\ \widetilde{P}_\infty(1-\widetilde{P}_\infty)\,I & \text{for } \gamma^* = 1 \text{ (and so } u = \mathbf{1}), \end{cases}$$
and we recall that, as proven before, $\widetilde{A}_t = v^\top A_t = \zeta_t(1-\gamma^*)v^\top P_t \overset{a.s.}{\to} \widetilde{P}_\infty$. Therefore, the pair $(A_t, B_t)_t$ satisfies the dynamics and the conditions required in the general central limit theorem proven in (Aletti et al. 2025, Appendix A), provided we assume $\mathrm{Re}(\gamma^*_2)/\gamma^* < 1/2$, where $\gamma^*_2$ is an eigenvalue of $\Gamma$ different from $\gamma^*$ with highest real part, that is $\gamma^*_2 \in \mathrm{Sp}(\Gamma)\setminus\{\gamma^*\}$ with $\mathrm{Re}(\gamma^*_2) = \max\{\mathrm{Re}(\gamma) : \gamma \in \mathrm{Sp}(\Gamma)\setminus\{\gamma^*\}\}$. Hence, we can apply the statistical tools based on that result and described in (Aletti et al. 2025, Appendix B and C).
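The statistical machinery above hinges on the spectral objects $\gamma^*$, $u$ and $v$ of $\Gamma$. As a concrete (and purely illustrative) sketch — the 3×3 matrix below is invented, not the one estimated from the patent data — a plain power iteration recovers the Perron-Frobenius eigenvalue $\gamma^*$ and the left eigenvector $u$, i.e. the eigenvector centrality scores with respect to $\Gamma^\top$ discussed in Section 3:

```python
# Power iteration recovering the Perron-Frobenius eigenvalue gamma* and the
# left eigenvector u of an interaction matrix Gamma (u^T Gamma = gamma* u^T).
# The matrix entries below are illustrative only, not estimated from data.

def left_perron(gamma, iters=1000):
    n = len(gamma)
    u = [1.0 / n] * n                 # start from the uniform vector
    lam = 0.0
    for _ in range(iters):
        # one step of u^T Gamma, i.e. multiplication by Gamma^T
        w = [sum(u[i] * gamma[i][j] for i in range(n)) for j in range(n)]
        lam = sum(w)                  # eigenvalue estimate (u sums to one)
        u = [x / lam for x in w]
    return lam, u

Gamma = [[0.5, 0.2, 0.1],
         [0.1, 0.6, 0.2],
         [0.2, 0.1, 0.5]]
gstar, u = left_perron(Gamma)
# gamma* < 1 here because 1^T Gamma != 1^T (the sub-stochastic case of
# Theorem 3.1, in which all success probabilities vanish at rate 1/t^(1-gamma*))
print(gstar, u)
```

Since the matrix is strictly positive and irreducible, the iteration converges to the unique (up to scaling) positive left eigenvector, whose component ratios $u_h/u_j$ give the limit ratios $P_{\infty,h}/P_{\infty,j}$ of Theorem 3.1.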
In particular, in the mean-field case we have $v = N^{-1}\mathbf{1}$, $u = \mathbf{1}$ and $\gamma^*_2 = \gamma^*(1-\iota)$, so that, when $\iota > 1/2$, we have
$$(2\iota - 1)\, t^{\gamma^*}\, \frac{\|B_t - v^\top B_t\|^2}{v^\top B_t} \xrightarrow{d} \chi^2(N-1).$$

B.1. General results. Let $A_t = (A_{t,1}, \ldots, A_{t,N})^\top$, with $t \geq 0$, be a multi-dimensional real stochastic process, adapted to a filtration $(\mathcal{F}_t)_t$, with the following dynamics:

(B.7) $$A_{t+1} = A_t - \frac{1}{t+1}(\phi^* I - \Phi^\top)A_t + \frac{1}{t+1}\Phi^\top \Delta M_{t+1} + R_{A,t+1},$$

where $A_0$ is integrable and
(i) $\Phi^\top$ is a non-negative irreducible matrix with leading eigenvalue $0 < \phi^* \leq 1$;
(ii) $R_{A,t+1} = O(t^{-(1+\beta)})\mathbf{1}$ for some $\beta > 0$ (footnote 16).

Footnote 16: The notation $R_t = O(s_t)$ means that $R_t$ is a (possibly random) remainder term such that $|R_t| \leq C s_t$ for a suitable constant $C$ and for $t$ large enough.

Let $u$ and $v$ be the left and the right eigenvectors of $\Phi$ associated to $\phi^*$, with strictly positive components and such that $v^\top \mathbf{1} = 1$ and $v^\top u = 1$. (Recall that $\phi^*$ is real and simple and it is possible to choose the components of these vectors all strictly positive because of the Frobenius-Perron theory.) Set $\widetilde{A}_t = v^\top A_t$.

Theorem B.1. Under (i) and (ii) and assuming
(iii) $\sup_t E[|\widetilde{A}_t|] < +\infty$,
we have $\widetilde{A}_t \xrightarrow{a.s.} \widetilde{A}_\infty$, where $\widetilde{A}_\infty$ is an integrable random variable. Moreover, if we also assume
(iv) $A_0$ square-integrable and $\sum_t w_t/(t+1)^2 < +\infty$, where $w_t = \sum_{j=1}^N E[(\Delta M_{t+1,j})^2]$,
then we have $\sup_t E[(\widetilde{A}_t)^2] < +\infty$ and so $\widetilde{A}_t$ converges to $\widetilde{A}_\infty$ also in quadratic mean (i.e. in $L^2$).

Note that (iii) is verified also when $(\widetilde{A}_t)_t$ is uniformly integrable, and in this case we also have that the convergence is in mean.

Proof. By multiplying equation (B.7) by $v^\top$ we obtain

(B.8) $$\widetilde{A}_{t+1} = \widetilde{A}_t + \frac{\phi^*}{t+1}\Delta\widetilde{M}_{t+1} + v^\top R_{A,t+1},$$
where $\Delta\widetilde{M}_{t+1} = v^\top \Delta M_{t+1}$. Setting $\breve{M}_t = \sum_{n=1}^t \frac{1}{n}\Delta\widetilde{M}_n$ and $\widetilde{R}_{A,t+1} = v^\top R_{A,t+1} = O(1/t^{1+\beta})$ (by (ii)), we have

(B.9) $$\widetilde{A}_{t+1} - \widetilde{A}_0 = \sum_{n=0}^t (\widetilde{A}_{n+1} - \widetilde{A}_n) = \phi^* \breve{M}_{t+1} + \sum_{n=0}^t \widetilde{R}_{A,n+1}.$$

Hence, since $\sup_t E[|\widetilde{A}_t|] < +\infty$ (by (iii)) and $\sum_t 1/t^{1+\beta} < +\infty$, we also have $\sup_t E[|\breve{M}_t|] < +\infty$. Therefore, $(\breve{M}_t)_t$ is a martingale bounded in $L^1$ and so it converges almost surely to an integrable random variable $\breve{M}_\infty$. It follows from (B.9) that $(\widetilde{A}_t)$ converges almost surely to an integrable random variable $\widetilde{A}_\infty$. Moreover, we obtain

(B.10) $$(\widetilde{A}_t - \widetilde{A}_\infty)^2 = \Big(\phi^*(\breve{M}_t - \breve{M}_\infty) + \sum_{n \geq t}\widetilde{R}_{A,n+1}\Big)^2 = (\phi^*)^2(\breve{M}_t - \breve{M}_\infty)^2 + O(1/t^\beta).$$

Now, we are going to prove that, under assumption (iv), we have $\sup_t E[(\widetilde{A}_t)^2] < +\infty$, so that (by (B.9)) $(\breve{M}_t)_t$ is a martingale bounded in $L^2$; hence $\breve{M}_\infty$ is square-integrable and $\breve{M}_t$ converges in quadratic mean to it. By (B.10) this fact obviously implies that $\widetilde{A}_t$ converges in quadratic mean to $\widetilde{A}_\infty$. We observe that, from (iv) and the fact that $(\Delta\widetilde{M}_{t+1})^2 \leq C\sum_{j=1}^N (\Delta M_{t+1,j})^2$, we obtain
$$\Big(E\Big[\Big|\frac{\Delta\widetilde{M}_{t+1}}{t+1}\widetilde{R}_{A,t+1}\Big|\Big]\Big)^2 \leq E\Big[\frac{(\Delta\widetilde{M}_{t+1})^2}{(t+1)^2}\Big]\, E[(\widetilde{R}_{A,t+1})^2] \leq \frac{C w_t}{(t+1)^2}\,O(1/t^{2(1+\beta)})$$
and so $E\big[\big|\frac{\Delta\widetilde{M}_{t+1}}{t+1}\widetilde{R}_{A,t+1}\big|\big] = o(1/t^{1+\beta})$. Therefore, from (B.8), since $\sup_t E[\widetilde{A}_t] < +\infty$, we get
$$E[(\widetilde{A}_{t+1})^2] \leq E[(\widetilde{A}_t)^2] + (\phi^*)^2\frac{C w_t}{(t+1)^2} + O\Big(\frac{1}{t^{2(1+\beta)}}\Big) + O\Big(\frac{\sup_t E[\widetilde{A}_t]}{t^{1+\beta}}\Big) + o\Big(\frac{1}{t^{1+\beta}}\Big) = E[(\widetilde{A}_t)^2] + (\phi^*)^2\frac{C w_t}{(t+1)^2} + O(1/t^{1+\beta}).$$
Then, we find
$$\big|E[(\widetilde{A}_t)^2] - E[(\widetilde{A}_0)^2]\big| \leq \sum_{n=0}^{t-1}\big|E[(\widetilde{A}_{n+1})^2] - E[(\widetilde{A}_n)^2]\big| \leq (\phi^*)^2 C\sum_n \frac{w_n}{(n+1)^2} + \sum_n O(1/n^{1+\beta}) < +\infty,$$
where we have used (iv) in order to say that the first series is finite. Therefore, assuming $A_0$ (and so $\widetilde{A}_0$) square-integrable, we have $\sup_t E[(\widetilde{A}_t)^2] < +\infty$. □

Remark B.1 (non-negative case). In particular, condition (iii) is verified when $\widetilde{A}_t$ is non-negative for each $t$. Indeed, if $\widetilde{A}_t$ is non-negative, we have, for each $t$,
$$\big|E[\widetilde{A}_t] - E[\widetilde{A}_0]\big| \leq \sum_{n=0}^{t-1}\big|E[\widetilde{A}_{n+1}] - E[\widetilde{A}_n]\big| \leq \sum_n O(1/n^{1+\beta})$$
and thus, since the last series is finite and $\widetilde{A}_0$ is integrable, we have $\sup_t E[\widetilde{A}_t] < +\infty$, that is, (iii) is verified.

Theorem B.2.
Assuming (i), (ii), (iii) and (iv), we have $A_t \xrightarrow{a.s./L^2} \widetilde{A}_\infty u$, where $\widetilde{A}_\infty$ is a square-integrable random variable.

Proof. We firstly want to prove that we can neglect the term $R_{A,t+1}$ in the dynamics (B.7) of $A_t$. We recall that the matrix $\Phi^\top$ can be decomposed as
$$\Phi^\top = \phi^* u v^\top + U D V^\top,$$
where $D$ is the diagonal matrix whose elements are the eigenvalues of $\Phi$ (i.e. of $\Phi^\top$) different from $\phi^*$, and $U$ and $V$ denote the matrices whose columns are the left (right) and the right (left) eigenvectors of $\Phi$ (of $\Phi^\top$, respectively) associated to these eigenvalues, so that we have

(B.11) $$V^\top u = U^\top v = 0, \qquad V^\top U = U^\top V = I, \qquad I = u v^\top + U V^\top.$$

Therefore the dynamics of $A_t$ can be rewritten as follows:

(B.12) $$A_{t+1} = \Big(I - \frac{1}{t+1}U(I\phi^* - D)V^\top\Big)A_t + \frac{1}{t+1}\Phi^\top \Delta M_{t+1} + R_{A,t+1}.$$

Let $\alpha_j = \phi^* - \phi_j$, with $\phi_j$ eigenvalue of $\Phi$ different from $\phi^*$; then $\mathrm{Re}(\alpha_j) > 0$. Moreover, we have

(B.13) $$A_{t+1} = C_{m_0,t}A_{m_0} + \sum_{k=m_0}^{t}\frac{1}{k+1}C_{k+1,t}\Phi^\top \Delta M_{k+1} + \sum_{k=m_0}^{t}C_{k+1,t}R_{A,k+1} = C_{m_0,t}A_{m_0} + \sum_{k=m_0}^{t}\frac{1}{k+1}C_{k+1,t}\Phi^\top \Delta M_{k+1} + \rho_{t+1},$$

where $m_0$ is such that $\mathrm{Re}(\alpha_j)/(m_0+1) < 1$ for each $j$ and
$$C_{k+1,t} = \prod_{m=k+1}^{t}\Big(I - \frac{1}{m+1}(I\phi^* - \Phi^\top)\Big) = \prod_{m=k+1}^{t}\Big(I - \frac{1}{m+1}U(I\phi^* - D)V^\top\Big) = U\Big[\prod_{m=k+1}^{t}\Big(I - \frac{1}{m+1}(I\phi^* - D)\Big)\Big]V^\top,$$
and so we can write $C_{k+1,t} = U A_{k+1,t} V^\top$ with
$$A_{k+1,t} = \prod_{m=k+1}^{t}\Big(I - \frac{1}{m+1}(I\phi^* - D)\Big).$$
Moreover, setting, for any $x \in \mathbb{C}$ with $\mathrm{Re}(x)/(m_0+1) < 1$, $p_{m_0}(x) = 1$ and $p_k(x) = \prod_{m=m_0}^{k}\big(1 - \frac{x}{m+1}\big)$ for $k \geq m_0$, and $F_{k+1,t} = \frac{p_t(x)}{p_k(x)}$ for $m_0 - 1 \leq k \leq t-1$, from (Aletti et al. 2019, Lemma A.5) we get $[A_{k+1,t}]_{jj} = F_{k+1,t}(\alpha_j)$.

We now prove that $|\rho_{t+1}| \xrightarrow{a.s.} 0$. To this end, first notice that $O(|C_{k+1,t}|) = O(|A_{k+1,t}|)$ and, setting $a^*_2 = \mathrm{Re}(\alpha^*_2) = \phi^* - \mathrm{Re}(\phi^*_2)$, with $\phi^*_2$ eigenvalue of $\Phi$, different from $\phi^*$, such that $\mathrm{Re}(\phi^*_2) = \max_j\{\mathrm{Re}(\phi_j)\}$, we have (see
(Aletti et al. 2017, Lemma A.4))
$$|A_{k+1,t}| = O\Big(\frac{|p_t(\alpha^*_2)|}{|p_k(\alpha^*_2)|}\Big) = O\Big(\Big(\frac{k}{t}\Big)^{a^*_2}\Big) \quad \text{for } k = m_0, \ldots, t-1,$$
and simply $|A_{t+1,t}| = O(1)$ for $k = t$. Moreover, recalling that $R_{A,t+1} = O(t^{-(1+\beta)})\mathbf{1}$ for some $\beta > 0$, we have
$$|\rho_{t+1}| = \Big|\sum_{k=m_0}^{t}C_{k+1,t}R_{A,k+1}\Big| = O\Big(\sum_{k=m_0}^{t-1}\Big(\frac{k}{t}\Big)^{a^*_2}\frac{1}{k^{1+\beta}}\Big) + O(1/t^{1+\beta}) = O\Big(\frac{1}{t^{a^*_2}}\sum_{k=m_0}^{t-1}k^{a^*_2-1-\beta}\Big) + O(1/t^{1+\beta}) \to 0,$$
because $a^*_2 > 0$ and $\beta > 0$. Therefore, in all the sequel, without loss of generality, we can assume that $A_t$ follows the dynamics (B.7) with $R_{A,t+1} = \mathbf{0}$.

We now decompose the vectorial process $A_t$ by means of the Jordan representation of the matrix $\Phi$. Specifically, for any $\phi \in \mathrm{Sp}(\Phi)\setminus\{\phi^*\}$, we can denote by $J_\phi$ the Jordan block and by $U_\phi$ and $V_\phi$ the matrices whose columns are, respectively, the left and right (possibly generalized) eigenvectors of $\Phi$ associated to the eigenvalue $\phi$, i.e. $\Phi V_\phi = V_\phi J_\phi$ and $U_\phi^\top \Phi = J_\phi U_\phi^\top$. Then, we can consider the decomposition
$$A_t = \widetilde{A}_t u + \sum_{\phi \in \mathrm{Sp}(\Phi)\setminus\{\phi^*\}} A_{\phi,t},$$
where $\widetilde{A}_t = v^\top A_t$ (as defined above) and $A_{\phi,t} = U_\phi V_\phi^\top A_t$. We have already proven the almost sure convergence and the convergence in $L^2$ for $\widetilde{A}_t$ under (i), (ii), (iii) and (iv). In the following steps we are going to show that, under (i), (ii) and (iv), each $A_{\phi,t}$ converges almost surely and in $L^2$ to zero. In particular, this last task will be done separately for the eigenvalues with $|\phi| < \phi^*$ and with $|\phi| = \phi^*$. Remember that the assumption that $\Phi$ (or, equivalently, $\Phi^\top$) is irreducible ensures that $\phi^*$ is real, simple and $|\phi| \leq \phi^*$ for any $\phi \in \mathrm{Sp}(\Phi)$. Moreover, let us set $W_t = \sum_{j=1}^N E[(\Delta M_{t+1,j})^2 \mid \mathcal{F}_t]$ and observe that assumption (iv) means $E[\sum_t W_t/(t+1)^2] = \sum_t E[W_t]/(t+1)^2 = \sum_t w_t/(t+1)^2 < +\infty$, which also implies $\sum_t W_t/(t+1)^2 < +\infty$ almost surely.

Study of $A_{\phi,t}$ with $|\phi| < \phi^*$. Let $\breve{A}_t = V_\phi^\top A_t$; since $A_{\phi,t} = U_\phi V_\phi^\top A_t = U_\phi \breve{A}_t$, it is enough to prove that $\|\breve{A}_t\|^2$ converges a.s. and in $L^2$ to zero. To this end, by multiplying equation (B.7) by $V_\phi^\top$, we have
$$\breve{A}_{t+1} = \Big(I - \frac{1}{t+1}(\phi^* I - J_\phi^\top)\Big)\breve{A}_t + \frac{1}{t+1}J_\phi^\top V_\phi^\top \Delta M_{t+1}.$$
Then, since for any real matrix $Q$ we can write

(B.14) $$E[\Delta M_{t+1}^\top Q\, \Delta M_{t+1} \mid \mathcal{F}_t] = \sum_{j=1}^N q_{jj}^2\, E[(\Delta M_{j,t+1})^2 \mid \mathcal{F}_t] \leq \max_j q_{jj}^2\, W_t,$$
we have that
$$E[\|\breve{A}_{t+1}\|^2 \mid \mathcal{F}_t] \leq \Big\|\Big(\Big(1 - \frac{\phi^*}{t+1}\Big)I + \frac{1}{t+1}J_\phi\Big)\breve{A}_t\Big\|^2 + \frac{1}{(t+1)^2}\sum_{j=1}^N [\bar{V}_\phi \bar{J}_\phi J_\phi^\top V_\phi^\top]^2_{jj}\, E[(\Delta M_{j,t+1})^2 \mid \mathcal{F}_t]$$
$$\leq \Big(1 - \frac{\phi^*}{t+1} + \frac{\|J_\phi\|_{2,2}}{t+1}\Big)^2\|\breve{A}_t\|^2 + \frac{1}{(t+1)^2}\max_j\{[\bar{V}_\phi \bar{J}_\phi J_\phi^\top V_\phi^\top]^2_{jj}\}\, W_t.$$
Then, regarding the first term, we note that
$$\Big(1 - \frac{\phi^*}{t+1} + \frac{\|J_\phi\|_{2,2}}{t+1}\Big)^2 \leq \Big(1 - \frac{\phi^*}{t+1} + \frac{|\phi| + \phi^*}{2(t+1)}\Big)^2 = \Big(1 - \frac{\phi^* - |\phi|}{2(t+1)}\Big)^2,$$
and so
$$E[\|\breve{A}_{t+1}\|^2 \mid \mathcal{F}_t] \leq \Big(1 - \frac{\phi^* - |\phi|}{2(t+1)}\Big)^2\|\breve{A}_t\|^2 + \frac{C}{(t+1)^2}W_t.$$
Therefore, since $\phi^* > |\phi|$ and by (iv), the process $\|\breve{A}_t\|^2$ is a non-negative almost supermartingale, so that it converges almost surely (see Robbins and Siegmund (1971)). Moreover, by applying the expectation, we obtain
$$E[\|\breve{A}_{t+1}\|^2] \leq \Big(1 - \frac{\phi^* - |\phi|}{2(t+1)}\Big)^2 E[\|\breve{A}_t\|^2] + \frac{C}{(t+1)^2}E[W_t],$$
which, since $\sum_t(\phi^* - |\phi|)/(t+1) = +\infty$, by (iv) and (Aletti et al. 2023a, Lemma S1.6), allows us to conclude that $\|\breve{A}_t\| \xrightarrow{a.s./L^2} 0$, and hence $\breve{A}_t \xrightarrow{a.s./L^2} \mathbf{0}$.

Study of $A_{\phi,t}$ with $|\phi| = \phi^*$. From the Frobenius-Perron theory, we know that each eigenvalue with maximum modulus is simple. Let us denote the corresponding right and left eigenvectors of $\Phi$ by $v_\phi$ and $u_\phi$. Then, set $a_{\phi,t} = v_\phi^\top A_t$, so that, since $A_{\phi,t} = u_\phi v_\phi^\top A_t = u_\phi a_{\phi,t}$, it is enough to prove that $|a_{\phi,t}|$ converges to zero almost surely and in $L^2$. To this end, by multiplying equation (B.7) by $v_\phi^\top$,
we have
$$a_{\phi,t+1} = \Big(1 - \frac{1}{t+1}(\phi^* - \phi)\Big)a_{\phi,t} + \frac{\phi}{t+1}v_\phi^\top \Delta M_{t+1}.$$
Then, using (B.14), we have that
$$E[|a_{\phi,t+1}|^2 \mid \mathcal{F}_t] \leq \Big|1 - \frac{\phi^*}{t+1} + \frac{\phi}{t+1}\Big|^2|a_{\phi,t}|^2 + \frac{|\phi|^2}{(t+1)^2}\sum_{j=1}^N |v_j|^2\, E[(\Delta M_{j,t+1})^2 \mid \mathcal{F}_t] \leq \Big|1 - \frac{\phi^*}{t+1} + \frac{\phi}{t+1}\Big|^2|a_{\phi,t}|^2 + \frac{|\phi|^2}{(t+1)^2}\max_j\{|v_j|^2\}\, W_t.$$
Then, regarding the first term, we have that
$$\Big|1 - \frac{\phi^*}{t+1} + \frac{\phi}{t+1}\Big|^2 = \Big(1 - \frac{\phi^*}{t+1} + \frac{\mathrm{Re}(\phi)}{t+1}\Big)^2 + \Big(\frac{\mathrm{Im}(\phi)}{t+1}\Big)^2 = 1 + \Big(\frac{\phi^* - \mathrm{Re}(\phi)}{t+1}\Big)^2 - 2\,\frac{\phi^* - \mathrm{Re}(\phi)}{t+1} + \Big(\frac{\mathrm{Im}(\phi)}{t+1}\Big)^2$$
$$= 1 - \frac{2(\phi^* - \mathrm{Re}(\phi))}{t+1} + \frac{\phi^{*2} - 2\phi^*\mathrm{Re}(\phi) + \mathrm{Re}(\phi)^2 + \mathrm{Im}(\phi)^2}{(t+1)^2} = 1 - \frac{2(\phi^* - \mathrm{Re}(\phi))}{t+1} + \frac{2\phi^*(\phi^* - \mathrm{Re}(\phi))}{(t+1)^2} = 1 - 2\Big(\frac{1}{t+1} - \frac{\phi^*}{(t+1)^2}\Big)(\phi^* - \mathrm{Re}(\phi)),$$
and so
$$E[|a_{\phi,t+1}|^2 \mid \mathcal{F}_t] \leq \Big(1 - 2\Big(\frac{1}{t+1} - \frac{\phi^*}{(t+1)^2}\Big)(\phi^* - \mathrm{Re}(\phi))\Big)|a_{\phi,t}|^2 + \frac{C}{(t+1)^2}W_t.$$
Therefore, since $\phi^* > \mathrm{Re}(\phi)$ and by (iv), the process $|a_{\phi,t}|^2$ is a non-negative almost supermartingale, so that it converges almost surely (see Robbins and Siegmund (1971)). Moreover, by applying the expectation, we obtain
$$E[|a_{\phi,t+1}|^2] \leq \Big(1 - 2\Big(\frac{1}{t+1} - \frac{\phi^*}{(t+1)^2}\Big)(\phi^* - \mathrm{Re}(\phi))\Big)E[|a_{\phi,t}|^2] + \frac{C}{(t+1)^2}E[W_t].$$
Since $\sum_t(1/(t+1) - \phi^*/(t+1)^2) = +\infty$, by (iv) and (Aletti et al. 2023a, Lemma S1.6) we can conclude that $|a_{\phi,t}| \xrightarrow{a.s./L^2} 0$, and hence $a_{\phi,t} \xrightarrow{a.s./L^2} 0$. □

B.2. Technical result. Let $W = (W_t)_{t \geq 0}$ be a bounded stochastic process with the following dynamics:
$$W_{t+1} = \Big(1 - \frac{1}{t+1}\Big)W_t + \frac{1}{t+1}Y_{t+1} + R_{t+1}, \quad t \geq 0,$$
where $(Y_t)_t$ is a bounded stochastic process and $R_t = O(1/t^{1+\beta})$ with $\beta > 0$. Define a bounded stochastic process $V = (V_t)_{t \geq 0}$ with dynamics
$$V_{t+1} = \Big(1 - \frac{1}{t+2}\Big)V_t + \frac{1}{t+2}Y_{t+1}, \quad t \geq 0.$$
Then
$$|W_t - V_t| = O(1/t)|W_0 - V_0| + \frac{1}{t}\sum_{n=0}^{t}O(n^{-\beta'}),$$
where $\beta' = \min(\beta, 1)$. Indeed, we can observe that we can write
$$W_{t+1} = \Big(1 - \frac{1}{t+2}\Big)W_t + \frac{1}{t+2}Y_{t+1} + R'_{t+1}, \quad t \geq 0,$$
with $R'_{t+1} = R_{t+1} + O(1/t^2) = O(1/t^{1+\beta'})$, and so
$$W_{t+1} = C_{0,t}W_0 + \sum_{n=0}^{t}C_{n+1,t}\frac{Y_{n+1}}{n+2} + \sum_{n=0}^{t}C_{n+1,t}R'_{n+1},$$
where
$$C_{0,t} = \prod_{m=0}^{t}\Big(1 - \frac{1}{m+2}\Big) = O(1/t), \qquad C_{n+1,t} = \prod_{m=n+1}^{t}\Big(1 - \frac{1}{m+2}\Big) = \frac{\prod_{m=0}^{t}\big(1 - \frac{1}{m+2}\big)}{\prod_{m=0}^{n}\big(1 - \frac{1}{m+2}\big)} = O(n/t).$$
Similarly, we have
$$V_{t+1} = C_{0,t}V_0 + \sum_{n=0}^{t}C_{n+1,t}\frac{Y_{n+1}}{n+2}.$$
Hence, we obtain
$$|W_{t+1} - V_{t+1}| \leq C_{0,t}|W_0 - V_0| + \Big|\sum_{n=0}^{t}C_{n+1,t}R'_{n+1}\Big| = O(1/t)|W_0 - V_0| + \sum_{n=0}^{t}O\Big(\frac{n}{t}\,\frac{1}{n^{1+\beta'}}\Big) = O(1/t)|W_0 - V_0| + \frac{1}{t}\sum_{n=0}^{t}O(n^{-\beta'}).$$
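The rate $t^{1-\gamma^*}$ established in Theorem 3.1 can also be checked numerically on the expected dynamics $E[P_{t+1}] = (1 - \frac{1}{t+1})E[P_t] + \frac{1}{t+1}\Gamma^\top E[P_t]$, ignoring the $O(1/t^2)$ correction. The sketch below is a hedged illustration: taking $\Gamma = c \cdot A$ with $A$ column-stochastic makes $\gamma^* = c$ and $u = \mathbf{1}$ by construction (the matrix and the initial probabilities are invented, not estimated).

```python
# Deterministic check of the rate in Theorem 3.1: with Gamma = c * A and A
# column-stochastic, gamma* = c and u = 1, so t^(1-gamma*) E[P_t] should
# settle to a limit with equal components. All numbers are illustrative.

c = 0.7                        # Perron-Frobenius eigenvalue gamma* = c
A = [[0.50, 0.25, 0.25],       # doubly stochastic, hence column-stochastic
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
p = [0.3, 0.5, 0.2]            # initial success probabilities P_0 = theta / c

T = 100000
for t in range(1, T + 1):
    # (Gamma^T p)_h = c * sum_j A[j][h] * p[j]
    gp = [c * sum(A[j][h] * p[j] for j in range(3)) for h in range(3)]
    p = [(1 - 1 / (t + 1)) * p[h] + gp[h] / (t + 1) for h in range(3)]

scaled = [(T + 1) ** (1 - c) * x for x in p]
print(scaled)  # components nearly equal: the common limit of t^(1-c) P_t
```

The probabilities themselves vanish at rate $t^{-(1-c)}$, while the rescaled vector stabilizes along the direction of $u = \mathbf{1}$, exactly as in the case $\gamma^* < 1$ discussed after Theorem 3.1.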
Appendix C. The case of a reducible interaction matrix

When the interaction matrix $\Gamma$ is not irreducible, it is possible to decompose it into irreducible sub-matrices such that the union of the spectra of the sub-matrices coincides with the spectrum of the original matrix. In the following, we describe a heuristic argument (also employed in Iacopini et al. (2020) and Aletti et al. (2023a)), useful in order to detect the rate at which each $S_{t,h}$ grows along time in the case of a general matrix $\Gamma$. The dynamics that rules the vectorial process $S_t = (S_{t,1}, \ldots, S_{t,N})^\top$ can be approximated (as $t \to +\infty$) by the linear system of (deterministic) differential equations
$$\dot{s}(t) = \frac{\Gamma s(t)}{t}$$
and hence we can say that $S_t \approx s(t)$ for $t \to +\infty$. By the change of variable $t = e^z$, we get $\dot{s}(z) = \Gamma s(z)$, whose general solution is given by $s(z) = e^{\Gamma z}c$. Now, the term $e^{\Gamma z}$ can be expressed using the canonical Jordan form of the matrix $\Gamma$, so that we obtain
$$s(z) = \sum_{k=1}^{r} e^{\gamma_k z}\sum_{i=0}^{p_k - 1} z^i c_i,$$
where $\gamma_1, \ldots,$
$\gamma_r$ are the distinct eigenvalues of $\Gamma$, $p_1, \ldots, p_r$ are the sizes of the corresponding Jordan blocks, and $c_i$ are suitable vectors related to $c$ and to the generalized eigenvectors of $\Gamma$. Indeed, we can write $\Gamma$ as $PJP^{-1}$, where $J$ is its canonical Jordan form and $P$ is a suitable invertible matrix of generalized eigenvectors. Therefore, we have $e^{\Gamma z} = Pe^{Jz}P^{-1}$, where $e^{Jz}$ is a block matrix with blocks of the form $e^{J_k z}$, with $J_k$ a block in $J$. On the other hand, if $J_k = \gamma_k I + N_k$ is a generic Jordan block of $\Gamma$ with size $p_k$ and associated to the eigenvalue $\gamma_k$, we have
$$e^{J_k z} = e^{\gamma_k z}e^{N_k z} = e^{\gamma_k z}\sum_{i=0}^{p_k - 1}\frac{z^i}{i!}N_k^i.$$
Changing the variable from $z$ to $t$, we find

(C.1) $$S_t \approx s(t) = \sum_{k=1}^{r} t^{\gamma_k}\sum_{i=0}^{p_k - 1}\ln^i(t)\, c_i$$

and so the rate at which $S_{t,h}$ increases is given by the leading term in the expression of $s_h(t)$.

Appendix D. Choice of the threshold (robustness check)

Figure 6. Behavior of the mean index per category over the years.

In this section we discuss the choice of the threshold $\tau$ that has been used in the data analysis of Section 4 to define when a given patent $n$ can be considered a success for (or in) category $h$, that is, if and only if $I_{n,h} > \tau$. Since we want to identify as a "success" only patents that have an extraordinary impact on at least one category, the value of the threshold $\tau$ should be greater than the index value of the vast majority of patents in the data set. However, this requirement is not so stringent, in the sense that the percentage of patents with $I_{n,h} > \tau$ is already around 1% for $\tau = 0.1$ and is below 0.1% for any $\tau > 0.5$ (see Figure 6 and Table 3 for further details). This means that the majority of patents in the data set have a very low index value (precisely, below 0.1). This is in accordance with Squicciarini et al. (2013), who observed that only a very small subset of patents typically receives a large number of forward citations and that the mean value of the forward index decreases along time.
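The success indicator behind this discussion is simple to state in code. The sketch below uses invented toy index values (real values come from the category-level novelty index of Section 4) and shows how the share of successes shrinks as $\tau$ grows, mirroring the pattern in Table 3:

```python
# Success indicator of Appendix D: patent n is a "success" in category h
# iff its novelty index I_{n,h} exceeds the threshold tau. The toy index
# values below are invented for illustration.

def success_share(index_values, tau):
    """Fraction of patents whose index exceeds tau."""
    return sum(1 for i in index_values if i > tau) / len(index_values)

# a skewed toy sample: the vast majority of patents have a very low index
idx = [0.01] * 90 + [0.2] * 7 + [0.6] * 2 + [0.95]
print(success_share(idx, 0.1))  # 0.1
print(success_share(idx, 0.5))  # 0.03
print(success_share(idx, 0.9))  # 0.01
```

With such a skewed distribution, any $\tau$ well above the bulk of the index values isolates only the few patents with extraordinary impact, which is why the results below are stable across a wide range of thresholds.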
Then, to verify the robustness of the paper's results with respect to the choice of the threshold $\tau$, we repeated the data analysis of Section 4 multiple times, each time using a different threshold value. In particular, we performed a linear regression for every process $S_{t,h}$, on the $\log_{10}$-$\log_{10}$ scale, both allowing the slopes to differ between categories and imposing a common slope. The results are collected in Figure 7, where we can see that the variability shown by the slopes is essentially the same for any value of $\tau > 0.2$. In addition, when we perform the linear regression that imposes the same slope for all categories, the estimated common slope $\widehat{\gamma}^*$ is always very close to the value 0.689, that is, the one estimated with $\tau = 0.8$ in Section 4. Finally, we calculated the goodness-of-fit index $R^2$ obtained imposing a common slope and the one obtained allowing the slopes to differ across categories, and they are always very close to each other and always
higher than 0.95 for any value of $\tau$ (see Figure 8).

category | $I_{n,h}>0.1$ | $I_{n,h}>0.3$ | $I_{n,h}>0.5$ | $I_{n,h}>0.7$ | $I_{n,h}>0.9$
A | 1.640% | 0.185% | 0.045% | 0.017% | 0.008%
B | 2.440% | 0.250% | 0.059% | 0.021% | 0.008%
C | 1.180% | 0.154% | 0.042% | 0.017% | 0.008%
D | 0.554% | 0.141% | 0.031% | 0.017% | 0.011%
E | 1.070% | 0.149% | 0.038% | 0.017% | 0.009%
F | 1.420% | 0.170% | 0.043% | 0.018% | 0.008%
G | 1.450% | 0.153% | 0.038% | 0.016% | 0.008%
H | 1.350% | 0.150% | 0.037% | 0.016% | 0.008%

Table 3. For each category $h$, percentage of patents with a value of the index $I_{n,h}$ greater than $\tau = 0.1, 0.3, 0.5, 0.7, 0.9$.

Figure 7. Different slopes estimated through linear regression on $S_{t,h}$, on the $\log_{10}$-$\log_{10}$ scale, for each category $h$ (colored lines) and each value of the threshold $\tau$ (x-axis). The solid black line indicates the common slope $\widehat{\gamma}^*$ estimated imposing the same slope for all categories. The dashed horizontal black line indicates the common slope $\widehat{\gamma}^* = 0.689$ estimated for $\tau = 0.8$ in Section 4.

References

Abbas A, Zhang L, Khan SU (2014) A literature review on the state-of-the-art in patent analysis. World Patent Information 37:3-13.
Acemoglu D, Akcigit U, Kerr W (2016) Networks and the macroeconomy: An empirical exploration. NBER Macroeconomics Annual 30(1):273-335.
Adner R, Feiler D (2019) Interdependence, perception, and investment choices: An experimental approach to decision making in innovation ecosystems. Organization Science 30(1):109-125.
Adner R, Kapoor R (2010) Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal 31(3):306-333.

Figure 8. Goodness-of-fit index $R^2$ of the linear regression on $S_{t,h}$, on the $\log_{10}$-$\log_{10}$ scale, for each value of the threshold $\tau$ (x-axis).
The red line indicates the $R^2$ obtained imposing a common slope and the blue line the $R^2$ obtained allowing the slopes to differ across categories. Both lines are always very close to each other and always higher than 0.95 for any value of $\tau$.

Aghion P, Bloom N, Blundell R, Griffith R, Howitt P (2005) Competition and innovation: An inverted-U relationship. The Quarterly Journal of Economics 120(2):701-728.
Aghion P, Howitt P (2007) Capital, innovation, and growth accounting. Oxford Review of Economic Policy 23(1):79-93.
Akcigit U, Kerr WR (2018) Growth through heterogeneous innovations. Journal of Political Economy 126(4):1374-1443.
Aletti G, Crimaldi I, Ghiglietti A (2017) Synchronization of reinforced stochastic processes with a network-based interaction. Ann. Appl. Probab. 27(6):3787-3844, URL http://dx.doi.org/10.1214/17-AAP1296.
Aletti G, Crimaldi I, Ghiglietti A (2019) Networks of reinforced stochastic processes: asymptotics for the empirical means. Bernoulli 25(4B):3339-3378, URL http://dx.doi.org/10.3150/18-BEJ1092.
Aletti G, Crimaldi I, Ghiglietti A (2023a) Interacting innovation processes. Sci. Rep. 13:17187, URL http://dx.doi.org/10.1038/s41598-023-43967-1.
Aletti G, Crimaldi I, Ghiglietti A (2023b) Interacting innovation processes. Scientific Reports 13(1):17187.
Aletti G, Crimaldi I, Ghiglietti A (2025) Statistical inference for interacting innovation processes and related general results. URL https://arxiv.org/abs/2501.09648.
Allen P (2013) Complexity, uncertainty
and innovation. Economics of Innovation and New Technology 22(7):702-725.
Alves J, Marques MJ, Saur I, Marques P (2007) Creativity and innovation through multidisciplinary and multisectoral cooperation. Creativity and Innovation Management 16(1):27-34.
Newman MEJ (2005) Power laws, Pareto distributions and Zipf's law. Contemporary Physics 46(5):323-351, URL http://dx.doi.org/10.1080/00107510500052444.
Angori G, Marzocchi C, Ramaciotti L, Rizzo U (2024) A patent-based analysis of the evolution of basic, mission-oriented, and applied research in European universities. The Journal of Technology Transfer 49(2):609-641.
Antelman GR (1972) Interrelated Bernoulli processes. Journal of the American Statistical Association 67(340):831-841, URL http://dx.doi.org/10.1080/01621459.1972.10481301.
Arts S, Cassiman B, Gomez JC (2018) Text matching to measure patent similarity. Strategic Management Journal 39(1):62-84.
Atanassova I, Bednar P (2022) Managing uncertainty: Company's adaptive capabilities during covid-19. Complex Systems Informatics and Modeling Quarterly 33:14-39.
Autor D, Dorn D, Hanson GH, Pisano G, Shu P (2020) Foreign competition and domestic innovation: Evidence from US patents. American Economic Review: Insights 2(3):357-374.
Barbieri N, Consoli D, Napolitano L, et al. (2023) Regional technological capabilities and green opportunities in Europe. Journal of Technology Transfer 48:749-778.
Bekamiri H, Hain DS, Jurowetzki R (2024) PatentSBERTa: A deep NLP based hybrid model for patent distance and classification using augmented SBERT. Technological Forecasting and Social Change 206:123536.
Blazsek S, Escribano A (2010) Knowledge spillovers in US patents: A dynamic patent intensity model with secret common innovation factors. Journal of Econometrics 159(1):14-32.
Bloom N, Jones CI, Van Reenen J, Webb M (2020) Are ideas getting harder to find? American Economic Review 110(4):1104-1144.
Bondarev A, Krysiak FC (2021) Economic development and the structure of cross-technology interactions. European Economic Review 132:103628.
Brem A, Viardot E, Nylund PA (2021) Implications of the coronavirus (covid-19) outbreak for innovation: Which technologies will improve our lives? Technological Forecasting and Social Change 163:120451, ISSN 0040-1625, URL http://dx.doi.org/10.1016/j.techfore.2020.120451.
Breschi S, Malerba F, Orsenigo L (2000) Technological regimes and Schumpeterian patterns of innovation. The Economic Journal 110(463):388-410.
Breznik L, Hisrich RD (2014) Dynamic capabilities vs. innovation capability: are they related? Journal of Small Business and Enterprise Development 21(3):368-384.
Castaldi C, Frenken K, Los B (2015) Related variety, unrelated variety and technological breakthroughs: An analysis of US state-level patenting. Regional Studies 49(5):767-781, URL http://dx.doi.org/10.1080/00343404.2014.940305.
Chae S, Gim J (2019) A study on trend analysis of applicants based on patent classification systems. Information 10(12):364.
Chen XA, Burke J, Du R, Hong MK, Jacobs J, Laban P, Li D, Peng N, Willis KDD, Wu CS, Zhou B (2023) Next steps for human-centered generative AI: A technical perspective. URL https://arxiv.org/abs/2306.15774.
Clancy M (2023) Are ideas getting harder to find? A short review of the evidence. OECD Publishing, ed., Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research (OECD), URL http://dx.doi.org/10.1787/a8d820bd-en.
Clauset A, Shalizi CR, Newman MEJ (2009) Power-law distributions in empirical data. SIAM Review 51(4):661-703, URL http://dx.doi.org/10.1137/070710111.
Coccia M (2017) A new classification of technologies. URL https://arxiv.org/abs/1712.07711.
Cohen WM, Levinthal DA, et al. (1990) Absorptive capacity: A new perspective on learning and innovation. Administrative
Science Quarterly 35(1):128-152.
Colladon AF, Guardabascio B, Venturini F (2025) A new mapping of technological interdependence. Research Policy 54(1):105126.
Corrocher N, Mancusi ML (2021) International collaborations in green energy technologies: What is the role of distance in environmental policy stringency? Energy Policy 156:112470.
Dahlin KB, Behrens DM (2005) When is an invention really radical? Defining and measuring technological radicalness. Research Policy 34(5):717-737.
David PA (1985) Clio and the economics of QWERTY. The American Economic Review 75(2):332-337.
Dehghani MA, Karavidas D, Panourgias N, Hutchinson M, O'Reilly P (2023) Assessing the quality of financial technology patents through the development of a patent quality index for comparing jurisdictions, technical domains, and leading organizations. IEEE Transactions on Engineering Management 71:3934-3950.
Dias Sant'Ana T, de Souza Bermejo PH, Moreira MF, de Souza WVB (2020) The structure of an innovation ecosystem: foundations for future research. Management Decision 58(12):2725-2742.
Dovbischuk I (2022) Innovation-oriented dynamic capabilities of logistics service providers, dynamic resilience and firm performance during the covid-19 pandemic. The International Journal of Logistics Management 33(2):499-519.
Du H, Niyato D, Kang J, Xiong Z, Zhang P, Cui S, Shen X, Mao S, Han Z, Jamalipour A, Poor HV, Kim DI (2024) The age of generative AI and AI-generated everything. IEEE Network 38(6):501-512, URL http://dx.doi.org/10.1109/MNET.2024.3422241.
Eisenhardt KM, Martin JA (2000) Dynamic capabilities: what are they? Strategic Management Journal 21(10-11):1105-1121.
Fleming L (2001) Recombinant uncertainty in technological search. Management Science 47(1):117-132.
Fortini S, Petrone S, Sporysheva P (2018) On a notion of partially conditionally identically distributed sequences. Stochastic Processes and their Applications 128(3):819-846, ISSN 0304-4149.
Fransman M (2001) Evolution of the telecommunications industry into the internet age. Communications and Strategies 43:57-113.
Ganguli I, Lin J, Meursault V, Reynolds NF (2024) Patent text and long-run innovation dynamics: The critical role of model selection. Technical report, National Bureau of Economic Research.
Gao Y, Lazarova E (2023) Ex-ante novelty and invention quality: A cross-country sectoral empirical study. Royal Economics Society Annual Conference 2024.
Garrone P, Mariotti S, Sgobbi F (2002) Technological innovation in telecommunications: an empirical analysis of specialisation paths. Economics of Innovation and New Technology 11(1):1-23.
Gerken JM, Moehrle MG (2012) A new instrument for technology monitoring: novelty in patents measured by semantic patent analysis. Scientometrics 91(3):645-670.
Giannitsis T, Kager M (2009) Technology and specialization: dilemmas, options and risks. Knowledge for Growth. Prospect for Science, Technology and Innovation 1-35.
Godin B (2017) Models of Innovation: The History of an Idea (The MIT Press), ISBN 9780262338806, URL http://dx.doi.org/10.7551/mitpress/10782.001.0001.
Gouet R (1993) Martingale functional central limit theorems for a generalized Pólya urn. The Annals of Probability 1624-1639.
Granstrand O (1998) Towards a theory of the technology-based firm. Research Policy 27(5):465-489.
Granstrand O, Holgersson M (2020) Innovation ecosystems: A conceptual review and a new definition. Technovation 90:102098.
Hall BH, Jaffe AB, Trajtenberg M (2000) Market value and patent citations: A first look. Working Paper 7741, National Bureau of Economic Research, URL http://dx.doi.org/10.3386/w7741.
Heaps HS (1978) Information Retrieval: Computational and Theoretical Aspects (USA: Academic Press, Inc.), ISBN 0123357500.
Helfat CE, Peteraf MA (2003) The dynamic
resource-based view: Capability lifecycles. Strategic Management Journal 24(10):997-1010.
Higham K, Contisciani M, De Bacco C (2022) Multilayer patent citation networks: A comprehensive analytical framework for studying explicit technological relationships. Technological Forecasting and Social Change 179:121628.
Higham K, De Rassenfosse G, Jaffe AB (2021) Patent quality: Towards a systematic framework for analysis and measurement. Research Policy 50(4):104215.
Ho CM (2023) Research on interaction of innovation spillovers in the AI, fin-tech, and IoT industries: considering structural changes accelerated by covid-19. Financial Innovation 9(1):7.
Iacopini I, Di Bona G, Ubaldi E, Loreto V, Latora V (2020) Interacting discovery processes on complex networks. Phys. Rev. Lett. 125:248301, URL http://dx.doi.org/10.1103/PhysRevLett.125.248301.
Jacobides MG, Cennamo C, Gawer A (2018) Towards a theory of ecosystems. Strategic Management Journal 39(8):2255-2276.
Jacobides MG, Cennamo C, Gawer A (2024) Externalities and complementarities in platforms and ecosystems: From structural solutions to endogenous failures. Research Policy 53(1):104906.
Jaffe AB, Trajtenberg M (2002) Patents, Citations, and Innovations: A Window on the Knowledge Economy (The MIT Press), ISBN 9780262276238, URL http://dx.doi.org/10.7551/mitpress/5263.001.0001.
Jaffe AB, Trajtenberg M, Fogarty MS (2000) Knowledge spillovers and patent citations: Evidence from a survey of inventors. American Economic Review 90(2):215-218.
Jalonen H (2012) The uncertainty of innovation: a systematic review of the literature. Journal of Management Research 4(1):1-47.
Jalonen H, Lehtonen A (2011) Uncertainty in the innovation process. European Conference on Innovation and Entrepreneurship, 51-51 (Academic Conferences International Limited).
Jones BF (2009) The burden of knowledge and the "death of the renaissance man": Is innovation getting harder?
The Review of Economic Studies 76(1):283-317.
Jones WT, McGuirt FM (1991) Telecommunications and computer science: two merging paradigms. ACM SIGCSE Bulletin 23(4):13-22.
Jurek D (2024) PyPatentAlice: Text-based classification of patents after Alice. Software Impacts 19:100611.
Katselis D, Beck CL, Srikant R (2019) Mixing times and structural inference for Bernoulli autoregressive processes. IEEE Transactions on Network Science and Engineering 6(3):364-378, URL http://dx.doi.org/10.1109/TNSE.2018.2829520.
Kattel R, Mazzucato M (2018) Mission-oriented innovation policy and dynamic capabilities in the public sector. Industrial and Corporate Change 27(5):787-801, ISSN 0960-6491, URL http://dx.doi.org/10.1093/icc/dty032.
Keller AA (2007) Stochastic differential games and queueing models to innovation and patenting. Contributions to Game Theory and Management 1(0):245-269.
Kelly B, Papanikolaou D, Seru A, Taddy M (2021) Measuring technological innovation over the long run. American Economic Review: Insights 3(3):303-320.
Kim J, Magee CL (2017) Dynamic patterns of knowledge flows across technological domains: empirical results and link prediction. URL https://arxiv.org/abs/1706.07140.
Kim SH, Jeon JH, Aridi A, Jun B (2022) Factors that affect the technological transition of firms toward the industry 4.0 technologies. IEEE Access 11:1694-1707.
Kovács B, Carnabuci G, Wezel FC (2021) Categories, attention, and the impact of inventions. Strategic Management Journal 42(5):992-1023.
Lafond F, Kim D (2019) Long-run dynamics of the US patent classification system. Journal of Evolutionary Economics 29(2):631-664.
Lanjouw JO, Schankerman M (2004) Patent quality and research productivity: Measuring innovation with multiple indicators. The Economic Journal 114(495):441-465.
Lawson B, Samson D (2001) Developing innovation capability in organisations: a dynamic capabilities approach. International Journal of Innovation Management
5(03):377-400.
Lee C, Cho Y, Seol H, Park Y (2012) A stochastic patent citation analysis approach to assessing future technological impacts. Technological Forecasting and Social Change 79(1):16-29.
Lee SM, Trimi S (2021) Convergence innovation in the digital age and in the covid-19 pandemic crisis. Journal of Business Research 123:14-22.
Leiner BM, Cerf VG, Clark DD, Kahn RE, Kleinrock L, Lynch DC, Postel J, Roberts LG, Wolff S (2009) A brief history of the internet. ACM SIGCOMM Computer Communication Review 39(5):22-31.
Leonard-Barton D (1992) Core capabilities and core rigidities: A paradox in managing new product development. Strategic Management Journal 13(S1):111-125.
Leydesdorff L, Kogler DF, Yan B (2017) Mapping patent classifications: portfolio and statistical analysis, and the comparison of strengths and weaknesses. Scientometrics 112:1573-1591.
Li L, Tong Y, Wei L, Yang S (2022) Digital technology-enabled dynamic capabilities and their impacts on firm performance: Evidence from the covid-19 pandemic. Information & Management 59(8):103689.
Malerba F (2002) Sectoral systems of innovation and production. Research Policy 31(2):247-264.
Malerba F, Orsenigo L (1997) Technological regimes and sectoral patterns of innovative activities. Industrial and Corporate Change 6(1):83-118.
Maragakis M, Rouni M, Mouza E, Kanetidis M, Argyrakis P (2023) Tracing technological shifts: time-series analysis of correlations between patent classes. The European Physical Journal Plus 138(9):776.
Maslach D (2016) Change and persistence with failed technological innovation. Strategic Management Journal 37(4):714-723.
Mazzucato M (2018) Mission-oriented innovation policies: challenges and opportunities. Industrial and Corporate Change 27(5):803-815, ISSN 0960-6491, URL http://dx.doi.org/10.1093/icc/dty034.
Messerschmitt DG (1996) The convergence of telecommunications and computing: What are the implications today?
Proceedings of the IEEE 84(8):1167-1186.
Mitzenmacher M (2004) A brief history of generative models for power law and lognormal distributions. Internet Mathematics 1(2), URL http://dx.doi.org/10.1080/15427951.2004.10129088.
Mohammadabadi SMS (2025) From generative AI to innovative AI: An evolutionary roadmap. URL https://arxiv.org/abs/2503.11419.
Mueller J (2024) Aspen Treatise for Patent Law. Aspen Treatise Series (Aspen Publishing), ISBN 9798892072823.
Napolitano L, Evangelou E, Pugliese E, Zeppini P, Room G (2018) Technology networks: The autocatalytic origins of innovation. Royal Society Open Science 5(6):172445.
Nemet GF (2012) Inter-technology knowledge spillovers for energy technologies. Energy Economics 34(5):1259-1270.
O'Connor GC (2008) Major innovation as a dynamic capability: A systems approach. Journal of Product Innovation Management 25(4):313-330.
Oladapo IA, Alkethery NM, AlSaqer NS (2023) Consequences of covid-19 shocks and government initiatives on business performance of micro, small and medium enterprises in Saudi Arabia. Journal of Small Business Strategy 33(2):64-88.
Paasi J, Wiman H, Apilo T, Valkokari K (2023) Modeling the dynamics of innovation ecosystems. International Journal of Innovation Studies 7(2):142-158.
Pandit P, Sahraee-Ardakan M, Amini A, Rangan S, Fletcher AK (2019) Sparse multivariate Bernoulli processes in high dimensions. Chaudhuri K, Sugiyama M, eds., Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, 457-466 (PMLR), URL https://proceedings.mlr.press/v89/pandit19a.html.
Park H, Magee CL (2017) Tracing technological development trajectories: A genetic knowledge persistence-based main path approach. PLOS ONE 12(1):1-18, URL http://dx.doi.org/10.1371/journal.pone.0170895.
Pemantle R (2007) A survey of random processes with reinforcement. Probability Surveys 4:1-79,
URL http://dx.doi.org/10.1214/07-PS094.
Perri A, Silvestri D, Zirpoli F (2020) Change and stability in the automotive industry: a patent analysis. Technical Report 5/2020, Department of Management, Università Ca' Foscari Venezia, URL https://hdl.handle.net/10278/3752090.
Pichler A, Lafond F, Farmer JD (2020) Technological interdependencies predict innovation dynamics. arXiv preprint arXiv:2003.00580.
Pitman J, Yor M (1997) The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Probab. 25(2):855-900.
Pugliese E, Napolitano L, Chinazzi M, Chiarotti G (2019) The emergence of innovation complexity at different geographical and technological scales. URL https://arxiv.org/abs/1909.05604.
Rainville A, Dikker I, Buggenhagen M (2025) Tracking innovation via green patent classification systems: Are we truly capturing circular economy progress? Journal of Cleaner Production 486:144385.
Rigby DL (2015) Technological relatedness and knowledge space: entry and exit of US cities from patent classes. Regional Studies 49(11):1922-1937.
Robbins H, Siegmund D (1971) A convergence theorem for non negative almost supermartingales and some applications. Rustagi JS, ed., Optimizing Methods in Statistics, 233-257 (New York: Academic Press), ISBN 978-0-12-604550-5, URL http://dx.doi.org/10.1016/B978-0-12-604550-5.50015-8.
Russell MG, Smorodinskaya NV (2018) Leveraging complexity for ecosystemic innovation. Technological Forecasting and Social Change 136:114-131, ISSN 0040-1625, URL http://dx.doi.org/10.1016/j.techfore.2017.11.024.
Salter A, Alexy O (2014) The nature of innovation. The Oxford Handbook of Innovation Management (Oxford University Press), ISBN 9780199694945, URL http://dx.doi.org/10.1093/oxfordhb/9780199694945.013.034.
Sampat BN, Ziedonis AA (2005) Patent citations and the economic value of patents, 277-298 (Dordrecht: Springer Netherlands), ISBN 978-1-4020-2755-0, URL http://dx.doi.org/10.1007/1-4020-2755-9_13.
Shin DH (2010) Convergence and divergence: Policy making about the convergence of technology in Korea. Government Information Quarterly 27(2):147-160.
Singh A, Triulzi G, Magee CL (2021) Technological improvement rate predictions for all technologies: Use of patent data and an extended domain description. Research Policy 50(9):104294.
Squicciarini M, Dernis H, Criscuolo C (2013) Measuring patent quality: Indicators of technological and economic value. OECD Science, Technology and Industry Working Papers 2013/3, OECD Publishing, URL http://dx.doi.org/10.1787/5k4522wkw1r8-en.
Taalbi J (2023) Long-run patterns in the discovery of the adjacent possible. URL https://arxiv.org/abs/2208.00907v2.
Teece D (2009) Dynamic Capabilities and Strategic Management: Organizing for Innovation and Growth. EBSCO ebook academic collection (OUP Oxford), ISBN 9780199545124, URL https://books.google.it/books?id=88EUDAAAQBAJ.
Teece D, Leih S (2016) Uncertainty, innovation, and dynamic capabilities: An introduction. California Management Review 58(4):5-12.
Teece DJ, Pisano G, Shuen A (1997) Dynamic capabilities and strategic management. Strategic Management Journal 18(7):509-533.
Thomas LD, Autio E (2019) Innovation ecosystems. Available at SSRN 3476925.
Thompson P, Fox-Kean M (2005) Patent citations and the geography of knowledge spillovers: A reassessment. American Economic Review 95(1):450-460.
Trajtenberg M, Henderson R, Jaffe A (1997) University versus corporate patents: A window on the basicness of invention. Economics of Innovation and New Technology 5(1):19-50.
Tria F, Loreto V, Servedio VDP, Strogatz SH (2014) The dynamics of correlated novelties. Scientific Reports 4(1):5890.
Verspagen B (2007) Mapping technological trajectories as patent citation networks: A study on the history of fuel cell research. Advances in Complex Systems 10(01):93-115.
Wang Q, Schneider JW (2020) Consistency and validity of interdisciplinarity measures. Quantitative Science Studies
1(1):239-263.
Weitzman ML (1998) Recombinant growth. The Quarterly Journal of Economics 113(2):331-360.
Williams D (1991) Probability with Martingales (Cambridge: Cambridge University Press).
Yang S (2023) Predictive patentomics: Forecasting innovation success and valuation with ChatGPT. arXiv preprint arXiv:2307.01202.
Youn H, Strumsky D, Bettencourt LM, Lobo J (2015) Invention as a combinatorial process: evidence from US patents. Journal of the Royal Society Interface 12(106):20150272.
Younge KA, Kuhn JM (2016) Patent-to-patent similarity: A vector space model. Available at SSRN 2709238.
Zhang C, Zhang D, Pan Y, Wang Y (2025) Whom you connect with matters: Innovation collaboration network centrality and innovative productivity in Chinese cities. Growth and Change 56(1):e70015.
Zollo M, Winter SG (2002) Deliberate learning and the evolution of dynamic capabilities. Organization Science 13(3):339-351.

G. Aletti, ADAMSS Center, Università degli Studi di Milano, Milan, Italy
Email address: giacomo.aletti@unimi.it

I. Crimaldi, IMT School for Advanced Studies Lucca, Lucca, Italy
Email address: irene.crimaldi@imtlucca.it

A. Ghiglietti, Università degli Studi di Milano-Bicocca, Milan, Italy
Email address: andrea.ghiglietti@unimib.it (Corresponding author)

F. Nutarelli, IMT School for Advanced Studies Lucca, Lucca, Italy
Email address: federico.nutarelli@imtlucca.it
arXiv:2505.14051v1 [math.ST] 20 May 2025

Information bounds for inference in stochastic evolution equations observed under noise

Gregor Pasemann∗, Markus Reiß∗

May 21, 2025

Abstract

We consider statistics for stochastic evolution equations in Hilbert space with emphasis on stochastic partial differential equations (SPDEs). We observe a solution process under additional measurement errors and want to estimate a real or functional parameter in the drift. Main targets of estimation are the diffusivity, transport or source coefficient in a parabolic SPDE. By bounding the Hellinger distance between observation laws under different parameters we derive lower bounds on the estimation error, which reveal the underlying information structure. The estimation rates depend on the measurement noise level, the observation time, the covariance of the dynamic noise, the dimension and the order at which the parametrised coefficient appears in the differential operator. A general estimation procedure attains these rates in many parametric cases and proves their minimax optimality. For nonparametric estimation problems, where the parameter is an unknown function, the lower bounds exhibit an even more complex information structure. The proofs are to a large extent based on functional calculus, perturbation theory and monotonicity of the semigroup generators.

Keywords: Hellinger distance, cylindrical Gaussian measure, stochastic partial differential equation, stochastic evolution equation, weak solution, Ornstein-Uhlenbeck process, minimax rate, Cauchy problem.

MSC classification: 46N30, 60H15, 62G20

1 Introduction

Many dynamical systems in nature and society are subject to randomness, and stochastic partial differential equations (SPDEs) yield prototypical examples of such stochastic dynamical models.
Let us consider linear parabolic SPDEs of the form
\[ \dot{X}(t, y) = D_\vartheta X(t, y) + \dot{W}(t, y), \quad t \geq 0,\ y \in \Lambda, \tag{1.1} \]
with space-time Gaussian white noise $\dot{W}$, or equivalently in Itô form
\[ dX(t, y) = D_\vartheta X(t, y)\,dt + dW(t, y), \quad t \geq 0,\ y \in \Lambda, \tag{1.2} \]
with a cylindrical Brownian motion $W(t)$ on $L^2(\Lambda)$, subject to some initial condition $X(0, y)$, $y \in \Lambda$. Here, $\Lambda \subseteq \mathbb{R}^d$ is a spatial domain and $D_\vartheta$ denotes a general second-order differential operator
\[ D_\vartheta f(y) := \nabla \cdot \big( c^{(2)}_\vartheta(y) \nabla f + c^{(1)}_\vartheta(y) f \big)(y) + c^{(0)}_\vartheta(y) f(y), \quad y \in \Lambda, \]
involving some boundary condition and (matrix-/vector-/scalar-valued) coefficients $c^{(2)}_\vartheta, c^{(1)}_\vartheta, c^{(0)}_\vartheta$, parametrised by a Euclidean or functional parameter $\vartheta \in \Theta$. First fundamental results for estimation in a general SPDE framework have been obtained by Huebner & Rozovskii (1995) and Ibragimov & Khas'minskii (2001).

∗Institut für Mathematik, Humboldt-Universität zu Berlin, Germany (gregor.pasemann@hu-berlin.de, mreiss@math.hu-berlin.de). This research has been partially funded by the Deutsche Forschungsgemeinschaft (DFG) - Project-ID 318763901 - SFB1294.

Afterwards, estimators for $\vartheta$ in several specific settings and from observing $X$ globally, locally or at discrete points in time and space have been developed; see Cialenco (2018) for an excellent survey of the state of the art up to this time. Recently, in the general setting (1.2) a parametric estimation theory has been developed by Altmeyer et al. (2024) under multiple local observations of $X$. In many cases, however, measurements of $X$ involve an additional error, see e.g. Pasemann et al. (2020) for measurements of cell motility. Under measurement noise different observation schemes often lead to the same estimation theory because the measurement noise typically dominates the discretisation errors. Therefore we consider the noisy observations $dY$ of $X$ given by
\[ dY(t, y) = B X(t, y)\,dt + \varepsilon\, dV(t, y), \quad t \in [0, T],\ y \in \Lambda, \tag{1.3} \]
where $V$ is a cylindrical Brownian motion on $L^2(\Lambda)$, independent of $W$, $B$ is an observation operator and $\varepsilon \geq 0$ is the noise level. This formulation is standard for stochastic filtering problems in dynamical systems, see e.g. Bain & Crisan (2009). From an alternative point of view, we may consider a regression setting with discrete data
\[ Y_{i,j} = (BX)(t_i, y_j) + \varepsilon_{i,j} \]
at some time points $t_i$ and spatial locations $y_j$ with i.i.d. error variables $\varepsilon_{i,j} \sim \mathcal{N}(0, \sigma^2)$, independent of the driving noise $W$. If the design points $(t_i, y_j)$ are equally spaced and become dense in $[0, T] \times \Lambda$ for sample size $n \to \infty$, then one can conclude in a rigorous manner that these observation schemes become asymptotically equivalent with observing (1.3) for
\[ \varepsilon^2 = \frac{\sigma^2 T |\Lambda|}{n}, \]
compare Reiß (2008), Section 3 in Bibinger et al. (2014) and the references therein. This interpretation, however, requires that point evaluations of $BX$ are well defined.

Given the observations (1.3), we aim at establishing minimax optimal estimation rates for $\vartheta$. It turns out that these rates differ significantly depending on whether this parameter appears in the diffusivity (or conductivity) coefficient $c^{(2)}_\vartheta$, the transport (or advection) coefficient $c^{(1)}_\vartheta$ or in the source (or reaction) coefficient $c^{(0)}_\vartheta$. Moreover, when these coefficient functions $c^{(k)}_\vartheta(y) = \vartheta(y)$ are not parametrised by a real parameter, but assumed to belong just to a functional class of Hölder regularity $\alpha > 0$, then the rates become nonparametric and exhibit structural differences, depending on the asymptotics taken. Establishing minimax optimal estimation rates requires constructing estimators attaining these rates as well as deriving general information-theoretic lower bounds proving that no estimator can converge faster; see Tsybakov (2009) for a comprehensive introduction.
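The asymptotic-equivalence relation $\varepsilon^2 = \sigma^2 T |\Lambda| / n$ can be made plausible by a short computation: testing the discrete regression noise against a function $\varphi$ yields a Gaussian variable whose variance matches the one produced by the white-noise model. The Python sketch below checks this numerically for $\Lambda = [0, 1]$, $d = 1$, with a grid size and test function of our own choosing.

```python
import numpy as np

# Heuristic behind eps^2 = sigma^2 * T * |Lambda| / n: testing the noise
#   S = (T|Lambda|/n) * sum_{i,j} phi(t_i, y_j) * eps_{ij},  eps_{ij} ~ N(0, sigma^2),
# gives Var(S) = sigma^2 (T|Lambda|/n)^2 * sum phi^2 ~= eps^2 * int phi^2,
# i.e. the variance of testing white noise of level eps against phi.
T, vol, sigma, m = 1.0, 1.0, 0.3, 200    # m x m design grid, n = m^2 (toy values)
n = m * m
t, y = np.meshgrid(np.linspace(0, T, m), np.linspace(0, vol, m))
phi = np.sin(2 * np.pi * t) * np.cos(np.pi * y)

var_discrete = sigma**2 * (T * vol / n) ** 2 * np.sum(phi**2)
eps2 = sigma**2 * T * vol / n            # eps^2 = sigma^2 T |Lambda| / n
var_white_noise = eps2 * 0.25            # int int phi^2 dt dy = 1/4 for this phi
print(var_discrete / var_white_noise)    # ~= 1 up to Riemann-sum error
```

The exact discrete variance and the white-noise prediction agree up to the Riemann-sum discretisation error of order $1/m$, which is the heart of the equivalence heuristic.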
Lower bounds provide a genuine insight into the intrinsic difficulty of drawing inference on the parameter and reveal the underlying information structure. We thus first concentrate on a general lower bound result in the framework of stochastic evolution equations (see e.g. Da Prato & Zabczyk (2014); Lototsky & Rozovskii (2018)), generalizing the SPDE (1.2),
\[ dX_t = A_\vartheta X_t\,dt + dW_t, \quad t \in [0, T], \tag{1.4} \]
on a real Hilbert space $H$, where $A_\vartheta$ is the generator of a strongly continuous semigroup on $H$. Then the observations are given by
\[ dY_t = B X_t\,dt + \varepsilon\, dV_t, \quad t \in [0, T], \tag{1.5} \]
with an independent $H$-valued cylindrical Brownian motion $V$. By linearity, the observations $dY$ follow a cylindrical Gaussian law on $L^2([0, T]; H)$ and our main information-theoretic result in Theorem 2.1 gives a tight bound for the Hellinger distance between the observation laws for two different parameters $\vartheta_0, \vartheta_1 \in \Theta$. The bound is given in terms of the Hilbert-Schmidt norm of functions of $A_{\vartheta_0}, A_{\vartheta_1}$. The fundamental functional-analytic difficulty consists in reducing the intrinsic Hilbert-Schmidt norm for the covariance operators of the time-continuous processes on $L^2([0, T]; H)$ to a norm of the generators on $H$ without assuming commutativity of the generators. A particular consequence is the absolute continuity of the laws, which for direct observations has been studied by Peszat (1992).

SPDE with parameter $\vartheta > 0$ | rate $v^{\mathrm{par}}_n$
$dX(t) = \vartheta\Delta X(t)\,dt + dW(t)$ | $T^{-1/2}\varepsilon^{(d+2)/4}$
$dX(t) = (\nu\Delta X(t) + \vartheta\partial_\xi X(t))\,dt + dW(t)$ | $T^{-1/2}\varepsilon^{d/4}\nu^{(d+2)/4}$
$dX(t) = (\nu\Delta X(t) - \vartheta X(t))\,dt + dW(t)$ | $T^{-1/2}\varepsilon^{(d-2)_+/4}\nu^{d/4}$

Table 1: Rates for estimating $\vartheta > 0$
https://arxiv.org/abs/2505.14051v1
in different coefficients as a function of observation time T, static noise level ε, dimension d and diffusivity ν (dropping a log factor in the last row for d = 2).

In Section 4 we construct estimators in a general parametric setting with real parameter ϑ and commuting operators which under quite general conditions attain the lower bound rates, thus establishing minimax optimality. The concrete construction and analysis of the estimators is rather involved, but the main idea is to first reduce the observational noise by averaging with an operator-valued kernel and then to apply a continuous-regression estimator as in the case of direct observations of X. The implications of these abstract results for fundamental parametric estimation problems are demonstrated in Section 5. Already for noisy observations of the standard Ornstein-Uhlenbeck process

dY_t = X_t dt + ε dV_t with dX_t = −ϑX_t dt + σ dW_t, t ∈ [0, T], X_0 = 0,

with ϑ > 0, our optimal estimation rate v_n = (ϑ^{1/2}T^{−1/2} + T^{−1})(ε²σ^{−2}ϑ² + 1) from Proposition 5.1 generalizes results from Kutoyants (2004) and reveals interesting phenomena. For fixed ϑ > 0 we obtain the classical T^{−1/2}-rate under ergodicity when ε/σ remains bounded, and for ϑ ↓ 0 we approach the null-recurrent T^{−1}-rate. In both cases the rate does not suffer from measurement noise and equals that for direct observation of X (ε = 0). In Table 1 we gather the minimax rates obtained for SPDEs of the form (1.2) with a Laplacian ∆ on a d-dimensional bounded domain Λ and a first-order derivative ∂_ξ in direction ξ under standard boundary conditions, see Propositions 5.3, 5.5 and 5.7 for the precise formulations. Ergodicity in time yields the T^{−1/2}-rate, keeping the range of ϑ fixed. The higher the order of the coefficient is, the easier it is estimated for small static noise level ε.
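The Ornstein-Uhlenbeck rate just discussed can be explored numerically; `ou_rate` is a hypothetical helper evaluating v_n = (ϑ^{1/2}T^{−1/2} + T^{−1})(ε²σ^{−2}ϑ² + 1) as stated above (up to constants).

```python
def ou_rate(theta, T, eps, sigma):
    """Minimax rate v_n for the noisily observed Ornstein-Uhlenbeck process,
    v_n = (theta^{1/2} T^{-1/2} + T^{-1}) * (eps^2 sigma^{-2} theta^2 + 1)."""
    return (theta**0.5 * T**-0.5 + 1 / T) * ((eps / sigma)**2 * theta**2 + 1)

# ergodic regime: fixed theta > 0, bounded eps/sigma -> classical T^{-1/2} scaling,
# so quadrupling T roughly halves the rate
r1 = ou_rate(1.0, 100.0, 0.1, 1.0)
r2 = ou_rate(1.0, 400.0, 0.1, 1.0)
# null-recurrent regime: theta = 0 gives exactly the T^{-1} rate
r0 = ou_rate(0.0, 100.0, 0.1, 1.0)
```

For small ε/σ the second factor is close to one, illustrating why measurement noise does not affect the rate in the ergodic regime.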
There is yet another asymptotics that permits consistent estimation of the transport or source coefficient, namely when the second-order coefficient degenerates as ν ↓ 0, which was first statistically exploited for direct observations of X by Gaudlitz & Reiß (2023). The larger the dimension d, the faster the rates become, which is explained by the Weyl asymptotics of the underlying eigenvalues. All rates are valid up to some maximal dimension d; beyond it, ϑ is directly identifiable. In Section 5 we also discuss the rate for the fractional Laplacian, the general question of absolute continuity of the laws of X under different parameters and the dependence of the rate on different spatial correlation induced by the operator B. The nonparametric lower bounds are much more involved because the operators A_ϑ do not commute, even in standard examples like a space-dependent diffusivity ϑ(y). The main tool to bound the Hilbert-Schmidt norm in the Hellinger bound for differential operators A_ϑ = D_ϑ are estimates of the form g(D_{ϑ1}) ≼ C g(D_{ϑ0}) for functions g and some constant C > 0 with respect to the partial order ≼ induced by positive semi-definiteness. Since the functions g are mostly not operator monotone in the sense of Bhatia (2013), a careful specific analysis is required, drawing on PDE ideas in Engel & Nagel (2000) and similar to the non-commutative SPDE analysis by Lototsky & Rozovskii (2000). The rates obtained for space-dependent diffusivity, transport and source terms exhibit an elbow effect: they relate to the corresponding parametric rates v_n^par as
in classical nonparametric regression whenever T is not growing too fast (or is constant) as ε ↓ 0, while they are completely different for T → ∞ with T ⩾ ε^{−p} for certain powers p. Table 2 reports where (at which p) the elbow occurs and states the result for T → ∞ and noise levels ε of order one. Detailed results are given in Theorems 6.4, 6.7 and 6.10.

    SPDE with α-regular function ϑ(•)             rate (v_n^par)^{2α/(2α+d)}    rate for ε ∼ 1
    dX(t) = ∇•(ϑ∇X)(t)dt + dW(t)                  for T ⩽ ε^{1−α}               T^{−α/(2α+3)}
    dX(t) = (∆X(t) + ∂_ξ(ϑX)(t))dt + dW(t)        for T ⩽ ε^{−α}                T^{−α/(2α+5)}
    dX(t) = (∆X(t) − (ϑX)(t))dt + dW(t)           for T ⩽ ε^{−α−(d∧2)/2}        T^{−α/(2α+4+d∧3)}

Table 2: Rates for estimating ϑ(•) nonparametrically in different coefficients. The second column shows when the classical scaling of the parametric rate v_n^par applies (a log factor is dropped in the last row for d = 2). The third column gives the rate for non-vanishing noise level ε.

In particular, for ε ∼ 1 the rates necessarily slow down in smaller dimensions compared to the classical T^{−α/(2α+d)}-rate (d ⩽ 2 for diffusivity, d ⩽ 4 for transport and d ⩽ 6 for source estimation). In the companion paper Pasemann & Reiß (2024), a nonparametric estimator of diffusivity was constructed that attains the lower bound rate for fixed T and regularity α ∈ [1, α_max], which turned out to be highly non-trivial. The analysis there shows that for α < 1 the approach to first reduce the static noise locally cannot yield optimal rates, which gives an upper bound perspective on the elbow effect. We conjecture that our nonparametric lower bounds give the minimax rates also in all other cases, but the construction and analysis of corresponding estimators remains a challenging open problem.

In Section 2 we bound the Hellinger distance between two cylindrical Gaussian measures, which might be of independent interest. This is then used in Section 3 to bound the laws for noisy observations of stochastic evolution equations.
In a generic setting we construct parametric estimators in Section 4 whose risk attains the lower bounds in wide generality. This is exemplified in Section 5 for the Ornstein-Uhlenbeck process and standard SPDEs. Section 6 derives the nonparametric lower bounds for SPDEs. We introduce specific notation before its first usage and gather all notation in Appendix A for the convenience of the reader. Appendix B provides more standard proofs together with the bounds for the error of the estimator constructed in Section 4. Auxiliary results are collected in Appendix C.

2 Hellinger bounds for cylindrical Gaussian measures

To establish minimax lower bounds, we use the classical approach based on bounding the Hellinger distance between the laws of two parameters. For observations from the statistical model (X, F, (P_ϑ)_{ϑ∈Θ}) where the (non-empty) parameter set Θ is equipped with a semi-metric d, we combine the reduction scheme in Tsybakov (2009, Section 2.2) with Tsybakov (2009, Thm. 2.2(ii)) to obtain

2.1 Theorem. Let δ > 0. Assume there are ϑ0, ϑ1 ∈ Θ with semi-distance d(ϑ0, ϑ1) ⩾ δ such that their respective laws have Hellinger distance H(P_{ϑ0}, P_{ϑ1}) ⩽ 1. Then

inf_{ϑ̂} sup_{ϑ∈Θ} P_ϑ(δ^{−1} d(ϑ̂, ϑ) ⩾ 1/2) ⩾ (2 − √3)/4 (2.1)

holds, where the infimum is taken over all estimators (measurable Θ-valued functions) in the statistical model.

In a nutshell, the idea for establishing tight minimax lower bounds is to
find two parameters (real values or functions) under which the observation laws satisfy a non-trivial Hellinger bound and which at the same time have a large (Euclidean or functional) distance. We shall apply the lower bound for sequences of models, which become more informative in n ∈ N, and try to find the largest δ = δ_n in (2.1). Then no estimator sequence ϑ̂_n can satisfy δ_n^{−1} d(ϑ̂_n, ϑ) → 0 in P_ϑ-probability uniformly over ϑ. In other words, estimators ϑ̂_n are minimax rate-optimal when they satisfy d(ϑ̂_n, ϑ) = O_{P_ϑ}(v_n) for a rate v_n ∼ δ_n, uniformly over ϑ (Appendix A recalls uniform stochastic convergence). Next, we study the Hellinger distance between cylindrical Gaussian measures with different covariance operators.

2.2 Lemma. The squared Hellinger distance between Gaussian laws N(0, σ_0²) and N(0, σ_1²) satisfies for σ_0, σ_1 > 0

H²(N(0, σ_0²), N(0, σ_1²)) ⩽ (1/4)(σ_1/σ_0 − σ_0/σ_1)².

Proof. By invariance of the Hellinger distance under bi-measurable bijections, see Reiß (2011, Appendix A.1), we have H²(N(0, σ_0²), N(0, σ_1²)) = H²(N(0, 1), N(0, (σ_1/σ_0)²)), so it suffices to show

H²(N(0, 1), N(0, σ²)) ⩽ (1/4)(σ − σ^{−1})² (2.2)

for σ > 0. If (σ − σ^{−1})² > 8, this bound holds trivially as the squared Hellinger distance is always bounded by two. Now assume (σ − σ^{−1})² ⩽ 8. Following Reiß (2011) the squared Hellinger distance for scalar Gaussian laws is given by H²(N(0, 1), N(0, σ²)) = 2 − 2√(2σ/(σ² + 1)). We use (σ² + 1)(1 + σ^{−1})² ⩾ 8 (the minimum is attained at σ = 1) and bound, differently from that reference,

√(2σ/(σ² + 1)) = √(1 − (σ − 1)²/(σ² + 1)) ⩾ √(1 − (1/8)(σ − 1)²(1 + σ^{−1})²) = √(1 − (1/8)(σ − σ^{−1})²) ⩾ 1 − (1/8)(σ − σ^{−1})²,

given that 0 ⩽ (σ − σ^{−1})² ⩽ 8. This implies (2.2).

This real-valued bound gives rise to a corresponding Hilbert space bound. For details on cylindrical Gaussian measures N_cyl(μ, Q) we refer to Bogachev (1998).
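The scalar bound of Lemma 2.2 can be checked numerically against the exact formula H² = 2 − 2√(2σ/(σ² + 1)) quoted in the proof; a quick sketch (the helper names are ours):

```python
import math

def hellinger_sq(sigma0, sigma1):
    """Exact squared Hellinger distance between N(0, sigma0^2) and N(0, sigma1^2),
    using invariance to reduce to H^2(N(0,1), N(0, s^2)) with s = sigma1/sigma0."""
    s = sigma1 / sigma0
    return 2 - 2 * math.sqrt(2 * s / (s**2 + 1))

def lemma_2_2_bound(sigma0, sigma1):
    """Upper bound (1/4)(sigma1/sigma0 - sigma0/sigma1)^2 from Lemma 2.2."""
    return 0.25 * (sigma1 / sigma0 - sigma0 / sigma1)**2

# the bound dominates the exact distance over a range of variance ratios
for s in [0.2, 0.5, 0.9, 1.0, 1.3, 3.0, 10.0]:
    assert hellinger_sq(1.0, s) <= lemma_2_2_bound(1.0, s) + 1e-12
```

Note that for extreme ratios the bound exceeds the trivial ceiling of two, which is exactly the case handled separately in the proof.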
Here, the main use is that X∼ N cyl(0, Q) means that ( ⟨X, v⟩)v∈His a centred Gaussian process indexed by vwith Cov(⟨X, v 1⟩,⟨X, v 2⟩) =⟨Qv1, v2⟩where the covariance operator Q:H→His bounded (not necessarily trace class) and positive semi-definite, notation Q≽0. 2.3 Proposition. Consider cylindrical Gaussian laws Ncyl(0, Q0),Ncyl(0, Q1)in some separable real Hilbert space Hwith covariance operators Q0, Q1. IfQ0, Q1are one-to-one and im(Q1/2 0) = im(Q1/2 1), then H2(Ncyl(0, Q0),Ncyl(0, Q1))⩽1 4∥Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗∥2 HS, provided the right-hand side is finite. In that case the laws are equivalent. 2.4 Remark. (a) If N(0, Q0),N(0, Q1) are equivalent proper laws on H(i.e., Q0, Q1are trace class), the Feldman-H´ ajek Theorem (Da Prato & Zabczyk, 2014, Thm. 2.25) shows that im( Q1/2 0) = im(Q1/2 1) and that Q−1/2 0Q1/2 1(Q−1/2 0Q1/2 1)∗−Id is a Hilbert-Schmidt operator on H(note that here im( Q1/2 0) is dense in Hby the injectivity of Q0). Since ( Q−1/2 0Q1/2 1)∗has the bounded inverse ( Q−1/2 1Q1/2 0)∗, we deduce that also the product Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗ is a Hilbert-Schmidt operator and the Hellinger bound is finite. (b) We may write Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗=Q−1/2 0(Q1−Q0)Q−1/2 1 if we consider the Gelfand triple H1,→H ,→H−1with H1= im( Q1/2 i),H−1= (H1)∗and interpret Q1−Q0:H−1→H1. In contrast to the bound (cf. Reiß (2011)) H2(Ncyl(0, Q0),Ncyl(0, Q1))⩽2∥Q−1/2 0(Q1−Q0)Q−1/2 0∥2 HS, which is asymmetric in Q0andQ1, the bound of Proposition
2.3 will allow us to obtain feasible expressions also for non-commuting covariance operators. 5 Proof. Following the proof of the Feldman-H´ ajek Theorem (step 2 for Thm. 2.25 in Da Prato & Zabczyk (2014)) we let ( ek)k⩾1be the orthonormal basis of eigenvectors of R:=Q−1/2 0Q1/2 1(Q−1/2 0Q1/2 1)∗ with corresponding positive eigenvalues ( τk)k⩾1, noting that Ris an injective positive operator and that R−Id = Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗ (Q−1/2 0Q1/2 1)∗ is by assumption a Hilbert-Schmidt operator, which has an orthonormal basis of eigenvectors. We can generate the laws via an i.i.d. sequence ζk∼N(0,1): X k⩾1ζkQ1/2 0ek∼ N cyl(0, Q0),X k⩾1ζkτ1/2 kQ1/2 0ek∼ N cyl(0, Q1). In fact, we must check for all g∈H X k⩾1ζk⟨Q1/2 0ek, g⟩ ∼ N (0,⟨Q0g, g⟩),X k⩾1ζk⟨τ1/2 kQ1/2 0ek, g⟩ ∼ N (0,⟨Q1g, g⟩), compare Bogachev (1998, Thm. 2.2.4). The first statement follows fromP k⩾1⟨Q1/2 0ek, g⟩2= ∥Q1/2 0g∥2, the second statement from X k⩾1⟨Q1/2 0ek, g⟩⟨τkQ1/2 0ek, g⟩=⟨Q1/2 0g, RQ1/2 0g⟩=⟨Q1g, g⟩. Then by the subadditivity of the squared Hellinger distance under independence and Lemma 2.2 H2(Ncyl(0, Q0),Ncyl(0, Q1)) =H2O k⩾1N(0,∥Q1/2 0ek∥2),O k⩾1N(0,∥τ1/2 kQ1/2 0ek∥2) ⩽X k⩾1H2 N(0,∥Q1/2 0ek∥2),N(0, τk∥Q1/2 0ek∥2) ⩽1 4X k⩾1(τ1/2 k−τ−1/2 k)2 =1 4trace R−2 Id + R−1 =1 4trace (Q−1/2 0Q1/2 1)∗−Q−1/2 1Q1/2 0 Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗ =1 4∥Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗∥2 HS, where we used R−1= (Q−1/2 1Q1/2 0)∗Q−1/2 1Q1/2 0and commutativity under the trace. The equivalence follows verbatim as for the Feldman-H´ ajek Theorem. 3 Stochastic evolution equations under noise For a comprehensive treatment of stochastic evolution equations we refer to Da Prato & Zabczyk (2014) and Lototsky & Rozovskii (2018). We consider the observation process dYt(1.5) of the solu- tionXto the stochastic evolution equation (1.4). This means in particular that conditionally on X we observe dY∼ N cyl(BX, ε2Id) on L2(H), that isRT 0f(t)dYt∼ N(RT 0⟨BXt, f(t)⟩dt, ε2∥f∥2 L2(H)) for all test functions f∈L2(H). 
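In the scalar case H = R with A_ϑ = −ϑ, the solution X is a standard Ornstein-Uhlenbeck process, and the covariance Cov(X_t, X_s) = ∫_0^{t∧s} e^{−ϑ(t−u)} e^{−ϑ(s−u)} du entering the observation covariance has a closed form; a minimal numerical sanity check with hypothetical helper names:

```python
import math

def ou_covariance(theta, t, s):
    """Closed-form Cov(X_t, X_s) for dX = -theta X dt + dW, X_0 = 0:
    (exp(-theta |t - s|) - exp(-theta (t + s))) / (2 theta)."""
    return (math.exp(-theta * abs(t - s)) - math.exp(-theta * (t + s))) / (2 * theta)

def ou_covariance_numeric(theta, t, s, n=100_000):
    """Midpoint-rule evaluation of int_0^{min(t,s)} e^{-theta(t-u)} e^{-theta(s-u)} du."""
    m = min(t, s)
    h = m / n
    return sum(math.exp(-theta * (t + s - 2 * (i + 0.5) * h)) for i in range(n)) * h
```

This is only an illustration of the variation-of-constants covariance in one dimension; the operator-valued analogue is the content of Lemma 3.2 below.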
Our observation model generalises the setting of Kutoyants (2004, Section 3.1) (there, X and Y are interchanged) and of Pasemann & Reiß (2024) (there, formally dY_t/dt describes the observation process). For the observational noise level we assume ε ∈ [0, 1] so that non-noisy observations of BX are also included for ε = 0. The observation time is T ⩾ 1 and usually remains fixed or tends to infinity. B: H → H is a known bounded linear operator. The unknown parameter is denoted by ϑ ∈ Θ and for the parameter set we assume throughout that at least two parameters ϑ0, ϑ1 lie in Θ. We abbreviate A_0 := A_{ϑ0}, A_1 := A_{ϑ1} and apply this rule to all further indexing. For simplicity the initial condition X_0 is taken to be zero, noting that all minimax lower bounds derived later trivially extend when the maximum is taken over arbitrary X_0 ∈ H. The possibly unbounded linear operators A_ϑ: dom(A_ϑ) ⊆ H → H are normal with the same domain dom(A_ϑ) for all ϑ and generate a strongly continuous semigroup (e^{A_ϑ u})_{u⩾0} by functional calculus. Note that then (e^{A*_ϑ u})_{u⩾0} is its adjoint semigroup. As the generator of a semigroup, A_ϑ is necessarily quasi-dissipative, meaning that A_ϑ has a spectrum whose real part is bounded from above, see e.g. Bühler & Salamon (2018, Lemma 7.2.6). Then

X_t := ∫_0^t e^{A_ϑ(t−s)} dW_s, t ∈ [0, T],
(3.1) defines for each ta cylindrical Gaussian measure on Hvia⟨Xt, z⟩=Rt 0⟨eA∗ ϑ(t−s)z, dW s⟩,z∈H, which solves for z∈dom( A∗ ϑ) the weak formulation (Da Prato & Zabczyk, 2014, Thm. 5.4) d⟨z, X t⟩=⟨A∗ ϑz, X t⟩dt+⟨z, dW t⟩, t∈[0, T],with⟨z, X 0⟩= 0. Let us stress that we do not assume commutativity of the operators ( Aϑ)ϑ∈Θ. 3.1 Example. Writing ˜Xt=BXtand assuming Bto commute with all Aϑ, we observe dYt=˜Xtdt+εdVtwith d˜Xt=Aϑ˜Xtdt+BdW t,˜X0= 0. This way we can also treat cases of evolution equations with more general noise as a driver. An instructive example is given by B=σId for some known σ >0. Then we observe dYt=˜Xtdt+εdVtwith d˜Xt=Aϑ˜Xtdt+σdW t,˜X0= 0. On the other hand, multiplying the observation process (1.5) by σ−1and replacing Ybyσ−1Y, we observe equivalently dYt=Xtdt+εσ−1dVtwith dXt=AϑXtdt+dWt, X0= 0, Consequently, introducing a dynamic noise level σ >0 is statistically equivalent to replacing the observation noise level ε >0 by the ratio ε/σ. This means that higher dynamic noise levels will lead to smaller statistical errors, compare the discussion in Pasemann & Reiß (2024). 3.2 Lemma. The observations (RT 0⟨g(t), dYt⟩, g∈L2(H))with dYfrom (1.5) generate a cylin- drical Gaussian measure Ncyl(0, Qϑ)onL2(H)with Qϑ=ε2Id +BCϑB∗, where Cϑ=SϑS∗ ϑ and for f∈L2(H) Sϑf(t) =Zt 0eAϑ(t−s)f(s)ds, S∗ ϑf(t) =ZT teA∗ ϑ(s−t)f(s)ds, t ∈[0, T]. 3.3 Remark. Since BCϑB∗is positive semi-definite, Qϑis strictly positive for ε >0 and an isomorphism on L2(H). For proper Gaussian measures and ε= 0 the result from Lemma 3.2 and consequences for absolutely continuous laws can be found in Peszat (1992). 7 Proof. The variation-of-constants formula (3.1) and the stochastic Fubini Theorem (Da Prato & Zabczyk, 2014, Thm. 4.18) yield ZT 0⟨g(t), X(t)⟩dt=ZT 0Zt 0⟨g(t), eAϑ(t−s)dWs⟩dt =ZT 0DZT seA∗ ϑ(t−s)g(t)dt, dW sE =ZT 0⟨S∗ ϑg(s), dW s⟩. We deduce by Itˆ o isometry EhZT 0⟨g(t), X(t)⟩dt2i =∥S∗ ϑg∥2 L2(H)=⟨SϑS∗ ϑg, g⟩L2(H). 
By independence of dVanddWwe have EhZT 0⟨g(t), dYt⟩2i =EhZT 0⟨g(t), BX (t)⟩dt2i +ε2∥g∥2 L2(H) =⟨SϑS∗ ϑB∗g, B∗g⟩L2(H)+ε2⟨g, g⟩L2(H) =⟨Qϑg, g⟩L2(H). By linearity and polarisation, (RT 0⟨g(t), dYt⟩, g∈L2(H))∼ N cyl(0, Qϑ) follows. In order to be able to apply functional calculus for normal operators, we complexify Hand all operators in the usual way (see Section 5.1 in B¨ uhler & Salamon (2018) for more formal details) by introducing the complex Hilbert space HC:=H+iHwith∥v+iw∥2 HC:=∥v∥2 H+∥w∥2 H and the extension AC ϑ: dom( Aϑ) +idom( Aϑ)⊆HC→HCviaAC ϑ(v+iw) := Aϑv+iAϑwfor v, w∈dom( Aϑ). Then by normality of Aϑ((AC ϑ)∗denotes the complex adjoint) (AC ϑ)∗AC ϑ(v+iw) =A∗ ϑAϑv−iA∗ ϑAϑw=AC ϑ(AC ϑ)∗(v+iw) for all v, w∈dom( A∗ ϑAϑ) = dom( AϑA∗ ϑ) and AC ϑis normal on HC. With the canonical isometric injection ι:H→HC,ι(v) :=v+i•0 we have AC ϑι(v) =Aϑv,v∈H, and AC ϑextends Aϑ. The extension of a bounded operator TonHsatisfies ∥TC∥=∥T∥and∥TC∥2 HS= 2∥T∥2 HSin case of a Hilbert-Schmidt operator. In order to ease notation, the superscript Cwill be dropped from now on and functional calculus for operators will be implicitly complexified. Let us emphasise that the observation model and in particular the Gaussian noise is always defined over the real numbers. By B¨ uhler & Salamon (2018, Theorem 6.3.11) the normal operators Aϑon
the complex Hilbert space Hsatisfy dom( Aϑ) = dom( A∗ ϑ) and can be decomposed by two self-adjoint operators Rϑ: dom( Rϑ)→H,Jϑ: dom( Jϑ)→Hwith dom( Aϑ) = dom( Rϑ)∩dom( Jϑ) such that Aϑv=Rϑv+iJϑv, A∗ ϑv=Rϑv−iJϑv,∥Aϑv∥2=∥Rϑv∥2+∥Jϑv∥2 for all v∈dom( Aϑ). Note that any two operators from Aϑ, A∗ ϑ, Rϑ, Jϑcommute. Since Aϑis normal, we can use its spectral measure to define f(Aϑ) for measurable f:C→C(Birman & Solomjak, 2012; Schm¨ udgen, 2012), in particular Rϑ= Re( Aϑ) and Jϑ= Im( Aϑ) hold on dom( Aϑ). Frequently, we shall make use of the bound ∥f(Aϑ)∥⩽∥f∥∞for bounded f. Moreover, we shall employ the Bochner integral over Banach space-valued functions (e.g. with values in a Hilbert space or a space of bounded linear operators) and use standard properties, as exposed e.g. in Engel & Nagel (2000, Appendix C). We lift linear operators LonHnaturally to L2(H) by setting pointwise ( Lf)(t) :=L(f(t)) for f∈L2(H). 3.4 Lemma. In the setting of Lemma 3.2, we have the explicit representation Cϑf(t) =1 2ZT 0eiJϑtZt+s |t−s|eRϑvdv e−iJϑsf(s)ds. 8 Proof. The asserted identity follows from Fubini’s theorem for Bochner integrals with continuous integrands: Cϑf(t) =SϑS∗ ϑf(t) =Zt 0eAϑ(t−u)ZT ueA∗ ϑ(s−u)f(s)ds du =ZT 0Zt∧s 0eAϑt−2Rϑu+A∗ ϑsdu f(s)ds =ZT 0eiJϑtZt∧s 0eRϑt−2Rϑu+Rϑsdu e−iJϑsf(s)ds =1 2ZT 0eiJϑtZt+s |t−s|eRϑvdv e−iJϑsf(s)ds. We proceed to bound the operator norm of RϑSϑonL2(H). This is analogous to the con- volution operator f7→RT 0Re(λ)eλ(•−s)f(s)dsonL2([0, T];C) with the classical norm bound ∥Re(λ)eRe(λ)•∥L1([0,T]). The proof involves, however, more involved functional calculus and ten- sorisation. 3.5 Proposition. With (x)+=x∨0we have for the operator norm in L2(H) ∥RϑSϑ∥2=∥S∗ ϑRϑ∥2⩽3 4+1 4e2∥(Rϑ)+∥T=:α2 ϑ. (3.2) Proof. See Section B.1. We have a tighter bound in the contractive setting where Rϑ≼0. In the sequel, we also write short|a|−p T:=|a|−p∧Tpand|a|p T−1:=|a|p∨T−p= (|a|−p T)−1fora∈C,p, T > 0 with |0|−p T:=Tp. 3.6 Corollary. 
IfRϑ≼0holds, then we can bound ∥|Rϑ|T−1Sϑ∥=∥S∗ ϑ|Rϑ|T−1∥⩽1. (3.3) Proof. ForRϑ≼0 we have ( Rϑ)+= 0 and αϑ= 1 in Proposition 3.5. By functional calculus, Πϑ,T:=1(Rϑ∈[−T−1,0]) and Id −Πϑ,Tare orthogonal projections in L2(H) (Rϑbeing lifted from HtoL2(H)). Since Sϑcommutes with Rϑ, it also commutes with Π ϑ,Tso that by Proposition 3.5 for f∈L2(H) ∥(Id−Πϑ,T)|Rϑ|Sϑf∥L2(H)=∥−RϑSϑ(Id−Πϑ,T)f∥L2(H)⩽∥(Id−Πϑ,T)f∥L2(H) follows. On the other hand, we have by direct calculation ∥Πϑ,TSϑf∥2 L2(H)=ZT 0 Zt 0eAϑ(t−s)Πϑ,Tf(s)ds 2 dt ⩽ZT 0Zt 0∥eAϑ(t−s)∥∥Πϑ,Tf(s)∥ds2 dt ⩽T2∥Πϑ,Tf∥2 L2(H), where ∥eAϑ(t−s)∥=∥eRϑ(t−s)∥⩽1 was used due to Rϑ≼0. Combining the two bounds, we obtain the claim, writing (Id −Πϑ,T)|Rϑ|+T−1Πϑ,T=|Rϑ|T−1. In view of Proposition 2.3 we can bound the Hellinger distance between the observations in (1.5) for ϑ0,ϑ1by bounding the Hilbert-Schmidt norm of Q−1/2 0Q1/2 1−(Q−1/2 1Q1/2 0)∗onL2(H). Our aim is to find tight bounds only involving the generators A0andA1onH. We establish simple properties of the operator Sϑ, which are similar to the well known solution theory for non-homogeneous Cauchy problems (Da Prato & Zabczyk, 2014, Appendix 3). We use theH-valued Sobolev space H1([0, T];H) =H1(H) and its subspaces H1 0(H),H1 T(H) of functions f∈ H1(H) vanishing in 0 and in T, respectively, see Appendix A for more details. 9 3.7 Proposition. Suppose dom( Aϑ) = dom( Rϑ)for some ϑ∈Θ. Then: (a)Sϑ, S∗ ϑmapL2([0, T];H)intoL2([0, T]; dom( Aϑ)),SϑmapsH1(H)intoH1 0(H)andS∗ ϑmaps H1(H)intoH1 T(H). (b) Let ∂tdenote
the time derivative. Then we have the identities (∂t−Aϑ)Sϑ= (−∂t−A∗ ϑ)S∗ ϑ= IdonH1(H)andSϑ(∂t−Aϑ) =S∗ ϑ(−∂t−A∗ ϑ) = Id onH1 0(H)∩L2([0, T]; dom( Aϑ)). (c) We have on L2(H) S1−S0=S0(A1−A0)S1=S1(A1−A0)S0. Proof. See Section B.1. The perturbation property in (c) will be essential for us. The assumption dom( Aϑ) = dom( Rϑ) is required to have all evaluations well defined. Since always dom( Aϑ)⊆dom( Rϑ) holds, it means |Jϑ|≼C(|Rϑ|+ Id) for some constant C >0 and Aϑis a sectorial operator. The next step is to find an appropriate upper bound for Q−1/2 ϑ, appearing in the Hellinger bound. To do so, we assume that there are injective operators ¯Bϑ≻0, commuting with Aϑ, such that ¯B2 ϑ≽B∗Bor equivalently ∥¯B−1 ϑB∗∥⩽1. (3.4) Assuming ¯Bϑto be injective is not very restrictive. If ¯Bϑis not invertible, we can add δId and then let δ↓0 in the final lower bound. 3.8 Lemma. For an invertible operator ¯Bϑ, commuting with Aϑand satisfying (3.4), we have (BSϑ)∗Q−1 ϑBSϑ≼(ε2 ϑR2 ϑ¯B−2 ϑ+ Id)−1 forε >0, where εϑ:=ε/αϑwithαϑfrom (3.2). If additionally Rϑ≼0, then we even have (BSϑ)∗Q−1 ϑBSϑ≼(ε2|Rϑ|2 T−1¯B−2 ϑ+ Id)−1. Proof. Proposition 3.5 yields Id ≽α−2 ϑSϑR2 ϑS∗ ϑ, whence by Lemma 3.2 and the property (3.4) of ¯Bϑ Qϑ=ε2Id +BCϑB∗≽B(ε2¯B−2 ϑ+SϑS∗ ϑ)B∗≽BSϑ(ε2 ϑR2 ϑ¯B−2 ϑ+ Id)( BSϑ)∗. Therefore the first claim follows from Lemma C.2(b) below. The second claim follows when Corol- lary 3.6 is used instead of Proposition 3.5. We arrive at our main general result. For this let f(λ) = (R1 0Rt 0eλvdvdt)1/2, i.e. f(λ) = (eλ−1−λ λ2)1/2forλ∈R\{0}andf(0) = 1 /√ 2. 3.9 Theorem. Assume for ϑ∈ {ϑ0, ϑ1}thatdom( Aϑ) = dom( Rϑ)and that there are ¯Bϑ≻0 which commute with Aϑand satisfy (3.4). Then: H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽T2 2∥(ε2 0R2 0¯B−2 0+ Id)−1/2(A1−A0)f(2TR1)(ε2 1R2 1¯B−2 1+ Id)−1/2∥2 HS +T2 2∥(ε2 1R2 1¯B−2 1+ Id)−1/2(A1−A0)f(2TR0)(ε2 0R2 0¯B−2 0+ Id)−1/2∥2 HS. 
If additionally R1≼0andR0≼0hold, then we even have H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽T 4∥(ε2|R0|2 T−1¯B−2 0+ Id)−1/2(A1−A0)|R1|−1/2 T−1(ε2|R1|2 T−1¯B−2 1+ Id)−1/2∥2 HS +T 4∥(ε2|R1|2 T−1¯B−2 1+ Id)−1/2(A1−A0)|R0|−1/2 T−1(ε2|R0|2 T−1¯B−2 0+ Id)−1/2∥2 HS. 10 Proof. We apply Proposition 2.3 with the interpretation of Remark 2.4(b), Lemma 3.2 on the form of the covariance operator Qϑand the formula from Proposition 3.7(c) consecutively to obtain: H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽1 4∥Q−1/2 0(Q1−Q0)Q−1/2 1∥2 HS(L2([0,T];H)) =1 4∥Q−1/2 0B(S1S∗ 1−S0S∗ 0)B∗Q−1/2 1∥2 HS(L2(H)) =1 4∥Q−1/2 0B((S1−S0)S∗ 1+S0(S1−S0)∗)B∗Q−1/2 1∥2 HS(L2(H)) =1 4∥Q−1/2 0BS0((A1−A0)S1+ ((A1−A0)S0)∗)(BS1)∗Q−1/2 1∥2 HS(L2(H)) In view of the monotonicity in Lemma C.2(a) below this can be bounded by Lemma 3.8 for ε >0 and by the polar decomposition BSϑ=Q1/2 ϑUwith unitary Uforε= 0 such that H2(Ncyl(0, Q0),Ncyl(0, Q1)) (3.5) ⩽1 2∥(ε2 0R2 0¯B−2 0+ Id)−1/2(A1−A0)S1(ε2 1R2 1¯B−2 1+ Id)−1/2∥2 HS(L2(H)) +1 2∥(ε2 0R2 0¯B−2 0+ Id)−1/2((A1−A0)S0)∗(ε2 1R2 1¯B−2 1+ Id)−1/2∥2 HS(L2(H)). Given the form of S1in Lemma 3.2, we obtain from the Hilbert-Schmidt norm for kernel operators in Lemma C.3 below ∥(ε2 0R2 0¯B−2 0+ Id)−1/2(A1−A0)S1(ε2 1R2 1¯B−2 1+ Id)−1/2∥2 HS(L2(H)) =ZT 0Zt 0∥(ε2 0R2 0¯B−2 0+ Id)−1/2(A1−A0)eA1(t−v)(ε2 1R2 1¯B−2 1+ Id)−1/2∥2 HS(H)dvdt. Now, A1, R1,¯B1commute and by the linearity of the trace the last expression equals ZT 0Zt 0trace (A1−A0)∗(ε2 0R2 0¯B−2 0+ Id)−1(A1−A0)e2R1u(ε2 1R2 1¯B−2
1+ Id)−1 dudt = trace (A1−A0)∗(ε2 0R2 0¯B−2 0+ Id)−1(A1−A0)T2f(2TR1)2(ε2 1R2 1¯B−2 1+ Id)−1 =T2∥(ε2 0R2 0¯B−2 0+ Id)−1/2(A1−A0)f(2TR1)(ε2 1R2 1¯B−2 1+ Id)−1/2∥2 HS. Changing the roles of ϑ0andϑ1and using ∥F∥2 HS=∥−F∗∥2 HSfor Hilbert–Schmidt operators F, we obtain an analogous bound for the second summand in (3.5). This proves the first inequality. The improved bound for Rϑ≼0 follows the same way, but using the second inequality of Lemma 3.8 for bounding (3.5) and using Lemma C.2(a) below with f(2TRϑ)2≼1 2T−1|Rϑ|−1 T, which follows from f(−x)2= (e−x−1 +x)/x2⩽1 2∧ |x|−1forx >0. 4 Parametric minimax rates We consider a slightly simpler model for XanddYthan (1.4), (1.5), in particular assuming a real parameter ϑand commutativity of all operators. We construct an estimator whose error rate attains under mild restrictions the lower bound. This establishes the minimax rate in a quite general parametric setting. For each n∈Nlet the observations ( dYt, t∈[0, Tn]) be given by dYt=BnXtdt+εndVtwith dXt=AϑXtdt+dWt (4.1) andX0= 0,Tn⩾1,εn∈[0,1],ϑ∈[ϑn,¯ϑn] for ¯ϑn> ϑn⩾0. The parametrisation of the operator Aϑis given by Aϑ=Aϑ,n=Mn+ϑΛ with known possibly unbounded linear operators Mn,Λ. 11 4.1 Assumption. (a)Mnis selfadjoint and Λis normal with Mn≾0,Re(Λ) ≾0. Moreover, dom( Aϑ) = dom( Rϑ) holds for ϑ∈[ϑn,¯ϑn]. (b)Bnis a bounded and invertible selfadjoint operator. (c) All three operators Mn,ΛandBncommute and allow for a common functional calculus. (d)Aϑhas a compact resolvent for ϑ∈[ϑn,¯ϑn]. We abbreviate ¯An=A¯ϑn,¯Rn=R¯ϑn,¯Jn=J¯ϑn. ByMn≾0, Re(Λ) ≾0 and Jϑ=ϑIm(Λ), we have the ordering |Rϑ|≼|¯Rn|,|Aϑ|≼|¯An|under Assumption 4.1(a). For later applications note that under Assumption 4.1(c) observing (4.1) is equivalent to observing dYt=˜Xtdt+εndVtwith d˜Xt=Aϑ˜Xtdt+BndWt (4.2) with ˜X0= 0 as in Example 3.1. We obtain immediately a general lower bound for this setting. 4.2 Theorem. Grant Assumption 4.1(a-c). 
Let vn=T−1/2 n trace |Λ|2(ε4 n|¯Rn|4 T−1 nB−4 n+ Id)−1|¯Rn|−1 Tn−1/2 , and assume ¯ϑn−ϑn⩾vn>0. Then the lower bound inf ˆϑnsup ϑ∈[ϑn,¯ϑn]Pϑ v−1 n|ˆϑn−ϑ|⩾2−7/2 ⩾2−√ 3 4, holds where the infimum is taken over all estimators based on observing (4.1). Hence, estimators ˆϑnthat fulfill ˆϑn−ϑ=OPϑ(vn)uniformly over ϑ∈[ϑn,¯ϑn]are minimax rate-optimal. Proof. By Theorem 3.9 for R0, R1≼0 we have for ϑ0=¯ϑn−2−5/2vnandϑ1=¯ϑnin our setting, noting ϑ0, ϑ1∈[ϑn,¯ϑn],|R0|≼|R1|and the commutativity, H2(Ncyl(0, Q0),Ncyl(0, Q1))⩽Tn2−5v2 n∥(ε2 n|R0|2 T−1 nB−2 n+ Id)−1Λ|R0|−1/2 Tn∥2 HS ⩽Tnv2 ntrace |Λ|2(ε4 n24|R0|4 T−1 nB−4 n+ Id)−11 2|R0|−1 Tn . We have |R0|−1≼¯ϑn¯ϑn−2−5/2vn|¯Rn|−1≼2|¯Rn|−1so that our choice of vnyields H2(Ncyl(0, Q0),Ncyl(0, Q1))⩽1. Therefore the result follows from Theorem 2.1 with δ= 2−5/2vn. Estimation of the parameter ϑin (4.1) is non-trivial. In the finite-dimensional case the laws of the Ornstein-Uhlenbeck processes are equivalent for different ϑand estimation of ϑcould be based on standard filter theory. Even then, however, the maximum-likelihood estimator (MLE) is not explicit and a one-step MLE based on a preliminary estimator is preferred (Kutoyants & Zhou, 2021). In the infinite-dimensional setting, e.g. for the stochastic heat equation with Aϑ=ϑ∆, the laws of ( Xt, t∈[0, Tn]) are often singular for different parameters ϑ, see also the interesting study by Hildebrandt & Trabs (2021) for one-dimensional SPDEs under different observation schemes. In that case the observations ( dYt, t∈[0, Tn]) only become equivalent through the action of the observation noise dVtand the likelihood becomes intractable besides an abstract
Gaussian approach. Our estimation procedure is based on a preaveraging method, where the observational noise is first reduced by local averaging and then a regression-type estimator is applied to the averaged data. A similar approach is employed in nonparametric drift estimation for diffusions under noise by Schmisser (2011), yet there the measurement noise level persists in the final rate, which we avoid, when specialised to the parametric Ornstein-Uhlenbeck case. 12 If we observed Xdirectly, a natural estimation approach for ϑwould be to regress dXt− MnXtdt=ϑΛXtdt+dWton Λ Xtdtover t∈[0, T]. Given the noisy observations dYof X, we smooth out the observational noise dVby regressing the averageRT 0(∂tK(n)(t, s)− MnK(n)(t, s))dYson Λ dYtwith an operator-valued kernel K(n)(t, s), similarly to estimation in instrumental variable regression. We require K(n)(t, s) = 0 for s > t so that the stochas- tic integral is taken over s∈[0, t] to keep the martingale structure. Integration by parts with (∂t−Mn)∗=−(∂t+Mn) and vanishing boundary terms leads to our estimator. 4.3 Definition. For the operator-valued kernel K(n)given by K(n)(t, s) =Knψ(n) t−s,s(¯An), t, s ∈[0, T],with ψ(n) v,s(a) =v+∧ |a|−1 Tn−s−v +, Kn=B2 n ε4 n|¯Rn|T−1 n+B4 n|¯An|−3 Tn−1|¯An|−1 Tn, consider the estimator ˆϑn:= (Zn/Nn)1(Nn̸= 0) with Zn=−ZTn 0Zt 0⟨(∂tΛK(n)(t, s) +MnΛK(n)(t, s))dYs, dYt⟩, (4.3) Nn=ZTn 0Zt 0⟨|Λ|2K(n)(t, s)dYs, dYt⟩ (4.4) based on observing dYtaccording to (4.1). A detailed analysis yields the main parametric upper bound. 4.4 Theorem. Grant Assumption 4.1(a-d). Assume that In(ϑ) :=Tntrace |Λ|2 ε4 n|¯Rn|T−1 n|¯An|3 T−1 nB−4 n+ Id−1|Rϑ|−1 Tn is finite for all nwith limn→∞In(ϑ) =∞and Tntrace |Λ|4 ε4 n|¯Rn|T−1 n|¯An|3 T−1 nB−4 n+ Id)−2 ε4 n|¯An|T−1 nB−4 n+|Rϑ|−3 Tn =o(In(ϑ)2) (4.5) uniformly over ϑ∈[ϑn,¯ϑn]. Then ˆϑnis well defined and ˆϑn−ϑ=OPϑ In(ϑ)−1/2 holds uniformly over ϑ∈[ϑn,¯ϑn]. Proof. See Section B.2 below. 4.5 Remark. 
IfIn(ϑ) =∞holds already for finite n, then ϑcan be usually identified non- asymptotically. To see this, denote by Π jthe orthogonal projection from Honto the eigenspace Vjfor the first jeigenvalues of Aϑ(ordered arbitrarily, taking account of multiplicities). Remark that Aϑhas pure point spectrum under Assumption 4.1(d). Then for fixed nwe can consider the projected finite-dimensional observations Π jdYtofdYtfrom (4.1) on Vj, involving Π jdXt= Aϑ,jXtdt+dWt,jwith finite-rank operator Aϑ,j= Π jAϑandVj-valued Brownian motion Wt,j= ΠjWt. The corresponding estimator ˆϑn,jconverges for j→ ∞ toϑin probability (or identify ϑ) if Tntrace |Λj|4 ε4 n|¯Rn,j|T−1 n|¯An,j|3 T−1 nB−4 n,j+ Id)−2 ε4 n|¯An,j|T−1 nB−4 n,j+|Rϑ,j|−3 Tn =o(In,j(ϑ)2) holds with Fj:= Π jF|Vjdenoting the projection of an operator Fonto Vj. The convergence follows from Theorem 4.4 with index jinstead of nbecause dim( Vj)<∞andS j⩾1Vj=Himply In,j(ϑ)<∞andIn,j(ϑ)↑ In(ϑ) =∞asj→ ∞ . 13 We can find simple sufficient conditions for rate optimality. 4.6 Corollary. Grant the assumptions of Theorem 4.4. If |¯Jn|≼C|¯Rn|holds for a constant C >0, independent of n, then we have ˆϑn−ϑ=OPϑ(In(ϑ)−1/2)uniformly over ϑ∈[ϑn,¯ϑn]with In(ϑ)∼Tntrace |Λ|2(ε4 n|¯Rn|4 T−1 nB−4 n+ Id)−1|Rϑ|−1 Tn (4.6) andˆϑnis minimax rate-optimal if ϑn+In(¯ϑn)−1/2⩽¯ϑn≲ϑn. Moreover, in this case a sufficient condition for (4.5) is Tntrace |Λ|4 ε4 n|¯Rn|4 T−1 nB−4 n+ Id)−1|Rϑ|−3 Tn =o(I(ϑ)2) (4.7) uniformly for ϑ∈[ϑn,¯ϑn], which is satisfied for ∥Λ|Rϑn|−1 Tn∥≲1. Proof. From|¯Jn|≼C|¯Rn|we infer |¯Rn|≼|¯An|≼(C+ 1)|¯Rn|and hence in Theorem 4.4 In(ϑ)∼Tntrace |Λ|2
ε4 n|¯Rn|4 T−1 nB−4 n+ Id−1|Rϑ|−1 Tn . The minimax optimality follows directly from the lower bound in Theorem 4.2 because |Rϑ|has the same order for all ϑ∈[ϑn,¯ϑn] by¯ϑn≲ϑn. Using that |¯Rn|is of the same order as |¯An|and|Rϑ|≼|¯Rn|, the left-hand side of Condition (4.5) is at most of order Tntrace |Λ|4 ε4 n|¯Rn|4 T−1 nB−4 n+ Id)−1|Rϑ|−3 Tn ⩽In(ϑ)∥|Λ|2|Rϑ|−2 Tn∥. For∥Λ|Rϑn|−1 Tn∥≲1 this bound has the order In(ϑ) =o(In(ϑ)2) due to |Rϑ|−1 Tn≼|¯Rϑn|−1 Tnand In(ϑ)→ ∞ . This yields the two sufficient conditions for (4.5). If the spectrum of the real part Rϑis asymptotically approaching zero, then the upper bound may not match the lower bound. The reason is that the kernel K(n)(t, s) used for ˆϑninvolves an indicator 1(|¯An|⩽(t−s)−1), on which event Re( eAϑ(t−s))≽e−1cos(1) Id is ensured and thus E[Nn]≳In(ϑ) in Proposition B.3 below. For unknown Jϑand significantly smaller Rϑit remains an intriguing open question how to attain the lower bound. Yet, in our cases of interest it concerns only transport estimation under a very small diffusivity asymptotics, see Remark 5.6 below. 5 Fundamental parametric examples 5.1 Scalar Ornstein-Uhlenbeck processes Consider the scalar case H=Rof (4.2) where we observe ( dYt, t∈[0, Tn]) for each n∈Ngiven by dYt=Xtdt+εndVtwith dXt=−ϑXtdt+σndWt, X0= 0. (5.1) That is, we have Mn= 0, Λ = −1 and Bn=σn>0. The parametric theory yields directly upper and lower bounds. 5.1 Proposition. Assume ¯ϑn−ϑn⩾vn(¯ϑn)with vn(ϑ) := ( ϑ1/2T−1/2 n∨T−1 n)(ε2 nσ−2 n¯ϑ2 n∨1)→0asn→ ∞ . Then for a constant c >0we have the lower bound lim inf n→∞inf ˆϑnsup ϑ∈[ϑn,¯ϑn]Pϑ vn(¯ϑn)−1|ˆϑn−ϑ|⩾c >0, where for each nthe infimum is taken over all estimators ˆϑnbased on (5.1). 14 IfTnϑn(1 +ε4 nσ−4 n¯ϑ4 n)−1→ ∞ , then the estimator ˆϑnfrom Definition 4.3 satisfies ˆϑn−ϑ=OPϑ(vn(ϑ))uniformly over ϑ∈[ϑn,¯ϑn]. In particular, for ϑnσ−1 nεn≲1andTnϑn→ ∞ the rate vn(ϑn) =ϑ1/2 nT−1/2 n is minimax optimal, even locally over [ϑn−vn, ϑn]. Proof. 
The lower bound follows directly from Theorem 4.2 for the Ornstein-Uhlenbeck case. For the upper bound we obtain in Theorem 4.4, due to $\bar\vartheta_n \geqslant \vartheta_n \gtrsim T_n^{-1}$,
$$I_n(\vartheta) \sim T_n\big(\varepsilon_n^4\bar\vartheta_n^4\sigma_n^{-4}+1\big)^{-1}\big(\vartheta^{-1}\wedge T_n\big) \sim v_n(\vartheta)^{-2}.$$
It remains to check Condition (4.5), which here reads
$$T_n\big(\varepsilon_n^4\bar\vartheta_n^4\sigma_n^{-4}+1\big)^{-1}\big(\vartheta^{-3}\wedge T_n^3\big) = o\Big(T_n^2\big(\varepsilon_n^4\bar\vartheta_n^4\sigma_n^{-4}+1\big)^{-2}\big(\vartheta^{-2}\wedge T_n^2\big)\Big) \iff \varepsilon_n^4\bar\vartheta_n^4\sigma_n^{-4}+1 = o(T_n\vartheta\vee 1).$$
This holds uniformly over $\vartheta$ by $T_n\vartheta_n(1+\varepsilon_n^4\sigma_n^{-4}\bar\vartheta_n^4)^{-1}\to\infty$ and yields the upper bound. Applying the lower bound to $\bar\vartheta_n=\vartheta_n$ and lower endpoint $\vartheta_n-v_n(\vartheta_n)$ yields minimax optimality due to $\vartheta_n-v_n(\vartheta_n)\sim\vartheta_n\geqslant 0$.
Let us discuss this minimax rate in different cases, assuming first $\varepsilon_n=0$ (no noise). Then the rate $T_n^{-1}\vee\vartheta_n^{1/2}T_n^{-1/2}$ combines the asymptotic Fisher information bound $\sqrt{2\vartheta/T_n}$ for fixed $\vartheta>0$ (positive recurrent case) as $T_n\to\infty$ with the rate $T_n^{-1}$ for $\vartheta=0$ (null recurrent case) (Kutoyants, 2004, Example 2.14, Prop. 3.46). For the upper bound we need the condition $T_n\vartheta_n\to\infty$ to stabilise the denominator in $\hat\vartheta_n$ around its expectation. A more specific analysis, allowing for a diffuse distributional limit, would avoid this condition in a classical way. In the noisy case $\varepsilon_n>0$ we see that for the asymptotics $T_n\to\infty$ with $\varepsilon_n,\sigma_n,\bar\vartheta_n$ fixed the rate does not suffer from noisy observations even with
constant noise intensity, which for fixed $\vartheta$ is also well known (Kutoyants, 2004, Example 3.3). In view of the spectral approach to SPDE statistics (Huebner & Rozovskii, 1995), where the operator $A_\vartheta$ is unbounded and each Fourier mode forms an Ornstein-Uhlenbeck process with drift given by the eigenvalues of $A_\vartheta$, we note that the noise level intervenes for the asymptotics $\vartheta_n\sigma_n^{-1}\varepsilon_n\to\infty$.

5.2 Stochastic evolution equations without noise

For $\varepsilon=0$, $B=\mathrm{Id}$ and $R_0,R_1\preccurlyeq 0$ Theorem 3.9 shows that $H^2(\mathcal N_{\mathrm{cyl}}(0,C_0),\mathcal N_{\mathrm{cyl}}(0,C_1))$ is bounded by
$$\frac T4\Big(\|(A_1-A_0)|R_1|^{-1}_{T^{-1}}\|_{\mathrm{HS}}^2+\|(A_1-A_0)|R_0|^{-1}_{T^{-1}}\|_{\mathrm{HS}}^2\Big).$$
Let us consider the case of selfadjoint, negative $A_0$ and $A_1$ that can be jointly diagonalised with common eigenvectors $v_k$ and negative eigenvalues $\lambda_{k,0}$ and $\lambda_{k,1}$, respectively. Then
$$H^2(\mathcal N_{\mathrm{cyl}}(0,C_0),\mathcal N_{\mathrm{cyl}}(0,C_1))\leqslant T\sum_{k=1}^\infty\frac{(\lambda_{k,0}-\lambda_{k,1})^2}{|\lambda_{k,0}|\wedge|\lambda_{k,1}|}$$
follows. A well-known example is given by the fractional Laplacian $A_0=-(-\Delta)^{m/2}$ and $A_1=A_0-\delta(-\Delta)^{m_1/2}$ for the Laplace operator $\Delta$ on a smooth bounded domain $\Lambda\subseteq\mathbb R^d$ with Dirichlet boundary conditions and $m\geqslant m_1\geqslant 0$, $\delta>0$. Then $|\lambda_{k,0}|\wedge|\lambda_{k,1}|=|\lambda_{k,0}|\sim k^{m/d}$ and $|\lambda_{k,1}-\lambda_{k,0}|\sim\delta k^{m_1/d}$ hold by the Weyl asymptotics of Lemma C.5 below. Using that $A_0,A_1$ have the same eigenfunctions, we obtain
$$H^2(\mathcal N_{\mathrm{cyl}}(0,C_0),\mathcal N_{\mathrm{cyl}}(0,C_1))\lesssim T\delta^2\sum_{k=1}^\infty k^{(2m_1-m)/d}. \quad (5.2)$$
Consequently, the equivalence statement of Proposition 2.3 yields

5.2 Proposition. For $A_0=-(-\Delta)^{m/2}$ and $A_1=A_0-\delta(-\Delta)^{m_1/2}$ with
$$m_1<(m-d)/2 \quad (5.3)$$
and $\delta>0$ the laws $\mathcal N_{\mathrm{cyl}}(0,C_0)$ and $\mathcal N_{\mathrm{cyl}}(0,C_1)$ are equivalent.

For $A_0=\Delta$ ($m=2$), (5.3) holds if $d=1$ and $m_1<1/2$. This is exactly the Huebner & Rozovskii (1995, Cor. 2.3) condition for equivalence of the laws $\mathcal N_{\mathrm{cyl}}(0,C_0),\mathcal N_{\mathrm{cyl}}(0,C_1)$. In the asymptotics $T\to\infty$ we can then estimate $\vartheta$ in $A_\vartheta=A_0-\vartheta(-\Delta)^{m_1/2}$ at best with rate $T^{-1/2}$, setting $\delta\sim T^{-1/2}$ in the lower bound of Theorem 2.1. This asymptotic result slightly extends a result in Huebner & Rozovskii (1995) for fixed $T$. In the sequel we are mainly interested in the noisy case $\varepsilon>0$, where equivalence of observation laws holds in far greater generality.
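Condition (5.3) is exactly the summability threshold for the series in (5.2): the terms $k^{(2m_1-m)/d}$ have a finite sum precisely when $(2m_1-m)/d<-1$, i.e. $m_1<(m-d)/2$. This can be illustrated numerically; the parameter values below ($d=1$, $m=2$, $m_1\in\{0.4,0.6\}$, so the threshold is $m_1<1/2$) are illustrative choices, not taken from the text:

```python
import numpy as np

def partial_sum(m1, m, d, N):
    """Partial sum up to N of the series sum_k k^((2*m1 - m)/d) from (5.2)."""
    k = np.arange(1, N + 1, dtype=float)
    return np.sum(k ** ((2 * m1 - m) / d))

# d = 1, m = 2 (Laplacian): condition (5.3) reads m1 < 1/2.
convergent = [partial_sum(0.4, 2, 1, N) for N in (10**4, 10**6)]  # m1 < 1/2
divergent = [partial_sum(0.6, 2, 1, N) for N in (10**4, 10**6)]   # m1 > 1/2

# Below the threshold the partial sums stabilise; above it they keep
# growing like N^{1+(2*m1-m)/d}.
print(convergent[1] / convergent[0], divergent[1] / divergent[0])
```

Increasing the truncation point by a factor $100$ barely changes the convergent sum, while the divergent one grows by a clear multiplicative factor.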
Still, the Hellinger bound will become infinite for differential operators in large dimensions. 5.3 Parameter in the fractional Laplacian We study a generalisation of the classical stochastic heat equation dXt=ϑ∆Xtdt+dWtwhere the diffusivity ϑ >0 is the target of estimation. For each n∈Nlet the observations ( dYt, t∈[0, Tn]) be given by dYt=Xtdt+εndVtwith dXt=−ϑ(−∆)ρXtdt+ (Id +( −∆)ρ)−βdWt, (5.4) with X0= 0,Tn⩾1,εn∈[0,1],ϑ, ρ > 0 and β⩾0. The Laplacian ∆ on a d-dimensional domain Dis supposed to have eigenvalues −λk∼k2/daccording to the Weyl asymptotics. By Lemma C.5 this holds for a Laplacian with Dirichlet, Neumann or periodic boundary conditions, or on a compact d-dimensional manifold without boundary. We are in the general parametric setting of (4.2) with Mn= 0, Λ = −(−∆)ρ,Bn= (Id +( −∆)ρ)−βandJϑ= 0. We study the dependence of the estimation rate on the dimension d, the fractional index ρ, the dynamic noise correlation index β, the observational noise level εnand the observation time Tn. We apply Corollary 4.6 with In(ϑ) =Tntrace (−∆)2ρ(ε4 n¯ϑ4 n(−∆)4ρ(Id +(−∆)ρ)4β+ Id)−1(ϑ−1(−∆)−ρ∧Tn) For a fixed range of parameters ϑ∈[ϑ,¯ϑ] with ¯ϑ > ϑ >0 we obtain In(ϑ)∼Tntrace ((−∆)ρ∧Tn(−∆)2ρ)(ε4 n(−∆)4ρ(1+β)+ Id)−1 ∼TnX k:λρ+βρ k⩽ε−1 nλρ k+ε−4 nX k:λρ+βρ k>ε−1 nλ−3ρ−4βρ k ∼Tnε−(2ρ+d)/(2ρ(1+β)) n , whenever d <(6 + 8 β)ρ. From Corollary 4.6, noting ∥Λ|Rϑ|−1 Tn∥⩽ϑ−1∼1, we thus deduce the following minimax rate. 5.3 Proposition. Assume d <(6 + 8
β)ρ and the asymptotics $v_n:=T_n^{-1/2}\varepsilon_n^{(2\rho+d)/(4\rho(1+\beta))}\to 0$ as $n\to\infty$. Then uniformly over $\vartheta\in[\underline\vartheta,\bar\vartheta]$ the estimator $\hat\vartheta_n$ from Definition 4.3 satisfies $\hat\vartheta_n-\vartheta=O_{P_\vartheta}(v_n)$ and the rate $v_n$ is minimax optimal.

The rate for $\varepsilon_n\to 0$ becomes faster with the dimension $d$ and slower with the fractional index $\rho$ and the dynamic correlation index $\beta$. The fact that the rate slows down for larger $\beta$, i.e. with more correlation in the dynamic noise, reflects classical behaviour, in contrast to the inverse scaling of the dynamic noise from Example 3.1. In the fundamental case of the classical white-noise stochastic heat equation with $\rho=1$ and $B_n=\mathrm{Id}$ ($\beta=0$) the minimax rate becomes
$$v_n=T_n^{-1/2}\varepsilon_n^{(2+d)/4}\quad\text{for }d\leqslant 5. \quad (5.5)$$
Note that $\vartheta$ is not identifiable for fixed $T_n,\varepsilon_n>0$ in dimension $d\leqslant 5$ because the observation laws are equivalent. This is in stark contrast to the noiseless case $\varepsilon_n=0$, where $\vartheta$ is identifiable for fixed $T_n$ (Huebner & Rozovskii, 1995). In case $d\geqslant 6$ (or generally $d\geqslant(6+8\beta)\rho$) one can check by Remark 4.5 that $I_n(\vartheta)=\infty$ holds and $\vartheta$ is identifiable already non-asymptotically, which means that the observation laws are singular for different $\vartheta$.

5.4 Parameter in the transport term

We consider the estimation of the first-order coefficient $\vartheta$ in a second-order differential operator $A_\vartheta$. This is exemplified by $A_\vartheta=\nu_n\Delta+\vartheta\partial_\xi$ with the directional derivative $\partial_\xi=\xi\bullet\nabla=\sum_{j=1}^d\xi_j\partial_{x_j}$ for $\xi\in\mathbb R^d\setminus\{0\}$. For the Laplacian $\Delta$ the operator $A_\vartheta=\nu_n\Delta+\vartheta\partial_\xi$ on the $d$-dimensional torus $[0,1]^d$ with periodic boundary conditions is normal with eigenfunctions $e_\ell(x)=e^{2\pi i\langle\ell,x\rangle}$, $\ell\in\mathbb Z^d$, and corresponding eigenvalues $\lambda_\ell=-(2\pi)^2\nu_n|\ell|^2+2\pi i\vartheta\langle\xi,\ell\rangle$. Thus, for each $n\in\mathbb N$ let the observations $(dY_t,\,t\in[0,T_n])$ be given by $dY_t=X_t\,dt+\varepsilon_n\,dV_t$ with
$$dX_t=(\nu_n\Delta X_t+\vartheta\partial_\xi X_t)\,dt+(\mathrm{Id}-\Delta)^{-\beta}\,dW_t \quad (5.6)$$
and $X_0=0$, $T_n\geqslant 1$, $\varepsilon_n\in[0,1]$, $\vartheta\in[\underline\vartheta,\bar\vartheta]$, $\nu_n\in(0,1]$, $\beta\geqslant 0$. This has the general form (4.2) with $R_\vartheta=M_n=\nu_n\Delta$, $\Lambda=\partial_\xi$, $B_n=(\mathrm{Id}-\Delta)^{-\beta}$.
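The eigenvalue structure on the torus can be checked numerically: a periodic finite-difference discretisation of $\nu_n\Delta+\vartheta\partial_\xi$ applied to a sampled $e_\ell$ reproduces an eigenvalue whose real part $-(2\pi)^2\nu_n|\ell|^2$ carries the diffusion and whose imaginary part ($2\pi\vartheta\langle\xi,\ell\rangle$ for transport coefficient $\vartheta$) carries the transport. A sketch for $d=1$, $\xi=1$ (all numerical values are illustrative assumptions, not from the text):

```python
import numpy as np

# 1-d torus [0,1) with a periodic grid; illustrative values, not from the paper
N, nu, theta, ell = 512, 0.1, 2.0, 3
h = 1.0 / N
x = np.arange(N) * h
e = np.exp(2j * np.pi * ell * x)  # eigenfunction e_ell sampled on the grid

# second-order central differences with periodic wrap-around
lap = (np.roll(e, -1) - 2 * e + np.roll(e, 1)) / h**2  # discrete Laplacian
d1 = (np.roll(e, -1) - np.roll(e, 1)) / (2 * h)        # discrete d/dx (xi = 1)

Ae = nu * lap + theta * d1  # A_theta applied to e_ell
lam = -(2 * np.pi) ** 2 * nu * ell**2 + 2j * np.pi * theta * ell

# Discretisation error is O(h^2), so the relative residual is tiny.
rel_err = np.max(np.abs(Ae - lam * e)) / abs(lam)
print(rel_err)
```

The residual shrinks at rate $h^2$ when the grid is refined, confirming that $e_\ell$ is (up to discretisation) an eigenfunction with the stated eigenvalue.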
Theorem 4.2 yields the minimax lower bound rate vn=T−1/2 n trace |Λ|2(ε4 n¯R4 nB−4 n+ Id)−1|¯Rn|−1 Tn−1/2 ∼T−1/2 nX ℓ∈Zd ⟨ξ, ℓ⟩2(ε4 nν4 n|ℓ|8(1 +|ℓ|2)4β+ 1)−1(ν−1 n|ℓ|−2∧Tn)−1/2 ∼T−1/2 nX k⩾1 (ε−4 nν−4 nk−(8+8β)/d∧1)(ν−1 n∧Tnk2/d)−1/2 . The sum over k⩾(νnεn)−d/(2+2β)equals X k⩾(νnεn)−d/(2+2β)ε−4 nν−4 nk−(8+8β)/dν−1 n∼ν−(d+2+2 β)/(2+2β) n ε−d/(2+2β) n ford <8 + 8 β. The sum over 1 ⩽k <(νnεn)−d/(2+2β)is X 1⩽k<(νnεn)−d/(2+2β)(ν−1 n∧Tnk2/d)≲ν−(d+2+2 β)/(2+2β) n ε−d/(2+2β) n and thus bounded by the sum over larger k. Theorem 4.2 therefore yields a simple lower bound result. 5.4 Proposition. Letd <8 + 8 βand vn=T−1/2 nν(d+2+2 β)/(4+4β) n εd/(4+4β) n . Then there is a constant c >0such that inf ˆϑnsup ϑ∈[ϑ,¯ϑ]Pϑ v−1 n|ˆϑn−ϑ|⩾c ⩾2−√ 3 4 holds for all n∈N, where the infimum is taken over all estimators based on observing (5.6). 17 The lower bound suggests that ϑcan be consistently estimated under any of the three asymp- totics εn→0,Tn→ ∞ ,νn→0. The small diffusivity asymptotics νn→0 quantifies how the SPDE (5.6) approaches the usually singular first order SPDE dXt=ϑ∂ξXtdt+ (Id−∆)−βdWt, see also Gaudlitz & Reiß (2023). The upper bound is more involved because Jϑ=ϑ∂ξis not dominated by Rϑ=νn∆ when νn→0. In Theorem 4.4 we have In(ϑ)∼TnX k⩾1 k2/d ε4 n(νnk2/d∨T−1 n)(νnk2/d+k1/d)3k8β/d+ 1−1(ν−1 nk−2/d∧Tn) Consider indices k∗ nsatisfying k∗ n∼(νnεn)−d/(2+2β)∧(νnε4 n)−d/(5+8β). Let us assume νn(k∗ n)2/d≳T−1 n, that is ν3+8β n≳ε8 nT−5−8β n . Then we can lower bound In(ϑ)≳Tnν−1 nX k∗n⩽k⩽2k∗n ε4 n ν4 nk(8+8β)/d∨νnk(5+8β)/d + 1−1 ∼Tnν−1
nk∗ n∼Tnν−1 n (νnεn)−d/(2+2β)∧(νnε4 n)−d/(5+8β) , where we used that the summands are of order 1 for k∼k∗ n. It remains to check Condition (4.5) in Theorem 4.4. A simple bound of its left-hand side is In(ϑ)∥|Λ|2|Rϑ|−2 Tn∥≲TnIn(ϑ), which for εnνn→0 iso(In(ϑ)2) as required. Otherwise, we have νn, εn∼1 and Tn→ ∞ . In that case Condition (4.5) follows from Tn=o(T2 n). Disentangling the cases thus yields: 5.5 Proposition. Letd <8 + 8 βandν3+8β n≳ε8 nT−5−8β n . Under the asymptotics vn→0for vn:=( T−1/2 nν(2+d+2β)/(4+4β) n εd/(4+4β) n , ifνn⩾ε1/(1+2β) n , T−1/2 nν(5+d+8β)/(10+16 β) n ε2d/(5+8β) n ,ifνn< ε1/(1+2β) n ,(5.7) the estimator ˆϑnfrom Definition 4.3 satisfies uniformly over ϑ∈[ϑ,¯ϑ] ˆϑn−ϑ=OPϑ(vn). The rate vnis minimax optimal for νn≳ε1/(1+2β) n . 5.6 Remark. The results hold generally for real-valued ϑ,¯ϑbecause Rϑis here independent of ϑ, compare the proofs of Theorem 4.2 and 4.4. Forνn=o(ε1/(1+2β) n ) the upper bound does not match the lower bound because the real part Rϑ=νn∆ is too close to zero compared to the imaginary part Jϑ=ϑ∂ξ. If even ν3+8β n = o(ε8 nT−5−8β n ) holds, then the upper bound rate stays the same as for ν3+8β n =ε8 nT−5−8β n . In the fundamental case Bn= Id ( β= 0) and νn>0 fixed, the minimax rate is vn=T−1/2 nεd/4 nford⩽7 (5.8) for all Tn, εn. Compared with the rate (5.5) for a parameter in front of the Laplacian, the rate is by a factor ε−1/2 n slower. Again, for d⩾8 + 8 βwe have In(ϑ) =∞and the parameter ϑis non-asymptotically identifi- able, checking Remark 4.5. 18 5.5 Parameter in the source term Finally, we consider estimation of the coefficient in the zero order term of a second order differential operator Aϑ. We specify Aϑ=ν∆−ϑId which satisfies Aϑ=Rϑ≼0 for ϑ⩾0,ν >0. For each n∈Nlet the observations ( dYt, t∈[0, Tn]) be given by dYt=Xtdt+εndVtwith dXt= (νn∆Xt−ϑXt)dt+ (Id−νn∆)−βdWt (5.9) andX0= 0, Tn⩾1,εn∈[0,1],νn∈(0,1],β⩾0. 
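The information quantity $I_n(\vartheta)$ from Corollary 4.6 for the model (5.9) can be sanity-checked numerically before the analytic computation: truncating the trace sum over Weyl-type eigenvalues $\lambda_k\sim k^{2/d}$ and varying $\varepsilon_n$ should reproduce the $\varepsilon_n^{-(d-2)/(2+2\beta)}$ growth appearing in the rate below. A sketch for $d=3$, $\beta=0$, $\nu_n=1$, $\vartheta=1$ (illustrative choices; the prefactor $T_n$ and all constants are dropped):

```python
import numpy as np

def info_sum(eps, d=3, beta=0.0, nu=1.0, K=2_000_000):
    """Truncated trace sum for I_n(theta)/T_n with Weyl eigenvalues lam_k = k^(2/d).

    Illustrative normalisation: theta = 1, so theta*Id - nu*Delta has
    eigenvalues mu_k = 1 + nu * k^(2/d).
    """
    k = np.arange(1, K + 1, dtype=float)
    mu = 1.0 + nu * k ** (2.0 / d)
    return np.sum((eps**4 * mu ** (4 * (1 + beta)) + 1.0) ** -1 / mu)

# Prediction for d = 3, beta = 0: info_sum(eps) grows like eps^(-1/2),
# so shrinking eps by a factor 16 should inflate the sum by about 16^(1/2) = 4.
r = info_sum(1e-2 / 16) / info_sum(1e-2)
print(r)
```

The observed ratio sits close to the predicted factor $4$, up to lower-order corrections from the small eigenvalues and the discrete sum.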
The Laplacian ∆ on a d-dimensional domain Λ is supposed to have eigenvalues −λk∼k2/daccording to the Weyl asymptotics. By Lemma C.5 this holds for a Laplacian with Dirichlet, Neumann or periodic boundary conditions, or on a compact d-dimensional manifold without boundary. We apply Corollary 4.6 with Mn=νn∆, Λ = Id, Bn= (Id−νn∆)−βand a fixed parameter range ϑ∈[ϑ,¯ϑ],¯ϑ > ϑ >0, so that In(ϑ) =Tntrace ε4 n(ϑId−νn∆)4(Id−νn∆)4β+ Id−1|ϑId−νn∆|−1 Tn ∼Tntrace ε4 n(Id−νn∆)4(1+β)+ Id−1(Id−νn∆)−1 . Given the Weyl asymptotics −λk∼k2/d, we calculate In(ϑ)∼TnX k:νn|λk|⩽11 +X k:1<νn|λk|⩽ε−1/(1+β) n|νnλk|−1 +ε−4 nν−5−4β nX k:νn|λk|>ε−1/(1+β) n|λk|−5−4β ∼Tn ν−d/2 n +ν−1 nX ν−d/2 n<k⩽ν−d/2 nε−d/(2+2β) nk−2/d +ε−4 nν−5−4β nX k>ν−d/2 nε−d/(2+2β) nk−(10+8 β)/d ∼Tnν−d/2 n 1 +ε−(d−2)/(2+2β) n , where we assumed d <10 + 8 βfor summability and d̸= 2 for bounding the second sum by the first and third sum. In the case d= 2 the second sum dominates and In(ϑ)∼Tnν−1 nlog(eε−1 n) holds. Thus Corollary 4.6, noting ∥Λ|Rϑ|−1 Tn∥≲1, gives the following result. 5.7 Proposition. Ford <10 + 8 βand under the asymptotics vn→0for vn:=  T−1/2 nν1/4 n, ifd= 1, T−1/2 nν1/2 n(log(eε−1 n))−1/2,ifd= 2, T−1/2 nνd/4 nε(d−2)/(4+4β) n , ifd⩾3,(5.10) the estimator ˆϑnfrom Definition 4.3 satisfies uniformly over
ϑ∈[ϑ,¯ϑ] ˆϑn−ϑ=OPϑ(vn) and the rate vnis minimax optimal. 5.8 Remark. If the noise covariance operator is Bn= (Id −∆)−β, that is not depending onνn, then similar calculations for εn≲νβ nyield the same rate for d∈ {1,2}, but vn= T−1/2 nν(d+2β)/(4+4β) n ε(d−2)/(4+4β) n for 3⩽d < 10 + 8 β. This is slower in νnthan before which is reasonable in view of Example 3.1 because the noise covariance operator is smaller in the high frequencies. 19 This means that in dimension d= 1 the minimax rate is independent of βandεn. It matches the upper bound derived by Gaudlitz & Reiß (2023) in the vanishing diffusivity regime νn→0 without noise. In the fundamental case β= 0 and νn= 1 the rate is vn=  T−1/2 n, ifd= 1, (Tnlog(eε−1 n))−1/2,ifd= 2, T−1/2 nε(d−2)/4 n , if 3⩽d⩽9.(5.11) The fact that vanishing observation noise εn→0 leads to consistent estimation of the reaction parameter in d⩾2 agrees with the results from Huebner & Rozovskii (1995), where in the noiseless setting εn= 0 the reaction parameter is identified in these dimensions. Generally, for d⩾10 + 8 β we have In(ϑ) =∞andϑis non-asymptotically identifiable by Remark 4.5. 6 Fundamental nonparametric examples 6.1 Space-dependent diffusivity Passing to nonparametric problems, we now consider a space-dependent coefficient in the leading order of the second-order differential operator, more specifically the weighted Laplacian ∆ ϑ= ∇•(ϑ(x)∇) on H=L2(Rd) with space-dependent diffusivity ϑ:Rd→R+in a nonparametric regularity class. For each n∈Nwe consider observations ( dYt, t∈[0, Tn]) given by dYt= (Id−∆)−βXtdt+εndVtwith dXt= ∆ ϑXtdt+dWt, (6.1) where X0= 0,Tn⩾1,β∈[0,1/2],εn∈[0,1], and ϑ(•) belonging to the class Θdif(α, R) :=n ϑ∈C4∨α(Rd) inf x∈Rdϑ(x)⩾1 2;∥ϑ∥Cα⩽Ro , α > 0, R > 1. (6.2) 6.1 Remark. For the domain of the weighted Laplacians we need dom(∆ ϑ) =H2(Rd). Taking into account that by Theorem 6.4 below we only consider d <6 + 8β⩽10, an easy sufficient condition isϑ∈C4(Rd) by Triebel (2010, Section 2.8.2). 
Notice that for different $\vartheta$ the operators $\Delta_\vartheta$ do not commute. The bound $\inf_{x\in\mathbb R^d}\vartheta(x)\geqslant 1/2$ ensures that $(u,v)\mapsto\langle u,(-\Delta_\vartheta)v\rangle=\langle\nabla u,\vartheta\nabla v\rangle$ defines a positive definite bilinear form on $H^1(\mathbb R^d)$. In order to apply Theorem 3.9, we consider $A_0=\Delta$ and the alternative $A_1=\Delta_\vartheta$ with $\vartheta(x)=1+L(x/h)$ for some test function $L:\mathbb R^d\to\mathbb R$ and a bandwidth $h\in(0,1)$, to be chosen later. We have $B=(\mathrm{Id}-\Delta)^{-\beta}$ and set $\bar B_0=B$. $B$ does not commute with $A_1$, but $-\Delta_\vartheta\preccurlyeq\|\vartheta\|_\infty(-\Delta)$ implies $(\mathrm{Id}-\|\vartheta\|_\infty^{-1}\Delta_\vartheta)^{-2\beta}\succcurlyeq(\mathrm{Id}-\Delta)^{-2\beta}=B^*B$ for $\beta\in[0,1/2]$, using operator monotonicity of $t\mapsto-t^{-r}$ on $\mathbb R_+$ for $r\in(0,1]$ (Bhatia, 2013). For $\bar B_1:=(\mathrm{Id}-\|\vartheta\|_\infty^{-1}\Delta_\vartheta)^{-\beta}$ this yields (3.4). By Engel & Nagel (2000, Thm. VI.5.22), $\mathrm{dom}(A_1)=\mathrm{dom}(A_0)=H^2(\mathbb R^d)$ holds as soon as $L\in C^1(\mathbb R^d)$. The minimax lower bound later can still be established over Hölder classes with $\alpha<1$ by using smooth $L$ whose bounds on the first derivatives are finite, but grow with the asymptotics. To deal with the non-commutativity of $A_0$ and $A_1$, we establish for certain functions $f:\mathbb R_+\to\mathbb R$ that $f(-\Delta_\vartheta)\preccurlyeq C_{f,\vartheta}f(-\Delta)$ with a well-quantified constant $C_{f,\vartheta}$. Let us remark that most monotone functions $g$ are not operator monotone, so that operators $T_1,T_2$ with $T_1\preccurlyeq T_2$ do not necessarily satisfy $g(T_1)\preccurlyeq g(T_2)$. In particular, $g:\mathbb R_+\to\mathbb R$, $g(\lambda)=-(\varepsilon^2\lambda^{2+2\beta}+1)^{-1}$ is not an operator monotone function because
λ7→λ2+2βis not operator monotone, see Bhatia (2013, Chapter V) for this and more results on operator monotonicity for matrices which directly extend to linear operators. Based on perturbation ideas from Engel & Nagel (2000, Prop. VI.5.24), however, we are able to establish ( ε2(−∆ϑ)2+2β+ Id)−1≼Cd,β(ε2(−∆)2+2β+ Id)−1for some precise constant Cd,β, provided a suitable smoothness norm of ϑis bounded by ε−1. 20 6.2 Proposition. Consider the operators ∆ϑand∆onH2(Rd)⊆L2(Rd)withϑ(x) = 1+ L(x/h), L∈C7(Rd),supp L⊆[−1/2,1/2]dand∥L∥∞⩽1/2. There are positive quantities C(i) d,β,i= 1, . . . , 4, only depending on d⩾1andβ∈[0,1/2], such that: (a) if C(1) d,βh−2(∥∇L∥4 ∞+∥∇2L∥4/3 ∞)⩽ε−1/(1+β), then ε2(−∆ϑ)2+2β+ Id−1≼C(2) d,β ε2(−∆)2+2β+ Id−1; (6.3) (b) if C(3) d,βh−2(∥∇L∥4 ∞+∥∇L∥10/3 ∞+∥∇2L∥4/3 ∞+∥∇∆L∥4/5 ∞)⩽ε−1/(1+β), then |∆ϑ|−1 T−1 ε2(−∆ϑ)2+2β+ Id−1≼C(4) d,β|∆|−1 T−1 ε2(−∆)2+2β+ Id−1. (6.4) Proof. See Section B.3. 6.3 Remark. The condition L∈C7(Rd) ensures dom(∆4 ϑ) =H8(Rd). Later we shall consider kernels Lof the form L=hα nKwith hn↓0 and Kfixed so that the norms in terms of Lmatter for the asymptotics. In the case β= 0 Proposition 6.2(a) holds already under the condition C(1) dh−2∥∇L∥4 ∞⩽ε−1 and Proposition 6.2(b) under C(3) dh−2(∥∇L∥4 ∞+∥∇2L∥4/3 ∞)⩽ε−1. The reason is that in the proof we only need to consider ( −∆ϑ)mform⩽3 instead of m⩽4. 6.4 Theorem. For each nconsider the observations given by (6.1) with a space-varying diffusivity ϑ∈Θdif(α, R)forα >0,R >1. Assume vn:=( (Tnε−(d+2)/(2+2β) n )−α/(2α+d), ifTn⩽ε(1−α)/(1+β) n , (Tnε−(5+4β)/(2+2β) n )−α/(2α+3+4 β),ifTn⩾ε(1−α)/(1+β) n−→0 asn→ ∞ . If1⩽d <6 + 8 βand α >lim sup n→∞(5+5β) log( Tn)+5 log( ε−1 n) (2+2β) log( Tn)+(10+4 β) log( ε−1 n)(6.5) is satisfied, then for a constant c >0 lim inf n→∞inf ˆϑnsup ϑ∈Θdif(α,R)Pϑ v−1 n|ˆϑn(0)−ϑ(0)|⩾c >0 holds, where the infimum is taken over all estimators ˆϑnofϑbased on (6.1). 6.5 Remark. 
(a) The same lower bound holds for $|\hat\vartheta_n(x_0)-\vartheta(x_0)|$ at any $x_0\in\mathbb R^d$; the concrete choice $x_0=0$ just simplifies notation. Working on the unbounded domain $\mathbb R^d$ allows us to use Fourier transforms, which facilitates the analysis considerably. It is intuitive, but not rigorously established, that for our pointwise estimation risk the asymptotic results transfer to smooth bounded domains.

(b) We can understand the rate in the case $T_n\leqslant\varepsilon_n^{(1-\alpha)/(1+\beta)}$ in terms of the parametric rate $T_n^{-1/2}\varepsilon_n^{(d+2)/(4+4\beta)}$ from Proposition 5.3. For fixed $T_n>0$ and $\beta=0$ the rates simplify to
$$v_n=\begin{cases}\varepsilon_n^{(d+2)\alpha/(4\alpha+2d)},&\text{if }\alpha\geqslant 1,\\ \varepsilon_n^{5\alpha/(4\alpha+6)},&\text{if }\alpha\in(1/2,1].\end{cases} \quad (6.6)$$
In view of Remark 6.3 the second case holds even for $\alpha\in(3/10,1]$. With considerable efforts Pasemann & Reiß (2024) have obtained a corresponding upper bound for the case $\alpha\geqslant 1$.

(c) In dimension $d<3+4\beta$, the rate slows down in the regime $T_n\ll\varepsilon_n^{(1-\alpha)/(1+\beta)}$. This elbow effect stems from the bound (6.7) below in the Sobolev space $H^{(d-3-4\beta)/2}(\mathbb R^d)$ of negative order for relatively rough $\vartheta$ with bandwidth $h_n\lesssim\varepsilon_n^{1/(2+2\beta)}$. In the critical case $\alpha<1$ for fixed $T_n$ a standard preaveraging estimator no longer works (Pasemann & Reiß, 2024), which might give an upper bound perspective on this effect.

(d) The additional condition (6.5) on $\alpha$ is technically required in order to profit from the operator monotonicity in Proposition 6.2. It necessarily requires α > 5/(10 + 4
β) (which suffices also for fixed Tn) and is always satisfied if α⩾5/2. Proof. For simplicity we drop the index natεn, Tn. We transfer the problem into the spectral domain by the Fourier transform Ff(u) =R ei⟨u,x⟩f(x)dx. In particular, we have F(∆ϑg)(u) = (2π)−dM[−iu⊤]C[Fϑ(u)]M[−iu]Fg(u) for g∈ H2(Rd), where M[f]g=fgandC[f]g=f∗g are multiplication and convolution operators. Consider ϑ∈C∞(Rd) fulfilling the conditions of Proposition 6.2(a,b) and the derived constant C(5) d,β:=1 2(C(2) d,β+C(4) d,β). Note that using x2(1+x)2β⩾ x2+2β,x⩾0, and functional calculus, we have (ε2R2 0¯B−2 0+ Id)−1≼(ε2∆2+2β+ Id)−1, (ε2R2 1¯B−2 1+ Id)−1≼(ε2∥ϑ∥−2β ∞∆2+2β ϑ+ Id)−1≼∥ϑ∥2β ∞(ε2∆2+2β ϑ+ Id)−1. Then by using these estimates, Proposition 6.2, Lemma C.2 (a) below and the isometry of (2π)−d/2F, we obtain from Theorem 3.9 for the case Rϑ≼0, writing δ=ϑ−1 ∥ϑ∥−2β ∞H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽T 4∥(ε2∆2+2β+ Id)−1/2(∆ϑ−∆)|∆ϑ|−1/2 T−1(ε2∆2+2β ϑ+ Id)−1/2∥2 HS +T 4∥(ε2∆2+2β ϑ+ Id)−1/2(∆ϑ−∆)|∆|−1/2 T−1(ε2∆2+2β+ Id)−1/2∥2 HS ⩽C(5) d,βT∥(ε2∆2+2β+ Id)−1/2(∆ϑ−∆)|∆|−1/2 T−1(ε2∆2+2β+ Id)−1/2∥2 HS ⩽C(5) d,βT M[(ε2|u|4+4β+ 1)−1/2(−iu⊤)]C[Fδ] M[(−iu)(|u|−1∧T1/2)(ε2|u|4+4β+ 1)−1/2] 2 HS. The operator Kinside the Hilbert-Schmidt norm is given as an integral operator of the form Kf(u) =R Rdk(u, v)f(v)dvwith real-valued kernel k(u, v) = (ε2|u|4+4β+ 1)−1/2⟨u, v⟩Fδ(u−v)(|v|−1∧T1/2)(ε2|v|4+4β+ 1)−1/2. Its Hilbert-Schmidt norm is therefore given by ∥k∥L2(Rd×Rd)and we obtain with the substitution w=ε1/(2+2β)u (∥ϑ∥2β ∞C(5) d,β)−1H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽TZ RdZ Rd(ε2|u|4+4β+ 1)−1⟨u, v⟩2(|v|−2∧T)|Fδ(u−v)|2(ε2|v|4+4β+ 1)−1dudv ⩽TZ Rd|Fδ(r)|2Z Rd(ε2|u|4+4β+ 1)−1|u|2(ε2|u−r|4+4β+ 1)−1du dr ⩽Tε−d+2 2+2βZ Rd|Fδ(r)|2Z Rd(|w|4+4β+ 1)−1|w|2(||w| − |ε1 2+2βr||4+4β+ 1)−1dw dr ⩽C(6) dTε−d+2 2+2βZ Rd|Fδ(r)|2Z∞ 0(zd−3−4β∧zd+1)(|z−ε1 2+2β|r||−4−4β∧1)dz dr, where C(6) dis the surface of the d-dimensional unit sphere. 
By Lemma C.6(b) below, for d <6+8β the inner integral is finite and bounded in order by (1 + ε1/(2+2β)|r|)d−3−4β. Hence, we obtain H2(Ncyl(0, Q0),Ncyl(0, Q1))≲Tε−(d+2)/(2+2β)Z Rd|Fδ(r)|2(1 +ε1/(2+2β)|r|)d−3−4βdr. (6.7) 22 Note that the weight function in the integral is fundamentally different for d < 3 + 4 βand d > 3 + 4 β. The integral corresponds to a squared L2-Sobolev norm of δ=ϑ−1 with order (d−3−4β)/2 and additional ε-dependent weighting. We choose δ(x) =hαK(x/h) for some bandwidth h∈(0,1) and K∈C∞(Rd) with support in [−1/2,1/2]d,∥K∥∞⩽1/2 and ∥K∥Cα⩽R−1. In particular, K,xKand all derivatives of Kup to order dare in L1(Rd). We assume K(0)̸= 0 as well asR RdxmK(x)dx= 0, m∈ {0,1}. Then |FK(u)|≲|u|2∧ |u|−d, and Z Rd|Fδ(r)|2(1 +ε1/(2+2β)|r|)d−3−4βdr =h2α+2dZ Rd|FK(hr)|2(1 +ε1/(2+2β)|r|)d−3−4βdr =h2α+dZ Rd|FK(u)|2(1 +ε1/(2+2β)h−1|u|)d−3−4βdu ≲h2α+dZ Rd(|u|4∧ |u|−2d)(1 + ε1/(2+2β)h−1|u|)d−3−4βdu ≲h2α+dZ∞ 0(zd+3∧z−d−1)(1 + ε1/(2+2β)h−1z)d−3−4βdz. Lemma C.6(a) below together with d+ 3 + d−3−4β⩾0 (recall β⩽1/2) as well as −d−1 + d−3−4β <−1 shows that the integral is finite and of order (1 + ε1/(2+2β)h−1)d−3−4β. Inserting into (6.7), we arrive at H2(Ncyl(0, Q0),Ncyl(0, Q1))≲Tε−(d+2)/(2+2β)h2α+d(1 +ε1/(2+2β)h−1)d−3−4β. Forh⩾ε1/(2+2β)this gives the order Th2α+dε−(d+2)/(2+2β)and for h⩽ε1/(2+2β)the order Th2α+3+4 βε−(5+4β)/(2+2β). We choose h∼( (Tε−(d+2)/(2+2β))−1/(2α+d), ifT⩽ε(1−α)/(1+β), (Tε−(5+4β)/(2+2β))−1/(2α+3+4 β),ifT⩾ε(1−α)/(1+β).(6.8) With Theorem 2.1 this yields the claim in view of 1, ϑ∈Θdif(α, R) by construction (note ∥ϑ∥Cα⩽ 1 +∥δ∥Cα⩽1 +∥K∥Cα⩽R) and |ϑ(0)−1|=hα|K(0)|∼hα. It remains to verify the conditions from Proposition 6.2(a,b). In order to do so, we note for L=hαKthat h4α/5−2≲ε−1/(1+β)with a sufficiently small constant implies both conditions. In the first case of (6.8) this holds due to h≳ε1/(2+2β). In the second case of (6.8) a short calculation shows that it suffices to consider the case α <5/2, and in this case
the condition is equivalent to $T\lesssim\varepsilon^{-((10+4\beta)\alpha+5)/((1+\beta)(5-2\alpha))}$, which is equivalent to (6.5) by Lemma C.7 below.

6.2 Space-dependent transport

For each $n\in\mathbb N$ consider the observations $(dY_t,\,t\in[0,T_n])$ given by $dY_t=X_t\,dt+\varepsilon_n\,dV_t$ with
$$dX_t=(\nu_n\Delta X_t+\nabla\bullet(\vartheta(x)X_t))\,dt+dW_t \quad (6.9)$$
and $X_0=0$, $T_n\geqslant 1$, $\varepsilon_n\in[0,1]$ and $\nu_n\in(0,1]$. The Laplace operator $\Delta$ is considered on $H^2(\mathbb R^d)\subseteq L^2(\mathbb R^d)=H$ and $\vartheta:\mathbb R^d\to\mathbb R^d$ is a smooth divergence-free vector field. More specifically, we assume that $\vartheta$ is in
$$\Theta_{\mathrm{tra}}(\alpha,R):=\big\{\vartheta\in C^{3\vee\alpha}(\mathbb R^d;\mathbb R^d)\,\big|\,\|\vartheta\|_{C^\alpha}\leqslant R\text{ and }\nabla\bullet\vartheta=0\big\},\quad\alpha>0,\ R>0.$$

6.6 Remark. We require that $U\mapsto\nabla\bullet(\vartheta U)$ defines a bounded operator $H^2(\mathbb R^d)\to L^2(\mathbb R^d)$. From Theorem 6.7 below we see that we are only interested in the case $d\leqslant 7$, in which case $\vartheta\in C^3(\mathbb R^d;\mathbb R^d)$ is a simple sufficient condition by Triebel (2010, Section 2.8.2). In particular, we have $\mathrm{dom}(A_\vartheta)=H^2(\mathbb R^d)$ and $\nabla\bullet(\vartheta U)=\vartheta\bullet\nabla U$ on $H^2(\mathbb R^d)$, which implies $A_\vartheta^*f=\nu_n\Delta f-\nabla\bullet(\vartheta f)=A_{-\vartheta}f$, such that $R_\vartheta=\nu_n\Delta$.

In the notation of Theorem 3.9 consider $\bar B_0:=\bar B_1:=B=\mathrm{Id}$, $A_0=\nu_n\Delta$ and $A_1=\nu_n\Delta+\vartheta(x)\bullet\nabla$. We only consider the case $B=\mathrm{Id}$ (white noise), but with more technical effort we can handle for instance $B=(\mathrm{Id}-\Delta)^{-\beta}$, $\beta\in[0,1/2]$, as in the preceding section.

6.7 Theorem. For each $n\in\mathbb N$ consider the observations given by (6.9) with the space-varying transport coefficient $\vartheta\in\Theta_{\mathrm{tra}}(\alpha,R)$ for some $\alpha>0$, $R>0$. Assume the asymptotics
$$v_n:=\begin{cases}(T_n^{-1}\varepsilon_n^{d/2}\nu_n^{(d+2)/2})^{\alpha/(2\alpha+d)},&\text{if }T_n\leqslant\varepsilon_n^{-\alpha}\nu_n^{1-\alpha},\\ T_n^{-\alpha/(2\alpha+5)}\varepsilon_n^{5\alpha/(4\alpha+10)}\nu_n^{7\alpha/(4\alpha+10)},&\text{if }T_n\geqslant\varepsilon_n^{-\alpha}\nu_n^{1-\alpha},\end{cases}\quad\longrightarrow 0\ \text{ as }n\to\infty.$$
For $1\leqslant d\leqslant 7$ there is a constant $c>0$ such that
$$\liminf_{n\to\infty}\inf_{\hat\vartheta_n}\sup_{\vartheta\in\Theta_{\mathrm{tra}}(\alpha,R)}P_\vartheta\big(v_n^{-1}|\hat\vartheta_n(0)-\vartheta(0)|\geqslant c\big)>0,$$
where the infimum is taken over all estimators $\hat\vartheta_n$ of $\vartheta$ based on (6.9).

6.8 Remark. In the first case $T_n\leqslant\varepsilon_n^{-\alpha}\nu_n^{1-\alpha}$ we obtain the nonparametric analogue of the parametric rate from Proposition 5.4. In general, we observe again an elbow effect whenever $h_n=v_n^{1/\alpha}$ becomes smaller than $\varepsilon_n^{1/2}\nu_n^{1/2}$, in the sense that the rate slows down for $d\leqslant 4$ and speeds up for $d\geqslant 6$.
There is no further restriction on α >0 as in Theorem 6.4 because the real partRϑdoes not depend on ϑfor divergence-free vector fields ϑ. Proof. Throughout the proof we drop again the index n. Passing into the Fourier domain as for Theorem 6.4 and applying multiplication and convolution operations coefficientwise, we find by Theorem 3.9: H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽T 2∥(ε2ν2∆2+ Id)−1/2(M[ϑ]•∇)|ν∆|−1/2 T−1(ε2ν2∆2+ Id)−1/2∥2 HS ⩽T∥M[(ε2ν2|u|4+ 1)−1 2]C[Fϑ]•M[(−iu)(ν−1 2|u|−1∧T−1/2)(ε2ν2|u|4+ 1)−1 2]∥2 HS, where we write A•B:=Pd i=1AiBifor operators Ai, Bi, 1⩽i⩽d, thus H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽T νZ RdZ Rd(ε2ν2|u|4+ 1)−1|Fϑ(u−v)|2(ε2ν2|v|4+ 1)−1dudv =T νZ Rd|Fϑ(r)|2Z Rd(ε2ν2|u|4+ 1)−1(ε2ν2|u−r|4+ 1)−1du dr =T ν(νε)−d/2Z Rd|Fϑ(r)|2Z Rd(|w|4+ 1)−1(|w−ε1/2ν1/2r|4+ 1)−1dw dr ≲T ν(νε)−d/2Z Rd|Fϑ(r)|2Z∞ 0(zd−5∧zd−1)(|z−ε1/2ν1/2|r||−4∧1)dz dr. As in the derivation of (6.7), using Lemma C.6(b) below, the last line can be bounded in order by Tε−d/2ν−(d+2)/2Z Rd|Fϑ(r)|2(1 +ε1/2ν1/2|r|)d−5dr, provided d⩽7. We take ϑ(x) =hαK(x/h) for some bandwidth h∈(0,1) and a smooth divergence- free vector field K= (K1, . . . , K d) :Rd→Rdwith K(0)̸= 0,∥K∥Cα⩽R,R RdKi(x)dx= 0 andR RdKi(x)x dx= 0,i= 1, . . . , d , such that Ki,xKi,|x|2Kiand all derivatives of Kiup to order dare inL1(Rd) implying |FK(u)|≲|u|2∧|u|−d. A concrete construction is given by K(x) =A∇φ(x) for an invertible antisymmetric matrix A∈Rd×dandφ∈C∞(Rd) with compact support, ∇φ(0)̸= 0 andR∞ −∞φ(x1, . . . , x d)dxj= 0 for all j=
1, . . . , d andx∈Rd. Then Kis divergence-free and 24 FK(u) =Fφ(u)A(−iu) has the desired properties. Following the proof of Theorem 6.4 and with ∥A∇φ∥Cαsufficiently small, we deduce by Lemma C.6(a) below H2(Ncyl(0, Q0),Ncyl(0, Q1)) ≲Tε−d/2ν−(d+2)/2h2α+dZ Rd(|u|4∧ |u|−2d)(1 + ε1/2ν1/2h−1|u|)d−5du ≲Tε−d/2ν−(d+2)/2h2α+dZ∞ 0(zd+3∧z−d−1)(1 + ε1/2ν1/2h−1z)d−5dz ≲( Tε−d/2ν−(d+2)/2h2α+d, ifh⩾ε1/2ν1/2, Tε−5/2ν−7/2h2α+5, ifh⩽ε1/2ν1/2. We choose h= (T−1εd/2ν(d+2)/2)1/(2α+d)in case T⩽ε−αν1−αandh= (T−1ε5/2ν7/2)1/(2α+5) otherwise. Then we can apply Theorem 2.1 with Euclidean norm |ϑ(0)|∼hα, noting 0, ϑ∈ Θtra(α, R) for large nby construction. 6.3 Space-dependent source For each n∈Nconsider observations ( dYt, t∈[0, Tn]) given by dYt=Xtdt+εndVtwith dXt= (νn∆Xt−M[ϑ]Xt)dt+dWt (6.10) with the multiplication operator M[•] and X0= 0,Tn⩾1,εn∈[0,1], and νn∈(0,1], where ∆ is the Laplacian on H2(Rd)⊆L2(Rd) =Handϑ(•) belongs to the class Θsou(α, R) :=n ϑ∈Cα(Rd) ϑ⩾0,∥ϑ∥Cα⩽Ro , α > 0, R > 1. We have Aϑ=Rϑ=νn∆−M[ϑ] and dom( Aϑ) =H2(Rd) for ϑ∈Θsou(α,R). To apply Theorem 3.9, we take A0=νn∆−Id and the alternative A1=νn∆−M[ϑ] with ϑ(x) = 1 + hαK(x/h), h∈(0,1), and K∈C1∨α(Rd) of compact support. Then A0andA1are selfadjoint negative operators. We have ¯B0=¯B1=B= Id. First, we establish an operator monotonicity result for this case in analogy to Proposition 6.2. 6.9 Proposition. Consider A0=ν∆−Id,A1=ν∆−M[ϑ]withν >0,ϑ(x) = 1 + L(x/h)for h∈(0,1),L∈C1(Rd)andsupp L⊆[−1/2,1/2]d,∥L∥∞⩽1/2. (a) We have (ε2A2 1+ Id)−1≼2(ε2A2 0+ Id)−1. (6.11) (b) If ∥L∥∞+d1/2ν1/4h−1/2∥∇L∥∞⩽1 4√ 2ε−1, then we have |A1|−1 T−1(ε2A2 1+ Id)−1≼16|A1|−1 T−1(ε2A2 0+ Id)−1. (6.12) Proof. See Section B.3. 6.10 Theorem. In terms of the parametric rate vpar nfrom (5.10) withβ= 0define the rate vn:=  (vpar n)2α/(2α+d), ifvpar n⩾(νnεn)(2α+d)/4, (νnεnvpar n)2α/(2α+d+4), ifd⩽2, vpar n⩽(νnεn)(2α+d)/4, ((νnεn)(7−d)/4vpar n)2α/(2α+7),ifd⩾3, vpar n⩽(νnεn)(2α+d)/4. 
If1⩽d⩽9andvn→0hold, then there is a constant c >0such that lim inf n→∞inf ˆϑnsup ϑ∈Θsou(α,R)Pϑ v−1 n|ˆϑn(0)−ϑ(0)|⩾c >0, 25 where the infimum is taken over all estimators ˆϑnofϑbased on (6.10) , provided the following condition on αis satisfied: in case vpar n⩾(νnεn)(2α+d)/4 α >lim sup n→∞  log(Tn)−2dlog(ε−1 n) 2 log( Tn)+4 log( ε−1 n)+(d+1) log( ν−1 n), ifd= 1, log(Tn)+log(log( ε−1 n))−2dlog(ε−1 n) 2 log( Tn)+2 log(log( ε−1 n))+4 log( ε−1 n)+(d+1) log( ν−1 n),ifd= 2, log(Tn)−(1.5d+1) log( ε−1 n) 2 log( Tn)+(d+2) log( ε−1 n)+(d+1) log( ν−1 n), ifd⩾3, and in case vpar n⩽(νnεn)(2α+d)/4 α >lim sup n→∞  log(Tn)−(2d+6) log( ε−1 n) 2 log( Tn)+8 log( ε−1 n)+(d+5) log( ν−1 n), ifd= 1, log(Tn)+log(log( ε−1 n))−(2d+6) log( ε−1 n) 2 log( Tn)+2 log(log( ε−1 n))+8 log( ε−1 n)+(d+5) log( ν−1 n),ifd= 2, log(Tn)−11.5 log( ε−1 n) 2 log( Tn)+9 log( ε−1 n)+8 log( ν−1 n), ifd⩾3. 6.11 Remark. (a) Writing vpar n=T−1/2 nνd/4 nε(d−2)+/4 n (neglecting log terms), we can write the rate as vn=  (T−1/2 nνd/4 nε(d−2)+/4 n )2α/(2α+d),ifvpar n⩾(νnεn)(2α+d)/4, (T−1/2 nν(d+4)/4 n εn)2α/(2α+d+4),ifd⩽2, vpar n⩽(νnεn)(2α+d)/4, (T−1/2 nν7/4 nε5/4 n)2α/(2α+7), ifd⩾3, vpar n⩽(νnεn)(2α+d)/4. The first case gives the classical nonparametric analogue of the parametric rate, which always applies for fixed observation time Tn. On the other hand, for Tn→ ∞ andνn, εnfixed, the two other cases apply and for d⩽2 the rate necessarily slows down to T−α/(2α+d+4) n instead ofT−α/(2α+d) n . (b) The technical condition that αis not too small is always satisfied if α⩾1/2 or if Tn≲ε−2 n ind= 1 or
Tn≲ε−(11.5∧(1.5d+1)) n ind⩾2. In particular, it is true if Tnis fixed. Proof. We follow the road exposed in Theorem 6.4 and also drop the index n. Using A0andA1 as in Proposition 6.9, for which the conditions will be discussed below, we find from Theorem 3.9, writing δ=ϑ−1 1 4H2(Ncyl(0, Q0),Ncyl(0, Q1)) ⩽T∥(ε2(ν∆−Id)2+ Id)−1/2M[δ]|Id−ν∆|−1/2 T−1(ε2(ν∆−Id)2+ Id)−1/2∥2 HS ⩽T∥(ε2ν2∆2+ Id)−1/2M[δ](Id−ν∆)−1/2(ε2ν2∆2+ Id)−1/2∥2 HS ⩽T∥M[(ε2ν2|u|4+ 1)−1/2]C[Fδ(u)]M[(ν|u|2+ 1)−1/2(ε2ν2|u|4+ 1)−1/2]∥2 HS =TZ RdZ Rd(ε2ν2|u|4+ 1)−1|Fδ(u−v)|2(ν|v|2+ 1)−1(ε2ν2|v|4+ 1)−1dudv =T(νε)−d 2Z RdZ Rd|Fδ(r)|2(|w+ (νε)1 2r|4+ 1)−1(ε−1|w|2+ 1)−1(|w|4+ 1)−1dwdr ≲T(νε)−d 2Z Rd|Fδ(r)|2Z∞ 0(|z−(νε)1 2|r||−4∧1)((εz−2)∧1)(zd−5∧zd−1)dz dr. Setρ:= (νε)1/2|r|⩾0. For 1 ⩽d⩽2 the inner integral can be estimated in order with the help of Lemma C.6(b) below, inserting a= 0,b=d−7,c=−4 therein, by Z1 0(εzd−3∧zd−1)dzmax 0⩽z⩽1 |z−ρ|−4∧1 +εZ∞ 1(|z−ρ|−4∧1)zd−7dz ≲Z√ε 0zd−1dz+εZ1 √εzd−3dz+ε (1 +ρ)−4∼( εd/2(1 +ρ)−4, d = 1, εd/2log(eε−1)(1 + ρ)−4, d= 2. 26 For 3⩽d⩽9 the inner integral can be estimated in order by εZ∞ 0(|z−ρ|−4∧1)(zd−7∧zd−3)dz≲ε(1 +ρ)d−7 using Lemma C.6(b) below. For d⩾10 the inner integral is infinite. Now let δ(x) =hαK(x/h) for h∈(0,1) and K∈C∞(Rd) with support in [ −1/2,1/2]d, ∥K∥∞⩽1/2,∥K∥α⩽R−1, and K(0)̸= 0. Then K,xKand all derivatives of Kare in L1(Rd). AssumeR RdxmK(x)dx= 0 for m∈ {0,1}so that |FK(u)|≲|u|2∧ |u|−6. For d∈ {1,2}the Hellinger bound becomes in terms of the parametric rate vpar nfrom Proposition 5.7, using Lemma C.6(a) below, (vpar n)−2Z Rd|Fδ(r)|2(1 + ( νε)1/2|r|)−4dr ≲(vpar n)−2h2α+dZ∞ 0(zd+3∧zd−13)(1 + ( νε)1/2h−1z)−4dz ∼(vpar n)−2h2α+d(1 + ( νε)1/2h−1)−4. Hence, for h⩾(νε)1/2we choose h∼(vpar n)2/(2α+d), while for h⩽(νε)1/2we choose h∼ (νεvpar n)2/(2α+d+4). For 3 ⩽d⩽9 the Hellinger bound becomes (vpar n)−2Z Rd|Fδ(r)|2(1 + ( νε)1/2|r|)d−7dr ≲(vpar n)−2h2α+dZ∞ 0(zd+3∧zd−13)(1 + ( νε)1/2h−1z)d−7dz ∼(vpar n)−2h2α+d(1 + ( νε)1/2h−1)d−7. 
Forh⩾(νε)1/2we choose h∼(vpar n)2/(2α+d), while for h⩽(νε)1/2we choose h∼ ((νε)(7−d)/4vpar n)2/(2α+7). Writing the condition h⩽(νε)1/2in terms of vpar n, we obtain the as- serted lower bound rate vn∼|ϑ(0)−1|∼hαvia Theorem 2.1, using 1, ϑ∈Θsou(α, R) for large n. It remains to verify the condition for Proposition 6.9 with L=hαK, that is hα−1/2≲ν−1/4ε−1 with a sufficiently small constant. In case h⩾(νε)1/2this reduces to 1 ≲T2α−1ν−(d+1)αε−(4α+2d) ford= 1 and 1 ≲T2α−1ν−(d+1)αε−((d+2)α+1.5d+1)ford⩾3. In case h⩽(νε)1/2we have to check 1≲T2α−1ν−(d+5)αε−(8α+2d+6)ford= 1 and 1 ≲T2α−1ν−8αε−(9α+11.5)ford⩾3. The case d= 2 is treated analogously to d= 1. By Lemma C.7 below, all relations hold due to the condition on α. A Notation Consider real numbers an, bn, a, b. We write an≲bnoran=O(bn) ifan⩽Cbnholds for a constant C >0, uniform over all parameters involved. In an analogous way an≳bnis defined and an∼bn means an≲bnandbn≲an. Moreover, an=o(bn) stands for an/bn→0. We set a∧b:= min( a, b), a∨b:= max( a, b),a+=a∨0 and |a|−p T:=|a|−p∧Tp,|a|p T−1:=|a|p∨T−p= (|a|−p T)−1forp, T > 0 with|0|−p T:=Tp. The latter is also applied to complex a. For real random variables Xnand deterministic an>0 we say that Xn=OPϑ(an) holds uniformly over ϑ∈[ϑn,¯ϑn] if lim R→∞lim sup n→∞sup ϑ∈[ϑn,¯ϑn]Pϑ(|Xn|> Ra n) = 0 , that is the laws of a−1 nXnare uniformly tight under all Pϑand over all n. Forv, w in a Hilbert space Hthe corresponding norm and scalar product are canonically denoted by ∥v∥and⟨v, w⟩, respectively. Yet, when H=Rd, we write instead |v|for the Euclidean 27 norm. We denote by Id the identity
https://arxiv.org/abs/2505.14051v1
operator on H and by im(Q) the range or image of an operator Q. For (possibly unbounded) self-adjoint operators A, B on a Hilbert space H write A ≼ B and B ≽ A if dom(B) ⊆ dom(A) and ⟨(B − A)v, v⟩ ⩾ 0 for all v ∈ dom(B).

For T > 0 let L²([0, T]; H) = L²(H) be the Hilbert space of all Borel-measurable f: [0, T] → H with ∥f∥²_{L²(H)} := ∫₀ᵀ ∥f(t)∥² dt < ∞. We write ∥A∥ for the operator (or spectral) norm and ∥A∥_{HS(H)} = ∥A∥_{HS} = trace(A*A)^{1/2} for the Hilbert-Schmidt (or Frobenius) norm of a linear operator A: H → H.

In analogy to real-valued Sobolev spaces we consider H¹([0, T]; H) = H¹(H) with

H¹(H) = { f ∈ L²(H) : ∃ g ∈ L²(H) ∀ t ∈ [0, T] : f(t) = f(0) + ∫₀ᵗ g(s) ds }

and H¹₀(H) = {f ∈ H¹(H) | f(0) = 0}, H¹_T(H) = {f ∈ H¹(H) | f(T) = 0}. Note that point evaluations at t ∈ [0, T] are well defined by Sobolev embedding. Further, using the same notation, define the operator ∂_t: H¹(H) → L²(H) via ∂_t f := g.

For α > 0 we consider the Hölder spaces C^α([0, T]; R^d) of ⌊α⌋-times continuously differentiable functions f: [0, T] → R^d with

L := sup_{t≠s} |f^{(⌊α⌋)}(t) − f^{(⌊α⌋)}(s)| / |t − s|^{α−⌊α⌋} < ∞,

where f^{(⌊α⌋)} denotes the corresponding derivative. The Hölder norm is ∥f∥_{C^α} = L + max_{k=0,…,⌊α⌋} ∥f^{(k)}∥_∞, where for integer α we require instead that f^{(α−1)} is L-Lipschitz continuous.

Let ∥∇u∥ = (Σ_{i=1}^d ∫_{R^d} (∂_{x_i} u)²(x) dx)^{1/2} denote the L²(R^d; R^d)-norm of the gradient and ∥∇²u∥ the L²(R^d; HS(R^{d×d}))-norm of the Hessian for u in a Sobolev space of sufficient regularity. Using integration by parts, we deduce ∥∇u∥² = ∥(−∆)^{1/2} u∥², ∥∇²u∥² = ∥∆u∥². Let ∥∇L∥_∞ := sup_{x∈R^d} ∥∇L(x)∥_{R^d} and ∥∇²L∥²_∞ := sup_{x∈R^d} Σ_{j,k=1}^d (∂_{x_j}∂_{x_k} L(x))² for sufficiently smooth L: R^d → R.

We frequently use the interpolation inequality

∥(−∆)^{(1−α)ρ₁+αρ₂} u∥ ⩽ ∥(−∆)^{ρ₁} u∥^{1−α} ∥(−∆)^{ρ₂} u∥^α,  α ∈ [0, 1], ρ₁, ρ₂ ⩾ 0,  (A.1)

whenever the right-hand side is finite. This follows, for instance, by applying Hölder's inequality in the Fourier representation. The L²-Fourier transform is given by extending Fg(u) := ∫_{R^d} g(x) e^{i⟨u,x⟩} dx from g ∈ L¹(R^d) to L²(R^d).
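The Fourier convention just stated can be sanity-checked numerically. The following short sketch is not part of the paper; the Gaussian test function and the grid parameters are arbitrary choices. It verifies in d = 1 that Fg(u) = ∫ g(x) e^{i⟨u,x⟩} dx maps the Gaussian e^{−x²/2} to √(2π) e^{−u²/2}:

```python
import numpy as np

# Sanity check of the Fourier convention F g(u) = int g(x) e^{i<u,x>} dx
# in d = 1 for the Gaussian g(x) = exp(-x^2/2), whose transform under this
# convention is sqrt(2*pi) * exp(-u^2/2).  Grid parameters are arbitrary.
x = np.linspace(-20.0, 20.0, 200_001)
dx = x[1] - x[0]
g = np.exp(-x ** 2 / 2)

def fourier(u):
    # Riemann sum; endpoint contributions are negligible since g decays fast
    return np.sum(g * np.exp(1j * u * x)) * dx

for u in (0.0, 0.5, 1.3):
    exact = np.sqrt(2 * np.pi) * np.exp(-u ** 2 / 2)
    assert abs(fourier(u) - exact) < 1e-10
```

For rapidly decaying analytic integrands the equidistant Riemann sum is spectrally accurate, so the tolerance above is comfortably met.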
Denote by M[f]g:=fgthe multiplication operator with f, and by C[f]g:=f∗gthe convolution operator for functions on Rd, whenever these quantities are defined. B Further proofs B.1 Proofs for Section 3 Proof of Proposition 3.5. Forf∈L2([0, T]; dom( Aϑ)) put g(t) :=e−iJϑtf(t) and note ∥g∥L2(H)= ∥f∥L2(H). Moreover, it is convenient to extend gto all of Rviag(t) := 0 for t∈R\[0, T]. Then we obtain from Lemma 3.4 ⟨CϑRϑf, Rϑf⟩L2(H) =1 2ZT 0ZT 0D eiJϑtRϑZt+s |t−s|eRϑvdv Rϑe−iJϑsf(s), f(t)E dsdt =1 2ZT 0ZT 0⟨Rϑ(eRϑ(t+s)−eRϑ|t−s|)g(s), g(t)⟩dsdt. Observing ZT 0ZT 0⟨(Rϑ)−eRϑ(t+s)g(s), g(t)⟩dsdt= ZT 0(Rϑ)1/2 −eRϑtg(t)dt 2 ⩾0, 28 we have for the negative part ZT 0ZT 0⟨−(Rϑ)−(eRϑ(t+s)−eRϑ|t−s|)g(s), g(t)⟩dsdt ⩽ZT 0ZT 0⟨(Rϑ)−eRϑ|t−s|g(s), g(t)⟩dsdt =Z∞ −∞Z∞ −∞⟨(Rϑ)−e−(Rϑ)−|t−s|g(s), g(t)⟩dsdt. Expressing the scalar product via the trace involving the tensor product of vectors, we can disen- tangle the integrals over gand the kernel: Z∞ −∞Z∞ −∞⟨(Rϑ)−e−(Rϑ)−|t−s|g(s), g(t)⟩dsdt =Z∞ −∞trace (Rϑ)−e−(Rϑ)−|r|ReZ∞ −∞g(t+r)⊗g(t)dt dr. Now for v∈Hby Cauchy-Schwarz inequality DZ∞ −∞g(t+r)⊗g(t)dt v, vE =Z∞ −∞⟨g(t+r), v⟩⟨v, g(t)⟩dt ⩽Z∞ −∞|⟨g(t), v⟩|2dt=DZ∞ −∞g(t)⊗g(t)dt v, vE holds. We thus obtain by Lemma C.2(a) below Z∞ −∞trace (Rϑ)−e−(Rϑ)−|r|ReZ∞ −∞g(t+r)⊗g(t)dt dr ⩽Z∞ −∞Z∞ −∞trace (Rϑ)−e−(Rϑ)−|r|g(t)⊗g(t) dtdr =Z∞ −∞traceZ∞ −∞(Rϑ)−e−(Rϑ)−|r|dr g(t)⊗g(t) dt =Z∞ −∞trace 2g(t)⊗g(t) dt= 2∥g∥2 L2(H). For the positive part we argue directly, using ( Rϑ)+eRϑ(t+s)= (Rϑ)+e(Rϑ)+(t+s)as well as that (Rϑ)+is a bounded operator whose semigroup can be extended to negative times, and applying the Cauchy-Schwarz inequality twice: ZT 0ZT 0⟨(Rϑ)+(eRϑ(t+s)−eRϑ|t−s|)g(s), g(t)⟩dsdt =ZT 0ZT 0⟨(Id−e(Rϑ)+(|t−s|−(t+s)))(Rϑ)1/2 +e(Rϑ)+sg(s),(Rϑ)1/2 +e(Rϑ)+tg(t)⟩dsdt ⩽ZT 0ZT 0∥Id−e−2(Rϑ)+(t∧s)∥∥(Rϑ)1/2 +e(Rϑ)+s∥∥g(s)∥∥(Rϑ)1/2 +e(Rϑ)+t∥∥g(t)∥dsdt ⩽ZT 0∥(Rϑ)1/2 +e(Rϑ)+t∥2dtZT 0∥g(t)∥2dt ⩽1 2(e2∥(Rϑ)+∥T−1)∥g∥2 L2(H), using∥h((Rϑ)+)∥⩽h(∥(Rϑ)+∥) for non-negative increasing functions h. 
We conclude ∥S*_ϑ R_ϑ f∥²_{L²(H)} = ⟨C_ϑ R_ϑ f,
R_ϑ f⟩_{L²(H)} ⩽ (1/2)(2 + (1/2)(e^{2∥(R_ϑ)_+∥T} − 1)) ∥g∥²_{L²(H)} = (3/4 + (1/4) e^{2∥(R_ϑ)_+∥T}) ∥f∥²_{L²(H)}.

Since L²([0, T]; dom(A_ϑ)) is dense in L²(H), this shows that S*_ϑ R_ϑ and its adjoint R_ϑ S_ϑ extend to bounded linear operators on L²(H).

Proof of Proposition 3.7. We present the proof only for S_ϑ; that for S*_ϑ is completely analogous. We proceed stepwise.

Proof of (a). Proposition 3.5 shows that R_ϑ S_ϑ extends to a bounded operator on L²(H), whence S_ϑ maps L²(H) into L²([0, T]; dom(R_ϑ)) and the first statement follows from the assumption dom(A_ϑ) = dom(R_ϑ). For the second property we use standard differentiability properties of Bochner integrals. For f ∈ H¹(H) we obtain by substitution and the chain rule

∂_t S_ϑ f(t) = ∂_t ∫₀ᵗ e^{A_ϑ u} f(t − u) du = e^{A_ϑ t} f(0) + ∫₀ᵗ e^{A_ϑ u} f′(t − u) du   (B.1)

and the right-hand side is in L²(H). This shows S_ϑ f ∈ H¹₀(H) in view of S_ϑ f(0) = 0 by definition.

Proof of (b). For f ∈ H¹(H) we have by (a) that S_ϑ f ∈ H¹₀(H) ∩ L²([0, T]; dom(A_ϑ)). Thus, by formula (B.1) and partial integration,

(∂_t − A_ϑ) S_ϑ f(t) = e^{A_ϑ t} f(0) + ∫₀ᵗ e^{A_ϑ u} f′(t − u) du − ∫₀ᵗ A_ϑ e^{A_ϑ u} f(t − u) du
= e^{A_ϑ t} f(0) + ∫₀ᵗ e^{A_ϑ u} f′(t − u) du − ([e^{A_ϑ u} f(t − u)]ᵗ_{u=0} + ∫₀ᵗ e^{A_ϑ u} f′(t − u) du)
= f(t).

This gives (∂_t − A_ϑ) S_ϑ = Id on H¹(H). Clearly, A_ϑ and S_ϑ commute on L²([0, T]; dom(A_ϑ)). Moreover, for f ∈ H¹₀(H) we find by formula (B.1) with f(0) = 0

∂_t S_ϑ f(t) = ∫₀ᵗ e^{A_ϑ u} f′(t − u) du = S_ϑ ∂_t f(t).   (B.2)

This finishes the proof of (b).

Proof of (c). For f ∈ H¹(H) we have S₀f, S₁f ∈ H¹₀(H) ∩ L²([0, T]; dom(A_ϑ)) by (a). Part (b) yields

S₀(A₁ − A₀)S₁ f = S₀(∂_t − A₀)S₁ f − S₀(∂_t − A₁)S₁ f = S₁ f − S₀ f,
S₁(A₁ − A₀)S₀ f = S₁(∂_t − A₀)S₀ f − S₁(∂_t − A₁)S₀ f = S₁ f − S₀ f.

Hence, S₁ − S₀ = S₀(A₁ − A₀)S₁ = S₁(A₁ − A₀)S₀ holds on H¹(H). Since all three terms are bounded linear operators on L²(H) and H¹(H) is dense in L²(H) (Amann, 2019, Eq. (1.2.3)), the assertion follows by a continuity argument.

B.2 Proofs for Section 4

We prove the parametric upper bound in Theorem 4.4 by establishing several intermediate results. Throughout, Assumption 4.1 is in force.
In Definition 4.3 we write short Zn=−⟨(∂t+Mn)ΛK(n)˙Y ,˙Y⟩L2,Nn=⟨|Λ|2K(n)˙Y ,˙Y⟩L2 and we shall mostly use this formal notation with scalar products in L2=L2([0, T]2;H) and Itˆ o differentials, interpreting ˙Ytdt=dYtand taking the inner integral over the entry of the L2- scalar product, in the sequel, making sure that the integrands are always adapted. By definition, K(n)(t, s) = 0 holds for t⩽sand for t=Tn. For s < t we have ψ(n) t−s,s(a) = 0 for |a|⩾(t−s)−1so thatK(n)(t, s) introduces the projection 1(|¯An|⩽(t−s)−1) onto a finite-dimensional subspace by the assumption of a compact resolvent. This shows that K(n)(t, s) is for all t, s∈[0, Tn] a finite-rank operator. Since ψ(n) t−s,sis weakly differentiable in t, we have ( ∂t+Mn)ΛK(n)(t, s),|Λ|2K(n)(t, s)∈ 30 HS(H) for all t, s∈[0, Tn]. The stochastic integrals in the definition are well-defined if ( ∂t+ Mn)ΛK(n),|Λ|2K(n)∈HS(L2) := HS(L2([0, Tn];H)) holds (cf. Da Prato & Zabczyk (2014)), which depends on the behaviour of K(n)(t, s) ast−s↓0. In Propositions B.2 and B.3 below, we shall establish conditions ensuring that Zn−ϑNnandNnare well defined. Before, we work implicitly under this assumption. B.1 Lemma. The estimation error of ˆϑnallows on {Nn̸= 0}the decomposition ˆϑn−ϑ=⟨ΛK(n)˙Y , B n˙W⟩L2− ⟨(∂t+A∗ ϑ)ΛK(n)˙Y , εn˙V⟩L2 Nn. Proof. By partial integration in t, using K(n)(Tn, s) =K(n)(s, s) = 0 for the boundary values and noting that the
inner integral is adapted due to supp( K(n)(t,•))⊆[0, t], we have − ⟨(∂t+A∗ ϑ)ΛK(n)˙Y , B nX⟩L2 =−ZTn 0ZTn 0⟨∂t+A∗ ϑ)ΛK(n)(t, s)dYs, BnXtdt⟩ =ZTn 0Zt 0 ⟨ΛK(n)(t, s)dYs, BndXt⟩ − ⟨ΛK(n)(t, s)dYs, AϑBnXtdt⟩ =ZTn 0ZTn 0⟨ΛK(n)(t, s)dYs, BndWt⟩=⟨ΛK(n)˙Y , B n˙W⟩L2, by commutativity of AϑandBn. This gives Zn−ϑNn=−⟨(∂t+A∗ ϑ)ΛK(n)˙Y , B nX+εn˙V⟩L2 =⟨ΛK(n)˙Y , B n˙W⟩L2− ⟨(∂t+A∗ ϑ)ΛK(n)˙Y , εn˙V⟩L2. (B.3) It remains to divide by NnforNn̸= 0. B.2 Proposition. Zn−ϑNnis well defined and we have E[(Zn−ϑNn)2]≲Tntrace K2 n|Λ|2 ε4 n|¯An|−1 Tn+B4 n|¯An|−4 Tn|Rϑ|−1 Tn if the right-hand side is finite. Proof. The representation in (B.3) shows that Zn−ϑNnis the sum of stochastic integrals with respect to dWanddV. This implies E[Zn−ϑNn] = 0, provided the corresponding variance expression is finite, which will be established next. By Lemma C.4 below we obtain for the variance Var(Z−ϑN) = Var( ⟨(∂t+A∗ ϑ)ΛK(n)˙Y ,˙Y⟩L2)⩽2∥Qϑ(∂t+A∗ ϑ)ΛK(n)∥2 HS(L2). ByS∗ ϑ(∂t+A∗ ϑ) =−Id on L2([0, Tn]; dom( Aϑ))∩H1(H) from Proposition 3.7 and commutativity we obtain Var(Zn−ϑNn)⩽2∥(ε2 nId +BnSϑS∗ ϑBn)(∂t+A∗ ϑ)ΛK(n)∥2 HS(L2) ⩽4ε4 n∥Λ(∂t+A∗ ϑ)K(n)∥2 HS(L2)+ 4∥ΛB2 nSϑK(n)∥2 HS(L2) (B.4) Inserting K(n)(t, s) =Knψ(n) t−s,s(¯An), the Hilbert-Schmidt norm representation via kernels accord- ing to Lemma C.3 below yields ∥Λ(∂t+A∗ ϑ)K(n)∥2 HS(L2)=ZTn 0ZTn s∥ΛKn(∂t+A∗ ϑ)ψ(n) t−s,s(¯An)∥2 HS(H)dtds =ZTn 0trace K2 n|Λ|2ZTn−s 0|(∂v+A∗ ϑ)ψ(n) v,s(¯An)|2dv ds. 31 The simple identities ZTn−s 0|ψ(n) v,s(a)|2dv=1 12|a|−3 Tn−s,ZTn−s 0|∂vψ(n) v,s(a)|2dv=|a|−1 Tn−s (B.5) and|Aϑ|2≼|¯An|2yield further, using Lemma C.2(a) below, ∥Λ(∂t+A∗ ϑ)K(n)∥2 HS(L2) (B.6) ≲Tntrace K2 n|Λ|2 Id +|Aϑ|2|¯An|−2 Tn |¯An|−1 Tn ≲Tntrace K2 n|Λ|2|¯An|−1 Tn . 
For the second term in (B.4) we use the kernel of Sϑfrom Lemma 3.2, ZTn−s 0ψ(n) v,s(a)dv=1 4|a|−2 Tn−s (B.7) as well as (note |Rϑ|w≼Id on the support of ψ(n) w,s(¯An)) e−Rϑwψ(n) w,s(¯An)≼e1ψ(n) w,s(¯An), w⩾0, (B.8) to find for s⩽t |(Sϑ,nK(n))(t, s)|= Zt 0eAϑ(t−u)Knψ(n) u−s,s(¯An)du ≾KneRϑ(t−s)Zt seRϑ(s−u)ψ(n) u−s,s(¯An)du≾KneRϑ(t−s)|¯An|−2 Tn. We thus obtain ∥ΛB2 nSϑK(n)∥2 HS(L2)=ZTn 0Zt 0∥ΛB2 n(SϑK(n))(t, s)∥2 HS(H)dsdt ⩽trace K2 n|Λ|2B4 n|¯An|−4 TnZTn 0Zt 0e2Rϑsdsdt ⩽Tntrace K2 n|Λ|2B4 n|¯An|−4 Tn|Rϑ|−1 Tn . Inserting (B.6) and this bound into (B.4), we arrive at Var(Zn−ϑNn)≲Tntrace K2 n|Λ|2 ε4 n|¯An|−1 Tn+B4 n|¯An|−4 Tn|Rϑ|−1 Tn , which gives the claim. Arguing from bottom to top, Zn−ϑNnis well defined if the right-hand side is finite. B.3 Proposition. We have E[Nn]≳Tntrace Kn|Λ|2B2 n|¯An|−2 Tn|Rϑ|−1 Tn , Var(Nn)≲Tntrace K2 n|Λ|4 ε4 n|¯An|−3 Tn+B4 n|¯An|−4 Tn|Rϑ|−3 Tn , where Nnis well defined if both expressions on the right are finite. Proof. ByE[⟨|Λ|2K(n)˙Y ,˙V⟩L2] = 0 and by independence between XandVwe have E[Nn] =E[⟨|Λ|2K(n)˙Y , B nX⟩L2] =E[⟨Bn|Λ|2K(n)BnX, X⟩L2] =ZTn 0Zt 0trace Kn|Λ|2B2 nψ(n) t−s,s(¯An)1 2Re eAϑ(t−s)Z2s 0eRϑvdv dsdt, 32 where we used the real part of the covariance kernel for Xfrom Lemma 3.4, noting that E[Nn] is real-valued. Now we have for s < t due to |Aϑ|≼|¯An|and min z∈C,|z|⩽1Re(ez)⩾e−1cos(1) ψ(n) t−s,s(¯An) Re(eAϑ(t−s)) =ψ(n) t−s,s(¯An)1(|¯An|⩽(t−s)−1) Re(eAϑ(t−s)) ≽e−1cos(1) ψ(n) t−s,s(¯An). With (B.7) andRt 0R2s 0e−rvdvds =Rt 0r−1(1−e−2rs)ds⩾t 2(r−1∧t) for r, t > 0 this implies, again using Lemma C.2(a), E[Nn]⩾cos(1) 2eZTn 0trace Kn|Λ|2B2 nZTn−s 0ψ(n) v,s(¯An)dvZ2s 0eRϑvdv ds =cos(1) 8eZTn 0trace Kn|Λ|2B2 n|¯An|−2 Tn−sZ2s 0eRϑvdv ds ⩾cos(1) 8etrace Kn|Λ|2B2 n|¯An|−2 Tn/2ZTn/2 0Z2s 0eRϑvdvds ≳Tntrace Kn|Λ|2B2 n|¯An|−2 Tn|Rϑ|−1 Tn . (B.9) For the variance of Nnwe obtain from
Lemma C.4 and Lemma C.3 below Var(Nn)⩽2ZTn 0ZTn 0∥((ε2 nId +B2 nCϑ)|Λ|2K(n))(t, s)∥2 HSdsdt in terms of the corresponding operator kernel. We bound the kernel of CϑK(n), using the kernel ofCϑfrom Lemma 3.4, (B.8) and (B.7): |(CϑK(n))(t, s)|≼1 2ZTn 0Zt+u |t−u|eRϑvdv Knψ(n) u−s,s(¯An)du =1 2KneRϑ|t−s|ZTn 0Zt+u−|t−s| |t−u|−|t−s|eRϑvdv ψ(n) u−s,s(¯An)du ≼1 2KneRϑ|t−s|ZTn−s 0Z2Tn −weRϑvdv ψ(n) w,s(¯An)dw ≼1 2KneRϑ|t−s|ZTn 0e|Rϑ|−1 3Tnψ(n) w,s(¯An)dw ≼e 2KneRϑ|t−s||Rϑ|−1 3Tn|¯An|−2 Tn and therefore, using Lemma C.2(a) and (B.5), Var(Nn) ≲trace K2 n|Λ|4ZTn 0ZTn 0 ε4 nψ(n) t−s,s(¯An)2+B4 ne2Rϑ|t−s||Rϑ|−2 Tn|¯An|−4 Tn dsdt ≲Tntrace K2 n|Λ|4 ε4 n|¯An|−3 Tn+B4 n|Rϑ|−3 Tn|¯An|−4 Tn . as asserted. Nnis well defined if this variance bound and E[Nn] are finite. In fact, by construction we have E[Nn]⩾0. Tracing the lower bound (B.9) for E[Nn], we see that the converse inequalities also hold up to multiplicative constants, which establishes E[Nn]<∞if the trace is finite as assumed. Proof of Theorem 4.4. From Proposition B.2 and Lemma C.2(a) below with |Rϑ|T−1 n≼|¯Rn|T−1 nwe obtain E[(Zn−ϑNn)2]≲Tntrace K2 n|Λ|2 ε4 n|¯Rn|T−1 n+B4 n|¯An|−3 Tn |¯An|−1 Tn|Rϑ|−1 Tn , 33 which equals In(ϑ) by inserting the choice of Kn. Equally, E[Nn]≳In(ϑ) follows from Proposition B.3. In view of Proposition B.3, Condition (4.5) then ensures that Var( Nn)1/2/E[Nn]→0. This implies Nn/E[Nn]Pϑ− →1 and in particular Pϑ(Nn= 0)→0, while (Zn−ϑNn)/E[Nn] =OPϑ In(ϑ)1/2/E[Nn] =OPϑ In(ϑ)−1/2 . Since Zn−ϑNn,Nnare well defined, ˆϑnis well defined and we conclude that ˆϑn−ϑ= (Zn1(Nn̸= 0)−ϑNn)/Nn=OPϑ(In(ϑ)−1/2) holds. The convergence of the error is uniform over ϑbecause (4.5) and all employed moment bounds hold uniformly in ϑ. B.3 Proofs for Section 6 Proof of Proposition 6.2. In the proof the constants are tracked precisely and the C(i) d,βare used only to state the results concisely. We use repeatedly that for x, y⩾0 ((x−y)+)2⩾1 2x2−y2. 
(B.10) Proof of (a): Foru∈ H6(Rd) integration by parts and ϑ(x)⩾1/2 yield ⟨(−∆ϑ)3u, u⟩=⟨ϑ∇∆ϑu,∇∆ϑu⟩⩾1 2⟨∇∆ϑu,∇∆ϑu⟩=1 2∥∇∆ϑu∥2. (B.11) From the identity (with matrix-vector multiplication) ∇∆ϑu=ϑ∇∆u+ (∇2ϑ)∇u+ (∆ u)(∇ϑ) + (∇2u)(∇ϑ) we derive by the inverse triangle inequality and ϑ(x)⩾1/2 ∥∇∆ϑu∥⩾1 2∥∇∆u∥ − ∥ (∇2ϑ)∇u∥ − ∥ (∆u)(∇ϑ)∥ − ∥ (∇2u)(∇ϑ)∥. (B.12) We have, using Lemma C.1 below, (A.1) as well as AB⩽A2/8 + 2 B2forA, B⩾0 in the last step, ∥(∇2ϑ)∇u∥⩽(2d3h)1/2∥∇2ϑ∥∞∥∆u∥1/2∥∇u∥1/2 ⩽√ 2d3h−2+1/2∥∇2L∥∞∥∇∆u∥1/2∥u∥1/2 ⩽1 8∥∇∆u∥+ 4d6h−3∥∇2L∥2 ∞∥u∥. Similarly, we obtain, using A5/6B1/6⩽5 6A+1 6B, ∥(∆u)(∇ϑ)∥+∥(∇2u)∇ϑ∥⩽2(2d3h)1/2h−1∥∇L∥∞∥∇∆u∥1/2∥∆u∥1/2 ⩽23/2d3/2h−1/2∥∇L∥∞∥∇∆u∥5/6∥u∥1/6 ⩽1 8∥∇∆u∥+105213 36d9h−3∥∇L∥6 ∞∥u∥. Hence, in (B.12) we have obtained the bound ∥∇∆ϑu∥⩾1 4∥∇∆u∥ − 4d6∥∇2L∥2 ∞+105213 36d9∥∇L∥6 ∞ h−3∥u∥. Using (B.10), insertion into (B.11) and the denseness of H6(Rd) inH3(Rd) therefore yield (−∆ϑ)3≽1 64(−∆)3−c2 ϑ,dh−6Id with cϑ,d= 23/2d6∥∇2L∥2 ∞+105225/2 36d9∥∇L∥6 ∞. Generally, for positive operators T1, T2andC > 0,γ∈[0,1] we obtain from the operator monotonicity of t7→tγ(Bhatia, 2013, Thm. V.1.9) T1≼T2+CId⇒Tγ 1≼(T2+CId)γ≼Tγ 2+CγId. (B.13) 34 ForT1= (−∆)3,T2= (−∆ϑ)3andγ= (2 + 2 β)/3 this yields (−∆ϑ)2+2β≽2−4−4β(−∆)2+2β−c(4+4β)/3 ϑ,dh−(4+4β)Id, ε2(−∆ϑ)2+2β+ Id≽ 2−4−4β∧(1−ε2c(4+4β)/3 ϑ,dh−(4+4β)) ε2(−∆)2+2β+ Id and thus by operator monotonicity of x7→ −x−1(Bhatia, 2013, Prop. V.1.6) the bound in (a) for h−2c2/3 ϑ,d⩽1 2ε−1/(1+β). Proof of (b): Given the result in (a) and1 2(x+T−1)⩽x∨T−1⩽x+T−1,x⩾0, it suffices to show (−∆ϑ)(ε2(−∆ϑ)2+2β+ Id)≽2(C(4) d,β)−1(−∆)(ε2(−∆)2+2β+ Id) . (B.14) Foru∈ H4(Rd) we bound by Lemma C.1 below, (A.1) and AB⩽3 4A4/3+1 4B4 ∥∆2 ϑu∥=∥ϑ∆∆ϑu+⟨∇ϑ,∇∆ϑu⟩∥ ⩾1 2∥∆∆ϑu∥ −(2d2h)1/2h−1∥∇L∥∞∥∆∆ϑu∥1/2∥∇∆ϑu∥1/2 ⩾1 2∥∆∆ϑu∥
−√ 2dh−1/2∥∇L∥∞∥∆∆ϑu∥3/4∥∆ϑu∥1/4 ⩾1 4∥∆∆ϑu∥ −27d4h−2∥∇L∥4 ∞∥∆ϑu∥. (B.15) Further, by expanding ∆∆ ϑand using the inverse triangle inequality, Lemma C.1 below together with the bound ∥∆L∥2 ∞⩽d∥∇2L∥2 ∞, (A.1) and AαB1−α⩽αA+(1−α)BforA, B⩾0,α∈[0,1], 1 2∥∆2u∥ − ∥ ∆∆ϑu∥ ⩽∥3⟨∇ϑ,∇∆u⟩+ (∆ ϑ)(∆u) + 2⟨∇2ϑ,∇2u⟩HS+⟨∇∆ϑ,∇u⟩∥ ⩽3(2d2h)1/2h−1∥∇L∥∞∥∆2u∥1/2∥∇∆u∥1/2 + 3(2 d4h)1/2h−2∥∇2L∥∞∥∇∆u∥1/2∥∆u∥1/2 + (2d2h)1/2h−3∥∇∆L∥∞∥∆u∥1/2∥∇u∥1/2 ⩽√ 18dh−1/2∥∇L∥∞∥∆2u∥5/6∥∇u∥1/6 +√ 18d2h−3/2∥∇2L∥∞∥∆2u∥1/2∥∇u∥1/2 +√ 2dh−5/2∥∇∆L∥∞∥∆2u∥1/6∥∇u∥5/6 ⩽1 12∥∆2u∥+ 15527d6h−3∥∇L∥6 ∞∥∇u∥+1 12∥∆2u∥+ 54d4h−3∥∇2L∥2 ∞∥∇u∥ +1 12∥∆2u∥+ 2−1/53−15d6/5h−3∥∇∆L∥6/5 ∞∥∇u∥. Consequently, ∥∆∆ϑu∥⩾1 4∥∆2u∥ − 15527d6∥∇L∥6 ∞+ 54d4∥∇2L∥2 ∞ + 2−1/53−15d6/5∥∇∆L∥6/5 ∞ h−3∥∇u∥ holds. Using ∥ϑ∥∞⩽3/2, the other term in (B.15) can be simply bounded for any η >0 via ∥∆ϑu∥⩽3 2∥∆2u∥1/3∥∇u∥2/3+h−1∥∇L∥∞∥∇u∥ ⩽η∥∆2u∥+ 2−1/2η−1/2+h−1∥∇L∥∞ ∥∇u∥, Choosing η=1 321 27d−4h2∥∇L∥−4 ∞, we obtain from (B.15) ∥∆2 ϑu∥⩾1 32∥∆2u∥ − 15525d6∥∇L∥6 ∞+27 2d4∥∇2L∥2 ∞+ 2−11/53−15d6/5∥∇∆L∥6/5 ∞ + 27∥∇L∥4 ∞ 2233/2d6∥∇L∥2 ∞+d4∥∇L∥∞ h−3∥∇u∥ =1 32∥∆2u∥ −¯cϑ,dh−3∥∇u∥ 35 where ¯cϑ,d= 15525+ 2239/2 d6∥∇L∥6 ∞+ 27d4∥∇L∥5 ∞ +27 2d4∥∇2L∥2 ∞+ 2−11/53−15d6/5∥∇∆L∥6/5 ∞. With (B.10) we thus deduce ∆4 ϑ≽2−11∆4−¯c2 ϑ,dh−6(−∆). The weighted geometric mean inequality (Kubo & Ando, 1980, Section 3) asserts for positive operators A≼C,B≼Dandα∈[0,1] A1/2 A−1/2BA−1/2)αA1/2≼C1/2 C−1/2DC−1/2)αC1/2. We use this in the simpler case when AandBas well as CandDcommute such that A1−αBα≼ C1−αDα. Together with ( −∆ϑ)4≽0, (−∆ϑ)≽1 2(−∆) and (B.13) we find that (−∆ϑ)3+2β= (−∆ϑ)(1−2β)/3((−∆ϑ)4)(2+2β)/3 ≽2−(23+20 β)/3(−∆)3+2β−¯c(4+4β)/3 ϑ,d2−(1−2β)/3h−(4+4β)(−∆). Proceeding as in part (a) and gathering the numerical constants in C(3) d,β, this implies (B.14) and therefore the result in (b). Proof of Proposition 6.9. Foru∈ H2(Rd) we have ∥(ν∆−M[ϑ])u∥⩾∥(ν∆−Id)u∥ − ∥ L∥∞∥u∥ and thus, with (B.10), ∥A1u∥2⩾1 2∥A0u∥2− ∥L∥2 ∞∥u∥2. 
(B.16) This shows for all test functions u∈ H4(Rd) that due to ∥L∥∞⩽1/2 and ε⩽1 ⟨(ε2A2 1+ Id) u, u⟩⩾1 2⟨(ε2A2 0+ Id) u, u⟩. By the denseness of H4(Rd) inL2(Rd) and the operator monotonicity of x7→ −x−1we establish (6.11). For (b) let us first bound from below for u∈ H6(Rd),η >0 by Lemma C.1 and 2 AB⩽A2+B2 ∥∇A1u∥=∥∇(A0−M[L(x/h)]u∥ ⩾∥∇A0u∥ −(2dh)1/2h−1∥∇L∥∞∥u∥1/2∥∇u∥1/2− ∥L∥∞∥∇u∥ ⩾∥∇A0u∥ −1 2dh−1∥∇L∥2 ∞η−1∥u∥ −(η+∥L∥∞)∥∇u∥. From (B.10) we have ( A−B−C)2 +⩾1 2A2−2B2−2C2and thus ⟨(−∆)A1u, A 1u⟩⩾1 2∥∇A0u∥2−1 2d2h−2∥∇L∥4 ∞η−2∥u∥2−2(∥L∥∞+η)2∥∇u∥2. In view of infx∈Rdϑ(x)⩾1/2 (due to ∥L∥∞<1/2) together with (B.16), and choosing η2= dν1/2h−1∥∇L∥2 ∞, we arrive at ⟨(−A1)3u, u⟩⩾ν 2∥∇A0u∥2−1 2d2νη−2h−2∥∇L∥4 ∞∥u∥2−2ν(∥L∥∞+η)2∥∇u∥2 +1 4∥A0u∥2−1 2∥L∥2 ∞∥u∥2 ⩾1 4⟨(−A0)3u, u⟩ −1 2 dν1/2h−1∥∇L∥2 ∞+∥L∥2 ∞ ∥u∥2 −4ν(∥L∥2 ∞+dν1/2h−1∥∇L∥2 ∞)∥∇u∥2 ⩾1 4⟨(−A0)3u, u⟩ −4(∥L∥2 ∞+dν1/2h−1∥∇L∥2 ∞)⟨(Id−ν∆)u, u⟩. 36 We thus obtain ⟨(−A1)(ε2A2 1+ Id) u, u⟩ ⩾1 2⟨(−A0)u, u⟩+1 4ε2⟨(−A0)3u, u⟩ −4ε2(∥L∥2 ∞+dν1/2h−1∥∇L∥2 ∞)⟨(Id−ν∆)u, u⟩ ⩾1 4⟨(−A0)(ε2A2 0+ Id) u, u⟩ −4ε2 ∥L∥2 ∞+dν1/2h−1∥∇L∥2 ∞ ⟨(−A0)u, u⟩ ⩾1 4−4ε2(∥L∥2 ∞+dν1/2h−1∥∇L∥2 ∞) ⟨(−A0)(ε2A2 0+ Id) u, u⟩. Together with part (a) and ( x+T−1)/2⩽x∨T−1⩽x+T−1we have thus shown (Id−2TA1)(ε2A2 1+ Id)≽1 16(Id−2TA0)(ε2A2 0+ Id) , provided ∥L∥2 ∞+dν1/2h−1∥∇L∥2 ∞⩽1 32ε−2. By the operator monotonicity of x7→ − x−1and A+B⩽ε−1⇒A2+B2⩽ε−2,A, B⩾0, this yields assertion (b). C Auxiliary Results In the following interpolation inequalities the dependence on the dimension dis not always opti- mised, but some dimension dependence cannot be avoided. C.1 Lemma. Letϑ∈C2(Rd)have support in [−h/2, h/2]d. Then ∥ϑu∥⩽(2h)1/2∥ϑ∥∞∥u∥1/2∥∂iu∥1/2, u∈ H1(Rd). (C.1) holds for any i= 1, . . . , d . In particular, we have ∥(∇ϑ)u∥⩽(2dh)1/2∥∇ϑ∥∞∥u∥1/2∥∇u∥1/2, u∈ H1(Rd), (C.2) ∥⟨∇ϑ,∇u⟩∥⩽(2d2h)1/2∥∇ϑ∥∞∥∇u∥1/2∥∆u∥1/2, u∈ H2(Rd), (C.3) ∥(∇2ϑ)∇u∥⩽(2d3h)1/2∥∇2ϑ∥∞∥∇u∥1/2∥∆u∥1/2, u∈ H2(Rd), (C.4) ∥(∇2u)∇ϑ∥⩽(2d3h)1/2∥∇ϑ∥∞∥∆u∥1/2∥∇∆u∥1/2, u∈ H3(Rd), (C.5)
∥⟨∇²ϑ, ∇²u⟩_{HS}∥ ⩽ (2d⁴h)^{1/2} ∥∇²ϑ∥_∞ ∥∆u∥^{1/2} ∥∇∆u∥^{1/2},  u ∈ H³(R^d).   (C.6)

Proof. To prove (C.1), use integration by parts and obtain

∥ϑu∥² = ∫_{R^d} ϑ²(x) u²(x) dx = −∫_{R^d} (∫_{−h/2}^{x_i∧(h/2)} ϑ²(x₁, …, x_{i−1}, y, x_{i+1}, …, x_d) dy) 2u(x) ∂_i u(x) dx
⩽ 2h ∥ϑ∥²_∞ ∫_{R^d} |u(x)| |∂_i u(x)| dx ⩽ 2h ∥ϑ∥²_∞ ∥u∥ ∥∂_i u∥.

All other estimates follow by replacing u and ϑ in (C.1) by partial derivatives of u and ϑ, where we take into account ∥∇²u∥ = ∥∆u∥ (and thus ∥∇³u∥ = ∥∇∆u∥, where ∥∇³u∥² = Σ_{i,j,k} ∥∂_{ijk} u∥²). For example, we have

∥(∇²ϑ)∇u∥² = Σ_{i=1}^d ∫_{R^d} (Σ_{j=1}^d ∂_{ij}ϑ(x) ∂_j u(x))² dx ⩽ d Σ_{i,j=1}^d ∫_{R^d} (∂_{ij}ϑ)(x)² (∂_j u)(x)² dx
⩽ d Σ_{i,j=1}^d 2h ∥∂_{ij}ϑ∥²_∞ ∥∂_j u∥ ∥∇∂_j u∥ ⩽ 2d³h ∥∇²ϑ∥²_∞ ∥∇u∥ ∥∆u∥.

The other estimates work analogously.

C.2 Lemma. (a) For bounded selfadjoint operators T₁, T₂, T₃ with T₁ ≽ 0 and T₂ ≼ T₃,

trace(T₁T₂) ⩽ trace(T₁T₃)

holds, provided the right-hand side is finite. Also, for operators T₄, T₅, T₆ with T₄*T₄ ≼ T₅*T₅,

∥T₄T₆∥²_{HS} ⩽ ∥T₅T₆∥²_{HS}

holds, provided the right-hand side is finite.
(b) For operators R₁ ≽ 0, R₂ ≽ 0 with bounded inverses and a bounded operator T we have

R₁ ≽ T R₂ T* ⟹ T* R₁^{−1} T ≼ R₂^{−1}.

Proof. In (a) the operator T₁^{1/2}(T₃ − T₂)T₁^{1/2} is positive semi-definite, so that its trace is non-negative, implying the first claim in (a):

trace(T₁T₃) − trace(T₁T₂) = trace(T₁^{1/2}(T₃ − T₂)T₁^{1/2}) ⩾ 0.

In view of ∥ST∥²_{HS} = trace(S*S TT*) for operators S, T, the second claim follows from the first with T₁ = T₆T₆*, T₂ = T₄*T₄ and T₃ = T₅*T₅.
In (b) we have

R₁ ≽ T R₂ T* ⟹ R₁^{−1/2} T R₂ T* R₁^{−1/2} ≼ Id
⟹ R₂^{1/2} T* R₁^{−1} T R₂^{1/2} = (R₁^{−1/2} T R₂^{1/2})* R₁^{−1/2} T R₂^{1/2} ≼ Id
⟹ T* R₁^{−1} T ≼ R₂^{−1},

where ∥LL*∥ = ∥L*L∥ was used for L = R₁^{−1/2} T R₂^{1/2}.

C.3 Lemma. Let K: L²(H) → L²(H) be the operator given by Kf(t) = ∫₀ᵀ k(t, s) f(s) ds for an operator-valued kernel function k ∈ L²([0, T]²; HS(H)). Then we have

∥K∥²_{HS(L²(H))} = ∫₀ᵀ ∫₀ᵀ ∥k(t, s)∥²_{HS(H)} dt ds.

Proof. This extension of the classical result for scalar-valued kernels can be found in Birman & Solomjak (2012, Thm. 11.3.6).

C.4 Lemma.
Suppose Γ ∼ N_cyl(0, Σ) is a cylindrical Gaussian measure on a real Hilbert space H and L is a bounded normal operator on H such that Σ^{1/2} Re(L) Σ^{1/2} is a Hilbert-Schmidt operator. Then

Var(⟨LΓ, Γ⟩) = 2 ∥Σ^{1/2} Re(L) Σ^{1/2}∥²_{HS}

holds. An upper bound is given by 2 ∥ΣL∥²_{HS}.

Proof. Without loss of generality assume Σ = Id, writing ⟨LΓ, Γ⟩ = ⟨(Σ^{1/2} L Σ^{1/2}) Σ^{−1/2} Γ, Σ^{−1/2} Γ⟩ and replacing L by Σ^{1/2} L Σ^{1/2} (if Σ is not one-to-one, restrict to the range of Σ). Denote the eigenvalues of L by (λ_k) and the associated orthonormal basis of eigenvectors by (e_k). Then, due to ⟨LΓ, Γ⟩ = ⟨L*Γ, Γ⟩ and ⟨Γ, e_k⟩ ∼ N(0, 1),

Var(⟨LΓ, Γ⟩) = Var(⟨Re(L)Γ, Γ⟩) = Σ_k Var(Re(λ_k) ⟨Γ, e_k⟩²) = 2 Σ_k Re(λ_k)²,

which is 2 ∥Re(L)∥²_{HS}. For the upper bound we note

∥Σ^{1/2} Re(L) Σ^{1/2}∥²_{HS} = ⟨Σ Re(L), Re(L) Σ⟩_{HS} ⩽ ∥Σ Re(L)∥²_{HS} ⩽ ∥ΣL∥²_{HS}.

For reference, we state the Weyl (1912) asymptotics of the Laplacian eigenvalues; see Shubin (2001) for a general discussion.

C.5 Lemma. Let ∆ be the Laplacian on a smooth bounded domain of dimension d with Dirichlet, Neumann or periodic boundary conditions, or on a d-dimensional compact manifold without boundary. Then the ordered eigenvalues (λ_k)_{k⩾1} of ∆ satisfy

−λ_k ∼ k^{2/d},  k ⩾ 2.   (C.7)

Under Neumann or periodic boundary conditions we have λ₁ = 0; under Dirichlet boundary conditions −λ₁ ∼ 1 holds.

C.6 Lemma. (a) Let a, b, c ∈ R with (a + c) ∧ a > −1 > (b + c) ∨ b. Then, uniformly in t ⩾ 0,

∫₀^∞ (z^a ∧ z^b)(1 + tz)^c dz ∼ (1 + t)^c.

(b) Let a, b, c ∈ R with a > b and a > −1 > (b + c) ∨ c. Then, uniformly in t ⩾ 0,

∫₀^∞ (z^a ∧ z^b)(1 ∧ |z − t|^c) dz ≲ (1 + t)^{b∨c}.

In particular, if a ⩾ 0 and b = a + c, the right-hand side is of order (1 + t)^b.

Proof. For t, z ⩾ 0 we have 1 ∧ z ⩽ (1 + tz)/(1 + t) ⩽ 1 ∨ z, so in (a)

0 < ∫₀^∞ (z^a ∧ z^b)(1 ∧ z^c) dz ⩽ ∫₀^∞
(z^a ∧ z^b) ((1 + tz)/(1 + t))^c dz ⩽ ∫₀^∞ (z^a ∧ z^b)(1 ∨ z^c) dz,

and for a > b this is finite if and only if (a + c) ∧ a > −1 and (b + c) ∨ b < −1.

For 0 ⩽ t ⩽ 2 the claim in (b) reduces to 1 ∼ 1, so assume t ⩾ 2. We split the integral ∫₀^∞ = ∫₀¹ + ∫₁^{t−1} + ∫_{t−1}^{t+1} + ∫_{t+1}^∞ and treat each part separately. First, ∫₀¹ z^a (t − z)^c dz ≲ (t − 1)^c ≲ t^c. Next,

∫₁^{t−1} z^b (t − z)^c dz ≲ t^{1+b+c} (∫_{1/t}^{1/2} u^b du + ∫_{1/2}^{1−1/t} (1 − u)^c du) ≲ t^{1+b+c}(1 ∨ log(t) ∨ t^{−b−1} + t^{−c−1}) ≲ t^{b∨c},

then ∫_{t−1}^{t+1} z^b dz ≲ t^b and finally

∫_{t+1}^∞ z^b (z − t)^c dz ≲ t^{1+b+c} (∫_{1+1/t}^2 (u − 1)^c du + ∫₂^∞ u^b (u − 1)^c du) ≲ t^{1+b+c}(t^{−c−1} + 1) ∼ t^b.

C.7 Lemma. Let α > 0 and p_i, q_i ∈ R for i = 1, 2, 3. Then

T_n^{p₁α+q₁} ε_n^{−(p₂α+q₂)} ν_n^{−(p₃α+q₃)} ≳ 1   (C.8)

holds for positive sequences (T_n), (ε_n), (ν_n) if |p₁ log(T_n) + p₂ log(ε_n^{−1}) + p₃ log(ν_n^{−1})| → ∞ and

α > limsup_{n→∞} [−q₁ log(T_n) − q₂ log(ε_n^{−1}) − q₃ log(ν_n^{−1})] / [p₁ log(T_n) + p₂ log(ε_n^{−1}) + p₃ log(ν_n^{−1})].

Proof. By taking logarithms, (C.8) is equivalent to

(p₁α + q₁) log(T_n) + (p₂α + q₂) log(ε_n^{−1}) + (p₃α + q₃) log(ν_n^{−1}) ⩾ log(c),

where c > 0 is the proportionality constant. Solve for α.

References

Altmeyer, R., Tiepner, A., & Wahl, M. (2024). Optimal parameter estimation for linear SPDEs from multiple measurements. The Annals of Statistics, 52(4), 1307–1333.
Amann, H. (2019). Linear and quasilinear parabolic problems: Function Spaces. Birkhäuser.
Bain, A. & Crisan, D. (2009). Fundamentals of stochastic filtering. Springer.
Bhatia, R. (2013). Matrix analysis. Springer.
Bibinger, M., Hautsch, N., Malec, P., & Reiß, M. (2014). Estimating the quadratic covariation matrix from noisy observations: Local method of moments and efficiency. The Annals of Statistics, 42(4), 1312–1346.
Birman, M. S. & Solomjak, M. Z. (2012). Spectral theory of self-adjoint operators in Hilbert space. Springer.
Bogachev, V. I. (1998). Gaussian measures. American Mathematical Society.
Bühler, T. & Salamon, D. (2018). Functional analysis. American Mathematical Society.
Cialenco, I. (2018). Statistical inference for SPDEs: an overview. Statistical Inference for Stochastic Processes, 21(2), 309–329.
Da Prato, G. & Zabczyk, J.
(2014). Stochastic equations in infinite dimensions. Cambridge University Press.
Engel, K.-J. & Nagel, R. (2000). One-parameter semigroups for linear evolution equations. Springer.
Gaudlitz, S. & Reiß, M. (2023). Estimation for the reaction term in semi-linear SPDEs under small diffusivity. Bernoulli, 29(4), 3033–3058.
Hildebrandt, F. & Trabs, M. (2021). Parameter estimation for SPDEs based on discrete observations in time and space. Electronic Journal of Statistics, 15, 2716–2776.
Huebner, M. & Rozovskii, B. (1995). On asymptotic properties of maximum likelihood estimators for parabolic stochastic PDEs. Probability Theory and Related Fields, 103(2), 143–163.
Ibragimov, I. A. & Khas'minskii, R. Z. (2001). Estimation problems for coefficients of stochastic partial differential equations. Part III. Theory of Probability & Its Applications, 45(2), 210–232.
Kubo, F. & Ando, T. (1980). Means of positive linear operators. Mathematische Annalen, 246, 205–224.
Kutoyants, Y. A. (2004). Statistical inference for ergodic diffusion processes. Springer.
Kutoyants, Y. A. & Zhou, L. (2021). On parameter estimation of the hidden Gaussian process in perturbed SDE. Electronic Journal of Statistics, 15(1), 211–234.
Lototsky, S. & Rozovskii, B. (2018). Stochastic evolution systems: linear theory and applications to non-linear filtering, 2nd edition. Springer Cham.
Lototsky, S. V. & Rozovskii, B. L. (2000). Skorokhod's Ideas in Probability Theory, chapter Parameter Estimation for Stochastic Evolution Equations with Non-commuting
Operators, (pp. 271–280). Institute of Mathematics of the National Academy of Sciences of Ukraine.
Pasemann, G., Flemming, S., Alonso, S., Beta, C., & Stannat, W. (2020). Diffusivity estimation for activator-inhibitor models: Theory and application to intracellular dynamics of the actin cytoskeleton. Journal of Nonlinear Science, 31.
Pasemann, G. & Reiß, M. (2024). Nonparametric diffusivity estimation for the stochastic heat equation from noisy observations. arXiv preprint arXiv:2410.00677.
Peszat, S. (1992). Law equivalence of solutions of some linear stochastic equations in Hilbert spaces. Studia Mathematica, 101(3), 269–284.
Reiß, M. (2008). Asymptotic equivalence for nonparametric regression with multivariate and random design. The Annals of Statistics, 36(4), 1957–1982.
Reiß, M. (2011). Asymptotic equivalence for inference on the volatility from noisy observations. The Annals of Statistics, 39(2), 772–802.
Schmisser, E. (2011). Non-parametric drift estimation for diffusions from noisy data. Statistics & Decisions, 28(2), 119–150.
Schmüdgen, K. (2012). Unbounded self-adjoint operators on Hilbert space. Springer.
Shubin, M. A. (2001). Pseudodifferential Operators and Spectral Theory. Springer, second edition.
Triebel, H. (2010). Theory of Function Spaces. Birkhäuser / Springer.
Tsybakov, A. B. (2009). Introduction to nonparametric estimation. Springer.
Weyl, H. (1912). Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Mathematische Annalen, 71(4), 441–479.
arXiv:2505.14058v1 [math.ST] 20 May 2025

A Characterization of a Subclass of Separate Ratio-Type Copulas

Ziad Adwan∗   Nicola Sottocornola†

May 24, 2025

Abstract

Copulas are essential tools in statistics and probability theory, enabling the study of the dependence structure between random variables independently of their marginal distributions. Among the various types of copulas, Ratio-Type Copulas have gained significant attention due to their flexibility in modeling joint distributions. This paper focuses on Separate Ratio-Type Copulas, where the dependence function is a separate product of univariate functions. We revisit a theorem characterizing the validity of these copulas under certain assumptions, generalize it to broader settings, and examine the conditions for reversing the theorem in the case of concave generating functions. To address its limitations, we propose new assumptions that ensure the validity of separate copulas under specific conditions. These results refine the theoretical framework for separate copulas, extending their applicability to pure mathematics and applied fields such as finance, risk management, and machine learning.

Keywords: Bivariate Copulas, Ratio-Type Copulas
2000 Mathematics Subject Classification: 60E05, 62H05, 62H20.

1 Introduction

Copulas are powerful tools in statistics and science for modeling and analyzing complex relationships between random variables. Unlike traditional methods that focus on linear correlations, copulas comprehensively capture dependencies, including tail dependence and asymmetries. This makes them particularly valuable in fields where understanding intricate interdependencies is critical [10]. One of the primary advantages of copulas is their ability to separate the marginal distributions of random variables from their dependence structure.
∗ Liwa College, Abu Dhabi, UAE. Email: ziad.adwan@lc.ac.ae
† New York University, Abu Dhabi, UAE. Email: ns6159@nyu.edu

This flexibility enables researchers to model the behavior of individual variables using appropriate distributions while independently specifying their dependence [11]. For instance, in finance, copulas are used to analyze the joint risk of assets, allowing for better portfolio optimization and risk management [6]. In environmental science, copulas help model the interplay between variables like rainfall and temperature, enabling more accurate predictions of extreme weather events [7]. Similarly, in medicine, they are used to study the correlation between biomarkers and disease outcomes, advancing personalized treatment plans [8]. Beyond applied sciences, copulas play a key role in machine learning, reliability analysis, and econometrics. By capturing the essence of dependence structures, copulas provide a versatile framework for addressing real-world problems characterized by uncertainty and interdependence, making them indispensable in modern statistical analysis.

Formally, a bivariate copula C(u, v) is a function that maps the unit square S = [0, 1]² to I = [0, 1], satisfying the following conditions:

1. C(u, 0) = C(0, v) = 0 for all (u, v) ∈ S,
2. C(u, 1) = u and C(1, v) = v for all u, v ∈ I,
3. for any (u₁, v₁), (u₂, v₂) ∈ S such that u₁ ≤ u₂ and v₁ ≤ v₂,

C(u₂, v₂) − C(u₂, v₁) − C(u₁, v₂) + C(u₁, v₁) ≥ 0.   (1)

These properties ensure that a copula captures the dependency structure of a joint distribution while being independent of the marginal distributions of the random variables. One particularly flexible family of copulas is the Ratio-Type Copulas, which are defined as

B_θ(u, v) = uv / (1 − θ ϕ(u, v)),   0 ≤ u, v ≤ 1, θ ∈ R,

where ϕ is a real function defined on S. In recent years they have attracted considerable attention ([1], [2], [3], [4], [9]). To simplify the analysis, researchers have focused on Separate Ratio-Type Copulas, a subclass where the dependence function
ϕ(u, v) is expressed as a separable product of two univariate functions that we assume differentiable a.e.:

D_θ(u, v) = uv / (1 − θ f(u) g(v)),   0 ≤ u, v ≤ 1, θ ∈ R.   (2)

Now let us consider the function G defined on S in this way:

G = (f − u f′)(g − v g′) − 2uv f′ g′   (3)

and define

α₁ = min_S(G),   α₂ = max_S(G).   (4)

In a nice paper published in 2024 [4], El Ktaibi, Bentoumi and Mesfioui examine the conditions under which (2) is a valid copula. Given the assumptions:

A1. f(1) = g(1) = 0.
A2. f and g are strictly monotonic functions.
A3. f(u)g(v) / (f(0)g(0)) ≤ 1 − uv.

they proved that D_θ is a valid copula provided that 1/α₁ ≤ θ ≤ 1/α₂. The aim of this paper is:

• to prove that under the Assumptions A1, A2, A3 the theorem cannot be reversed;
• to provide additional assumptions to make it possible.

2 Bivariate copulas

Before moving forward, we recall the general definition of a bivariate copula C, assuming that all the derivatives involved exist a.e. ([10], [5]):

Definition 2.1. The function C: S → I is a bivariate copula if:

1. C(u, 0) = C(0, v) = 0, C(u, 1) = u, C(1, v) = v, ∀ u, v ∈ I
2. ∂²C/∂u∂v ≥ 0, ∀ (u, v) ∈ S

Here inequality (1) has been replaced with the more convenient condition 2 on the second derivative. In the case of copulas of the form (2) the first condition is trivially verified. The second one reduces to (see [4])

(1 − θ[(f − u f′)(g − v g′) − 2 D_θ f′ g′]) / (1 − θ f g)² ≥ 0

or, which is the same, to

1 − θ[(f − u f′)(g − v g′) − 2 D_θ f′ g′] ≥ 0.   (5)

3 The theorem

We start this section with a couple of simple observations:

Remark 3.1. We can assume, without loss of generality, that f(0) = g(0) = 1.

Proof. It is enough to remark that, if (2) is a valid copula, so is D̃ = uv/(1 − θ̃ f̃ g̃) with f̃ = f/f(0), g̃ = g/g(0) and θ̃ = f(0)g(0) θ.

Remark 3.2. If M = max(f − u f′) and N = max(g − v g′) then max_∂S(G) = max{M, N}.
Proof. Let us find the maximum of G on ∂S:

v = 0 ⟹ G(u, 0) = f(u) − u f′(u) ⟹ max(G(u, 0)) = M,
v = 1 ⟹ G(u, 1) = (f(u) − u f′(u))(−g′(1)) − 2 g′(1) u f′(u) = −g′(1)(f(u) + u f′(u)) ⟹ max(G(u, 1)) = −g′(1).

Analogously,

u = 0 ⟹ max(G(0, v)) = N,
u = 1 ⟹ max(G(1, v)) = −f′(1).

Remembering that −f′(1) = (f − u f′)|_{u=1} ≤ M and −g′(1) = (g − v g′)|_{v=1} ≤ N, we conclude that

max_∂S(G) = max{M, N}. (6)

Conditions A1, A2, A3 can be simplified in light of Remark 3.1:

A1. f(0) = g(0) = 1, f(1) = g(1) = 0.
A2. f and g are strictly decreasing functions.
A3. f g ≤ 1 − uv.

So, finally, the starting point of our investigation is:

Theorem 3.3. Let f and g verify A1, A2, A3. Then

Dθ is a valid copula ⟸ θ ∈ [1/α1, 1/α2].

Proof. See Theorem 1 in [5].

4 The restriction on the concavity

Assumptions A1, A2, A3 alone cannot guarantee the converse of Theorem 3.3, as shown in the following example:

Example 4.1. Let f(u) = (1 − u)³ and g(v) = (1 − v)³. The minimum and maximum of G are reached on the diagonal v = u, where G has the expression

G(u, u) = (f(u) − u f′(u))² − 2 u² (f′(u))² = −(1 − u)⁴ (14u² − 4u − 1),

so, according to (4), we have

α1 = min_I(G(u, u)) = G(4/7, 4/7) = −729/16807 ⟹ 1/α1 ≈ −23.0549,
α2 = G(0, 0) = 1.

If Theorem 3.3 were reversible, θ would be constrained to [−23.0549, 1]. However, as shown in Figure 1, inequality (5) holds even for θ = −30, indicating that the lower bound is overly restrictive. As a matter of fact, the minimum θ verifying (5) is θmin ≈ −36.1903, significantly smaller than 1/α1.

Figure 1: Inequality
(5) with θ = −30 on S (left) and zoomed around the minimum point (right).

In order to avoid these situations we restrict our analysis to the case where f and g are concave down. Therefore we will use the following assumptions:

B1. f(0) = g(0) = 1, f(1) = g(1) = 0.
B2. f and g are strictly decreasing functions.
B3. f and g are concave functions.

We now introduce a family of continuous functions hM, M ≥ 1:

hM(u) = 1 for 0 ≤ u ≤ 1 − 1/M, and hM(u) = M(1 − u) for 1 − 1/M < u ≤ 1. (7)

Remark 4.2. If f and g verify B1, B2, B3, we have f g ≤ α2 (1 − uv).

Proof. We set M = −f′(1), N = −g′(1) and, without loss of generality, α2 = M ≥ N. Because f ≤ hM and g ≤ hN, it is enough to prove the result with f = hM and g = hN.

• f = g = 1:
1 ≤ −1/M + 2 = M[1 − (1 − 1/M)²] ≤ M[1 − (1 − 1/M)(1 − 1/N)] ≤ M(1 − uv), so f g ≤ M(1 − uv).

• f = 1, g = N(1 − v):
N(1 − v) ≤ M(1 − v) ≤ M[1 − (1 − 1/M)v] ≤ M(1 − uv), so f g ≤ M(1 − uv).

• f = M(1 − u), g = N(1 − v):
1 ≤ (1 − uv)/(1 − u), hence 1/(1 − v) ≤ (1 − uv)/((1 − u)(1 − v)), hence N ≤ (1 − uv)/((1 − u)(1 − v)), so M N (1 − u)(1 − v) ≤ M(1 − uv) and f g ≤ M(1 − uv),
where the third inequality follows from N ≤ 1/(1 − v) (see the definition of hN).

5 A preliminary result

The possibility to reverse Theorem 3.3 is related to the position of the maximum of G in S.

Theorem 5.1. Let Dθ be the copula (2). Assume that f and g verify B1, B2, B3 and α2 = max_∂S(G). Then

Dθ is a valid copula ⟹ θ ∈ [1/α1, 1/α2].

Proof. The maximum of G on ∂S, if we set a = −f′(1) and b = −g′(1), is now

α2 = max{a, b} (8)

because of (6). Dθ being a valid copula, θ has to verify (5) for all (u, v) in S. In particular:

v = 0 ⟹ 1 − θ(f − u f′) ≥ 0 ⟹ θ ≤ 1/a,
u = 0 ⟹ 1 − θ(g − v g′) ≥ 0 ⟹ θ ≤ 1/b,

so finally θ ≤ 1/max{a, b} = 1/α2 because of (8). The proof for 1/α1 is very similar:

G(u, 0) = f(u) − u f′(u) is increasing ⟹ min_I(G(u, 0)) = 1

and

G(u, 1) = b(f(u) + u f′(u)) is decreasing ⟹ min_I(G(u, 1)) = −ab,

so finally min(G)|_∂S = −ab. Now

G = f g − v f g′ − u f′ g − u v f′ g′ ≥ −u v f′ g′ ≥ −ab ⟹ α1 = −ab. (9)

Replacing now u = v = 1 in (5) we get

1 − θ(−ab) ≥ 0 ⟹ θ ≥ 1/(−ab) = 1/α1

because of (9). So the problem is now to identify those functions f and g that generate a function G in (3) whose maximum is reached on the boundary of S.
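The numbers in Example 4.1 can be confirmed numerically, and this also illustrates the boundary-maximum question just raised: for f(u) = g(u) = (1 − u)³ the minimum of G is attained in the interior of the diagonal. A minimal sketch (assuming a fine grid is accurate enough for the comparison):

```python
import numpy as np

f  = lambda u: (1 - u) ** 3
fp = lambda u: -3 * (1 - u) ** 2          # f'

u = np.linspace(0, 1, 200001)
# G restricted to the diagonal v = u, where Example 4.1 locates the extrema
G_diag = (f(u) - u * fp(u)) ** 2 - 2 * u**2 * fp(u) ** 2
alpha1 = G_diag.min()
print(alpha1, -729 / 16807)               # both ≈ -0.0434, attained near u = 4/7
print(1 / alpha1)                         # ≈ -23.0549
```

The grid minimum agrees with the closed-form value G(4/7, 4/7) = −729/16807 to the accuracy of the discretization.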
6 The restriction on the derivatives

In this section we will impose another constraint on f′ and g′ to ensure that the maximum of G is on the boundary of S, as it is in Theorem 5.1. If 1 ≤ max{a, b} < 2, then we can construct functions f and g for which the maximum of G is not on the boundary of S; the following is one such example. To start, consider the continuous functions (7) with M ≥ 1. For such functions we observe that, in the right neighborhood of u = 1 − 1/M:

u ≈ 1 − 1/M, h(u) ≈ 1, −h′(u) = M. (10)

Figure 2: The function z = G(u, u) with f = h1.2 and z = 1.2 (dotted).

Restricting G to the diagonal of S, and replacing these values of h for both f and g, we find:

G(u, u) = (h(u) − u h′(u))² − 2 u² (h′(u))² ≈ −M² + 4M − 2 > M = max_∂S(G) if 1 < M < 2

(see Figure 2). The problem with this (symmetric) example is that f and −f′ can simultaneously take values that are very close to their maximum possible values, that is, f ≈ 1 and −f′ ≈ a. If we want to rule out such cases we have to introduce an additional bound on the values of the function and its derivative:

B4. H = f·(1 − v g′) + g·(1 − u f′) ≤ max{a, b} + 1.

Before moving
further, it could be useful to check whether this new condition is reasonable, by testing some simple examples where the maximum of G is clearly reached on the boundary of S. We use as a benchmark the examples provided in Table 1 of [4]. The results in Table 1 (see Appendix) should convince the reader that B4 is a reasonable choice if we want to get rid of functions like (7), while preserving a great variety of well-known functions.

Example 6.1. We examine condition B4 on one example:

f(u) = 1 − uⁿ, g(v) = 1 − vᵐ, 1 ≤ m ≤ n.

We have a = −f′(1) = n, b = −g′(1) = m, and so max{a, b} = n. It can be easily checked that there is only one critical point (u0, v0) of H in the interior of S:

(u0, v0) = ( ((m − 1)/(n + m))^{1/n}, ((n − 1)/(n + m))^{1/m} ).

A simple calculation gives

H(u0, v0) = 2 + (n − 1)(m − 1)/(n + m) ≤ 2 + (n − 1) = n + 1.

On the boundary of S the maximum of H is H(1, 0) = n + 1. So finally condition B4 reduces to

max_S(H) = n + 1 ≤ max{a, b} + 1 = n + 1.

Now we state the main result of this paper.

7 The inverse of the theorem

We are finally ready to prove the following

Theorem 7.1. Let

Dθ(u, v) = uv / (1 − θ f(u) g(v)), 0 ≤ u, v ≤ 1, θ ∈ R,

where f and g verify the following conditions:

B1. f(0) = g(0) = 1, f(1) = g(1) = 0.
B2. f and g are strictly decreasing functions.
B3. f and g are concave down.
B4. f·(1 − v g′) + g·(1 − u f′) ≤ max{a, b} + 1.

Then:

Dθ is a valid copula ⟺ θ ∈ [1/α1, 1/α2].

Proof. ⟸) We have α2 ≥ 1 and so θ ≤ 1/α2 ≤ 1. We start our proof by checking that 0 ≤ Dθ ≤ 1:

θ f g ≤ f g ⟹ 1 − θ f g ≥ 1 − f g ≥ 0 ⟹ Dθ ≥ 0;

Remark 4.2 ⟹ f g ≤ α2(1 − uv) ⟹ f g / α2 ≤ 1 − uv ⟹ θ f g ≤ 1 − uv ⟹ 1 − θ f g ≥ uv ⟹ Dθ ≤ 1.

The last thing we have to prove is (5). If θ > 0 we have

1 − θ[(f − u f′)(g − v g′) − 2 Dθ f′ g′] ≥ 1 − θ G ≥ 1 − θ α2 ≥ 0

and similarly, if θ < 0,

1 − θ[(f − u f′)(g − v g′) − 2 Dθ f′ g′] ≥ 1 − θ G ≥ 1 − θ α1 ≥ 0.

⟹) According to Theorem 5.1 we only have to prove that α2 = max_∂S(G) = max{a, b}. If we remember that f − u f′ and g − v g′ are increasing functions, we immediately realize that their minimum is 1, and so

u f′ ≤ f − 1 ≤ 0, v g′ ≤ g − 1 ≤ 0.

Multiplying the previous inequalities, we get u v f′ g′ ≥ (f − 1)(g − 1), which implies

−u v f′ g′ ≤ −f g + f + g − 1.
(11)

Now

G = (f − u f′)(g − v g′) − 2 u v f′ g′
  ≤ f g − v f g′ − u g f′ − f g + f + g − 1   [by (11)]
  = f(1 − v g′) + g(1 − u f′) − 1
  ≤ [max{a, b} + 1] − 1 = max{a, b}.   [by B4]

Remark 7.2.
• In Example 6.1 we showed that condition B4 reduces to max(H) = max{a, b} + 1. This is not a specific feature of that particular example; it is always the case. A simple calculation shows that, if B4 is verified, then

H(1, 0) = a + 1, H(0, 1) = b + 1 ⟹ max(H) = max{a, b} + 1.

• Condition A3 is quite restrictive. Note that, at least in the case where f and g are concave functions, it can be omitted (see Theorem 7.1).

Appendix

f(u)                        g(v)                        Conditions     B1  B2  A3  B3  B4
1 − uⁿ                     1 − vⁿ                     1 ≤ n ≤ 2      T   T   T   T   T
                                                      2 < n          T   T   F   T   T
log_b(u + b(1−u))          log_b(v + b(1−v))          1 < b          T   T   T   T   T
cos(πu/2)                  cos(πv/2)                                 T   T   T   T   T
1 − u                      log_b(v + b(1−v))          1 < b ≤ 41     T   T   T   T   T
                                                      41 < b         T   T   F   T   T
cos(πu/2)                  1 − v                                     T   T   T   T   T
log_b(v + b(1−v))          cos(πv/2)                  1 < b          T   T   T   T   T
(1 − u)e^{cu}              (1 − v)e^{cv}              0 ≤ c ≤ 1      T   T   T   T   T
(e^{au} − e^a)/(1 − e^a)   (e^{av} − e^a)/(1 − e^a)   0 < a ≤ 3.7    T   T   T   T   T
                                                      3.7 < a        T   T   F   T   T
hM(u)                      hM(v)                      1 < M < 2      T   T   T   T   F

Table 1: Testing conditions B1, ..., B4 (T = True, F = False). We added A3 to show how restrictive this condition is.

References

[1] Chesneau C. (2022). Some new ratio-type copulas: Theory and properties. Appl. Math. 49, 79–101.
[2] Chesneau C. (2023). A Collection of Two-Dimensional Copulas Based on an Original Parametric Ratio Scheme. Symmetry 15, 977. https://doi.org/10.3390/sym15050977
[3] Chesneau C. (2024). Exploring Two Modified Ali-Mikhail-Haq Copulas and New Bivariate Logistic Distributions. Pan-Am. J. Math. 3, 4. https://doi.org/10.28919/cpr-pajm/3-4
[4] El Ktaibi F., Bentoumi R. and Mesfioui M. (2024). On the Ratio-Type Family of Copulas. Mathematics 12, 1743. https://doi.org/10.3390/math12111743
[5] El Ktaibi F., Bentoumi R., Sottocornola N. and Mesfioui M. (2022). Bivariate Copulas Based on Counter-Monotonic Shock Method. Risks 10, 202. https://doi.org/10.3390/risks10110202
[6] Embrechts P., McNeil A. J. and Straumann D. (2002). Correlation and Dependence in Risk Management: Properties and Pitfalls. In M. A. H. Dempster (Ed.), Risk Management: Value at Risk and Beyond (pp. 176–223). Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511615337.008
[7] Genest C. and Favre A.-C. (2007). Everything you always wanted to know about copula modeling but were afraid to ask. Journal of Hydrologic Engineering 12(4), 347–368.
[8] Joe H. (2014). Dependence Modeling with Copulas. CRC Press.
[9] Mesiar R. and Stupňanová A. (2015). Open problems from the 12th International Conference on Fuzzy Set Theory and Its Applications. Fuzzy Sets Syst. 261, 112–123.
[10] Nelsen R. B. (2006). An Introduction to Copulas. Springer.
[11] Sklar A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris 8, 229–231.
arXiv:2505.14138v1 [math.ST] 20 May 2025Sample Complexity of Correlation Detection in the Gaussian Wigner Model Dong Huang and Pengkun Yang∗ May 21, 2025 Abstract Correlation analysis is a fundamental step in uncovering meaningful insights from complex datasets. In this paper, we study the problem of detecting correlations between two random graphs following the Gaussian Wigner model with unlabeled vertices. Specifically, the task is formulated as a hypothesis testing problem: under the null hypothesis, the two graphs are independent, while under the alternative hypothesis, they are edge-correlated through a latent vertex permutation, yet maintain the same marginal distributions as under the null. We focus on the scenario where two induced subgraphs, each with a fixed number of vertices, are sampled. We determine the optimal rate for the sample size required for correlation detection, derived through an analysis of the conditional second moment. Additionally, we propose an efficient approximate algorithm that significantly reduces running time. Keywords— Correlation detection, Gaussian Wigner model, graph sampling, induced subgraphs, effi- cient algorithm 1 Introduction Understanding the correlation between datasets is one of the most significant tasks in statistics. In many applications, the observations may not be the familiar vectors but rather graphs. Recently, there have been many studies on the problem of detecting graph correlation and recovering the alignments of two correlated graphs. This problem arises across various domains: •In computer vision, 3-D shapes can be represented as graphs, where nodes are subregions and weighted edges represent adjacency relationships between different regions. A fundamental task for pattern recognition and image processing is determining whether two graphs represent the same object under different rotations [BBM05, MHK+08]. 
•In natural language processing, each sentence can be represented as a graph, where nodes correspond to words or phrases, and the weighted edges represent syntactic and semantic relationships [HR07]. The ontology alignment problem refers to uncovering the correlation between knowledge graphs that are in different languages [HNM05]. •In computational biology, protein–protein interactions (PPI) and their networks are crucial for all biological processes. Proteins can be regarded as vertices, and the interactions between them can be formulated as weighted edges [SXB08, VCL+15]. ∗D. Huang and P. Yang are with the Department of Statistics and Data Science, Tsinghua University. P. Yang is supported in part by the National Key R&D Program of China 2024YFA1015800. 1 Following the hypothesis testing framework proposed in [BCL+19], we formulate the graph correlation detection problem in Problem 1. For a weighted graph Gwith vertex set V(G) and edge set E(G), the weight associated with each edge uvis typically denoted as βuv(G) for any u, v∈V(G). Problem 1. LetG1andG2be two weighted random graphs with vertex sets V(G1), V(G2)and edge sets E(G1), E(G2). Under the null hypothesis H0,G1andG2are independent; under the alternative hypothesis H1, there exists a correlation between E(G1)andE(G2). Given G1andG2, the goal is to test H0against H1. A variety of studies have extensively investigated detection problems. However, the previous studies typically required full observation of all edges in G1andG2for detection, which is imprac- tical when the entire graph is unknown in certain scenarios. In such cases, graph sampling—the process of sampling
a subset of vertices and edges from the graph—becomes a powerful approach for exploring graph structure. This technique has been widely used in various settings, as it allows for inference about the graph without needing full observation [LF06, HL13]. In fact, there are several motivations leading us to consider the graph sampling method: •Lack of data. In social network analysis, the entire network is often unavailable due to API limitations. As a result, researchers typically select a subset of users from the network, which essentially constitutes a sampling of vertices [PDK11]. •Testing costs. The Protein Interaction Network is a common focus in biochemical research. However, accurately testing these interactions can be prohibitively expensive. As a result, testing methods based on sampled graphs are often employed [SWM05]. •Visualization. The original graph is sometimes too large to be displayed on a screen, and sampling a subgraph provides a digestible representation, making it easier for visualization [WCA+16]. In this paper, we consider sampling induced subgraphs for testing H0against H1when given two random graphs G1andG2with|V(G1)|=|V(G2)|=n. We randomly sample two induced subgraphs G1, G2with svertices from G1andG2, respectively. An induced subgraph of a graph is formed from a subset of the vertices of the graph, along with all the edges between them from the original graph. Specifically, the sampling process for G1andG2is as follows: we first independently select vertex sets V(G1)⊆V(G1) and V(G2)⊆V(G2) with |V(G1)|=|V(G2)|=s, and then retain the weighted edge between V(G1) and V(G2) from the original graphs. We assume s≤n throughout the paper. 1.1 Main Results In this subsection, we present the main results of the paper. Numerous graph models exist, with the Gaussian Wigner model being a prominent example [DMWX21, FMWX23], under which the weighted edges βuv(G) follow independent standard normals for any vertices u, v∈V(G). 
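The sampling scheme described above (independently select vertex sets, then keep every edge whose endpoints are both sampled) can be sketched in a few lines. This is an illustrative simulation, not code from the paper; the names n, s, V1, V2 are ours, and here the two graphs are drawn independently, as under the null hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 200, 50

def wigner(rng, n):
    """Symmetric matrix of i.i.d. N(0,1) off-diagonal edge weights."""
    W = np.triu(rng.standard_normal((n, n)), 1)
    return W + W.T

A1, A2 = wigner(rng, n), wigner(rng, n)   # two independent Wigner graphs (H0)

# Independently sample vertex subsets of size s; the induced subgraph keeps
# all edges of the original graph between the sampled vertices.
V1 = rng.choice(n, size=s, replace=False)
V2 = rng.choice(n, size=s, replace=False)
G1 = A1[np.ix_(V1, V1)]
G2 = A2[np.ix_(V2, V2)]
print(G1.shape, G2.shape)   # (50, 50) (50, 50)
```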
This paper focuses on the Gaussian Wigner model with vertex set size n. Under the null hypothesis H0,G1andG2follow independent Gaussian Wigner model with nvertices. Under the alternative hypothesis H1,G1andG2follow the following correlated Gaussian Wigner model. Definition 1 (Correlated Gaussian Wigner model) .Letπ∗denote a latent bijective mapping from V(G1) toV(G2). We say that a pair of graphs ( G1,G2) are correlated Gaussian Wigner graphs if each pair of weighted edges βuv(G1) and βπ∗(u)π∗(v)(G2) for any u, v∈V(G1) are correlated standard normals with correlation coefficient ρ∈(0,1). 2 LetQandPdenote the probability measures for the sampled subgraphs ( G1, G2) under H0 andH1, respectively. We then focus on the following two detection criteria. Definition 2 (Strong and weak detection) .We say a testing statistic T=T(G1, G2) with a threshold τachieves •strong detection , if the sum of Type I and Type II errors converges to 0 as n→ ∞ : lim n→∞[P(T< τ) +Q(T ≥τ)] = 0; •weak detection , if the sum of Type I and Type II errors is bounded away from 1 as n→ ∞ : lim n→∞[P(T< τ) +Q(T ≥τ)]<1. It is well-known that the minimal value of the sum of Type I and Type II errors between Pand Qis 1−TV(P,Q) (see, e.g., [PW25, Theorem 7.7]), achieved by the likelihood ratio test, where TV(P,Q) =1 2R |dP −
dQ| is the total variation distance between P and Q. Thus, strong and weak detection are equivalent to lim_{n→∞} TV(P,Q) = 1 and lim_{n→∞} TV(P,Q) > 0, respectively. We then establish the main results of correlation detection in the Gaussian Wigner model.

Theorem 1. There exist constants C, c > 0 such that, for any 0 < ρ < 1, if

s² ≥ C ( n log n / log(1/(1 − ρ²)) ∨ n ),

then TV(P,Q) ≥ 0.9. Moreover, if s² = ω(n), then TV(P,Q) = 1 − o(1). Conversely, if

s² ≤ c ( n log n / log(1/(1 − ρ²)) ∨ n ),

then TV(P,Q) ≤ 0.1. Moreover, if s² ≤ c·n log n / log(1/(1 − ρ²)) or s² = o(n), then TV(P,Q) = o(1).

The proof of Theorem 1 is deferred to Appendix A. Theorem 1 implies that, for the hypothesis testing problem between H0 and H1 when sampling two induced subgraphs, the optimal rate for the sample size s is of the order

( n log n / log(1/(1 − ρ²)) ∨ n )^{1/2}.

Above this order, detection is possible, while below it, detection is impossible. Specifically, when C·n log n / log(1/(1 − ρ²)) > n², the possibility condition requires s > n in the above theorem. However, we assume that the sample size satisfies s ≤ n, which indicates that there is no theoretical guarantee of detection, even when we sample the entire graph. Indeed, it is shown in [WXY23] that the detection threshold on ρ in the fully correlated Gaussian Wigner model is ρ² ≍ log n / n. Our results match the thresholds established in the previous work up to a constant for the special case s = n.

The possibility results can serve as a criterion for successful correlation detection in practice. For example, in computational biology, one may sample subgraphs to reduce testing costs, and the possibility results indicate when accurate detection remains feasible. Conversely, the impossibility results offer a theoretical tool for privacy protection. For instance, in social network de-anonymization, they imply that no test can succeed under certain conditions, thus providing a theoretical guarantee of privacy for anonymized networks.
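To get a feel for the rate in Theorem 1, the helper below (our name, constants suppressed, so the numbers indicate scaling only) evaluates the threshold expression for a few correlation levels: stronger correlation requires fewer sampled vertices, down to the √n floor.

```python
import math

def s_threshold(n, rho):
    # Rate from Theorem 1, with constants suppressed:
    # s ≍ ( n log n / log(1/(1 - rho^2))  ∨  n )^{1/2}
    return math.sqrt(max(n * math.log(n) / math.log(1 / (1 - rho**2)), n))

for rho in (0.1, 0.5, 0.9):
    print(rho, round(s_threshold(1000, rho)))   # decreasing in rho
```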
3 1.2 Related Work Graph matching The problem of graph matching refers to finding a correspondence between the nodes of different graphs [CCLS07, LR13]. Recently, there have been many studies on the analysis of matching two correlated random graphs. In addition to the Gaussian Wigner model, another important model is the Erd˝ os-R´ enyi model [ER59], where the edge follows Bernoulli distribution instead of normal distribution. As shown in [CK16, CK17, HM23], some sufficient and necessary conditions for the matching problem in the Erd˝ os-R´ enyi model were provided. The optimal rate for graph matching in the Erd˝ os-R´ enyi model has been established in [WXY22], and the constant was sharpened by analyzing the densest subgraph in [DD23]. There are also many extensions on Gaussian Wigner model and correlated Erd˝ os-R´ enyi graph model, including the inhomogeneous Erd˝ os-R´ enyi model [RS23, DFW23], the partially correlated graphs model [HSY24], the correlated stochastic block model [CDGL24, CDGL25], the multiple correlated graphs model [AH24, AH25], and the correlated random geometric graphs model [WWXY22]. Efficient algorithms and computational hardness There are many algorithms on the cor- relation detection and graph matching problem, including percolation graph matching algorithm [YG13], subgraph matching algorithm [BCL+19], message-passing algorithm [PSSZ22], and spectral algorithm [FMWX23], while some algorithms may be computationally inefficient. There are also many efficient algorithms, based on the different correlation coefficient, including
[BES80, Bol82, DCKG19, GM20, DMWX21, MRT23, DL23, MWXY23, ABT24, DL24, MWXY24, GMS24, MS24]. The low-degree likelihood ratio [HS17, Hop18] has emerged as a framework for studying com- putational hardness in high-dimensional statistical inference. It conjectures that polynomial-time algorithms succeed only in regimes where low-degree statistics succeed. Based on the low-degree conjecture, the recent work by [DDL23, MWXY24] established sufficient conditions for computa- tional hardness results on the recovery and detection problems. 1.3 Contributions and Outlines In this paper, we derive the optimal rate on sample size for correlation detection in the Gaus- sian Wigner model. Specifically, we prove that the optimal sample complexity is of rate s≍ nlogn log(1 /(1−ρ2))∨n1/2 . We also propose a polynomial algorithm that significantly reduces compu- tational cost. In Sections 2 and 3, we prove the possibility results and impossibility results on sample size, respectively. Section 4 introduces our polynomial algorithm for correlation detection. Then, we run some numerical experiments in Section 5 to verify the effectiveness for our algorithm proposed in Section 4. Finally, Section 6 offers further discussion and future research directions, and the appendices contain detailed proofs and additional experimental results. 2 Possibility Results In this section, we prove the possibility results in Theorem 1 by analyzing the error probability P(T< τ)+Q(T ≥τ) under different regimes of ρ, which provides an upper bound for the optimal sample complexity. Given a domain subset S⊆V(G1) and an injective mapping π:S7→V(G2), along with a bivariate function f:R×R7→R, we define the f−similarity graph Hf πas follows. 4 The vertex set of Hf πisV(Hf π) =V(G1), and for each edge e, the weighted edge is defined as βe(Hf π) =( f βe(G1), βπ(e)(G2) ife∈S 2 0 otherwise, (1) where π(e) denotes the edge π(u)π(v) for any edge e=uv. 
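The f-similarity graph construction above can be transcribed directly: edges with both endpoints in S contribute f(βe(G1), βπ(e)(G2)), and all other edges contribute 0. The following is an illustrative sketch (the helper `similarity` and the toy partial map `pi` are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
s = 6
G1 = rng.standard_normal((s, s)); G1 = np.triu(G1, 1) + np.triu(G1, 1).T
G2 = rng.standard_normal((s, s)); G2 = np.triu(G2, 1) + np.triu(G2, 1).T

def similarity(G1, G2, pi, S, f):
    """Sum of f(beta_e(G1), beta_{pi(e)}(G2)) over edges e with both
    endpoints in S; edges outside S contribute 0, as in (1)."""
    total = 0.0
    for i, u in enumerate(S):
        for v in S[i + 1:]:
            total += f(G1[u, v], G2[pi[u], pi[v]])
    return total

pi = {0: 2, 1: 0, 3: 1}      # an injective map defined on S = {0, 1, 3}
score = similarity(G1, G2, pi, [0, 1, 3], lambda x, y: x * y)
print(score)
```

With f(x, y) = xy this is the overlap score used later in Section 2.1.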
Let m≜(1−ϵ)s2 nfor some constant 0< ϵ < 1, and denote Ss,mas the set of injective mappings π:S⊆V(G1)7→V(G2) with |S|=m. Lete Hf π ≜P e∈E(G1)βe Hf π define the sum of weighted edges in Hf π. In fact, the quantity e(Hf π) can be regarded as a similarity score between two graphs. Our test statistic takes the form T(f) = max π∈Ss,me Hf π = max π∈Ss,mX e∈E(G1)βe Hf π . (2) For simplicity, we write TforT(f) when the choice of fis clear from the context. By the detection criteria in Definition 2, it suffices to bound the Type I error Q(T(f)≥τ) and the Type II error P(T(f)< τ) for some appropriate threshold τ. In the following, we outline a general recipe to derive an upper bound for error probabilities. Type I error. Under the null hypothesis H0, the sampled subgraphs G1andG2are independent. Given a bivariate function fand a threshold τ, it should be noted that the distribution of the f−similarity graph Hf πfollows the same distribution for any π∈ Ss,m. Consequently, applying the union bound yields that Q(T ≥τ)≤ |S s,m|Q e Hf π ≥τ . We then bound the tail probability by a standard Chernoff bound Q e Hf π ≥τ ≤exp (−λτ)Eh exp λe Hf πi . See (14) in Appendix B.1 for more details. Type II error.
Under the alternative hypothesis H1, recall that π∗ denotes the latent bijective mapping from V(G1) to V(G2). For the induced subgraphs G1, G2 sampled from G1, G2, we denote the sets of common vertices as

Sπ∗ ≜ V(G1) ∩ (π∗)⁻¹(V(G2)), (3)
Tπ∗ ≜ π∗(V(G1)) ∩ V(G2). (4)

We note that the restriction of π∗ to Sπ∗ is a bijective mapping between Sπ∗ and Tπ∗, and thus |Sπ∗| = |Tπ∗|. In our random sampling model, the vertices of G1 and G2 are independently and identically sampled without replacement from the two graphs G1 and G2, which yields the following lemma regarding the sizes of Sπ∗ and Tπ∗.

Lemma 1. When randomly sampling vertex sets V(G1), V(G2) from V(G1), V(G2) with |V(G1)| = |V(G2)| = s, the size of the common vertex set in (3) follows a hypergeometric distribution HG(n, s, s). Specifically,

P[|Sπ∗| = t] = (s choose t)(n − s choose s − t) / (n choose s), for any t ∈ [s].

We then establish the main ingredients for controlling the Type II error. Under the distribution P, given f and τ,

{T < τ} = {T < τ, |Sπ∗| < m} ∪ {T < τ, |Sπ∗| ≥ m} ⊆ {|Sπ∗| < m} ∪ {T < τ, |Sπ∗| ≥ m}. (5)

Since E[|Sπ∗|] = s²/n > m, the first event {|Sπ∗| < m} can be bounded by the concentration inequality for the hypergeometric distribution in Lemma 6. For the second event, it can be bounded by P(T < τ | |Sπ∗| ≥ m). Under the event {|Sπ∗| ≥ m}, there exists π∗_m ∈ S_{s,m} such that π∗_m = π∗ on its domain set dom(π∗_m). The error probability of the event {T < τ, |Sπ∗| ≥ m} can then be bounded by P(e(H^f_{π∗_m}) < τ | |Sπ∗| ≥ m). We then use a concentration inequality to bound the tail probability. See (17) for more details.

The quantity e(H^f_π) measures the similarity score of a mapping π. Under the null hypothesis, e(H^f_π) has zero mean for all π, whereas under the alternative hypothesis, its mean with π = π∗_m is strictly positive owing to the underlying correlation. We derive concentration inequalities to ensure that e(H^f_{π∗_m}) exceeds the maximum spurious score arising from stochastic fluctuations under the null, as shown in Propositions 1 and 2.
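Lemma 1 is easy to confirm by simulation. The sketch below (our variable names, standard library only) samples the two vertex sets repeatedly and compares the empirical mean and one empirical probability of |Sπ∗| with the HG(n, s, s) values.

```python
import random
from math import comb

random.seed(1)
n, s = 30, 10
pi = list(range(n)); random.shuffle(pi)   # a fixed latent bijection

trials, counts = 20000, []
for _ in range(trials):
    V1 = random.sample(range(n), s)
    V2 = set(random.sample(range(n), s))
    # |S_{pi*}|: sampled vertices of G1 whose image lies in V(G2)
    counts.append(sum(pi[u] in V2 for u in V1))

emp_mean = sum(counts) / trials
hg_mean = s * s / n                       # mean of HG(n, s, s) is s^2/n
pmf3 = comb(s, 3) * comb(n - s, s - 3) / comb(n, s)   # P[|S| = 3] from Lemma 1
emp_pmf3 = counts.count(3) / trials
print(round(emp_mean, 2), round(hg_mean, 2), round(pmf3, 3), round(emp_pmf3, 3))
```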
2.1 Detection by Maximal Overlap Estimator In this subsection, we use the test statistic (2) with f(x, y) =xyfor possibility results. Indeed, this estimator is equivalent to maximizing the overlap on induced subgraphs between G1andG2. Specifically, we have the following Proposition. Proposition 1. There exists a universal constant C1>0such that, for any 0< ρ < 1and τ=m 2ρ 2, ifs2≥C1nlogn ρ2, P(T< τ) +Q(T ≥τ) =o(1). Proposition 1 provides a sufficient condition on strong detection for any 0 < ρ < 1. We refer readers to Appendix B.1 for the detailed proof. Since 1 −TV(P,Q)≤ P(T< τ) +Q(T ≥τ), it achieves the optimal rate in Theorem 1 when ρ= 1−Ω(1). However, the rate is sub-optimal when ρ= 1−o(1). In fact, s= 2 succeeds for detection when ρ= 1 by comparing the difference between all edges. We will use a new estimator in Subsection 2.2 to derive the optimal rate. 2.2 Detection by Minimal Mean-Squared Error Estimator In this subsection, we use the test statistic (2) with f(x, y) =−1 2(x−y)2and focus on the scenario where ρ > 1−e−6. Indeed, this estimator is equivalent to minimizing the mean squared error between the induced
subgraphs of size m in G1 and G2, respectively. Indeed, the expected mean-squared error for a correlated pair, E[(βe(G1) − βπ∗(e)(G2))²], is 2(1 − ρ), while it stays bounded away from 1 for an uncorrelated pair. As a result, the choice of f effectively distinguishes between H0 and H1 under a strong signal condition. We now state the following proposition.

Proposition 2. There exists a universal constant C2 > 0 such that, for any 1 − e⁻⁶ < ρ < 1 and τ = 2(m choose 2)(ρ − 1), if

s² ≥ C2 ( n log n / log(1/(1 − ρ)) ∨ n ),

then P(T < τ) + Q(T ≥ τ) ≤ 0.1. Moreover, if s²/n = ω(1), then P(T < τ) + Q(T ≥ τ) = o(1).

We refer readers to Appendix B.2 for the detailed proof. Proposition 2 provides sufficient conditions on strong and weak detection when ρ is close to 1. This result fills the gap in the optimal rate of s in Proposition 1 when ρ = 1 − o(1).

In view of Propositions 1 and 2, we note that ρ² ≍ log(1/(1 − ρ²)) when 0 < ρ ≤ 1 − e⁻⁶ and that log(1/(1 − ρ)) ≍ log(1/(1 − ρ²)) when 1 − e⁻⁶ < ρ < 1. Then, there exists a universal constant C ≥ C1 ∨ C2 such that

C / log(1/(1 − ρ²)) ≥ C1/ρ² if 0 < ρ ≤ 1 − e⁻⁶, and C / log(1/(1 − ρ²)) ≥ C2 / log(1/(1 − ρ)) if 1 − e⁻⁶ < ρ < 1.

We note that C·n log n/ρ² = C(n log n/ρ² ∨ n) in Proposition 1, thus proving the possibility results in Theorem 1.

Remark 1. The possibility results can be extended to a sub-Gaussian assumption on the weighted edges. The bound on the moment generating function holds under the sub-Gaussian assumption, and consequently the Chernoff bound remains valid. See Remark 4 for more details.

Remark 2. In the previous work [WXY23] on the correlated Gaussian Wigner model, the correlation exists over the entire graph. The maximal-overlap estimator and the minimal mean-squared-error estimator over two graphs are equivalent, since the sum of squares of the weighted edges is fixed. However, in our sampling model, the sums of squares of the weighted edges in the two subgraphs are random variables, and thus the two estimators differ.
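The value 2(1 − ρ) quoted above for a correlated pair, versus 2 for an independent pair, is quick to verify by simulation (an illustrative sketch; variable names are ours):

```python
import math
import random

random.seed(0)
rho, N = 0.9, 200_000

acc_corr = acc_indep = 0.0
for _ in range(N):
    z1, z2, z3 = (random.gauss(0, 1) for _ in range(3))
    y = rho * z1 + math.sqrt(1 - rho**2) * z2   # (z1, y): standard normals, corr rho
    acc_corr += (z1 - y) ** 2
    acc_indep += (z1 - z3) ** 2

print(acc_corr / N, 2 * (1 - rho))   # empirical vs exact 2(1 - rho) = 0.2
print(acc_indep / N)                 # ≈ 2 for an independent pair
```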
Indeed, the Maximum Likelihood Estimator (MLE) is max π∈Ss,|Sπ∗|e Hf π with f(x, y) =−ρ2(x2+y2) + 2ρxy, where f(x, y)≍ρxy when ρ= 1−Ω(1) and f(x, y)≍ −(x−y)2when ρ= 1−o(1). The choice of different estimators reflects the use of MLE under different regimes. See (31) in Appendix C.2 for details. 3 Impossibility Results In this section, we establish the impossibility results for the detection problem, which provides a lower bound on the optimal sample complexity. We first present an overview of the proof. Recall that Sπ∗andTπ∗are the sets of common vertices defined in (3) and (4), respectively. Under our sampling model, there exists a latent mapping between Sπ∗andTπ∗under the hypothesis H1. When equipped with the additional knowledge of the common vertex sets, our problem can be reduced to detection with full observations on smaller correlated Gaussian Wigner model, the detection threshold for which is established in [WXY23]. As shown in Lemma 1, the size of Sπ∗ andTπ∗follows a hypergeometric distribution. Using the concentration inequality (36), the size ofSπ∗satisfies |Sπ∗| ≤(1 + ϵ)E[|Sπ∗|] with high probability. Therefore, the impossibility results from the previous work on full observations remain valid when the number of correlated nodes is substituted with (1 + ϵ)E[|Sπ∗|]. However, such a reduction only proves
tight when the correlation is weak. We will establish the remaining regimes by the conditional second moment method. For notational simplicity, we use TV(P,Q) to denote TV(P(G1, G2),Q(G1, G2)) in this paper. By [Tsy09, Equation 2.27], the total variation distance between PandQcan be upper bounded by the second moment: TV(P,Q)≤s EQP Q2 −1. (6) The likelihood ratio is defined as P(G1, G2) Q(G1, G2)=1 n!X π∈SnP(G1, G2|π) Q(G1, G2), (7) 7 where Sndenotes the set of mappings π:V(G1)7→V(G2) between two original graphs. Note that sometimes certain rare events under Pcan cause the unconditional second moment to explode, while TV(P,Q) remains bounded away from one. To circumvent such catastrophic events, we can compute the second moment conditional on such events. We consider the following event: E≜ (G1, G2, π) :|π(V(G1))∩V(G2)| ≤(1 +ϵ)s2 n . (8) By Lemma 1, the size of common vertex set |π(V(G1))∩V(G2)|follows hypergeometric distribution HG(n, s, s ) under P. In this paper, we define the conditional distribution as P′(G1, G2, π) = P(G1, G2, π|E). By Lemma 6, we have P(E) =o(1) when s=ω n1/2 . Using TV(P,Q)≤ TV(P′,Q)+o(1) and applying (6) on P′andQyields that a sufficient condition for TV(P,Q) =o(1) isEQ P′ Q2 = 1 + o(1). See (25) for more details. 3.1 Weak correlation In this subsection, we show the impossibility results for weak correlation case where 0 < ρ2< n−1/2. Proposition 3. For any 0< ρ2< n−1/2, ifs2≤nlogn 2 log(1 /(1−ρ2)), then TV(P,Q) =o(1). We note that the total variation distance monotonically increases by the sample size s. In view of Proposition 3, we only need to tackle with the situation s2=nlogn 2 log(1 /(1−ρ2)), where s=ω n1/2 since ρ2< n−1/2. Therefore, a sufficient condition for TV(P,Q) = o(1) is TV(P′,Q) = o(1) by the triangle inequality. The proof of TV(P′,Q) =o(1) can be reduced to the lower bound in [WXY23] using a data processing inequality when given the common vertex sets Sπ∗andTπ∗. 
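The second-moment bound (6) can be sanity-checked on a one-dimensional toy pair where both sides have closed forms; this Gaussian example is ours, not from the paper. For P = N(μ, 1) and Q = N(0, 1), TV(P,Q) = 2Φ(μ/2) − 1 and E_Q[(dP/dQ)²] = e^{μ²}.

```python
import math

Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))   # standard normal CDF

for mu in (0.5, 1.0, 2.0):
    tv = 2 * Phi(mu / 2) - 1                   # TV(N(mu,1), N(0,1)), closed form
    bound = math.sqrt(math.exp(mu**2) - 1)     # sqrt(E_Q[(dP/dQ)^2] - 1), as in (6)
    print(mu, round(tv, 4), round(bound, 4))   # tv <= bound in every row
    assert tv <= bound
```

The bound is loose here (as chi-square-type bounds typically are), but it correctly dominates the total variation in every case.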
Under weak correlation, the bottleneck is detecting the existence of the latent mapping $\pi^*$: detection is impossible even with additional knowledge of the locations of the common vertices. The detailed proof of Proposition 3 is deferred to Appendix B.3.

3.2 Strong correlation

In this subsection, we present the impossibility result for the strong correlation case where $n^{-1/2}\le\rho^2<1$. Let $\tilde\pi$ be an independent copy of $\pi$. A key ingredient in the analysis of the conditional second moment is the analysis of $\frac{\mathcal{P}(G_1,G_2\mid\pi)}{\mathcal{Q}(G_1,G_2)}\cdot\frac{\mathcal{P}(G_1,G_2\mid\tilde\pi)}{\mathcal{Q}(G_1,G_2)}$. We refer readers to Appendix B.4 for the details.

We first analyze the terms $\frac{\mathcal{P}(G_1,G_2\mid\pi)}{\mathcal{Q}(G_1,G_2)}$ and $\frac{\mathcal{P}(G_1,G_2\mid\tilde\pi)}{\mathcal{Q}(G_1,G_2)}$. Recall the common vertex sets $S_\pi$ and $T_\pi$ defined in (3) and (4), respectively. For any $e\notin\binom{S_\pi}{2}$ and $e'\notin\binom{T_\pi}{2}$, $\beta_e(G_1)$ and $\beta_{e'}(G_2)$ are independent under $\mathcal{P}$, just as they are under the null hypothesis distribution $\mathcal{Q}$. Therefore, the term $\frac{\mathcal{P}(G_1,G_2\mid\pi)}{\mathcal{Q}(G_1,G_2)}$ can be decomposed into $\prod_{e\in\binom{S_\pi}{2}}\ell(\beta_e(G_1),\beta_{\pi(e)}(G_2))$, where
$$\ell(a,b)\triangleq\frac{\mathcal{P}(\beta_e(G_1)=a,\ \beta_{\pi(e)}(G_2)=b)}{\mathcal{Q}(\beta_e(G_1)=a,\ \beta_{\pi(e)}(G_2)=b)}$$
for any $a,b\in\mathbb{R}$ is the ratio of density functions. We note that there are correlations among $\binom{S_\pi}{2}$, $\binom{S_{\tilde\pi}}{2}$, $\binom{T_\pi}{2}$, and $\binom{T_{\tilde\pi}}{2}$, so that $\frac{\mathcal{P}(G_1,G_2\mid\pi)}{\mathcal{Q}(G_1,G_2)}$ and $\frac{\mathcal{P}(G_1,G_2\mid\tilde\pi)}{\mathcal{Q}(G_1,G_2)}$ are correlated. To deal with this correlation, our main idea is to decompose the edge sets into independent parts.

https://arxiv.org/abs/2505.14138v1

To formally describe all the correlation relationships, we use the correlated functional digraph of two mappings $\pi$ and $\tilde\pi$ between a pair of graphs, introduced in [HSY24].

Definition 3 (Correlated functional digraph). Let $\pi$ and $\tilde\pi$ be two bijective mappings between $V(G_1)$ and $V(G_2)$, and let $S_\pi, T_\pi, S_{\tilde\pi}, T_{\tilde\pi}$ be the common vertex sets defined in (3) and (4). The correlated functional digraph of $\pi$ and $\tilde\pi$ is constructed as follows. Let the vertex set be $\binom{S_\pi}{2}\cup\binom{S_{\tilde\pi}}{2}\cup\binom{T_\pi}{2}\cup\binom{T_{\tilde\pi}}{2}$. We first add an edge $e\mapsto\pi(e)$ for every $e\in\binom{S_\pi}{2}$, and then merge each pair of nodes $(e,\tilde\pi(e))$ for $e\in\binom{S_{\tilde\pi}}{2}$ into one node.

After merging all pairs of nodes, the degree of each vertex in the correlated functional digraph is at most two. Therefore, the connected components of the correlated functional digraph consist of paths and cycles. For example, for a path $(e_1,\pi(e_1),\dots,e_j,\pi(e_j))$, where $e_1,\dots,e_j$ are edges in $G_1$, we have $\tilde\pi(e_2)=\pi(e_1),\dots,\tilde\pi(e_j)=\pi(e_{j-1})$; for a cycle $(e_1,\pi(e_1),\dots,e_j,\pi(e_j))$, we additionally have $\tilde\pi(e_1)=\pi(e_j)$. By decomposing into connected components, the analysis of the edge sets separates into independent parts.

Let $\mathsf{P}$ and $\mathsf{C}$ denote the collections of vertex sets belonging to the connected paths and cycles, respectively. For any $P\in\mathsf{P}$ and $C\in\mathsf{C}$, we define $\ell^\pi_e(G_1,G_2)=\ell(\beta_e(G_1),\beta_{\pi(e)}(G_2))$ and
$$L_P\triangleq\prod_{e\in\binom{S_\pi}{2}\cap P}\ell^\pi_e(G_1,G_2)\prod_{e\in\binom{S_{\tilde\pi}}{2}\cap P}\ell^{\tilde\pi}_e(G_1,G_2),\qquad L_C\triangleq\prod_{e\in\binom{S_\pi}{2}\cap C}\ell^\pi_e(G_1,G_2)\prod_{e\in\binom{S_{\tilde\pi}}{2}\cap C}\ell^{\tilde\pi}_e(G_1,G_2).$$
Note that the sets in $\mathsf{P}$ and $\mathsf{C}$ are disjoint. Consequently, for any $P,P'\in\mathsf{P}$ and $C,C'\in\mathsf{C}$, the variables $L_P$, $L_{P'}$, $L_C$, and $L_{C'}$ are mutually independent. Furthermore, the expectations of $L_P$ and $L_C$ can be derived from the following lemma.

Lemma 2. For any $P\in\mathsf{P}$ and $C\in\mathsf{C}$, we have $\mathbb{E}_{\mathcal{Q}}(L_P)=1$ and $\mathbb{E}_{\mathcal{Q}}(L_C)=\left(\frac{1}{1-\rho^2}\right)^{|C|}$.

By Lemma 2 and the joint independence between different paths and cycles, we have
$$\mathbb{E}_{\mathcal{Q}}\left[\frac{\mathcal{P}(G_1,G_2\mid\pi)}{\mathcal{Q}(G_1,G_2)}\cdot\frac{\mathcal{P}(G_1,G_2\mid\tilde\pi)}{\mathcal{Q}(G_1,G_2)}\right]=\mathbb{E}_{\mathcal{Q}}\left[\prod_{P\in\mathsf{P}}L_P\prod_{C\in\mathsf{C}}L_C\right]=\prod_{C\in\mathsf{C}}\left(\frac{1}{1-\rho^2}\right)^{|C|}. \qquad (9)$$
The cycle set $\mathsf{C}$ plays a key role in the analysis of the conditional second moment.
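The path/cycle structure can be illustrated at the vertex level (the paper works with edge sets $\binom{S_\pi}{2}$, but the decomposition mechanics are identical): components of the partial successor map $\tilde\pi^{-1}\circ\pi$ are paths and cycles, and the union of the cycles plays the role of the core set $I^*$ of (10). The helper name and the toy mappings below are our own illustration, not the paper's code.

```python
def path_cycle_decomposition(pi, pi_tilde):
    """Decompose the functional digraph of two partial injections
    pi, pi_tilde (dicts V(G1) -> V(G2)): the successor of x is
    pi_tilde^{-1}(pi(x)) when defined; components are paths/cycles."""
    inv_tilde = {w: v for v, w in pi_tilde.items()}
    succ = {v: inv_tilde[w] for v, w in pi.items() if w in inv_tilde}
    nodes = set(pi) | set(pi_tilde)
    has_pred = set(succ.values())
    seen, paths, cycles = set(), [], []
    for start in sorted(nodes):          # walks from path starts
        if start in seen or start in has_pred:
            continue
        comp, x = [], start
        while x is not None and x not in seen:
            seen.add(x)
            comp.append(x)
            x = succ.get(x)
        paths.append(comp)
    for start in sorted(nodes):          # leftover nodes lie on cycles
        if start in seen:
            continue
        comp, x = [], start
        while x not in seen:
            seen.add(x)
            comp.append(x)
            x = succ[x]
        cycles.append(comp)
    return paths, cycles

pi = {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
pi_tilde = {1: 'b', 2: 'c', 3: 'a', 5: 'e'}
paths, cycles = path_cycle_decomposition(pi, pi_tilde)
core = set(cycles[0])                    # vertex-level analogue of I*
print(paths, cycles, core)
```

On this toy example the cycle vertices satisfy $\pi(I)=\tilde\pi(I)$ as sets, mirroring (but of course not proving) the characterization of $I^*$ in Lemma 3.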
In order to analyze the properties of $\mathsf{C}$ in depth, for any $\pi$ and $\tilde\pi$, we define the core set as
$$I^*\triangleq I^*(\pi,\tilde\pi)\triangleq\bigcup_{C\in\mathsf{C}}\bigcup_{e\in C}\left(V(e)\cap V(G_1)\right), \qquad (10)$$
where $V(e)$ denotes the two endpoints of the edge $e$. In other words, $I^*$ is the intersection of $V(G_1)$ with the set of all vertices of edges in the cycle set $\mathsf{C}$. The quantity $\prod_{C\in\mathsf{C}}\left(\frac{1}{1-\rho^2}\right)^{|C|}$ depends significantly on $I^*$. We then show the following lemma on the properties of $I^*$.

Lemma 3 (Properties of the core set). For $I^*$ in (10) and any $t\le s$, we have
$$I^*=\mathop{\mathrm{argmax}}_{I\subseteq V(G_1),\ \pi(I)=\tilde\pi(I)}|I|,\qquad \mathbb{P}[|I^*|=t]\le\left(\frac{s}{n}\right)^{2t}.$$

We then establish the following proposition.

Proposition 4. For any $n^{-1/2}\le\rho^2<1$, if $s^2\le\frac{n\log n}{8\log(1/(1-\rho^2))}$, then $\mathrm{TV}(\mathcal{P},\mathcal{Q})=o(1)$.

The detailed proof of Proposition 4 is deferred to Appendix B.4. In the proof, we apply the conditional second moment method with the conditional distribution $\mathcal{P}'=\mathcal{P}(\cdot\mid\mathcal{E})$, where $\mathcal{E}$ is defined in (8). The analysis of the conditional second moment relies heavily on the decomposition of a correlated functional digraph into cycles and paths. By Lemma 2, the conditional second moment reduces to a calculation over the cycles, and the vertex set induced by all cycles is exactly $I^*$. Combining this with the properties of $I^*$ in Lemma 3, we finish the proof of Proposition 4.

In fact, under the strong correlation condition, detecting $\pi^*$ is no longer the bottleneck, and we instead use a more delicate analysis based on the conditional second moment method. By (36) in Lemma 6, there exists $C\le\frac18$ such that, when $s^2\le Cn$, we have $\mathbb{P}[|S_{\pi^*}|=0]\ge 0.9$, which implies that $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$. Specifically, when $s^2=o(n)$, $\mathbb{P}[|S_{\pi^*}|=0]=1-o(1)$, and thus $\mathrm{TV}(\mathcal{P},\mathcal{Q})=o(1)$. Combining this with Propositions 3 and 4, we prove the impossibility results in Theorem 1.

Remark 3. The second moment under our induced subgraph sampling model is equivalent to that on the vertex set induced by $I^*$. When $I^*$ is fixed, it equals the second moment of the correlated Gaussian Wigner model with $\pi: I^*\to\pi(I^*)$. However, $I^*=I^*(\pi,\tilde\pi)$ is a random variable of $\pi$ and $\tilde\pi$, and hence a more thorough analysis of $I^*$ is needed, as shown in Lemma 3.

4 Algorithm

In this section, we present an efficient algorithm for detection. In Theorem 1, we show that the estimator (2) achieves the optimal sample complexity for correlation detection. However, the estimator requires searching over $\mathcal{S}_{s,m}$, with time complexity $\binom{s}{m}^2\cdot m!$, which is prohibitive for large graphs. Next, we propose an efficient algorithm to approximate the estimator in (2).

When full observations of the graphs are given, many efficient algorithms exist for detecting correlation and recovering the graph matching. For instance, it is shown in [MWXY23, MWXY24] that counting trees is an efficient way to detect correlation and recover the graph matching when the correlation coefficient $\rho>\sqrt{\alpha}$, where $\alpha\approx 0.338$ is Otter's constant introduced in [Ott48]. The message-passing algorithm [PSSZ22, GMS24] is also efficient in the Erdős-Rényi model, making substantial use of the local tree structure. Another approach to graph matching is relaxing the original problem to a convex optimization problem [FMWX23].
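To make the cost of the exhaustive search over $\mathcal{S}_{s,m}$ concrete, one can evaluate $\binom{s}{m}^2\,m!$ against the dominant terms $N_1 s^{K_1}+N_2^{K_2}$ of the efficient algorithm at the parameter values later used in Section 5; this back-of-the-envelope count is our own illustration, not the paper's.

```python
from math import comb, factorial

s, m = 25, 12                      # Section 5 parameter values
K1, K2, N1, N2 = 4, 3, 10_000, 500

brute = comb(s, m) ** 2 * factorial(m)   # |S_{s,m}| candidate mappings
efficient = N1 * s ** K1 + N2 ** K2      # dominant terms of Algorithm 1

print(f"{brute:.2e} vs {efficient:.2e}")
```

Even at this modest size, the brute-force search is more than twelve orders of magnitude larger than the clique-based approximation.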
Additionally, there are approaches based on initial seeds [MX20] and iterative methods [DL24]. However, for the partial alignment and partial correlation detection problems, where only parts of the original graphs are given, finding an efficient algorithm becomes more challenging. One approach is to use deep learning techniques [JRA+22, WGJ+23, RNW24]; another is to use low-degree structures such as cliques or trees [SNK18]. In this paper, we propose an algorithm that finds initial seeds by matching cliques and then iteratively constructs the remaining mapping. The three main components of our algorithm are outlined as follows.

• Match the small cliques. Given two graphs $G_1, G_2$, integers $K_1, N_1, N_2$, and a bivariate function $f$, we first randomly pick $N_1$ vertex sets $V_1,\dots,V_{N_1}\subseteq V(G_1)$ with $|V_1|=\dots=|V_{N_1}|=K_1$ and $V_i\ne V_j$ for any $1\le i<j\le N_1$. For any $1\le i\le N_1$, define the injection $\pi_i: V_i\mapsto V(G_2)$ as
$$\pi_i\triangleq\mathop{\mathrm{argmax}}_{\substack{\pi:\,V_i\mapsto V(G_2)\\ \pi\ \text{injection}}}\ \sum_{e\in\binom{V_i}{2}}\beta_e\left(H^f_\pi\right), \qquad (11)$$
where $H^f_{\pi_i}$ is the $f$-similarity graph defined in (1). We then sort the values $\sum_{e\in\binom{V_i}{2}}\beta_e(H^f_{\pi_i})$ in decreasing order and select the top $N_2$ corresponding pairs $(V_i,\pi_i)$. Without loss of generality, we assume that $(V_1,\pi_1),\dots,(V_{N_2},\pi_{N_2})$ are the top $N_2$ pairs.

• Find seeds. Given an integer $K_2$, for any $U\subseteq[N_2]$ with $|U|=K_2$, let $V_U\triangleq\cup_{j\in U}V_j$. We say $U$ is compatible if, for any $v\in V_U$, the values $\pi_j(v)$ are identical for all $j\in U$ such that $v\in V_j$. Let $I(U)$ denote the indicator function of the compatible set $U$. If $I(U)=1$, we define $\pi_U$ as the union of the $\pi_j$ for $j\in U$; specifically, $\pi_U(v)=\pi_j(v)$ for any $v\in V_U$ such that $v\in V_j$. The seed is then defined as
$$\pi_0=\mathop{\mathrm{argmax}}_{\substack{\pi_U:\ I(U)=1,\\ U\subseteq[N_2],\ |U|=K_2}}\ \frac{\sum_{e\in\binom{V_U}{2}}\beta_e\left(H^f_{\pi_U}\right)}{\binom{|V_U|}{2}}, \qquad (12)$$
which maximizes the average similarity score over $U$.

• Iteratively construct mappings. Define the domain set and image set of $\pi_0$ as $S_0$ and $T_0$, respectively, so that $\pi_0: S_0\subseteq V(G_1)\mapsto T_0\subseteq V(G_2)$. Next, we iteratively extend the seed mapping by finding one vertex each from $V(G_1)$ and $V(G_2)$ until $|S_0|=|T_0|=m$. Specifically, given $\pi_0: S_0\mapsto T_0$, let
$$(v_1,v_2)=\mathop{\mathrm{argmax}}_{\substack{v_1\in V(G_1)\setminus S_0\\ v_2\in V(G_2)\setminus T_0}}\ \sum_{v\in S_0}f\left(\beta_{v_1 v}(G_1),\beta_{v_2\pi_0(v)}(G_2)\right).$$
Then, we add the new pair $v_1\mapsto v_2$ to $\pi_0$. This process is repeated, updating $\pi_0$, until $|S_0|=m$.

Finally, we compute the test statistic $\sum_{e\in\binom{S_0}{2}}\beta_e(H^f_{\pi_0})$. $H_0$ is rejected if the test statistic exceeds the given threshold $\tau$; otherwise $H_0$ is accepted. The detailed algorithm is shown in Algorithm 1.

Our algorithm comprises three main steps. In the first step, we select $N_1$ vertex sets $V_1,\dots,V_{N_1}$ of size $K_1$ and search for injections $\pi_i$ from $V_i$ to $V(G_2)$, which requires $O(N_1\cdot s^{K_1})$ time. In the second step, we search over all subsets $U\subseteq[N_2]$ with $|U|=K_2$, which takes $O(N_2^{K_2})$ time. In the third step, we iteratively expand the mapping based on our seeds, which takes $O(m^2 s^2)$ time. We typically choose $N_1\asymp s^{K_1}$ and $K_1\ge 3$, and thus the overall time complexity of the algorithm is $O(N_1\cdot s^{K_1}+N_2^{K_2})$.

Since only a partial correspondence exists between the two graphs under $H_1$, finding the true mapping is challenging. We first use small cliques of size $K_1$ to trade accuracy for computational efficiency, although this often yields many incorrect mappings. To improve accuracy, we then test the compatibility of these small mappings and merge $K_2$ of them to construct a larger, more accurate mapping.
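The compatibility check in the seed-finding step amounts to verifying that the small injections agree wherever their domains overlap. A minimal sketch (the helper name is ours, and we add an extra injectivity check on the merged map, which the paper's definition of $\pi_U$ does not state explicitly):

```python
def merge_if_compatible(mappings):
    """Merge small injections (dicts) into one seed mapping pi_U;
    return None if some vertex receives two different images
    (incompatible U) or if the merged map fails to be injective."""
    seed = {}
    for pi in mappings:
        for v, w in pi.items():
            if seed.get(v, w) != w:
                return None       # conflicting images for v
            seed[v] = w
    if len(set(seed.values())) != len(seed):
        return None               # union is not an injection
    return seed

print(merge_if_compatible([{1: 'a', 2: 'b'}, {2: 'b', 3: 'c'}]))
print(merge_if_compatible([{1: 'a', 2: 'b'}, {2: 'x', 3: 'c'}]))
```

The first call succeeds because the two cliques agree on vertex 2; the second fails because vertex 2 would receive two different images.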
This larger mapping is then used as a seed, and we iteratively enlarge it by adding one pair at a time until its size reaches $m$. This approach significantly reduces the running time compared to directly matching larger cliques.

As for performance, a larger sample size $s$ leads to larger common vertex sets and thus increases the number of correct mappings in Step 1. A larger $K_1$ corresponds to matching larger cliques in the first step, which increases the proportion of correct mappings among the $N_2$ candidate pairs as long as $K_1$ stays below the size of the common vertex sets; choosing $K_1$ beyond this size, however, introduces wrong mappings. In the second step, we search over all $U\subseteq[N_2]$ with $|U|=K_2$ to identify the seeds. While a larger $K_2$ imposes a stricter matching criterion, choosing $K_2$ beyond the number of available correct mappings from Step 1 degrades performance. The accuracy and running time depend on $N_1$, $N_2$, $K_1$, and $K_2$, and there is a trade-off: larger values of these parameters generally improve accuracy but increase the computational cost.

Algorithm 1 Clique-Based Detection Algorithm
1: Input: Two graphs $G_1, G_2$ with $s$ vertices, mapping size $m$, clique size $K_1$, combining size $K_2$, number of cliques $N_1$, number $N_2$, threshold $\tau$.
2: Output: Detection result $H_0$ or $H_1$.
3: Randomly select $N_1$ vertex sets $V_i\subseteq V(G_1)$ with $|V_i|=K_1$, for $i=1,2,\dots,N_1$.
4: For each $V_i$, compute $\pi_i$ according to (11). Then sort the values $\sum_{e\in\binom{V_i}{2}}\beta_e(H^f_{\pi_i})$ in descending order and select the top $N_2$ corresponding pairs $(V_i,\pi_i)$. Without loss of generality, denote these pairs by $(V_1,\pi_1),\dots,(V_{N_2},\pi_{N_2})$.
5: Find the seed mapping $\pi_0: S_0\subseteq V(G_1)\mapsto T_0\subseteq V(G_2)$ according to (12).
6: while $|S_0|<m$ do
7:   for $v_1\in V(G_1)\setminus S_0$ and $v_2\in V(G_2)\setminus T_0$ do
8:     Compute $\sum_{v\in S_0}f\left(\beta_{v_1v}(G_1),\beta_{v_2\pi_0(v)}(G_2)\right)$.
9:   end for
10:  Find the pair $(v_1,v_2)$ attaining the maximal value of $\sum_{v\in S_0}f\left(\beta_{v_1v}(G_1),\beta_{v_2\pi_0(v)}(G_2)\right)$ and add $v_1\mapsto v_2$ to $\pi_0$.
11: end while
12: Compute $\sum_{e\in\binom{S_0}{2}}\beta_e(H^f_{\pi_0})$; output $H_1$ if it exceeds $\tau$, otherwise output $H_0$.

5 Numerical Experiments

In this section, we provide numerical results for Algorithm 1 on synthetic data. To this end, we independently generate 100 pairs of graphs that follow the independent Gaussian Wigner model, and another 100 pairs that follow the correlated Gaussian Wigner model with correlation $\rho$. Fix $n=50$, $s=25$, $\rho=0.99$, $K_1=4$, $K_2=3$, $N_1=10000$, $N_2=500$, and $\epsilon=0.01$; then $m=\left\lfloor\frac{(1-\epsilon)s^2}{n}\right\rfloor=12$. In Figure 1, we plot the histogram of the approximate estimator $\sum_{e\in\binom{S_0}{2}}\beta_e(H^f_{\pi_0})$ defined in Algorithm 1. The histograms under the independent model and the correlated model are well separated, so by picking an appropriate threshold $\tau$, the proposed algorithm succeeds in correlation detection. We note that when $K_1=2$ and $K_2=1$, Algorithm 1 reduces to comparing the pairwise differences of all edges; our approach with $K_1=4$ and $K_2=3$ is more effective than this trivial method.

To compare our test statistic under different settings, we plot Receiver Operating Characteristic (ROC) curves by varying the detection threshold and plotting the true positive rate (one minus the Type II error) against the false positive rate (the Type I error). We also compute the area under the curve (AUC), which can be interpreted as the probability that the test statistic is larger for a pair of correlated graphs than for a pair of independent graphs.
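The AUC interpretation above is exactly the normalized Mann-Whitney statistic, which can be computed directly from the two samples of test statistics; the toy scores below are illustrative, not the paper's data.

```python
def auc(pos, neg):
    """P(test statistic of a correlated pair > that of an independent
    pair), with ties counted as 1/2: the normalized Mann-Whitney U."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

corr = [5.1, 4.2, 6.3, 5.8]    # toy statistics, correlated pairs
indep = [1.0, 2.2, 4.2, 0.5]   # toy statistics, independent pairs
print(auc(corr, indep))        # -> 0.96875
```

An AUC of 0.5 corresponds to indistinguishable score distributions, while an AUC of 1 means the two histograms are fully separated, as in Figure 1.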
In Figure 2, for each plot, we fix $n=50$, $\rho=0.98$, $K_1=4$, $K_2=3$, $N_1=10000$, $N_2=500$, $\epsilon=0.01$, and vary $s\in\{10,20,30,40,50\}$, with $m=\left\lfloor\frac{(1-\epsilon)s^2}{n}\right\rfloor$. We observe that as $s$ increases, the ROC curve moves toward the upper left corner and the AUC increases from 0.52 to 1, indicating an improvement in the performance of our test statistic. Indeed, by Lemma 1, the cardinality of the common set increases as $s$ increases, strengthening the signal and facilitating correlation detection.

Figure 1: Histogram of the approximate test statistic $\sum_{e\in\binom{S_0}{2}}\beta_e(H^f_{\pi_0})$ in Algorithm 1 over 100 pairs of graphs; blue represents the correlated Gaussian Wigner model and green the independent graphs.

Figure 2: Comparison of the ROC curves of the approximate test statistic for different sample sizes $s$ (AUC = 0.52, 0.72, 0.92, 0.97, and 1.00 for $s$ = 10, 20, 30, 40, and 50, respectively).

In Figure 3, for each plot, we fix $n=50$, $s=40$, $K_1=4$, $K_2=3$, $N_1=10000$, $N_2=500$, $\epsilon=0.01$, and vary $\rho\in\{0.95,0.96,0.97,0.98,0.99\}$, with $m=\left\lfloor\frac{(1-\epsilon)s^2}{n}\right\rfloor=31$. We observe that as $\rho$ increases, the ROC curve moves toward the upper left corner and the AUC increases from 0.55 to 1, indicating an improvement in the performance of our test statistic as the correlation strengthens. It follows that correlation detection improves as $s$ and $\rho$ increase.

We also compare our method with the classical Graph Edit Distance (GED) [SF83], a widely used graph similarity measure. When $n=50$, $s=30$, and $\epsilon=0.01$, the AUC values for the GED-based test at $\rho=0.98$, $1-10^{-6}$, and $1-10^{-7}$ are 0.53, 0.73, and 0.88, respectively. In contrast, our algorithm yields significantly higher AUCs of 0.92, 1, and 1 at the same values of $\rho$. These results demonstrate the superior performance of our method in detecting correlation in the Gaussian Wigner model. We provide additional experiments in Appendix E.

Figure 3: Comparison of the ROC curves of the approximate test statistic for different correlation coefficients $\rho$ (AUC = 0.55, 0.64, 0.84, 0.97, and 1.00 for $\rho$ = 0.95, 0.96, 0.97, 0.98, and 0.99, respectively).

6 Future Directions and Discussions

This paper focuses on detecting correlation in the Gaussian Wigner model by sampling two induced subgraphs from the original graphs. We determine the optimal rate of the sample size for correlation detection. In comparison with the detection problem on the fully correlated Gaussian Wigner model, the additional challenge arises from the partial correlation introduced by sampling subgraphs. We provide a detailed analysis of the core set when using the conditional second moment method to derive the impossibility results, and find that the conditional second moment can be reduced to the second moment on the core set.
Additionally, we propose an efficient approximate algorithm for correlation detection based on a clique-matching technique and an iterative approach. Many problems remain to be investigated:

• Extension to the Erdős-Rényi model. Most results in this paper can be extended to the Erdős-Rényi model. The key difference lies in the additional parameter $p$ controlling the edge connection probability. For the possibility results, the estimator is similar to (2), with the bivariate function $f$ selected via MLE under the Erdős-Rényi model. For the impossibility results, the reduction procedure provides tight bounds when $p=n^{-\Omega(1)}$, and a more delicate event is required for the conditional second moment analysis when $p=n^{-o(1)}$, similar to Proposition 4.

• Theoretical analysis of the efficient algorithm. We have shown that Algorithm 1 performs well on synthetic data, while a theoretical guarantee remains an open problem. Such a guarantee would serve as an upper bound for the existence of a polynomial-time algorithm. Moreover, since the tree-counting-based method shows strong performance in the Erdős-Rényi model, it would be interesting to investigate whether it remains effective in Gaussian networks.

• Computational hardness. The low-degree conjecture has recently provided evidence of the computational hardness of related problems (see, e.g., [Hop18, KWB19]). It would be of interest to investigate computational hardness conditions with respect to the sample size for the correlation detection problem using the low-degree conjecture.

• Other graph models. The sample complexity of correlation detection remains unknown for many models (e.g., the stochastic block model, the graphon model). A natural next step is to explore whether our results can be extended to more general settings.

References

[ABT24] Ernesto Araya, Guillaume Braun, and Hemant Tyagi. Seeded graph matching for the correlated Gaussian Wigner model via the projected power method. Journal of Machine Learning Research, 25(5):1–43, 2024.

[AH24] Taha Ameen and Bruce Hajek. Robust graph matching when nodes are corrupt. In Proceedings of the 41st International Conference on Machine Learning, volume 235, pages 1276–1305. PMLR, 2024.

[AH25] Taha Ameen and Bruce Hajek. Detecting correlation between multiple unlabeled Gaussian networks. arXiv preprint arXiv:2504.16279, 2025.

[BBM05] Alexander C. Berg, Tamara L. Berg, and Jitendra Malik. Shape matching and object recognition using low distortion correspondences. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 26–33. IEEE, 2005.

[BCL+19] Boaz Barak, Chi-Ning Chou, Zhixian Lei, Tselil Schramm, and Yueqi Sheng. (Nearly) efficient algorithms for the graph matching problem on correlated random graphs. Advances in Neural Information Processing Systems, 32, 2019.

[BES80] László Babai, Paul Erdős, and Stanley M. Selkow. Random graph isomorphism. SIAM Journal on Computing, 9(3):628–635, 1980.

[Bol82] Béla Bollobás. Distinguishing vertices of random graphs. In North-Holland Mathematics Studies, volume 62, pages 33–49. Elsevier, 1982.

[CCLS07] Tiberio S. Caetano, Li Cheng, Quoc V. Le, and Alex J. Smola. Learning graph matching. In 2007 IEEE 11th International Conference on Computer Vision, pages 1–8, 2007.

[CDGL24] Guanyi Chen, Jian Ding, Shuyang Gong, and Zhangsong Li. A computational transition for detecting correlated stochastic block models by low-degree polynomials. arXiv preprint arXiv:2409.00966, 2024.

[CDGL25] Guanyi Chen, Jian Ding, Shuyang Gong, and Zhangsong Li. Detecting correlation efficiently in stochastic block models: breaking Otter's threshold by counting decorated trees. arXiv preprint arXiv:2503.06464, 2025.

[CK16] Daniel Cullina and Negar Kiyavash. Improved achievability and converse bounds for Erdős-Rényi graph matching. ACM SIGMETRICS Performance Evaluation Review, 44(1):63–72, 2016.

[CK17] Daniel Cullina and Negar Kiyavash. Exact alignment recovery for correlated Erdős-Rényi graphs. arXiv preprint arXiv:1711.06783, 2017.

[Dav79] P. J. Davis. Circulant Matrices. Wiley, 1979.

[DCKG19] Osman Emre Dai, Daniel Cullina, Negar Kiyavash, and Matthias Grossglauser. Analysis of a canonical labeling algorithm for the alignment of correlated Erdős-Rényi graphs. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3(2):1–25, 2019.

[DD23] Jian Ding and Hang Du. Matching recovery threshold for correlated random graphs. The Annals of Statistics, 51(4):1718–1743, 2023.

[DDL23] Jian Ding, Hang Du, and Zhangsong Li. Low-degree hardness of detection for correlated Erdős-Rényi graphs. arXiv preprint arXiv:2311.15931, 2023.

[DFW23] Jian Ding, Yumou Fei, and Yuanzheng Wang. Efficiently matching random inhomogeneous graphs via degree profiles. arXiv preprint arXiv:2310.10441, 2023.

[DL23] Jian Ding and Zhangsong Li. A polynomial-time iterative algorithm for random graph matching with non-vanishing correlation. arXiv preprint arXiv:2306.00266, 2023.

[DL24] Jian Ding and Zhangsong Li. A polynomial time iterative algorithm for matching Gaussian matrices with non-vanishing correlation. Foundations of Computational Mathematics, pages 1–58, 2024.

[DMWX21] Jian Ding, Zongming Ma, Yihong Wu, and Jiaming Xu. Efficient random graph matching via degree profiles. Probability Theory and Related Fields, 179:29–115, 2021.

[ER59] Paul Erdős and Alfréd Rényi. On random graphs I. Publicationes Mathematicae (Debrecen), 6:290–297, 1959.

[FF79] S. C. Freeman and L. C. Freeman. The Networkers Network: A Study of the Impact of a New Communications Medium on Sociometric Structure. Social Sciences Research Reports. School of Social Sciences, University of California, 1979.

[FMWX23] Zhou Fan, Cheng Mao, Yihong Wu, and Jiaming Xu. Spectral graph matching and regularized quadratic relaxations I: The Gaussian model. Foundations of Computational Mathematics, 23(5):1511–1565, 2023.

[Gho21] Malay Ghosh. Exponential tail bounds for chisquared random variables. Journal of Statistical Theory and Practice, 15(2):35, 2021.

[GM20] Luca Ganassali and Laurent Massoulié. From tree matching to sparse graph alignment. In Conference on Learning Theory, pages 1633–1665. PMLR, 2020.

[GMS24] Luca Ganassali, Laurent Massoulié, and Guilhem Semerjian. Statistical limits of correlation detection in trees. The Annals of Applied Probability, 34(4):3701–3734, 2024.

[HL13] Pili Hu and Wing Cheong Lau. A survey and taxonomy of graph sampling. arXiv preprint arXiv:1308.5865, 2013.

[HM23] Georgina Hall and Laurent Massoulié. Partial recovery in the graph alignment problem. Operations Research, 71(1):259–272, 2023.

[HNM05] Aria Haghighi, Andrew Y. Ng, and Christopher D. Manning. Robust textual inference via graph matching. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 387–394, 2005.
[Hoe94] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. The Collected Works of Wassily Hoeffding, pages 409–426, 1994.

[Hop18] Samuel Hopkins. Statistical Inference and the Sum of Squares Method. PhD thesis, Cornell University, 2018.

[HR07] Thad Hughes and Daniel Ramage. Lexical semantic relatedness with random graph walks. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 581–589, 2007.

[HS17] Samuel B. Hopkins and David Steurer. Efficient Bayesian estimation from few samples: community detection and related problems. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 379–390. IEEE, 2017.

[HSY24] Dong Huang, Xianwen Song, and Pengkun Yang. Information-theoretic thresholds for the alignments of partially correlated graphs. In Conference on Learning Theory, pages 2494–2518. PMLR, 2024.

[HW71] David Lee Hanson and Farroll Tim Wright. A bound on tail probabilities for quadratic forms in independent random variables. The Annals of Mathematical Statistics, 42(3):1079–1083, 1971.

[JRA+22] Zheheng Jiang, Hossein Rahmani, Plamen Angelov, Sue Black, and Bryan M. Williams. Graph-context attention networks for size-varied deep graph matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2343–2352, 2022.

[KWB19] Dmitriy Kunisky, Alexander S. Wein, and Afonso S. Bandeira. Notes on computational hardness of hypothesis testing: Predictions using the low-degree likelihood ratio. In ISAAC Congress (International Society for Analysis, its Applications and Computation), pages 1–50. Springer, 2019.

[LF06] Jure Leskovec and Christos Faloutsos. Sampling from large graphs. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 631–636, 2006.

[LR13] Lorenzo Livi and Antonello Rizzi. The graph matching problem. Pattern Analysis and Applications, 16:253–283, 2013.

[MHK+08] Diana Mateus, Radu Horaud, David Knossow, Fabio Cuzzolin, and Edmond Boyer. Articulated shape matching using Laplacian eigenfunctions and unsupervised point registration. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.

[MRT23] Cheng Mao, Mark Rudelson, and Konstantin Tikhomirov. Exact matching of random graphs with constant correlation. Probability Theory and Related Fields, 186(1-2):327–389, 2023.

[MS24] Andrea Muratori and Guilhem Semerjian. Faster algorithms for the alignment of sparse correlated Erdős-Rényi random graphs. arXiv preprint arXiv:2405.08421, 2024.

[MU05] Michael Mitzenmacher and Eli Upfal. Probability and Computing: An Introduction to Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.

[MWXY23] Cheng Mao, Yihong Wu, Jiaming Xu, and Sophie H. Yu. Random graph matching at Otter's threshold via counting chandeliers. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1345–1356, 2023.

[MWXY24] Cheng Mao, Yihong Wu, Jiaming Xu, and Sophie H. Yu. Testing network correlation efficiently via counting trees. The Annals of Statistics, 52(6):2483–2505, 2024.

[MX20] Elchanan Mossel and Jiaming Xu. Seeded graph matching via large neighborhood statistics. Random Structures & Algorithms, 57(3):570–611, 2020.

[Ott48] Richard Otter. The number of trees. Annals of Mathematics, pages 583–599, 1948.

[PDK11] Manos Papagelis, Gautam Das, and Nick Koudas. Sampling online social networks.
IEEE Transactions on Knowledge and Data Engineering, 25(3):662–676, 2011.

[PSSZ22] Giovanni Piccioli, Guilhem Semerjian, Gabriele Sicuro, and Lenka Zdeborová. Aligning random graphs with a sub-tree similarity message-passing algorithm. Journal of Statistical Mechanics: Theory and Experiment, 2022(6):063401, 2022.

[PW25] Yury Polyanskiy and Yihong Wu. Information Theory: From Coding to Learning. Cambridge University Press, 2025.

[RNW24] Gathika Ratnayaka, James Nichols, and Qing Wang. Optimal partial graph matching. arXiv preprint arXiv:2410.16718, 2024.

[RS23] Miklós Z. Rácz and Anirudh Sridhar. Matching correlated inhomogeneous random graphs using the k-core estimator. In 2023 IEEE International Symposium on Information Theory (ISIT), pages 2499–2504. IEEE, 2023.

[SF83] Alberto Sanfeliu and King-Sun Fu. A distance measure between attributed relational graphs for pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3):353–362, 1983.

[SNK18] Charu Sharma, Deepak Nathani, and Manohar Kaul. Solving partial assignment problems using random clique complexes. In International Conference on Machine Learning, pages 4586–4595. PMLR, 2018.

[SWM05] Michael P. H. Stumpf, Carsten Wiuf, and Robert M. May. Subnets of scale-free networks are not scale-free: sampling properties of networks. Proceedings of the National Academy of Sciences, 102(12):4221–4224, 2005.

[SXB08] Rohit Singh, Jinbo Xu, and Bonnie Berger. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proceedings of the National Academy of Sciences, 105(35):12763–12768, 2008.

[Tsy09] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer Verlag, 2009.

[VCL+15] Joshua T. Vogelstein, John M. Conroy, Vince Lyzinski, Louis J. Podrazik, Steven G. Kratzer, Eric T. Harley, Donniell E. Fishkind, R. Jacob Vogelstein, and Carey E. Priebe. Fast approximate quadratic programming for graph matching. PLOS ONE, 10(4):1–17, 2015.

[WCA+16] Yanhong Wu, Nan Cao, Daniel Archambault, Qiaomu Shen, Huamin Qu, and Weiwei Cui. Evaluation of graph sampling: A visualization perspective. IEEE Transactions on Visualization and Computer Graphics, 23(1):401–410, 2016.

[WGJ+23] Runzhong Wang, Ziao Guo, Shaofei Jiang, Xiaokang Yang, and Junchi Yan. Deep learning of partial graph matching via differentiable top-k. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6272–6281, 2023.

[WWXY22] Haoyu Wang, Yihong Wu, Jiaming Xu, and Israel Yolou. Random graph matching in geometric models: the case of complete graphs. In Conference on Learning Theory, pages 3441–3488. PMLR, 2022.

[WXY22] Yihong Wu, Jiaming Xu, and Sophie H. Yu. Settling the sharp reconstruction thresholds of random graph matching. IEEE Transactions on Information Theory, 68(8):5391–5417, 2022.

[WXY23] Yihong Wu, Jiaming Xu, and Sophie H. Yu. Testing correlation of unlabeled random graphs. The Annals of Applied Probability, 33(4):2519–2558, 2023.

[YG13] Lyudmila Yartseva and Matthias Grossglauser. On the performance of percolation graph matching. In Proceedings of the First ACM Conference on Online Social Networks, pages 119–130, 2013.

A Proof of Theorem 1

For the possibility results, by Propositions 1 and 2, if
$$s^2\ge\begin{cases}C_1\frac{n\log n}{\rho^2}, & 0<\rho\le 1-e^{-6},\\[2pt] C_2\left(\frac{n\log n}{\log(1/(1-\rho))}\vee n\right), & 1-e^{-6}<\rho<1,\end{cases}$$
then $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$. Furthermore, if $s^2=\omega(n)$, then $\mathrm{TV}(\mathcal{P},\mathcal{Q})=o(1)$. Since
$$\frac{\log(1/(1-\rho^2))}{\rho^2}\le\frac{\rho^2/(1-\rho^2)}{\rho^2}=\frac{1}{1-\rho^2}\le\frac{1}{1-(1-e^{-6})^2}$$
for any $0<\rho\le 1-e^{-6}$, we obtain that $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$ when $s^2\ge\frac{C_1}{1-(1-e^{-6})^2}\cdot\frac{n\log n}{\log(1/(1-\rho^2))}$.
Since $\frac{\log(1/(1-\rho^2))}{\log(1/(1-\rho))}=1+\frac{\log(1/(1+\rho))}{\log(1/(1-\rho))}\le 2$ for any $1-e^{-6}<\rho<1$, it follows that $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$ when $s^2\ge 2C_2\left(\frac{n\log n}{\log(1/(1-\rho^2))}\vee n\right)$. Let $C=\frac{C_1}{1-(1-e^{-6})^2}\vee 2C_2$. Then, for $s^2\ge C\left(\frac{n\log n}{\log(1/(1-\rho^2))}\vee n\right)$, we have $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$.

For the impossibility results, by Propositions 3 and 4, if $s^2\le\frac{n\log n}{8\log(1/(1-\rho^2))}$, then $\mathrm{TV}(\mathcal{P},\mathcal{Q})=o(1)$. According to the concentration inequality (36) for the hypergeometric distribution in Lemma 6, there exists a constant $C\le\frac18$ such that, when $s^2\le Cn$, we have $\mathbb{P}[|S_{\pi^*}|\ge 1]\le 0.1$, implying $\mathbb{P}[|S_{\pi^*}|=0]\ge 0.9$ and thus $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$. Additionally, when $s^2=o(n)$, we have $\mathbb{P}[|S_{\pi^*}|=0]=1-o(1)$, which implies $\mathrm{TV}(\mathcal{P},\mathcal{Q})=o(1)$. Therefore, if $s^2\le C\left(\frac{n\log n}{\log(1/(1-\rho^2))}\vee n\right)$, then $\mathrm{TV}(\mathcal{P},\mathcal{Q})\le 0.1$. Moreover, if $s^2\le C\,\frac{n\log n}{\log(1/(1-\rho^2))}$ or $s^2=o(n)$, we have $\mathrm{TV}(\mathcal{P},\mathcal{Q})=o(1)$. This concludes the proof of Theorem 1.

B Proofs of Propositions

B.1 Proof of Proposition 1

We first upper bound $\mathcal{Q}(\mathcal{T}\ge\tau)$ under the null hypothesis $H_0$ by the Chernoff bound and the union bound. For any $X,Y\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$ and $\lambda\in(0,1)$, we have
$$\mathbb{E}[\exp(\lambda XY)]=\iint\frac{1}{2\pi}\exp(\lambda xy)\exp\left(-\frac{x^2+y^2}{2}\right)dx\,dy=\iint\frac{1}{2\pi}\exp\left(-\frac12(x-\lambda y)^2\right)\exp\left(-\frac12(1-\lambda^2)y^2\right)dx\,dy=\iint\frac{1}{2\pi}\exp\left(-\frac{z^2}{2}\right)\exp\left(-\frac12(1-\lambda^2)y^2\right)dz\,dy=\frac{1}{\sqrt{1-\lambda^2}}. \qquad (13)$$

Let $\lambda=\frac\rho2$. Recall that $\mathcal{S}_{s,m}$ denotes the set of injective mappings $\pi: S\subseteq V(G_1)\mapsto V(G_2)$ with $|S|=m$. For any $\pi\in\mathcal{S}_{s,m}$, $e(H^f_\pi)\sim\sum_{i=1}^{\binom m2}A_iB_i$, where $(A_i,B_i)$ are independent and identically distributed pairs of standard normals with correlation coefficient $\rho$. Then, by the Chernoff bound,
$$\mathcal{Q}\left[e(H^f_\pi)\ge\tau\right]\le\exp(-\lambda\tau)\,\mathbb{E}\left[\exp\left(\lambda\,e(H^f_\pi)\right)\right] \qquad (14)$$
$$=\exp(-\lambda\tau)\,\mathbb{E}\left[\prod_{i=1}^{\binom m2}\exp(\lambda A_iB_i)\right]\overset{(a)}{\le}\exp\left(-\lambda\binom m2\frac\rho2-\frac12\binom m2\log(1-\lambda^2)\right)=\exp\left(-\binom m2\left(\frac{\rho^2}{4}+\frac12\log\left(1-\frac{\rho^2}{4}\right)\right)\right)\overset{(b)}{\le}\exp\left(-\frac{1}{12}\binom m2\rho^2\right), \qquad (15)$$
where (a) is because $\mathbb{E}[\exp(\lambda A_iB_i)]=\frac{1}{\sqrt{1-\lambda^2}}$ for any $1\le i\le\binom m2$; (b) follows from $\log(1-x)\ge-\frac43 x$ for $x=\frac{\rho^2}{4}\in\left[0,\frac14\right]$. Applying the union bound, we obtain that
$$\mathcal{Q}(\mathcal{T}\ge\tau)\le|\mathcal{S}_{s,m}|\,\mathcal{Q}\left[e(H^f_\pi)\ge\tau\right]\overset{(a)}{\le}\binom sm^2 m!\,\exp\left(-\frac1{12}\binom m2\rho^2\right)\overset{(b)}{\le}\exp\left(m\log\frac{en}{1-\epsilon}-\frac1{12}\binom m2\rho^2\right),$$
where (a) is because $|\mathcal{S}_{s,m}|=\binom sm^2 m!$ and (15); (b) is because $\binom sm m!\le s^m$, $\binom sm\le\left(\frac{es}{m}\right)^m$, and $m=\frac{(1-\epsilon)s^2}{n}$. Consequently, when $m-1\ge\frac{24(1+\epsilon)\log(\frac{en}{1-\epsilon})}{\rho^2}$, we have $\mathcal{Q}(\mathcal{T}\ge\tau)\le\exp\left(-\epsilon m\log\frac{en}{1+\epsilon}\right)=o(1)$.

We then upper bound $\mathcal{P}(\mathcal{T}<\tau)$ under the alternative hypothesis $H_1$. We note that
$$\mathcal{P}(\mathcal{T}<\tau)\overset{(a)}{\le}\mathcal{P}(|S_{\pi^*}|<m)+\mathcal{P}(\mathcal{T}<\tau,\ |S_{\pi^*}|\ge m)\overset{(b)}{\le}\mathcal{P}(|S_{\pi^*}|<m)+\mathcal{P}\left(\mathcal{T}<\tau\,\middle|\,|S_{\pi^*}|\ge m\right)\overset{(c)}{\le}\mathcal{P}(|S_{\pi^*}|<m)+\mathcal{P}\left(e(H^f_{\pi^*_m})<\tau\,\middle|\,|S_{\pi^*}|\ge m\right) \qquad (16)$$
$$\overset{(d)}{\le}\exp\left(-\frac{\epsilon^2s^2}{2n}\right)+\exp\left(-\binom m2\frac{\rho^2}{4c_0^2}\right)+\exp\left(-\binom m2\frac{\rho}{2c_0}\right), \qquad (17)$$
where (a) follows from (5); (b) is because $\mathcal{P}(\mathcal{T}<\tau,\ |S_{\pi^*}|\ge m)\le\frac{\mathcal{P}(\mathcal{T}<\tau,\ |S_{\pi^*}|\ge m)}{\mathcal{P}(|S_{\pi^*}|\ge m)}=\mathcal{P}(\mathcal{T}<\tau\mid|S_{\pi^*}|\ge m)$; (c) is because under the event $|S_{\pi^*}|\ge m$ there exists $\pi^*_m\in\mathcal{S}_{s,m}$ such that $\pi^*_m=\pi^*$ on its domain set $\mathrm{dom}(\pi^*_m)$; (d) uses the concentration inequality (37) for the hypergeometric distribution and the Hanson-Wright inequality in Lemma 4 with $M_0=I_{\binom m2}$ and $\delta=\exp\left(-\binom m2\left(\frac{\rho^2}{4c_0^2}\wedge\frac{\rho}{2c_0}\right)\right)$, where $c_0$ is the universal constant in Lemma 4. Consequently, we obtain that $\mathcal{P}(\mathcal{T}<\tau)=o(1)$ when $m-1\ge\frac{24(1+\epsilon)\log(\frac{en}{1-\epsilon})}{\rho^2}$. Let $C_1=25$. Then, we have $\mathcal{P}(\mathcal{T}<\tau)+\mathcal{Q}(\mathcal{T}\ge\tau)=o(1)$ when $\frac{s^2}{n}-1\ge\frac{25\log n}{\rho^2}$ as $n$ becomes sufficiently large.

B.2 Proof of Proposition 2

We first upper bound $\mathcal{Q}(\mathcal{T}\ge\tau)$ under the null hypothesis $H_0$. We note that for any $X,Y\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$ and $\lambda>0$, we have
$$\mathbb{E}\left[\exp\left(-\frac\lambda2(X-Y)^2\right)\right]=\iint\frac1{2\pi}\exp\left(-\frac\lambda2(x-y)^2\right)\exp\left(-\frac{x^2+y^2}{2}\right)dx\,dy=\iint\frac1{2\pi}\exp\left(-\frac{\lambda+1}{2}\left(x-\frac{\lambda}{\lambda+1}y\right)^2\right)\exp\left(-\frac{2\lambda+1}{2(\lambda+1)}y^2\right)dx\,dy=\iint\frac1{2\pi}\exp\left(-\frac{\lambda+1}{2}z^2\right)\exp\left(-\frac{2\lambda+1}{2(\lambda+1)}y^2\right)dz\,dy=\frac1{\sqrt{1+2\lambda}}. \qquad (18)$$
Let $\lambda=\frac{1}{4(1-\rho)}-\frac12$. Then, we have $1+2\lambda=\frac{1}{2(1-\rho)}$. Since $1-e^{-6}<\rho<1$, we also have $\lambda>0$.
Recall that Ss,mdenotes the set of injective mappings π:S⊆V(G1)7→V(G2) with |S|=m. For any π∈ Ss,m,e Hf π ∼P(m 2) i=1−1 2(Ai−Bi)2, where ( Ai, Bi) are independent and identically distributed pairs of standard normals with correlation coefficient ρ. Then, by the Chernoff bound, Q e Hf π ≥τ ≤exp (−λτ)Eh exp λe Hf πi (a)= expm 2 2(1−ρ)λ−1 2log (1 + 2 λ) = expm 21 2−(1−ρ)−1 2log1 2(1−ρ) (b) ≤exp −log(1/(1−ρ)) 3m 2 , (19) where (a) follows from (18); (b) is because ρ >1−e−6implies that1 2−(1−ρ)−1 2log 1 2(1−ρ) ≤ 1−1 6log 1 1−ρ
−1 3log 1 1−ρ ≤ −1 3log 1 1−ρ . Then, applying the union bound yields that Q(T ≥τ)≤ |S s,m|Qh e Hf π ≥τi ≤exp mlogen 1−ϵ −1 3log1 1−ρm 2 , 21 where the last inequality is because |Ss,m|=s m2m!≤es mmsm= en 1−ϵm . Therefore, when m−1≥6(1+ϵ) log(en 1+ϵ) log(1 /(1−ρ)), we have Q(T ≥τ)≤exp −ϵmlog en 1−ϵ . We then upper bound P(T< τ) under the alternative hypothesis H1. By (16), we have that P(T< τ)≤ P(|Sπ∗|< m) +P e Hf π∗m < τ |Sπ∗|> m ≤exp −ϵ2s2 2n + exp −1 2m 2 (1−log 2) , where the last inequality follows from (37) in Lemma 6 and−e Hf π∗m 1−ρ∼χ2m 2 , and the concentra- tion inequality for chi-square distribution (34) in Lemma 5. Since m=(1−ϵ)s2 n, there exists a univer- sal constant c2>0 such that, whens2 n≥c2, we have P(T< τ)≤0.05. Specifically, P(T< τ) = o(1) when s2/n=ω(1). Since Q(T ≥ τ) =o(1) when m−1≥6(1+ϵ) log(en 1+ϵ) log(1 /(1−ρ)), there exists a universal constant C2such that, when s2≥C2 nlogn log(1 /(1−ρ))∨n ,P(T< τ) +Q(T ≥τ)≤0.1. Specifically, when s2/n=ω(1), we have P(T< τ) =o(1), and thus P(T< τ)+Q(T ≥τ) =o(1). Remark 4. The Gaussian assumption on the weighted edges for βe(G1) and βe(G2) in Proposi- tions 1 and 2 can be extended to the sub-Gaussian assumption. The main ingredients of our proof in these two Propositions are the analysis of the tail bound for the Gaussian distribution. We compute the Moment Generating Function (MGF) and use a standard Chernoff bound to bound Q(T ≥τ). Indeed, if we relax the distribution assumption to a sub-Gaussian distribution, since the linear sum of sub-Gaussian random variables remains a sub-Gaussian random variable, the MGF in (18) can be approximated by E[exp ( λXY )]≤1√ 1−c1λ2for some constant c1∈R. Since the product of sub-Gaussian random variables is a sub-exponential random variable, the MGF in (18) can be approximated by Eh exp −λ 2(X−Y)2i ≤1√1+c2λfor some constant c2∈R. For P(T< τ), the tail bound holds for sub-Gaussian as well. 
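Step (b) of (19) reduces, after substituting u = 1 − ρ, to the elementary inequality 1/2 − u + (1/2)log(2u) ≤ (1/3)log u for u < e⁻⁶: the left side is below 1/2 + (1/2)log 2 < 1, while (1/6)log(1/u) > 1 in that range. A quick numerical spot check (ours, not the paper's):

```python
import math

# step (b) of (19): with u = 1 - rho < e^{-6},
#   1/2 - u + (1/2)*log(2u) <= (1/3)*log(u)
for u in (0.999 * math.exp(-6), 1e-4, 1e-8, 1e-15):
    lhs = 0.5 - u + 0.5 * math.log(2 * u)
    rhs = math.log(u) / 3
    assert lhs <= rhs
```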
B.3 Proof of Proposition 3 For any S⊆V(G1) and T⊆V(G2) with |S|=|T|, define P(G1, G2, S, T ) =˜P(G1[S], G2[T])Y e/∈(S 2)Q0(βe(G1))Y e/∈(T 2)Q0(βe(G2)), (20) Q(G1, G2, S, T ) =Q(G1[S], G2[T])Y e/∈(S 2)Q0(βe(G1))Y e/∈(T 2)Q0(βe(G2)), (21) where G[S] for any S⊆V(G) denotes the induced subgraph with vertex set SofG;˜Pdenotes the distribution of two random graphs follow fully correlated Gaussian Wigner model; Q0denotes the standard normal distribution. Recall that Sπ∗=V(G1)∩(π∗)−1(V(G2)) and Tπ∗=π∗(V(G1))∩ V(G2). Indeed, P(G1, G2, S, T ) denotes the distribution under Pwhen given Sπ∗=SandTπ∗=T. Besides, Q(G1, G2|S, T) and Q(G1, G2) are the same distribution for any S⊆V(G1), T⊆V(G2) with|S|=|T|. Since P(·|E) =P(·,E) P(E)=P(1+ϵ)s2/n i=0P(|Sπ∗|=i)P(·||Sπ∗|=i) P(E)andTV(P iλiPi,Q)≤P iλiTV(Pi,Q) 22 whenP iλi= 1, we obtain TV P′(G1, G2),Q(G1, G2) ≤(1+ϵ)s2 nX i=0P(|Sπ∗|=i) P(E)·TV P G1, G2 |Sπ∗|=i ,Q(G1, G2) . (22) For any 0 ≤i≤(1+ϵ)s2 nandS⊆V(G1), T⊆V(G2) with |S|=|T|=i, by the data processing inequality (see, e.g., [PW25, Section 3.5]), we have TV P G1, G2 |Sπ∗|=i ,Q(G1, G2) ≤TV(P(G1, G2, S, T ),Q(G1, G2, S, T )) =TV ˜P(G1[S], G2[T]),Q(G1[S], G2[T]) , (23) where
the last equality follows from (20), (21) and the fact that TV(X⊗Z, Y⊗Z) =TV(X, Y) for any distributions X, Y, Z such that Zis independent with XandY. For the random graphs G1[S] and G2[T] with S⊆V(G1), T⊆V(G2), and |S|=|T|, they follow the correlated Gaussian Wigner model with node set size |S|under ˜P, while they are independent under Q. It follows from [WXY23, Theorem 1] that, when|S| log|S|≤2 ρ2, the total variation distance TV ˜P(G1[S], G2[T]),Q(G1[S], G2[T]) =o(1). We then verify the condition|S| log|S|≤2 ρ2for any 0≤ |S| ≤(1+ϵ)s2 n. In fact, since s2≤nlogn 2 log(1 /(1−ρ2)), we have |S| ≤(1 +ϵ)s2 n≤(1 +ϵ) logn 2 log (1 /(1−ρ2))≤2 log 1/ρ2 ρ2, where the last inequality follows from log 1/(1−ρ2) ≥ρ2,logn 2<log 1/ρ2 andϵ <1. Therefore, we obtain|S| log|S|≤2 ρ2log(1/ρ2) log(1 /ρ2)+log(2 log(1 /ρ2))≤2 ρ2, and thus TV ˜P(G1[S], G2[T]),Q(G1[S], G2[T]) =o(1) for any S⊆V(G1), T⊆V(G2) with |S|=|T| ≤(1+ϵ)s2 n. Combining this with (22) and (23), we conclude that TV P′(G1, G2),Q(G1, G2) ≤(1+ϵ)s2 nX i=0P(|Sπ∗|=i) P(E)·TV P G1, G2 |Sπ∗|=i ,Q(G1, G2) ≤(1+ϵ)s2 nX i=0P(|Sπ∗|=i) P(E)·o(1) = o(1). Therefore, TV(P(G1, G2),Q(G1, G2))(a) ≤TV P(G1, G2),P′(G1, G2) +TV P′(G1, G2),Q(G1, G2) (b) ≤TV P(G1, G2, π),P′(G1, G2, π) +TV P′(G1, G2),Q(G1, G2) =P((G1, G2, π)/∈ E) +TV P′(G1, G2),Q(G1, G2) =o(1),(24) where (a) follows from the triangle inequality and (b) is derived by the data processing inequality (see, e.g., [PW25, Section 3.5]). 23 B.4 Proof of Proposition 4 Recall that the conditional distribution is defined as P′(G1, G2, π) =P(G1, G2, π) 1(G1,G2,π)∈E P(E)= (1 + o(1))P(G1, G2, π) 1(G1,G2,π)∈E, where the last inequality holds because P(E) = 1−o(1). By (6) and (24), we have the following sufficient condition for the impossibility results: EQ"P′(G1, G2) Q(G1, G2)2# = 1 + o(1)⇒TV P′,Q =o(1)⇒TV(P,Q) =o(1). (25) Recall the likelihood ratio in (7). 
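The decomposition (22) uses convexity of total variation, TV(Σᵢ λᵢPᵢ, Q) ≤ Σᵢ λᵢ TV(Pᵢ, Q). This can be checked on a toy discrete example (the distributions below are chosen arbitrarily by us for illustration):

```python
# convexity of total variation on a 3-point space
def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

P1, P2, Q = [0.7, 0.2, 0.1], [0.1, 0.3, 0.6], [0.3, 0.4, 0.3]
lam = (0.4, 0.6)
mix = [lam[0] * a + lam[1] * b for a, b in zip(P1, P2)]
assert tv(mix, Q) <= lam[0] * tv(P1, Q) + lam[1] * tv(P2, Q) + 1e-12
```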
To compute the conditional second moment, we introduce an independent copy ˜ πof the latent permutation πand express the square likelihood ratio as P′(G1, G2) Q(G1, G2)2 = (1 + o(1))EπP(G1, G2|π) Q(G1, G2)1(G1,G2,π)∈E E˜πP(G1, G2|˜π) Q(G1, G2)1(G1,G2,˜π)∈E = (1 + o(1))Eπ⊥˜πP(G1, G2|π) Q(G1, G2)P(G1, G2|˜π) Q(G1, G2)1(G1,G2,π)∈E 1(G1,G2,˜π)∈E . Taking expectation for both sides under Q, the conditional second moment is given by EQ"P′(G1, G2) Q(G1, G2)2# = (1 + o(1))EQ Eπ⊥˜πP(G1, G2|π) Q(G1, G2)P(G1, G2|˜π) Q(G1, G2)1(G1,G2,π)∈E 1(G1,G2,˜π)∈E = (1 + o(1))Eπ⊥˜π EQP(G1, G2|π) Q(G1, G2)P(G1, G2|˜π) Q(G1, G2)1(G1,G2,π)∈E 1(G1,G2,˜π)∈E = (1 + o(1))Eπ⊥˜π 1(G1,G2,π)∈E 1(G1,G2,˜π)∈EEQP(G1, G2|π) Q(G1, G2)P(G1, G2|˜π) Q(G1, G2) , (26) where the last equality holds since Eis independent with the edges in G1andG2. Recall that I∗=I∗(π,˜π) defined in (10). Since I∗=∪C∈C∪e∈C∪v∈V(e)∩V(G1)vby the definition of I∗, we obtain thatI∗ 2 =P C∈C|C|by counting the edges induced by the vertices in I∗. Combining this with (9) and (26), we have that EQ"P′(G1, G2) Q(G1, G2)2# = (1 + o(1))Eπ⊥˜π 1(G1,G2,π)∈E 1(G1,G2,˜π)∈EEQP(G1, G2|π) Q(G1, G2)P(G1, G2|˜π) Q(G1, G2) = (1 + o(1))Eπ⊥˜π" 1(G1,G2,π)∈E 1(G1,G2,˜π)∈EY C∈C1 1−ρ2|C|# ≤(1 +o(1))Eπ⊥˜π" 1(G1,G2,π)∈E 1(G1,G2,˜π)∈EY C∈C1 1−ρ2|C|# = (1 + o(1))Eπ⊥˜π" 1(G1,G2,π)∈E 1(G1,G2,˜π)∈E1 1−ρ2|I∗|(|I∗|−1)/2# , where the inequality follows from1 1−ρ2x− 1 1−ρ2x =(1−ρ2)x+(ρ2)x−1 (1−ρ2x)(1−ρ2)x≤1−ρ2+ρ2−1 (1−ρ2x)(1−ρ2)x= 0 for any 0< ρ < 1 and
x≥1. Since P[|I∗|=t]≤s n2tby Lemma 3 and |I∗| ≤ |V(G1)∩π−1(V(G2))| ≤ 24 (1+ϵ)s2 nif (G1, G2, π),(G1, G2,˜π)∈ E, we obtain EQP′(G1, G2) Q(G1, G2)2 ≤(1 +o(1))Eπ⊥˜π" 1(G1,G2,π)∈E 1(G1,G2,˜π)∈E1 1−ρ2|I∗|(|I∗|−1)/2# = (1 + o(1))(1+ϵ)s2 nX t=0P[|I∗|=t]1 1−ρ2t(t−1)/2 ≤(1 +o(1))(1+ϵ)s2 nX t=0s n2t1 1−ρ2t(t−1)/2 . (27) Letat≜s n2t 1 1−ρ2t(t−1)/2 . For any t <(1+ϵ)s2 n, we have at+1 at=s2 n21 1−ρ2t ≤s2 n21 1−ρ2(1+ϵ)s2 n = exp logs2 n2 +(1 +ϵ)s2 nlog1 1−ρ2 .(28) Since s2≤nlogn 8 log(1 /(1−ρ2)), we obtain (1 +ϵ)s2 nlog1 1−ρ2 ≤(1 +ϵ) logn 8 and logs2 n2 ≤loglogn 8nlog (1 /(1−ρ2))(a) ≤ −1 2logn+ loglogn 8 , where (a) is because log 1 1−ρ2 ≥log 1 1−n−1/2 ≥n−1/2. Combining this with (28), we obtain thatat+1 at≤exp −(3−ϵ) logn 8+ log logn 8 ≤n−1/4. Therefore, by (27), EQP′(G1, G2) Q(G1, G2)2 ≤(1 +o(1))(1+ϵ)s2 nX t=0at = (1 + o(1))(1+ϵ)s2 nX t=0s n2t1 1−ρ2t(t−1)/2 ≤1 +o(1) 1−n−1/4= 1 + o(1), which implies that TV(P,Q) =o(1) by (25). C Proof of Lemmas C.1 Proof of Lemma 1 Recall that |V(G1)|=|V(G2)|=sandV(G1)⊆V(G1), V(G2)⊆V(G2). For any π∼ S n, we note that the event |π(V(G1))∩V(G2)|=twith t∈[s] can be divided as: •Picking tvertices in V(G1), V(G2) respectively and constructing the mapping between picked vertices. We haves t2t! options for this step. 25 •Mapping the remaining s−tvertices in V(G1) to V(G2)\V(G2). We haven−s s−t (s−t)! options for this step. •Mapping V(G1)\V(G1) to the remaining vertices in V(G2). We have ( n−s)! options for this step. Then, for any t≤s, we have that P[|π(V(G1))∩V(G2)|=t] =s t2t!·n−s s−t (s−t)!·(n−s)! n!=s tn−s s−t n s, (29) which indicates that the size of intersection set |π(V(G1))∩V(G2)|follows hypergeometric distri- bution HG( n, s, s ) where πUnif.∼ S n. C.2 Proof of Lemma 2 For any P= (e1, π(e1), e2,···, ej, π(ej))∈Pwith ˜ π(e2) =π(e1),···,˜π(ej) =π(ej−1), we have that LP=jY i=1ℓ(ei, π(ei))jY i=2ℓ(ei,˜π(ei)) =jY i=1ℓ(ei, π(ei))jY i=2ℓ(ei, π(ei−1)) =ℓ(e1, π(e1))ℓ(π(e1), e2)···ℓ(π(ej−1), ej)ℓ(ej, π(ej)). 
(30) Under the distribution Q, it follows from (30) that LP=ℓ(B0, B1)ℓ(B1, B2)···ℓ(Bk−1, Bk) for some k∈NandB0, B1,···, Bki.i.d.∼ N (0,1). Recall that ℓ(a, b) =P βe(G1) =a, βπ(e)(G2) =b Q βe(G1) =a, βπ(e)(G2) =b=1p 1−ρ2exp−ρ2(a2+b2) + 2ρab 2(1−ρ2) ,for any a, b∈R. (31) Then, EQ[LP] =EQ[ℓ(B0, B1)ℓ(B1, B2)···ℓ(Bk−1, Bk)] =1 (2π)(k+1)/2((1−ρ2))k/2Z ···Z exp k−1X t=0−ρ2(b2 t+b2 t+1) + 2ρbtbt+1 2(1−ρ2)! exp kX t=0−b2 t 2! db0···dbk =1 (2π)(k+1)/2((1−ρ2))k/2Z ···Z exp −Pk−1 t=0(bt−ρbt+1)2 2(1−ρ2)−b2 k 2! db0···dbk= 1, where the last equality holds since the transformation B′ t≜Bt−ρBt+1√ 1−ρ2for any 0 ≤t≤k−1 yields thatEQ[LP] =1 (2π)(k+1)/2R ···R exp −Pk−1 t=0b′2 k 2−b2 k 2 db′ 0···db′ k−1dbk= 1. For any C= (e1, π(e1), e2,···, ej, π(ej))∈Cwith ˜ π(e2) =π(e1),···,˜π(ej) =π(ej−1) and ˜π(e1) =π(ej), we denote e0=ejfor notational simplicity. Then, we have that LC=jY i=1ℓ(ei, π(ei))jY i=1ℓ(ei,˜π(ei)) =jY i=1ℓ(ei, π(ei))jY i=1ℓ(ei, π(ei−1)) =ℓ(e1, π(e1))ℓ(π(e1), e2)···ℓ(π(ej−1), ej)ℓ(ej, π(ej))ℓ(π(ej), e1). 26 Then LC=ℓ(B1, B2)···ℓ(Bk−1, Bk)ℓ(Bk, B1) for k= 2jandB1,···, Bki.i.d.∼ N (0,1). Denote Bk+1=B1, we have that EQ[LC] =EQ[ℓ(B1, B2)···ℓ(Bk−1, Bk)ℓ(Bk, B1)] =1 (2π(1−ρ2))k/2Z ···Z exp Pk−1 t=0−ρ2 b2 t+b2 t+1 + 2ρbtbt+1 2(1−ρ2)! exp kX t=1−b2 t 2! db1···dbk =1 (2π(1−ρ2))k/2Z ···Z exp Pk−1 t=0−(bt−ρbt+1)2 2(1−ρ2)! db1···dbk. LetCt≜Bt−ρBt+1for any
1 ≤t≤k. Then [C1, C2,···, Ck−1, Ck]⊤=Jk[B1, B2,···, Bk−1, Bk]⊤, where Jk≜ 1−ρ0··· 0 0 1 −ρ··· 0 0 0 1 ··· 0 ............... −ρ0··· 0 1  and thus det ( Jk) = 1−ρk(see, e.g., [Dav79, Section 3.2]). Then, we obtain that EQ[LC] =1 (2π(1−ρ2))k/2det (Jk)Z ···Z exp Pk t=1−c2 t 2(1−ρ2)! dc1···dck=1 1−ρk=1 1−ρ2|C|. C.3 Proof of Lemma 3 LetI′≜argmaxI⊆V(G1),π(I)=˜π(I)|I|, we first show that I′=I∗. On the one hand, since π(I′) = ˜π(I′), we have πI′ 2 = ˜πI′ 2 . Recall that the connected components of the correlated func- tional digraph in Definition 3 consist of paths and cycles. For any path P∈P, we note that π(P∩V(G1))̸= ˜π(P∩V(G1)), and thus πP∩V(G1) 2 ̸= ˜πP∩V(G1) 2 . For any cycle C∈C, we note that π(C∩V(G1)) = ˜ π(C∩V(G1)), and thus πC∩V(G1) 2 = ˜πC∩V(G1) 2 . There- fore,I′ 2 ⊆ ∪ C∈C∪e∈C∩E(G1)e. By the definition of I∗, we obtain I′⊆I∗. On the other hand, for anyC∈C, since π ∪e∈C∩E(G1)e = ˜π ∪e∈C∩E(G1)e by the definition of a cycle and C∩C′=∅ for any C̸=C′∈C, we have that π ∪C∈C∪e∈C∩E(G1)e = ˜π ∪C∈C∪e∈C∩E(G1)e . Therefore, we have π ∪C∈C∪e∈C∪v∈v(e)∩V(G1)v = ˜π ∪C∈C∪e∈C∪v∈v(e)∩V(G1)v , which implies π(I∗) = ˜π(I∗). Since I∗⊆V(G1), by the definition of I′, we conclude that I∗⊆I′. Therefore, we have I∗=I′= argmaxI⊆V(G1),π(I)=˜π(I)|I|. For any t≤s, by the union bound, we obtain P[|I∗|=t]≤P[∃A⊆V(G1),|A|=t, π(A) = ˜π(A)⊆V(G2)] ≤s t P[A⊆V(G1),|A|=t, π(A) = ˜π(A)⊆V(G2)]. (32) 27 For any fixed set A⊆V(G1) with |A|=tandπ(A) = ˜π(A)⊆V(G2), we first choose a set B⊆V(G2) with |B|=t, and set π(A) = ˜π(A) =B. There ares t ways to choose B, and t!2ways to map π(A) = ˜π(A) =B. For the remaining vertices in V(G1), there are ( n−t)!2ways to map them under πand ˜π. Therefore, s t P[A⊆V(G1),|A|=t, π(A) = ˜π(A)⊆V(G2)] =s t ·1 (n!)2s t t!2(n−t)!2≤s n2t , (33) where the last inequality is due to the fact thats t ·1 (n!)2s t t!2(n−t)!2=h s(s−1)···(s−t+1) n(n−1)···(n−t+1)i2 and for anyi= 1,···, t−1,s−i n−i≤s n. Combining this with (32), we obtain P[|I∗|=t]≤s n2t. 
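The determinant identity det(J_k) = 1 − ρᵏ (for the matrix with ones on the diagonal, −ρ on the superdiagonal, and −ρ in the bottom-left corner) can be confirmed numerically; the helper names below are ours:

```python
def det(mat):
    # plain-Python determinant via Gaussian elimination with partial pivoting
    a = [row[:] for row in mat]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if a[p][i] == 0.0:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def J(k, rho):
    # ones on the diagonal, -rho on the superdiagonal and in the bottom-left corner
    m = [[0.0] * k for _ in range(k)]
    for i in range(k):
        m[i][i] = 1.0
        m[i][(i + 1) % k] = -rho
    return m

for k in (2, 3, 5, 8):
    for rho in (0.3, 0.7, 0.95):
        assert abs(det(J(k, rho)) - (1 - rho ** k)) < 1e-10
```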
D Auxiliary Results D.1 Concentration Inequalities for Gaussian Lemma 4 (Hanson-Wright inequality) .LetX, Y∈Rnbe standard Gaussian vectors such that the pairs (Xi, Yi)∼ N0 0 ,1ρ ρ1 are independent for i= 1,···, n. Let M0∈Rn×nbe any deterministic matrix. There exists some universal constant c0>0such that Ph X⊤M0Y−ρTr(M0) ≥c0 ∥M0∥Fp log(1/δ)∨ ∥M0∥2log(1/δ)i ≤δ. Proof. Note that X⊤M0Y=1 4(X+Y)⊤M0(X+Y)−1 4(X−Y)⊤M0(X−Y) and Eh (X+Y)⊤M0(X+Y)i = (2 + 2 ρ)Tr(M0),Eh (X−Y)⊤M0(X−Y)i = (2−2ρ)Tr(M0). By Hanson-Wright inequality [HW71], there exists some universal constant c0such that P 1 4(X+Y)⊤M0(X+Y)−2 + 2 ρ 4Tr(M0) ≥c0 2 ∥M0∥Fp log(1/δ)∨ ∥M0∥2log(1/δ) ≤δ 2, P 1 4(X−Y)⊤M0(X−Y)−2−2ρ 4Tr(M0) ≥c0 2 ∥M0∥Fp log(1/δ)∨ ∥M0∥2log(1/δ) ≤δ 2 for any δ >0. Consequently, Ph X⊤M0Y−ρTr(M0) ≥c0 ∥M0∥Fp log(1/δ)∨ ∥M0∥2log(1/δ)i ≤P 1 4(X+Y)⊤M0(X+Y)−2 + 2 ρ 4Tr(M0) ≥c0 2 ∥M0∥Fp log(1/δ)∨ ∥M0∥2log(1/δ) +P 1 4(X−Y)⊤M0(X−Y)−2−2ρ 4Tr(M0) ≥c0 2 ∥M0∥Fp log(1/δ)∨ ∥M0∥2log(1/δ) ≤δ. D.2 Concentration Inequalities for Chi-Squared Distribution Lemma 5 (Chernoff’s inequality for Chi-squared distribution) .Suppose ξ∼χ2(n). Then, for any δ >0, we have P[ξ >(1 +δ)n]≤exp −n 2(δ−log (1 + δ)) , (34) P[ξ <(1−δ)n]≤exp −n 2(−δ−log(1−δ)) .
(35) Proof. The results follow from Theorems 1 and 2 in [Gho21]. 28 D.3 Concentration Inequalities for Hypergeometric Distribution Lemma 6 (Concentration inequalities for Hypergeometric distribution) .Forη∼HG(n, s, s )and anyϵ >0, we have P η≥(1 +ϵ)s2 n ≤exp −ϵ2s2 (2 +ϵ)n ∧exp −ϵ2s3 n2 , (36) P η≤(1−ϵ)s2 n ≤exp −ϵ2s2 2n ∧exp −ϵ2s3 n2 . (37) Proof. Denote ξ∼Bin s,s n , by Theorem 4 in [Hoe94], for any continuous and convex function f, we have E[f(η)]≤E[f(ξ)]. We note the function f(x) = exp ( λx) is continuous and convex for any λ∈R. Therefore, we have E[exp ( λη)]≤E[exp ( λξ)] for any λ∈R, and thus the Chernoff bound for ξremains valid for η. Combining this with Theorems 4.4 and 4.5 in [MU05], we have P η≥(1 +ϵ)s2 n ≤exp −ϵ2s2 (2 +ϵ)n ,P η≤(1−ϵ)s2 n ≤exp −ϵ2s2 2n . By Hoeffding’s inequallity [Hoe94], we also have P η≥(1 +ϵ)s2 n ≤exp −ϵ2s3 n2 ,P η≤(1−ϵ)s2 n ≤exp −ϵ2s3 n2 . Therefore, we finish the proof of Lemma 6. E Additional Experiments We provide a simple illustration on how our algorithm can be applied on real dataset. We conduct an experiment on Freeman’s EIES networks [FF79], a small dataset of 46 researchers, where edge weights represent communication strength at two time points. We apply our method to test for correlation between these two temporal networks. We examine how sample size affects privacy protection by analyzing the normalized similarity score, defined as the similarity score e(Hf π) divided bys 2 . Indeed, a lower score suggests weaker correlation and greater support for the null hypothesis of independence. We apply our algorithm to the EIES dataset at different sample sizes, s= 10,20,40 and compute the corresponding normalized similarity scores: -1.066, -0.905, and -0.651. The scores increase with sample size, indicating stronger detected correlation. The lower scores at small sample sizes reflect failed correlation detection, quantifying the reduction in re-identification risk. 29
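The concentration bounds in Lemmas 5 and 6 admit quick numerical sanity checks: the chi-square upper-tail bound (34) by Monte Carlo, and the hypergeometric bounds (36)-(37) by exact pmf computation. The parameter values below are ours, chosen only for illustration:

```python
import math, random
from math import comb

random.seed(0)

# Monte Carlo check of the chi-square Chernoff bound (34) in Lemma 5
n, trials = 10, 20000
for delta in (0.5, 1.0, 2.0):
    bound = math.exp(-n / 2 * (delta - math.log(1 + delta)))
    emp = sum(sum(random.gauss(0, 1) ** 2 for _ in range(n)) > (1 + delta) * n
              for _ in range(trials)) / trials
    assert emp <= bound

# Exact check of the hypergeometric tail bounds in Lemma 6
def hg_pmf(N, s, t):
    # P[eta = t] for eta ~ HG(N, s, s)
    return comb(s, t) * comb(N - s, s - t) / comb(N, s)

N, s = 200, 40
mean = s * s / N  # = 8
for eps in (0.25, 0.5, 1.0):
    upper = sum(hg_pmf(N, s, t) for t in range(s + 1) if t >= (1 + eps) * mean)
    lower = sum(hg_pmf(N, s, t) for t in range(s + 1) if t <= (1 - eps) * mean)
    assert upper <= math.exp(-eps ** 2 * s ** 2 / ((2 + eps) * N))
    assert lower <= math.exp(-eps ** 2 * s ** 2 / (2 * N))
```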
arXiv:2505.14458v1 [math.ST] 20 May 2025

Adaptive Estimation of the Transition Density of Controlled Markov Chains

Imon Banerjee (Department of Industrial Engineering and Management Sciences, Northwestern University), Vinayak Rao (Department of Statistics, Purdue University), and Harsha Honnappa (Edwardson School of Industrial Engineering, Purdue University)

Abstract

Estimating the transition dynamics of controlled Markov chains is crucial in fields such as time series analysis, reinforcement learning, and system exploration. Traditional non-parametric density estimation methods often assume independent samples and require oracle knowledge of smoothness parameters such as the Hölder continuity coefficient. These assumptions are unrealistic in controlled Markovian settings, especially when the controls are non-Markovian, since such parameters must hold uniformly over all control values. To address this gap, we propose an adaptive estimator for the transition densities of controlled Markov chains that does not rely on prior knowledge of smoothness parameters or assumptions about the control sequence distribution. Our method builds upon recent advances in adaptive density estimation: it selects an estimator that minimizes a loss function and fits the observed data well, using a constrained minimax criterion over a dense class of estimators. We validate the performance of our estimator through oracle risk bounds, employing both randomized and deterministic versions of the Hellinger distance as loss functions. This approach provides a robust and flexible framework for estimating transition densities in controlled Markovian systems without imposing strong assumptions.

Contents
1 Introduction
  1.1 Notation
2 Risk Bounds With Respect to Empirical Hellinger Loss
  2.1 Proof of Theorem 1
3 The Risk Bound for the deterministic Hellinger Loss
  3.1 Ergodic Occupation Measure Exists
  3.2 Proof of Corollary 1
  3.3 Ergodic Occupation Measure Does Not Exist
4 Applications
  4.1 Uniform Risk Bounds over Functional Classes
  4.2 Estimating the Transition Density of Fully Connected Markovian CMC’s
  4.3 Estimating the Transition Density of Fully Connected non-Markovian CMC’s
5 Conclusions
A Sketch of Proof of Proposition 2
B Proofs
  B.1 Proof of Proposition 1
  B.2 Proof of Proposition 2
  B.3 Proof of Proposition 3
  B.4 Proof of Lemma 4
  B.5 Proof of Proposition 5
  B.6 Proof of Lemma 6
  B.7 Proof of Proposition 7
  B.8 Proof of Proposition 8
  B.9 Proof of Lemma 9
  B.10 Proof of Proposition 10
  B.11 Proof of Proposition 11
  B.12 Proposition 19 and proof of its upper bound
  B.13 Proof of the lower bound of Proposition 19
  B.14 Proof of the upper bound in Lemma 15
  B.15 Proof of Lemma 16
  B.16 Proof of Lemma 17
  B.17 Sketch of Proofs of Corollaries 2 and 3
  B.18 Proof of Proposition 21
  B.19 Proof of Theorem 2
  B.20 Proof of Theorem 3
  B.21 Proof of Lemma 22
  B.22 Proof of Lemma 24
  B.23 Proof of Lemma 25
1 Introduction

A stochastic process {(X_i, a_i)} is called a controlled Markov chain (CMC) [18] if the next "state" X_{i+1} depends only on the current state X_i and the current "control" a_i. Informally, this means

P(X_{i+1} ∈ dy | X_0, a_0, …, X_i, a_i) = P(X_{i+1} ∈ dy | X_i = x_i, a_i = l_i) = s(x_i, l_i, y) μ_χ(dy),

where s(x_i, l_i, y) gives the probability density of moving from the current state x_i with action l_i to the next state y. Here, the actions a_i depend only on the information available up to time i. This paper addresses adaptive estimation of the transition density s of controlled Markov chains.

In general, controlled Markov chains can be used to model both time-homogeneous (e.g., i.i.d. [56] or Markovian [16]) and time-inhomogeneous (e.g., i.n.i.d., time-inhomogeneous Markovian [26, 43], or Markov decision process [33]) data. However, they also appear in numerous other problems such as offline reinforcement learning [38], system stabilisation [59], and system identification [39, 41]. As a specific example, consider prescribing medication to a diabetic patient, where the state is the current blood glucose level and the control is the prescribed medication [53].

There is no reason to believe that the previous examples involve controls that are Markovian. It is known that certain categories of adversarial Markov games [57], reward machines [34], and minimum-entropy explorations [47] induce Markovian state transitions with non-Markovian controls. This necessitates sharp estimates of the transition dynamics of Markovian systems in the presence of non-Markovian controls.

Although nonparametric estimation of the density of i.i.d. [56] or (more recently) Markovian [4, 40] samples is a well-studied topic with wide applications in settings such as regression, classification, and unsupervised learning [42], there is little existing work addressing the estimation of controlled Markov chains. An inherent challenge of this setup is non-stationarity.
Recall from [4] that a natural approach to estimating the transition density of a Markov chain is to estimate the joint density of (X_i, X_{i+1}) and the marginal density of X_i, and then take their ratio. This method works well even if the Markov chain is ergodic rather than stationary. However, if the process is non-stationary and non-ergodic, then there are no well-defined estimators for the joint or the marginal, and the conditional cannot be derived from their ratio. On a related note, a controlled Markov chain may have all amenable properties, such as recurrence and mixing, without being ergodic (see Lemma 4). Furthermore, non-parametric estimation presents a number of difficulties, being highly sensitive to the choice of hyperparameters such as the bandwidth of the estimator. For example, with n samples and assuming that the density s is σ-Hölder continuous, one can set the bandwidth to be O(n^{-1/(2σ+1)}) to obtain the minimax risk O(n^{-2σ/(2σ+1)}) [56, Chapter 1]. However, while it is common practice to assume such oracle knowledge about σ, this is often unrealistic. Such an assumption is especially problematic when the data is generated by a controlled Markovian process, since one requires it to hold for all possible values of the controls. Specifically, with X_i being the state at time i, a_i the control at time i, and X_{i+1} the state at time i+1, one requires P(X_{i+1} ∈ dy | X_i = x, a_i = l) =: s(x, l, y) μ_χ(dy)
to be σ-Hölder continuous for all values of l. To avoid such strong assumptions, we rely upon the recent and rapidly evolving techniques of adaptive density estimation. This technique was pioneered by [12] and has been further developed in [42, 8, 10, 11, 17, 52]. In this paper, our objective is to adapt this technique and create an adaptive estimator for the transition densities of controlled Markov chains. Informally, adaptive estimation selects a best estimator with respect to a loss H from a known class M by minimising a contrast (which, for us, is eq. (Constrast) below), thereby completely sidestepping the problem of manually setting the bandwidth. We refer the reader to Chapter 1 of the textbook [42] for more details. Two questions remain: 1) Is the optimisation problem introduced by the contrast computationally tractable for our choices of H and M? 2) Is the selected estimator minimax optimal over the class of all possible estimators under appropriate assumptions on the true density? The answers to both questions are affirmative. For the former, see Remark 3; for the latter, see Theorem 4 and Corollaries 2 and 3. Importantly, the minimaxity guarantee is achieved without prior knowledge of the smoothness parameters.

Technical Contributions: Our main contribution is showing that an optimal histogram estimator (computable in polynomial time) of the transition function s based on dyadic partitions satisfies an oracle risk bound irrespective of the distribution of the controls a_i (Theorem 1). Interestingly, we find that the optimal estimator can be constructed without any assumptions on the distribution of the control sequence a_i. We then validate its performance through oracle risk bounds, employing both instance-dependent (Theorem 1) and instance-independent (Theorems 2 and 3) versions of the Hellinger distance as our loss function.
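For intuition, the squared Hellinger distance between two piecewise-constant densities on a grid (the deterministic counterpart of the empirical Hellinger loss defined in Section 2) can be computed directly. This is a sketch; the grid and densities below are illustrative, not from the paper:

```python
import math

# H^2(f, g) = (1/2) * integral of (sqrt(f) - sqrt(g))^2, here over equal cells of width w
def hellinger_sq(f, g, w):
    return 0.5 * w * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(f, g))

w = 0.25                      # four equal cells on [0, 1]
f = [1.6, 1.2, 0.8, 0.4]      # each integrates to 1: sum(f) * w == 1
g = [0.4, 0.8, 1.2, 1.6]
assert hellinger_sq(f, f, w) == 0.0
assert 0.0 < hellinger_sq(f, g, w) <= 1.0  # H^2 is at most 1 for densities
```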
Although [7] recently derived optimal estimators for the transition density of finite-state, finite-control controlled Markov chains (CMCs), there is surprisingly little work attempting to optimally estimate the transition density of a CMC with continuous state-control spaces. In a series of groundbreaking papers, adaptive estimators were developed for transition densities in various settings: i.i.d. data [9], stationary Markov chains [37], non-stationary β-mixing Markov chains [52], and stationary β-mixing paired processes [1]. This paper generalizes all of these prior works in several directions. Unlike [9, 37, 1], we do not assume our process to be stationary. Furthermore, unlike [52], we do not assume our process to be either Markovian or β-mixing. This generalization brings with it two distinct challenges, which we describe below.

1. Question of non-stationarity: In general, the n-step occupation measure of a non-stationary process may not stabilise in the limit. In other words, there may not exist a probability measure ν such that the n-step occupation measure ν_n(A) := (1/n) Σ_{i=1}^n P((X_i, a_i) ∈ A) converges to ν(A) as n → ∞. As mentioned above, there is then no meaningful way to estimate ν_n. Our solution to this problem is twofold. First, we show that, for a suitable choice of instance-dependent loss function H, the estimator ŝ is optimal for any given n-step occupation measure ν_n (Theorem 1). Second, we demonstrate that
even when using the traditional Hellinger loss, the assumption of stationarity—though convenient (Theorem 2)—is not necessary (Theorem 3). A careful analysis reveals a deeper connection with the return times of the stochastic process {(X_i, a_i)}. Key to making this connection is a Kac-type lower bound (Lemma 25) for recurring processes that we derive, which we believe is of independent interest.

2. Question of mixing: A close inspection of the existing literature [22, 52, 1] on statistics for dependent samples reveals (see, for instance, [52, Proposition B.1]) the use of the celebrated Berbee's lemma [49, Lemma 5.1], which requires the β-mixing assumption. A key contribution of this paper is to demonstrate that such an assumption is not necessary. In particular, using recent advances on concentration inequalities for α-mixing processes [45], we derive sharp bounds on the transition density estimator for α-mixing CMCs (Theorems 2 and 3). Since there are α-mixing processes which are not β-mixing [20], this provides an important relaxation of the mixing assumptions.

1.1 Notation

Let N and R denote the natural and real numbers, and let the symbol ⌊·⌋ denote the floor function. All random variables in this paper are defined with respect to a filtered probability space (Ω, F, F, P), where F is a σ-algebra and F := {F_i}_{i≥0}, with F_i ⊂ F, is a given filtration. Let {(X_i, a_i)} represent a discrete-time stochastic process adapted to F, taking values in χ ⊆ R^{d_1} and I ⊆ R^{d_2}. We call χ and I the state and control spaces, respectively. For all non-negative integers i, j, we define H^j_i := (X_j, a_j, …, X_i, a_i) and ℏ^j_i := (x_j, l_j, …, x_i, l_i), and note that ℏ^j_i is an element of (χ × I)^{j−i+1}. The σ-field generated by H^j_i shall be F^j_i. Throughout the paper, we will assume that χ and I are compact. When they are not compact, all of our theory continues to hold on any restriction of s to a compact subset A ⊂ χ × I × χ, given by s·1_A.
Observe that s·1_A is not necessarily a conditional density, in the sense that it may not integrate up to 1. Let E[X] be the expectation and σ(X) the σ-algebra induced by X. We endow χ and I with integrating measures μ_χ and μ_I respectively. One can take the μ's to be Lebesgue measure when χ and I are continuous, or counting measure when χ and I are discrete. By Vol(S) we denote the volume of the set S with respect to its natural measure; for example, if S ⊂ χ, then Vol(S) = μ_χ(S), and if S ⊂ I, then Vol(S) = μ_I(S). C and c are always used to denote universal constants whose values can change from line to line. We say m = {k : k ⊆ χ × I × χ} is a partition of χ × I × χ if ∪_{k∈m} k = χ × I × χ and k ∩ k′ = Ø for all distinct k, k′ ∈ m. Finally, to avoid trivialities, we assume throughout the paper that the number of samples, denoted by n, is at least 3.

2 Risk Bounds With Respect to Empirical Hellinger Loss

Definitions. For an arbitrary process a_i adapted to the filtration F_i, a stochastic process {(X_i, a_i)} is said to be a controlled Markov chain (CMC) with transition function s(·,·,·) : χ × I × χ → R if the conditional probability density (defined as in [3, Chapter 5]) satisfies

P(X_{i+1} ∈ dy | H^i_0 = ℏ^i_0) = P(X_{i+1} ∈ dy | (X_i, a_i) = (x_i, l_i)) = s(x_i, l_i, y) μ_χ(dy).

For any partition m, and a sample {(X_i, a_i)}_{i=0}^n of length n+1, the
histogram estimator ˆsm(·,·,·)of s(we will just use the term estimator ) is defined as ˆsm(·,·,·) :=X k∈mPn−1 i=0 1k(Xi, ai, Xi+1)Pn−1 i=0R χ1k(Xi, ai, y)dµχ(y)1k(·,·,·). (2.1) For any two bounded positive functions f1andf2(not necessarily densities) define the square of the empirical Hellinger distance H2as H2(f1, f2) :=1 2nn−1X i=0Z χp f1(Xi, ai, y)−p f2(Xi, ai, y)2 dµχ(y). (Empirical Hellinger) Remark 1. Observe that H(f1, f2)follows from the standard Hellinger distance between f1andf2(see Section 3.3, Page 61 [48]), by setting the integrating measure on χ×I×χto be the empirical measure λn:=n−1Pn−1 i=0δXi,ai⊗µχ. It follows that His a nonnegative random variable adapted to Fn 0. LetVm:=P k∈mak 1k:ak≥0∀k∈m be the set of all piecewise constant functions (not nec- essarily histograms) on partition m. The following proposition shows that ˆsmis “almost” as good as the best approximation of sinVm. For a set of integrable functions Land a function f1, define H2(f1,L) := minf2∈LH2(f1, f2). The following proposition is a standard first step (see Proposition 2.1 [52], Proof of Theorem 6 [10] etc) that illustrates how Hcan be used to choose a good estimator. Proposition 1. For a given transition function s, for any partition m, the associated estimator ˆsmsatisfies E H2(s,ˆsm) ≤2E H2(s, Vm) +1.5 + log n n|m|. Remark 2. LetL≥64be a given constant. For convenience of notation, we denote the ‘penalty’ term as pen(m) :=L(1.5 + log n)|m|/n. (2.2) Because Lis known, we have suppressed its dependence from the notation pen(m). The proof of the previous proposition can be found in Section B.1, and involves showing that ˆsmis the approximate projection of son the space of all piecewise constant functions Vmwith respect to the randomized Hellinger loss function H. Now we extend Proposition 1 to the class of all dyadic partitions on χ×I×χ. To that end, we first recursively define Ml, the set of dyadic partitions of χ×I×χupto depth las follows [23]: Definition 1. Define M0:={χ×I×χ}. 
For any $l$, let $m\in\mathcal M_l$ and $k\in m$. Thus $k$ is an element of a partition of $\chi\times I\times\chi$, so that $k\subseteq\mathbb R^{d_2+2d_1}$. Let $k_1, k_2, \dots, k_{2^{d_2+2d_1}}$ be the $2^{d_2+2d_1}$ sets obtained by equally dividing $k$ along each axis. Let $S(m, k) := m\cup\{k_1, k_2, \dots, k_{2^{d_2+2d_1}}\}\setminus\{k\}$. Then
$$\mathcal M_{l+1} := \left(\bigcup_{m\in\mathcal M_l}\bigcup_{k\in m} S(m, k)\right)\cup\mathcal M_l.$$
To formally write the contrast, we introduce some notation. For any two functions $f_1, f_2:\chi\times I\times\chi\to\mathbb R$, define $T(f_1, f_2)$ as
$$T(f_1, f_2) := \frac{1}{n}\sum_{i=0}^{n-1}\frac{1}{\sqrt 2}\,\frac{\sqrt{f_2(X_i, a_i, X_{i+1})} - \sqrt{f_1(X_i, a_i, X_{i+1})}}{\sqrt{f_2(X_i, a_i, X_{i+1}) + f_1(X_i, a_i, X_{i+1})}} + \int\sqrt{\frac{f_1+f_2}{2}}\cdot\left(\sqrt{f_2}-\sqrt{f_1}\right)d\lambda_n + \int(f_1-f_2)\,d\lambda_n. \tag{2.3}$$
Following similar literature [9, 10, 51, 52], we measure the "goodness" of a partition $m\in\mathcal M_l$ compared to all others in $\mathcal M_l$ through $\gamma(m)$, defined as
$$\gamma(m) := \sum_{K\in m}\sup_{m'\in\mathcal M_l}\left[\frac34\left(1-\frac{1}{\sqrt 2}\right) H^2(\hat s_m \mathbf{1}_K, \hat s_{m'} \mathbf{1}_K) + T(\hat s_m \mathbf{1}_K, \hat s_{m'} \mathbf{1}_K) - \mathrm{pen}(m'\vee K)\right] + 2\,\mathrm{pen}(m), \tag{2.4}$$
where
$$m'\vee K := \{K'\cap K : K'\in m',\ K'\cap K\neq\emptyset\}. \tag{2.5}$$
Since a partition uniquely defines a histogram, the selection procedure we enact requires us to choose a particular partition. Therefore, it is sufficient to use $\gamma$ to select a partition $\hat m$. For any given $(l, L)$, we select the $\hat m$ such that
$$\gamma(\hat m) \le \min_{m\in\mathcal M_l}\gamma(m) + \frac1n. \tag{Contrast}$$
Remark 3. The time complexity
of finding $\hat m$ is $O\!\left(nl(d_1+d_2) + l\,2^{(l+1)(d_1+d_2)}\right)$. See [52, Proposition A.1] or [10, Section 3.2.4] for details.
Observe that $\hat m$ depends solely on $\{(X_0, a_0), \dots, (X_n, a_n)\}$, $l$, and $L$. We define the estimator $\hat s := \hat s_{\hat m}$ and highlight its dependence on $l$ and $L$, although we omit these details in the notation for brevity. Theorem 1 demonstrates that the above estimator $\hat s$ achieves an oracle risk bound with respect to $H$. In Section 3 we demonstrate that $\hat s$ is also optimal under the usual (deterministic) Hellinger loss function.
Theorem 1. There exist universal constants $L_0$ and $C$ such that for all $L\ge L_0$ and $l\ge 1$, the estimator $\hat s$ satisfies
$$C\,\mathbb E\left[H^2(s,\hat s)\right] \le \inf_{m\in\mathcal M_l}\left\{\mathbb E\left[H^2(s, V_m)\right] + \mathrm{pen}(m)\right\}.$$
Observe that Theorem 1 does not require any recurrence or mixing assumptions on the controlled Markov chain, indicating that $\hat s_m$ is the best piecewise constant estimator of $s$ with respect to the loss function $H$ for the given sample $\{(X_i, a_i)\}$. It is instance-dependent, since the empirical Hellinger loss function itself depends upon the sample path. Moreover, by satisfying the oracle risk bound presented in Theorem 1, it becomes the best piecewise constant estimator. Because the controls $a_i$ may be non-stationary and non-ergodic, this property is even more significant for controlled Markov chains than for stationary ergodic processes such as i.i.d. data or Markov chains. To the best of our knowledge, Theorem 1 is the only result that provides a risk bound for arbitrary controlled Markov chains. We now turn to prove Theorem 1.
2.1 Proof of Theorem 1
Proof. The proof splits into two cases according to the depth $l$; for the case $l > n$, we leverage Proposition 1 and a union bound to obtain a risk bound over $\mathcal M_l$, as demonstrated in equations (2.8) and (B.13), respectively.
Case I ($l\le n$): We use the following proposition, whose proof is provided in Appendix B.2:
Proposition 2. For any $\zeta > 0$, for all $L\ge 64$ and $1\le l\le n$, and for a large enough constant $C$, the estimator $\hat s$ satisfies, for any $s$,
$$P\left(CH^2(s,\hat s) \ge \inf_{m\in\mathcal M_l}\left\{H^2(s,\hat s_m) + \mathrm{pen}(m)\right\} + \zeta\right) \le 6e^{-n\zeta}. \tag{2.6}$$
Recall that for any random variable $X$, $\int_{t>0} P(X>t)\,dt = \mathbb E[X_+] \ge \mathbb E[X]$, where $X_+ = \max(X, 0)$. Using this fact and integrating both sides of eq. (2.6) over $\zeta$, we have
$$\mathbb E\left[CH^2(s,\hat s) - \inf_{m\in\mathcal M_l}\left\{H^2(s,\hat s_m) + \mathrm{pen}(m)\right\}\right] \le \frac6n.$$
The main result now follows by trivially upper bounding $6/n$ by $L(1.5+\log n)|m|/n$ for all non-empty partitions $m$. We move to Case II.
Case II ($l\ge n+1$): We will show that, when $l\ge n+1$, the optimal histogram is created by some partition $m^\dagger$ such that $m^\dagger\in\mathcal M_n$. The proof will then proceed similarly to Case I. We begin with the following proposition, whose proof can be found in Section B.3.
Proposition 3. For all $l\ge n+1$,
$$\inf_{m\in\mathcal M_l}\gamma(m) = \inf_{m\in\mathcal M_n}\gamma(m). \tag{2.7}$$
Next, for any $l\ge n+1$ let
$$m^\dagger\in\operatorname{argmin}_{m\in\mathcal M_l}\left\{\mathbb E\left[H^2(s, V_m)\right] + \mathrm{pen}(m)\right\}.$$
To complete the proof we need to show $m^\dagger\in\mathcal M_n$. Let $\emptyset$ be the trivial partition of $\chi\times I\times\chi$ and $0_\emptyset\equiv 0$ be the trivial piecewise constant function associated with it. We now observe that
$$\mathrm{pen}(m^\dagger) \le \mathbb E\left[H^2(s, V_{m^\dagger})\right] + \mathrm{pen}(m^\dagger) \le \mathbb E\left[H^2(s, V_\emptyset)\right] + \mathrm{pen}(\emptyset) \le \mathbb E\left[H^2(s, 0_\emptyset)\right] + \mathrm{pen}(\emptyset) = \frac12 + \frac{L\log n}{n}. \tag{2.8}$$
The first inequality follows trivially from the fact that $H^2(\cdot,\cdot)\ge 0$. The
second inequality follows from the definition of $m^\dagger$. The third inequality follows from the definition of $H^2(s, V_m)$ in Proposition 1. The final equality follows by observing that $H^2(s, 0_\emptyset) = 1/2$ and by substituting the value of $\mathrm{pen}(\emptyset)$. Substituting the value of $\mathrm{pen}(m^\dagger)$ from eq. (2.2), we now get $|m^\dagger| \le 2 + n/(L\log n)$. Recall from Section 1 that $n\ge 3$ and from the hypothesis of the theorem that $L\ge 64$. Therefore, $2 + n/(L\log n)$ is trivially upper bounded by $n$. Hence $|m^\dagger|\le n$, which in turn implies that $m^\dagger\in\mathcal M_n$. The rest of the proof now follows similarly to Case I.
Proposition 2 is established by verifying that standard results in adaptive estimation of i.i.d. (Theorem 1 [9]; see also Theorem 8 [10]) or Markov chain (Theorem B.1 [52]) densities canonically extend to the realm of controlled Markov chains. A sketch of the proof is included for the convenience of the reader in Appendix A. The complete proof can be found in Section B.2.
3 The Risk Bound for the Deterministic Hellinger Loss
As mentioned previously, the empirical Hellinger risk, which was the main focus of the previous section, can be thought of as a risk bound tailored to the given sample $\{(X_i, a_i)\}$ and was therefore assumption-free. In this section, we move on to the deterministic version of the Hellinger loss, which is averaged over all possible sample paths. This brings the two additional challenges that were described in the Technical Contributions paragraph of Section 1. We address these first, beginning with mixing.
Mixing: In this section, we assume the controlled Markov chain $\{(X_i, a_i)\}$ is geometrically strongly mixing [19]. The strong mixing coefficient (also referred to as the $\alpha$-mixing coefficient) $\alpha_{i,j}$ is defined by
$$\alpha_{i,j} := \sup_{A,B}\left|P\left(H_0^i\in A,\, H_j^\infty\in B\right) - P\left(H_0^i\in A\right)P\left(H_j^\infty\in B\right)\right|, \tag{Strong Mixing Coeff.}$$
where $A$ and $B$ are Borel-measurable sets in the $\sigma$-algebras generated by $H_0^i$ and $H_j^\infty$ respectively.
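As a quick numerical aside (ours, not the paper's): when the coefficients decay exponentially, $\alpha_{i,j}\le e^{-c_p(j-i)}$, as assumed in Assumption 1 below, the constant $C_\Delta = \sup_i\left(1+\sum_{j\ge i}\sqrt{\alpha_{i,j}}\right)$ is finite, because the square roots are dominated by a geometric series. A minimal sketch, assuming nothing beyond that decay; the function names are ours:

```python
import math

def c_delta_upper_bound(c_p):
    """Closed-form bound: sum_{k>=0} e^{-c_p k/2} = 1/(1 - e^{-c_p/2}),
    so C_Delta <= 1 + 1/(1 - e^{-c_p/2}) when alpha_{i,j} <= e^{-c_p (j-i)}."""
    return 1.0 + 1.0 / (1.0 - math.exp(-c_p / 2))

def c_delta_truncated(c_p, terms=10_000):
    """Direct truncated sum 1 + sum_k sqrt(alpha) at the extremal decay rate,
    as a numerical sanity check of the closed form."""
    return 1.0 + sum(math.exp(-c_p * k / 2) for k in range(terms))
```

For instance, $c_p = 1$ gives $C_\Delta \le 3.55$, and faster mixing (larger $c_p$) drives the bound toward $2$.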
We refer the readers to [19] for a comprehensive treatment of strong mixing coefficients (see also [15] for results on finding explicit constants). We assume the following in the ensuing developments.
Assumption 1. There exists a constant $c_p$ such that $\alpha_{i,j}\le e^{-c_p(j-i)}$.
Observe that under this assumption, $\sup_i\sum_{j\ge i}\sqrt{\alpha_{i,j}} < \infty$. We define $C_\Delta := \sup_i\left(1+\sum_{j\ge i}\sqrt{\alpha_{i,j}}\right)$ and note that $C_\Delta$ is a positive constant.
Remark 4. The term "exponentially mixing" is commonly used in the literature to describe sequences of random variables whose strong mixing coefficients decay exponentially. Our primary motivation for assuming exponential mixing conditions is to utilize the sharp concentration inequalities in [45], which also require exponentially decaying strong mixing coefficients. To the best of our knowledge, there exist no equivalent results that relax the assumptions to accommodate polynomially decaying strong mixing coefficients. Any such relaxation would immediately apply to our own results.
Non-stationarity: Recall that the sequence $(X_i, a_i)$ can be non-stationary and non-ergodic. In contrast to the empirical Hellinger loss defined in eq. (Empirical Hellinger), there is no canonical notion of a deterministic Hellinger loss for such sequences. Consequently, we consider two separate cases: one in which an ergodic occupation measure (Definition 2 below) exists (Theorem 2), and one in which it does not (Theorem 3). The former
can be viewed as a generalization of stationarity, while the latter dispenses with stationarity altogether. Proposition 5 provides a simple example showing that a sharper bound can be derived by incorporating the ergodic occupation measure than by ignoring it.
3.1 Ergodic Occupation Measure Exists
The ergodic occupation measure was introduced informally in Section 1. We now formalize it by adapting equation 1.3 of [14] to the discrete-time setting.
Definition 2 (Ergodic Occupation Measure). Define the ergodic occupation measure $\nu:\mathcal B(\chi\times I)\to\mathbb R$ as
$$\nu(\mathcal A) := \lim_{t\to\infty}\frac1t\sum_{i=1}^t P((X_i, a_i)\in\mathcal A).$$
Observe that if $\{(X_i, a_i)\}$ is a strictly stationary sequence, then the ergodic occupation measure exists (i.e., the limit is well-defined) and is given by the marginal distribution of $(X_0, a_0)$. More precisely,
$$\lim_{t\to\infty}\frac1t\sum_{i=1}^t P((X_i, a_i)\in\mathcal A) = P((X_1, a_1)\in\mathcal A) = \frac1n\sum_{i=1}^n P((X_i, a_i)\in\mathcal A). \tag{3.1}$$
Definition 3. Let $\nu_n(\mathcal A) := n^{-1}\sum_{i=1}^n P((X_i, a_i)\in\mathcal A)$. We define $r_n := \|\nu_n - \nu\|_{TV}$.
Remark 5. For stationary sequences, $r_n = 0$. It can also be verified that $r_n\le O(1/n)$ holds under more general notions of stationarity, such as $N$th-order or semi-stationarity [54].
The following deterministic Hellinger distance is derived from eq. (Empirical Hellinger) by replacing the empirical measure with the ergodic occupation measure. Formally, we define the Hellinger distance $h^2$ as follows:
$$h^2(f_1, f_2) := \frac12\int_{\chi\times I\times\chi}\left(\sqrt{f_1(x, l, y)} - \sqrt{f_2(x, l, y)}\right)^2\mu_\chi(dy)\,\nu(dx, dl).$$
Let $\hat s$ be as defined in Section 2. We establish the following risk bound, whose proof is in Section B.19.
Theorem 2. Let $m^{(2)}_{\mathrm{ref}}$ be the partition of $A$ into cubes of edge length $2^{-l}$. Assume $\{(X_i, a_i)\}_{i=0}^n$ is a sequence from a controlled Markov chain satisfying Assumption 1. Then the histogram estimator $\hat s$ satisfies
$$C\,\mathbb E\left[h^2(s,\hat s)\right] \le \inf_{m\in\mathcal M_l}\left\{h^2(s, V_m) + \mathrm{pen}(m)\right\} + R(n),$$
where $R(n)$ is the following remainder term:
$$R(n) = 2^{l(d_1+d_2)}\max_{S_r\in m^{(2)}_{\mathrm{ref}}}\exp\left(-\frac{C_p n\nu^2(S_r) - 2nC_p r_n}{4C_\Delta\rho^\star(S_r) + 4n^{-1} + 2\nu(S_r)(\log n)^2 + 2r_n(\log n)^2}\right) + r_n,$$
$C_p$ only depends upon $c_p$ in Assumption 1, and
$$\rho^\star(S_r) := \sup_i\max\left(P((X_i, a_i)\in S_r),\ \sup_{j>i}\sqrt{P\left((X_i, a_i)\in S_r, (X_j, a_j)\in S_r\right)}\right).$$
We highlight two key aspects of the previous theorem. First, since $h^2(\cdot,\cdot)\le 1/2$, Theorem 2 is only meaningful if $R(n) < 1/2$. We show that this condition is satisfied whenever $\nu$ admits a density on $A$ that is bounded below by a positive constant $k_0$ (see Corollary 1 below). If $(X_i, a_i)$ is a Markov chain, this effectively means that its stationary density is bounded below by $k_0$ on the compact set $A$. In other words, we require that the chain is recurrent on $A$, which is not a stringent requirement. Second, although the $\rho^\star$ term is slightly unconventional, it is important for preserving the sharpness of the bound. See Remark 6 below for more discussion.
We now show how deterministic risk bounds for i.i.d. data (Corollary 2 of [9]) or for stationary Markov chains (Theorem 2.2 of [52]) can be recovered as special cases of Theorem 2. For concreteness, we restrict our attention to stationary Markov chains.
Corollary 1. Let $\{(X_i, a_i)\}$ be a geometrically strong mixing stationary Markov chain with invariant distribution $\nu$, which is bounded below by $k_0$. Then, for large enough $n$,
$$R(n) \le 2^{l(d_1+d_2)}\exp\left(-\frac{C_p k_0 n}{C_\Delta 2^{l(d_1+d_2)+3}(\log n)^2}\right).$$
A direct comparison of Corollary 1 with Theorem 2.2 in [52] reveals
that we recover a sharper bound for $R(n)$ due to our use of Bernstein's inequality (see Section 3.2 for details). In particular, when $d_1 = d_2$, we show that
$$R(n) \le O\left(2^{2ld}\exp\left(-\frac{C_p k_0 n}{C_\Delta 2^{2ld+3}(\log n)^2}\right)\right),$$
whereas [52] obtains the bound
$$O\left(n^2 2^{3ld+1}\exp\left(-\sqrt{\frac{nk_0}{40\times 2^{ld}}}\right)\right),$$
which is larger for sufficiently large $n$. We now turn to proving Corollary 1.
3.2 Proof of Corollary 1
Proof. $(X_i, a_i)$ is stationary. Therefore, as mentioned in Remark 5, $r_n = 0$. Consequently,
$$R(n) = 2^{l(d_1+d_2)}\max_{S_r\in m^{(2)}_{\mathrm{ref}}}\exp\left(-\frac{C_p n\nu^2(S_r)}{4C_\Delta\rho^\star(S_r) + 4n^{-1} + 2\nu(S_r)(\log n)^2}\right).$$
Next, fix a set $S_r\in m^{(2)}_{\mathrm{ref}}$. We note by stationarity that $P((X_i, a_i)\in S_r) = \nu(S_r)$. We first consider the case when $P((X_i, a_i)\in S_r)\ge\sup_{j>i}\sqrt{P((X_i, a_i)\in S_r, (X_j, a_j)\in S_r)}$, so that $\rho^\star(S_r)$ becomes $\rho^\star(S_r) = \nu(S_r)$. The other case is handled similarly with more careful bookkeeping. This implies
$$\exp\left(-\frac{C_p n\nu^2(S_r)}{4C_\Delta\rho^\star(S_r) + 4n^{-1} + 2\nu(S_r)(\log n)^2}\right) \le \exp\left(-\frac{C_p n\nu^2(S_r)}{4C_\Delta\nu(S_r) + 4n^{-1} + 2\nu(S_r)(\log n)^2}\right). \tag{3.2}$$
Recall from Assumption 1 that $C_\Delta$ is a positive number greater than 1. Therefore,
$$4C_\Delta\nu(S_r) + 4n^{-1} + 2\nu(S_r)(\log n)^2 \le 4C_\Delta\nu(S_r)(\log n)^2 + 4n^{-1}.$$
Now, allowing $n$ to be large enough that $4C_\Delta\nu(S_r)(\log n)^2\ge 4n^{-1}$, we get
$$4C_\Delta\nu(S_r)(\log n)^2 + 4n^{-1} \le 8C_\Delta\nu(S_r)(\log n)^2.$$
Substituting this upper bound on the right-hand side of eq. (3.2), we get
$$\exp\left(-\frac{C_p n\nu^2(S_r)}{4C_\Delta\nu(S_r) + 4n^{-1} + 2\nu(S_r)(\log n)^2}\right) \le \exp\left(-\frac{C_p n\nu^2(S_r)}{8C_\Delta\nu(S_r)(\log n)^2}\right) = \exp\left(-\frac{C_p n\nu(S_r)}{8C_\Delta(\log n)^2}\right).$$
$S_r$ is a cube of side length $2^{-l}$ and $\nu$ admits a density lower bounded by $k_0$. Therefore, $\nu(S_r)\ge k_0 2^{-l(d_1+d_2)}$. The rest of the proof now follows.
3.3 Ergodic Occupation Measure Does Not Exist
If the limit on the left-hand side of eq. (3.1) fails to exist, then the ergodic occupation measure is undefined. This situation arises for non-stationary, non-ergodic processes. To endow such a process with a notion of recurrence, we define the 'time to return' as follows.
Definition 4. The first hitting time of $S$ is defined as
$$\tau^{(1)}_S := \min\{n : (X_n, a_n)\in S,\ (X_j, a_j)\notin S\ \forall\, 0\le j < n\}.$$
When $i\ge 2$, the $i$-th time to return (or return time) to $S$ is recursively defined as
$$\tau^{(i)}_S := \min\left\{n : \left(X_{\sum_{k=1}^{i-1}\tau^{(k)}_S + n},\ a_{\sum_{k=1}^{i-1}\tau^{(k)}_S + n}\right)\in S,\ (X_j, a_j)\notin S\ \ \forall\, \sum_{k=1}^{i-1}\tau^{(k)}_S < j < \sum_{k=1}^{i-1}\tau^{(k)}_S + n\right\}.$$
If $a_i$ depends only on $X_i$, then $\{(X_i, a_i)\}$ forms a Markov chain, and $\{\tau^{(i)}_S\}$ becomes a renewal process [50]. We use this idea to prove a renewal-type result (Lemma 25) that counts the number of occurrences of $S$. In contrast to Harris recurrent processes, we do not assume independent renewals [29, 28], making our results applicable in a broader setting.
We now introduce some notation. We define the maximum expected return time to $S$ as $T(S)$ and recall the definition of $\nu_n(S)$ from the introduction. Formally,
$$T(S) := \sup_i\mathbb E\left[\tau^{(i)}_S\,\middle|\,\mathcal F_{\sum_{p=0}^{i-1}\tau^{(p)}_S}\right] \quad\text{and}\quad \nu_n(S) = \frac1n\sum_{i=1}^n P((X_i, a_i)\in S), \tag{3.3}$$
respectively. Lemma 4 (proved in Section B.4) establishes that having $T(S^\star)<\infty$ does not, by itself, imply that $\lim_{n\to\infty}\nu_n(S^\star)$ is well defined.
Lemma 4. There exist controlled Markov chains for which $T(S)<\infty$ and $\nu(S)$ does not exist for any $S\subset\chi\times I$.
We prove Lemma 4 by producing an i.n.i.d. (independent but not identically distributed) sequence. Thus, the counterexample is both recurrent and mixing without being ergodic. Next, we define the Hellinger distance with respect to $\nu_n$ as
$$h^2_n(f_1, f_2) := \frac12\int_{\chi\times I\times\chi}\left(\sqrt{f_1(x, l, y)} - \sqrt{f_2(x, l, y)}\right)^2\mu_\chi(dy)\,\nu_n(dx, dl).$$
Choose a depth $l\le n$ and let $m^{(2)}_{\mathrm{ref}}$ be the partition of $\chi\times I$ into uniform cubes of edge length $2^{-l}$. To avoid trivialities, we implicitly assume throughout the rest of this section that $T(S)<\infty$ for any $S\in m^{(2)}_{\mathrm{ref}}$. We interpret this condition to mean that the controlled Markov chain $\{(X_i, a_i)\}$ is recurrent on open subsets of $\chi\times I$. This enforces a notion of recurrence even for non-stationary processes and allows us to establish the non-ergodic analogue of Theorem 2 in Theorem 3 next; the proof is relegated to Section B.20.
Theorem 3. Let $m^{(2)}_{\mathrm{ref}}$ be the partition of $\chi\times I$ into uniform cubes of edge length $2^{-l}$. Define $S^\star$ as
$$S^\star := \operatorname{argmax}_{S_r\in m^{(2)}_{\mathrm{ref}}}\exp\left(-\frac{C_p n}{4T(S_r)^2\left(4C_\Delta\rho^\star(S_r) + 4n^{-1}\right) + 2T(S_r)(\log n)^2}\right),$$
where $C_\Delta$ is as in Assumption 1, $C_p$ only depends upon $c_p$ in Assumption 1, and
$$\rho^\star(S_r) := \sup_i\max\left(P((X_i, a_i)\in S_r),\ \sup_{j>i}\sqrt{P\left((X_i, a_i)\in S_r, (X_j, a_j)\in S_r\right)}\right).$$
With $n\ge 2T(S^\star)$, assume that $\{(X_i, a_i)\}_{i=0}^n$ is a sequence from a controlled Markov chain satisfying Assumption 1. Then the histogram estimator $\hat s$ satisfies the following risk bound:
$$C\,\mathbb E\left[h^2_n(s,\hat s)\right] \le \inf_{m\in\mathcal M_l}\left\{h^2_n(s, V_m) + \mathrm{pen}(m)\right\} + R(n),$$
where the remainder term satisfies
$$R(n) = 2^{l(d_1+d_2)}\exp\left(-\frac{C_p n}{4T(S^\star)^2\left(4C_\Delta\rho^\star(S^\star) + 4\right) + 2T(S^\star)(\log n)^2}\right).$$
Remark 6. We remark on two important aspects of the previous theorem, both of which are related to the remainder term $R(n)$. First, as noted earlier, the risk bound is only meaningful if $R(n) < 1/2$, which requires $T(S^\star)<\infty$. Second, although the term $\rho^\star$ may initially appear unusual, it is instrumental in proving Corollary 1 and establishing the lower bound in Theorem 4. $\rho^\star$ arises in the proof of Theorem 3 when we use a Bernstein inequality coupled with a covariance bound for strongly mixing random variables (Lemma 24) to bound a covariance term (eq. (B.38)).
If one were to trivially set $\rho^\star = 1$ or rely on weaker Hoeffding-type inequalities for non-stationary processes (e.g., Theorem 1.2 of [35]), the lower bound would degrade to the point of losing its minimax sharpness. Such connections between concentration inequalities and the precision of resulting bounds are well established in the literature; see Section 1.2 of [42] for a detailed discussion.
A natural question concerns the optimality of the previous bound. The following theorem addresses this issue by demonstrating the minimax-optimality (described below in eq. (3.5)) of the estimator up to poly-log order terms.
Theorem 4. Assume the conditions of Theorem 3, and define $\tilde S^\star := \operatorname{argmax}_{S\in m^{(2)}_{\mathrm{ref}}} T(S)$.
1. If
$$\frac{n}{(\log n)^3} \ge cC_p^{-1} T(S^\star)^2\left(C_\Delta\rho^\star(S^\star) + \frac{1}{T(S^\star)}\right)\log T(\tilde S^\star), \tag{3.4}$$
then $R(n)\le 4/n$.
2. If $n\le C_p^{-1} T(S^\star)^2\left(C_\Delta\rho^\star(S^\star) + \frac{1}{T(S^\star)}\right)$, then $R(n) > 1/2$, and the minimax risk satisfies
$$\inf_{\hat s}\sup_s\mathbb E\left[h^2_n(s,\hat s)\right] \le \frac{1}{2(1+\pi^2)}, \tag{3.5}$$
where the infimum is over the class of all possible estimators and the supremum is over the class of all possible controlled Markov chains satisfying our assumptions.
Proof. The proof is divided into two cases. When $l\le n$, the proof follows from Proposition 19 in Section B.12. Next, when $l\ge n+1$, it follows similarly to the proof of Case II of Theorem 1 that the optimal histogram is created by some partition $m^\dagger$ such that $m^\dagger\in\mathcal M_n$. This completes the proof.
A final question concerns the utility of considering the ergodic occupation measure of Section 3.1, given that Theorem 3 proves a risk bound in a more general setting. The benefit is the inherent tightness that an average-case object like the ergodic occupation measure provides over a worst-case statistic like the maximum expected return time. In this situation, $\nu$ is smaller than $T$, and Theorem 2 provides a tighter bound than Theorem 3. We make this concrete with the following proposition.
Proposition 5. Let $R^{(1)}(n)$ be the remainder term obtained from Theorem 2 and $R^{(2)}(n)$ be the remainder term obtained from Theorem 3. Then there exists a controlled Markov chain such that $R^{(2)}(n) = O(\mathrm{pen}(m^\star))$ and $R^{(1)}(n) = o(\mathrm{pen}(m^\star))$, where $m^\star$ is the partition minimising the oracle risk.
The broad idea behind the proof is to compare the remainder terms of a time-inhomogeneous Markov chain with carefully chosen piecewise-constant densities. It demonstrates that under appropriate choices, the first remainder term is negligible compared to the second. See Section B.5 for full details.
4 Applications
In this section we show the applicability of Theorem 1 by deriving uniform risk bounds when $s$ lies in a given smooth functional class. We also demonstrate the applicability of Theorem 3 for controlled Markov chains by showing that its conditions hold under mild and practical assumptions. We start with the former.
4.1 Uniform Risk Bounds over Functional Classes
Here we show that the empirical Hellinger loss recovers optimal rates of convergence over classes of Hölder smooth functions [13, Chapter 6]. For the purpose of illustration, we assume that $d_1 = d_2 = d$.
Definition 5. We say a function $f:A\to\mathbb R$ belongs to the Hölder space $H^\sigma(A)$ with parameter $\sigma\in(0,1]$ and finite norm $\|f\|_\sigma > 0$ if $|f(x) - f(y)|\le\|f\|_\sigma\|x-y\|^\sigma$ for all $x, y\in A$. Any $f\in H^\sigma(A)$ is called Hölder smooth.
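As an illustrative aside (ours, not the paper's), the Hölder constant of Definition 5 can be estimated on a finite grid; `holder_seminorm` below is a hypothetical helper for a one-dimensional domain.

```python
import numpy as np

def holder_seminorm(f, sigma, xs):
    """Empirical Hölder constant: max |f(x)-f(y)| / |x-y|^sigma over grid
    points, a lower bound on the norm ||f||_sigma of Definition 5 (1-D)."""
    vals = f(xs)
    dx = np.abs(xs[:, None] - xs[None, :])   # all pairwise |x - y|
    df = np.abs(vals[:, None] - vals[None, :])
    mask = dx > 0                            # skip the diagonal x = y
    return (df[mask] / dx[mask] ** sigma).max()

xs = np.linspace(0.0, 1.0, 101)
```

For example, $x\mapsto\sqrt x$ on $[0,1]$ lies in $H^{1/2}([0,1])$ with constant $1$, which the grid estimate recovers; this is the kind of regularity later assumed of $\sqrt s$ in Corollary 2.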
Recall that $H^1(A)$ is the space of all Lipschitz smooth functions, and that elements of $H^\sigma(A)$ are constant functions when $\sigma > 1$. In particular, all non-constant continuously differentiable functions belong to $H^\sigma(A)$ for some $\sigma\in(0,1]$. When $\sqrt s$ (where $s$ is the transition kernel corresponding to the controlled Markov chain) belongs to $H^\sigma(A)$, we have the following corollary to Theorem 1.
Corollary 2. For all $\sigma\in(0,1]$ and $\sqrt s\in H^\sigma(A)$, the estimator $\hat s$ satisfies, with a universal constant $C > 0$,
$$C\,\mathbb E\left[H^2(s,\hat s)\right] \le \left(d\|\sqrt s\|_\sigma\right)^{2d/(d+\sigma)}\left(\frac{\log n}{n}\right)^{\sigma/(d+\sigma)} + \frac{\log n}{n}.$$
Next, we derive a risk bound for functions belonging to isotropic Besov spaces.
Definition 6. Given a function $f\in L^p(\Omega)$, $0 < p\le\infty$, and any integer $r$, we define its modulus of smoothness of order $r$ as
$$\omega_r(f, t)_p := \sup_{0<|h|\le t}\|\Delta^r_h(f,\cdot)\|_{L^p(\Omega)}, \quad t > 0,$$
where $h\in\mathbb R^d$ and $|h|$ is its Euclidean norm. Here $\Delta^r_h$ is the $r$-th difference operator, defined by
$$\Delta^r_h(f, x) := \sum_{k=0}^r (-1)^{r-k}\binom{r}{k} f(x + kh), \quad x\in\Omega\subset\mathbb R^d,$$
where this difference is set to zero whenever one of the points $x + kh$ is not in the support of $f$. It is easy to see that for any $f\in L^p(\Omega)$, we have $\omega_r(f, t)_p\to 0$ as $t\to 0$. Then the Besov space $B^\sigma_q(L^p(A))$ consists of all functions $f$ such that
$$|f|_{B^\sigma_q(L^p(A))} := \begin{cases}\left(\int_{t>0} t^{-q\sigma-1}\left(\omega_r(f, t)_p\right)^q\,dt\right)^{1/q}, & 0 < q < \infty,\\ \sup_{t>0} t^{-\sigma}\,\omega_r(f, t)_p, & q = \infty,\end{cases}$$
is finite. Then we define $B^\sigma(L^p(A))$ as
$$B^\sigma(L^p(A)) := \begin{cases}B^\sigma_p(L^p(A)), & p\in(1,2),\\ B^\sigma_\infty(L^p(A)), & p\ge 2,\end{cases}$$
with the attached norm $\|\cdot\|_{\sigma,p}$.
Assumption 2. We make the following assumptions:
1. Let $p\in(2d/(d+1),\infty)$, $\sigma\in\left(2d(1/p - 1/2)_+, 1\right)$, and $\sqrt s\in B^\sigma(L^p(A))$.
2. For each $i$, $(X_i, a_i)$ admits a density $\Phi_i$ such that $\Phi_i(x, l)\le\Gamma$ for all $(x, l)\in\chi\times I$.
Recall from Section 1 the definition of $\mathrm{Vol}(\cdot)$. Then we have the following corollary.
Corollary 3. Under Assumption 2, the estimator $\hat s = \hat s(L_0,\infty)$ satisfies
$$C'\,\mathbb E\left[H^2(s,\hat s)\right] \le \Gamma\,\mathrm{Vol}(A)\,\|\sqrt s\|^{2d/(d+\sigma)}_{p,\sigma}\left(\frac{\log n}{n}\right)^{\sigma/(\sigma+d)} + \frac{\log n}{n},$$
where $C' > 0$ depends only on $\Gamma, \sigma, d, p$ and $\mathrm{Vol}(A)$, the volume of the set $A$.
The proofs of Corollaries 2 and 3 follow similarly to the proof of [10, Proposition 3], and we provide a brief sketch in Section B.17.
4.2 Estimating the Transition Density of Fully Connected Markovian CMCs
In this section, we focus on fully connected CMCs. A CMC $\{(X_i, a_i)\}$ is fully connected if there exists some $\varepsilon_0 > 0$ such that for all $(x, l, y)\in\chi\times I\times\chi$,
$$\varepsilon_0\le s(x, l, y)\le 1/\varepsilon_0, \tag{Fully Connected}$$
which endows $\{(X_i, a_i)\}$ with recurrence and mixing. Our notion of fully connected generalizes the class of inhomogeneous Markov chains first introduced in [24, 25] (and subsequently expanded in [44, 43]) to the setting of controlled Markov chains. A CMC is said to have 'Markov controls' if for any $S_I\subseteq I$,
$$P\left(a_i\in S_I\,\middle|\,X_i = x,\ H_0^{i-1} = \hbar_0^{i-1}\right) = P(a_i\in S_I\mid X_i = x).$$
Remark 7. The dependence of $a_i$ on $X_i$ and $i$ can be non-trivial. If there is no dependence on $i$, then $\{(X_i, a_i)\}$ is a regular Markov chain. If there is no dependence on $X_i$, then $\{X_i\}$ is a regular time-inhomogeneous Markov chain.
Our objective in this section will be to show the recurrence and mixing properties of a fully connected Markovian CMC. In particular, we will show that a fully connected Markovian CMC satisfies Assumption 1, and derive the rate constant. Then we will derive an expression for $T(S)$. We first address mixing by presenting a more general lemma, from which the mixing properties of fully connected CMCs follow as an immediate corollary.
Lemma 6. Let $\{(X_0, a_0), \dots, (X_n, a_n)\}$ be a CMC with transition density $s$ and Markov controls.
If there exist $\chi_0\subseteq\chi$ and $\kappa$ such that $\inf_{x\in\chi,\, l\in I} s(x, l, y)\ge\kappa$ for all $y\in\chi_0$, then
$$\alpha_{i,j}\le\left(1 - \mathrm{Vol}(\chi_0)\kappa\right)^{j-i-1},$$
where $\mathrm{Vol}(\chi_0)$ denotes the "volume" of the set $\chi_0$. Consequently, this CMC satisfies Assumption 1 with $C_\Delta = 1/(\mathrm{Vol}(\chi_0)\kappa)$.
Applying Lemma 6 with $\chi_0 = \chi$ and $\kappa = \varepsilon_0$ immediately shows that a fully connected controlled Markov chain satisfies Assumption 1 with $C_\Delta = (\varepsilon_0\mathrm{Vol}(\chi))^{-1}$. Moreover, we note that the proof of Lemma 6 actually shows something stronger: a fully connected CMC is $\phi$-mixing [19]. The full proof, found in Section B.6, generalizes a classical result by Wolfowitz [58] on products of matrices.
Turning to recurrence, we introduce some notation for the sake of exposition. Let $S\subseteq\chi\times I$, and let $S_\chi$ and $S_I$ be such that $S_\chi = \{x\in\chi : (x, l)\in S\text{ for some }l\in I\}$ and $S_I = \{l\in I : (x, l)\in S\text{ for some }x\in\chi\}$.
Definition 7. Define $\tau^{(i,\star,j)}_S$ to be the time between the $(j-1)$-th and $j$-th visits to $S_I$ after the $i$-th visit to the state-control set $S$. For convenience, let
$$\tau^\star = \sum_{k=1}^i\tau^{(k)}_S + \sum_{k=1}^{j-1}\tau^{(i,\star,k)}_S.$$
Then $\tau^{(i,\star,j)}_S$ is recursively defined as
$$\tau^{(i,\star,j)}_S := \min\left\{n : a_{\tau^\star+n}\in S_I,\ a_j\notin S_I\ \forall\,\tau^\star < j < \tau^\star + n\right\}.$$
Further, define
$$T^{(\star)}(S) := \sup_{i,j\ge 0}\mathbb E\left[\tau^{(i,\star,j)}_S\,\middle|\,\mathcal F_{\sum_{p=1}^{i-1}\tau^{(p)}_S + \sum_{p=1}^{j-1}\tau^{(i,\star,p)}_S}\right].$$
The following proposition establishes the return-time properties of fully connected CMCs.
Its proof is in Section B.7.
Proposition 7. For all $i\in\mathbb N$ and $S\subseteq\chi\times I$, it holds $P$-almost everywhere that
$$\mathbb E\left[\tau^{(i)}_S\,\middle|\,\mathcal F_{\sum_{p=1}^{i-1}\tau^{(p)}_S}\right] < \frac{T^{(\star)}(S)}{\varepsilon_0^3\,\mathrm{Vol}(S_\chi)}. \tag{4.1}$$
Remark 8. The bound in eq. (4.1) can be improved by more careful (but considerably more tedious) bookkeeping, but it is sufficient for the purposes of illustration.
4.3 Estimating the Transition Density of Fully Connected non-Markovian CMCs
The previous section addressed fully connected Markov chains with Markovian controls, which sufficed to ensure mixing. Here, we remove the Markovianity assumption on the controls and instead consider general sequences of minorized $\alpha$-mixing controls. To clarify the setup, we introduce additional notation. We call the sequence of controls $a_i$ minorized by $\mathcal V$ if there exists a positive measure $\mathcal V$ on $I$ such that $\mathcal V(I)\le 1$ and
$$\inf_{A\in\mathcal F_0^{p-1},\ C\subseteq\chi,\ D\subseteq I} P(a_p\in D\mid X_p\in C, A)\ge\mathcal V(D). \tag{Minorisation}$$
If $\{a_i\}$ itself forms a Markov chain, then taking $C\times A$ as a "small set" recovers the usual notion of minorization for Markov chains; see [46] for details. It remains unclear whether an analogous concept of small sets exists for controlled Markov chains, but the presence of such sets would immediately generalize the condition in eq. (Minorisation) above. To make a non-Markovian controlled Markov chain tractable for analysis, we impose the following:
Assumption 3. The controlled Markov chain $\{(X_i, a_i)\}$ is geometrically $\alpha$-mixing, fully connected, and satisfies the condition in eq. (Minorisation) with a measure $\mathcal V$ whose Radon–Nikodým derivative with respect to $\mu_\chi\otimes\mu_I$ is bounded below by $\varepsilon_1 > 0$.
This leads us to the following proposition.
Proposition 8. Let $\{(X_i, a_i)\}$ be a controlled Markov chain satisfying Assumption 3. Then it is geometrically fast $\alpha$-mixing and satisfies the following bound on expected return times:
$$T(S)\le\frac{1 - \varepsilon_0\varepsilon_1\mathrm{Vol}(S)}{\varepsilon_0\varepsilon_1\mathrm{Vol}(S)} + 1.$$
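As a toy illustration (ours, not the paper's) of the return times that $T(S)$ controls: for an i.i.d. Uniform$[0,1)$ sequence of "states" and $S = [0, 1/4)$, returns to $S$ are genuinely geometric with mean $1/P(S) = 4$, so the empirical mean return time should sit near $4$. The helper name below is ours.

```python
import numpy as np

def return_times(visit_mask):
    """Successive return times to a set S, in the spirit of Definition 4:
    the first hitting time, then the gaps between consecutive visits."""
    visits = np.flatnonzero(visit_mask)
    return np.diff(visits, prepend=0) if visits.size else visits

# Toy check: i.i.d. Uniform[0,1) states, S = [0, 1/4); return times are
# geometric with success probability P(S) = 1/4, hence mean 4.
rng = np.random.default_rng(1)
X = rng.random(200_000)
taus = return_times(X < 0.25)
```

For a genuinely controlled, non-stationary chain one would instead bound the conditional expectations $\mathbb E[\tau^{(i)}_S\mid\cdot]$ uniformly, which is exactly what Proposition 8 does via geometric domination.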
Our strategy to prove this result is to dominate the tail probabilities $P(\tau^{(i)}_S > p)$, $p\in\mathbb N$, with the tail probability of a geometric distribution whose probability of success is $\varepsilon_0\varepsilon_1\mathrm{Vol}(S)$. See Section B.8 for complete details.
The main point of this section is not merely Proposition 8, but rather that the condition in eq. (Minorisation) alone is insufficient to guarantee both recurrence and mixing in the controlled Markov chain. Lemma 9 establishes this formally, and its proof (deferred to Section B.9) provides a concrete counterexample.
Lemma 9. There exists a controlled Markov chain that satisfies the condition in eq. (Minorisation) but whose $\alpha$-mixing coefficients remain uniformly bounded away from zero.
Lemma 9 does not imply that deterministic risk bounds cannot be derived for chains failing Assumption 3; it merely shows that our two assumptions are not redundant. However, if $\{a_i\}$ is a Markov chain, then the condition in eq. (Minorisation) allows a Nummelin split [46, Chapter 5], which opens up a plethora of tools to derive its mixing properties.
5 Conclusions
In this paper, we provide two flavors of risk bounds for the estimation of the transition functions of controlled Markov chains with continuous states and controls. The first (Theorem 1) is tailored to the particular observed sample $\{(X_i, a_i)\}$ and is assumption-free, while the second (Theorems 2 and 3) are oracle risk bounds under assumptions on the recurrence and mixing conditions. This addresses several open problems posed in previous work [7], such as
data-dependent risk bounds, and risk bounds for controlled Markov chains with continuous state-control spaces.
To conclude, we list a few directions for future research. Our deterministic guarantees rely on geometric $\alpha$-mixing; existing concentration technology does not yet deliver comparably sharp bounds under summable mixing conditions. Relaxing this requirement without sacrificing tightness is an open problem, and doing so requires developing Bernstein-type inequalities for processes whose strong-mixing coefficients decay only polynomially. Moreover, while histograms confer interpretability and computational tractability, they may suffer in very high dimensions, suggesting that wavelet- or spline-based methods could yield further computational gains [40]. Integrating adaptive partitioning schemes with dimension reduction (like PCA or its variants [21]) or representation-learning techniques promises to scale the methodology to higher-dimensional state-control spaces. Looking forward, the important question of developing hypothesis tests and resampling techniques [5] for transition probabilities remains unsolved, and we plan to address this question in a future work. Broadly, the risk bounds obtained in this paper lay a principled foundation for offline reinforcement learning [55], like estimating the value, Q-, and advantage functions for offline MDPs, and for online control problems, like optimal control of Gaussian [36] and non-Gaussian [30, 6] POMDPs.
References
[1] Nathalie Akakpo and Claire Lacour. "Inhomogeneous and anisotropic conditional density estimation from dependent data". In: Electronic Journal of Statistics 5 (Jan. 2011), pp. 1618–1653. ISSN: 1935-7524. DOI: 10.1214/11-EJS653.
[2] Tom M. Apostol. "An elementary view of Euler's summation formula". In: The American Mathematical Monthly 106.5 (1999), pp. 409–418.
[3] Robert B. Ash and Catherine A. Doleans-Dade. Probability and Measure Theory. Academic Press, 2000.
ISBN: 978-0-12-065202-0.
[4] Krishna B. Athreya. "Kernel Estimation for Real-Valued Markov Chains". 1998, p. 18.
[5] Imon Banerjee and Sayak Chakrabarty. CLT and Edgeworth Expansion for m-out-of-n Bootstrap Estimators of The Studentized Median. May 2025. DOI: 10.48550/arXiv.2505.11725.
[6] Imon Banerjee and Itai Gurvich. Goggin's corrected Kalman Filter: Guarantees and Filtering Regimes. Feb. 2025. DOI: 10.48550/arXiv.2502.14053.
[7] Imon Banerjee, Harsha Honnappa, and Vinayak Rao. "Off-line Estimation of Controlled Markov Chains: Minimaxity and Sample Complexity". In: Operations Research (Feb. 2025). ISSN: 0030-364X. DOI: 10.1287/opre.2023.0046.
[8] Y. Baraud, L. Birgé, and M. Sart. "A new method for estimation and model selection: ρ-estimation". In: Inventiones mathematicae 207.2 (Feb. 2017), pp. 425–517. ISSN: 1432-1297. DOI: 10.1007/s00222-016-0673-5.
[9] Yannick Baraud. "Estimator selection with respect to Hellinger-type risks". In: Probability Theory and Related Fields 151.1 (Oct. 2011), pp. 353–401. ISSN: 1432-2064. DOI: 10.1007/s00440-010-0302-y.
[10] Yannick Baraud and Lucien Birgé. "Estimating the intensity of a random measure by histogram type estimators". In: Probability Theory and Related Fields 143.1 (Jan. 2009), pp. 239–284. ISSN: 1432-2064. DOI: 10.1007/s00440-007-0126-6.
[11] Yannick Baraud and Lucien Birgé. "Rho-estimators revisited: General theory and applications". In: The Annals of Statistics 46.6B (Dec. 2018), pp. 3767–3804. ISSN: 0090-5364. DOI: 10.1214/17-AOS1675.
[12] Andrew Barron, Lucien Birgé, and Pascal Massart. "Risk bounds for model selection via penalization". In: Probability Theory and Related Fields 113.3 (Feb. 1999), pp. 301–413. ISSN: 1432-2064. DOI: 10.1007/s004400050210.
[13] Jöran Bergh and Jörgen Löfström. Interpolation Spaces: An Introduction. Ed. by S. S. Chern et al. Vol. 223. Grundlehren der mathematischen Wissenschaften. Berlin, Heidelberg: Springer, 1976. ISBN: 978-3-642-66453-3. DOI: 10.1007/978-3-642-66451-9.
[14] Abhay G. Bhatt and Vivek S. Borkar. "Occupation measures for controlled Markov processes: characterization and optimality". In: The Annals of Probability 24.3 (July 1996), pp. 1531–1562. ISSN: 0091-1798. DOI: 10.1214/aop/1065725192.
[15] Riddhiman Bhattacharya and Galin L. Jones. Explicit Constraints on the Geometric Rate of Convergence of Random Walk Metropolis-Hastings. July 2023. DOI: 10.48550/arXiv.2307.11644.
[16] Patrick Billingsley. "Statistical methods in Markov chains". In: The Annals of Mathematical Statistics (1961), pp. 12–40.
[17] Lucien Birgé. "Model selection via testing: an alternative to (penalized) maximum likelihood estimators". In: Annales de l'IHP Probabilités et statistiques. Vol. 42. 2006, pp. 273–325.
[18] Vivek S. Borkar. Topics in controlled Markov chains. Harlow, UK: Longman Scientific & Technical, 1991.
[19] Richard C. Bradley. "Basic Properties of Strong Mixing Conditions. A Survey and Some Open Questions". In: Probability Surveys 2 (2005), pp. 107–144. DOI: 10.1214/154957805100000104.
[20] Richard C. Bradley. "Some Examples of Mixing Random Fields". In: Rocky Mountain Journal of Mathematics 23.2 (June 1993), pp. 495–519. ISSN: 0035-7596. DOI: 10.1216/rmjm/1181072573.
[21] Arghya Datta and Sayak Chakrabarty. "On the Consistency of Maximum Likelihood Estimation of Probabilistic Principal Component Analysis". In: Advances in Neural Information Processing Systems 36 (Dec. 2023), pp. 28648–28662.
[22] Nabarun Deb and Debarghya Mukherjee. Trade-off Between Dependence and Complexity for Nonparametric Learning – an Empirical Process Approach. Jan. 2024. DOI: 10.48550/arXiv.2401.08978.
[23] Ronald A. DeVore and Xiang Ming Yu. "Degree of Adaptive Approximation". In: Mathematics of Computation 55.192 (1990), pp. 625–635. ISSN: 0025-5718. DOI: 10.2307/2008437.
[24] R. L. Dobrushin. "Central Limit Theorem for Nonstationary Markov Chains. I". In: Theory of Probability & Its Applications 1.1 (Jan. 1956), pp. 65–80. ISSN: 0040-585X. DOI: 10.1137/1101006.
[25] R. L. Dobrushin. "Central Limit Theorem for Nonstationary Markov Chains. II". In: Theory of Probability & Its Applications 1.4 (Jan. 1956), pp. 329–383. ISSN: 0040-585X. DOI: 10.1137/1101029.
[26] Dmitry Dolgopyat and Omri M. Sarig. Local Limit Theorems for Inhomogeneous Markov Chains. Vol. 2331. Lecture Notes in Mathematics. Cham: Springer International Publishing, 2023. ISBN: 978-3-031-32600-4. DOI: 10.1007/978-3-031-32601-1.
[27] B. K. Ghosh. "Probability inequalities related to Markov's theorem". In: The American Statistician 56.3 (2002), pp. 186–190.
[28] Peter Glynn and Yanlin Qu. "On a New Characterization of Harris Recurrence for Markov Chains and Processes". In: Mathematics 11.9 (Jan. 2023), p. 2165. ISSN: 2227-7390. DOI: 10.3390/math11092165.
[29] Peter W. Glynn. "Wide-sense regeneration for Harris recurrent Markov processes: an open problem". In: Queueing Systems 68.3 (Aug. 2011), pp. 305–311. ISSN: 1572-9443. DOI: 10.1007/s11134-011-9238-x.
[30] E. M. Goggin. "Convergence of filters with applications to the Kalman-Bucy case". In: IEEE Transactions on Information Theory 38.3 (May 1992), pp. 1091–1100.