will omit the definition of the nuisance estimators and present a simplified algorithm in Algorithm 2. The complete bootstrap procedure with the estimators of the nuisance parameters can be found in Appendix C.

Algorithm 2: Simplified two-stage sampling and weighting procedure

Input: outcome mean estimators $\hat{E}[Y_{uN}(s)]$ and $\hat{E}[Y_{uN}^2(s)]$; estimators $\widehat{\mathrm{Cov}}^{(1)}$ and $\widehat{\mathrm{Cov}}^{(2)}(\cdot)$; weighting and scaling schemes $W \in \{A, C\}$ and $V \in \{U, N\}$; sampling function $\bar{e}(s,\cdot)$; clipping rate $l_N$.

1. First-stage sampling: compute $\tilde{A}^{(1,b)} = (\hat{\Sigma}^{(1)})^{1/2} S_1^{(b)}$, $S_1^{(b)} \sim N(0, I_2)$, with $\hat{\Sigma}^{(1)} = (\widehat{\mathrm{Cov}}^{(1)})_{2\times 2}$.
2. Second-stage sampling: compute $\tilde{A}^{(2,b)} = (\hat{\Sigma}^{(2,b)})^{1/2} S_2^{(b)}$, $S_2^{(b)} \sim N(0, I_2)$, with $\hat{\Sigma}^{(2,b)} = (\widehat{\mathrm{Cov}}^{(2)}(\tilde{A}^{(1,b)}))_{2\times 2}$.
3. Weighting procedure: compute estimates $\hat{w}^{(t,b)}_{W,V}(s)$ of $w^{(t,b)}_{W,V}(s)$ from $\tilde{A}^{(t,b)}$, $\hat{E}[Y_{uN}(s)]$, $\hat{E}[Y_{uN}^2(s)]$, $\bar{e}(s,\cdot)$, and $l_N$. Writing $(\tilde{A}^{(t,b)}(0), \tilde{A}^{(t,b)}(1))^\top \equiv \tilde{A}^{(t,b)}$, generate the bootstrap sample
$$D^{(b)}_{W,V} = \sum_{t=1}^{2} \hat{w}^{(t,b)}_{W,V}(0)\, \tilde{A}^{(t,b)}(0) - \sum_{t=1}^{2} \hat{w}^{(t,b)}_{W,V}(1)\, \tilde{A}^{(t,b)}(1).$$
4. Repeated sampling: repeat steps 1-3 to obtain $B$ bootstrap resamples.

Output: bootstrap sample $\{D^{(b)}_{W,V} : b \in [B]\}$.

We make the following remarks regarding the proposed bootstrap procedure.

Remark 6 (Computational efficiency). The computational cost of generating one bootstrap sample in Algorithm 2 is O(1), leading to a total cost of O(B) for the whole procedure. This is much more efficient than the general nonparametric bootstrap, which takes O(BN) to generate B samples even when each test statistic can be computed in linear time.

Remark 7 (A nonparametric simulation-based approach). Notably, Algorithm 2 mimics a two-stage asymptotic experiment to approximate the limiting distribution of the test statistic in (6). From this perspective, our bootstrap approach is conceptually aligned with simulation-based methods that leverage asymptotic representation results, as proposed in Hirano et al. (2023). Our method is nonparametric and free of distributional assumptions, thereby enabling more robust and widely applicable inference.
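As an illustration, the two-stage loop of Algorithm 2 can be sketched in a few lines of Python. The covariance inputs `sigma1` and `sigma2`, the weight estimator `w_hat`, and the toy numerical values below are placeholders of our own, standing in for the paper's nuisance estimators from Appendix C; this is a sketch of the sampling structure, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_samples(sigma1, sigma2, w_hat, B=5000):
    """Sketch of Algorithm 2: two-stage Gaussian sampling and weighting.

    sigma1 : 2x2 first-stage covariance estimate (Sigma-hat^(1)).
    sigma2 : callable mapping A^(1,b) to a 2x2 covariance (Sigma-hat^(2,b)).
    w_hat  : callable (t, A_t) -> pair of weights (w-hat^(t,b)(0), w-hat^(t,b)(1)).
    Returns an array of B bootstrap draws D^(b).
    """
    D = np.empty(B)
    L1 = np.linalg.cholesky(sigma1)          # Cholesky factor as matrix square root
    for b in range(B):
        A1 = L1 @ rng.standard_normal(2)     # step 1: first-stage draw
        L2 = np.linalg.cholesky(sigma2(A1))  # (Sigma-hat^(2,b))^{1/2}
        A2 = L2 @ rng.standard_normal(2)     # step 2: second-stage draw
        total = 0.0
        for t, A in ((1, A1), (2, A2)):      # step 3: weight and combine arms
            w0, w1 = w_hat(t, A)
            total += w0 * A[0] - w1 * A[1]
        D[b] = total                         # step 4: collect B resamples
    return D

# Toy inputs (purely illustrative): unit-variance arms with correlation -1/3,
# a second stage independent of A^(1), and constant weights 1/2.
Sigma1 = np.array([[1.0, -1 / 3], [-1 / 3, 1.0]])
D = bootstrap_samples(Sigma1, lambda A1: Sigma1, lambda t, A: (0.5, 0.5))
q = np.quantile(D, 0.95)  # the quantile Q_{1-alpha} used by the plug-in tests
```

The O(1) per-sample cost noted in Remark 6 is visible here: each iteration draws four Gaussians and forms one weighted combination, independent of N.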
Asymptotically valid tests with plug-in bootstrap. Based on Algorithm 2, we construct the plug-in bootstrap tests using the bootstrap samples $D^{(b)}_{W,V}$. Denote the $\sigma$-algebra generated by the observed data as $\mathcal{G}_N \equiv \sigma(\mathcal{H}_1 \cup \mathcal{H}_2)$. Suppose $D_{W,V} \stackrel{d}{=} D^{(b)}_{W,V}$ conditional on $\mathcal{G}_N$. Then for $W \in \{A, C\}$, define
$$\hat{\phi}^W_U \equiv \mathbb{1}\Big\{ \tfrac{1}{\sqrt{N}} T_N > Q_{1-\alpha}(D_{W,U} \mid \mathcal{G}_N) \Big\} \quad\text{and}\quad \hat{\phi}^W_N \equiv \mathbb{1}\big\{ W_N > Q_{1-\alpha}(D_{W,N} \mid \mathcal{G}_N) \big\}.$$
The following theorem shows the validity of the plug-in bootstrap procedure and the corresponding bootstrap tests.

Theorem 3 (Validity of plug-in bootstrap and tests $\hat{\phi}^W_V$). Suppose Assumptions 1-2 hold. Then the following statements hold.

1. Suppose Assumption 3 holds; then for $V \in \{U, N\}$,
$$\sup_{x\in\mathbb{R}} \big| P[D_{C,V} \le x \mid \mathcal{G}_N] - P[W^C_V(0) \le x] \big| \stackrel{p}{\to} 0 \quad\text{and}\quad \lim_{N\to\infty} E_{H_{0N}}[\hat{\phi}^C_V] = \alpha;$$
2. Suppose Assumption 4 holds; then for $V \in \{U, N\}$,
$$\sup_{x\in\mathbb{R}} \big| P[D_{A,V} \le x \mid \mathcal{G}_N] - P[W^A_V(0) \le x] \big| \stackrel{p}{\to} 0 \quad\text{and}\quad \lim_{N\to\infty} E_{H_{0N}}[\hat{\phi}^A_V] = \alpha.$$

The proof of Theorem 3 can be found in Appendix M. There is an implicit assumption behind Theorem 3: knowledge of the sampling function $\bar{e}(s,\cdot)$ and $l_N$. In practice, this assumption is reasonable since the selection algorithm or the experimental protocol is usually pre-specified before the data are collected and is thus at the hand of the experiment designer. In fact, this is the case in many empirical studies, including Collins et al. (2007), Li et al. (2010), Offer-Westort et al. (2021), and Jin et al. (2023).

4 Finite-sample evaluation

In this section, we conduct extensive numerical simulations and a
https://arxiv.org/abs/2505.10747v1
semi-synthetic data analysis to investigate the finite-sample performance of the tests studied in the previous section. In particular, we include all four bootstrap tests with the two scaling and two weighting schemes proposed in Section 3.5. As a benchmark, we also include the IPW test statistic based on sample splitting, which only uses the data from the follow-up stage for inference; the comparison demonstrates the benefit of using data from both stages. The significance level is taken to be 0.05 throughout this section.

4.1 Numerical simulation

Data generation procedure. We consider two potential outcome distributions:

1. Binary outcome: $Y_{uN}(0) \sim \mathrm{Bern}(\theta + 0.5)$, $Y_{uN}(1) \sim \mathrm{Bern}(0.5)$;
2. Continuous outcome: $Y_{uN}(0) \sim N(\theta, 1)$, $Y_{uN}(1) \sim N(0, 0.25)$.

In the pilot stage, we employ equal sampling, $e(1) = 0.5$, which mimics the common practice in the real world when there is no prior information about which treatment is better. Inspired by the batched bandit setup, we consider two selection algorithms: the modified version of Thompson sampling (7) and the ε-greedy algorithm (8), both with clipping rate $l_N = \varepsilon/2$. In the second stage, we sample the two treatments based on the results of these selection algorithms. The task is to test whether the treatment effect is significantly different from 0, i.e., whether $\theta = E[Y_{uN}(0)] - E[Y_{uN}(1)] = 0$.

Parameter setup. In both selection algorithms, we set $\varepsilon \in \{0.1, 0.2, 0.4\}$. The signal strength $\theta$ ranges from $-0.2$ to $0.2$ with equal spacing of $0.05$, with $\theta = 0$ corresponding to the null hypothesis. The total number of samples is $N = 1000$ and each batch has the same sample size $N_1 = N_2 = 500$.
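The two-stage data-generating process above can be sketched as follows. This is our own illustrative reading of the design with the binary outcome: the function name `generate_two_stage` and the exact ε-greedy assignment probabilities (best pilot arm with probability $1-\varepsilon/2$, the other with $\varepsilon/2$, respecting the clipping floor $l_N = \varepsilon/2$) are assumptions for illustration; the paper's selection rule (8) is defined in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_two_stage(theta, eps, n1=500, n2=500):
    """Sketch of the Section 4.1 design with the binary outcome and an
    epsilon-greedy second stage (assignment probabilities are our assumption)."""
    p = {0: theta + 0.5, 1: 0.5}  # Bern(theta + 0.5) vs Bern(0.5)

    # Pilot stage: equal sampling, e(1) = 0.5.
    a1 = rng.binomial(1, 0.5, size=n1)
    y1 = rng.binomial(1, [p[a] for a in a1])

    # Selection: favor the arm with the larger pilot mean, clipped at eps/2.
    means = {s: y1[a1 == s].mean() for s in (0, 1)}
    best = max(means, key=means.get)
    e2 = {best: 1 - eps / 2, 1 - best: eps / 2}

    # Follow-up stage: sample treatment 1 with its selected probability.
    a2 = rng.binomial(1, e2[1], size=n2)
    y2 = rng.binomial(1, [p[a] for a in a2])
    return (a1, y1), (a2, y2)

stage1, stage2 = generate_two_stage(theta=0.1, eps=0.2)
```

Under the null ($\theta = 0$) the two arms are exchangeable, which is what the QQ plots in Figure 3 calibrate against.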
[Figure 3: QQ plots (observed versus theoretical p-values) for the 5 tests under different signal strengths; panels cover left- and right-sided tests, Bernoulli and Gaussian outcomes, and $\varepsilon \in \{0.1, 0.2, 0.4\}$. The simulation is repeated 2000 times. The number of bootstrap samples used in each test is 5000.]

Results analysis and interpretation. For ease of presentation, we select representative results with Thompson sampling as the selection algorithm. For additional simulation results with the ε-greedy algorithm, we refer the readers to Appendix O. The calibration results are summarized as QQ plots in Figure 3 and the power results are summarized in Figure 4.

[Figure 4: Rejection rates for the 5 tests under different signal strengths; panels cover left- and right-sided tests, Bernoulli and Gaussian outcomes, and $\varepsilon \in \{0.1, 0.2, 0.4\}$. The simulation is repeated 2000 times. The number of bootstrap samples used in each test is 5000.]

From Figure 3, we observe that all the tests produce relatively well-calibrated p-values. This validates the bootstrap procedure proposed in Algorithm 2. For the power comparison, it is unsurprising that our approach using pooled two-stage data improves power over sample splitting. Moreover, we make the following intriguing observations:

1. Tests based on adaptive weighting are more powerful.
Tests with adaptive weighting (solid lines) show substantial power improvement compared to tests with constant weighting, no matter which scaling is used. The improvement is especially pronounced when ε is small.
This is mainly because, when constant weighting is used, the WIPW estimator boils down to the usual IPW estimator. It is well known that the IPW estimator is highly variable when the downsampling of one arm in the second stage is substantial due to a small ε.

2. Tests based on adaptive weighting are robust to ε. Related to the previous point, tests based on constant weighting (including sample splitting) are more sensitive to the ε in the selection algorithm. The adaptive weighting scheme, on the other hand, is more robust to the choice of ε. This is because the adaptive weighting scheme can adjust the weights based on the observed data, making it less sensitive to ε. In experimental practice, employing the adaptive weighting scheme for inference enables more aggressive exploitation strategies in sampling.

3. Normalization influences each test in distinct ways. Unlike in randomized controlled experiments, normalization can make a difference in the power of the tests, even asymptotically (see Theorem 1 for results with $T_N$ and $W_N$). We observe a power gain when using the unnormalized test with constant weighting (see the plots in the first column of Figure 4). However, from a theoretical standpoint, it remains unclear whether the unnormalized test exhibits greater power under more general data generating processes.

4. The power curve depends on the sidedness of the test. We observe that the power performance differs between left-sided and right-sided tests. Under Bernoulli sampling, the left-sided test tends to reject more often across all methods, even when the absolute signal magnitudes are the same. In contrast, under Gaussian sampling, the right-sided test exhibits greater power. These observations highlight the potential need for side-dependent experimental design strategies.

4.2 Semi-synthetic data analysis

To further investigate the performance of the different tests on real data, we conduct a semi-synthetic data analysis designed to better mimic real-world settings.
The data are derived from the Systolic Blood Pressure Intervention Trial (SPRINT) (Ambrosius et al., 2014), a randomized controlled trial that evaluates whether a new treatment program for lowering systolic blood pressure reduces the risk of cardiovascular disease (CVD). The population is divided into a treatment (new treatment) group and a control (placebo) group, and the primary clinical outcome is the occurrence of a major CVD event. We generate the semi-synthetic data, apply the tests, and evaluate their performance through the following procedures.

1. Permute data to break the dependence. We first permute the outcomes within the whole population, generating B = 500 permuted samples. This permutation effectively removes any treatment effect, ensuring that the treatment and control groups have the same expected outcome level.

2. Add signal back to the data. For these 500 permuted samples, we manually introduce a treatment effect by increasing the mean outcome (i.e., the major CVD event occurrence) in the control group, since the new treatment is intended to reduce the risk of CVD. Let $N_c^0$ denote the total number of control-group participants who did not experience a CVD event. We set $n_0$ of these zero outcomes to 1, where $n_0 \sim \mathrm{Bin}(N_c^0, \eta)$. The added signal η varies within the set {0, 0.015, 0.03, 0.045, 0.06}.

3. Adaptively sample the data to maximize welfare.
For each permuted sample, we simulate adaptive sampling. We first draw $N_1 = 1000$ random samples. Because the new treatment could be beneficial for the patients, we apply the ε-greedy algorithm (8) to collect an additional $N_2 = 1000$ samples in the second stage, encouraging assignment of the new treatment. We vary $\varepsilon \in \{0.1, 0.2, 0.4\}$.

4. Evaluate Type-I error control and power. We apply the five tests introduced in Section 4.1 to the synthetically generated data. We consider the right-sided test of whether the CVD event rate in the control group ($E[Y_{uN}(0)]$) is higher than that in the treatment group ($E[Y_{uN}(1)]$). We evaluate Type-I error control before introducing the signal and statistical power after introducing the signal.

[Figure 5: Type-I error and power (rejection rate versus signal strength) for the five tests under semi-synthetic data, for $\varepsilon \in \{0.1, 0.2, 0.4\}$.]

The results are presented in Figure 5. Perhaps a bit surprisingly, we observe that the test based on sample splitting suffers from Type-I error inflation in the absence of signal. This issue is primarily due to the sparsity of the outcome: the average rate of CVD occurrence is less than 0.1. Such sparsity may prevent the central limit theorem from taking effect, posing a particular challenge for sample splitting, which uses only half of the data. We also find that the choice of ε influences calibration performance: in the ε-greedy algorithm, a smaller ε results in a smaller effective sample size in the second stage. Consequently, the Type-I error inflation of the sample-splitting method is mitigated as ε increases. In contrast, our methods control the Type-I error well, and ε has a much smaller effect on these tests, which combine data from both stages and thus have a higher effective sample size.
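Steps 1-2 of the semi-synthetic pipeline above can be sketched as follows. Since the SPRINT data are not reproduced here, the outcome vector `y` is a synthetic stand-in with a sparse event rate, and the random arm labels are a simplification of our own; only the permute-then-inject logic mirrors the procedure in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_semi_synthetic(y, n_perm=500, eta=0.03):
    """Sketch of steps 1-2: permute binary outcomes to kill any treatment
    effect, then flip a Bin(N_c^0, eta) number of control-arm zeros to ones."""
    samples = []
    n = len(y)
    for _ in range(n_perm):
        y_perm = rng.permutation(y)          # step 1: permute outcomes
        arm = rng.binomial(1, 0.5, size=n)   # placeholder arm labels
        zeros = np.flatnonzero((arm == 0) & (y_perm == 0))
        n0 = rng.binomial(len(zeros), eta)   # step 2: n0 ~ Bin(N_c^0, eta)
        flip = rng.choice(zeros, size=n0, replace=False)
        y_perm[flip] = 1                     # raise the control-arm event rate
        samples.append((arm, y_perm))
    return samples

# Stand-in outcome vector with a sparse event rate (< 0.1), as in SPRINT.
y = rng.binomial(1, 0.06, size=2000)
perms = make_semi_synthetic(y, n_perm=10, eta=0.03)
```

Each element of `perms` would then be fed to the adaptive-sampling step (step 3) and the five tests (step 4).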
A further investigation of the calibration performance is provided in Appendix P. Regarding power, the benefit of adaptive sampling remains evident compared to the sample-splitting approach. Among the four tests proposed in this paper, the unnormalized test exhibits slightly higher power than the normalized test, although the difference is modest.

5 Conclusion and discussion

In this paper, we establish a set of general and assumption-lean weak convergence results for the WIPW estimator under a two-stage adaptive sampling scheme. These results are largely agnostic to any specific outcome distribution, allowing for broad applicability across a wide range of potential outcomes. Moreover, they accommodate a broad spectrum of signal strengths, making them especially useful for downstream hypothesis testing. To facilitate asymptotically valid tests based on these weak convergence results, we propose a plug-in bootstrap procedure that is highly scalable. Finally, we validate our theoretical claims through extensive numerical simulations and a semi-synthetic data analysis, illustrating the strong finite-sample performance of the proposed tests.

There are two directions that can be directly pursued with the results and techniques developed in this paper.

• Experimental design: In practice, designing adaptive experiments often requires balancing statistical goals (e.g., power) with non-statistical considerations (e.g., regret, welfare) under budget constraints. Investigating the optimal design of adaptive experiments that trade off these competing objectives is a compelling
direction for future study. See, for instance, recent work on adaptive experimental design in Che et al. (2023), Liang et al. (2023), Simchi-Levi et al. (2023), and Li et al. (2024). Our results on limiting distributions and the proposed bootstrap procedure simplify power calculations, which in turn can inform the design of adaptive experiments.

• Covariate adjustment: In randomized controlled experiments, it is well established that appropriately adjusting for predictive covariates can improve the efficiency of statistical inference and increase the power of hypothesis testing (Lin, 2013). It would be valuable to explore how such covariate adjustments can be incorporated into the analysis of data collected from adaptive experiments, and how they may enhance the efficiency of the proposed tests. These investigations require the study of the asymptotic efficiency of different tests, and the techniques developed in this paper may provide a useful starting point for exploring this line of research. In Appendix N.2, we present preliminary results on augmenting the WIPW test statistics.

There are several limitations in our current work that point to promising directions for future research. We summarize them below.

• Beyond two-stage experiments: In this paper, we focus on a two-stage adaptive sampling scheme. However, other adaptive designs exist that fall outside this framework, such as fully adaptive sampling schemes (Lai et al., 1985), early-stopping experiments (Sampson et al., 2005; Sill et al., 2009), and experiments with adaptive stopping rules (Bauer et al., 1994). We have sketched the extension of our results to the latter two classes of adaptive sampling strategies in Sections N.1 and N.4, respectively. For fully adaptive sampling schemes, we refer the readers to Khamaru et al. (2024), Han et al. (2024), and Ren et al. (2024) for recent work on this topic.

• Statistical optimality: The statistical optimality of the proposed tests remains an open question.
We investigate this question by providing preliminary comparisons of power between the m = 1/2 and m = 1 weightings in Appendix O.2. It would be interesting to study semiparametrically efficient test statistics for testing the hypothesis $H_{0N}: E[Y_{uN}(0)] = E[Y_{uN}(1)]$ under suitable sub-classes of the general nonparametric data generating process. Although Hirano et al. (2023) and Adusumilli (2023) derive power functions via the Neyman-Pearson lemma, devising practical tests that attain the stated optimal power remains an open problem. Furthermore, for more complex scenarios, such as composite alternatives, the corresponding optimality theory remains undeveloped.

Acknowledgments

We thank Keisuke Hirano and Stefan Wager for helpful discussions and suggestions. Z.R. is supported by the National Science Foundation (NSF) under grant DMS-2413135 and Wharton Analytics.

References

Adusumilli, Karun (2023). “Optimal tests following sequential experiments”. In: arXiv preprint arXiv:2305.00403.

Agarwal, Alekh et al. (2014). “Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits”. In: Proceedings of the 31st International Conference on Machine Learning. Ed. by Eric P. Xing and Tony Jebara. Vol. 32. Proceedings of Machine Learning Research 2. Bejing, China: PMLR, pp. 1638–1646.

Ambrosius, Walter T et al. (2014). “The design and rationale of a multicenter clinical trial comparing two strategies for control of systolic blood pressure: the
Systolic Blood Pressure Intervention Trial (SPRINT)”. In: Clinical Trials 11.5, pp. 532–546.

Bakshy, Eytan et al. (2018). “AE: A domain-agnostic platform for adaptive experimentation”. In: Conference on Neural Information Processing Systems, pp. 1–8.

Bauer, Peter and Karl Köhne (1994). “Evaluation of experiments with adaptive interim analyses”. In: Biometrics, pp. 1029–1041.

Ben-Eltriki, Mohamed et al. (2024). “Adaptive designs in clinical trials: a systematic review-part I”. In: BMC Medical Research Methodology 24.1, p. 229.

Bibaut, Aurélien, Maria Dimakopoulou, Nathan Kallus, Antoine Chambaz, and Mark van Der Laan (2021). “Post-contextual-bandit inference”. In: Advances in Neural Information Processing Systems 34, pp. 28548–28559.

Bibaut, Aurélien and Nathan Kallus (2024). “Demistifying Inference after Adaptive Experiments”. In: arXiv preprint arXiv:2405.01281.

Bowden, Jack and Lorenzo Trippa (2017). “Unbiased estimation for response adaptive clinical trials”. In: Statistical Methods in Medical Research 26.5, pp. 2376–2388.

Burnett, Thomas et al. (2020). “Adding flexibility to clinical trial designs: an example-based guide to the practical use of adaptive designs”. In: BMC Medicine 18, pp. 1–21.

Chatterjee, Sourav and Elizabeth Meckes (2008). “Multivariate normal approximation using exchangeable pairs”. In: ALEA 4, pp. 257–283.

Che, Ethan, Daniel R Jiang, Hongseok Namkoong, and Jimmy Wang (2024). “Optimization-Driven Adaptive Experimentation”. In: arXiv preprint arXiv:2408.04570.

Che, Ethan and Hongseok Namkoong (2023). “Adaptive experimentation at scale: A computational framework for flexible batches”. In: arXiv preprint arXiv:2303.11582.

Chen, Jiafeng and Isaiah Andrews (2023). “Optimal conditional inference in adaptive experiments”. In: arXiv preprint arXiv:2309.12162.

Chen, Louis HY and Qi-Man Shao (2004). “Normal approximation under local dependence”.
In: Annals of Probability 32.3, pp. 1985–2028.

Collins, Linda M, Susan A Murphy, and Victor Strecher (2007). “The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions”. In: American Journal of Preventive Medicine 32.5, S112–S118.

Cox, David R (1975). “A note on data-splitting for the evaluation of significance levels”. In: Biometrika, pp. 441–444.

Cribari-Neto, Francisco, Nancy Lopes Garcia, and Klaus LP Vasconcellos (2000). “A note on inverse moments of binomial variates”. In: Brazilian Review of Econometrics 20.2, pp. 269–277.

Crump, Richard K, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik (2009). “Dealing with limited overlap in estimation of average treatment effects”. In: Biometrika 96.1, pp. 187–199.

Dimakopoulou, Maria, Zhimei Ren, and Zhengyuan Zhou (2021). “Online multi-armed bandits with adaptive inference”. In: Advances in Neural Information Processing Systems 34, pp. 1939–1951.

Dimakopoulou, Maria, Zhengyuan Zhou, Susan Athey, and Guido Imbens (2017). “Estimation considerations in contextual bandits”. In: arXiv preprint arXiv:1711.07077.

Dixit, Atray et al. (2016). “Perturb-Seq: dissecting molecular circuits with scalable single-cell RNA profiling of pooled genetic screens”. In: Cell 167.7, pp. 1853–1866.

Dudley, R. M. (2002). Real Analysis and Probability. 2nd ed. Cambridge Studies in Advanced Mathematics. Cambridge University Press.

Durrett, Rick (2019). Probability: Theory and Examples. Vol. 49. Cambridge University Press.

Dwork, Cynthia et al. (2015). “The reusable holdout: Preserving validity in adaptive data analysis”. In: Science 349.6248, pp. 636–638.

Fan, Lin and Peter W Glynn (2021).
“Diffusion approximations for Thompson sampling”. In: arXiv preprint arXiv:2105.09232.

Fithian, William, Dennis Sun, and Jonathan Taylor (2014). “Optimal inference after model selection”. In: arXiv preprint arXiv:1410.2597.

US Food and Drug Administration (2019). “Adaptive designs for clinical trials of drugs and biologics”. Guidance for Industry, https://www.fda.gov/media/78495/download.

Freidling, Tobias, Qingyuan Zhao, and Zijun Gao (2024). “Selective Randomization Inference for Adaptive Experiments”. In: arXiv preprint arXiv:2405.07026.

Gasperini, Molly et al. (2019). “A genome-wide framework for mapping gene regulation via cellular genetic screens”. In: Cell 176.1, pp. 377–390.

Hadad, Vitor, David A Hirshberg, Ruohan Zhan, Stefan Wager, and Susan Athey (2021). “Confidence intervals for policy evaluation in adaptive experiments”. In: Proceedings of the National Academy of Sciences 118.15, e2014602118.

Han, Qiyang, Koulik Khamaru, and Cun-Hui Zhang (2024). “UCB algorithms for multi-armed bandits: Precise regret and adaptive inference”. In: arXiv preprint arXiv:2412.06126.

Hirano, Keisuke and Jack R Porter (2023). “Asymptotic representations for sequential decisions, adaptive experiments, and batched bandits”. In: arXiv preprint arXiv:2302.03117.

Howard, Steven R and Aaditya Ramdas (2022). “Sequential estimation of quantiles with applications to A/B testing and best-arm identification”. In: Bernoulli 28.3, pp. 1704–1728.

Howard, Steven R, Aaditya Ramdas, Jon McAuliffe, and Jasjeet Sekhon (2021). “Time-uniform, nonparametric, nonasymptotic confidence sequences”. In: The Annals of Statistics 49.2, pp. 1055–1080.

Hu, Feifang and William F Rosenberger (2006). The Theory of Response-Adaptive Randomization in Clinical Trials. John Wiley & Sons.

Imbens, Guido W and Donald B Rubin (2015). Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press.

Jin, Hannah A., William Bekerman, Dylan S.
Small, and Amanda Rabinowitz (July 2023). Protocol for an Observational Study on Effects of Contact, Collision, and Non-Contact Sports Participation on Cognitive and Emotional Health.

Johari, Ramesh, Leo Pekelis, and David J Walsh (2015). “Always valid inference: Bringing sequential analysis to A/B testing”. In: arXiv preprint arXiv:1512.04922.

Kasy, Maximilian and Anja Sautmann (2021). “Adaptive treatment assignment in experiments for policy choice”. In: Econometrica 89.1, pp. 113–132.

Khamaru, Koulik and Cun-Hui Zhang (2024). “Inference with the upper confidence bound algorithm”. In: arXiv preprint arXiv:2408.04595.

Kizilcec, René F et al. (2020). “Scaling up behavioral science interventions in online education”. In: Proceedings of the National Academy of Sciences 117.26, pp. 14900–14905.

Klasnja, Predrag et al. (2019). “Efficacy of contextually tailored suggestions for physical activity: a micro-randomized optimization trial of HeartSteps”. In: Annals of Behavioral Medicine 53.6, pp. 573–582.

Klenke, Achim (2017). Probability Theory. Vol. 941, pp. 1–23.

Kuang, Xu and Stefan Wager (2024). “Weak signal asymptotics for sequentially randomized experiments”. In: Management Science 70.10, pp. 7024–7041.

Lai, Tze Leung and Herbert Robbins (1985). “Asymptotically efficient adaptive allocation rules”. In: Advances in Applied Mathematics 6.1, pp. 4–22.

Lattimore, Tor and Csaba Szepesvári (2020). Bandit Algorithms. Cambridge University Press.

Le Cam, Lucien et al. (1972). “Limits of experiments”. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1, pp. 245–261.

Li, Harrison H and Art B Owen (2024). “Double machine learning
and design in batch adaptive experiments”. In: Journal of Causal Inference 12.1, p. 20230068.

Li, Lihong, Wei Chu, John Langford, and Robert E Schapire (2010). “A contextual-bandit approach to personalized news article recommendation”. In: Proceedings of the 19th International Conference on World Wide Web, pp. 661–670.

Liang, Biyonka and Iavor Bojinov (2023). “An experimental design for anytime-valid causal inference on multi-armed bandits”. In: arXiv preprint arXiv:2311.05794.

Liao, Peng, Kristjan Greenewald, Predrag Klasnja, and Susan Murphy (2020). “Personalized HeartSteps: A reinforcement learning algorithm for optimizing physical activity”. In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4.1, pp. 1–22.

Lin, Winston (2013). “Agnostic notes on regression adjustments to experimental data: Reexamining Freedman’s critique”. In: The Annals of Applied Statistics 7.1.

Lin, Zhantao, Nancy Flournoy, and William F Rosenberger (2021). “Inference for a two-stage enrichment design”. In: The Annals of Statistics 49.5, pp. 2697–2720.

Luedtke, Alexander R and Mark J Van Der Laan (2016). “Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy”. In: Annals of Statistics 44.2, p. 713.

Magnusson, Baldur P and Bruce W Turnbull (2013). “Group sequential enrichment design incorporating subgroup selection”. In: Statistics in Medicine 32.16, pp. 2695–2714.

Maharaj, Akash et al. (2023). “Anytime-valid confidence sequences in an enterprise A/B testing platform”. In: Companion Proceedings of the ACM Web Conference 2023, pp. 396–400.

Marschner, Ian C (2021). “A general framework for the analysis of adaptive experiments”. In: Statistical Science 36.3, pp. 465–492.

Meckes, Elizabeth (2009a). “On Stein’s method for multivariate normal approximation”. In: High Dimensional Probability V: The Luminy Volume. Vol. 5. Institute of Mathematical Statistics, pp. 153–179.

Meckes, Mark W. (2009b).
“Gaussian marginals of convex bodies with symmetries”. In: Beiträge zur Algebra und Geometrie 50.1, pp. 101–118.

Nair, Yash and Lucas Janson (2023). “Randomization tests for adaptively collected data”. In: arXiv preprint arXiv:2301.05365.

Neal, Dan, George Casella, Mark CK Yang, and Samuel S Wu (2011). “Interval estimation in two-stage, drop-the-losers clinical trials with flexible treatment selection”. In: Statistics in Medicine 30.23, pp. 2804–2814.

Niu, Ziang, Abhinav Chakraborty, Oliver Dukes, and Eugene Katsevich (2024). “Reconciling model-X and doubly robust approaches to conditional independence testing”. In: The Annals of Statistics 52.3, pp. 895–921.

Offer-Westort, Molly, Alexander Coppock, and Donald P Green (2021). “Adaptive experimental design: Prospects and applications in political science”. In: American Journal of Political Science 65.4, pp. 826–844.

Rafferty, Anna, Huiji Ying, Joseph Williams, et al. (2019). “Statistical consequences of using multi-armed bandits to conduct adaptive educational experiments”. In: Journal of Educational Data Mining 11.1, pp. 47–79.

Ramdas, Aaditya, Peter Grünwald, Vladimir Vovk, and Glenn Shafer (2023). “Game-theoretic statistics and safe anytime-valid inference”. In: Statistical Science 38.4, pp. 576–601.

Ren, Huachen and Cun-Hui Zhang (2024). “On Lai’s Upper Confidence Bound in Multi-Armed Bandits”. In: arXiv preprint arXiv:2410.02279.

Russo, Daniel (2016). “Simple Bayesian algorithms for best arm identification”. In: Conference on Learning Theory. PMLR, pp. 1417–1418.

Sampson, Allan R and Michael W Sill (2005). “Drop-the-losers design: normal case”. In: Biometrical Journal: Journal of Mathematical Methods
in Biosciences 47.3, pp. 257–268.

Schraivogel, Daniel, Lars M Steinmetz, and Leopold Parts (2023). “Pooled Genome-Scale CRISPR Screens in Single Cells”. In: Annual Review of Genetics 57.1, pp. 223–244.

Shen, Changyu, Xiaochun Li, and Lingling Li (2014). “Inverse probability weighting for covariate adjustment in randomized studies”. In: Statistics in Medicine 33.4, pp. 555–568.

Shin, Jaehyeok, Aaditya Ramdas, and Alessandro Rinaldo (2019). “Are sample means in multi-armed bandits positively or negatively biased?” In: Advances in Neural Information Processing Systems 32.

— (2021). “On the bias, risk, and consistency of sample means in multi-armed bandits”. In: SIAM Journal on Mathematics of Data Science 3.4, pp. 1278–1300.

Sill, Michael W and Allan R Sampson (2009). “Drop-the-losers design: binomial case”. In: Computational Statistics & Data Analysis 53.3, pp. 586–595.

Simchi-Levi, David and Chonghuan Wang (2023). “Multi-armed bandit experimental design: Online decision-making and adaptive inference”. In: International Conference on Artificial Intelligence and Statistics. PMLR, pp. 3086–3097.

Sladek, Robert et al. (2007). “A genome-wide association study identifies novel risk loci for type 2 diabetes”. In: Nature 445.7130, pp. 881–885.

Slivkins, Aleksandrs et al. (2019). “Introduction to multi-armed bandits”. In: Foundations and Trends in Machine Learning 12.1-2, pp. 1–286.

Tanniou, Julien, Ingeborg Van Der Tweel, Steven Teerenstra, and Kit CB Roes (2016). “Subgroup analyses in confirmatory clinical trials: time to be specific about their purposes”. In: BMC Medical Research Methodology 16, pp. 1–15.

Waudby-Smith, Ian and Aaditya Ramdas (2024). “Estimating means of bounded random variables by betting”. In: Journal of the Royal Statistical Society Series B: Statistical Methodology 86.1, pp. 1–27.

Wu, Samuel S, Weizhen Wang, and Mark CK Yang (2010). “Interval estimation for drop-the-losers designs”. In: Biometrika 97.2, pp. 405–418.
Zhang, Kelly, Lucas Janson, and Susan Murphy (2020). “Inference for batched bandits”. In: Advances in Neural Information Processing Systems 33, pp. 9818–9829.

Appendix

A Explicit form of the asymptotic distributions
B Simulation detail for Figures 1-2
C Details of bootstrap algorithm in Section 3.5
  C.1 Nuisance parameter estimation
  C.2 Bootstrap algorithm
D Intuition from data collection and double-dipping
E A closer look at literature
  E.1 Investigating Zhang et al. (2020)
  E.2 Investigating Hadad et al. (2021)
  E.3 Investigating Hirano et al. (2023) and Adusumilli (2023)
  E.4 Investigating Che et al. (2023)
F Probabilistic preliminaries
  F.1
  Bounded Lipschitz test function class
  F.2 Preliminaries on regular conditional distribution
  F.3 Definition of conditional convergence
  F.4 Proof of Lemma 4
  F.5 Proof of Lemma 7
G Useful lemmas and the proofs
  G.1 Lemma statements
  G.2 Proof of Lemma 13
  G.3 Proof of Lemma 14
H New results on conditional CLT and CMT
  H.1 A new conditional CLT
  H.2 New conditional CMT results
  H.3 Proof of Lemma 15
  H.4 Proof of Lemma 17
  H.5 Proof of Lemma 18
I Preparation for the proof of Theorem 1
  I.1 Auxiliary definitions for the proof of Theorem 1
  I.2 Generic lemmas on the random functions
  I.3 Proof of Lemma 19
  I.4 Proof of Lemma 20
J Proof of Theorem 1
  J.1 General proof roadmap for weak convergence result
  J.2 Proof of Step 1 in Appendix J.1
  J.3 Proof of Step 2 in Appendix J.1
K Proof of lemmas in Appendix J
  K.1 Proof of lemmas in Appendix J.1
  K.2 Proof of lemmas in Appendix J.2
  K.3 Proof of lemmas in Appendix J.3
L Proof of Theorem 2
  L.1 Proof preparation for Theorem 2
  L.2 Proof of Theorem 2
M Proof of Theorem 3
  M.1 Necessary definitions
  M.2 Proof of Theorem 3
N Extension
  N.1 Extension to m = 1
  N.2 Extension to test statistics with augmentation
  N.3 Extension to selection with nuisance parameter
  N.4 Extension to adaptive experiments with stopping time
  N.5 Proof of Lemma 27
O Additional simulation results
  O.1 Additional simulation results
      with ε-greedy algorithm
  O.2 Power comparison: m = 1/2 versus m = 1
P Additional semi-synthetic data analysis results

Notation. Throughout the appendix, we will use $(a,b)$ to denote a column vector for $a\in\mathbb R^k$, $b\in\mathbb R^d$ when there is no ambiguity. In other words, we use $(a,b)$ to represent $(a^\top,b^\top)^\top$ and omit the transpose operator for ease of presentation. Thus, when we write $f(x,y)$ for vectors $x,y$, we use $f(x,y)$ to denote the function $f((x^\top,y^\top)^\top)$. We use $C^2(\mathcal X)$ for the class of functions that are twice continuously differentiable on the domain $\mathcal X$. We use $\mathbf 0_k$ to denote the $k$-dimensional vector with each entry equal to $0$ and $I_k$ to denote the identity matrix of dimension $k$; we drop the subscript $k$ when there is no ambiguity. We use $\mathbb N_+$ to denote the positive natural numbers. We denote by $\partial C_k$ the boundary set of $C_k$ and by $\nabla g$ the gradient of a differentiable function $g$. We write $a_N\lesssim b_N$ if there exists $c>0$ such that $|a_N/b_N|\le c$ for large enough $N$. Without loss of generality, we prove all results in the main text only for $q_t=1/2$, $t\in[2]$, i.e., the two batches have the same sample size.

A Explicit form of the asymptotic distributions

We define the limiting probabilities
\[
H^{(1)}(s)\equiv\lim_{N\to\infty}\bar e_N(s,\mathcal H_0)=e(s),\qquad
H^{(2)}(s)\equiv\max\Big\{\lim_{N\to\infty}l_N,\ \bar e\big(s,S((A^{(1)},V^{(1)}),c)\big)\Big\}. \tag{10}
\]
The function $S(x,y):\mathbb R^4\times\bar{\mathbb R}\to\bar{\mathbb R}$ is defined as
\[
S(x,y)\equiv x_1\cdot x_3^{1/2}/e^{1/2}(0)-x_2\cdot x_4^{1/2}/e^{1/2}(1)+y/\sqrt 2, \tag{11}
\]
where $x=(x_1,x_2,x_3,x_4)^\top\in\mathbb R^4$ and $y\in\bar{\mathbb R}$. Also define the scaled asymptotic variance as
\[
V^{(t)}(s)\equiv\lim_{N\to\infty}\mathbb E[Y_{uN}^2(s)]-H^{(t)}(s)\big(\lim_{N\to\infty}\mathbb E[Y_{uN}(s)]\big)^2, \tag{12}
\]
and denote $R^{(t)}(s)\equiv(H^{(t)}(s)/V^{(t)}(s))^{1/2}$. Now we define the covariances for $A^{(t)}$ as follows.

Distribution of $A^{(1)}$. The covariance $\mathrm{Cov}^{(1)}$ can be defined as
\[
\mathrm{Cov}^{(1)}\equiv-\Big(\frac{H^{(1)}(0)H^{(1)}(1)}{V^{(1)}(0)V^{(1)}(1)}\Big)^{1/2}\lim_{N\to\infty}\big(\mathbb E[Y_{uN}(0)]\,\mathbb E[Y_{uN}(1)]\big). \tag{13}
\]

Distribution of $A^{(2)}$. The asymptotic covariance structure $\mathrm{Cov}^{(2)}(A^{(1)})$ can be written as
\[
\mathrm{Cov}^{(2)}(A^{(1)})\equiv-\Big(\frac{H^{(2)}(0)H^{(2)}(1)}{V^{(2)}(0)V^{(2)}(1)}\Big)^{1/2}\lim_{N\to\infty}\big(\mathbb E[Y_{uN}(0)]\,\mathbb E[Y_{uN}(1)]\big). \tag{14}
\]

Now we define the weights $\bar w^{(t)}_{\mathcal W}(s)$. To this end, we need the following auxiliary quantities,
\[
M^{(t)}_{\mathcal A}(s)\equiv\frac{q_t(H^{(t)}(s))^{1/2}}{\big(\sum_{t=1}^{2}q_t(H^{(t)}(s))^{1/2}\big)^{2}}\qquad\text{and}\qquad M^{(t)}_{\mathcal C}(s)\equiv q_t.
\]
Then we can write the weights as $\bar w^{(t)}_{\mathcal W}(s)=\big(M^{(t)}_{\mathcal W}(s)/(R^{(t)}(s))^{2}\big)^{1/2}$ for any $s\in\{0,1\}$ and $\mathcal W\in\{\mathcal A,\mathcal C\}$.

B Simulation detail for Figures 1-2

We present the simulation details for Figure 1 and Figure 2, respectively.

Figure 1. To generate Figure 1, we consider the following potential outcome model:
\[
Y_{uN}(0)\sim N(0,1),\qquad Y_{uN}(1)\sim N(-c_N/\sqrt N,\,9),\qquad c_N\in\{0,-5,-10,-15\}. \tag{15}
\]
We set the sample size $N=1000$ and the batch sizes $N_1=N_2=500$. The initial sampling probabilities are $e(0)=e(1)=0.5$, and the ε-greedy algorithm is used with $\varepsilon=0.05$. The simulations are repeated 5000 times.

Figure 2. We use the covariance structures (13) and (14) to simulate the limiting distribution (6). We set $q_t=1/2$ and assume knowledge of the first and second moments $\mathbb E[Y_{uN}(s)]$, $\mathbb E[Y^2_{uN}(s)]$ from model (15). We set the limiting signal strength $c\in\{0,-5,-10,-15\}$. The simulations are repeated 100,000 times.

C Details of bootstrap algorithm in Section 3.5

We begin by estimating the necessary nuisance parameters for the bootstrap in Appendix C.1
and then provide the detailed bootstrap algorithm in Appendix C.2.

C.1 Nuisance parameter estimation

Analogous to the estimator $\hat{\mathbb E}[Y_{uN}(s)]$ in (2), we estimate $\mathbb E[Y^2_{uN}(s)]$ using
\[
\mathrm{WIPWS}(s)\equiv\sum_{t=1}^{2}\frac{N_t h^{(t)}_N(s)}{\sum_{t=1}^{2}N_t h^{(t)}_N(s)}\cdot\frac{1}{N_t}\sum_{u=1}^{N_t}\tilde\Lambda^{(t)}_{uN}(s)
\quad\text{and}\quad
\tilde\Lambda^{(t)}_{uN}(s)\equiv\frac{\mathbb 1(A^{(t)}_{uN}=s)\,(Y^{(t)}_{uN})^2}{\mathbb P[A^{(t)}_{uN}=s\,|\,\mathcal H_{t-1}]}. \tag{16}
\]
Using these, we estimate the first-stage variances and covariance as
\[
\hat V^{(1)}(s)=\hat{\mathbb E}[Y^2_{uN}(s)]-H^{(1)}(s)(\hat{\mathbb E}[Y_{uN}(s)])^2
\quad\text{and}\quad
\hat{\mathrm{Cov}}^{(1)}=-\frac{(\bar H^{(1)})^{1/2}\,\hat{\mathbb E}[Y_{uN}(0)]\,\hat{\mathbb E}[Y_{uN}(1)]}{(\hat V^{(1)}(0)\hat V^{(1)}(1))^{1/2}},
\]
where $\bar H^{(1)}\equiv H^{(1)}(0)H^{(1)}(1)$. To define $\hat{\mathrm{Cov}}^{(2)}(\cdot)$, first define $\hat V^{(1)}\equiv(\hat V^{(1)}(0),\hat V^{(1)}(1))$ and consider the function $\hat H^{(2)}(x)\equiv\hat H^{(2)}(x,0)\cdot\hat H^{(2)}(x,1)$, where
\[
\hat H^{(2)}(x,s)\equiv\begin{cases}\max\{\bar l,\ \bar e(s,S((x,\hat V^{(1)}),0))\}&\text{if Assumption 3 holds;}\\[2pt] \bar e(s,S((x,\hat V^{(1)}),0))&\text{if Assumption 4 holds.}\end{cases}
\]
Furthermore, we define $\hat V^{(2)}(x)\equiv\hat V^{(2)}(x,0)\cdot\hat V^{(2)}(x,1)$, where $\hat V^{(2)}(x,s)\equiv\hat{\mathbb E}[Y^2_{uN}(s)]-\hat H^{(2)}(x,s)(\hat{\mathbb E}[Y_{uN}(s)])^2$. Finally, we can define the second-stage covariance function as
\[
\hat{\mathrm{Cov}}^{(2)}(x)\equiv-(\hat H^{(2)}(x)/\hat V^{(2)}(x))^{1/2}\,\hat{\mathbb E}[Y_{uN}(0)]\,\hat{\mathbb E}[Y_{uN}(1)].
\]

C.2 Bootstrap algorithm

Now we consider the bootstrap procedure.

1. First-stage sampling: Sample $S^{(b)}_1\sim N(0,I_2)$ and let $\tilde A^{(1,b)}=(\hat\Sigma^{(1)})^{1/2}S^{(b)}_1$, where $\hat\Sigma^{(1)}=(\hat{\mathrm{Cov}}^{(1)})_{2\times2}$.
2. Second-stage sampling: Sample $S^{(b)}_2\sim N(0,I_2)$ and let $\tilde A^{(2,b)}=(\hat\Sigma^{(2,b)})^{1/2}S^{(b)}_2$, where $\hat\Sigma^{(2,b)}=(\hat{\mathrm{Cov}}^{(2)}(\tilde A^{(1,b)}))_{2\times2}$.
3. Weighting procedure: Compute weights $\hat w^{(t,b)}_{\mathcal W,\mathcal V}(s)$ by replacing $H^{(1)}(s),H^{(2)}(s)$ and $V^{(1)}(s),V^{(2)}(s)$ in (6) by $H^{(1)}(s),\hat H^{(2)}(\tilde A^{(1,b)},s)$ and $\hat V^{(1)}(s),\hat V^{(2)}(\tilde A^{(1,b)},s)$, respectively. Then obtain the bootstrap sample
\[
D^{(b)}_{\mathcal W,\mathcal V}=\sum_{t=1}^{2}\hat w^{(t,b)}_{\mathcal W,\mathcal V}(0)\,\tilde A^{(t,b)}(0)-\sum_{t=1}^{2}\hat w^{(t,b)}_{\mathcal W,\mathcal V}(1)\,\tilde A^{(t,b)}(1),
\]
where $\tilde A^{(t,b)}(s)$ is the $(s+1)$-th coordinate of $\tilde A^{(t,b)}$.
4. Repeated sampling: Repeat steps 1-3 for $B$ iterations to obtain $B$ bootstrap samples.

D Intuition from data collection and double-dipping

In this section, we discuss the intuition behind the different phases of the limiting distribution, drawing on the data collection procedure outlined in Section 2.1.
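The two-stage resampling of Appendix C.2 above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: `cov1_hat`, `cov2_hat`, and `weights` are hypothetical placeholders for the plug-in estimators $\hat{\mathrm{Cov}}^{(1)}$, $\hat{\mathrm{Cov}}^{(2)}(\cdot)$, and $\hat w^{(t,b)}_{\mathcal W,\mathcal V}(s)$, and the $2\times2$ matrices $\hat\Sigma^{(1)},\hat\Sigma^{(2,b)}$ are assumed to have unit diagonal with the covariance estimate off-diagonal.

```python
import numpy as np

def sqrt_2x2(rho):
    """Matrix square root of the assumed 2x2 matrix [[1, rho], [rho, 1]]."""
    sigma = np.array([[1.0, rho], [rho, 1.0]])
    vals, vecs = np.linalg.eigh(sigma)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def bootstrap_samples(cov1_hat, cov2_hat, weights, B, rng=None):
    """Simplified two-stage bootstrap (steps 1-4 of Appendix C.2).

    cov1_hat: scalar stand-in for Cov^(1).
    cov2_hat: function mapping the first-stage draw (2-vector) to a scalar Cov^(2).
    weights:  function (t, first-stage draw, arm s) -> weight; a placeholder here.
    """
    rng = np.random.default_rng(rng)
    out = np.empty(B)
    for b in range(B):
        a1 = sqrt_2x2(cov1_hat) @ rng.standard_normal(2)      # step 1
        a2 = sqrt_2x2(cov2_hat(a1)) @ rng.standard_normal(2)  # step 2
        arms = (a1, a2)
        out[b] = (sum(weights(t, a1, 0) * arms[t][0] for t in range(2))
                  - sum(weights(t, a1, 1) * arms[t][1] for t in range(2)))  # step 3
    return out  # step 4: B bootstrap draws D^(b)

# Toy usage with placeholder nuisance values (not estimated from data)
draws = bootstrap_samples(cov1_hat=-0.3,
                          cov2_hat=lambda a1: -0.2,
                          weights=lambda t, a1, s: 0.5,
                          B=1000, rng=0)
```

As Remark 6 notes, each draw costs $O(1)$, so the whole loop is $O(B)$ regardless of the sample size $N$.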
We argue that the key driver of the various limiting behaviors is the signal strength. To build intuition, consider the ε-greedy selection algorithm as an illustrative example, and suppose we use a fixed clipping rate $l_N=\varepsilon>0$. Then the updated treatment assignment probability for treatment $s$ after the pilot stage is given by
\[
\bar e_N(s,\mathcal H_1)=\frac{\varepsilon}{2}\cdot\mathbb 1(D_N(s)<0)+\Big(1-\frac{\varepsilon}{2}\Big)\cdot\mathbb 1(D_N(s)\ge0),\qquad D_N(s)\equiv S^{(1)}_N(s)-S^{(1)}_N(1-s).
\]
When the signal strength $c_N$ converges to a finite constant, the pilot-stage data do not provide sufficiently strong evidence for the ε-greedy algorithm to confidently prefer one arm over the other based on the summary statistic $D_N(s)$. As a result, the assignment remains uncertain, and this added randomness in the selection procedure leads to a non-normal limiting distribution. In contrast, when the signal strength is strong (i.e., $c=-\infty$), the algorithm can confidently conclude that treatment 1 is superior to treatment 0 in terms of the expected potential outcome, with ignorable randomness in the selection. In this case, the limiting distribution approaches a normal distribution.

An alternative perspective comes from the concept of "double-dipping." It is widely believed that sample splitting is necessary to avoid selection bias (Fithian et al., 2014). When the signal is weak (i.e., $c$ is finite), the selection procedure exhibits non-negligible randomness, and reusing data without accounting for this selection randomness can be problematic. However, when the signal is strong and the selection becomes deterministic, the impact of "double-dipping" becomes negligible. In this regime (i.e., $c=-\infty$), the two-stage data collection process can be viewed as a non-adaptive procedure: sample treatment 0 (resp. treatment 1) with probability $e(0)$ (resp. $e(1)$) in the first stage and with probability $\varepsilon/2$ (resp. $1-\varepsilon/2$) in the second stage. Standard asymptotic inference applies,
ensuring asymptotic validity.

E A closer look at literature

In this section, we provide a detailed comparison between our results and the existing literature.

E.1 Investigating Zhang et al. (2020)

Zhang et al. (2020) show that a non-normal limiting distribution can arise for the classical sample-mean statistic under a batched bandit setup (see Figure 1 in their paper). To address this issue, the same paper proposes a batch-wise Hájek estimator. In particular, for each batch $t\in[2]$, the test statistic can be computed as
\[
\sqrt{\frac{\big(\sum_{u=1}^{N_t}A^{(t)}_{uN}\big)\big(\sum_{u=1}^{N_t}(1-A^{(t)}_{uN})\big)}{N}}
\left(\frac{\sum_{u=1}^{N_t}(1-A^{(t)}_{uN})Y^{(t)}_{uN}}{\sum_{u=1}^{N_t}(1-A^{(t)}_{uN})}
-\frac{\sum_{u=1}^{N_t}A^{(t)}_{uN}Y^{(t)}_{uN}}{\sum_{u=1}^{N_t}A^{(t)}_{uN}}-\Delta_n\right), \tag{17}
\]
where $\Delta_n\equiv\mathbb E[Y_{uN}(0)]-\mathbb E[Y_{uN}(1)]$. A crucial assumption they make to recover the conditional asymptotic normality is that the conditional variance of the observed outcome is constant, i.e., $\mathrm{Var}[Y^{(t)}_u\,|\,\mathcal H_{t-1}]=\mathrm{Const.}\in(0,\infty)$. This assumption is very stringent and essentially rules out possible heterogeneity in the distributions of the potential outcomes $Y_u(0),Y_u(1)$. Let us consider a concrete data-generating model to illustrate the failure of this assumption. Consider the potential outcome distributions $Y_u(0)\sim N(0,1)$ and $Y_u(1)\sim N(0,4)$. Then, assuming the ε-greedy algorithm between stages and using the consistency assumption, we can compute
\[
\begin{aligned}
\mathrm{Var}[Y^{(t)}_u\,|\,\mathcal H_{t-1}]&=\mathbb E[(Y^{(t)}_u)^2\,|\,\mathcal H_{t-1}]-(\mathbb E[Y^{(t)}_u\,|\,\mathcal H_{t-1}])^2\\
&=\bar e(0,\mathcal H_{t-1})\,\mathbb E[Y_u(0)^2]+\bar e(1,\mathcal H_{t-1})\,\mathbb E[Y_u(1)^2]\\
&=\bar e(0,\mathcal H_{t-1})+4\,\bar e(1,\mathcal H_{t-1})=1+3\,\bar e(1,\mathcal H_{t-1})\\
&=1+3\Big(1-\frac{\varepsilon}{2}\Big)\mathbb 1\big(S^{(1)}_N(0)<S^{(1)}_N(1)\big)+\frac{3\varepsilon}{2}\mathbb 1\big(S^{(1)}_N(0)\ge S^{(1)}_N(1)\big),
\end{aligned}
\]
which is random rather than constant. Moreover, the test based on the pooled batch-wise estimator (17) was later shown to be less powerful than a test statistic based on data pooled from all stages (see Hirano et al. (2023)). In fact, the sample-mean test statistic considered in Section 5 of Hirano et al. (2023) is a special case of our proposed WIPW test statistic with $m=1$. We refer readers to Appendix N.1 for more details.

E.2 Investigating Hadad et al. (2021)

Hadad et al. (2021) consider a class of WIPW estimators.
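Before turning to Hadad et al. (2021) in detail, the conditional-variance calculation in E.1 above is easy to check numerically. A minimal sketch (our own illustration, with the second-stage assignment probability $\bar e(1,\mathcal H_{t-1})$ treated as a fixed number rather than a data-driven quantity):

```python
import numpy as np

def observed_outcome_var(e1, n=400_000, rng=0):
    """Variance of the observed outcome when arm 1 is drawn w.p. e1,
    with Y(0) ~ N(0, 1) and Y(1) ~ N(0, 4) as in the E.1 example."""
    rng = np.random.default_rng(rng)
    arm = rng.random(n) < e1
    y = np.where(arm, rng.normal(0.0, 2.0, n), rng.normal(0.0, 1.0, n))
    return y.var()

eps = 0.05
# After the pilot stage, e-greedy assigns arm 1 w.p. 1 - eps/2 or eps/2,
# so the conditional variance 1 + 3*e(1) takes two different values.
for e1 in (1 - eps / 2, eps / 2):
    print(observed_outcome_var(e1), 1 + 3 * e1)  # simulated vs. formula
```

The two printed pairs agree (up to Monte Carlo error), confirming that the conditional variance depends on which arm the pilot stage favors and hence cannot be constant.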
Their paper observes that when estimating the expected outcome $\mathbb E[Y_{uN}(s)]$, even the classical inverse probability weighted estimator can exhibit non-normal behavior (see Figure 1 in Hadad et al. (2021)). To address this, Hadad et al. (2021) show that asymptotic normality can be recovered through the use of adaptive weighting, similar in spirit to the method proposed in our work (see Theorem 4 in their paper). However, their theoretical guarantees rely on a key assumption: that the ratio of variance estimators converges to a constant. This assumption fails to hold under the null hypothesis $H_{0N}$. Indeed, as we will demonstrate shortly, the ratio of variance estimators converges to a non-degenerate, positive random variable whenever $c\in(-\infty,\infty)$, that is, under both the null hypothesis $H_{0N}$ and the weak signal regime $H_{2N}$. Moreover, our simulations reveal that applying a normal approximation in the absence of this convergence can lead to inflated Type-I error rates when using the WIPW estimator. Consequently, the theoretical results in Hadad et al. (2021) are not directly applicable for hypothesis testing, due to the unknown limiting distribution under the null. To be specific, Theorem 4 in Hadad et al. (2021) hinges on the assumption that $\hat V_N(0)/\hat V_N(1)$ converges weakly to a constant, where $\hat V_N(s)$ is defined
as in Equation (3). However, under the null $H_{0N}$ and the local alternative $H_{2N}$, this condition generally fails. As shown in Step 2 and Step 3 in Appendix J.1, with adaptive weighting ($m=1/2$) and $q_t=1/2$,
\[
\frac{\hat V_N(0)}{\hat V_N(1)}\xrightarrow{d}\frac{\sum_{t=1}^2 V^{(t)}(0)\big/\big(\sum_{t=1}^2(H^{(t)}(0))^{1/2}\big)^2}{\sum_{t=1}^2 V^{(t)}(1)\big/\big(\sum_{t=1}^2(H^{(t)}(1))^{1/2}\big)^2}, \tag{18}
\]
where $V^{(t)}(s)$ and $H^{(t)}(s)$ are defined as in (12) and (10). To further illustrate this issue, we present empirical evidence showing that using the normal distribution to calibrate the test statistic leads to inflated Type-I error. Consider a two-batch setup under the following model:
\[
Y_u(0)\sim N(2,1),\qquad Y_u(1)\sim N(2,9),
\]
with a total sample size of $N=500$ and equal batch sizes $N_1=N_2=250$. We apply the ε-greedy algorithm with $\varepsilon=0.05$ and compute the weighted augmented IPW estimator (WAIPW), in line with the original setup in Hadad et al. (2021):
\[
\mathrm{WAIPW}(s)\equiv\sum_{t=1}^{2}\frac{h^{(t)}_N(s)}{\sum_{t=1}^{2}h^{(t)}_N(s)}\,\Gamma^{(t)}_N(s),\quad\text{where}\quad
\Gamma^{(t)}_N(s)\equiv\frac{\sum_{u=1}^{N_t}\Gamma^{(t)}_u(s)}{N_t}+\hat{\mathbb E}[Y_u(s)]
\]
and
\[
\Gamma^{(t)}_u(s)\equiv\frac{\mathbb 1(A^{(t)}_u=s)\,(Y^{(t)}_u-\hat{\mathbb E}[Y_u(s)])}{\mathbb P[A^{(t)}_u=s\,|\,\mathcal H_{t-1}]}.
\]
According to Theorem 4 in Hadad et al. (2021),
\[
\frac{\mathrm{WAIPW}(0)-\mathrm{WAIPW}(1)}{(\tilde V_N(0)+\tilde V_N(1))^{1/2}}\xrightarrow{d}N(0,1),\qquad
\tilde V_N(s)\equiv\frac{\sum_{t=1}^{2}(h^{(t)}_N(s))^2\sum_{u=1}^{N_t}\big(\Gamma^{(t)}_u(s)\big)^2}{\big(\sum_{t=1}^{2}h^{(t)}_N(s)N_t\big)^2}.
\]
Setting $\hat{\mathbb E}[Y_u(0)]=2$ and $\hat{\mathbb E}[Y_u(1)]=6$, we satisfy the required condition that for at least one $s\in\{0,1\}$, $\hat{\mathbb E}[Y_u(s)]\to\mathbb E[Y_u(s)]$. We then use $N(0,1)$ to calibrate the test statistic and obtain the Type-I error results shown in Figure 6. The results show substantial Type-I error inflation when the normal approximation is used.

[Figure 6 plot omitted: observed versus expected p-values, and Type-I error at significance levels 0.05 and 0.1.]

Figure 6: Type-I error inflation using the normal approximation in Hadad et al. (2021). The simulation is repeated 2000 times.

E.3 Investigating Hirano et al. (2023) and Adusumilli (2023)

To address non-normality directly, Hirano et al.
(2023) develop asymptotic representations for a broad class of test statistics under batched designs, which are subsequently used to derive power functions and optimal tests in Adusumilli (2023). The validity of this theory requires establishing weak convergence of a vector of statistics, as well as assuming that the potential outcome distributions are differentiable in quadratic mean (QMD). In contrast, our Theorem 1 focuses on the widely used WIPW class of statistics and provides explicit weak convergence results that go beyond the QMD framework, based on transparent and interpretable assumptions.

E.4 Investigating Che et al. (2023)

While Che et al. (2023) broaden the framework of Hirano et al. (2023) to encompass settings beyond QMD, their emphasis lies in experimental design within batched bandits rather than in inferential validation. Specifically, they analyze the statistic
\[
\frac{1}{\sqrt{N_t}}\sum_{u=1}^{N_t}A^{(t)}_{uN}Y^{(t)}_{uN},
\]
deriving its asymptotic distribution under conditions similar to ours. Their results, however, are not sufficient for the purpose of inference. For example, one can show that, after an additional scaling by $\sqrt{N_t}$, this statistic generally fails to converge to the target estimand $\mathbb E[Y_{uN}(s)]$, except in the degenerate case $\mathbb E[Y_{uN}(s)]=0$. Moreover, the asymptotic distribution therein contains unknown parameters, making it infeasible to construct valid hypothesis tests or confidence intervals based on this statistic. By contrast, our work prioritizes the inferential integrity of adaptive experiments. Our results incorporate consistent and pivotal estimators, which guarantees that hypothesis tests are valid
and that confidence intervals faithfully reflect uncertainty around the desired parameter.

F Probabilistic preliminaries

In Section F.1, we discuss the technique of proving weak convergence of random variables with test functions. In Section F.2, we discuss the regular conditional distribution (RCD) and its properties.

F.1 Bounded Lipschitz test function class

We will use the following definition of convergence in distribution throughout the appendix.

Definition 1 (Convergence in distribution). Suppose $W_N\in\mathbb R^d$ is a sequence of random variables and $W\in\mathbb R^d$ is a random variable. We say $W_N$ converges in distribution to $W$ if for any bounded and continuous function $f:\mathbb R^d\to\mathbb R$, we have $\mathbb E[f(W_N)]\to\mathbb E[f(W)]$. Moreover, we use the notation $W_N\xrightarrow{d}W$ to denote convergence in distribution. If $W_N$ and $W$ are univariate, we will interchangeably use the following equivalent definition:
\[
\mathbb P[W_N\le t]\to\mathbb P[W\le t]\quad\text{for each }t\in\mathbb R\text{ at which }t\mapsto\mathbb P[W\le t]\text{ is continuous.}
\]

Beyond the bounded continuous functions, there are other classes of functions that are useful in the context of weak convergence. In particular, we will use the bounded Lipschitz (BL) function class to prove our main results. Suppose $f$ is a function from $\mathbb R^d$ to $\mathbb R$. We call $f$ an $m$-Lipschitz function as long as $\|f\|_L\le m$, where $\|f\|_L\equiv\sup_{x\ne y\in\mathbb R^d}|f(x)-f(y)|/\|x-y\|_2$ and $\|\cdot\|_2$ is the Euclidean norm. Then we define $\|f\|_{BL}\equiv\|f\|_L+\|f\|_\infty$, where $\|f\|_\infty\equiv\sup_{x\in\mathbb R^d}|f(x)|$. The following lemma shows that the BL function class is rich enough to characterize the weak convergence of random variables.

Lemma 1 (Theorem 11.3.3 in Dudley (2002)). Suppose the sequences of random variables $W_N\in\mathbb R$ and $W\in\mathbb R$ have laws $P_N$ and $P$, respectively. Then the following two statements are equivalent:
1. $W_N\xrightarrow{d}W$;
2. $\big|\int f(x)\,dP_N(x)-\int f(x)\,dP(x)\big|\to0$ for any $f$ such that $\|f\|_{BL}<\infty$.

Despite the fruitful results in the literature on normal approximation for independent observations (Chatterjee et al., 2008) and weakly dependent observations Chen et al.
(2004), these existing results do not apply directly to our case, since the adaptive sampling scheme introduces a strong dependence structure. We will develop new tools for proving our results, based on these existing results; thus we review the relevant results in the following. First comes the finite-sample bound proved in Chatterjee et al. (2008), followed by an interpolation result from Meckes (2009b).

Lemma 2 (Chatterjee et al. (2008), Theorem 3.1). Let $W_{1N},\dots,W_{NN}$ be a sequence of independent, identically distributed random vectors in $\mathbb R^k$. Suppose $\mathbb E[W_{uN}]=0$ and $\mathbb E[W_{uN}W_{uN}^\top]=I_k$. Let $W_N\equiv\sum_{u=1}^N W_{uN}/\sqrt N$ and $Z\sim N(0,I_k)$. Then for any $g\in C^2(\mathbb R^k)$,
\[
|\mathbb E[g(W_N)]-\mathbb E[g(Z)]|\le\frac{\|g\|_L}{2\sqrt N}\big(\mathbb E[\|W_{uN}\|_2^4]\big)^{1/2}+\frac{\sqrt{2\pi}}{3\sqrt N}\|\nabla g\|_L\,\mathbb E[\|W_{uN}\|_2^3].
\]
Lemma 2 shows that when the test function is chosen to be $C^2$, weak convergence of $W_N$ can be derived as long as the third moment of $W_{uN}$ diverges slower than $\sqrt N$ and the fourth moment diverges slower than $N$.

Lemma 3 (Meckes (2009b), Corollary 3.5). Consider the density function of a multivariate Gaussian random variable, $\phi_\delta(x)\equiv\frac{1}{(2\pi\delta^2)^{k/2}}\exp\big(-\frac{1}{2\delta^2}\|x\|_2^2\big)$. For any 1-Lipschitz function $f$, consider the Gaussian convolution $(f*\phi_\delta)(x)\equiv\mathbb E[f(x+\delta Z)]$, where $Z\sim N(0,I_k)$ and $\delta>0$. Then we have
\[
\|\nabla(f*\phi_\delta)\|_L\le\|f\|_L\times\sup_{\theta:\|\theta\|_2=1}\big\|\nabla\phi_\delta^\top\theta\big\|_1\le\Big(\frac{2}{\pi}\Big)^{1/2}\frac{1}{\delta},
\]
where $\|f\|_1$ denotes the $L_1$ norm of the function $f$. Moreover, for any random variable $X$,
\[
\mathbb E[(f*\phi_\delta)(X)-f(X)]\le\mathbb E[\delta\|Z\|_2]\le\delta\sqrt k.
\]

Applying a smoothing argument to the Lipschitz function class with the help of Lemma 3, we can prove weak convergence of $W_N$ under the Lipschitz class with the same requirements on the third
and fourth moments as in Lemma 2. We formalize this result in the following lemma.

Lemma 4 (Upper bound with bounded Lipschitz functions). Suppose $W_{uN}\in\mathbb R^k$ are i.i.d. random variables for any fixed $N\in\mathbb N_+$. If we have $\mathbb E[W_{uN}]=0$, $\mathbb E[W_{uN}W_{uN}^\top]=I_k$ and
\[
\frac{\mathbb E[\|W_{uN}\|_2^4]}{N}\to0,\qquad\frac{\mathbb E[\|W_{uN}\|_2^3]}{N^{1/2}}\to0,
\]
then for any sequence of Lipschitz functions $r_N$ such that $\|r_N\|_{BL}\le1$, we have
\[
\Big|\mathbb E\Big[r_N\Big(\frac{1}{\sqrt N}\sum_{u=1}^N W_{uN}\Big)\Big]-\mathbb E[r_N(Z)]\Big|\to0,\qquad Z\sim N(0,I_k).
\]

F.2 Preliminaries on regular conditional distribution

To better understand the arguments involving conditional distributions, we briefly discuss the basic definition of the regular conditional distribution. Let $\mathcal B(\mathbb R^N)$ be the Borel $\sigma$-algebra on $\mathbb R^N$, and let $\Omega$, $\mathcal F_N$ be the sample space and a sequence of $\sigma$-algebras. For any $N\in\mathbb N_+$, $\kappa_N:\Omega\times\mathcal B(\mathbb R^N)\to[0,1]$ is a regular conditional distribution of $W_N\equiv(W_{1N},\dots,W_{NN})$ given $\mathcal F_N$ if
1. $\omega\mapsto\kappa_N(\omega,B)$ is measurable with respect to $\mathcal F_N$ for any fixed $B\in\mathcal B(\mathbb R^N)$;
2. $B\mapsto\kappa_N(\omega,B)$ is a probability measure on $(\mathbb R^N,\mathcal B(\mathbb R^N))$ for any $\omega\in\Omega$;
3. $\kappa_N(\omega,B)=\mathbb P[(W_{1N},\dots,W_{NN})\in B\,|\,\mathcal F_N](\omega)$ for almost all $\omega\in\Omega$ and all $B\in\mathcal B(\mathbb R^N)$.

The following lemma from Klenke (2017, Theorem 8.37) ensures the general existence of the regular conditional distribution.

Lemma 5 (Theorem 8.37 in Klenke (2017)). Suppose $(\Omega,\mathcal G,\mathbb P)$ is a probability triple. Let $\mathcal F\subset\mathcal G$ be a sub-$\sigma$-algebra. Let $Y$ be a random variable with values in a Borel space $(E,\mathcal E)$ (for example, $E$ Polish or $E=\mathbb R^k$). Then there exists a regular conditional distribution $\kappa_{Y,\mathcal F}$ of $Y$ given $\mathcal F$.

The result of Klenke (2017, Theorem 8.38) guarantees that the conditional expectation and the integral of a measurable function with respect to the regular conditional distribution are almost surely the same.

Lemma 6 (Modified version of Theorem 8.38 in Klenke (2017)). Let $Y$ be a random variable on $(\Omega,\mathcal G,\mathbb P)$ taking values in a Borel space $(E,\mathcal E)$. Let $\mathcal F\subset\mathcal G$ be a $\sigma$-algebra and let $\kappa_{Y,\mathcal F}$ be a regular conditional distribution of $Y$ given $\mathcal F$. Further, let $f:E\to\mathbb R$ be measurable and $\mathbb E[|f(Y)|]<\infty$.
Then we can define a version of the conditional expectation using the regular conditional distribution, i.e.,
\[
\mathbb E[f(Y)\,|\,\mathcal F](\omega)=\int f(y)\,d\kappa_{Y,\mathcal F}(\omega,y),\qquad\forall\omega\in\Omega.
\]
Throughout this paper we fix a version of the conditional expectation, namely the version defined by applying Lemma 6. The following lemma is useful as well.

Lemma 7 (Conditional expectation with a variable measurable with respect to $\mathcal F$). Let $Y$ be a random variable on $(\Omega,\mathcal G,\mathbb P)$ with values in a Borel space $(E,\mathcal E)$. Let $\mathcal F\subset\mathcal G$ be a $\sigma$-algebra and let $\kappa_{Y,\mathcal F}$ be a regular conditional distribution of $Y$ given $\mathcal F$. Suppose $X\in\mathcal F$ is a random variable in another Borel space $(B,\mathcal B)$. Further, let $f:E\times B\to\mathbb R$ be measurable and $\mathbb E[|f(X,Y)|]<\infty$. Then we can define a version of the following conditional expectation using the regular conditional distribution:
\[
\mathbb E[f(X,Y)\,|\,\mathcal F](\omega)=\int f(X(\omega),y)\,d\kappa_{Y,\mathcal F}(\omega,y),\qquad\forall\omega\in\Omega.
\]

F.3 Definition of conditional convergence

Since we are in a setup where the data are generated adaptively, we need to work extensively with conditional convergence, where the conditioning is on the data collected in the first stage. We first define the notions of conditional convergence, using the definition of the regular conditional distribution. In particular, we adopt the definitions of conditional convergence in distribution and in probability from Niu et al. (2024).

Definition 2. For each $N$, let $W_N$ be a random variable and let $\mathcal F_N$ be a $\sigma$-algebra. Then, we say $W_N$ converges in
distribution to a random variable $W$ conditionally on $\mathcal F_N$ if
\[
\mathbb P[W_N\le t\,|\,\mathcal F_N]\xrightarrow{p}\mathbb P[W\le t]\quad\text{for each }t\in\mathbb R\text{ at which }t\mapsto\mathbb P[W\le t]\text{ is continuous.} \tag{19}
\]
We denote this relation via $W_N\,|\,\mathcal F_N\xrightarrow{d,p}W$.

Definition 3. For each $N$, let $W_N$ be a random variable and let $\mathcal F_N$ be a $\sigma$-algebra. Then, we say $W_N$ converges in probability to a constant $c$ conditionally on $\mathcal F_N$ if $W_N$ converges in distribution to the delta mass at $c$ conditionally on $\mathcal F_N$ (recall Definition 2). We denote this convergence by $W_N\,|\,\mathcal F_N\xrightarrow{p,p}c$. In symbols,
\[
W_N\,|\,\mathcal F_N\xrightarrow{p,p}c\quad\text{if}\quad W_N\,|\,\mathcal F_N\xrightarrow{d,p}\delta_c. \tag{20}
\]

F.4 Proof of Lemma 4

Proof of Lemma 4. Define the Gaussian convolution of $r_N(x)$ as $F_N(x,\delta)\equiv\mathbb E[r_N(x+\delta Y)]$, where $Y\sim N(0,I_k)$ and the expectation is taken with respect to $Y$. We first decompose the desired difference into three parts:
\[
\begin{aligned}
A_N&\equiv\mathbb E\Big[r_N\Big(\frac{1}{\sqrt N}\sum_{u=1}^N W_{uN}\Big)\Big]-\mathbb E[r_N(Z)]\\
&=\mathbb E\Big[r_N\Big(\frac{1}{\sqrt N}\sum_{u=1}^N W_{uN}\Big)\Big]-\mathbb E\Big[F_N\Big(\frac{1}{\sqrt N}\sum_{u=1}^N W_{uN},\delta\Big)\Big]
+\mathbb E\Big[F_N\Big(\frac{1}{\sqrt N}\sum_{u=1}^N W_{uN},\delta\Big)\Big]-\mathbb E[F_N(Z,\delta)]\\
&\quad+\mathbb E[F_N(Z,\delta)]-\mathbb E[r_N(Z)]
\equiv A^{(1)}_N+A^{(2)}_N+A^{(3)}_N.
\end{aligned}
\]
We now split the proof into two steps.

Control of $A^{(2)}_N$: We want to apply Lemmas 2 and 3 to the variables $W_{uN}$. First we note that for any fixed $N$ and $\delta$, $F_N(x,\delta)\in C^2(\mathbb R^k)$. Indeed, by a change of variable,
\[
F_N(x,\delta)=\mathbb E[r_N(x+\delta A)]=\int r_N(x+\delta a)\phi(a)\,da=\int r_N(t)\frac{1}{\delta^{k}}\phi\Big(\frac{t-x}{\delta}\Big)\,dt.
\]
By the dominated convergence theorem, we can interchange derivative and integral, so for any fixed $N,\delta$, $F_N(x,\delta)$ is twice continuously differentiable. Then applying Lemma 2 with $W_{uN}$, we have
\[
|A^{(2)}_N|\le\frac{\|F_N(\cdot,\delta)\|_L\,(\mathbb E[\|W_{uN}\|_2^4])^{1/2}}{2N^{1/2}}+\frac{\sqrt{2\pi}\,\|\nabla F_N(\cdot,\delta)\|_L\,\mathbb E[\|W_{uN}\|_2^3]}{3N^{1/2}}.
\]
By the definition of $r_N$, $\|r_N\|_{BL}\le1$, so $\|F_N(\cdot,\delta)\|_L\le1$. Then applying Lemma 3, we obtain $\|\nabla F_N(\cdot,\delta)\|_L\le\sqrt2/(\pi^{1/2}\delta)$. Therefore, we have
\[
|A^{(2)}_N|\le\frac12\frac{(\mathbb E[\|W_{uN}\|_2^4])^{1/2}}{N^{1/2}}+\frac{2}{3\delta}\frac{\mathbb E[\|W_{uN}\|_2^3]}{N^{1/2}}.
\]

Control of $A^{(1)}_N,A^{(3)}_N$: Applying Lemma 3, we know $|A^{(1)}_N|\le\delta\sqrt k$ and $|A^{(3)}_N|\le\delta\sqrt k$.

Conclusion: Collecting all the results above, we have
\[
|A_N|\le\frac12\frac{(\mathbb E[\|W_{uN}\|_2^4])^{1/2}}{N^{1/2}}+\frac{2}{3\delta}\frac{\mathbb E[\|W_{uN}\|_2^3]}{N^{1/2}}+2\delta\sqrt k.
\]
Since $\mathbb E[\|W_{uN}\|_2^4]/N\to0$ and $\mathbb E[\|W_{uN}\|_2^3]/N^{1/2}\to0$ as assumed, we can optimize over $\delta$ so that $|A_N|\to0$, which completes the proof.
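The smoothing inequality from Lemma 3 used in the proof above, $|\mathbb E[(f*\phi_\delta)(X)]-\mathbb E[f(X)]|\le\delta\sqrt k$ for a 1-Lipschitz $f$, can be illustrated numerically. A standalone sketch (our own illustration, not part of the proof; the choice $f=\|\cdot\|_2$ and the distribution of $X$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 200_000
f = lambda x: np.linalg.norm(x, axis=-1)   # 1-Lipschitz on R^k

X = rng.standard_normal((n, k))            # any distribution of X works
for delta in (0.1, 0.5, 1.0):
    Z = rng.standard_normal((n, k))
    # Monte Carlo estimate of |E[f(X + delta Z)] - E[f(X)]|
    gap = abs(f(X + delta * Z).mean() - f(X).mean())
    assert gap <= delta * np.sqrt(k)       # the Lemma 3 bound
    print(delta, gap, delta * np.sqrt(k))
```

The observed gaps sit well inside the bound $\delta\sqrt k$, matching the role the bound plays in controlling $A^{(1)}_N$ and $A^{(3)}_N$.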
F.5 Proof of Lemma 7

Proof of Lemma 7. By Lemma 6, we know that
\[
\mathbb E[f(X,Y)\,|\,\mathcal F](\omega)=\int f(x,y)\,d\kappa_{(X,Y),\mathcal F}(\omega,(x,y)).
\]
Now we prove that for almost every $\omega\in\Omega$,
\[
\kappa_{(X,Y),\mathcal F}(\omega,S)=\kappa_{Y,\mathcal F}(\omega,S_1)\cdot\mathbb 1(X(\omega)\in S_2),\qquad\forall S=S_1\times S_2\subset E\times B.
\]
In other words, $\kappa_{(X,Y),\mathcal F}(\omega,\cdot)$ is the product measure of $\kappa_{Y,\mathcal F}(\omega,\cdot)$ and the point mass at the value $X(\omega)$. This can be proved by using the definition of the regular conditional distribution. For any $S=S_1\times S_2\subset E\times B$, we have
\[
\kappa_{(X,Y),\mathcal F}(\omega,S)=\mathbb P[(X,Y)\in S\,|\,\mathcal F](\omega)=\mathbb 1(X(\omega)\in S_2)\,\mathbb P[Y\in S_1\,|\,\mathcal F](\omega)=\kappa_{Y,\mathcal F}(\omega,S_1)\cdot\mathbb 1(X(\omega)\in S_2)
\]
for almost every $\omega\in\Omega$. Thus, by Fubini's theorem, we conclude
\[
\int f(x,y)\,d\kappa_{(X,Y),\mathcal F}(\omega,(x,y))=\int f(X(\omega),y)\,d\kappa_{Y,\mathcal F}(\omega,y).
\]

G Useful lemmas and the proofs

G.1 Lemma statements

Lemma 8 (Conditional Polya's theorem, Theorem 5 in Niu et al. (2024)). Let $W_N$ be a sequence of random variables. If $W_N\,|\,\mathcal F_N\xrightarrow{d,p}W$ for some random variable $W$ with continuous CDF, then
\[
\sup_{t\in\mathbb R}\big|\mathbb P[W_N\le t\,|\,\mathcal F_N]-\mathbb P[W\le t]\big|\xrightarrow{p}0. \tag{21}
\]

Lemma 9 (Slutsky's theorem, Theorem 13.18 in Dudley (2002)). Let $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$ be random variables with values in $\mathbb R^k$. Suppose $X_N\xrightarrow{d}X$ and $\|X_N-Y_N\|_2\xrightarrow{p}0$. Then $Y_N\xrightarrow{d}X$.

Lemma 10 (Conditional weak law of large numbers, Theorem 7 in Niu et al.
(2024)). Let $W_{uN}$ be a triangular array of random variables such that the $W_{uN}$ are independent conditionally on $\mathcal F_N$ for each $N$. If for some $\delta>0$ we have
\[
\frac{1}{N^{1+\delta}}\sum_{u=1}^N\mathbb E[|W_{uN}|^{1+\delta}\,|\,\mathcal F_N]\xrightarrow{p}0, \tag{22}
\]
then
\[
\frac{1}{N}\sum_{u=1}^N\big(W_{uN}-\mathbb E[W_{uN}\,|\,\mathcal F_N]\big)\,\Big|\,\mathcal F_N\xrightarrow{p,p}0. \tag{23}
\]
Applying the dominated convergence theorem, we know
\[
\frac{1}{N}\sum_{u=1}^N\big(W_{uN}-\mathbb E[W_{uN}\,|\,\mathcal F_N]\big)\xrightarrow{p}0.
\]
The condition (22) is satisfied when
\[
\sup_{1\le u\le N}\mathbb E[|W_{uN}|^{1+\delta}\,|\,\mathcal F_N]=o_p(N^\delta). \tag{24}
\]

Lemma 11 (Durrett (2019), Theorem 2.3.2). A sequence of random variables $W_N$ converges to a limit $W$ in probability if and only if every subsequence of $W_N$ has a further subsequence that converges to $W$ almost surely.

Lemma 12 (Skorohod's representation theorem). Let $(\mu_N)_{N\in\mathbb N}$ be a sequence of probability measures on a metric space $S$ such that $\mu_N$ converges weakly to some probability measure $\mu_\infty$ on $S$ as $N\to\infty$. Suppose also that the support of $\mu_\infty$ is separable. Then there exist $S$-valued random variables $W_N$ defined on a common probability space $(\Omega,\mathcal F,\mathbb P)$ such that the law of $W_N$ is $\mu_N$ for all $N$ (including $N=\infty$) and such that $(W_N)_{N\in\mathbb N}$ converges to $W_\infty$, $\mathbb P$-almost surely.

Lemma 13 (Lipschitz continuity of the square-root matrix under the Frobenius norm). Suppose $A,B\in\mathbb R^{k\times k}$ are two positive definite matrices. Then we have
\[
\|A^{1/2}-B^{1/2}\|_F\le\sqrt k\,\|A-B\|_F^{1/2}.
\]

Lemma 14 (Boundedness of the covariance). Suppose $X_N,Y_N$ are two sequences of random variables with finite first moments, satisfying
1. $\mathbb E[X_N^p]$ and $\mathbb E[Y_N^p]$ converge to finite constants for $p=1,2$;
2. $\liminf_{N\to\infty}\mathrm{Var}[X_N]>0$ and $\liminf_{N\to\infty}\mathrm{Var}[Y_N]>0$.
Suppose a random sequence $a_N\in(0,1)$ almost surely. Then we have
\[
\limsup_{N\to\infty}\left|\frac{a_N^{1/2}\,\mathbb E[X_N]}{(\mathbb E[X_N^2]-a_N\mathbb E[X_N]^2)^{1/2}}\times\frac{(1-a_N)^{1/2}\,\mathbb E[Y_N]}{(\mathbb E[Y_N^2]-(1-a_N)\mathbb E[Y_N]^2)^{1/2}}\right|<C<1 \tag{25}
\]
almost surely, and the constant $C$ only depends on the limits of the moments $\lim_{N\to\infty}\mathbb E[X_N^p]$ and $\lim_{N\to\infty}\mathbb E[Y_N^p]$ for $p\in\{1,2\}$.

G.2 Proof of Lemma 13

Proof of Lemma 13. It suffices to prove
\[
\|A^{1/2}-B^{1/2}\|_2\le\|A-B\|_2^{1/2}, \tag{26}
\]
where $\|A\|_2$ is the operator norm of $A$. This is because if statement (26) holds, then
\[
\|A^{1/2}-B^{1/2}\|_F\le\sqrt k\,\|A^{1/2}-B^{1/2}\|_2\le\sqrt k\,\|A-B\|_2^{1/2}\le\sqrt k\,\|A-B\|_F^{1/2}.
\]
Now we prove (26).
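As a quick numerical sanity check of (26) and of the Frobenius-norm statement of Lemma 13 (our own illustration on random positive definite matrices, not part of the argument):

```python
import numpy as np

def psd_sqrt(m):
    """Symmetric square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

rng = np.random.default_rng(0)
k = 4
for _ in range(100):
    # Random positive definite A, B
    GA, GB = rng.standard_normal((2, k, k))
    A = GA @ GA.T + 0.1 * np.eye(k)
    B = GB @ GB.T + 0.1 * np.eye(k)
    # Frobenius-norm bound of Lemma 13
    lhs = np.linalg.norm(psd_sqrt(A) - psd_sqrt(B), "fro")
    rhs = np.sqrt(k) * np.linalg.norm(A - B, "fro") ** 0.5
    assert lhs <= rhs
    # Operator-norm version (26)
    assert (np.linalg.norm(psd_sqrt(A) - psd_sqrt(B), 2)
            <= np.linalg.norm(A - B, 2) ** 0.5)
```

Both inequalities hold on all random draws, consistent with the chain of bounds displayed above.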
If a vector $x$ with $\|x\|=1$ is an eigenvector of $\sqrt A-\sqrt B$ with eigenvalue $\mu$, then
\[
x^\top(A-B)x=x^\top(\sqrt A-\sqrt B)\sqrt A\,x+x^\top\sqrt B(\sqrt A-\sqrt B)x=\mu\,x^\top(\sqrt A+\sqrt B)x.
\]
Now if we choose $\mu=\pm\|\sqrt A-\sqrt B\|_2$ (depending on the sign of the eigenvalue with the largest magnitude), we have
\[
\|\sqrt A-\sqrt B\|_2^2=\big(x^\top(\sqrt A-\sqrt B)x\big)^2\le\big|x^\top(\sqrt A-\sqrt B)x\big|\,x^\top(\sqrt A+\sqrt B)x=|x^\top(A-B)x|\le\|A-B\|_2.
\]
This completes the proof.

G.3 Proof of Lemma 14

Proof of Lemma 14. We divide the proof into two cases.

1. When $\lim_{N\to\infty}\mathbb E[X_N]=0$ or $\lim_{N\to\infty}\mathbb E[Y_N]=0$: Since
\[
\liminf_{N\to\infty}\big(\mathbb E[X_N^2]-a_N\mathbb E[X_N]^2\big)\ge\liminf_{N\to\infty}\big(\mathbb E[X_N^2]-\mathbb E[X_N]^2\big)>0
\]
and
\[
\liminf_{N\to\infty}\big(\mathbb E[Y_N^2]-(1-a_N)\mathbb E[Y_N]^2\big)\ge\liminf_{N\to\infty}\big(\mathbb E[Y_N^2]-\mathbb E[Y_N]^2\big)>0,
\]
we know the claim is true with $C=0$ almost surely.

2. When both $\lim_{N\to\infty}\mathbb E[X_N]\ne0$ and $\lim_{N\to\infty}\mathbb E[Y_N]\ne0$: Define the sequence
\[
E_N\equiv a_N\mathbb E[Y_N^2]\big(\mathbb E[X_N^2]-\mathbb E[X_N]^2\big)+(1-a_N)\big(\mathbb E[Y_N^2]-\mathbb E[Y_N]^2\big)\mathbb E[X_N^2].
\]
Then
\[
D_N\equiv\big(\mathbb E[X_N^2]-a_N\mathbb E[X_N]^2\big)\big(\mathbb E[Y_N^2]-(1-a_N)\mathbb E[Y_N]^2\big)=a_N(1-a_N)\big(\mathbb E[X_N]\mathbb E[Y_N]\big)^2+E_N\equiv C_N+E_N.
\]
Note that conclusion (25) is equivalent to proving $\limsup_{N\to\infty}|C_N^{1/2}/D_N^{1/2}|<1$ almost surely. To this end, we observe that
\[
\begin{aligned}
\frac{D_N}{C_N}&=1+\frac{E_N}{C_N}
=1+\frac{a_N\mathbb E[Y_N^2]\big(\mathbb E[X_N^2]-\mathbb E[X_N]^2\big)}{C_N}+\frac{(1-a_N)\big(\mathbb E[Y_N^2]-\mathbb E[Y_N]^2\big)\mathbb E[X_N^2]}{C_N}\\
&=1+\frac{\mathbb E[Y_N^2]\big(\mathbb E[X_N^2]-\mathbb E[X_N]^2\big)}{(1-a_N)\big(\mathbb E[X_N]\mathbb E[Y_N]\big)^2}+\frac{\big(\mathbb E[Y_N^2]-\mathbb E[Y_N]^2\big)\mathbb E[X_N^2]}{a_N\big(\mathbb E[X_N]\mathbb E[Y_N]\big)^2}
\equiv1+R_{1N}+R_{2N}.
\end{aligned}
\]
We can bound
\[
\liminf_{N\to\infty}R_{1N}\ge\liminf_{N\to\infty}\frac{\mathbb E[Y_N^2]\big(\mathbb E[X_N^2]-\mathbb E[X_N]^2\big)}{\big(\mathbb E[X_N]\mathbb E[Y_N]\big)^2}
=\lim_{N\to\infty}\frac{\mathbb E[Y_N^2]}{(\mathbb E[Y_N])^2}\lim_{N\to\infty}\frac{\mathbb E[X_N^2]-\mathbb E[X_N]^2}{(\mathbb E[X_N])^2}>0
\]
almost surely, since $\lim_{N\to\infty}\big(\mathbb E[X_N^2]-\mathbb E[X_N]^2\big)=\lim_{N\to\infty}\mathrm{Var}[X_N]>0$. Similarly, we can prove $\liminf_{N\to\infty}R_{2N}>0$ almost surely.
Thus we have
\[
\liminf_{N\to\infty}\frac{D_N}{C_N}\ge1+\liminf_{N\to\infty}R_{1N}+\liminf_{N\to\infty}R_{2N}>1
\]
almost surely. Thus the desired bound is obtained by checking
\[
\limsup_{N\to\infty}\left|\frac{a_N^{1/2}\,\mathbb E[X_N]}{(\mathbb E[X_N^2]-a_N\mathbb E[X_N]^2)^{1/2}}\times\frac{(1-a_N)^{1/2}\,\mathbb E[Y_N]}{(\mathbb E[Y_N^2]-(1-a_N)\mathbb E[Y_N]^2)^{1/2}}\right|
=\limsup_{N\to\infty}\left|\frac{C_N^{1/2}}{D_N^{1/2}}\right|
=\frac{1}{\liminf_{N\to\infty}D_N^{1/2}/C_N^{1/2}}<1
\]
almost surely.

H New results on conditional CLT and CMT

In this section, we present a new conditional central limit theorem (CLT) and new continuous mapping theorem (CMT) results under the conditional convergence framework.

H.1 A new conditional CLT

Lemma 15 (Conditional CLT under the Lipschitz class). Consider $\sigma$-algebras $\mathcal F_N$. Suppose $W_{1N},\dots,W_{NN}\in\mathbb R^k$ are i.i.d. random variables conditional on $\mathcal F_N$ for any fixed $N\in\mathbb N$. Define $W_N\equiv\sum_{u=1}^N W_{uN}/\sqrt N$. Suppose
\[
\mathbb E[W_{uN}\,|\,\mathcal F_N]=\mathbf 0,\qquad\mathbb E[W_{uN}W_{uN}^\top\,|\,\mathcal F_N]=I_k,
\]
and
\[
\frac{\mathbb E[\|W_{uN}\|_2^4\,|\,\mathcal F_N]}{N}=o_p(1),\qquad\frac{\mathbb E[\|W_{uN}\|_2^3\,|\,\mathcal F_N]}{\sqrt N}=o_p(1).
\]
Furthermore, consider random variables $X_N\in\mathbb R^k$ such that $X_N$ is measurable with respect to $\mathcal F_N$. Then for any measurable function $g_N:\mathbb R^k\to\mathbb R^d$ and any Lipschitz function $f:\mathbb R^{d+k}\to\mathbb R$ such that $\|f\|_{BL}\le1$, we have
\[
\mathbb E[f(g_N(X_N),W_N)\,|\,\mathcal F_N]-\mathbb E[f(g_N(X_N),Z)\,|\,\mathcal F_N]=o_p(1),
\]
where $Z\sim N(0,I_k)$. Lemma 15 can be viewed as a conditional version of Lemma 4.

H.2 New conditional CMT results

In this section, we extend the classical continuous mapping theorem to conditional convergence. The following lemma is the classical result.

Lemma 16 (Classical CMT). Let $W_N,W$ be random variables on a metric space $S$. Suppose a function $g:S\to S'$, where $S'$ is another metric space, has a set of discontinuity points $D_g$ such that $\mathbb P[W\in D_g]=0$. Then if $W_N\xrightarrow{d}W$, the following is true: $g(W_N)\xrightarrow{d}g(W)$.

To account for the varying sequence and the conditioning, we present the following lemmas. These two results provide new conditional convergence results in probability and in distribution, respectively.

Lemma 17 (CMT: convergence in conditional probability). Suppose $X_N,Y_N\in\mathcal X\subset\mathbb R^k$ and $\|X_N-Y_N\|_2=o_p(1)$. Furthermore, suppose the function $f:\mathcal X\to\mathbb R$ is uniformly continuous on the support $\mathcal X$ and uniformly bounded: $\sup_{x\in\mathcal X}|f(x)|<\infty$. Then we have $\mathbb E[|f(X_N)-f(Y_N)|]\to0$.
Moreover, we have $\mathbb E[|f(X_N)-f(Y_N)|\,|\,\mathcal F_N]\xrightarrow{p}0$.

Lemma 18 (CMT: convergence in conditional distribution). Consider the $\sigma$-algebra $\mathcal F_N$ and a measurable function $g$ with discontinuity set $D_g$. Suppose the random variable $W\sim P_W$ satisfies $\mathbb P[W\in D_g]=0$ and $g(W)$ is a continuous random variable. If the sequence of random variables $W_N$ satisfies
\[
|\mathbb E[f(W_N)\,|\,\mathcal F_N]-\mathbb E[f(W)]|\xrightarrow{p}0\quad\text{for any }f\text{ such that }\|f\|_{BL}<\infty,
\]
then we have $\sup_{t\in\mathbb R}|\mathbb P[g(W_N)\le t\,|\,\mathcal F_N]-\mathbb P[g(W)\le t]|\xrightarrow{p}0$.

H.3 Proof of Lemma 15

Proof of Lemma 15. We prove the result by a regular conditional distribution argument. Define the regular conditional distribution $\kappa_{\tilde W_N,\mathcal F_N}(\omega,\cdot)$ for $\tilde W_N=(W_{1N},\dots,W_{NN})$ given $\mathcal F_N$ such that for almost every $\omega\in\Omega$,
\[
\kappa_{\tilde W_N,\mathcal F_N}(\omega,B)=\mathbb P[\tilde W_N\in B\,|\,\mathcal F_N](\omega),\qquad\forall B\in\mathcal B(\mathbb R^{Nk}).
\]
Now suppose $(\tilde W_{1N}(\omega),\dots,\tilde W_{NN}(\omega))$ is a draw from $\kappa_{\tilde W_N,\mathcal F_N}(\omega,\cdot)$. Then, by Lemmas 7 and 11, it suffices to prove that for any subsequence $N_k$ of $N$, there exists a further subsequence $N_{k_j}$ such that for almost every $\omega\in\Omega$,
\[
A_{N_{k_j}}\equiv\int f\big(g_{N_{k_j}}(X_{N_{k_j}}(\omega)),x\big)\,d\kappa_{\tilde W_N,\mathcal F_N}(\omega,x)-\int f\big(g_{N_{k_j}}(X_{N_{k_j}}(\omega)),z\big)\,dP_Z(z)\to0,
\]
where $P_Z$ is the law of $Z$. We choose the subsequence $N_{k_j}$ such that
\[
\frac{\mathbb E[\|W_{uN_{k_j}}\|_2^4\,|\,\mathcal F_{N_{k_j}}](\omega)}{N_{k_j}}\to0,\qquad
\frac{\mathbb E[\|W_{uN_{k_j}}\|_2^3\,|\,\mathcal F_{N_{k_j}}](\omega)}{N_{k_j}^{1/2}}\to0,\qquad\text{for almost every }\omega\in\Omega.
\]
This is possible, guaranteed by the assumptions of the lemma. We apply Lemma 4 with $r_{N_{k_j}}(\cdot)=f(g_{N_{k_j}}(X_{N_{k_j}}(\omega)),\cdot)$ and $U_{uN_{k_j}}=\tilde W_{uN_{k_j}}(\omega)$, so that $A_{N_{k_j}}\to0$ almost surely. This completes the proof.

H.4 Proof of Lemma 17

Proof of Lemma 17. Fix any $\delta>0$; we can choose $\varepsilon(\delta)>0$ such
https://arxiv.org/abs/2505.10747v1
that whenever ∥XN−YN∥2< ε(δ) we can guarantee |f(XN)−f(YN)| ≤δsince fis uniformly continuous in X. Then for any δ >0, we have E[|f(XN)−f(YN)|] =E[|f(XN)−f(YN)|1(∥XN−YN∥2≤ε(δ))] +E[|f(XN)−f(YN)|1(∥XN−YN∥2> ε(δ))] ≤δ+ 2 sup x∈X|f(x)|P[∥XN−YN∥2> ε(δ)]. Letting N→ ∞ , we know P[∥XN−YN∥2> ε(δ)]→0 so that we have lim N→∞E[|f(XN)−f(YN)|]≤δ. Since δis chosen arbitrarily, we have lim N→∞E[|f(XN)−f(YN)|] = 0 so that we complete the first part proof. By the uniform integrability of E[|f(XN)−f(YN)|FN], guaranteed by the uniform boundedness of f, we conclude the second part of the proof. H.5 Proof of Lemma 18 Proof of Lemma 18. Using Lemma 6 and definition of regular conditional distribution, we know Z f(x)dκWN,FN(ω, x)−Z f(x)dPW(x) →0,for any fsuch that ∥f∥BL<∞ for any ω∈ E ⊂ Ω, where P[E]→1. Applying Lemma 1 to κWN,FN(ω,·),PW(·), we know on the event E, random variable ˜WNfrom measure κWN,FN(ω,·) converges to measure PW(·) in distribution. In other words, we have for any bounded and continuous function h, E[h(˜WN(ω))] =Z h(x)dκWN,FN(ω, x)→Z h(x)dPW(x) =E[h(W)],∀ω∈ E. Then by continuous mapping theorem, we have for any t∈R, P[g(˜WN(ω))≤t]→P[g(W)≤t],∀ω∈ E ∩ { W /∈Dg}. Again applying Lemma 6, we know ∀ω∈ E ∩ { W /∈Dg}, |P[g(WN)≤t|FN](ω)−P[g(W)≤t]| →0. SinceP[E ∩ { W /∈Dg}] =P[E]→1 by the assumption that P[W∈Dg] = 0, we have |P[g(WN)≤t|FN]−P[g(W)≤t]|p→0. Finally, applying Lemma 8, we have sup t∈R|P[g(WN)≤t|FN]−P[g(W)≤t]|p→0. 46 I Preparation for the proof of Theorem 1 I.1 Auxiliary definitions for the proof of Theorem 1 Recall the definition of cNandcin Assumption 1. Then consider the function h(·, c) : R10→¯Rsuch that for x= (x1, . . . , x 10)∈R10, h(x, c) = (p x3/x5,−p x4/x6)A(x1, x2)⊤+1√ 2cwhere A=x7x8 x9x10 ,(27) andhN(x)≡h(x, cN). Recall the sampling function ¯ e(s, x), defined as in Assump- tion 2. Define for any x∈R10, HN(s, x)≡min{1−lN,max{lN,¯e(s, hN(x))}} and VN(s, x)≡E[Y2 uN(s)]−HN(s, x)E[YuN(s)]2. 
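The clipped weight function $H_N(s,x)$ and the variance function $V_N(s,x)$ just defined are straightforward to compute. A minimal numpy sketch, using illustrative moment values and a stand-in scalar for $\bar e(s,h_N(x))$ (the sampling function itself is left abstract here):

```python
import numpy as np

def clip_weight(e_val, l_N):
    # H_N(s, x) = min{1 - l_N, max{l_N, e_bar(s, h_N(x))}}: clip into [l_N, 1 - l_N]
    return np.minimum(1.0 - l_N, np.maximum(l_N, e_val))

def variance_fn(ey2, ey, e_val, l_N):
    # V_N(s, x) = E[Y_uN(s)^2] - H_N(s, x) * E[Y_uN(s)]^2
    return ey2 - clip_weight(e_val, l_N) * ey ** 2

l_N = 0.05
# Clipping keeps extreme sampling probabilities inside [0.05, 0.95]
H = clip_weight(np.array([0.001, 0.4, 0.999]), l_N)
# With illustrative moments E[Y^2] = 2 and E[Y] = 1
V = variance_fn(ey2=2.0, ey=1.0, e_val=0.4, l_N=l_N)
```

Since $H_N(s,x)\le 1$, the variance function is bounded below by $E[Y^2]-E[Y]^2=\mathrm{Var}[Y]$, which is the mechanism behind the lower bounds in Lemma 19.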
In particular, we define the weight and variance functions: H(2) N(x)≡(HN(0, x), HN(1, x)) and V(2) N(x)≡(VN(0, x), VN(1, x)) and the covariance function Cov(2) N(x)≡−(HN(0, x)HN(1, x))1/2 (VN(0, x))1/2(VN(1, x))1/2E[YuN(0)]E[YuN(1)]. (28) Also, define the matrix function Σ(2) N(x)≡(Cov(2) N(x))2×2. I.2 Generic lemmas on the random functions Lemma 19 (Asymptotic lower and upper bound of V(t) N(s) and VN(s, x)).Suppose the Assumption 1-2 hold and either Assumption 3 or Assumption 4 holds. Then for any t= 1,2ands= 0,1, we have 0<lim inf N→∞V(t) N(s)≤lim sup N→∞V(t) N(s)<∞, and 0<lim inf N→∞inf x∈R10VN(s, x)≤lim sup N→∞sup x∈R10VN(s, x)<∞. Similarly, V(t)(s)is uniformly lower and upper bounded for s= 0,1andt= 1,2. Lemma 20 (Lipschitz property of weight, variance and covariance matrix function) . Suppose the Assumption 1-2 hold and either Assumption 3 or Assumption 4 holds. Then there exist universal constants C1, C2, C3, C4such that for any x1, x2∈R10 |HN(s, x1)−HN(s, x2)| ≤C1|¯e(s, hN(x1))−¯e(s, hN(x2))| |H1/2 N(s, x1)−H1/2 N(s, x2)| ≤C2|¯e1/2(s, hN(x1))−¯e1/2(s, hN(x2))| |VN(s, x1)−VN(s, x2)| ≤C3|¯e(s, hN(x1))−¯e(s, hN(x2))| ∥(Σ(2) N(x1))1/2−(Σ(2) N(x2))1/2∥F≤C4X s=0,1|¯e1/2(s, hN(x1))−¯e1/2(s, hN(x2))|1/2. 47 I.3 Proof of Lemma 19 Proof of Lemma 19. We first prove the result for V(t) N(s). We divide the proof into two parts. Proof of lim inf N→∞V(t) N(s)>0:For any s, t, guaranteed by Assumption 1, lim inf N→∞V(t) N(s) =
lim inf N→∞ E[Y2 uN(s)]−E[YuN(s)]2+ (1−¯eN(s,Ht−1))E[YuN(s)]2 ≥lim inf N→∞ E[Y2 uN(s)]−E[YuN(s)]2 >0. Thus we have lim inf N→∞V(t) N(s)>0 for any s, t. Proof of lim supN→∞V(t) N(s)<∞:We bound lim sup N→∞V(t) N(s)≤lim sup N→∞E[Y2 uN(s)]≤lim sup N→∞(E[Y4 uN(s)])1/2<∞ where the last inequality is due to Assumption 1. The proof for VN(s, x) is similar so that we omit it. I.4 Proof of Lemma 20 Proof of Lemma 20. We prove the claim subsequently. Proof for HN(s, x):Write HN(s, x) = min {1−lN,max{lN,¯e(s, hN(x))}}. We first notice that min {1−lN,max{lN, x}}is a Lipschitz function of xwith Lipschitz constant 1 so that we have |HN(s, x1)−HN(s, x2)| ≤ |¯e(s, hN(x1))−¯e(s, hN(x2))|. Proof for H1/2 N(s, x):Since p min{1−lN,max{lN, x}}= min {p 1−lN,max{p lN,√x}} is a Lipschitz function of√x, we can conclude |H1/2 N(s, x1)−H1/2 N(s, x2)| ≤ |¯e1/2(s, hN(x1))−¯e1/2(s, hN(x2))|. Proof for VN(s, x):Recall the definition VN(s, x1) =E[Y2 uN(s)]−HN(s, x1)E[YuN(s)]2 so that we can bound |VN(s, x1)−VN(s, x2)| ≤E[YuN(s)]2|HN(s, x1)−HN(s, x2)| ≤E[YuN(s)]2|¯e(s, hN(x1))−¯e(s, hN(x2))| where the last inequality is true due to the result already proved for HN(s, x). Since E[YuN(s)]2is uniformly bounded by Assumption 1, we complete the proof for VN(s, x). 48 Proof for (Σ(2) N(x))1/2:By Lemma 13, we have ∥(Σ(2) N(x1))1/2−(Σ(2) N(x2))1/2∥F≤√ 2∥Σ(2) N(x1)−Σ(2) N(x2)∥1/2 F. Define VN(x)≡VN(0, x)VN(1, x). Then recalling the definition of Σ(2) N, we notice ∥Σ(2) N(x1)−Σ(2) N(x2)∥F =√ 2|Cov(2) N(x1)−Cov(2) N(x2)| ≤√ 2(HN(0, x1)HN(1, x1))1/2|E[YuN(0)]E[YuN(1)]| V1/2 N(x1)−V1/2 N(x2) V1/2 N(x1)V1/2 N(x2) +√ 2 (HN(0, x1)HN(1, x1))1/2−(HN(0, x2)HN(1, x2))1/2 |E[YuN(0)]E[YuN(1)]| V1/2 N(x2) ≡CN,1+CN,2. We bound CN,1andCN,2separately. 1.Treatment for CN,1.Notice VN(0, x1) is uniformly lower and upper bounded, proved as in Lemma 19. Then we denote the uniform lower and upper bounds respectively as cv, Cv, i.e., cv≤lim inf N→∞inf xVN(s, x)≤lim sup N→∞sup xVN(s, x)≤Cvfor any s= 0,1. 
Then we have V1/2 N(x1)−V1/2 N(x2) V1/2 N(x1)V1/2 N(x2)≤|VN(x1)−VN(x2)| c2v(V1/2 N(x1) +V1/2 N(x2)) ≤|VN(x1)−VN(x2)| 2c3v ≤Cv 2c3v(|VN(0, x1)−VN(0, x2)|+|VN(1, x1)−VN(1, x2)|) ≤Cv 2c3v(E[YuN(s)])2X s=0,1|¯e(s, hN(x1))−¯e(s, hN(x2))| where the last inequality is due to the result already proved for VN(s, x). There- fore, by the bound HN(s, x1)≤1, we have |CN,1|≲X s=0,1|¯e(s, hN(x1))−¯e(s, hN(x2))| =X s=0,1|¯e1/2(s, hN(x1))−¯e1/2(s, hN(x2))||¯e1/2(s, hN(x1)) + ¯e1/2(s, hN(x2))| ≤2X s=0,1|¯e1/2(s, hN(x1))−¯e1/2(s, hN(x2))|. 2.Treatment for CN,2.SinceE[YuN(s)] and VN(x) are uniformly lower and upper bounded, we have CN,2≲ (HN(0, x1)HN(1, x1))1/2−(HN(0, x2)HN(1, x2))1/2 49 ≤X s=0,1|H1/2 N(s, x1)−H1/2 N(s, x2)| ≤X s=0,1|¯e1/2(s, hN(x1))−¯e1/2(s, hN(x2))| We conclude the proof. J Proof of Theorem 1 The organization of this section is as follows. We first present the general proof roadmap in Appendix J.1. Then we work out the first two steps involved in the general proof roadmap under different weighting schemes in Appendix J.2 and Appendix J.3, respectively. J.1 General proof roadmap for weak convergence result Before presenting the general proof roadmap, we first define the following notations. Random variables in the limiting distributions. We first recall the following definitions. V(t)(s) = lim N→∞E[Y2 uN(s)]−H(t)(s) lim N→∞E[YuN(s)]2(29) and H(1)(s) =e(s) and H(2)(s) =( max{¯l,¯e(s, S((A(1), V(1)), c))}under Assumption 3 ¯e(s, S((A(1), V(1)), c)) under Assumption 4. (30) Define A(t)≡(A(t)(0), A(t)(1)), V(t)≡(V(t)(0), V(t)(1)) and H(t)≡(H(t)(0), H(t)(1)). Furthermore, recall the
asymptotic covariance matrix Σ(t)≡(Cov(t))2×2, where the covariance is defined as in Appendix A. We will just denote Cov(2)= Cov(2)(A1) for simplicity. Random variables related to observed data. For the ease of presentation, we rewrite the weight vector as H(t) N= (H(t) N(0), H(t) N(1)), H(1) N(s) =e(s), H(2) N(s)≡HN(s, E(1) N) = ¯eN(s,Ht−1) and the variance vector as V(t) N≡(V(t) N(0), V(t) N(1)) where V(t) N(s)≡E[Y2 uN(s)]−H(t) N(s)E[YuN(s)]2. Also, to slightly abuse the notation, we define Λ(t) N≡(Λ(t) N(0),Λ(t) N(1)) where Λ(t) N(s)≡(H(t) N(s))1/2 (V(t) N(s))1/21 N1/2 tNtX u=1(ˆΛ(t) uN(s)−E[YuN(s)]). 50 Define the matrix Σ(t) N≡(Cov(t) N)2×2, where the covariance is defined as Cov(t) N≡−(H(t) N(0)H(t) N(1))1/2 (V(t) N(0))1/2(V(t) N(1))1/2E[YuN(0)]E[YuN(1)]. (31) Random vector joining two stages. Recalling the definitions of H(t)(s) and V(t)(s) in (30) and (29), we define the limiting vectors W≡(W1, W2) where Wt≡(Zt, V(t), H(t),Vec((Σ(t))1/2)). (32) where Z1, Z2are independently generated from N(0,I2). We use Vec( Σ) to denote the vectorization of matrix Σand the order is by column. Now we define the empirical counterparts for Wt: E(t) N≡ (Σ(t) N)−1/2Λ(t) N, V(t) N, H(t) N,Vec((Σ(t) N)1/2) , t= 1,2. (33) It is easy to see that S(1) N(0)−S(1) N(1) = hN(E(1) N), recalling the definition of hNas in (27). In fact, S(1) N(0)−S(1) N(1) = Λ(1) N(0)·(V(1) N(0))1/2 (H(1) N(0))1/2−Λ(1) N(1)·(V(1) N(1))1/2 (H(1) N(1))1/2+1√ 2cN=hN(E(1) N). Next, we define an intermediate auxiliary random vector joining the limiting random vector M2and empirical random vector E(2) N, with Z2considered in (32): Ea N(x)≡(Z2, V(2) N(x), H(2) N(x),Vec((Σ(2) N(x))1/2)),∀x∈R10. (34) Recall V(2) N(x), H(2) N(x) and Σ(2) N(x) are defined as in Section I.1. Proofs for both weighting choices in Theorem 1 follow from the same proof roadmap. Now we present this general proof roadmap. Proof roadmap. First, the following lemma shows the consistency of WIPW( s) and WIPWS( s). 
Lemma 21 (Consistency of WIPW and WIPWS) .Under Assumption 1-2 and either Assumption 3 or Assumption 4, we have WIPW( s)−E[YuN(s)]p→0and WIPWS( s)−E[Y2 uN(s)]p→0, where we recall the estimators WIPW( s)andWIPWS( s)as in (2)and(16). The proof of Lemma 21 can be found in Appendix K.1.1. Now we prove the weak convergence. We summarize the roadmap as follows: Step 1: We first prove that for t= 1,2 R(t) V(s)≡PNt u=1(ˆΛ(t) uN(s)−WIPW( s))2 PNt u=1(ˆΛ(t) uN(s)−E[YuN(s)])2p→1 and S(t) V(s)≡H(t) N(s) NtV(t) N(s)NtX u=1(ˆΛ(t) uN(s)−E[YuN(s)])2p→1. 51 Step 2: Prove ( E(1) N, E(2) N)d→Wwhere E(t) Nis defined as in (33) and Wis defined as in (32). This step involves the analysis on the different choice of h(t) N(s) for Assumption 3 and 4; Step 3: We define ˆIN≡WN−√ N(E[YuN(0)]−E[YuN(1)]) (NˆVN(0) + NˆVN(1))1/2and ˆIU≡√ N(TN−(E[YuN(0)]−E[YuN(1)])) . In order to find the asymptotic distribution of ˆIN,ˆIU, we can rewrite ˆIN,ˆIUwith different weighting methods as a function of the weak limit of ( E(1) N, E(2) N) and apply Slutsky’s Lemma and the continuous mapping theorem. We consider the following four cases: 1.WNwith constant weighting: we can write ˆINwith constant weighting as P2 t=1Λ(t) N(0)(V(t) N(0)/H(t) N(0))1/2−P2 t=1Λ(t) N(1)(V(t) N(1)/H(t) N(1))1/2 (P2 t=1V(t) N(0)S(t) V(0)R(t) V(0)/H(t) N(0) +P2 t=1V(t) N(1)S(t) V(1)R(t) V(1)/H(t) N(1))1/2; 2.WNwith adaptive weighting: we
can write ˆINwith adaptive weighting as RN(0)P2 t=1Λ(t) N(0)(V(t) N(0))1/2−RN(1)P2 t=1Λ(t) N(1)(V(t) N(1))1/2 (R2 N(0)P2 t=1V(t) N(0)S(t) V(0)R(t) V(0) + R2 N(1)P2 t=1V(t) N(1)S(t) V(1)R(t) V(1))1/2, where R−1 N(s)≡P2 t=1(H(t) N(s))1/2; 3.TNwith constant weighting: we can write ˆIUwith constant weighting as 1√ 2 2X t=1Λ(t) N(0)(V(t) N(0)/H(t) N(0))1/2−2X t=1Λ(t) N(1)(V(t) N(1)/H(t) N(1))1/2! ; 4.TNwith adaptive weighting: we can write ˆIUwith adaptive weighting as √ 2 RN(0)2X t=1Λ(t) N(0)(V(t) N(0))1/2−RN(1)2X t=1Λ(t) N(1)(V(t) N(1))1/2! . Then we use the results R(t) V(s), S(t) V(s)p→1, t= 1,2 as well as the weak convergence of (E(1) N, E(2) N) to derive the weak convergence with the help of Slutsky’s Lemma and continuous mapping theorem. To see this, we will only work out the case for WNwith constant weighting since the proof for the other cases are similar and even simpler. We define the random vector (¯E(1) N,¯E(2) N) and ( ˇE(1) N,ˇE(2) N), where ¯E(t) N≡(E(t) N, S(t) V, R(t) V),ˇE(t) N≡(E(t) N,12,12) and S(t) V≡(S(t) V(0), S(t) V(1)), R(t) V≡(R(t) V(0), R(t) V(1)),12≡(1,1). Then we apply Lemma 9 with XN= (ˇE(1) N,ˇE(2) N) and YN= (¯E(1) N,¯E(2) N) by noticing that XNconverge weakly to ( W1,12,12, W2,12,12) and R(t) V, S(t) Vp→12fort= 1,2. Therefore we obtain YNd→(W1,12,12, W2,12,12). Then we can use continuous map- ping lemma (Lemma 16) to derive the weak convergence of ˆINby writing ˆINas a continuous function of YN. 52 J.2 Proof of Step 1 in Appendix J.1 We will show the convergence of R(t) V(s) and S(t) V(s) for s= 0,1 with different choice ofh(t) N(s). J.2.1 Proof of convergence of R(t) V(s) To first show the convergence of R(t) V(s), we give the following lemma. Lemma 22 (Convergence of R(t) V(s)).Suppose the Assumption 1-2 hold and either Assumption 4 or Assumption 3 holds. 
If the following statements are true: 1.WIPW( s)−E[YuN(s)] =op(1)for any s∈ {0,1}; 2.W(t) N(s)≡PNt u=1¯eN(s,Ht−1)(ˆΛ(t) uN−E[YuN(s)])2/Ntis asymptotically lower bounded; then we have for any s, t, R(t) V(s)p→1. Since the consistency has been proved in Lemma 21, it suffices to prove W(t) N(s) is stochastically lower bounded. We first present a useful lemma. Lemma 23 (Asymptotic representation of W(t) N(s)).Suppose the Assumption 1-2 hold and either Assumption 4 or Assumption 3 holds. Then we have W(t) N(s) =E[Y2 uN(s)]−¯eN(s,Ht−1)(E[YuN(s)])2+op(1). By Lemma 23, we know W(t) N(s) =E[Y2 uN(s)]−¯eN(s,Ht−1)(E[YuN(s)])2+op(1). Since Assumption 1 guarantees that lim inf N→∞(E[Y2 uN(s)]−¯eN(s,Ht−1)(E[YuN(s)])2)≥lim inf N→∞(E[Y2 uN(s)]−(E[YuN(s)])2)>0, we know W(t) N(s) is asymptotically lower bounded. J.2.2 Proof of convergence of S(t) V(s) We write S(t) V(s) =H(t) N(s) NtV(t) N(s)NtX u=1(ˆΛ(t) uN(s)−E[YuN(s)])2 =1 Nt¯eN(s,Ht−1)NtX u=1(ˆΛ(t) uN(s)−E[YuN(s)])2·1 V(t) N(s) =W(t) N(s) V(t) N(s). Then we know from Lemma 23 that W(t) N(s)− E[Y2 uN(s)]−¯eN(s,Ht−1)E[YuN(s)]2 =W(t) N(s)−V(t) N(s)p→0. By Lemma 19, we know lim inf N→∞V(t) N(s)>0. Therefore we have S(t) V(s) =W(t) N(s) V(t) N(s)p→1. 53 J.3 Proof of Step 2 in Appendix J.1 To proceed with the proof of Step 2, we will show that ( E(1) N, E(2) N)d→W. It suffices to show that for any bounded Lipschitz function f, Eh f(E(1) N, E(2) N)i →E[f(W)] (35) Without loss of generality, we assume
∥f∥∞≤1 2,sup x,y∈R20|f(x)−f(y)| ≤1,sup x,y∈R20|f(x)−f(y)| ∥x−y∥≤1 2. (36) We divide the proofs into the following steps Eh f(E(1) N, E(2) N)i −E[f(W)] =E[f(W1, Ea N(W1))]−E[f(W1, W2)] +E[f(E(1) N, E(2) N)]−E[f(W1, Ea N(W1))] ≡F1+F2. It suffices to F1=o(1) and F2=o(1). We will prove these two claims subsequently. Before doing so, we present a useful lemma, characterizing the asymptotic behavior of sampling function in the second stage. Lemma 24 (Convergence of sampling function) .Suppose the Assumption 1-2 hold and either Assumption 3 or Assumption 4 holds. Then if a sequence of random variable MNsatisfying MN→W1almost surely where W1is defined as in (32), then we have ¯e(s, hN(MN))→¯e(s, h(W1, c))and ¯e1/2(s, hN(MN))→¯e1/2(s, h(W1, c)) almost surely. Consequently, we know ¯e(s, hN(MN))−¯e(s, hN(W1))and ¯e1/2(s, hN(MN))−¯e1/2(s, hN(W1)) coverge to 0almost surely. J.3.1 Proof of F1=o(1) To prove F1=o(1), by the boundedness and Lipschitz property of fand Lemma 17, it suffices to show the following quantities are o(1): AN≡ ∥H(2) N(W1)−H(2)∥2, BN≡ ∥V(2) N(W1)−V(2)∥2 and CN≡ ∥Vec((Σ(2) N(W1))1/2)−Vec((Σ(2))1/2)∥2. We prove these claims subsequently. 54 Proof of AN=o(1):Recall the definition of H(2) N(W1) H(2) N(W1) = (HN(0, W1), HN(1, W1)) where HN(s, W 1) = min {1−lN,max{lN,¯e(s, hN(W1))}}. It suffices to prove for any s∈ {0,1} |HN(s, W 1)−H(2)(s)| →0 almost surely . We will prove the result depending on the choice of lNunder either Assumption 3 or 4 . 1.Under Assumption 3: In this case, 0 < c l<¯l=lN< c u<1/2. By the Lipschitz property of min {1−¯l,max{¯l, x}}inx, we have |HN(s, W 1)−H(2)(s)| ≤ |¯e(s, hN(W1))−¯e(s, h(W1, c))|. Convergence in RHS has been proved in Lemma 24. 2.Under Assumption 4: In this case lim N→∞lN= 0. 
We can bound by Lemma 24 that |HN(s, W 1)−H(2)(s)|=|min{1−lN,max{lN,¯e(s, hN(W1))}} − ¯e(s, h(W1, c))| ≤ |¯e(s, hN(W1))−¯e(s, h(W1, c))| +|lN−¯e(s, h(W1, c))| 1(¯e(s, hN(W1))< lN) +|1−lN−¯e(s, h(W1, c))| 1(¯e(s, hN(W1))>1−lN) ≤3|¯e(s, hN(W1))−¯e(s, h(W1, c))|+ 2lN→0 almost surely. Thus we have proved AN=o(1). Proof of BN=o(1):ForBN, recall the expression V(2) N(W1) = (VN(0, W1), VN(1, W1)) and VN(s, W 1) =E[Y2 uN(s)]−HN(s, W 1)E[YuN(s)]2. Notice it suffices to prove HN(s, W 1) converges to H(2)(s) and this can be implied by the convergence of HN(s, W 1) as shown in the proof of AN=o(1). Thus we have proved BN=o(1). Proof of CN=o(1):We notice that by Lemma 13, CN≲∥Σ(2) N(W1)−Σ(2)∥1/2 F=√ 2|Cov(2) N(W1)−Cov(2)|1/4. Recall the definition Cov(2) N(W1) =−(HN(0, W1)HN(1, W1))1/2 V1/2 N(0, W1)V1/2 N(1, W1)E[YuN(0)]E[YuN(1)] and Cov(2)=−(H(t)(0)H(t)(1))1/2 (V(t)(0)V(t)(1))1/2lim N→∞(E[YuN(0)]E[YuN(1)]). It suffices to prove the following claims: 55 1.|H1/2 N(s, W 1)−(H(2)(s))1/2|=o(1); 2.VN(s, W 1) is uniformly lower bounded, and |VN(s, W 1)−V(2)(s)|a.s.→0. First we show |H1/2 N(s, W 1)−(H(2)(s))1/2|=o(1) for any s∈ {0,1}. 1.Under Assumption 3: We can easily obtain |H1/2 N(s, W 1)−(H(2)(s))1/2| ≤1 2¯l1/2|¯e(s, hN(W1))−¯e(s, h(W1, c))| →0 by Lemma 24; 2.Under Assumption 4: we decompose |H1/2 N(s, W 1)−(H(2)(s))1/2| =|min{(1−lN)1/2,max{l1/2 N,¯e1/2(s, hN(W1))}} − ¯e1/2(s, h(W1, c))| ≤ |min{(1−lN)1/2,max{l1/2 N,¯e1/2(s, hN(W1))}} − ¯e1/2(s, hN(W1))| +|¯e1/2(s, hN(W1))−¯e1/2(s, h(W1, c))| ≤l1/2 N+lN+|¯e1/2(s, hN(W1))−¯e1/2(s, h(W1, c))| →0 almost surely by Lemma 24. Then, we only need to prove that VN(s, W 1) is uniformly lower bounded, and |VN(s, W 1)−V(2)(s)|a.s.→0. (37) For
the lower bound, by Lemma 19, we know lim inf N→∞VN(s, W 1)>0. For (37), this can be implied by the convergence BN=o(1). Thus we have proved CN=o(1) and thus we have F1=o(1). J.3.2 Proof of F2=o(1) We further divide the proof into two steps. Define AN(f)≡E[f(E(1) N, E(2) N)]−E[f(E(1) N, Ea N(E(1) N))], BN(f)≡E[f(E(1) N, Ea N(E(1) N))]−E[f(A1, Ea N(A1))]. Noticing F2=AN(f) +BN(f), it suffices to prove AN(f) =o(1), BN(f) =o(1). Proof of limN→∞|AN(f)|= 0 We first present a useful lemma. Lemma 25. Suppose the Assumption 1-2 hold and either Assumption 4 or Assumption 3 holds. For any bounded Lipschitz function fsatisfying (36), we have Eh f(E(1) N, E(2) N)−f(E(1) N, Ea N(E(1) N))| H(1) Ni =op(1) where H(1) Nis the σ-algebra generated by the data in first batch (A(1) 1N, Y(1) 1N), . . . , (A(1) N1N, Y(1) N1N). Then by dominated convergence theorem, we know lim N→∞|AN(f)|= 0. 56 Proof of limN→∞|BN(f)|= 0:We first show that E(1) Nd→W1in the following lemma. Lemma 26 (Weak convergence of E(1) N).Suppose the Assumption 1-2 hold and either Assumption 4 or Assumption 3 holds. Then we have E(1) Nd→W1. By Skorohod’s representation theorem (Lemma 12) and Lemma 26, there exists a sequence of random variables ˜E(1) Nsuch that ˜E(1) Nd=W1and lim N→∞˜E(1) N=W1. Then we write (˜E(1) N, Ea N(˜E(1) N)) = ( ˜E(1) N, Z2, V(2) N(˜E(1) N), H(2) N(˜E(1) N),Vec((Σ(2) N(˜E(1) N))1/2)) and (W1, Ea N(W1)) = ( W1, Z2, V(2) N(W1), H(2) N(W1),Vec((Σ(2) N(W1))1/2)). We intend to apply Lemma 17 and in order to do so, we need to show, in addi- tion to ˜E(1) N→W1, that ∥V(2) N(˜E(1) N)−V(2) N(W1)∥2,∥H(2) N(˜E(1) N)−H(2) N(W1)∥2and ∥(Σ(2) N(˜E(1) N))1/2−(Σ(2) N(W1))1/2∥Fconverge to 0 almost surely. In fact, by Lemma 20, it suffices to prove ¯e(s, hN(˜E(1) N))−¯e(s, hN(W1)) =o(1) and ¯ e1/2(s, hN(˜E(1) N))−¯e1/2(s, hN(W1)) =o(1). Applying Leamm 24 by noticing ˜E(1) N→W1almost surely, we complete the proof for limN→∞|BN(f)|= 0. 
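Throughout Step 2, weak convergence was reduced to controlling gaps of the form $E[f(\cdot)]-E[f(\cdot)]$ for bounded Lipschitz test functions normalized as in (36). As a purely illustrative Monte Carlo sketch (not part of the proof), one can check numerically that such a gap vanishes for a normalized i.i.d. sum, the basic mechanism behind Lemma 15 and Lemma 26:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 2_000, 4_000  # sample size per sum, number of Monte Carlo replications

# A bounded Lipschitz test function: |f| <= 1/2, Lipschitz constant <= 1/2 (cf. (36))
f = lambda x: 0.5 * np.tanh(0.5 * x)

# B replications of sum_{u=1}^N W_u / sqrt(N) with centered, unit-variance W_u
W = rng.exponential(size=(B, N)) - 1.0
S = W.sum(axis=1) / np.sqrt(N)

Z = rng.standard_normal(B)
gap = abs(f(S).mean() - f(Z).mean())  # Monte Carlo estimate of |E[f(S_N)] - E[f(Z)]|
```

The gap is dominated by Monte Carlo noise of order $B^{-1/2}$ plus a CLT error of order $N^{-1/2}$, so it is small at these sizes.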
K Proof of lemmas in Appendix J The organization of this section is as follows. We will prove the lemmas appearing in Appendix J.1 in Appendix K.1. We will prove the lemmas appearing in Appendix J.2 in Appendix K.2. Finally, we will prove the lemmas appearing in Appendix J.3 in Appendix K.3. We will inherit all the notations in Appendix J. K.1 Proof of lemmas in Appendix J.1 K.1.1 Proof of Lemma 21 Proof of Lemma 21. We will only prove the consistency for E[YuN(s)]. The proof for the second part is similar. We divide the proof into two cases depending if Assump- tion 4 or Assumption 3 holds. 1.Under Assumption 3: Compute Var[WIPW( s)−E[YuN(s)]] =E 1 N22X t=1NtX u=1E 1(A(t) uN=s) ¯eN(s,Ht−1)Y(t) uN−E[YuN(s)]!2 |H1 =1 N22X t=1NtX u=1EE[Y2 uN(s)] ¯eN(s,Ht−1)−E[YuN(s)]2 57 ≤E[Y2 uN(s)]E1 2N¯eN(s,H0)+1 2N¯eN(s,H1) . Thus we know by Assumption 3 that N¯eN(s,Ht)≥Nmin{e(s),¯l}for any t= 0,1. This implies Var[WIPW( s)−E[YuN(s)]]→0. This implies WIPW( s)− E[YuN(s)]p→0 since E[WIPW( s)] =E[YuN(s)]. 2.Under Assumption 4: We first can show that WN(s)≡2X t=1NtX u=1h(t) N(s) =2X t=1Nth(t) N(s) =1 2(N1/2¯e1/2 N(s,H0) +N1/2¯e1/2 N(s,H1)) ≥1 2 N1/2l1/2 N+N1/2e1/2(s) ≥1
2N1/2e1/2(s). (38) Compute Var[WIPW( s)−E[YuN(s)]] =E P2 t=1PNt u=1h(t) N(s) 1(A(t) uN=s) ¯eN(s,Ht−1)Y(t) uN−E[YuN(s)]2 W2 N(s) ≤4 Ne(s)E 2X t=1NtX u=1h(t) N(s) 1(A(t) uN=s) ¯eN(s,Ht−1)Y(t) uN−E[YuN(s)]!! 2 =4 N2e(s)2X t=1NtX u=1 E[Y2 uN(s)]−¯eN(s,Ht−1)E[YuN(s)]2 ≤4E[Y2 uN(s)] Ne(s). Then it suffices to show |E[WIPW( s)]−E[YuN(s)]| →0. In fact, we can compute |E[WIPW( s)]−E[YuN(s)]|= E" h(1) N(s)PN1 u=1(ˆΛ(1) uN(s)−E[YuN(s)]) WN(s)# ≤E" h(1) N(s)|PN1 u=1(ˆΛ(1) uN(s)−E[YuN(s)])| WN(s)# ≤E" |PN1 u=1(ˆΛ(1) uN(s)−E[YuN(s)])| N1# 58 ≤vuuutE PN1 u=1(ˆΛ(1) uN(s)−E[YuN(s)]) N1!2 =s E[Y2 uN(s)]−e(s)(E[YuN(s)])2 N1→0, where the second inequality is due to the lower bound (38) and the third inequal- ity is due to Jensen’s inequality. Then we know WIPW( s)−E[YuN(s)]p→0. K.2 Proof of lemmas in Appendix J.2 K.2.1 Proof of Lemma 22 Proof of Lemma 22. The idea is to decompose R(t) V(s) into different pieces and show each piece converges to zero. Recall the definition of W(t) N(s) as in Appendix J.2.1. Then consider R(t) V(s) =PNt u=1(ˆΛ(t) uN(s)−WIPW( s))2 PNt u=1(ˆΛ(t) uN(s)−E[YuN(s)])2 =¯eN(s,Ht−1) NtW(t) N(s)NtX u=1 ˆΛ(t) uN(s)−E[YuN(s)] +E[YuN(s)]−WIPW( s)2 = 1 +2 (E[YuN(s)]−WIPW( s)) W(t) N(s)¯eN(s,Ht−1) NtNtX u=1 ˆΛ(t) uN(s)−E[YuN(s)] +¯eN(s,Ht−1) W(t) N(s)(E[YuN(s)]−WIPW( s))2 ≡1 +R(t) 1N(s) +R(t) 2N(s). In order to show R(t) 1N(s) =op(1) and R(t) 2N(s) =op(1), it suffices to show 1. ¯eN(s,Ht−1) NtNtX u=1(ˆΛ(t) uN(s)−E[YuN(s)]) = op(1); (39) 2.W(t) N(s) is asymptotically lower bounded; 3. WIPW( s)−E[YuN(s)] =op(1) for any s∈ {0,1}. The second and the last claims follow by the assumption so we only need to show the first claim. The intuition behind the validity of (39) is the summand is mean zero and thus we only need to show the variance of the summand converges to 0. 
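The remaining step is exactly this variance computation: the weighted summand is conditionally mean zero, and its scaled average has variance of order $1/N_t$. As a rough numerical sanity check (with a fixed propensity standing in for the adaptive $\bar e_N(s,\mathcal{H}_{t-1})$ and hypothetical moments), a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
e_bar, ey, ey2 = 0.3, 1.0, 2.0  # illustrative propensity and moments E[Y], E[Y^2]

def var_of_scaled_mean(N, reps=5_000):
    # Lambda_hat_u = 1(A_u = s) * Y_u / e_bar, an IPW-style summand with mean E[Y]
    A = rng.random((reps, N)) < e_bar
    Y = rng.normal(ey, np.sqrt(ey2 - ey ** 2), size=(reps, N))
    lam_hat = A * Y / e_bar
    # e_bar * (1/N) * sum_u (Lambda_hat_u - E[Y]), the quantity appearing in (39)
    stat = e_bar * (lam_hat - ey).mean(axis=1)
    return stat.var()

v_small, v_large = var_of_scaled_mean(200), var_of_scaled_mean(3_200)
# The variance shrinks roughly like 1/N, consistent with the bound E[Y^2]/N
```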
To this end, we can compute Var" ¯eN(s,Ht−1)1 NtNtX u=1(ˆΛ(t) uN,s−E[YuN(s)])# 59 =E" Var" 1 Nt¯eN(s,Ht−1)NtX u=1 ˆΛ(t) uN(s)−E[YuN(s)] |Ht−1## =E" ¯e2 N(s,Ht−1) N2 tNtX u=1E ˆΛ(t) uN(s)−E[YuN(s)]2 |Ht−1# =1 NtE ¯eN(s,Ht−1) E Y2 uN(s) −¯eN(s,Ht−1)E[YuN(s)]2 ≤1 NtE[Y2 uN(s)] =o(1). Thus we proved (39). K.2.2 Proof of Lemma 23 Proof of Lemma 23. Since the summand in W(t) N(s) is conditionally independent and identically distributed so we want to apply Lemma 10 to prove the claim. We will verify (24) with δ= 1 and WuN= ¯eN(s,Ht−1)(ˆΛ(t) uN(s)−E[YuN(s)])2. In other words, we need to verify Eh ¯e2 N(s,Ht−1)(ˆΛ(t) uN(s)−E[YuN(s)])4|Ht−1i =op(N). To see this, we can compute Eh ¯e2 N(s,Ht−1)(ˆΛ(t) uN(s)−E[YuN(s)])4|Ht−1i ≤8E[Y4 uN(s)] ¯eN(s,Ht−1)+ 8¯e2 N(s,Ht−1)(E[YuN(s)])4. Since N¯eN(s,Ht−1)≥Nmin{e(s),¯lN} → ∞ in probability by Assumption 4 or 3 and Assumption 1 guarantees that E[YuN(s)],E[Y4 uN(s)] are uniformly bounded, then we know E[Y4 uN(s)] N¯eN(s,Ht−1)=op(1),(E[YuN(s)])4 N=o(1). Thus we can apply Lemma 10 with δ= 1 such that W(t) N(s)−E[¯eN(s,Ht−1)(ˆΛ(t) uN(s)−E[YuN(s)])2|Ht−1] =op(1). Thus it suffices to prove E[¯eN(s,Ht−1)(ˆΛ(t) uN(s)−E[YuN(s)])2|Ht−1] is stochastically lower bounded. We compute E[¯eN(s,Ht−1)(ˆΛ(t) uN(s)−E[YuN(s)])2|Ht−1] =E[Y2 uN(s)]−¯eN(s,Ht−1)(E[YuN(s)])2. The above quantity is lower bounded by E[Y2 uN(s)]−(E[YuN(s)])2, which is asymptot- ically lower bounded by Assumption 1. Thus we have proved the claim. K.3 Proof of lemmas in Appendix J.3 K.3.1 Proof of Lemma 24 Proof of Lemma 24. We notice by the definition of h(·, c) in (27), and the definition of W1in definition (32), we know hN(MN)→h(W1, c),almost surely, using
continuous mapping theorem, for any c∈[−∞,0]. Based on the Assumption 2, we divide the proof into two cases. 60 1.When ¯e(s, x)is Lipschitz continuous on x.We note by Lemma 17 that |¯e(s, hN(MN))−¯e(s, h(W1, c))|a.s.→0 is true. Moreover, if a nonnegative function fis Lipschitz continuous and the range is in [0 ,1], then f1/2is uniformly continuous. This is because√xis a uniformly continuous function in the compact support [0 ,1]. Thus we apply Lemma 17 again with f= ¯e(s, x) to get |¯e1/2(s, hN(MN))−¯e1/2(s, h(W1, c))|a.s.→0. 2.When ¯e(s, x)takes the formPK k=1ck 1(g(x)∈Ck).For both functions ¯e1/2(s, x) and ¯ e(s, x), we only need to prove that 1(g(hN(MN))∈Ck)− 1(g(h(W1, c))∈Ck)a.s.→0,∀k∈[K] is true. Notice when c=−∞, we know by Assumption 2 that g(−∞) =−∞ ∈ C1. Then we know 1(g(hN(MN))∈C1)− 1(g(h(W1,−∞))∈C1) = 1(g(hN(MN))∈C1)−1a.s.→0. When c∈(−∞,0], we know g(h(W1, c)) is a continuous random variable. Indeed, h(W1, c) is a continuous random variable and gis a continuous function. This means P[g(h(W1, c))∈∂Ck] = 0 since ∂Ckis of Lebesgue measure zero by the definition of Ckin Assumption 2. Then by Lemma 16, we know 1(g(hN(MN))∈Ck)− 1(g(h(W1, c))∈Ck)a.s.→0. This completes the proof. K.3.2 Proof of Lemma 25 Proof of Lemma 25. Recall the definition of E(2) Nas in definition (33) E(2) N= (Σ(2) N)−1/2Λ(2) N, V(2) N, H(2) N,Vec((Σ(2) N)1/2) . Define Λ(2) uN(s)≡(H(2) N(s))1/2(ˆΛ(2) uN(s)−E[YuN(s)]) (V(2) N(s))1/2. Recall Λ(2) N=1 N1/2 2PN2 u=1(Λ(2) uN(0),Λ(2) uN(1)), V(2) N= (VN(0, E(1) N), VN(1, E(1) N)),Σ(2) N= Σ(2) N(E(1) N) and H(2) N= (HN(0, E(1) N), HN(1, E(1) N)). We will use Lemma 15 to prove the result. Define WuN2≡(Σ(2) N)−1/2(Λ(2) uN(0),Λ(2) uN(1)). In order to apply Lemma 15 with N=N2andWuN=WuN2, it suffices to verify the following conditions hold: 1 N2Eh ∥WuN2∥4 2|H(1) Nip→0 andEh ∥WuN2∥3 2|H(1) Ni N1/2 2p→0. (40) Notice we can bound ∥WuN2∥2≤ ∥(Σ(2) N)−1/2∥2X s=0,1(H(2) N(s))1/2 ˆΛ(2) uN(s)−E[YuN(s)] (V(2) N(s))1/2 . (41) Now we prove the claims in (40). 
61 Proof of the first claim in (40) By the bound (41), Eh ∥WuN2∥4 2|H(1) Ni N2≤ ∥(Σ(2) N)−1/2∥4 2X s=0,18(H(2) N(s))2Eh (ˆΛ(2) uN(s)−E[YuN(s)])4|H(1) Ni N2(V(2) N(s))2. It suffices to show ∥(Σ(2) N)−1/2∥2=Op(1) (42) and 1 N2(V(2) N(s))2(H(2) N(s))2Eh (ˆΛ(2) uN(s)−E[YuN(s)])4|H(1) Nip→0, s= 0,1. (43) We separate the proof further into two parts. 1.Proof of (42):It suffices to prove that the eigenvalues of Σ(2) N=Σ(2) N(E(1) N) are uniformly bounded from 0 and ∞. Recall the expression of Σ(2) Nas in (31) and (28) Σ(2) N= (Cov(2) N(E(1) N))2×2. To show this, we only need to show that there exists universal constant Csuch that, adopting the abbreviation ¯ eN(s)≡ HN(s, E(1) N), lim supN→∞|Cov(2) N(E(1) N)|< C < 1. We will apply Lemma 14 to prove the result. Recall Cov(2) N(E(1) N) =(¯eN(0)(1−¯eN(0)))1/2E[YuN(0)]E[YuN(1)] (E[Y2 uN(0)]−¯eN(0)E[YuN(0)]2)1/2(E[Y2 uN(1)]−(1−¯eN(0))E[YuN(1)]2)1/2. Note ¯ eN(s)∈(0,1) almost surely by the clip assumption, lim N→∞E[Yp uN(s)] con- verges for p= 1,2 and s= 0,1 by Assumption 1, and lim inf N→∞Var[YuN(s)]>0 fors= 0,1 by Assumption 1. Then we can apply Lemma 14 with aN= ¯eN(0), XN=YuN(0) and YN=YuN(1), we know lim supN→∞|Cov(2) N(E(1) N)|< C <1 almost surely where Cis a universal constant. 2.Proof
of (43):First consider Eh (ˆΛ(2) uN(s)−E[YuN(s)])4|H(1) Ni ≤8 Eh (ˆΛ(2) uN(s))4|H(1) Ni +E[YuN(s)]4 = 8E[Y4 uN(s)] (H(2) N(s))2H(2) N(s)+ 8E[YuN(s)]4. Then we have (H(2) N(s))2Eh (ˆΛ(2) uN(s)−E[YuN(s)])4|H(1) Ni ≤8E[Y4 uN(s)] H(2) N(s)+ 8(H(2) N(s))2E[YuN(s)]4. Since H(t) N(s)≤1 and by Assumption 1 that E[Y4 uN(s)] is uniformly bounded, we know lim sup N→∞(H(2) N(s))2E[YuN(s)]4<∞ ⇒ lim sup N→∞(H(2) N(s))2E[YuN(s)]4 N2= 0. 62 Now since N¯eN(s,Ht−1)≥NlN→ ∞ by Assumption 3 or 4, we know 1 N2H(2) N(s)E[Y4 uN(s)] =1 N2¯eN(s,H1)E[Y4 uN(s)]p→0. Collecting these results, we have 1 N2(H(2) N(s))2Eh (ˆΛ(2) uN(s)−E[YuN(s)])4|H(1) Nip→0. Since we have proved in Lemma 19 that lim inf N→∞V(2) N(s)>0, we have 1 N2(V(2) N(s))28(H(2) N(s))2Eh (ˆΛ(2) uN(s)−E[YuN(s)])4|H(1) Nip→0. Proof of the second claim in (40).The proof is similar to the proof of the first claim so we omit it. K.3.3 Proof of Lemma 26 Proof of Lemma 26. Recall the expression of E(1) N. We have E(1) N≡ (Σ(1) N)−1/2Λ(1) N, V(1) N, H(1) N,Vec((Σ(1) N)1/2) . The proof can be decomposed to two steps. 1. We first prove that ∥H(1) N−H(1)∥2=o(1),∥V(1) N−V(1)∥2=o(1),∥(Σ(1) N)1/2−(Σ(1))1/2∥F=o(1). (44) 2. Then we prove (Σ(1) N)−1/2Λ(1) Nd→Z, Z∼N(0,I2). (45) Proof of (44):The convergence of H(1) NandV(1) Nare obvious. For Σ(1) N, we use Lemma 13 so that it suffices to prove ∥Σ(1) N−Σ(1)∥F=√ 2|Cov(1) N−Cov(1)|=o(1). To this end, recall the definition of Cov(1) Nas in (31), Cov(1) N=−(H(1) N(0)H(1) N(1))1/2 (V(1) N(0))1/2(V(1) N(1))1/2E[YuN(0)]E[YuN(1)]. Since∥V(1) N−V(1)∥2,∥H(1) N−H(1)∥2=o(1), and 0<lim inf N→∞V(1) N(s)≤lim sup N→∞V(1) N(s)<∞ as proved in Lemma 19, we know |Cov(1) N−Cov(1)|=o(1). The completes the proof for (44). 63 Proof of (45):This can be proved easily by applying Lemma 15. We omit the proof. L Proof of Theorem 2 Given two random variables XandY, the 1-Wasserstein distance is defined as W1(X, Y)≡sup ∥f∥L≤1 E[f(X)]−E[f(Y)] . 
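In one dimension, the 1-Wasserstein distance is easy to estimate from samples. A quick sketch using SciPy (for a pure location shift between two equal-shape distributions, $W_1$ equals the size of the shift, here $0.1$):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=50_000)  # samples from N(0, 1)
y = rng.normal(0.1, 1.0, size=50_000)  # samples from N(0.1, 1), shifted by 0.1

# Empirical W1 distance; for a location shift it approximates |shift| = 0.1
d = wasserstein_distance(x, y)
```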
L.1 Proof preparation for Theorem 2 We will only prove the result for WC U(c) since the other proofs are very similar. We will drop the subscript in w(t) C,U(s) and just write w(t)(s). Similarly, we use W(c) to denote WC U(c). Recall the definition of W(c) as W(c) =2X t=1A(t)(0)w(t)(0)−2X t=1A(t)(1)w(t)(1), w(t)(s) =(2V(t)(s))1/2 2p H(t)(s). To further ease the burden of notation, we define Vp(s)≡limN→∞E[Yp uN(s)]. Then we can rewrite V(2)(s)≡V2(s)−H(2)(s)V2 1(s), where H(2)(s) = min {1−¯l,max{¯l,¯e(s, S((A(1), V(1)), c))}}. We will use V(2)(s, c), H(2)(s, c) to denote V(2)(s) and H(2)(s) stress the dependence onc. In particular, we can write H(2)(s,−∞)≡min{1−¯l,max{¯l,¯e(s,−∞)}}and V(2)(s,−∞)≡V2(s)−H(2)(s,−∞)V2 1(s). Similarly, we can define Cov(2)(−∞) =−(H(2)(0,−∞)H(2)(1,−∞))1/2/(V(2)(0,−∞)V(2)(1,−∞))1/2V1(0)V1(1). Then with V(2)(s,−∞), H(2)(s,−∞), we can define the corresponding weight w(2) −∞(s)≡ (V(2)(s,−∞))1/2/(2H(2)(s,−∞))1/2. Moreover, we define (A(2) −∞(0), A(2) −∞(1))⊤∼N(0,Σ(2) −∞) where Σ(2) −∞≡(Cov(2)(−∞))2×2. Finally, define S2,s≡A(2) −∞(s)w(2) −∞(s). L.2 Proof of Theorem 2 Proof of Theorem 2. We rewrite the random variable W(−∞) as W(−∞) =A(1)(0)w(1)(0)−A(1)(1)w(1)(1) + S2,0−S2,1. Notice that W(−∞) is a Gaussian random variable with mean zero since S2,0−S2,1is independent with A(1)(0)w(1)(0)−A(1)(1)w(1)(1). The proof will be divded into three steps: 64 •We first decompose the desired W1(W(−∞),W(c)) distance into different pieces and bound different pieces by W1distances; •We bound the W1distance by |w(2)(s)−w(2) −∞(s)|and further obtain a bound |w(2)(s)−w(2) −∞(s)|, which just involves |¯e(s,
S((A(1), V(1)), c))−¯e(s,−∞)|; •We collect all the results to prove the claim. Decomposition of W1(W(−∞),W(c)).Now we first decompose the KS distance into two parts, using the triangle inequality: W1(W(−∞),W(c))≤W1(A(2)(0)w(2)(0), S2,0) +W1(A(2)(1)w(2)(1), S2,1)≡K0+K1. By triangle inequality, we have for s∈ {0,1}, Ks≤W1(A(2)(s)w(2)(s), A(2)(s)w(2) −∞(s)) +W1(A(2)(s)w(2) −∞(s), S2,s). In fact, W1(A(2)(s)w(2) −∞(s), S2,s) = 0 since w(2) −∞(s) is a constant and A(2)(s), A(2) −∞(s) have the same distribution. It suffices to study W1(A(2)(s)w(2)(s), A(2)(s)w(2) −∞(s)). Bounding W1(A(2)(s)w(2)(s), A(2)(s)w(2) −∞(s)).We compute W1(A(2)(s)w(2)(s), A(2)(s)w(2) −∞(s)) = sup ∥f∥L≤1 E f(A(2)(s)w(2)(s)) −Eh f(A(2)(s)w(2) −∞(s))i ≤Eh |A(2)(s)w(2)(s)−A(2)(s)w(2) −∞(s)|i ≤q E[|A(2)(s)|2]E[|w(2)(s)−w(2) −∞(s)|2] =q E[|w(2)(s)−w(2) −∞(s)|2]. Define D(c,−∞)≡(2V(2)(s, c)H(2)(s,−∞))1/2+ (2V(2)(s,−∞)H(2)(s, c))1/2. Notice that when Nis large, D(c,−∞)≥(2V(2)(s, c)H(2)(s,−∞))1/2≥(2 lim inf N→∞Var[YuN(s)]¯l)1/2. Then we can bound w(2)(s)−w(2) −∞(s) = (2V(2)(s, c)H(2)(s,−∞))1/2−(2V(2)(s,−∞)H(2)(s, c))1/2 2(H(2)(s, c)H(2)(s,−∞))1/2 = V(2)(s, c)H(2)(s,−∞)−V(2)(s,−∞)H(2)(s, c) (H(2)(s, c)H(2)(s,−∞))1/2 1 D(c,−∞)(46) ≲ V(2)(s, c)H(2)(s,−∞)−V(2)(s,−∞)H(2)(s, c) (H(2)(s, c)H(2)(s,−∞))1/2 . 65 Next, notice that min {H(2)(s,−∞), H(2)(s, c)} ≥¯l, so that we can futher bound w(2)(s)−w(2) −∞(s) ≲ V(2)(s, c)H(2)(s,−∞)−V(2)(s,−∞)H(2)(s, c) ≲V(2)(s, c)|H(2)(s,−∞)−H(2)(s, c)|+|V(2)(s, c)−V(2)(s,−∞)|H(2)(s, c). We now further bound the RHS by the quantity invovling just |¯e(s, S((A(1), V(1)), c))− ¯e(s,−∞)|. Then we show respectively that 1.|H(2)(s, c)−H(2)(s,−∞)| ≤ |¯e(s, S((A(1), V(1)), c))−¯e(s,−∞)|; 2.|V(2)(s, c)−V(2)(s,−∞)| ≤V2 1(s)|¯e(s, S((A(1), V(1)), c))−¯e(s,−∞)|. ForH(2)(s, c)−H(2)(s,−∞), the claim is true by the Lipschitz property of min {1− lN,max{lN, x}}. 
For $V^{(2)}(s,c) - V^{(2)}(s,-\infty)$, we can compute
$$|V^{(2)}(s,c) - V^{(2)}(s,-\infty)| = V_1^2(s)\, |H^{(2)}(s,c) - H^{(2)}(s,-\infty)| \le V_1^2(s)\, |\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty)|.$$
By Lemma 19, we know $V_2(s) - V_1^2(s) \lesssim V^{(2)}(s,c) \lesssim V_2(s)$. By the definition of $H^{(2)}(s,c)$ we know $H^{(2)}(s,c) \in (0,1)$. Then, combining bound (46), we have
$$\big| w^{(2)}(s) - w^{(2)}_{-\infty}(s) \big| \lesssim (V_2(s) + V_1^2(s))\, |\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty)|.$$
Therefore, we obtain the bound
$$W_1(A^{(2)}(s)w^{(2)}(s),\, A^{(2)}(s)w^{(2)}_{-\infty}(s)) \le \sqrt{\mathbb{E}[|w^{(2)}(s) - w^{(2)}_{-\infty}(s)|^2]} \lesssim \big( \mathbb{E}[|\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty)|^2] \big)^{1/2}.$$

Concluding the proof: Therefore we can bound
$$K_s \le W_1(A^{(2)}(s)w^{(2)}(s),\, A^{(2)}(s)w^{(2)}_{-\infty}(s)) \lesssim \big( \mathbb{E}[|\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty)|^2] \big)^{1/2},$$
and thus we conclude
$$W_1(W(-\infty), W(c)) \le C \sum_{s=0,1} \big( \mathbb{E}[|\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty)|^2] \big)^{1/2}.$$
Now we prove the convergence of $\mathbb{E}[|\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty)|^2]$ as $c \to -\infty$. This holds by the dominated convergence theorem, since $\bar e(s, S((A^{(1)}, V^{(1)}), c)) - \bar e(s,-\infty) \to 0$ by Assumption 2.

M Proof of Theorem 3

M.1 Necessary definitions

Random vectors. We define the random vector
$$W^{(1,b)} \equiv \big( S_1^{(b)},\, \hat V^{(1)},\, H^{(1)},\, \mathrm{Vec}((\hat\Sigma^{(1)})^{1/2}) \big),$$
where $H^{(1)} = (H^{(1)}(0), H^{(1)}(1))$ is defined as in (30), and
$$W^{(2,b)} \equiv \big( S_2^{(b)},\, \hat V^{(2,b)},\, \hat H^{(2,b)},\, \mathrm{Vec}((\hat\Sigma^{(2,b)})^{1/2}) \big),$$
where $\hat H^{(2,b)} = (\hat H^{(2,b)}(0), \hat H^{(2,b)}(1))$ and $\hat V^{(1)} \equiv (\hat V^{(1)}(0), \hat V^{(1)}(1))$.

Random functions. For $x \in \mathbb{R}^{10}$, recalling the definition of the function $h$ in (27), the weight function is defined as
$$\hat H^{(2,b)}(s,x) \equiv \begin{cases} \bar e(s, h(x,0)) & \text{under Assumption 4}, \\ \min\{1-\bar l,\, \max\{\bar l,\, \bar e(s, h(x,0))\}\} & \text{under Assumption 3}, \end{cases}$$
and the variance function is defined as
$$\hat V^{(2,b)}(s,x) \equiv \hat{\mathbb{E}}[Y_{uN}^2(s)] - \hat H^{(2,b)}(s,x)\, (\hat{\mathbb{E}}[Y_{uN}(s)])^2.$$
Slightly abusing notation, we define $\hat H^{(2,b)}(x) \equiv (\hat H^{(2,b)}(0,x), \hat H^{(2,b)}(1,x))$, the random variance vector function $\hat V^{(2,b)}(x) \equiv (\hat V^{(2,b)}(0,x), \hat V^{(2,b)}(1,x))$, the covariance function
$$\hat{\mathrm{Cov}}^{(2,b)}(x) \equiv -\frac{(\hat H^{(2,b)}(0,x)\, \hat H^{(2,b)}(1,x))^{1/2}}{(\hat V^{(2,b)}(0,x)\, \hat V^{(2,b)}(1,x))^{1/2}}\, \hat{\mathbb{E}}[Y_{uN}(0)]\, \hat{\mathbb{E}}[Y_{uN}(1)],$$
and the covariance matrix function $\hat\Sigma^{(2,b)}(x) \equiv (\hat{\mathrm{Cov}}^{(2,b)}(x))_{2\times 2}$.
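The covariance matrix function $\hat\Sigma^{(2,b)}(x) \equiv (\hat{\mathrm{Cov}}^{(2,b)}(x))_{2\times 2}$ just defined drives the second-stage draw $\tilde A^{(2,b)} = (\hat\Sigma^{(2,b)})^{1/2} S_2^{(b)}$ of Algorithm 2. A minimal sketch of that sampling step, with a hypothetical covariance value `rho` in place of the estimated $\hat{\mathrm{Cov}}^{(2,b)}$:

```python
import numpy as np

rho = -0.4  # hypothetical value of the estimated covariance Cov^(2)
sigma2 = np.array([[1.0, rho],
                   [rho, 1.0]])  # (Cov)_{2x2}: unit variances, off-diagonal rho

# Symmetric square root via the eigendecomposition of the 2x2 matrix
vals, vecs = np.linalg.eigh(sigma2)
sqrt_sigma2 = vecs @ np.diag(np.sqrt(vals)) @ vecs.T

rng = np.random.default_rng(1)
s2 = rng.standard_normal((2, 100_000))  # S_2^(b) ~ N(0, I_2), one column per resample
a2 = sqrt_sigma2 @ s2                   # A^(2,b) = Sigma^{1/2} S_2^(b)

emp_cov = np.cov(a2)                    # should be close to sigma2
```

With many resamples the empirical covariance of the draws recovers the target matrix, which is the property the bootstrap procedure relies on.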
Last, define the functions
$$W^{(2,b)}(x) \equiv \big( S_2^{(b)},\, \hat V^{(2,b)}(x),\, \hat H^{(2,b)}(x),\, \mathrm{Vec}((\hat\Sigma^{(2,b)}(x))^{1/2}) \big),$$
$$W^{(a,b)}(x) \equiv \big( Z_2,\, \hat V^{(2,b)}(x),\, \hat H^{(2,b)}(x),\, \mathrm{Vec}((\hat\Sigma^{(2,b)}(x))^{1/2}) \big),$$
where $Z_2$ is defined as in (32).

M.2 Proof of Theorem 3

Proof of Theorem 3. Recall $W$ as defined in (32). We use Lemma 18 to prove the result, with $W_N = (W^{(1,b)}, W^{(2,b)})$ and $W = W$. The continuous function $g$ is chosen as in Step 3 in Section J.1. Thus it suffices
to prove that the following statement is true:
$$\mathbb{E}[f(W^{(1,b)}, W^{(2,b)}) \mid \mathcal{G}_N] - \mathbb{E}[f(W)] \overset{p}{\to} 0, \quad \text{for any } f \text{ such that } \|f\|_{BL} < \infty.$$
Consider the following decomposition:
$$\mathbb{E}[f(W^{(1,b)}, W^{(2,b)}) \mid \mathcal{G}_N] - \mathbb{E}[f(W)] = \mathbb{E}[f(W^{(1,b)}, W^{(2,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - \mathbb{E}[f(W_1, W_2)]$$
$$= \mathbb{E}[f(W_1, W^{(a,b)}(W_1)) \mid \mathcal{G}_N] - \mathbb{E}[f(W_1, W_2)]$$
$$\quad + \mathbb{E}[f(W^{(1,b)}, W^{(a,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - \mathbb{E}[f(W_1, W^{(a,b)}(W_1)) \mid \mathcal{G}_N]$$
$$\quad + \mathbb{E}[f(W^{(1,b)}, W^{(2,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - \mathbb{E}[f(W^{(1,b)}, W^{(a,b)}(W^{(1,b)})) \mid \mathcal{G}_N]$$
$$\equiv M_1 + M_2 + M_3.$$
We prove $M_1, M_2, M_3 = o_p(1)$ respectively.

1. Proof of $M_1 = o_p(1)$. Notice $(W_1, W_2) \perp\!\!\!\perp \mathcal{G}_N$. Then it suffices to prove
$$\mathbb{E}[f(W_1, W^{(a,b)}(W_1)) \mid \mathcal{G}_N] - \mathbb{E}[f(W_1, W_2) \mid \mathcal{G}_N] = o_p(1).$$
Then, by the Lipschitz property and boundedness of $f$ and Lemma 17 (applied with $\mathcal{F}_N = \mathcal{G}_N$), it suffices to prove
$$\| W^{(a,b)}(W_1) - W_2 \|_2 \overset{p}{\to} 0.$$
In other words, we need to show, by Lemma 13,
$$\|\hat V^{(2,b)}(W_1) - V^{(2)}\|_2 \overset{p}{\to} 0, \quad \|\hat H^{(2,b)}(W_1) - H^{(2)}\|_2 \overset{p}{\to} 0, \quad \|\hat\Sigma^{(2,b)} - \Sigma^{(2)}\|_F \overset{p}{\to} 0.$$
We can compute $\|\hat\Sigma^{(2,b)} - \Sigma^{(2)}\|_F = \sqrt{2}\, |\hat{\mathrm{Cov}}^{(2,b)} - \mathrm{Cov}^{(2)}|$. We observe that if we can prove $\|\hat H^{(2,b)}(W_1) - H^{(2)}\|_2 \overset{p}{\to} 0$, the other claims follow by the consistency of $\hat{\mathbb{E}}[Y_{uN}(s)]$ and $\hat{\mathbb{E}}[Y_{uN}^2(s)]$. However, observing that $\hat H^{(2,b)}(W_1) = H^{(2)}$, we know the claim is true.

2. Proof of $M_2 = o_p(1)$. Define $W^{(c,b)} \equiv (S_1^{(b)}, V^{(1)}, H^{(1)}, \mathrm{Vec}((\Sigma^{(1)})^{1/2}))$. Since
$$(W^{(c,b)}, W^{(a,b)}(W^{(c,b)})) \mid \mathcal{G}_N \overset{d}{=} (W_1, W^{(a,b)}(W_1)) \mid \mathcal{G}_N,$$
it suffices to show
$$\mathbb{E}[f(W^{(1,b)}, W^{(a,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - \mathbb{E}[f(W^{(c,b)}, W^{(a,b)}(W^{(c,b)})) \mid \mathcal{G}_N] = o_p(1).$$
By the Lipschitz property and boundedness of $f$ and Lemma 17, it suffices to show $\|W^{(1,b)} - W^{(c,b)}\|_2 = o_p(1)$ and
$$\|\hat V^{(2,b)}(W^{(1,b)}) - \hat V^{(2,b)}(W^{(c,b)})\|_2 \overset{p}{\to} 0, \quad \|\hat H^{(2,b)}(W^{(1,b)}) - \hat H^{(2,b)}(W^{(c,b)})\|_2 \overset{p}{\to} 0,$$
and, by Lemma 13,
$$\|\hat\Sigma^{(2,b)}(W^{(1,b)}) - \hat\Sigma^{(2,b)}(W^{(c,b)})\|_F = \sqrt{2}\, |\hat{\mathrm{Cov}}^{(2,b)}(W^{(1,b)}) - \hat{\mathrm{Cov}}^{(2,b)}(W^{(c,b)})| \overset{p}{\to} 0.$$
By the consistency of $\hat{\mathbb{E}}[Y_{uN}(s)]$ and $\hat{\mathbb{E}}[Y_{uN}^2(s)]$ proved in Lemma 21, it suffices to show
$$\|W^{(1,b)} - W^{(c,b)}\|_2 \overset{p}{\to} 0, \quad \|\hat H^{(2,b)}(W^{(1,b)}) - \hat H^{(2,b)}(W^{(c,b)})\|_2 \overset{p}{\to} 0.$$
We prove these two claims subsequently.

Proof of $\|W^{(1,b)} - W^{(c,b)}\|_2 = o_p(1)$: It suffices, and it is easy, to show
$$\|\hat V^{(1)} - V^{(1)}\|_2 = o_p(1), \quad \|\hat\Sigma^{(1)} - \Sigma^{(1)}\|_F = o_p(1).$$
(47)

Proof of $\|\hat H^{(2,b)}(W^{(1,b)}) - \hat H^{(2,b)}(W^{(c,b)})\|_2 = o_p(1)$: By the Lipschitz property of $x \mapsto \min\{1-l_N, \max\{l_N, x\}\}$, it suffices to show, for $s = 0,1$,
$$|\bar e(s, h(W^{(1,b)}, 0)) - \bar e(s, S(((\Sigma^{(1)})^{1/2}S_1^{(b)}, V^{(1)}), 0))| = |\bar e(s, h(W^{(1,b)}, 0)) - \bar e(s, h(W^{(c,b)}, 0))| \overset{p}{\to} 0.$$
Depending on the smoothness of $\bar e(s,x)$, we divide the proof into two cases based on Assumption 2.

- When $\bar e(s,x)$ is Lipschitz continuous in $x$: It suffices to show
$$|h(W^{(1,b)}, 0) - h(W^{(c,b)}, 0)| \overset{p}{\to} 0. \quad (48)$$
By the definition of $h$ in (27), it suffices to prove, by the continuous mapping theorem, that
$$\|(\hat\Sigma^{(1)})^{1/2}S_1^{(b)} - (\Sigma^{(1)})^{1/2}S_1^{(b)}\|_2 \overset{p}{\to} 0, \quad \|\hat V^{(1)} - V^{(1)}\|_2 \overset{p}{\to} 0.$$
This follows from result (47) and
$$\|(\hat\Sigma^{(1)})^{1/2}S_1^{(b)} - (\Sigma^{(1)})^{1/2}S_1^{(b)}\|_2 \le \|S_1^{(b)}\|_2\, \|(\hat\Sigma^{(1)})^{1/2} - (\Sigma^{(1)})^{1/2}\|_2 \le \|S_1^{(b)}\|_2\, \|(\hat\Sigma^{(1)})^{1/2} - (\Sigma^{(1)})^{1/2}\|_F \le \|S_1^{(b)}\|_2\, \big(2\|\hat\Sigma^{(1)} - \Sigma^{(1)}\|_F\big)^{1/2}.$$

- When $\bar e(s,x) = \sum_{k=1}^K c_k \mathbb{1}(g(x) \in C_k)$: It suffices to prove, for any $k \in [K]$,
$$|\mathbb{1}(g(h(W^{(1,b)}, 0)) \in C_k) - \mathbb{1}(g(h(W^{(c,b)}, 0)) \in C_k)| \overset{p}{\to} 0.$$
Then, with result (48), applying Lemma 16, we know the claim is true, since $g(h(W^{(c,b)}, 0))$ is a continuous random variable and $\mathbb{P}[g(h(W^{(c,b)}, 0)) \in \partial C_k] = 0$.

3. Proof of $M_3 = o_p(1)$. It is easy to see that $W^{(2,b)} \mid W^{(1,b)}, \mathcal{G}_N \overset{d}{=} W^{(a,b)} \mid W^{(1,b)}, \mathcal{G}_N$, so the following statement holds almost surely:
$$\mathbb{E}[f(W^{(1,b)}, W^{(2,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - \mathbb{E}[f(W^{(1,b)}, W^{(a,b)}(W^{(1,b)})) \mid \mathcal{G}_N] = 0.$$

N Extension

N.1 Extension to m = 1

In some adaptive experimental designs, unpromising treatments are dropped in the second stage, a strategy known as the "drop-the-loser" approach (Sampson et al., 2005; Sill et al., 2009). In such cases, the sampling probability for one of the treatment arms is set to exactly zero in the second stage. Recall Assumptions 3 and 4; neither allows the sampling probability of any treatment to be exactly zero in the follow-up stage. In this section, we further extend the results in Theorem 1 to incorporate the weighting $m = 1$ in $h_N^{(t)}(s) = \bar e_N^m(s, \mathcal{H}_{t-1})/N^{1/2}$ considered in the general estimator (2). In fact, we can write the resulting test statistic as
$$W_{IPW}(s) = \frac{\sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)\, Y_{uN}^{(t)}}{\sum_{t=1}^2 N_t\, \bar e_N(s, \mathcal{H}_{t-1})}.$$
(49)

Consider the following assumption and theorem.

Assumption 5 (Adaptive weighting with m = 1). Suppose adaptive weighting ($m = 1$) is used and the clipping rate is $l_N = 0$, as in Assumption 2.

Theorem
4 (Adaptive weighting with m = 1). Suppose Assumptions 1–2 and Assumption 5 hold. Then, for any $s \in \{0,1\}$, we have $W_{IPW}(s) - \mathbb{E}[Y_{uN}(s)] = o_p(1)$. Furthermore, define
$$M^{(t)}(s) \equiv \left( \frac{q_t H^{(t)}(s)}{\sum_{t=1}^2 q_t H^{(t)}(s)} \right)^{2} \quad \text{and} \quad \bar w^{(t)}(s) = \big( M^{(t)}(s)/(R^{(t)}(s))^{2} \big)^{1/2}.$$
Then, considering the test statistic (49), we have
$$\sqrt{N}\, (W_{IPW}(s) - \mathbb{E}[Y_{uN}(s)]) \overset{d}{\to} \sum_{t=1}^2 A^{(t)}(s)\, \bar w^{(t)}(s),$$
$$\sqrt{N}\, \{T_N - (\mathbb{E}[Y_{uN}(0)] - \mathbb{E}[Y_{uN}(1)])\} \overset{d}{\to} \sum_{t=1}^2 A^{(t)}(0)\, \bar w^{(t)}(0) - \sum_{t=1}^2 A^{(t)}(1)\, \bar w^{(t)}(1).$$
Defining $w^{(t)}(s) = \bar w^{(t)}(s) / (\sum_{s=0}^1 \sum_{t=1}^2 (\bar w^{(t)}(s))^2)^{1/2}$, we then have
$$\frac{W_N - \sqrt{N}\, (\mathbb{E}[Y_{uN}(0)] - \mathbb{E}[Y_{uN}(1)])}{(N\hat V_N(0) + N\hat V_N(1))^{1/2}} \overset{d}{\to} \sum_{t=1}^2 A^{(t)}(0)\, w^{(t)}(0) - \sum_{t=1}^2 A^{(t)}(1)\, w^{(t)}(1).$$

Theorem 4 demonstrates that the $m = 1$ weighting can accommodate early-dropping experiments. We now comment on the proof of Theorem 4.

Remark 8 (Comment on the proof of Theorem 4). The proof of Theorem 4 is similar to the proof of Theorem 1, following the general proof roadmap sketched in Appendix J.1. The difference arises in Step 2, where we need to show the weak convergence of the random vector $(E_N^{(1)}, E_N^{(2)})$, where
$$E_N^{(t)} = \frac{1}{N_t^{1/2}} \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)\, (Y_{uN}^{(t)} - H_N^{(t)}\mathbb{E}[Y_{uN}^{(t)}]), \qquad H_N^{(t)} = \bar e_N(s, \mathcal{H}_{t-1}).$$
The other proofs are similar.

N.2 Extension to test statistics with augmentation

When data are generated independently, as in non-adaptive experiments, the efficiency of the IPW estimator may be improved by augmenting the statistic with a consistent sample-mean estimator. In particular, we consider the weighted augmented inverse probability weighted (WAIPW) estimator, $W_{AIPW}(s)$, defined as
$$\sum_{t=1}^2 \frac{N_t h_N^{(t)}(s)}{\sum_{t=1}^2 N_t h_N^{(t)}(s)} \left( \frac{1}{N_t} \sum_{u=1}^{N_t} \frac{\mathbb{1}(A_{uN}^{(t)} = s)\, (Y_{uN}^{(t)} - \hat{\mathbb{E}}[Y_{uN}(s)])}{\bar e_N(s, \mathcal{H}_{t-1})} + \hat{\mathbb{E}}[Y_{uN}(s)] \right). \quad (50)$$
Furthermore, define $W_{IPWa}(s)$ as
$$\sum_{t=1}^2 \frac{N_t h_N^{(t)}(s)}{\sum_{t=1}^2 N_t h_N^{(t)}(s)} \left( \frac{1}{N_t} \sum_{u=1}^{N_t} \frac{\mathbb{1}(A_{uN}^{(t)} = s)\, (Y_{uN}^{(t)} - \mathbb{E}[Y_{uN}(s)])}{\bar e_N(s, \mathcal{H}_{t-1})} \right) + \mathbb{E}[Y_{uN}(s)].$$
It is not hard to show that $\sqrt{N}(W_{AIPW}(s) - W_{IPWa}(s))$ converges to 0 in probability as long as $\hat{\mathbb{E}}[Y_{uN}(s)] - \mathbb{E}[Y_{uN}(s)] \overset{p}{\to} 0$. Then we can apply Theorem 1 and Theorem 4 with $Y_{uN} = Y_{uN} - \mathbb{E}[Y_{uN}(s)]$ to prove that $\sqrt{N}(W_{IPWa}(s) - \mathbb{E}[Y_{uN}(s)])$ converges weakly.
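The IPW and augmented IPW estimators above are simple to compute on a two-stage data set. The sketch below is a minimal illustration with hypothetical values: the true mean `mu` and the stage-wise probabilities `e1`, `e2` are made up, and `e2` is held fixed here even though it would be chosen from stage-1 data in the actual design:

```python
import numpy as np

rng = np.random.default_rng(2)
n1 = n2 = 5_000
mu = 0.3            # hypothetical E[Y_uN(s)] for the arm s of interest
e1, e2 = 0.5, 0.7   # stage-wise assignment probabilities e_N(s, H_{t-1})

a1 = rng.random(n1) < e1            # indicator 1(A_uN^(1) = s)
y1 = mu + rng.standard_normal(n1)
a2 = rng.random(n2) < e2            # indicator 1(A_uN^(2) = s)
y2 = mu + rng.standard_normal(n2)

# IPW with m = 1 weighting, as in (49): pooled arm-s outcomes over expected counts
d = n1 * e1 + n2 * e2
ipw = (y1[a1].sum() + y2[a2].sum()) / d

# Augmented version in the spirit of (50), with a stage-1 plug-in mean estimate
mu_hat = y1[a1].mean()
aipw = ((n1 * e1) / d) * ((a1 * (y1 - mu_hat)).sum() / (n1 * e1) + mu_hat) \
     + ((n2 * e2) / d) * ((a2 * (y2 - mu_hat)).sum() / (n2 * e2) + mu_hat)
```

Both estimates land close to the true mean, and their difference is of smaller order, consistent with the asymptotic equivalence discussed above.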
Similarly, we can show the weak limits of $T_N$ and $W_N$.

Asymptotic equivalence between $W_{AIPW}(s)$ and the sample mean. We can show that when $m = 1$, $W_{AIPW}(s)$ is asymptotically equivalent to the sample mean estimator, $SM(s)$, defined as
$$SM(s) \equiv \frac{\sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)\, Y_{uN}^{(t)}}{\sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)}.$$
By convention, we define $0/0 = 1$. We formalize the equivalence claim in the following lemma.

Lemma 27 (Asymptotic equivalence between the sample mean and $W_{AIPW}(s)$). Suppose $m = 1$ and $\hat{\mathbb{E}}[Y_{uN}(s)] - \mathbb{E}[Y_{uN}(s)] \overset{p}{\to} 0$. Furthermore, suppose Assumption 1 holds. Then we have $\sqrt{N}(SM(s) - W_{AIPW}(s)) \overset{p}{\to} 0$.

The proof of Lemma 27 can be found in Appendix N.5.

N.3 Extension to selection with nuisance parameter

In practice, the selection algorithm can depend on statistics beyond $S_N^{(1)}(0) - S_N^{(1)}(1)$. One example is when the selection depends on an interim p-value, which involves a standard deviation estimate and thus goes beyond the difference of the two statistics. The following theorem shows that our results can be extended to such cases.

Theorem 5 (Extension to selection algorithms with a nuisance parameter). Suppose the follow-up stage sampling probability is given by
$$S_N(S_N^{(1)}(0) - S_N^{(1)}(1)) = \min\{1 - l_N,\, \max\{l_N,\, \bar e(0, (S_N^{(1)}(0) - S_N^{(1)}(1))/\hat\sigma)\}\}, \quad (51)$$
where $\hat\sigma \in (0, \infty)$. If there exists $\sigma > 0$ such that $\hat\sigma \overset{p}{\to} \sigma \in (0,\infty)$, then the conclusion of Theorem 1 still holds.

Proof sketch for Theorem 5. The proof is similar to the proof of Theorem 1. In particular, it follows the general
proof roadmap sketched in Appendix J.1. We need to choose an appropriate joint vector $E_N^{(t)}$ to accommodate the nuisance parameter $\hat\sigma$. In particular, keeping $E_N^{(1)}$ as defined in (33), we can define the new $E_N^{(2)}$ as
$$E_N^{(2)} = \big( (\Sigma_N^{(2)})^{-1/2}\Lambda_N^{(2)},\, V_N^{(2)},\, H_N^{(2)},\, \mathrm{Vec}((\Sigma_N^{(2)})^{1/2}) \big),$$
where $H_N^{(2)}$ is defined as in (51). The other proofs are similar.

N.4 Extension to adaptive experiments with stopping time

We outline how our main results can be extended to adaptive experiments involving a stopping time. Specifically, we consider a stopping time $\tau$ that depends on the quantity $S_N^{(1)}(0) - S_N^{(1)}(1)$. To describe the stopping criterion, we define the event
$$\mathcal{E} \equiv \{ D(S_N^{(1)}(0) - S_N^{(1)}(1)) \in [0, \beta] \subset \mathbb{R} \},$$
where $D$ is a decision rule and $\beta > 0$ lies on the decision boundary. The stopping time is then defined as $\tau \equiv \mathbb{1}(\mathcal{E})$, where $\tau = 1$ indicates continuation of the experiment and $\tau = 0$ indicates early termination. This setup captures scenarios in which strong preliminary evidence warrants stopping the experiment at the pilot stage. For example, if $D(x) = x$ and $\beta$ is large, then $\tau = 0$ when the evidence in favor of treatment 0 over treatment 1 is sufficiently strong. The test statistic can then be written as
$$W_{IPW}(s) = \frac{N_1 h_N^{(1)}(s)}{N_1 h_N^{(1)}(s) + N_2 h_N^{(2)}(s)\mathbb{1}(\mathcal{E})}\, \hat\Lambda_N^{(1)}(s) + \frac{N_2 h_N^{(2)}(s)\mathbb{1}(\mathcal{E})}{N_1 h_N^{(1)}(s) + N_2 h_N^{(2)}(s)\mathbb{1}(\mathcal{E})}\, \hat\Lambda_N^{(2)}(s). \quad (52)$$

Sketched derivation of the weak limit. The weak limit of $W_{IPW}(s)$ defined in (52) can be derived following a proof roadmap similar to the one sketched in Appendix J.1. In particular, we need to derive the joint weak convergence of the random vector including $\hat\Lambda_N^{(1)}(s)$, $\hat\Lambda_N^{(2)}(s)$, $h_N^{(1)}(s)$, $h_N^{(2)}(s)\mathbb{1}(\mathcal{E})$.

N.5 Proof of Lemma 27

Proof of Lemma 27. When $m = 1$, we can write $W_{AIPW}(s)$ as
$$W_{AIPW}(s) = \frac{\sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)\, (Y_{uN}^{(t)} - \hat{\mathbb{E}}[Y_{uN}(s)])}{\sum_{t=1}^2 N_t\, \bar e_N(s, \mathcal{H}_{t-1})} + \hat{\mathbb{E}}[Y_{uN}(s)].$$
For ease of notation, define
$$\mathcal{I}_N(s) \equiv \sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s) \quad \text{and} \quad \mathcal{Y}_N(s) \equiv \sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)\, Y_{uN}^{(t)}.$$
Also, define
$$\mathcal{R}_N(s) = \mathcal{Y}_N(s) - \sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)\, \mathbb{E}[Y_{uN}^{(t)}].$$
Then we can write the difference as
$$SM(s) - W_{AIPW}(s) = \left( \frac{\mathcal{Y}_N(s)}{\mathcal{I}_N(s)} - \hat{\mathbb{E}}[Y_{uN}(s)] \right) \left( 1 - \frac{\mathcal{I}_N(s)}{\sum_{t=1}^2 N_t\, \bar e_N(s, \mathcal{H}_{t-1})} \right) \equiv A_N(s) \times B_N(s).$$
It suffices to prove $A_N(s) = o_p(1)$ and $B_N(s) = O_p(1/\sqrt{N})$.

Proof of $A_N(s) = o_p(1)$. By the definition of $\mathcal{I}_N(s)$, we can write
$$A_N(s) = \frac{\mathcal{R}_N(s)}{\mathcal{I}_N(s)} + \mathbb{E}[Y_{uN}(s)] - \hat{\mathbb{E}}[Y_{uN}(s)] \equiv A_{1N}(s) + \mathbb{E}[Y_{uN}(s)] - \hat{\mathbb{E}}[Y_{uN}(s)].$$
Since, by assumption, $\mathbb{E}[Y_{uN}(s)] - \hat{\mathbb{E}}[Y_{uN}(s)] = o_p(1)$, it suffices to prove that $\mathrm{Var}[A_{1N}(s)] = o(1)$. In fact, we can show something stronger: $\mathrm{Var}[\sqrt{N} A_{1N}(s)] = O(1)$. Since the denominator $\mathcal{I}_N(s) = \sum_{t=1}^2 \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)$ can be 0, we divide the proof into three cases. Define $\mathcal{I}_N^{(t)}(s) \equiv \sum_{u=1}^{N_t} \mathbb{1}(A_{uN}^{(t)} = s)$.

1. When $\mathcal{I}_N(s) = 0$. In this case, we have $A_{1N}(s) = 1$. However, this event is exponentially unlikely, since $\mathcal{I}_N(s) \ge \mathcal{I}_N^{(1)}(s) = \sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s)$ and $\mathbb{E}[\mathcal{I}_N^{(1)}(s)] = N_1 e(s)$. Therefore, $\mathrm{Var}[\sqrt{N} A_{1N}(s)\, \mathbb{1}(\mathcal{I}_N(s) = 0)] \to 0$.

2. When $\mathcal{I}_N(s) > 0$ but $\mathcal{I}_N^{(1)}(s) = 0$. In this case, we have $|A_{1N}(s)| \le |Y_{uN}^{(2)}(s) - \mathbb{E}[Y_{uN}(s)]|$. Then we have
$$\mathrm{Var}[\sqrt{N} A_{1N}(s)\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) = 0)] \le \mathbb{E}[N A_{1N}^2(s)\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) = 0)] = N\, \mathrm{Var}[Y_{uN}^{(2)}(s)]\, \mathbb{P}[\mathcal{I}_N^{(1)}(s) = 0].$$
Since $\mathbb{P}[\mathcal{I}_N^{(1)}(s) = 0] \to 0$ exponentially fast, we have $\mathrm{Var}[\sqrt{N} A_{1N}(s)\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) = 0)] \to 0$.

3. When $\mathcal{I}_N^{(1)}(s) > 0$. We compute
$$\mathrm{Var}[A_{1N}(s)\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0)] \le \mathbb{E}\left[ \frac{(\mathcal{R}_N(s))^2}{(\mathcal{I}_N(s))^2}\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0) \right].$$
Since $\mathcal{I}_N(s) \ge \mathcal{I}_N^{(1)}(s) = \sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s)$, we know
$$\mathbb{E}\left[ \frac{(\mathcal{R}_N(s))^2}{(\mathcal{I}_N(s))^2}\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0) \right] \le \mathbb{E}\left[ \frac{(\mathcal{R}_N(s))^2}{(\mathcal{I}_N^{(1)}(s))^2}\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0) \right] = \mathbb{E}\left[ \frac{\mathbb{E}\big[(\mathcal{R}_N(s))^2 \mid \mathcal{H}_1\big]}{(\sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s))^2}\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0) \right].$$
Further, we can decompose
$$\mathbb{E}\big[(\mathcal{R}_N(s))^2 \mid \mathcal{H}_1\big] = \sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s)(Y_{uN}^{(1)} - \mathbb{E}[Y_{uN}(s)])^2 + N_2\, \bar e_N(s, \mathcal{H}_1)\, \mathrm{Var}[Y_{uN}(s)]$$
$$\le \sum_{u=1}^{N_1} (Y_{uN}^{(1)} - \mathbb{E}[Y_{uN}(s)])^2 + N_2\, \bar e_N(s, \mathcal{H}_1)\, \mathrm{Var}[Y_{uN}(s)].$$
Then define
$$A_{2N}(s) \equiv \mathbb{E}\left[ \frac{\sum_{u=1}^{N_1} (Y_{uN}^{(1)} - \mathbb{E}[Y_{uN}(s)])^2}{(\sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s))^2}\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0) \right] \quad \text{and} \quad A_{3N}(s) \equiv \mathbb{E}\left[ \frac{N_2\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0)}{(\sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s))^2} \right] \mathrm{Var}[Y_{uN}(s)].$$
It suffices to prove that $A_{2N}(s) = O(1/N)$ and $A_{3N}(s) = O(1/N)$. We first prove the claim for $A_{2N}(s)$. By the Cauchy–Schwarz inequality, we have
$$(A_{2N}(s))^2 \le \mathbb{E}\left[ \left( \frac{\sum_{u=1}^{N_1} (Y_{uN}^{(1)} - \mathbb{E}[Y_{uN}(s)])^2}{N_1} \right)^2 \right] \mathbb{E}\left[ \left( \frac{N_1\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0)}{(\sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s))^2} \right)^2 \right]$$
$$\le \mathbb{E}[(Y_{uN}^{(1)} - \mathbb{E}[Y_{uN}(s)])^4]\; \mathbb{E}\left[ \left( \frac{4N_1}{(1 + \sum_{u=1}^{N_1} \mathbb{1}(A_{uN}^{(1)} = s))^2} \right)^2 \right].$$
For the first term, by Assumption 1, we have $\mathbb{E}[(Y_{uN}^{(1)} - \mathbb{E}[Y_{uN}(s)])^4] = O(1)$. For the second term, we use the following lemma to conclude the proof.

Lemma 28 (Cribari-Neto et al. (2000)). Suppose $X_1, \ldots, X_N$ are i.i.d. Bernoulli random variables with $\mathbb{E}[X_i] = p$. Then we have
$$\mathbb{E}\left[ \frac{N^k}{(1 + \sum_{i=1}^N X_i)^k} \right] = O(1/p^k).$$
Now we can apply Lemma 28 with $k = 4$, so that $A_{2N}(s) = O(1/N_1) = O(1/N)$. Similarly, $A_{3N}(s) = O(1/N)$. Thus $\mathrm{Var}[\sqrt{N} A_{1N}(s)\, \mathbb{1}(\mathcal{I}_N^{(1)}(s) > 0)] \to 0$. This concludes the proof of $A_N(s) = o_p(1)$.

Proof of $B_N(s) = O_p(1/\sqrt{N})$. We prove that $\mathrm{Var}[\sqrt{N} B_N(s)] = O(1)$. The proof follows an argument similar to the proof of $\mathrm{Var}[\sqrt{N} A_{1N}(s)] = O(1)$, and we omit it.

O Additional simulation results

We provide additional simulation results in this section. In Section O.1, we show the simulation results with the ε-greedy selection algorithm applied. In Section O.2, we compare the power of the adaptive weighting with m = 1/2 and m = 1.

O.1 Additional simulation results with the ε-greedy algorithm

We show additional results for the simulation in Section 4.1 with the ε-greedy selection algorithm applied. The ε is chosen within {0.1, 0.2, 0.4}.
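For concreteness, the second-stage assignment probability under an ε-greedy rule can be sketched as follows. This is one common convention (exploit the empirically better arm with probability 1 − ε, explore uniformly otherwise); the paper's exact implementation may differ:

```python
def eps_greedy_prob(mean0, mean1, eps):
    """Probability of sampling arm 0 under a standard epsilon-greedy rule
    (hypothetical convention, not necessarily the paper's exact rule):
    play the empirically better arm w.p. 1 - eps, otherwise pick uniformly."""
    greedy = 1.0 if mean0 > mean1 else 0.0
    return (1.0 - eps) * greedy + eps / 2.0

# Probabilities of continuing with arm 0 when its interim mean looks better
probs = {eps: eps_greedy_prob(0.6, 0.4, eps) for eps in (0.1, 0.2, 0.4)}
```

Note that under this rule the assignment probability is bounded away from 0 and 1 by ε/2, which is why no additional clipping is employed in these simulations.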
Results are shown in Figures 7 and 8. We do not employ clipping in this case.

[Figure 7: QQ-plot for the 5 tests under different signal strength. The simulation is repeated 2000 times. The number of bootstrap samples used in each test is 5000.]

[Figure 8: Rejection rate for the 5 tests under different signal strength. The simulation is repeated 2000 times. The number of bootstrap samples used in each test is 5000.]

O.2 Power comparison: m = 1/2 versus m = 1

We have observed from Figure 4 that the adaptive weighting with m = 1/2 is more powerful than the constant weighting (m = 0). The natural question is then which adaptive weighting scheme, m = 1 or m = 1/2, is better in terms of power. To investigate this question, we conduct a set of simulations with both unnormalized and normalized tests; the simulation involves 4
common distributions: Gaussian, Bernoulli, Poisson and Student. We consider the Thompson sampling (7) with $l_N = 0.2$, sample size $N = 20{,}000$ and $N_1 = N_2 = 10{,}000$. We set the significance level to 0.05. The distribution information is summarized below:

- Gaussian: $Y_u(0) \sim N(\theta, 1)$, $Y_u(1) \sim N(0, 0.25)$.
- Bernoulli: $Y_u(0) \sim \mathrm{Bern}(0.5 + \theta)$, $Y_u(1) \sim \mathrm{Bern}(0.5)$.
- Poisson: $Y_u(0) \sim \mathrm{Pois}(1 + \theta)$, $Y_u(1) \sim \mathrm{Pois}(1)$.
- Student: $Y_u(0) \sim \theta + t(4)$, $Y_u(1) \sim t(10)$, where 4 and 10 are the degrees of freedom of the Student distributions.

In particular, we choose $\theta \in \{0, 0.01, 0.02, 0.03, 0.04\}$. The results are presented in Figure 9.

Implementation details. We center the outcome $Y_{uN}(s)$ by generating $\tilde Y_{uN}(s) = Y_{uN}(s) - \mathbb{E}[Y_{uN}(1)]$ and compare the test power for the average treatment effect $\mathbb{E}[\tilde Y_{uN}(0)] - \mathbb{E}[\tilde Y_{uN}(1)]$. The motivation for this operation is that we do not want the test to be affected by the absolute signal strength. Asymptotically, this operation is equivalent to using $W_{AIPW}(s)$, defined as in (50), when testing with the original data $(Y_{uN}(0), Y_{uN}(1))$, because the magnitude of $\theta$ is very small. We also point out that when $\mathbb{E}[\tilde Y_{uN}(s)] \sim 1/\sqrt{N}$ and $\mathbb{E}[\tilde Y_{uN}(1)] = 0$, the statistic $W_{IPW}(s)$ with $m = 1$ is asymptotically equivalent to the sample mean, as proved in Lemma 27. In other words, the asymptotic power function for the sample mean test statistic is the same as the power function for the $m = 1$ weighting when testing is performed on the transformed data $(\tilde Y_{uN}(0), \tilde Y_{uN}(1))$.

Interpretation of the results. Across the setups, both m = 1 and m = 1/2 have very similar power performance. This means we would expect the sample mean test statistic to have comparable power performance to the m = 1/2 weighting as well. It is generally unclear whether the test statistics considered in this paper are optimal under these distributions. As an exception, the sample mean test statistic has been shown to be near-optimal in a Gaussian setup, as shown in Section 4.4 of Hirano et al.
(2023).

[Figure 9: Rejection plots for adaptive weighting with m = 1 and m = 1/2 on the centered data $(\tilde Y_{uN}(0), \tilde Y_{uN}(1))$.]

P Additional semi-synthetic data analysis results

We present additional results for the semi-synthetic data analysis in Section 4.2. Following the same procedure outlined in Section 4.2, we use 5000 permuted samples to compute the p-values for the 5 tests. The QQ-plot is shown in Figure 10. The message is similar to that of Figure 7 in Section 4.2 when there is no signal.

[Figure 10: QQ-plot for the semi-synthetic data analysis.]
arXiv:2505.11705v1 [math.ST] 16 May 2025

Consistency of Bayes factors for linear models

Elías Moreno1*, Juan J. Serrano-Pérez2 and Francisco Torres-Ruiz2

1*Royal Academy of Sciences, Spain.
2Department of Statistics, University of Granada, Spain.

*Corresponding author(s). E-mail(s): emoreno@ugr.es; Contributing authors: jjserra@ugr.es; fdeasis@ugr.es;

Abstract

The quality of a Bayes factor crucially depends on the number of regressors, the sample size and the prior on the regression parameters, and hence it has to be established on a case-by-case basis. In this paper we analyze the consistency of a wide class of Bayes factors when the number of potential regressors grows as the sample size grows. We have found that when the number of regressors is finite some classes of priors yield inconsistency, and when the potential number of regressors grows at the same rate as the sample size different priors yield different degrees of inconsistency. For moderate sample sizes, we evaluate the Bayes factors by comparing the posterior model probabilities. This gives valuable information to discriminate between the priors for the model parameters commonly used for variable selection.

Keywords: Asymptotics, Bayes factors for linear models, complex linear models, intrinsic priors, mixtures of g-priors

1 Introduction

Bayesian variable selection in linear models is based on the posterior probability of a given set of candidate models, and, for convenience, these posterior probabilities are defined in terms of Bayes factors that compare nested models: a generic normal regression model $M_p$ with $p \ge 1$ regressors against the intercept-only model $M_0$ (encompassing from below).
The Bayes factor introduced by Jeffreys (1961), the main statistical tool for model selection as it contains all the data information for model comparison, crucially depends on the number of regressors and the prior distribution of the regression parameters and, to a lesser extent, on the prior of the variance error. A recent review of priors for model selection is given in Consonni et al. (2018).

In this paper we deal with a set of objective and subjective Bayes factors; the former are defined by objective intrinsic priors and the latter by subjective mixtures of the Zellner g-prior. All of them have in common their dependency on the data through the ratio of the sums of squared residuals of the models (the sufficient statistic), the number of regressors $p$ and the sample size $n$. They certainly differ in the way the sufficient statistic is corrected by $(p, n)$.

The intrinsic priors were introduced by Berger and Pericchi (1996) to justify the arithmetic intrinsic Bayes factor, an empirical tool for model selection based on the notion of partial Bayes factor (Leamer, 1983). These priors were studied as priors for model selection in Moreno (1997) and Moreno et al. (1998), and they have been applied to variable selection in regression by Moreno et al. (2003), Moreno et al. (2005), Moreno and Girón (2005, 2008), Casella and Moreno (2006), Girón et al. (2006), Girón et al. (2010), Torres et al. (2011), Kang et al. (2021), among others.

On the other hand, the g-prior was introduced by Zellner and Siow (1980), and since then a considerable number of mixtures of the g-prior for model selection have been proposed, including those by Zellner (1984), Fernández et al. (2001), Cui and George (2008), Liang et al.
(2008), Maruyama and George (2011), Bayarri et al. (2012) and Girón (2021).

We analyze here the asymptotics of six Bayes factors for comparing the generic model $M_p$ against $M_0$. We remark that the inconsistency of a Bayes factor implies the inconsistency of the posterior model probability in the set of candidate models (Moreno et al., 2015). Thus, an inconsistent Bayes factor should be rejected for variable selection.

When the number of regressors $p$ is either finite or grows at rate $p = O(n^b)$, $0 < b < 1$, the Bayes factors that use either intrinsic priors or conventional mixtures of g-priors (Zellner, 1984; Fernández et al., 2001) are consistent, with the same rate of convergence as that of the BIC approximation. However, the Bayes factors for other mixtures of g-priors are inconsistent when sampling from $M_0$. Further, when $p$ grows at the same rate as the sample size $n$, $p = O(n)$, as occurs in clustering or the analysis of variance, every one of them is inconsistent when sampling from specific sets of models. These sets are given and compared with each other. For finite sample size $n$ and for each Bayes factor, the posterior probability of $M_0$ in the set $\{M_0, M_p\}$ as a function of the sufficient statistic provides relevant information on the entertained Bayes factors.

The rest of the paper is organized as follows. In Section 2 the model notation is presented. In Section 3 the set of priors for the parameters of $M_0$ and $M_p$ and the Bayes factors are given. Section 4 presents the asymptotic analysis and Section 5 the posterior model probability for finite sample size. Section 6 contains concluding remarks.

2 The models

Let $Y$ represent the response random variable and $x = (x_1, \ldots, x_p)$ a vector of explanatory deterministic regressors related through the normal linear model
$$Y = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p + \varepsilon_p,$$
where $\beta_{p+1} = (\beta_0, \beta_1, \ldots, \beta_p)'$ is the vector of regression coefficients, $\varepsilon_p \sim N(0, \sigma_p^2)$ and $\sigma_p^2$ is the variance error.
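All the Bayes factors considered in Section 3 depend on the data only through the sufficient statistic $B_{p0} = y'(I_n - H)y \,/\, y'(I_n - \frac{1}{n}1_n 1_n')y$, the ratio of residual sums of squares under $M_p$ and $M_0$. A minimal simulation (with arbitrary illustrative coefficients, not from the paper) shows how $B_{p0}$ behaves under each model:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])  # n x (p+1) design

def b_p0(y, X):
    """B_p0 = y'(I - H)y / y'(I - (1/n)1 1')y with H = X(X'X)^{-1}X'."""
    n = len(y)
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    rss_p = y @ (np.eye(n) - H) @ y    # residual sum of squares under M_p
    rss_0 = y @ y - n * y.mean() ** 2  # residual sum of squares under M_0
    return rss_p / rss_0

y_null = 1.0 + rng.standard_normal(n)           # data generated from M_0
beta = np.array([1.0, 2.0, -1.5, 1.0])          # illustrative coefficients
y_alt = X @ beta + rng.standard_normal(n)       # data generated from M_p
```

Under $M_0$ the statistic concentrates near 1 (the extra regressors explain almost nothing), while under a strong alternative it drops well below 1; the Bayes factors in Section 3 are different ways of calibrating this ratio against $(p, n)$.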
The sampling distribution of $Y$ is assumed to be either the normal distribution with $p$ regressors, $N(y \mid x\beta_{p+1}, \sigma_p^2)$, or the intercept-only model $N(y \mid \alpha_0, \sigma_0^2)$. Let $(y, X)$ be the data set, where $y$ is a vector of $n$ independent observations of $Y$, and $X$ an $n \times (p+1)$ design matrix of full rank. The likelihood of $(\beta_{p+1}, \sigma_p)$ is given by $N_n(y \mid X\beta_{p+1}, \sigma_p^2 I_n)$, and for a given prior distribution $\pi(\beta_{p+1}, \sigma_p)$ the Bayesian model is denoted as
$$M_p: \{ N_n(y \mid X\beta_{p+1}, \sigma_p^2 I_n),\ \pi(\beta_{p+1}, \sigma_p) \}.$$
Likewise, the likelihood of the parameters of the intercept-only model $(\alpha_0, \sigma_0)$ is $N_n(y \mid \alpha_0 1_n, \sigma_0^2 I_n)$, and for a given prior distribution $\pi(\alpha_0, \sigma_0)$ the Bayesian model is
$$M_0: \{ N_n(y \mid \alpha_0 1_n, \sigma_0^2 I_n),\ \pi(\alpha_0, \sigma_0) \}.$$
Sometimes the models are simplified by assuming that the variances of $M_0$ and $M_p$ are equal.

3 Priors for the model parameters and Bayes factors

3.1 Intrinsic priors and Bayes factor for heteroscedastic models

Moreno et al. (2003) and Girón et al. (2006) used the methodology in Berger and Pericchi (1996) and Moreno et al. (1998) for constructing the intrinsic priors for the parameters of the normal linear models from the improper reference priors (Berger and Bernardo, 1992). The intrinsic prior for the parameters $(\beta_{p+1}, \sigma_p)$ of $M_p$, conditional on $(\alpha_0, \sigma_0)$, turns out to be (Moreno et al., 2003)
$$\pi^{IP}(\beta_{p+1}, \sigma_p \mid \alpha_0, \sigma_0) = N\!\left( \beta_{p+1} \,\Big|\, \tilde\alpha_0,\ \frac{n(\sigma_p^2 + \sigma_0^2)}{p+2}(X'X)^{-1} \right) HC^{+}(\sigma_p \mid 0, \sigma_0),$$
where $\tilde\alpha_0 = (\alpha_0, 0_p')'$, and $HC^{+}(\sigma_p \mid 0, \sigma_0)$ is the half-Cauchy distribution on the positive part of the real line with location at 0 and scale $\sigma_0$. Thus, the unconditional intrinsic prior for $(\beta_{p+1}, \sigma_p)$ is given as
$$\pi^{IP}(\beta_{p+1}, \sigma_p) = c \int_0^\infty \int_{-\infty}^\infty N\!\left( \beta_{p+1} \,\Big|\, \tilde\alpha_0,\ \frac{n(\sigma_p^2 + \sigma_0^2)}{p+2}(X'X)^{-1} \right) HC^{+}(\sigma_p \mid 0, \sigma_0)\, \frac{1}{\sigma_0}\, d\alpha_0\, d\sigma_0.$$
This is an improper prior, although the Bayes factor of $M_p$ against $M_0$ for $(\pi^N(\alpha_0, \sigma_0), \pi^{IP}(\beta_{p+1}, \sigma_p))$ is well-defined, as the arbitrary constant $c$ that appears in $\pi^N(\alpha_0, \sigma_0)$ cancels out in the ratio.

For a sample $(y, X)$ from either model $M_p$ or $M_0$, the Bayes factor for the intrinsic priors $(\pi^N(\alpha_0, \sigma_0), \pi^{IP}(\beta_{p+1}, \sigma_p))$ is given by
$$B^{IP}_{p0}(y, X) = \frac{(p+2)^{p/2}}{\pi/2} \int_0^{\pi/2} \frac{\sin^p\varphi\, [n + (p+2)\sin^2\varphi]^{(n-p-1)/2}}{[(p+2)\sin^2\varphi + nB_{p0}]^{(n-1)/2}}\, d\varphi, \quad (1)$$
where
$$B_{p0} = \frac{y'(I_n - H)y}{y'(I_n - \frac{1}{n}1_n 1_n')y},$$
and $H = X(X'X)^{-1}X'$. The integral in (1) does not have a closed form expression and needs numerical integration.

3.2 Intrinsic prior and Bayes factor for homoscedastic models

If the variance $\sigma^2$ is assumed to be a common parameter of the models, the intrinsic prior for $(\beta_{p+1}, \sigma)$ can be shown to be
$$\pi^{IPH}(\beta_{p+1} \mid \alpha_0, \sigma) = N_{p+1}\!\left( \beta_{p+1} \,\Big|\, \tilde\alpha_0,\ \frac{2n}{p+1}\sigma^2 (X_{p+1}'X_{p+1})^{-1} \right),$$
and the Bayes factor of $M_p$ against $M_0$ for the priors $(\pi^N(\alpha_0, \sigma), \pi^{IPH}(\beta_{p+1}, \sigma))$ turns out to be
$$B^{IPH}_{p0}(y, X) = \frac{\left( 1 + \frac{2n}{p+1} \right)^{(n-p-1)/2}}{\left( 1 + \frac{2n}{p+1}B_{p0} \right)^{(n-1)/2}}.$$

3.3 Mixtures of the Zellner g-prior and Bayes factors

The stream of research that compares $M_p$ and $M_0$ using mixtures of the g-prior simplifies the sampling models by assuming that the intercept $\beta_0$ and the variance error $\sigma^2$ are common parameters. The prior for $(\beta_0, \sigma)$ is assumed to be the improper reference prior $\pi^N(\beta_0, \sigma) = c/\sigma$, where $c$ is an arbitrary positive constant, and the prior distribution for the rest of the regression coefficients $\beta_p = (\beta_1, \ldots, \beta_p)'$ of $M_p$ is assumed to be the Zellner g-prior $N(\beta_p \mid 0_p, g\sigma^2 (X_p'X_p)^{-1})$, where $g$ is an unknown positive hyperparameter and $X_p$ is the matrix obtained from the original $X$ by suppressing its first column.
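The integral in (1) is one-dimensional and straightforward to evaluate by quadrature. The sketch below implements our reading of the extracted formula (including the $(p+2)^{p/2}/(\pi/2)$ prefactor, which should be checked against the typeset paper), working on the log scale to avoid overflow for moderate $n$:

```python
import numpy as np
from scipy.integrate import quad

def bip(n, p, b_p0):
    """Quadrature evaluation of the intrinsic Bayes factor (1) as read from
    the extracted text; the prefactor is an assumption to verify."""
    def integrand(phi):
        s2 = np.sin(phi) ** 2
        if s2 <= 0.0:
            return 0.0
        log_val = (0.5 * p * np.log(s2)
                   + 0.5 * (n - p - 1) * np.log(n + (p + 2) * s2)
                   - 0.5 * (n - 1) * np.log((p + 2) * s2 + n * b_p0))
        return np.exp(log_val)
    val, _ = quad(integrand, 0.0, np.pi / 2)
    return (p + 2) ** (p / 2) / (np.pi / 2) * val

# Smaller B_p0 (M_p fits much better) should give stronger evidence for M_p
strong = bip(50, 2, 0.5)
weak = bip(50, 2, 0.99)
```

Since $B_{p0}$ enters only the denominator of the integrand, $B^{IP}_{p0}$ is decreasing in $B_{p0}$, which the two evaluations above illustrate.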
3.3.1 The Zellner–Siow mixture of g-prior

Zellner and Siow (1980) implicitly assumed for the hyperparameter $g$ the Inverse Gamma distribution
$$\pi^{ZS}(g \mid n) = \frac{(n/2)^{1/2}}{\Gamma(1/2)}\, g^{-3/2} \exp(-n/(2g)),$$
and hence the prior for the parameters $(\beta_0, \beta_p, \sigma)$ of model $M_p$ is the mixture
$$\pi^{ZS}(\beta_0, \beta_p, \sigma) = \frac{c}{\sigma} \int_0^\infty N(\beta_p \mid 0_p, g\sigma^2 (X_p'X_p)^{-1})\, \pi^{ZS}(g \mid n)\, dg.$$
This is an improper prior whose arbitrary constant $c$ is that of $\pi^N(\alpha_0, \sigma_0)$, and therefore the Bayes factor of $M_p$ against $M_0$ for $(\pi^N(\beta_0, \sigma), \pi^{ZS}(\beta_0, \beta_p, \sigma))$ is well-defined and turns out to be
$$B^{ZS}_{p0}(y, X) = \frac{(n/2)^{1/2}}{\Gamma(1/2)} \int_0^\infty \frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}\, g^{-3/2} \exp(-n/(2g))\, dg. \quad (2)$$
The integral in (2) does not have a closed form expression.

3.3.2 A degenerate mixture of g-prior

Several alternative priors to the distribution $\pi^{ZS}(g)$ have been considered. The simplest one is the degenerate prior $\pi^{FS}(g \mid n) = \mathbb{1}_{(n)}(g)$ suggested by Zellner (1984) and further studied by Fernández et al. (2001). For $\pi^{FS}(g \mid n)$ the prior for the parameters $(\beta_0, \beta_p, \sigma)$ of model $M_p$ is given by
$$\pi^{FS}(\beta_0, \beta_p, \sigma) = \frac{c}{\sigma}\, N(\beta_p \mid 0_p, n\sigma^2 (X_p'X_p)^{-1}).$$
The Bayes factor of $M_p$ against $M_0$ for $(\pi^N(\beta_0, \sigma), \pi^{FS}(\beta_0, \beta_p, \sigma))$ is well-defined and turns out to be
$$B^{FS}_{p0}(y, X) = \frac{(1+n)^{(n-p-1)/2}}{(1+nB_{p0})^{(n-1)/2}}.$$

3.3.3 The prior $\pi^L(g \mid n)$

A more sophisticated prior for $g$ than the degenerate one was introduced by Liang et al. (2008) and is given by
$$\pi^L(g) = \frac{1}{2}(1+g)^{-3/2}.$$
However, for this prior the Bayes factor
$$B^{\pi^L(g)}_{p0}(y, X) = \frac{1}{2} \int_0^\infty \frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}\, (1+g)^{-3/2}\, dg$$
is inconsistent: its limit in probability is infinity when sampling from the model $M_0$. This drawback of $\pi^L(g)$ motivated Liang et al. (2008) to introduce the prior
$$\pi^L(g \mid n) = \frac{1}{2n}(1 + g/n)^{-3/2},$$
which depends on $n$. Then, for $\pi^L(g \mid n)$ the prior for $(\beta_0, \beta_p, \sigma)$ is given by
$$\pi^L(\beta_0, \beta_p, \sigma) = \frac{c}{2n\sigma} \int_0^\infty N_p(\beta_p \mid 0_p, g\sigma^2 (X_p'X_p)^{-1})\, (1+g/n)^{-3/2}\, dg.$$
For $(\pi^N(\beta_0, \sigma), \pi^L(\beta_0, \beta_p, \sigma))$ the Bayes factor of $M_p$ against $M_0$ is
$$B^{L}_{p0}(y, X) = \frac{1}{2n} \int_0^\infty \frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}\, (1+g/n)^{-3/2}\, dg. \quad (3)$$
The integral in (3) does not have
a closed form expression.

3.3.4 The prior $\pi^{CG}(g \mid n)$

A prior quite close to the preceding one was introduced by Cui and George (2008); its expression is $\pi^{CG}(g \mid n) = (1+g)^{-2}$. The Bayes factor for this prior turns out to be
$$B^{CG}_{p0}(y, X) = \int_0^\infty \frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}\, (1+g)^{-2}\, dg. \quad (4)$$
The integral in (4) does not have a closed form expression.

3.3.5 The prior $\pi^{B}(g \mid n, p)$

Later on, Bayarri et al. (2012) also corrected the inconsistency of the Bayes factor for the prior $\pi(g) = \frac{1}{2}(1+g)^{-3/2}$ by truncating this prior to the subset of the real line $\{g : g > (1+n)/(1+p) - 1\}$. The resulting prior for $g$ depends on $n$ and $p$ and is given by
$$\pi^{B}(g \mid n, p) = \frac{1}{2}\left( \frac{1+n}{1+p} \right)^{1/2} (1+g)^{-3/2}\, \mathbb{1}_{\left( \frac{1+n}{1+p} - 1,\, \infty \right)}(g).$$
Thus, the prior for $(\beta_0, \beta_p, \sigma)$ is
$$\pi^{B}(\beta_0, \beta_p, \sigma) = \frac{c}{2\sigma}\left( \frac{1+n}{1+p} \right)^{1/2} \int_{\frac{1+n}{1+p}-1}^{\infty} N(\beta_p \mid 0_p, g\sigma^2 (X_p'X_p)^{-1})\, (1+g)^{-3/2}\, dg,$$
and the Bayes factor of $M_p$ against $M_0$ for $(\pi^N(\beta_0, \sigma), \pi^{B}(\beta_0, \beta_p, \sigma))$ is
$$B^{B}_{p0}(y, X) = \frac{1}{2}\left( \frac{1+n}{1+p} \right)^{1/2} \int_{\frac{1+n}{1+p}-1}^{\infty} \frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}\, (1+g)^{-3/2}\, dg. \quad (5)$$
The integral in (5) does not have a closed form expression.

3.3.6 The robust g-prior class

We note that the priors $\pi^L(g)$, $\pi^{CG}(g \mid n)$ and $\pi^{B}(g \mid n, p)$ are particular cases of the "robust" g-prior class proposed by Bayarri et al. (2012). This class was defined as
$$\pi^{R}(g \mid a, d, \rho) = a[\rho(d+n)]^a (g+d)^{-(a+1)}\, \mathbb{1}_{\{g > \rho(d+n) - d\}}(g) \quad (6)$$
for $a, d > 0$ and $\rho \ge d/(d+n)$.

4 Asymptotics

Let us assume that $p = O(n^b)$ for $0 \le b \le 1$. To facilitate the reading of this section, we bring here some auxiliary results on the asymptotic distribution of the sampling statistic $B_{p0}$ when $p$ grows with $n$. We also give a lower bound for the Bayes factor $B^{L}_{p0}$ and an approximation to $B^{B}_{p0}$. These tools are given in Lemmas 1 and 2. The limit in probability of a random sequence when sampling from model $M$ is denoted as $[P_M]$.

Lemma 1.
For $p=O(n^b)$ with $0\le b\le 1$ and the pseudo-distance between a sampling model in $M_p$ and $M_0$
\[
\delta_{p0}=\frac{1}{2n\sigma_p^{2}}\,\boldsymbol\beta_{p+1}'X'\Big(H-\frac{1}{n}\mathbf 1_n\mathbf 1_n'\Big)X\boldsymbol\beta_{p+1},
\]
we have that

i) for $0\le b<1$,
\[
\lim_{n\to\infty}B_{p0}=\begin{cases}1, & [P_{M_0}],\\ (1+\delta)^{-1}, & [P_{M_p}],\end{cases}
\]
where $\delta=\lim_{n\to\infty}\delta_{p0}\ge 0$;

ii) for $b=1$ and $r=\lim_{n\to\infty}n/p>1$,
\[
\lim_{n\to\infty}B_{p0}=\begin{cases}1-1/r, & [P_{M_0}],\\ (1-1/r)(1+\delta)^{-1}, & [P_{M_p}].\end{cases}
\]

Proof. The proof is given in Lemma 1 in Moreno et al. (2010) and, hence, it is omitted.

4.1 The case b < 1

When $n$ goes to infinity the Bayes factors $B^{IP}_{p0}$, $B^{IPH}_{p0}$, $B^{ZS}_{p0}$, $B^{FS}_{p0}$ and $B^{B}_{p0}$ are all consistent. Further, for $b=0$, that is for finite $p$, the convergence rate under $M_0$ is $O(n^{-p/2})$ and exponential under $M_p$. However, for $0<b<1$ the rate of convergence under $M_0$ is not $O(n^{-p/2})$, although it is still exponential under $M_p$. Some of the consistency proofs of the Bayes factors have been given by Fernández et al. (2001), Liang et al. (2008), Bayarri et al. (2012) and Moreno et al. (2010). Thus we only examine here the consistency of those Bayes factors for which either there is no consistency proof or the existing ones are not correct. Hence, we pay attention to the Bayes factors $B^{L}_{p0}$, $B^{CG}_{p0}$, $B^{R}_{p0}$, $B^{IPH}_{p0}$ and $B^{B}_{p0}$.

Lemma 2. For any $B_{p0}$ and large $n$ we have that

i)
\[
B^{L}_{p0}(B_{p0})\ge \frac12\,n^{-(p+3)/2}B_{p0}^{-\frac{n-1}{2}}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-\frac{p+1}{2}}\Gamma\Big(\frac{p+1}{2}\Big);
\]

ii)
\[
B^{B}_{p0}(B_{p0})\approx \frac12\,\frac{n^{-p/2}}{(p+1)^{1/2}}\,B_{p0}^{-\frac{n-1}{2}}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-\frac{p+1}{2}}\gamma\Big(\frac{p+1}{2},\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big),
\]
where $\gamma(a,b)=\int_0^b x^{a-1}\exp(-x)\,dx$ is the lower incomplete gamma function.

Proof. See Appendix A.

The next theorem shows that the Bayes factors $B^{IPH}_{p0}$ and $B^{B}_{p0}$ are consistent, but $B^{L}_{p0}$ and the Bayes factors for a subclass of robust priors are inconsistent for
any $p$.

Theorem 1.
i) The Bayes factor $B^{L}_{p0}(B_{p0})$ is inconsistent under the null model $M_0$ for any $p$.
ii) The Bayes factor $B^{R}_{p0}(B_{p0})$ for any robust g-prior in the class
\[
R=\{\pi_R(g\,|\,a,d,\rho): a,d>0,\ \rho=d/(d+n)\}
\]
is inconsistent under the null model $M_0$ for any $p$.
iii) The Bayes factors $B^{IPH}_{p0}(B_{p0})$ and $B^{B}_{p0}(B_{p0})$ are consistent.

Proof. See Appendix B.

Corollary 1. The Bayes factor $B^{CG}_{p0}(B_{p0})$ is inconsistent under the null model $M_0$ for any $p$.

Proof. As $B^{CG}_{p0}(B_{p0})$ is defined by the robust g-prior in the class $R$ for $a=1$, $d=1$ and $\rho=1/(1+n)$, the assertion follows from ii) in Theorem 1.

4.2 The case b = 1

For $b=1$ none of the Bayes factors is consistent everywhere: each one is inconsistent when sampling from a set of models $M_p$ that depends on the limit of $n/p$ and the limit of the pseudo-distance $\delta_{p0}$ between $M_0$ and $M_p$. The size of the inconsistency set varies across the Bayes factors. These sets are given in the following theorem.

Theorem 2. For $p=O(n)$, $r=\lim_{n\to\infty}n/p>1$ and $\delta=\lim_{n\to\infty}\delta_{p0}>0$ we have:

i) The Bayes factor $B^{IP}_{p0}(B_{p0})$ is consistent except when sampling from $M_p$ and $(r,\delta)$ is in the set
\[
S_{IP}=\Big\{(r,\delta): (1+r)^{r-1}\Big(1+\frac{r-1}{1+\delta}\Big)^{-r}\le 1\Big\}.
\]

ii) The Bayes factor $B^{IPH}_{p0}(B_{p0})$ is consistent except when sampling from $M_p$ and $(r,\delta)$ is in the set
\[
S_{IPH}=\Big\{(r,\delta): (1+2r)^{r-1}\Big(1+\frac{2(r-1)}{1+\delta}\Big)^{-r}\le 1\Big\}.
\]

iii) The Bayes factor $B^{ZS}_{p0}(B_{p0})$ is consistent except when sampling from $M_p$ and $(r,\delta)$ is in the set
\[
S_{ZS}=\Big\{(r,\delta): \frac{1}{r\exp(1)}\Big(\frac{1+\delta}{1-1/r}\Big)^{r-1}\le 1\Big\}.
\]

iv) The Bayes factor $B^{FS}_{p0}(B_{p0})$ is inconsistent when sampling from any alternative model $M_p$.

v) The Bayes factor $B^{B}_{p0}(B_{p0})$ is consistent except when sampling from $M_p$ and $(r,\delta)$ is in the set
\[
S_{B}=\Big\{(r,\delta): \Big(\frac{r}{r-1}\Big)^{r-1}\frac{(1+\delta)^{r}}{1+\delta r}\exp(-1)\le 1\Big\}.
\]
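As a quick numerical cross-check (our own illustration, not part of the paper), the four sets of Theorem 2 can be encoded as membership tests; on a grid they nest as $S_{IP}\subseteq S_{IPH}\subseteq S_{ZS}$ and $S_B\subseteq S_{ZS}$, in agreement with the comparison made later in Section 6. All function names below are ours.

```python
import math

# Indicator functions for the inconsistency sets of Theorem 2 (r > 1, d = delta >= 0).
def in_S_IP(r, d):
    return (1 + r) ** (r - 1) * (1 + (r - 1) / (1 + d)) ** (-r) <= 1

def in_S_IPH(r, d):
    return (1 + 2 * r) ** (r - 1) * (1 + 2 * (r - 1) / (1 + d)) ** (-r) <= 1

def in_S_ZS(r, d):
    return ((1 + d) / (1 - 1 / r)) ** (r - 1) / (r * math.e) <= 1

def in_S_B(r, d):
    return (r / (r - 1)) ** (r - 1) * (1 + d) ** r / (1 + d * r) / math.e <= 1

# On a grid, the sets nest as S_IP <= S_IPH <= S_ZS and S_B <= S_ZS.
grid_r = [1.1, 1.5, 2.0, 3.0, 5.0, 10.0, 20.0]
grid_d = [i * 0.05 for i in range(201)]        # delta in [0, 10]
for r in grid_r:
    for d in grid_d:
        assert (not in_S_IP(r, d)) or in_S_IPH(r, d)
        assert (not in_S_IPH(r, d)) or in_S_ZS(r, d)
        assert (not in_S_B(r, d)) or in_S_ZS(r, d)
```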
Proof. See Appendix C.

5 Posterior model probability for moderate sample size

Given a Bayes factor $B^{C}_{p0}(B_{p0})$ and the model prior $\pi(M_0)=\pi(M_p)=0.5$, the posterior probability of model $M_0$ in the set $\{M_0,M_p\}$ is given by
\[
P^{C}(M_0\,|\,B_{p0})=\big(1+B^{C}_{p0}(B_{p0})\big)^{-1}.
\]
Figure 1 displays the posterior probability of $M_0$ as a function of $B_{p0}$ for those Bayes factors that are consistent for finite $p$, namely $B^{IP}_{p0}$, $B^{IPH}_{p0}$, $B^{ZS}_{p0}$, $B^{FS}_{p0}$ and $B^{B}_{p0}$, and some values of $n$ and $p$. This figure illustrates the fact that for small $p$ all posterior probability curves are very similar, more so as $n$ increases. However, as $p$ grows the curves can be classified into two different groups, one with the Bayes factors $\{B^{ZS}_{p0},B^{FS}_{p0}\}$ and another with $\{B^{IP}_{p0},B^{IPH}_{p0},B^{B}_{p0}\}$. The former group contains curves much less smooth than the latter, that is, much more in favor of model $M_0$ than the latter, more so as $p$ grows. Except for small $p$, and even for small $n$, the curves of the former group jump from almost 0 to almost 1 in a very narrow interval. This is a behavior that one expects only for large values of $n$.

Fig. 1 Posterior probability of model $M_0$ in the set $\{M_0,M_p\}$ as a function of $B_{p0}$.

6 Conclusion

Asymptotically we have found the unexpected result that the Bayes factors $B^{L}_{p0}(B_{p0})$, $B^{CG}_{p0}(B_{p0})$ and those for a wide subclass of the robust prior class are inconsistent for any $p$. Further, for the robust priors in expression (6) the conditions (i) $a$ and $\rho$ do not depend on $n$, (ii) $\lim_{n\to\infty}d/n=c\ge 0$, (iii) $\lim_{n\to\infty}\rho(d+n)=\infty$,
and (iv) $n\ge p+1+2a$, were stated as sufficient conditions to ensure the consistency of the Bayes factor (Bayarri et al., 2012). Unfortunately, this assertion is not true, and the Bayes factor $B^{L}_{p0}$, which is derived from the robust prior $\pi_R(g\,|\,1/2,n,1/2)$, serves as a counter-example.

The other Bayes factors $B^{IP}_{p0}$, $B^{IPH}_{p0}$, $B^{ZS}_{p0}$, $B^{FS}_{p0}$ and $B^{B}_{p0}$ are all consistent for $p=O(n^b)$ and $0\le b<1$. For $b=1$, the latter Bayes factors are also consistent under the null. However, under the alternative $B^{FS}_{p0}$ is inconsistent, so that it has the same asymptotics as the BIC approximation, and the remaining Bayes factors are consistent except for specific sets of points $(r,\delta)$. These inconsistency sets can be written in the explicit form
\[
S_C=\{(r,\delta): \delta<\delta_C(r)\}\quad\text{for } C=IP,\ IPH,\ ZS,
\]
where
\[
\delta_C(r)=\begin{cases}\dfrac{r-1}{(1+r)^{(r-1)/r}-1}-1, & \text{if } C=IP,\\[2mm] \dfrac{2(r-1)}{(1+2r)^{(r-1)/r}-1}-1, & \text{if } C=IPH,\\[2mm] (r\exp(1))^{1/(r-1)}(1-1/r)-1, & \text{if } C=ZS,\end{cases}
\]
are decreasing convex functions of $r$ such that $\lim_{r\to\infty}\delta_C(r)=0$, so that as $r$ increases the inconsistency set of every Bayes factor tends to the empty set.

In Figure 2 the inconsistency set of each Bayes factor is represented by a colored area. Certainly, the smaller the inconsistency set, the more preferable the Bayes factor as a model selector. We note that $S_B\subset S_{ZS}$ and hence $B^{B}_{p0}\succeq B^{ZS}_{p0}$. Also $S_{IP}\subset S_{IPH}\subset S_{ZS}$ and hence $B^{IP}_{p0}\succeq B^{IPH}_{p0}\succeq B^{ZS}_{p0}$. Moreover,
\[
\lim_{r\to 1}\delta_C(r)=\begin{cases}0.4427, & \text{if } C=IP,\\ 0.8205, & \text{if } C=IPH,\\ \infty, & \text{if } C=ZS,\,B,\end{cases}
\]
which means that $S_{IP}$ and $S_{IPH}$ only include alternative models close to the null ($\delta<0.4427$ and $\delta<0.8205$, respectively), while $S_{ZS}$ and $S_B$ include alternative models that are spread out in the quadrant $r>1$, $\delta\ge 0$, and very far from the null. Therefore, our overall conclusion is to recommend the Bayes factor $B^{IP}_{p0}$.
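The counter-example and the limits above are easy to confirm numerically. The sketch below (ours, not from the paper) checks pointwise that the robust prior $\pi_R(g\,|\,1/2,n,1/2)$ coincides with $\pi_L(g)=\frac{1}{2n}(1+g/n)^{-3/2}$, as read off from the integrand of (3), and evaluates $\delta_{IP}$ and $\delta_{IPH}$ near $r=1$, recovering $0.4427=1/\ln 2-1$ and $0.8205=2/\ln 3-1$; all function names are ours.

```python
import math

# Robust prior of Bayarri et al. (2012):
# pi_R(g | a, d, rho) = a * [rho(d+n)]^a * (g+d)^(-(a+1)) for g > rho(d+n) - d.
def pi_R(g, a, d, rho, n):
    c = rho * (d + n)
    return a * c ** a * (g + d) ** (-(a + 1)) if g > c - d else 0.0

# The prior pi_L(g) = (1/(2n)) * (1 + g/n)^(-3/2) appearing in (3).
def pi_L(g, n):
    return (1 + g / n) ** (-1.5) / (2 * n)

n = 50
for g in [0.1, 1.0, 10.0, 200.0]:
    # pi_L is the robust prior with a = 1/2, d = n, rho = 1/2 (the counter-example).
    assert math.isclose(pi_R(g, 0.5, n, 0.5, n), pi_L(g, n), rel_tol=1e-12)

# The boundary functions delta_C(r) of the inconsistency sets.
def delta_IP(r):
    return (r - 1) / ((1 + r) ** ((r - 1) / r) - 1) - 1

def delta_IPH(r):
    return 2 * (r - 1) / ((1 + 2 * r) ** ((r - 1) / r) - 1) - 1

r = 1 + 1e-8
assert abs(delta_IP(r) - (1 / math.log(2) - 1)) < 1e-4       # 0.4427...
assert abs(delta_IPH(r) - (2 / math.log(3) - 1)) < 1e-4      # 0.8205...
# Both decrease toward 0 as r grows.
assert delta_IP(100) < delta_IP(10) < delta_IP(2)
assert delta_IP(1000.0) < 0.01
```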
Fig. 2 Inconsistency sets of $B^{IP}_{p0}$ (top left panel), $B^{IPH}_{p0}$ (top right panel), $B^{B}_{p0}$ (bottom left panel), and $B^{ZS}_{p0}$ (bottom right panel).

Appendix A Proof of Lemma 2

i) For $g\ge 0$ and $n\ge 1$ we have that $(1+g/n)^{-3/2}\ge (1+g)^{-3/2}$, and then we can write
\[
B^{L}_{p0}(B_{p0})\ge \frac{1}{2n}\int_0^{\infty}\frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}(1+g)^{-3/2}dg
=\frac{1}{2n}B_{p0}^{-(n-1)/2}\int_0^{\infty}(1+g)^{-(p+3)/2}\Big(1+\frac{B_{p0}-1}{1+gB_{p0}}\Big)^{(n-1)/2}dg.
\]
Changing the variable $g$ to $y=g/n$ we have
\[
B^{L}_{p0}(B_{p0})\ge \frac12 B_{p0}^{-(n-1)/2}\int_0^{\infty}(1+ny)^{-(p+3)/2}\Big(1+\frac{B_{p0}-1}{1+nyB_{p0}}\Big)^{(n-1)/2}dy
=\frac12 n^{-(p+3)/2}B_{p0}^{-(n-1)/2}\int_0^{\infty}\Big(y+\frac1n\Big)^{-(p+3)/2}\Big(1+\frac{B_{p0}-1}{1+nyB_{p0}}\Big)^{(n-1)/2}dy.
\]
For large $n$ the integrand in the last expression can be approximated by
\[
y^{-(p+3)/2}\exp\Big(-\frac{1-B_{p0}}{2yB_{p0}}\Big),
\]
and hence
\[
B^{L}_{p0}(B_{p0})\ge \frac12 n^{-(p+3)/2}B_{p0}^{-(n-1)/2}\int_0^{\infty}y^{-(p+3)/2}\exp\Big(-\frac{1-B_{p0}}{2yB_{p0}}\Big)dy.
\]
Therefore, we have
\[
B^{L}_{p0}(B_{p0})\ge \frac12 n^{-(p+3)/2}B_{p0}^{-(n-1)/2}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-(p+1)/2}\Gamma\Big(\frac{p+1}{2}\Big),
\]
and i) is proved.

ii) The Bayes factor $B^{B}_{p0}(B_{p0})$ can be written as
\[
B^{B}_{p0}(B_{p0})=\frac12\Big(\frac{1+n}{1+p}\Big)^{1/2}B_{p0}^{-(n-1)/2}\int_{\frac{1+n}{1+p}-1}^{\infty}(1+g)^{-(p+3)/2}\Big(1+\frac{B_{p0}-1}{1+gB_{p0}}\Big)^{(n-1)/2}dg.
\]
Changing the variable $g$ to $y=g/n$ we have
\[
B^{B}_{p0}(B_{p0})=\frac12\Big(\frac{1+n}{1+p}\Big)^{1/2}nB_{p0}^{-(n-1)/2}\int_{\frac{1-p/n}{1+p}}^{\infty}(1+ny)^{-(p+3)/2}\Big(1+\frac{B_{p0}-1}{1+nyB_{p0}}\Big)^{(n-1)/2}dy.
\]
For large $n$ it follows that
\[
B^{B}_{p0}(B_{p0})\approx \frac12\frac{n^{-p/2}}{(p+1)^{1/2}}B_{p0}^{-(n-1)/2}\int_{\frac{1}{p+1}}^{\infty}y^{-(p+3)/2}\exp\Big(-\frac{1-B_{p0}}{2yB_{p0}}\Big)dy
=\frac12\frac{n^{-p/2}}{(p+1)^{1/2}}B_{p0}^{-(n-1)/2}\int_0^{p+1}z^{(p+3)/2-2}\exp\Big(-\frac{1-B_{p0}}{2B_{p0}}z\Big)dz
=\frac12\frac{n^{-p/2}}{(p+1)^{1/2}}B_{p0}^{-(n-1)/2}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-(p+1)/2}\gamma\Big(\frac{p+1}{2},\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big).
\]
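Both algebraic facts used in part i), the factorization of the integrand and the Gamma-integral evaluation, can be spot-checked numerically. In the sketch below (ours, with arbitrarily chosen values of $n$, $p$ and $B_{p0}$) the integral is computed by a plain Riemann sum in $u=\log y$:

```python
import math

# 1) The exact factorization used in the first display of Appendix A:
#    (1+g)^((n-p-1)/2) / (1+g*B)^((n-1)/2) * (1+g)^(-3/2)
#      = B^(-(n-1)/2) * (1+g)^(-(p+3)/2) * (1 + (B-1)/(1+g*B))^((n-1)/2).
n, p, B = 20, 3, 0.7
for g in [0.0, 0.5, 2.0, 25.0]:
    lhs = (1 + g) ** ((n - p - 1) / 2) / (1 + g * B) ** ((n - 1) / 2) * (1 + g) ** (-1.5)
    rhs = B ** (-(n - 1) / 2) * (1 + g) ** (-(p + 3) / 2) \
        * (1 + (B - 1) / (1 + g * B)) ** ((n - 1) / 2)
    assert math.isclose(lhs, rhs, rel_tol=1e-10)

# 2) The Gamma integral evaluated at the end of part i):
#    int_0^inf y^(-(p+3)/2) exp(-c/y) dy = c^(-(p+1)/2) * Gamma((p+1)/2),
#    with c = (1-B)/(2B).
c = (1 - B) / (2 * B)
h, lo, hi = 1e-3, -12.0, 12.0          # Riemann sum in u = log y
u = lo
integral = 0.0
while u < hi:
    integral += h * math.exp(-(p + 1) * u / 2 - c * math.exp(-u))
    u += h
closed_form = c ** (-(p + 1) / 2) * math.gamma((p + 1) / 2)
assert math.isclose(integral, closed_form, rel_tol=1e-4)
```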
This proves ii) and completes the proof of Lemma 2.

Appendix B Proof
of Theorem 1

i) To prove the inconsistency of $B^{L}_{p0}(B_{p0})$ under the null $M_0$ we use part i) in Lemmas 1 and 2, and we can write
\[
\lim_{n\to\infty}B^{L}_{p0}(B_{p0})\ge \lim_{n\to\infty}\frac12 n^{-(p+3)/2}B_{p0}^{-(n-1)/2}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-(p+1)/2}\Gamma\big((p+1)/2\big)
=2^{(p-1)/2}\Gamma\big((p+1)/2\big)\lim_{n\to\infty}n^{-(p+3)/2}(1-B_{p0})^{-(p+1)/2}
=\begin{cases}1, & \text{for } p=1,\ [P_{M_0}],\\ \infty, & \text{for } p>1,\ [P_{M_0}],\end{cases}
\]
where the last equality follows from the fact that $1-B_{p0}$ converges to zero in probability $[P_{M_0}]$ at rate $O(n^{-2})$. This proves i).

ii) For any prior in $R$ the Bayes factor becomes
\[
B^{R}_{p0}(B_{p0})=ad^{a}\int_0^{\infty}\frac{(1+g)^{(n-p-1)/2}}{(1+gB_{p0})^{(n-1)/2}}(g+d)^{-(a+1)}dg
=ad^{a}B_{p0}^{-(n-1)/2}\int_0^{\infty}\Big(1+\frac{B_{p0}-1}{1+gB_{p0}}\Big)^{(n-1)/2}(1+g)^{-p/2}(g+d)^{-(a+1)}dg.
\]
Changing the variable $g$ to $y=g/n$ we have
\[
B^{R}_{p0}(B_{p0})=ad^{a}nB_{p0}^{-(n-1)/2}\int_0^{\infty}\Big(1+\frac{B_{p0}-1}{1+nyB_{p0}}\Big)^{(n-1)/2}(1+ny)^{-p/2}(ny+d)^{-(a+1)}dy
=ad^{a}n^{-(p/2+a)}B_{p0}^{-(n-1)/2}\int_0^{\infty}\Big(1+\frac{B_{p0}-1}{1+nyB_{p0}}\Big)^{(n-1)/2}\Big(\frac1n+y\Big)^{-\frac{p}{2}}\Big(y+\frac{d}{n}\Big)^{-(a+1)}dy.
\]
For $n$ big enough the integrand in the last expression can be approximated by
\[
y^{-\left(\frac{p}{2}+a+1\right)}\exp\Big(-\frac{1-B_{p0}}{2yB_{p0}}\Big),
\]
and hence
\[
B^{R}_{p0}(B_{p0})\approx ad^{a}n^{-(p/2+a)}B_{p0}^{-(n-1)/2}\int_0^{\infty}y^{-(p/2+a+1)}\exp\Big(-\frac{1-B_{p0}}{2yB_{p0}}\Big)dy
=ad^{a}n^{-(p/2+a)}B_{p0}^{-(n-1)/2}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-(p/2+a)}\Gamma(p/2+a).
\]
When sampling from the null $M_0$ the limit in probability of $B^{R}_{p0}(B_{p0})$ turns out to be
\[
\lim_{n\to\infty}B^{R}_{p0}(B_{p0})=\lim_{n\to\infty}\big(n(1-B_{p0})\big)^{-(p/2+a)}=\infty,\quad [P_{M_0}],
\]
where the last equality follows from i) in Lemma 1 and the fact that $1-B_{p0}$ converges to zero in probability $[P_{M_0}]$ at rate $O(n^{-2})$. This proves ii).

iii) To prove the consistency of $B^{IPH}_{p0}(B_{p0})$ we rewrite the Bayes factor as
\[
B^{IPH}_{p0}(B_{p0})=\Big(1+\frac{2n}{p+1}\Big)^{-p/2}B_{p0}^{-(n-1)/2}\Bigg(1-\frac{1-B_{p0}}{1+\frac{2n}{p+1}B_{p0}}\Bigg)^{(n-1)/2}.
\]
For large $n$ we have that
\[
B^{IPH}_{p0}(B_{p0})\approx \Big(1+\frac{2n}{p+1}\Big)^{-p/2}B_{p0}^{-(n-1)/2}\exp\Big(-\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{4}\Big).
\]
When sampling from $M_0$ we have from i) in Lemma 1 that
\[
\lim_{n\to\infty}B^{IPH}_{p0}(B_{p0})=\lim_{n\to\infty}\big(1+2n^{1-b}\big)^{-n^{b}/2}\exp\big(-n^{b-2}/4\big)=0,\quad [P_{M_0}].
\]
This proves the consistency under $M_0$. When sampling from the alternative $M_p$ it follows from i) in Lemma 1 that
\[
\lim_{n\to\infty}B^{IPH}_{p0}(B_{p0})=\lim_{n\to\infty}\big(1+2n^{1-b}\big)^{-n^{b}/2}(1+\delta)^{(n-1)/2}\exp\big(-\delta n^{b}/4\big)=\infty,\quad [P_{M_p}].
\]
This completes the proof of the consistency of $B^{IPH}_{p0}$.

To prove the consistency of $B^{B}_{p0}$ when sampling from $M_0$ we note that
\[
\lim_{n\to\infty}\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}=0,\quad [P_{M_0}],
\]
and hence we have for large $n$ that
\[
\gamma\Big(\frac{p+1}{2},\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big)\approx \frac{2}{p+1}\Big(\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big)^{(p+1)/2}.
\]
Therefore, from ii) in Lemma 2 it follows that
\[
\lim_{n\to\infty}B^{B}_{p0}(B_{p0})=\lim_{n\to\infty}(p+1)^{p/2-1}n^{-p/2}=0,\quad [P_{M_0}],
\]
so that $B^{B}_{p0}(B_{p0})$ is consistent under the null model $M_0$.

When sampling from $M_p$, we distinguish the cases $b=0$ and $0<b<1$. For $b=0$, we have
\[
\lim_{n\to\infty}\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}=\frac{\delta(p+1)}{2},\quad [P_{M_p}].
\]
Then, for large $n$,
\[
\gamma\Big(\frac{p+1}{2},\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big)\approx \gamma\Big(\frac{p+1}{2},\frac{\delta(p+1)}{2}\Big),\quad [P_{M_p}],
\]
and from ii) in Lemma 2 we have that
\[
B^{B}_{p0}(B_{p0})\approx A(p,\delta)\,n^{-p/2}(1+\delta)^{(n-1)/2},\quad [P_{M_p}],
\]
where $A(p,\delta)=2^{(p-1)/2}(p+1)^{-1/2}\delta^{-(p+1)/2}\gamma\big(\frac{p+1}{2},\frac{\delta(p+1)}{2}\big)$. Therefore,
\[
\lim_{n\to\infty}B^{B}_{p0}(B_{p0})=A(p,\delta)\lim_{n\to\infty}n^{-p/2}(1+\delta)^{(n-1)/2}=\infty,\quad [P_{M_p}].
\]
This proves the consistency of $B^{B}_{p0}$ for $b=0$.

For $0<b<1$ we have that
\[
\lim_{n\to\infty}\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}=\infty,\quad [P_{M_p}],
\]
and then for large $n$
\[
\gamma\Big(\frac{p+1}{2},\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big)\approx \Gamma\Big(\frac{p+1}{2}\Big),\quad [P_{M_p}].
\]
Therefore,
\[
\lim_{n\to\infty}B^{B}_{p0}(B_{p0})=\lim_{n\to\infty}A\,(1+\delta)^{(n-1)/2}=\infty,\quad [P_{M_p}],
\]
where $A=2^{(p-1)/2}(\delta n)^{-\frac{p}{2}}[\delta(p+1)]^{-1/2}\Gamma\big(\frac{p+1}{2}\big)$. This completes the proof of the theorem.

Appendix C Proof of Theorem 2

i) The proof of i) has been given in Moreno et al.
(2010) and, hence, it is omitted.

ii) When sampling from model $M_0$ we have
\[
\lim_{p\to\infty}B^{IPH}_{p0}(B_{p0})=\lim_{p\to\infty}\Bigg(\frac{(1+2r)^{r-1}}{(2r-1)^{r}}\Bigg)^{p/2},\quad [P_{M_0}].
\]
The function of $r$ inside the brackets is smaller than 1 for $r$
$>1$, and this proves the consistency under $M_0$. When sampling from $M_p$ we have that
\[
\lim_{p\to\infty}B^{IPH}_{p0}(B_{p0})=\lim_{p\to\infty}\Bigg(\frac{(1+2r)^{r-1}}{\big(1+\frac{2(r-1)}{1+\delta}\big)^{r}}\Bigg)^{p/2},\quad [P_{M_p}].
\]
Thus, for any $(r,\delta)$ such that
\[
(1+2r)^{r-1}\Big(1+\frac{2(r-1)}{1+\delta}\Big)^{-r}\le 1
\]
the Bayes factor is inconsistent under $M_p$. This proves the assertion.

iii) Using the approximation of $B^{ZS}_{p0}(B_{p0})$ for large $n$ in Lemma 2 in Moreno et al. (2015),
\[
B^{ZS}_{p0}(B_{p0})\approx \Big(\frac{n\exp(1)}{p+1}\Big)^{-p/2}\Big(\frac{1-p/n}{1+\delta_{p0}}\Big)^{-(n-p-2)/2},
\]
when sampling from the null $M_0$ we have that $\lim_{n\to\infty}\delta_{p0}=0$ and thus
\[
\lim_{n\to\infty}B^{ZS}_{p0}(B_{p0})=\lim_{p\to\infty}(r\exp(1))^{-p/2}\Big(1-\frac1r\Big)^{-p(r-1)/2}=0,\quad [P_{M_0}].
\]
This proves the consistency of $B^{ZS}_{p0}$ under $M_0$. When sampling from the alternative $M_p$ we have that
\[
\lim_{n\to\infty}B^{ZS}_{p0}(B_{p0})=\lim_{p\to\infty}(r\exp(1))^{-p/2}\Big(\frac{1-1/r}{1+\delta}\Big)^{-p(r-1)/2}
=\lim_{p\to\infty}\Bigg[\frac{1}{r\exp(1)}\Big(\frac{1+\delta}{1-1/r}\Big)^{r-1}\Bigg]^{p/2},\quad [P_{M_p}].
\]
Therefore $B^{ZS}_{p0}(B_{p0})$ is inconsistent if and only if
\[
\frac{1}{r\exp(1)}\Big(\frac{1+\delta}{1-1/r}\Big)^{r-1}\le 1.
\]
This proves iii).

iv) For large $n$ we have that $B^{FS}_{p0}(B_{p0})\approx n^{-p/2}B_{p0}^{-(n-1)/2}$. When sampling from the null $M_0$ it follows that
\[
\lim_{n\to\infty}B^{FS}_{p0}(B_{p0})=\lim_{p\to\infty}(pr)^{-p/2}\Big(1-\frac1r\Big)^{-pr/2}=\lim_{p\to\infty}\Big[r\Big(1-\frac1r\Big)^{r}p\Big]^{-p/2}=0,\quad [P_{M_0}],
\]
and this proves the consistency under the null. Under the alternative $M_p$ we have that
\[
\lim_{n\to\infty}B^{FS}_{p0}(B_{p0})=\lim_{p\to\infty}(pr)^{-p/2}\big[(1-1/r)(1+\delta)^{-1}\big]^{-pr/2}=\lim_{p\to\infty}\big[r(1-1/r)^{r}(1+\delta)^{-r}p\big]^{-p/2}=0,\quad [P_{M_p}],
\]
and hence $B^{FS}_{p0}(B_{p0})$ is inconsistent for any $(r,\delta)$. This proves iv).

v) When sampling from the null we have that
\[
\lim_{n\to\infty}B_{p0}=1-\frac1r,\quad [P_{M_0}],
\]
and hence
\[
\lim_{n\to\infty}\frac{(1-B_{p0})\,p}{2B_{p0}}=\infty,\quad [P_{M_0}].
\]
Then, for large $n$,
\[
\gamma\Big(\frac{p+1}{2},\frac{1-B_{p0}}{B_{p0}}\frac{p+1}{2}\Big)\approx \Gamma\Big(\frac{p+1}{2}\Big),\quad [P_{M_0}],
\]
and from part ii) in Lemma 2
\[
B^{B}_{p0}(B_{p0})\approx \frac12\frac{n^{-p/2}}{p^{1/2}}B_{p0}^{-(n-1)/2}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-(p+1)/2}\Gamma\Big(\frac{p+1}{2}\Big)
\approx \frac12\frac{(pr)^{-p/2}}{p^{1/2}}\Big(\frac{r-1}{r}\Big)^{-(pr-1)/2}\Big(\frac{1}{2(r-1)}\Big)^{-(p+1)/2}\Gamma\Big(\frac{p+1}{2}\Big)
=A(r)\,2^{(p-1)/2}p^{-(p+1)/2}\Big(\frac{r}{r-1}\Big)^{p(r-1)/2}\Gamma\Big(\frac{p+1}{2}\Big),\quad [P_{M_0}],
\]
where $A(r)=r^{-1/2}(r-1)>0$.

Further, from Stirling's approximation of the Gamma function, it follows that
\[
B^{B}_{p0}(B_{p0})\approx A(r)\,\pi^{1/2}2^{(p-1)/2}p^{-(p+1)/2}\Big(\frac{r}{r-1}\Big)^{p(r-1)/2}2^{-(p-1)/2}p^{p/2}\exp(-p/2)
=A(r)\,\pi^{1/2}p^{-1/2}\Big(\frac{r}{r-1}\Big)^{p(r-1)/2}\exp(-p/2),\quad [P_{M_0}].
\]
Therefore,
\[
\lim_{n\to\infty}B^{B}_{p0}(B_{p0})=\lim_{p\to\infty}\Bigg(\Big(\frac{r}{r-1}\Big)^{r-1}\exp(-1)\Bigg)^{p/2}=0,\quad [P_{M_0}],
\]
where the last equality follows from $\big(\frac{r}{r-1}\big)^{r-1}\exp(-1)<1$. This proves the consistency of $B^{B}_{p0}$ under the null model.

When sampling from the alternative we have that
\[
\lim_{n\to\infty}B_{p0}=\frac{r-1}{r(1+\delta)},\quad [P_{M_p}],
\]
and hence
\[
\lim_{n\to\infty}\frac{(1-B_{p0})\,p}{2B_{p0}}=\infty,\quad [P_{M_p}].
\]
Then, for large $n$,
\[
\gamma\Big(\frac{p+1}{2},\frac{(1-B_{p0})\,p}{2B_{p0}}\Big)\approx \Gamma\Big(\frac{p+1}{2}\Big),\quad [P_{M_p}],
\]
and from part ii) in Lemma 2
\[
B^{B}_{p0}(B_{p0})\approx \frac12\frac{n^{-p/2}}{p^{1/2}}B_{p0}^{-(n-1)/2}\Big(\frac{1-B_{p0}}{2B_{p0}}\Big)^{-(p+1)/2}\Gamma\Big(\frac{p+1}{2}\Big)
\approx \frac12\frac{(pr)^{-p/2}}{p^{1/2}}\Big(\frac{r-1}{r(1+\delta)}\Big)^{-(pr-1)/2}\Big(\frac{1+\delta r}{2(r-1)}\Big)^{-(p+1)/2}\Gamma\Big(\frac{p+1}{2}\Big)
=A(r,\delta)\,2^{(p-1)/2}p^{-(p+1)/2}\Big(\frac{r}{r-1}\Big)^{p(r-1)/2}\Big(\frac{(1+\delta)r}{1+\delta r}\Big)^{p/2}\Gamma\Big(\frac{p+1}{2}\Big),\quad [P_{M_p}],
\]
where $A(r,\delta)=(r-1)\big(r(1+\delta r)(1+\delta)\big)^{-1/2}>0$.
Further, making use of Stirling's approximation for the Gamma function, it follows that
\[
B^{B}_{p0}(B_{p0})\approx A(r,\delta)\,\pi^{1/2}2^{(p-1)/2}p^{-(p+1)/2}\Big(\frac{r}{r-1}\Big)^{p(r-1)/2}\Big(\frac{(1+\delta)r}{1+\delta r}\Big)^{p/2}2^{-(p-1)/2}p^{p/2}\exp(-p/2)
=A(r,\delta)\,\pi^{1/2}p^{-1/2}\Big(\frac{r}{r-1}\Big)^{p(r-1)/2}\Big(\frac{(1+\delta)r}{1+\delta r}\Big)^{p/2}\exp(-p/2),\quad [P_{M_p}].
\]
So,
\[
\lim_{n\to\infty}B^{B}_{p0}(B_{p0})=\lim_{p\to\infty}\Bigg(\Big(\frac{r}{r-1}\Big)^{r-1}\frac{(1+\delta)^{r}}{1+\delta r}\exp(-1)\Bigg)^{p/2}
=\begin{cases}0, & \text{if } h(r,\delta)\le 1,\ [P_{M_p}],\\ \infty, & \text{if } h(r,\delta)>1,\ [P_{M_p}],\end{cases}
\]
where
\[
h(r,\delta)=\Big(\frac{r}{r-1}\Big)^{r-1}\frac{(1+\delta)^{r}}{1+\delta r}\exp(-1).
\]
This proves v) and completes the proof of the theorem.

References

Berger, J., Bernardo, J.M.: On the development of the reference prior method (with discussion). Bayesian Statistics 4, 35–60 (1992)

Bayarri, M.J., Berger, J.O., Forte, A., García-Donato, G.: Criteria for Bayesian model choice with application to variable selection. The Annals of Statistics 40, 1550–1577 (2012)

Berger, J.O., Pericchi, L.: The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association 91, 109–122 (1996)

Consonni, G., Fouskakis, D., Liseo, B., Ntzoufras, I.: Prior distributions for objective Bayesian analysis. Bayesian Analysis 13(2), 627–679 (2018)

Cui, W.,
George, E.I.: Empirical Bayes vs. fully Bayes variable selection. Journal of Statistical Planning and Inference 138, 888–900 (2008)

Casella, G., Moreno, E.: Objective Bayesian variable selection. Journal of the American Statistical Association 101, 157–167 (2006)

Fernández, C., Ley, E., Steel, M.F.: Benchmark priors for Bayesian model averaging. Journal of Econometrics 100, 381–427 (2001)

Girón, F.J.: Bayesian Testing of Statistical Hypotheses. Arguval, Málaga (2021)

Girón, F.J., Moreno, E., Casella, G., Martínez, M.L.: Consistency of objective Bayes factors for nonnested linear models and increasing model dimension. RACSAM Real Academia de Ciencias. Serie A. Matemáticas 104, 161–171 (2010)

Girón, F.J., Moreno, E., Martínez, M.L.: An objective Bayesian procedure for variable selection in regression. In: Balakrishnan, N., Castillo, E., Sarabia, J.M. (eds.) Advances on Distribution Theory, Order Statistics and Inference, pp. 389–404. Birkhäuser, Boston (2006)

Jeffreys, H.: Theory of Probability. Oxford University Press, London (1961)

Kang, S.G., Lee, W.D., Kim, Y.: Objective Bayesian group variable selection for linear model. Computational Statistics, 1–24 (2021)

Leamer, E.E.: Let's take the con out of econometrics. American Economic Review 73, 31–43 (1983)

Liang, F., Paulo, R., Molina, G., Clyde, M.A., Berger, J.O.: Mixtures of g priors for Bayesian variable selection. Journal of the American Statistical Association 103(481), 410–423 (2008)

Moreno, E., Bertolino, F., Racugno, W.: An intrinsic limiting procedure for model selection and hypothesis testing. Journal of the American Statistical Association 93, 1451–1460 (1998)

Moreno, E., Girón, F.J.: Consistency of Bayes factors for linear models. Comptes Rendus de l'Académie des Sciences. Series I, Mathematics 340, 911–914 (2005)

Moreno, E., Girón, F.J.: Comparison of Bayesian objective procedures for variable selection in linear regression.
Test 17, 472–492 (2008)

Maruyama, Y., George, E.I.: Fully Bayes factors with a generalized g-prior. The Annals of Statistics 39(5), 2740–2765 (2011)

Moreno, E., Girón, F.J., Casella, G.: Consistency of objective Bayes factors as the model dimension grows. The Annals of Statistics 38, 1937–1952 (2010)

Moreno, E., Girón, F.J., Casella, G.: Posterior model consistency in variable selection as the model dimension grows. Statistical Science 30(2), 228–241 (2015)

Moreno, E., Girón, F.J., Torres-Ruiz, F.: Intrinsic priors for hypothesis testing in normal regression models. RACSAM Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A, Matemáticas 97, 53–61 (2003)

Moreno, E.: Bayes factors for intrinsic and fractional priors in nested models: Bayesian robustness. L1-Statistical Procedures and Related Topics. Lecture Notes-Monograph Series 31, 257–270 (1997)

Moreno, E., Torres-Ruiz, F., Casella, G.: Testing equality of regression coefficients in heteroscedastic normal regression models. Journal of Statistical Planning and Inference 131, 117–134 (2005)

Torres, F.A., Moreno, E., Girón, F.J.: Intrinsic priors for model comparison in multivariate normal regression. RACSAM Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A, Matemáticas 105, 273–289 (2011)

Zellner, A.: Basic Issues in Econometrics. University of Chicago Press, Chicago (1984)

Zellner, A., Siow, A.: Posterior odds ratios for selected regression hypotheses. In: Bernardo, J.M., DeGroot, M.H., Lindley, D.V., Smith, A.F.M. (eds.) Bayesian Statistics. Proceedings of the First International Meeting Held in
arXiv:2505.12243v1 [math.CO] 18 May 2025

Improved Bounds on the Probability of a Union and on the Number of Events that Occur

Ilan Adler*, Richard M. Karp* and Sheldon M. Ross†

April 15, 2025

Abstract

Let $A_1,A_2,\ldots,A_n$ be events in a sample space. Given the probability of the intersection of each collection of up to $k+1$ of these events, what can we say about the probability that at least $r$ of the events occur? This question dates back to Boole in the 19th century, and it is well known that the odd partial sums of the Inclusion-Exclusion formula provide upper bounds, while the even partial sums provide lower bounds. We give a combinatorial characterization of the error in these bounds and use it to derive a very simple proof of the strongest possible bounds of a certain form, as well as a couple of improved bounds. The new bounds use more information than the classical Bonferroni-type inequalities, and are often sharper.

Keywords: Bonferroni-type inequalities

1 Introduction

Let $A_1,A_2,\ldots,A_n$ be events in a probability space and let $X$ denote the number of these events that occur. We are interested in obtaining bounds on $P(X\ge r)$, for $r=1,2,\ldots,n$, based on knowledge of the probabilities of all $s$-way intersections of the events, for each $s=1,2,\ldots,k+1$, for some $k<n$. Let $S_j$ be the sum of the probabilities of all $j$-way intersections, for $1\le j\le n$. That is, $S_j=\sum_{1\le i_1<i_2<\cdots<i_j\le n}P(A_{i_1}A_{i_2}\cdots A_{i_j})$. The Principle of Inclusion and Exclusion states that
\[
P(X\ge r)=\sum_{j=r}^{n}(-1)^{r+j}\binom{j-1}{r-1}S_j.
\]

*Department of Industrial Engineering and Operations Research, University of California, Berkeley (ilan@berkeley.edu, karp@cs.berkeley.edu).
†Department of Industrial and Systems Engineering, University of Southern California. Supported by the National Science Foundation under contract/grant CMMI2132759.
(smross@usc.edu)

It is also known that truncations of the preceding expression give upper and lower bounds on $P(X\ge r)$: if $i$ is odd then
\[
\sum_{j=r}^{r+i}(-1)^{r+j}\binom{j-1}{r-1}S_j\ \le\ P(X\ge r)\ \le\ \sum_{j=r}^{r+i+1}(-1)^{r+j}\binom{j-1}{r-1}S_j. \qquad (1)
\]
Many improvements of this bound have been given, and are generically called Bonferroni-type inequalities. In the next section we recall how the preceding inequalities can be derived from an algebraic identity. Our strategy in this note is to follow a similar line of reasoning, generalizing the algebraic identity to obtain some interesting expressions for $P(X\ge r)$, and then derive bounds from those expressions that are stronger or more general than the bounds previously known. Whereas our strongest bounds are expressed in terms of the individual $s$-fold intersections of events, and hence exploit more information than the typical Bonferroni-type inequalities, which depend only on the coarser information given by the quantities $S_j$, $j=1,\ldots,k+1$, we also give bounds just depending on the $S_j$. Other bounds based on the same information as ours are given in [1] and [3], as well as in the references therein.

Notational Conventions. For integers $s,t$, we let $\binom{t}{s}=0$ if either $\min(s,t)<0$ or $s>t$; otherwise $\binom{t}{s}=\frac{t!}{s!(t-s)!}$. Also, for any event $B$, we let $I\{B\}$ be the indicator variable of the event $B$, equal to 1 if $B$ occurs and to 0 otherwise.

2 Basic Expressions for the Distribution of X

Lemma 1 For all integers $r>0$, $m\ge 0$,
\[
I\{m\ge r\}=\sum_{j=r}^{m}(-1)^{r+j}\binom{j-1}{r-1}\binom{m}{j}. \qquad (2)
\]

Proof: Note that both sides of Equation (2) equal 0 when $m<r$. The proof when $m\ge r$ is by induction on $r$. The result follows when $r=1$ since for $m\ge 1$ the identity $\sum_{j=0}^{m}\binom{m}{j}(-1)^{j}=(1-1)^{m}=0$ implies that $1=\sum_{j=1}^{m}\binom{m}{j}(-1)^{j+1}$. Assuming the result for $r$, we have
\begin{align*}
\sum_{j=r+1}^{m}(-1)^{r+j+1}\binom{j-1}{r}\binom{m}{j}
&=\sum_{j=r+1}^{m}(-1)^{r+j+1}\Big[\binom{j}{r}-\binom{j-1}{r-1}\Big]\binom{m}{j}\\
&=\sum_{j=r+1}^{m}(-1)^{r+j+1}\binom{j}{r}\binom{m}{j}+\sum_{j=r+1}^{m}(-1)^{r+j}\binom{j-1}{r-1}\binom{m}{j}\\
&=\sum_{j=r+1}^{m}(-1)^{r+j+1}\binom{m}{r}\binom{m-r}{j-r}+1-\binom{m}{r} \qquad (3)\\
&=\binom{m}{r}\sum_{k=1}^{m-r}(-1)^{k+1}\binom{m-r}{k}+1-\binom{m}{r}\\
&=\binom{m}{r}+1-\binom{m}{r}=1,
\end{align*}
where Equation (3) followed from the induction hypothesis.

Theorem 1 With $X$ being the number of the events $A_1,\ldots,A_n$ that occur,
\[
P(X\ge r)=\sum_{j=r}^{n}(-1)^{r+j}\binom{j-1}{r-1}S_j.
\]

Proof: Because Lemma 1 holds for all $m$, it follows that
\[
I\{X\ge r\}=\sum_{j=r}^{X}(-1)^{r+j}\binom{j-1}{r-1}\binom{X}{j}=\sum_{j=r}^{n}(-1)^{r+j}\binom{j-1}{r-1}\binom{X}{j},
\]
where the second equality follows because $X\le n$ and $\binom{X}{j}=0$ when $j>X$. Taking expectations, and using that $E\big[\binom{X}{j}\big]=S_j$, yields the result.
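Since Lemma 1 and Theorem 1 are exact identities, they can be verified by brute force. The following sketch (a hypothetical toy example of ours, not from the paper) checks Lemma 1 over small $m$ and $r$, and Theorem 1 on randomly generated events over a finite sample space with equally likely outcomes:

```python
import math
import random
from itertools import combinations

# Lemma 1: I{m >= r} = sum_{j=r}^m (-1)^(r+j) C(j-1, r-1) C(m, j).
for m in range(0, 12):
    for r in range(1, 12):
        s = sum((-1) ** (r + j) * math.comb(j - 1, r - 1) * math.comb(m, j)
                for j in range(r, m + 1))
        assert s == (1 if m >= r else 0)

# Theorem 1 on a small random instance: events over a finite sample space
# with equally likely outcomes (a hypothetical toy model).
random.seed(1)
n, space = 5, range(30)
events = [{w for w in space if random.random() < 0.4} for _ in range(n)]

def S(j):  # sum of the probabilities of all j-way intersections
    return sum(len(set.intersection(*c)) / len(space)
               for c in combinations(events, j))

def P_at_least(r):  # exact P(X >= r)
    return sum(1 for w in space if sum(w in A for A in events) >= r) / len(space)

for r in range(1, n + 1):
    ie = sum((-1) ** (r + j) * math.comb(j - 1, r - 1) * S(j)
             for j in range(r, n + 1))
    assert math.isclose(ie, P_at_least(r), abs_tol=1e-12)
```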
Lemma 2 For all nonnegative integers $m,k,r$ such that $k\ge r$,
\[
\sum_{j=k+1}^{m}(-1)^{j}\binom{j-1}{r-1}\binom{m}{j}=(-1)^{k+1}\sum_{i=1}^{r}\binom{k-i}{r-i}\binom{m-i}{k-i+1}.
\]

Proof: The result holds if $m<k$ since both sides equal 0 in this case. Hence, we need to prove the result when $m\ge k\ge r$. The proof is by induction on $m$. As it is immediate for $m=k$, assume it is true for $m-1$. Then, with $H(m,k,r)=\sum_{j=k+1}^{m}(-1)^{j}\binom{j-1}{r-1}\binom{m}{j}$,
\begin{align*}
H(m,k,r)&=\sum_{j=k+1}^{m}(-1)^{j}\binom{j-1}{r-1}\Big[\binom{m-1}{j-1}+\binom{m-1}{j}\Big]\\
&=\sum_{j=k+1}^{m}(-1)^{j}\binom{j-1}{r-1}\binom{m-1}{j-1}+\sum_{j=k+1}^{m-1}(-1)^{j}\binom{j-1}{r-1}\binom{m-1}{j}\\
&=\sum_{j=k}^{m-1}(-1)^{j+1}\binom{j}{r-1}\binom{m-1}{j}-\sum_{j=k+1}^{m-1}(-1)^{j+1}\binom{j-1}{r-1}\binom{m-1}{j}\\
&=\sum_{j=k+1}^{m-1}(-1)^{j+1}\Big[\binom{j}{r-1}-\binom{j-1}{r-1}\Big]\binom{m-1}{j}+(-1)^{k+1}\binom{k}{r-1}\binom{m-1}{k}\\
&=\sum_{j=k+1}^{m-1}(-1)^{j+1}\binom{j-1}{r-2}\binom{m-1}{j}+(-1)^{k+1}\binom{k}{r-1}\binom{m-1}{k}\\
&=\sum_{j=k+1}^{m-1}(-1)^{j+1}\binom{j-1}{r-2}\binom{m-1}{j}+(-1)^{k+1}\Big[\binom{k-1}{r-1}+\binom{k-1}{r-2}\Big]\binom{m-1}{k}\\
&=\sum_{j=k}^{m-1}(-1)^{j+1}\binom{j-1}{r-2}\binom{m-1}{j}+(-1)^{k+1}\binom{k-1}{r-1}\binom{m-1}{k}\\
&=-H(m-1,k-1,r-1)+(-1)^{k+1}\binom{k-1}{r-1}\binom{m-1}{k}\\
&=(-1)^{k+1}\sum_{i=1}^{r-1}\binom{k-1-i}{r-1-i}\binom{m-1-i}{k-i}+(-1)^{k+1}\binom{k-1}{r-1}\binom{m-1}{k}\\
&=(-1)^{k+1}\sum_{j=2}^{r}\binom{k-j}{r-j}\binom{m-j}{k-j+1}+(-1)^{k+1}\binom{k-1}{r-1}\binom{m-1}{k}\\
&=(-1)^{k+1}\sum_{j=1}^{r}\binom{k-j}{r-j}\binom{m-j}{k-j+1},
\end{align*}
where the induction hypothesis was used to evaluate $H(m-1,k-1,r-1)$.

Theorem 2 For $k\ge r$,
\[
P(X\ge r)=\sum_{j=r}^{k}(-1)^{r+j}\binom{j-1}{r-1}S_j+(-1)^{r+k+1}\sum_{i=1}^{r}\binom{k-i}{r-i}E\Big[\binom{X-i}{k-i+1}\Big].
\]

Proof: It follows from Lemmas 1 and 2 that for $k\ge r$
\[
I\{X\ge r\}=\sum_{j=r}^{k}(-1)^{r+j}\binom{j-1}{r-1}\binom{X}{j}+(-1)^{r+k+1}\sum_{i=1}^{r}\binom{k-i}{r-i}\binom{X-i}{k-i+1}.
\]
Taking expectations now yields the result.

Remark: The classical Bonferroni-type inequalities (1) follow from Theorem 2 upon observing that $\sum_{i=1}^{r}\binom{k-i}{r-i}E\big[\binom{X-i}{k-i+1}\big]\ge 0$.

3 Bounds on P(X ≥ r)

Lemma 3
\[
E\Big[\binom{X-i}{k+1-i}\Big]\ \ge\ \frac{\binom{k+1}{i}}{\binom{n}{i}}\,S_{k+1}.
\]

Proof: Because $X\le n$ and $\binom{j-i}{k+1-i}\big/\binom{j}{k+1}$ is monotonically decreasing in $j$, it follows that
\[
\binom{X-i}{k+1-i}\ \ge\ \binom{X}{k+1}\frac{\binom{n-i}{k+1-i}}{\binom{n}{k+1}}=\binom{X}{k+1}\frac{\binom{k+1}{i}}{\binom{n}{i}}.
\]
Taking expectations yields the result.
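Lemma 2, the Theorem 2 identity and the Lemma 3 inequality admit the same kind of brute-force verification. In the sketch below (again a toy example of ours, not from the paper), the helper `C` implements the paper's zero convention for binomial coefficients:

```python
import math
import random
from itertools import combinations

def C(t, s):  # binomial coefficient with the paper's zero convention
    return math.comb(t, s) if 0 <= s <= t else 0

# Lemma 2 for a range of m, k, r with k >= r.
for m in range(0, 10):
    for r in range(1, 5):
        for k in range(r, 8):
            lhs = sum((-1) ** j * C(j - 1, r - 1) * C(m, j) for j in range(k + 1, m + 1))
            rhs = (-1) ** (k + 1) * sum(C(k - i, r - i) * C(m - i, k - i + 1)
                                        for i in range(1, r + 1))
            assert lhs == rhs

# Theorem 2 (an exact identity) and Lemma 3 on a hypothetical random instance.
random.seed(2)
n, space = 6, range(40)
events = [{w for w in space if random.random() < 0.5} for _ in range(n)]
counts = [sum(w in A for A in events) for w in space]          # X(w)

def S(j):
    return sum(len(set.intersection(*c)) / len(space) for c in combinations(events, j))

def E_binom(i, t):                                             # E[ C(X-i, t) ]
    return sum(C(x - i, t) for x in counts) / len(space)

for r in range(1, 4):
    for k in range(r, n):
        partial = sum((-1) ** (r + j) * C(j - 1, r - 1) * S(j) for j in range(r, k + 1))
        corr = sum(C(k - i, r - i) * E_binom(i, k - i + 1) for i in range(1, r + 1))
        exact = sum(x >= r for x in counts) / len(space)
        assert math.isclose(exact, partial + (-1) ** (r + k + 1) * corr, abs_tol=1e-9)
        for i in range(1, r + 1):                              # Lemma 3
            assert E_binom(i, k + 1 - i) + 1e-12 >= C(k + 1, i) / C(n, i) * S(k + 1)
```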
Theorem 3
\[
P(X\ge r)\ \ge\ \sum_{j=r}^{k}(-1)^{r+j}\binom{j-1}{r-1}S_j+\sum_{i=1}^{r}\binom{k-i}{r-i}\frac{\binom{k+1}{i}}{\binom{n}{i}}S_{k+1},\qquad r+k\ \text{odd},
\]
\[
P(X\ge r)\ \le\ \sum_{j=r}^{k}(-1)^{r+j}\binom{j-1}{r-1}S_j-\sum_{i=1}^{r}\binom{k-i}{r-i}\frac{\binom{k+1}{i}}{\binom{n}{i}}S_{k+1},\qquad r+k\ \text{even}.
\]

Proof: Immediate from Theorem 2 and Lemma 3.

Remarks:

• Note (by considering the case $P(X=n)=1$) that $\alpha=\sum_{i=1}^{r}\binom{k-i}{r-i}\frac{\binom{k+1}{i}}{\binom{n}{i}}$ is the most stringent among all the bounds of the type
\[
\sum_{j=r}^{k}(-1)^{r+j}\binom{j-1}{r-1}S_j+(-1)^{r+k+1}\alpha S_{k+1}.
\]

• In his book, Galambos [1] presents a result (proved first in [2]) that is similar to Theorem 3, except that $\sum_{i=1}^{r}\binom{k-i}{r-i}\frac{\binom{k+1}{i}}{\binom{n}{i}}$ is replaced by $\Big(\sum_{j=0}^{k-r}(-1)^{j}\binom{r+j-1}{r-1}\binom{n}{r+j}^{-1}\Big)\big/\binom{n}{k+1}$. However, these terms are actually identical, as can be easily shown via Lemmas 1, 2, and 3. The proof presented here, though, is much
|
https://arxiv.org/abs/2505.12243v1
|
simpler. Considering Theorem 2, it is clear that obtaining a lower bound for E/bracketleftbig/parenleftbigX−i k+1−i/parenrightbig/bracketrightbig can provide upper and lower bounds for P(X≥r). So far, we have derived bounds based solely on the coarse information provided by the quantities Sj,j= 1,...,k+1,Next, we refine these bounds by expressing them in terms of the individual s-fold intersec tions of events, leading to stronger bounds. LetS={Ai1,...,A ix},i1< ... < i x,be the set of events that occur, and note that for i≤k≤x−1,/parenleftbigx−i k+1−i/parenrightbig is the number of k+1−isized subsets in . {Ai1,...,A ix−i}. Now, for r1< ... < r k+1−i,{Ar1,...,A rk+1−i}is such a subset if all of the events Ar1,...,A rk+1−ioccur as does at least ieventsAt1,...,A ti, whererk+1−i< tjforj= 1,...,i. Hence, defining T(r1,...,r k+1−i) ={(t1,...,t i) :rk+1−i< t1<···< ti} we obtain the expectation bound E/bracketleftbigg/parenleftbiggX−i k+1−i/parenrightbigg/bracketrightbigg =/summationdisplay r1<···<rk+1−iP /uniondisplay (t1,...,ti)∈T(r1,...,rk+1−i)Ar1···Ark+1−iAt1···Ati ≥/summationdisplay r1<···<rk+1−imax (t1,...,ti)∈T(r1,...,rk+1−i)P(Ar1···Ark+1−iAt1···Ati).(4) The preceding yields Theorem 4 P(X≥r)≥k/summationdisplay j=r(−1)r+j/parenleftbiggj−1 r−1/parenrightbigg Sj+r/summationdisplay i=1/parenleftbiggk−i r−i/parenrightbigg/summationdisplay r1<···<rk+1−imax (t1...,ti)∈T(r1,...,rk+1−i)P(Ar1···Ark+1−iAt1···Ati)r+kodd P(X≥r)≤k/summationdisplay j=r(−1)r+j/parenleftbiggj−1 r−1/parenrightbigg Sj−r/summationdisplay i=1/parenleftbiggk−i r−i/parenrightbigg/summationdisplay r1<···<rk+1−imax (t1...,ti)∈T(r1,...,rk+1−i)P(Ar1···Ark+1−iAt1···Ati)r+keven Proof:Immediate fromTheorem 2 and (4) . 5 3.1 An Improved Bound of P(X≥1) The case of P(X≥1) is of particular interest, as there are applications in which the foc us is on the probability of at least one event occurring (see e.g. [1]). 
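The remark above claims that the Theorem 3 coefficient of $S_{k+1}$ coincides with the Galambos/Margaritescu expression. This can be confirmed numerically; in the sketch below the sign carried by the factor $(-1)^{r+k+1}$ is set aside, so absolute values are compared:

```python
# Check that the Theorem 3 coefficient of S_{k+1} equals (in absolute value) the
# Galambos/Margaritescu form quoted in the remark above.
from math import comb, isclose

def alpha_theorem3(n, r, k):
    return sum(comb(k - i, r - i) * comb(k + 1, i) / comb(n, i)
               for i in range(1, r + 1))

def alpha_galambos(n, r, k):
    s = sum((-1) ** j * comb(r + j - 1, r - 1) * comb(n, r + j)
            for j in range(0, k - r + 1))
    return (s - 1) / comb(n, k + 1)

for n in range(3, 10):
    for r in range(1, n):
        for k in range(r, n):
            assert isclose(alpha_theorem3(n, r, k), abs(alpha_galambos(n, r, k)))
print("coefficients agree")
```

For example, with $r = 1$ the coefficient reduces to $(k+1)/n$ under both formulas.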
Specializing $i=1$ in (4), we obtain
\[
E\Bigl[\binom{X-1}{k}\Bigr]\ \ge\ \sum_{r_1<\cdots<r_k}\max_{r>r_k}P(A_{r_1}\cdots A_{r_k}A_r)\tag{5}
\]
As the preceding lower bound holds for all numberings of the $n$ events, an improved lower bound can be obtained by finding the numbering that maximizes the right-hand side of (5). However, this is not computationally feasible when $n$ is large. Instead, we will show how to compute the expected value of the right-hand side of (5) when the numbering is chosen at random. To this end, we first present the following lemma.

Lemma 4 Let $i_1,\ldots,i_k,j_1,\ldots,j_s$ all be distinct indices. The probability that $j_s$ is, in a random permutation, the only one of $j_1,\ldots,j_s$ to follow all of the indices $i_1,\ldots,i_k$ is $\frac{k}{(k+s)(k+s-1)}$.

Proof: Note that this will be true if and only if $j_s$ is the last, and one of $i_1,\ldots,i_k$ the next to last, of these $k+s$ indices to appear in the random permutation. Consequently, the desired probability is $\frac{1}{k+s}\cdot\frac{k}{k+s-1}$.

For distinct indices $i_1,\ldots,i_k$, let $W_s(i_1,\ldots,i_k)$ be the $s$th largest of the $n-k$ values $P(A_{i_1}\cdots A_{i_k}A_j)$, $j\notin\{i_1,\ldots,i_k\}$.

Lemma 5
\[
E\Bigl[\binom{X-1}{k}\Bigr]\ \ge\ \sum_{i_1<\cdots<i_k}\sum_{s=1}^{n-k}W_s(i_1,\ldots,i_k)\frac{k}{(k+s)(k+s-1)}
\]
Proof: With $p_1,\ldots,p_n$ being a random permutation of $1,\ldots,n$, and $B_i=A_{p_i}$, $i=1,\ldots,n$, it follows from (4) that
\[
E\Bigl[\binom{X-1}{k}\Bigr]\ \ge\ \sum_{i_1<\cdots<i_k}E\Bigl[\max_{r>i_k}P(B_{i_1}\cdots B_{i_k}B_r)\Bigr]\tag{6}
\]
Now, for $i_1<\cdots<i_k$,
\[
E\Bigl[\max_{r>i_k}P(B_{i_1}\cdots B_{i_k}B_r)\Bigr]=\frac{1}{\binom{n}{k}}\sum_{r_1<\cdots<r_k}E\Bigl[\max_{r}P(A_{r_1}\cdots A_{r_k}A_r)\Bigr]\tag{7}
\]
where the maximum on the right-hand side of the preceding equation (as well as on the left-hand side of the succeeding equation) is taken over all indices $r$ that follow all of $r_1,\ldots,r_k$ in the random permutation. Applying Lemma 4 gives that
\[
E\Bigl[\max_{r}P(A_{r_1}\cdots A_{r_k}A_r)\Bigr]=\sum_{s=1}^{n-k}W_s(r_1,\ldots,r_k)\frac{k}{(k+s)(k+s-1)}\tag{8}
\]
The result now follows from (6), (7), and (8).
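Lemma 4 can be verified exactly for small $k$ and $s$ by brute-force enumeration of permutations; the following sketch does so with exact rational arithmetic:

```python
# Exact check of Lemma 4 by enumerating all permutations of the k + s indices:
# the chance that j_s is the only one of j_1, ..., j_s to follow every one of
# i_1, ..., i_k should be k / ((k + s)(k + s - 1)).
from fractions import Fraction
from itertools import permutations

def lemma4_probability(k, s):
    items = [("i", a) for a in range(k)] + [("j", b) for b in range(s)]
    hits = total = 0
    for perm in permutations(items):
        total += 1
        last_i = max(p for p, (kind, _) in enumerate(perm) if kind == "i")
        # everything after the last i is necessarily a j, so "only j_s follows
        # all the i's" means the tail after the last i is exactly [j_s]
        followers = [perm[p][1] for p in range(last_i + 1, len(perm))]
        hits += (followers == [s - 1])
    return Fraction(hits, total)

for k in range(1, 4):
    for s in range(1, 4):
        assert lemma4_probability(k, s) == Fraction(k, (k + s) * (k + s - 1))
print("Lemma 4 verified")
```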
The preceding yields

Theorem 5
\[
P(X\ge 1)\ \ge\ \sum_{j=1}^{k}(-1)^{j+1}S_j+\sum_{r_1<\cdots<r_k}\sum_{s=1}^{n-k}W_s(r_1,\ldots,r_k)\frac{k}{(k+s)(k+s-1)},\qquad k\ \text{even}
\]
\[
P(X\ge 1)\ \le\ \sum_{j=1}^{k}(-1)^{j+1}S_j-\sum_{r_1<\cdots<r_k}\sum_{s=1}^{n-k}W_s(r_1,\ldots,r_k)\frac{k}{(k+s)(k+s-1)},\qquad k\ \text{odd}
\]
Proof: Immediate from Theorem 2 and Lemma 5.

Example Suppose that $n=6$ and that $P(A_{i_1}\cdots A_{i_j})=\alpha_{i_1}\cdots\alpha_{i_j}$, $j\le k+1$, where $\alpha_t=\frac{t+18}{100}$, $t=1,\ldots,6$, so $S_1=1.290$, $S_2=0.6925$, $S_3=0.1980$. Below, we tabulate the various bounds presented in this paper for $r=1$ and $k=2$. Note that since $k$ is even,
\[
S_1-S_2+\bigl(\text{lower bound on } E\bigl[\tbinom{X-1}{2}\bigr]\bigr)\ \le\ P(X\ge 1)\ \le\ S_1-S_2+S_3
\]

Reference              S1 - S2   Lower bound on     Lower bound on   Upper bound on
                                 E[C(X-1, 2)]       P(X >= 1)        P(X >= 1) (S1 - S2 + S3)
Eq. (1) (Bonferroni)   .5975     0                  .5975            .7955
Theorem 3              .5975     .0990              .6965            .7955
Theorem 4              .5975     .1057              .7032            .7955
Theorem 5              .5975     .1896              .7871            .7955

References
[1] J. Galambos and I. Simonelli. Bonferroni-type Inequalities with Applications. Springer-Verlag, New York, 1996.
[2] E. Margaritescu. On some Bonferroni inequalities. Stud. Cerc. Mat., 39: 246-251, 1994.
[3] J. Yang, F. Alajaji and G. Takahara. A short survey on bounding the union probability using partial information. arXiv:1710.07576, 2017.

https://arxiv.org/abs/2505.12243v1
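The tabulated example can be reproduced with a few lines of code. The sketch below recomputes $S_1$, $S_2$, $S_3$ and the Eq. (1), Theorem 3, and Theorem 4 bounds for $r=1$, $k=2$; Theorem 5's entry additionally requires the permutation-averaged $W_s$ machinery and is omitted here:

```python
# Recompute the worked example: n = 6, alpha_t = (t + 18)/100, r = 1, k = 2.
from itertools import combinations
from math import comb, prod

alpha = [(t + 18) / 100 for t in range(1, 7)]
n, r, k = 6, 1, 2

# binomial moments S_j = sum over j-subsets of the product of the alphas
S = {j: sum(prod(c) for c in combinations(alpha, j)) for j in (1, 2, 3)}

bonferroni = S[1] - S[2]                       # Eq. (1) lower bound, k = 2 even
coeff = sum(comb(k - i, r - i) * comb(k + 1, i) / comb(n, i)
            for i in range(1, r + 1))          # Theorem 3 coefficient of S_3
thm3 = bonferroni + coeff * S[3]
# Theorem 4 with i = 1: for each pair r1 < r2, take the best follower t > r2
# (pairs with r2 = n - 1 have no follower and contribute nothing)
thm4 = bonferroni + sum(
    max((alpha[r1] * alpha[r2] * alpha[t] for t in range(r2 + 1, n)), default=0.0)
    for r1, r2 in combinations(range(n), 2))

print(round(S[1], 4), round(S[2], 4), round(S[3], 4))        # 1.29 0.6925 0.198
print(round(bonferroni, 4), round(thm3, 4), round(thm4, 4))  # .5975 .6965 .7032
```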
A Hybrid Prior Bayesian Method for Combining Domestic Real-World Data and Overseas Data in Global Drug Development

Keer Chen1, Zengyue Zheng1, Pengfei Zhu2, Shuping Jiang2, Nan Li3, Jumin Deng1, Pingyan Chen4, Zhenyu Wu5*, Ying Wu1,4*

1 Department of Biostatistics, School of Public Health, Southern Medical University, Guangzhou 510515, China
2 BARDS, MSD China, Shanghai, China
3 BARDS, MSD R&D (China) Co., Ltd., Beijing, China
4 Hainan Institute of Real World Data, The Administration of Boao Lecheng International Medical Tourism Pilot Zone, Hainan 571434, China
5 Department of Biostatistics, School of Public Health, Fudan University, No. 130 Dong-An Road, Shanghai 200032, China. Tel/fax: +86 (21) 54237707. E-mail: zyw@fudan.edu.cn
* Corresponding to: Ying Wu (wuying19890321@gmail.com) and Zhenyu Wu

Abstract

Background
Hybrid clinical trial design integrates traditional randomized controlled trials (RCTs) with real-world data (RWD), aiming to enhance trial efficiency through dynamic incorporation of external data (external trial data and RWD). However, existing methods, such as the Meta-Analytic Predictive (MAP) prior, exhibit significant limitations in controlling data heterogeneity, adjusting baseline discrepancies, and optimizing dynamic borrowing proportions. These limitations often introduce external bias or compromise evidence reliability, hindering their application in complex analyses such as bridging trials and multi-regional clinical trials (MRCTs).

Objective
This study proposes a novel hybrid Bayesian framework, EQPS-rMAP, to address heterogeneity and bias in multi-source data integration. Its feasibility and robustness are validated through systematic simulations and retrospective case analyses, using two independent datasets evaluating risankizumab's efficacy in patients with moderate-to-severe plaque psoriasis.
Design and Methods
The EQPS-rMAP method operates in three stages: (1) eliminating baseline covariate discrepancies through propensity score stratification; (2) constructing stratum-specific MAP priors to dynamically adjust weights for external data; and (3) introducing equivalence probability weights to quantify data conflict risks. The study evaluates the method's performance across six simulated analyses (heterogeneity differences, baseline shifts, etc.), comparing it with traditional methods (MAP, PS-MAP, EB-rMAP) in terms of estimation bias, type I error control, and sample size requirements. Real-world case analyses further validate its applicability.

Results
Simulations demonstrate that EQPS-rMAP maintains estimation robustness under significant heterogeneity while reducing sample size demands and enhancing trial efficiency. Case analyses confirm its ability to control external bias while preserving high estimation accuracy compared to conventional approaches.

Conclusion and Significance
The EQPS-rMAP method provides empirical evidence for the feasibility of hybrid clinical designs. Its methodological advancements, resolving baseline and heterogeneity conflicts through adaptive mechanisms, offer broad applicability for integrating external and real-world data across diverse analyses, including bridging trials, MRCTs, and post-marketing studies.
Keywords: Hybrid clinical trial design, Bridging studies, Real-World Data (RWD), External trial data, Propensity score, Meta-analytic approaches

1 Introduction

In the domain of confirmatory research for novel drug development, randomized controlled trials (RCTs) remain the gold standard for efficacy evaluation. However, strategic integration of external data resources has emerged as a critical pathway to overcoming efficiency bottlenecks in R&D. Data from hospital information systems, natural population cohorts, and concurrent or historical clinical trials, which contain valuable drug effectiveness and safety information, provide complementary evidence for bridging trials and multi-regional clinical trials (MRCTs). A core paradox persists: while the FDA actively promotes real-world data (RWD) utilization through initiatives like the Complex Innovative Trial Design Pilot Program [1] and the Clinical Trials Transformation Initiative [2], challenges including data heterogeneity, inherent biases, and therapeutic effect variability continue to undermine the reliability of external data integration.

The bridging trial framework established by the ICH E5 guideline [3] and the widely adopted MRCT paradigm provide regulatory foundations for cross-regional data harmonization. Statistical methodologies for these analyses have evolved along two pathways [4]: classical approaches employing reconstruction probability, weighted Z-tests, and sequential designs [5-12] for data alignment, and Bayesian frameworks incorporating mixed priors [13] and power priors [14-17]. Traditional Bayesian mixed priors address inter-regional information conflicts but suffer from preset weighting constraints that limit practical utility. To overcome this limitation, Meta-Analytic Predictive (MAP) prior methods [18] and their derivatives, including the robust MAP [19], the Bayesian semiparametric MAP prior [20], empirical Bayesian MAP priors [21], and the Empirical Bayes Robust MAP prior (EB-rMAP) [22], dynamically adjust data borrowing proportions based on the consistency between historical and current datasets. However, within frameworks integrating real-world data (RWD) and external trial data, baseline discrepancies and varying effect heterogeneity
across data sources persist. Traditional methods fail to address baseline differences and cannot simultaneously account for heterogeneity variations among multiple data sources. Instead, they simplistically assume homogeneity across sources, applying uniform borrowing proportions.

China's established Real-World Study (RWS) guidelines now provide regulatory frameworks for innovative drug development pathways. The Boao Lecheng Pilot Zone's RWD-supported drug registration exemplifies regulatory innovation. This study introduces EQPS-rMAP, an advanced data-sharing strategy applicable to bridging studies, MRCTs, and post-marketing confirmatory research. The method constructs unified hybrid priors by synthesizing domestic RWD with foreign RCT data, stratifying patient subgroups through baseline covariate matching to create homogenized layers. Within each stratum, stratum-specific MAP priors are integrated with optimized weighting to reconcile effect heterogeneity and baseline discrepancies, augmented by uninformative prior weighting for enhanced robustness. Consequently, EQPS-rMAP enables adaptive data borrowing through adjustable threshold parameters, achieving precision-controlled external evidence integration. The following sections describe the theoretical foundations and practical applications of the PS and MAP methods, leading to the introduction of the EQPS-rMAP methodology in Section 2. Section 3 demonstrates the robustness of EQPS-rMAP through extensive simulation studies across various analyses, and Section 4 discusses the empirical results.

2 Statistical Methods

In anticipation of conducting a bridging study within a new geographical region, the available data comprise overseas randomized controlled trial data alongside domestic real-world data, the latter containing only the test group. It is posited that the responder count in each data segment follows a one-parameter exponential family of distributions, $Y\sim f(y;\theta)$, providing a standardized framework for analysis.
2.1 Review of propensity score for incorporating external data

A propensity score encapsulates the likelihood of an individual receiving a particular treatment, accounting for the cumulative influence of confounding variables (X) on the treatment decision [23]. This metric is crucial for addressing non-exchangeability issues in causal inference studies. In this research, the propensity score is defined as the probability of an individual's inclusion in the current trial data [24]. The index vector Z = (Z1, Z2) is the data source indicator vector with the following assignment rule:
\[
Z=\begin{cases}(1,0),&\text{Real-world data}\\(0,1),&\text{External trial data}\\(0,0),&\text{Current trial data}\end{cases}
\]
In this paper, we define the propensity score e(X) as the probability that an individual belongs to the current trial data. Propensity score models were constructed based on the external trial data, real-world data, and current trial data. Let the vector X denote all covariates:
\[
e(X)=\Pr\bigl(Z=(0,0)\mid X\bigr)\tag{2-1}
\]
The analysis examines the likelihood of a participant coming from the new study area based on primary confounding factors. It emphasizes that participants with similar propensity scores exhibit comparable distributions of baseline covariates, ensuring balanced group comparisons. Logistic regression remains the standard method for calculating propensity scores. However, recent progress advocates alternative estimation techniques such as random forests [25], neural networks [26], and ensemble methods such as bagging or boosting [25,27].

Propensity score analysis employs strategies like matching, weighting, and stratification for bias reduction. Specifically, stratification divides participants into quantiles based on propensity scores, ensuring homogeneity within each stratum regarding baseline characteristics [27-28]. This process facilitates balanced outcome estimation across strata.

2.2 Review of the Meta-Analytic Predictive Prior

Meta-analytic predictive (MAP) prior approaches leverage robust hierarchical modeling frameworks to accommodate varying levels of between-trial heterogeneity. The MAP prior is constructed by deriving the predictive posterior distribution of the target parameter (e.g., the treatment effect) from external studies and is subsequently integrated with current trial data during the analysis phase of the new study [19].
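The propensity-score trimming and quantile stratification reviewed in Section 2.1 (and used in Step 1 of Section 2.3) can be sketched in a few lines. The helper name, the choice of four quartile-based strata, and the numeric scores below are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch: trim external propensity scores to the range of the
# current-trial scores, then assign strata from quartile cuts of the
# current-trial scores (hypothetical helper and data).
from statistics import quantiles

def trim_and_stratify(e_current, e_external, n_strata=4):
    lo, hi = min(e_current), max(e_current)
    kept = [e for e in e_external if lo <= e <= hi]    # trim outside e1's range
    cuts = quantiles(e_current, n=n_strata)            # interior quantile cuts
    strata = [sum(e > c for c in cuts) for e in kept]  # stratum index 0..n_strata-1
    return kept, strata

e1 = [0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65]   # current-trial scores
e3 = [0.10, 0.33, 0.47, 0.58, 0.64, 0.90]               # external scores
kept, strata = trim_and_stratify(e1, e3)
print(kept)     # 0.10 and 0.90 fall outside [0.30, 0.65] and are trimmed
print(strata)
```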
For instance, consider a single-arm trial designed to estimate a parameter θ*, where external evidence will be systematically incorporated to enhance the precision of inferences. In general, Y_0 = {y_1, y_2, ..., y_H} denotes the numbers of responders in the external trial data and Y* = {y*} denotes the number of responders in the current trial data; n_h represents the number of patients and p_h the patient response rate. The MAP approach adopts the following hierarchical model [19]. First, suppose the binary response is modeled by
\[
y_h \sim \mathrm{Binomial}(n_h, p_h),\qquad h=1,\ldots,H,\,*
\]
The similarity of the current trial data and the external trial data is expressed by exchangeable parameters:
\[
\theta_*,\theta_1,\ldots,\theta_H \sim N(\mu,\tau^2),\qquad\text{where }\theta=\log\Bigl(\frac{p}{1-p}\Bigr)
\]
For the overall mean μ, a vague prior is used because the data contain sufficient information [19]. The between-trial standard deviation τ can be given a half-t distribution [29-30]; in this paper, we utilize a half-normal distribution, with the standard deviation selected so that implausibly large values of τ have low probability.

Given current trial data and external trial data, the posterior distribution p(θ* | Y_0, Y*) of interest can be expressed as
\[
p(\theta_*\mid Y_0,Y_*)\ \propto\ L(Y_*\mid\theta_*)\cdot p(\theta_*\mid Y_0)\tag{2-2}
\]
where L(Y* | θ*) represents the likelihood function of the current trial data and p(θ* | Y_0) is the original MAP prior [18]. In practical applications, the sample size from the original geographical area often exceeds that of the new geographical region. This discrepancy presents a challenge in evaluating the posterior distribution of treatment effects, as the smaller sample size from the new region may not substantially alter the combined outcome. In order to improve robustness, the robust MAP (rMAP) prior [19] has been proposed, which adds a vague prior component p_V(θ*) to p(θ* | Y_0). For example, for the binomial distribution, the standard uniform prior Beta(1,1) or Jeffreys' prior Beta(0.5,0.5) is generally used.
\[
p_{\mathrm{rMAP}}(\theta_*) = (1-w)\cdot p(\theta_*\mid Y_0) + w\cdot p_V(\theta_*)\tag{2-3}
\]
The weight w ∈ (0,1] controls the proportion of borrowing from the reference data (including external trial and real-world data) in the construction of the rMAP prior; w = 1 represents no borrowing. When direct MCMC sampling from the unknown distribution is challenging, one can utilize the mixture approximation by conjugate priors derived by Dalal and Hall [17]. For the binomial distribution, p(θ* | Y_0) can be approximated as a weighted mixture of beta distributions, with the Kullback-Leibler divergence used to fit the components. For example, in the binary case, this mixture prior is [19]
\[
p(\theta_*\mid Y_0)\ \approx\ \sum_{k=1}^{K} w_k\,\mathrm{Beta}(a_k,b_k),\qquad \sum_{k=1}^{K} w_k = 1\tag{2-4}
\]
where K represents the number of components in the approximation. By combining the above formulas, we arrive at the final expression for the prior. According to Schmidli [19], the best approximation of the exact prior is obtained by choosing the mixture weights and the hyperparameters of the conjugate prior; the RBesT package in R can be used to compute this approximation.
\[
p(\theta_*) = (1-w)\cdot\sum_{k=1}^{K} w_k\,\mathrm{Beta}(a_k,b_k) + w\cdot p_V(\theta_*)\tag{2-5}
\]
One issue is determining how to calculate the value of the vague prior weight w. In bridging experiments, the weights in the Bayesian hybrid prior [13] often depend on the researcher's subjective experience. The robust MAP prior [19] specifies the vague prior weight by first providing an initial value and then updating it through marginal probabilities, which adds robustness. The Empirical Bayes Robust MAP prior [22] specifies the vague prior weight by constructing a step function based on Box's prior predictive p-value, making the computation of the weights data dependent; its adjustable parameters give the model both objectivity and flexibility.
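Because the Beta mixture in (2-4)-(2-5) is conjugate for binomial data, a robust-MAP-style update can be written in a few lines. The sketch below is a generic two-component illustration with made-up hyperparameters; it is not the authors' R/RBesT implementation:

```python
# Robust-MAP-style conjugate update for a binomial outcome: a Beta mixture
# prior (informative components plus a vague Beta(1, 1) component mixed in with
# weight w as in (2-3)) yields a Beta mixture posterior whose weights are
# re-balanced by the marginal (beta-binomial) likelihoods of the data.
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(y, n, a, b):
    # log of the beta-binomial marginal likelihood of y under Beta(a, b)
    # (the binomial coefficient is common to all components and cancels)
    return log_beta(a + y, b + n - y) - log_beta(a, b)

def rmap_posterior(y, n, components, w_vague, vague=(1.0, 1.0)):
    # components: list of (weight, a, b) with the weights summing to 1, cf. (2-4)
    parts = [((1 - w_vague) * wk, a, b) for wk, a, b in components]
    parts.append((w_vague, *vague))             # vague Beta(1, 1) component
    post = [(wk * exp(log_marginal(y, n, a, b)), a + y, b + n - y)
            for wk, a, b in parts]
    norm = sum(wk for wk, _, _ in post)
    return [(wk / norm, a, b) for wk, a, b in post]

prior = [(0.7, 15.0, 35.0), (0.3, 5.0, 15.0)]   # made-up MAP approximation
post = rmap_posterior(y=12, n=40, components=prior, w_vague=0.2)
print([round(wk, 3) for wk, _, _ in post])       # re-balanced mixture weights
```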
2.3 Proposed method

For the current study, which compares a test treatment with a control, our primary analysis of interest is a hypothesis test of H0: p_t = p_c against H1: p_c < p_t. The analysis can be considered a trial success if
\[
\Pr(p_c < p_t \mid \mathrm{data}) > 0.95.
\]
Assume that a bridging trial including both an experimental drug group and a control group is going to be conducted in a new region. The reference data from which information can be borrowed in this paper include data from overseas randomized controlled trials (both treatment and control groups) and real-world data containing only the treatment group.

Step 1: On-trial propensity-score-based stratification

Propensity scores were calculated from the baseline characteristics of the external trial data, real-world data, and current trial data (as defined in Section 2.1). Let E1 denote the set of propensity scores for the current trial data, E2 the set for the real-world data, and E3 the set for the external trial data. Figure 1 illustrates the propensity score trimming and stratification process. In order to ensure higher population consistency, the range of E1 is used as the boundary, and values of E2 and E3 lying outside this range are trimmed. The retained data are then uniformly stratified by quantiles of E1. The number of strata is denoted by S; typically five strata are used, with quartiles for stratification. Let Q = {q_0, ..., q_S} denote the boundary quantiles of the strata, and let n_{E,s}, n_{C,s}, n_{R,s} denote the numbers of patients from the external trial data, the current trial data, and the real-world data in stratum s, respectively.

Figure 1 Illustration of the propensity score trimming and stratification process

Step 2: Stratum-specific MAP prior

Perform stratum-specific parameter estimation for each stratum. For simplicity, a binomial distribution is used here for illustration; the method extends readily to other outcome distributions, such as the normal distribution or time-to-event (TTE) endpoints. Assume the outcome follows Y ~ Binomial(n, p), where Y represents the number of responders, p the response rate, and θ = log(p/(1−p)) the log-odds. Let Y_{Z,s} denote the number of responders in stratum s whose trial source is Z, where s = 1,...,S and Z ∈ {(1,0),(0,1),(0,0)} (for example, Y_{(0,0),1} represents the number of responders for the current trial in the first stratum). θ_{E,s} and θ_{R,s} represent the parameters within the external-trial-data stratum s and the real-world-data stratum s, respectively. The structure can be modeled using a Bayesian hierarchical model as follows:
\[
Y_{Z,s}\sim \mathrm{Binomial}(n_{Z,s},\,p_{Z,s}),
\]
\[
\theta_{E,s}=\log\frac{p_{E,s}}{1-p_{E,s}},\qquad \theta_{E,s}\mid \mu_s,\tau_{E,s}^2\sim N(\mu_s,\tau_{E,s}^2),\qquad s=1,\ldots,S,
\]
\[
\theta_{R,s}=\log\frac{p_{R,s}}{1-p_{R,s}},\qquad \theta_{R,s}\mid \mu_s,\tau_{R,s}^2\sim N(\mu_s,\tau_{R,s}^2),\qquad s=1,\ldots,S.\tag{2-11}
\]
The stratum-specific MAP prior assumes that the means from the different data sources within each specific stratum originate from the same distribution.
A vague prior is used for μ_s because the data contain sufficient information. Heterogeneity between the three data sources is represented by the variance parameters τ_{E,s}² and τ_{R,s}², which are given half-normal priors. Two types of heterogeneity are involved here: heterogeneity between strata and heterogeneity between data sources. Our paper adopts τ_{E,s}² and τ_{R,s}² to denote the heterogeneity between the external trial data and the real-world data, respectively, and the current trial data within the same stratum. We therefore recommend selecting the hyperparameters of the prior on τ_{E,s}² to reflect the magnitude of the stratum-specific similarity between each stratum of external trial data and the current trial data relative to the baseline covariates, and those of τ_{R,s}² to reflect the corresponding similarity between real-world data and the current trial data. We propose to specify the priors through the following hyperparameters, so that the prior can change the proportion borrowed from different strata in the presence of different within-stratum similarities:
\[
\tau_{E,s}\sim \text{half-}N(0,\phi_{E,s}),\qquad \tau_{R,s}\sim \text{half-}N(0,\phi_{R,s})
\]
where φ_{E,s} and φ_{R,s} reflect the stratum-specific similarities between external trial data and current trial data, and between real-world data and current trial data, respectively, relative to baseline covariates. Following Wang et al. [31], the similarity between two data sources is expressed using the overlapping coefficient between the propensity score distributions f_{C,s}(e), f_{E,s}(e), and f_{R,s}(e) [32]:
\[
O_{E,s}=\int_0^1 \min\{f_{E,s}(e),\,f_{C,s}(e)\}\,de,\qquad
O_{R,s}=\int_0^1 \min\{f_{R,s}(e),\,f_{C,s}(e)\}\,de\tag{2-12}
\]
With the stratum overlap coefficients O_E = (O_{E,1},...,O_{E,S}) and O_R = (O_{R,1},...,O_{R,S}), the scale parameters can then be obtained from the overlap coefficients [33-34]:
\[
\phi_{E,s}=\frac{O_{E,s}}{m_E},\qquad \phi_{R,s}=\frac{O_{R,s}}{m_R}
\]
where m_E is generally taken as the median of the O_{E,s}, and likewise m_R for the O_{R,s}.
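The overlapping coefficient in (2-12) can be approximated from propensity-score samples by a histogram version of the integral of the pointwise minimum of two densities. The sketch below uses illustrative data and crude fixed-width binning on [0, 1]:

```python
# Histogram approximation of the overlapping coefficient in (2-12):
# sum over bins of min(mass under density 1, mass under density 2).
def overlap_coefficient(scores_a, scores_b, bins=20):
    def hist(xs):
        h = [0] * bins
        for x in xs:
            h[min(int(x * bins), bins - 1)] += 1
        return [c / len(xs) for c in h]          # bin masses sum to 1
    ha, hb = hist(scores_a), hist(scores_b)
    return sum(min(pa, pb) for pa, pb in zip(ha, hb))

a = [0.1, 0.2, 0.2, 0.3, 0.4]   # illustrative propensity scores, source A
b = [0.2, 0.3, 0.4, 0.4, 0.5]   # illustrative propensity scores, source B
print(round(overlap_coefficient(a, a), 3))   # identical samples: overlap ~1.0
print(round(overlap_coefficient(a, b), 3))   # partial overlap: ~0.6 here
```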
Step 3: Equivalence-probability-weight-based EQPS-rMAP prior

Let Y_E denote the external trial data, Y_R the real-world data, and Y_C the current trial data, and let θ_C denote the parameter of interest for the current trial data. The stratum-specific MAP priors are integrated with specific weights into p(θ_C | Y_E, Y_R). Rather than defining an overall EQPS-rMAP prior directly from the stratum-specific MAP priors, we define the log-odds of the stratum-specific response rates as θ_s* (s = 1,...,S) and the log-odds of the overall response rate of the current trial data as θ_C; the EQPS-rMAP prior is then the distribution induced by this relationship. In particular, we define
\[
\theta_C=\sum_{s=1}^{S}\frac{n_{C,s}}{n_C}\,\theta_s^{*},\qquad
\theta_s^{*}=\omega_{E,s}\,\theta_{E,s}+\omega_{R,s}\,\theta_{R,s},
\]
\[
\omega_{E,s}=\frac{n_{E,s}\,q_{E,s}}{n_{E,s}\,q_{E,s}+n_{R,s}\,q_{R,s}},\qquad
\omega_{R,s}=\frac{n_{R,s}\,q_{R,s}}{n_{E,s}\,q_{E,s}+n_{R,s}\,q_{R,s}}\tag{2-13}
\]
The weights here take into account both the consistency of the external trial data and real-world data with the current trial data and the sample size in each stratum. That is, the log-odds θ_C for the overall response rate of the current trial data is the weighted sum of the stratum-specific log-odds θ_{R,s} of the real-world data and θ_{E,s} of the external trial data. Here n_{R,s}, n_{E,s}, n_{C,s} denote the sample sizes in stratum s of the real-world data, external trial data, and current trial data, respectively, and n_C denotes the total sample size of the current trial data. In each stratum, n_{E,s}·q_{E,s} reflects the consistency of the external trial data with the current trial data, while n_{R,s}·q_{R,s} reflects the consistency of the real-world data with the current trial data. Consistency is evaluated with the tail-area probability proposed by Thompson [35], calculated as follows:
\[
q_{E,s}=2\times\min\bigl\{\Pr(\pi_{E,s}>\pi_{C,s}),\,1-\Pr(\pi_{E,s}>\pi_{C,s})\bigr\},
\]
\[
q_{R,s}=2\times\min\bigl\{\Pr(\pi_{R,s}>\pi_{C,s}),\,1-\Pr(\pi_{R,s}>\pi_{C,s})\bigr\},
\]
where
\[
\pi_{E,s}\sim \mathrm{Beta}(y_{E,s},\,n_{E,s}-y_{E,s}),\quad
\pi_{R,s}\sim \mathrm{Beta}(y_{R,s},\,n_{R,s}-y_{R,s}),\quad
\pi_{C,s}\sim \mathrm{Beta}(y_{C,s},\,n_{C,s}-y_{C,s}).
\]
Therefore
\[
\Pr(\pi_{E,s}>\pi_{C,s})=\int_0^1\!\!\int_{\pi_{C,s}}^{1}
\frac{\pi_{E,s}^{\,y_{E,s}-1}(1-\pi_{E,s})^{\,n_{E,s}-y_{E,s}-1}}{B(y_{E,s},\,n_{E,s}-y_{E,s})}\cdot
\frac{\pi_{C,s}^{\,y_{C,s}-1}(1-\pi_{C,s})^{\,n_{C,s}-y_{C,s}-1}}{B(y_{C,s},\,n_{C,s}-y_{C,s})}
\,d\pi_{E,s}\,d\pi_{C,s}\tag{2-14}
\]
Here y_{E,s} and π_{E,s} denote the number of responding patients and the response rate for each stratum of external trial data; y_{R,s} and π_{R,s} those of real-world data; and y_{C,s} and π_{C,s} those of current trial data.
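The tail-area consistency measure built on (2-14) amounts to comparing two Beta posteriors; since the double integral has no simple closed form, a Monte Carlo approximation is natural. A sketch (the Beta parameters are illustrative, not from the paper):

```python
# Monte Carlo approximation of the Thompson-style tail probability in (2-14)
# and the two-sided consistency score q = 2 * min(Pr(pi_1 > pi_2), 1 - Pr(...)).
import random

def pr_greater(a1, b1, a2, b2, draws=100_000, seed=1):
    rng = random.Random(seed)
    hits = sum(rng.betavariate(a1, b1) > rng.betavariate(a2, b2)
               for _ in range(draws))
    return hits / draws

def consistency(a1, b1, a2, b2):
    p = pr_greater(a1, b1, a2, b2)
    return 2 * min(p, 1 - p)

print(round(pr_greater(10, 30, 10, 30), 2))   # identical posteriors: ~0.5
print(round(consistency(30, 10, 10, 30), 2))  # strongly shifted: ~0.0
```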
Following the established robust MAP prior framework, this study applies Markov chain Monte Carlo (MCMC) techniques to derive samples from a prior distribution that integrates external trial data with real-world data. Our approach builds upon the insights of Schmidli et al. [19] and Hupf et al. [20] and benefits from the use of conjugate priors, which notably enhance density estimation. The number of components, K, is at most the number of historical datasets and is usually chosen as the smallest number that provides an adequate approximation based on a particular criterion, e.g. the Kullback-Leibler divergence or AIC/BIC. The remaining parameters, the mixture weights w_k and the Beta hyperparameters (a_k, b_k), are iteratively estimated via the Expectation-Maximization (EM) algorithm, which alternates between computing component assignments (E-step) and maximizing the complete-data likelihood (M-step) [22]. To enhance robustness, we introduce vague prior components that align with the conjugate structure of the model; for example, with binary data, p_V(π) can be the standard uniform prior Beta(1,1) or Jeffreys prior Beta(0.5,0.5). For the binomial distribution, the prior and posterior can be expressed in the following form:
\[
p(\pi\mid Y_E,Y_R)=(1-w)\cdot\sum_{k=1}^{K}w_k\,\mathrm{Beta}(a_k,b_k)+w\cdot \mathrm{Beta}(a_0,b_0),
\]
\[
p(\pi\mid Y_E,Y_R,Y_C)=(1-\tilde w)\cdot\sum_{k=1}^{K}\tilde w_k\,\mathrm{Beta}(a_k+y_C,\,b_k+n_C-y_C)+\tilde w\cdot \mathrm{Beta}(a_0+y_C,\,b_0+n_C-y_C).\tag{2-15}
\]
The weight of the vague prior in this study is determined by the equivalence probability weight indicator [24], assessing the consistency between the mixed posterior distribution and the current experimental data.

In this study, we define the optimal vague prior weight, denoted w_opt, as the smallest prior weight that ensures the consistency metric p_eq meets a predefined threshold. This consistency metric quantifies the agreement between the posterior distribution incorporating external data and the response probability distribution derived from the current trial data.
First, the response probability distribution for the current trial data is modeled as
\[
\pi_C\sim \mathrm{Beta}(y_C,\,n_C-y_C).
\]
The hybrid posterior distribution, denoted π_mix, is obtained under the EQPS-rMAP framework:
\[
\pi_{\mathrm{mix}}\sim p(\pi\mid Y_E,Y_R,Y_C).
\]
The EQPS-rMAP prior is constructed as a mixture of informative components (derived from external data) and a vague prior component. It is expressed as
\[
p_{\mathrm{EQPS\text{-}rMAP}}(\pi)=(1-w)\sum_{k=1}^{K}w_k\,\mathrm{Beta}(a_k,b_k)+w\cdot p_V(\pi)\tag{2-16}
\]
where p_V(π) represents a vague (non-informative) prior, used in cases of high conflict between external and current trial data. Upon observing the current trial data, the posterior distribution becomes
\[
p(\pi\mid Y_E,Y_R,Y_C)=(1-\tilde w)\sum_{k=1}^{K}\tilde w_k\,\mathrm{Beta}(a_k+y_C,\,b_k+n_C-y_C)+\tilde w\,\mathrm{Beta}(a_0+y_C,\,b_0+n_C-y_C)\tag{2-17}
\]
To assess the agreement between the hybrid posterior and the current data, we define the following probability:
\[
p_{\mathrm{eq}}=\Pr\bigl(\pi_{\mathrm{mix}}-\delta<\pi_C<\pi_{\mathrm{mix}}+\delta\bigr)\tag{2-18}
\]
Here, δ denotes the clinical equivalence margin, representing the maximum acceptable deviation between the two distributions. The optimal vague prior weight is then defined as
\[
w_{\mathrm{opt}}=
\begin{cases}
\min\{w: p_{\mathrm{eq}}(w)\ge\lambda\}, & \exists\,w \text{ s.t. } p_{\mathrm{eq}}(w)\ge\lambda,\\
1, & \text{otherwise.}
\end{cases}\tag{2-19}
\]
This means that if the hybrid posterior is sufficiently consistent with the current data (p_eq ≥ λ), we adopt the smallest possible weight that satisfies this condition, thus maximizing the borrowing of external information. If the consistency is below the threshold (p_eq < λ), the model defaults to full use of the vague prior (i.e., w = 1), completely excluding external data.

Both parameters λ and δ require prior specification. λ ∈ (0,1) represents the internal-external data consistency (compatibility) threshold, governing the activation of external data borrowing. When p_eq ≥ λ, the system employs the vague prior with the minimally necessary weight (i.e., assigning maximal weight to external data). If p_eq remains below λ (indicating significant conflict between external and current data), w is set to 1, resulting in exclusive reliance on the vague prior without external data borrowing. Since smaller p_eq values reflect stronger inconsistency between internal and external data, we recommend setting λ to a relatively large value to ensure borrowing is initiated only when the external data meet predefined compatibility criteria.
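The rule (2-19) is a one-dimensional search over the vague-prior weight. A minimal grid-search sketch, with a toy stand-in for p_eq(w) (in the paper this probability comes from the hybrid posterior via (2-18)):

```python
# Sketch of the weight rule (2-19): take the smallest w on a grid whose
# equivalence probability p_eq(w) clears the threshold lambda; otherwise fall
# back to w = 1 (no borrowing). p_eq is passed in as a function.
def optimal_vague_weight(p_eq, lam, grid_size=101):
    for i in range(grid_size):          # ascending, so the first hit is the min
        w = i / (grid_size - 1)
        if p_eq(w) >= lam:
            return w
    return 1.0

toy_p_eq = lambda w: 0.4 + 0.5 * w      # toy: agreement grows with the vague share
print(optimal_vague_weight(toy_p_eq, lam=0.8))    # smallest w with 0.4 + 0.5w >= 0.8
print(optimal_vague_weight(toy_p_eq, lam=0.95))   # unattainable -> 1.0
```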
The second parameter, δ, serves as a clinically defined equivalence margin, quantifying the permissible deviation between the mixed posterior (integrating external data) and the current trial posterior. A symmetric equivalence interval [π_mix − δ, π_mix + δ] is constructed around the mixed posterior estimate, and the probability Pr(π_mix − δ < π_C < π_mix + δ) is calculated to assess clinical equivalence. The value of δ must align with clinical consensus; for instance, efficacy endpoints may tolerate larger δ values than safety endpoints, thereby accommodating context-specific medical requirements for equivalence margins.

The parameters λ and δ play synergistic roles: λ acts as a statistical threshold to restrict data borrowing under incompatibility, while δ introduces clinical flexibility to calibrate equivalence standards. This dual mechanism balances statistical rigor with adaptability to real-world data complexity. Detailed operationalization of these parameters and their empirical impact is examined systematically in the subsequent simulation studies.

Step 4: Posterior distribution

For binary endpoints, the posterior distribution is
\[
p(\pi\mid Y_E,Y_R,Y_C)=(1-\tilde w)\sum_{k=1}^{K}\tilde w_k\,\mathrm{Beta}(a_k+y_C,\,b_k+n_C-y_C)+\tilde w\,\mathrm{Beta}(a_0+y_C,\,b_0+n_C-y_C)\tag{2-20}
\]
In order to adapt flexibly to changes in consistency caused by the accrual of data in the new region, this paper updates the weights $\tilde w$ and $\tilde w_k$ through the marginal likelihoods:
\[
m_0=\frac{B(a_0+y_C,\,b_0+n_C-y_C)}{B(a_0,b_0)},\qquad
m_k=\frac{B(a_k+y_C,\,b_k+n_C-y_C)}{B(a_k,b_k)},
\]
\[
\tilde w=\frac{w\,m_0}{(1-w)\sum_{k=1}^{K}w_k\,m_k+w\,m_0},\qquad
\tilde w_k=\frac{(1-w)\,w_k\,m_k}{(1-w)\sum_{k=1}^{K}w_k\,m_k+w\,m_0}.\tag{2-21}
\]
Overall, Figure 2 illustrates the bridging strategy between the test drug group and the control drug group for the bridging trial in the new region. The outcome is presented as a binomial distribution for simplicity, but other common distributions, such as continuous distributions and TTE endpoints, can be used as needed.

Figure 2 Flowchart of the methodological organizing framework

3 Simulation study

To examine the performance of EQPS-rMAP, this section simulates various parameter analyses.
3.1 Parameter specification

Consider a typical two-arm randomized trial in a new region with 500 patients in each of the treatment and control groups, using a one-sided type I error rate of 5% and a binary endpoint. There are external trial data with 500 patients enrolled in each of the treatment and control groups, and real-world data with 500 patients enrolled in the treatment group. A multinomial logistic model was used to model the groups to which subjects with different baseline covariates belonged:
\[
\Pr(Z_1=1\mid X)=\frac{e^{X\theta_R}}{1+e^{X\theta_R}+e^{X\theta_E}},\qquad
\Pr(Z_2=1\mid X)=\frac{e^{X\theta_E}}{1+e^{X\theta_R}+e^{X\theta_E}}\tag{3-1}
\]
Z1 is the real-world data indicator, with Z1 = 1 indicating that a subject comes from the real-world data; Z2 is the external trial data indicator, with Z2 = 1 indicating that a subject comes from the external trial data. θ_R and θ_E capture the imbalance in baseline covariates between real-world data and current trial data, and between external trial data and current trial data, respectively. Individual outcomes were generated from the model
\[
\mathrm{logit}\,\Pr(Y=1\mid T,X,Z_1,Z_2)=\beta_0+\beta_1 T+\beta_2 X+\beta_3 Z_1+\beta_4 Z_2\tag{3-2}
\]
T is the treatment indicator; β1 represents the treatment effect; β2 represents the impact of the covariates on the outcome; β3 represents the effect heterogeneity between real-world data and current trial data; and β4 represents the effect heterogeneity between external trial data and current trial data.

In the simulation experiments, X is set to a combination of binary and continuous variables for simplicity, and β0 = 0, β1 = 0.5, β2 = 0.5. The hypotheses are H0: p_t = p_c against H1: p_c < p_t, where p_c and p_t are the objective response rates for the control and treatment groups, respectively. The trial is considered statistically successful if the posterior probability of treatment superiority exceeds a pre-specified 95% Bayesian threshold (equivalent to controlling the type I error rate at 5% under frequentist calibration):
\[
\Pr(p_t>p_c\mid Y_C,\,\text{EQPS-rMAP})>0.95.
\]
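The data-generating models (3-1)-(3-2) translate directly into code. In the sketch below, the source-membership coefficients and the simple treatment-assignment rule (real-world data contribute a treated arm only, as described in Section 2.3) are illustrative assumptions; only β0, β1, β2 follow the stated settings:

```python
# Sketch of the data-generating process (3-1)/(3-2): multinomial-logistic source
# membership, then a logistic outcome model with source-heterogeneity terms.
# theta_r / theta_e values are illustrative, not the paper's scenario settings.
import math
import random

rng = random.Random(7)

def source_indicators(x, theta_r, theta_e):
    # (3-1): returns (Z1, Z2) for real-world / external / current membership
    er, ee = math.exp(theta_r * x), math.exp(theta_e * x)
    denom = 1 + er + ee
    u = rng.random()
    if u < er / denom:
        return 1, 0                          # real-world data
    if u < (er + ee) / denom:
        return 0, 1                          # external trial data
    return 0, 0                              # current trial data

def outcome(t, x, z1, z2, beta=(0.0, 0.5, 0.5, 0.0, 0.0)):
    # (3-2): logit P(Y=1) = b0 + b1*T + b2*X + b3*Z1 + b4*Z2
    b0, b1, b2, b3, b4 = beta
    eta = b0 + b1 * t + b2 * x + b3 * z1 + b4 * z2
    return int(rng.random() < 1 / (1 + math.exp(-eta)))

sample = []
for _ in range(2000):
    x = rng.gauss(0, 1)
    z1, z2 = source_indicators(x, theta_r=0.3, theta_e=-0.3)
    t = 1 if z1 == 1 else rng.randrange(2)   # RWD contributes a treated arm only
    sample.append(outcome(t, x, z1, z2))
print(sum(sample) / len(sample))             # overall response rate
```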
The simulation study focuses on two objectives: (1) validation of the feasibility of EQPS-rMAP, and (2) evaluation of the statistical performance of EQPS-rMAP. To validate feasibility, we examined the changes in the mixed prior weight and the sample size ratio (EQPS-rMAP borrowing design / no-borrowing design) to assess the adaptive capability of EQPS-rMAP across various scenarios. These scenarios consisted of two levels of baseline differences, three levels of effect heterogeneity, and nine combinations of the parameters λ and δ, resulting in a total of 54 scenarios (2 baseline differences × 3 effect heterogeneity × 3 λ × 3 δ). For the evaluation of statistical performance, EQPS-rMAP (λ = 0.8, δ = 0.1) was compared with traditional methods (EB-rMAP, MAP, and PS-MAP) using absolute bias, mean squared error (MSE), and type I error with respect to β1. The comparison was conducted under six scenarios defined by two levels of baseline differences and three levels of effect heterogeneity.

The simulation experiments used the vague prior Beta(1,1). Markov chain Monte Carlo (MCMC) calculations were implemented using R 4.2.3 with RStan. For all methods, we ran five chains of 41,000 MCMC iterations each, with a burn-in period of 1,000.

https://arxiv.org/abs/2505.12308v1

3.2 Simulation results

The simulation results are presented in a series of plots that illustrate the relationship between the heterogeneity of the RWD relative to the current study data and the weight of the hybrid prior, where a larger weight indicates increased use of external data.

Figures 3 and 4, each with lines for different values of the parameter λ, visualize the effect of the heterogeneity variance (0, 0.2, 0.4) and of the values of the parameter δ (0.1, 0.15, 0.2) on the weights of the hybrid prior, fully elucidating the impact of varying parameter values on the weights of the mixed prior under varying levels of heterogeneity. The curves generally increase and then decrease: the weight of the mixed prior increases as the horizontal coordinate approaches 0, indicating that the borrowed data are more consistent with the data of the new area. In all plots, the larger the weight threshold parameter λ, the more concentrated the curve.
This is because a larger threshold makes the borrowing more cautious, and the range of intervals over which borrowing can occur becomes concentrated. A comparison of the three columns from left to right demonstrates that the larger the tolerable error parameter δ, the wider each curve is, indicating that the more relaxed the requirements are, the more information can be borrowed at equal heterogeneity. The conflict minimization point is defined as the heterogeneity parameter combination (β3, β4) that minimizes discordance between the external borrowable data (external trial data and real-world data) and the current trial data, corresponding to the peak of the mixed prior weights (maximum borrowing efficiency) and the trough of the required current-trial sample size as a proportion of the original (minimum sample burden) in the figures. A rightward shift of the curve's central axis reflects heterogeneity-driven displacement of this minimization point. Specifically, opposing-direction heterogeneities with equal absolute magnitudes (β3 = −β4) induce partial cancellation through weighted prior effects, enhancing alignment with current trial data and thereby minimizing composite prior-data conflict.

At the same time, the curve does not become more concentrated as a result of the heterogeneity difference, because our method is capable of handling both kinds of heterogeneity simultaneously without affecting the scope of the borrowed information. For the cases with and without baseline differences, the conclusions remain essentially the same; in the presence of baseline differences the curves are slightly more concentrated. The presence of baseline differences affects the degree of data similarity to some extent, which in turn reduces the amount of borrowed information that can be applied under the same rule.

Figure 3 Combinatorial plots of the change in prior weights with heterogeneity for different parameter values and different heterogeneity differences in the absence of baseline differences.

Figure 4 Combinatorial plots of the change in prior weights with heterogeneity for different parameter values and different heterogeneity differences in the presence of baseline differences.
The method proposed in this paper also makes it very simple to calculate how much sample size was borrowed. Figures 5 and 6 illustrate the relative sample size required for statistical inference in new regions compared to the original size required. These figures effectively show how different parameter values affect the required sample size under varying degrees of heterogeneous variance. In contrast to the observed weight trends, the curves exhibit a downward and then upward trajectory. Consequently, the conclusions regarding the threshold parameter, the allowable error parameter, and the curve shift remain largely consistent with the weighting conclusions. Furthermore, a comparison of the three rows from top to bottom reveals that the greater the level of heterogeneous variation, the greater the minimum value of the curve required for the new area, indicating that less information is borrowed. The presence of baseline differences also results in a reduction in the amount of borrowed information compared to the absence of baseline differences under the same rule. This is because both the level of heterogeneity and the baseline differences influence the degree of data similarity to some extent.

Figure 5. Combined plots of the change in sample size required to reach statistical inference with heterogeneity for new areas, for different parameter values and different heterogeneity differences, in the absence of baseline differences.

Figure 6. Combined plots of the change in sample size required to reach statistical inference with heterogeneity for new areas, for different parameter values and different heterogeneity differences, in the presence of baseline differences.
Figures 7 and 8 compare the EQPS-rMAP method with the EB-rMAP and PS-MAP methods using bias, mean squared error (MSE), and type I error as metrics. EQPS-rMAP demonstrates superior performance, maintaining low bias and MSE while effectively managing type I error, regardless of baseline differences. When there is neither a baseline difference nor a heterogeneity difference, the proposed EQPS-rMAP method is close to the EB-rMAP method, and PS-MAP is close to the MAP method. The larger the heterogeneity difference, the more obvious the advantage of the proposed EQPS-rMAP method, because this method does not borrow data in a blended way like previous methods. EQPS-rMAP is more clearly superior to EB-rMAP when the baseline difference is nonzero than when it is zero; likewise, the advantage of PS-MAP over MAP is greater.

Figure 7. Mean squared error, absolute bias and type I error for EQPS-rMAP, MAP, PS-MAP and EB-rMAP in the absence of baseline differences.

Figure 8. Mean squared error, absolute bias and type I error for EQPS-rMAP, MAP, PS-MAP and EB-rMAP in the presence of baseline differences.

4 An illustrative example

This section validates the applicability of the proposed methodology through a clinical case study. We evaluated the efficacy of risankizumab in patients with moderate-to-severe plaque psoriasis using two independent data sets: the UltIMMa-1/UltIMMa-2 pooled analysis (Gooderham M et al., 2022) [36] as a randomized controlled trial (RCT) data source, and a 40-week multicenter retrospective cohort reported by Borroni RG et al. (2021) [37]. Since risankizumab is not yet approved by Chinese regulatory authorities and real-world data (RWD) from Hainan Lecheng are still under collection, external trial data (UltIMMa-1/UltIMMa-2) and real-world data (a retrospective treatment cohort with 40-week follow-up) were used to construct the external borrowable data, with current trial data generated through simulation. Specifically, hypothetical current trial data sets were simulated based on baseline characteristics from the external trial data, with predefined response rates for treatment and control groups informed by the discrepancies between the external trial data and the real-world data.
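Before any borrowing, the hypothetical current trial described below (100 patients per arm, response rates 40% versus 65%) is already informative on its own. A quick frequentist sketch makes this concrete; note that the paper's analyses are Bayesian and implemented in R, so this Python two-proportion z test at the expected counts is ours, for orientation only.

```python
from math import sqrt

# Hypothetical current trial (design taken from the text):
# 100 patients per arm, control ORR 40%, treatment ORR 65%.
n_c, n_t = 100, 100
x_c, x_t = 40, 65          # expected responder counts per arm

p_c, p_t = x_c / n_c, x_t / n_t
p_pool = (x_c + x_t) / (n_c + n_t)

# Two-proportion z statistic for H0: p_t = p_c against H1: p_t > p_c
se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
z = (p_t - p_c) / se

assert z > 1.96            # superiority significant at the one-sided 2.5% level
assert 3.53 < z < 3.55     # z is approximately 3.54 at the expected counts
```

This is only a point of reference: the borrowing methods compared in the paper aim to reach such conclusions with a smaller current-trial sample by leveraging the external data.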
Due to restricted access to patient-level data from UltIMMa-1/UltIMMa-2 and the retrospective cohort (40-week follow-up), we simulated these data sets using aggregate baseline statistics reported in the published literature to ensure reproducibility. The primary endpoint was defined as the proportion of patients achieving an absolute PASI threshold (PASI ≤ 1) at week 40 of treatment, with hypothesis testing framed as superiority testing (H0: p_t = p_c against H1: p_t > p_c), where p_c and p_t denote the objective response rates (ORR) for the control and treatment groups, respectively. In the original data, the UltIMMa-1/UltIMMa-2 trials included 598 patients (399 in the treatment group, 71.9%; 199 in the control group, 38.7%), while the retrospective cohort comprised 77 treated patients (85.7%). For a hypothetical scenario requiring additional RCTs for risankizumab's approval in China, we constructed a simulated data set (100 patients per group) with a 40% response rate in the control group and 65% in the treatment group.

We used IPW to control for differences in the following baseline factors: age, sex, BMI, PASI, exposure to previous biologics, anti-TNF, and anti-IL17. The analysis of the example data was carried out in R 4.2.3.
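The IPW adjustment mentioned above can be illustrated with a toy version. The sketch below is in Python rather than the paper's R, and the single binary covariate with a frequency-based propensity model is our simplification: propensity scores e(x) = P(T = 1 | X = x) are estimated from the data, each subject receives weight T/e(X) + (1 − T)/(1 − e(X)), and the weighted covariate distribution in each arm then matches the pooled distribution exactly.

```python
from collections import Counter

# Toy data: T = 1 for current-trial subjects, T = 0 for external subjects;
# X is a single binary baseline covariate (e.g., prior biologic exposure).
# The covariate is deliberately imbalanced between the two sources.
data = [(1, 1)] * 30 + [(1, 0)] * 70 + [(0, 1)] * 60 + [(0, 0)] * 40

n = len(data)
n_x = Counter(x for _, x in data)        # marginal counts of X
n_tx = Counter(data)                     # joint counts of (T, X)

def propensity(x):
    """e(x) = P(T = 1 | X = x), estimated by cell frequencies."""
    return n_tx[(1, x)] / n_x[x]

def weight(t, x):
    e = propensity(x)
    return t / e + (1 - t) / (1 - e)

def weighted_mean_x(arm):
    """IPW-weighted mean of X within one arm."""
    num = sum(weight(t, x) * x for t, x in data if t == arm)
    den = sum(weight(t, x) for t, x in data if t == arm)
    return num / den

pooled_mean = sum(x for _, x in data) / n          # 90 / 200 = 0.45

# Unweighted, the arms are imbalanced (0.30 vs 0.60); after IPW both
# arms reproduce the pooled covariate mean.
assert abs(weighted_mean_x(1) - pooled_mean) < 1e-12
assert abs(weighted_mean_x(0) - pooled_mean) < 1e-12
```

With categorical covariates and frequency-based propensities this balance is exact; with continuous covariates and a fitted propensity model (as in the paper's application) it holds only approximately.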
Figure 9 illustrates the probability density distributions of treatment groups across data sources at the 40-week treatment endpoint, including the current trial data (no data borrowing), the external trial data prior (UltIMMa-1/UltIMMa-2), the real-world data prior, and the posterior distribution generated by the proposed EQPS-rMAP method. Significant heterogeneity discrepancies are observed between the borrowable external data (UltIMMa-1/UltIMMa-2 trials and real-world data) and the current trial data, where conventional static borrowing approaches would introduce bias in posterior estimation. The dynamic Bayesian borrowing method addresses this limitation by quantifying data conflicts and selectively integrating high-consistency external data sources. This strategy enhances research efficiency through external information incorporation (evidenced by the significantly increased peak amplitude, confirming external data utilization) while maintaining robust estimation accuracy: the posterior distribution converges centrally near the true efficacy value. These results demonstrate that EQPS-rMAP's heterogeneity-adaptive mechanism effectively identifies and retains external evidence compatible with current trial data, achieving an optimal balance between bias control and information gain.
Figure 10 presents a comparative analysis of posterior estimation trends between traditional methods (MAP, EB-rMAP, and PS-MAP) and the proposed EQPS method across varying proportions of borrowed data. The dashed vertical line at X = 0.25 marks the prespecified true treatment effect for the current region, serving as a reference for evaluating estimation accuracy. Notably, the results demonstrate that EQPS maintains stable and accurate estimation regardless of data borrowing ratios, whereas traditional methods exhibit significant variability in performance. The proportion of borrowed data may be varied by changing the sample size of the RWD in the table. As the proportion of externally borrowed data increases, traditional methods are affected and estimates are shifted. This is a common problem with traditional methods such as MAP and PS-MAP, where data from a new area cannot reverse the results of external data. Additionally, even with its improved robustness, EB-rMAP is affected by baseline differences in the data components. Given that the impact of prior-data conflicts is less pronounced than the baseline discrepancies observed in the example, the efficacy of the methodologies can be ranked in the following order: EQPS > PS-MAP > EB-rMAP > MAP.

Figure 9. Probability density distribution of the treatment effect, compared with those of the current trial data and EQPS, the external data prior, and the real-world data prior, with normally distributed outcomes.
Note: 1. Current trial data: represents the distribution of the current trial data when no external data is borrowed. 2. Dashed line: represents the true efficacy of the current trial data, 0.65 − 0.40 = 0.25.

Figure 10. The posterior distribution of the treatment-control difference, compared with MAP, PS-MAP and EB-rMAP, with normally distributed outcomes. The dashed line at X = 0.25 indicates the prespecified true treatment effect for the current region.
5 Conclusion

In March 2024, the U.S. Food and Drug Administration (FDA) convened a workshop titled "Advancing Complex Innovative Trial Designs in Clinical Trials: From Pilot to Practice" and announced plans to release draft guidance on the application of Bayesian methods in clinical trials by the end of 2025 (FDA, 2023i) [38]. The workshop emphasized the integration of external data sources, Bayesian statistical techniques, and simulation technologies in complex innovative trial designs (FDA, 2024a) [39]. This paper proposes the EQPS-rMAP method, which employs a hybrid design to eliminate baseline differences between groups and inter-study heterogeneity, aiming to optimize the integration of external trial data and real-world data. This approach provides an innovative solution to accelerate drug development and significantly enhance the efficiency of randomized controlled trials (RCTs).

Systematic simulation experiments validated the robustness of the dynamic Bayesian borrowing method across diverse scenarios, including extreme conditions. Compared to conventional methods such as MAP and PS-MAP, which allow predefined borrowing proportions but fail to control bias under increasing sample sizes due to fixed full-sample borrowing strategies, the proposed method addresses critical limitations. Traditional EB-rMAP approaches, while dynamically adjusting borrowing scales based on data similarity [22], assume homogeneity across data sources. This assumption risks bias under significant baseline or effect heterogeneity, leading to overly conservative borrowing (complete exclusion of external data) or erroneous borrowing (introducing highly biased data). The EQPS-rMAP method overcomes these challenges through a two-stage optimization. First, propensity score stratification resolves baseline discrepancies between RWD and trial data [40], ensuring balance in key predefined covariates. Second, a differentiated weighting strategy addresses multi-source heterogeneity, establishing an adaptive borrowing mechanism based on quantified prior-data conflicts.
Compared to EB-rMAP [22], EQPS-rMAP introduces additional tuning parameters to enhance flexibility, though systematic simulations are required to identify optimal parameter combinations. This framework is applicable not only to bridging studies but also to extension strategies in multi-regional clinical trials (MRCTs) and post-marketing confirmatory research.

However, three limitations remain. First, the current framework supports only qualitative efficacy analysis (determining whether the difference between new-region treatment and control groups exceeds 0) without quantifying the magnitude of differences [12]. A practical solution involves comparing efficacy differences to predefined non-inferiority margins [41], which can be seamlessly integrated into the existing framework. Second, while the study focuses on improving research efficiency and estimation accuracy, it has yet to explore patient benefit optimization through adaptive designs (e.g., dynamic randomization). Future work could integrate patient preference models to develop a multidimensional benefit evaluation system [42-43]. Third, while the method performs robustly for binary endpoints, it requires extension to complex endpoints such as survival analysis. Developing time-to-event frameworks is critical for broader clinical applicability. Furthermore, MRCT extension strategies lack operational guidelines; collaboration with regulators is needed to establish scientific criteria for prior data borrowing and efficacy thresholds, ensuring methodological implementation.

Data accessibility statement

The data and R code to replicate the data analysis in Section 4 are available.

Funding

Ying Wu is supported by the National Natural Science Foundation of China [grant number 82273732], the Guangzhou Basic and Applied Foundation Project (2023A04J1106) and the Real World Research Project Grant Fund from the Hainan Institute of Real World Data (HNLC2022RWS018).

Acknowledgements

The authors thank William Wang, Helen Wu, and Weiwei Zhao from the SMU-MSD BARDS Academic Projects of Merck for their valuable comments.

Declaration of conflict of interests

The authors declare that there is no conflict of interest.
References:
[1]. Food and Drug Administration, Center for Drug Evaluation and Research. Complex innovative trial design meeting program. <Complex Innovative Trial Design Meeting Program | FDA> (2022). Accessed December 27, 2022.
[2]. Food and Drug Administration, Center for Drug Evaluation and Research. Complex innovative trial designs. <act-eu-multi-annual-workplan-2022-2026_en.pdf> (2022). Accessed December 27, 2022.
[3]. International Conference on Harmonisation; guidance on ethnic factors in the acceptability of foreign clinical data; availability--FDA. Notice. Fed Regist 1998; 63: 31790-31796.
[4]. Liu J, Yu H and Bai JL, et al. Statistical methods for bridging studies in new drug clinical research. Chinese Journal of Health Statistics 2015; 32: 345-349.
[5]. Shao J and Chow SC. Reproducibility probability in clinical trials. Stat. Med. 2002; 21: 1727-1742. DOI: 10.1002/sim.1177.
[6]. Chow S, Shao J and Hu OY. Assessing sensitivity and similarity in bridging studies. J. Biopharm Stat 2002; 12: 385-400. DOI: 10.1081/BIP-120014567.
[7]. Liu J and Chow S. Bridging studies in clinical development. J. Biopharm Stat 2002; 12: 359-367. DOI: 10.1081/BIP-120014564.
[8]. Lan KK, Soo Y and Siu C, et al. The use of weighted Z-tests in medical research. J. Biopharm Stat 2005; 15: 625-639. DOI: 10.1081/BIP-200062284.
[9]. Tsou H, Tsong Y and Liu J, et al. Weighted evidence approach of bridging study. J. Biopharm Stat 2012; 22: 952-965. DOI: 10.1080/10543406.2012.701580.
[10]. Dong X, Guo Y and Tsong Y. A note on two approaches of testing bridging evidence to a new region. J. Biopharm Stat 2012; 22: 966-976. DOI: 10.1080/10543406.2012.702651.
[11]. Hsiao C, Xu J and Liu J. A two-stage design for bridging studies. J. Biopharm Stat 2007; 15: 75-83. DOI: 10.1081/BIP-200040836.
[12]. Hsiao CF, Xu JZ and Liu JP. A group sequential approach to evaluation of bridging studies. J. Biopharm Stat 2003; 13: 793-801. DOI: 10.1081/BIP-120024210.
[13]. Hsiao CF, Hsu YY and Tsou HH, et al. Use of prior information for Bayesian evaluation of bridging studies. J. Biopharm Stat 2007; 17: 109-121. DOI: 10.1080/10543400601001501.
[14]. Ibrahim JG and Chen M. Power prior distributions for regression models. Stat.
Sci. 2000; 15: 46-60. DOI: 10.1214/ss/1009212673.
[15]. Gandhi M, Mukherjee B and Biswas D. A Bayesian approach for inference from a bridging study with binary outcomes. J. Biopharm Stat 2012; 22: 935-951. DOI: 10.1080/10543406.2012.698436.
[16]. Duan Y, Ye K and Smith EP. Evaluating water quality using power priors to incorporate historical information. Environmetrics 2006; 17: 95-106. DOI: 10.1002/env.752.
[17]. Hobbs BP, Sargent DJ and Carlin BP. Commensurate priors for incorporating historical information in clinical trials using general and generalized linear models. Bayesian Anal 2012; 7: 639-674. DOI: 10.1214/12-BA722.
[18]. Neuenschwander B, Capkun-Niggli G and Branson M, et al. Summarizing historical information on controls in clinical trials. Clin Trials 2010; 7: 5-18. DOI: 10.1177/1740774509356002.
[19]. Schmidli H, Gsteiger S and Roychoudhury S, et al. Robust meta-analytic-predictive priors in clinical trials with historical control information. Biometrics 2014; 70: 1023-1032. DOI: 10.1111/biom.12242.
[20]. Hupf B, Bunn V and Lin J, et al. Bayesian semiparametric meta-analytic-predictive prior for historical control borrowing in clinical trials. Stat. Med. 2021; 40: 3385-3399. DOI: 10.1002/sim.8970.
[21]. Li JX, Chen WC and Scott JA. Addressing prior-data conflict with empirical meta-analytic-predictive priors in clinical studies with historical information. J. Biopharm Stat 2016; 26: 1056-1066. DOI: 10.1080/10543406.2016.1226324.
[22]. Zhang H, Shen Y and Li J, et al. Adaptively leveraging external data with robust meta-analytical-predictive prior using empirical Bayes. Pharm. Stat. 2023; 22: 846-860. DOI: 10.1002/pst.2315.
[23]. Rosenbaum PR and Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika 1983; 70: 41-55. DOI: 10.1093/biomet/70.1.41.
[24]. Bennett M, White S and Best N, et al. A novel equivalence probability weighted power prior for using historical control data in an adaptive clinical trial design: a comparison to standard methods. Pharm. Stat. 2021; 20: 462-484. DOI: 10.1002/pst.2088.
[25]. Lee BK, Lessler J and Stuart EA. Improving propensity score weighting using machine learning. Stat. Med. 2010; 29: 337-346. DOI: 10.1002/sim.3782.
[26]. Setoguchi S, Schneeweiss S and Brookhart MA, et al. Evaluating uses of data mining techniques in propensity score estimation: a simulation study. Pharmacoepidem. Dr. S. 2008; 17: 546-555. DOI: 10.1002/pds.1555.
[27]. McCaffrey DF, Ridgeway G and Morral AR. Propensity score estimation with boosted regression for evaluating causal effects in observational studies. Psychol. Methods 2004; 9: 403-425. DOI: 10.1037/1082-989X.9.4.403.
[28]. Rosenbaum PR and Rubin DB. Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association 1984; 79: 516-524. DOI: 10.1080/01621459.1984.10478078.
[29]. Cochran WG. The effectiveness of adjustment by subclassification in removing bias in observational studies. Biometrics 1968; 24: 295-313.
[30]. Gustafson P, Hossain S and Macnab YC. Conservative prior distributions for variance parameters in hierarchical models. Canadian Journal of Statistics 2006; 34: 377-390. DOI: 10.1002/cjs.5550340302.
[31]. Dalal SR and Hall WJ. Approximating priors by mixtures of natural conjugate priors. Journal of the Royal Statistical Society: Series B (Methodological) 1983; 45: 278-286.
[32]. Wang C, Li H and Chen WC, et al. Propensity score-integrated power prior approach for incorporating real-world evidence in single-arm clinical studies. J. Biopharm Stat 2019; 29: 731-748. DOI: 10.1080/10543406.2019.1657133.
[33]. Liu M, Bunn V and Hupf B, et al. Propensity-score-based meta-analytic predictive prior for incorporating real-world and historical data. Stat. Med. 2021; 40: 4794-4808. DOI: 10.1002/sim.9095.
[34]. Inman HF and Bradley EL. The overlapping coefficient as a measure of agreement between probability distributions and point estimation of the overlap of two normal densities. Communications in Statistics - Theory and Methods 1989; 18: 3851-3874. DOI: 10.1080/03610928908830127.
[35]. Thompson WR. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika 1933; 25(3-4): 285-294. DOI: 10.1093/biomet/25.3-4.285.
[36]. Gooderham M, Pinter A and Ferris LK, et al. Long-term, durable, absolute Psoriasis Area and Severity Index and health-related quality of life improvements with risankizumab treatment: a post hoc integrated analysis of patients with moderate-to-severe plaque psoriasis. J. Eur. Acad. Dermatol. 2022; 36: 855-865. DOI: 10.1111/jdv.18010.
[37]. Borroni RG, Malagoli P and Gargiulo L, et al. Real-life effectiveness and safety of risankizumab in moderate-to-severe plaque psoriasis: a 40-week multicentric retrospective study. Acta Derm.-Venereol. 2021; 101: adv605. DOI: 10.2340/actadv.v101.283.
[38]. FDA. 2023i. Using Bayesian statistical approaches to advance our ability to evaluate drug products. https://www.fda.gov/drugs/cder-small-business-industry-assistance-sbia/using-bayesian-statistical-approaches-advance-our-ability-evaluate-drug-products (accessed August 1, 2024).
[39]. FDA. 2024a. Advancing the use of complex innovative designs in clinical trials: from pilot to practice. https://www.fda.gov/news-events/advancing-use-complex-innovative-designs-clinical-trials-pilot-practice-03052024 (accessed August 15, 2024).
[40]. Shenoy P. Multi-regional clinical trials and global drug development. Perspect Clin Res 2016; 7(2): 62-67. DOI: 10.4103/2229-3485.179430.
[41]. Liu JP, Hsueh H and Hsiao CF. A Bayesian noninferiority approach to evaluation of bridging studies. J. Biopharm Stat 2004; 14: 291-300. DOI: 10.1081/BIP-120037180.
[42]. Rare Diseases: Common Issues in Drug Development; Draft Guidance for Industry; Availability (U.S. Food & Drug Administration Documents/FIND, Silver Spring, 2019).
[43]. Drazen JM, Harrington DP, McMurray JJV, Ware JH, Woodcock J and Bhatt DL, et al. Adaptive designs for clinical trials. N Engl J Med 2016; 375(1): 65-74.
|
https://arxiv.org/abs/2505.12308v1
|
arXiv:2505.12706v1 [math.ST] 19 May 2025

Efficient computation of complementary set partitions, with applications to an extension and estimation of generalized cumulants

Elvira Di Nardo∗, Giuseppe Guarino†

Abstract

This paper develops new combinatorial approaches to analyze and compute special set partitions, called complementary set partitions, which are fundamental in the study of generalized cumulants. Moving away from traditional graph-based and algebraic methods, a simple and fast algorithm is proposed to list complementary set partitions based on two-block partitions, making the computation more accessible and implementable also in non-symbolic programming languages like R. Computational comparisons in Maple demonstrate the efficiency of the proposal. Additionally, the notion of generalized cumulant is extended using multiset subdivisions and multi-index partitions to include scenarios with repeated variables and to address more sophisticated dependence structures. A formula is provided that expresses generalized multivariate cumulants as linear combinations of multivariate cumulants, weighted by coefficients that admit a natural combinatorial interpretation. Finally, the introduction of dummy variables and specialized multi-index partitions enables an efficient procedure for estimating generalized multivariate cumulants with a substantial reduction in the data power sums involved.

Keywords: Complementary set partition; set partition lattice; generalized cumulant; multi-index partition; multiset subdivision; multivariate polykay

1 Introduction

Generalized cumulants were introduced by McCullagh in [15] and permit the computation of joint cumulants of polynomials in a random sample, such as linear and quadratic forms [13]. To give a simple example, suppose we need the covariance of Σ_i a_i X_i and Σ_{j,k} a_{jk} X_j X_k, where X = (X_1, ..., X_n) is a vector of random variables (r.v.'s) not necessarily independent and identically distributed (i.i.d.).
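This multilinearity identity can be checked numerically. The sketch below (Python for illustration, since the paper targets R; the joint distribution and coefficient values are arbitrary choices of ours) computes both sides of cov(Σ_i a_i X_i, Σ_{j,k} a_{jk} X_j X_k) exactly by enumerating a small joint probability table.

```python
from itertools import product

# Joint distribution of (X1, X2, X3): a small table of atoms -> probabilities.
# Any dependent joint law works; this one is chosen arbitrarily.
pmf = {(0, 1, 2): 0.2, (1, 1, 0): 0.3, (2, 0, 1): 0.1, (1, 2, 2): 0.4}

def E(f):
    """Expectation of f(x) under the joint pmf."""
    return sum(p * f(x) for x, p in pmf.items())

def cov(f, g):
    return E(lambda x: f(x) * g(x)) - E(f) * E(g)

a = [0.5, -1.0, 2.0]                     # coefficients a_i (arbitrary)
A = [[0.3, 1.0, -0.7],                   # coefficients a_{jk} (arbitrary)
     [0.0, 2.0, 0.5],
     [1.1, -0.4, 0.9]]

lin = lambda x: sum(a[i] * x[i] for i in range(3))
quad = lambda x: sum(A[j][k] * x[j] * x[k] for j in range(3) for k in range(3))

lhs = cov(lin, quad)
# Right-hand side: sum_{i,j,k} a_i a_{jk} kappa^{i,jk}, where
# kappa^{i,jk} = cov(X_i, X_j X_k) is a generalized cumulant.
rhs = sum(a[i] * A[j][k] * cov(lambda x, i=i: x[i],
                               lambda x, j=j, k=k: x[j] * x[k])
          for i, j, k in product(range(3), repeat=3))

assert abs(lhs - rhs) < 1e-12
```

The exact enumeration avoids Monte Carlo noise, so the two sides agree to machine precision.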
By the multilinear property of covariance we have

cov(Σ_i a_i X_i, Σ_{j,k} a_{jk} X_j X_k) = Σ_{i,j,k} a_i a_{jk} κ^{i,jk},

where κ^{i,jk} = cov(X_i, X_j X_k) is the generalized cumulant of X_i and X_j X_k. These calculations find applications in various fields, for example sample cumulants, Edgeworth series, conditional cumulants, and Bartlett's identities; see [15] and [23]. Many other examples can be found in [1], [2] and [16], to which we refer the reader interested in such applications.

∗Department of Mathematics, University of Turin, Turin, Italy, email: elvira.dinardo@unito.it (corresponding author)
†Local Health Authority of Potenza, Potenza, Italy, email: giuseppe.guarino@webmail.it

Although generalized cumulants are very useful, their practical application is constrained by considerable computational complexity. This stems from their dependence on a specific class of set partitions, known as complementary set partitions. Manual listing of these partitions becomes increasingly difficult as the number of indices increases, even if they are of moderate size. The development of automatic tools for these calculations would therefore be of great benefit.

To the best of our knowledge, there are currently no available procedures for generating complementary set partitions in non-symbolic programming environments, like R [19]. Computer algebra tools have been implemented by Wang, Andrews and Stafford [22], McCullagh and Wilks [17], and Kendall [12]. Nevertheless, due to the inefficiency of the resulting computation times, these methods have not been adopted in practice. Consequently, the tables provided in [16] continue to serve as the standard reference, even though they are incomplete, as a full listing would require an impractical number of
|
https://arxiv.org/abs/2505.12706v1
|
pages. As a result, any missing information must be manually computed, utilizing the fundamental properties of complementary set partitions (see Section 2 for a short review).

In [23], Stafford suggested a strategic change by avoiding the combinatorial complexity of the problem. Instead of initially computing complementary set partitions and then generalized cumulants, he reversed the process, proposing that generalized cumulants be calculated first as a way to recover complementary set partitions. This strategy was implemented¹ in Mathematica 3.0 [20] (further details can be found in [1]). Despite relying exclusively on algebraic calculations, the automatic procedure's implementation in non-open-source software such as Mathematica prevented its widespread adoption. Moreover, this approach would be inefficient when applied to other formulas involving complementary set partitions, like those for polykay cumulants. Consequently, the tables in [16] are still the primary reference tool for complementary set partitions and related calculations [18].

The purpose of this paper is twofold, both computational and theoretical. As suggested in [16], complementary set partitions can be computed using connected graphs. Here, we propose an alternative and novel combinatorial method. For a partition with m blocks, we only use the two-block partitions of the set [m] = {1, ..., m} to recover all partitions that are not complementary to the given one. Comparative analysis of computational times with both connected-graph-based algorithms and other strategies demonstrates the efficiency of our approach, which is simple enough to be implemented in non-symbolic, open-source software² like R.

From a theoretical perspective, the main contribution of this paper is the extension of the notion of generalized cumulants to r.v.'s indexed by multiset subdivisions [8].
Generalized cumulants are usually expressed in terms of products of joint cumulants indexed by partition blocks, with r.v.'s having distinct subscripts. By extending the indexing to multiset subdivisions, we allow repetitions and thus powers of r.v.'s. In this framework, we define generalized multivariate cumulants as intermediate quantities between multivariate cumulants and moments, and we provide a closed-form expression to represent these quantities in terms of products of multivariate cumulants. The key tool relies on defining a labelling rule to distinguish repeated r.v.'s and on characterizing complementary set partitions using appropriate vector subspaces. These subspaces, which are generated by binary vectors encoding the partition blocks, allow for an efficient transformation of the auxiliary distinct r.v.'s back to the original set. Furthermore, we provide a combinatorial interpretation of the coefficients multiplying the products of multivariate cumulants in the expression of generalized multivariate cumulants; these coefficients are all 1 in the classical case of generalized cumulants.

¹J. E. Stafford kindly provided us with routines in Mathematica.
²All routines are available upon request.

As an application, an unbiased estimator for generalized multivariate cumulants is proposed, based on multivariate polykays [5]. Traditionally, computing multivariate polykays involves set partitions and computer algebra tools [9]. By employing multi-index partitions, these formulas have been implemented efficiently even in non-symbolic programming environments like R [6]. In this context, the use of the labelling rule and dummy variables further speeds up the computation of these estimators by reducing the number of
|
https://arxiv.org/abs/2505.12706v1
|
multivariate polykays involved. The main idea is to reduce the estimation of a generalized multivariate cumulant to the estimation of a generalized cumulant of the same order, but involving fewer power sum symmetric functions. In turn, the estimation of a generalized cumulant is further simplified to the estimation of a joint cumulant, thus decreasing overall computational time.

The paper is organized as follows. Section 2 provides an overview of complementary set partitions and their properties. Section 3 briefly discusses the existing literature on methods for generating complementary set partitions and introduces the new combinatorial approach proposed in this paper. In Section 4, generalized multivariate cumulants and labelling rules are introduced. In the same section we define the ⃗1_n-partitions and show how the subspaces spanned by their columns characterize complementary set partitions. A closed-form formula is then provided to express generalized multivariate cumulants in terms of products of multivariate cumulants, using an appropriate transformation of multi-index partitions into ⃗1_n-partitions. These products are multiplied by specific coefficients, which are given a combinatorial interpretation. Section 5 focuses on practical applications. We provide computational comparisons between the combinatorial method introduced here and other existing methods for generating complementary set partitions. Next, we explain how this same approach can be used for the efficient computation of generalized multivariate cumulants. We then illustrate the application of the labelling rule and suitable dummy variables to achieve efficient computation of generalized multivariate cumulant estimators. Conclusions and some open problems end the paper.

2 Background on complementary set partitions

Recall that a partition π of [n] = {1, ..., n} is a set of m ≤ n nonempty subsets of [n], named blocks of the partition, such that every integer is in exactly one subset.
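These basic objects can be made concrete in code. The sketch below (Python for illustration, although the paper advocates R; the helper names are ours) enumerates all partitions of [n] and computes the least upper bound π ∨ π̃ with a union-find pass, so that complementarity (π ∨ π̃ = 1_n, Definition 2.1) can be checked directly.

```python
def set_partitions(elems):
    """Enumerate all partitions of a list of distinct elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        # put `first` into each existing block in turn ...
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        # ... or into its own singleton block
        yield [[first]] + smaller

def join(p1, p2, n):
    """Least upper bound of two partitions of [1..n] (union-find merge)."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for block in list(p1) + list(p2):
        for b in block[1:]:
            parent[find(b)] = find(block[0])
    classes = {}
    for x in range(1, n + 1):
        classes.setdefault(find(x), []).append(x)
    return sorted(classes.values())

def complementary(p1, p2, n):
    return len(join(p1, p2, n)) == 1     # join equals the trivial partition 1_n

n = 4
pi = [[1], [2, 3, 4]]                    # the partition 1|234
comps = [q for q in set_partitions(list(range(1, n + 1)))
         if complementary(pi, q, n)]
# Example 2.1 lists exactly ten partitions complementary to 1|234.
assert len(comps) == 10
assert len(list(set_partitions(list(range(1, n + 1))))) == 15   # Bell(4)
```

Brute-force enumeration of Π_n scales as the Bell numbers, which is exactly the cost the paper's two-block method is designed to mitigate for the non-complementary side of the computation.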
The set of partitions of [n] is usually denoted by Π_n, to which we refer in the following unless otherwise specified, and the subset of partitions in m blocks is denoted by Π_{n,m} ⊆ Π_n. To specify the blocks {B_i} of π ∈ Π_{n,m}, we write π = B_1|...|B_m. In particular, the trivial partition is denoted by 1_n = 1...n and the singleton partition is denoted by 0_n = 1|...|n. The refinement is the natural partial order among set partitions, that is, π ≤ π̃ (π refines π̃) if π = π̃ or every block of π is a subset of some block of π̃. In particular, the least upper bound π ∨ π̃ is the finest partition which is refined by both π and π̃.

Definition 2.1. [17] Two set partitions π and π̃ are said to be complementary if their least upper bound is the trivial partition 1_n, that is, π ∨ π̃ = 1_n.

Swapping property. Let us begin by recalling the canonical representation (cr1) of a partition π ∈ Π_{n,m} as described in [17]: within each block, elements appear in increasing order; the blocks themselves are arranged in decreasing order of cardinality, and blocks of the same size are ordered lexicographically. In such a way, the block cardinalities of π ∈ Π_{n,m} give a partition³ λ ⊢ n in m parts. Note that if π, π̃ ∈ Π_{n,m} share the same integer partition of block cardinalities, then π is obtained from π̃ by swapping some integers among blocks, that is, there exists a permutation that takes the canonical representation of π into the
|
https://arxiv.org/abs/2505.12706v1
|
canonical representation of π̃. Thus, the complementary set partitions of π̃ can be recovered from those of π by swapping the corresponding interchanged integers in the blocks (swapping property).

Example 2.1. The complementary set partitions of π = 1|234 are 12|3|4, 13|2|4, 14|2|3, 123|4, 124|3, 12|34, 134|2, 13|24, 14|23, 1234. The complementary set partitions of π̃ = 123|4 are the complementary set partitions of π with 1 swapped with 4, that is, 1|24|3, 1|2|34, 14|2|3, 1|234, 124|3, 13|24, 134|2, 12|34, 14|23, 1234.

Intersection matrices. Complementary set partitions can be grouped into equivalence classes using intersection matrices.

Definition 2.2. If k, l ≤ n, π₁ = B_1|...|B_k ∈ Π_{n,k} and π₂ = C_1|...|C_l ∈ Π_{n,l}, the intersection matrix M[k × l], denoted by π₁ ∩ π₂, is such that its (i, j)-th element M_{ij} is the cardinality of B_i ∩ C_j, that is, M_{ij} = |B_i ∩ C_j|.

Given π, the partitions π₁ and π₂ are equivalent if they have equal intersection matrices, that is, if π ∩ π₁ = π ∩ π₂, after suitably permuting the blocks of π₁ and π₂ or the columns of π ∩ π₁ and π ∩ π₂.

Generalized cumulants. Suppose we have r.v.'s X = (X_1, X_2, ...), not necessarily i.i.d.

Definition 2.3. [3] Let I be a set of n subscripts chosen among those of the X_i's and π = B_1|...|B_m a partition in m blocks of I. Generalized cumulants are the joint cumulants

K(π) = K_{B_1|...|B_m} = K( ∏_{i∈B_1} X_i, ..., ∏_{i∈B_m} X_i ).   (1)

In the literature, to simplify the notation, the subscripts in I are labeled by increasing integers, that is, I = {j, ..., l} is replaced by {1, ..., n} = [n]. Thus n is said to be the degree of the generalized cumulant, while its order is the number m of blocks. If m = 1, (1) reduces to the joint moment

K(1_n) = K_{[n]} = K(X_1 ··· X_n) = E[X_1 ··· X_n].   (2)

³Recall that λ ⊢ n denotes a partition of an integer n, that is, a sequence λ = (λ_1, λ_2, ..., λ_m), with m ≤ n, where the λ_j are decreasing integers, named parts of λ, such that λ_1 + ··· + λ_m = n.

If m = n > 1, (1) reduces to the joint cumulant K(0_n) = K_{B_1|...|B_m} = K(X_1, ..., X_n).
(3)

The expression of generalized cumulants in terms of joint cumulants is [17]

K(π) = Σ_{π̃ : π ∨ π̃ = 1_n} ∏_{B_i ∈ π̃} κ(B_i),   (4)

where the summation runs over all π̃ that are complementary set partitions of π, and κ(B_i) denotes the joint cumulant of the r.v.'s corresponding to the indices in B_i.

3 Methods to list complementary set partitions

We begin this section with a short review of existing methods in the literature for listing complementary set partitions of π ∈ Π_n. In the second part, we propose a new method relying on two-block partitions, which offers significant computational advantages (see Section 5).

Connected graphs. Given a partition π ∈ Π_n, let G(π) be the graph whose vertices are labeled by 1, ..., n and whose edges connect vertices in the same partition block. Thus G(π) is partitioned into cliques⁴, each one corresponding to a block of π. Among these graphs, G(1_n) is uniquely both complete and connected, consisting of a single clique.

Definition 3.1. The sum of two graphs G(π) and G(π̃) is the graph G(π) ⊕ G(π̃) having the same n vertices and edges obtained by the union of the edges of G(π) and those of G(π̃).

Theorem 3.1. [15] π, π̃ ∈ Π_n are complementary if and only if G(π) ⊕ G(π̃) is connected.

According to Theorem 3.1, a way to check if two
https://arxiv.org/abs/2505.12706v1
partitions π and ˜π are complementary is to verify that there exists a path in G(π) ⊕ G(˜π) for each pair of vertices. To this aim, a classical strategy relies on working with its Laplacian matrix (footnote 5). If L[G(π) ⊕ G(˜π)] has rank n − 1, then G(π) ⊕ G(˜π) is connected, as the number of connected components is equal to 1 [4]. In summary, to find all complementary set partitions of π, the algorithm checks each ˜π ∈ Π_n by computing the rank of L[G(π) ⊕ G(˜π)]. In Table 1, this algorithm is listed as cspLA. For completeness, we also employed the IsConnected function from the Hypergraphs Maple 2024 package to verify graph connectivity. In Table 1, this algorithm is listed as cspGR.

4. A clique in an undirected graph is a set of vertices in which every pair of distinct vertices is adjacent, forming a complete induced subgraph. A graph is complete if every vertex is directly connected to all other vertices by unique edges. A graph is connected if there exists a path, i.e. a sequence of edges, linking every pair of vertices.
5. The Laplacian matrix of G(π) is defined by L_{i,j}[G(π)] = deg(i) if i = j, with deg(i) the number of edges terminating at the vertex i; L_{i,j}[G(π)] = −1 if i ≠ j but i and j are connected by an edge; otherwise 0.

Stafford's algorithm. The following example shows the technique proposed by Stafford [23] for obtaining complementary set partitions. In Table 1, this algorithm is listed as cspJS.

Example 3.1. To express cov(X1, X2X3) in terms of joint cumulants, first set Y1 = X1 and Y2 = X2X3. Then from cov(Y1, Y2) = E[Y1Y2] − E[Y1]E[Y2] recover cov(X1, X2X3) = E[X1X2X3] − E[X1]E[X2X3]. Now express the joint moments of the Xi's in terms of joint cumulants, that is E[X1] = κ1, E[X2X3] = κ2κ3 + κ23 and E[X1X2X3] = κ1κ2κ3 + κ1κ23 + κ2κ13 + κ3κ12 + κ123. Plug these expressions in E[X1X2X3] − E[X1]E[X2X3] to get cov(X1, X2X3) = κ123 + κ2κ13 + κ3κ12. Thus the complementary set partitions of 1|23 are 123, 13|2, 12|3.

3.1 The new two-blocks partition method

Suppose π = B1|...|Bm ∈ Π_{n,m} with m ≥ 2. The steps of the new proposed method (see Table 1, algorithm csp) to list the complementary set partitions of π can be summarized as follows:

a) let {C1, C2} be a partition in two blocks of [m] and set
   A1 = ∪_{i∈C1} Bi and A2 = ∪_{i∈C2} Bi;   (5)
b) for all ˜π1 ∈ Π_{A1} and ˜π2 ∈ Π_{A2} consider the partition ˜π = ˜π1 ∪ ˜π2 ∈ Π_n;
c) repeat steps a) and b) for each partition in two blocks of [m], and denote with T^{(n)}_{π,m} the set of all partitions ˜π constructed as above;
d) the set Π_n − T^{(n)}_{π,m} is the set of all complementary set partitions of π.

Example 3.2. Consider π = B1|B2|B3 ∈ Π_{5,3} with B1 = {1}, B2 = {2,3}, B3 = {4,5}. The partitions C1|C2 in two blocks of the indexes {1,2,3} are 1|23, 13|2 and 12|3. Set A1 = B1 ∪ B2 = {1,2,3} and A2 = B3 = {4,5}. The partitions of A1 are 123, 1|23, 13|2, 12|3, 1|2|3; those of A2 are 45, 4|5. Thus the partitions ˜π in step b) are 123|45, 123|4|5, 1|23|45, 1|23|4|5, 13|2|45, 13|2|4|5, 12|3|45, 12|3|4|5, 1|2|3|45, 1|2|3|4|5. The above are partitions of Π_5 not complementary to 1|23|45, as their least upper bound with π is π itself or A1|A2 = 123|45. Repeating the same arguments for A1 = B1, A2 = B2 ∪ B3 and for A1 = B1 ∪ B3, A2 = B2, all set partitions of Π_5 not complementary to 1|23|45 are retrieved.

The following theorem proves that the steps a)-c) of the procedure outlined above indeed generate all set partitions that are not complementary to π ∈ Π_{n,m}.

Theorem 3.2. If π = B1|...|Bm ∈ Π_{n,m}, with m ≥ 2, then the set of all partitions not complementary to
π is

T^{(n)}_{π,m} = { ˜π = ˜π1 ∪ ˜π2 ∈ Π_n : ˜π1 ∈ Π_{A1}, ˜π2 ∈ Π_{A2} with A1, A2 in (5), at varying C1|C2 ∈ Π_{m,2} }.   (6)

Proof. We prove that T^{(n)}_{π,m} contains exactly those partitions not complementary to π. Indeed, if ˜π ∈ T^{(n)}_{π,m} then π ∨ ˜π ≤ A1|A2 ≠ 1_n, since π ≤ A1|A2 and ˜π ≤ A1|A2, as from (5) ˜π1 ∈ Π_{A1} and ˜π2 ∈ Π_{A2} respectively; hence ˜π is not complementary to π. Vice versa, suppose ˜π not complementary to π. Then there exists π⋆ ∈ Π_n such that π ∨ ˜π = π⋆ ≠ 1_n, and so π⋆ ∈ Π_{n,k} with k ≥ 2. As π ≤ π⋆, there exists A1|A2 ∈ Π_{n,2} such that π⋆ ≤ A1|A2, obtained by joining two or more blocks of π until a two-block structure is reached; thus A1 and A2 are as in (5) for some C1|C2 ∈ Π_{m,2}. Since ˜π ≤ π⋆ ≤ A1|A2, the partition ˜π splits as ˜π = ˜π1 ∪ ˜π2 with ˜π1 ∈ Π_{A1} and ˜π2 ∈ Π_{A2}, hence ˜π ∈ T^{(n)}_{π,m}.

Remark 3.1. Note that |Π_n| = B_n, the n-th Bell number. Therefore the number of pairs (˜π1, ˜π2) in (6) is B_{|A1|} × B_{|A2|}, with |A_j| = Σ_{i∈C_j} |B_i|, j = 1, 2. Among these pairs, there are some that provide the same partition ˜π. Therefore, the number of not complementary set partitions of π can be obtained using the inclusion-exclusion principle. Indeed, set l = |Π_{m,2}| = 2^{m−1} − 1 and denote with

T_i = { ˜π = ˜π1 ∪ ˜π2 ∈ Π_n : ˜π1 ∈ Π_{A1}, ˜π2 ∈ Π_{A2} with A1, A2 in (5) for a fixed C1|C2 ∈ Π_{m,2} }   (7)

for i = 1, ..., l. Thus we have

|T^{(n)}_{\pi,m}| = |\cup_{i=1}^{l} T_i| = \sum_{k=1}^{l} (-1)^{k+1} \sum_{1 \le i_1 < \cdots < i_k \le l} |T_{i_1} \cap \cdots \cap T_{i_k}|.   (8)

4 Generalized multivariate cumulants

This section introduces an extension of generalized cumulants through the use of multiset subdivisions of the subscripts of the X_i's. Roughly speaking, a multiset is a set in which repetitions are allowed. In particular, a multiset M is a pair (M̄, f), where f : M̄ → N and M̄ is a set called the support of M [8]. For every j ∈ M̄, f(j) is said the multiplicity of j, and |M| = Σ_{j∈M̄} f(j) is the length of M. A multivariate cumulant can be recovered from a joint cumulant using a multiset of subscripts as follows:

κ_i(X) = κ_{i1,...,in}(X1, ..., Xn) = K(X1, ..., X1, ..., Xn, ..., Xn),   (9)

where X_j is repeated i_j times, K denotes the joint cumulant, and the order of the repeated r.v.'s does not matter. The "bag" M of the subscripts in (9) is an example of multiset with support M̄ = [n] and multiplicities i1, ..., in. Unless stated otherwise, we will write M = {1(i1), . . .
, n(in)}in the following sections. When one or more integers i1, . . . , i nequals 0, the corresponding r.v.’s are omitted from the bag. For example κ1,0,2(X1, X2, X3) =K(X1, X3, X3). Just as a set can be partitioned into disjointed subsets, similarly a multiset can be subdivided into disjointed submultisets. A multiset Mi= (¯Mi, fi) is a submultiset of M= (¯M, f) if¯Mi⊆¯Mandfi(j)≤f(j) for all j∈¯M.A subdivision6of a multiset M is a multiset of submultisets of M, such that their disjoint union returns M.Thus the following extends Definition 2.3. 6Formally, a subdivision of M= (¯M, f ) is a multiset of m≤ |M|non-empty submultisets Mi= (¯Mi, fi) ofMsuch that ∪m i=1¯Mi=¯MandPm i=1fi(j) =f(j) for all j∈¯M. 7 Definition 4.1. LetMbe a multiset of |M|subscripts chosen among the ones of Xi’s andSa subdivision in submultisets Mi= (¯Mi, fi), i= 1, . . . , m ≤ |M|.A generalized multivariate cumulant is the joint cumulant K(S) =KM1|...|Mm=KY i∈¯M1Xf1(i) i, . . . ,Y i∈¯MmXfm(i) i . (10) To simplify the notation, the multiset Mof subscripts is denoted by {1(i1), . . .
, n(in)} where each j∈[n] corresponds to ijrepeated subscripts of Xi’s. If M= [n] with no repeating integers, then Sreturns a set partition πand (10) reduces to (1). If m= 1 then M1=MandKM=E[Xi1 1···Xinn] a multivariate moment. If m=|M|then (10) returns the multivariate cumulant (9). To handle multivariate cumulants, multi-index partitions can be used instead of mul- tiset subdivisions. Multi-index partitions play a key role in the multivariate Fa` a di Bruno formula and are better suited for non-symbolic computational environments [6]. Recall that a composition of a multi-index7i= (i1, . . . , i n)∈Nn 0,inmmulti-indexes is a matrix Λ = (λ1, . . . ,λm) such that λ1+···+λm=i,see [10]. The length of Λ is the number of its columns, denoted by l(Λ).A multi-index partition is a composition whose columns are in lexicografic order. In the following, we fix a reverse lexicografic order8. Usually multi-index partitions are denoted by Λ = ( λr1 1,λr2 2, . . .)⊢iwith r1≥1 columns equal to λ1>λ2, r2≥1 columns equal to λ2>λ3and so on. In such a case l(Λ) = r1+r2+···. Remark 4.1. IfX= (X1, . . . , X n) is a random vector, then µi(X) =µi1...in(X) = E[Xi] =E[Xi1 1···Xinn] denotes its multivariate moment of order iwhile its multivariate cumulants κi(X) =κi1...in(X) are defined by κ0= 0 and the identityP |i|≥1κi(X)zi i!= ln 1 +P |i|≥1E[Xi]zi i! where zi=zi1 1···zinn,i! =i1!···in!,|i|=i1+···+in.Formulae giving multivariate moments in terms of multivariate cumulants (and viceversa) using multi-index partitions are [6] µi(X) =X Λ⊢idΛκX(Λ) and κi(X) =X Λ⊢i(−1)l(Λ)−1(l(Λ)−1)!dΛµX(Λ) (11) where the sum is over all multi-index partitions Λ = ( λr1 1,λr2 2, . . .)⊢iof length l(Λ) and dΛ:=i!Y i1 (λi!)riri!, µ X(Λ) :=Y i[µλi(X)]ri, κ X(Λ) :=Y i[κλi(X)]ri. (12) If Λ⊢⃗1nthen dΛ= 1 and (11) gives µ11...1(X) =X Λ⊢⃗1nκX(Λ) and κ11...1(X) =X Λ⊢⃗1n(−1)l(Λ)−1(l(Λ)−1)!µX(Λ) (13) with µ11...1(X) and κ11...1(X) joint moment and joint cumulant respectively. Subdivisions of M={1(i1), , . . . 
, n(in)}are in one-to-one correspondence with multi- index partitions of i= (i1, . . . , i n) [6]. 7The multi-indexes (0 ,0, . . . , 0),(1,1, . . . , 1)∈Nn 0will be denoted by ⃗0nand⃗1nrespectively. 8As example ( a1, b1)>(a2, b2) ifa1> a 2ora1=a2andb1> b 2. 8 Example 4.1. Consider the multiset M={1(1),2(2),3(2)}.Examples of subdivisions areS1= 1|23|23 and S2= 123 |23 corresponding to the multi-index partitions Λ 1= (λ1,λ2 2),Λ2= (λ3,λ4)⊢(1,2,2)⊺respectively, with λ1= (1,0,0)⊺,λ2= (0,1,1)⊺,λ3= (1,1,1)⊺,λ4= (0,1,1)⊺. With this notation, the definition of generalized multivariate cumulants can be re- expressed in terms of multi-index partitions rather than multiset subdivisions. Definition 4.2. The generalized multivariate cumulant of degree nand order l(Λ) = r1+···+rmcorresponding to the multi-index partition Λ = ( λr1 1, . . . ,λrmm)⊢i∈Nnis the joint cumulant KΛ(X) =Kr1|...|rm λ⊺ 1;...;λ⊺ m(X) =K Xλ1, . . . ,Xλ1 |{z } r1, . . . ,Xλm, . . . ,Xλm |{z } rm . (14) Ifm= 1 and r1= 1,then Λ = iand (14) returns the i-th multivariate moment, that isKi(X) =K(Xi) =E[Xi1 1···Xinn].Ifm=n > 1 and ( λ1, . . . ,λn) = (e1, . . . ,en),is the standard basis
of Rn,thenXej=Xjforj= 1, . . . , n and (14) returns the i-th multivariate cumulant (9). Example 4.2. IfX= (X1, X2, X3) then K1 122(X) =E[X1X2 2X2 3],andK1|2|2 100;010;001 (X) = κ122(X) the multivariate cumulant of order (1 ,2,2). The case r1=. . .=rm= 1 and i=⃗1nis the one of special interest for our purposes. Indeed a multi-index partition Λ ⊢⃗1nwith no equal columns corresponds to a partition π∈Πn,as the non-zero entries in each λjidentify the row indexes belonging to the j-th block Bjofπ.In such a case (14) reduces to K Xλ1, . . . ,Xλm =K(π),the generalized cumulant (1). In order to provide a formula for expressing generalized multivariate cumulants in terms of multivariate cumulants, we need to represent complementary set partitions using multi-index partitions. This is the focus of the next subsection. 4.1 Complementary ⃗1n-partitions Complementary set partitions can be characterized using partitions of ⃗1n.To this aim we fix a different canonical representation ( cr2) of partitions: elements within blocks are ordered increasingly, and the blocks are arranged in lexicographic order. Definition 4.3. The⃗1n-partition corresponding to π=B1|. . .|Bm∈Πn,mis the multi- index partition Λ π= (λ1, . . . ,λm)⊢⃗1nsuch that its ( t, j)-th element ( λj)t= 1 if t∈Bj, otherwise 0 . The number nof rows and the number m=|π|of columns of Λ are said the order and the size of the ⃗1n-partition respectively. We use the notation Λ π=λ⊺ 1|. . .|λ⊺ mwhen it’s needed. LetVπ= span( λ1,λ2, . . .) denote the subspace spanned by the columns of Λ πwith dim(Vπ) =l(Λπ).We say that Vπis the column span of Λ π.In particular V1ndenotes the subspace spanned by ⃗1n.If Λ π= Λ ˜πor every column of Λ ˜πis a non-zero linear combination of columns of Λ πwith coefficients in {0,1}then V˜π= span( ˜λ1,˜λ2, . . .)⊆ span(λ1,λ2, . . .) =Vπ.If we consider the set inclusion ⊆as partial order, then the greatest lower bound of two column spans of different ⃗1n-partitions is given by their intersection. 
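As a concrete cross-check of the complementarity condition recalled in Section 2 (π and ˜π are complementary when π ∨ ˜π = 1_n), the join of two set partitions can be computed by repeatedly merging blocks that share elements. The following Python sketch is illustrative only — it is not the paper's Maple/R code, and all function names are ours; it recovers the ten complementary set partitions of π = 1|234 listed in Example 2.1:

```python
def join(p1, p2):
    """Least upper bound (join) of two set partitions, given as lists of blocks:
    repeatedly merge blocks that share an element, union-find style."""
    blocks = [set(b) for b in list(p1) + list(p2)]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(b) for b in blocks}

def is_complementary(p1, p2, n):
    # complementary iff the join is the one-block partition 1_n
    return join(p1, p2) == {frozenset(range(1, n + 1))}

def all_partitions(elems):
    """Enumerate every set partition of the list elems."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in all_partitions(rest):
        for k in range(len(smaller)):
            yield smaller[:k] + [smaller[k] | {first}] + smaller[k + 1:]
        yield [{first}] + smaller

pi = [{1}, {2, 3, 4}]  # the partition 1|234 of Example 2.1
comps = [p for p in all_partitions([1, 2, 3, 4]) if is_complementary(pi, p, 4)]
print(len(comps))  # Example 2.1 lists exactly 10 complementary partitions
```

This brute-force scan tests every one of the B_n partitions of Π_n individually; the algorithms compared in Table 1 differ precisely in how they avoid or accelerate this per-pair check.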
Example 4.3. We have Λ_{1_n} = ⃗1_n^⊺ and Λ_{⃗0_n} = I_n, where I_n denotes the identity matrix. The following are ⃗1_4-partitions:

Λ_{1|234} = 1000|0111, Λ_{123|4} = 1110|0001, Λ_{12|34} = 1100|0011, Λ_{12|3|4} = 1100|0010|0001.

Notice that V_{12|34} ⊆ V_{12|3|4}, so that V_{12|34} ∩ V_{12|3|4} = V_{12|34}. Instead V_{123|4} ∩ V_{12|34} = V_{1_4}, since no column of Λ_{123|4} is a sum of columns from Λ_{12|34}, and vice versa.

Definition 4.4. Two ⃗1_n-partitions Λ and ˜Λ are said to be complementary if V ∩ ˜V = V_{1_n}, with V, ˜V the column spans of Λ, ˜Λ respectively.

Theorem 4.1. π ∨ ˜π = π⋆ if and only if V_π ∩ V_{˜π} = V_{π⋆}.

Proof. Let Λ_π = (λ1, λ2, ...) and Λ_{˜π} = (˜λ1, ˜λ2, ...) be the ⃗1_n-partitions corresponding to π, ˜π ∈ Π_n respectively. Notice that π ≤ ˜π if and only if V_{˜π} ⊆ V_π: indeed either Λ_π = Λ_{˜π}, or every column of Λ_{˜π} is a non-zero linear combination of columns of Λ_π with coefficients in {0,1}. Thus the least upper bound π ∨ ˜π corresponds to the greatest lower bound V_π ∩ V_{˜π}.

Corollary 4.1. Complementary set partitions correspond to complementary ⃗1_n-partitions.

The following proposition parallels the swapping property recalled in Section 2.

Proposition 4.1. Suppose π1, π2 ∈ Π_{n,m} whose block cardinalities correspond to the same integer partition λ ⊢ n, regardless of the order of the blocks. If there exists a permutation σ that reorders
the elements within the blocks of π1 and π2, and V_{π1} ∩ V_π = V_{1_n}, then V_{π2} ∩ V_{˜π} = V_{1_n}, where ˜π is obtained from π by applying the permutation σ to the elements within its blocks.

Proof. If there exists a permutation σ reordering the elements in the blocks of π1 and π2, then there exist square permutation matrices P and Q, of n and m rows respectively, such that Λ_{π1} = P Λ_{π2} Q. Thus Λ_{π1} x = Λ_π x is equivalent to Λ_{π2} y = Λ_{˜π} y, where Λ_{˜π} = P Λ_π Q. Indeed, as P and Q are permutation matrices, P² = I_n and Q² = I_m, and

Λ_{π1} x = Λ_π x ⇔ P Λ_{π2} Q x = P P Λ_π Q Q x ⇔ P Λ_{π2} Q x = P Λ_{˜π} Q x ⇔ Λ_{π2} y = Λ_{˜π} y,

where y = Q x. Therefore, since V_{π1} ∩ V_π = V_{1_n} by hypothesis, V_{π2} ∩ V_{˜π} = V_{1_n} follows.

Example 4.4. In Example 2.1, π = 1|234 is obtained from ˜π = 123|4 by swapping 1 and 4. Both have (3,1) ⊢ 4 as block cardinalities, regardless of the order of the blocks, and

\Lambda_{1|234} = \begin{pmatrix} 1&0\\ 0&1\\ 0&1\\ 0&1 \end{pmatrix} = P\,\Lambda_{123|4}\,Q = \begin{pmatrix} 0&0&0&1\\ 0&1&0&0\\ 0&0&1&0\\ 1&0&0&0 \end{pmatrix} \begin{pmatrix} 1&0\\ 1&0\\ 1&0\\ 0&1 \end{pmatrix} \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}.

Intersection matrices of set partitions can be recovered by multiplying the corresponding ⃗1_n-partitions with suitable permutation matrices, as stated in the following proposition.

Proposition 4.2. If π1 ∈ Π_{n,k} and π2 ∈ Π_{n,l}, then π1 ∩ π2 = P1 (Λ_{π1}^⊺ Λ_{π2}) P2, where π1 ∩ π2 is the [k×l] intersection matrix of Definition 2.2 and P1, P2 are permutation matrices of dimensions [k×k] and [l×l] respectively.

Proof. Since Λ_{π1} is an [n×k] matrix and Λ_{π2} is an [n×l] matrix, Λ_{π1}^⊺ Λ_{π2} is a [k×l] matrix whose (i, j)-th entry gives the number of coupled 1's between the i-th column of Λ_{π1}, corresponding to the i-th block of π1, and the j-th column of Λ_{π2}, corresponding to the j-th block of π2. If the canonical representations cr1 and cr2 of π1 and π2 agree, the (i, j)-th entry of Λ_{π1}^⊺ Λ_{π2} gives the (i, j)-th entry of π1 ∩ π2, and P1 = I_k, P2 = I_l are identity matrices. Otherwise, there exists a permutation of blocks between π1 and π2. Therefore π1 ∩ π2 differs from Λ_{π1}^⊺ Λ_{π2} by suitable permutations of rows and/or columns, which are encoded by left multiplication with a suitable permutation matrix P1 (possibly equal to I_k) and right multiplication with a suitable permutation matrix P2 (possibly equal to I_l).

As a corollary, the equivalence classes of complementary set partitions can be constructed using products of ⃗1_n-partitions. Indeed, given π, the partitions π1 and π2 are equivalent if Λ_{˜π}^⊺ Λ_{π1} is equal to Λ_{˜π}^⊺ Λ_{π2} after suitably permuting their rows and/or columns.

Example 4.5. The partitions π1 = 134|2 and π2 = 234|1 are in the same equivalence class with representative π = 124|3, since

\pi_1 \cap \pi = \begin{pmatrix} 2&1\\ 1&0 \end{pmatrix} = \pi_2 \cap \pi = \Lambda_{\pi_1}^{\top}\Lambda_{\pi} = \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \Lambda_{\pi_2}^{\top}\Lambda_{\pi} \begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}.   (15)

The algorithm NullSpace: cspNS. According to Corollary 4.1, given two ⃗1_n-partitions Λ_π, Λ_{˜π}, one way to determine if they are complementary is to compute a basis of V_π ∩ V_{˜π} and check if this basis reduces to ⃗1_n. A basis for V_π ∩ V_{˜π} can be computed using the nullspace (footnote 9) of the block matrix [Λ_π | −Λ_{˜π}]. Therefore, given a ⃗1_n-partition Λ_π, to enumerate all its complementary Λ_{˜π} it is necessary to find all pairs (Λ_π, Λ_{˜π}) such that the intersection V_π ∩ V_{˜π} recovered from the nullspace of [Λ_π | −Λ_{˜π}] is V_{1_n}. Because computing a nullspace numerically is time-intensive, various strategies can be employed to speed up the overall process. For example, some ⃗1_n-partitions can be excluded initially, as they are clearly not complementary to Λ_π. This is the case of Λ_{⃗0_n}, since
Vπ⊆V0n=Rn,but it also applies to all ⃗1n-partitions Λ ˜πsuch that: Λ ˜πand Λ πshare at least one column; at least one column of Λ ˜πcan be expressed as linear combination of some columns of Λ πor viceversa; dim( Vπ) + dim( V˜π)≥n+kwith k= 2, . . . , n as dim( Vπ∩V˜π) =k >1.While using ⃗1n-partitions and subspaces simplifies the search for complementary ⃗1n-partitions to a nullspace calculation, the algorithm faces significant challenges in terms of computational time (see Table 1, algorithm cspNS). Generalized cumulant. The generalized cumulant of X= (X1, . . . , X n) of order nand sizem,corresponding to the ⃗1n-partition Λ π= (λ1, . . . ,λm) is KΛπ(X) =Kλ⊺ 1;...;λ⊺ m(X) =K(Xλ1, . . . ,Xλm). (16) Notice that KΛπ(X) =K(π) in (1). Moreover, K⃗1⊺ n(X) corresponds to the joint moment in (2) and Ke⊺ 1;...,e⊺ n(X) corresponds to the joint cumulant in (3), with ( e1, . . . ,en) the standard basis of Rn.If Λ π= (λ1, . . . ,λm)⊢⃗1nthe expression of the generalized cumulant in terms of joint moments is KΛπ(X) =X Λ⋆∈CπκX(Λ⋆) (17) 9The nullspace of a matrix Ais the set of solutions to the equation Ax=0. 11 where Cπ={Λ⋆= (λ⋆ 1,λ⋆ 2, . . .)⊢⃗1n|Vπ⋆∩Vπ=V1n}with Vπ⋆, Vπcolumn spans of Λ⋆,Λπ respectively and κX(Λ⋆) =κλ⋆ 1(X)κλ⋆ 2(X)···.and Formula (17) parallels formula (4). The proof is given in the Appendix. 4.2 Labelling rules and transformations of ⃗1n-partitions In the following, using complementary ⃗1n-partitions, we provide a formula to express generalized multivariate cumulants using multivariate cumulants. To this aim, different subscripts are assigned to repeated r.v.’s so that eq. (17) can be applied. We then show how to recover the initial information on repeated r.v.’s in order to convert the joint moments in (17) into multivariate moments. This method is suitable for implementation in non-symbolic platforms like R. Example 4.6. 
For the computation of cov( X1, X2 2),consider cov( Y1, Y2Y3) =K100;011 = κ111+κ101κ010+κ110κ001.To obtain cov( X1, X2 2) consider the cumulant expansion of cov( Y1, Y2Y3) where the last two binary indices are summed. This yields cov( X1, X2 2) =κ12+ 2κ11κ01. To implement this strategy, we define appropriate mappings between multi-index parti- tions and ⃗1n-partitions. In the following let Lidenote the set of all multi-index partitions Λ = ( λr1 1, . . . ,λrmm)⊢i∈Nn 0and let M|i|denote the set of all ⃗1|i|-partitions Λ πwith |i|=r1|λ1|+. . .+rm|λm|. Definition 4.5 (Canonical transformation of a multi-index partition) .The canonical transformation φ:Li7→ M |i|maps a multi-index partition Λ ∈ Lito a⃗1|i|-partition Λπ∈ M |i|where for i= 1, . . . , r 1 (si)j=1j= (i−1)|λ1|+ 1, . . . , i|λ1| 0 otherwise and for q= 2, . . . , m, t = 1, . . . , r qandmq,t=r1|λ1|+···+rq−1|λq−1|+ (t−1)|λq| (sr1+···+rq−1+t)j=1j=mq,t+ 1, . . . , m q,t+|λq| 0 otherwise . Example 4.7. Consider M={1(1),2(2),3(2)}and the subdivision S= 1|2|2|33 corre- sponding to the multi-index partition Λ = ( λ1,λ2 2,λ3)⊢(1,2,2) with λ1= (1,0,0)⊺,λ2= (0,1,0)⊺,λ3= (0 ,0,2)⊺.The ⃗15-partition Λ 1|2|3|45= 10000 |01000|00100|00011 is the canonical transformation of Λ through φ. To transform a
⃗1|i|-partition Λ π∈ M |i|into a multi-index partition Λ ∈ Li,a suitable grouping rule is required for the rows of Λ π. Definition 4.6 (Labeling rule) .Given a multiset M={1(i1), . . . , n(in)}the labeling rule induced by Mis defined as σi: [p]7→[n] with p=|i| ≥nsuch that σ−1 i(k) = {tk−1+ 1, . . . , t k}fork∈[n] with tk=Pk j=1ijandt0= 0. Example 4.8. With reference to the multiset M={1(1),2(2),3(2)}in Example 4.7 the labeling rule σ(1,2,2):{1,2,3,4,5} 7→ { 1,2,3}is defined by σ(1,2,2)(1) = 1 , σ(1,2,2)(2) = σ(1,2,2)(3) = 2 , σ(1,2,2)(4) = σ(1,2,2)(5) = 3 . 12 A multi-index partition Λ ∈ Lican be constructed from a ⃗1|i|-partition Λ π∈ M |i|by applying a labeling rule σi,as follows. Definition 4.7 (Reverse transformation under a labeling map) .The reverse transfor- mation Φ σi(Λπ) :M|i|7→ L iunder the labeling rule σimaps a ⃗1|i|-partition Λ π= (s1, . . . ,sm)∈ M |i|into a multi-index partition Λ = ( λ1, . . . ,λm)∈ Liwhere for each q= 1, . . . , m andk= 1, . . . , n the entries of λqare given by ( λq)k=P t∈σ−1 i(k)(sq)t. Example 4.9. Let Λ 13|24|5= (s1,s2,s3)∈ M 5be a⃗15-partition, where s1= (1,0,1,0,0)⊺, s2= (0,1,0,1,0)⊺,s3= (0,0,0,0,1)⊺. Apply the reverse transformation Φ σiwithi= (1,2,2) using the labeling rule σ(1,2,2)of Example 4.8. The result is Λ = Φ σ(1,2,2)(Λ13|24|5) = (λ1,λ2,λ3)∈ L (1,2,2)withλ1= (1,1,0)⊺,λ2= (0,1,1)⊺,λ3= (0,0,1)⊺.Indeed, each λq is computed by summing the components of sqindexed by σ−1 i(k), that is the second and the third row of Λ 13|24|5corresponding to σ−1 i(2) = {2,3}as well as the fourth and the fifth row corresponding to σ−1 i(3) = {4,5}.Distinct ⃗15-partitions can be mapped to the same multi-index partition Λ ∈ L (1,2,2)via the reverse transformation Φ σ(1,2,2). Ex- amples include: Λ 12|34|5= 11000 |00110|00001 ,Λ12|35|4= 11000 |00101|00010 ,and Λ 13|25|4= 10100|01001|00010 . Using the labeling rule σi,a partition π=B1|. . .|Bm∈Π|i|,minduces a subdivision S=M1|M2|. . .|MmofM={1(i1), . . . 
, n(in)}as follows: for each j∈[p], where p=|i|, and each q∈[m], the label σi(j)∈¯Mqifj∈Bq. By Definition 4.7, since |σ−1 i(k)|=ik, the integerP t∈σ−1 i(k)(sq)tcounts the number of occurrences of the element k∈[n] within Mqcorresponding to Bq. Lemma 4.1. If Λ = ( λr1 1, . . . ,λrmm)∈ Lithen|Φ−1 σi(Λ)|=dΛ,where Φ−1 σi(Λ) = {Λπ∈ M|i||Φσi(Λπ) = Λ}anddΛis given in (12). Proof. Let Λ = ( λ1, . . . ,λl(Λ))⊢iwithλ1≥. . .≥λl(λ)and denote with Sthe subdivision corresponding to Λ .Note that the ⃗1|i|-partitions in Φ−1 i(Λ) encode all the partitions π such that if the integers in the blocks of πare replaced by their images under σi,then the subdivision Sis recovered. Thus ak=ik! Ql(Λ) q=1[(λq)k]!k= 1, . . . , m (18) is the number of ways to split the ikelements of σ−1 i(k) among the blocks B1, . . . , B l(Λ)of π, knowing that ( λq)kelements in Bqare mapped in k.Using (18) and grouping together the equal columns of Λ ,the productQn k=1akreads nY k=1ak=i! (λ1!)r1···(λm!)rm. (19) Recall that the blocks B1, . . . , B l(Λ)correspond to the multisets M1, .
. . , M l(Λ)ofSwhen thep=|i|integers in B1, . . . , B l(Λ)are replaced by their images under σi.The result follows by observing that when considering the number of ways to split all the elements in{1(i1), . . . , n(in)}among the multisets of S, the product in (19) is overcounted by the permutations of equal multisets, whose number is r1!···rm!. Indeed, among the multisets ofS, there may be copies, and once a multiset is filled, its copies are automatically determined. 13 The following theorem gives generalized multivariate cumulant in terms of multivariate cumulants. Theorem 4.2. Let Λ = ( λr1 1, . . . ,λrmm)⊢i∈Nn 0, and let Λ π=φ(Λ)∈ M |i|be the canonical transformation of Λ. The generalized multivariate cumulant associated with Λ satisfies KΛ(X) =Kr1|...|rm λ⊺ 1;...;λ⊺ m(X) =X eΛ=(˜λc1 1,˜λc2 2,...)∈CΛa˜Λ[κ˜λ1(X)]c1[κ˜λ2(X)]c2··· (20) where CΛ={˜Λ⊢i|Φσi(Λ˜π) = ˜Λ for Λ ˜π∈ C π},Cπis the set of all complementary 1|i|-partitions of Λ π, Φσiis the reverse transformation under σi, and a˜Λ= Φ−1 σi(˜Λ)∩ Cπ . Proof. Setp=|i|and define Y= (Y1, Y2, . . . , Y p) by Yj=Xσi(j),forj= 1,2, . . . , p (21) where σiis the labelling rule in Definition 4.7. Suppose Λ π=φ(Λ) = ( s1, . . . ,sl(Λ))∈ M p. Then KΛ(X) =Kφ(Λ)(Y) and from (17) Ks⊺ 1;...;s⊺ l(Λ)(Y) =X Λ⋆∈CπκY(Λ⋆) =X Λ⋆∈Cπκλ⋆ 1(Y)···κλ⋆ l(Λ⋆)(Y) (22) where Cπis the set of all complementary 1p-partitions of Λ π=φ(Λ),that is Cπ={Λ⋆= (λ⋆ 1, . . . ,λ⋆ l(Λ⋆))⊢1p|Vφ(Λ)∩V⋆=V1p}with V⋆andVφ(Λ)the column spans of Λ⋆and Λπ=φ(Λ) respectively. Now, consider for example κλ⋆ 1(Y) in (22). Observe that κλ⋆ 1(Y) =K(Xσi(1)|{z} λ⋆ 11, Xσi(2)|{z} λ⋆ 12, . . . , X σi(p)|{z} λ⋆ 1p) =κj1,...,jn(X) where each index jt=P j∈σ−1 i(t)(λ⋆ 1)jis the number of subscripts jof the entries of Y that are mapped in tunder σifor all t= 1, . . . , n. Since|σ−1 i(t)|=itthen jt≤itfor all t= 1, . . . , n. 
Thus we have κY(Λ⋆) =κλ⋆ 1(Y)···κλ⋆ l(Λ⋆)(Y) =κj1,...,jn(X)···κt1,...,tn(X)| {z } l(Λ⋆)=κX(˜Λ) (23) where ˜Λ = Φ σi(Λ⋆) is the image of Λ⋆under the reverse transformation Φ i, with columns (j1, . . . , j n), . . . , (t1, . . . , t n) and l(˜Λ) = l(Λ⋆). Note that (23) holds for every ⃗1p-partition Λ⋆∈Φ−1 i(˜Λ) that is complementary to Λ π.Thus the number of such partitions is a˜Λ= Φ−1 σi(˜Λ)∩ Cπ .Therefore, the result follows by grouping the terms on the right-hand side of (22) according to ˜Λ∈CΛ, and multiplying each κX(˜Λ) by its corresponding coefficient a˜Λ. 14 5 Applications Computational results. To demonstrate that the proposed algorithm based on two- block partitions is the most efficient, we implemented all the described methods for gener- ating complementary set partitions within a single software environment. Since Stafford’s algorithm requires symbolic computation, we selected Maple 2024 [14] as the common platform for all implementations. The computational times10are reported in Table 1. In particular, in Table 1, the first column lists the integer partition corresponding to the cardinalities of the blocks of π, which is the only required input due to
the swapping property. The second column presents the results of the new method proposed in Section 3.1. The third column shows the performance of Stafford's algorithm. The fourth and fifth columns report the results of the algorithms based on connected graphs, using the IsConnected function and the Laplacian matrix, respectively. The final column corresponds to the algorithm that computes the nullspace of the intersection between column spans of ⃗1_n-partitions.

  integer partition       csp     cspAS   cspGR    cspLA     cspNS
  (1,1,2,2) ⊢ 6           0.0     0.0     0.016    0.094     0.187
  (2,2,2) ⊢ 6             0.0     0.016   0.016    0.172     0.296
  (2,2,3) ⊢ 7             0.016   0.031   0.282    1.234     1.516
  (3,4) ⊢ 7               0.0     0.031   0.234    1.313     1.437
  (1,1,2,2,2) ⊢ 8         0.062   0.313   0.750    3.187     3.828
  (1,3,4) ⊢ 8             0.016   0.328   0.906    5.516     6.078
  (1,2,2,4) ⊢ 9           0.313   1.375   5.062    31.688    33.891
  (2,3,4) ⊢ 9             0.110   1.172   6.234    40.125    43.719
  (2,2,2,2,2) ⊢ 10        1.218   9.110   39.140   241.782   659.093
  (2,2,3,3) ⊢ 10          0.718   8.609   41.907   278.593   876.39

Table 1: Computation times of the algorithms (implemented in Maple 2024) that generate complementary set partitions for a given partition π.

According to the results in Table 1, the algorithms based on nullspaces and Laplacian matrices are quite similar in performance for small values of n. The algorithm using the Maple IsConnected function is faster than the previous two but slower than Stafford's algorithm. On the other hand, the algorithm based on two-block partitions not only demonstrates significantly lower computational times compared to all the other algorithms, but can also be easily implemented on non-symbolic platforms. In fact, it only requires a routine to generate multi-index partitions. In R, this routine is already available in the kStatistics package (footnote 11) [7]. The implementation strategy for R is described below.

The csp algorithm in R. In R, ⃗1_n-partitions offer an efficient way to handle set partitions. As a result, a version of the csp algorithm outlined in Section 3.1 has been

10. All routines are available upon request.
Benchmarks were run on a laptop with an 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz. 11The routine is currently available upon request from the authors but will be included in the kStatistics package shortly. 15 developed, using complementary ⃗1npartitions. Suppose we need to list the elements of Cπin (17), the set of complementary ⃗1n-partitions of Λ π= (λ1, . . . ,λm). The steps of the newly proposed method can be summarized as follows: a)suppose t1, . . . , t j|tj+1, . . . , t ma partition in two blocks of the subscripts of ( λ1, . . . ,λm), and set v1=λt1+···+λtjandv2=λtj+1+···+λtm; b)for all the ⃗1n-partitions Λ(1)⊢v1and all the ⃗1n-partitions Λ(2)⊢v2,construct the block matrix Λ⋆= (Λ(1),Λ(2)); c)repeat steps a)andb)for all partitions in two blocks of the subscripts of ( λ1, . . . ,λm), and denote with T(n) π,mthe set of ⃗1n-partitions Λ⋆= (Λ(1),Λ(2)) constructed as above; d)setCπ=Mn− T(n) π,m,where Mndenotes the set of all ⃗1n-partitions. Generalized multivariate cumulants using the cspalgorithm. From (4) and Theorem 3.2 generalized cumulants can be recovered as K(π) =X ˜π∈ΠnY B∈˜πκ(B)−X ˜π∈T(n) π,mY B∈˜πκ(B) (24) where T(n) π,mis given in (6). A similar strategy is applied to recover generalized multi- variate cumulants (20) as explained below. From (22), the computation of generalized multivariate cumulants KΛ(X) can be
performed using (24), that is Kφ(Λ)(Y) =X Λ⋆∈M|i|κY(Λ⋆)−X Λ⋆∈T(|i|) π,mκY(Λ⋆) (25) where Yis given in (21), T(|i|) π,mis the set of all not complementary ⃗1|i|-partitions of Λπ=φ(Λ) and m=l(Λπ).By repeating the same arguments of the proof of Theorem 4.2 and using Lemma 4.1, the first sum in (25) is X Λ⋆∈M|i|κY(Λ⋆) =X eΛ⊢id˜ΛκX(˜Λ). (26) Notice that the computation of the rhs of (26) is generally faster than the one at the lhs. For the latter sum in (25), with similar arguments we have X Λ⋆∈T(|i|) π,mκY(Λ⋆) =X eΛ∈Φi[T(|i|) π,m]b˜ΛκX(˜Λ) (27) where b˜Λ=|Φ−1 i(˜Λ)∩ NCπ|andNCπdenotes the set of all not complementary 1|i|- partitions of Λ π=φ(Λ).Suppose Λ π= (λ1, . . . ,λm).To compute the rhs of (27) effi- ciently, we use (8), which gives X eΛ∈Φi[T(|i|) π,m]b˜ΛκX(˜Λ) =lX k=1(−1)k+1X 1≤i1≤···≤ ik≤|i|X Λ⋆∈∩k j=1TijκX(Φi(Λ⋆)), where Tijis the set analogous to the one given in (7) but for ⃗1|i|-partitions, that is the set of all Λ⋆∈ M |i|such that Λ⋆= (Λ(1),Λ(2)) with Λ(j)⊢vj=P i∈Bjλiforj= 1,2 and for a fixed C1|C2∈Πm,2. 16 Estimation of generalized multivariate cumulants. An unbiased estimator ˆKΛ(X) of the generalized multivariate cumulant KΛ(X) is obtained by replacing the products of multivariate cumulants appearing on the rhs of (20) with their corresponding unbiased estimators. Specifically for each ˜Λ = (λc1 1,λc2 2, . . .)∈CΛthe products appearing on the rhs of (20) are replaced by k˜Λ(X) defined as E[k˜Λ(X)] =E k˜λ1;. . . ,˜λ1|{z} c1;˜λ2;. . . ,˜λ2|{z} c2;...(X) = [κ˜λ1(X)]c1[κ˜λ2(X)]c2···. (28) These statistics are the so-called multivariate polykays [8] typically expressed in terms of power sum symmetric functions given by St1,...,tn=St1,...,tn(X) =PN l=1Xt1 1,lXt2 2,l···Xtn n,l where the sum runs over Nistances of X= (X1, X2, . . . , X n). Example 5.1. Suppose n= 3.The unbiased estimator of κ110(X)κ001(X) is the polykay k110;001 (X)=−S0,0,1S1,0,0S0,1,0+ (N−1)S0,0,1S1,1,0+S0,1,0S1,0,1+S0,1,1S1,0,0−NS1,1,1 N(N−1) (N−2). 
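As a sanity check of the polykay in Example 5.1, unbiasedness can be verified exactly: averaging k110;001 over all equally likely size-N samples drawn from a small finite population must return κ110 κ001 = cov(X1, X2) E[X3] with no error. The following Python sketch is illustrative only — the three-point population and all names are our own choices, not from the paper; it uses exact rational arithmetic so that the comparison needs no floating-point tolerance:

```python
from fractions import Fraction
from itertools import product

# Hypothetical 3-point population for (X1, X2, X3), each point with probability 1/3.
support = [(0, 0, 1), (1, 1, 1), (1, 0, 2)]

def S(sample, t):
    """Power sum S_{t1,t2,t3} = sum over the sample of X1^t1 * X2^t2 * X3^t3."""
    return sum(Fraction(x1**t[0] * x2**t[1] * x3**t[2]) for x1, x2, x3 in sample)

def polykay_110_001(sample):
    """The polykay k110;001 of Example 5.1, an unbiased estimator of
    kappa_110 * kappa_001; it requires a sample of size N >= 3."""
    N = len(sample)
    num = (-S(sample, (0, 0, 1)) * S(sample, (1, 0, 0)) * S(sample, (0, 1, 0))
           + (N - 1) * S(sample, (0, 0, 1)) * S(sample, (1, 1, 0))
           + S(sample, (0, 1, 0)) * S(sample, (1, 0, 1))
           + S(sample, (0, 1, 1)) * S(sample, (1, 0, 0))
           - N * S(sample, (1, 1, 1)))
    return num / (N * (N - 1) * (N - 2))

# Exact unbiasedness check: average over all 3^3 equally likely i.i.d. samples of size N = 3.
E_k = sum(polykay_110_001(s) for s in product(support, repeat=3)) / Fraction(27)

# Target: kappa_110 * kappa_001 = cov(X1, X2) * E[X3], computed from the population.
EX1 = Fraction(sum(p[0] for p in support), 3)
EX2 = Fraction(sum(p[1] for p in support), 3)
EX12 = Fraction(sum(p[0] * p[1] for p in support), 3)
EX3 = Fraction(sum(p[2] for p in support), 3)
target = (EX12 - EX1 * EX2) * EX3
print(E_k == target, target)  # expect exact equality
```

With N = 3 the denominator N(N−1)(N−2) is nonzero, and the enumeration over all 27 ordered samples reproduces the expectation E[k110;001] exactly rather than by Monte Carlo.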
Implementing this strategy to recover estimators of KX(Λ) is computationally inten- sive. Instead, a more efficient procedure12for obtaining ˆKΛ(X) consists in applying the canonical transformation φof Λ.Indeed from (22), we have ˆKΛ(X) =ˆKφ(Λ)(Y) (29) withYgiven in (21) and ˆKφ(Λ)(Y) unbiased estimator of the generalized cumulant Kφ(Λ)(Y).In turn, the expression of ˆKφ(Λ)(Y) involves power sum symmetric functions of the form St1,...,tp(Y) =NX l=1Yt1 1,lYt2 2,l···Ytn p,l (30) with ( t1, . . . , t p) binary vectors. The expression of ˆKΛ(X) in terms of power sums in Xis then recovered by (21), that is plugging Xσi(j)in place of Yjin (30). Moreover the binary vector ( t1, . . . , t p) is transformed into the multi-index ( j1, . . . , j n) where js=|σ−1 i(s)|, s= 1, . . . , n. Example 5.2. In Example 4.6 we have cov( X1, X2 2) = κ1,2+ 2κ1,1κ0,1.Its unbiased estimator is ˆK1|1 10;02(X) =k12(X) + 2k11;01(X) =NS12(X)−S10(X)S02(X) N(N−1) which can be recovered from ˆK100;011 (Y) =NS111(Y)−S100(Y)S011(Y) N(N−1)(31) with S111(Y) =P iY1,iY2,iY3,ireplaced by S12(X) =P iX1,iX2 2,iandS100(Y) =P iY1,i, S011(Y) =P iY2,iY3,ireplaced by S10(X) =P iX1,i, S02(X) =P iX2 2,irespectively. 12Routines for computing unbiased estimators of generalized multivariate cumulants are also available upon request. 17 From the previous discussion, the expression of
|
https://arxiv.org/abs/2505.12706v1
|
ˆKΛ(X) is recovered from an unbi- ased estimator of the generalized cumulant Kφ(Λ)(Y).In turn, an unbiased estimator ˆKΛπ(Y) of the generalized cumulant KΛπ(Y) can be obtained by replacing the product of joint cumulants on the rhs of (22) with the joint polykays kλ1;...;λm(Y),such that E[kλ1;...;λm(Y)] = κλ1(Y)···κλm(Y) which is the product of joint cumulants of Y,see Example 5.1. As before, this strategy is computationally intensive. A more efficient pro- cedure consists in using a change of variables. Indeed ˆKλ1;...;λm(Y) =ˆK1m(Z1, . . . , Z m) with Zj=Ysjforj= 1, . . . , m. The expression of ˆK1m(Z1, . . . , Z m) involves power sum symmetric functions of the form St1,...,tm(Z) =NX l=1Zt1 1,lZt2 2,l···Ztn m,l (32) with ( t1, . . . , t m) binary vectors. The expression of ˆKλ1;...;λm(Y) in terms of the power sums in Yis then recovered plugging Ysjin place of Zjin (32) and observing that (t1, . . . , t m) is transformed into ( j1, . . . , j p) =Pm j=1tjs⊺ j. Example 5.3. In Example 5.2, we have ˆK100;011(Y) =ˆK1,1(Z1, Z2) with Z1=Y(1,0,0)=Y1 andZ2=Y(0,1,1)=Y2Y3.Since ˆK1,1(Z1, Z2) =NS1,1(Z)−S1,0(Z)S0,1(Z) N(N−1)(33) with S1,1(Z) =PN l=1Z1,lZ2,l, S1,0(Z) =PN l=1Z1,landS0,1(Z) =PN l=1Z2,l,plugging Y1,i in place of Z1,iandY2,iY3,iin place of Z2,i,we recover S1,1(Z) =S1,1,1(Y), S1,0(Z) = S1,0,0(Y), S0,1(Z) =S0,1,1(Y).Replacing all these power sums in (33), (31) is obtained. 6 Conclusions and open problems This paper introduces a new combinatorial method for enumerating all complementary set partitions of a given partition, demonstrating that only two-block partitions are suf- ficient to identify all not complementary partitions. This approach outperforms existing procedures in terms of computational efficiency and can be implemented in non-symbolic programming environments such as R. In particular, classical methods that utilize con- nected graphs to generate complementary set partitions are discussed. 
We also consider Stafford’s algorithm, which automates the calculation of generalized cumulants and includes procedures for complementary set partitions. However, this method is limited by its reliance on symbolic algebra tools and by its dependence on non-open-source software such as Mathematica. Finally, we have analyzed one more method that uses the columns of ⃗1n-partitions (paralleling set partitions) as spanning bases of suitable subspaces; the complementary ⃗1n-partitions correspond to subspaces whose intersections are spanned by the unit vector. Comparisons of computational times were made using Maple 2024. An extension of the notion of generalized cumulants to the case of repeated random variables is proposed, based on multiset subdivisions. These indexes are intermediate between multivariate moments and multivariate cumulants. A closed-form formula is derived that expresses these generalized multivariate cumulants in terms of multivariate cumulants, together with a combinatorial interpretation of the coefficients involved. The key tool is the definition of suitable functions transforming multi-index partitions, corresponding to multiset subdivisions, into set partitions, corresponding to ⃗1n-partitions, and vice versa. These functions, together with suitable dummy variables, allow the development of very efficient procedures for estimating generalized multivariate cumulants through multivariate polykays. Exploring the broader implications of all these methods in multivariate analysis, such as their potential for new estimation procedures, hypothesis
testing, and models involving complex dependence structures, remains an open avenue for future investigation. For ex- ample, developing efficient algorithms for the computation of cumulants of polykays and their multivariate generalizations remains an ongoing challenge. At present the only rou- tine available is in Mathstatica [21] for the calculation of the cumulants of k-statistics13. Future research aims might include also the characterization of these families of general- ized cumulants in the setting of random matrices [11]. 7 Appendix: generalized cumulants and ⃗1n-partitions Denote with Mnthe set of all ⃗1n-partitions Λ π. Proposition 7.1. µX(Λπ) =P {Λ∈Mn|Vπ⊆V}κX(Λ) with Vπ, Vcolumn spans of Λ π,Λ respectively. Proof. Suppose Λ π= (λ1,λ2, . . .). From the second identity in (12), it follows that µX(Λπ) =µλ1(X)µλ2(X)···=X Λ1⊢λ1,Λ2⊢λ2, ...κX(Λ1)κX(Λ2)···. Define Λ = (Λ 1,Λ2, . . .). Observe that Λ is a ⃗1n-partition. In particular, each column λiof Λπcan be written as a non-zero linear combination of the columns of Λ with coefficients in {0,1}and therefore the column space Vπof Λ πis contained in the column space Vof Λ,i.e., Vπ⊆V.The result follows from the multiplicative property κ(Λ) = κ(Λ1)κ(Λ2)···. The expression of the generalized cumulant in terms of joint moments is given as follows. Proposition 7.2. If Λ π= (λ1, . . . ,λm)⊢⃗1nthen Kλ⊺ 1;...;λ⊺ m(X) =X {˜Λ∈Mn|˜V⊆Vπ}(−1)l(˜Λ)−1(l(˜Λ)−1)!µX(˜Λ) (34) where ˜V , V πare the column spans of ˜Λ,Λπrespectively and µX(˜Λ) is the product of joint moments of Xas defined in (12). Proof. In (16) set Yj=Xλjforj= 1,2, . . . , m. From the second identity in (13) we have Kλ⊺ 1;...;λ⊺ m(X) =κ⃗1m(Y1, . . . , Y m) =X Λ⊢⃗1m(−1)l(Λ)−1(l(Λ)−1)!µY(Λ) (35) 13Univariate polykays reduce to k-statistics when the product of joint cumulants is replaced by a single joint cumulant. 19 where the sum runs over all ⃗1m-partitions Λ of order m≤n.In each term µY(Λ) plug Xλjin place of Yjforj= 1,2, . . . , m. 
Assume without loss of generality that Λ = (λ⋆ 1,λ⋆ 2, . . . ,λ⋆ l) with l≤m. Then we can write µY(Λ) = µλ⋆ 1(Y1, . . . , Y m)µλ⋆ 2(Y1, . . . , Y m)··· µλ⋆ l(Y1, . . . , Y m) where each term takes the form µλ⋆ j(Y1, . . . , Y m) =E[Yλ⋆ 1j 1···Yλ⋆ mj m] = E[Xλ1λ⋆ 1j+···+λmλ⋆ mj].Define ˜λj=λ1λ⋆ 1j+···+λmλ⋆ mjforj= 1,2, . . . , l and notice that ˜Λ = ( ˜λ1, . . . , ˜λl)∈ M n.Then µY(Λ) = µX(˜Λ). The result follows by observing that, as Λ varies in the set of all ⃗1m-partitions, each ˜Λ is such that ˜V⊆Vπsince every column of ˜Λ is a non-zero linear combination of the columns of Λ πwith coefficients in {0,1}. Corollary 7.1. If Λ π= (λ1,λ2, . . .)⊢⃗1nthen X {˜Λ∈Mn|˜V⊆Vπ}(−1)l(˜Λ)−1(l(˜Λ)−1)! =0,if Λ π̸=⃗1⊺ n 1,if Λ π=⃗1⊺ n(36) with ˜V , V πcolumn spans of ˜Λ,Λπrespectively. Proof. If Λ π=⃗1⊺ nthe sum on the lhs of (36) reduces to ( −1)l(⃗1⊺ n)−1(l(⃗1⊺ n)−1)! = 1 .For Λπ̸=⃗1⊺ nrecall that, using formal power series [24], the multi-indexed sequence {κi}in (11) is the
formal cumulant sequence associated with a multi-indexed sequence {µi},with µ0= 1,and not necessarily representing the moment sequence of any multivariate random vector X.In particular, if µi= 1 for all i∈Nn 0then from (11) it follows κi= 1 if |i|= 1 otherwise 0 .Therefore if Λ π̸=⃗1⊺ n,the rhs of (34) vanishes corresponding to the value of the lhs of (36) when µX(Λ) = 1 . Corollary 7.2. If Λ π= (λ1, . . . ,λm)⊢⃗1nthen Kλ⊺ 1;...;λ⊺ m(X) =X Λ⋆∈CπκX(Λ⋆) (37) where Cπ={Λ⋆⊢⃗1n|Vπ⋆∩Vπ=V1n}with Vπ⋆, Vπcolumn spans of Λ⋆,Λπrespectively. Proof. From Proposition 7.1 and 7.2, we have Kλ⊺ 1;...;λ⊺ m(X) =X {˜Λ∈Mn|˜V⊆Vπ}(−1)l(˜Λ)−1(l(˜Λ)−1)!X {Λ⋆∈Mn|˜V⊆V⋆}κX(Λ⋆) (38) with V⋆,˜Vcolumn spans of Λ⋆,˜Λ respectively. Grouping the outer sum in (38) according to the length of ˜Λ,the rhs of (38) becomes X Λ⋆⊢⃗1⊺ nκX(Λ⋆)−X ˜Λ∈I2X Λ⋆∈S2,˜VκX(Λ⋆) +···+ (−1)l−1(l−1)!X ˜Λ∈IlX Λ⋆∈Sl,˜VκX(Λ⋆) (39) where Ii={˜Λ∈ M n|l(˜Λ) = iand˜V⊆Vπ},Si,˜V={Λ⋆∈ M n|˜V⊆V⋆,}with ˜Vcolumn span of ˜Λ for i= 2, . . . , l with l≤m.Collecting together common κX(Λ⋆), (39) returns Kλ⊺ 1;...;λ⊺ m(X) =X Λ⋆⊢⃗1⊺ nκX(Λ⋆)X {˜Λ∈Mn|˜V⊆Vπ∩V⋆}(−1)l(˜Λ)−1(l(˜Λ)−1)! (40) with ˜V , V πandV⋆column spans of ˜Λ,Λπand Λ⋆respectively. From Corollary 7.1 the inner sum in (40) is always zero, except in the case V⋆∩Vπ=V1n.Indeed, under this condition, we have ˜V=V1nand the inner sum in (40) to the single term ˜Λ =⃗1⊺ nyields the value 1 by Corollary 7.1. 20 Acknowledgements. Funding: the research of E.D. was partially supported by the MIUR-PRIN 2022 project “Non-Markovian dynamics and non-local equations”, 202277N5H9. References [1] D. F. Andrews and J. E. Stafford. Symbolic computation for statistical inference , volume 21. Oxford University Press, USA, 2000. [2] O. E. Barndorff-Nielsen and D. R. Cox. Asymptotic techniques for use in statistics . Chapman \& Hall, 1989. [3] D. R. Brillinger. Time series: data analysis and theory . SIAM, 2001. [4] F. R. Chung. Spectral graph theory , volume 92. American Mathematical Soc., 1997. [5] E. 
Di Nardo. Symbolic calculus in mathematical statistics: a review. Séminaire Lotharingien de Combinatoire, 67(B67a):72 pp., 2015.
[6] E. Di Nardo and G. Guarino. kStatistics: unbiased estimates of joint cumulant products from the multivariate Faà di Bruno’s formula. The R Journal, 14:208–228, 2022.
[7] E. Di Nardo and G. Guarino. kStatistics: Unbiased Estimators for Cumulant Products and Faà di Bruno’s Formula, 2022. R package version 2.1.1.
[8] E. Di Nardo, G. Guarino, and D. Senato. A unifying framework for k-statistics, polykays and their multivariate generalizations. Bernoulli, 14(2):440–468, 2008.
[9] E. Di Nardo, G. Guarino, and D. Senato. A new method for fast computing unbiased estimators of cumulants. Statistics and Computing, 19:155–165, 2009.
[10] E. Di Nardo, G. Guarino, and D. Senato. A new algorithm for computing the multivariate Faà di Bruno’s formula. Applied Mathematics and Computation, 217(13):6286–6295, 2011.
[11] E. Di Nardo, P. McCullagh, and D. Senato. Natural statistics for spectral samples. The Annals of Statistics, 41(2):982–1004, 2013.
[12] W. S. Kendall. Computer algebra and yoke geometry I: when is an expression a tensor? Statistics and Computing, 4, 1994.
[13] T. Kollo. Advanced multivariate statistics with matrices, volume 579 of Mathematics and
its applications. Springer, Dordrecht, 2005.
[14] Maplesoft, a division of Waterloo Maple Inc. Maple 2024. Waterloo, Ontario.
[15] P. McCullagh. Tensor notation and cumulants of polynomials. Biometrika, 71(3):461–476, 1984.
[16] P. McCullagh. Tensor Methods in Statistics. Monographs on Statistics and Applied Probability. Chapman and Hall/CRC, 2018.
[17] P. McCullagh and A. R. Wilks. Complementary set partitions. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences, 415(1849):347–362, 1988.
[18] G. Mesters and P. Zwiernik. Non-independent component analysis. The Annals of Statistics, 52(6):2506–2528, 2024.
[19] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2022.
[20] Wolfram Research. Mathematica 3.0. Champaign, Illinois, 3.0 edition, 1996.
[21] C. Rose and M. D. Smith. Mathematical Statistics with Mathematica. Springer Texts in Statistics. Springer-Verlag, 2002.
[22] J. E. Stafford, D. F. Andrews, and Y. Wang. Symbolic computation: a unified approach to studying likelihood. Statistics and Computing, 4:235–245, 1994.
[23] J. E. Stafford. Automating the partition of indexes. Journal of Computational and Graphical Statistics, 3(3):249–259, 1994.
[24] R. P. Stanley. Enumerative Combinatorics, Volume 1, second edition. 2011.
arXiv:2505.12712v1 [math.ST] 19 May 2025STATISTICAL INFERENCE IN SEM FOR DIFFUSION PROCESSES WITH JUMPS BASED ON HIGH-FREQUENCY DATA SHOGO KUSANO1AND MASAYUKI UCHIDA2,3,4 Abstract. We study structural equation modeling (SEM) for diffusion processes with jumps. Based on high-frequency data, we consider the parameter estimation and the goodness-of-fit test in the SEM. Using a threshold method, we propose the quasi-likelihood of the SEM and prove that the quasi-maximum likelihood estimator has consistency and asymptotic normality. To examine whether a specified parametric model is correct or not, we also construct the quasi-likelihood ratio test statistics and investigate the asymptotic properties. Furthermore, numerical simulations are conducted. 1.Introduction We consider structural equation modeling (SEM) for diffusion processes with jumps. Let (Ω ,F,(Ft)t≥0,P) be a stochastic basis. The p1-dimensional observable process {X1,t}t≥0is defined by the following factor model: X1,t=Λ1ξt+δt, (1.1) where {ξt}t≥0and{δt}t≥0arek1andp1-dimensional c` adl` ag ( Ft)-adapted latent processes on the stochastic basis, k1≤p1, and Λ1∈Rp1×k1is a constant loading matrix. {ξt}t≥0and{δt}t≥0satisfy the following stochastic differential equations: dξt=a1(ξt−)dt+S1dW1,t+Z E1c1(ξt−, z1)p1(dt, dz 1), ξ 0=x1,0, (1.2) dδt=a2(δt−)dt+S2dW2,t+Z E2c2(δt−, z2)p2(dt, dz 2), δ 0=x2,0, (1.3) where E1=Rk1\{0},E2=Rp1\{0}, and x1,0∈Rk1andx2,0∈Rp1are non-random vectors. {W1,t}t≥0and {W2,t}t≥0arer1andr2-dimensional standard ( Ft)-Wiener processes, respectively. p1(dt, dz 1) and p2(dt, dz 2) are Poisson random measures on R+×E1andR+×E2with compensator q1(dt, dz 1) =E p1(dt, dz 1) and q2(dt, dz 2) =E p2(dt, dz 2) , respectively. The functions a1:Rk1−→Rk1,a2:Rp1−→Rp1,c1:Rk1×E1−→ Rk1,c2:Rp1×E2−→Rp1are unknown Borel functions, and S1∈Rk1×r1andS2∈Rp1×r2are constant matrices. Set the volatility matrices of {ξt}t≥0and{δt}t≥0asΣξξ=S1S⊤ 1andΣδδ=S2S⊤ 2, where ⊤stands for the transpose. 
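A path of the latent factor (1.2) can be generated on the grid t_i^n = i h_n with an Euler scheme: per step, add the drift move, a Gaussian diffusive increment, and a compound-Poisson jump contribution. The Python sketch below is illustrative only; the function name is ours, and for concreteness the scalar coefficients match the true model of Section 4.1 (drift −2(ξ − 1), diffusion 1.2, jump intensity 3, N(0, 5) jump sizes).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_levy_ou(n=1000, T=1.0, a=2.0, mu=1.0, s=1.2,
                     lam=3.0, jump_sd=5.0 ** 0.5, x0=1.0):
    # Euler scheme for d xi_t = -a (xi_{t-} - mu) dt + s dW_t + jumps,
    # where jumps come from a compound Poisson process with intensity lam
    # and N(0, jump_sd^2) jump sizes (illustrative coefficients, Sec. 4.1).
    h = T / n
    xi = np.empty(n + 1)
    xi[0] = x0
    for i in range(n):
        k = rng.poisson(lam * h)                       # jumps in (t_i, t_{i+1}]
        jump = rng.normal(0.0, jump_sd, size=k).sum()  # compound Poisson increment
        xi[i + 1] = (xi[i] - a * (xi[i] - mu) * h
                     + s * np.sqrt(h) * rng.normal() + jump)
    return xi

path = simulate_levy_ou()
```

Stacking such coordinates gives simulated observations X_t = Λ_1 ξ_t + δ_t on the fixed horizon T = n h_n, the sampling regime used throughout the paper.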
The p2-dimensional observable process {X2,t}t≥0satisfies the factor model as follows: X2,t=Λ2ηt+εt, (1.4) 1Faculty of Advanced Science and Technology, Kumamoto University, Kumamoto, Japan 2Graduate School of Engineering Science, The University of Osaka, Toyonaka, Japan 3Center for Mathematical Modeling and Data Science (MMDS), The University of Osaka, Toyonaka, Japan 4CREST, Japan Science and Technology Agency, Tokyo, Japan Key words and phrases. Structural equation modeling; Diffusion processes with jumps; Quasi-maximum likelihood estima- tion; Goodness-of-fit test; Asymptotic theory; High-frequency data. 1 2 S. KUSANO AND M. UCHIDA where {ηt}t≥0is ak2-dimensional latent process, {εt}t≥0is ap2-dimensional c` adl` ag ( Ft)-adapted latent process on the stochastic basis, k2≤p2, and Λ2∈Rp2×k2is a constant loading matrix. In addition, we express the relationship between {ξt}t≥0and{ηt}t≥0as follows: ηt=B0ηt+Γξt+ζt, (1.5) where {ζt}t≥0is ak2-dimensional c` adl` ag ( Ft)-adapted latent process on the stochastic basis, B0∈Rk2×k2is a constant loading matrix whose diagonal elements are zero, and Γ∈Rk2×k1is a constant loading matrix. We suppose that Λ1andΛ2are full column rank, and Ψ=Ik2−B0is non-singular, where Ik2is the identity matrix of size k2.{εt}t≥0and{ζt}t≥0satisfy the following stochastic differential equations: dεt=a3(εt−)dt+S3dW3,t+Z E3c3(εt−, z3)p3(dt, dz 3), ε 0=x3,0, (1.6) dζt=a4(ζt−)dt+S4dW4,t+Z E4c4(ζt−, z4)p4(dt, dz 4), ζ 0=x4,0, (1.7) where E3=Rp2\{0},E4=Rk2\{0}, and x3,0∈Rp2andx4,0∈Rk2are non-random vectors. {W3,t}t≥0and {W4,t}t≥0arer3andr4-dimensional standard ( Ft)-Wiener processes, respectively. p3(dt, dz 3) and p4(dt, dz 4) are Poisson random measures on R+×E3andR+×E4with compensator q3(dt, dz 3) =E p3(dt, dz 3) and q4(dt, dz 4) =E p4(dt, dz 4) , respectively. The functions a3:Rp2−→Rp2,a4:Rk2−→Rk2,c3:Rp2×E3−→ Rp2,c4:Rk2×E4−→Rk2are unknown Borel functions, and S3∈Rp2×r3andS4∈Rk2×r4are constant matrices. 
Define the volatility matrices of {εt}t≥0and{ζt}t≥0asΣεε=S3S⊤ 3andΣζζ=S4S⊤ 4, respectively. Suppose that for any t≥0,Ft,σ(Wi,u−Wi,t;u≥t) (i= 1,2,3,4), and σ pi(Ai∩((t,∞)×Ei));Ai⊂R+×Eiis a Borel set (i= 1,2,3,4) are independent.
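Since Ψ = I_{k2} − B0 is assumed non-singular, the structural equation (1.5) has the reduced form η_t = Ψ^{−1}(Γξ_t + ζ_t). A minimal numerical sketch (the values of B0, Γ, ξ_t and ζ_t are illustrative, not taken from the paper):

```python
import numpy as np

# reduced form of (1.5): (I - B0) eta = Gamma xi + zeta, with Psi = I - B0
B0 = np.array([[0.0, 0.3],
               [0.5, 0.0]])        # zero diagonal, illustrative entries
Gamma = np.array([[0.7], [-0.8]])  # k2 x k1 loading matrix
Psi = np.eye(2) - B0
xi = np.array([1.0])               # illustrative latent states at a fixed t
zeta = np.array([0.2, -0.1])

eta = np.linalg.solve(Psi, Gamma @ xi + zeta)
# eta solves the structural equation eta = B0 eta + Gamma xi + zeta
assert np.allclose(eta, B0 @ eta + Gamma @ xi + zeta)
```

This reduced form is exactly what enters the Σ^{12} and Σ^{22} blocks of Σ below, where Ψ^{−1} appears explicitly.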
|
https://arxiv.org/abs/2505.12712v1
|
For i= 1,2,3,4, we assume that qi(dt, dz i) has a representation such that qi(dt, dz i) = fi(zi)dzidt, and fi(zi) =λi,0Fi(zi), where λi,0>0 is a unknown value, and Fi(zi) is a unknown probability density. Set p=p1+p2, and Xt= (X⊤ 1,t, X⊤ 2,t)⊤. (Xtn i)n i=0are discrete observations, where tn i=ihn. We suppose that T=nhnis fixed. Let Σ= Σ11Σ12 Σ12⊤Σ22! , where Σ11=Λ1ΣξξΛ⊤ 1+Σδδ, Σ12=Λ1ΣξξΓ⊤Ψ−1⊤Λ⊤ 2, Σ22=Λ2Ψ−1(ΓΣξξΓ⊤+Σζζ)Ψ−1⊤Λ⊤ 2+Σεε. We assume that Σis positive definite. For example, when ΣδδandΣεεare positive definite, Σis positive definite. PΣdenotes the law of the process {Xt}t≥0defined by (1.1)-(1.7). SEM is a statistical method that analyzes the relationships between unobservable variables and has been used in a wide range of fields, e.g., psychology, biology, economics, and so on. The unobservable variables are called latent variables. Intelligence and economic conditions are one of the most important examples of latent SEM FOR DIFFUSION PROCESSES WITH JUMPS 3 variables. There are a lot of studies on SEM. For instance, see Shapiro [19], [20] for statistical inference, Jacobucci [10] and Huang [8] for sparse inference, Huang [7] for model selection, and Czir´ aky [3] for SEM for time series data. In recent years, we can easily access high-frequency data thanks to the development of information technology. For that reason, a lot of researchers have studied statistical inference for diffusion processes based on discrete observations; see, e.g., Yoshida [24], Kessler [11], Uchida and Yoshida [23] and references therein. Statistical inference for jump-diffusion processes has been also studied intensively because we often observe discontinuous sample paths. As a result, jump-diffusion models have been used for modeling in various fields, including financial econometrics, physics and hydrology. Shimizu and Yoshida [21, 22] investigated statistical inference for ergodic diffusion processes with jumps based on discrete observations. 
Using a threshold method based on the increment, they construct a quasi-likelihood and show the quasi-maximum likelihood estimator has consistency and asymptotic normality. Notably, Mancini [16] independently developed a consistent estimator for the characteristics of jumps in the Poisson-diffusion model. Ogihara and Yoshida [17] proved the convergence of moments of the quasi-maximum likelihood estimator. For statistical inference of ergodic jump diffusion processes, see also Amorino and Gloter [1, 2]. For another filtering method, see, e.g., Inatsugu and Yoshida [9]. Recently, Kusano and Uchida [13] proposed SEM for diffusion processes. By using this method, we can examine the relationships between latent processes based on high-frequency data. For more details on the SEM, we also refer readers to Kusano and Uchida [14, 15]. On the other hand, it is not appropriate to use this method when observed data may contain jumps. That is because Kusano and Uchida [13] suppose that the latent processes are diffusion processes, and do not consider jumps. For example, we cannot use the result of Kusano and Uchida [13] for the data as shown in Figure 1. As mentioned above, jump-diffusion models are regarded as powerful tools to express a wide range of stochastic phenomena. Therefore, we propose SEM for diffusion processes with jumps. This paper is organized as follows. In Section 2, we introduce notation and assumptions. In Section
3, we consider the statistical inference of SEM for diffusion processes with jumps. First, we prepare some lemmas to justify the use of a threshold method. Using the threshold method, we define the estimator of Σ, and investigate the asymptotic properties. Next, the quasi-likelihood of the SEM is proposed, and it is shown that the quasi-maximum likelihood estimator has consistency and asymptotic normality. Furthermore, to examine whether a specified parametric model is correct or not, we study the quasi-likelihood ratio test of the SEM. In Section 4, we give examples and simulation results. Section 5 is devoted to the proofs of the results stated in Section 3. 2.Notation and Assumptions First, we introduce the notation as follows. For a vector v,v(i)stands for the i-th element of vand let |v|=√ trvv⊤. Diag vdenotes the diagonal matrix whose i-th diagonal element is v(i). For a matrix M,Mij is the ( i, j)-th component of Mand|M|=√ trMM⊤. For a symmetric matrix M, vecMand vech Mare the vectorization of M, and the half-vectorization of M, respectively. For a p-dimensional symmetric matrix M, Dpis the p2ׯpmatrix such that vec M=DpvechM, where ¯ p=p(p+ 1)/2. Set D+ p= D⊤ pDp−1D⊤ p. Note 4 S. KUSANO AND M. UCHIDA Figure 1. Sample path from a diffusion process with jumps. that vech M=D+ pvecM; see, e.g., Harville [6]. M+ pdenotes the set of all p×preal-valued positive definite matrices. For a positive sequence un,R: [0,∞)×Rd→Rstands for the short notation for functions which satisfy |R(un, x)| ≤unC(1 +|x|)Cfor some C >0. Let Ck ↑(Rd) be the space of all functions fsatisfying the following conditions: (i)fis continuously differentiable with respect to x∈Rdup to order k. (ii)fand all its derivatives are of polynomial growth in x∈Rd, where fis of polynomial growth in x∈Rdiff(x) =R(1, x). LetNd(µ,Σ) be a d-dimensional normal random variable with mean µ∈Rdand covariance matrix Σ ∈ M+ d. Zddenotes a d-dimensional standard normal random variable. 
χ2 rrepresents the random variable which has the chi-squared distribution with rdegrees of freedom, and χ2 r(α) stands for an upper α∈[0,1] point of χ2 r. The symbolsp−→andd−→express convergence in probability and convergence in distribution, respectively. For a stochastic process {St}t≥0, ∆n iS=Stn i−Stn i−1, ∆St=St−St−, and Ri(un, S) =R(un, Stn i). Set (d1, d2, d3, d4) = (k1, p1, p2, k2). Define Fn i=Ftn ifori= 0, . . . , n . Let Ri−1(un, ξ, δ, ε, ζ ) =Ri−1(un, ξ) +Ri−1(un, δ) +Ri−1(un, ε) +Ri−1(un, ζ) SEM FOR DIFFUSION PROCESSES WITH JUMPS 5 and ˜Ri−1(un, ξ, δ, ε, ζ ) = 1−Ri−1(un, ξ, δ, ε, ζ ). Next, we make the following assumptions. [A1] Fori= 1,2,3,4, there exist Li>0 and ζi(zi)>0 of at most polynomial growth in zisuch that |ai(xi)−ai(yi)| ≤Li|xi−yi|,|ci(xi, zi)−ci(yi, zi)| ≤ζi(zi)|xi−yi| and |ci(xi, zi)| ≤ζi(zi)(1 +|xi|) for any xi, yi∈Rdiandzi∈Ei. [A2] Fori= 1,2,3,4,ai∈C4 ↑(Rdi). [A3] Fori= 1,2,3,4, there exist ri>0 and Ki>0 such that fi(zi)1{|zi|≤ri}≤Ki|zi|1−di and Z |zi|lfi(zi)dzi<∞ for any l≥1. [A4] Fori= 1,2,3,4, it holds that inf xi|ci(xi, zi)| ≥ci,0|zi| for some ci,0>0 near the origin. Remark 1 [A1] and[A3] deduce sup t∈[0,T]E |ξt|l <∞,sup t∈[0,T]E |δt|l <∞,sup t∈[0,T]E |εt|l
<∞,sup t∈[0,T]E |ζt|l <∞ (2.1) for any l≥0 when Tis fixed; see, e.g., Platen and Bruti-Liberati [18]. In this paper, we only state the asymptotic results in the case where Tis fixed. On the other hand, we can also consider the case where hn−→0,T=nhn−→ ∞ andnh2 n−→0 as n−→ ∞ . In this case, we need to assume some regularity conditions including (2.1). 3.Main Theorem First, we consider the estimation of Σ. Set the true values of Λ1,Λ2,Γ,Ψ,Σξξ,Σδδ,ΣεεandΣζζas Λ1,0,Λ2,0,Γ0,Ψ0,Σξξ,0,Σδδ,0,Σεε,0andΣζζ,0, respectively. Note that the true value of Σis expressed as Σ0= Σ11 0Σ12 0 Σ12⊤ 0Σ22 0! , 6 S. KUSANO AND M. UCHIDA where Σ11 0=Λ1,0Σξξ,0Λ⊤ 1,0+Σδδ,0, Σ12 0=Λ1,0Σξξ,0Γ⊤ 0Ψ−1⊤ 0Λ⊤ 2,0, Σ22 0=Λ2,0Ψ−1 0(Γ0Σξξ,0Γ⊤ 0+Σζζ,0)Ψ−1⊤ 0Λ⊤ 2,0+Σεε,0. In this paper, we judge whether jumps occur or not in the interval ( tn i−1, tn i] from the increment |∆n iX|; see, e.g., Shimizu and Yoshida [22] and Ogihara and Yoshida [17]. In other words, if |∆n iX|exceeds Dhρ n, we judge that a jump occurs in the interval, and if not, we consider that no jumps occur in the interval, where ρ∈[0,1/2) and D > 0. Set two random times as τn i= inf t∈[tn i−1, tn i);|∆ξt|>0∨ |∆δt|>0∨ |∆εt|>0∨ |∆ζt|>0 and ηn i= sup t∈[tn i−1, tn i);|∆ξt|>0∨ |∆δt|>0∨ |∆εt|>0∨ |∆ζt|>0 fori= 1, . . . , n . Here, we define that τn i=ηn i=tn iif the infimum or supremum on the right-hand side does not exist. Note that τn iandηn imean the first and the last jump time in the interval [ tn i−1, tn i), respectively. For the two random times, the following lemma is shown. Lemma 1 Under [ A1] and [ A3], for ρ∈[0,1/2),D > 0 and all p≥1, PΣ0 sup t∈[tn i−1,τn i)|Xt−Xtn i−1|> Dhρ n Fn i−1 =Ri−1(hp n, ξ, δ, ε, ζ ) (3.1) and PΣ0 sup t∈[ηn i,tn i)|Xtn i−Xt|> Dhρ n Fn i−1 =Ri−1(hp n, ξ, δ, ε, ζ ) (3.2) fori= 1, . . . , n , where sup ϕ=−∞. 
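The threshold rule underlying Lemma 1 is simple to state in code: the interval (t_{i−1}^n, t_i^n] is judged to contain a jump exactly when |Δ_i^n X| > D h_n^ρ. A sketch using the tuning values D = 10 and ρ = 0.4 from the simulations in Section 4 (the function name and the sample increments are our illustrative choices):

```python
import numpy as np

def flag_jumps(increments, h, D=10.0, rho=0.4):
    # Boolean mask: True where |Delta_i^n X| > D * h**rho, i.e. the
    # interval is judged to contain a jump. Rows of `increments` are
    # the vector increments Delta_i^n X.
    norms = np.linalg.norm(np.atleast_2d(increments), axis=1)
    return norms > D * h ** rho

h = 1e-5                            # h_n = T / n with T = 1, n = 1e5
incs = np.array([[0.003, 0.001],    # small diffusive move: below threshold
                 [2.5, -1.0],       # large move: flagged as a jump
                 [0.0, 0.0]])
mask = flag_jumps(incs, h)          # threshold D * h**rho = 0.1 here
```

The complementary mask selects the "no jump" increments used to build the realized-covariance-type estimator of Σ in the next step.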
Next, we define the following events: Cn i,j,0= Jn i,j= 0,|∆Xn i| ≤Dhρ n , Cn i,j,1= Jn i,j= 1,|∆Xn i| ≤Dhρ n , Cn i,j,2= Jn i,j≥2,|∆Xn i| ≤Dhρ n , Dn i,j,0= Jn i,j= 0,|∆Xn i|> Dhρ n , Dn i,j,1= Jn i,j= 1,|∆Xn i|> Dhρ n , Dn i,j,2= Jn i,j≥2,|∆Xn i|> Dhρ n fori= 1, . . . , n andj= 1,2,3,4, where Jn i,j=pj((tn i−1, tn i]×Ej). Let Cn i,k1,k2,k3,k4=Cn i,1,k1∩Cn i,2,k2∩Cn i,3,k3∩Cn i,4,k4 and Dn i,k1,k2,k3,k4=Dn i,1,k1∩Dn i,2,k2∩Dn i,3,k3∩Dn i,4,k4 SEM FOR DIFFUSION PROCESSES WITH JUMPS 7 fori= 1, . . . , n andk1, k2, k3, k4= 0,1,2. Note that |∆Xn i ≤Dhρ n =[ k1,k2,k3,k4=0,1,2Cn i,k1,k2,k3,k4 and |∆Xn i > Dhρ n =[ k1,k2,k3,k4=0,1,2Dn i,k1,k2,k3,k4 fori= 1, . . . , n . Furthermore, we define the sets as follows: K1=n (k1, k2, k3, k4)∈ {0,1,2}4; One element of k1, k2, k3,andk4is 1,and the others are 0o , K2=n (k1, k2, k3, k4)∈ {0,1,2}4; Two elements of k1, k2, k3,andk4are 1,and the others are 0o , K3=n (k1,
k2, k3, k4)∈ {0,1,2}4; Three elements of k1, k2, k3,andk4are 1,and the other is 0o and K4=n (k1, k2, k3, k4)∈ {0,1,2}4; At least one of k1, k2, k3,andk4is 2o . Below, we always suppose that ρ∈[1/3,1/2). Using Lemma 1, we can obtain the following lemmas. Lemma 2 Under [ A1], [A3] and [ A4], for a sufficiently large n, PΣ0 Cn i,0,0,0,0 Fn i−1 =˜Ri−1(hn, ξ, δ, ε, ζ ), (3.3) PΣ0 Cn i,1,1,1,1 Fn i−1 ≤λ1,0λ2,0λ3,0λ4,0h4 n, (3.4) PΣ0 Cn i,k1,k2,k3,k4 Fn i−1 =Ri−1(hρ+1 n, ξ, δ, ε, ζ ) (k1, k2, k3, k4)∈K1, (3.5) PΣ0 Cn i,k1,k2,k3,k4 Fn i−1 ≤ max j=1,2,3,4λj,02 h2 n(k1, k2, k3, k4)∈K2, (3.6) PΣ0 Cn i,k1,k2,k3,k4 Fn i−1 ≤ max j=1,2,3,4λj,03 h3 n(k1, k2, k3, k4)∈K3 (3.7) and PΣ0 Cn i,k1,k2,k3,k4 Fn i−1 ≤ max j=1,2,3,4λ2 j,0 h2 n(k1, k2, k3, k4)∈K4 (3.8) fori= 1, . . . , n . Lemma 3 Under [ A1], [A3] and [ A4], for any p≥1 and a sufficiently large n, PΣ0 Dn i,0,0,0,0 Fn i−1 =Ri−1(hp n, ξ, δ, ε, ζ ), (3.9) PΣ0 Dn i,1,1,1,1 Fn i−1 ≤λ1,0λ2,0λ3,0λ4,0h4 n, (3.10) PΣ0 Dn i,1,0,0,0 Fn i−1 =λ1,0hn˜Ri−1(hρ n, ξ, δ, ε, ζ ), (3.11) PΣ0 Dn i,0,1,0,0 Fn i−1 =λ2,0hn˜Ri−1(hρ n, ξ, δ, ε, ζ ), (3.12) PΣ0 Dn i,0,0,1,0 Fn i−1 =λ3,0hn˜Ri−1(hρ n, ξ, δ, ε, ζ ), (3.13) 8 S. KUSANO AND M. UCHIDA PΣ0 Dn i,0,0,0,1 Fn i−1 =λ4,0hn˜Ri−1(hρ n, ξ, δ, ε, ζ ), (3.14) PΣ0 Dn i,k1,k2,k3,k4 Fn i−1 ≤ max j=1,2,3,4λj,02 h2 n(k1, k2, k3, k4)∈K2, (3.15) PΣ0 Dn i,k1,k2,k3,k4 Fn i−1 ≤ max j=1,2,3,4λj,03 h3 n(k1, k2, k3, k4)∈K3 (3.16) and PΣ0 Dn i,k1,k2,k3,k4 Fn i−1 ≤ max j=1,2,3,4λ2 j,0 h2 n(k1, k2, k3, k4)∈K4 (3.17) fori= 1, . . . , n . By Lemmas 2 and 3, we can judge that no jumps occur in the interval [ tn i−1, tn i) if|∆Xn i ≤Dhρ n, and a single jump occurs in the interval [ tn i−1, tn i) if|∆Xn i > Dhρ n. Thus, to estimate Σ0, we use the estimator ˆΣn=1 ˜NnhnnX i=1(∆n iX)(∆n iX)⊤1{|∆n iX|≤Dhρ n}, where Nn=nX i=11{|∆n iX|≤Dhρ n},˜Nn=( Nn(Nn̸= 0), n (Nn= 0). The asymptotic result of ˆΣnis as follows. 
Theorem 1 Under [A1]-[A4], asn−→ ∞ , ˆΣnp−→Σ0 (3.18) and √n(vech ˆΣn−vechΣ0)d−→N¯p 0,2D+ p(Σ0⊗Σ0)D+⊤ p (3.19) under PΣ0. Next, we consider the estimation of parameters Λ1,Λ2,Γ,Ψ,Σξξ,Σδδ,Σεε,Σζζ. (3.20) Unfortunately, we cannot estimate all elements of (3.20) due to non-identifiable problems. Hence, we impose constraints on the parameters (3.20). For instance, statisticians may set some elements of (3.20) to 0 or 1 to ensure identifiability. These constraints are made from the theoretical viewpoint of each research field. See Kusano and Uchida [13] for details of the constraints and the identifiability problem. Note that it is sufficient to estimate only unknown non-duplicated elements of (3.20). Set the vector of those elements as θ∈Θ⊂Rq, where qis the number of only unknown non-duplicated elements of (3.20), and Θ is a parameter space. For simplicity, we suppose that Θ is convex and compact. Hereafter, we write the
parameters (3.20) as Λθ 1,Λθ 2,Γθ,Ψθ,Σθ ξξ,Σθ δδ,Σθ εε,Σθ ζζ. SEM FOR DIFFUSION PROCESSES WITH JUMPS 9 In addition, we set Σ(θ) = Σ(θ)11Σ(θ)12 Σ(θ)12⊤Σ(θ)22! , where Σ(θ)11=Λθ 1Σθ ξξΛθ⊤ 1+Σθ δδ, Σ(θ)12=Λθ 1Σθ ξξΓθ⊤Ψθ−1⊤Λθ⊤ 2, Σ(θ)22=Λθ 2Ψθ−1(ΓθΣθ ξξΓθ⊤+Σθ ζζ)Ψθ−1⊤Λθ⊤ 2+Σθ εε. Note that Σ(θ) is two-times continuously differentiable for θ. To estimate θ, we use the following quasi- likelihood: Ln(θ) = exp Hn(θ) , where Hn(θ) =−1 2nX i=1log det Σ(θ)1{|∆n iX|≤hρ n}−1 2hnnX i=1(∆n iX)⊤Σ(θ)−1(∆n iX)1{|∆Xn i|≤hρ n}. Define the quasi-maximum likelihood estimator as Hn(ˆθn) = sup θ∈ΘHn(θ). Set the true value of θasθ0∈Int(Θ). Let ∆0=∂ ∂θ⊤vechΣ(θ) θ=θ0,W0= 2D+ p Σ(θ0)⊗Σ(θ0) D+⊤ p andPθ0=PΣ(θ0). Furthermore, we make the following assumption. [B1] (a)Σ(θ) =Σ(θ0) =⇒θ=θ0. (b) ∆ 0has full column rank. For the quasi-maximum likelihood estimator, the following theorem holds. Theorem 2 Under [A1]-[A4] and[B1], asn−→ ∞ , ˆθnp−→θ0 (3.21) and √n ˆθn−θ0d−→Nq 0, ∆⊤ 0W−1 0∆0−1 (3.22) under Pθ0. Next, we consider the goodness-of-fit test: ( H0:Σ=Σ(θ), H1:Σ̸=Σ(θ).(3.23) 10 S. KUSANO AND M. UCHIDA ForΣ∈ M+ p, we set Ln(Σ) = exp Hn(Σ) , where Hn(Σ) =−1 2nX i=1log det Σ1{|∆n iX|≤hρ n}−1 2hnnX i=1(∆n iX)⊤Σ−1(∆n iX)1{|∆Xn i|≤hρ n}. LetJn=ˆΣnis positive definite . On Jn, we define the quasi-likelihood ratio as λn=supθ∈ΘLn(Σ(θ)) supΣ∈M+ pLn(Σ). Note that ˜Nn=NnonJn. Since 1 2hnnX i=1(∆n iX)⊤Σ−1(∆n iX)1{|∆Xn i|≤hρ n} =Nn 2×1 NnhnnX i=1trn (∆n iX)⊤Σ−1(∆n iX)1{|∆n iX|≤Dhρ n}o =Nn 2tr Σ−1ˆΣn onJn, we have Hn(Σ) =−Nn 2log det Σ−Nn 2tr Σ−1ˆΣn onJn. Furthermore, Hn(Σ) has a maximum value Hn(ˆΣn) =−Nn 2log det ˆΣn−Nnp 2 atΣ=ˆΣnonJn, so that −2 logλn=−2 sup θ∈ΘHn(Σ(θ)) + 2 sup Σ∈M+ pHn(Σ) =−2Hn(Σ(ˆθn)) + 2 Hn(ˆΣn) =Nnlog det Σ(ˆθn)−Nnlog det ˆΣn+Nntr Σ(ˆθn)−1ˆΣn −Nnp onJn. Therefore, we define the quasi-likelihood ratio test statistics as follows: Tn=Tn(ˆθn) =Nnlog det Σ(ˆθn)−Nnlog det ˜Σn+Nntr Σ(ˆθn)−1˜Σn −Nnp, where ˜Σn=(ˆΣn(on J n), Ip(on Jc n). Note that Tnis still well-defined on Jc n. 
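On J_n, the statistic T_n depends on the data only through N_n, Σ(θ̂_n) and Σ̂_n, so it can be evaluated directly from the closed form above. A numpy sketch (matrix values illustrative): T_n = 0 when Σ(θ̂_n) = Σ̂_n and T_n > 0 otherwise, since Σ ↦ log det Σ + tr(Σ^{−1}Σ̂_n) is minimized at Σ = Σ̂_n.

```python
import numpy as np

def T_stat(Sigma_theta, Sigma_hat, N_n):
    # T_n = N_n (log det Sigma(theta) - log det Sigma_hat
    #            + tr(Sigma(theta)^{-1} Sigma_hat) - p)
    p = Sigma_hat.shape[0]
    _, ld_theta = np.linalg.slogdet(Sigma_theta)
    _, ld_hat = np.linalg.slogdet(Sigma_hat)
    return N_n * (ld_theta - ld_hat
                  + np.trace(np.linalg.solve(Sigma_theta, Sigma_hat)) - p)

Sigma_hat = np.array([[2.0, 0.5],
                      [0.5, 1.0]])
t0 = T_stat(Sigma_hat, Sigma_hat, N_n=1000)   # perfect fit: statistic vanishes
t1 = T_stat(np.eye(2), Sigma_hat, N_n=1000)   # misfit: statistic is positive
```

Under H0, Theorem 3 below gives the chi-squared calibration T_n → χ²_{p̄−q}, so the observed value is compared with χ²_{p̄−q}(α).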
See also Kusano and Uchida [13] for the details of the goodness-of-fit test (3.23). Set the true value of Σunder H0asΣ(θ0), where θ0∈Int(Θ). Under H0(i.e., under Pθ0), the following asymptotic property is shown. SEM FOR DIFFUSION PROCESSES WITH JUMPS 11 Theorem 3 Under [A1]-[A4] and[B1], asn−→ ∞ , Tnd−→χ2 ¯p−q under H0. By Theorem 3, we can consider the test of asymptotic significance level α∈(0,1) with the following rejection region: Tn> χ2 ¯p−q(α) . Next, we study the consistency of the test. Fix Σ∗as the true value of Σunder H1. Note that Σ∗̸=Σ(θ) for any θ∈Θ. Set the optimal parameter θ∗as U(θ∗) = inf θ∈ΘU(θ), where U(θ) = log det Σ(θ)−log det Σ∗+ tr Σ(θ)−1Σ∗ −p. In addition, to prove the consistency of the test, the following assumption is made. [B2] U (θ) =U(θ∗) =⇒θ=θ∗. Under H1(i.e., under PΣ∗), the following theorem holds. Theorem 4 Under [A1]-[A4] and[B2], asn−→ ∞ , P Tn> χ2 ¯p−q(α) −→1 under H1. 4.Numerical Simulations 4.1.True model. The four-dimensional observable process {X0 1,t}t≥0is defined by the true factor model as follows: X0 1,t=Λ1,0ξ0 t+δ0 t, where {ξ0 t}t≥0and{δ0 t}t≥0are one and four-dimensional latent processes, respectively, and Λ1,0= 1 0.7 1.3 0.9⊤ . The stochastic process {ξ0 t}t≥0satisfies the
following one-dimensional L´ evy-OU process: dξ0 t=−2 ξ0 t−−1)dt+ 1.2dW1,t+Z R\{0}zp1(dt, dz ), ξ0 0= 1, where {W1,t}t≥0is a one-dimensional standard Wiener process, and p1(dt, dz ) is a Poisson random measure onR+×R\{0}with the jump density f1(z) = 3 g(z|0,5). Here, g(z|µ, σ2) denotes the probability density 12 S. KUSANO AND M. UCHIDA function of the normal distribution with mean µ∈Rand variance σ2>0, i.e., g(z|µ, σ2) =1√ 2πσ2exp −(z−µ)2 2σ2 . The stochastic process {δ0 t}t≥0is defined by the following four-dimensional L´ evy-OU process: dδ0(1) t=−0.8δ0(1) t−dt+ 1.6dW(1) 2,t+Z R\{0}zp2,1(dt, dz ), δ0(1) 0= 0, dδ0(2) t=−0.5δ0(2) t−dt+ 0.7dW(2) 2,t+Z R\{0}zp2,2(dt, dz ), δ0(2) 0= 0, dδ0(3) t=−0.9δ0(3) t−dt+ 1.2dW(3) 2,t+Z R\{0}zp2,3(dt, dz ), δ0(3) 0= 0 and dδ0(4) t=−0.7δ0(4) t−dt+ 0.9dW(4) 2,t+Z R\{0}zp2,4(dt, dz ), δ0(4) 0= 0, where {W2,t}t≥0is a four-dimensional standard Wiener process, and p2,i(dt, dz ) for i= 1,2,3,4 are Poisson random measures on R+×R\{0}with the jump densities f2,1(z) = 2 g(z|0,3),f2,2(z) =g(z|0,2),f2,3(z) = g(z|0,3), and f2,4(z) = 2 g(z|0,2), respectively. The eight-dimensional observable process {X0 2,t}t≥0satisfies the following true factor model: X0 2,t=Λ2,0η0 t+ε0 t, where {η0 t}t≥0and{ε0 t}t≥0are two and eight-dimensional latent processes, respectively, and Λ2,0= 1 0.8 1.4 1.2 0 0 0 0 0 0 0 0 1 0 .6 1.3 0.9!⊤ . The relationship between {ξ0 t}t≥0and{η0 t}t≥0is expressed as follows: η0 t=Γ0ξ0 t+ζ0 t, where {ζ0 t}t≥0is a two-dimensional latent process and Γ0= 0.7−0.8⊤ . 
The stochastic process $\{\varepsilon^0_t\}_{t\geq0}$ satisfies the following eight-dimensional L\'evy-driven OU process:
$$d\varepsilon^{0(1)}_t = -0.8\,\varepsilon^{0(1)}_{t-}dt + 0.9\,dW^{(1)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,1}(dt,dz), \qquad \varepsilon^{0(1)}_0 = 0,$$
$$d\varepsilon^{0(2)}_t = -1.5\,\varepsilon^{0(2)}_{t-}dt + 1.2\,dW^{(2)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,2}(dt,dz), \qquad \varepsilon^{0(2)}_0 = 0,$$
$$d\varepsilon^{0(3)}_t = -0.9\,\varepsilon^{0(3)}_{t-}dt + 0.8\,dW^{(3)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,3}(dt,dz), \qquad \varepsilon^{0(3)}_0 = 0,$$
$$d\varepsilon^{0(4)}_t = -0.7\,\varepsilon^{0(4)}_{t-}dt + 1.1\,dW^{(4)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,4}(dt,dz), \qquad \varepsilon^{0(4)}_0 = 0,$$
$$d\varepsilon^{0(5)}_t = -1.2\,\varepsilon^{0(5)}_{t-}dt + 1.5\,dW^{(5)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,5}(dt,dz), \qquad \varepsilon^{0(5)}_0 = 0,$$
$$d\varepsilon^{0(6)}_t = -0.5\,\varepsilon^{0(6)}_{t-}dt + 1.3\,dW^{(6)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,6}(dt,dz), \qquad \varepsilon^{0(6)}_0 = 0,$$
$$d\varepsilon^{0(7)}_t = -1.3\,\varepsilon^{0(7)}_{t-}dt + 0.7\,dW^{(7)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,7}(dt,dz), \qquad \varepsilon^{0(7)}_0 = 0$$
and
$$d\varepsilon^{0(8)}_t = -0.6\,\varepsilon^{0(8)}_{t-}dt + 1.4\,dW^{(8)}_{3,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{3,8}(dt,dz), \qquad \varepsilon^{0(8)}_0 = 0,$$
where $\{W_{3,t}\}_{t\geq0}$ is an eight-dimensional standard Wiener process, and $p_{3,i}(dt,dz)$ for $i = 1,\dots,8$ are Poisson random measures on $\mathbb{R}_+ \times \mathbb{R}\setminus\{0\}$ with the jump densities $f_{3,1}(z) = 2g(z \mid 0,2)$, $f_{3,2}(z) = g(z \mid 0,3)$, $f_{3,3}(z) = g(z \mid 0,2)$, $f_{3,4}(z) = 2g(z \mid 0,3)$, $f_{3,5}(z) = 2g(z \mid 0,3)$, $f_{3,6}(z) = g(z \mid 0,3)$, $f_{3,7}(z) = g(z \mid 0,2)$ and $f_{3,8}(z) = 2g(z \mid 0,3)$, respectively. The stochastic process $\{\zeta^0_t\}_{t\geq0}$ is defined by the following two-dimensional L\'evy-driven OU process:
$$d\zeta^{0(1)}_t = -0.8\,\zeta^{0(1)}_{t-}dt + 0.9\,dW^{(1)}_{4,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{4,1}(dt,dz), \qquad \zeta^{0(1)}_0 = 0$$
and
$$d\zeta^{0(2)}_t = -1.4\,\zeta^{0(2)}_{t-}dt + 1.1\,dW^{(2)}_{4,t} + \int_{\mathbb{R}\setminus\{0\}} z\,p_{4,2}(dt,dz), \qquad \zeta^{0(2)}_0 = 0,$$
where $\{W_{4,t}\}_{t\geq0}$ is a two-dimensional standard Wiener process, and $p_{4,1}(dt,dz)$ and $p_{4,2}(dt,dz)$ are Poisson random measures on $\mathbb{R}_+ \times \mathbb{R}\setminus\{0\}$ with the jump densities $f_{4,1}(z) = 2g(z \mid 0,2)$ and $f_{4,2}(z) = g(z \mid 0,3)$, respectively. Figure 2 is the path diagram of the true model at time $t$.

Figure 2. Path diagram of the true model at time $t$.

4.2. Correctly specified model. Set $p_1 = 4$, $p_2 = 8$, $k_1 = 1$, $k_2 = 2$, and $q = 26$. We assume the loading matrices
$$\Lambda^\theta_1 = \begin{pmatrix} 1 & \theta^{(1)} & \theta^{(2)} & \theta^{(3)} \end{pmatrix}^\top, \qquad \Lambda^\theta_2 = \begin{pmatrix} 1 & \theta^{(4)} & \theta^{(5)} & \theta^{(6)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & \theta^{(7)} & \theta^{(8)} & \theta^{(9)} \end{pmatrix}^\top, \qquad \Gamma^\theta = \begin{pmatrix} \theta^{(10)} \\ \theta^{(11)} \end{pmatrix}, \qquad \Psi^\theta = I_2,$$
where $\theta^{(i)}$ for $i = 1,\dots,11$ are not zero. In addition, we suppose the volatility matrices
$$\Sigma^\theta_{\xi\xi} = \theta^{(12)}, \qquad \Sigma^\theta_{\delta\delta} = \mathrm{Diag}\bigl(\theta^{(13)},\theta^{(14)},\theta^{(15)},\theta^{(16)}\bigr), \qquad \Sigma^\theta_{\varepsilon\varepsilon} = \mathrm{Diag}\bigl(\theta^{(17)},\dots,\theta^{(24)}\bigr), \qquad \Sigma^\theta_{\zeta\zeta} = \mathrm{Diag}\bigl(\theta^{(25)},\theta^{(26)}\bigr),$$
where $\theta^{(i)}$ for $i = 12,\dots,26$ are positive. This model is correctly specified since $\Sigma_0 = \Sigma(\theta_0)$, where
$$\theta_0 = \bigl(0.7,\, 1.3,\, 0.9,\, 0.8,\, 1.4,\, 1.2,\, 0.6,\, 1.3,\, 0.9,\, 0.7,\, -0.8,\, 1.44,\, 2.56,\, 0.49,\, 1.44,\, 0.81,\, 0.81,\, 1.44,\, 0.64,\, 1.21,\, 2.25,\, 1.69,\, 0.49,\, 1.96,\, 0.81,\, 1.21\bigr)^\top.$$
Figure 3 is the path diagram of the correctly specified model at time $t$.

Figure 3. Path diagram of the correctly specified model at time $t$.

4.3. Misspecified model. Set $p_1 = 4$, $p_2 = 8$, $k_1 = 1$, $k_2 = 1$, and $q = 25$. In this model, the loading matrices are defined by
$$\Lambda^\theta_1 = \begin{pmatrix} 1 & \theta^{(1)} & \theta^{(2)} & \theta^{(3)} \end{pmatrix}^\top, \qquad \Lambda^\theta_2 = \begin{pmatrix} 1 & \theta^{(4)} & \theta^{(5)} & \theta^{(6)} & \theta^{(7)} & \theta^{(8)} & \theta^{(9)} & \theta^{(10)} \end{pmatrix}^\top, \qquad \Gamma^\theta = \theta^{(11)},$$
where $\theta^{(i)}$ for $i = 1,\dots,11$ are not zero. In addition, the volatility matrices are set as
$$\Sigma^\theta_{\xi\xi} = \theta^{(12)}, \qquad \Sigma^\theta_{\delta\delta} = \mathrm{Diag}\bigl(\theta^{(13)},\dots,\theta^{(16)}\bigr), \qquad \Sigma^\theta_{\varepsilon\varepsilon} = \mathrm{Diag}\bigl(\theta^{(17)},\dots,\theta^{(24)}\bigr), \qquad \Sigma^\theta_{\zeta\zeta} = \theta^{(25)},$$
where $\theta^{(i)}$ for $i = 12,\dots,25$ are positive. Figure 4 is the path diagram of the misspecified model at time $t$.

Figure 4. Path diagram of the misspecified model at time $t$.

4.4. Simulation results. In this simulation, we set $n = 10^5$ and $T = 1$, and generated 10000 independent sample paths from the true model. To maximize $H_n(\theta)$, we used optim() with the BFGS method in the R language and chose the true value $\theta_0$ as the initial value. Let $D = 10$ and $\rho = 0.4$.

4.4.1. Correctly specified model. Tables 1 and 2 show the sample means and the sample standard deviations (S.D.s) of $\hat\Sigma_n$ and $\hat\theta_n$, and Figures 5 and 6 show the histograms, the Q-Q plots, and the empirical distributions of $\sqrt{n}\bigl((\hat\Sigma_n)_{11} - (\Sigma_0)_{11}\bigr)$ and $\sqrt{n}\bigl(\hat\theta^{(1)}_n - \theta^{(1)}_0\bigr)$, which imply that $\hat\Sigma_n$ and $\hat\theta_n$ have consistency and asymptotic normality.
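The true covariance $\Sigma_0 = \Sigma(\theta_0)$ behind the "True value" rows of Table 1 below can be reproduced directly from the factor structure. A small numpy sketch, assuming the standard SEM-implied form $\Sigma_{11} = \Lambda_1\Sigma_{\xi\xi}\Lambda_1^\top + \Sigma_{\delta\delta}$, $\Sigma_{12} = \Lambda_1\Sigma_{\xi\xi}\Gamma^\top\Lambda_2^\top$, $\Sigma_{22} = \Lambda_2(\Gamma\Sigma_{\xi\xi}\Gamma^\top + \Sigma_{\zeta\zeta})\Lambda_2^\top + \Sigma_{\varepsilon\varepsilon}$ (with $\Psi_0 = I_2$, so that $X_2 = \Lambda_2(\Gamma\xi + \zeta) + \varepsilon$):

```python
import numpy as np

# Parameter values theta_0 of the correctly specified model (Section 4.2).
L1 = np.array([[1.0], [0.7], [1.3], [0.9]])               # Lambda_1  (4 x 1)
L2 = np.array([[1.0, 0.8, 1.4, 1.2, 0, 0, 0, 0],
               [0, 0, 0, 0, 1.0, 0.6, 1.3, 0.9]]).T       # Lambda_2  (8 x 2)
G = np.array([[0.7], [-0.8]])                             # Gamma     (2 x 1)
s_xi = np.array([[1.44]])                                 # Sigma_xi.xi
S_dd = np.diag([2.56, 0.49, 1.44, 0.81])                  # Sigma_delta.delta
S_ee = np.diag([0.81, 1.44, 0.64, 1.21, 2.25, 1.69, 0.49, 1.96])
S_zz = np.diag([0.81, 1.21])                              # Sigma_zeta.zeta

# Model-implied covariance blocks and the full 12 x 12 matrix.
S11 = L1 @ s_xi @ L1.T + S_dd
S12 = L1 @ s_xi @ G.T @ L2.T
S22 = L2 @ (G @ s_xi @ G.T + S_zz) @ L2.T + S_ee
Sigma0 = np.block([[S11, S12], [S12.T, S22]])
```

For example, $(\Sigma_0)_{11} = 1.44 + 2.56 = 4.000$, $(\Sigma_0)_{12} = 0.7 \cdot 1.44 = 1.008$ and $(\Sigma_0)_{55} = 0.49 \cdot 1.44 + 0.81 + 0.81 = 2.326$, matching the true values reported in Table 1.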
Figure 7 shows the histogram, the Q-Q plot, and the empirical distribution of $T_n$. It seems from Figure 7 that the asymptotic distribution of $T_n$ is the chi-squared distribution with 52 degrees of freedom. Hence, we can see that Theorems 1-3 hold true for this example.

4.4.2. Misspecified model. Table 3 shows the number of rejections of the quasi-likelihood ratio test in the misspecified model. Table 3 implies that the test has consistency.

             (1,1)   (1,2)   (1,3)   (1,4)   (1,5)   (1,6)
True value   4.000   1.008   1.872   1.296   1.008   0.806
Mean         4.001   1.008   1.873   1.297   1.009   0.807
S.D.         0.018   0.008   0.014   0.010   0.010   0.010

             (1,7)   (1,8)   (1,9)   (1,10)  (1,11)  (1,12)  (2,2)
True value   1.411   1.210  -1.152  -0.691  -1.498  -1.037   1.196
Mean         1.412   1.210  -1.153  -0.691  -1.498  -1.037   1.196
S.D.         0.013   0.012   0.014   0.010   0.014   0.013   0.005

             (2,3)   (2,4)   (2,5)   (2,6)   (2,7)   (2,8)   (2,9)
True value   1.310   0.907   0.706   0.564   0.988   0.847  -0.806
Mean         1.311   0.908   0.706   0.565   0.988   0.847  -0.807
S.D.         0.008   0.006   0.006   0.006   0.007   0.007   0.008

             (2,10)  (2,11)  (2,12)  (3,3)   (3,4)   (3,5)   (3,6)
True value  -0.484  -1.048  -0.726   3.874   1.685   1.310   1.048
Mean        -0.484  -1.049  -0.726   3.875   1.686   1.311   1.049
S.D.         0.006   0.008   0.007   0.018   0.010   0.010   0.010

             (3,7)   (3,8)   (3,9)   (3,10)  (3,11)  (3,12)  (4,4)
True value   1.835   1.572  -1.498  -0.899  -1.947  -1.348   1.976
Mean         1.835   1.573  -1.498  -0.899  -1.948  -1.348   1.977
S.D.         0.013   0.013   0.014   0.010   0.014   0.013   0.009

             (4,5)   (4,6)   (4,7)   (4,8)   (4,9)   (4,10)  (4,11)
True value   0.907   0.726   1.270   1.089  -1.037  -0.622  -1.348
Mean         0.908   0.726   1.271   1.089  -1.037  -0.622  -1.348
S.D.         0.007   0.007   0.009   0.009   0.010   0.007   0.010

             (4,12)  (5,5)   (5,6)   (5,7)   (5,8)   (5,9)   (5,10)
True value  -0.933   2.326   1.212   2.122   1.819  -0.806  -0.484
Mean        -0.934   2.326   1.213   2.122   1.819  -0.807  -0.484
S.D.         0.009   0.011   0.008   0.011   0.011   0.010   0.008

             (5,11)  (5,12)  (6,6)   (6,7)   (6,8)   (6,9)   (6,10)
True value  -1.048  -0.726   2.410   1.697   1.455  -0.645  -0.387
Mean        -1.049  -0.726   2.410   1.698   1.455  -0.645  -0.387
S.D.         0.010   0.010   0.011   0.011   0.010   0.010   0.008

             (6,11)  (6,12)  (7,7)   (7,8)   (7,9)   (7,10)  (7,11)
True value  -0.839  -0.581   3.611   2.546  -1.129  -0.677  -1.468
Mean        -0.839  -0.581   3.611   2.547  -1.130  -0.678  -1.468
S.D.         0.010   0.010   0.016   0.014   0.013   0.010   0.013

             (7,12)  (8,8)   (8,9)   (8,10)  (8,11)  (8,12)  (9,9)
True value  -1.016   3.392  -0.968  -0.581  -1.258  -0.871   4.382
Mean        -1.017   3.393  -0.968  -0.581  -1.259  -0.871   4.382
S.D.         0.012   0.015   0.012   0.009   0.012   0.012   0.020

             (9,10)  (9,11)  (9,12)  (10,10) (10,11) (10,12) (11,11)
True value   1.279   2.771   1.918   2.457   1.663   1.151   4.092
Mean         1.279   2.772   1.919   2.457   1.663   1.151   4.093
S.D.         0.011   0.016   0.014   0.011   0.011   0.010   0.018

             (11,12) (12,12)
True value   2.494   3.687
Mean         2.494   3.687
S.D.         0.015   0.016

Table 1. Sample mean and sample standard deviation (S.D.) of $\hat\Sigma_n$.

             θ(1)    θ(2)    θ(3)    θ(4)    θ(5)    θ(6)    θ(7)    θ(8)    θ(9)
True value   0.700   1.300   0.900   0.800   1.400   1.200   0.600   1.300   0.900
Mean         0.700   1.300   0.900   0.800   1.400   1.200   0.600   1.300   0.900
S.D.         0.004   0.007   0.005   0.004   0.004   0.004   0.004   0.005   0.005

             θ(10)   θ(11)   θ(12)   θ(13)   θ(14)   θ(15)   θ(16)   θ(17)   θ(18)   θ(19)
True value   0.700  -0.800   1.440   2.560   0.490   1.440   0.810   0.810   1.440   0.640
Mean         0.700  -0.800   1.441   2.560   0.490   1.440   0.810   0.810   1.440   0.640
S.D.         0.004   0.006   0.014   0.013   0.003   0.009   0.005   0.005   0.007   0.006

             θ(20)   θ(21)   θ(22)   θ(23)   θ(24)   θ(25)   θ(26)
True value   1.210   2.250   1.690   0.490   1.960   0.810   1.210
Mean         1.210   2.250   1.690   0.490   1.960   0.810   1.210
S.D.         0.007   0.012   0.008   0.009   0.010   0.006   0.011

Table 2. Sample mean and sample standard deviation (S.D.) of $\hat\theta_n$.

The number of rejections   10000

Table 3. The number of rejections of the misspecified model.

Figure 5. Histogram (left), Q-Q plot (middle) and empirical distribution (right) of $\sqrt{n}\bigl((\hat\Sigma_n)_{11}-(\Sigma_0)_{11}\bigr)$. The red lines are theoretical curves.

Figure 6. Histogram (left), Q-Q plot (middle) and empirical distribution (right) of $\sqrt{n}\bigl(\hat\theta^{(1)}_n-\theta^{(1)}_0\bigr)$. The red lines are theoretical curves.

Figure 7. Histogram (left), Q-Q plot (middle) and empirical distribution (right) of $T_n$. The red lines are theoretical curves.

5. Proofs

In Lemmas 1-3, Propositions 1-6, and Theorem 1, we simply write $P_{\Sigma_0}$ as $P$, and $E$ stands for expectation under $P$. $C$ denotes a generic positive constant whose value may change from line to line. Let $\lambda_0 = \sum_{i=1}^4 \lambda_{i,0}$.

Proof of Lemma 1. First, we show (3.1). Since
$$|X_t - X_{t^n_{i-1}}| \leq |X_{1,t} - X_{1,t^n_{i-1}}| + |X_{2,t} - X_{2,t^n_{i-1}}| = \bigl|\Lambda_{1,0}(\xi_t - \xi_{t^n_{i-1}}) + \delta_t - \delta_{t^n_{i-1}}\bigr| + \bigl|\Lambda_{2,0}\Psi_0^{-1}\Gamma_0(\xi_t - \xi_{t^n_{i-1}}) + \Lambda_{2,0}\Psi_0^{-1}(\zeta_t - \zeta_{t^n_{i-1}}) + \varepsilon_t - \varepsilon_{t^n_{i-1}}\bigr| \leq C|\xi_t - \xi_{t^n_{i-1}}| + |\delta_t - \delta_{t^n_{i-1}}| + |\varepsilon_t - \varepsilon_{t^n_{i-1}}| + C|\zeta_t - \zeta_{t^n_{i-1}}|,$$
we obtain
$$\sup_{t\in[t^n_{i-1},\tau^n_i)}|X_t - X_{t^n_{i-1}}| \leq C\sup_{t\in[t^n_{i-1},\tau^n_i)}|\xi_t - \xi_{t^n_{i-1}}| + \sup_{t\in[t^n_{i-1},\tau^n_i)}|\delta_t - \delta_{t^n_{i-1}}| + \sup_{t\in[t^n_{i-1},\tau^n_i)}|\varepsilon_t - \varepsilon_{t^n_{i-1}}| + C\sup_{t\in[t^n_{i-1},\tau^n_i)}|\zeta_t - \zeta_{t^n_{i-1}}|.$$
In a similar manner to Lemma 2.1 in Shimizu and Yoshida [22], it is shown that
$$P\Bigl(C\sup_{t\in[t^n_{i-1},\tau^n_i)}|\xi_t - \xi_{t^n_{i-1}}| > Dh^\rho_n \Bigm| \mathcal{F}^n_{i-1}\Bigr) = R_{i-1}(h^p_n, \xi),$$
and the analogous bounds $R_{i-1}(h^p_n,\delta)$, $R_{i-1}(h^p_n,\varepsilon)$ and $R_{i-1}(h^p_n,\zeta)$ hold for the $\delta$-, $\varepsilon$- and $\zeta$-terms, for any $p \geq 1$. Therefore, one gets
$$P\Bigl(\sup_{t\in[t^n_{i-1},\tau^n_i)}|X_t - X_{t^n_{i-1}}| > Dh^\rho_n \Bigm| \mathcal{F}^n_{i-1}\Bigr) \leq P\Bigl(C\sup_{t\in[t^n_{i-1},\tau^n_i)}|\xi_t - \xi_{t^n_{i-1}}| > \frac{Dh^\rho_n}{4} \Bigm| \mathcal{F}^n_{i-1}\Bigr) + P\Bigl(\sup_{t\in[t^n_{i-1},\tau^n_i)}|\delta_t - \delta_{t^n_{i-1}}| > \frac{Dh^\rho_n}{4} \Bigm| \mathcal{F}^n_{i-1}\Bigr) + P\Bigl(\sup_{t\in[t^n_{i-1},\tau^n_i)}|\varepsilon_t - \varepsilon_{t^n_{i-1}}| > \frac{Dh^\rho_n}{4} \Bigm| \mathcal{F}^n_{i-1}\Bigr) + P\Bigl(C\sup_{t\in[t^n_{i-1},\tau^n_i)}|\zeta_t - \zeta_{t^n_{i-1}}| > \frac{Dh^\rho_n}{4} \Bigm| \mathcal{F}^n_{i-1}\Bigr) = R_{i-1}(h^p_n, \xi, \delta, \varepsilon, \zeta)$$
for all $p \geq 1$. In an analogous manner to Lemma 2.1 in Shimizu and Yoshida [22], we have
$$P\Bigl(C\sup_{t\in[\eta^n_i,t^n_i)}|\xi_{t^n_i} - \xi_t| > Dh^\rho_n \Bigm| \mathcal{F}^n_{i-1}\Bigr) = R_{i-1}(h^p_n, \xi),$$
and the corresponding bounds for $\delta$, $\varepsilon$ and $\zeta$ on $[\eta^n_i, t^n_i)$, for any $p \geq 1$, which implies (3.2). □

Lemma 4. Let $m \geq n$. For a matrix $M \in \mathbb{R}^{m\times n}$ and a vector $x \in \mathbb{R}^n$, if $M$ is of full column rank, then
$$|Mx| \geq \bigl(\min_i \sigma_i\bigr)|x|,$$
where $\sigma_1,\dots,\sigma_n > 0$ are the singular values of $M$.

Proof. Consider the singular value decomposition of $M$: $M = U\Lambda V^\top$, where $U$ and $V$ are $m\times n$ and $n\times n$ matrices such that $U^\top U = V^\top V = I_n$, and $\Lambda = \mathrm{Diag}(\sigma_1,\dots,\sigma_n)$. Using the fact that $U^\top U = I_n$, one gets
$$|Mx|^2 = x^\top M^\top M x = x^\top V\Lambda U^\top U\Lambda V^\top x = x^\top V\Lambda^2 V^\top x = \sum_{i=1}^n \sigma^2_i\bigl((V^\top x)^{(i)}\bigr)^2 \geq \bigl(\min_i \sigma^2_i\bigr)|V^\top x|^2.$$
Note that $VV^\top = I_n$ since $V$ is a square matrix and $V^\top V = I_n$. Consequently, we have $|V^\top x|^2 = x^\top VV^\top x = |x|^2$, which completes the proof. □

Proofs of Lemmas 2-3. First of all, we prove (3.8). It is sufficient to show the case where $(k_1,k_2,k_3,k_4) = (2,0,0,0) \in K_4$.
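The singular-value bound of Lemma 4 above, which is used repeatedly in the argument below, is easy to sanity-check numerically: for any full-column-rank $M$, $|Mx|$ is bounded below by the smallest singular value times $|x|$. A throwaway numpy check (the matrix and vector are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 3))   # tall matrix, full column rank almost surely
x = rng.normal(size=3)

sigma_min = np.linalg.svd(M, compute_uv=False).min()   # smallest singular value
lhs = np.linalg.norm(M @ x)
rhs = sigma_min * np.linalg.norm(x)
```

Here `lhs >= rhs` always holds, with equality exactly when $x$ is aligned with the right-singular vector belonging to $\sigma_{\min}$.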
Since
$$P\bigl(C^n_{i,2,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) = P\bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} \geq 2,\ J^n_{i,2} = 0,\ J^n_{i,3} = 0,\ J^n_{i,4} = 0 \bigm| \mathcal{F}^n_{i-1}\bigr) \leq P\bigl(J^n_{i,1} \geq 2 \bigm| \mathcal{F}^n_{i-1}\bigr)$$
and
$$P\bigl(J^n_{i,1} \geq 2 \bigm| \mathcal{F}^n_{i-1}\bigr) = \sum_{k=2}^\infty \frac{(\lambda_{1,0}h_n)^k}{k!}e^{-\lambda_{1,0}h_n} = \sum_{l=0}^\infty \frac{(\lambda_{1,0}h_n)^{l+2}}{(l+2)!}e^{-\lambda_{1,0}h_n} \leq (\lambda_{1,0}h_n)^2\sum_{l=0}^\infty \frac{(\lambda_{1,0}h_n)^l}{l!}e^{-\lambda_{1,0}h_n} = \lambda^2_{1,0}h^2_n \leq \Bigl(\max_{j=1,2,3,4}\lambda^2_{j,0}\Bigr)h^2_n$$
for $i = 1,\dots,n$, we obtain
$$P\bigl(C^n_{i,2,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) \leq \Bigl(\max_{j=1,2,3,4}\lambda^2_{j,0}\Bigr)h^2_n.$$
Similarly, (3.17) can be shown. Furthermore, we see
$$P\bigl(C^n_{i,1,1,1,1} \bigm| \mathcal{F}^n_{i-1}\bigr) = P\bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = 1,\ J^n_{i,3} = 1,\ J^n_{i,4} = 1 \bigm| \mathcal{F}^n_{i-1}\bigr) \leq P\bigl(J^n_{i,1} = 1,\ J^n_{i,2} = 1,\ J^n_{i,3} = 1,\ J^n_{i,4} = 1 \bigm| \mathcal{F}^n_{i-1}\bigr) = \prod_{j=1}^4 P\bigl(J^n_{i,j} = 1 \bigm| \mathcal{F}^n_{i-1}\bigr) = e^{-\lambda_0 h_n}\lambda_{1,0}\lambda_{2,0}\lambda_{3,0}\lambda_{4,0}h^4_n \leq \lambda_{1,0}\lambda_{2,0}\lambda_{3,0}\lambda_{4,0}h^4_n,$$
which yields (3.4). In the same way, we can
obtain (3.6)-(3.7), (3.10), and (3.15)-(3.16). Note that $\tau^n_i = t^n_i$ on $\{J^n_{i,1} = 0,\ J^n_{i,2} = 0,\ J^n_{i,3} = 0,\ J^n_{i,4} = 0\}$, and Taylor's theorem implies $1 - e^{-\lambda_0 h_n} \leq \lambda_0 h_n$. It holds from Lemma 1 that
$$P\bigl(D^n_{i,0,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) = P\bigl(|\Delta^n_i X| > Dh^\rho_n,\ J^n_{i,1} = \dots = J^n_{i,4} = 0 \bigm| \mathcal{F}^n_{i-1}\bigr) \leq P\bigl(|X_{\tau^n_i} - X_{t^n_{i-1}}| > Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) \leq P\Bigl(\sup_{t\in[t^n_{i-1},\tau^n_i)}|X_t - X_{t^n_{i-1}}| > Dh^\rho_n \Bigm| \mathcal{F}^n_{i-1}\Bigr) = R_{i-1}(h^p_n, \xi, \delta, \varepsilon, \zeta)$$
and
$$P\bigl(C^n_{i,0,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) = P\bigl(J^n_{i,1} = \dots = J^n_{i,4} = 0 \bigm| \mathcal{F}^n_{i-1}\bigr) - P\bigl(|\Delta^n_i X| > Dh^\rho_n,\ J^n_{i,1} = \dots = J^n_{i,4} = 0 \bigm| \mathcal{F}^n_{i-1}\bigr) = e^{-\lambda_0 h_n} - R_{i-1}(h^p_n,\xi,\delta,\varepsilon,\zeta) = \tilde{R}_{i-1}(h_n,\xi,\delta,\varepsilon,\zeta)$$
for any $p \geq 1$, which completes the proof of (3.3) and (3.9). Finally, we consider (3.5) and (3.11)-(3.14). It is sufficient to show the case where $(k_1,k_2,k_3,k_4) = (1,0,0,0) \in K_1$. First, we see
$$P\bigl(C^n_{i,1,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) = P\Bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0,\ |\Delta Z_{1,\tau^n_i}| > \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr) + P\Bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0,\ |\Delta Z_{1,\tau^n_i}| \leq \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr),$$
where $\sigma_0$ is the minimum singular value of $\Lambda_{1,0}$, $c_{1,0}$ is the positive constant in [A4],
$$Z_{1,t} = \int_{[0,t]\times E_1} z_1\, p_1(dt, dz_1),$$
and $\Delta Z_{1,\tau^n_i}$ has the density $F_1$ under $\mathcal{F}^n_{i-1}$. Note that $\sigma_0 > 0$ since $\Lambda_{1,0}$ is of full column rank. On $\{J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0\}$, it follows from Lemma 4 that
$$|X_{\tau^n_i} - X_{\tau^n_i-}| = \biggl|\begin{pmatrix}\Lambda_{1,0} \\ \Lambda_{2,0}\Psi^{-1}_0\Gamma_0\end{pmatrix}(\xi_{\tau^n_i} - \xi_{\tau^n_i-})\biggr| \geq \bigl|\Lambda_{1,0}(\xi_{\tau^n_i} - \xi_{\tau^n_i-})\bigr| \geq \sigma_0|\xi_{\tau^n_i} - \xi_{\tau^n_i-}| = \sigma_0\bigl|c_1(\xi_{\tau^n_i},\Delta Z_{1,\tau^n_i})\bigr| \geq \sigma_0 c_{1,0}|\Delta Z_{1,\tau^n_i}|$$
when $|\Delta\xi_{\tau^n_i}|$ is small enough, and it holds from the triangle inequality and $\tau^n_i = \eta^n_i$ that
$$|X_{\tau^n_i} - X_{\tau^n_i-}| \leq |X_{\tau^n_i} - X_{t^n_i}| + |X_{t^n_i} - X_{t^n_{i-1}}| + |X_{t^n_{i-1}} - X_{\tau^n_i-}| = |X_{t^n_i} - X_{\eta^n_i}| + |\Delta^n_i X| + |X_{\tau^n_i-} - X_{t^n_{i-1}}|.$$
Thus, one gets
$$\sup_{t\in[\eta^n_i,t^n_i)}|X_{t^n_i} - X_t| + \sup_{t\in[t^n_{i-1},\tau^n_i)}|X_t - X_{t^n_{i-1}}| \geq |X_{\tau^n_i} - X_{\tau^n_i-}| - |\Delta^n_i X| \geq \sigma_0 c_{1,0}|\Delta Z_{1,\tau^n_i}| - Dh^\rho_n > Dh^\rho_n$$
when $|\Delta\xi_{\tau^n_i}|$ is small enough, on the event $\bigl\{|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0,\ |\Delta Z_{1,\tau^n_i}| > 2Dh^\rho_n/(\sigma_0 c_{1,0})\bigr\}$.
Consequently, Lemma 1 shows
$$P\Bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0,\ |\Delta Z_{1,\tau^n_i}| > \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr) \leq P\Bigl(\sup_{t\in[\eta^n_i,t^n_i)}|X_{t^n_i} - X_t| + \sup_{t\in[t^n_{i-1},\tau^n_i)}|X_t - X_{t^n_{i-1}}| > Dh^\rho_n \Bigm| \mathcal{F}^n_{i-1}\Bigr) \leq P\Bigl(\sup_{t\in[\eta^n_i,t^n_i)}|X_{t^n_i} - X_t| > \frac{Dh^\rho_n}{2} \Bigm| \mathcal{F}^n_{i-1}\Bigr) + P\Bigl(\sup_{t\in[t^n_{i-1},\tau^n_i)}|X_t - X_{t^n_{i-1}}| > \frac{Dh^\rho_n}{2} \Bigm| \mathcal{F}^n_{i-1}\Bigr) = R_{i-1}(h^p_n,\xi,\delta,\varepsilon,\zeta)$$
for all $p \geq 1$. Since $h^\rho_n \longrightarrow 0$ as $n \longrightarrow \infty$, we note that $2Dh^\rho_n/(\sigma_0 c_{1,0}) \leq r_1$ for a sufficiently large $n$, where $r_1$ is the positive constant in [A3]. It follows that
$$P\Bigl(|\Delta Z_{1,\tau^n_i}| \leq \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1},\ J^n_{i,1} = 1\Bigr) = \int_{\{|z_1|\leq 2\sigma^{-1}_0 c^{-1}_{1,0}Dh^\rho_n\}\setminus\{0\}} F_1(z_1)\,dz_1 = \frac{1}{\lambda_{1,0}}\int_{\{|z_1|\leq 2\sigma^{-1}_0 c^{-1}_{1,0}Dh^\rho_n\}\setminus\{0\}} f_1(z_1)\mathbf{1}_{\{|z_1|\leq r_1\}}\,dz_1 \leq \frac{K_1}{\lambda_{1,0}}\int_{\{|z_1|\leq 2\sigma^{-1}_0 c^{-1}_{1,0}Dh^\rho_n\}\setminus\{0\}} |z_1|^{1-k_1}\,dz_1 \leq \frac{Ch^\rho_n}{\lambda_{1,0}}$$
for a sufficiently large $n$, so that
$$P\Bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0,\ |\Delta Z_{1,\tau^n_i}| \leq \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr) \leq P\Bigl(J^n_{i,1} = 1,\ |\Delta Z_{1,\tau^n_i}| \leq \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr) = P\Bigl(|\Delta Z_{1,\tau^n_i}| \leq \frac{2Dh^\rho_n}{\sigma_0 c_{1,0}} \Bigm| \mathcal{F}^n_{i-1},\ J^n_{i,1} = 1\Bigr)P\bigl(J^n_{i,1} = 1\bigr) \leq Ch^{\rho+1}_n.$$
Therefore, since $p$ is arbitrary, we obtain
$$P\bigl(C^n_{i,1,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) = R_{i-1}(h^{\rho+1}_n, \xi,\delta,\varepsilon,\zeta)$$
for a sufficiently large $n$. Moreover, for a sufficiently large $n$, one gets
$$P\bigl(D^n_{i,1,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr) = P\bigl(J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0 \bigm| \mathcal{F}^n_{i-1}\bigr) - P\bigl(|\Delta^n_i X| \leq Dh^\rho_n,\ J^n_{i,1} = 1,\ J^n_{i,2} = J^n_{i,3} = J^n_{i,4} = 0 \bigm| \mathcal{F}^n_{i-1}\bigr) = e^{-\lambda_0 h_n}\lambda_{1,0}h_n - R_{i-1}(h^{\rho+1}_n,\xi,\delta,\varepsilon,\zeta) = \lambda_{1,0}h_n\tilde{R}_{i-1}(h^\rho_n,\xi,\delta,\varepsilon,\zeta),$$
which yields (3.11). Similarly, (3.12)-(3.14) can be shown. □

Proposition 1. Under [A1], [A3] and [A4], as $n \longrightarrow \infty$,
$$\frac{\tilde{N}_n}{n} \xrightarrow{p} 1 \quad (5.1) \qquad \text{and} \qquad \sqrt{n}\Bigl(\frac{\tilde{N}_n}{n} - 1\Bigr) \xrightarrow{p} 0 \quad (5.2)$$
under $P$.

Proof. First, we show (5.1). Lemma 2 implies
$$P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) = P\Bigl(\bigcup_{k_1,k_2,k_3,k_4=0,1,2} C^n_{i,k_1,k_2,k_3,k_4} \Bigm| \mathcal{F}^n_{i-1}\Bigr) = \sum_{k_1,k_2,k_3,k_4=0,1,2} P\bigl(C^n_{i,k_1,k_2,k_3,k_4} \bigm| \mathcal{F}^n_{i-1}\bigr) = \tilde{R}_{i-1}(h_n,\xi,\delta,\varepsilon,\zeta)$$
for a sufficiently large $n$; in other words, there exists $N_1 \in \mathbb{N}$ such that this identity holds for any $n \geq N_1$. Fix $\varepsilon > 0$ arbitrarily. Moreover, we have
$$\frac{1}{n}\sum_{i=1}^n \tilde{R}_{i-1}(h_n,\xi,\delta,\varepsilon,\zeta) = 1 - \frac{h_n}{n}\sum_{i=1}^n R_{i-1}(1,\xi,\delta,\varepsilon,\zeta) \xrightarrow{p} 1,$$
so that there exists $N_2 \in \mathbb{N}$ such that
$$P\Bigl(\Bigl|\frac{1}{n}\sum_{i=1}^n \tilde{R}_{i-1}(h_n,\xi,\delta,\varepsilon,\zeta) - 1\Bigr| > \varepsilon\Bigr) < \varepsilon$$
for all $n \geq N_2$. Thus, for any $n \geq \max\{N_1, N_2\}$, one gets
$$P\Bigl(\Bigl|\frac{1}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) - 1\Bigr| > \varepsilon\Bigr) = P\Bigl(\Bigl|\frac{1}{n}\sum_{i=1}^n \tilde{R}_{i-1}(h_n,\xi,\delta,\varepsilon,\zeta) - 1\Bigr| > \varepsilon\Bigr) < \varepsilon,$$
which yields
$$\frac{1}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) \xrightarrow{p} 1. \quad (5.3)$$
Since
$$\sum_{i=1}^n E\Bigl[\frac{1}{n}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] = \frac{1}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) \xrightarrow{p} 1$$
and
$$\sum_{i=1}^n E\Bigl[\frac{1}{n^2}\mathbf{1}^2_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] = \frac{1}{n}\times\frac{1}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) \xrightarrow{p} 0,$$
it follows from Lemma 9 in Genon-Catalot and Jacod [4] that
$$\frac{N_n}{n} = \frac{1}{n}\sum_{i=1}^n \mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \xrightarrow{p} 1.$$
Consequently, we see
$$P\bigl(N_n = 0\bigr) = P\Bigl(\frac{N_n}{n} = 0\Bigr) \leq P\Bigl(\Bigl|\frac{N_n}{n} - 1\Bigr| > \frac{1}{2}\Bigr) \longrightarrow 0,$$
so that we get
$$P\Bigl(\Bigl|\frac{\tilde{N}_n}{n} - 1\Bigr| > \varepsilon\Bigr) = P\Bigl(\Bigl\{\Bigl|\frac{\tilde{N}_n}{n} - 1\Bigr| > \varepsilon\Bigr\}\cap\{N_n \neq 0\}\Bigr) + P\Bigl(\Bigl\{\Bigl|\frac{\tilde{N}_n}{n} - 1\Bigr| > \varepsilon\Bigr\}\cap\{N_n = 0\}\Bigr) \leq P\Bigl(\Bigl|\frac{N_n}{n} - 1\Bigr| > \varepsilon\Bigr) + P\bigl(N_n = 0\bigr) \longrightarrow 0, \quad (5.4)$$
which implies (5.1). Next, we prove (5.2). Since
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n R_{i-1}(h_n,\xi,\delta,\varepsilon,\zeta) = \sqrt{nh^2_n}\times\frac{1}{n}\sum_{i=1}^n R_{i-1}(1,\xi,\delta,\varepsilon,\zeta) \xrightarrow{p} 0,$$
it is shown that
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n \Bigl\{P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) - 1\Bigr\} \xrightarrow{p} 0$$
in an analogous manner to the proof of (5.3). Thus, it follows from (5.3) that
$$\sum_{i=1}^n E\Bigl[\frac{1}{\sqrt{n}}\bigl(\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} - 1\bigr) \Bigm| \mathcal{F}^n_{i-1}\Bigr] = \frac{1}{\sqrt{n}}\sum_{i=1}^n \Bigl\{P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) - 1\Bigr\} \xrightarrow{p} 0$$
and
$$\sum_{i=1}^n E\Bigl[\frac{1}{n}\bigl(\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} - 1\bigr)^2 \Bigm| \mathcal{F}^n_{i-1}\Bigr] = \frac{1}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) - \frac{2}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) + 1 = -\frac{1}{n}\sum_{i=1}^n P\bigl(|\Delta^n_i X| \leq Dh^\rho_n \bigm| \mathcal{F}^n_{i-1}\bigr) + 1 \xrightarrow{p} 0,$$
so that it holds from Lemma 9 in Genon-Catalot and Jacod [4] that
$$\sqrt{n}\Bigl(\frac{N_n}{n} - 1\Bigr) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \bigl(\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} - 1\bigr) \xrightarrow{p} 0.$$
In a similar way to (5.4), we complete the proof of (5.2). □

Proposition 2. Under [A1]-[A4], for a sufficiently large $n$,
$$E\bigl[(\Delta^n_i X_1)^{(j_1)}(\Delta^n_i X_1)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = h_n(\Sigma^{11}_0)_{j_1j_2} + R_{i-1}(h^2_n,\xi,\delta,\varepsilon,\zeta) \quad (5.5)$$
and
$$E\bigl[(\Delta^n_i X_1)^{(j_1)}(\Delta^n_i X_1)^{(j_2)}(\Delta^n_i X_1)^{(j_3)}(\Delta^n_i X_1)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = h^2_n\bigl\{(\Sigma^{11}_0)_{j_1j_2}(\Sigma^{11}_0)_{j_3j_4} + (\Sigma^{11}_0)_{j_1j_3}(\Sigma^{11}_0)_{j_2j_4} + (\Sigma^{11}_0)_{j_1j_4}(\Sigma^{11}_0)_{j_2j_3}\bigr\} + R_{i-1}(h^{5\rho+1}_n,\xi,\delta,\varepsilon,\zeta) + R_{i-1}(h^3_n,\xi,\delta,\varepsilon,\zeta) \quad (5.6)$$
for $j_1,j_2,j_3,j_4 = 1,\dots,p_1$.

Proof. We only prove (5.6); by noting that $3\rho + 1 \geq 2$, (5.5) can be shown in an analogous manner. First, we define the stochastic processes $\{\xi^c_t\}_{t\geq0}$ and $\{\delta^c_t\}_{t\geq0}$ by the diffusion processes
$$d\xi^c_t = a_1(\xi^c_t)dt + S_{1,0}dW_{1,t}, \quad \xi^c_0 = x_{1,0}, \qquad d\delta^c_t = a_2(\delta^c_t)dt + S_{2,0}dW_{2,t}, \quad \delta^c_0 = x_{2,0},$$
independent of the $J^n_{i,j}$, and let $X^c_{1,t} = \Lambda_{1,0}\xi^c_t + \delta^c_t$. In a similar way to Lemma 21 in Kusano and Uchida [12], it is shown that
$$E\bigl[(\Delta^n_i X^c_1)^{(j_1)}(\Delta^n_i X^c_1)^{(j_2)}(\Delta^n_i X^c_1)^{(j_3)}(\Delta^n_i X^c_1)^{(j_4)} \bigm| \mathcal{F}^n_{i-1}\bigr] = h^2_n\bigl\{(\Sigma^{11}_0)_{j_1j_2}(\Sigma^{11}_0)_{j_3j_4} + (\Sigma^{11}_0)_{j_1j_3}(\Sigma^{11}_0)_{j_2j_4} + (\Sigma^{11}_0)_{j_1j_4}(\Sigma^{11}_0)_{j_2j_3}\bigr\} + R_{i-1}(h^3_n,\xi,\delta), \quad (5.7)$$
where $R_{i-1}(h^3_n,\xi,\delta) = R_{i-1}(h^3_n,\xi) + R_{i-1}(h^3_n,\delta)$.
Write $\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)} = (\Delta^n_i X_1)^{(j_1)}(\Delta^n_i X_1)^{(j_2)}(\Delta^n_i X_1)^{(j_3)}(\Delta^n_i X_1)^{(j_4)}$ for short. Since $\{J^n_{i,1} = 0,\ J^n_{i,2} = 0,\ J^n_{i,3} = 0,\ J^n_{i,4} = 0\} = C^n_{i,0,0,0,0} \cup D^n_{i,0,0,0,0}$ and $C^n_{i,0,0,0,0} \cap D^n_{i,0,0,0,0} = \emptyset$, we have
$$\mathbf{1}_{\{J^n_{i,1}=0,\dots,J^n_{i,4}=0\}} = \mathbf{1}_{C^n_{i,0,0,0,0}} + \mathbf{1}_{D^n_{i,0,0,0,0}},$$
which implies that
$$E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{C^n_{i,0,0,0,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] = E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{\{J^n_{i,1}=0,\dots,J^n_{i,4}=0\}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] - E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{D^n_{i,0,0,0,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr]. \quad (5.8)$$
The independence of $\{\xi_t\}_{t\geq0}$, $\{\delta_t\}_{t\geq0}$, $\{\varepsilon_t\}_{t\geq0}$ and $\{\zeta_t\}_{t\geq0}$ yields
$$P\bigl(J^n_{i,1}=0,\dots,J^n_{i,4}=0 \bigm| \mathcal{F}^n_{i-1}\bigr) = \prod_{j=1}^4 P\bigl(J^n_{i,j}=0 \bigm| \mathcal{F}^n_{i-1}\bigr) = e^{-\lambda_0 h_n},$$
so that it holds from (5.7) that
$$E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{\{J^n_{i,1}=0,\dots,J^n_{i,4}=0\}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] = E\Bigl[\prod_{l=1}^4(\Delta^n_i X^c_1)^{(j_l)} \Bigm| \mathcal{F}^n_{i-1}\Bigr]P\bigl(J^n_{i,1}=0,\dots,J^n_{i,4}=0 \bigm| \mathcal{F}^n_{i-1}\bigr) = h^2_n\bigl\{(\Sigma^{11}_0)_{j_1j_2}(\Sigma^{11}_0)_{j_3j_4} + (\Sigma^{11}_0)_{j_1j_3}(\Sigma^{11}_0)_{j_2j_4} + (\Sigma^{11}_0)_{j_1j_4}(\Sigma^{11}_0)_{j_2j_3}\bigr\} + h^2_n\bigl(e^{-\lambda_0 h_n}-1\bigr)\bigl\{\cdots\bigr\} + h^3_n R_{i-1}(1,\xi,\delta) = h^2_n\bigl\{(\Sigma^{11}_0)_{j_1j_2}(\Sigma^{11}_0)_{j_3j_4} + (\Sigma^{11}_0)_{j_1j_3}(\Sigma^{11}_0)_{j_2j_4} + (\Sigma^{11}_0)_{j_1j_4}(\Sigma^{11}_0)_{j_2j_3}\bigr\} + R_{i-1}(h^3_n,\xi,\delta).$$
On the other hand, the Cauchy-Schwarz inequality and Lemma 3 yield
$$E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{D^n_{i,0,0,0,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] \leq E\Bigl[\Bigl(\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\Bigr)^2 \Bigm| \mathcal{F}^n_{i-1}\Bigr]^{\frac12}P\bigl(D^n_{i,0,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr)^{\frac12} \leq E\bigl[|\Delta^n_i X_1|^8 \bigm| \mathcal{F}^n_{i-1}\bigr]^{\frac12}P\bigl(D^n_{i,0,0,0,0} \bigm| \mathcal{F}^n_{i-1}\bigr)^{\frac12} \leq \Bigl\{CE\bigl[|\Delta^n_i\xi|^8 \bigm| \mathcal{F}^n_{i-1}\bigr]^{\frac12} + CE\bigl[|\Delta^n_i\delta|^8 \bigm| \mathcal{F}^n_{i-1}\bigr]^{\frac12}\Bigr\}R_{i-1}(h^{\frac p2}_n,\xi,\delta,\varepsilon,\zeta) \leq R_{i-1}(h^{\frac p2+\frac12}_n,\xi,\delta,\varepsilon,\zeta)$$
for any $p \geq 1$, since it is shown that $E[|\Delta^n_i\xi|^l \mid \mathcal{F}^n_{i-1}] = R_{i-1}(h_n,\xi)$ and $E[|\Delta^n_i\delta|^l \mid \mathcal{F}^n_{i-1}] = R_{i-1}(h_n,\delta)$ for any $l \geq 2$, in an analogous manner to Proposition 3.1 in Shimizu and Yoshida [22]. Consequently, it follows from (5.8) that
$$E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{C^n_{i,0,0,0,0}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] = h^2_n\bigl\{(\Sigma^{11}_0)_{j_1j_2}(\Sigma^{11}_0)_{j_3j_4} + (\Sigma^{11}_0)_{j_1j_3}(\Sigma^{11}_0)_{j_2j_4} + (\Sigma^{11}_0)_{j_1j_4}(\Sigma^{11}_0)_{j_2j_3}\bigr\} + R_{i-1}(h^3_n,\xi,\delta,\varepsilon,\zeta). \quad (5.9)$$
Recall that $\{|\Delta^n_i X| \leq Dh^\rho_n\} = \bigcup_{k_1,k_2,k_3,k_4=0,1,2} C^n_{i,k_1,k_2,k_3,k_4}$ and the $C^n_{i,k_1,k_2,k_3,k_4}$ are disjoint. Since $\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} = \sum_{k_1,k_2,k_3,k_4=0,1,2}\mathbf{1}_{C^n_{i,k_1,k_2,k_3,k_4}}$, the conditional expectation in (5.6) splits into the contributions of $C^n_{i,0,0,0,0}$, $C^n_{i,1,1,1,1}$ and the classes $K_1$, $K_2$, $K_3$, $K_4$. It holds from Lemma 2 that
$$E\Bigl[\prod_{l=1}^4(\Delta^n_i X_1)^{(j_l)}\,\mathbf{1}_{C^n_{i,1,1,1,1}} \Bigm| \mathcal{F}^n_{i-1}\Bigr] \leq E\bigl[|\Delta^n_i X|^4\,\mathbf{1}_{C^n_{i,1,1,1,1}} \bigm| \mathcal{F}^n_{i-1}\bigr] \leq D^4h^{4\rho}_n P\bigl(C^n_{i,1,1,1,1} \bigm| \mathcal{F}^n_{i-1}\bigr) \leq D^4\lambda_{1,0}\lambda_{2,0}\lambda_{3,0}\lambda_{4,0}h^{4\rho+4}_n,$$
so that this contribution is $R_{i-1}(h^{4\rho+4}_n,\xi,\delta,\varepsilon,\zeta)$. Similarly, for a sufficiently large $n$, Lemma 2 implies that the contributions of the classes $K_1$, $K_2$, $K_3$ and $K_4$ are $R_{i-1}(h^{5\rho+1}_n,\xi,\delta,\varepsilon,\zeta)$, $R_{i-1}(h^{4\rho+2}_n,\xi,\delta,\varepsilon,\zeta)$, $R_{i-1}(h^{4\rho+3}_n,\xi,\delta,\varepsilon,\zeta)$ and $R_{i-1}(h^{4\rho+2}_n,\xi,\delta,\varepsilon,\zeta)$, respectively. Since $5\rho+1 < 4\rho+2$, it follows from (5.9) that (5.6) holds for a sufficiently large $n$, which completes the proof. □

Proposition 3. Under [A1]-[A4], for a sufficiently large $n$,
$$E\bigl[(\Delta^n_i X_2)^{(j_1)}(\Delta^n_i X_2)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = h_n(\Sigma^{22}_0)_{j_1j_2} + R_{i-1}(h^2_n,\xi,\delta,\varepsilon,\zeta)$$
and
$$E\bigl[(\Delta^n_i X_2)^{(j_1)}(\Delta^n_i X_2)^{(j_2)}(\Delta^n_i X_2)^{(j_3)}(\Delta^n_i X_2)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = h^2_n\bigl\{(\Sigma^{22}_0)_{j_1j_2}(\Sigma^{22}_0)_{j_3j_4} + (\Sigma^{22}_0)_{j_1j_3}(\Sigma^{22}_0)_{j_2j_4} + (\Sigma^{22}_0)_{j_1j_4}(\Sigma^{22}_0)_{j_2j_3}\bigr\} + R_{i-1}(h^{5\rho+1}_n,\xi,\delta,\varepsilon,\zeta) + R_{i-1}(h^3_n,\xi,\delta,\varepsilon,\zeta)$$
for $j_1,j_2,j_3,j_4 = 1,\dots,p_2$.

Proposition 4. Under [A1]-[A4], for a sufficiently large $n$,
$$E\bigl[(\Delta^n_i X_1)^{(j_1)}(\Delta^n_i X_2)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = h_n(\Sigma^{12}_0)_{j_1j_2} + h^2_n R_{i-1}(1,\xi,\delta,\varepsilon,\zeta)$$
for $j_1 = 1,\dots,p_1$ and $j_2 = 1,\dots,p_2$, and the mixed fourth moments admit the same Wick-type expansions as (5.6), each pair of indices contributing the corresponding block of $\Sigma_0$; for instance,
$$E\bigl[(\Delta^n_i X_1)^{(j_1)}(\Delta^n_i X_1)^{(j_2)}(\Delta^n_i X_2)^{(j_3)}(\Delta^n_i X_2)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = h^2_n\bigl\{(\Sigma^{11}_0)_{j_1j_2}(\Sigma^{22}_0)_{j_3j_4} + (\Sigma^{12}_0)_{j_1j_3}(\Sigma^{12}_0)_{j_2j_4} + (\Sigma^{12}_0)_{j_1j_4}(\Sigma^{12}_0)_{j_2j_3}\bigr\} + R_{i-1}(h^{5\rho+1}_n,\xi,\delta,\varepsilon,\zeta) + R_{i-1}(h^3_n,\xi,\delta,\varepsilon,\zeta)$$
for $j_1,j_2 = 1,\dots,p_1$ and $j_3,j_4 = 1,\dots,p_2$, and analogously for the products of three $\Delta^n_i X_1$-factors with one $\Delta^n_i X_2$-factor and of one $\Delta^n_i X_1$-factor with three $\Delta^n_i X_2$-factors.

Proofs of Propositions 3-4. In an analogous manner to Proposition 2, these results can be shown. □

Proposition 5. Under [A1]-[A4], as $n \longrightarrow \infty$,
$$\frac{1}{nh_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} (\Sigma_0)_{j_1j_2} \quad (5.10)$$
and
$$\frac{1}{n^2h^2_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} 0 \quad (5.11)$$
for $j_1,j_2,j_3,j_4 = 1,\dots,p$.

Proof. Proposition 2 implies the convergence (5.10) for the $\Sigma^{11}_0$-block, and Propositions 3-4 imply the analogous convergences for the $\Sigma^{12}_0$- and $\Sigma^{22}_0$-blocks, which together yield (5.10). Since $5\rho+1 > 2$, by using Propositions 2-4, (5.11) can be shown in an analogous way. □
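Propositions 1-5 together are what make the jump-truncated estimator work: the threshold $|\Delta^n_i X| \leq Dh^\rho_n$ asymptotically retains all diffusive increments while discarding the jump ones. A one-dimensional toy sketch of the resulting truncated quadratic-variation estimator (the tuning values $D = 10$ and $\rho = 0.4$ mirror Section 4.4, but the process here is an illustrative Brownian motion with compound-Poisson jumps, not the paper's model; at most one jump per interval is simulated, which is accurate to first order in $h_n$):

```python
import math
import random

def truncated_vol(n: int = 100_000, T: float = 1.0, sigma: float = 1.5,
                  lam: float = 3.0, D: float = 10.0, rho: float = 0.4,
                  seed: int = 7) -> float:
    """Truncated quadratic-variation estimate of sigma^2 from n increments."""
    rng = random.Random(seed)
    h = T / n
    acc, kept = 0.0, 0
    for _ in range(n):
        dx = sigma * rng.gauss(0.0, math.sqrt(h))   # diffusive increment
        if rng.random() < lam * h:                  # one jump in this interval
            dx += rng.gauss(0.0, math.sqrt(3.0))    # jump size N(0, 3)
        if abs(dx) <= D * h ** rho:                 # jump filter
            acc += dx * dx
            kept += 1
    return acc / (kept * h)

est = truncated_vol()
```

With these choices the threshold $Dh^\rho_n = 0.1$ is roughly twenty diffusive standard deviations, so essentially no Brownian increment is discarded while almost every jump is, and `est` recovers $\sigma^2 = 2.25$ up to Monte Carlo error.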
Proposition 6. Under [A1]-[A4], as $n \longrightarrow \infty$,
$$\frac{1}{\sqrt{n}h_n}\sum_{i=1}^n \Bigl\{E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] - h_n(\Sigma_0)_{j_1j_2}\Bigr\} \xrightarrow{p} 0, \quad (5.12)$$
$$\frac{1}{nh^2_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr]\,E\bigl[(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} (\Sigma_0)_{j_1j_2}(\Sigma_0)_{j_3j_4}, \quad (5.13)$$
$$\frac{1}{nh^2_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} (\Sigma_0)_{j_1j_2}(\Sigma_0)_{j_3j_4} + (\Sigma_0)_{j_1j_3}(\Sigma_0)_{j_2j_4} + (\Sigma_0)_{j_1j_4}(\Sigma_0)_{j_2j_3} \quad (5.14)$$
and
$$\frac{1}{n^2h^4_n}\sum_{i=1}^n E\bigl[\bigl((\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\bigr)^4\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} 0 \quad (5.15)$$
for $j_1,j_2,j_3,j_4 = 1,\dots,p$.

Proof. By using Propositions 2-4, (5.12)-(5.14) can be shown in a similar way to Lemma 6 in Kusano and Uchida [13]. We prove (5.15). In an analogous manner to the proof of Proposition 2, decomposing the expectation over $C^n_{i,0,0,0,0}$, $C^n_{i,1,1,1,1}$ and the classes $K_1$-$K_4$, one gets
$$E\bigl[\bigl((\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\bigr)^4\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] = R_{i-1}(h^4_n,\xi,\delta,\varepsilon,\zeta)$$
for a sufficiently large $n$, which yields (5.15). □

Proof of Theorem 1. To show (3.18), it is sufficient to prove
$$\frac{1}{\tilde{N}_nh_n}\sum_{i=1}^n (\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \xrightarrow{p} (\Sigma_0)_{j_1j_2} \quad (5.16)$$
for $j_1,j_2 = 1,\dots,p$. Since it holds from Proposition 5 that
$$\frac{1}{nh_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} (\Sigma_0)_{j_1j_2}$$
and
$$\frac{1}{n^2h^2_n}\sum_{i=1}^n E\bigl[\bigl((\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\bigr)^2\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} 0,$$
it follows from Lemma 9 in Genon-Catalot and Jacod [4] that
$$\frac{1}{nh_n}\sum_{i=1}^n (\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \xrightarrow{p} (\Sigma_0)_{j_1j_2},$$
so that Proposition 1 implies
$$\frac{n}{\tilde{N}_n}\times\frac{1}{nh_n}\sum_{i=1}^n (\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \xrightarrow{p} (\Sigma_0)_{j_1j_2},$$
which yields (5.16). Next, we consider the following convergence:
$$\sqrt{n}\bigl(\mathrm{vec}\,\hat\Sigma_n - \mathrm{vec}\,\Sigma_0\bigr) = \sum_{i=1}^n L^n_i \xrightarrow{d} N_{p^2}\bigl(0, G_0\bigr), \quad (5.17)$$
where $G_0$ is the $p^2\times p^2$ matrix whose elements are
$$(G_0)_{p(j_2-1)+j_1,\,p(j_4-1)+j_3} = (\Sigma_0)_{j_1j_3}(\Sigma_0)_{j_2j_4} + (\Sigma_0)_{j_1j_4}(\Sigma_0)_{j_2j_3}$$
for $j_1,j_2,j_3,j_4 = 1,\dots,p$, and
$$L^n_i = \mathrm{vec}\Bigl(\frac{\sqrt{n}}{\tilde{N}_nh_n}(\Delta^n_i X)(\Delta^n_i X)^\top\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} - \frac{1}{\sqrt{n}}\Sigma_0\Bigr).$$
Since $D^+_pG_0D^{+\top}_p = 2D^+_p(\Sigma_0\otimes\Sigma_0)D^{+\top}_p$, if (5.17) holds, (3.19) is shown in a similar way to Theorem 2 in Kusano and Uchida [13]. This is why we show (5.17). In order to prove (5.17), it is sufficient to show
$$\sum_{i=1}^n E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} 0, \quad (5.18)$$
$$\sum_{i=1}^n \Bigl\{E\bigl[L^n_iL^{n\top}_i \bigm| \mathcal{F}^n_{i-1}\bigr] - E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]^\top\Bigr\} \xrightarrow{p} G_0 \quad (5.19)$$
and
$$\sum_{i=1}^n E\bigl[|L^n_i|^4 \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} 0 \quad (5.20)$$
by using Theorems 3.2 and 3.4 in Hall and Heyde [5]. Note that $(\mathrm{vec}\,A)^{(p(j_2-1)+j_1)} = A_{j_1j_2}$ and $(vv^\top)_{j_1j_2} = v^{(j_1)}v^{(j_2)}$ for any matrix $A \in \mathbb{R}^{p\times p}$ and vector $v \in \mathbb{R}^p$.
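The entrywise definition of $G_0$ in (5.17) is equivalent to the closed form $(I_{p^2} + K_p)(\Sigma_0\otimes\Sigma_0)$, where $K_p$ is the commutation matrix satisfying $K_p\,\mathrm{vec}(A) = \mathrm{vec}(A^\top)$ — the classical asymptotic covariance of a sample covariance matrix under Gaussianity. A numpy sketch building $G_0$ both ways (with an arbitrary illustrative $\Sigma_0$; indices are 0-based, so the vec index is $j_2 p + j_1$):

```python
import numpy as np

p = 3
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])   # illustrative Sigma_0

# Entrywise definition: (G0)[j2*p+j1, j4*p+j3] = S[j1,j3]S[j2,j4] + S[j1,j4]S[j2,j3].
G0 = np.zeros((p * p, p * p))
for j1 in range(p):
    for j2 in range(p):
        for j3 in range(p):
            for j4 in range(p):
                G0[j2 * p + j1, j4 * p + j3] = (Sigma[j1, j3] * Sigma[j2, j4]
                                                + Sigma[j1, j4] * Sigma[j2, j3])

# Closed form (I + K_p)(Sigma (x) Sigma), with K_p the commutation matrix.
K = np.zeros((p * p, p * p))
for i in range(p):
    for j in range(p):
        K[j * p + i, i * p + j] = 1.0
G0_closed = (np.eye(p * p) + K) @ np.kron(Sigma, Sigma)
```

Both constructions agree, and $G_0$ is symmetric; e.g. its top-left entry is $2(\Sigma_0)_{11}^2$.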
Proposition 1 and the delta method imply $\sqrt{n}\bigl(\frac{n}{\tilde{N}_n} - 1\bigr) \xrightarrow{p} 0$, so that it follows from Proposition 6 that
$$\Bigl(\sum_{i=1}^n E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]\Bigr)^{(p(j_2-1)+j_1)} = \frac{n}{\tilde{N}_n}\times\frac{1}{\sqrt{n}h_n}\sum_{i=1}^n \Bigl\{E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] - h_n(\Sigma_0)_{j_1j_2}\Bigr\} + (\Sigma_0)_{j_1j_2}\sqrt{n}\Bigl(\frac{n}{\tilde{N}_n} - 1\Bigr) \xrightarrow{p} 0,$$
which yields (5.18). Since
$$\Bigl(E\bigl[L^n_iL^{n\top}_i \bigm| \mathcal{F}^n_{i-1}\bigr]\Bigr)_{p(j_2-1)+j_1,\,p(j_4-1)+j_3} = \frac{n}{\tilde{N}^2_nh^2_n}E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] - \frac{1}{\tilde{N}_nh_n}E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr](\Sigma_0)_{j_3j_4} - \frac{1}{\tilde{N}_nh_n}(\Sigma_0)_{j_1j_2}E\bigl[(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] + \frac{1}{n}(\Sigma_0)_{j_1j_2}(\Sigma_0)_{j_3j_4}$$
and
$$\Bigl(E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]^\top\Bigr)_{p(j_2-1)+j_1,\,p(j_4-1)+j_3} = \frac{n}{\tilde{N}^2_nh^2_n}E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr]E\bigl[(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] - \frac{1}{\tilde{N}_nh_n}E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr](\Sigma_0)_{j_3j_4} - \frac{1}{\tilde{N}_nh_n}(\Sigma_0)_{j_1j_2}E\bigl[(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] + \frac{1}{n}(\Sigma_0)_{j_1j_2}(\Sigma_0)_{j_3j_4},$$
we see from Propositions 1 and 6 that
$$\sum_{i=1}^n \Bigl(E\bigl[L^n_iL^{n\top}_i \bigm| \mathcal{F}^n_{i-1}\bigr] - E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]E\bigl[L^n_i \bigm| \mathcal{F}^n_{i-1}\bigr]^\top\Bigr)_{p(j_2-1)+j_1,\,p(j_4-1)+j_3} = \Bigl(\frac{n}{\tilde{N}_n}\Bigr)^2\times\frac{1}{nh^2_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] - \Bigl(\frac{n}{\tilde{N}_n}\Bigr)^2\times\frac{1}{nh^2_n}\sum_{i=1}^n E\bigl[(\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr]E\bigl[(\Delta^n_i X)^{(j_3)}(\Delta^n_i X)^{(j_4)}\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] \xrightarrow{p} (\Sigma_0)_{j_1j_3}(\Sigma_0)_{j_2j_4} + (\Sigma_0)_{j_1j_4}(\Sigma_0)_{j_2j_3},$$
which implies (5.19). Furthermore, Propositions 1 and 6 show
$$\sum_{i=1}^n E\bigl[|L^n_i|^4 \bigm| \mathcal{F}^n_{i-1}\bigr] \leq C\sum_{j_1=1}^p\sum_{j_2=1}^p\sum_{i=1}^n E\Bigl[\bigl|(L^n_i)^{(p(j_2-1)+j_1)}\bigr|^4 \Bigm| \mathcal{F}^n_{i-1}\Bigr] \leq \Bigl(\frac{n}{\tilde{N}_n}\Bigr)^4\times\frac{C}{n^2h^4_n}\sum_{j_1=1}^p\sum_{j_2=1}^p\sum_{i=1}^n E\bigl[\bigl((\Delta^n_i X)^{(j_1)}(\Delta^n_i X)^{(j_2)}\bigr)^4\mathbf{1}_{\{|\Delta^n_i X|\leq Dh^\rho_n\}} \bigm| \mathcal{F}^n_{i-1}\bigr] + \frac{C}{n}\sum_{j_1=1}^p\sum_{j_2=1}^p\bigl|(\Sigma_0)_{j_1j_2}\bigr|^4 \xrightarrow{p} 0,$$
so that we obtain (5.20), which completes the proof. □

In Proposition 7 and the proof of Theorem 2, we simply write $P_{\theta_0}$ as $P$. Define
$$H(\theta) = -\frac{1}{2}\log\det\Sigma(\theta) - \frac{1}{2}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}\Sigma(\theta_0)\bigr).$$
Set $\partial_\theta = \partial/\partial\theta$ and $\partial^2_\theta = \partial_\theta\partial^\top_\theta$.

Proposition 7. Under [A1]-[A4], as $n \longrightarrow \infty$,
$$\sup_{\theta\in\Theta}\Bigl|\frac{1}{n}H_n(\theta) - H(\theta)\Bigr| \xrightarrow{p} 0 \quad (5.21) \qquad \text{and} \qquad \sup_{\theta\in\Theta}\Bigl|\frac{1}{n}\partial^2_\theta H_n(\theta) - \partial^2_\theta H(\theta)\Bigr| \xrightarrow{p} 0 \quad (5.22)$$
under $P$.

Proof. Since
$$\frac{1}{n}H_n(\theta) - H(\theta) = -\frac{1}{2}\log\det\Sigma(\theta)\Bigl(\frac{N_n}{n} - 1\Bigr) - \frac{1}{2}\mathrm{tr}\Bigl\{\Sigma(\theta)^{-1}\Bigl(\frac{\tilde{N}_n}{n}\hat\Sigma_n - \Sigma(\theta_0)\Bigr)\Bigr\},$$
Theorem 1 and Proposition 1 imply
$$\sup_{\theta\in\Theta}\Bigl|\frac{1}{n}H_n(\theta) - H(\theta)\Bigr| \leq \sup_{\theta\in\Theta}\bigl|\log\det\Sigma(\theta)\bigr|\Bigl|\frac{N_n}{n} - 1\Bigr| + \sup_{\theta\in\Theta}\bigl\|\Sigma(\theta)^{-1}\bigr\|\Bigl\|\frac{\tilde{N}_n}{n}\hat\Sigma_n - \Sigma(\theta_0)\Bigr\| \xrightarrow{p} 0,$$
which yields (5.21). Next, we show (5.22). Note that
$$\frac{1}{n}\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}H_n(\theta) = \frac{N_n}{2n}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta))\Sigma(\theta)^{-1}(\partial_{\theta^{(j)}}\Sigma(\theta))\bigr) - \frac{N_n}{2n}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}\Sigma(\theta))\bigr) - \frac{\tilde{N}_n}{2n}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta))\Sigma(\theta)^{-1}(\partial_{\theta^{(j)}}\Sigma(\theta))\Sigma(\theta)^{-1}\hat\Sigma_n\bigr) + \frac{\tilde{N}_n}{2n}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}\Sigma(\theta))\Sigma(\theta)^{-1}\hat\Sigma_n\bigr) - \frac{\tilde{N}_n}{2n}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(j)}}\Sigma(\theta))\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta))\Sigma(\theta)^{-1}\hat\Sigma_n\bigr)$$
and
$$\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}H(\theta) = \frac{1}{2}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta))\Sigma(\theta)^{-1}(\partial_{\theta^{(j)}}\Sigma(\theta))\bigr) - \frac{1}{2}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}\Sigma(\theta))\bigr) - \frac{1}{2}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta))\Sigma(\theta)^{-1}(\partial_{\theta^{(j)}}\Sigma(\theta))\Sigma(\theta)^{-1}\Sigma(\theta_0)\bigr) + \frac{1}{2}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}\Sigma(\theta))\Sigma(\theta)^{-1}\Sigma(\theta_0)\bigr) - \frac{1}{2}\mathrm{tr}\bigl(\Sigma(\theta)^{-1}(\partial_{\theta^{(j)}}\Sigma(\theta))\Sigma(\theta)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta))\Sigma(\theta)^{-1}\Sigma(\theta_0)\bigr)$$
for $i,j = 1,\dots,q$. Theorem 1 and Proposition 1 yield
$$\sup_{\theta\in\Theta}\Bigl|\frac{1}{n}\partial^2_\theta H_n(\theta) - \partial^2_\theta H(\theta)\Bigr| \leq \sum_{i=1}^q\sum_{j=1}^q\sup_{\theta\in\Theta}\Bigl|\frac{1}{n}\partial_{\theta^{(i)}}\partial_{\theta^{(j)}}H_n(\theta) - \partial_{\theta^{(i)}}\partial_{\theta^{(j)}}H(\theta)\Bigr|,$$
where each summand is bounded by terms proportional to $\bigl|\frac{N_n}{n} - 1\bigr|$ and $\bigl\|\frac{\tilde{N}_n}{n}\hat\Sigma_n - \Sigma(\theta_0)\bigr\|$ with coefficients $\sup_{\theta\in\Theta}$ of the corresponding trace factors, all of which tend to zero in probability; so that we obtain (5.22). □

Proof of Theorem 2. First, we prove (3.21).
[B1] (a) implies that H(θ) has a unique maximum value at θ=θ0, so that for any ε >0, there exists δ >0 such that |θ−θ0|> ε=⇒H(θ0)−H(θ)> δ. Consequently, since Hn(ˆθn)≥Hn(θ0), Proposition 7 shows P |ˆθn−θ0|> ε ≤P H(θ0)−H(ˆθn)> δ ≤P H(θ0)−1 nHn(θ0)>δ 3 SEM FOR DIFFUSION PROCESSES WITH JUMPS 35 +P1 nHn(θ0)−1 nHn(ˆθn)>δ 3 +P1 nHn(ˆθn)−H(ˆθn)>δ 3 ≤2P sup θ∈Θ 1 nHn(θ)−H(θ) >δ 3 −→0, which yields (3.21). Next, we show (3.22). Let ˇθn,λ=θ0+λ(ˆθn−θ0) and A1,n=ˆθn∈Int(Θ) . Using Taylor’s theorem, on A1,n, we have 0 =1√n∂θHn(ˆθn) =1√n∂θHn(θ0) +1 nZ1 0∂2 θHn(ˇθn,λ)dλ√n ˆθn−θ0 , so that −1√n∂θHn(θ0) =1 nZ1 0∂2 θHn(ˇθn,λ)dλ√n ˆθn−θ0 . (5.23) Since ∂θ(i)Hn(θ) =−Nn 2tr (Σ(θ)−1)(∂θ(i)Σ(θ)) +˜Nn 2tr (Σ(θ)−1)(∂θ(i)Σ(θ))(Σ(θ)−1)ˆΣn , one gets 1√n∂θ(i)Hn(θ0) =˜Nn 2ntr (Σ(θ0)−1)(∂θ(i)Σ(θ0))(Σ(θ0)−1)√n(ˆΣn−Σ(θ0)) +1 2√n˜Nn n−Nn n tr (Σ(θ0)−1)(∂θ(i)Σ(θ0)) fori= 1, . . . , q . Moreover, we see tr (Σ(θ0)−1)(∂θ(i)Σ(θ0))(Σ(θ0)−1)√n(ˆΣn−Σ(θ0)) = vec∂θ(i)Σ(θ0) ⊤ Σ(θ0)−1⊗Σ(θ0)−1√n(vecˆΣn−vecΣ(θ0)) = vech∂θ(i)Σ(θ0) ⊤D⊤ p Σ(θ0)⊗Σ(θ0)−1Dp√n(vech ˆΣn−vechΣ(θ0)) = 2 ∂θ(i)vechΣ(θ0) ⊤W−1 0√n(vech ˆΣn−vechΣ(θ0)) = 2∆⊤ 0W−1 0√n(vech ˆΣn−vechΣ(θ0))(i) and Proposition 1 shows √n˜Nn n−Nn n =√n˜Nn n−1 −√nNn n−1 =op(1), (5.24) so that Proposition 1 and Theorem 1 imply −1√n∂θHn(θ0) =−˜Nn n∆⊤ 0W−1 0√n(vech ˆΣn−vechΣ(θ0)) +op(1) d−→Nq 0,∆⊤ 0W−1 0∆0 .(5.25) Similarly, it is shown that ∂θ(i)∂θ(j)H(θ0) =−1 2tr (Σ(θ0)−1)(∂θ(j)Σ(θ0))(Σ(θ0)−1)(∂θ(i)Σ(θ0)) =− ∂θ(i)vechΣ(θ0) ⊤W−1 0 ∂θ(j)vechΣ(θ0) = −∆⊤ 0W−1 0∆0 ij 36 S. KUSANO AND M. UCHIDA fori, j= 1, . . . , q , which yields ∂2 θH(θ0) =−∆⊤ 0W−1 0∆0. As∂2 θH(θ) is continuous in θ, one has sup θ∈Bn ∂2 θH(θ)−∂2 θH(θ0)
https://arxiv.org/abs/2505.12712v1
$\longrightarrow0$ as $n\to\infty$, where $\{\rho_n\}_{n\in\mathbb N}$ is a sequence such that $\rho_n\to0$ as $n\to\infty$, and $B_n=\{\theta\in\Theta:|\theta-\theta_0|\le\rho_n\}$. Hence, Proposition 7 and (3.21) show that for any $\varepsilon>0$,
$$
P\Bigl(\Bigl|\frac1n\int_0^1\partial_\theta^2H_n(\check\theta_{n,\lambda})\,d\lambda+\Delta_0^\top W_0^{-1}\Delta_0\Bigr|>\varepsilon\Bigr)
=P\Bigl(\Bigl\{\Bigl|\int_0^1\Bigl\{\frac1n\partial_\theta^2H_n(\check\theta_{n,\lambda})-\partial_\theta^2H(\theta_0)\Bigr\}d\lambda\Bigr|>\varepsilon\Bigr\}\cap\bigl\{|\hat\theta_n-\theta_0|\le\rho_n\bigr\}\Bigr)
+P\Bigl(\Bigl\{\Bigl|\int_0^1\Bigl\{\frac1n\partial_\theta^2H_n(\check\theta_{n,\lambda})-\partial_\theta^2H(\theta_0)\Bigr\}d\lambda\Bigr|>\varepsilon\Bigr\}\cap\bigl\{|\hat\theta_n-\theta_0|>\rho_n\bigr\}\Bigr)
\le P\Bigl(\sup_{\theta\in B_n}\Bigl|\frac1n\partial_\theta^2H_n(\theta)-\partial_\theta^2H(\theta_0)\Bigr|>\varepsilon\Bigr)+P\bigl(|\hat\theta_n-\theta_0|>\rho_n\bigr)
\le P\Bigl(\sup_{\theta\in\Theta}\Bigl|\frac1n\partial_\theta^2H_n(\theta)-\partial_\theta^2H(\theta)\Bigr|>\frac\varepsilon2\Bigr)
+P\Bigl(\sup_{\theta\in B_n}\bigl|\partial_\theta^2H(\theta)-\partial_\theta^2H(\theta_0)\bigr|>\frac\varepsilon2\Bigr)
+P\bigl(|\hat\theta_n-\theta_0|>\rho_n\bigr)\longrightarrow0,
$$
so that
$$
\frac1n\int_0^1\partial_\theta^2H_n(\check\theta_{n,\lambda})\,d\lambda\xrightarrow{\ p\ }-\Delta_0^\top W_0^{-1}\Delta_0.\tag{5.26}
$$
Set
$$
B_{1,n}=\frac1{\sqrt n}\partial_\theta H_n(\theta_0),\qquad
B_{2,n}=-\frac1n\int_0^1\partial_\theta^2H_n(\check\theta_{n,\lambda})\,d\lambda,\qquad
\tilde B_{2,n}=\begin{cases}B_{2,n}&\text{on }A_{2,n},\\ I_q&\text{on }A_{2,n}^c,\end{cases}
$$
where $A_{2,n}=\{\det B_{2,n}>0\}$. Note that (3.21) and (5.26) imply $P(A_{1,n})\to1$ and $P(A_{2,n})\to1$, since $\theta_0\in\mathrm{Int}(\Theta)$ and $\Delta_0^\top W_0^{-1}\Delta_0$ is positive definite. In a similar way to Proposition 1, (5.26) yields $\tilde B_{2,n}^{-1}\xrightarrow{p}(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}$. By Slutsky's theorem, it holds from (5.25) that
$$
\tilde B_{2,n}^{-1}B_{1,n}\xrightarrow{\ d\ }N_q\bigl(0,(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\bigr).
$$
Thus, for any closed set $C\subset\mathbb R^q$, it follows from (5.23) that
$$
\limsup_{n\to\infty}P\bigl(\sqrt n(\hat\theta_n-\theta_0)\in C\bigr)
=\limsup_{n\to\infty}P\bigl(\{\sqrt n(\hat\theta_n-\theta_0)\in C\}\cap(A_{1,n}\cap A_{2,n})\bigr)
+\limsup_{n\to\infty}P\bigl(\{\sqrt n(\hat\theta_n-\theta_0)\in C\}\cap(A_{1,n}\cap A_{2,n})^c\bigr)
\le\limsup_{n\to\infty}P\bigl(\{B_{2,n}^{-1}B_{1,n}\in C\}\cap(A_{1,n}\cap A_{2,n})\bigr)
+\limsup_{n\to\infty}P\bigl(A_{1,n}^c\cup A_{2,n}^c\bigr)
\le\limsup_{n\to\infty}P\bigl(\tilde B_{2,n}^{-1}B_{1,n}\in C\bigr)
+\limsup_{n\to\infty}P(A_{1,n}^c)+\limsup_{n\to\infty}P(A_{2,n}^c)
\le P\bigl((\Delta_0^\top W_0^{-1}\Delta_0)^{-\frac12}Z_q\in C\bigr),
$$
which yields (3.22). □

Lemma 5. Suppose that [A1]-[A4] and [B1] hold. Then, under $H_0$,
$$
\sqrt n(\hat\theta_n-\theta_0)=(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+o_p(1)
$$
as $n\to\infty$.

Proof. Recall that $A_{1,n}=\{\hat\theta_n\in\mathrm{Int}(\Theta)\}$, $A_{2,n}=\{\det B_{2,n}>0\}$ and
$$
B_{1,n}=\frac1{\sqrt n}\partial_\theta H_n(\theta_0),\qquad
B_{2,n}=-\frac1n\int_0^1\partial_\theta^2H_n(\check\theta_{n,\lambda})\,d\lambda,\qquad
\tilde B_{2,n}=\begin{cases}B_{2,n}&\text{on }A_{2,n},\\ I_q&\text{on }A_{2,n}^c,\end{cases}
$$
where $\check\theta_{n,\lambda}=\theta_0+\lambda(\hat\theta_n-\theta_0)$. In a similar manner to the proof of Theorem 2, under $H_0$ we have
$$
\tilde B_{2,n}^{-1}\xrightarrow{\ p\ }(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\tag{5.27}
$$
and
$$
B_{1,n}=\frac1{\sqrt n}\partial_\theta H_n(\theta_0)
=\frac{\tilde N_n}{n}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+R_n,
$$
where
$$
R_n^{(i)}=\frac12\sqrt n\Bigl(\frac{\tilde N_n}{n}-\frac{N_n}{n}\Bigr)\operatorname{tr}\bigl\{\Sigma(\theta_0)^{-1}(\partial_{\theta^{(i)}}\Sigma(\theta_0))\bigr\}
$$
for $i=1,\dots,q$. It follows from (5.24) that
$$
R_n=o_p(1)\tag{5.28}
$$
under $H_0$.
Furthermore, it is shown that
$$
\sqrt n(\hat\theta_n-\theta_0)=B_{2,n}^{-1}B_{1,n}
=\frac{\tilde N_n}{n}B_{2,n}^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+B_{2,n}^{-1}R_n
$$
on $A_{1,n}\cap A_{2,n}$, so that
$$
\sqrt n(\hat\theta_n-\theta_0)-(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)
=\Bigl(\frac{\tilde N_n}{n}\tilde B_{2,n}^{-1}-(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Bigr)\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+\tilde B_{2,n}^{-1}R_n
$$
on $A_{1,n}\cap A_{2,n}$. Under $H_0$, it holds from Proposition 1, Theorem 1, (5.27) and (5.28) that
$$
\Bigl(\frac{\tilde N_n}{n}\tilde B_{2,n}^{-1}-(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Bigr)\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+\tilde B_{2,n}^{-1}R_n=o_p(1).
$$
Therefore, since $P(A_{1,n}\cap A_{2,n})\to1$ under $H_0$, for any $\varepsilon>0$ we have
$$
P\Bigl(\Bigl|\sqrt n(\hat\theta_n-\theta_0)-(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)\Bigr|>\varepsilon\Bigr)
\le P\Bigl(\Bigl|\Bigl(\frac{\tilde N_n}{n}\tilde B_{2,n}^{-1}-(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Bigr)\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+\tilde B_{2,n}^{-1}R_n\Bigr|>\varepsilon\Bigr)
+P\bigl((A_{1,n}\cap A_{2,n})^c\bigr)\longrightarrow0
$$
under $H_0$, which completes the proof. □

Lemma 6. Suppose that [A1]-[A4] and [B1] hold. Then, under $H_0$,
$$
\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\Sigma(\theta_0)\bigr)
=\Delta_0(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+o_p(1)
$$
as $n\to\infty$.

Proof. Let $\sigma(\theta)=$
$\operatorname{vech}\Sigma(\theta)$. Using Taylor's theorem, we get
$$
\sigma(\hat\theta_n)=\sigma(\theta_0)+\int_0^1\frac{\partial}{\partial\theta^\top}\sigma\bigl(\theta_0+\lambda(\hat\theta_n-\theta_0)\bigr)d\lambda\,\bigl(\hat\theta_n-\theta_0\bigr),
$$
which yields
$$
\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\Sigma(\theta_0)\bigr)
=\int_0^1\frac{\partial}{\partial\theta^\top}\sigma\bigl(\theta_0+\lambda(\hat\theta_n-\theta_0)\bigr)d\lambda\,\sqrt n\bigl(\hat\theta_n-\theta_0\bigr).
$$
In a similar manner to the proof of Theorem 2, it is shown that
$$
\int_0^1\frac{\partial}{\partial\theta^\top}\sigma\bigl(\theta_0+\lambda(\hat\theta_n-\theta_0)\bigr)d\lambda
\xrightarrow{\ p\ }\frac{\partial}{\partial\theta^\top}\sigma(\theta_0)=\Delta_0
$$
under $H_0$. Moreover, Lemma 5 yields
$$
\sqrt n(\hat\theta_n-\theta_0)=(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+o_p(1)
$$
under $H_0$. Consequently, since $\sqrt n(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0))=O_p(1)$ under $H_0$, we obtain
$$
\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\Sigma(\theta_0)\bigr)
=\Delta_0(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+o_p(1)
$$
under $H_0$, which completes the proof. □

Proof of Theorem 3. First, we see
$$
T_n=-2H_n(\Sigma(\hat\theta_n))+2H_n(\hat\Sigma_n)
$$
on $J_n$. Note that
$$
\partial_{\sigma^{(i)}}H_n(\Sigma)=-\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\bigr\}
+\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}\hat\Sigma_n\bigr\}
$$
and
$$
\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}H_n(\Sigma)
=\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\bigr\}
-\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}\Sigma)\bigr\}
-\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\Sigma^{-1}\hat\Sigma_n\bigr\}
+\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}\Sigma)\Sigma^{-1}\hat\Sigma_n\bigr\}
-\frac{N_n}{2}\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}\hat\Sigma_n\bigr\}
$$
for $i,j=1,\dots,\bar p$, where $\sigma=\operatorname{vech}\Sigma$. Using Taylor's theorem, one gets
$$
H_n(\Sigma(\hat\theta_n))=H_n(\hat\Sigma_n)+\partial_\sigma H_n(\hat\Sigma_n)^\top\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)
-\frac12\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)^\top R_n\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)
$$
on $J_n$, where $\check\Sigma_{n,\lambda}=\hat\Sigma_n+\lambda(\Sigma(\hat\theta_n)-\hat\Sigma_n)$ and
$$
R_n=-\frac2n\int_0^1(1-\lambda)\partial_\sigma^2H_n(\check\Sigma_{n,\lambda})\,d\lambda.
$$
Since $H_n(\Sigma)$ has a maximum value at $\Sigma=\hat\Sigma_n$ on $J_n$, we have $\partial_\sigma H_n(\hat\Sigma_n)=0$ on $J_n$, which yields
$$
T_n=\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)^\top R_n\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)\tag{5.29}
$$
on $J_n$. Note that $\sqrt n(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n)^\top\tilde R_n\sqrt n(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n)$ is also well-defined on $J_n^c$, where
$$
\tilde R_n=\begin{cases}R_n&\text{on }J_n,\\ I_{\bar p}&\text{on }J_n^c.\end{cases}
$$
Fix $\varepsilon>0$ arbitrarily. Since $\partial_\sigma^2H(\Sigma)$ is continuous in $\Sigma$, we can take a sufficiently small $\delta>0$ such that
$$
\sup_{\Sigma\in B_0(\delta)}\bigl|\partial_\sigma^2H(\Sigma)-\partial_\sigma^2H(\Sigma(\theta_0))\bigr|<\frac\varepsilon2,\tag{5.30}
$$
where $B_0(\delta)=\{\Sigma\in\mathcal M_p^+:|\Sigma-\Sigma(\theta_0)|\le\delta\}$ and
$$
\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}H(\Sigma)
=\frac12\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\bigr\}
-\frac12\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}\Sigma)\bigr\}
-\frac12\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\Sigma^{-1}\Sigma(\theta_0)\bigr\}
+\frac12\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}\Sigma)\Sigma^{-1}\Sigma(\theta_0)\bigr\}
-\frac12\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}\Sigma(\theta_0)\bigr\}.
$$
In an analogous manner to Proposition 7, it is shown that
$$
\sup_{\Sigma\in B_0(\delta)}\Bigl|\frac1n\partial_\sigma^2H_n(\Sigma)-\partial_\sigma^2H(\Sigma)\Bigr|\xrightarrow{\ p\ }0\tag{5.31}
$$
under $H_0$. Since
$$
\operatorname{tr}\bigl\{\Sigma^{-1}(\partial_{\sigma^{(i)}}\Sigma)\Sigma^{-1}(\partial_{\sigma^{(j)}}\Sigma)\bigr\}
=\operatorname{vec}(\partial_{\sigma^{(i)}}\Sigma)^\top\bigl(\Sigma^{-1}\otimes\Sigma^{-1}\bigr)\operatorname{vec}(\partial_{\sigma^{(j)}}\Sigma)
=\bigl(\partial_{\sigma^{(i)}}\operatorname{vech}\Sigma\bigr)^\top D_p^\top\bigl(\Sigma\otimes\Sigma\bigr)^{-1}D_p\bigl(\partial_{\sigma^{(j)}}\operatorname{vech}\Sigma\bigr)
=\bigl(\partial_{\sigma^{(i)}}\sigma\bigr)^\top\bigl(D_p^+(\Sigma\otimes\Sigma)D_p^{+\top}\bigr)^{-1}\bigl(\partial_{\sigma^{(j)}}\sigma\bigr)
=\Bigl(\bigl(D_p^+(\Sigma\otimes\Sigma)D_p^{+\top}\bigr)^{-1}\Bigr)_{ij},
$$
one has
$$
\partial_{\sigma^{(i)}}\partial_{\sigma^{(j)}}H(\Sigma(\theta_0))
=-\frac12\operatorname{tr}\bigl\{\Sigma(\theta_0)^{-1}(\partial_{\sigma^{(i)}}\Sigma(\theta_0))\Sigma(\theta_0)^{-1}(\partial_{\sigma^{(j)}}\Sigma(\theta_0))\bigr\}
=-\Bigl(\bigl(2D_p^+(\Sigma(\theta_0)\otimes\Sigma(\theta_0))D_p^{+\top}\bigr)^{-1}\Bigr)_{ij}
=-(W_0^{-1})_{ij},
$$
which yields
$$
\partial_\sigma^2H(\Sigma(\theta_0))=-W_0^{-1}.\tag{5.32}
$$
Furthermore, it holds that
$$
|\check\Sigma_{n,\lambda}-\Sigma(\theta_0)|
\le|\hat\Sigma_n-\Sigma(\theta_0)|+\lambda|\Sigma(\hat\theta_n)-\hat\Sigma_n|
\le|\hat\Sigma_n-\Sigma(\theta_0)|+|\Sigma(\hat\theta_n)-\hat\Sigma_n|
\le|\hat\Sigma_n-\Sigma(\theta_0)|+|\Sigma(\hat\theta_n)-\Sigma(\theta_0)|+|\Sigma(\theta_0)-\hat\Sigma_n|\le\delta
$$
for $\lambda\in[0,1]$ on $C_n$, where
$$
C_n=\Bigl\{|\hat\Sigma_n-\Sigma(\theta_0)|\le\frac\delta3\Bigr\}\cap\Bigl\{|\Sigma(\hat\theta_n)-\Sigma(\theta_0)|\le\frac\delta3\Bigr\}.
$$
Thus, we see from (5.32) that
$$
\bigl|\tilde R_n-W_0^{-1}\bigr|
=\Bigl|2\int_0^1(1-\lambda)\Bigl\{\frac1n\partial_\sigma^2H_n(\check\Sigma_{n,\lambda})-\partial_\sigma^2H(\Sigma(\theta_0))\Bigr\}d\lambda\Bigr|
\le2\int_0^1(1-\lambda)\Bigl|\frac1n\partial_\sigma^2H_n(\check\Sigma_{n,\lambda})-\partial_\sigma^2H(\Sigma(\theta_0))\Bigr|d\lambda
\le\sup_{\Sigma\in B_0(\delta)}\Bigl|\frac1n\partial_\sigma^2H_n(\Sigma)-\partial_\sigma^2H(\Sigma)\Bigr|
+\sup_{\Sigma\in B_0(\delta)}\bigl|\partial_\sigma^2H(\Sigma)-\partial_\sigma^2H(\Sigma(\theta_0))\bigr|
$$
on $C_n\cap J_n$. Since $\Sigma(\theta_0)$ is positive definite, Theorems 1-2 imply $P(J_n)\to1$ and $P(C_n)\to1$ under $H_0$. Consequently, it follows from (5.30)-(5.31) that
$$
P\bigl(|\tilde R_n-W_0^{-1}|>\varepsilon\bigr)
=P\bigl(\{|\tilde R_n-W_0^{-1}|>\varepsilon\}\cap(C_n\cap J_n)\bigr)
+P\bigl(\{|\tilde R_n-W_0^{-1}|>\varepsilon\}\cap(C_n\cap J_n)^c\bigr)
\le P\Bigl(\sup_{\Sigma\in B_0(\delta)}\Bigl|\frac1n\partial_\sigma^2H_n(\Sigma)-\partial_\sigma^2H(\Sigma)\Bigr|>\frac\varepsilon2\Bigr)
+P\Bigl(\sup_{\Sigma\in B_0(\delta)}\bigl|\partial_\sigma^2H(\Sigma)-\partial_\sigma^2H(\Sigma(\theta_0))\bigr|>\frac\varepsilon2\Bigr)
+P(C_n^c)+P(J_n^c)\longrightarrow0
$$
under $H_0$, so that one gets
$$
\tilde R_n\xrightarrow{\ p\ }W_0^{-1}
$$
(5.33)

under $H_0$. As it holds from Lemma 6 that
$$
\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)
=\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\Sigma(\theta_0)\bigr)-\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)
=\bigl(\Delta_0(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1}-I_{\bar p}\bigr)\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+o_p(1)
$$
under $H_0$, Theorem 1 and (5.33) imply
$$
\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)^\top\tilde R_n\sqrt n\bigl(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n\bigr)
=\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)^\top(W_0^{-1}-H_0)\sqrt n\bigl(\operatorname{vech}\hat\Sigma_n-\operatorname{vech}\Sigma(\theta_0)\bigr)+o_p(1)
\xrightarrow{\ d\ }Z_{\bar p}^\top P_0Z_{\bar p},
$$
where
$$
H_0=W_0^{-1}\Delta_0(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1},\qquad
P_0=W_0^{\frac12}(W_0^{-1}-H_0)W_0^{\frac12}.
$$
On the other hand, $P_0$ is an orthogonal projection matrix, and [B1] (b) yields $\operatorname{rank}P_0=\bar p-q$, so that it is shown that $Z_{\bar p}^\top P_0Z_{\bar p}\sim\chi^2_{\bar p-q}$. Therefore, for any closed set $C\subset\mathbb R$, we see from (5.29) that
$$
\limsup_{n\to\infty}P(T_n\in C)
=\limsup_{n\to\infty}P\bigl(\{\sqrt n(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n)^\top R_n\sqrt n(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n)\in C\}\cap J_n\bigr)
+\limsup_{n\to\infty}P\bigl(\{T_n\in C\}\cap J_n^c\bigr)
\le\limsup_{n\to\infty}P\bigl(\sqrt n(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n)^\top\tilde R_n\sqrt n(\operatorname{vech}\Sigma(\hat\theta_n)-\operatorname{vech}\hat\Sigma_n)\in C\bigr)
+\limsup_{n\to\infty}P(J_n^c)
\le P\bigl(\chi^2_{\bar p-q}\in C\bigr)
$$
under $H_0$, which completes the proof. □

Proposition 8. Suppose that [A1]-[A4] hold. Then, as $n\to\infty$,
$$
\sup_{\theta\in\Theta}\Bigl|\frac1nT_n(\theta)-U(\theta)\Bigr|\xrightarrow{\ p\ }0
$$
under $H_1$.

Proof. In an analogous manner to Proposition 7, we can show the result. □

Proposition 9. Suppose that [A1]-[A4] and [B2] hold. Then, as $n\to\infty$, $\hat\theta_n\xrightarrow{p}\theta^*$ under $H_1$.

Proof. In a similar way to Theorem 2, the result can be proven. □

Proof of Theorem 4. By the continuous mapping theorem, it holds from Proposition 9 that $U(\hat\theta_n)\xrightarrow{p}U(\theta^*)$ under $H_1$, so that Proposition 8 shows
$$
\Bigl|\frac1nT_n(\hat\theta_n)-U(\theta^*)\Bigr|
\le\Bigl|\frac1nT_n(\hat\theta_n)-U(\hat\theta_n)\Bigr|+\bigl|U(\hat\theta_n)-U(\theta^*)\bigr|
\le\sup_{\theta\in\Theta}\Bigl|\frac1nT_n(\theta)-U(\theta)\Bigr|+\bigl|U(\hat\theta_n)-U(\theta^*)\bigr|
\xrightarrow{\ p\ }0
$$
under $H_1$, which yields $\frac1nT_n(\hat\theta_n)\xrightarrow{p}U(\theta^*)$ under $H_1$. Since $U(\theta^*)>0$ under $H_1$, we obtain
$$
P\bigl(T_n(\hat\theta_n)>\chi^2_{\bar p-q}\bigr)
=1-P\Bigl(\frac1nT_n(\hat\theta_n)\le\frac{\chi^2_{\bar p-q}}{n}\Bigr)\longrightarrow1
$$
under $H_1$, which completes the proof. □
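The chi-square limit in the proof of Theorem 3 uses only the algebraic fact that $P_0$ is an orthogonal projection matrix of rank $\bar p-q$, so that $Z_{\bar p}^\top P_0Z_{\bar p}\sim\chi^2_{\bar p-q}$. A minimal Monte Carlo sketch of that fact, with a hypothetical random projection (the dimensions and basis are chosen for illustration and are not the paper's $P_0$):

```python
import numpy as np

rng = np.random.default_rng(1)
pbar, q = 6, 2                       # hypothetical dimensions: rank(P0) = pbar - q

# Build an orthogonal projection of rank pbar - q onto a random subspace.
Q, _ = np.linalg.qr(rng.normal(size=(pbar, pbar - q)))
P0 = Q @ Q.T                         # symmetric and idempotent by construction

# For Z ~ N(0, I), Z' P0 Z is chi-square with rank(P0) degrees of freedom,
# so its sample mean should be close to pbar - q = 4.
Z = rng.normal(size=(100_000, pbar))
vals = np.einsum("ni,ij,nj->n", Z, P0, Z)
print(vals.mean())                   # ≈ 4
```

The same check works for any orthogonal projection; only the rank (here $\bar p-q$) enters the limiting distribution.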
arXiv:2505.13364v1 [stat.AP] 19 May 2025

MODELING INNOVATION ECOSYSTEM DYNAMICS THROUGH INTERACTING REINFORCED BERNOULLI PROCESSES

GIACOMO ALETTI, IRENE CRIMALDI, ANDREA GHIGLIETTI, AND FEDERICO NUTARELLI

Abstract. Understanding how capabilities evolve into core capabilities—and how core capabilities may ossify into rigidities—is central to innovation strategy (Leonard-Barton 1992, Teece 2009). A major challenge in formalizing this process lies in the interactive nature of innovation: successes in one domain often reshape others, endogenizing specialization and complicating isolated modeling. This is especially true in ecosystems where firm capabilities and innovation outcomes hinge on managing interdependencies and complementarities (Jacobides et al. 2018, 2024). To address this, we propose a novel formal model based on interacting reinforced Bernoulli processes. This framework captures how patent successes propagate across technological categories and how these categories co-evolve. The model jointly accounts for several stylized facts in the empirical innovation literature, including sublinear success growth (success-probability decay), convergence of success shares across fields, and diminishing cross-category correlations over time. Empirical validation using GLOBAL PATSTAT (1980–2018) supports the theoretical predictions. We estimate the structural parameters of the interaction matrix, and we also propose a statistical procedure to make inference on the intensity of cross-category interactions under the mean-field assumption. By endogenizing technological specialization, our model provides a strategic tool for policymakers and managers, supporting decision-making in complex, co-evolving innovation ecosystems—where targeted interventions can produce systemic effects, influencing competitive trajectories and shaping long-term patterns of specialization.
Key-words: Innovation Ecosystems, Dynamic Capabilities, Interacting Bernoulli Processes, Patent Interactions, Technological Specialization

1. Introduction

Understanding the extent to which capabilities evolve into core capabilities—and how core capabilities may turn into core rigidities (Leonard-Barton 1992)—is a crucial question for firms, particularly those that are innovation-driven (Teece 2009, Eisenhardt and Martin 2000, Helfat and Peteraf 2003). Salter and Alexy (2014) support this view by emphasizing that a well-balanced mix of skills within teams is a fundamental driver of innovation—particularly when these skills are leveraged within a proactive ecosystem capable of transforming them into core capabilities (Jacobides et al. 2018). Finding such a mix of skills within an ecosystem is, however, not an easy task, and further exploration is needed. An interesting hint in this direction is provided in Kim and Magee (2017) and in Pichler et al. (2020). They demonstrate that knowledge flows predominantly occur within previously linked technological domains, which implies that domain-specific learning can reinforce the development of specialized skill sets (Cohen et al. 1990, Malerba and Orsenigo 1997). In their study, as well as in several others (Teece et al. 1997, Adner and Kapoor 2010, Youn et al. 2015, Pugliese et al. 2019), it clearly emerges that the concept of skills and capabilities is inherently linked with the concept of technological fields. Studying how technologies, inherently linked to specific skill requirements, interact within an ecosystem is, therefore, crucial for formalizing the transformation of these skills into core capabilities (Granstrand 1998, Kim et al. 2022). One of the main challenges posed by the interaction among innovations is that they endogenize the processes of specialization and the formation of core capabilities. In
other words, the dynamic and path-dependent nature of capability evolution (Teece 2009, Eisenhardt and Martin 2000) is shaped by the co-evolution of technologies and markets. As a result, the interplay among innovations does more than merely shift the external environment—it recursively shapes the internal structure and evolutionary trajectory of firm capabilities, making it difficult to model them in isolation. This challenge is well recognized in the literature, particularly given that firm capabilities operate within ecosystems whose value creation depends on how multilateral interdependencies—based on various forms of complementarities—are managed (Jacobides et al. 2018). Both generic and idiosyncratic complementarities across capabilities—and thus across technological domains—are crucial determinants of a firm's success in innovation (Jacobides et al. 2024). An example of this is given by innovation-based platforms, where developers must be incentivized to generate complementary innovations (Jacobides et al. 2024). Given the above premises, formalizing the processes that govern interactions among innovations across different technological domains becomes essential for determining the extent to which ecosystems can effectively exploit complementarities—thereby fostering the development of core capabilities while avoiding their degeneration into core rigidities. The aim of the present work is to develop a novel mathematical framework to rigorously formalize such processes of interactive innovations. Technological innovation is not merely a process of isolated breakthroughs but is strongly influenced by self-reinforcing structures, where mutually connected technological fields benefit from each other, leading to cumulative innovation dynamics (Salter and Alexy 2014, Youn et al. 2015, Napolitano et al. 2018, Castaldi et al. 2015).
The relationship between cross-category interactions and innovation success is, however, not necessarily straightforward and necessitates a dedicated formal treatment. While interactions within closely related Cooperative Patent Classification (CPC) categories may increase the overall likelihood of incremental innovation, interactions across technological domains may be a key driver of breakthrough innovations—though such events remain rare (Castaldi et al. 2015). Kovács et al. (2021), for instance, show that patents classified in high-contrast categories—those with clear boundaries from neighboring technologies—receive more attention and citations than those in low-contrast categories. This suggests that classification is not merely an administrative process but actively shapes innovation diffusion by influencing how patents are discovered, cited, and integrated into future technological developments. According to Kovács et al. (2021), one key mechanism through which this occurs is search dynamics—the way patent users identify and engage with prior knowledge. Their findings reinforce the idea that the way technologies are classified and interconnected affects not only their immediate visibility but also their long-term influence on the technological evolution of firms, seen as innovation ecosystems (Coccia 2017, Colladon et al. 2025). Specifically, understanding these systemic interdependencies in technologies is central to studying the logic of search dynamics and, consequently, to modeling how capabilities evolve within firms through a clearer understanding of their technological evolution (Pichler et al. 2020). The literature on this topic has primarily documented the clustering of technological domains and their reinforcing interactions (Youn et al. 2015), yet it often falls short in explicating
the underlying mechanisms—highlighting the need for a formal model. Indeed, while Leonard-Barton (1992) emphasized that core capabilities may evolve into core rigidities if not continuously adapted, numerous scholars over the years have argued that a formal framework is indispensable for meaningfully quantifying and understanding this transformation (Teece 2009, Zollo and Winter 2002, Helfat and Peteraf 2003, Youn et al. 2015). The latter should model the recursive, path-dependent interactions among innovations and pave the way for future research to assess how these interactions influence the development, entrenchment, or renewal of capabilities. Our study bridges this gap by introducing a mathematical framework to assess how innovation success in one CPC category reinforces success, not only in the same category, but also in the other categories, thus offering a structured explanation for how technological trajectories emerge and evolve. A major difficulty in building such a formal framework lies in the fact that, despite the growing recognition of cross-category knowledge flows (Blazsek and Escribano 2010, Nemet 2012, Pichler et al. 2020, Singh et al. 2021), the role of cross-category interactions in innovation remains largely unexplored in formal models. The relevance of our model and its ability to explain real-world innovation dynamics is assessed by identifying a set of stylized facts, some of which have been consistently documented in the empirical literature. While existing models typically focus on a subset of these phenomena, our framework provides a unified theoretical explanation that links them through the structure of interactions among technological domains, as required by the literature (Pichler et al. 2020). In short, our model consists of a finite set of interacting Bernoulli processes, one for each technological category.
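A minimal simulation sketch of such a system of interacting reinforced Bernoulli processes follows. The matrix below and the specific reinforcement rule (a success probability in category $h$ given by a $\Gamma$-weighted average of past success shares) are illustrative assumptions for the sketch, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical interaction matrix: Gamma[j, h] >= 0 weighs how much past
# successes in category j reinforce future successes in category h.
Gamma = np.array([[0.5, 0.2, 0.1],
                  [0.2, 0.5, 0.2],
                  [0.1, 0.2, 0.5]])
K = Gamma.shape[0]

T = 20_000                 # number of patents (time steps)
S = np.ones(K)             # one seed success per category
path = np.empty((T, K))

for n in range(1, T + 1):
    # Self- and cross-reinforcement: success probabilities depend on the
    # Gamma-weighted past success shares S_j / n of all categories.
    p = Gamma.T @ (S / n)
    S = S + (rng.random(K) < p)
    path[n - 1] = S

print(S)                   # cumulative successes per category
```

In this toy version, the spectral radius of $\Gamma$ is below one, so success probabilities decay over time and the counts grow sublinearly, in the spirit of the stylized facts discussed below.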
The time steps are marked by the succession of patents, ordered by their full publication dates. The state 1 of the process related to category h identifies a patent which is a "success", with respect to a novelty measure, in the technological field h. The dynamics of the Bernoulli processes are governed by a self- and cross-category reinforcement rule: the probability that patent n+1 will be a success for category h has an increasing dependence on the number of past successes, not only in category h, but also in all the other categories. The interaction matrix Γ = (γ_{j,h})_{j,h} collects the weights of this dependence: γ_{j,h} ≥ 0 quantifies how much past successes in a given category j induce future successes in category h. As a novelty measure for our model, we use a variant of the forward citation index in Squicciarini et al. (2013): essentially, in order to measure the impact of a given patent in a certain CPC-1 category, we count the citations to the considered patent by subsequent patents belonging to that category, normalized with respect to the patent cohort. However, it is very important to note that our methodology is presented in general terms, so that it can be applied with different classifications into categories and/or different novelty measures, even in other contexts.1 All the provided theoretical results are proven rigorously and are supported by the analysis of a
real-world data set from the GLOBAL PATSTAT database. More specifically, we analyze all the patents published in the period 1980-2018, showing that the observed behaviors match the theoretical ones. We also perform a statistical study on the interaction intensity among the categories. Below, we present a list of core empirical regularities that our model is able to jointly explain, offering a more comprehensive perspective on the nature of technological path dependence. We divide the mentioned regularities into those related to the formation of core capabilities—denoted as C.i—and those related to the ossification of core capabilities into core rigidities—denoted as R.i.

C.1 Early-stage interventions produce stronger and more persistent effects, especially before the innovation landscape fragments into relatively autonomous and specialized domains.2 Our model is ruled by a self- and cross-reinforcement mechanism, so that the probability of a future success in a certain category depends on the number of past successes in that category and also in all the other categories. This study therefore highlights the importance of early interventions as having the strongest systemic impact, thereby reinforcing Leonard-Barton (1992)'s view that the evolution of core capabilities is path-dependent and resistant to late-stage adjustments—emphasizing the strategic imperative for timely, anticipatory actions in innovation policy and capability building.

C.2 The probability of observing a future success decreases along time. This fact is reflected in a sub-linear growth of the number of observed successes: indeed, for each category h, the number S_{t,h} of successes in category h exhibits a power-law growth with a Heaps' exponent lower than one. In other words, technological categories become "crowded" over time, with increasing saturation and depletion of easy innovation opportunities, making new breakthroughs less frequent (Jones 2009, Bloom et al.
2020, Clancy 2023). In a broader innovation ecosystem, this supports the need for cross-category recombination: when a single field is exhausted, future breakthroughs often require combining insights from other fields (Fleming 2001), thereby creating new opportunities for developing core capabilities.

C.3 Irreducibility of the interaction matrix: all the categories present the same Heaps' exponent, and this means that all the categories interact with each other, either directly (when γ_{j,h} > 0) or indirectly by means of a path in the graph associated with the interaction matrix Γ. This evidence formalizes the arguments in Weitzman (1998), Fleming (2001), Alves et al. (2007), Godin (2017), among others.

R.1 Relative number of successes across categories stabilizes along time: for two different categories h and j, the ratio S_{t,h}/S_{t,j} stabilizes over time toward the ratio of their respective components of the eigenvector-centrality score vector u, that is, S_{t,h}/S_{t,j} → u_h/u_j. This means that the relative proportions of successes across categories remain stable in the long run.

1 The interested reader is referred to Section A in the appendix for a discussion on novelty measures. There we also motivate our preference for the index of Squicciarini et al. (2013) over other measures of patent novelty.
2 Similar calls for timely and anticipatory policy actions have been raised in studies emphasizing the cumulative and path-dependent nature of technological change (David 1985, Mazzucato 2018).
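Stylized fact R.1 ties long-run success shares to the eigenvector-centrality score vector $u$, the dominant eigenvector of the transposed interaction matrix, which can be obtained by power iteration. A sketch with a hypothetical interaction matrix (toy numbers, not PATSTAT estimates):

```python
import numpy as np

# Hypothetical interaction matrix Gamma[j, h].
Gamma = np.array([[0.5, 0.2, 0.1],
                  [0.2, 0.5, 0.2],
                  [0.1, 0.2, 0.5]])

# Power iteration for the dominant eigenvector u of Gamma^T; by the
# Perron-Frobenius theorem u is entrywise positive when Gamma is irreducible.
u = np.ones(Gamma.shape[0])
for _ in range(200):
    u = Gamma.T @ u
    u /= np.linalg.norm(u)

# R.1 predicts S_{t,h}/S_{t,j} -> u_h/u_j: here categories 0 and 2 play
# symmetric roles in Gamma, so their long-run success shares coincide.
print(u / u.sum())
```

Irreducibility of $\Gamma$ (stylized fact C.3) is exactly what guarantees that this dominant eigenvector is unique and strictly positive.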
Such stability reflects the consolidation of core capabilities within firms constituting the innovation ecosystem, on the one hand paving the way for the formation of new core capabilities and on the other starting the process of ossification of traditional ones into core rigidities (Leonard-Barton 1992). While related ideas regarding the influence of central nodes over time in economic production networks have been theorized (Acemoglu et al. 2016), to the best of our knowledge, this specific idea has never been formally stated or formalized within the innovation literature.3

R.2 The long-run decline of the cross-category correlations: if we denote by X_{n,h} the random variable that takes the value 1 if patent n is a success in category h and 0 otherwise, then the correlation coefficient between X_{n,h} and X_{n,j} decreases to zero along time. Indeed, although the interaction matrix Γ is constant along time, its impact on the probabilities of success decreases linearly along time, generating vanishing correlations among the categories. This last piece of evidence is reminiscent of the knowledge spillovers literature (Jaffe and Trajtenberg 2002). In their book (and subsequent works), Jaffe and Trajtenberg (2002) observe increasing specialization within technological fields, implying that cross-field citations—and, by extension, indirect correlations—tend to diminish over time relative to within-field citations. However, the treatment in Jaffe and Trajtenberg (2002) remains primarily empirical and descriptive. This process of specialization is at the basis of the ossification process into core rigidities described in Leonard-Barton (1992).

A more thorough positioning of our contribution within the existing literature, considering the empirical regularities discussed above, is presented in Section 1.1. Our novel mathematical model offers a strategic tool for policymakers and practitioners.
Indeed, as aforementioned, it helps managers understand how and when innovation processes interact, when a technological field becomes saturated, and how it relates to other fields over the long term. This, in turn, provides valuable insights to identify how and when the capabilities—required for developing innovations within a given technological field—may evolve into core capabilities or degenerate into core rigidities (Leonard-Barton 1992). Furthermore, our formal model can guide managers in innovation-related decision-making, which has become more challenging due to the growing complexity and variety of innovation ecosystems (Dias Sant'Ana et al. 2020), as well as the increasing interdependencies among technologies (Adner and Kapoor 2010, Russell and Smorodinskaya 2018, Granstrand and Holgersson 2020)—especially in an environment where innovation ecosystems coevolve (Thomas and Autio 2019, Paasi et al. 2023). In particular, we illustrate how a localized innovation shock in one technological domain can propagate across the broader ecosystem, influencing success rates in other domains depending on the structure of interdependencies. Importantly, we show that even targeted interventions can have far-reaching effects—either amplifying or constraining innovation elsewhere—thereby offering managers additional tools to assess competitive dynamics and make strategic decisions within innovation ecosystems. In the next subsection we position our work in relation to the existing literature.

1.1. Positioning within the literature. Prior studies in the management literature have generally addressed aspects of patent dynamics—such as citation behavior (Lee et al. 2012), innovation competition (Keller 2007), and R&D investment (Akcigit
and Kerr 2018, Aghion and Howitt 2007)—lacking, however, a formalization of the process governing cross-category technological interactions. In the course of the years, scholars have deepened the latter aspect by investigating knowledge spillovers across technologies (Kim and Magee 2017) and forming patents' networks (Higham et al. 2022). Emerging methods, including machine learning models for predicting patent success (Yang 2023), and descriptive studies on radical inventions' cross-domain influence (Dahlin and Behrens 2005), highlight this gap. These works, however, remain empirical in nature and do not provide a formal understanding of the processes that drive their results. The present paper addresses this by explicitly capturing—via interactive Bernoulli processes—how innovation in one CPC category affects outcomes in others, providing a structural model of technological co-evolution. Parallel developments in management research, often leveraging NLP and LLMs, have focused on patent similarity across categories (Younge and Kuhn 2016, Arts et al. 2018, Jurek 2024, Bekamiri et al. 2024). While useful for mapping static proximity, these approaches remain correlational and fall short in explaining dynamic mechanisms such as technological convergence, divergence, and the long-run distribution of innovation outcomes (Ganguli et al. 2024). Even studies incorporating dynamics (Kim and Magee 2017, Park and Magee 2017, Taalbi 2023, Maragakis et al. 2023) tend to remain empirical, lacking formal structure. As innovation increasingly unfolds through technological interdependencies over time—generative AI constitutes an example of this phenomenon (Chen et al. 2023, Du et al. 2024, Mohammadabadi 2025)—recent works emphasize the need for formal models that endogenize these relationships (Pichler et al. 2020, Taalbi 2023).

3 Although empirical evidence points in this direction (Breschi et al. 2000, Malerba 2002).
Existing models, in particular, are often capable of reproducing isolated phenomena but do not jointly address long-run specialization, capability evolution, or structural path dependence. Our contribution lies in bridging this gap by introducing a unified framework that models interacting technological domains, capturing features such as long-run dominance, category cycles, and the evolution from interdisciplinarity to specialization (Wang and Schneider 2020). We now consider the main features of our model with respect to the literature:

• Early-stage interventions produce stronger and more persistent effects. David (1985) introduced the concept of path dependence in technological change, illustrating how early choices can lock in trajectories that resist later correction. This inertia can either evolve into rigidities if not shaped proactively (Leonard-Barton 1992), or be catalyzed into core capabilities through well-designed, mission-oriented innovation policies implemented by institutions (Mazzucato 2018, Kattel and Mazzucato 2018). Our paper elucidates these dynamics by examining how innovation processes interact with one another, thereby potentially influencing the early-stage outcomes of adjacent processes.

• The probability of observing a future success decreases over time. This aspect is reflected in a power-law growth of the number of observed successes with an exponent smaller than one. This evidence was noted by Heaps (1978) (indeed, the power-law exponent is often called the Heaps exponent) and has been observed in many fields, such as economics, the natural sciences, and linguistics. Since Heaps' work, it has
become natural to require that applicative models satisfy this power-law property (Clauset et al. 2009, Mitzenmacher 2004, 2005).

• All categories interact with one another, either directly or indirectly. Several studies in the literature have hypothesized or empirically observed this pattern (Verspagen 2007, Rigby 2015, Higham et al. 2022). Few, however, have provided a formal model of this phenomenon. We do so by introducing an interaction matrix that governs the probability of future success in a given category.

• The relative number of successes across categories stabilizes over time. Despite continuous innovation, patent categories exhibit long-term stability in their relative success rates. We demonstrate that the ratio of cumulative successes across categories converges to the ratio of their centrality scores, indicating a predictable long-run distribution. This fact aligns with the empirical observation that technological domains rarely disappear entirely (Park and Magee 2017, Kim and Magee 2017), and an interpretation can be given in light of the work of Maslach (2016): firms persist after incremental failures but pivot after radical ones. As most innovation is incremental

6 G. ALETTI, I. CRIMALDI, A. GHIGLIETTI, AND F. NUTARELLI

in nature, firms tend to remain anchored in familiar technological domains, reinforcing their sustained importance despite the emergence of new fields.4

• The long-run decline of cross-category correlations. Our model captures the declining correlation between patent categories over time, reflecting a trend toward increased specialization, a well-documented phenomenon in the literature (Ganguli et al. 2024). Namely, technologies that were initially interdisciplinary tend to become more specialized as they mature.
A well-documented historical example of this phenomenon is the divergence between telecommunications and information technology (IT) (Messerschmitt 1996, Shin 2010).5

Finally, a clarification is in order: the literature has adopted diverse definitions of the terms novelty, innovation, and invention, often depending on the degree of novelty introduced or on the social and economic impact of the innovation. In physics and mathematics, innovation processes are typically modeled as sequences of observed elements. An item is labeled a novelty or innovation if it appears for the first time in a sequence of observations; otherwise, it is classified as a repetition of a specific element observed in the past. A classic example is provided by the analysis of word sequences in texts, as explored by Tria et al. (2014), where a novelty is defined as the first occurrence of a word in the sequence and subsequent appearances of the same word are treated as repetitions. In this framework, suitable models are the Poisson-Dirichlet model (Pitman and Yor 1997) and the urn with triggering (Tria et al. 2014) for a single process, together with their corresponding versions with interaction (Fortini et al. 2018, Iacopini et al. 2020, Aletti et al. 2023a, 2025). In contrast, in economics, innovation is frequently synonymous with patented invention. Here, a patent is, by definition, a novel item (never previously observed), but the emphasis is on whether the patent constitutes a successful innovation. Economic studies thus often model the probability of generating a significant innovation, what we refer to in this work as a "success". Accordingly, in this paper,
we use the terms innovation, novelty, and invention interchangeably, focusing primarily on whether a patented item meets the threshold of success (with respect to some measure), regardless of whether it constitutes a breakthrough or an incremental improvement. Moreover, in this framework, the natural class of models is that of interacting Bernoulli processes, to which our model belongs.

To the best of our knowledge, only a few papers in the scientific literature deal with interacting (or interrelated) Bernoulli processes. The first one is Antelman (1972), which develops a Bayesian method for making statistical inference on the parameters of two Bernoulli processes when the a-priori joint distribution that generates them exhibits dependence. More precisely, a family of prior distributions closed under sampling, called the Dirichlet-beta family, is introduced. However, these distributions turn out to be intractable and several approximations are proposed. Moreover, this study does not cover the case of more than two Bernoulli processes, as in our setting. Another paper is Katselis et al. (2019), where the parameters of the Bernoulli distributions at time t+1 depend only on the status of the processes at the present time t, so that multi-lag dependence is not allowed. Finally, in Pandit et al. (2019) the model relates the parameter of the Bernoulli distribution of a certain process at a given time-step to the history of all the processes over L time lags by means of interaction parameters θ_{h,j,ℓ}, which capture how much process j at time lag ℓ affects process h. Hence, this model and ours share the dependence of the parameter of the Bernoulli distribution for future observations on the present and past observations of all the processes. However, important differences exist. First of all, in Pandit et al. (2019) the dependence on the past extends only up to a fixed number L of lags, while in our model the dependence is on all the past observations.
4This also aligns with empirical observations that technological domains, such as pharmaceuticals and automotive, maintain consistent importance over time (Maslach 2016, Perri et al. 2020, Lafond and Kim 2019).

5Initially, these domains were tightly interconnected, with innovations in data transmission and computing co-evolving (Jones and McGuirt 1991, Fransman 2001, Leiner et al. 2009). Over time, however, they have become increasingly specialized: telecommunications has focused on network infrastructure and communication protocols, while IT has expanded into computing systems, software, and data management (Messerschmitt 1996, Garrone et al. 2002).

Moreover, in Pandit et al. (2019) the parameters θ_{h,j,ℓ} do not depend on the time-step t, while in our model the corresponding quantities are essentially γ_{j,h}/(c+t), which decrease over time and allow us to obtain, for each process, a vanishing success probability and a power-law behavior of the number of successes, as we observe in the patent data set. Finally, Pandit et al. (2019) focuses on a maximum likelihood estimation procedure for the model parameters θ_{h,j,ℓ} ∈ ℝ^{N×N×L}, which, given the high number of parameters, requires a sparsity condition on Θ = {θ_{h,j,ℓ}}; we instead provide a deep theoretical study of the model, obtaining asymptotic results
that can be verified on real data. It is also worthwhile to stress that the theoretical results for our model follow from new general theoretical results, which, among other things, have the merit of providing a general mathematical framework that also covers the results of the above-quoted papers Tria et al. (2014) and Aletti et al. (2023b) regarding the number of distinct observed novelties.

1.2. Structure of the paper. The remainder of the paper is organized as follows. In Section 2, we introduce our primary novelty measure, based on a category-level variant of the forward citation index, and define what constitutes a "success" in each technological field. Section 3 presents the proposed mathematical model of interacting Bernoulli processes, offering a theoretical framework to capture the dynamics of innovation across categories, and states the main asymptotic results. All the theoretical results are analytically proven in the appendix (supplementary material). Moreover, Section 3 highlights the implications of the model through a simulation exercise related to the propagation effects of exogenous shocks. In Section 4, we validate the theoretical findings using a comprehensive dataset of patents from the GLOBAL PATSTAT database, and we estimate the structural parameters of the interaction matrix. Statistical inference on the interaction intensity under a mean-field assumption is also performed. Section 5 concludes.

2. Category forward citation index and definition of "success"

As previously noted in Subsection 1.1, in economics the term "innovation" often refers to "patented invention". Since patents are novel by definition, our focus lies on whether they represent a successful innovation. Thus, we use the terms innovation, novelty, and invention interchangeably, emphasizing their success rather than their degree of novelty.
The number of citations a given patent receives (more briefly, its forward citations) reflects the technological importance of the patent for the development of subsequent technologies (see Squicciarini et al. (2013) and references therein). In other words, forward citations to a certain patent represent the range of later inventions that have benefited from that patent. Therefore, we use a variant of the forward citation index of Squicciarini et al. (2013) to detect the patents that can be defined as a "success" in one or more categories. It is worth noting, however, that our approach is not specific to a particular classification into categories or to the chosen index, and so it could also be applied with a different index aimed at evaluating the importance of a patent.6

Consider a system H of technological categories for the classification of patents. For a given patent n, with publication (full) date d_n, we set, for each category h ∈ H,

CIT_{n,h} = number of patents with publication date in the period [d_n, d_n + 365 × T], belonging to category h, and citing n.

Then, given a patent n with publication year y_n and belonging to category c_n, we refer to the set of patents with the same publication year as n, i.e. y_n, and belonging to the same category as n, i.e. c_n, as the "cohort of patent n". Hence, for each category h, we define the index I_{n,h} as the
normalization, with respect to the cohort of patent n, of the number of citations that patent n received from the patents belonging to category h and published within T years after the publication date of n. Formally, we define

\[
I_{n,h} = \frac{\mathrm{CIT}_{n,h}}{\max_{i \in \text{cohort of } n} \mathrm{CIT}_{i,h}}.
\]

Note that the number of forward citations a patent gets can differ greatly depending on its publication year and category. This is why we share the approach adopted in Squicciarini et al. (2013) and normalize this number with respect to the cohort of the patent considered. In other words, the index we use is a disaggregation of the forward citation index, say I_S(n), in Squicciarini et al. (2013): indeed, we have

\[
I_S(n) = \frac{\mathrm{CIT}_n}{\max_{i \in \text{cohort of } n} \mathrm{CIT}_i}
       = \sum_{h \in \mathcal{H}} \frac{\max_{i \in \text{cohort of } n} \mathrm{CIT}_{i,h}}{\max_{i \in \text{cohort of } n} \mathrm{CIT}_i}\, I_{n,h}.
\]

6A review of the novelty indices present in the literature and potentially applicable to our context is presented in Appendix A. There we also motivate our preference for the index of Squicciarini et al. (2013) over other measures of patent novelty.

Remark 2.1 (Invariance property of the index). Note that, if we restrict our attention to (or can observe) only the patents published in a certain category h*, then, for each patent n belonging to this category, the index value I_{n,h*} remains well defined and its value does not change with respect to the one computed when considering all published patents. Indeed, its definition is based only on the observation of the patents belonging to category h*.

The index I_{n,h} defined above takes values in [0, 1]. We can fix a suitable threshold τ and call a "success" (according to the terminology for Bernoulli processes) only the patents whose index exceeds τ. More precisely, we give the following definition:

Definition 2.1 (success for a certain category). Given the threshold τ, we say that a patent n is a success for (or in) category h if the index I_{n,h} > τ.
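The cohort-normalized index and the success rule of Definition 2.1 can be sketched in a few lines of code. The following is a minimal illustration with toy citation counts; the function names (`category_index`, `is_success`) and the data layout are ours, not from the paper:

```python
def category_index(cit, cohorts):
    """I_{n,h}: the count CIT_{n,h} normalized by the maximum citation
    count achieved, in category h, within the cohort of patent n."""
    index = {}
    for (n, h), c in cit.items():
        peak = max(cit.get((i, h), 0) for i in cohorts[n])
        index[(n, h)] = c / peak if peak > 0 else 0.0
    return index


def is_success(index, n, h, tau):
    """Definition 2.1: patent n is a success in category h iff I_{n,h} > tau."""
    return index.get((n, h), 0.0) > tau


# Toy data: three patents sharing one cohort, a single category "h".
cit = {("p1", "h"): 10, ("p2", "h"): 5, ("p3", "h"): 0}
cohorts = {p: ["p1", "p2", "p3"] for p in ("p1", "p2", "p3")}
idx = category_index(cit, cohorts)
# idx[("p1", "h")] == 1.0, idx[("p2", "h")] == 0.5, idx[("p3", "h")] == 0.0
```

With threshold τ = 0.8, only "p1" qualifies as a success in category h, matching the definition above.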
According to the above definition, if we denote by X_{n,h} the random variable equal to 1 if patent n is a success for category h and equal to 0 otherwise, we obtain a finite system of Bernoulli processes, that is, {X^{(h)} = (X_{n,h})_n : h ∈ H}. These processes are not independent: they interact. This interaction is crucial to consider, as numerous studies have highlighted that, within innovation ecosystems, innovation processes do not evolve in isolation but often influence one another through knowledge spillovers, recombination, and co-dependence, shaping both innovation outcomes and strategic investment decisions across domains (Adner and Feiler 2019, Bondarev and Krysiak 2021, Kim and Magee 2017). In the following sections, we first introduce a new model of interacting Bernoulli processes, then present some related theoretical results, and finally verify that the behaviors observed in the available data match those predicted by the theory. We will also perform a statistical test on the interaction intensity. The methodology is presented in general terms so that it can be applied with different categories or indexes, or also in
different contexts.

3. Mathematical model and theoretical results: Interacting Bernoulli processes

In this section we introduce and study a new model for a finite system of interacting Bernoulli processes. In the context described in the previous section, each Bernoulli process refers to a patent category and the time-steps t correspond to the sequential (with respect to the date of publication) patents n. However, unless otherwise specified, the model is presented here in a general framework so that it may be applied in other contexts, too.

Let H be a finite set with N = card(H). We are going to define a system {X^{(h)} = (X_{t,h})_{t ∈ ℕ∖{0}} : h ∈ H} of Bernoulli processes with parameters evolving over time according to a reinforcement rule with interaction: specifically, for each t ≥ 0, let X_{t+1,1}, ..., X_{t+1,N} be N Bernoulli random variables that are conditionally independent given the past and such that

\[
P_{t,h} = P\big(X_{t+1,h} = 1 \mid \text{past until time-step } t\big)
        = \frac{\theta_h + \sum_{j \in \mathcal{H}} \gamma_{j,h}\, S_{t,j}}{c + t}, \tag{3.1}
\]

where θ_h > 0 and c ≥ θ_h are parameters that tune the initial condition in the model dynamics (for details, see (B.1) in Appendix B), and S_{t,j} = Σ_{n=1}^{t} X_{n,j} is the number of successes until time-step t for process j. Moreover, the parameters γ_{j,h} play a fundamental role in the dynamics of the system (see again (B.1)). We refer to the matrix Γ = (γ_{j,h})_{j,h} as the interaction matrix and we assume the following conditions:

(A1) Γ is non-negative and such that 1^⊤Γ ≤ 1^⊤ (where 1 = (1, ..., 1)^⊤), i.e. Σ_{j∈H} γ_{j,h} ≤ 1 for each h ∈ H, so that we have P_{t,h} ∈ (0, 1);

(A2) Γ is irreducible, that is, the graph with the Bernoulli processes as nodes and with Γ as the (weighted) adjacency matrix is strongly connected.
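As a concrete illustration, the dynamics (3.1) and the assumptions (A1) and (A2) can be simulated and checked numerically. The sketch below is ours (the names `check_assumptions` and `simulate` are hypothetical, not from the paper); the irreducibility test uses the standard fact that a non-negative N×N matrix is irreducible exactly when (I + Γ)^{N−1} has strictly positive entries:

```python
import numpy as np

def check_assumptions(gamma):
    """(A1): Gamma non-negative with column sums at most 1.
    (A2): Gamma irreducible, i.e. (I + Gamma)^(N-1) entrywise positive."""
    gamma = np.asarray(gamma, dtype=float)
    n = gamma.shape[0]
    a1 = bool((gamma >= 0).all() and (gamma.sum(axis=0) <= 1).all())
    reach = np.linalg.matrix_power(np.eye(n) + (gamma > 0), n - 1)
    a2 = bool((reach > 0).all())
    return a1, a2

def simulate(theta, gamma, c, T, seed=None):
    """Draw the interacting Bernoulli processes of eq. (3.1):
    P_{t,h} = (theta_h + sum_j gamma_{j,h} * S_{t,j}) / (c + t).
    Returns the T x N array of running success counts S_{t,.}."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    S = np.zeros(theta.size)                  # S_{t,j}: successes so far
    path = np.zeros((T, theta.size))
    for t in range(T):
        p = (theta + gamma.T @ S) / (c + t)   # P_{t,h} for h = 1..N
        S = S + (rng.random(theta.size) < p)  # conditionally independent draws
        path[t] = S
    return path

# A 2-process example: Gamma non-negative, column sums 0.5, irreducible.
theta = [0.5, 0.5]
gamma = [[0.2, 0.3],
         [0.3, 0.2]]
a1, a2 = check_assumptions(gamma)
path = simulate(theta, gamma, c=1.0, T=500, seed=0)
```

Because the success probabilities scale as 1/(c + t), each simulated count grows sublinearly, which is the power-law (Heaps-type) behavior discussed in Subsection 1.1.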
(In Appendix C we will discuss the case of a reducible matrix Γ.7) Hence, at each time-step t, the probability P_{t,h} that we will have a success for process h at time-step t+1 can depend increasingly not only on the number of successes observed in process h itself, according to a self-reinforcement principle (Pemantle 2007), but also on the number S_{t,j} of successes observed in any other process j until time-step t (a property that we call cross-reinforcement). The parameter γ_{j,h} regulates this dependence (with γ_{j,h} = 0 meaning the absence of direct dependence). In other words, for each pair (j, h) of processes, the parameter γ_{j,h} quantifies how much the appearance of a success in process j induces a potential future success in process h. It is important to note that the assumed irreducibility of Γ entails an interaction in this sense among all the processes: process j can reinforce process h directly (when γ_{j,h} > 0) or indirectly, through the presence of at least one path joining them. In our context, given t patents, the probability P_{t,h} that the future patent t+1 will be a success in category h can depend increasingly on the number of past successes in any category (self- and cross-category reinforcement), with the parameter γ_{j,h} ruling the strength of this (direct) dependence, and the assumed irreducibility of the matrix Γ generates a global interaction among all the categories (in line with the literature on knowledge spillovers among patents; Jaffe et al. (2000),