text string | source string |
|---|---|
will omit the definition of nuisance estimators and present a simplified algorithm in Algorithm 2. The complete bootstrap procedure with the estimators of nuisance parameters can be found in Appendix C. Algorithm 2: Simplified two-stage sampling and weighting procedure. Input: outcome mean estimators $\hat{E}[Y_{uN}(s)]$ and $\hat{E}[Y$... | https://arxiv.org/abs/2505.10747v1
semi-synthetic data analysis to investigate the finite-sample performance of the tests studied in the previous section. In particular, we include all four bootstrap tests with two scaling and two weighting schemes proposed in Section 3.5. As a benchmark, we also include the IPW test statistic based on sample splitting,... | https://arxiv.org/abs/2505.10747v1 |
This is mainly because when constant weighting is used, the WIPW estimator boils down to the usual IPW estimator. It is well-known that the IPW estimator is highly variable when the downsampling on one arm in the second stage is substantial due to small ε. 2. Tests based on adaptive weighting are robust to ε. Relevant to the ... | https://arxiv.org/abs/2505.10747v1
For each permuted sample, we simulate adaptive sampling. We first draw N1 = 1000 random samples. Because the new treatment could be beneficial for the patients, we apply the ε-greedy algorithm (8) to collect additional N2 = 1000 samples in the second stage, encouraging assignment of the new treatment. We vary ε ∈ {0.1, 0.2, 0.... | https://arxiv.org/abs/2505.10747v1
direction for future study. See, for instance, recent work on adaptive experimental design in Che et al. (2023), Liang et al. (2023), Simchi-Levi et al. (2023), and Li et al. (2024). Our results on limiting distributions and proposed bootstrap procedure simplify power calculations, which in turn can inform the design o... | https://arxiv.org/abs/2505.10747v1 |
Systolic Blood Pressure Intervention Trial (SPRINT)”. In: Clinical Trials 11.5, pp. 532–546. Bakshy, Eytan et al. (2018). “AE: A domain-agnostic platform for adaptive experimentation”. In: Conference on Neural Information Processing Systems, pp. 1–8. Bauer, Peter and Karl Köhne (1994). “Evaluation of experiments w... | https://arxiv.org/abs/2505.10747v1
“Diffusion approximations for Thompson sampling”. In: arXiv preprint arXiv:2105.09232. Fithian, William, Dennis Sun, and Jonathan Taylor (2014). “Optimal inference after model selection”. In: arXiv preprint arXiv:1410.2597. US Food and Drug Administration et al. (2019). “Adaptive designs for clinical trials of dr... | https://arxiv.org/abs/2505.10747v1
and design in batch adaptive experiments”. In: Journal of Causal Inference 12.1, p. 20230068. Li, Lihong, Wei Chu, John Langford, and Robert E Schapire (2010). “A contextual-bandit approach to personalized news article recommendation”. In: Proceedings of the 19th International Conference on World Wide Web, pp. 661–67... | https://arxiv.org/abs/2505.10747v1
in Biosciences 47.3, pp. 257–268. Schraivogel, Daniel, Lars M Steinmetz, and Leopold Parts (2023). “Pooled Genome-Scale CRISPR Screens in Single Cells”. In: Annual Review of Genetics 57.1, pp. 223–244. Shen, Changyu, Xiaochun Li, and Lingling Li (2014). “Inverse probability weighting for covariate adjustment in rando... | https://arxiv.org/abs/2505.10747v1
F.1 Bounded Lipschitz test function class — 36; F.2 Preliminaries on regular conditional distribution — 38; F.3 Definition of conditional convergence — 39; F.4 Proof of Lemma 4 ... | https://arxiv.org/abs/2505.10747v1
... 48; I.4 Proof of Lemma 20 — 48; J Proof of Theorem 1 — 50; J.1 General proof roadmap for weak convergence result — 50; J.2 Proof of Step 1 in Appendix J.1 — 53; J.3 Proof of Step 2 in Appendix J.... | https://arxiv.org/abs/2505.10747v1
with ε-greedy algorithm — 75; O.2 Power comparison: m = 1/2 versus m = 1 — 76; P Additional semi-synthetic data analysis results — 78. Notation. Throughout the appendix, we will use $(a, b)$ to denote a column vector for $a \in \mathbb{R}^k$, $b \in \mathbb{R}^d$ when there is no ambiguity. In other words, we us... | https://arxiv.org/abs/2505.10747v1
and then provide the detailed bootstrap algorithm in Appendix C.2. C.1 Nuisance parameter estimation. Analogous to the estimator $\hat{E}[Y_{uN}(s)]$ in (2), we estimate $E[Y_{uN}^2(s)]$ using: $\mathrm{WIPWS}(s) \equiv \sum_{t=1}^{2} \frac{N_t h_N^{(t)}(s)}{\sum_{t=1}^{2} N_t h_N^{(t)}(s)} \cdot \frac{1}{N_t}\sum_{u=1}^{N_t} \tilde{\Lambda}_{uN}^{(t)}(s)$ and $\tilde{\Lambda}_{uN}^{(t)}(s) \equiv \frac{1(A_{uN}^{(t)}=s)\,(Y_{uN}^{(t)})^2}{P[A_{uN}^{(t)}=s \mid \mathcal{H}_{t-1}]}$. (16) Using thes... | https://arxiv.org/abs/2505.10747v1
ensuring asymptotic validity. E A closer look at the literature. In this section, we provide a detailed comparison between our results and the existing literature. E.1 Investigating Zhang et al. (2020). They show that a non-normal limiting distribution can arise for the classical sample-mean statistic under a batched bandit s... | https://arxiv.org/abs/2505.10747v1
as in Equation (3). However, under the null $H_{0N}$ and local alternative $H_{2N}$, this condition generally fails. As shown in Steps 2 and 3 in Section J.1: with adaptive weighting ($m = 1/2$) and $q_t = 1/2$, $\hat{V}_N(0)/\hat{V}_N(1) \xrightarrow{d} \frac{\sum_{t=1}^{2} V^{(t)}(0)/(\sum_{t=1}^{2}(H^{(t)}(0))^{1/2})^2}{\sum_{t=1}^{2} V^{(t)}(1)/(\sum_{t=1}^{2}(H^{(t)}(1))^{1/2})^2}$, (18) where $V^{(t)}(s)$ and $H^{(t)}(s)$ a... | https://arxiv.org/abs/2505.10747v1
and that confidence intervals faithfully reflect uncertainty around the desired parameter. F Probabilistic preliminaries. In Section F.1, we discuss the technique of proving weak convergence of random variables with test functions. In Section F.2, we discuss the regular conditional distribution (RCD) and its properties. F.1... | https://arxiv.org/abs/2505.10747v1
and fourth moment as in Lemma 2. We formalize this result in the following lemma. Lemma 4 (Upper bound with bounded Lipschitz function). Suppose $W_{uN} \in \mathbb{R}^k$ are i.i.d. random variables for any fixed $N \in \mathbb{N}_+$. Then if we have $E[W_{uN}] = 0$, $E[W_{uN}W_{uN}^\top] = I_k$ and $E[\lVert W_{uN}\rVert_2^4]/N \to 0$, $E[\lVert W_{uN}\rVert_2^3]/N^{1/2} \to 0$, then for any sequence of Lipschitz fun... | https://arxiv.org/abs/2505.10747v1
distribution to a random variable $W$ conditionally on $\mathcal{F}_N$ if $P[W_N \le t \mid \mathcal{F}_N] \xrightarrow{p} P[W \le t]$ for each $t \in \mathbb{R}$ at which $t \mapsto P[W \le t]$ is continuous. (19) We denote this relation via $W_N \mid \mathcal{F}_N \xrightarrow{d,p} W$. Definition 3. For each $N$, let $W_N$ be a random variable and let $\mathcal{F}_N$ be a σ-algebra. Then, we say $W_N$ converges in probability to a constant $c$ conditionally on $\mathcal{F}_N$ i... | https://arxiv.org/abs/2505.10747v1
(2024)). Let $W_{uN}$ be a triangular array of random variables, such that the $W_{uN}$ are independent conditionally on $\mathcal{F}_N$ for each $N$. If for some $\delta > 0$ we have $\frac{1}{N^{1+\delta}}\sum_{u=1}^{N} E[|W_{uN}|^{1+\delta} \mid \mathcal{F}_N] \xrightarrow{p} 0$, (22) then $\frac{1}{N}\sum_{u=1}^{N}(W_{uN} - E[W_{uN}\mid\mathcal{F}_N]) \mid \mathcal{F}_N \xrightarrow{p,p} 0$. (23) Applying the dominated convergence theorem, we know $\frac{1}{N}\sum_{u=1}^{N}(W_{uN} - E[W_{uN}\mid\mathcal{F}_N]) \xrightarrow{p} 0$. The condit... | https://arxiv.org/abs/2505.10747v1
Thus we have $\liminf_{N\to\infty} D_N/C_N \ge 1 + \liminf_{N\to\infty} R_{1N} + \liminf_{N\to\infty} R_{2N} > 1$ almost surely. Thus the desired bound can be obtained by checking $\limsup_{N\to\infty} \frac{a_N^{1/2} E[X_N]}{(E[X_N^2] - a_N E[X_N]^2)^{1/2}} \times \frac{(1-a_N)^{1/2} E[Y_N]}{(E[Y_N^2] - (1-a_N) E[Y_N]^2)^{1/2}} = \limsup_{N\to\infty} \frac{C_N^{1/2}}{D_N^{1/2}} = \frac{1}{\liminf_{N\to\infty} D_N^{1/2}/C_N^{1/2}} < 1$ almost surely. H New results on conditiona... | https://arxiv.org/abs/2505.10747v1
that whenever $\lVert X_N - Y_N\rVert_2 < \varepsilon(\delta)$ we can guarantee $|f(X_N) - f(Y_N)| \le \delta$ since $f$ is uniformly continuous in $\mathcal{X}$. Then for any $\delta > 0$, we have $E[|f(X_N) - f(Y_N)|] = E[|f(X_N) - f(Y_N)|\,1(\lVert X_N - Y_N\rVert_2 \le \varepsilon(\delta))] + E[|f(X_N) - f(Y_N)|\,1(\lVert X_N - Y_N\rVert_2 > \varepsilon(\delta))] \le \delta + 2\sup_{x\in\mathcal{X}}|f(x)|\,P[\lVert X_N - Y_N\rVert_2 > \varepsilon(\delta)]$. Letting $N \to \infty$, we know $P[\lVert X_N - Y_N\rVert_2 > \varepsilon(\delta)] \to 0$ so that we have $\lim_{N\to\infty} E[|f$... | https://arxiv.org/abs/2505.10747v1
$\liminf_{N\to\infty}\big( E[Y_{uN}^2(s)] - E[Y_{uN}(s)]^2 + (1 - \bar{e}_N(s, \mathcal{H}_{t-1})) E[Y_{uN}(s)]^2 \big) \ge \liminf_{N\to\infty}\big( E[Y_{uN}^2(s)] - E[Y_{uN}(s)]^2 \big) > 0$. Thus we have $\liminf_{N\to\infty} V_N^{(t)}(s) > 0$ for any $s, t$. Proof of $\limsup_{N\to\infty} V_N^{(t)}(s) < \infty$: We bound $\limsup_{N\to\infty} V_N^{(t)}(s) \le \limsup_{N\to\infty} E[Y_{uN}^2(s)] \le \limsup_{N\to\infty} (E[Y_{uN}^4(s)])^{1/2} < \infty$, where the last inequality is due to Assumption 1. ... | https://arxiv.org/abs/2505.10747v1
asymptotic covariance matrix $\Sigma^{(t)} \equiv (\mathrm{Cov}^{(t)})_{2\times 2}$, where the covariance is defined as in Appendix A. We will just denote $\mathrm{Cov}^{(2)} = \mathrm{Cov}^{(2)}(A_1)$ for simplicity. Random variables related to observed data. For ease of presentation, we rewrite the weight vector as $H_N^{(t)} = (H_N^{(t)}(0), H_N^{(t)}(1))$, $H_N^{(1)}(s) = e(s)$, $H_N^{(2)}(s) \equiv H_N(s$... | https://arxiv.org/abs/2505.10747v1
can write $\hat{I}_N$ with adaptive weighting as $\frac{R_N(0)\sum_{t=1}^{2}\Lambda_N^{(t)}(0)(V_N^{(t)}(0))^{1/2} - R_N(1)\sum_{t=1}^{2}\Lambda_N^{(t)}(1)(V_N^{(t)}(1))^{1/2}}{(R_N^2(0)\sum_{t=1}^{2} V_N^{(t)}(0) S_{V(0)}^{(t)} R_{V(0)}^{(t)} + R_N^2(1)\sum_{t=1}^{2} V_N^{(t)}(1) S_{V(1)}^{(t)} R_{V(1)}^{(t)})^{1/2}}$, where $R_N^{-1}(s) \equiv \sum_{t=1}^{2}(H_N^{(t)}(s))^{1/2}$; 3. $T_N$ with constant weighting: we can write $\hat{I}_U$ with constant weighting as $\frac{1}{\sqrt{2}}\sum_{t=1}^{2}$... | https://arxiv.org/abs/2505.10747v1
$\lVert f\rVert_\infty \le \frac{1}{2}$, $\sup_{x,y\in\mathbb{R}^{20}} |f(x) - f(y)| \le 1$, $\sup_{x,y\in\mathbb{R}^{20}} \frac{|f(x)-f(y)|}{\lVert x-y\rVert} \le \frac{1}{2}$. (36) We divide the proofs into the following steps: $E[f(E_N^{(1)}, E_N^{(2)})] - E[f(W)] = E[f(W_1, E_N^a(W_1))] - E[f(W_1, W_2)] + E[f(E_N^{(1)}, E_N^{(2)})] - E[f(W_1, E_N^a(W_1))] \equiv F_1 + F_2$. It suffices to show $F_1 = o(1)$ and $F_2 = o(1)$. We will prove these two claims subsequently. Before... | https://arxiv.org/abs/2505.10747v1
the lower bound, by Lemma 19, we know $\liminf_{N\to\infty} V_N(s, W_1) > 0$. For (37), this is implied by the convergence $B_N = o(1)$. Thus we have proved $C_N = o(1)$ and thus $F_1 = o(1)$. J.3.2 Proof of $F_2 = o(1)$. We further divide the proof into two steps. Define $A_N(f) \equiv E[f(E_N^{(1)}, E_N^{(2)})] - E[f(E_N^{(1)}, E_N^a(E_N^{(1)}))]$, $B_N(f) \equiv E[f(E_N^{(1)}$... | https://arxiv.org/abs/2505.10747v1
$2N^{1/2}e^{1/2}(s)$. (38) Compute $\mathrm{Var}[\mathrm{WIPW}(s) - E[Y_{uN}(s)]] = E\Big[\big(\sum_{t=1}^{2}\sum_{u=1}^{N_t} h_N^{(t)}(s)\big(\tfrac{1(A_{uN}^{(t)}=s)}{\bar{e}_N(s,\mathcal{H}_{t-1})} Y_{uN}^{(t)} - E[Y_{uN}(s)]\big)\big)^2 / W_N^2(s)\Big] \le \frac{4}{N e(s)} E\Big[\big(\sum_{t=1}^{2}\sum_{u=1}^{N_t} h_N^{(t)}(s)\big(\tfrac{1(A_{uN}^{(t)}=s)}{\bar{e}_N(s,\mathcal{H}_{t-1})} Y_{uN}^{(t)} - E[Y_{uN}(s)]\big)\big)^2\Big] = \frac{4}{N^2 e(s)}\sum_{t=1}^{2}\sum_{u=1}^{N_t}\big( E[Y_{uN}^2(s)] - \bar{e}_N(s,\mathcal{H}_{t-1}) E[Y_{uN}(s)]^2 \big) \le \frac{4 E[Y_{uN}^2(s)]}{N e(s)}$. Then it suff... | https://arxiv.org/abs/2505.10747v1
continuous mapping theorem, for any $c \in [-\infty, 0]$. Based on Assumption 2, we divide the proof into two cases. 1. When $\bar{e}(s, x)$ is Lipschitz continuous in $x$: we note by Lemma 17 that $|\bar{e}(s, h_N(M_N)) - \bar{e}(s, h(W_1, c))| \xrightarrow{a.s.} 0$ is true. Moreover, if a nonnegative function $f$ is Lipschitz continuous and its range is in $[0, 1]$, the... | https://arxiv.org/abs/2505.10747v1
of (43): First consider $E[(\hat{\Lambda}_{uN}^{(2)}(s) - E[Y_{uN}(s)])^4 \mid H_N^{(1)}] \le 8\big( E[(\hat{\Lambda}_{uN}^{(2)}(s))^4 \mid H_N^{(1)}] + E[Y_{uN}(s)]^4 \big) = \frac{8\,E[Y_{uN}^4(s)]}{(H_N^{(2)}(s))^2 H_N^{(2)}(s)} + 8\,E[Y_{uN}(s)]^4$. Then we have $(H_N^{(2)}(s))^2 E[(\hat{\Lambda}_{uN}^{(2)}(s) - E[Y_{uN}(s)])^4 \mid H_N^{(1)}] \le \frac{8\,E[Y_{uN}^4(s)]}{H_N^{(2)}(s)} + 8\,(H_N^{(2)}(s))^2 E[Y_{uN}(s)]^4$. Since $H_N^{(t)}(s) \le 1$ and by Assumption 1 $E[Y_{uN}^4(s)]$ i... | https://arxiv.org/abs/2505.10747v1
$S((A^{(1)}, V^{(1)}), c)) - \bar{e}(s, -\infty)|$; • We collect all the results to prove the claim. Decomposition of $W_1(\mathcal{W}(-\infty), \mathcal{W}(c))$. Now we first decompose the KS distance into two parts, using the triangle inequality: $W_1(\mathcal{W}(-\infty), \mathcal{W}(c)) \le W_1(A^{(2)}(0) w^{(2)}(0), S_{2,0}) + W_1(A^{(2)}(1) w^{(2)}(1), S_{2,1}) \equiv K_0 + K_1$. By the triangle inequality, we have for $s \in \{0,1\}$, $K_s \le W$... | https://arxiv.org/abs/2505.10747v1
to prove the following statement is true: $E[f(W^{(1,b)}, W^{(2,b)}) \mid \mathcal{G}_N] - E[f(W)] \xrightarrow{p} 0$, for any $f$ such that $\lVert f\rVert_{BL} < \infty$. Consider the following decomposition: $E[f(W^{(1,b)}, W^{(2,b)}) \mid \mathcal{G}_N] - E[f(W)] = E[f(W^{(1,b)}, W^{(2,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - E[f(W_1, W_2)] = E[f(W_1, W^{(a,b)}(W_1)) \mid \mathcal{G}_N] - E[f(W_1, W_2)] + E[f(W^{(1,b)}, W^{(a,b)}(W^{(1,b)})) \mid \mathcal{G}_N] - E[f(W_1, W^{(a,b)}(W_1)) \mid \mathcal{G}_N] + E[f$... | https://arxiv.org/abs/2505.10747v1
4 (Adaptive weighting with $m = 1$). Suppose Assumptions 1–2 and Assumption 5 hold. Then, for any $s \in \{0,1\}$, we have $\mathrm{WIPW}(s) - E[Y_{uN}(s)] = o_p(1)$. Furthermore, define $M^{(t)}(s) \equiv \big(\frac{q_t H^{(t)}(s)}{\sum_{t=1}^{2} q_t H^{(t)}(s)}\big)^2$ and $\bar{w}^{(t)}(s) = \big(M^{(t)}(s)/(R^{(t)}(s))^2\big)^{1/2}$. Then considering the test statistic (49), we have $\sqrt{N}(\mathrm{WIPW}(s) - E[Y_{uN}(s)]) \xrightarrow{d} \sum_{t=1}^{2} A^{(t$... | https://arxiv.org/abs/2505.10747v1
proof roadmap sketched in Appendix J.1. We need to choose the appropriate joint vector $E_N^{(t)}$ to accommodate the nuisance parameter $\hat{\sigma}$. In particular, keeping $E_N^{(1)}$ as defined in (33), we can define the new $E_N^{(2)}$ as $E_N^{(2)} = \big((\Sigma_N^{(2)})^{-1/2}\Lambda_N^{(2)},\, V_N^{(2)},\, H_N^{(2)},\, \mathrm{Vec}((\Sigma_N^{(2)})^{1/2})\big)$, where $H_N^{(2)}$ is defined as in (51). The oth... | https://arxiv.org/abs/2505.10747v1
Since $I_N(s) \ge I_N^{(1)}(s) = \sum_{u=1}^{N_1} 1(A_{uN}^{(1)} = s)$, we know $E\big[\frac{(R_N(s))^2}{(I_N(s))^2} 1(I_N^{(1)}(s) > 0)\big] \le E\big[\frac{(R_N(s))^2}{(I_N^{(1)}(s))^2} 1(I_N^{(1)}(s) > 0)\big] = E\big[\frac{E[(R_N(s))^2 \mid \mathcal{H}_1]}{(\sum_{u=1}^{N_1} 1(A_{uN}^{(1)}=s))^2} 1(I_N^{(1)}(s) > 0)\big]$. Further, we can decompose $E[(R_N(s))^2 \mid \mathcal{H}_1] = \sum_{u=1}^{N_1} 1(A_{uN}^{(1)}=s)(Y_{uN}^{(1)} - E[Y_{uN}(s)])^2 + N_2 \bar{e}_N(s, \mathcal{H}_1)\mathrm{Var}[Y_{uN}(s)] \le \sum_{u=1}^{N_1}(Y_{uN}^{(1)} - E[Y$... | https://arxiv.org/abs/2505.10747v1
common distributions: Gaussian, Bernoulli, Poisson and Student's t distributions. We consider Thompson sampling (7) with $l_N = 0.2$ and consider the sample size $N = 20{,}000$ with $N_1 = N_2 = 10{,}000$. We set the significance level to be 0.05. The distribution information can be summarized as below: • Gaussian: $Y_u(0) \sim N(\theta, 1)$, $Y_u(1)$... | https://arxiv.org/abs/2505.10747v1
arXiv:2505.11705v1 [math.ST] 16 May 2025. Consistency of Bayes factors for linear models. Elías Moreno1*, Juan J. Serrano-Pérez2 and Francisco Torres-Ruiz2. 1* Royal Academy of Sciences, Spain. 2 Department of Statistics, University of Granada, Spain. *Corresponding author(s). E-mail(s): emoreno@ugr.es; Contributing autho... | https://arxiv.org/abs/2505.11705v1
(2008), Maruyama and George (2011), Bayarri et al. (2012) and Girón (2021). We analyze here the asymptotics of six Bayes factors for comparing the generic model $M_p$ against $M_0$. We remark that the inconsistency of a Bayes factor implies the inconsistency of the posterior model probability in the set of candidate models (More... | https://arxiv.org/abs/2505.11705v1
$\pi^{IP}(\beta_{p+1}, \sigma_p) = c \int_0^\infty \int_{-\infty}^\infty N\big(\beta_{p+1} \mid \tilde{\alpha}_0, \tfrac{n(\sigma_p^2 + \sigma_0^2)}{p+2}(X'X)^{-1}\big)\, HC^+(\sigma_p \mid 0, \sigma_0)\, \tfrac{1}{\sigma_0}\, d\alpha_0\, d\sigma_0$. This is an improper prior, although the Bayes factor of $M_p$ against $M_0$ for $(\pi^N(\alpha_0, \sigma_0), \pi^{IP}(\beta_{p+1}, \sigma_p))$ is well-defined, as the arbitrary constant $c$ that appears in $\pi^N(\alpha_0, \sigma_0)$ cancels out in t... | https://arxiv.org/abs/2505.11705v1
a closed form expression. 3.3.4 The prior $\pi^{CG}(g \mid n)$. A prior quite close to the preceding one was introduced by Cui and George (2008); its expression is $\pi^{CG}(g \mid n) = (1+g)^{-2}$. The Bayes factor for this prior turns out to be $B^{CG}_{p0}(y, X) = \int_0^\infty \frac{(1+g)^{(n-p-1)/2}}{(1+g B_{p0})^{(n-1)/2}}\,(1+g)^{-2}\, dg$. (4) The integral in (4)... | https://arxiv.org/abs/2505.11705v1
any $p$. Theorem 1. i) The Bayes factor $B^L_{p0}(B_{p0})$ is inconsistent under the null model $M_0$ for any $p$. ii) The Bayes factor $B^R_{p0}(B_{p0})$ for any robust $g$-prior in the class $\mathcal{R} = \{\pi^R(g \mid a, d, \rho) : a, d > 0, \rho = d/(d+n)\}$ is inconsistent under the null model $M_0$ for any $p$. iii) The Bayes factors $B^{IPH}_{p0}(B_{p0})$ and $B^B_{p0}(B_{p0})$ are consistent. Proof. See... | https://arxiv.org/abs/2505.11705v1
and (iv) $n \ge p + 1 + 2a$, were stated as sufficient conditions to ensure the consistency of the Bayes factor (Bayarri et al., 2012). Unfortunately, this assertion is not true, and the Bayes factor $B^L_{p0}$, which is derived from the robust prior $\pi^R(g \mid 1/2, n, 1/2)$, serves as a counter-example. The other Bayes factors $B^{IP}_{p0}$, $B^{IPH}_{p0}$, $B$... | https://arxiv.org/abs/2505.11705v1
of Theorem 1. i) To prove the inconsistency of $B^L_{p0}(B_{p0})$ under the null $M_0$ we use part i) in Lemmas 1 and 2, and we can write $\lim_{n\to\infty} B^L_{p0}(B_{p0}) \ge \lim_{n\to\infty} \frac{1}{2}\, n^{-(p+3)/2}\, B_{p0}^{-(n-1)/2} \big(\frac{1 - B_{p0}}{2 B_{p0}}\big)^{-(p+1)/2} \Gamma((p+1)/2) = 2^{(p-1)/2}\,\Gamma((p+1)/2) \lim_{n\to\infty} n^{-(p+3)/2}(1 - B_{p0})^{-(p+1)/2} = \begin{cases} 1, & \text{for } p = 1, [P_{M_0}], \\ \infty, & \text{for } p > 1, [P_{M_0}] \end{cases}$... | https://arxiv.org/abs/2505.11705v1
$> 1$ and this proves the consistency under $M_0$. When sampling from $M_p$ we have that $\lim_{p\to\infty} B^{IPH}_{p0}(B_{p0}) = \lim_{p\to\infty}\big( (1+2r)^{r-1}\big(1 + \tfrac{2(r-1)}{1+\delta}\big)^{-r} \big)^{p/2}$, $[P_{M_p}]$. Thus, for any $(r, \delta)$ such that $(1+2r)^{r-1}\big(1 + \tfrac{2(r-1)}{1+\delta}\big)^{-r} \le 1$ the Bayes factor is inconsistent under $M_p$. This proves... | https://arxiv.org/abs/2505.11705v1
George, E.I.: Empirical Bayes vs. fully Bayes variable selection. Journal of Statistical Planning and Inference 138, 888–900 (2008). Casella, G., Moreno, E.: Objective Bayesian variable selection. Journal of the American Statistical Association 101, 157–167 (2006). Fernández, C., Ley, E., Steel, M.F.: Benchmark priors for Bayes... | https://arxiv.org/abs/2505.11705v1
arXiv:2505.12243v1 [math.CO] 18 May 2025. Improved Bounds on the Probability of a Union and on the Number of Events that Occur. Ilan Adler∗, Richard M. Karp∗ and Sheldon M. Ross†. April 15, 2025. Abstract. Let $A_1, A_2, \ldots, A_n$ be events in a sample space. Given the probability of the intersection of each collection of up to $k+$... | https://arxiv.org/abs/2505.12243v1
$\sum_{j=r}(-1)^{r+j}\binom{j-1}{r-1}\binom{m}{j}$ (2) Proof: Note that both sides of Equation (2) equal 0 when $m < r$. The proof when $m \ge r$ is by induction on $r$. The result follows when $r = 1$ since for $m \ge 1$ the identity $\sum_{j=0}^{m}\binom{m}{j}(-1)^j = (1-1)^m = 0$ implies that $1 = \sum$... | https://arxiv.org/abs/2505.12243v1
simpler. Considering Theorem 2, it is clear that obtaining a lower bound for $E\big[\binom{X-i}{k+1-i}\big]$ can provide upper and lower bounds for $P(X \ge r)$. So far, we have derived bounds based solely on the coarse information provided by the quantities $S_j$, $j = 1, \ldots, k+1$. Next, we refine... | https://arxiv.org/abs/2505.12243v1
$100$, $t = 1, \ldots, 6$, so $S_1 = 1.290$, $S_2 = 0.6925$, $S_3 = 0.1980$. Below, we tabulate the various bounds presented in this paper for $r = 1$ and $k = 2$. Note that since $k$ is even, $S_1 - S_2 + \big(\text{lower bound on } E\big[\binom{X-1}{2}\big]\big) \le P(X \ge 1) \le S_1 - S_2 + S_3$. (Table columns: Reference; $S_1 - S_2$; Lower Bound; Lower Bound; Upper Bound on $E\big[\binom{X$... | https://arxiv.org/abs/2505.12243v1
A Hybrid Prior Bayesian Method for Combining Domestic Real-World Data and Overseas Data in Global Drug Development. Keer Chen1, Zengyue Zheng1, Pengfei Zhu2, Shuping Jiang2, Nan Li3, Jumin Deng1, Pingyan Chen4, Zhenyu Wu5*, Ying Wu1,4*. 1 Department of Biostatistics, School of Public Health, Southern Medical University, Guangzhou 510515, China. 2 BARDS, MSD China, Sh... | https://arxiv.org/abs/2505.12308v1
bias, mean squared error (MSE), and type I error with regard to 1. The comparison was conducted under six scenarios defined by two levels of baseline differences and three levels of effect heterogeneity. The simulation experiments used the vague prior (1,1). Markov Chain Monte Carlo (MCMC) calculations were implemented using R 4.2.3 with Rstan. For all methods, we sta... | https://arxiv.org/abs/2505.12308v1
arXiv:2505.12706v1 [math.ST] 19 May 2025. Efficient computation of complementary set partitions, with applications to an extension and estimation of generalized cumulants. Elvira Di Nardo∗, Giuseppe Guarino†. Abstract. This paper develops new combinatorial approaches to analyze and compute special set partitions, called c... | https://arxiv.org/abs/2505.12706v1
pages. As a result, any missing information must be manually computed, utilizing the fundamental properties of complementary set partitions (see Section 2 for a short review). In [23], Stafford suggested a strategic change by avoiding the combinatorial complexity of the problem. Instead of initially computing complement... | https://arxiv.org/abs/2505.12706v1
multivariate polykays involved. The main idea is to reduce the estimation of a generalized multivariate cumulant to the estimation of a generalized cumulant of the same order, but involving fewer power sum symmetric functions. In turn, the estimation of a generalized cumulant is further simplified to the estimation o... | https://arxiv.org/abs/2505.12706v1
canonical representation of $\tilde\pi$. Thus, the complementary set partitions of $\tilde\pi$ can be recovered from those of $\pi$ by swapping the corresponding interchanged integers in the blocks (swapping property). Example 2.1. The complementary set partitions of $\pi = 1|234$ are $12|3|4$, $13|2|4$, $14|2|3$, $123|4$, $124|3$, $12|34$, $134|2$, $13|24$, $14|23$, $1234$... | https://arxiv.org/abs/2505.12706v1
partitions $\pi$ and $\tilde\pi$ are complementary is to verify that there exists a path in $\mathcal{G}(\pi) \oplus \mathcal{G}(\tilde\pi)$ for each pair of vertices. To this aim, a classical strategy relies on working with its Laplacian matrix. If $L[\mathcal{G}(\pi) \oplus \mathcal{G}(\tilde\pi)]$ has rank $n-1$, then $\mathcal{G}(\pi) \oplus \mathcal{G}(\tilde\pi)$ is connected, as the number of connected components is equal to 1 [4]. In summa... | https://arxiv.org/abs/2505.12706v1
$\pi$ is $\mathcal{T}^{(n)}_{\pi,m} = \{\tilde\pi = \tilde\pi_1 \cup \tilde\pi_2 \in \Pi_n \mid \tilde\pi_1 \in \Pi_{A_1}, \tilde\pi_2 \in \Pi_{A_2} \text{ with } A_1, A_2 \text{ in (5) at varying } C_1|C_2 \in \Pi_{m,2}\}$. (6) Proof. We prove that $\mathcal{T}^{(n)}_{\pi,m}$ contains exactly those partitions not complementary to $\pi$. Indeed, if $\tilde\pi \in \mathcal{T}^{(n)}_{\pi,m}$ then $\pi \vee \tilde\pi \le A_1|A_2$ since $\pi \le A_1|A_2$ and $\tilde\pi \le A_1|A_2$, as from (5) $\tilde\pi_1 \in \Pi_{A_1}$, $\tilde\pi_2 \in \Pi_{A_2}$ respectively. Vice versa, suppose $\tilde\pi$ not comple... | https://arxiv.org/abs/2505.12706v1
$, n^{(i_n)}\}$ where each $j \in [n]$ corresponds to $i_j$ repeated subscripts of the $X_i$'s. If $M = [n]$ with no repeating integers, then $S$ returns a set partition $\pi$ and (10) reduces to (1). If $m = 1$ then $M_1 = M$ and $K_M = E[X_1^{i_1} \cdots X_n^{i_n}]$, a multivariate moment. If $m = |M|$ then (10) returns the multivariate cumulant (9). To handle multivariate cumulants, m... | https://arxiv.org/abs/2505.12706v1
of $\mathbb{R}^n$, then $X^{e_j} = X_j$ for $j = 1, \ldots, n$ and (14) returns the $i$-th multivariate cumulant (9). Example 4.2. If $X = (X_1, X_2, X_3)$ then $K^{1}_{122}(X) = E[X_1 X_2^2 X_3^2]$, and $K^{1|2|2}_{100;010;001}(X) = \kappa_{122}(X)$, the multivariate cumulant of order $(1, 2, 2)$. The case $r_1 = \cdots = r_m = 1$ and $i = \vec{1}_n$ is the one of special interest for our purposes. Indeed a... | https://arxiv.org/abs/2505.12706v1
the elements within the blocks of $\pi_1$ and $\pi_2$ and $V_{\pi_1} \cap V_\pi = V_{1_n}$, then $V_{\pi_2} \cap V_{\tilde\pi} = V_{1_n}$, where $\tilde\pi$ is obtained from $\pi$ by applying the permutation $\sigma$ to the elements within its blocks. Proof. If there exists a permutation $\sigma$ reordering the elements in the blocks of $\pi_1$ and $\pi_2$, then there exist square permutation matrices $P$ and $Q$ of $n$ and $m$ rows respecti... | https://arxiv.org/abs/2505.12706v1
$V_\pi \subseteq V_{0_n} = \mathbb{R}^n$, but it also applies to all $\vec{1}_n$-partitions $\Lambda_{\tilde\pi}$ such that: $\Lambda_{\tilde\pi}$ and $\Lambda_\pi$ share at least one column; at least one column of $\Lambda_{\tilde\pi}$ can be expressed as a linear combination of some columns of $\Lambda_\pi$ or vice versa; $\dim(V_\pi) + \dim(V_{\tilde\pi}) \ge n + k$ with $k = 2, \ldots, n$ as $\dim(V_\pi \cap V_{\tilde\pi}) = k > 1$. While using $\vec{1}_n$-partitions and subspaces simpl... | https://arxiv.org/abs/2505.12706v1
$\vec{1}_{|i|}$-partition $\Lambda_\pi \in \mathcal{M}_{|i|}$ into a multi-index partition $\Lambda \in \mathcal{L}_i$, a suitable grouping rule is required for the rows of $\Lambda_\pi$. Definition 4.6 (Labeling rule). Given a multiset $M = \{1^{(i_1)}, \ldots, n^{(i_n)}\}$ the labeling rule induced by $M$ is defined as $\sigma_i : [p] \to [n]$ with $p = |i| \ge n$ such that $\sigma_i^{-1}(k) = \{t_{k-1}+1, \ldots, t_k\}$ for $k \in [n]$ with ... | https://arxiv.org/abs/2505.12706v1
$\ldots, M_{l(\Lambda)}$ of $S$ when the $p = |i|$ integers in $B_1, \ldots, B_{l(\Lambda)}$ are replaced by their images under $\sigma_i$. The result follows by observing that when considering the number of ways to split all the elements in $\{1^{(i_1)}, \ldots, n^{(i_n)}\}$ among the multisets of $S$, the product in (19) is overcounted by the permutations of equal multisets, ... | https://arxiv.org/abs/2505.12706v1
the swapping property. The second column presents the results of the new method proposed in Section 3.1. The third column shows the performance of Stafford's algorithm. The fourth and fifth columns report the results of the algorithms based on connected graphs, using the IsConnected function and the Laplacian matrix, ... | https://arxiv.org/abs/2505.12706v1
performed using (24), that is $K_{\varphi(\Lambda)}(Y) = \sum_{\Lambda^\star \in \mathcal{M}_{|i|}} \kappa_Y(\Lambda^\star) - \sum_{\Lambda^\star \in \mathcal{T}^{(|i|)}_{\pi,m}} \kappa_Y(\Lambda^\star)$ (25) where $Y$ is given in (21), $\mathcal{T}^{(|i|)}_{\pi,m}$ is the set of all not complementary $\vec{1}_{|i|}$-partitions of $\Lambda_\pi = \varphi(\Lambda)$ and $m = l(\Lambda_\pi)$. By repeating the same arguments of the proof of Theorem 4.2 and using Lemma 4.1, the first sum in (25) is $\sum_{\Lambda^\star \in \mathcal{M}_{|i|}} \kappa_Y(\Lambda^\star) = \sum_{e}$... | https://arxiv.org/abs/2505.12706v1
$\hat{K}_\Lambda(X)$ is recovered from an unbiased estimator of the generalized cumulant $K_{\varphi(\Lambda)}(Y)$. In turn, an unbiased estimator $\hat{K}_{\Lambda_\pi}(Y)$ of the generalized cumulant $K_{\Lambda_\pi}(Y)$ can be obtained by replacing the product of joint cumulants on the rhs of (22) with the joint polykays $k_{\lambda_1;\ldots;\lambda_m}(Y)$, such that $E[k_{\lambda_1;\ldots;\lambda_m}(Y)] = \kappa_{\lambda_1}(Y)\cdots\kappa_{\lambda_m}(Y$... | https://arxiv.org/abs/2505.12706v1
testing, and models involving complex dependence structures, remains an open avenue for future investigation. For example, developing efficient algorithms for the computation of cumulants of polykays and their multivariate generalizations remains an ongoing challenge. At present the only routine available is in Mat... | https://arxiv.org/abs/2505.12706v1
formal cumulant sequence associated with a multi-indexed sequence $\{\mu_i\}$, with $\mu_0 = 1$, and not necessarily representing the moment sequence of any multivariate random vector $X$. In particular, if $\mu_i = 1$ for all $i \in \mathbb{N}_0^n$ then from (11) it follows $\kappa_i = 1$ if $|i| = 1$ and $0$ otherwise. Therefore if $\Lambda_\pi \ne \vec{1}_n^\top$, the rhs of (34) vanishes corresp... | https://arxiv.org/abs/2505.12706v1
its applications. Springer, Dordrecht, 2005. [14] Maplesoft, a division of Waterloo Maple Inc. Maple 2024. Waterloo, Ontario. [15] P. McCullagh. Tensor notation and cumulants of polynomials. Biometrika, 71(3):461–476, 1984. [16] P. McCullagh. Tensor Methods in Statistics: Monographs on Statistics and Applied Proba... | https://arxiv.org/abs/2505.12706v1
arXiv:2505.12712v1 [math.ST] 19 May 2025. STATISTICAL INFERENCE IN SEM FOR DIFFUSION PROCESSES WITH JUMPS BASED ON HIGH-FREQUENCY DATA. SHOGO KUSANO1 AND MASAYUKI UCHIDA2,3,4. Abstract. We study structural equation modeling (SEM) for diffusion processes with jumps. Based on high-frequency data, we consider the parameter est... | https://arxiv.org/abs/2505.12712v1
For $i = 1, 2, 3, 4$, we assume that $q_i(dt, dz_i)$ has a representation such that $q_i(dt, dz_i) = f_i(z_i)\,dz_i\,dt$, and $f_i(z_i) = \lambda_{i,0} F_i(z_i)$, where $\lambda_{i,0} > 0$ is an unknown value, and $F_i(z_i)$ is an unknown probability density. Set $p = p_1 + p_2$, and $X_t = (X_{1,t}^\top, X_{2,t}^\top)^\top$. $(X_{t_i^n})_{i=0}^n$ are discrete observations, where $t_i^n = i h_n$. We suppose that $T = nh$... | https://arxiv.org/abs/2505.12712v1
3, we consider the statistical inference of SEM for diffusion processes with jumps. First, we prepare some lemmas to justify the use of a threshold method. Using the threshold method, we define the estimator of Σ, and investigate the asymptotic properties. Next, the quasi-likelihood of the SEM is proposed, and it is sh... | https://arxiv.org/abs/2505.12712v1 |
$< \infty$, $\sup_{t \in [0,T]} E[|\zeta_t|^l] < \infty$ (2.1) for any $l \ge 0$ when $T$ is fixed; see, e.g., Platen and Bruti-Liberati [18]. In this paper, we only state the asymptotic results in the case where $T$ is fixed. On the other hand, we can also consider the case where $h_n \to 0$, $T = nh_n \to \infty$ and $nh_n^2 \to 0$ as $n \to \infty$. In this case, we need to assume some regu... | https://arxiv.org/abs/2505.12712v1
$k_2, k_3, k_4) \in \{0,1,2\}^4$; three elements of $k_1, k_2, k_3$ and $k_4$ are 1, and the other is 0$\}$ and $\mathcal{K}_4 = \{(k_1, k_2, k_3, k_4) \in \{0,1,2\}^4$; at least one of $k_1, k_2, k_3, k_4$ is 2$\}$. Below, we always suppose that $\rho \in [1/3, 1/2)$. Using Lemma 1, we can obtain the following lemmas. Lemma 2. Under [A1], [A3] and [A4], for a sufficiently large $n$, $P$... | https://arxiv.org/abs/2505.12712v1
parameters (3.20) as $\Lambda_1^\theta, \Lambda_2^\theta, \Gamma^\theta, \Psi^\theta, \Sigma_{\xi\xi}^\theta, \Sigma_{\delta\delta}^\theta, \Sigma_{\varepsilon\varepsilon}^\theta, \Sigma_{\zeta\zeta}^\theta$. In addition, we set $\Sigma(\theta) = \begin{pmatrix} \Sigma(\theta)_{11} & \Sigma(\theta)_{12} \\ \Sigma(\theta)_{12}^\top & \Sigma(\theta)_{22} \end{pmatrix}$, where $\Sigma(\theta)_{11} = \Lambda_1^\theta \Sigma_{\xi\xi}^\theta \Lambda_1^{\theta\top} + \Sigma_{\delta\delta}^\theta$, $\Sigma(\theta)_{12} = \Lambda_1^\theta \Sigma_{\xi\xi}^\theta \Gamma^{\theta\top} \Psi^{\theta-1\top} \Lambda_2^{\theta\top}$, $\Sigma(\theta)_{22} = \Lambda_2^\theta \Psi^{\theta-1}(\Gamma^\theta \Sigma_{\xi\xi}^\theta \Gamma^{\theta\top} + \Sigma_{\zeta\zeta}^\theta)\Psi^{\theta-1\top}\Lambda_2^{\theta\top} + \Sigma_{\varepsilon\varepsilon}^\theta$. Note that $\Sigma(\theta)$ is two-times continuously di... | https://arxiv.org/abs/2505.12712v1
following one-dimensional Lévy-OU process: $d\xi_t^0 = -2(\xi_{t-}^0 - 1)\,dt + 1.2\,dW_{1,t} + \int_{\mathbb{R}\setminus\{0\}} z\, p_1(dt, dz)$, $\xi_0^0 = 1$, where $\{W_{1,t}\}_{t\ge 0}$ is a one-dimensional standard Wiener process, and $p_1(dt, dz)$ is a Poisson random measure on $\mathbb{R}_+ \times \mathbb{R}\setminus\{0\}$ with the jump density $f_1(z) = 3\,g(z \mid 0, 5)$. Here, $g(z \mid \mu, \sigma^2)$ denotes the probability density ... | https://arxiv.org/abs/2505.12712v1
$\Lambda_2^\theta = \begin{pmatrix} 1 & \theta^{(4)} & \theta^{(5)} & \theta^{(6)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & \theta^{(7)} & \theta^{(8)} & \theta^{(9)} \end{pmatrix}^\top$, $\Gamma^\theta = \begin{pmatrix} \theta^{(10)} \\ \theta^{(11)} \end{pmatrix}$, $\Psi^\theta = I_2$, where $\theta^{(i)}$ for $i = 1, \ldots, 11$ are not zero. In addition, we suppose the volatility matrices as $\Sigma_{\xi\xi}^\theta = \theta^{(12)}$, $\Sigma_{\delta\delta}^\theta = \mathrm{Diag}(\theta^{(13)}, \theta^{(14)}, \theta^{(15)}, \theta^{(16)})^\top$ and $\Sigma_{\varepsilon\varepsilon}^\theta = \mathrm{Diag}(\theta^{(17)}, \theta^{(18)}, \theta^{(19)}, \theta^{(20)}, \theta^{(21)}, \theta^{(22)}, \theta^{(23)}, \theta^{(24)})^\top$, $\Sigma_{\zeta\zeta}^\theta = \mathrm{Diag}(\theta^{(25)}, \theta^{(2}$... | https://arxiv.org/abs/2505.12712v1
1.572 -1.498 -0.899 -1.947 -1.348 1.976 1.835 1.573 -1.498 -0.899 -1.948 -1.348 1.977 0.013 0.013 0.014 0.010 0.014 0.013 0.009 (ˆΣn)(4,5) (ˆΣn)(4,6) (ˆΣn)(4,7) (ˆΣn)(4,8) (ˆΣn)(4,9) (ˆΣn)(4,10) (ˆΣn)(4,11) 0.907 0.726 1.270 1.089 -1.037 -0.622 -1.348 0.908 0.726 1.271 1.089 -1.037 -0.622 -1.348 0.007 0.007 0.009 0.009... | https://arxiv.org/abs/2505.12712v1 |
$_{i-1}, \tau_i^n)} |\varepsilon_t - \varepsilon_{t_{i-1}^n}| + C \sup_{t \in [t_{i-1}^n, \tau_i^n)} |\zeta_t - \zeta_{t_{i-1}^n}|$. In a similar manner to Lemma 2.1 in Shimizu and Yoshida [22], it is shown that $P\big(C \sup_{t \in [t_{i-1}^n, \tau_i^n)} |\xi_t - \xi_{t_{i-1}^n}| > D h_n^\rho \mid \mathcal{F}_{i-1}^n\big) = R_{i-1}(h_n^p, \xi)$, $P\big(\sup_{t \in [t_{i-1}^n, \tau_i^n)} |\delta_t - \delta_{t_{i-1}^n}| > D h_n^\rho \mid \mathcal{F}_{i-1}^n\big) = R_{i-1}(h_n^p, \delta)$, $P\big(\sup_{t \in [t_{i-1}^n, \tau_i^n)} |\varepsilon_t - \varepsilon_{t_{i-1}^n}| > D h_n^\rho \mid \mathcal{F}_{i-1}^n\big)$ ... | https://arxiv.org/abs/2505.12712v1
obtain (3.6)-(3.7), (3.10), and (3.15)-(3.16). Note that $\tau_i^n = t_i^n$ on $\{J_{i,1}^n = 0, J_{i,2}^n = 0, J_{i,3}^n = 0, J_{i,4}^n = 0\}$, and Taylor's theorem implies $1 - e^{-\lambda_0 h_n} \le \lambda_0 h_n$. It holds from Lemma 1 that $P(D_{i,0,0,0,0}^n \mid \mathcal{F}_{i-1}^n) = P(|\Delta_i^n X| > D h_n^\rho, J_{i,1}^n = 0, J_{i,2}^n = 0, J_{i,3}^n = 0, J_{i,4}^n = 0 \mid \mathcal{F}_{i-1}^n) \le P(|X_{\tau_i^n} - X_{t_{i-1}^n}| > D h_n^\rho \mid \mathcal{F}_{i-1}^n) \le P$... | https://arxiv.org/abs/2505.12712v1
large $n$, where $r_1$ is the positive constant in [A3]. It follows that $P\big(|\Delta Z_{1,\tau_i^n}| \le \frac{2 D h_n^\rho}{\sigma_0 c_{1,0}} \,\big|\, \mathcal{F}_{i-1}^n, J_{i,1}^n = 1\big) = \int_{\{|z_1| \le 2\sigma_0^{-1} c_{1,0}^{-1} D h_n^\rho\}\setminus\{0\}} F_1(z_1)\,dz_1 = \frac{1}{\lambda_{1,0}} \int_{\{|z_1| \le 2\sigma_0^{-1} c_{1,0}^{-1} D h_n^\rho\}\setminus\{0\}} f_1(z_1)\,1_{\{|z_1| \le r_1\}}\,dz_1 \le \frac{K_1}{\lambda_{1,0}} \int_{\{|z_1| \le 2\sigma_0^{-1} c_{1,0}^{-1} D h_n^\rho\}\setminus\{0\}} |z_1|^{1-k_1}\,dz_1 \le \frac{C h_n^\rho}{\lambda_{1,0}}$ for a sufficiently large $n$, so that $P(|\Delta$... | https://arxiv.org/abs/2505.12712v1
$n h_n^2 \times \frac{1}{n}\sum_{i=1}^n R_{i-1}(1, \xi, \delta, \varepsilon, \zeta) \xrightarrow{p} 0$, it is shown that $\frac{1}{\sqrt{n}}\sum_{i=1}^n \{P(|\Delta_i^n X| \le D h_n^\rho \mid \mathcal{F}_{i-1}^n) - 1\} \xrightarrow{p} 0$ in an analogous manner to the proof of (5.3). Thus, it follows from (5.3) that $E\big[\frac{1}{\sqrt{n}}\sum_{i=1}^n (1_{\{|\Delta_i^n X| \le D h_n^\rho\}} - 1) \mid \mathcal{F}_{i-1}^n\big] = \frac{1}{\sqrt{n}}\sum_{i=1}^n \{P(|\Delta_i^n X| \le D h_n^\rho \mid \mathcal{F}_{i-1}^n) - 1\} \xrightarrow{p} 0$ and $E\big[\frac{1}{n}\sum_{i=1}^n (1_{\{|\Delta_i^n X| \le D h_n^\rho\}} - 1)^2 \mid \mathcal{F}_{i-1}^n\big] = \frac{1}{n}\sum$... | https://arxiv.org/abs/2505.12712v1
$= h_n^2\big( (\Sigma_0^{11})_{j_1 j_2}(\Sigma_0^{11})_{j_3 j_4} + (\Sigma_0^{11})_{j_1 j_3}(\Sigma_0^{11})_{j_2 j_4} + (\Sigma_0^{11})_{j_1 j_4}(\Sigma_0^{11})_{j_2 j_3} \big) + R_{i-1}(h_n^3, \xi, \delta)$. On the other hand, the Cauchy–Schwarz inequality and Lemma 3 yield $E\big[(\Delta_i^n X_1)^{(j_1)}(\Delta_i^n X_1)^{(j_2)}(\Delta_i^n X_1)^{(j_3)}(\Delta_i^n X_1)^{(j_4)}\,1_{D_{i,0,0,0,0}^n} \mid \mathcal{F}_{i-1}^n\big] \le E\big[\big((\Delta_i^n X_1)^{(j_1)}(\Delta_i^n X_1)^{(j_2)}(\Delta_i^n X_1)^{(j_3)}(\Delta_i^n X_1)^{(j_4)}\big)^2 \mid \mathcal{F}_{i-1}^n\big]^{\frac{1}{2}}\, P\big(D_{i,0,0,0,0}^n \mid \mathcal{F}$... | https://arxiv.org/abs/2505.12712v1
$(\Sigma_0^{22})_{j_1 j_4}(\Sigma_0^{22})_{j_2 j_3} + R_{i-1}(h_n^{5\rho+1}, \xi, \delta, \varepsilon, \zeta) + R_{i-1}(h_n^3, \xi, \delta, \varepsilon, \zeta)$ for $j_1, j_2, j_3, j_4 = 1, \ldots, p_2$. Proposition 4. Under [A1]-[A4], for a sufficiently large $n$, $E\big[(\Delta_i^n X_1)^{(j_1)}(\Delta_i^n X_2)^{(j_2)}\,1_{\{|\Delta_i^n X| \le D h_n^\rho\}} \mid \mathcal{F}_{i-1}^n\big] = h_n (\Sigma_0^{12})_{j_1 j_2} + h_n^2 R_{i-1}(1, \xi, \delta, \varepsilon, \zeta)$ for $j_1 = 1, \ldots,$... | https://arxiv.org/abs/2505.12712v1
$_i^n X)^{(j_2)}\big)^4\,1_{\{|\Delta_i^n X| \le D h_n^\rho\}} \mid \mathcal{F}_{i-1}^n\big] = E\big[\big((\Delta_i^n X)^{(j_1)}(\Delta_i^n X)^{(j_2)}\big)^4\,1_{C_{i,0,0,0,0}^n} \mid \mathcal{F}_{i-1}^n\big] + E\big[\big((\Delta_i^n X)^{(j_1)}(\Delta_i^n X)^{(j_2)}\big)^4\,1_{C_{i,1,1,1,1}^n} \mid \mathcal{F}_{i-1}^n\big] + \sum_{(k_1,k_2,k_3,k_4)\in\mathcal{K}_1} E\big[\big((\Delta_i^n X)^{(j_1)}(\Delta_i^n X)^{(j_2)}\big)^4\,1_{C_{i,k_1,k_2,k_3,k_4}^n} \mid \mathcal{F}_{i-1}^n\big] + \sum_{(k_1,k_2,k_3,k_4)\in\mathcal{K}_2} E\big[\big((\Delta_i^n X)^{(j_1)}(\Delta_i^n X)^{(j_2)}\big)^4\,1_{C_{i,k_1,k_2,k_3,k_4}^n} \mid \mathcal{F}_{i-1}^n\big] + \sum_{(k_1,k_2,k_3,k_4)\in\mathcal{K}_3} E\big[\big((\Delta_i^n X)^{(j$... | https://arxiv.org/abs/2505.12712v1
implies (5.19). Furthermore, Propositions 1 and 6 show $\sum_{i=1}^n E[|L_i^n|^4 \mid \mathcal{F}_{i-1}^n] \le C \sum_{j_1=1}^p \sum_{j_2=1}^p \sum_{i=1}^n E\big[|(L_i^n)^{(p(j_2-1)+j_1)}|^4 \mid \mathcal{F}_{i-1}^n\big] \le n(\tilde{N}_n)^4 \times \frac{C}{n^2 h_n^4} \sum_{j_1=1}^p \sum_{j_2=1}^p \sum_{i=1}^n E\big[\big((\Delta_i^n X)^{(j_1)}(\Delta_i^n X)^{(j_2)}\big)^4\,1_{\{|\Delta_i^n X| \le D h_n^\rho\}} \mid \mathcal{F}_{i-1}^n\big] + \frac{C}{n} \sum_{j_1=1}^p \sum_{j_2=1}^p \big((\Sigma_0)_{j_1 j_2}\big)^4 \xrightarrow{p} 0$, so that we obtain (5.20), which completes the proof. □ In Pro... | https://arxiv.org/abs/2505.12712v1
$\to 0$ as $n \to \infty$, where $\{\rho_n\}_{n\in\mathbb{N}}$ is a sequence such that $\rho_n \to 0$ as $n \to \infty$, and $B_n = \{\theta \in \Theta; |\theta - \theta_0| \le \rho_n\}$. Hence, Proposition 7 and (3.21) show that for any $\varepsilon > 0$, $P\big(\big|\frac{1}{n}\int_0^1 \partial_\theta^2 H_n(\check{\theta}_{n,\lambda})\,d\lambda + \Delta_0^\top W_0^{-1}\Delta_0\big| > \varepsilon\big) = P\big(\big\{\big|\int_0^1 \{\frac{1}{n}\partial_\theta^2 H_n(\check{\theta}_{n,\lambda}) - \partial_\theta^2 H(\theta_0)\}\,d\lambda\big| > \varepsilon\big\} \cap \{|\hat{\theta}_n - \theta_0| \le \rho_n\}\big) + P\big(\big\{\big|\int_0^1 \{\frac{1}{n}\partial_\theta^2 H_n(\check{\theta}_{n,\lambda}) - \partial_\theta^2 H(\theta_0)\}\,d\lambda\big| > \varepsilon\big\} \cap \{|\hat{\theta}_n - \theta_0| > \rho$... | https://arxiv.org/abs/2505.12712v1
vech $\Sigma(\theta)$. Using Taylor's theorem, we get $\sigma(\hat{\theta}_n) = \sigma(\theta_0) + \int_0^1 \frac{\partial}{\partial\theta^\top}\sigma(\theta_0 + \lambda(\hat{\theta}_n - \theta_0))\,d\lambda\,(\hat{\theta}_n - \theta_0)$, which yields $\sqrt{n}(\mathrm{vech}\,\Sigma(\hat{\theta}_n) - \mathrm{vech}\,\Sigma(\theta_0)) = \int_0^1 \frac{\partial}{\partial\theta^\top}\sigma(\theta_0 + \lambda(\hat{\theta}_n - \theta_0))\,d\lambda\,\sqrt{n}(\hat{\theta}_n - \theta_0)$. In a similar manner to the proof of Theorem 2, it is shown that $\int_0^1 \frac{\partial}{\partial\theta^\top}\sigma(\theta_0 + \lambda(\hat{\theta}_n - \theta_0))\,d\lambda \xrightarrow{p} \frac{\partial}{\partial\theta^\top}\sigma(\theta_0) = \Delta_0$ ... | https://arxiv.org/abs/2505.12712v1
(5.33) under $H_0$. As it holds from Lemma 6 that $\sqrt{n}(\mathrm{vech}\,\Sigma(\hat{\theta}_n) - \mathrm{vech}\,\hat{\Sigma}_n) = \sqrt{n}(\mathrm{vech}\,\Sigma(\hat{\theta}_n) - \mathrm{vech}\,\Sigma(\theta_0)) - \sqrt{n}(\mathrm{vech}\,\hat{\Sigma}_n - \mathrm{vech}\,\Sigma(\theta_0)) = \big(\Delta_0(\Delta_0^\top W_0^{-1}\Delta_0)^{-1}\Delta_0^\top W_0^{-1} - I_{\bar{p}}\big)\sqrt{n}(\mathrm{vech}\,\hat{\Sigma}_n - \mathrm{vech}\,\Sigma(\theta_0)) + o_p(1)$ under $H_0$, Theorem 1 and (5.33) imply $\sqrt{n}(\mathrm{vech}\,\Sigma(\hat{\theta}_n) - \mathrm{vech}\,\hat{\Sigma}_n)^\top \tilde{R}_n \sqrt{n}(\mathrm{vech}\,\Sigma(\hat{\theta}_n) - \mathrm{vech}\,\hat{\Sigma}_n) = \sqrt{n}(\mathrm{vech}\,\hat{\Sigma}_n - \mathrm{vech}\,\Sigma(\theta_0$... | https://arxiv.org/abs/2505.12712v1
Estimation of an ergodic diffusion from discrete observations. Scandinavian Journal of Statistics, 24(2), 211–229. [12] Kusano, S. and Uchida, M. (2023). Structural equation modeling with latent variables for diffusion processes and its application to sparse estimation. arXiv preprint arXiv:2305.02655v2. [13] Kusano, ... | https://arxiv.org/abs/2505.12712v1
arXiv:2505.13364v1 [stat.AP] 19 May 2025

MODELING INNOVATION ECOSYSTEM DYNAMICS THROUGH INTERACTING REINFORCED BERNOULLI PROCESSES

GIACOMO ALETTI, IRENE CRIMALDI, ANDREA GHIGLIETTI, AND FEDERICO NUTARELLI

Abstract. Understanding how capabilities evolve into core capabilities—and how core capabilities may ossify into rig...
other words, the dynamic and path-dependent nature of capability evolution (Teece 2009, Eisenhardt and Martin 2000) is shaped by the co-evolution of technologies and markets. As a result, the interplay among innovations does more than merely shift the external environment—it ...
the underlying mechanisms—highlighting the need for a formal model. Indeed, while Leonard-Barton (1992) emphasized that core capabilities may evolve into core rigidities if not continuously adapted, numerous scholars over the years have argued that a formal framework is indispensable for meaningfully quantifying and ...
real-world data set from the GLOBAL PATSTAT database. More specifically, we analyze all the patents published in the period 1980–2018, showing that the observed behaviors match the theoretical ones. We also perform a statistical study on the interaction intensity among the categories. Below, we present a list o...
Such stability reflects the consolidation of core capabilities within the firms constituting the innovation ecosystem: on the one hand, it paves the way for the formation of new core capabilities; on the other, it starts the process of ossification of traditional ones into core rigidities (Leonard-Barton 1992). While related...
and Kerr 2018, Aghion and Howitt 2007)—lacking, however, a formalization of the process governing cross-category technological interactions. In the course of the years, scholars have deepened the latte...
3Although empirical evidence points in this direction (Breschi et al. 2000, Malerba 2002).
become natural to require that applicative models satisfy this power-law property (Clauset et al. 2009, Mitzenmacher 2004, 2005).
•All categories interact with one another, either directly or indirectly. Several studies in the literature have hypothesized or empirically observed this pattern (Verspagen 2007, Rigby ...
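The power-law requirement above can be illustrated with a minimal simulation. The sketch below uses a generic preferential-attachment ("rich get richer") scheme, one of the standard generative mechanisms behind such tails; the mechanism and all parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Preferential attachment: each arriving item either founds a new category
# (probability alpha) or joins an existing one with probability proportional
# to its current size. The resulting size distribution is heavy-tailed.
rng = np.random.default_rng(1)
alpha, n_items = 0.05, 50_000   # new-category rate, number of arrivals
item_cat = [0]                  # category label of each item; one seed item
n_cat = 1
for _ in range(n_items):
    if rng.random() < alpha:
        item_cat.append(n_cat)  # found a brand-new category
        n_cat += 1
    else:
        # Size-biased choice: copying the category of a uniformly drawn
        # existing item selects a category with probability proportional
        # to its current size.
        item_cat.append(int(item_cat[rng.integers(len(item_cat))]))

sizes = np.bincount(item_cat)
print(len(sizes), sizes.max(), float(np.median(sizes)))
```

The largest category dwarfs the median one by orders of magnitude, the qualitative signature of a power-law tail.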
we use the terms innovation ,novelty , and invention interchangeably, focusing primarily on whether a patented item meets the threshold of success (with respect to some measure), regardless of whether it constitutes a breakthrough or an incremental improvement. Moreover, in this framework, the natural class of models a... | https://arxiv.org/abs/2505.13364v1 |
that can be verified on real data. It is also worthwhile to stress that the theoretical results for our model follow from new general theoretical results that, among other things, have the merit of providing a general mathematical framework in which we can also include the results of the above-quoted papers, Tria et al. (...
normalization, with respect to the cohort of patent n, of the number of citations that patent n received by the patents ...
6A review of the novelty indices present in the literature and potentially applicable to our context is presented in Appendix A. There we also motivate our preference for the index of Squicciarini et a...
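The cohort normalization described above can be sketched as follows. The threshold value, field names, and the use of a plain cohort mean are our illustrative assumptions; the actual index of Squicciarini et al. is more refined.

```python
from collections import defaultdict

# Cohort-normalized citations: a patent's citation count divided by the mean
# citation count of its publication-year cohort; a patent counts as a
# "success" when the normalized value clears a threshold.
patents = [
    {"id": "A", "year": 1990, "cites": 10},
    {"id": "B", "year": 1990, "cites": 2},
    {"id": "C", "year": 1990, "cites": 0},
    {"id": "D", "year": 1991, "cites": 6},
    {"id": "E", "year": 1991, "cites": 2},
]

cohort = defaultdict(list)
for p in patents:
    cohort[p["year"]].append(p["cites"])
cohort_mean = {y: sum(c) / len(c) for y, c in cohort.items()}

THRESHOLD = 1.5  # illustrative cut-off, not taken from the paper
for p in patents:
    p["norm_cites"] = p["cites"] / cohort_mean[p["year"]]
    p["success"] = p["norm_cites"] >= THRESHOLD

print([(p["id"], round(p["norm_cites"], 2), p["success"]) for p in patents])
```

Normalizing within cohorts removes the mechanical advantage of older patents, which have had more time to accumulate citations.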
different contexts.

3. Mathematical model and theoretical results: Interacting Bernoulli processes

In this section we will introduce and study a new model for a finite system of interacting Bernoulli processes. In the context described in the previous section, each Bernoulli process refers to a patent category and the t...
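A finite system of interacting reinforced Bernoulli processes can be simulated with a few lines of code. The specific update rule below (a vanishing-rate, W-weighted reinforcement of each process's inclination) is an illustrative assumption in the spirit of interacting urn models, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)
K, T = 3, 5000                      # number of processes, time horizon
W = np.array([[0.8, 0.1, 0.1],      # row-stochastic interaction matrix:
              [0.1, 0.8, 0.1],      # each process weighs its own outcome
              [0.1, 0.1, 0.8]])     # heavily and the others' weakly
Z = np.array([0.2, 0.5, 0.8])       # initial inclinations (success probs)

for n in range(1, T + 1):
    X = (rng.random(K) < Z).astype(float)  # one Bernoulli draw per process
    r = 1.0 / (n + 1)                      # vanishing reinforcement rate
    Z = (1 - r) * Z + r * (W @ X)          # convex update keeps Z in [0, 1]

print(Z)
```

Because the update is a convex combination, each inclination stays in [0, 1]; with an irreducible interaction matrix, interacting urn models of this type are known to synchronize, and the simulated inclinations indeed contract toward one another.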