in (1). Under the null hypothesis, all random effects are equal to $p_0$. Under the alternative, the distance between the unobserved empirical distribution of random effects $\tilde\pi$ (10) and the null distribution is a good proxy for $W_1(\pi, \pi_0)$, provided $\epsilon$ is not too small.

Lemma 5. Let $\delta \in (0,1)$. If $W_1(\pi, \delta_{p_0}) \ge \epsilon$ and there exists a positive constant $C > 0$, depending only on $\delta$, such that $\epsilon \ge C/n$, then it holds that $W_1(\tilde\pi, \delta_{p_0}) \ge \epsilon/2$ with probability at least $1 - \delta$.

The proof can be found in appendix H. Whenever $\epsilon \ge C/n$, the homogeneity testing problem (27) reduces to the fixed-effects problem studied by Chhor and Carpentier [2022]: given $Y_i = X_i \mid p_i \sim \mathrm{Bin}(t, p_i)$ for $1 \le i \le n$,

$$\text{Test } H_0: p_i = p_0 \text{ for } 1 \le i \le n \quad \text{v.s.} \quad H_1: W_1(\tilde\pi, \delta_{p_0}) \ge \epsilon/2. \tag{29}$$

This reduction is optimal because if $\epsilon \lesssim n^{-1}$, we can construct mixing distributions that, with constant probability, produce the same marginal measure as the null distribution, making it impossible for a valid test to distinguish them.

Lemma 6. There exists a universal positive constant $C$ such that the critical separation for testing random effects (27) is lower-bounded by $C/n$.

The proof is in appendix H.2. In summary, if $\epsilon \gtrsim n^{-1}$, any test for fixed-effects testing (29) is applicable, while if $\epsilon \lesssim n^{-1}$, no valid test can be powerful.

For the fixed-effects problem, the analysis is split into two cases. When $p_0$ is near the boundaries of $[0,1]$, under the alternative hypothesis the mixing distribution must allocate some mass away from the boundaries. This deviation is detectable by comparing the expected and observed means:

$$\epsilon/2 \le W_1(\tilde\pi, \pi_0) \implies \epsilon/4 \le \frac{1}{n}\sum_{i=1}^n p_i - p_0 \quad \text{if } p_0 \le \epsilon/8.$$

A simple test of mean deviation can be constructed using the first Kravchuk polynomial:

$$\psi^\alpha_1(X) = I\left(T_{1,p_0}(X) > q_\alpha(P_{\pi_0}, T_{1,p_0})\right) \quad \text{where} \quad T_{1,p_0}(X) = \frac{1}{n}\sum_{i=1}^n \tilde K_1(X_i, p_0, t). \tag{30}$$

Conversely, when $p_0$ is away from the boundaries, a mixing distribution under the alternative can match the null distribution's mean.
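The statistic in (30) is simple to make concrete: the first normalized Kravchuk polynomial reduces to $\tilde K_1(x, p_0, t) = (x - t p_0)/t$, so $T_{1,p_0}$ is just the average deviation of the observed frequencies from $p_0$. The following is a minimal sketch of the mean test, with the null quantile $q_\alpha(P_{\pi_0}, T_{1,p_0})$ approximated by Monte Carlo; the function names and simulation parameters are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def T1(X, p0, t):
    """Mean statistic (30): average of the first normalized
    Kravchuk polynomial, K~_1(x, p0, t) = (x - t*p0) / t."""
    return np.mean((X - t * p0) / t)

def mean_test(X, p0, t, alpha=0.05, n_mc=20_000):
    """Reject when T1 exceeds a Monte Carlo approximation of the
    1-alpha null quantile (all effects equal to p0)."""
    n = len(X)
    null_draws = rng.binomial(t, p0, size=(n_mc, n))
    null_T1 = ((null_draws - t * p0) / t).mean(axis=1)
    return T1(X, p0, t) > np.quantile(null_T1, 1 - alpha)

# Under the null the test rarely rejects; under a shifted
# alternative p0 + 0.2 it should essentially always reject.
t, n, p0 = 10, 200, 0.1
X_null = rng.binomial(t, p0, size=n)
X_alt = rng.binomial(t, p0 + 0.2, size=n)
print(mean_test(X_null, p0, t), mean_test(X_alt, p0, t))
```

The one-sided rejection region matches the boundary case in the text, where the alternative can only push mass above a small $p_0$.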
To detect such deviations, we check whether the observed variance is compatible with homogeneity:

$$\epsilon/2 \le W_1(\tilde\pi, \pi_0) \implies \epsilon^2/4 \le \frac{1}{n}\sum_{i=1}^n (p_i - p_0)^2.$$

The second Kravchuk polynomial estimates the above $\ell_2$ distance without bias, allowing us to construct a test for it:

$$\psi^\alpha_2(X) = I\left(T_{2,p_0}(X) > q_\alpha(P_{\pi_0}, T_{2,p_0})\right) \quad \text{where} \quad T_{2,p_0}(X) = \frac{1}{n}\sum_{i=1}^n \tilde K_2(X_i, p_0, t). \tag{31}$$

Combining both tests via a Bonferroni correction,

$$\psi^\alpha_{LM}(X) = \max\left(\psi^{\alpha/2}_1(X), \psi^{\alpha/2}_2(X)\right), \tag{32}$$

leads to a test that is local minimax optimal for both fixed and random effects.

Lemma 7 (Homogeneity testing of fixed effects [Chhor and Carpentier, 2022]). For testing problem (29), the critical separation is given by

$$\epsilon^*(n, t, \pi_0) \asymp \begin{cases} \dfrac{1}{tn} & \text{if } \mu_{p_0} \lesssim \dfrac{1}{tn} \\[4pt] \mu_{p_0} & \text{if } \dfrac{1}{tn} \lesssim \mu_{p_0} \lesssim \dfrac{1}{t n^{1/2}} \\[4pt] \dfrac{\mu_{p_0}^{1/2}}{t^{1/2} n^{1/4}} & \text{if } \dfrac{1}{t n^{1/2}} \lesssim \mu_{p_0} \end{cases}$$

Furthermore, the test (32) attains them.

Heuristically, the minimax rates for homogeneity testing of random effects follow by thresholding the above critical separation at $\epsilon^* \asymp n^{-1}$. The proof is deferred to appendix H.

Theorem 4 (Homogeneity testing of random effects). For the testing problem (27), the critical separation (28) is given by

$$\epsilon^*(n, t, \pi_0) \asymp \begin{cases} \dfrac{1}{n} & \text{if } \mu_{p_0} \lesssim \dfrac{t}{n^{3/2}} \\[4pt] \dfrac{\mu_{p_0}^{1/2}}{t^{1/2} n^{1/4}} & \text{if } \mu_{p_0} \gtrsim \dfrac{t}{n^{3/2}} \end{cases} \quad \text{if } t \gtrsim \sqrt{n}$$

and

$$\epsilon^*(n, t, \pi_0) \asymp \begin{cases} \dfrac{1}{n} & \text{if } \mu_{p_0} \lesssim \dfrac{1}{n} \\[4pt] \mu_{p_0} & \text{if } \dfrac{1}{n} \lesssim \mu_{p_0} \lesssim \dfrac{1}{t n^{1/2}} \\[4pt] \dfrac{\mu_{p_0}^{1/2}}{t^{1/2} n^{1/4}} & \text{if } \dfrac{1}{t n^{1/2}} \lesssim \mu_{p_0} \end{cases} \quad \text{if } t \lesssim \sqrt{n} \tag{33}$$

Furthermore, the test (32) attains them. Intuitively, one expects that testing
https://arxiv.org/abs/2504.13977v1
the mean (30) is easier than testing the variance (31). This is reflected by equation (33), where the fast testing rates near the boundaries, i.e. $\mu_{p_0} \lesssim t^{-1} n^{-1/2}$, are achieved by testing for deviations in the mean (30), and the slow testing rates away from the boundaries are achieved by testing for deviations in the variance (31).

4.2 Lower bounds on the critical separation

The lower-bound arguments in theorem 4 follow similar constructions to those in section 3.3. Using lemma 2, the inequality $V(P, Q) \le \sqrt{\chi^2(P, Q)}$, and the tensorization property of the chi-squared distance, the local critical separation is lower-bounded by the $W_1$ distance among mixing distributions whose marginal measures cannot be distinguished in chi-squared distance.

Corollary 1.

$$\epsilon^*(n, t, \pi_0) \ge \sup_{\pi_1 \in D} W_1(\pi_1, \pi_0) \quad \text{s.t.} \quad \chi^2(P^n_{\pi_0}, P^n_{\pi_1}) < C^2_\alpha, \tag{34}$$

where $C_\alpha = 1 - \alpha - \beta$. Alternatively, the same claim holds if $\chi^2(P_{\pi_0}, P_{\pi_1}) < \log(1 + C^2_\alpha)/n$.

Analogous to lemma 3, we can control the chi-squared distance through the moment differences between the null and the alternative mixing distributions.

Lemma 8. Let $p_0 \in (0,1)$ and $\pi_0 = \delta_{p_0}$. For any mixing distribution $\pi_1$, it holds that

$$\chi^2(P_{\pi_1}, P_{\pi_0}) = M_{p_0}(\pi_1, \pi_0). \tag{35}$$

The proof can be found in appendix C.2. By (35), it is clear that the maximum separation achieved in (34) depends on the number of moments matched by $\pi_1$ and $\pi_0$. Since $\pi_0$ is a point mass, an alternative mixing distribution $\pi_1$ can only differ on the first moment or match it. If $\pi_1$ differs on the first moment, then the mean test (30) detects them, while if $\pi_1$ matches the first moment of $\pi_0$, then the debiased Pearson's chi-squared test (31) detects them, which explains the optimality of the combined test (32).

4.3 Simulations

Due to symmetry, we consider only $\pi_0 = \delta_{p_0}$ for $0 \le p_0 \le 1/2$. We assess the performance of various tests under the distributions used to derive the lower bounds in theorem 4.
The first family of distributions perturbs the null distribution's mean,

$$\pi = \delta_{p_0 + \epsilon} \quad \text{for } 0 \le \epsilon \le 1 - p_0, \tag{36}$$

the second family matches the mean,

$$\pi = \tfrac{1}{2}\left(\delta_{p_0 + \epsilon} + \delta_{p_0 - \epsilon}\right) \quad \text{for } 0 \le \epsilon \le p_0, \tag{37}$$

while the third family perturbs the probability assigned to the null hypothesis's point mass,

$$\pi = (1 - \epsilon) \cdot \delta_{p_0} + \epsilon \cdot \delta_1 \quad \text{for } 0 \le \epsilon \le 1. \tag{38}$$

Figure 5 shows the power of the tests as a function of the distance to the null distribution $\pi_0 = \delta_{0.5}$. In all cases, the local minimax test achieves good power.

Figure 5: Power of the test as a function of the distance $W_1(\pi, \pi_0)$, where the null distribution is $\pi_0 = \delta_{0.5}$ and the alternative distribution $\pi$ belongs to the family of distributions (36) on the left panel, (38) on the center panel, and (37) on the right panel.

Figure 6 displays the local critical separation for (36), (37), and (38). Each family plays a specific role in capturing the behavior predicted by (28). The family (38) limits the power for small null hypotheses close to the origin, (36) captures the linear growth with respect to $p_0$ for moderate
$p_0$ values, while (37) captures the $\sqrt{p_0}$ dependence for null hypotheses that are close to $0.5$. Figure 7 shows the empirical local critical separation when considering all families of distributions, approximating the local critical separation in (33). When considering all three kinds of mixing distributions, all tests have comparable performance.

Figure 6: Minimum $W_1$ required to obtain high power and low type I error for the families of distributions (36), (37), and (38) as a function of the null distribution. For the right panel, if a line is not drawn for a given null distribution, it signifies that the corresponding test could not control the type I error.

Figure 7: Minimum $W_1$ required to obtain high power and low type I error as a function of the null distribution.

5 Homogeneity testing without a reference effect

In section 4, we assume that the distribution under the null hypothesis is known. However, in statistical meta-analyses, researchers must assess the homogeneity of treatment effects without any reference distribution. Let $S$ denote the set of all point masses supported on $[0,1]$,

$$S = \{\delta_p : p \in [0,1]\},$$

and define the distance to the set as the shortest distance to any of its members:

$$W_1(\pi, S) = \inf_{s \in S} W_1(\pi, s).$$

We want to differentiate between the mixing distribution being a point mass and being far away from the set of point masses:

$$\text{Test } H_0: \pi \in S \quad \text{v.s.} \quad H_1: W_1(\pi, S) \ge \epsilon. \tag{39}$$

To determine the smallest $\epsilon$ for which a powerful valid test exists, we use the global minimax framework restricted to the set $S$. Consider all tests that guarantee type I error control under the null hypothesis,

$$\Psi_S = \cap_{\pi_0 \in S} \Psi(\pi_0) = \left\{\psi : \sup_{\pi_0 \in S} P_{\pi_0}(\psi(X) = 1) \le \alpha\right\},$$

define the risk as their maximum type II error under the alternative hypothesis,

$$R^*(\epsilon) = \inf_{\psi \in \Psi_S} \sup_{W_1(\pi, S) \ge \epsilon} P_\pi(\psi(X) = 0),$$

and let the critical separation be the smallest $\epsilon$ such that there exists a test that controls both errors,

$$\epsilon^* = \inf\{\epsilon \text{ s.t. } R^*(\epsilon) \le \beta\}. \tag{40}$$

Intuitively, distinguishing a mixing distribution from the set of point masses is difficult if it is highly concentrated around its mean. Thus, the $\epsilon$ separation between the hypotheses relates to the smallest variance that we can detect. Under the null hypothesis, the variance of the mixing distribution is zero, $V(\pi) = 0$, whereas under the alternative hypothesis, it must be greater than $\epsilon^2$:

$$V(\pi) = \mathbb{V}_{p \sim \pi}[p] = W_2^2(\pi, \delta_{m_1(\pi)}) \ge W_1^2(\pi, \delta_{m_1(\pi)}) \ge W_1^2(\pi, S) \ge \epsilon^2. \tag{41}$$

Hence, (39) reduces to testing the variance of the mixing distribution:

$$\text{Test } H_0: V(\pi) = 0 \quad \text{v.s.} \quad H_1: V(\pi) \ge \epsilon^2. \tag{42}$$

In section 5.1,
we develop a test by debiasing the plug-in estimator of $V(\pi)$. Although minimax optimal, this test is overly conservative, leaving room to improve type I error control. Section 5.2 introduces a debiased Cochran's chi-squared test that achieves better type I error control while preserving optimality.

5.1 A conservative test

The natural starting point is to study an estimator of the variance of the mixing distribution. The following U-statistic,

$$\hat V(X) = \binom{n}{2}^{-1} \sum_{i<j} \frac{h(X_i, X_j)}{2} \quad \text{where} \quad h(X, Y) = \frac{\binom{X}{2}}{\binom{t}{2}} + \frac{\binom{Y}{2}}{\binom{t}{2}} - 2 \cdot \frac{\binom{X}{1}}{\binom{t}{1}} \cdot \frac{\binom{Y}{1}}{\binom{t}{1}}, \tag{43}$$

is an unbiased estimator of the mixing distribution's variance:

$$\mathbb{E}_{P_\pi}\left[\hat V(X)\right] = \mathbb{E}_{p, q \overset{iid}{\sim} \pi}\left[\frac{(p - q)^2}{2}\right] = V(\pi). \tag{44}$$

Furthermore, it is a debiased version of the empirical variance: $\hat V(X) = \tilde V(X) - d(X)$, where

$$\tilde V(X) = \frac{1}{n-1} \sum_{i=1}^n \left(\frac{X_i}{t} - \hat m_1(X)\right)^2, \qquad d(X) = \left[2\binom{n}{2}\right]^{-1} \sum_{i<j} \left(\frac{\mu_{X_i/t}}{t-1} + \frac{\mu_{X_j/t}}{t-1}\right),$$

and $\hat m_1(X) = \frac{1}{n} \sum_{i=1}^n \frac{X_i}{t}$. The corresponding test is defined using its maximum quantile over the null hypothesis to guarantee validity:

$$\psi_{\hat V}(X) = I\left(\hat V(X) > \sup_{\pi \in S} q_\alpha(P_\pi, \hat V(X))\right). \tag{45}$$

The following lemma, proved in appendix I.1.1, establishes that its performance is comparable to the worst-case separation rates achieved in theorem 4. This result is expected, as the type I error must be controlled for all distributions in $S$.

Lemma 9. For testing problem (42), the test (45) controls type I error by $\alpha$. Furthermore, there exists a universal positive constant $C$ such that the type II error of the test is bounded by $\beta$ whenever

$$\epsilon(n, t) \ge C \cdot \begin{cases} \dfrac{1}{n^{1/2}} & \text{for } \sqrt{n} \lesssim t \\[4pt] \dfrac{1}{t^{1/2} n^{1/4}} & \text{otherwise} \end{cases} \tag{46}$$

Although the test (45) is minimax optimal, it can be conservative when the mixing distribution is a point mass near the boundary of $[0,1]$. Consider the null distribution $\pi_0 = \delta_{p_0} \in S$; the ratio between the variance under $\pi_0$ and the maximum variance under the null depends on $\mu^2_{p_0}$:

$$\frac{\mathbb{V}_{P_{\pi_0}}\left[\hat V(X)\right]}{\sup_{\pi \in S} \mathbb{V}_{P_{\pi}}\left[\hat V(X)\right]} = \mu^2_{p_0}. \tag{47}$$

Since the mean of $\hat V$ is always zero under the null hypothesis, (47) implies that the $1-\alpha$ quantile of $\hat V$ varies substantially across $S$, and the decision threshold in (45) is overly conservative.
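For concreteness, here is a small Python sketch of the U-statistic (43), together with a numerical check of the identity $\hat V = \tilde V - d$. The helper names are ours; the closed form used for the second implementation is the elementary simplification of $d(X)$ to $\frac{1}{n}\sum_i \mu_{X_i/t}/(t-1)$, which follows by counting how often each index appears among the pairs.

```python
import numpy as np
from itertools import combinations
from math import comb

def h(x, y, t):
    """Kernel in (43): unbiased for (p_x - p_y)^2."""
    return comb(x, 2) / comb(t, 2) + comb(y, 2) / comb(t, 2) \
        - 2 * (x / t) * (y / t)

def V_hat(X, t):
    """U-statistic (43): unbiased estimator of V(pi)."""
    n = len(X)
    pairs = combinations(range(n), 2)
    return sum(h(X[i], X[j], t) for i, j in pairs) / (2 * comb(n, 2))

def V_hat_debiased_form(X, t):
    """Same estimator written as the empirical variance minus the
    bias term d(X), using the simplification noted above."""
    b = X / t
    V_tilde = np.var(b, ddof=1)
    d = np.mean(b * (1 - b) / (t - 1))
    return V_tilde - d

rng = np.random.default_rng(1)
t, n = 10, 50
X = rng.binomial(t, rng.uniform(0.2, 0.8, size=n))
print(np.isclose(V_hat(X, t), V_hat_debiased_form(X, t)))  # True
```

The second form is $O(n)$ rather than $O(n^2)$, which matters when calibrating quantiles by simulation.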
5.2 Tightening the type I error control

A practical solution is to normalize $\hat V$ using an estimator of $\mu_{p_0}$. To achieve this, assume access to two independent samples of equal size, $X_i, Y_i \sim P_\pi$ for $1 \le i \le n$, and define the ratio statistic

$$R(X, Y) = \frac{\hat V(X)}{\max(\hat\mu(Y), \gamma)} \tag{48}$$

where $\hat\mu(Y)$ is an unbiased estimator of $\mu_{m_1(\pi)}$ and $\gamma$ is a truncation parameter:

$$\hat\mu(Y) = \binom{n}{2}^{-1} \sum_{i<j} \frac{\tilde h(Y_i, Y_j)}{2} \quad \text{and} \quad \tilde h(X, Y) = \frac{X}{t} + \frac{Y}{t} - 2 \cdot \frac{X}{t} \cdot \frac{Y}{t}.$$

The denominator of (48) estimates $\mu_{m_1(\pi)}$ whenever it is large and avoids doing so when it is too small, ensuring that the variance of (48) remains controlled. We define the corresponding test, referred to as the debiased Cochran's $\chi^2$,

$$\psi_R(X) = I\left(R(X, Y) > \sup_{\pi \in S} q_\alpha(P_\pi, R)\right). \tag{49}$$

The following theorem states that the debiased Cochran's $\chi^2$ test performs the same as (45) up to constants.

Theorem 5. For testing problem (39), the debiased Cochran's $\chi^2$ test (49) controls type I error by $\alpha$. Furthermore, for $\gamma \ge c/n$, where $c$ is a universal positive constant, there exists a universal positive constant $C$ such that the type II error is bounded by $\beta$ whenever (46) is satisfied.

The proof appears in appendix I.1.2. The sample splitting technique simplifies the proof, but simulations indicate it is unnecessary.
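A sketch of the ratio statistic (48), assuming the sample-split setup above. $\hat V$ is written in its debiased closed form, and $\hat\mu$ uses the pairwise-sum identity $\sum_{i<j} b_i b_j = \big((\sum_i b_i)^2 - \sum_i b_i^2\big)/2$; the names and the choice of $\gamma$ below are illustrative, not the paper's.

```python
import numpy as np
from math import comb

def V_hat(X, t):
    """Debiased variance estimator, algebraically equal to (43)."""
    b = X / t
    return np.var(b, ddof=1) - np.mean(b * (1 - b)) / (t - 1)

def mu_hat(Y, t):
    """Unbiased estimator of mu_{m1(pi)} built from the kernel
    h~(x, y) = x/t + y/t - 2(x/t)(y/t)."""
    b = Y / t
    n = len(b)
    s, s2 = b.sum(), (b ** 2).sum()
    pair_means = (n - 1) * s / 2        # sum_{i<j} (b_i + b_j)/2
    pair_prods = (s ** 2 - s2) / 2      # sum_{i<j} b_i b_j
    return (pair_means - pair_prods) / comb(n, 2)

def R(X, Y, t, gamma=1e-3):
    """Ratio statistic (48) with truncated denominator."""
    return V_hat(X, t) / max(mu_hat(Y, t), gamma)

rng = np.random.default_rng(4)
t, n = 10, 100
X, Y = rng.binomial(t, 0.3, size=n), rng.binomial(t, 0.3, size=n)
print(R(X, Y, t))
```

Since both samples here are homogeneous, $R$ should fluctuate around zero.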
Furthermore, the proof suggests that unbiased estimation of $\mu_{m_1(\pi)}$ is not crucial; instead, estimating $\max(m_1(\pi), 1 - m_1(\pi))$ is sufficient, since one of the terms must behave like a constant. Consequently, $\hat\mu(Y)$ can be replaced by $\mu_{\hat m_1(Y)}$ in (48). Examining the variance of the statistic under a point in $S$ versus the maximum variance under $S$, we observe that they match up to constants insofar as the underlying distribution is not too close to the origin:

$$\frac{\mathbb{V}_{P_{\pi_0}}[R(X, Y)]}{\sup_{\pi \in S} \mathbb{V}_{P_{\pi}}[R(X, Y)]} \asymp \left(\frac{\mu_{p_0}}{\max(\mu_{p_0}, \gamma)}\right)^2 = \begin{cases} \mu^2_{p_0} & \text{if } \mu^2_{p_0} \le \gamma \\ 1 & \text{otherwise} \end{cases} \quad \text{where } \pi_0 = \delta_{p_0} \in S.$$

Thus, the debiased Cochran's $\chi^2$ test is conservative only for distributions in $S$ close to the boundaries of $[0,1]$. Finally, the following lemma, proved in appendix I.2, states that the separation achieved by (45) and (49) is matched by a lower bound, meaning that both tests are minimax optimal.

Lemma 10. The critical separation (40) is lower-bounded by

$$\epsilon^*(n, t) \gtrsim \max\left(\frac{1}{\sqrt{n}}, \frac{1}{t^{1/2} n^{1/4}}\right)$$

5.3 Simulations

Three statistics are evaluated through simulation. The first is Cochran's $\chi^2$ test statistic $t \cdot \tilde V(X) / \mu_{\hat m_1(X)}$ [Cochran, 1954]. The statistic is undefined when the numerator and the denominator are both zero, which happens with probability one when the mixing distribution is a point mass at the origin. To fix this issue, we threshold the denominator whenever it goes below $\gamma$:

$$C(X) = \frac{t \cdot \tilde V(X)}{\max\left(\mu_{\hat m_1(X)}, \gamma\right)}.$$

Henceforth, we refer to the above as the modified Cochran's $\chi^2$ test statistic. We set $\gamma = 10^{-10}$ for all simulations in this section. We consider both the finite-sample and asymptotically valid Cochran's $\chi^2$ tests,

$$\psi_C(X) = I\left(C(X) \ge \sup_{\pi \in S} q_\alpha(P_\pi, C)\right) \quad \text{and} \quad \psi_{AC}(X) = I\left(C(X) \ge \frac{\chi^2_{n-1,\alpha}}{n-1}\right), \tag{50}$$

where $\chi^2_{n-1,\alpha}$ is the $1-\alpha$ quantile of the chi-squared distribution with $n-1$ degrees of freedom. The second statistic is based on (49) but uses the same normalization as the modified Cochran's $\chi^2$; we call the corresponding test the debiased Cochran's $\chi^2$ I:

$$R_1(X) = \frac{t \cdot \hat V(X)}{\max\left(\mu_{\hat m_1(X)}, \gamma\right)} \quad \text{and} \quad \psi_{R_1}(X) = I\left(R_1(X) \ge \sup_{\pi \in S} q_\alpha(P_\pi, R_1)\right). \tag{51}$$

Finally, we consider (49) and define the corresponding test analogously, called the debiased Cochran's $\chi^2$ II:

$$R_2(X) = \frac{t \cdot \hat V(X)}{\max(\hat\mu(X), \gamma)} \quad \text{and} \quad \psi_{R_2}(X) = I\left(R_2(X) \ge \sup_{\pi \in S} q_\alpha(P_\pi, R_2)\right). \tag{52}$$

We remark that $\psi_{R_1}$ and $\psi_{R_2}$ are expected to have similar performance, since any denominator that estimates $\max(m_1(\pi), 1 - m_1(\pi))$ should work. Figure 8 shows the distribution of the three statistics under the distribution $P_\pi$ where $\pi = \delta_{0.5}$. The bias is substantially reduced when using $R_1$ or $R_2$ rather than Cochran's statistic. The same can be observed for point masses closer to the origin; see appendix K.2.

Figure 8: Distribution of modified and debiased Cochran's $\chi^2$ test statistics under the mixing distribution $\pi = \delta_{0.5}$.

Figure 9 displays the $1-\alpha$ quantile of the statistics for $\alpha = 0.05$ for mixing distributions $\pi = \delta_{p_0}$ where $p_0 \in [0, 0.5]$. Additionally, figure 10 illustrates that using the maximum quantile over $S$ leads to conservative thresholds for point masses close to the origin.
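The three statistics can be written compactly. The sketch below is our code, not the authors'; it computes $C$, $R_1$, and $R_2$ on a single sample, with the unbiased $\hat\mu$ evaluated in closed form.

```python
import numpy as np

GAMMA = 1e-10  # truncation used in the paper's simulations

def stats_trio(X, t, gamma=GAMMA):
    """Modified Cochran's statistic C (plug-in variance, biased) and
    the debiased variants R1/R2 from (51)-(52)."""
    b = X / t
    n = len(b)
    m1 = b.mean()
    V_tilde = np.var(b, ddof=1)                       # empirical variance
    V_hat = V_tilde - np.mean(b * (1 - b)) / (t - 1)  # debiased, as in (43)
    mu_plug = m1 * (1 - m1)                           # mu_{m1_hat}
    s, s2 = b.sum(), (b ** 2).sum()
    mu_unb = ((n - 1) * s / 2 - (s ** 2 - s2) / 2) / (n * (n - 1) / 2)
    C = t * V_tilde / max(mu_plug, gamma)
    R1 = t * V_hat / max(mu_plug, gamma)
    R2 = t * V_hat / max(mu_unb, gamma)
    return C, R1, R2

rng = np.random.default_rng(5)
X = rng.binomial(5, rng.uniform(0.1, 0.9, size=40))
C, R1, R2 = stats_trio(X, 5)
print(round(C, 3), round(R1, 3), round(R2, 3))
```

Note that $C \ge R_1$ always holds, since $\tilde V - \hat V = d(X) \ge 0$ and both share the same denominator; this is the bias that the debiased statistics remove.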
Figure 9: 0.95 quantile as a function of the null hypothesis distribution for the studied statistics. Additionally, the asymptotic threshold used by Cochran's $\chi^2$ test (50) and the maximum quantile of each statistic are displayed.

To analyze the power of the proposed tests, we consider the families of distributions (37) and (38) for $p_0 = 0.5$, which were used for constructing the lower bound in lemma 10. In our simulations, all tests agree except when the mixing distribution's moments cannot be reliably estimated, i.e. when $t$ is small relative to $n$. In that case, the tests valid for finite samples improve over the asymptotic Cochran's $\chi^2$ test, as shown in figure 11. We conclude that any of the considered test statistics might be used in practice.

Figure 10: Type I error of tests (50), (51) and (52) as a function of the null distribution. Note the conservative behavior whenever the number of trials $t$ is small.

Figure 11: Power of the test as a function of the distance $W_1(\pi, \pi_0)$ for $t = 2$ and $n = 50$. The null mixing distribution is $\pi_0 = \delta_{0.5}$, and the alternative mixing distribution $\pi$ belongs to the family of distributions (38) on the left panel and (37) on the right panel.
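To make the simulation setup concrete in outline, here is a sketch (ours, with illustrative names) of sampling from the three alternative families and of the closed-form distance $W_1(\pi, \delta_{p_0}) = \mathbb{E}_{p\sim\pi}|p - p_0|$, which for the shifted, mean-matched, and contaminated families evaluates to $\epsilon$, $\epsilon$, and $\epsilon(1-p_0)$ respectively.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_family(name, n, t, p0, eps):
    """Draw X_1..X_n from the binomial mixture whose mixing
    distribution is one of the families (36)-(38)."""
    if name == "shift":            # (36): delta_{p0+eps}
        p = np.full(n, p0 + eps)
    elif name == "match_mean":     # (37): (delta_{p0+eps} + delta_{p0-eps})/2
        p = p0 + eps * rng.choice([-1.0, 1.0], size=n)
    elif name == "low_prob":       # (38): (1-eps) delta_{p0} + eps delta_1
        p = np.where(rng.random(n) < eps, 1.0, p0)
    else:
        raise ValueError(name)
    return rng.binomial(t, p)

def w1_to_null(name, p0, eps):
    """W1(pi, delta_{p0}) = E|p - p0|, since the null is a point mass."""
    return {"shift": eps, "match_mean": eps,
            "low_prob": eps * (1.0 - p0)}[name]

X = sample_family("match_mean", n=50, t=2, p0=0.5, eps=0.2)
print(X.shape, w1_to_null("match_mean", 0.5, 0.2))
```

Plotting power against `w1_to_null` reproduces the horizontal axes of figures 5 and 11 in outline.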
6 Applications to meta-analyses and model selection

In section 6.1, we test the homogeneity of rare effects across a small number of studies with a large number of participants. Additionally, in section 6.2, we use goodness-of-fit testing to assess the quality of different approximations to the underlying mixing distribution when the number of studies is much larger than the number of participants.

6.1 Meta-analysis of cardiovascular safety concerns associated with rosiglitazone

Nissen and Wolski [2007] conducted a meta-analysis of 42 studies, each with 40 to 2900 participants, to assess cardiovascular risks associated with the rosiglitazone treatment. Figure 12 displays the proportion of patients who experienced a myocardial infarction (MI) or died from cardiovascular causes (D) in both the treatment and control groups. Two challenges are noteworthy: the heterogeneity in the study sizes and the low event rate. The maximum death rate in the treated group is 2%, while over 80% of control group studies report zero events. These low event counts suggest that standard asymptotic procedures may be unreliable, as they assume event probabilities are bounded away from zero.

Figure 12: Histograms of proportions of
patients that suffered a myocardial infarction or died from cardiovascular causes under the rosiglitazone treatment, for the treatment and control groups.

Following Park [2019], we test the homogeneity of myocardial infarction and death proportions across studies in both treatment and control groups. Two approaches are considered: (I) constructing confidence intervals using a homogeneity test under a simple null hypothesis (section 4), or (II) obtaining a P-value from a homogeneity test under a composite null hypothesis (section 5.3).

First, we construct a 95% confidence interval by inverting a test under a simple null hypothesis. The tests in section 4 assume equal study sizes ($t$), which does not hold here. Adjusting the methods for varying study sizes is straightforward and detailed in appendix L.1. Table 1 presents the resulting confidence intervals for each one of the methods. Some classical tests reject all null hypotheses, indicating evidence of heterogeneity. However, the local minimax test (32) provides non-empty confidence intervals, suggesting possible homogeneity. The confidence intervals also show improvements in tightness: the debiased $\ell_2$ test (31) outperforms the classical $\ell_2$ test, and the local minimax test (32) further improves upon the debiased $\ell_2$ test by incorporating a mean test (30), which is useful for detecting small effects.

| Test | MI (Treatment) | MI (Control) | D (Treatment) | D (Control) |
|---|---|---|---|---|
| Local Minimax (32) | [0.005, 0.009] | [0.003, 0.007] | [0.003, 0.005] | [0.001, 0.003] |
| Mean (30) | [0.006, 0.01] | [0.003, 0.01] | Rejected all | [0.001, 0.006] |
| debiased ℓ2 (31) | [0.006, 0.01] | [0.003, 0.01] | [0.003, 0.006] | [0.001, 0.005] |
| ℓ2 (L.1) | Rejected all | [0.003, 0.017] | [0.002, 0.013] | [0.001, 0.013] |
| Plug-in (9) | Rejected all | [0.004, 0.009] | [0.002, 0.007] | [0.001, 0.007] |
| mod. Pearson's χ2 (L.1) | Rejected all | Rejected all | [0.003, 0.006] | [0.001, 0.007] |
| mod. LRT (L.1) | Rejected all | Rejected all | Rejected all | [0.001, 0.004] |

Table 1: 95% confidence intervals produced by inverting homogeneity tests for simple nulls.

Second, we compute the P-values for the homogeneity tests introduced in section 5. Their definitions for varying numbers of patients per study appear in appendix L.2. Table 2 shows the obtained P-values. The Cochran's $\chi^2$ test rejects the null hypothesis of homogeneity at the $\alpha = 0.05$ level for all covariates except the death proportion in the control group. In contrast, its debiased versions either fail to reject or yield P-values near the decision threshold $\alpha = 0.05$. These results align with the confidence intervals from the local minimax test.

| Test | MI (Treat.) | MI (Control) | D (Treat.) | D (Control) |
|---|---|---|---|---|
| Asymptotic Cochran's χ2 (50) | 0 | 0 | 0.002 | 0.479 |
| mod. Cochran's χ2 (50) | 0 | 0.003 | 0.005 | 0.492 |
| debiased Cochran's χ2 I (51) | 0.062 | 0.285 | 0.049 | 0.286 |
| debiased Cochran's χ2 II (52) | 0.062 | 0.285 | 0.048 | 0.285 |

Table 2: P-values for homogeneity testing with a composite null.

6.2 Modelling of county-level political outcomes from 1976 to 2004

Tian et al. [2017] and Vinayak et al. [2019] consider the problem of estimating the underlying mixing distribution of the counties' political leaning. They analyze data from 3,109 U.S. counties, recording the number of times the Republican party won in each county during the eight presidential elections from 1976 to 2004. Tian et al. [2017] used the method
of moments (MOM) to estimate the mixing distribution, while Vinayak et al. [2019] applied maximum likelihood estimation (MLE). Vinayak et al. [2019] noted that the MLE and MOM estimators look qualitatively different, although they match their first 8 moments. Here, we extend their analysis by applying the goodness-of-fit tests from section 3 to assess which method better models the underlying distribution.

Figure 13: (Left panel) Empirical distribution of the training and test datasets. (Right panel) Estimated distributions together with the empirical distribution of the training dataset.

We randomly split the counties into two halves, each containing approximately $n \approx 1{,}555$ counties with $t = 8$. We refer to these halves as the training and test datasets, shown in figure 13. Using the training dataset, we compute the empirical, MLE, and MOM estimators of the mixing distribution, illustrated in the right panel of figure 13. Although the MLE and MOM estimators match their first eight moments, they yield distinct qualitative representations of the data; see figure 14. The MLE estimator clusters around three points, corresponding to counties that consistently vote Democratic, consistently vote Republican, or are swing counties. In contrast, the MOM estimator suggests greater heterogeneity in political leanings.

Figure 14: First 8 moments of the empirical, MLE and MOM estimators of the mixing distribution on the first half of the data. Additionally, the unbiased estimator of each moment is presented; see their definition in (90) of appendix K.1.
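Unbiased moment estimates like those in figure 14 can be produced with the standard factorial-moment identity $\mathbb{E}[\binom{X}{l}] = \binom{t}{l} p^l$ for $X \sim \mathrm{Bin}(t, p)$. We cannot see (90) from this excerpt, so take the sketch below as our assumed form of that estimator, not a verbatim reproduction.

```python
import numpy as np
from math import comb

def unbiased_moments(X, t, L):
    """Unbiased estimates of m_l(pi) for l = 1..L (L <= t), using
    E[C(X, l)] = C(t, l) * p^l for X ~ Bin(t, p)."""
    X = np.asarray(X)
    return np.array([np.mean([comb(int(x), l) for x in X]) / comb(t, l)
                     for l in range(1, L + 1)])

# Sanity check on a known mixing distribution pi = delta_{0.5},
# where m_l = 0.5^l for every l.
rng = np.random.default_rng(3)
est = unbiased_moments(rng.binomial(8, 0.5, size=50_000), t=8, L=8)
print(np.round(est, 3))
```

Averaging $\binom{X_i}{l}/\binom{t}{l}$ over the sample removes the upward bias of the naive plug-in $\hat m_1^l$, which is what makes the "Unbiased" curve in figure 14 a fair benchmark.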
We use the second half of the data to evaluate whether the goodness-of-fit tests from section 3 can distinguish the estimated distributions from the observed data. The P-value represents the smallest significance level at which a test rejects the null hypothesis that the second half of the data follows the null mixing distribution. A higher P-value suggests the test finds it harder to differentiate the data from the null distribution. Table 3 compiles the computed P-values for each test and null distribution. Most tests reject the MOM and empirical estimates of the mixing distribution. Therefore, we may conclude that the MLE provides a more accurate fit. Additionally, the global minimax, debiased Pearson's $\chi^2$, modified Pearson's $\chi^2$ and modified likelihood ratio tests are in agreement in all cases.

| Test | MLE | MOM | Empirical |
|---|---|---|---|
| Plug-in (9) | 0.07 | 0 | 0.99 |
| Global minimax (19) | 0.13 | 0 | 0 |
| debiased Pearson's χ2 (18) | 0.14 | 0 | 0 |
| mod. Pearson's χ2 (K.1) | 0.17 | 0 | 0 |
| mod. LRT (K.1) | 0.16 | 0 | 0 |
| Distance to MOM (K.1) | 0.37 | 0.94 | 0.26 |
| Distance to MLE (K.1) | 0.65 | 0 | 0.10 |

Table 3: P-values for goodness-of-fit tests of the MLE, MOM and empirical mixing distributions on the second half of the county-level data. See section 3 and appendix K.1 for their definitions.

7 Discussion

In this paper, we study goodness-of-fit testing for random binomial effects within the minimax framework. We
demonstrate that combining a test based on the plug-in and debiased Pearson's $\chi^2$ tests [Balakrishnan and Wasserman, 2019] is minimax optimal. The critical separation rates match previously established estimation rates [Tian et al., 2017, Vinayak et al., 2019], indicating that testing is as hard as estimation, a common feature of deconvolution problems. The use of Kravchuk polynomials [Kravchuk, 1929] simplifies lower-bound arguments and may have broader applications in empirical Bayes methods, where bounding the distance between marginal measures via distances between mixing distributions is essential. Furthermore, the analysis of the plug-in test reveals its adaptivity to the null distribution's variance. Given the practicality of the test, it is of interest to understand when the local rates are sharp.

For homogeneity testing with respect to a reference effect, we connect the problem to its fixed-effects counterpart [Chhor and Carpentier, 2022] and establish how the critical separation depends on the null distribution's location. Inverting the local minimax test yields confidence intervals that exhibit adaptivity due to faster rejection of null hypotheses near the boundary. Formalizing these adaptivity properties could offer a valuable alternative to adaptive estimation-based confidence intervals.

For homogeneity testing without a reference effect, we propose a debiased version of Cochran's $\chi^2$ test, which is minimax optimal. The debiased Cochran's chi-squared statistic has a smaller bias and variance than its classical counterpart. However, our tests do not adapt to the underlying variance of the data because they use the maximum quantile over the null hypothesis space. An alternative is to investigate Berger and Boos [1994]'s method: sample split the data, use one half to construct a confidence interval for the homogeneous effect, and use the other half to check whether the local minimax test (32) rejects all members of the interval.
We expect this approach to be less conservative since it avoids considering the worst quantile under the null hypothesis space.

Acknowledgments. We thank Edward H. Kennedy, Tudor Manole, Ian Waudby-Smith, Kenta Takatsu, JungHo Lee and Siddhaarth Sarkar for useful comments during the development of this work. We gratefully acknowledge funding from the National Science Foundation (DMS-2310632).

Appendices

A Notation 32
B Polynomial approximation in Bernstein form 33
C Kravchuk polynomials 36
  C.1 Bounding the total variation distance by moment differences 37
  C.2 Bounding the chi-squared distance by moment differences 38
  C.3 Expectations and variances of 1st and 2nd Kravchuk polynomials 40
D Wasserstein metric 42
  D.1 Concentration and locality 43
E General lemmas for upper-bound arguments 44
F General lemmas for lower-bound arguments 46
G Minimax critical separation for Goodness-of-fit testing 48
  G.1 Upper bounds on the critical separation 48
  G.2 Lower bounds on the critical separation 52
H Local minimax critical separation with a reference effect 56
  H.1 Upper bounds on the critical separation 59
  H.2 Lower bound for random effects 62
  H.3 Lower bounds for fixed effects 63
I Minimax critical separation for homogeneity testing without a reference effect 66
  I.1 Upper bounds on the critical separation 66
  I.2 Lower bounds on the critical separation 75
J Repository with source code to reproduce simulations and applications 77
K Simulations 77
  K.1 Testing procedures for Goodness-of-fit testing 77
  K.2 Additional simulations for homogeneity testing with composite null hypothesis 79
L Applications 79
  L.1 Testing procedures for homogeneity testing with simple null 79
  L.2 Testing procedures for homogeneity testing with composite null 81
References 83

A Notation

| Symbol | Description |
|---|---|
| $V(P, Q)$ | Total variation distance |
| $\chi^2(P, Q)$ | $\chi^2$ distance |
| $KL(P, Q)$ | Kullback–Leibler divergence |
| $\delta_j(B) = 1$ if $j \in B$, $0$ otherwise | Dirac delta |
| $\delta_{j,k} = \delta_j(\{k\}) = 1$ if $j = k$, $0$ otherwise | Dirac delta |
| $B_{j,t}(p) = \binom{t}{j} \cdot p^j \cdot (1-p)^{t-j}$, $0 \le p \le 1$ | $j$-th element of the $t$-order Bernstein basis |
| $b_{j,t}(\pi) = \mathbb{E}_{p \sim \pi}[B_{j,t}(p)]$ | Expected $j$-th fingerprint under $\pi$ |
| $P_\pi(A) = \int \mathrm{Bin}(t, p)(A)\, d\pi(p) = \sum_{j=0}^t b_j(\pi) \cdot \delta_j(A)$ | Measure of set $A$ for binomial mixture |
| $m_l(\pi) = \mathbb{E}_{p \sim \pi}[p^l]$ | $l$-th moment of distribution $\pi$ |
| $V(\pi) = \mathbb{V}_{p \sim \pi}[p]$ | Variance of distribution $\pi$ |
| $\mu_p = p \cdot (1-p)$ | Variance of $\mathrm{Ber}(p)$ |
| $B_t(f)(x) = \sum_{j=0}^t f(j/t) \cdot B_{j,t}(x)$ | Bernstein polynomial approximation of $f$ |
| $K_m(x, p, t) = \sum_{v=0}^m (-1)^{m-v} \binom{t-x}{m-v} \binom{x}{v} p^{m-v} (1-p)^v$ | $m$-th Kravchuk polynomial |
| $\tilde K_m(x, p, t) = \binom{t}{m}^{-1} \cdot K_m(x, p, t)$ | $m$-th normalized Kravchuk polynomial |

B Polynomial approximation in Bernstein form

Lemma 11.
Let $f \in \mathrm{Lip}_1[0,1]$ and $k \in \mathbb{N}$ s.t. $k \le t$. There exists a Bernstein polynomial $p_k$ of order $t$,

$$p_k(x) = \sum_{j=0}^t c_{j,k} \cdot B_{j,t}(x),$$

that satisfies

$$\sup_{x \in [0,1]} |f(x) - p_k(x)| \le \frac{\alpha_k}{k} \quad \text{where} \quad \alpha_k = \begin{cases} 2 & \text{if } k = \sqrt{t} \\ \pi/2 & \text{if } \sqrt{t} < k \le t \end{cases}$$

and

$$\|c_k\|_\infty \le \begin{cases} \sqrt{t} \cdot 2^t & \text{if } k = t \\ \sqrt{k} \cdot (t+1) \cdot e^{k^2/t} & \text{if } k < t \\ 1 & \text{if } k = \sqrt{t} \end{cases}$$

Proof of lemma 11.

For $k = \sqrt{t}$: Given $f \in \mathrm{Lip}_1[0,1]$, by proposition 4.9 of Bustamante [2017], it holds that

$$\sup_{x \in [0,1]} |f(x) - p_k(x)| \le \frac{2}{\sqrt{t}},$$

where $|c_{j,k}| = |f(j/t)| \le j/t$ since $f \in \mathrm{Lip}_1$. Thus, $\|c_k\|_\infty \le 1$.

For $\sqrt{t} < k \le t$: Given $f \in \mathrm{Lip}_1[0,1]$, there exists a polynomial of degree at most $k$ that uniformly approximates $f$ on its domain [Tian et al., 2017, thm 3] [Plaskota, 2021, thm 7.2]:

$$\inf_{p_k \in P_k} \|f - p_k\|_{\infty[0,1]} \le \frac{\pi}{2} \cdot \frac{1}{k}.$$

Let $p_k$ be the polynomial that achieves the infimum; it can be expressed in the shifted Chebyshev basis

$$p_k(x) = \sum_{m=0}^k a_m(f) \cdot T_m(x) \quad \text{where} \quad \|a\|_2^2 \le 1. \tag{53}$$

Furthermore, since $k \le t$, $p_k$ can be expressed in the Bernstein basis of degree $t$. Vinayak et al. [2019] showed that the Chebyshev basis can be rewritten as

$$T_m(x) = \sum_{j=0}^t C_{t,m,j} B_{j,t}(x) \quad \text{where} \quad C_{t,m,j} = \sum_{l=\max(0,\, j+m-t)}^{\min(j,m)} (-1)^{m-l} \cdot \binom{2m}{2l} \binom{t-m}{j-l} \Big/ \binom{t}{j};$$

when $m = t$, it reduces to

$$T_m(x) = \sum_{j=0}^m C_{t,m,j} B_{j,m}(x) \quad \text{where} \quad C_{t,m,j} = (-1)^{m-j} \cdot \binom{2m}{2j} \Big/ \binom{m}{j}.$$

Thus, by plugging into equation (53), we get

$$p_k(x) = \sum_{m=0}^k a_m T_m(x) = \sum_{j=0}^t c_j \cdot B_{j,t}(x) \quad \text{where} \quad c_j = \sum_{m=0}^k a_m \cdot C_{t,m,j}.$$

The proof finishes using proposition 2.

Corollary 2.

$$W_1(\pi, \pi_0) \le \frac{2\alpha_k}{k} + \|c_k\|_\infty \cdot \|b(\pi) - b(\pi_0)\|_1,$$

where $\alpha_k$ and $c_k$ are defined in lemma 11.

Proof of corollary 2. Let $f_k$ be the polynomial in Bernstein form defined in lemma 11 for a given $f$. It follows that

$$W_1(\pi, \pi_0) = \sup_{f \in \mathrm{Lip}_1[0,1]} \int_0^1 f(p)\, (d\pi - d\pi_0) = \sup_{f \in \mathrm{Lip}_1[0,1]} \int_0^1 (f(p) - f_k(p))\, (d\pi - d\pi_0) + \int_0^1 f_k(p)\, (d\pi - d\pi_0).$$

Using the definition of $f_k$ and recalling that $b_j(\pi) = \mathbb{E}_{p \sim \pi}[B_{j,t}(p)]$, the following upper bound holds:

$$W_1(\pi, \pi_0) \le \sup_{f \in \mathrm{Lip}_1[0,1]} 2 \cdot \|f - f_k\|_\infty + \sum_{j=0}^t |c_{j,k}| \cdot |b_j(\pi) - b_j(\pi_0)| \le \sup_{f \in \mathrm{Lip}_1[0,1]} 2 \cdot \|f - f_k\|_\infty + \|c_k\|_\infty \cdot \|b(\pi) - b(\pi_0)\|_1.$$

The corollary follows by the guarantees given in lemma 11.

Proposition 2 (Extension of Rababah [2003] and Vinayak et al. [2019]).

$$\|c\|_\infty \le \begin{cases} \sqrt{t+1} \cdot 2^t & \text{if } k = t \\ \sqrt{k+1} \cdot e^k \,\wedge\, \sqrt{k} \cdot (t+1) \cdot e^{k^2/t} & \text{if } \sqrt{t} < k \le t \end{cases}$$

Proof of proposition 2.

For $k = t$:

$$\|c\|_\infty = \max_{j \le t} |c_j| \le \max_{j \le t} \|a\|_1 \cdot \max_{m \le k} |C_{t,m,j}| \le \sqrt{t+1} \cdot \max_{j \le t,\, m \le k} |C_{t,m,j}| \quad \text{since } \|a\|_2 \le 1.$$

We exploit the strategy employed in lemma 5 of Rababah [2003] by interpreting the coefficients as a convex combination:

$$|C_{t,m,j}| \le \sum_{l=\max(0,\, j+m-t)}^{\min(j,m)} \binom{2m}{2l} \binom{t-m}{j-l} \Big/ \binom{t}{j} = \sum_{l=\max(0,\, j+m-t)}^{\min(j,m)} w_l \cdot \binom{2m}{2l} \Big/ \binom{m}{l} \quad \text{where} \quad w_l = \binom{m}{l} \binom{t-m}{j-l} \Big/ \binom{t}{j}.$$

Note that $\{w_l\}_{l=0}^j$ form a partition of unity, i.e. $\sum_{l=0}^j w_l = 1$, by the Chu–Vandermonde identity. Thus, the upper bound on $|C_{t,m,j}|$ is a convex combination of $\binom{2m}{2l}/\binom{m}{l}$, and it follows that

$$|C_{t,m,j}| \le \max_{\max(0,\, j+m-t) \le l \le \min(j,m)} \binom{2m}{2l} \Big/ \binom{m}{l}.$$

Furthermore, $\binom{2m}{2l}/\binom{m}{l}$ achieves a maximum at $l = m/2$.
Thus, it follows that
\[
|C_{t,m,j}|\le \frac{\binom{2m}{m}}{\binom{m}{m/2}}\le 2^{m}\quad\text{since } \binom{2m}{m}\asymp \frac{4^m}{\sqrt{m(1-1/m)}}\,.
\]
Finally, we get that $\max_{j\le t,\,m\le k}|C_{t,m,j}|\le 2^{t}$.

For $k\le t$: by the Cauchy–Schwarz inequality,
\[
|c_j|\le \|a\|_2\cdot\Big(\sum_{m=0}^{k}C_{t,m,j}^2\Big)^{1/2}\le \Big(\sum_{m=0}^{k}C_{t,m,j}^2\Big)^{1/2}\quad\text{since } \|a\|_2\le 1\,.
\]
Recall that
\[
C_{t,m,j}=\sum_{l}(-1)^{m-l}\cdot\frac{\binom{2m}{2l}}{\binom{m}{l}}\cdot w_l\,.
\]
Thus,
\begin{align*}
\sum_{m=0}^{k}C_{t,m,j}^2
&\le \sum_{m=0}^{k}\sum_{l\le \min(j,m)} w_l\cdot\bigg[\frac{\binom{2m}{2l}}{\binom{m}{l}}\bigg]^2 && \text{by Jensen's inequality}\\
&\le \sum_{m=0}^{k}\sum_{l} w_l\cdot e^{2l}
\ \le\ \sum_{m=0}^{k} e^{2\min(j,m)}\cdot\sum_{l} w_l
\ \le\ (k+1)\cdot e^{2k} && \text{since } \textstyle\sum_l w_l\le 1\,.
\end{align*}
Consequently, $|c_j|\le \sqrt{k+1}\cdot e^{k}$.

For $k<t$:
\begin{align*}
\|c\|_\infty=\max_{j\le t}|c_j|
&\le \max_{j\le t}\|a\|_1\cdot\max_{m\le k}|C_{t,m,j}|
\ \le\ \sqrt{k}\cdot\max_{j\le t,\,m\le k}|C_{t,m,j}| && \text{since } \|a\|_2\le 1\\
&\le \sqrt{k}\cdot\max_{m\le k}\sqrt{\sum_{j=0}^{t}C_{t,m,j}^2}
\ \le\ \sqrt{k}\,(t+1)\,e^{k^2/t} && \text{by lemma 4.4 of Vinayak et al. [2019].}
\end{align*}

C Kravchuk polynomials

The generating series of the Kravchuk polynomials implies a simple local expansion of the Bernstein basis. The generating series is (see equation 2.82.4 of Szegő [1975])
\[
\sum_{j=0}^{t}K_j(x,p,t)\cdot w^{j}=(1+(1-p)\cdot w)^{x}\,(1-p\cdot w)^{t-x}\quad\text{for } x\in\{0,\dots,t\}\,,
\]
hence for $p\in(0,1)$ it holds that
\[
\frac{B_{j,t}(u+p)}{B_{j,t}(p)}=\Big(\frac{u}{p}+1\Big)^{j}\Big(1-\frac{u}{1-p}\Big)^{t-j}
=\sum_{m=0}^{t}K_m(j,p,t)\cdot\frac{u^m}{p^m(1-p)^m}\,. \tag{54}
\]
A simple change of variables leads to the following expansion
\[
B_{j,t}(u)=B_{j,t}(p)\cdot\sum_{m=0}^{t}K_m(j,p,t)\cdot\frac{(u-p)^m}{\mu_p^m}\quad\text{for } 0<p<1\,. \tag{55}
\]
This local expansion is actually the Taylor polynomial expansion since
\[
B^{(l)}_{j,t}(p)=B_{j,t}(p)\cdot\frac{K_l(j,p,t)}{\mu_p^{l}}\cdot l!\quad\text{for } 0<p<1 \text{ and } l\in\{0,\dots,t\}\,.
\]

C.1 Bounding the total variation distance by moment differences

Lemma 12 (Extension of lemma 3). Let $\pi_0$ and $\pi_1$ be two mixing distributions supported on the $[0,1]$ interval. For $p\in(0,1)$, it follows that
\[
\frac{\sqrt{p^t\wedge(1-p)^t}\cdot M_p(\pi_1,\pi_0)}{2}\le V(P_{\pi_1},P_{\pi_0})\le \frac{M_p(\pi_1,\pi_0)}{2}\,.
\]

Proof of lemma 12. We prove the lemma using (54) rather than (55); a simple change of variables implies the lemma's statement.
\[
V\big(\mathbb{E}_{u\sim\pi_1}[\mathrm{Bin}(t,u+p)],\,\mathbb{E}_{u\sim\pi_0}[\mathrm{Bin}(t,u+p)]\big)
=\frac12\sum_{j=0}^{t}\big|\mathbb{E}_{u\sim\pi_1}[B_{j,t}(u+p)]-\mathbb{E}_{u\sim\pi_0}[B_{j,t}(u+p)]\big|
=\frac12\sum_{j=0}^{t}B_{j,t}(p)\cdot\Big|\sum_{m=1}^{t}K_m(j,p,t)\cdot\frac{\Delta_m}{p^m(1-p)^m}\Big| \tag{56}
\]
where in the last equality we used (54) and $\Delta_m=\mathbb{E}_{\pi_1}[u^m]-\mathbb{E}_{\pi_0}[u^m]$. It follows that
\[
V\big(\mathbb{E}_{u\sim\pi_1}[\mathrm{Bin}(t,u+p)],\,\mathbb{E}_{u\sim\pi_0}[\mathrm{Bin}(t,u+p)]\big)
\le \frac12\Bigg[\sum_{j=0}^{t}B_{j,t}(p)\cdot\Big(\sum_{m=1}^{t}K_m(j,p,t)\cdot\frac{\Delta_m}{p^m(1-p)^m}\Big)^2\Bigg]^{1/2}
=\frac12\Bigg[\sum_{m=1}^{t}\binom{t}{m}\cdot\frac{\Delta_m^2}{p^m(1-p)^m}\Bigg]^{1/2}
\]
where the inequality is Jensen's and the equality uses the orthogonality relation (17). The lower bound proceeds similarly. By (56) and norm monotonicity, it follows that
\[
V\big(\mathbb{E}_{u\sim\pi_1}[\mathrm{Bin}(t,u+p)],\,\mathbb{E}_{u\sim\pi_0}[\mathrm{Bin}(t,u+p)]\big)
\ge \frac12\Bigg[\sum_{j=0}^{t}B_{j,t}^2(p)\cdot\Big(\sum_{m=1}^{t}K_m(j,p,t)\cdot\frac{\Delta_m}{p^m(1-p)^m}\Big)^2\Bigg]^{1/2}\,.
\]
Since $B_{j,t}(p)\ge p^t\wedge(1-p)^t$, it holds that
\[
V\big(\mathbb{E}_{u\sim\pi_1}[\mathrm{Bin}(t,u+p)],\,\mathbb{E}_{u\sim\pi_0}[\mathrm{Bin}(t,u+p)]\big)
\ge \frac{\sqrt{p^t\wedge(1-p)^t}}{2}\Bigg[\sum_{j=0}^{t}B_{j,t}(p)\cdot\Big(\sum_{m=1}^{t}K_m(j,p,t)\cdot\frac{\Delta_m}{p^m(1-p)^m}\Big)^2\Bigg]^{1/2}
=\frac{\sqrt{p^t\wedge(1-p)^t}}{2}\Bigg[\sum_{m=1}^{t}\binom{t}{m}\cdot\frac{\Delta_m^2}{p^m(1-p)^m}\Bigg]^{1/2}
\]
where in the last step we used the orthogonality of the Kravchuk polynomials.

C.2 Bounding the chi-squared distance by moment differences

Lemma 13. Let $p\in(0,1)$. If
\[
\mathbb{E}_{u\sim\pi_0}B_{j,t}(u)\ge C^{-1}\cdot B_{j,t}(p)\quad\text{for } 1\le j\le t \tag{57}
\]
then $\chi^2(P_{\pi_0},P_{\pi_1})\le C\cdot M_p(\pi_1,\pi_0)$. Furthermore, the above strengthens to an equality if (57) holds with equality.

Proof. For any $0<p<1$, it holds that
\[
\chi^2(P_{\pi_0},P_{\pi_1})=\sum_{j=0}^{t}\frac{\big|\mathbb{E}_{u\sim\pi_1}[B_{j,t}(u)]-\mathbb{E}_{u\sim\pi_0}[B_{j,t}(u)]\big|^2}{\mathbb{E}_{u\sim\pi_0}B_{j,t}(u)}
=\sum_{j=0}^{t}\frac{B_{j,t}^2(p)\cdot\Big(\sum_{m=1}^{t}K_m(j,p,t)\cdot\frac{\Delta_m(\pi_1,\pi_0)}{p^m(1-p)^m}\Big)^2}{\mathbb{E}_{\pi_0}B_{j,t}(u)}\quad\text{by eq. (55)}
\]
where $\Delta_m(\pi_1,\pi_0)=\mathbb{E}_{u\sim\pi_1}[u-p]^m-\mathbb{E}_{u\sim\pi_0}[u-p]^m$. Then, by orthogonality of the Kravchuk polynomials (17) and the assumption (57), we have that
\[
\chi^2(P_{\pi_0},P_{\pi_1})\le C\cdot\sum_{m=1}^{t}\binom{t}{m}\cdot\frac{\Delta_m^2(\pi_1,\pi_0)}{\mu_p^m}\,.
\]

Lemma 8. Let $p_0\in(0,1)$ and $\pi_0=\delta_{p_0}$. For any mixing distribution $\pi_1$, it holds that
\[
\chi^2(P_{\pi_1},P_{\pi_0})=M_{p_0}(\pi_1,\pi_0)\,.
\]
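Lemma 8 can be checked numerically by writing the moment distance explicitly as the sum $\sum_{m\ge 1}\binom{t}{m}\Delta_m^2/\mu_{p_0}^m$ appearing in the proof of lemma 13. A minimal sketch; the two-point mixing distribution and all parameter values are illustrative:

```python
import numpy as np
from math import comb

def binom_pmf(t, q):
    # pmf of Bin(t, q), i.e. the Bernstein values B_{j,t}(q)
    return np.array([comb(t, j) * q**j * (1 - q)**(t - j) for j in range(t + 1)])

t, p0 = 6, 0.4
# illustrative two-point mixing distribution pi1
qs, ws = [0.25, 0.55], [0.5, 0.5]
P0 = binom_pmf(t, p0)                                   # marginal under pi0 = delta_{p0}
P1 = ws[0] * binom_pmf(t, qs[0]) + ws[1] * binom_pmf(t, qs[1])

# chi-squared distance between the marginals
chi2 = np.sum((P1 - P0)**2 / P0)

# moment distance: sum_m binom(t, m) * Delta_m^2 / mu^m with Delta_m = E_{pi1}(u - p0)^m
mu = p0 * (1 - p0)
delta = [sum(w * (q - p0)**m for q, w in zip(qs, ws)) for m in range(t + 1)]
M = sum(comb(t, m) * delta[m]**2 / mu**m for m in range(1, t + 1))

assert abs(chi2 - M) < 1e-10
```

The equality is exact here, as expected: for a point-mass null, (57) holds with equality.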
(35) Proof of lemma 8. For the particular case where π0is a point mass, i.e. π=δp0, choosing p=p0implies that Eu∼π0Bj,t(u) =Bj,t(p0) for 1 ≤j≤t. The statement follows from lemma 13. Lemma 8 can be slightly generalized. Proposition 3 and corollary 3, show that one can replace π0=δp0in lemma 8 by any distribution supported on [ p0−δ, p0+δ] where δ≲p0/t. Proposition 3. Forp∈(0,1/2]and0≤j≤t, it holds that Bj,t(u) Bj,t(p)−1 ≤ 1 +1−p pj/t ·|u−p| 1−p!t −1≤ 1 +|u−p| pt −1. Proof. By the local Taylor expansion (55), it holds that Bj,t(u) Bj,t(p)−1 ≤tX m=1|Km(j, t, p )| ·|u−p|m µm p. (58) Furthermore, by (16), we have that |Km(j, t, p )| ≤t m ·pm·mX v=01−p pv ·wv where wv= t−j m−v j v t m≥0 andmX v=0wv= 1 due to the Chu-Vandermonde’s identity. Since (1 −p)/p≥1, by Jensen’s inequality, we have that |Km(j, t, p )| ≤t m ·pm·exp( log1−p p ·tX v=0v·wv) =t m ·pm·1−p pj·(t−1 m−1)/(t m) . 39 Pluging the inequality in (58), we get Bj,t(u) Bj,t(p)−1 ≤tX m=1t m ·|u−p| 1−pm ·1−p pj·(t−1 m−1)/(t m) = 1 +1−p pj/t ·|u−p| 1−p!t −1 . The following bound is an immediate consequence. Corollary 3. Letp∈(0,1/2]andπ0be a mixing distribution such that |u−p|
≤δalmost surely for u∼π0where δ≤log(2−C)·p/tfor some C∈(0,1]. Then, for any mixing distribution π1, it holds that χ2(Pπ0, Pπ1)≤C−1·Mp(π1, π0). Proof of corollary 3. By proposition 3, it holds that Bj,t(u) Bj,t(p)≥2− 1 +|u−p| pt ≥2−exptδ p ≥C The statement follows by lemma 13. C.3 Expectations and variances of 1st and 2nd Kravchuk polyno- mials In the following two remarks, we adopt the notation ∆ =q−p,µp=p(1−p) and dl=Z ∆ldπ(p) Remark 1 (First Kravchuk polynomial) .The expectation and second moment of the first Kravchuk polynomial under a binomial distribution is given by EX∼Bin(t,q)h eK1(X, p, t )i = ∆ EX∼Bin(t,q)[eK1(X, p, t )2] = ∆2+µq t Using them, we compute its variance under the binomial distribution. VX∼Bin(t,p)[eK1(X, p, t )] =EX∼Bin(t,p)[eK1(X, p, t )2] =µp t VX∼Bin(t,q)[eK1(X, p, t )] =µq t=µp t+ (1−2p)∆ t−∆2 t≤µp t+∆ t−∆2 t 40 The expectation of the second Kravchuk polynomial under a mixture of binomials is given by EX∼Pπ[eK1(X, p, t )] =d1 We proceed to compute its variance under a binomial mixture. Vq∼π[EX∼Bin(t,q)[eK1(X, p, t )]] = d2−d2 1 Eq∼π[VX∼Bin(t,q)[eK1(X, p, t )]]≤µp t+d1 t−d2 t Consequently, VX∼Pπ[eK1(X, p, t )]≤µp t+d1 t+ (1−1 t)·d2−d2 1 Remark 2 (Second Kravchuk polynomial) .The expectation and second moment of the sec- ond Kravchuk polynomial under a binomial distribution is given by EX∼Bin(t,q)h eK2(X, p, t )i = ∆2 EX∼Bin(t,q)[eK2(X, p, t )2] = ∆4+ 2µ2 q t−1+ 2µq t· 2p2−(1 + 4 p)q+ 3q2 Using them, we compute its variance under the binomial distribution. 
VX∼Bin(t,p)[eK2(X, p, t )] =EX∼Bin(t,p)[eK2(X, p, t )2] =µ2 p (t−1)t VX∼Bin(t,q)[eK2(X, p, t )] = 2µ2 q t−1+ 4p2q t−2(1 + 2 p(2 +p))q2 t+ 8(1 + p)q3 t−6q4 t =4(1−2p)(t−2) t(t−1)·∆3 +2 t1 t−1−2 ·∆4 +4(1−2p) (t−1)t·µp·∆ +2 (t−1)t·µ2 p +2(1 + 2( t−4)µp) (t−1)t·∆2 ≤4·µ2 p t2+ 8·µp t2·∆ + 21 t2+µp t ·∆2+ 4·∆3 t−2·∆4 t 41 The expectation of the second Kravchuk polynomial under a mixture of binomials is given by EX∼Pπ[eK2(X, p, t )] =d2 We proceed to compute its variance under a binomial mixture. Vq∼π[EX∼Bin(t,q)[eK2(X, p, t )]] = d4−d2 2 Eq∼π[VX∼Bin(t,q)[eK2(X, p, t )]]≤4·µ2 p t2+ 8·µp t2·d1+ 21 t2+µp t ·d2+ 4·d3 t−2·d4 t Consequently, VX∼Pπ[eK2(X, p, t )]≤4·µ2 p t2+ 8·µp t2·d1+ 21 t2+µp t ·d2+ 4·d3 t+ (1−2 t)·d4−d2 2 D Wasserstein metric Lemma 14. Leteπ1andeπ0be two distributions supported on [−1,1]that share they first k moments Eeπ1[Xl]−Eeπ0[Xl] = 0 forl≤k There exists π1andπ0supported on [a, b]that share they first kmoments and satisfy W1(π0, π1) =b−a 2·W1(eπ0,eπ1) Proof of lemma 14. Define the linear transformation from [ a, b] to [−1,1] m(x) =2·x−(b+a) b−a and the shifted distributions πi(x) =m′(x)·eπi◦m(x) then the first kmoments of π0andπ1are the same since Z [a,b]xkdπi=Z [−1,1]xkdeπifori∈ {0,1} 42 and W1(π0, π1) = sup f∈Lip1[a,b]Zb af dπ 1−dπ0 = sup f∈Lip1[a,b]Z1 −1f◦m−1deπ1−deπ0 change of variable =b−a 2·sup f∈Lip1[−1,1]Z1 −1f deπ1−deπ0 =b−a 2·W1(eπ0,eπ1) D.1 Concentration and locality Lemma 15. Letpiiid∼πfor1≤i≤n, define the empirical average eπ=1 nnX i=1δpi and consider the expectation of W1(eπ, π). It follows that Eπ[W1(eπ, π)]≤J(π)√nwhere J(π) =Z1 0µ1/2 Fπ(x)dxandFπ(x) =Ep∼πI[p≤x]. Furthermore, the variance satisfies Vπ[W1(eπ, π)]≤J(π)√n2
. Finally, J(π)satisfies J(π)≤J(π0) +√ 3·p W1(π, π 0). Proof of lemma 15. By theorem 3.2 of Bobkov and Ledoux [2019],we have that EπW1(eπ, π)≤J(π)√n Due to the triangle inequality in the space L2 VπW1(eπ, π)≤Eπ[W1(eπ, π)]2≤ EπW2 1(eπ, π) ≤J(π)√n2 43 Alternatively, the generalized Minkowski inequality can be used to obtain the same result Vπ[W1(eπ, π)] ≤Eπ[W1(eπ, π)]2 =EπZ1 0|Feπ(x)−Fπ(x)|dx2 ≤Z1 0 Eπ[Feπ(x)−Fπ(x)]21/2dx2 By the generalized Minkowski inequality =Z1 0(Vπ[Feπ(x)])1/2dx2 =Z1 0rµFπ(x) ndx2 =J(π)√n2 Finally, i|µFπ(x)−µFπ0(x)|=|Fπ(x)−Fπ0(x)| · |1 +Fπ(x) +Fπ0(x)| which we use to expand J(π) around π0 J(π)√n≤J(π0)√n+r 3 n·Z |Fπ(x)−Fπ0(x)|1/2 ≤J(π0)√n+√ 3·r W1(π, π 0) n where in the last step, we used Jensen’s inequality. E General lemmas for upper-bound arguments Lemma 16. Consider the hypothesis test H0:P∈D0v.s.H1:P∈D1 and the test statistic T. Let qα(P, T)be1−αquantile of the test statistic under the distri- bution P qα(P, T) = inf {u:P(T > u )≤α} (59) and define the test ψ=I(T≥t)where t= sup P∈D0qα(P, T) (60) 44 The test controls the type I error sup P∈D0P(ψ= 1)≤α. Furthermore, if it holds that t≤inf P∈D1EP[T]−s VP[T] β(61) then the test controls the type II error sup P∈D1P(ψ= 0)≤β. Finally, since qα(P, T)≤EP[T] +p VP[T]/α, it follows that (61) is implied by sup P∈D0EP0[T] +r VP0[T] α≤inf P∈D1EP1[T]−s VP1[T] β. Proof of lemma 16. Ift= supP∈D0qα(P), it follows that the type I error is controlled sup P∈D0P(ψ= 1) = sup P∈D0P(T≥t) by (60) = sup P∈D0P(T≥sup P∈D0qα(P, T)) ≤sup P∈D0P(T≥qα(P, T)) ≤α by (59) 45 Lettinf= inf P∈D1EP[T]−q VP[T] β, then the type II error is controlled: sup P∈D1P(ψ= 0) = sup P∈D1P(T < t ) ≤sup P∈D1P(T < t inf) = sup P∈D1P(EP[T]−T > E P[T]−tinf) ≤sup P∈D1P(|EP[T]−T|> E P[T]−tinf) ≤sup P∈D1VP[T] (EP[T]−tinf)2By Markov’s inequality since tinf≤EP[T] ≤sup P∈D1VP[T] supP∈D1VP[T]/β =β F General lemmas for lower-bound arguments Lemma 17 (General lower-bound) .Consider a null D0mixing distribution space. 
Let Ψα be the set of all tests that control the type I error over D0 Ψα= ψ: sup π∈D0Pn π(ψ(X) = 1) ≤α and define the risk Ras the maximum type II error over such set R∗(ϵ) = inf ψ∈Ψαsup π0∈D0sup π:W1(π,D0)≥ϵPn π(ψ(X) = 0) where W1(π, D 0) = inf π0∈D0W1(π, π 0) Finally, consider a mixing distribution π1and a distribution over mixing distributions on the null space Γ0, that is, supp(Γ 0)⊆D0 Ifπ1andΓ0are such that V Eπ0∼Γ0 Pn π0 , Pn π1 < C αorχ2 Eπ0∼Γ0 Pn π0 , Pn π1 < C2 αwhere Cα= 1−(α+β) 46 then the critical separation ϵ∗= inf{ϵ:R∗(ϵ)≤β} is lower-bounded ϵ∗≥W1(π1, D0). Proof of lemma 17. By definition, we have that R∗(ϵ) = inf ψ∈Ψαsup π0∈Dsup π:W1(π,π0)≥ϵPn π(ψ(X) = 0) = inf ψ∈Ψαsup π0∈D0sup π:W1(π,D0)≥ϵPn π(ψ(X) = 0) + Pn π0(ψ(X) = 1) −Pn π0(ψ(X) = 1) ≥inf ψ∈Ψαsup π0∈D0sup π:W1(π,D0)≥ϵPn π(ψ(X) = 0) + Pn π0(ψ(X) = 1) −α where in the last line we used the fact that ψ∈Ψα. Since the supremum dominates any particular instance, we have that R∗(ϵ)≥inf ψ∈Ψαsup π0∈D0Pn π1(ψ(X) = 0) + Pn π0(ψ(X) = 1) −α. Since the supremum dominates any average, we have that R∗(ϵ)≥inf ψ∈ΨαPn π1(ψ(X) = 0)
+ Eπ0∼Γ0Pn π0(ψ(X) = 1) −α. By theorem 2.2 of Tsybakov [2009], it follows that R∗(ϵ)≥1−V Eπ0∼Γ0Pn π0, Pn π1 −α. Finally, by the assumption V Eπ0∼Γ0Pn π0, Pn π1 < C α, it follows that R∗(ϵ)≥1−Cα−α > β . Alternatively, the same claim follows whenever χ2 Eπ0∼Γ0Pn π0, Pn π1 < C2 αsince V Eπ0∼Γ0Pn π0, Pn π1 ≤q χ2 Eπ0∼Γ0Pn π0, Pn π1 < C α We have shown that R∗(ϵ)> β ∀ϵ≥W1(π1, D0) . Thus, it follows by definition of the critical separation that ϵ∗> W 1(π1, D0). Lemma 2 (Le Cam’s method, theorem 2.2 of Tsybakov [2009]) . ϵ∗(n, t)≥sup π0,π1∈DW1(π0, π1)s.t.V(Pπ0, Pπ1)<Cα 2n(20) where Cα= 1−(α+β). 47 Proof of lemma 2. LetD0={π0}and Γ 0=δπ0, by lemma 17 by setting, it follows that ϵ∗≥W1(π, π 0) if V Pn π0, Pn π1 < C α Optimizing over all mixing distributions in Dand using the sublinearity of the total variation distance V(Pn, Qn)≤2nV(P, Q) yields the lemma’s statement. G Minimax critical separation for Goodness-of-fit test- ing G.1 Upper bounds on the critical separation G.1.1 Reduction to Multinomial Goodness-of-fit testing under the ℓ1distance Theorem 2. For testing under W1(4), the debiased Pearson’s χ2test(18) controls type I error by α. Furthermore, for any constant δ >0, there exists positive constant Cdepending onδsuch that the type II error is bounded by βwhenever ϵ(n, t)≥C· 1 tfort≲logn 1√tlognforlogn≲t≲n1/4−δ (logn)3/8 Proof of theorem 2. Letϕbe the α-level minimax test for H0:bt(π) =bt(π0) v.s. H1:∥bt(π)−bt(π0)∥1≥γ. By lemma 1, it holds that γ≤C·t1/4 n1/2. where Cis a positive constant. Under the alternative, by corollary 2, we have that: ϵ≤W1(π, π 0)≤π k+ck· ∥b(π)−b(π0)∥1where ck≤(√ t·2tifk=t√ k·(t+ 1)·ek2/tifk < t(62) Thus, we have that the debiased Pearson’s χ2test (18) controls the type II error by β whenever ϵ 2≥π k+C·ck·t1/4 n1/2. (63) Equation (63) is implied by the following condition if the approximation error dominates ϵ≥4π kifπ k≥C·ck·t1/4 n1/2. (64) 48 Case I: ϵ≳1/tfort≲logn Choose k=t, then ck≤√ t·2tby (62). 
Thus, ϵ≥4π/tin so far as π C·n1/2≥t3/4·2t, which is implied by π C·n1/24/3≥t·et. Consequently, the condition is satisfied in so far as t≤W0π C·n1/24/3 (65) where W0is the real branch of the Lambert Wfunction. Furthermore, we have that (65) is implied by t≤1 2·loghπ C·n1/2i4/3 forπ C·n1/24/3 ≥e. Thus, it follows that: ϵ≥C0 tforn≥C1andt≤C2·logn. Case II: ϵ≳1/√tlognforlogn≲tand logt≲logn Letc >0, and choose k=√tlognc, then k < t ift > c·logn. Thus, by (62), it holds that ck≤√ k·2t·ek2/t. Furthermore, by (64), it holds that ϵ≥4π/√tlogncin so far π 2C·n1/2 t5/4≥k3/2·2k2/t which is implied byπ 2C·n1/2 t5/44/3 ·1 t≥k2 t· 24/3k2 t and can be reduced toπ 2C·n1/2 t5/44/3 ·1 t≥k2 t·ek2 t. Plugging the chosen k, we get π 2C·n1/2 t5/44/3 ·1 t≥nc·lognc which holds if c·logn < t≤π 2C4/3 ·n2/3−c c·logn3/8 . Choose a fixed δ >0 and letting c=δ8 3, it holds that: ϵ≥C0√δ·tlognifC1·δ·logn < t≤C2·n1/4−δ (δlogn)3/8 where C0=4π√ 8/3,C1=8 3andC2= 3 83/8 π 2C1/2. 49 G.1.2 Plug-in estimator of W1(π, π 0) Theorem 1. The plug-in test (9)controls the type I error by α. Moreover, there exists a universal positive constant Csuch that the test controls
the type II error by βwhenever ϵ(n, t, π 0)≥C·" J(π0)√n+r Ep∼π0[p(1−p)] t+1 n+1 t# . Proof of theorem 1. LetT(X) =W1(bπ, π 0). By lemma 16, the test control the type II error byβif EPπ[T]−EPπ0[T]≥s VPπ[T] β+r VPπ0[T] α. (66) Since the standard deviation of the statistic is dominated by its expectation p VPπ[T]≤q EPπ[W2 1(bπ, π 0)]≤q (EPπ[W1(bπ, π 0)])2=EPπ[T] , condition (66) reduces to EPπ[T]≥C·EPπ0[T] where C=1−β−1/2 1−α−1/2. (67) By the triangle inequality, we have that |EPπ[T]−W1(π, π 0)| ≤Eπ[W1(eπ, π)] +EPπ[W1(bπ,eπ)] . For the first term, by lemma 15, it holds that Eπ[W1(eπ, π)]≤J(π)√n≤J(π0)√n+√ 3·r W1(π, π 0) n For the second term, it will be shown that EPπ[W1(eπ,bπ)]≤C·"r Eπ0µp t+r W1(π, π 0) t+1 t# (68) Using these inequalities, we have that (67) is implied by W1(π, π 0)≥C′·" J(π0)√n+r W1(π, π 0) n+r Eπ0µp t+r W1(π, π 0) t+1 t# , which is implied by ϵ≥C′·" J(π0)√n+r Eπ0µp t+1 n+1 t# . Thus, the lemma is proved. 50 Finally, we prove (68). By triangle inequality, it holds that EPπ[W1(eπ,bπ)] =EPπZ1 0|Fbπ(x)−Feπ(x)|dx =EPπ"Z1 0 1 nnX i=1 IXi t≤x −I(pi≤x) dx# ≤n−1·nX i=1 EπZ1 0EPπ|π IXi t≤x −I(pi≤x) dx where, in the last inequality, we used the triangle inequality and Tonelli’s theorem. Since all the terms are identically distributed, it holds that: EPπ[W1(eπ,bπ)]≤EπZ1 0EPπ|π IX t≤x −I(p≤x) dx. Recall that Pπ|πd= Bin( t, p). Hence, we condition on the mixing distribution and proceed to bound the integrand. By definition of the indicator function, it follows that EPπ|π|I(X/t≤x)−I(p≤x)|=EPπ|π[I(X/t≤x≤p)∨I(p≤x≤X/t)] ≤EPπ|π[I(X/t≤x≤p)]∨EPπ|π[I(p≤x≤X/t)] . For the first term, by Bernstein’s inequality, it holds that EPπ|π[I(Xi/t≤x≤pi)]≤EPπ|π IX t−p≤x−p ≤C·exp −t(x−p)2 µp∧(x−p) . 51 An analogous bound follows for the second term. 
Consequently EPπ[W1(eπ,bπ)] ≤EπZ1 0EPπ|π[|I(X/t≤x)−I(p≤x)|] ≤C·EπZ1 0exp −t(x−p)2 µp∧(x−p) =C·EπZp+µp p−µpexp −t(x−p)2 µpi +Zp−µp 0exp [−t(p−x)] +Z1 p+µpexp [−t(x−p)] ≤C·Eπrµp t+2 exp[−tµp]−exp[−t(1−p)]−exp[−tp] t ≤C·Eπrµp t+exp[−t·µp] t ≤C·" Eπµ1/2 p t1/2+1 t# ≤C·"r Eπµp t+1 t# . Finally, to localize the result, note that µpis 3-Lipschitz since |µx−µy| ≤ |x−y|+|x2−y2|=|x−y|(1 +|x+y|)≤3|x−y|. Thus, using the fact that W1can be represented as taking a supremum over 1-Lipschitz, see (5), it follows that Eπµp≤Eπ0µp+|Eπµp−Eπ0µp| ≤Eπ0µp+ 3·W1(π, π 0) . Hence, we have that EPπ[W1(eπ,bπ)]≤C·"r Eπ0µp t+r W1(π, π 0) t+1 t# . which proves (68). G.2 Lower bounds on the critical separation The following lemma is extensively used to prove theorem 3. 52 Lemma 18. Let0≤δ≤1 2, and consider the following optimization problem over moment- matching mixing distributions ϵ= sup π1,π0W1(π1, π0)s.t.ml(π1) =ml(π0)forl≤L where the supremum is taken over all distributions supported on [1 2−δ,1 2+δ]. There exists a universal positive constant Csuch that ϵ≥C·δ L. Proof of lemma 18. By lemma 14, the following program sup π1,π0W1(π1, π0) s.t. ml(π1) =ml(π0) for l≤Landπ1, π0supp. on [1 2−δ,1 2+δ] is lower-bounded by δ·sup π1,π0W1(π1, π0) s.t. ml(π1) =ml(π0) for l≤Landπ1, π0supp. on [ −1,1] . Noting that W1(π1, π0)≥Eπ1[|X|]−Eπ0[|X|] since | · | ∈ Lip1[−1,1] , we can further lower-bound the supremum by δ·sup π1,π0Eπ1[|X|]−Eπ0[|X|] s.t. ml(π1) =ml(π0) for l≤Landπ1, π0supp. on [ −1,1] . The statement follows by lemma 4 and (24). Theorem 3. The critical
separation (6)is lower-bounded by ϵ∗(n, t)≳ 1 tfort≲logn 1√tlognforlogn≲t≲n logn 1√nfort≳n logn(25) Proof of theorem 3. The statement is equivalent to proving that there exists mixing distri- butions π1andπ0inDthat satisfy W1(π1, π0)≳ 1 tfort≲logn 1√tlognfor log n≲t≲n logn 1√nfort≳n logn 53 and no valid test can control the type II error by β. Recall that by (20), we want to find two mixing distribution that maximize W1(π1, π0) constrained to satisfy V(Pπ1, Pπ0)≲1 n. Small tregime By lemma 18, there exists mixing distributions π, π 0supported on [0 ,1] that match their first tmoments and satisfy W1(π1, π0)≥C·1 2t By lemma 3, their marginal measures are indistinguishable V(Pπ1, Pπ0)≤d(π1, π0) 2= 0 Therefore, by (20), it holds that ϵ∗(n, t)≥W1(π0, π1)≥C·1 2t Medium tregime By lemma 18, there exist mixing distributions π, π 0supported on [1 2− δ,1 2+δ] where 0 < δ≤1√ 8that match their first Lmoments and satisfy W1(π1, π0)≥C·δ L By lemma 3 with p=1 2, the following bound on their marginal measures follows V(Pπ1, Pπ0) ≤1 2·vuuttX m=L+1t m ·∆2 m·4m where ∆ m=Eπ1 X−1 2m −Eπ0 X−1 2m ≤vuuttX m=L+1t m (4δ2)m since∀x∈supp( π0)∪supp( π1)|x−1 2| ≤δ ≤t L+ 11/2 aL 2·r a·1−at−L 1−awhere a= 4δ2 ≤t L+ 11/2 (2δ)Lsince a1−at−L 1−a≤1⇐=a≤1 2⇐=δ≤1√ 8 ≲ 2δ·r t L!L Therefore, W1(π1, π0)≳δ LandV(Pπ1, Pπ0)≲ 2δ·r t L!L 54 Let the separation rate be r≍δ L, we have that V(Pπ1, Pπ0)≲ 2r√ tLL since we need that V(Pπ1, Pπ0)≲1 n, we choose Lso that the separation rate can be as large as possible r≲sup 1≤L≤t1 n1/L·√ Lt≍1√t·lognif log n≲t. Consequently W1(π1, π0)≳1√tlognandV(Pπ1, Pπ0)≲1 nif log n≲t. Finally, by (20), it holds that ϵ∗(n, t)≳1√tlognif log n≲t. Large tregime For this regime, we construct the lower-bound manually rather than using the solution of an optimization problem. Let π0=1 2·δ0+1 2·δ1andπ1=1 2−ϵ ·δ0+1 2+ϵ ·δ1 It holds that W1(π0, π1) =ϵ. 
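As a sanity check, the two-point construction above can be simulated; a short sketch with illustrative values of $t$ and $\epsilon$, confirming that both marginals charge only $\{0,t\}$ and that $\chi^2(P_{\pi_0},P_{\pi_1})=4\epsilon^2$, consistent with the computation that follows:

```python
import numpy as np
from math import comb

def binom_pmf(t, q):
    return np.array([comb(t, j) * q**j * (1 - q)**(t - j) for j in range(t + 1)])

t, eps = 5, 0.1  # illustrative values
# pi0 = (1/2) d_0 + (1/2) d_1 ;  pi1 = (1/2 - eps) d_0 + (1/2 + eps) d_1
P0 = 0.5 * binom_pmf(t, 0.0) + 0.5 * binom_pmf(t, 1.0)
P1 = (0.5 - eps) * binom_pmf(t, 0.0) + (0.5 + eps) * binom_pmf(t, 1.0)

# since B_{j,t}(0) and B_{j,t}(1) are point masses, all marginal mass sits on {0, t}
assert P0[0] == P0[t] == 0.5 and np.all(P0[1:t] == 0)

# chi-squared distance restricted to the support of P0 matches the closed form 4 eps^2
mask = P0 > 0
chi2 = np.sum((P1[mask] - P0[mask])**2 / P0[mask])
assert abs(chi2 - 4 * eps**2) < 1e-12
```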
Since Bj,t(0) = δj(0) and Bj,t(1) = δt(1) , the marginal measures of the data are Pπ0=1 2·δ0+1 2·δtandPπ1=1 2−ϵ ·δ0+1 2+ϵ ·δt 55 and their χ2distance is given by χ2 Pn π0, Pn π1 + 1 =nY i=1χ2(Pπ0, Pπ1) + 1 =nY i=11 2· (1−2ϵ)2+ (1 + 2 ϵ)2 ≤nY i=11 2[exp (−4ϵ) + exp (4 ϵ)] since 1 + x≤ex =nY i=1cosh (4 ϵ) ≤nY i=1exp 8ϵ2 since cosh x≤ex2/2 = exp 8nϵ2 Therefore, it holds that V Pn π0, Pn π1 ≤q χ2 Pn π0, Pn π1 ≤Cαforϵ=log(Cα+ 1) 81/2 ·1 n1/2, (69) and consequently, by (20), it follows that ϵ∗(n, t)≳1√n. H Local minimax critical separation with a reference effect Lemma 19 (Extension of lemma 5) .Consider the hypotheses given in (27). Under the null hypothesis, it follows that W1(eπ, π 0)≤Cδ·J(π0)√nwith probability 1−δ Additionally, if ϵ≥Cδ·J(π0)√n+1 n where Cδ=h 2·√ 3·(2 +δ−1/2)i2 56 , it follows that under the alternative hypothesis, it holds that W1(eπ, π 0)≥ϵwith probability 1−δ Lemma 5 is a corollary of the above lemma, since if the null hypothesis is a point mass π0=δp0. It follows that J(π0) = 0, and the statement of lemma 5 follows. Proof of
lemma 19. By Chebyshev's inequality, it follows that
\[
|W_1(\tilde\pi,\pi_0)-W_1(\pi,\pi_0)|\le W_1(\tilde\pi,\pi)
\le \mathbb{E}_\pi W_1(\tilde\pi,\pi)+|W_1(\tilde\pi,\pi)-\mathbb{E}_\pi W_1(\tilde\pi,\pi)|
\le \mathbb{E}_\pi W_1(\tilde\pi,\pi)+(1+\delta^{-1/2})\sqrt{\mathbb{V}_\pi W_1(\tilde\pi,\pi)}\quad\text{with probability } 1-\delta\,.
\]
By theorem 3.2 of Bobkov and Ledoux [2019], we have that
\[
\mathbb{E}_\pi W_1(\tilde\pi,\pi)\le \frac{J(\pi)}{\sqrt n}
\quad\text{and}\quad
\mathbb{V}_\pi W_1(\tilde\pi,\pi)\le \mathbb{E}_\pi\big[W_1^2(\tilde\pi,\pi)\big]\le \Big(\frac{J(\pi)}{\sqrt n}\Big)^2\,.
\]
Consequently, we have that
\[
|W_1(\tilde\pi,\pi_0)-W_1(\pi,\pi_0)|\le (2+\delta^{-1/2})\cdot\frac{J(\pi)}{\sqrt n}\quad\text{with probability } 1-\delta\,.
\]
Thus, under the null hypothesis, $W_1(\pi,\pi_0)=0$, it follows that
\[
W_1(\tilde\pi,\pi_0)\le (2+\delta^{-1/2})\cdot\frac{J(\pi_0)}{\sqrt n}\quad\text{with probability } 1-\delta\,.
\]
Under the alternative hypothesis, we have $W_1(\pi,\pi_0)\ge\epsilon$. Note that
\[
|\mu_{F_\pi(x)}-\mu_{F_{\pi_0}(x)}|\le |F_\pi(x)-F_{\pi_0}(x)|\cdot\big(1+F_\pi(x)+F_{\pi_0}(x)\big)\,,
\]
which implies that
\[
\frac{J(\pi)}{\sqrt n}\le \frac{J(\pi_0)}{\sqrt n}+\sqrt{\frac{3}{n}}\cdot\int |F_\pi(x)-F_{\pi_0}(x)|^{1/2}\,dx
\le \frac{J(\pi_0)}{\sqrt n}+\sqrt{3}\cdot\sqrt{\frac{W_1(\pi,\pi_0)}{n}}\quad\text{by Jensen's inequality}\,.
\]
Thus, we have that
\[
|W_1(\tilde\pi,\pi_0)-W_1(\pi,\pi_0)|\le (2+\delta^{-1/2})\cdot\Bigg[\frac{J(\pi_0)}{\sqrt n}+\sqrt{3}\cdot\sqrt{\frac{W_1(\pi,\pi_0)}{n}}\Bigg]\quad\text{with probability } 1-\delta
\]
and consequently $W_1(\tilde\pi,\pi_0)\ge \epsilon/2$ with probability $1-\delta$ under the following condition
\[
\frac{\epsilon}{2}\ge (2+\delta^{-1/2})\cdot\Bigg[\frac{J(\pi_0)}{\sqrt n}+\sqrt{3}\cdot\sqrt{\frac{W_1(\pi,\pi_0)}{n}}\Bigg]\,,
\]
which is implied by
\[
\epsilon\ge \big[2\sqrt{3}\,(2+\delta^{-1/2})\big]^2\cdot\Big[\frac{J(\pi_0)}{\sqrt n}\vee \frac{1}{n}\Big]\,.
\]

Theorem 4 (Homogeneity testing of random effects). For the testing problem (27), the critical separation (28) is given by
\[
\epsilon^*(n,t,\pi_0)\asymp
\begin{cases}
\dfrac{1}{n} & \text{if } \mu_{p_0}\lesssim \dfrac{t}{n^{3/2}}\\[2mm]
\dfrac{\mu_{p_0}^{1/2}}{t^{1/2}n^{1/4}} & \text{if } \mu_{p_0}\gtrsim \dfrac{t}{n^{3/2}}
\end{cases}
\quad\text{if } t\gtrsim\sqrt{n}
\]
and
\[
\epsilon^*(n,t,\pi_0)\asymp
\begin{cases}
\dfrac{1}{n} & \text{if } \mu_{p_0}\lesssim \dfrac{1}{n}\\[2mm]
\mu_{p_0} & \text{if } \dfrac{1}{n}\lesssim \mu_{p_0}\lesssim \dfrac{1}{tn^{1/2}}\\[2mm]
\dfrac{\mu_{p_0}^{1/2}}{t^{1/2}n^{1/4}} & \text{if } \dfrac{1}{tn^{1/2}}\lesssim \mu_{p_0}
\end{cases}
\quad\text{if } t\lesssim\sqrt{n} \tag{33}
\]
Furthermore, the test (32) attains them.

Proof of theorem 4. For the upper bound, lemma 20 indicates that the first-moment test controls the type II error whenever
\[
\epsilon\ge C\cdot r_1\quad\text{where } r_1=\mu_{p_0}\vee\frac{1}{n}\,,
\]
while lemma 21 indicates that the second-moment test controls the type II error whenever
\[
\epsilon\ge C\cdot r_2\quad\text{where } r_2=\frac{\mu_{p_0}^{1/2}}{t^{1/2}n^{1/4}}\vee\frac{1}{n^{1/2}}\,.
\]
Consequently, consider the test that rejects whenever one of them rejects:
\[
\psi^{\alpha}=\psi_1^{\alpha/2}\vee\psi_2^{\alpha/2}\,.
\]
By union bound, $\psi^{\alpha}$ controls the type I error by $\alpha$.
Furthermore, it controls the type II error whenever $\epsilon\ge C\cdot[r_1\vee r_2]$, which implies that
\[
\epsilon^*(n,t,\pi_0)\lesssim
\begin{cases}
\dfrac{1}{n} & \text{if } \mu_{p_0}\lesssim \dfrac{t}{n^{3/2}}\\[2mm]
\dfrac{\mu_{p_0}^{1/2}}{t^{1/2}n^{1/4}} & \text{if } \mu_{p_0}\gtrsim \dfrac{t}{n^{3/2}}
\end{cases}
\quad\text{if } t\gtrsim\sqrt{n} \tag{70}
\]
and
\[
\epsilon^*(n,t,\pi_0)\lesssim
\begin{cases}
\dfrac{1}{n} & \text{if } \mu_{p_0}\lesssim \dfrac{1}{n}\\[2mm]
\mu_{p_0} & \text{if } \dfrac{1}{n}\lesssim \mu_{p_0}\lesssim \dfrac{1}{tn^{1/2}}\\[2mm]
\dfrac{\mu_{p_0}^{1/2}}{t^{1/2}n^{1/4}} & \text{if } \dfrac{1}{tn^{1/2}}\lesssim \mu_{p_0}
\end{cases}
\quad\text{if } t\lesssim\sqrt{n} \tag{71}
\]
Lemmas 23, 24, and 25 provide the following lower bound:
\[
\epsilon^*(n,t,\pi_0)\gtrsim \gamma^*(n,t,\pi_0)
\quad\text{where}\quad
\gamma^*(n,t,\pi_0)=
\begin{cases}
\dfrac{1}{tn} & \text{if } \mu_{p_0}\lesssim \dfrac{1}{tn}\\[2mm]
\mu_{p_0} & \text{if } \dfrac{1}{tn}\lesssim \mu_{p_0}\lesssim \dfrac{1}{tn^{1/2}}\\[2mm]
\dfrac{\mu_{p_0}^{1/2}}{t^{1/2}n^{1/4}} & \text{if } \dfrac{1}{tn^{1/2}}\lesssim \mu_{p_0}
\end{cases}
\]
Additionally, lemma 6 shows that $\epsilon^*$ cannot be below $1/n$. Thus, we have that
\[
\epsilon^*(n,t,\pi_0)\gtrsim \frac{1}{n}\vee \gamma^*(n,t,\pi_0)\,,
\]
which matches, up to constants, the rates obtained in (70) and (71). Hence, the statement of the theorem follows.

H.1 Upper bounds on the critical separation

H.1.1 Mean test

Proposition 4. For $q\ge p\ge 1$ and $\pi=k^{-1}\sum_{i=1}^{k}\delta_{p_i}$, it holds that
\[
\int |p-p_0|^q\,d\pi(p)=\frac{1}{k}\sum_{i=1}^{k}|p_i-p_0|^q
\le k^{q/p-1}\cdot\Big(\frac{1}{k}\sum_{i=1}^{k}|p_i-p_0|^p\Big)^{q/p}
=k^{q/p-1}\cdot\Big(\int |p-p_0|^p\,d\pi(p)\Big)^{q/p}
\]

Lemma 20. For testing problem (27), the first-moment test (30) controls the type I error. Furthermore, there exists a positive constant $C$ such that the test controls the type II error by $\beta$ whenever
\[
\epsilon(n,t)\ge C\cdot\Big(\mu_{p_0}\vee\frac{1}{n}\Big)
\quad\text{where } C=4\Big(\frac{1}{\beta}\vee\frac{1}{\alpha}\Big)^{1/4}\,.
\]

Proof of lemma 20. Let $d_l=\int (p-p_0)^l\,d\pi(p)$. Remark 1 states that
\[
\mathbb{E}_{P_\pi}[T_1]=d_1
\quad\text{and}\quad
n\cdot\mathbb{V}_{P_\pi}[T_1]\le \frac{\mu_{p_0}}{t}+\frac{d_1}{t}+\Big(1-\frac{1}{t}\Big)\cdot d_2-d_1^2\,.
\]
Lemma 16 indicates that we need to find $\epsilon$ such that the following condition is satisfied:
\[
\sqrt{\frac{\mathbb{V}_{P_\pi}[T_1]}{\beta}}+\sqrt{\frac{\mathbb{V}_{P_{\pi_0}}[T_1]}{\alpha}}\le \mathbb{E}_{P_\pi}[T_1]-\mathbb{E}_{P_{\pi_0}}[T_1]\,. \tag{72}
\]
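Condition (72) for the first-moment statistic can also be probed by simulation. A Monte Carlo sketch, assuming the normalization $\tilde K_1(X,p_0,t)=X/t-p_0$ (consistent with the expectations in Remark 1) and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, p0 = 2000, 20, 0.3
reps = 400

def T1(X, p0, t):
    # first-moment statistic, with the normalization K1~(X, p0, t) = X/t - p0
    return np.mean(X / t - p0)

# under the null, all random effects equal p0
null_stats = [T1(rng.binomial(t, p0, size=n), p0, t) for _ in range(reps)]
# under a fixed-effects alternative shifted by d1
d1 = 0.05
alt_stats = [T1(rng.binomial(t, p0 + d1, size=n), p0, t) for _ in range(reps)]

# E[T1] = d1 under the alternative and 0 under the null (Remark 1)
assert abs(np.mean(null_stats)) < 0.005
assert abs(np.mean(alt_stats) - d1) < 0.005
# Var[T1] = mu_{p0} / (t n) under the null
assert abs(np.var(null_stats) - p0 * (1 - p0) / (t * n)) < 1e-5
```

The separation between the null and alternative sampling distributions of $T_1$ is what the quantile threshold in (30) exploits.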
Note that s VPπ[T1] β+r VPπ0[T1] α≤C n1/2· 2p0 t+1 td1+ (1−1 t)·d2−d2 11/2 , where C=2√ 2h 1 β∨1 αi1/2 , is implied by d1≥C1/2·" 21/2·p1/2 0 n1/2t1/2∨1 nt∨d1/2 2 n1/2# . (73) Recalling that d1≥ϵ≥2p0, we obtain the following simple upper-bound d2≤Z |p−p0|dπ(p)≤m1(π) +p0=d1+ 2p0≤2d1. Combining it with (73), we get that (72) is implied by ϵ≥(2C)1/2·" p1/2 0 n1/2t1/2∨1 n# ∨4p0. Thus ϵ≥C· p0∨1 n where C= 41 β∨1 α1/4 sufices for (72) to hold. 60 H.1.2 Debiased ℓ2test Lemma 21. For the homogeneity testing problem (27), the second-moment test (31)controls type I error. Furthermore, there exists universal positive constant Csuch that the test controls type II error by βwhenever ϵ(n, t)≥C β1/2·" µ1/2 p0 t1/2n1/4∨1 n1/2# Proof of lemma 21. Letdl=R (p−p0)ldπ(p). Remark 2 states that EPπ[T2] =d2 n·VPπ[T2]≲p2 0 t2+p0 t2d1+1 t2d2+p0 td2+1 td3+ (1−1 t)·d4−d2 2 Lemma 16 indicates that we need to find ϵsuch that the following condition is satisfied s VPπ[T2] β+r VPπ0[T2] α≤EPπ[T2]−EPπ0[T2] (74) Noting that s VPπ[T2] β+r VPπ0[T2] α≤C n1/2· 2·p2 0 t2+p0 t2d1+1 t2d2+p0 td2+1 td3+ (1−1 t)·d4−d2 21/2 where C=2√ 2h 1 β∨1 αi1/2 , it follows that (74) is implied by d2≳p0 tn1/2∨p1/2 0d1/2 1 tn1/2∨1 t2n∨p0 tn∨d1/2 3 t1/2n1/2∨d1/2 4 n1/2 By Jensen’s inequality, it holds that d1≤d1/2 2. Therefore, d2≳p0 tn1/2∨p2/3 0 t4/3n2/3∨1 t2n∨d1/2 3 t1/2n1/2∨d1/2 4 n1/2 ≍p0 tn1/2∨1 t2n∨d1/2 3 t1/2n1/2∨d1/2 4 n1/2 since p2/3 0 t4/3n2/3≤p0 tn1/2∨1 t2n. 61 If no assumptions on πare made, using d3≤d2andd4≤d2, we get that (74) holds whenever d2≳p0 tn1/2∨1 t2n∨1 tn∨1 n Recalling that d2≥ϵ2we get that ϵ≳p1/2 0 n1/4t1/2∨1 n1/2 implies (74). H.2 Lower bound for random effects Lemma 22. Consider event Asuch that πn 1(A)≤πn 0(A) and denote the resulting conditional distributions by Pπ0|A=ePπ0andPπ1|A=ePπ1 It follows that local minimax risk is lower-bounded by R∗(ϵ, π0)≥πn 1(A)−πn 0(A)·V(ePπ0,ePπ1)−α Proof of lemma 22. 
First, recall that the local minimax risk can be lower-bounded as follows by definition:
\[
R^*(\epsilon,\pi_0)=\inf_{\psi\in\Psi(\pi_0)}\ \sup_{\pi\,:\,W_1(\pi,\pi_0)\ge\epsilon}P^n_\pi(\psi(X)=0)
\ge \inf_{\psi\in\Psi(\pi_0)}\ \sup_{\pi\,:\,W_1(\pi,\pi_0)\ge\epsilon}P^n_{\pi_0}(\psi(X)=1)+P^n_\pi(\psi(X)=0)-\alpha
\quad\text{since } \psi\in\Psi(\pi_0)\,. \tag{75}
\]
For any mixing distributions $\pi_0$ and $\pi$, it follows that
\[
P^n_{\pi_0}(\psi(X)=1)+P^n_\pi(\psi(X)=0)
\ge P^n_{\pi_0}(\psi(X)=1\mid A)\cdot\pi_0^n(A)+P^n_\pi(\psi(X)=0\mid A)\cdot\pi^n(A)\,.
\]
Since $\pi^n(A)\le\pi_0^n(A)$, it holds that
\begin{align*}
P^n_{\pi_0}(\psi(X)=1)+P^n_\pi(\psi(X)=0)
&\ge \pi^n(A)\cdot\big[P^n_{\pi_0}(\psi(X)=1\mid A)+P^n_\pi(\psi(X)=0\mid A)\big]\\
&=\pi^n(A)-\pi^n(A)\cdot\big[P^n_{\pi_0}(\psi(X)=0\mid A)-P^n_\pi(\psi(X)=0\mid A)\big]\\
&\ge \pi^n(A)-\pi_0^n(A)\cdot\sup_{\psi}\big|P^n_\pi(\psi(X)=0\mid A)-P^n_{\pi_0}(\psi(X)=0\mid A)\big|\\
&\ge \pi^n(A)-\pi_0^n(A)\cdot V(\tilde P_{\pi_0},\tilde P_{\pi})\,. \tag{76}
\end{align*}
Bounding (75) from below by (76) proves the statement.

Lemma 6. There exists a universal positive constant $C$ such that the critical separation for testing random effects (27) is lower-bounded by $C/n$.

Proof of lemma 6. Let $\pi=(1-C_\alpha/n)\cdot\delta_{p_0}+(C_\alpha/n)\cdot\delta_1$, and consider the event $A=\bigcap_{i=1}^{n}\{p_i=p_0\}$. Note that it always happens under the null mixing distribution $\pi_0=\delta_{p_0}$, and that it happens with constant probability under the mixing distribution $\pi$:
\[
\Big(1-\frac{C_\alpha}{n}\Big)^n=\pi^n(A)\le \pi_0^n(A)=1\,.
\]
Furthermore, conditioned on the event, the marginal distribution of the data is identical under both mixing distributions:
\[
P_{\pi_0}\mid A=P_{\pi_0}\quad\text{and}\quad P_{\pi}\mid A=P_{\pi_0}\,.
\]
Consequently, by lemma 22, the local minimax risk is lower-bounded by a constant:
\[
R^*(\epsilon,\pi_0)\ge \pi^n(A)-\pi_0^n(A)\cdot V(P_{\pi_0},P_{\pi_0})-\alpha
=\Big(1-\frac{C_\alpha}{n}\Big)^n-\alpha>\beta\,.
\]
Since the distance between $\pi$ and $\pi_0$ satisfies $W_1(\pi,\pi_0)\ge \frac{C_\alpha}{2n}$, the statement of the lemma follows.

H.3 Lower bounds for fixed effects

H.3.1 Small $p_0$ regime

Lemma 23. Let
\[
0\le p_0\lesssim \frac{1}{tn}\quad\text{and}\quad \epsilon\asymp\frac{1}{tn}\,,
\]
and consider the following mixing distributions $\pi_0=\delta_{p_0}$ and $\pi_1=\delta_{p_0+\epsilon}$. It follows that no test can distinguish them based on a sample of size $n$. Consequently,
\[
\epsilon^*(n,t,\pi_0)\ge W_1(\pi,\pi_0)\asymp\frac{1}{tn}\,.
\]

Proof of lemma 23. Let $P_0=P_{\delta_0}$; then it follows that
\[
V(P_{\pi_0},P_{\pi})\le V(P_{\pi_0},P_0)+V(P_{\pi},P_0)=1-(1-p_0)^t+1-(1-p_0-\epsilon)^t\,.
\]
Let $\xi_1\in[1-p_0,1]$ and $\xi_2\in[1-p_0-\epsilon,1]$; by Taylor's expansion, it follows that
\[
V(P_{\pi_0},P_{\pi})\le p_0\,t\,\xi_1^{t-1}+(p_0+\epsilon)\,t\,\xi_2^{t-1}\le 2p_0 t+\epsilon t\,. \tag{77}
\]
Therefore,
\[
V(P^n_{\pi_0},P^n_{\pi})\le n\cdot V(P_{\pi_0},P_{\pi})
\lesssim p_0 t n+\epsilon t n\quad\text{by (77)}
\lesssim 1\quad\text{since } p_0\lesssim\frac{1}{tn}\text{ and }\epsilon\lesssim\frac{1}{tn}\,,
\]
where the last constant can be made arbitrarily small for large enough $n$ or $t$. Thus, by lemma 2, we have that
\[
\epsilon^*(n,t,\pi_0)\ge W_1(\pi,\pi_0)\asymp\epsilon\asymp\frac{1}{tn}\,.
\]

H.3.2 Medium $p_0$ regime

Lemma 24. Let
\[
0\le p_0\lesssim \frac{1}{t\sqrt n}\,,
\]
and consider the following mixing distributions $\pi_0=\delta_{p_0}$ and $\pi_1=\frac12\,[\delta_{2p_0}+\delta_0]$. It follows that no test can distinguish them based on a sample of size $n$. Consequently,
\[
\epsilon^*(n,t,\pi_0)\ge W_1(\pi,\pi_0)=p_0\,.
\]

Proof of lemma 24. Both distributions share the first moment; if they also shared the second moment, they would be identical. Hence, we let them differ in their second moment. Therefore, assume $t\ge 2$ so that the binomial density preserves the second-moment information. Finally, note that their separation is given by
\[
W_1(\pi_1,\pi_0)=2\Big(1-\frac12\Big)p_0=p_0\,.
\]
In the following, we show that $\chi^2(P_{\pi_0},P_{\pi_1})\lesssim \frac{1}{n}$.
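The tensorization step invoked below, $\chi^2(P^{\otimes n},Q^{\otimes n})=(1+\chi^2(P,Q))^n-1$, can be checked directly on small finite distributions; a sketch with illustrative $P$ and $Q$:

```python
import numpy as np
from itertools import product

# two illustrative distributions on {0, 1, 2}
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])

def chi2(q, p):
    # chi^2(p, q) = sum_x (q(x) - p(x))^2 / p(x)
    return np.sum((q - p)**2 / p)

n = 3
# build the product measures on {0, 1, 2}^n explicitly
Pn = np.array([np.prod([P[x] for x in xs]) for xs in product(range(3), repeat=n)])
Qn = np.array([np.prod([Q[x] for x in xs]) for xs in product(range(3), repeat=n)])

# tensorization: chi^2(P^n, Q^n) = (1 + chi^2(P, Q))^n - 1
assert abs(chi2(Qn, Pn) - ((1 + chi2(Q, P))**n - 1)) < 1e-12
```

In particular, a per-sample chi-squared distance of order $1/n$ keeps the $n$-fold product distance bounded by a constant.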
Consequently, by the tensorization property of the $\chi^2$ distance, the distance between the product marginal measures can be bounded by an arbitrary constant for $n$ large enough:
\[
\chi^2\big(P^n_{\pi_0},P^n_{\pi_1}\big)=\big(1+\chi^2(P_{\pi_0},P_{\pi_1})\big)^n-1
\le \exp\big(n\cdot\chi^2(P_{\pi_0},P_{\pi_1})\big)-1
\lesssim n\cdot\chi^2(P_{\pi_0},P_{\pi_1})\lesssim 1\,.
\]
By (34), it follows that
\[
\epsilon^*(n,t,\pi_0)\gtrsim p_0\quad\text{for } p_0\lesssim\frac{1}{\sqrt n\,t}\,.
\]
We finish the proof with a derivation that asserts that, in fact, $\chi^2(P_{\pi_0},P_{\pi_1})\lesssim \frac{1}{n}$. Note the following upper bound:
\begin{align*}
\chi^2(P_{\pi_0},P_{\pi_1})
&=\sum_{m=1}^{t}\binom{t}{m}\cdot\frac{\big[\mathbb{E}_{\pi_1}[X-p_0]^m\big]^2}{(p_0(1-p_0))^m} && \text{by lemma 8}\\
&=\sum_{m=2}^{t}\binom{t}{m}\cdot\frac{\big[\mathbb{E}_{\pi_1}[X-p_0]^m\big]^2}{(p_0(1-p_0))^m} && \text{since they share the first moment}\\
&=\sum_{\substack{m=2\\ m\text{ even}}}^{t}\binom{t}{m}\cdot a^m\quad\text{where } a=\frac{p_0}{1-p_0}
&& \text{since } \mathbb{E}_{\pi_1}[X-p_0]^m=\tfrac12\big[p_0^m+(-p_0)^m\big]\\
&=\frac12\big[(1+a)^t+(1-a)^t\big]-1\ \ge 0 && \text{since } t\ge 2\\
&\le \cosh(ta)-1
\le \frac{(ta)^2}{2}\cdot e^{ta}
\lesssim t^2p_0^2
\lesssim \frac{1}{n} && \text{since } p_0\lesssim\frac{1}{t\sqrt n}\,.
\end{align*}

H.3.3 Large $p_0$ regime

Lemma 25. Let
\[
\frac{1}{n^{1/2}t}\lesssim p_0\lesssim 1\quad\text{and}\quad \epsilon\asymp\frac{p_0^{1/2}}{t^{1/2}n^{1/4}}\,,
\]
and consider the following mixing distributions $\pi_0=\delta_{p_0}$ and $\pi_1=\frac12\,[\delta_{p_0+\epsilon}+\delta_{p_0-\epsilon}]$. It follows that no test can distinguish them based on a sample of size $n$. Consequently,
\[
\epsilon^*(n,t,\pi_0)\ge W_1(\pi,\pi_0)=\epsilon\asymp\frac{p_0^{1/2}}{t^{1/2}n^{1/4}}\,.
\]

Proof of lemma 25. Without loss of generality, assume that $p_0\le\frac12$; then it holds that $0\lesssim p_0-\epsilon\le p_0+\epsilon\lesssim 1$. We proceed to bound the chi-squared distance between the marginal measures:
\begin{align*}
\chi^2(P_{\pi_0},P_{\pi_1})
&=\sum_{m=1}^{t}\binom{t}{m}\cdot\frac{\big[\mathbb{E}_{\pi_1}[X-p_0]^m\big]^2}{(p_0(1-p_0))^m} && \text{by lemma 8}\\
&=\sum_{m=2}^{t}\binom{t}{m}\cdot\frac{\Big[\frac{\epsilon^m+(-\epsilon)^m}{2}\Big]^2}{(p_0(1-p_0))^m} && \text{since they share the first moment}\\
&=-1+\frac12\Big(1-\frac{\epsilon^2}{p_0(1-p_0)}\Big)^t+\frac12\Big(1+\frac{\epsilon^2}{p_0(1-p_0)}\Big)^t\\
&\le -1+\exp\Big(\frac{t^2\epsilon^4}{2p_0^2(1-p_0)^2}\Big)\,.
\end{align*}
It follows that
\[
\chi^2\big(P^n_{\pi_0},P^n_{\pi_1}\big)\le \exp\Big(\frac{n\,t^2\epsilon^4}{2p_0^2(1-p_0)^2}\Big)-1\lesssim 1
\]
and, by (34), it holds
that ϵ∗(n, t, π 0)≳p1/2 0 t1/2n1/4for1 n1/2t≲p0. I Minimax critical separation for homogeneity testing without a reference effect I.1 Upper bounds on the critical separation I.1.1 Conservative test Lemma 9. For testing problem (42), the test (45) controls type I error by α. Furthermore, there exists a universal positive constant Csuch that the type II error of the test is bounded 66 byβwhenever ϵ(n, t)≥C· 1 n1/2for√n≲t 1 t1/2n1/4otherwise(46) Proof of lemma 9. Henceforth, let T(X) =bV(X) to simplify the notation. By lemma 16, it holds that the test controls type I and II errors if the following condition is satisfied sup π∈SEPπ[T] +r VPπ[T] α≤ inf π:W1(π,S)≥ϵEPπ[T]−s VPπ[T] β. (78) Bound under null hypothesis To upper-bound the LHS of (78), note that by (44) and lemma 26, we have that sup π0∈SEPπ0[T] +r VPπ0[T] α= sup p0∈[0,1]vuutµ2p0 t(t−1)·n−1/t (n 2) α≤1√ 2α·1 tn1/2fort≥2 and n≥2 (79) Bound under alternative hypothesis To lower-bound the RHS of (78), recall that for any mixing distribution πby (44) it holds that EPπ[T] =V(π) By lemma 26, it follows that EPπ[T]/2≥s VPπ0[T] β(80) if V(π)≳1 n1/2·m1(π) t+V(π) +V1/2(π) which is implied by V(π)≳m1/2 1(π) tn1/2+1 n(81) In particular, by (41), if πsatisfies W1(π, S)≥ϵ, then it must satisfy that V(π)≥ϵ2. Consequently, (81) is implied by ϵ≳1 t1/2n1/4+1 n1/2forπs.t.W1(π, S)≥ϵ where we used the fact that m1(π)≤1 for any mixing distribution π. Thus, we can state the following lower-bound for the RHS of (78) inf π:W1(π,S)≥ϵEPπ[T]−s VPπ[T] β≳ϵ2forϵ≳1 t1/2n1/4+1 n1/2. (82) 67 Finally, using (79) and (82), we have that (78) is satisfied if ϵ≳1 t1/2n1/4+1 n1/2. Lemma 26. For the estimator bV, defined in (43), the following upper-bounds on its variance hold. Let πbe a point mass mixing distribution, π=δp0withp0∈[0,1], it follows that VPπ[T] =µ2 p0 t(t−1)·n−1/t n 2 ∀t≥2 For a mixing distribution πsuch that its mean m1=Ep∼π[p]satisfies m1≤1 2, it holds that VPπ[T]≲1 n·m2 1 t2+V2(π) +V(π) ∀t≥3 Proof of lemma 26. 
Henceforth, we let T(X) =bV(X) to simplify the notation. Since Tis a U-statistic with a kernel of degree 2, it follows by theorem 3 of [Lee, 2019, sec. 1.3] that its variance is given by VPπ[T] =2(n−2) n 2·σ2 1+1 n 2·σ2 2 where σ2 1=V[E[h(X1, X2) 2|X1=x1]] =1 4·VX1 2+m2(π)−2X1 tm1(π) =1 4·V" X1 2 t 2−2X1 tm1(π)# andσ2 2=1 4·V[h(X1, X2)] Variance for π∈SLetπ∈S, there exists p0∈[0,1] such that π=δp0. Note that σ2 1=1 2·µ2 p0 (t−1)tandσ2 2=µ2 p0 (t−1)t·(2−1/t) Thus, we have that VPπ[T] =µ2 p0 t(t−1)·n−1/t n 2 68 Variance for general mixing distribution We use the fact that by theorem 4 of Lee [2019], it holds that σ2 1≤σ2 2 2 Consequently, VPπ[T]≤ (n−2) n 2+1 n 2! ·σ2 2=2 n·σ2 2 (83) By the law of total variance, we have that σ2 2=VPπ[h(X1, X2)] =A+B where A=Ep,qiid∼πVX1|p∼Bin(t,p),X2|q∼Bin(t,p)[h(X1, X2)] B=Vp,qiid∼πEX1|p∼Bin(t,p),X2|q∼Bin(t,p)[h(X1, X2)] =Vp,qiid∼π[(p−q)2] Henceforth, we use mlto denote ml(π) =Ep∼π pl in order to simplify the notation. Term Areduces to A=4(m2−2m3+m4) t−1+4(m1−m2)2 t2−4(m2+ 2m1m2+ 2m2 2−4(1 + m1)m3+ 3m4) t Letdl=R (p−m1(π))ldπ(p), then it can be shown that m2=d2+m2 1 m3=d3+ 3m1m2−2m3 1 m4=d4+ 4m1m3+ 3m4 1−6m2 1m2 We get that A= 4m2
https://arxiv.org/abs/2504.13977v1
1 t22t−1 t−1+ 4d2 (t−1)t+ 8d3 tt−2 t−1(1−2m1) + 4m4 1 t22t−1 t−1+ 8m1d2 (t−1)t2(1 + 2( t−3)t) −4d4 (t−1)t(2t−3)−4d2 2 t2(2t−1) −8m2 1d2 (t−1)t2(1 + 2( t−3)t)−8m3 1 t2(t−1)(2t−1) ≲m2 1 t2+d2 t2+d3 t+m4 1 t2+m1d2 tsince m1≤1 2andt≥3 ≲m2 1 t2+d2 t2+d3 t+m1d2 tsince m1≤1 (84) 69 Regarding B, we have that B=Ep,qiid∼π(p−q)4−(2V1[π])2 = 2m4(π)−8m1(π)m3(π) + 6m2 2(π)−(2d2)2 = 2(3 d2 2+d4)−(2d2)2 = 2(d2 2+d4) (85) Thus, plugging (84) and (85) into (83), we get VPπ[T]≲1 n·m2 1 t2+d2 t2+d3 t+m1·d2 t+d2 2+d4 ≲1 n·m2 1 t2+V(π) t2+d2 t+m1·V(π) t+V2(π) +V(π) ≲1 n·m2 1 t2+V2(π) +V(π) where in the penultimate inequality, we use the facts that V(π) =Vp∼π[p] =d2,d3≤d2and d2≤d4. I.1.2 Non-conservative test Theorem 5. For testing problem (39), the debiased Cochran’s χ2test(49) controls type I error by α. Furthermore, for γ≥c/nwhere cis a universal positive constant, there exists a universal positive constant Csuch that the type II error is bounded by βwhenever (46) is satisfied. Proof of theorem 5. We consider the case where t≥2 and m1(π)<1 2. If the second as- sumption does not follow, replace YibyeYi=t−Yiand proceed, and do the same with Xi. Henceforth, let R(X, Y) =T(X) bη(Y),T(X) =bV(X) ,bη(Y) = max( bµ(Y), γ) and η(π) = max( µm1(π), γ) to simplify the notation. By lemma 16, it holds that the test controls type I and II errors if the following condition is satisfied sup π∈SEPπ[R(X, Y)] +r VPπ[R(X, Y)] α≤ inf π:W1(π,S)≥ϵEPπ[R(X, Y)]−s VPπ[R(X, Y)] β(86) Recall that T(X) is independent from bµ(Y). Therefore, EPπ[T(X, Y)] =EPπ[T(X)]·EPπ bη−1(Y) (87) 70 Bound under null hypothesis To bound the LHS of (86) note that for π0∈S, it holds that EPπ0[R(X, Y)] = 0 since EPπ0[T(X)] = 0 and bη−1(Y)≥γ−1with probability one. 
Furthermore, by lemma 27 it holds that EPπ bη−2(Y) ≍η−2forγ≳n−1 , together with (87) it implies that VPπ0[R(X, Y)] =EPπ0 T2(X) ·EPπ bη−2(Y) ≍EPπ0[T2(X)] η2(π) Letπ0=δp0where p0∈[0,1], then using lemma 26, we get the following upper-bound VPπ0[R(X, Y)]≍1 t(t−1)·n−1/t n 2·µp0 max ( µp0, γ)2 ≲1 t2n(88) Consequently, we can upper-bound the LHS of (86) by sup π0∈SEPπ0[R(X, Y)] +r VPπ0[R(X, Y)] α≲1 tn1/2fort≥2 and n≥2 Bound under alternative hypothesis To lower-bound the RHS of (86), consider πs.t.W1(π, S)≥ ϵ. By lemma 27 EPπ[R(X, Y)]≥C−1 0·EPπ[T(X)] η(π)andVPπ[R(X, Y)]≤C2+C2 1 η2(π)·VPπ[T(X)] Since η(π)>0, it holds that EPπ[R(X, Y)]≥p VPπ[R(X, Y)]/βis implied by EPπ[T(X)]≳p VPπ[T(X)] which matches up-to-constants (80), following the argument thereafter, we have that inf π:W1(π,S)≥ϵEPπ[T]−s VPπ[T] β≳ϵ2forϵ≳1 t1/2n1/4+1 n1/2(89) Finally, using (88) and (89), we have that (86) is satisfied if ϵ≳1 t1/2n1/4+1 n1/2. Lemma 27. For any distribution πsupported on [0,1]. Let η(π) = max( µm1(π), γ)andbη(Y) = max( bµ(Y), γ) 71 There exists universal positive constants c0,C0,C1andC2such that E[bη(Y)]≤C0·η ,C−1 0·η−1≤E[bη−1(Y)]≤C1·η−1 andC−1 0·η−2≤E[bη−2(Y)]≤C2·η−2 forγ > c 0/n. Proof of lemma 27. Expectation and variance of bµWe consider the case where t≥2 and m1(π)<1 2. If the second assumption does not follow, replace YibyeYi=t−Yiand proceed. It can be checked that the statistic is an unbiased estimator of µm1(π) EPπ[bµ] =EPπ[h(X1, X2)] = 2 · m1(π)−m2 1(π) =µm1(π) For the variance, recall that bµis a U-statistic.
Therefore, analogously to (83), it follows that VPπ[bµ]≤1 2n·VX,Yiid∼Pπh eh(X, Y)i We split the analysis of the variance in two terms VPπ[h(X1, X2)] =A+B where A=Ep,qiid∼πVX1|p∼Bin(t,p),X2|q∼Bin(t,p)h eh(X1, X2)i B=Vp,qiid∼πEX1|p∼Bin(t,p),X2|q∼Bin(t,p)h eh(X1, X2)i By direct computation, the first term can be bounded by A=2 t2·(m1(π)−m2(π))·(t−2·(m1(π)−m2(π))·(2t−1))≤2 t·µm1(π) For the second term, we have that B=Vp,qiid∼π (p+q−2·p·q)2 ≤8m1(π)≤16µm1(π) where we used the fact that m2(π)≤m1(π)≤2µm1(π)since m1(π)≤1/2. Consequently, VPπ[h(X1, X2)]≤c′·µm1(π)andVPπ[bµ]≤c′·µm1(π) n 72 Bernstein bounds for bµNoting that ehis bounded, by Hoeffding [1963], see also equation 1.4. of Arcones [1995], the following Bernstein’s inequality for the U-statistics bUholds Pπ bµ−µm1(π)> δ ≤exp −c′·δ2·n µm1(π)+δ where c′is a positive constant. Thus, for any c >0, it follows that Pπ bη <η b =Pπ bµ∨γ <η b ≤Pπ bµ <η b andγ <η botherwise the prob. is zero =Pπ bµ <µm1(π) b since γ <η b⇐⇒ µm1(π)> γ·bforb≥1 ≤Pπ |bµ−µm1(π)| ≥µm1(π)·(1−1/b) ≤2 exp −c′·µm1(π)·n·2 3 forb≥2 ≤2 exp ( −b·c) for γ >2 3·c c′·1 n Analogously, Pπ(bη > bη ) =Pπ(bµ∨γ > bη ) ≤Pπ(bη > bη ) =Pπ(bµ > γ ∨bη) +Pπ(γ >bµ∨bη) ≤Pπ(bµ > bη ) for b >1 since η≥γ ≤Pπ bµ−µm1(π)> cγ where cγ=bη−µm1(π)≥(b−1)·( µm1(π)ifµm1(π)≥γ γ ifµm1(π)< γ≥(b−1)·γ 73 Ifµm1(π)≥γ, Pπ(bη > bη )≤Pπ bµ−µm1(π)>(b−1)µm1(π) ≤exp −c′·(b−1)2·µm1(π)·n b ≤exp −c′ 2·b·µm1(π)·n forb≥2 ≤exp(−b·c) for γ >2 c′·c n Otherwise, if µm1(π)< γ, it follows that Pπ(bη > bη )≤Pπ bµ−µm1(π)>(b−1)γ ≤exp −c′ 2·γ·b·n forb≥2 ≤exp(−b·c) for γ >2 c′·c n Bounds of bη’s expectations The proof follows [Canonne et al., 2022, sec. 2.2]. 
Letc0be any constant such that c0≥2 c′·c∨2 3·c c′ then the following bounds hold EPπ bη−1(Y) =η−1·Z∞ 0Pπ bη <η b db ≤η−1· 2 +Z∞ 2Pπ bη <η b db ≤η−1·C1where C1= 2 1 +exp(−2c) c forγ >c0 n EPπ bη−2(Y) = 2η−2·Z∞ 0Pπ bη2<η2 b2 ·b db = 2η−2·Z∞ 0Pπ bη <η b ·b db ≤2η−2· 2 +Z∞ 2Pπ bη <η b ·b db ≤η−2·C2where C2= 4 1 +(2c+ 1) c2·exp(−2·c) forγ >c0 n 74 EPπ[bη(Y)] =η·Z∞ 0Pπ(bη > bη )db ≤η· 2 +Z∞ 2Pπ(bη > bη )db ≤η·C0where C0= 2 +exp(−2c) c forγ >c0 n By Jensen’s inequality, it holds that EPπ bη−1(Y) ≥(EPπ[bη(Y)])−1≥C−1 0·η−1 and EPπ bη−2(Y) ≥(EPπ[bη(Y)])−2≥C−1 0·η−2. I.2 Lower bounds on the critical separation Lemma 10. The critical separation (40) is lower-bounded by ϵ∗(n, t)≳max1√n,1 t1/2n1/4 Proof of lemma 10. Fort≲√n, we can rely on the lower-bound construction for fixed effects. Let π0=δ0.5, and use the lower-bound construction in lemma 25. The claim for this case follows. Fort≳√nWe do a similar lower-bound construction to (69). Let Γ0=1 2·δ0+1 2·δ1andπ1=1 2−γ ·δ0+1 2+γ ·δ1 We compute their χ2distance by the Ingster–Suslina χ2-method [Ingster and Suslina, 2003], it follows that χ2 Ep∼Γ0h Pn δpi , Pn π1 =Ep,q∼Γ0" tX j=0Pδp(j)·Pδq(j) Pπ1(j)!n# 75 Noting that Pδ0=δ0andPδ1=δt the distance simplifies to χ2 Ep∼Γ0h Pn δpi , Pn π1 =1 4·1 Pπ1(0)n +1 4·1 Pπ1(t)n Noting that Pπ1=1 2−γ ·δ0+1 2+γ ·δt , we get that χ2 Ep∼Γ0h Pn δpi , Pn π1 =1 4·2n·[(1−2γ)n+ (1 + 2 γ)n] ≤1 4·2n·[exp{−2nγ}+ exp{2nγ}] =cosh 2 nγ 2n+1 ≤exp{2n2γ2} 2n+1 Thus, we have that
if we choose γ such that
$$\gamma = \sqrt{\frac{(n+1)\log 2 + \log C_\alpha^2}{2n^2}},$$
we control the χ² distance between the distributions by an arbitrary constant:
$$\chi^2\left(\mathbb{E}_{p\sim\Gamma_0}\left[P^n_{\delta_p}\right],\, P^n_{\pi_1}\right) \leq C_\alpha^2.$$
Consequently, by lemma 17, it follows that $R^*(\epsilon) > \beta$ whenever $\epsilon \leq W_1(\pi_1, S)$, so that $\pi_1$ is a valid alternative. Note that
$$W_1(\pi_1, S) = \min_{p_0\in[0,1]} W_1(\pi_1, \delta_{p_0}) = \min_{p_0\in[0,1]}\left(\tfrac{1}{2} + \gamma - 2\gamma p_0\right) = \tfrac{1}{2} - \gamma \geq \tfrac{\gamma}{2} \quad\text{for } \gamma < \tfrac{1}{4},$$
and the above condition is satisfied for $n$ large enough. Consequently, we have that $R^*(\epsilon) > \beta$ for $\epsilon \gtrsim \tfrac{1}{\sqrt{n}}$.

J Repository with source code to reproduce simulations and applications

The code to reproduce the experiments can be accessed at https://github.com/lkania/TestingRandomEffectsForBinomialData

K Simulations

K.1 Testing procedures for goodness-of-fit testing

Recall that $n_j$ is the observed fingerprint, see (12), and $b_j(\pi) = b_{j,t}(\pi)$ is the expected fingerprint, see (3). It holds that
$$n_j \sim \mathrm{Bin}(n, b_{j,t}(\pi)), \qquad \mathbb{E}\left[\tfrac{n_j}{n}\right] = b_{j,t}(\pi) \quad\text{and}\quad \mathbb{V}\left[\tfrac{n_j}{n}\right] = \frac{\mu_{b_{j,t}(\pi)}}{n}.$$
In the following, we define some test statistics that appear in the simulations but are not defined in the main text.

Modified Pearson's χ² test.
$$T = \frac{1}{t+1}\sum_{j=0}^{t} \frac{\left(\frac{n_j}{n} - b_j(\pi_0)\right)^2}{\max\left(\mu_{b_j(\pi_0)}, \gamma\right)} \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$
The original test uses $\gamma = 0$; for our simulations, we use $\gamma = 10^{-10}$.

Modified likelihood ratio test (LRT).
$$T = \frac{1}{t+1}\sum_{j=0}^{t} \frac{n_j}{n}\cdot\log\frac{\max\left(n_j/n,\, \gamma\right)}{\max\left(b_j(\pi_0),\, \gamma\right)} \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$
The original test uses $\gamma = 0$; we use $\gamma = 10^{-10}$.

Modified maximum likelihood estimation (MLE) test. The Maximum Likelihood Estimator (MLE) [Vinayak et al., 2019] is defined as any solution to the following optimization:
$$\hat\pi_{\mathrm{MLE}} \in \arg\max_{\pi\in\mathcal{D}} \sum_{i=1}^{n} \log \mathbb{E}_{p\sim\pi}\left[\binom{t}{X_i} p^{X_i}(1-p)^{t-X_i}\right] = \arg\max_{\pi\in\mathcal{D}} \sum_{j=0}^{t} \frac{n_j}{n}\cdot\log \mathbb{E}_{p\sim\pi}\left[\binom{t}{j} p^{j}(1-p)^{t-j}\right].$$
It minimizes the Kullback–Leibler divergence between the observed and expected fingerprints:
$$\hat\pi_{\mathrm{MLE}} \in \arg\min_{\pi\in\mathcal{D}} \mathrm{KL}\left(\hat{b},\, b(\pi)\right) \quad\text{where}\quad \hat{b}_j = \frac{n_j}{n} \ \text{and}\ b_j(\pi) = \mathbb{E}_{p\sim\pi}\left[\binom{t}{j} p^{j}(1-p)^{t-j}\right].$$
To avoid numerical issues during the simulation, we implement the following modified MLE optimization:
$$\hat\pi_{\mathrm{MLE}} \in \arg\max_{\pi\in\mathcal{D}} \sum_{j=0}^{t} \frac{n_j}{n}\cdot\log\max\left(\gamma,\, \mathbb{E}_{p\sim\pi}\left[\binom{t}{j} p^{j}(1-p)^{t-j}\right]\right).$$
The original statistic can be recovered using $\gamma = 0$; we use $\gamma = 10^{-10}$.
The corresponding test is defined as follows:
$$T = W_1(\hat\pi_{\mathrm{MLE}}, \pi_0) \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$

Method of Moments (MOM) test. The Method of Moments (MOM) estimator [Tian et al., 2017] minimizes the $L_1$ distance between the observed and expected moments. Let $\hat{m}_j(X)$ be the unbiased estimator of the $j$-th moment,
$$\hat{m}_j(X) = \frac{1}{n}\sum_{i=1}^{n} \binom{X_i}{j}\bigg/\binom{t}{j} \quad\text{for } j\in\{1,\dots,t\}, \qquad (90)$$
then the MOM estimator is defined as any solution to the following optimization:
$$\hat\pi_{\mathrm{MOM}} \in \arg\min_{\pi\in\mathcal{D}} \sum_{j=0}^{t} \left|\mathbb{E}_{p\sim\pi}\left[p^j\right] - \hat{m}_j(X)\right|.$$
The corresponding test is defined as follows:
$$T = W_1(\hat\pi_{\mathrm{MOM}}, \pi_0) \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$

K.2 Additional simulations for homogeneity testing with composite null hypothesis

[Figure 15: Distribution of the modified and debiased Cochran's χ² test statistics (debiased I, debiased II, modified) under the mixing distribution π = δ_{0.1}, across t ∈ {2, 5, 10} and n ∈ {5, 50}.]

[Figure 16: Distribution of the modified and debiased Cochran's χ² test statistics (debiased I, debiased II, modified) under the mixing distribution π = δ_{0.01}, across t ∈ {2, 5, 10} and n ∈ {5, 50}.]

L Applications

L.1 Testing procedures for homogeneity
testing with simple null

In the following, let $\pi_0 = \delta_{p_0}$.

$\ell_2$ test. The following test statistic is used:
$$T = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{X_i}{t_i} - p_0\right)^2.$$
The corresponding test is $\psi = I(T \geq q_\alpha(P_{\pi_0}, T))$.

Modified Pearson's χ² test. The following test statistic is usually employed:
$$T = \frac{1}{n}\sum_{i=1}^{n} t_i\cdot\frac{\left(\frac{X_i}{t_i} - p_0\right)^2}{\mu_{p_0}}.$$
To avoid numerical instabilities in the application, we utilized the following modification:
$$T = \frac{1}{n}\sum_{i=1}^{n} t_i\cdot\left(\frac{X_i}{t_i} - p_0\right)^2 \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$

Modified likelihood ratio test (LRT). The test statistic is computed as follows:
$$T = \frac{1}{n}\sum_{i=1}^{n}\left[ X_i\cdot\log\frac{\max\left(X_i/t_i,\, \gamma\right)}{\max\left(p_0,\, \gamma\right)} + (t_i - X_i)\cdot\log\frac{\max\left(1 - X_i/t_i,\, \gamma\right)}{\max\left(1 - p_0,\, \gamma\right)}\right].$$
The corresponding test is $\psi = I(T \geq q_\alpha(P_{\pi_0}, T))$. The original test uses $\gamma = 0$; we use $\gamma = 10^{-10}$.

Modified maximum likelihood estimation (MLE) test. To avoid numerical issues during the simulation, we implement the following modified MLE optimization:
$$\hat\pi_{\mathrm{MLE}} \in \arg\max_{\pi\in\mathcal{D}} \sum_{i=1}^{n}\log\max\left(\gamma,\, \mathbb{E}_{p\sim\pi}\left[\binom{t_i}{X_i} p^{X_i}(1-p)^{t_i - X_i}\right]\right).$$
The original statistic can be recovered using $\gamma = 0$; we use $\gamma = 10^{-10}$. The corresponding test is defined as follows:
$$T = W_1(\hat\pi_{\mathrm{MLE}}, \pi_0) \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$

Method of Moments (MOM) test. Let $t_{\max} = \max_{1\leq i\leq n} t_i$, then
$$\hat\pi_{\mathrm{MOM}} \in \arg\min_{\pi\in\mathcal{D}} \sum_{j=0}^{t_{\max}} \left|\mathbb{E}_{p\sim\pi}\left[p^j\right] - \frac{\sum_{i=1}^{n} \binom{X_i}{j}\big/\binom{t_i}{j}\cdot I(j\leq t_i)}{\sum_{i=1}^{n} I(j\leq t_i)}\right|.$$
The corresponding test is defined as follows:
$$T = W_1(\hat\pi_{\mathrm{MOM}}, \pi_0) \quad\text{and}\quad \psi = I\left(T \geq q_\alpha(P_{\pi_0}, T)\right).$$

L.2 Testing procedures for homogeneity testing with composite null

In the following definitions, let $\gamma = 10^{-10}$.

Modified Cochran's χ² test. The χ² statistic is defined as follows:
$$\hat\chi^2(X) = \frac{\tilde V(X)}{\max\left(\mu_{\hat m_1(X)}, \gamma\right)} \quad\text{where}\quad \tilde V(X) = \frac{1}{n-1}\sum_{i=1}^{n} t_i\cdot\left(\frac{X_i}{t_i} - \hat m_1(X)\right)^2 \ \text{and}\ \hat m_1(X) = \frac{\sum_{i=1}^{n} X_i}{\sum_{i=1}^{n} t_i}. \qquad (91)$$
We define the modified Cochran's χ² test as
$$\psi_{\chi^2}(X) = I\left(\hat\chi^2(X) \geq \sup_{\pi\in S} q_\alpha(P_\pi, \hat\chi^2)\right),$$
and the asymptotic modified Cochran's χ² test as
$$\psi_{\chi^2}(X) = I\left(\hat\chi^2(X) \geq \frac{\chi^2_{n-1,\alpha}}{n-1}\right).$$

Debiased Cochran's χ² test I. The $R_1$ statistic is defined as
$$R_1(X) = \frac{\hat V(X)}{\max\left(\mu_{\hat m_1(X)}, \gamma\right)} \quad\text{where}\quad \hat V(X) = \binom{n}{2}^{-1}\sum_{i<j}\frac{h(X_i, X_j)}{2} \quad\text{s.t.}\quad h(X_i, X_j) = \frac{\binom{X_i}{2}}{\binom{t_i}{2}} + \frac{\binom{X_j}{2}}{\binom{t_j}{2}} - 2\cdot\frac{X_i}{t_i}\cdot\frac{X_j}{t_j}, \qquad (92)$$
and $\hat m_1(X)$ is defined in (91). The corresponding test is
$$\psi_{R_1}(X) = I\left(R_1(X) \geq \sup_{\pi\in S} q_\alpha(P_\pi, R_1)\right).$$
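As an illustration, the following is a minimal sketch (our own naming, not the paper's code) of the debiased variance estimator $\hat V$ from (92) and the $R_1$ statistic. The debiasing rests on the identity $\mathbb E[\binom{X}{2}/\binom{t}{2}] = p^2$ for $X \sim \mathrm{Bin}(t, p)$, so each pair contributes an unbiased estimate of $(p_i - p_j)^2/2$; it assumes every $t_i \geq 2$.

```python
import math
import numpy as np

def debiased_variance(X, t):
    # U-statistic V-hat of (92): average over pairs of h(X_i, X_j) / 2, where
    # E[h(X_i, X_j)] = (p_i - p_j)^2 because E[C(X,2)/C(t,2)] = p^2 and E[X/t] = p.
    X, t = np.asarray(X), np.asarray(t)
    n = len(X)
    p2 = np.array([math.comb(x, 2) / math.comb(ti, 2) for x, ti in zip(X, t)])  # needs t_i >= 2
    p1 = X / t
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += (p2[i] + p2[j] - 2 * p1[i] * p1[j]) / 2
    return total / math.comb(n, 2)

def R1(X, t, gamma=1e-10):
    # R_1 = V-hat / max(mu_{m1-hat}, gamma), with m1-hat = sum X_i / sum t_i
    # and mu_x = x * (1 - x)
    m1 = np.sum(np.asarray(X)) / np.sum(np.asarray(t))
    return debiased_variance(X, t) / max(m1 * (1 - m1), gamma)
```

Unlike a plug-in variance, $\hat V$ can be negative on finite samples; that is the price of exact unbiasedness.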
81 Debiased Cochran’s χ2test II TheR2statistic is R2(X) =bV(X) max (bµ(X), γ) where bµ(X) =n 2−1X i<jeh(Xi, xj) 2s.t.eh(Xi, Xj) =Xi ti+Xj tj−2Xi tiXj tj andbV(X) is defined in (92). The corresponding test is ψR2(X) =I R2(X)≥sup π∈Sqα(Pπ, R2) . 82 References Miguel A. Arcones. A Bernstein-type inequality for U-statistics and U-processes. Statistics & Probability Letters , 22(3):239–247, 1995. Khanh Do Ba, Huy L. Nguyen, Huy N. Nguyen, and Ronitt Rubinfeld. Sublinear Time Algorithms for Earth Mover’s Distance. Theory of Computing Systems , 48(2):428–442, 2011. Sivaraman Balakrishnan and Larry Wasserman. Hypothesis testing for densities and high- dimensional multinomials: Sharp local minimax rates. The Annals of Statistics , 47(4), 2019. T. Batu, L. Fortnow, R. Rubinfeld, W.D. Smith, and P. White. Testing that distributions are close. In Proceedings 41st Annual Symposium on Foundations of Computer Science , pages 259–269. IEEE Comput. Soc, 2000. Roger L. Berger and Dennis D. Boos. P Values Maximized Over a Confidence Set for the Nuisance Parameter. Journal of the American Statistical Association , 89(427):1012–1016, 1994. Serge Bernstein. Sur l’ordre de La Meilleure Approximation Des Fonctions Continues Par Des Polynˆ omes de Degr´ e Donn´ e , volume 4. Hayez, imprimeur
des acad´ emies royales, 1912. Sergey Bobkov and Michel Ledoux. One-dimensional empirical measures, order statistics, and Kantorovich transport distances. Memoirs of the American Mathematical Society , 261 (1259):0–0, 2019. S. P. Brooks, B. J. T. Morgan, M. S. Ridout, and S. E. Pack. Finite Mixture Models for Proportions. Biometrics , 53(3):1097–1115, 1997. Jorge Bustamante. Bernstein Operators and Their Properties - Error Inequalities . Springer International Publishing, 2017. Cl´ ement L. Canonne. Topics and Techniques in Distribution Testing . Now Publishers, 2022. Clement L. Canonne, Ayush Jain, Gautam Kamath, and Jerry Li. The Price of Tolerance in Distribution Testing. In Proceedings of Thirty Fifth Conference on Learning Theory , pages 573–624, 2022. Siu-On Chan, Ilias Diakonikolas, Paul Valiant, and Gregory Valiant. Optimal Algorithms for Testing Closeness of Discrete Distributions. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms , pages 1193–1203, 2014. Julien Chhor and Alexandra Carpentier. Sharp local minimax rates for goodness-of-fit testing in multivariate binomial and Poisson families and in multinomials. Mathematical Statistics and Learning , 5(1):1–54, 2022. William G. Cochran. Some Methods for Strengthening the Common X2Tests. Biometrics , 10(4):417–451, 1954. 83 J´ erˆ ome Dedecker and Bertrand Michel. Minimax rates of convergence for Wasserstein de- convolution with supersmooth errors in any dimension. Journal of Multivariate Analysis , 122:278–291, 2013. J´ erˆ ome Dedecker, Aur´ elie Fischer, and Bertrand Michel. Improved rates for Wasserstein de- convolution with ordinary smooth error in dimension one. Electronic Journal of Statistics , 9(1):234–265, 2015. Diego Dominici. Asymptotic analysis of the Krawtchouk polynomials by the WKB method. The Ramanujan Journal , 15(3):303–338, 2008. M. S. Ermakov. Asymptotically minimax tests for nonparametric hypotheses concerning the distribution density. 
Journal of Soviet Mathematics , 52(2):2891–2898, 1990. M. S. Ermakov. Minimax Detection of a Signal In a Gaussian White Noise. Theory of Probability & Its Applications , 35(4):667–679, 1991. Leonardo Grilli, Carla Rampichini, and Roberta Varriale. Binomial Mixture Modeling of University Credits. Communications in Statistics - Theory and Methods , 44(22):4866– 4879, 2015. Philippe Heinrich and Jonas Kahn. Strong identifiability and optimal minimax rates for finite mixture estimation. The Annals of Statistics , 46(6A):2844–2870, 2018. Nhat Ho and XuanLong Nguyen. On strong identifiability and convergence rates of parameter estimation in finite mixtures. Electronic Journal of Statistics , 10(1):271–307, 2016. Wassily Hoeffding. Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association , 58(301):13–30, 1963. Justin S. Hogg, Fen Z. Hu, Benjamin Janto, Robert Boissy, Jay Hayes, Randy Keefe, J. Christopher Post, and Garth D. Ehrlich. Characterization and modeling of the Haemophilus influenzae core and supragenomes based on the complete genomic sequences of Rd and 12 clinical nontypeable strains. Genome Biology , 8(6):R103, 2007. Piotr Indyk. Fast color image retrieval via embeddings. In Workshop on Statistical and Computational Theories of Vision (at ICCV), 2003 , 2003. Yu. I. Ingster. On Testing a Hypothesis Which Is Close to a Simple Hypothesis. Theory of Probability & Its Applications , 45(2):310–323, 2001. Yu. I. Ingster and Irina A. Suslina. Nonparametric Goodness-of-Fit Testing Under Gaussian Models , volume 169. Springer, 2003. Yuri I. Ingster. Asymptotically minimax hypothesis testing
for nonparametric alternatives. I.Math. Methods Statist , 2(2):85–114, 1993a. Yuri I. Ingster. Asymptotically minimax hypothesis testing for nonparametric alternatives. II.Math. Methods Statist , 2(2):85–114, 1993b. 84 Yuri I. Ingster. Asymptotically minimax hypothesis testing for nonparametric alternatives. III.Math. Methods Statist , 2(2):85–114, 1993c. Christopher J¨ urges, Lars D¨ olken, and Florian Erhard. Dissecting newly transcribed and old RNA using GRAND-SLAM. Bioinformatics , 34(13):i218–i226, 2018. Leonid Vasilevich Kantorovich and SG Rubinshtein. On a space of totally additive functions. Vestnik of the St. Petersburg University: Mathematics , 13(7):52–59, 1958. Weihao Kong and Gregory Valiant. Spectrum estimation from samples. The Annals of Statistics , 45(5), 2017. Mikhail Kravchuk. Sur une g´ en´ eralisation des polynˆ omes d’Hermite. Comptes Rendus , 189 (620-622):5–3, 1929. A. J. Lee. U-Statistics: Theory and Practice . Routledge, 2019. Oleg V. Lepski and Vladimir G. Spokoiny. Minimax Nonparametric Hypothesis Testing: The Case of an Inhomogeneous Alternative. Bernoulli , 5(2):333, 1999. Shichao Lin, Kun Yin, Yingkun Zhang, Fanghe Lin, Xiaoyong Chen, Xi Zeng, Xiaoxu Guo, Huimin Zhang, Jia Song, and Chaoyong Yang. Well-TEMP-seq as a microwell-based strategy for massively parallel profiling of single-cell temporal RNA dynamics. Nature Communications , 14(1):1272, 2023. Frederic M. Lord. A strong true-score theory, with applications. Psychometrika , 30(3): 239–270, 1965. Frederic M. Lord. Estimating true-score distributions in psychological testing (an empirical bayes estimation problem). Psychometrika , 34(3):259–299, 1969. Frederic M. Lord and Noel Cressie. An Empirical Bayes Procedure for Finding an Interval Estimate. Sankhy¯ a: The Indian Journal of Statistics, Series B (1960-2002) , 37(1):1–9, 1975. Stephen Lowe. The beta-binominal mixture model for word frequencies in documents with applications to information retrieval. 
In Eurospeech99 , pages 2443–2446, 1999. H. B. Mann and A. Wald. On the Choice of the Number of Class Intervals in the Application of the Chi Square Test. The Annals of Mathematical Statistics , 13(3):306–317, 1942. Tudor Manole and Abbas Khalili. Estimating the number of components in finite mixture models via the Group-Sort-Fuse procedure. The Annals of Statistics , 49(6):3043–3069, 2021. Maria Melkersson and Jan Saarela. Welfare participation and welfare dependence among the unemployed. Journal of Population Economics , 17(3):409–431, 2004. XuanLong Nguyen. Convergence of latent mixing measures in finite and infinite mixture models. The Annals of Statistics , 41(1):370–400, 2013. 85 Steven E. Nissen and Kathy Wolski. Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes. New England Journal of Medicine , 356 (24):2457–2471, 2007. Liam Paninski. A Coincidence-Based Test for Uniformity Given Very Sparsely Sampled Discrete Data. IEEE Transactions on Information Theory , 54(10):4750–4755, 2008. Junyong Park. Testing homogeneity of proportions from sparse binomial data with a large number of groups. Annals of the Institute of Statistical Mathematics , 71(3):505–535, 2019. Karl Pearson. X. onthe criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science , 50(302):157–175, 1900. Leszek Plaskota. Approximation and Complexity, 2021. Abedallah Rababah. Transformation of Chebyshev–Bernstein Polynomial Basis. Computa- tional Methods
in Applied Mathematics , 3(4):608–622, 2003. Filippo Santambrogio. Optimal Transport for Applied Mathematicians: Calculus of Varia- tions, PDEs, and Modeling , volume 87. Springer International Publishing, 2015. Lars Snipen, Trygve Almøy, and David W Ussery. Microbial comparative pan-genomics using binomial mixture models. BMC Genomics , 10(1):385, 2009. G´ abor Szeg˝ o. Orthogonal Polynomials . American Mathematical Society, 1975. Henry Teicher. Identifiability of Finite Mixtures. The Annals of Mathematical Statistics , 34 (4):1265–1269, 1963. Hoben Thomas. A binomial mixture model for classification performance: A commentary on Waxman, Chambers, Yntema, and Gelman (1989). Journal of Experimental Child Psychology , 48(3):423–430, 1989. Kevin Tian, Weihao Kong, and Gregory Valiant. Learning Populations of Parameters. In Advances in Neural Information Processing Systems , volume 30, 2017. Alexandre B. Tsybakov. Introduction to Nonparametric Estimation . Springer, 2009. Gregory Valiant and Paul Valiant. An Automatic Inequality Prover and Instance Optimal Identity Testing. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science , pages 51–60, 2014. C´ edric Villani. Optimal Transport , volume 338. Springer, 2009. Ramya Korlakai Vinayak, Weihao Kong, Gregory Valiant, and Sham Kakade. Maximum Likelihood Estimation for Learning Populations of Parameters. In Proceedings of the 36th International Conference on Machine Learning , pages 6448–6457, 2019. 86 Jonathan Weed and Francis Bach. Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance. Bernoulli , 25(4A), 2019. Yihong Wu and Pengkun Yang. Minimax Rates of Entropy Estimation on Large Alphabets via Best Polynomial Approximation. IEEE Transactions on Information Theory , 62(6): 3702–3720, 2016. Yihong Wu and Pengkun Yang. Optimal estimation of Gaussian mixtures via denoised method of moments. The Annals of Statistics , 48(4):1981–2007, 2020a. 
Yihong Wu and Pengkun Yang. Polynomial methods in statistical inference: Theory and practice. Foundations and Trends in Communications and Information Theory , 17(4): 402–586, 2020b. Yuting Ye and Peter J. Bickel. Binomial Mixture Model With U-shape Constraint, 2021. arXiv:2107.13756. Hui Zhou and Wei Pan. Binomial Mixture Model-based Association Tests under Genetic Heterogeneity. Annals of Human Genetics , 73(Pt 6):614, 2009. 87
NONPARAMETRIC ESTIMATION IN UNIFORM DECONVOLUTION AND INTERVAL CENSORING

BY PIET GROENEBOOM AND GEURT JONGBLOED

Delft Institute of Applied Mathematics, Mekelweg 4, 2628 CD Delft, The Netherlands. P.Groeneboom@tudelft.nl; G.Jongbloed@tudelft.nl

In the uniform deconvolution problem one is interested in estimating the distribution function F0 of a nonnegative random variable, based on a sample with additive uniform noise. A peculiar and not well understood phenomenon of the nonparametric maximum likelihood estimator in this setting is the dichotomy between the situations where F0(1) = 1 and F0(1) < 1. If F0(1) = 1, the MLE can be computed in a straightforward way and its asymptotic pointwise behavior can be derived using the connection to the so-called current status problem. However, if F0(1) < 1, one needs an iterative procedure to compute it, and the asymptotic pointwise behavior of the nonparametric maximum likelihood estimator is not known. In this paper we describe the problem and connect it to interval censoring problems and a more general model studied in [5] to state two competing, naturally occurring conjectures for the case F0(1) < 1. Asymptotic arguments related to smooth functional theory and extensive simulations lead us to bet on one of these two conjectures.

1. Introduction. Consider a random variable U, distributed according to a distribution function F0 on R+. Instead of observing U, one observes the sum S = U + V, where V ∼ Unif(0,1), independent of U. The random variable S then has convolution density
$$g_S(s) = \int 1_{[0,1]}(s-v)\,dF_0(v) = \int 1_{[s-1,s]}(v)\,dF_0(v) = F_0(s) - F_0(s-1). \qquad (1.1)$$
Statistical inference for F0, based on n i.i.d. random variables S1, ..., Sn with density gS, is known as the uniform deconvolution problem. It is not essential that the support of the uniform distribution is [0,1]: we can always reduce the deconvolution problem with uniform random variables with another support to this situation.
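The convolution identity (1.1) is easy to check by simulation; a minimal sketch (our own illustration, not from the paper), taking U ~ Unif(0, 1/2):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# S = U + V with U ~ Unif(0, 1/2) and V ~ Unif(0, 1) independent
U = rng.uniform(0.0, 0.5, size=n)
S = U + rng.uniform(0.0, 1.0, size=n)

def F0(x):
    # distribution function of Unif(0, 1/2)
    return np.clip(np.asarray(x, dtype=float) / 0.5, 0.0, 1.0)

def g_S(s):
    # convolution density of S, cf. (1.1): g_S(s) = F0(s) - F0(s - 1)
    return F0(s) - F0(np.asarray(s) - 1.0)

# empirical density of S on a small bin around s = 0.75 versus g_S(0.75) = 1
h = 0.05
emp = np.mean(np.abs(S - 0.75) < h / 2) / h
```

Here `g_S` is constant and equal to 1 on (1/2, 1), and the binned empirical density `emp` matches it up to Monte Carlo error.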
For simplicity we stick in this paper to uniform random variables with support [0,1]. As discussed in [8], an interesting application of uniform deconvolution is in the "deblurring" of pictures blurred by Poisson noise; see, e.g., [13]. The log likelihood of F is then defined as
$$\ell(F) = \sum_{i=1}^{n} \log\left\{F(S_i) - F(S_i - 1)\right\}. \qquad (1.2)$$
If F0 satisfies F0(1) = 1, this model will be seen to be equivalent to the so-called current status (or interval censoring, case 1) model. Consequently, there is a one-step algorithm to compute the nonparametric MLE of F0. We explain this further in Section 2.1. We also derive the asymptotic distribution of the nonparametric MLE in this section.

AMS 2000 subject classifications: Primary 62G05, 62N01; secondary 62-04.
Keywords and phrases: deconvolution, current status, interval censoring, nonparametric maximum likelihood, incubation time model, asymptotic distribution.
arXiv:2504.14555v2 [math.ST] 23 Apr 2025

In Section 2.2, we show that if the support of the distribution corresponding to the distribution function F0 is not contained in [0,1], with upper support point M > 1, the model can be shown to be equivalent to an interval censoring, case m, model, where m = ⌈M⌉. Consequently, a one-step algorithm to compute the nonparametric MLE is not known, but one can use iterative algorithms in this case to compute the nonparametric MLE, for example the iterative convex minorant algorithm proposed in [10]. A more
general model is studied in [5] and [6]. There the length of the support of the uniform random variable V, say E, is actually random (but observable). This model is used for estimating the distribution of the incubation time of a disease. In this interpretation there is a positive random variable E, the duration of the "exposure time", with (unknown) distribution function FE. Given E, an independent "infection time" V is drawn, uniformly distributed on the interval [0, E]. Moreover, the quantity of interest is the incubation (length) time U with distribution function F0. Apart from E, the observation is then the time S of getting symptomatic, where S = U + V. Here, the random variables U and V are independent, conditionally on E. The observations therefore consist of n i.i.d. pairs of exposure times and times of getting symptomatic,
$$(E_i, S_i), \quad i = 1, \dots, n. \qquad (1.3)$$
This model is also considered in [1], [2], [4] and [14]. Under the assumption that E has an absolutely continuous distribution, the asymptotic pointwise distribution of the nonparametric MLE of F0 is derived in [5]. Remarkably, for this result there is no distinction between different assumptions on the support of the distribution corresponding to F0; it holds in general. Taking for the distribution of E the point mass at 1 in the statement of Theorem 4.1 in [5] (a case certainly not covered by the assumption of absolute continuity in this theorem, and coinciding with the uniform deconvolution problem stated above), the correct asymptotic distribution appears in case F0(1) = 1, as derived in our Section 2.1. This also provides a natural conjecture (which we do, however, not believe in) for the asymptotic distribution of the nonparametric MLE in the situation where F0(1) < 1. In Section 3 we discuss this more general model, used in estimating the incubation time distribution, in more detail and formulate the conjecture.
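To make the basic model of (1.1)–(1.2) concrete, the following minimal simulation sketch (our own illustration; the function names are not from the paper) draws S = U + V with V ~ Unif(0,1) and evaluates the log likelihood ℓ(F) of a candidate distribution function:

```python
import numpy as np

def sample_s(n, sample_u, rng):
    # S = U + V, V ~ Unif(0,1) independent of U, so g_S(s) = F0(s) - F0(s-1)
    return sample_u(n, rng) + rng.uniform(0.0, 1.0, size=n)

def log_likelihood(F, S):
    # ell(F) = sum_i log{ F(S_i) - F(S_i - 1) }, cf. (1.2)
    S = np.asarray(S, dtype=float)
    return float(np.sum(np.log(F(S) - F(S - 1.0))))

rng = np.random.default_rng(0)
# U ~ Unif(0, 0.5), so F0(1) = 1 (the "easy" case of the dichotomy)
S = sample_s(1000, lambda n, r: r.uniform(0.0, 0.5, size=n), rng)
F0 = lambda x: np.clip(np.asarray(x, dtype=float) / 0.5, 0.0, 1.0)
ll = log_likelihood(F0, S)
```

Note that ℓ depends on F only through the increments F(s) − F(s − 1) at the observed points.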
Before turning to a simulation study and asymptotic considerations to support our actual conjecture in Section 5, in Section 4 we digress into a discussion of the estimation of other functionals of F0, rather than the evaluation of the nonparametric MLE itself at a fixed point. For these so-called smooth functionals, asymptotic results are known. Moreover, the idea behind these smooth functionals is needed to understand the asymptotic considerations leading up to our final conjecture for the asymptotic distribution of the nonparametric MLE of F0(t0) in the uniform deconvolution setting.

2. Uniform deconvolution and interval censoring. In this section, we establish a relation between the uniform deconvolution problem and various interval censoring models. In those models, a dichotomy between two cases also arises naturally. The uniform deconvolution problem in case F0(1) = 1 will be seen to be related to the interval censoring case 1 (or, current status) problem. This situation will be discussed in Section 2.1, where the asymptotic distribution of the nonparametric MLE of F0(t0) is also given. In Section 2.2, we discuss the relation between the uniform deconvolution model where F0(1) < 1 and interval censoring case m, for m ≥ 2. We first briefly describe the interval censoring case 1 (IC-1) problem. Consider a random variable X with distribution function F0 of interest and, independently of
this, a random variable Y with a distribution function G. Instead of observing X, the pair (Y, ∆) is observed, where ∆ indicates whether X is smaller than or equal to Y. So, ∆ = 1{X ≤ Y}. In survival analysis terminology, X is the event time, Y is the inspection time, and we observe the "status" of the subject at time Y, the status being whether the event (usually an infection) has already occurred at time Y or not. Note that given inspection time Y = y, ∆ ∼ Bernoulli(F0(y)). The problem is to estimate the distribution function F0, based on an i.i.d. sample of (Yi, ∆i)'s. A more general model is the interval censoring case m (IC-m) model, where m stands for the number of distinct inspection times per subject. Formally, Y = (Y1, ..., Ym) is a random vector of distinct inspection times and X a nonnegative random variable independent of these inspection times. One then observes (Y, ∆), where the (m+1)-vector ∆ indicates which of the intervals defined by the vector Y contains X: ∆ = (∆1, ..., ∆m+1). Here, for 1 ≤ j ≤ m+1, ∆j = 1_{(Y(j−1), Y(j)]}(X), denoting by Y(j) the j-th order statistic in the vector Y, and by convention Y(0) ≡ 0 and Y(m+1) = ∞. Note that given that the (ordered) vector of observation times equals (y(1), ..., y(m)), the vector ∆ is multinomially distributed with parameters 1 and probability vector
$$\left(F_0(y_{(1)}),\, F_0(y_{(2)}) - F_0(y_{(1)}),\, \dots,\, F_0(y_{(m)}) - F_0(y_{(m-1)}),\, 1 - F_0(y_{(m)})\right).$$
Given an i.i.d. sample of vectors (Y, ∆), the problem is then to estimate the distribution function F0. In the two subsections below, we establish the relation between the uniform deconvolution problem and the interval censoring models and derive results based on that, depending on whether F0(1) = 1 or F0(1) < 1.

2.1. Case F0(1) = 1. Following the suggestions of Exercise 2, p. 61, of [10], we set up an equivalence of the uniform deconvolution problem with the current status model, in case F0(1) = 1.
Given the observed S1, S2, ..., Sn, define
$$\Delta_i = \begin{cases} 1, & \text{if } S_i \leq 1 \\ 0, & \text{if } S_i > 1 \end{cases} \qquad (2.1)$$
and let the random variable Yi be defined by
$$Y_i = \begin{cases} S_i, & \text{if } \Delta_i = 1 \\ S_i - 1, & \text{if } \Delta_i = 0, \end{cases} \qquad (2.2)$$
translating the Si's to pairs (Yi, ∆i) for 1 ≤ i ≤ n. Note that, for y ∈ (0,1), considering a single observation (e.g. i = 1), we get
$$P(Y \leq y) = P(Y \leq y \wedge \Delta = 0) + P(Y \leq y \wedge \Delta = 1) = P(S - 1 \leq y \wedge \Delta = 0) + P(S \leq y \wedge \Delta = 1)$$
$$= P(1 < S \leq y + 1) + P(S \leq y) = \int_1^{1+y} g_S(s)\,ds + \int_0^{y} g_S(s)\,ds = \int_0^{y} F_0(s+1)\,ds = y.$$
Here we use (1.1) and the fact that F0(s + 1) = 1 if s > 0, as F0(1) = 1. Hence, the Yi's are uniformly distributed on [0,1]. Moreover, again using (1.1), for y ∈ (0,1), the conditional distribution of ∆ given Y = y is Bernoulli, with success parameter
$$P\{\Delta = 1 \mid Y = y\} = \frac{g_S(y)}{g_S(y) + g_S(y+1)} = F_0(y).$$
This shows the equivalence of the uniform deconvolution model with the current status model in the case F0(1) = 1. More specifically, it is a current status model with event time distribution F0 and inspection time distribution uniform on [0,1]. This means that we can immediately characterize the nonparametric MLE in this setting and also derive its pointwise asymptotic distribution. Indeed, the MLE is the maximizer of the log likelihood
$$\sum_{i=1}^{n}\left(\Delta_i \log F(Y_i) + (1 - \Delta_i)\log(1 - F(Y_i))\right),$$
where ∆i and Yi are defined by (2.1) and (2.2) as functions of the observations Si. Being a special instance of a 'generalized isotonic
https://arxiv.org/abs/2504.14555v2
regression problem’, the MLE F̂n is known to coincide with the solution of the isotonic regression problem of minimizing the sum of squares

Σ_{i=1}^n (∆i − F(Yi))²

over all distribution functions F. The one-step algorithm for computing the solution, using the so-called cusum diagram of the ∆i's, is described, e.g., on p. 30 of [9]. The solution consists of local averages of the ∆i and can therefore only take rational values. Specializing Theorem 3.7 in Section 3.8 of [9] to the setting where the inspection times are uniformly distributed on [0,1] yields the asymptotic distribution of the MLE in the uniform deconvolution model with F0(1) = 1.

THEOREM 2.1. Let F0 be differentiable on (a, b), 0 < a < b < 1, with a continuous positive derivative f0(t) for t ∈ (a, b), where [a, b] is the support of f0. Let F̂n be the nonparametric MLE of F0. Then, for t0 ∈ (a, b):

n^{1/3} {F̂n(t0) − F0(t0)} / (4 f0(t0) F0(t0)(1 − F0(t0)))^{1/3} →_d argmin_{t ∈ R} {W(t) + t²}, (2.3)

where W is two-sided Brownian motion on R, originating from zero.

2.2. Case F0(1) < 1. In line with the previous subsection, we make a connection between the uniform deconvolution problem and the more general IC-m problem, where m = ⌈M⌉, M denoting the upper support point of the distribution corresponding to the distribution function F0. We start again by constructing a vector ∆ and Y based on the observation S. Note that S now takes values in [0, M + 1]. Define, for 1 ≤ j ≤ m + 1, the j-th component of ∆ by

∆j = 1 if S ∈ (j − 1, j], and ∆j = 0 otherwise. (2.4)

Moreover, define the components of the vector Y by

Yj = S − ⌊S⌋ + j − 1 (2.5)

for 1 ≤ j ≤ m + 1. Similarly to the IC-1 setting, we can show that Y1 is uniformly distributed on [0,1]. Indeed, for y ∈ (0,1),

P(Y1 ≤ y) = Σ_{j=1}^{m+1} P(Y1 ≤ y ∧ ∆j = 1) = Σ_{j=1}^{m+1} P(Y1 ≤ y ∧ j − 1 < S ≤ j)
= Σ_{j=1}^{m+1} P(S − (j − 1) ≤ y ∧ j − 1 < S ≤ j) = Σ_{j=1}^{m+1} P(j − 1 < S ≤ y + j − 1)
= Σ_{j=1}^{m+1} ∫_{j−1}^{j−1+y} gS(s) ds = Σ_{j=1}^{m+1} ∫_{j−1}^{j−1+y} {F0(s) − F0(s − 1)} ds = ∫_m^{m+y} F0(s) ds.

Again, the right-hand side equals y because F0(s) = 1 for s ≥ m, as m ≥ M and F0(M) = 1.
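The maps (2.1)-(2.2) and (2.4)-(2.5) are deterministic transformations of each observation S and are straightforward to code. The sketch below (function names are our own) assumes the sample of S-values has already been drawn:

```python
import numpy as np

def to_current_status(s):
    """(2.1)-(2.2): map observations S to current status pairs (Y, Delta).
    Meaningful when F0(1) = 1, so that S takes values in [0, 2]."""
    s = np.asarray(s, dtype=float)
    delta = (s <= 1.0).astype(int)
    y = np.where(delta == 1, s, s - 1.0)
    return y, delta

def to_interval_censored(s, m):
    """(2.4)-(2.5): map one observation S in [0, m + 1] to IC-m data.
    Delta_j = 1{S in (j - 1, j]} and Y_j = S - floor(S) + j - 1."""
    j = int(np.ceil(s))                      # S lies in (j - 1, j]
    delta = np.zeros(m + 1, dtype=int)
    delta[j - 1] = 1
    y = s - np.floor(s) + np.arange(m + 1)   # Y_1, ..., Y_{m+1}
    return y, delta

y_cs, d_cs = to_current_status([0.3, 1.4])
y_ic, d_ic = to_interval_censored(1.7, m=2)
```

Note that the IC-m inspection times produced this way are exactly one unit apart, with a uniformly distributed first component, matching the "peculiar" inspection-time distribution discussed next.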
So the vector Y consists of a uniformly distributed random variable Y1 on [0,1], followed by Yj = Y1 + j − 1 for 2 ≤ j ≤ m. It is therefore also automatically ordered. Now consider the conditional distribution of ∆ given Y, which only depends on Y1. For fixed j ∈ {1, 2, ..., m + 1} and y1 ∈ (0,1), using (1.1),

P(∆j = 1 | Y1 = y1) = gS(j − 1 + y1) / Σ_{k=1}^{m+1} gS(k − 1 + y1)
= {F0(j − 1 + y1) − F0(j − 2 + y1)} / {F0(m + y1) − F0(y1 − 1)} = F0(y(j)) − F0(y(j−1)),

because y1 − 1 < 0 and m + y1 > m ≥ M. In view of the previous discussion of the IC-m model, this shows that translating the sample of Si's into the sample of (Yi, ∆i)'s according to (2.4) and (2.5) yields IC-m data. The distribution of the 'inspection times' is quite specific and peculiar: consecutive inspection times are at distance exactly one, with a uniformly distributed first inspection time.

Having established this relation between the two models, results known for the IC-m model can be used to derive results for our deconvolution model. However, far fewer results are known for the IC-m model than for IC-1. One indication of the difficulties in moving away from the IC-1 model is that the nonparametric MLE can have irrational values in the IC-m model for m > 1. A simple example of this is given in Example 1.3 on p. 48 of [10]. There the values of the nonparametric maximum likelihood estimate of the distribution function of the hidden variable contain the values 1/2 ± (1/6)√3 (the solution can be computed analytically in this simple example). This cannot happen in
the IC-1 model.

Let us now turn to the nonparametric MLE. Define

mn = max_{j: Sj > 1} (Sj − 1). (2.6)

Denoting by Qn the empirical distribution of (S1, ..., Sn), the nonparametric MLE is defined as the maximizer of the log likelihood function

ℓn(F) = ∫ log{F(s) − F(s − 1)} dQn(s)
= ∫_{s ≤ 1} log F(s) dQn(s) + ∫_{1 < s ≤ mn} log{F(s) − F(s − 1)} dQn(s) + ∫_{s > 1 ∨ mn} log{1 − F(s − 1)} dQn(s). (2.7)

Note that for maximizing this log likelihood function, the class of distribution functions can be restricted to those satisfying

F(Si) = 1 if Si > max_{j: Sj > 1} (Sj − 1), and F(Si − 1) = 0 if Si − 1 < min_j Sj,

since different values of F at these points would make the log likelihood smaller. We now define the relevant set of distribution functions.

DEFINITION 2.1. Fn is the set of discrete distribution functions F which only have mass at the points {Si, Si − 1 : 1 ≤ i ≤ n} and satisfy

F(x) = 1 if x ≥ mn, (2.8)

and

F(x) = 0 if x < min_j Sj. (2.9)

We restrict the distribution functions occurring in the maximization of the likelihood to this set Fn. The nonparametric MLE can then be characterized using the following process, defined in terms of F ∈ Fn:

Wn,F(t) = ∫ {s ≤ t ∧ 1} / F(s) dQn(s) − ∫ {0 < s − 1 ≤ t < s ≤ mn} / {F(s) − F(s − 1)} dQn(s)
+ ∫ {1 < s ≤ t ∧ mn} / {F(s) − F(s − 1)} dQn(s) − ∫ {1 ∨ mn − 1 < s − 1 ≤ t} / {1 − F(s − 1)} dQn(s), t ≥ 0, (2.10)

where we write a set as shorthand for its indicator function. Now let T1 < ··· < Tm be the points Si such that Si is not of type (2.8) and Si − 1 is not of type (2.9). Then 0 < F(Ti) < 1 for i = 1, ..., m if the log likelihood for F is finite, and we can define

Y = {y ∈ (0,1)^m : y = (y1, ..., ym) = (F(T1), ..., F(Tm)), F ∈ Fn}.

The following lemma characterizes the MLE and is proved in a completely analogous way to Lemma 2.2 in [5].

LEMMA 2.1. Let the class of distribution functions Fn be as defined in Definition 2.1. Then F̂n ∈ Fn maximizes (2.7) over F ∈ Fn if and only if

(i) ∫_{u ∈ [t,∞)} dWn,F̂n(u) ≤ 0, t ≥ 0, (2.11)

(ii) ∫ F̂n(t) dWn,F̂n(t) = 0, (2.12)

where Wn,F is defined by (2.10). Moreover, F̂n ∈ Fn is uniquely determined by (2.11) and (2.12).
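While the F0(1) < 1 case just characterized requires an iterative algorithm, in the F0(1) = 1 case of Section 2.1 the MLE is the plain isotonic regression of the statuses ∆i on the sorted inspection times Yi, computable by the pool-adjacent-violators algorithm. A minimal self-contained sketch (our own implementation, not the authors' R scripts from [7]):

```python
def pava(values):
    """Pool adjacent violators: least-squares isotonic (nondecreasing) fit.

    The fit consists of local averages of the input values, so it can
    only take rational values for 0/1 data."""
    blocks = []  # each block: [mean, weight]
    for v in values:
        blocks.append([float(v), 1.0])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for mean, weight in blocks:
        fit.extend([mean] * int(weight))
    return fit

def current_status_mle(y, delta):
    """MLE F-hat at the sorted inspection times: isotonic regression of
    the statuses delta on the inspection times y."""
    order = sorted(range(len(y)), key=lambda i: y[i])
    return pava([delta[i] for i in order])

fit = current_status_mle([0.1, 0.3, 0.5, 0.7, 0.9], [1, 0, 1, 1, 0])
```

On this toy sample the fit is (1/2, 1/2, 2/3, 2/3, 2/3): local averages of the ∆i, as the cusum-diagram characterization predicts.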
As mentioned above, in order to compute the MLE in this case, an iterative algorithm has to be used. We use the iterative convex minorant algorithm proposed in [10], with line search to determine the step size, as described in [12]. This algorithm was also applied in computing the MLE for the incubation time distribution in [4]. R scripts for computing the MLE in this way are available in [7]. The algorithm uses the characterization of Lemma 2.1.

The asymptotic distribution of the nonparametric MLE in the IC-m model has only partially been established, namely for the case m = 2 in the so-called separated case, in which the distance between observation times stays away from zero. Moreover, its proof is given under the additional assumption that the distance between the observation times has an absolutely continuous distribution, a situation we clearly do not have here. For details, see [3]. One conjecture for the asymptotic distribution of the MLE in the uniform deconvolution problem when F0(1) < 1 would be that the statement of Theorem 2.1 is also valid in this setting. In the next section, we consider the more general mixed uniform deconvolution problem, for which asymptotic results have been derived, leading to a second conjecture for the asymptotic distribution in the general fixed uniform
deconvolution problem.

3. The mixed model. The mixed uniform deconvolution model as described in the introduction (so with random length E of the support of the uniform noise V) is clearly a generalization of what we will from now on call the fixed uniform deconvolution model, the former reducing to the latter in case E has the degenerate distribution at the point 1. Let M1 and M2 be the upper support points of the distributions corresponding to E and U respectively, so that M = M1 + M2 is the upper support point of the distribution of S = U + V. Denoting by µ the product of the measure dFE and Lebesgue measure on [0, M], (Ei, Si) has (convolution) density qF with respect to µ:

qF(e, s) = e^{−1} {F(s) − F(s − e)} = e^{−1} ∫_{u=(s−e)_+}^{s} dF(u), e > 0, s ∈ [0, M]. (3.1)

We define the underlying measure Q0 for (Ei, Si) by

dQ0(e, s) = qF(e, s) ds dFE(e), s ∈ [0, M], e ∈ (0, M2]. (3.2)

For estimating the distribution function F0 of U, parametric distributions are usually used, like the Weibull, log-normal or gamma distribution. In [4], however, the nonparametric MLE is used. This maximum likelihood estimator F̂n maximizes the function

ℓ(F) = n^{−1} Σ_{i=1}^n log(F(Si) − F(Si − Ei)) (3.3)

over all distribution functions F on R which satisfy F(x) = 0, x ≤ 0; see [4]. The monotonicity and boundedness of F (between 0 and 1) ensure that this maximization problem has a solution. Note that the random variable Ei now replaces the value 1 in F(Si − 1) compared to the fixed uniform deconvolution problem, but that otherwise the model is the same.

The asymptotic distribution of the nonparametric MLE in the mixed uniform deconvolution problem is given as Theorem 4.1 in [5].

THEOREM 3.1. Let the conditions of Theorem 4.1 in [5] be satisfied. In particular, let Ei stay away from zero and have an absolutely continuous distribution.
Then

n^{1/3} {F̂n(t0) − F0(t0)} / (4 f0(t0)/cE)^{1/3} →_d argmin_{t ∈ R} {W(t) + t²}, (3.4)

where W is two-sided Brownian motion on R, originating from zero, and where the constant cE is given by

cE = ∫ e^{−1} [1/{F0(t0) − F0(t0 − e)} + 1/{F0(t0 + e) − F0(t0)}] dFE(e). (3.5)

This result is valid irrespective of whether the support of the distribution corresponding to F0 is contained in [0,1] or not.

Here we see a condition for that result, namely that E has an absolutely continuous distribution, a condition clearly not met by the degenerate distribution at 1. Ignoring this for the moment, the statement of the theorem would yield the asymptotic distribution with variance

(4 f0(t0)/cE)^{2/3} Var(argmin_{t ∈ R} {W(t) + t²}),

where

cE = ∫ e^{−1} [1/{F0(t0) − F0(t0 − e)} + 1/{F0(t0 + e) − F0(t0)}] dFE(e)
= 1/{F0(t0) − F0(t0 − 1)} + 1/{F0(t0 + 1) − F0(t0)}. (3.6)

Now, if F0(1) = 1, this expression (for t0 ∈ (0,1)) reduces to

cE = 1/F0(t0) + 1/{1 − F0(t0)} = 1/{F0(t0)(1 − F0(t0))}. (3.7)

This is precisely the constant in the asymptotic variance of Theorem 2.1, proved in the case F0(1) = 1. Two natural conjectures now come up. The first is that cE will equal (3.7) also in the setting where F0(1) < 1. The second is that it equals (3.6). In Section 5 we will further investigate this asymptotic variance, using a simulation study as well as asymptotic calculations involving so-called smooth functionals. In Section 4, we discuss asymptotic results for this type of functional.

4. Interlude: examples of applications of smooth functional theory in the fixed model. Though estimating the distribution function at a fixed point in the uniform
deconvolution model is intrinsically more complicated than estimation based on a direct sample from the unknown distribution, reflected in a rate of estimation slower than the 'parametric rate' √n, there are functionals of F0 that can be estimated at rate √n in the uniform deconvolution model. For the functionals discussed in this section, we have known (normal) asymptotic behavior of the corresponding functionals of the nonparametric MLE. The theory does not depend on whether the support of the distribution corresponding to F0 is contained in [0,1] or not. We illustrate the theory with F0 the truncated exponential distribution function on [0,2], given by

F0(x) = {(1 − exp{−x}) / (1 − exp{−2})} 1_{[0,2]}(x) + 1_{(2,∞)}(x). (4.1)

Note that in this case the support of f0 is not contained in [0,1].

By "smooth functionals" we mean both global smooth functionals, such as the "mean functional", and local smooth functionals, such as smooth approximations of the distribution function and density, which can be estimated by applying kernel smoothing to the nonparametric MLE. These estimates are asymptotically normal. For deriving the asymptotic behavior, we establish a representation in the "observation space" of the functional in the "hidden space", using the score function. The setup is briefly described in the appendix (Section 7.1). For a more detailed discussion of the smooth functional theory, see [9].

EXAMPLE 4.1. Using the theory of Section 7 for the functional F ↦ ∫ x dF(x), we get from Lemma 7.1 for the score function θF on [0,3]:

θF(x) = −{1 − F(x)} − {1 − F(x + 1)}, x ∈ [0,1],
θF(x) = F(x − 1) − {1 − F(x)}, x ∈ (1,2],
θF(x) = F(x − 2) + F(x − 1), x ∈ (2,3]. (4.2)

For F = F0 as in (4.1), this becomes:

θF(x) = (2e^x − e(1 + e)) / ((e² − 1)e^x), x ∈ [0,1],
θF(x) = ((1 + e²)e^x − e²(1 + e)) / ((e² − 1)e^x), x ∈ (1,2],
θF(x) = (2e² − (1 + e)e^{3−x}) / (e² − 1), x ∈ (2,3]. (4.3)

A picture of this function is given in Figure 1.
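The closed form (4.3) and the variance constant reported below (σ² ≈ 0.357915) can be checked numerically from (4.1) and (4.2) alone. The sketch uses a plain midpoint rule, so it needs nothing beyond the standard library:

```python
import math

C = 1.0 - math.exp(-2.0)

def F0(x):
    """Truncated exponential distribution function on [0, 2], eq. (4.1)."""
    x = min(max(x, 0.0), 2.0)
    return (1.0 - math.exp(-x)) / C

def theta(x):
    """Score function (4.2) for the mean functional, with F = F0."""
    if x <= 1.0:
        return -(1.0 - F0(x)) - (1.0 - F0(x + 1.0))
    if x <= 2.0:
        return F0(x - 1.0) - (1.0 - F0(x))
    return F0(x - 2.0) + F0(x - 1.0)

def midpoint(f, a, b, n=20000):
    """Midpoint-rule integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# sigma^2 = int_0^3 theta(x)^2 {F0(x) - F0(x - 1)} dx, split at the kinks
sigma2 = sum(midpoint(lambda x: theta(x) ** 2 * (F0(x) - F0(x - 1.0)), a, a + 1.0)
             for a in (0.0, 1.0, 2.0))

# closed form (4.3), first branch, for a spot check against (4.2)
e = math.e
def theta_closed(x):
    return (2.0 * math.exp(x) - e * (1.0 + e)) / ((e * e - 1.0) * math.exp(x))
```

Running this gives sigma2 ≈ 0.3579, in agreement with the value quoted in Example 4.1.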
The asymptotic variance of √n ∫ x dF̂n(x) is given by

σ²_{F0} = ∫_0^3 θ_{F0}(x)² {F0(x) − F0(x − 1)} dx ≈ 0.357915,

and we have

√n (∫ x dF̂n(x) − ∫ x dF0(x)) →_D N(0, σ²_{F0}),

Fig 1: The score function θ_{F0} for the truncated exponential distribution, given by (4.3).

where N(0, σ²_{F0}) is a normal distribution with expectation zero and variance σ²_{F0}. One can in fact consistently estimate the asymptotic variance σ²_{F0} by

∫ θ_{F̂n}(x)² {F̂n(x) − F̂n(x − 1)} dx,

where one takes F = F̂n in (4.2). Note that the rate of convergence is √n instead of the cube root n rate expected for F̂n(t) itself. The asymptotic efficiency of this estimate of the first moment is proved in [15]; see in particular Example 11.2.3e on p. 230 of [15]. It beats obvious moment estimates like S̄n − 1/2, because of the particular form of the characteristic function of the uniform distribution (which has zeroes). But these matters are not the main concern of our paper.

EXAMPLE 4.2. We can estimate the density f0 by the density estimator

f̂nh(t) = ∫ Kh(t − x) dF̂n(x),

where h > 0 is a bandwidth and Kh(u) = h^{−1} K(u/h) for a symmetric smooth kernel K, for example the triweight kernel

K(u) = (35/32)(1 − u²)³ 1_{[−1,1]}(u).

This estimator was considered in [8]. It was shown that, under the conditions given there,

(nh³)^{1/2} (f̂nh(t) − ∫ Kh(t − x) dF0(x)) →_D N(0, σ²_t),

where

σ²_t = lim_{h↓0} h³ ∫ θ²_{h,t,F0}(s) {F0(s) − F0(s − 1)} ds, (4.4)

and θ_{h,t,F0} is given by

θ_{h,t,F0}(x) = Σ_{i=0}^{m−1} {1 − F0(x + i)} K'_h(t − (x + i)), x ∈ [0,1].

Here θ_{h,t,F0}(x + i) = θ_{h,t,F
0}(x + i − 1) − K'_h(t − (x + i − 1)), i = 1, ..., m, and m = ⌈M⌉, where M is the upper bound of the support of f0. We show that σ²_t, given by (4.4), can be simplified to

σ²_t = F0(t){1 − F0(t)} ∫ K'(u)² du. (4.5)

We have, for t ∈ (0,1):

∫ θ²_{h,t,F0}(x) {F0(x) − F0(x − 1)} dx
= ∫_{t−h}^{t+h} K'_h(t − x)² {1 − F0(x)}² F0(x) dx
+ ∫_{t−h}^{t+h} K'_h(t − x)² F0(x)² {F0(x + 1) − F0(x)} dx
+ ...
+ ∫_{t−h}^{t+h} K'_h(t − x)² F0(x)² {1 − F0(x + m − 1)} dx
= ∫_{t−h}^{t+h} K'_h(t − x)² {1 − F0(x)}² F0(x) dx + ∫_{t−h}^{t+h} K'_h(t − x)² F0(x)² {1 − F0(x)} dx
= ∫_{t−h}^{t+h} K'_h(t − x)² F0(x) {1 − F0(x)} dx
∼ h^{−3} F0(t){1 − F0(t)} ∫ K'(u)² du, h ↓ 0,

using the telescoping sum F0(x + 1) − F0(x) + ··· + 1 − F0(x + m − 1) = 1 − F0(x). For t ∈ (i, i + 1), i ≥ 1, a similar argument applies, and for t = i, i = 1, 2, ..., the result follows from the contributions from both sides of i, which each contribute one half of the total mass.

EXAMPLE 4.3. In an entirely similar way, the asymptotic distribution of the estimator

F̃nh(t) = ∫ IK_h(t − x) dF̂n(x)

of the distribution function F0 itself can be derived, where IK_h(u) = IK(u/h) and IK is the integrated kernel, defined by

IK(u) = ∫_{−∞}^u K(x) dx.

We have the following result.

THEOREM 4.1. Let F0 have a continuous positive density f0 on (0, M), where M > 0, and let t0 ∈ (0, M). Then, if nh → ∞ and h ↓ 0:

(nh)^{1/2} (F̃nh(t) − ∫ IK_h(t − x) dF0(x)) →_D N(0, σ̃²_t),

where

σ̃²_t = F0(t){1 − F0(t)} ∫ K(u)² du.

The proof proceeds along a similar path as the proof of the analogous result for the density in the preceding example. The truncated exponential distribution function defined by (4.1) satisfies the conditions of the theorem, for M = 2.

5. Simulations and asymptotic considerations. Having two competing conjectures for the asymptotic distribution of F̂n(t0) in the fixed uniform deconvolution problem, related to the constants (3.6) and (3.7) in the asymptotic variance, we first present in this section a simulation study supporting (3.7).
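The agreement of the two candidate constants (3.6) and (3.7) when F0(1) = 1, and their disagreement otherwise, is easy to verify numerically; the F0 choices below are our own illustrations:

```python
def c_fixed(F0, t0):
    """cE of (3.6): E degenerate at the point 1 (fixed model)."""
    return 1.0 / (F0(t0) - F0(t0 - 1.0)) + 1.0 / (F0(t0 + 1.0) - F0(t0))

def c_bernoulli(F0, t0):
    """The 'Bernoulli factor' constant (3.7)."""
    return 1.0 / (F0(t0) * (1.0 - F0(t0)))

# F0(1) = 1: uniform on [0, 1] -- (3.6) and (3.7) coincide
unif01 = lambda x: min(max(x, 0.0), 1.0)
# F0(1) < 1: uniform on [0, 2] -- they differ
unif02 = lambda x: min(max(x, 0.0), 2.0) / 2.0

agree_gap = abs(c_fixed(unif01, 0.4) - c_bernoulli(unif01, 0.4))
differ_gap = abs(c_fixed(unif02, 0.5) - c_bernoulli(unif02, 0.5))
```

For the uniform distribution on [0, 2] at t0 = 0.5, (3.6) gives 1/0.25 + 1/0.5 = 6 while (3.7) gives 1/(0.25 · 0.75) ≈ 5.33, so the two conjectures genuinely predict different variance curves when F0(1) < 1.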
One could say that "plugging in" the point mass at 1 as the distribution of E should be possible in the mixed-model setting in case F0(1) = 1. In the case F0(1) < 1, however, this does not seem to lead to the right result. Diving into the proof of Theorem 4.1 in [5], we identify a term that is identically zero in case F0(1) = 1. But if F0(1) < 1, this term is not zero and is of order Op(n^{−5/6}) in the mixed model but of order Op(n^{−2/3}) in the fixed model, which means that it is negligible in the mixed model but not in the asymptotic expansion for the fixed model.

To simulate the mixed model, we first generate the random pairs (Ui, Ei),

Ui ∼ F0, Ei ∼ FE = (· − 0.5) 1_{[0.5,1.5)} + 1_{[1.5,∞)}, i = 1, ..., n,

so Ei is uniformly distributed on [0.5, 1.5]. Then, conditionally on Ei, a Uniform(0, Ei) random variable Vi is drawn. Our observations then consist of (Ei, Si), where Si = Ui + Vi, i = 1, ..., n. This example satisfies the conditions of Theorem 4.1 in [5], which means that the asymptotic variance of the nonparametric maximum likelihood estimator F̂n of F0, evaluated at t0 ∈ (0,2), is given by

σ²_{t0} = (4 f0(t0)/cE)^{2/3} Var(argmin_{t ∈ R} {W(t) + t²}), (5.1)

where cE is given by (3.5). It was computed numerically in [11], using Airy functions, that

Var(argmin_{t ∈ R} {W(t) + t²}) ≈ 0.263555964.

To illustrate Theorem 4.1 in [5], we simulate from the model and compute the variances of F̂n(ti) for, say, 10,000 samples, for ti = i · 0.1, i = 1, ..., 19, and compare the variances of F̂n(ti) with the asymptotic variances σ²_{ti}, taking t0 = ti in (5.1). A comparison of simulated
and asymptotic values is shown in Figure 2, where we take n = 10,000 and F0 the truncated standard exponential and the uniform distribution function on [0,2], respectively.

Fig 2: Simulated variances, times n^{2/3}, for the mixed model of F̂n(ti), for ti = 0.1, 0.2, ..., 1.9 (blue solid curve, linearly interpolated between values at the ti), compared with the asymptotic values (5.1) of Theorem 4.1 in [5] (red, dashed) for Ei uniform on [0.5, 1.5]. The simulated variances are based on 10,000 simulations of samples of size n = 10,000 for (a) F0 the truncated exponential distribution function on [0,2] and (b) F0 the uniform distribution function on [0,2]. The blue dashed curves are the corresponding asymptotic variance curves of Conjecture 5.1 for the fixed model.

If M > 1, as in this example where M = 2, the values of the asymptotic variances are no longer of the form (5.1) in the fixed model. We shall now explain the reason for this discrepancy. We start with the characterization of Lemma 2.1. The MLE is determined by the process W_{n,F̂n} and, just as in [5], we now investigate the terms in a local expansion of W_{n,F̂n}. To this end, we first define the process Xn.

LEMMA 5.1. Let F0 have a continuous positive density f0 on (0, M), where M > 1, let t0 ∈ (0, M), and let the process Xn be defined by

Xn(t) = ∫ {t0 < s ≤ t0 + n^{−1/3} t} δ1(s) / F̂n(s) d(Qn − Q0)(s)
− ∫ {t0 < s − 1 ≤ t0 + n^{−1/3} t} δ2(s) / {F̂n(s) − F̂n(s − 1)} d(Qn − Q0)(s)
+ ∫ {t0 < s ≤ t0 + n^{−1/3} t} δ2(s) / {F̂n(s) − F̂n(s − 1)} d(Qn − Q0)(s)
− ∫ {t0 < s − 1 ≤ t0 + n^{−1/3} t} δ3(s) / {1 − F̂n(s − 1)} d(Qn − Q0)(s), (5.2)

where δ1 = 1_{[0,1]}, δ2 = 1_{(1,mn]} and δ3 = 1_{(mn,∞)}, and Qn is the empirical distribution measure of the Si, with corresponding underlying measure Q0. Then n^{2/3} Xn converges in distribution, in the Skorohod topology, to the process t ↦ √c W(t), t ∈ R, where c is defined by

c = 1/{F0(t0) − F0(t0 − 1)} + 1/{F0(t0 + 1) − F0(t0)}.

PROOF. The proof follows the proof of Lemma 9.4 in [5].

We now have the following lemma.

LEMMA 5.2.
W_{n,F̂n}(t0 + n^{−1/3} t) − W_{n,F̂n}(t0) = Xn(t) + Yn(t),

where Xn is defined by (5.2) and Yn by

Yn(t) = ∫ {t0 < s ≤ t0 + n^{−1/3} t} [{F0(s) − F0(s − 1)} / {F̂n(s) − F̂n(s − 1)} − {F0(s + 1) − F0(s)} / {F̂n(s + 1) − F̂n(s)}] ds. (5.3)

We can write Yn(t) = An(t) + Bn(t), where

An(t) = − ∫_{s ∈ [t0, t0 + n^{−1/3} t)} {F̂n(s) − F0(s)} [1/{F̂n(s) − F̂n(s − 1)} + 1/{F̂n(s + 1) − F̂n(s)}] ds,

and

Bn(t) = ∫_{s ∈ [t0, t0 + n^{−1/3} t)} [{F̂n(s − 1) − F0(s − 1)} / {F̂n(s) − F̂n(s − 1)} + {F̂n(s + 1) − F0(s + 1)} / {F̂n(s + 1) − F̂n(s)}] ds. (5.4)

The crucial difference between the mixed model and the fixed model is the term Bn(t) in the expansion of W_{n,F̂n} in Lemma 5.2. In the mixed model the corresponding term

B̃n(t) = ∫_{s ∈ [t0, t0 + n^{−1/3} t)} ∫ e^{−1} [{F̂n(s − e) − F0(s − e)} / {F̂n(s) − F̂n(s − e)} + {F̂n(s + e) − F0(s + e)} / {F̂n(s + e) − F̂n(s)}] dFE(e) ds (5.5)

is shown in [5] to be of order Op(n^{−5/6}) in the leading expansion, whereas (5.4) is of order Op(n^{−2/3}). In fact, the terms An(t) and Bn(t) are both of order Op(n^{−2/3}) in the fixed model, whereas the corresponding terms are of order Op(n^{−2/3}) and Op(n^{−5/6}), respectively, in the mixed model; see the proofs of Lemma 9.8 and Lemma 9.9 in [5], respectively. Integration w.r.t. dFE, where FE is absolutely continuous, causes the lower order of the term. Showing that (5.5) is of order Op(n^{−5/6}), however, is not at all easy and is in fact the main effort in [5]. We need to apply the smooth functional theory here, as discussed in Section 4.

We have the following conjecture on the asymptotic behavior of F̂n if Vi is Uniform(0,1).

CONJECTURE 5.1. Let F0 have a continuous positive density f0 on (0, M), where M > 0, and let t0 ∈ (0, M). Let F̂n be the nonparametric MLE
of F0. Then, for t0 ∈ (0, M):

n^{1/3} {F̂n(t0) − F0(t0)} / (4 f0(t0) F0(t0)(1 − F0(t0)))^{1/3} →_d argmin_{t ∈ R} {W(t) + t²}, (5.6)

where W is two-sided Brownian motion on R, originating from zero.

So in this case we expect the MLE to have exactly the same limit behavior as in the case that the support of the distribution is contained in [0,1] (see Theorem 2.1). The reason we believe the conjecture might be true relies partly on simulations and partly on the behavior of the smooth functionals in Section 4, where the F(1 − F) "Bernoulli factor" also enters into the variance. If the conjecture is true, we think it enters via telescoping sums, as we demonstrated for the density estimator based on the MLE, but we have not been able to prove this.

We finally show in Figure 4 a picture of the variance curve for 10,000 samples of size n = 10,000, where the blue dashed curve shows the theoretical curve of Conjecture 5.1 and the purple dotted curve the theoretical curve if we would apply Theorem 4.1 in [5] with E degenerate at 1 (ignoring the conditions of Theorem 4.1 in [5]).

The "dip" of the empirical variance curves at the point 1 if F0 is the uniform distribution function on [0,2], which was also visible for the mixed model in Figure 2, is remarkable, but we do not have an explanation for it. Note that the theoretical curves touch at this point, because the theoretical variance at the point t = 1 contains F0(t){1 − F0(t)} for both conjectures there.

Fig 3: Simulated variances, times n^{2/3}, for the fixed model of F̂n(ti), for ti = 0.1, 0.2, ..., 1.9 (blue solid curve, linearly interpolated between values at the ti), compared with the asymptotic values of Conjecture 5.1 (blue, dashed). The simulated variances are based on 10,000 simulations of samples of size n = 10,000 for (a) F0 the truncated exponential distribution function on [0,2] and (b) F0 the uniform distribution function on [0,2].
The red dashed curves are the corresponding asymptotic variance curves of Theorem 4.1 in [5] for the mixed model and Ei uniform on [0.5, 1.5].

Fig 4: Simulated variances, times n^{2/3}, for the fixed model of F̂n(ti), for ti = 0.1, 0.2, ..., 1.9 (black solid curve, linearly interpolated between values at the ti), compared with the asymptotic values of Conjecture 5.1 (blue, dashed) and the conjecture that would follow from Theorem 4.1 in [5], ignoring the conditions (purple, dotted), for (a) F0 the truncated exponential distribution function on [0,2] and (b) F0 the uniform distribution function on [0,2]. The simulated variances are based on 10,000 simulations of samples of size n = 10,000.

6. Concluding remarks. We showed that the asymptotic distribution of the nonparametric maximum likelihood estimator (MLE) in the uniform deconvolution model can be derived from a corresponding result for the current status model in the case that the support of the distribution is contained in the unit interval. If the support of the unknown distribution is not contained in the unit interval, the asymptotic distribution is unknown. But also in this setting, the model is shown to be related to an interval censoring model, now case m for m ≥ 2.
The asymptotics of the mixed uniform deconvolution model (where the length of the support of the uniform variable is random) were studied in [5] under a smoothness condition on the distribution of the interval length. In that setting there is no distinction depending on the support of the distribution corresponding to F0. In case F0(1) = 1, the statement of the theorem reduces to that for the current status model if the (non-smooth) degenerate distribution is substituted. A natural question is whether the same can be done in case F0(1) < 1. Simulations indicate that this cannot be done. Moreover, an important term in the asymptotic analysis of the nonparametric MLE in the mixed uniform deconvolution problem turns out to behave essentially differently depending on whether F0(1) = 1 or F0(1) < 1. This discrepancy is explained by smooth functional theory, the strength of which is also demonstrated using more easily understandable functionals in Section 4. R scripts for computing the MLE and producing the pictures in this paper are available in [7].

7. Appendix.

7.1. Score equations. As explained in the appendix of [4], the theory of the estimation of smooth functionals is based on certain score equations. We define the score function θF by

θF(s) = E{a(X) | S = s} = ∫_{x ∈ (s−1, s]} a(x) dF(x) / {F(s) − F(s − 1)}

(compare to (A.1) in [4]). This is the conditional expectation of a(X) in the "hidden" space of the variable of interest, given our observation S. Note that we changed the notation somewhat w.r.t. [4], and denote the distribution function of the incubation time by F instead of G. Defining

ϕF(t) = ∫_{x ∈ [0,t]} a(x) dF(x),

we get the following representation of the score function, conditioned on X = x:

E{θF(S) | X = x} = ∫_{s ∈ (x, x+1]} {ϕF(s) − ϕF(s − 1)} / {F(s) − F(s − 1)} ds. (7.1)

By differentiation w.r.t. x we get the following equation, with on the right-hand side the derivative w.r.t. x of the functional we want to estimate, denoted by ψ:

{ϕF(x + 1) − ϕF(x)} / {F(x + 1) − F(x)} − {ϕF(x) − ϕF(x − 1)} / {F(x) − F(x − 1)} = ψ(x), x ∈ [0, M].
(7.2)

Here and in the sequel, we assume that distribution functions are right-continuous. We have the following lemma.

LEMMA 7.1. The solution of the system (7.2) is found in the following way. Let m = ⌈M⌉ and let θF(x) be given by

θF(x) = − Σ_{i=0}^{m−1} {1 − F(x + i)} ψ(x + i), x ∈ [0,1],

and θF(x + i) = θF(x + i − 1) + ψ(x + i − 1), i = 1, ..., m + 1. Then ϕF is given by

ϕF(x) = F(x) θF(x), x ∈ [0,1],

and

ϕF(x + i) − ϕF(x + i − 1) = {F(x + i) − F(x + i − 1)} θF(x + i), i = 1, ..., m.

REMARK 7.1. This is in accordance with Example 11.2.3e on p. 230 of [15], where b_{F0} = θ_{F0}. It is also in line with formula (12) in Theorem 2 of [8].

REFERENCES

[1] BACKER, J. A., KLINKENBERG, D. and WALLINGA, J. (2020). Incubation period of 2019 novel coronavirus (2019-nCoV) infections among travellers from Wuhan, China, 20-28 January 2020. Euro Surveill. 25.
[2] BRITTON, T. and SCALIA TOMBA, G. (2019). Estimation in emerging epidemics: biases and remedies. J. R. Soc. Interface 16.
[3] GROENEBOOM, P. (1996). Lectures on inverse problems. In Lectures on probability theory and statistics (Saint-Flour, 1994). Lecture Notes in Math. 1648 67–164. Springer, Berlin. https://doi.org/10.1007/BFb0095675 MR1600884
[4] GROENEBOOM, P. (2021). Estimation of the incubation time distribution for COVID-19. Stat.
Neerl. 75 161–179. https://doi.org/10.1111/stan.12231 MR4245907
[5] GROENEBOOM, P. (2024a). Nonparametric estimation of the incubation time distribution. Electron. J. Stat. 18 1917–1969. https://doi.org/10.1214/24-ejs2243 MR4736274
[6] GROENEBOOM, P. (2024b). Estimation of the incubation time distribution in the singly and doubly interval censored model. Stat. Neerl. 78 617–635. https://doi.org/10.1111/stan.12335 MR4827421
[7] GROENEBOOM, P. (2025). Deconvolution. https://github.com/pietg/deconvolution.
[8] GROENEBOOM, P. and JONGBLOED, G. (2003). Density estimation in the uniform deconvolution model. Statist. Neerlandica 57 136–157. https://doi.org/10.1111/1467-9574.00225 MR2035863
[9] GROENEBOOM, P. and JONGBLOED, G. (2014). Nonparametric Estimation under Shape Constraints. Cambridge Univ. Press, Cambridge.
[10] GROENEBOOM, P. and WELLNER, J. A. (1992). Information Bounds and Nonparametric Maximum Likelihood Estimation. DMV Seminar 19. Birkhäuser Verlag, Basel. MR1180321
[11] GROENEBOOM, P. and WELLNER, J. A. (2001). Computing Chernoff's distribution. J. Comput. Graph. Statist. 10 388–400. https://doi.org/10.1198/10618600152627997 MR1939706
[12] JONGBLOED, G. (1998). The iterative convex minorant algorithm for nonparametric estimation. J. Comput. Graph. Statist. 7 310–321. https://doi.org/10.2307/1390706 MR1646718
[13] O'SULLIVAN, F. and ROYCHOUDHURY, K. (2001). An analysis of the role of positivity and mixture model constraints in Poisson deconvolution problems. J. Comput. Graph. Statist. 10 673–696. https://doi.org/10.1198/106186001317243395 MR1938974
[14] REICH, N. G., LESSLER, J., CUMMINGS, D. A. T. and BROOKMEYER, R. (2009). Estimating incubation period distributions with coarse data. Stat. Med. 28 2769–2784. https://doi.org/10.1002/sim.3659 MR2750164
[15] VAN DE GEER, S. A. (2000). Applications of Empirical Process Theory. Cambridge Series in Statistical and Probabilistic Mathematics 6.
Cambridge University Press, Cambridge. MR1739079
arXiv:2504.14659v2 [eess.SP] 29 Apr 2025

Markovian Continuity of the MMSE

Elad Domanovitz and Anatoly Khina

Abstract—Minimum mean square error (MMSE) estimation is widely used in signal processing and related fields. While it is known to be non-continuous with respect to all standard notions of stochastic convergence, it remains robust in practical applications. In this work, we review the known counterexamples to the continuity of the MMSE. We observe that, in these counterexamples, the discontinuity arises from an element in the converging measurement sequence providing more information about the estimand than the limit of the measurement sequence. We argue that this behavior is uncharacteristic of real-world applications and introduce a new stochastic convergence notion, termed Markovian convergence, to address this issue. We prove that the MMSE is, in fact, continuous under this new notion. We supplement this result with semi-continuity and continuity guarantees of the MMSE in other settings and prove the continuity of the MMSE under linear estimation.

Index Terms—Minimum mean square error, estimation error, parameter estimation, inference algorithms, correlation.

I. INTRODUCTION

Minimum mean square error (MMSE) estimation is a cornerstone of statistical signal processing and estimation theory [1]–[3]. Its simple conditional-mean expression, along with its physical interpretation as the minimizer of the mean power of the estimation error, makes it the standard choice for many engineering applications in signal processing, communications, control theory, machine learning, data science, and other domains.

Classic examples of linear MMSE (LMMSE) estimation in signal processing include Wiener and Kalman filters [4]–[7]. These serve as building blocks in control and communications, e.g., in Linear Quadratic Gaussian (LQG) and H2 control [8], [9], and in feed-forward equalizers (FFE) and decision feedback equalizers (DFE) [10].
The quadratic loss function also serves as a common choice in regression analysis [11], [12] and machine learning (ML), and is a common choice in reinforcement [13] and online learning [14]. While many of the above solutions are linear, non-linear variants thereof exist, relying both on classical [15]–[17] and modern ML-based techniques [18]–[21]. These techniques rely primarily on models that rely on statistical knowledge. As the required statistics are acquired from finite samples and finite-precision/noisy measurements, continuity of the MMSE and of the corresponding estimators is implicitly assumed. Similar implicit assumptions are characteristic also of model-free reinforcement learning [13]. However, despite being considered robust in practice, the MMSE is known to be non-continuous in general.

This work was supported in part by the ISRAEL SCIENCE FOUNDATION (grant No. 2077/20) and in part by a grant from the Tel Aviv University Center for AI and Data Science (TAD). The authors are with the School of Electrical and Computer Engineering, Tel Aviv University, Tel Aviv 6997801, Israel (e-mails: domanovi@eng.tau.ac.il, anatolyk@tau.ac.il).

Namely, consider a sequence of pairs of random variables {(Xn, Yn)}_{n=1}^∞ converging in some standard stochastic sense (in distribution, in probability, in mean square, almost surely) to a pair of random variables (X, Y). Here, (Xn, Yn) can represent, e.g.,
an empirical distribution resulting from a finite sample of length n drawn from the distribution of (X, Y), or a finite-precision variant of (X, Y) with the machine precision increasing with n. Then,

MMSE(X_n | Y_n) ↛ MMSE(X | Y)

in general, where MMSE(X | Y) denotes the MMSE in estimating the random parameter X from the measurement Y. As is common, we will say that the MMSE is continuous/discontinuous in a particular stochastic sense depending on the stochastic convergence sense of the sequence {(X_n, Y_n)}∞_{n=1} to (X, Y).

Wu and Verdú [22], and Yüksel and Linder [23] (see also [24], [25, Chapter 8.3]), provided concrete counterexamples to the continuity of the MMSE, demonstrating that the discontinuity may even be unbounded.

Wu and Verdú [22] proved that the MMSE is upper semi-continuous (u.s.c.) in distribution, as long as {X_n}∞_{n=1} and X are uniformly bounded. For additive channels with independent noise N in X and {X_n}∞_{n=1}, where X and N have finite second moments (finite power), they proved that the MMSE is also u.s.c. in distribution. Further, if N has a bounded continuous probability density function (PDF), then the MMSE is also continuous in distribution, to wit

lim_{n→∞} MMSE(X_n | X_n + N) = MMSE(X | X + N).

Yüksel and Linder [23] (see also [25, Chapter 8.3]) considered the case of a common parameter X = X_n for all n, and a sequence in n of channels {P_{Y_n|X}}∞_{n=1} from X to Y_n that converges to a channel P_{Y|X} from X to Y. They proved that, for bounded and continuous distortion measures between X and its estimate, the MMSE is u.s.c. in distribution. Hogeboom-Burr and Yüksel [26], [27] (see also [25, Chapter 8.3]) strengthened the u.s.c.-in-distribution guarantee to a continuity one by restricting the sequence of channels to be stochastically degraded/garbled [25, Chapter 7.3], depicted in Figure 1, satisfying:
a) P_{Y_n|X} is stochastically degraded with respect to P_{Y|X}.
b) P_{Y_n|X} is stochastically degraded with respect to P_{Y_{n+1}|X}.
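To make the degradedness conditions above concrete, the following numerical sketch (our illustration, not from the paper) checks, for jointly Gaussian scalars, that a measurement corrupted by extra independent noise is degraded and can only have a larger MMSE; the variances sx2, s1, s2 are assumed example values.

```python
import numpy as np

# Illustrative sketch (not from the paper): if Yn = Y + W2 with W2 independent
# of (X, Y), then X -- Y -- Yn is a Markov chain, i.e., Yn is degraded w.r.t. Y.
# For jointly Gaussian scalars the MMSE has a closed form, so we can check that
# the degraded measurement is never more informative.
sx2, s1, s2 = 1.0, 0.5, 0.5                  # assumed Var(X), Var(W1), Var(W2)

mmse_y = sx2 * s1 / (sx2 + s1)               # MMSE(X | Y),  Y  = X + W1
mmse_yn = sx2 * (s1 + s2) / (sx2 + s1 + s2)  # MMSE(X | Yn), Yn = Y + W2
assert mmse_y <= mmse_yn                     # degrading cannot decrease the MMSE

# Monte Carlo cross-check of MMSE(X | Y) via the conditional mean E[X | Y] = a*Y.
rng = np.random.default_rng(0)
m = 200_000
x = rng.normal(0.0, np.sqrt(sx2), m)
y = x + rng.normal(0.0, np.sqrt(s1), m)
a = sx2 / (sx2 + s1)                         # Gaussian conditional-mean gain
emp = np.mean((x - a * y) ** 2)
assert abs(emp - mmse_y) < 1e-2
```

The closed-form values used here follow from the standard Gaussian MMSE expression; the Monte Carlo step only cross-checks the first of them.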
Unfortunately, these continuity results are limited to specific settings and do not fully explain the robustness of the MMSE which is observed in practical scenarios.

In this work, we establish new continuity results for the MMSE, which subsume the aforementioned results. To that end, we first review existing counterexamples to and guarantees of the continuity of the MMSE in Section II. We identify common traits in these counterexamples: for a sequence of pairs of random variables {(X_n, Y_n)} that converges to (X, Y) in probability, (X_n, Y_n) →(p) (X, Y), either
• the second moment (mean power) of X_n does not converge to that of X, viz. E[X_n²] ↛ E[X²], or
• the MMSE in estimating X from Y_n is strictly better than the MMSE of estimating X from Y.

We argue that such behavior is uncharacteristic of real-world applications and suggest adding two additional requirements:
1) convergence of the second moment:

lim_{n→∞} E[X_n²] = E[X²];

2) a Markovian restriction

X ⊸– Y ⊸– Y_n    (1)

for all n, depicted in Figure 2. This restriction amounts to assuming that, given X, Y_n is degraded with respect to Y.

We prove in Section III that, under these two restrictions, the MMSE is Markov continuous in probability. Byproducts of this result include continuity guarantees for finite-power additive noises with diminishing power and for rounding errors with increasing machine precision. In Section IV, we prove that
the MMSE is u.s.c. in distribution as long as the second moment of X_n converges to that of X (requirement 1 above). This requirement is weaker than the uniform-boundedness requirement of [22]. For the case of a common parameter X = X_n with a finite second moment, and a sequence {P_{Y_n|X}}∞_{n=1} of channels converging to a channel P_{Y|X}, we further show that the MMSE is continuous in distribution as long as {P_{Y_n|X}}∞_{n=1} satisfies the degradedness requirement a above (see Figure 2). In particular, this means that requirement b above is superfluous.

In Section V, we supplement the above results by proving that the MMSE under linear estimation (LMMSE) is continuous in distribution as long as the second moments of {X_n}∞_{n=1} and of {Y_n}∞_{n=1} converge to those of X and Y, respectively.

We establish all the results in this work in the more general framework of random vector (RV) parameters and measurements.

We conclude the paper with Section VI by a summary and discussion of possible future directions.

Next, we introduce the notation used in this paper; necessary background about stochastic convergence and stochastic degradedness is provided in Appendix A.

A. Notation

R, N, Z — The sets of real, natural (positive integer), and integer numbers, respectively.
x[i] — The i-th entry of vector x ∈ R^k for i ∈ {1, 2, ..., k}.
x^T — The transpose of a vector x.
x² — (x²[1], x²[2], ..., x²[k])^T for x ∈ R^k and k ∈ N (entrywise squaring).
⟨x, y⟩ — x^T y, the standard Euclidean inner product between vectors x, y ∈ R^k for k ∈ N.
|x| — The absolute value of x ∈ R.
∥x∥ — √⟨x, x⟩, the standard Euclidean norm of a vector x.
x ≤ y — x[i] ≤ y[i] for all i ∈ {1, 2, ..., k}, where x, y ∈ R^k and k ∈ N.
⌊x⌋ — The floor operation applied entrywise to the entries of x ∈ R^k for k ∈ N.
sign{x} — The sign of x ∈ R.
trace{A} — The trace of A ∈ R^{k×k} for k ∈ N.
lim, lim sup, lim inf — Limit, limit superior, and limit inferior, respectively.
E, Var — Expectation and variance operators, respectively.
⟨X, Y⟩_RV — E[⟨X, Y⟩] = E[X^T Y] for random vectors (RVs) X, Y ∈ R^k, where k ∈ N.
∥X∥_RV — √⟨X, X⟩_RV = √E[X^T X] for an RV X ∈ R^k, where k ∈ N.
X =d Y — X and Y have the same distribution.
X ⫫ Y — Independence between RVs X and Y.
X ⊸– Y ⊸– Z — Markov triplet: X and Z are independent given Y.
X d⊸– Y d⊸– Z — Garbled triplet: the conditional distribution of Z given X is stochastically degraded/garbled with respect to the conditional distribution of Y given X (see also Definition A.5).
X_n →(d) X — Convergence in distribution of {X_n}∞_{n=1} to X.
X_n →(p) X — Convergence in probability of {X_n}∞_{n=1} to X.
X_n →(a.s.) X — Almost-sure convergence of {X_n}∞_{n=1} to X.
X_n →(m.s.) X — Mean-square convergence of {X_n}∞_{n=1} to X.
MMSE(X | Y) — The MMSE in estimating X given Y.
LMMSE(X | Y) — The LMMSE in (linearly) estimating X given Y.

II. DISCUSSION OF EXISTING RESULTS

We first present the definition of the MMSE [28, Chapter 8], [3, Chapter 4], [29, Chapter 7].

Definition II.1. The MMSE in estimating an RV X with a finite second moment, ∥X∥_RV < ∞, from an RV Y is defined as

MMSE(X | Y) ≜ inf ∥X − X̂∥²_RV,

Fig. 1. Illustration of a nested sequence of (physically) degraded channels X ⊸– Y ⊸– ⋯ ⊸– Y_{n+1} ⊸– Y_n ⊸– ⋯ ⊸– Y_3 ⊸– Y_2 ⊸– Y_1.

Fig. 2. Illustration of a sequence of individually (physically) degraded channels: X ⊸– Y ⊸– Y_i for all i ∈ {1, 2, ..., n}. This is a less stringent requirement than the one depicted in Figure 1, as it does not
assume degradedness between Y_i and Y_j for i ≠ j.

where the infimum is over all RVs X̂ with finite second moment that satisfy X ⊸– Y ⊸– X̂.

The following is a known characterization of the MMSE [3, Chapter 4], [30, Chapter 9.1.5], [7, Appendix for Chapter 3], which is often used as its definition.

Theorem II.1. The MMSE estimate of an RV X with a finite second moment, ∥X∥_RV < ∞, from an RV Y is given by E[X | Y], and the corresponding MMSE is given as

MMSE(X | Y) = ∥X − E[X | Y]∥²_RV    (2a)
            = ∥X∥²_RV − ∥E[X | Y]∥²_RV.    (2b)

It is well known that the MMSE is not continuous in general [22]–[24], [25, Chapter 8.3]. We start by recalling known counterexamples that demonstrate it.

We first demonstrate that, even in the absence of measurements, the MMSE, which then reduces to the variance, might not be continuous.

Example II.1. Let Y = Y_n = 0 for all n ∈ N. Set X = 0 and

X_n = √n with probability 1/(2n), −√n with probability 1/(2n), and 0 with probability 1 − 1/n.

Clearly, X_n →(d) 0 = X, but

lim_{n→∞} ∥X_n − X∥_RV = lim_{n→∞} ∥X_n∥_RV = lim_{n→∞} 1 = 1,

meaning that X_n does not converge to X in mean square, and

lim_{n→∞} E[X_n²] = 1 > 0 = E[X²].

Consequently, for all n ∈ N,

MMSE(X_n | Y_n) = ∥X_n∥²_RV = 1 > 0 = MMSE(X | Y),

meaning that the MMSE is not continuous in this case:

lim_{n→∞} MMSE(X_n | Y_n) = 1 > 0 = MMSE(X | Y).

The latter further suggests that, in this example, the MMSE is lower semi-continuous (l.s.c.) but not u.s.c.

Even when X_n →(m.s.) X, the MMSE might not be continuous when a Markovian restriction of the form (1) does not hold. This is demonstrated in the following two examples.

Example II.2. Let X and Y be independent random variables such that X has bounded support and Y ∈ Z with ∥Y∥_RV < ∞. For concreteness, let Y be equiprobable Bernoulli distributed, and let X be uniformly distributed over the unit interval. Define X_n = X and

Y_n = Y + X/n

for all n ∈ N. Since X_n = X for all n ∈ N and ∥X∥_RV < ∞, X_n →(m.s.) X and X_n →(a.s.) X trivially hold. Since X is bounded, ∥X∥_RV < ∞.
Consequently,

∥Y_n∥_RV = ∥Y + X/n∥_RV ≤ ∥Y∥_RV + (1/n)∥X∥_RV < ∞.

Furthermore,

lim_{n→∞} ∥Y_n − Y∥_RV = lim_{n→∞} ∥X/n∥_RV = 0.

Hence, Y_n →(m.s.) Y; furthermore, Y_n →(a.s.) Y.

Since X ⫫ Y, MMSE(X | Y) = Var(X) = 1/12. However, since X can be perfectly estimated from Y_n from its fractional part, viz.

X = n(Y_n − ⌊Y_n⌋) a.s.,

MMSE(X_n | Y_n) = MMSE(X | Y_n) = 0 for all n ∈ N. Hence, the MMSE is not continuous in this example:

lim_{n→∞} MMSE(X_n | Y_n) = 0 < 1/12 = MMSE(X | Y).

In particular, the MMSE is u.s.c. but not l.s.c. in this example. Note further that the Markovian relation (1) does not hold in this example.

Example II.3. Let X and N be independent Rademacher RVs, and Y = X + N. In particular, Var(X) = 1 < ∞. Let X_n = (n/(n+1)) X and Y_n = X_n + N for all n ∈ N. Clearly, (X_n, Y_n) →(a.s.) (X, Y) and

lim_{n→∞} E[X_n²] = lim_{n→∞} (n/(n+1))² E[X²] = E[X²],

lim_{n→∞} E[Y_n²] = lim_{n→∞} E[X_n²] + E[N²] = E[X²] + E[N²] = E[Y²].

Hence, (X_n, Y_n) →(m.s.) (X, Y) (see Theorems A.1 and A.2). Since X_n can be perfectly estimated from Y_n via its sign, viz.

X_n = Y_n − sign{Y_n} a.s.,

MMSE(X_n | Y_n) = 0 for all n ∈ N. However, X cannot be perfectly estimated from Y = X + N. In fact, E[X | Y] = Y/2 and

MMSE(X | Y) = E[((X − N)/2)²] = 1/2.

Hence, the MMSE is not continuous:

lim_{n→∞} MMSE(X_n | Y_n) = 0 < 1/2 = MMSE(X | Y).

Again, the latter suggests that the MMSE is u.s.c. but not l.s.c. in this example. And again, we note that the Markovian relation (1) does not hold in this example.
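Example II.3 can also be checked numerically. The following sketch (our illustration; the sample size m and index n are arbitrary choices) verifies the perfect recovery X_n = Y_n − sign{Y_n} and Monte Carlo-estimates MMSE(X | Y) ≈ 1/2 in the limit.

```python
import numpy as np

# Numerical check of Example II.3 (illustrative; m and n are arbitrary choices).
rng = np.random.default_rng(1)
m, n = 200_000, 10
X = rng.choice([-1.0, 1.0], size=m)          # Rademacher parameter
N = rng.choice([-1.0, 1.0], size=m)          # independent Rademacher noise
Xn = n / (n + 1) * X
Yn = Xn + N

# Since |Xn| < 1, sign(Yn) = N almost surely, so Xn is perfectly recoverable:
Xn_hat = Yn - np.sign(Yn)
assert np.allclose(Xn_hat, Xn)               # MMSE(Xn | Yn) = 0

# In the limit, Y = X + N reveals X only when Y = +-2; E[X | Y] = Y / 2.
Y = X + N
mse = np.mean((X - Y / 2) ** 2)              # Monte Carlo estimate of MMSE(X | Y)
assert abs(mse - 0.5) < 0.01                 # theoretical value is 1/2
```

The gap between the first assertion (zero error for every finite n) and the second (error 1/2 in the limit) is exactly the discontinuity exhibited by the example.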
While the MMSE is not continuous or even semi-continuous in general, it was proved by Wu and Verdú [22, Theorem 3] to be u.s.c. if the supports of the RVs X and {X_n}∞_{n=1} are uniformly bounded.

Theorem II.2 ([22, Theorem 3]). Let (X, Y) be a pair of RVs and let {(X_n, Y_n)}∞_{n=1} be a sequence of pairs of RVs such that
• (X_n, Y_n) →(d) (X, Y);
• P(∥X∥ ≤ m) = 1 and P(∥X_n∥ ≤ m) = 1 for all n ∈ N, for some m ∈ R.
Then, the MMSE
is u.s.c. in distribution:

lim sup_{n→∞} MMSE(X_n | Y_n) ≤ MMSE(X | Y).    (3)

When restricting the possible statistical relations, the following continuity results have been proved.

Theorem II.3 ([22, Theorem 4]). Let X and N be a pair of RVs of the same length and let {(X_n, Y_n)}∞_{n=1} be a sequence of pairs of RVs, such that
• Y = X + N, and Y_n = X_n + N for all n ∈ N;
• N ⫫ X, {X_n}∞_{n=1};
• ∥X∥_RV, ∥N∥_RV < ∞;
• X_n →(d) X.
Then,
• the MMSE is u.s.c. (3) in distribution;
• in addition, if N has a probability density function (PDF) that is bounded and continuous, then the MMSE is continuous in distribution:

lim_{n→∞} MMSE(X_n | Y_n) = MMSE(X | Y).

Unfortunately, the result above is limited to additive-noise channels where the noise N has a bounded and continuous PDF. In particular, it does not guarantee MMSE continuity, e.g., for noises with continuous uniform or arcsine distributions, or noises whose distribution contains discrete or singular behavior (recall Lebesgue's decomposition theorem [31, Chapter 2, Section 2.3]).

Remark II.1. Furthermore, as indicated in [22], since

MMSE(X_n | Y_n) = inf_{g1} ∥X_n − g1(Y_n)∥²_RV
               = inf_{g1} ∥X_n − Y_n + Y_n − g1(Y_n)∥²_RV
               = inf_{g2} ∥N − g2(Y_n)∥²_RV
               = MMSE(N | Y_n),

the setting of Theorem II.3 can be viewed as an estimation problem of a fixed parameter N from Y_n, where Y_n is the output of an additive-noise channel with noise X_n, and where Y_n →(d) Y. Since N ⫫ X, {X_n}∞_{n=1}, this means further that (N, X_n, Y_n) →(d) (N, X, Y).

The framework where a common parameter X passes through a sequence of channels, resulting in a sequence of outputs {Y_n}∞_{n=1}, was also studied by Yüksel and Linder [23, Theorem 3.2] and Hogeboom-Burr and Yüksel [26], [27] (see also [25, Chapter 8.3]). The following theorem and remarks summarize relevant results about continuity in distribution.

Theorem II.4 ([27], [25, Theorem 8.3.4]). Let c: 𝒳 × 𝒳̂ → R be a cost function, X ∈ 𝒳 be an RV, and {Y_n ∈ 𝒴}∞_{n=1} be a sequence of RVs such that
1) c is continuous and bounded;
2) 𝒳 is a convex set;
3) (X, Y_n) →(d) (X, Y);
4) X d⊸– Y d⊸– Y_n for all n ∈ N;
5) X d⊸– Y_{n+1} d⊸– Y_n for all n ∈ N.
Then,

lim_{n→∞} inf_{g: 𝒴 → 𝒳̂} E[c(X, g(Y_n))] = inf_{g: 𝒴 → 𝒳̂} E[c(X, g(Y))].

Requirement 4, X d⊸– Y d⊸– Y_n, means that the channel from X to Y_n is stochastically degraded with respect to the channel from X to Y. Equivalently, this requirement means that there exist probabilistically identical channels to these two channels such that, for the same input X, their outputs satisfy (1). See Appendix A for further details about stochastic degradedness.

Remark II.2. For uniformly bounded RVs X, Y, and {Y_n}∞_{n=1}, Theorem II.4 can be readily applied to the MMSE by selecting a quadratic cost function.

Remark II.3. When only requirements 1–3 hold, Yüksel and Linder [23, Theorem 3.2] (see also [25, Theorem 8.3.3]) proved that u.s.c. in distribution holds:

lim sup_{n→∞} inf_{g: 𝒴 → 𝒳̂} E[c(X, g(Y_n))] ≤ inf_{g: 𝒴 → 𝒳̂} E[c(X, g(Y))].

This is subsumed by the result of Theorem II.2 for a quadratic cost function c and bounded RVs.

While Theorem II.4 extends the continuity guarantees beyond the scope of the additive-noise channels of Theorem II.3, it is limited to bounded RVs and nested garbling of {Y_n}∞_{n=1} and Y (see also Figure 1):

X d⊸– Y d⊸– ⋯ d⊸– Y_{n+1} d⊸– ⋯ d⊸– Y_2 d⊸– Y_1.    (4)

This is demonstrated by the following additive-noise example.

Example II.4. Let X and N be independent continuous random variables, uniformly distributed over the interval [−√3, √3]. Let Y_n = X + N/n and Y = X. Note that the second moments are all finite: ∥X∥_RV = ∥Y∥_RV = 1 and ∥Y_n∥²_RV = 1 + n^{−2} ≤ 2 for all n ∈ N. Furthermore,

lim_{n→∞} ∥Y_n − Y∥_RV = lim_{n→∞} ∥N∥_RV / n = 0.

Therefore, Y_n →(m.s.) Y = X,
meaning that

lim_{n→∞} MMSE(X | Y_n) = 0 = MMSE(X | Y)

by the squeeze theorem:

0 ≤ lim_{n→∞} MMSE(X | Y_n) ≤ lim_{n→∞} ∥X − Y_n∥²_RV = 0.

However, since requirement 5 in Theorem II.4 does not hold for any n ∈ N, this theorem cannot be applied in this case. The conditions of Theorem II.3 (recall Remark II.1) do not hold either, since the uniform distribution is not continuous at its support boundaries.

Remark II.4. Replacing the converging noise sequence {N/n}∞_{n=1} with certain other converging uniform noise sequences, e.g., {N/2^n}∞_{n=1}, may satisfy (4). However, taking the distribution of N to be triangular or arcsine would violate (4). The latter choice also violates the boundedness condition of Theorem II.3.

While these results provide guarantees for the continuity of the MMSE in certain cases, their scope remains limited. In Sections III and IV, we provide guarantees for the continuity of the MMSE under a larger framework. We further supplement these results by establishing continuity in distribution of the MMSE under linear estimation in Section V.

III. MMSE MARKOVIAN CONTINUITY IN PROBABILITY

In practical scenarios, the deviation of Y_n from Y does not carry extra information about the nominal parameter X beyond the information provided by the measurement Y. More precisely, a Markov relation X ⊸– Y ⊸– Y_n, as in (1), holds for all n ∈ N. This Markovian restriction, which is depicted in Figure 2, excludes Examples II.2 and II.3 but holds for Example II.4. We therefore propose the following new sense of stochastic convergence.

Definition III.1. A sequence of pairs of RVs {(X_n, Y_n)}∞_{n=1} Markov converges in probability to a pair of RVs (X, Y) if

X_n →(p) X,  Y_n →(p) Y,    (5)

and the Markov relation (1) holds for all n ∈ N. We denote this convergence by (X_n, Y_n) →(M.p.) (X, Y).

Remark III.1. The separate convergences in (5) are equivalent to (see Lemma A.1)

(X_n, Y_n) →(p) (X, Y),

as long as X_n and X (and hence also Y_n and Y) are of the same size for all n ∈ N.
However, while (X_n, Y_n) →(p) (X, Y) is equivalent to (Y_n, X_n) →(p) (Y, X), the convergences (X_n, Y_n) →(M.p.) (X, Y) and (Y_n, X_n) →(M.p.) (Y, X) are not equivalent.

Remark III.2. Markovian variants of a.s. and m.s. convergences can be similarly defined.

Viewing this problem in communication-channel terms, to define a proper Markovian ordering, a common input X needs to be assumed, resulting in channel outputs Y and {Y_n}∞_{n=1} that satisfy the Markovian relation (1). Hence, convergence in probability is assumed in Definition III.1 in order to relate the channel from X to Y to that from X_n to Y_n. In the case of a common parameter, X =d X_1 =d X_2 =d ⋯ =d X_n, the convergence in probability can be replaced with a convergence in distribution, with the Markovity property replaced by stochastic degradedness; see Section IV.

Remark III.3. Under the Markovian restriction (1), Y_n may still carry extra information about X_n (but not about X) beyond that of Y. For example, the case of the same diminishing noise being added to both Y and X satisfies (1):

X_n = X + Z/n,  Y_n = Y + Z/n,

for all n ∈ N with Z ⫫ (X, Y); see Corollary III.1 in the sequel.

The following theorem, proved in the sequel, states that, under the Markovian restriction (1) and assuming a converging second moment of the parameter, the MMSE is continuous. We term such continuity Markovian continuity.

Theorem III.1. Let (X, Y) be a pair of RVs and let {(X_n, Y_n)}∞_{n=1} be a sequence
of pairs of RVs such that (X_n, Y_n) →(M.p.) (X, Y), and

lim_{n→∞} E[X_n²] = E[X²].    (6)

Then, E[X_n | Y_n] →(m.s.) E[X | Y] and the MMSE is Markov continuous in probability:

lim_{n→∞} MMSE(X_n | Y_n) = MMSE(X | Y).

Remark III.4. Condition (6) can be replaced by X_n →(m.s.) X or by {X_n²}∞_{n=1} being uniformly integrable (u.i.); see Theorem A.2 for details.

Remark III.5. When Y is deterministic, the Markov condition (1) reduces to Y_n ⫫ X for all n ∈ N. This is the case in Example II.2 with Y = 0.

Sequences that comply with the Markov restriction (1) and the converging-second-moment restriction of Theorem III.1 include corruption by independent additive noises of decreasing strength and floating-point representations with increasing machine precision. This is summarized in the following two corollaries, which are proved in Appendix E.

Corollary III.1 (Additive-noise effect). Let X, M, N be RVs such that ∥X∥_RV, ∥N∥_RV < ∞; M ⫫ (X, Y); X, N ∈ R^k for k ∈ N; and Y, M ∈ R^m for m ∈ N. Then,

lim_{(λ,γ)→(0,0)} MMSE(X + γN | Y + λM) = MMSE(X | Y).    (7)

Corollary III.2 (Machine-precision effect). Let X and Y be two RVs such that ∥X∥_RV < ∞. Define ⌊x⌋_a ≜ ⌊x/a⌋ · a. Then,

lim_{(λ,γ)→(0,0)} MMSE(⌊X⌋_γ | ⌊Y⌋_λ) = MMSE(X | Y).

To prove Theorem III.1, we will first prove two special cases, which are of interest in their own right.

Lemma III.1. Let {(X_n, Y_n)}∞_{n=1} be a sequence of pairs of RVs and let X_∞ be an RV such that ∥X_∞∥_RV < ∞ and X_n →(m.s.) X_∞. Then, E[X_n | Y_n] →(m.s.) E[X_∞ | Y_n].

Proof: Denote

X̂_{∞|n} ≜ E[X_∞ | Y_n],  X̂_{n|n} ≜ E[X_n | Y_n],
X̆_{∞|n} ≜ X_∞ − X̂_{∞|n},  X̆_{n|n} ≜ X_n − X̂_{n|n}.
Then,

∥X_∞ − X_n∥²_RV = ∥X̂_{∞|n} − X̂_{n|n}∥²_RV + ∥X̆_{∞|n} − X̆_{n|n}∥²_RV + 2⟨X̂_{∞|n} − X̂_{n|n}, X̆_{∞|n} − X̆_{n|n}⟩_RV
               = ∥X̂_{∞|n} − X̂_{n|n}∥²_RV + ∥X̆_{∞|n} − X̆_{n|n}∥²_RV,

where the second step follows from the orthogonality principle of MMSE estimation [30, Chapter 9.1.5], [28, Chapter 8.6]. Since X_n →(m.s.) X_∞, both

lim_{n→∞} ∥X̂_{n|n} − X̂_{∞|n}∥_RV = 0 and lim_{n→∞} ∥X̆_{n|n} − X̆_{∞|n}∥_RV = 0.

The proof of the following lemma is available in Appendix C.

Lemma III.2. Let X and Y be two RVs such that ∥X∥_RV < ∞. Let {Y_n}∞_{n=1} be a sequence of RVs such that (X, Y_n) →(M.p.) (X, Y). Then, E[X | Y_n] →(m.s.) E[X | Y].

We are now ready to prove Theorem III.1.

Proof of Theorem III.1: Let ε > 0, however small. By (6) and since X_n →(p) X, we have X_n →(m.s.) X (see Theorem A.2). Then, by Lemmata III.1 and III.2, there exists n_0 ∈ N such that, for all n > n_0,

∥E[X | Y_n] − E[X_n | Y_n]∥_RV < ε,
∥E[X | Y] − E[X | Y_n]∥_RV < ε.

Hence, by the triangle (Minkowski) inequality,

∥E[X | Y] − E[X_n | Y_n]∥_RV ≤ ∥E[X | Y_n] − E[X_n | Y_n]∥_RV + ∥E[X | Y] − E[X | Y_n]∥_RV < 2ε.

Since ε > 0 is arbitrary, E[X_n | Y_n] →(m.s.) E[X | Y]. Since m.s. convergence guarantees convergence of the second moment (see Theorem A.2), we further have

lim_{n→∞} ∥E[X_n | Y_n]∥_RV = ∥E[X | Y]∥_RV,
lim_{n→∞} ∥X_n∥_RV = ∥X∥_RV.

Then, using the standard formula of the MMSE (see Theorem II.1), we obtain

lim_{n→∞} MMSE(X_n | Y_n) = lim_{n→∞} (∥X_n∥²_RV − ∥E[X_n | Y_n]∥²_RV)
                          = ∥X∥²_RV − ∥E[X | Y]∥²_RV
                          = MMSE(X | Y),

which concludes the proof.

IV. MMSE CONTINUITY IN DISTRIBUTION

In this section, we present results regarding continuity properties in distribution of the MMSE.
We first present a result about the semi-continuity in distribution of the MMSE, which replaces the bounded-support requirement of Theorem II.2 with a relaxed requirement of convergence of the second moment.

Theorem IV.1. Let (X, Y) be a pair of RVs and let {(X_n, Y_n)}∞_{n=1} be a sequence of pairs of RVs such that
• (X_n, Y_n) →(d) (X, Y);
• lim_{n→∞} E[X_n²] = E[X²].
Then, the MMSE is u.s.c. in distribution:

lim sup_{n→∞} MMSE(X_n | Y_n) ≤ MMSE(X | Y).

The proof of Theorem IV.1 is available in Appendix D.

Recalling Remark III.2, for the case of a common parameter distribution, the Markovity requirement and the convergence-in-probability requirement of Theorem III.1 can be replaced with a stochastic-degradedness requirement and a convergence-in-distribution requirement. This is stated in the following theorem,
which implies that requirement 5 of Theorem II.4, X d⊸– Y_{n+1} d⊸– Y_n, is not necessary for the continuity of the MMSE as long as requirement 4, X d⊸– Y d⊸– Y_n, continues to hold. Namely, the nested garbling requirement of Figure 1 may be replaced by the individual garbling requirement depicted in Figure 2. Since we focus on the MMSE, the cost function is taken to be quadratic: c(x, x̂) = ∥x − x̂∥². We further note that the bounded-RVs requirement of Remark II.2 is relaxed to a requirement regarding the convergence of second moments (see Remark A.2 and Theorem A.2 for more details).

Theorem IV.2. Let (X, Y) be a pair of RVs and let {Y_n}∞_{n=1} be a sequence of RVs such that
1) ∥X∥_RV < ∞;
2) (X, Y_n) →(d) (X, Y);
3) X d⊸– Y d⊸– Y_n for all n ∈ N.
Then, the MMSE is continuous in distribution:

lim_{n→∞} MMSE(X | Y_n) = MMSE(X | Y).

Proof: Since ∥X∥_RV < ∞, the second moment of the fixed sequence {X}∞_{n=1} trivially converges to that of X. Hence, we can apply Theorem IV.1 with X_n = X to attain

lim sup_{n→∞} MMSE(X | Y_n) ≤ MMSE(X | Y).    (8)

Now, since X d⊸– Y d⊸– Y_n for all n ∈ N,

MMSE(X | Y) ≤ MMSE(X | Y_n)

for all n ∈ N (see Lemma A.3 for details). Consequently,

MMSE(X | Y) ≤ lim inf_{n→∞} MMSE(X | Y_n).    (9)

Combining (8) and (9) proves the desired result.

V. LMMSE CONTINUITY IN DISTRIBUTION

In this section, we treat the continuity in distribution of the LMMSE, which is defined and characterized next [28, Chapter 8.4], [30, Chapter 9.1], [7, Chapter 3].

Definition V.1. The LMMSE in (linearly) estimating an RV X ∈ R^k from an RV Y ∈ R^m such that ∥X∥_RV, ∥Y∥_RV < ∞ is defined as

LMMSE(X | Y) ≜ inf ∥X − (AY + b)∥²_RV,

where the infimum is over all deterministic vectors b ∈ R^k and deterministic matrices A ∈ R^{k×m}.

Theorem V.1. Let X and Y be two RVs such that ∥X∥_RV, ∥Y∥_RV < ∞.
Denote

η_X ≜ E[X],  C_X ≜ E[(X − η_X)(X − η_X)^T],
η_Y ≜ E[Y],  C_Y ≜ E[(Y − η_Y)(Y − η_Y)^T],
C_{X,Y} ≜ E[(X − η_X)(Y − η_Y)^T],

and assume that C_Y is invertible.¹ Then, the LMMSE estimate X̂ of X from Y is given by

X̂ = η_X + C_{X,Y} C_Y^{−1} (Y − η_Y),

i.e., A = C_{X,Y} C_Y^{−1} and b = η_X − C_{X,Y} C_Y^{−1} η_Y in Definition V.1. The corresponding LMMSE is given as

LMMSE(X | Y) = ∥X∥²_RV − ∥X̂∥²_RV = trace{C_X − C_{X,Y} C_Y^{−1} C_{X,Y}^T}.

¹ If C_Y is not invertible, the entries of Y are linearly dependent a.s. Hence, to attain the LMMSE estimator, one may remove all the linearly dependent entries and estimate from the remaining entries without loss of performance.

The following theorem establishes the continuity of the LMMSE in distribution under adequate conditions.

Theorem V.2. Let (X, Y) be a pair of RVs and let {(X_n, Y_n)}∞_{n=1} be a sequence of pairs of RVs such that
1) (X_n, Y_n) →(d) (X, Y);
2) lim_{n→∞} E[X_n²] = E[X²] and lim_{n→∞} E[Y_n²] = E[Y²];
3) C_Y is invertible.
Then, the LMMSE is continuous in distribution:

lim_{n→∞} LMMSE(X_n | Y_n) = LMMSE(X | Y).

Requirement 2 guarantees the convergence of the means (see Theorem A.1), and of all the second-order statistics by the Cauchy–Schwarz inequality. Since the LMMSE depends only on the second-order statistics by Theorem V.1, this suffices to guarantee the continuity of the LMMSE; for a formal proof, see Appendix F.

We next revisit Examples II.1 and II.2 and introduce a new example to demonstrate the necessity of the requirements of Theorem V.2.

Example V.1. Consider the setting of Example II.1. Note that all the MMSE estimators in this example are linear, meaning that the MMSEs coincide with their corresponding LMMSEs. This demonstrates, in turn, the necessity of the
convergence of the second moment of {X_n}∞_{n=1} to that of X in requirement 2 of Theorem V.2.

Example V.2. Let X be some random variable with zero mean and unit variance. Set Y = X, X_n = X for all n ∈ N, and

Y_n = √n with probability 1/(2n), −√n with probability 1/(2n), and X with probability 1 − 1/n.

Clearly, requirements 1 and 3 of Theorem V.2 hold for all n ∈ N. Moreover,

lim_{n→∞} E[X_n²] = lim_{n→∞} 1 = 1 = E[X²].

However,

lim_{n→∞} E[Y_n²] = 2 > 1 = E[Y²].

Hence, the second part of requirement 2 of Theorem V.2 is violated. Indeed, using the standard formula for the LMMSE (see Theorem V.1) yields

lim_{n→∞} LMMSE(X_n | Y_n) = lim_{n→∞} ( E[X_n²] − (E[X_n Y_n])² / E[Y_n²] ) = 1 − 1/2 > 0 = LMMSE(X | Y),

which demonstrates the necessity of the second part of requirement 2 in Theorem V.2.

Example V.3. Consider the setting of Example II.2. By Theorem V.2, the LMMSE is continuous as long as Var(Y) > 0. The discrepancy between the continuity of the LMMSE and the discontinuity of the MMSE stems from the linearity constraint of the LMMSE estimator, as the perfect recovery of X from Y_n is non-linear. However, for Y = 0 and X such that Var(X) > 0:

LMMSE(X_n | Y_n) = 0 for all n ∈ N,
LMMSE(X | Y) = Var(X) > 0.

This, in turn, demonstrates the necessity of requirement 3 in Theorem V.2, which is violated in this case.

VI. DISCUSSION AND FUTURE WORK

This work focused on bridging the gap between the perception of practitioners that the MMSE is robust and the claim of theoreticians that the MMSE is discontinuous in general.

By introducing a Markov restriction (1) between the nominal parameter, the nominal measurement, and the converging measurement, we proved that the MMSE is in fact continuous, assuming converging second moments. Such a restriction may be of interest beyond MMSE estimation, e.g., in other inference problems.
Assuming converging second moments, we further established results on the upper semi-continuity in distribution and the continuity of the MMSE in estimating a parameter from a converging sequence of channels, under an individual statistical-degradedness assumption on each converging channel with respect to the limit channel.

Finally, we proved that the MMSE under linear estimation is continuous in distribution, assuming again converging second moments.

It would be interesting to explore under what other conditions the MMSE is continuous. Following [23], [26], [27], [32], it would also be interesting to extend the results of our work to other cost functions [3, Chapter 4], [33].

ACKNOWLEDGMENTS

The authors thank Pavel Chigansky for helpful discussions about the proof of Theorem III.1 and uniform integrability, Amir Puri for an interesting discussion about the proper formulation of Markovian continuity, and Ido Nachum for interesting discussions, specifically about Skorokhod's theorem in its general form.

APPENDIX A
BACKGROUND ON STOCHASTIC CONVERGENCES AND STOCHASTIC DEGRADEDNESS / GARBLING

A. Stochastic Convergences

We first present four standard definitions of stochastic convergence [31, Chapter 5], [34, Chapter 2].

Definition A.1 (stochastic convergences). Let {X_n}∞_{n=1} be a sequence of random vectors (RVs) in R^k, and let X be an RV, defined on the same probability space (Ω, F, P). Then,

1) Convergence in distribution. {X_n}∞_{n=1} converges in distribution to X if

lim_{n→∞} P(X_n ≤ x) = P(X ≤ x)

for every x ∈ R^k at which the cumulative distribution function of X, P(X ≤ x), is
continuous at x. We denote this convergence by X_n →(d) X.

2) Convergence in probability. {X_n}∞_{n=1} converges in probability to X if, for all ε > 0,²

lim_{n→∞} P(∥X_n − X∥ > ε) = 0.

We denote this convergence by X_n →(p) X.

² Other metrics between X_n and X can be used as well.

3) Almost-sure convergence. {X_n}∞_{n=1} converges almost surely (a.s.) to X if

P( lim_{n→∞} X_n = X ) = 1

or, equivalently, if²

P( lim_{n→∞} ∥X_n − X∥ = 0 ) = 1.

We denote this convergence by X_n →(a.s.) X.

4) Mean-square (m.s.) convergence. {X_n}∞_{n=1} converges in mean square to X if ∥X∥_RV < ∞, ∥X_n∥_RV < ∞ for all n ∈ N, and

lim_{n→∞} ∥X_n − X∥_RV = 0.

We denote this convergence by X_n →(m.s.) X.

The following is an alternative definition of convergence in distribution, also known as convergence in law or weak convergence [31, Chapter 5, Definition 1.5].

Definition A.2. Let {X_n}∞_{n=1} be a sequence of RVs in R^k, and let X be an RV in R^k. Then, {X_n}∞_{n=1} converges in distribution to X (X_n →(d) X) if

lim_{n→∞} E[f(X_n)] = E[f(X)]

for all bounded and continuous functions f: R^k → R.

The equivalence of the two definitions of convergence in distribution is often presented as part of the Portmanteau lemma [34, Chapter 2], [35, Chapters 2 and 3].

Remark A.1. While convergence in probability, in m.s., and a.s. require the RVs in {X_n}∞_{n=1} and X to be defined on the same probability space, this is not necessary for convergence in distribution.

The proof of the following lemma is available in the appendix.

Lemma A.1. Let k ∈ N. Let {X_n}∞_{n=1} be a sequence of random vectors in R^k, and let X be a random vector in R^k, defined on the same probability space (Ω, F, P). Then,
a) X_n →(p) X ⇔ X_n[i] →(p) X[i] for all i ∈ {1, 2, ..., k};
b) X_n →(a.s.) X ⇔ X_n[i] →(a.s.) X[i] for all i ∈ {1, 2, ..., k};
c) X_n →(m.s.) X ⇔ X_n[i] →(m.s.) X[i] for all i ∈ {1, 2, ..., k};
d) X_n →(d) X ⇒ X_n[i] →(d) X[i] for all i ∈ {1, 2, ..., k}.

The results in the following lemma and theorem are well known; see [31, Chapter 5, Theorems 3.1 and 5.4].

Lemma A.2. Let k ∈ N.
Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of RVs in $\mathbb{R}^k$, and let $X$ be an RV in $\mathbb{R}^k$, defined on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Then, the following relations hold:
1) $X_n \xrightarrow[n\to\infty]{a.s.} X \implies X_n \xrightarrow[n\to\infty]{p} X$;
2) $X_n \xrightarrow[n\to\infty]{m.s.} X \implies X_n \xrightarrow[n\to\infty]{p} X$;
3) $X_n \xrightarrow[n\to\infty]{p} X \implies X_n \xrightarrow[n\to\infty]{d} X$.

Lemma A.2 states that m.s. convergence guarantees convergence in probability; the opposite direction does not hold in general. However, under uniform integrability, defined next, the opposite direction holds as well.

Definition A.3 (uniform integrability [31, Chapter 4]). A sequence $\{X_n \in \mathbb{R} \mid n \in \mathbb{N}\}$ of random variables is said to be uniformly integrable (u.i.) if for every $\epsilon > 0$ there exists a constant $a > 0$ such that
$$\mathbb{E}\left[\mathbb{1}\{|X_n| > a\} \cdot |X_n|\right] \le \epsilon \quad \forall n \in \mathbb{N},$$
where $\mathbb{1}\{\cdot\}$ is the indicator function. A sequence of RVs $\{X_n \in \mathbb{R}^k \mid n \in \mathbb{N}\}$ for $k \in \mathbb{N}$ is said to be u.i. if $\{X_n[i] \mid n \in \mathbb{N}\}$ is u.i. for all $i \in \{1,2,\dots,k\}$.

Remark A.2. An almost-surely bounded sequence $\{X_n\}_{n=1}^{\infty}$, viz. a sequence satisfying $\mathbb{P}(\|X_n\| \le m) = 1$ for all $n \in \mathbb{N}$ for some $m \in \mathbb{R}$, is trivially u.i.

Theorem A.1 (see [35, Section 3.4]). Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of RVs, and let $X$ be an RV, such that $X_n \xrightarrow[n\to\infty]{d} X$. Then, the following statements are equivalent:
• $\|X\|_{RV} < \infty$, $\|X_n\|_{RV} < \infty$ for all $n \in \mathbb{N}$, and $\lim_{n\to\infty} \mathbb{E}[X_n^2] = \mathbb{E}[X^2]$;
• $\{X_n^2\}_{n=1}^{\infty}$ is u.i.
Furthermore, if one of the above statements holds, then $\lim_{n\to\infty} \mathbb{E}[X_n] = \mathbb{E}[X]$.

Theorem A.2 (see [31, Chapter 5, Section 5.2, Th. 5.4]). Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of RVs, and let $X$ be an RV. Then, the following statements are equivalent:
• $X_n \xrightarrow[n\to\infty]{m.s.} X$;
• $X_n \xrightarrow[n\to\infty]{p} X$, $\|X\|_{RV} < \infty$, $\|X_n\|_{RV} < \infty$ for all $n \in \mathbb{N}$, and $\lim_{n\to\infty} \mathbb{E}[X_n^2] = \mathbb{E}[X^2]$;
• $X_n \xrightarrow[n\to\infty]{p} X$ and $\{X_n^2\}_{n=1}^{\infty}$ is u.i.
Furthermore, if one of the above statements holds, then $\lim_{n\to\infty} \mathbb{E}[X_n] = \mathbb{E}[X]$.

B. Stochastic Degradedness / Garbling

The following notion of stochastic degradedness or garbling and the accompanying theorem will be used in the derivation of the continuity-in-distribution results in Section IV. We define this notion in terms of RVs rather than in the more common terms of conditional distributions [25, Definition 7.3.1].

Definition A.4. Let $(X_1, Y_1)$ and $(X_2, Y_2)$ be two pairs of RVs. We say that $(X_2, Y_2)$ is stochastically degraded or garbled with respect to $(X_1, Y_1)$ if $X_1 \overset{d}{=} X_2$ and there exist $\tilde{X}, \tilde{Y}_1, \tilde{Y}_2$ such that
$$(X_1, Y_1) \overset{d}{=} (\tilde{X}, \tilde{Y}_1), \qquad (X_2, Y_2) \overset{d}{=} (\tilde{X}, \tilde{Y}_2),$$
and $\tilde{X} - \tilde{Y}_1 - \tilde{Y}_2$ forms a Markov chain.

This definition means that we can view the two pairs as two channels with the same input $\tilde{X}$, where the channel to the first output $\tilde{Y}_1$ is more informative than that to the second output $\tilde{Y}_2$. Since we are interested only in the marginal distributions of the pairs $(X_1, Y_1)$ and $(X_2, Y_2)$, but not in the joint distribution of the quadruple, we can specialize Definition A.4 to the following.

Definition A.5. Let $X, Y_1, Y_2$ be three RVs. We say that, given $X$, $Y_2$ is stochastically degraded or garbled with respect to $Y_1$ if there exists an RV $\tilde{Y}_1$ such that
$$(X, \tilde{Y}_1) \overset{d}{=} (X, Y_1)$$
and $X - \tilde{Y}_1 - Y_2$ forms a Markov chain. We denote this by $X \overset{d}{-} Y_1 \overset{d}{-} Y_2$.

The following simple result can be viewed as a specialization of Blackwell's informativeness theorem [25, Chapter 7.3.1] to MMSEs; see, e.g., [22, Theorem 11] for a proof.³

Lemma A.3. Let $X \overset{d}{-} Y_1 \overset{d}{-} Y_2$. Then,
$$\mathrm{MMSE}(X \mid Y_1) \le \mathrm{MMSE}(X \mid Y_2),$$
with equality if and only if $\mathbb{E}[X \mid Y_1] = \mathbb{E}[X \mid Y_2]$ a.s.

APPENDIX B
PROOF OF LEMMA A.1

a) See [34, Theorem 2.7].

b) Define the events $A_\ell = \{\lim_{n\to\infty} X_n[\ell] = X[\ell]\}$ for all $\ell \in \{1,2,\dots,k\}$ and $A = \{\lim_{n\to\infty} X_n = X\}$. Clearly, $A = \bigcap_{\ell=1}^{k} A_\ell$.
Assume first that $X_n \xrightarrow[n\to\infty]{a.s.} X$ and fix some $i \in \{1,2,\dots,k\}$. Then,
$$1 = \mathbb{P}(A) = \mathbb{P}\Big(\bigcap_{\ell=1}^{k} A_\ell\Big) \le \mathbb{P}(A_i) \le 1.$$
Thus, by the squeeze theorem, $X_n[i] \xrightarrow[n\to\infty]{a.s.} X[i]$ for all $i \in \{1,2,\dots,k\}$.

Now assume $X_n[\ell] \xrightarrow[n\to\infty]{a.s.} X[\ell]$ for all $\ell \in \{1,2,\dots,k\}$. Then,
$$1 \ge \mathbb{P}(A) = \mathbb{P}\Big(\bigcap_{\ell=1}^{k} A_\ell\Big) = 1 - \mathbb{P}\Big(\bigcup_{\ell=1}^{k} A_\ell^c\Big) \ge 1 - \sum_{\ell=1}^{k} \mathbb{P}(A_\ell^c) \ge 1.$$
Thus, by the squeeze theorem, $X_n \xrightarrow[n\to\infty]{a.s.} X$.

c) The result follows immediately by noting that
$$\|X - X_n\|_{RV}^2 = \sum_{i=1}^{k} \|X[i] - X_n[i]\|_{RV}^2.$$

³The proof of [22, Theorem 11] assumed $X - Y_1 - Y_2$, but the result and the proof remain intact for $X \overset{d}{-} Y_1 \overset{d}{-} Y_2$.

d) Follows immediately from the definition of convergence in distribution.

APPENDIX C
PROOF OF LEMMA III.2

Assume that $X$ (and hence also $X_n$) is of dimension $k \in \mathbb{N}$, and $Y$ (and hence also $Y_n$) is of dimension $m \in \mathbb{N}$. Define the functions
$$g_n(y) \triangleq \mathbb{E}[X \mid Y_n = y], \tag{13a}$$
$$g(y) \triangleq \mathbb{E}[X \mid Y = y]. \tag{13b}$$
Denote the $i$-th element (scalar-valued function) of the vector-valued function $g_n$ by $g_n[i]$ and, similarly, the $i$-th element of the vector-valued function $g$ by $g[i]$.

Since $X - Y - Y_n$ forms a Markov chain for all $n \in \mathbb{N}$,
$$\mathbb{E}[X \mid Y, Y_n] = \mathbb{E}[X \mid Y] \tag{14}$$
a.s. for all $n \in \mathbb{N}$. Furthermore,
$$g_n(Y_n) \triangleq \mathbb{E}[X \mid Y_n] \tag{15a}$$
$$= \mathbb{E}\big[\mathbb{E}[X \mid Y, Y_n] \,\big|\, Y_n\big] \tag{15b}$$
$$= \mathbb{E}\big[\mathbb{E}[X \mid Y] \,\big|\, Y_n\big] \tag{15c}$$
$$= \mathbb{E}[g(Y) \mid Y_n], \tag{15d}$$
where (15a) follows from (13a), (15b) follows from the law of total expectation, (15c) follows from (14), and (15d) follows from (13b).

Equation (15) means that $g_n$ is the MMSE estimator of $g(Y)$ given $Y_n$. In particular, $g_n[i]$ is the MMSE estimator of $g[i](Y)$ given $Y_n$.

The remainder of the proof follows similar steps to those in [22, Appendix C]. Denote by $L^2(\mathbb{R}^k)$ the set of measurable functions that are square integrable with respect to the probability measure of $Y$, i.e., the set of measurable functions $f$ that satisfy $\|f(Y)\|_{RV} < \infty$. Denote by $C_c(\mathbb{R}^k) \subset L^2(\mathbb{R}^k)$ the space of compactly supported continuous (and hence also bounded) functions on $\mathbb{R}^k$. Clearly $g[i] \in L^2(\mathbb{R}^k)$ for all $i \in \{1,2,\dots,k\}$, since
$$\|g(Y)\|_{RV} \overset{(a)}{=} \|\mathbb{E}[X \mid Y]\|_{RV} \overset{(b)}{\le} \|X\|_{RV} \overset{(c)}{<} \infty,$$
where (a) holds by the definition of $g$ (13b), (b) follows from (2b) in Theorem II.1 and the non-negativity of the MMSE, and (c) holds by the lemma assumption.

Moreover, since $C_c(\mathbb{R}^k)$ is dense in $L^2(\mathbb{R}^k)$ [36, Theorem 3.14], for any $i \in \{1,2,\dots,k\}$ and any $\epsilon > 0$ there exists a function $\hat{g}[i] \in C_c(\mathbb{R}^k)$ such that
$$\|g[i](Y) - \hat{g}[i](Y)\|_{RV} < \epsilon. \tag{16}$$

Let $i \in \{1,2,\dots,k\}$ and let $\epsilon > 0$, however small, and pick $\hat{g}[i] \in C_c(\mathbb{R}^k)$ satisfying (16). Then, there exists $n_i \in \mathbb{N}$ such that, for all $n > n_i$,
$$\|g[i](Y) - g_n[i](Y_n)\|_{RV} \le \|g[i](Y) - \hat{g}[i](Y_n)\|_{RV} \tag{17a}$$
$$\le \|g[i](Y) - \hat{g}[i](Y)\|_{RV} + \|\hat{g}[i](Y) - \hat{g}[i](Y_n)\|_{RV} \tag{17b}$$
$$< 2\epsilon, \tag{17c}$$
where (17a) follows from $g_n$ being the MMSE estimator of $g(Y)$ given $Y_n$ (15); (17b) follows from the triangle (Minkowski) inequality [31, Chapter 3.1, Theorem 2.6]; and (17c) holds by noting that the first term in (17b) is bounded as in (16).

To bound the second term in (17b) by $\epsilon$ for a large enough $n_i$, note first that
$$\hat{g}[i](Y_n) \xrightarrow[n\to\infty]{p} \hat{g}[i](Y)$$
by the continuous mapping theorem [31, Chapter 5.10, Theorem 10.3]. Consequently,
$$\hat{g}[i](Y_n) \xrightarrow[n\to\infty]{d} \hat{g}[i](Y)$$
by part 3 of Lemma A.2. Since $\hat{g}[i] \in C_c(\mathbb{R}^k)$ is continuous and bounded, so is $\hat{g}[i]^2$. Hence,
$$\lim_{n\to\infty} \mathbb{E}\big[\hat{g}[i](Y_n)^2\big] = \mathbb{E}\big[\hat{g}[i](Y)^2\big]$$
by Definition A.2. Thus, by Theorem A.2,
$$\hat{g}[i](Y_n) \xrightarrow[n\to\infty]{m.s.} \hat{g}[i](Y). \tag{18}$$

Set $n_0 = \max_{i \in \{1,2,\dots,k\}} n_i$. Then, by summing (17) over all $i \in \{1,2,\dots,k\}$, we obtain
$$\|g(Y) - g_n(Y_n)\|_{RV} < 2\sqrt{k}\,\epsilon$$
for all $n > n_0$, which proves the desired result.
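As a concrete sanity check (not from the paper): for jointly Gaussian variables the MMSE is available in closed form, so the convergence $\mathrm{MMSE}(X \mid Y_n) \to \mathrm{MMSE}(X \mid Y)$ that these continuity arguments establish can be verified directly. The sketch below is ours, with all numerical values chosen for illustration; it takes $X \sim \mathcal{N}(0,1)$, $Y = X + W$, and $Y_n = Y + (1/n)M$ with independent Gaussian noises, so the Markov condition $X - Y - Y_n$ holds.

```python
def gaussian_mmse(noise_var: float) -> float:
    """Closed-form MMSE of X ~ N(0,1) observed through Y = X + W, W ~ N(0, noise_var)."""
    return noise_var / (1.0 + noise_var)

s2 = 0.5                      # variance of W (illustrative choice)
mmse_limit = gaussian_mmse(s2)

# Y_n = Y + (1/n) M adds independent noise of variance 1/n^2,
# so the effective observation-noise variance is s2 + 1/n^2.
mmse_seq = [gaussian_mmse(s2 + 1.0 / n**2) for n in (1, 10, 100, 1000)]

# The MMSE sequence decreases toward the limiting MMSE as the extra noise vanishes.
print(mmse_limit, mmse_seq)
```

Since the perturbation only inflates the effective noise variance, the convergence here is monotone; the theorems in the paper cover far more general perturbations, where only the limit (not monotonicity) is guaranteed.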
APPENDIX D
PROOF OF THEOREM IV.1

To prove Theorem IV.1, we will use Skorokhod's representation theorem [35, Chapter 1, Section 6], stated below.

Theorem D.1. Let $X$ be an RV and let $\{X_n \mid n \in \mathbb{N}\}$ be a sequence of RVs such that $X_n \xrightarrow[n\to\infty]{d} X$. Then, there exist RVs $\{\tilde{X}_n \mid n \in \mathbb{N}\}$ and $\tilde{X}$, all defined on the same probability space, such that
$$\tilde{X}_n \overset{d}{=} X_n \quad \forall n \in \mathbb{N}, \qquad \tilde{X} \overset{d}{=} X, \qquad \tilde{X}_n \xrightarrow[n\to\infty]{a.s.} \tilde{X}.$$

Proof of Theorem IV.1: Define $g$ and $g_n$, and their elements $g[i]$ and $g_n[i]$, as in the proof of Lemma III.2. Set $\epsilon > 0$, however small. By Theorem D.1, there exist $\tilde{X}, \tilde{Y}, \{\tilde{X}_n\}_{n=1}^{\infty}$, and $\{\tilde{Y}_n\}_{n=1}^{\infty}$ that satisfy
$$(\tilde{X}_n, \tilde{Y}_n) \overset{d}{=} (X_n, Y_n) \quad \forall n \in \mathbb{N}, \tag{20a}$$
$$(\tilde{X}, \tilde{Y}) \overset{d}{=} (X, Y), \tag{20b}$$
$$(\tilde{X}_n, \tilde{Y}_n) \xrightarrow[n\to\infty]{a.s.} (\tilde{X}, \tilde{Y}). \tag{20c}$$

Furthermore, since
$$\lim_{n\to\infty} \mathbb{E}[\tilde{X}_n^2] = \lim_{n\to\infty} \mathbb{E}[X_n^2] = \mathbb{E}[X^2] = \mathbb{E}[\tilde{X}^2],$$
$\tilde{X}_n \xrightarrow[n\to\infty]{m.s.} \tilde{X}$ by Theorem A.2. Hence, by the definition of m.s. convergence (part 4 of Definition A.1), for each $i \in \{1,2,\dots,k\}$ there exists $n_i \in \mathbb{N}$ such that, for all $n > n_i$,
$$\|\tilde{X}_n[i] - \tilde{X}[i]\|_{RV} < \epsilon.$$

As in the proof of Lemma III.2, for any $i \in \{1,2,\dots,k\}$ there exists $\hat{g}[i] \in C_c(\mathbb{R}^k)$ that satisfies
$$\|g[i](\tilde{Y}) - \hat{g}[i](\tilde{Y})\|_{RV} < \epsilon. \tag{21}$$
Further, following the steps in the proof of (18) in Lemma III.2,
$$\hat{g}[i](\tilde{Y}_n) \xrightarrow[n\to\infty]{m.s.} \hat{g}[i](\tilde{Y}) \tag{22}$$
holds for all $i \in \{1,2,\dots,k\}$. To wit, for any $i \in \{1,2,\dots,k\}$ there exists $\ell_i \in \mathbb{N}$ such that
$$\|\hat{g}[i](\tilde{Y}_n) - \hat{g}[i](\tilde{Y})\|_{RV} < \epsilon \tag{23}$$
holds for all $n > \ell_i$.

We are now ready to prove the desired result. Set some $i \in \{1,2,\dots,k\}$ and $t_i = \max(n_i, \ell_i)$.
Then, for all $n > t_i$,
$$\sqrt{\mathrm{MMSE}(X_n[i] \mid Y_n)} = \sqrt{\mathrm{MMSE}(\tilde{X}_n[i] \mid \tilde{Y}_n)} \tag{24a}$$
$$= \|\tilde{X}_n[i] - g_n[i](\tilde{Y}_n)\|_{RV} \tag{24b}$$
$$\le \|\tilde{X}_n[i] - \hat{g}[i](\tilde{Y}_n)\|_{RV} \tag{24c}$$
$$\le \|\hat{g}[i](\tilde{Y}) - \hat{g}[i](\tilde{Y}_n)\|_{RV} + \|g[i](\tilde{Y}) - \hat{g}[i](\tilde{Y})\|_{RV} + \|\tilde{X}[i] - g[i](\tilde{Y})\|_{RV} + \|\tilde{X}_n[i] - \tilde{X}[i]\|_{RV} \tag{24d}$$
$$< \|\tilde{X}[i] - g[i](\tilde{Y})\|_{RV} + 3\epsilon \tag{24e}$$
$$= \sqrt{\mathrm{MMSE}(\tilde{X}[i] \mid \tilde{Y})} + 3\epsilon \tag{24f}$$
$$= \sqrt{\mathrm{MMSE}(X[i] \mid Y)} + 3\epsilon, \tag{24g}$$
where (24a) follows from (20a), (24b) and (24f) follow from Theorem 2a, (24c) follows from $g_n$ being the MMSE estimator of $g(\tilde{Y})$ given $\tilde{Y}_n$ (15), (24d) holds by the triangle (Minkowski) inequality, (24e) follows from (21)–(23), and (24g) follows from (20b).

Since (24) holds for any $\epsilon > 0$ for a sufficiently large $t_i$,
$$\limsup_{n\to\infty} \mathrm{MMSE}(X_n[i] \mid Y_n) \le \mathrm{MMSE}(X[i] \mid Y). \tag{25}$$

Consequently, the desired result follows:
$$\limsup_{n\to\infty} \mathrm{MMSE}(X_n \mid Y_n) = \limsup_{n\to\infty} \sum_{i=1}^{k} \mathrm{MMSE}(X_n[i] \mid Y_n) \tag{26a}$$
$$\le \sum_{i=1}^{k} \limsup_{n\to\infty} \mathrm{MMSE}(X_n[i] \mid Y_n) \tag{26b}$$
$$\le \sum_{i=1}^{k} \mathrm{MMSE}(X[i] \mid Y) \tag{26c}$$
$$= \mathrm{MMSE}(X \mid Y), \tag{26d}$$
where (26a) and (26d) follow from Definition II.1, (26b) follows from limit-superior arithmetic, and (26c) follows from (25).

APPENDIX E
PROOFS OF COROLLARIES III.1 AND III.2

Proof of Corollary III.1: Let $\{\gamma_n \in \mathbb{R}\}_{n=1}^{\infty}$ and $\{\lambda_n \in \mathbb{R}\}_{n=1}^{\infty}$ be sequences that converge to zero. Denote $X_n = X + \gamma_n N$ and $Y_n = Y + \lambda_n M$ for $n \in \mathbb{N}$. Since $\|X\|_{RV}, \|N\|_{RV} < \infty$,
$$\|X_n\|_{RV} \le \|X\|_{RV} + |\gamma_n| \|N\|_{RV} < \infty \quad \forall n \in \mathbb{N},$$
where the inequality follows from the triangle (Minkowski) inequality. Further,
$$\lim_{n\to\infty} \|X_n - X\|_{RV} = \lim_{n\to\infty} \|\gamma_n N\|_{RV} = \lim_{n\to\infty} |\gamma_n| \cdot \|N\|_{RV} = 0,$$
which means that $X_n \xrightarrow[n\to\infty]{m.s.} X$ by Definition A.1. Furthermore, (6) holds by Theorem A.2.

Set some $\epsilon > 0$. Then,
$$\lim_{n\to\infty} \mathbb{P}(\|Y_n - Y\| > \epsilon) = \lim_{n\to\infty} \mathbb{P}\Big(\|M\| > \frac{\epsilon}{\lambda_n}\Big) = 0,$$
meaning that $Y_n \xrightarrow[n\to\infty]{p} Y$ by Definition A.1. Hence, $(X_n, Y_n) \xrightarrow[n\to\infty]{p} (X, Y)$ by Lemmata A.1 and A.2.

Since $M$ is independent of $(X, Y)$, the Markov condition (1) $X - Y - Y_n$ holds for all $n \in \mathbb{N}$. Thus, by Theorem III.1,
$$\lim_{n\to\infty} \mathrm{MMSE}(X + \gamma_n N \mid Y + \lambda_n M) = \lim_{n\to\infty} \mathrm{MMSE}(X_n \mid Y_n) = \mathrm{MMSE}(X \mid Y).$$
Since the above holds for any sequence $\{(\lambda_n, \gamma_n)\}_{n=1}^{\infty}$ that satisfies $(\lambda_n, \gamma_n) \xrightarrow{n\to\infty} (0,0)$, the desired result (7) follows.

Proof of Corollary III.2: Let $\{\gamma_n \in \mathbb{R}\}_{n=1}^{\infty}$ and $\{\lambda_n \in \mathbb{R}\}_{n=1}^{\infty}$ be sequences that converge to zero. Denote $X_n = \lfloor X \rfloor_{\gamma_n}$ and $Y_n = \lfloor Y \rfloor_{\lambda_n}$. Since $X_n - X \in (-\gamma_n, \gamma_n)^k$, $\|X_n - X\|_{RV} < \sqrt{k}\,|\gamma_n|$. Since $\lim_{n\to\infty} \gamma_n = 0$, this implies $\lim_{n\to\infty} \|X_n - X\|_{RV} = 0$. Since $\|X\|_{RV} < \infty$, by the triangle (Minkowski) inequality,
$$\big|\, \|X_n\|_{RV} - \|X\|_{RV} \,\big| \le \|X_n - X\|_{RV}.$$
Thus, $\|X_n\|_{RV} < \infty$ for all $n \in \mathbb{N}$ as well. Hence, $X_n \xrightarrow[n\to\infty]{m.s.} X$ by Definition A.1.

Similarly, since $Y_n - Y \in (-\lambda_n, \lambda_n)^k$, $\|Y_n - Y\| < \sqrt{k}\,|\lambda_n|$. Consequently,
$$\mathbb{P}(\|Y_n - Y\| > \epsilon) = 0 \quad \text{for } \sqrt{k}\,|\lambda_n| < \epsilon.$$
Since $\lim_{n\to\infty} \lambda_n = 0$, $Y_n \xrightarrow[n\to\infty]{p} Y$ by Definition A.1. Hence, $(X_n, Y_n) \xrightarrow[n\to\infty]{p} (X, Y)$ by Lemmata A.1 and A.2.

Since $Y_n = \lfloor Y \rfloor_{\lambda_n}$ is a deterministic function of $Y$, the Markov condition (1) $X - Y - Y_n$ holds for all $n \in \mathbb{N}$. Thus, by Theorem III.1,
$$\lim_{n\to\infty} \mathrm{MMSE}\big(\lfloor X \rfloor_{\gamma_n} \,\big|\, \lfloor Y \rfloor_{\lambda_n}\big) = \lim_{n\to\infty} \mathrm{MMSE}(X_n \mid Y_n) = \mathrm{MMSE}(X \mid Y).$$
Since the above holds for any sequence $\{(\lambda_n, \gamma_n)\}_{n=1}^{\infty}$ that satisfies $(\lambda_n, \gamma_n) \xrightarrow{n\to\infty} (0,0)$, the desired result (7) follows.
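The quantization statement of Corollary III.2 can also be probed numerically. The Monte Carlo sketch below is ours, not the paper's: it assumes a jointly Gaussian pair $X \sim \mathcal{N}(0,1)$, $Y = X + W$ with $W \sim \mathcal{N}(0,1)$ (so $\mathrm{MMSE}(X \mid Y) = 1/2$), floor-quantizes both variables with a common step, and estimates the MMSE of the quantized pair from empirical conditional means; the helper name `quantized_mmse` and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
x = rng.standard_normal(N)
y = x + rng.standard_normal(N)          # MMSE(X|Y) = 1/2 for this Gaussian pair

def quantized_mmse(step: float) -> float:
    """Monte Carlo estimate of MMSE(floor(X)_step | floor(Y)_step)."""
    xq = np.floor(x / step) * step
    yq = np.floor(y / step) * step
    # Empirical conditional mean of xq within each quantization cell of yq.
    total_sq_err = 0.0
    for cell in np.unique(yq):
        vals = xq[yq == cell]
        total_sq_err += np.sum((vals - vals.mean()) ** 2)
    return total_sq_err / N

estimates = [quantized_mmse(s) for s in (1.0, 0.5, 0.1)]
print(estimates)
```

As the quantization step shrinks, the estimate approaches the unquantized MMSE of $1/2$, consistent with the corollary (up to Monte Carlo noise and the small per-cell estimation bias).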
APPENDIX F
PROOF OF THEOREM V.2

We first prove the following lemma.

Lemma F.1. Let $(X, Y)$ be a pair of random variables and let $\{(X_n, Y_n)\}_{n=1}^{\infty}$ be a sequence of pairs of random variables such that $(X_n, Y_n) \xrightarrow[n\to\infty]{d} (X, Y)$ and
$$\lim_{n\to\infty} \mathbb{E}[X_n^2] = \mathbb{E}[X^2], \qquad \lim_{n\to\infty} \mathbb{E}[Y_n^2] = \mathbb{E}[Y^2].$$
Then,
$$\lim_{n\to\infty} \mathrm{Cov}(X_n, Y_n) = \mathrm{Cov}(X, Y),$$
where $\mathrm{Cov}(A, B) \triangleq \mathbb{E}[(A - \mathbb{E}[A])(B - \mathbb{E}[B])]$ denotes the covariance between $A$ and $B$.

Proof: By Skorokhod's theorem (Theorem D.1), there exist $\tilde{X}, \tilde{Y}, \{(\tilde{X}_n, \tilde{Y}_n)\}_{n=1}^{\infty}$ such that
$$(\tilde{X}_n, \tilde{Y}_n) \overset{d}{=} (X_n, Y_n) \quad \forall n \in \mathbb{N}, \tag{27a}$$
$$(\tilde{X}, \tilde{Y}) \overset{d}{=} (X, Y), \tag{27b}$$
$$(\tilde{X}_n, \tilde{Y}_n) \xrightarrow[n\to\infty]{a.s.} (\tilde{X}, \tilde{Y}). \tag{27c}$$
Therefore,
$$|\mathrm{Cov}(X_n, Y_n) - \mathrm{Cov}(X, Y)| = \big|\mathrm{Cov}(\tilde{X}_n, \tilde{Y}_n) - \mathrm{Cov}(\tilde{X}, \tilde{Y})\big| \tag{28a}$$
$$= \big|\mathrm{Cov}(\tilde{X}, \tilde{Y}_n - \tilde{Y}) - \mathrm{Cov}(\tilde{X} - \tilde{X}_n, \tilde{Y}_n)\big| \tag{28b}$$
$$\le \sqrt{\mathrm{Var}(\tilde{X})}\sqrt{\mathrm{Var}(\tilde{Y}_n - \tilde{Y})} + \sqrt{\mathrm{Var}(\tilde{X}_n - \tilde{X})}\sqrt{\mathrm{Var}(\tilde{Y}_n)}, \tag{28c}$$
where (28a) follows from (27a) and (27b), (28b) follows from the bilinearity of the covariance, and (28c) follows from the Cauchy–Schwarz inequality.

By Theorem A.2,
$$0 \le \lim_{n\to\infty} \mathrm{Var}(\tilde{X}_n - \tilde{X}) \le \lim_{n\to\infty} \mathbb{E}\big[(\tilde{X}_n - \tilde{X})^2\big] = 0 \tag{29}$$
(and similarly for $\tilde{Y}_n$). Thus, by the squeeze theorem,
$$\lim_{n\to\infty} \mathrm{Var}(\tilde{X}_n - \tilde{X}) = 0, \qquad \lim_{n\to\infty} \mathrm{Var}(\tilde{Y}_n - \tilde{Y}) = 0. \tag{30}$$
Further, by Theorem A.2, $\lim_{n\to\infty} \mathrm{Var}(\tilde{Y}_n) = \mathrm{Var}(\tilde{Y}) < \infty$. Hence, the desired result follows from (28)–(30) and the squeeze theorem.
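A quick numerical illustration of Lemma F.1 (ours, not the paper's): take $(X_n, Y_n) = (X + A/n,\, Y + B/n)$ for fixed RVs $A, B$, so that the second moments converge and, by the lemma, the covariance converges too. Representing the variables by large samples, this can be checked directly; all names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# A fixed, correlated pair (X, Y) represented by a large sample.
x = rng.standard_normal(500_000)
y = 0.8 * x + 0.6 * rng.standard_normal(x.size)   # Cov(X, Y) = 0.8 in population
a = rng.standard_normal(x.size)                   # perturbation directions A, B
b = rng.standard_normal(x.size)

def cov(u, v):
    """Empirical covariance of two equal-length samples."""
    return float(np.mean((u - u.mean()) * (v - v.mean())))

target = cov(x, y)
# Perturbed pairs (X + A/n, Y + B/n): second moments converge, so Cov converges.
errors = [abs(cov(x + a / n, y + b / n) - target) for n in (1, 10, 100)]
print(target, errors)
```

The perturbation terms enter the covariance only through cross terms of order $1/n$, so the error shrinks roughly linearly in $1/n$, in line with the lemma's conclusion.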
We are now ready to prove Theorem V.2.

Proof of Theorem V.2: Since $C_Y$ is invertible, its determinant $\det\{C_Y\} \ne 0$ and Theorem V.1 is applicable. By Theorem A.1,
$$\lim_{n\to\infty} \mathbb{E}[X_n] = \mathbb{E}[X], \qquad \lim_{n\to\infty} \mathbb{E}[Y_n] = \mathbb{E}[Y]. \tag{31}$$
By Lemma F.1,
$$\lim_{n\to\infty} C_{Y_n} = C_Y, \qquad \lim_{n\to\infty} C_{X_n, Y_n} = C_{X,Y}. \tag{32}$$
Furthermore, since
$$C_Y^{-1} = \frac{\mathrm{adj}\{C_Y\}}{\det\{C_Y\}},$$
where $\mathrm{adj}\{C_Y\}$ denotes the adjugate of $C_Y$, we also have
$$\lim_{n\to\infty} C_{Y_n}^{-1} = C_Y^{-1}. \tag{33}$$
The desired result then follows from the LMMSE formula in Theorem V.1, Theorem A.1, and the convergence results in (31)–(33).

REFERENCES

[1] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I: Detection, Estimation, and Linear Modulation Theory. New York: John Wiley & Sons, 2004.
[2] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Inc., 1993.
[3] E. L. Lehmann and G. Casella, Theory of Point Estimation. Springer Science & Business Media, 2006.
[4] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960.
[5] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series. The MIT Press, 1964.
[6] A. N. Kolmogorov, "Stationary sequences in Hilbert space," Moscow University Mathematics Bulletin, vol. 2, no. 6, pp. 1–40, 1941.
[7] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. Prentice Hall, 2000.
[8] D. P. Bertsekas, Dynamic Programming and Optimal Control, 2nd ed. Belmont, MA, USA: Athena Scientific, 2000, vol. I.
[9] B. Hassibi, A. H. Sayed, and T. Kailath, Indefinite-Quadratic Estimation and Control: A Unified Approach to H2 and H∞ Theories. New York: SIAM Studies in Applied Mathematics, 1998.
[10] J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication, 3rd ed. New York, NY, USA: Springer-Verlag, 2004.
[11] D. Birkes and Y. Dodge, Alternative Methods of Regression. John Wiley & Sons, 2011.
[12] D. Chicco, M. J. Warrens, and G. Jurman, "The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation," PeerJ Computer Science, vol. 7, p. e623, 2021.
[13] R. S. Sutton and A. G. Barto, Introduction to Reinforcement Learning. Cambridge: MIT Press, 1998, vol. 135.
[14] N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games. Cambridge University Press, 2006.
[15] N. J. Gordon, D. J. Salmond, and A. F. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proceedings F (Radar and Signal Processing), vol. 140, no. 2, pp. 107–113, 1993.
[16] G. A. Einicke and L. B. White, "Robust extended Kalman filtering," IEEE Trans. Sig. Process., vol. 47, no. 9, pp. 2596–2599, 1999.
[17] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House, 2003.
[18] G. Revach, N. Shlezinger, X. Ni, A. L. Escoriza, R. J. Van Sloun, and Y. C. Eldar, "KalmanNet: Neural network aided Kalman filtering for partially known dynamics," IEEE Transactions on Signal Processing, vol. 70, pp. 1532–1547, 2022.
[19] A. N. Putri, C. Machbub, D. Mahayana, and E. Hidayat, "Data driven linear quadratic Gaussian control design," IEEE Access, vol. 11, pp. 24227–24237, 2023.
[20] T. Diskin, Y. C. Eldar, and A. Wiesel, "Learning to estimate without bias," IEEE Transactions on Signal Processing, vol. 71, pp. 2162–2171, 2023.
[21] F. Ait Aoudia and J. Hoydis, "Model-free training of end-to-end communication systems," IEEE Journal on Selected Areas in Communications, vol. 37, no. 11, pp. 2503–2516, 2019.
[22] Y. Wu and S. Verdú, "Functional properties of minimum mean-square error and mutual information," IEEE Trans. Inf. Theory, vol. 58, no. 3, pp. 1289–1301, 2012.
[23] S. Yüksel and T. Linder, "Optimization and convergence of observation channels in stochastic control," SIAM Journal on Control and Optimization, vol. 50, no. 2, pp. 864–887, 2012.
[24] T. F. Móri and G. J. Székely, "Four simple axioms of dependence measures," Metrika, vol. 82, no. 1, pp. 1–16, 2019.
[25] S. Yüksel and T. Başar, Stochastic Teams, Games, and Control under Information Constraints. Springer, 2024.
[26] I. Hogeboom-Burr and S. Yüksel, "Continuity properties of value functions in information structures for zero-sum and general games and stochastic teams," SIAM Journal on Control and Optimization, vol. 61, no. 2, pp. 398–414, 2023.
[27] I. Hogeboom-Burr, "Comparison and continuity properties of equilibrium values in information structures for stochastic games," Master's thesis, Department of Mathematics and Statistics, Queen's University, 2022.
[28] J. A. Gubner, Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press, 2006.
[29] A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes, 4th ed. Tata McGraw-Hill Education, 2002.
[30] H. Pishro-Nik, Introduction to Probability, Statistics, and Random Processes. Galway, Ireland: Kappa Research, 2014.
[31] A. Gut, Probability: A Graduate Course. Springer, 2005, vol. 200, no. 5.
[32] G. Baker and S. Yüksel, "Continuity and robustness to incorrect priors in estimation and control," in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), Barcelona, Spain, Jul. 2016, pp. 1999–2003.
[33] Q. Wang, Y. Ma, K. Zhao, and Y. Tian, "A comprehensive survey of loss functions in machine learning," Annals of Data Science, vol. 9, pp. 187–212, 2022.
[34] A. W. van der Vaart, Asymptotic Statistics, ser. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.
[35] P. Billingsley, Convergence of Probability Measures. John Wiley & Sons, 2013.
[36] W. Rudin, Real and Complex Analysis, 3rd ed. McGraw-Hill, Inc., 1987.
arXiv:2504.15150v1 [stat.ME] 18 Apr 2025

PREVALENCE ESTIMATION IN INFECTIOUS DISEASES WITH IMPERFECT TESTS: A COMPARISON OF FREQUENTIST AND BAYESIAN LOGISTIC REGRESSION METHODS WITH MISCLASSIFICATION CORRECTION

Jorge Mario Estrada Alvarez, Caja de Compensacion Familiar de Risaralda Salud - Comfamiliar, Pereira, PA 66003, jestradaa@comfamiliar.com
Hernan F. Garcia, SISTEMIC Research Group, Faculty of Engineering, University of Antioquia, Medellin, PA 15213, hernanf.garcia@udea.edu.co
Miguel Ángel Montero-Alonso, Department of Statistics and Operational Research, University of Granada, Granada, Spain, PA 18071, mmontero@ugr.edu
Juan de Dios Luna del Castillo, Department of Statistics and Operational Research, University of Granada, Granada, Spain, PA 18071, jdluna@ugr.edu

April 22, 2025

ABSTRACT

Accurate estimation of disease prevalence is critical for informing public health strategies. Imperfect diagnostic tests can lead to misclassification errors, such as false positives and false negatives, which may skew estimates if not properly addressed. This study compared four statistical methods for estimating the prevalence of sexually transmitted infections (STIs) and the factors associated with them, while incorporating corrections for misclassification. The methods examined were: (i) standard logistic regression with external correction using known sensitivity and specificity; (ii) the Liu et al. model, which jointly estimates false positive and false negative rates; (iii) Bayesian logistic regression with external correction; and (iv) a Bayesian model with internal correction that utilizes informative priors on diagnostic accuracy. Data were obtained from 11,452 individuals participating in a voluntary screening campaign for HIV, syphilis, and hepatitis B from 2020 to 2024. Prevalence estimates and regression coefficients were compared across models using metrics of relative change from crude estimates, confidence interval width, and coefficient variability.
The Liu model yielded higher prevalence estimates but exhibited wider intervals and convergence issues for low-prevalence conditions. In contrast, the Bayesian model with internal correction (BEC) provided intermediate estimates with the narrowest confidence intervals and more stable intercepts, reflecting an improved estimation of baseline prevalence. Prior distributions, either informative or weakly informative, contributed to regularization, particularly in small-sample or rare-event settings. Furthermore, accounting for misclassification impacts both prevalence estimates and covariate associations. Although the Liu model presents theoretical advantages compared to standard regression, its practical limitations, particularly in cases with sparse data, diminish its effectiveness. In contrast, Bayesian models that incorporate misclassification correction provide a robust and flexible alternative in low-prevalence situations, proving especially useful when accurate estimates are necessary despite diagnostic uncertainties.

Keywords: Prevalence estimation · Misclassification · Logistic regression · Infectious disease

1 Introduction

Accurate estimation of the prevalence of infectious diseases is essential for guiding the planning, implementation, and evaluation of public health strategies. Such estimates are typically based on the large-scale application of diagnostic tests, whose performance is rarely perfect. As a result, misclassification errors such as false positives and false negatives occur, introducing bias and reducing the precision of the resulting estimates [1].

Accounting for possible misclassification in the binary outcome variable is crucial, as ignoring it can lead to substantial bias in parameter estimates. Suppose misclassification is suspected, and information on the misclassification rates (sensitivity and specificity) is available or can be reasonably assumed. In that case, methods that explicitly incorporate this information should be employed to correct the estimated prevalence and avoid bias toward the null value in the resulting estimates [1].

Logistic regression is a commonly used tool for prevalence estimation that allows for covariate adjustment. Moreover, correcting for measurement error in logistic regression models improves the estimation of the prevalence of a condition while allowing the direct inclusion of relevant demographic and epidemiological covariates [2]. Nonetheless, methods that integrate both elements are required to adequately address the challenges of misclassification while retaining the flexibility to include covariates of interest. This combination enables the derivation of prevalence estimates with greater validity, particularly when multiple infectious diseases are epidemiologically correlated.

Various strategies have been developed to handle diagnostic errors in prevalence estimation, ranging from post-hoc adjustments via marginal standardization of predicted probabilities [3], which provide an extrinsic adjustment based on the model's predictions. In this same line of work, Liu et al. [4] propose a logistic regression model with correction for misclassification in the binary outcome variable. The method incorporates parameters for the false positive (FP) and false negative (FN) rates into the standard logistic regression model, thereby enabling the estimation of coefficients adjusted for these error rates. The model proposed by Liu et al. includes different variants depending on whether one or both types of error need to be estimated, assuming these errors may be either equal or distinct.

Valle et al. [5] propose a Bayesian inference approach to logistic regression for correcting bias in prevalence estimates when using imperfect tests.
However, their results primarily focus on the bias induced in the estimation of risk-factor associations rather than on the method for adjusting prevalence estimates in the presence of covariates.

This study directly addresses the prevalence estimation of diseases using imperfect tests while accounting for the need to control for epidemiological and/or demographic covariates. The objective was to compare four statistical methodologies for prevalence estimation under a practical framework for adjusted and corrected point estimates, emphasizing the importance of an integrated approach that considers diagnostic uncertainty and covariate adjustment. The proposed methodological approaches (frequentist and Bayesian) were: (i) standard logistic regression for prevalence estimation adjusted for covariates, with correction based on known test sensitivity and specificity; (ii) modified logistic regression for simultaneous estimation of test errors as proposed by Liu et al. [4]; (iii) standard logistic regression with Bayesian inference and extrinsic correction using known sensitivity and specificity; and (iv) a Bayesian logistic regression model with intrinsic correction of misclassification error via prior distribution elicitation for classification errors [5]. The methods are compared using a representative sample of 11,452 individuals simultaneously evaluated for HIV and syphilis seroprevalence.

2 Methods

This study aims to estimate the prevalence of low-prevalence sexually transmitted infections (STIs) while correcting for diagnostic misclassification. We implemented and compared four logistic regression models that differ in their handling of misclassification and statistical inference frameworks.

2.1 Crude Prevalence Estimation

Crude prevalence was estimated using the Rogan–Gladen method [6], which adjusts the observed proportion of positive tests based on the known sensitivity $(Se)$ and specificity $(Sp)$ of the diagnostic test:
$$\hat{P}_{adj} = \frac{\hat{P}_{obs} + Sp - 1}{Se + Sp - 1}. \tag{1}$$
This estimator assumes fixed, known values of Se and Sp and yields an unbiased prevalence estimate under nondifferential misclassification.

2.2 Adjusted Prevalence Estimation via Regression Models

Four logistic regression models were applied to estimate adjusted prevalence for HIV and syphilis, accounting for both covariates and test misclassification. Covariates included age, sex, test results for other infections, and key-population indicators. Table 1 defines the regression coefficients associated with each covariate.

Table 1: Coefficient assignment for covariates in the models

Covariate            Assigned Coefficient
Intercept            β0
Age (years)          β1
Sex                  β2
Syphilis test*       β3
Hepatitis B test     β4
MSM**                β5
LGTBI population     β6
Other populations    β7
Sex worker           β8

*The coefficient β3 in the syphilis prevalence models represents the result of the HIV test.
**MSM: men who have sex with men.

2.2.1 Standard Logistic Regression (STD)

The standard model estimates the probability of disease given covariates using the logistic function:
$$P(Y_i = 1 \mid X_i) = \pi(X_i) = \frac{\exp(\eta_i)}{1 + \exp(\eta_i)}, \tag{2}$$
$$\eta_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{ij}. \tag{3}$$
Parameters are estimated via maximum likelihood. Adjusted prevalence is computed by correcting the predicted values using the Rogan–Gladen formula.

2.2.2 Bayesian Logistic Regression with External Correction (BC)

In the Bayesian extension, regression coefficients are assigned prior distributions.
Following Newman et al. [7], priors are:
$$\beta_j \sim \mathcal{N}\left(0, \frac{\pi^2}{3(p+1)}\right), \quad j = 0, \dots, p. \tag{4}$$
Posterior inference is conducted via Markov chain Monte Carlo (MCMC). Point estimates are then corrected post hoc using known Se and Sp.

2.2.3 Logistic Regression with Simultaneous Estimation of Misclassification (Liu)

The Liu model [4] introduces latent variables $\tilde{Y}_i$ for true disease status, linking them to observed outcomes $Y_i$ via misclassification probabilities, following traditional assumptions on nondifferential misclassification [8, 9]:
$$P(Y_i = 1 \mid \tilde{Y}_i = 0) = r_0, \tag{5}$$
$$P(Y_i = 0 \mid \tilde{Y}_i = 1) = r_1. \tag{6}$$
The conditional probability is then:
$$P(Y_i = 1 \mid X_i) = r_0 + (1 - r_0 - r_1) \cdot \frac{1}{1 + \exp(-\eta_i)}. \tag{7}$$
Model parameters $(\beta, r_0, r_1)$ are jointly estimated via maximum likelihood.

2.2.4 Bayesian Logistic Regression with Internal Correction (BEC)

This hierarchical model [5] assumes the true outcome $\tilde{Y}_i$ follows a logistic model:
$$\tilde{Y}_i \sim \mathrm{Bernoulli}(\pi_i), \tag{8}$$
$$\pi_i = \frac{\exp(\eta_i)}{1 + \exp(\eta_i)}. \tag{9}$$
The observed test result $Y_i$ depends on $\tilde{Y}_i$ and the test characteristics:
$$P(Y_i = 1 \mid \tilde{Y}_i = 1) = Se, \tag{10}$$
$$P(Y_i = 1 \mid \tilde{Y}_i = 0) = 1 - Sp. \tag{11}$$
Equivalently, conditional on the latent status,
$$Y_i \sim \mathrm{Bernoulli}(Se) \quad \text{if } \tilde{Y}_i = 1, \tag{12}$$
$$Y_i \sim \mathrm{Bernoulli}(1 - Sp) \quad \text{if } \tilde{Y}_i = 0. \tag{13}$$
Inference is conducted via MCMC to obtain posterior distributions of prevalence and covariate effects.

2.3 Evaluation Criteria

Models were compared using the following metrics, according to the approach suggested by Morris et al. [10]:
• Relative change in adjusted prevalence with respect to crude estimates.
• Interval width of confidence/credible intervals.
• Stability of regression coefficients, measured by standard errors or posterior variance.
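The two correction directions described in Sections 2.1–2.2 can be sketched numerically. The snippet below is ours, for illustration only (the paper's analyses used R): it applies the Rogan–Gladen external correction (1) and the Liu-style forward map (7) that turns a true-prevalence logit into an observed-positive probability, using the HIV test's reported Se = 0.975 and Sp = 0.999. Note that under nondifferential misclassification the Liu error rates correspond to $r_0 = 1 - Sp$ and $r_1 = 1 - Se$.

```python
import math

def rogan_gladen(p_obs: float, se: float, sp: float) -> float:
    """External correction (1): observed positive proportion -> adjusted prevalence."""
    return (p_obs + sp - 1.0) / (se + sp - 1.0)

def observed_prob(eta: float, r0: float, r1: float) -> float:
    """Liu-style forward map (7): P(Y=1|X) = r0 + (1 - r0 - r1) * sigmoid(eta)."""
    return r0 + (1.0 - r0 - r1) / (1.0 + math.exp(-eta))

se, sp = 0.975, 0.999          # Determine HIV Early Detect, as reported in the paper
r0, r1 = 1.0 - sp, 1.0 - se    # false-positive and false-negative rates

# A true prevalence of 1% produces slightly more observed positives...
p_true = 0.01
eta = math.log(p_true / (1.0 - p_true))   # logit of the true prevalence
p_obs = observed_prob(eta, r0, r1)

# ...and the Rogan-Gladen correction recovers the true value exactly.
p_back = rogan_gladen(p_obs, se, sp)
print(p_obs, p_back)
```

This round trip makes the bias direction concrete: near-perfect specificity means false positives inflate the observed proportion only slightly, but at 1% true prevalence even that small inflation is a non-negligible relative error, which is why the correction matters for rare outcomes.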
Table 2: Descriptive characteristics of the study population (n = 11,452)

Characteristic               Value
Age (mean [SD])              34 (14)
Male sex                     5759 (50%)
HIV reactivity               155 (1.4%)
Syphilis reactivity          905 (7.9%)
Hepatitis B reactivity       8 (<0.1%)
Key population type
  MSM                        3248 (28%)
  LGTBIQ                     224 (2.0%)
  Other populations          193 (1.7%)
  General population         6574 (57%)
  Sex worker                 1213 (11%)

Continuous variables are presented as mean (standard deviation), and categorical variables as n (percentage).

2.4 Data Source and Variables

Data were obtained from 11,452 individuals screened between 2020 and 2024. Covariates included age, sex, syphilis and hepatitis B status, and population-group classification (e.g., MSM, sex worker). Tests used were the Determine HIV Early Detect (Se = 0.975, Sp = 0.999) and Bioline Syphilis 3.0 (Se = 0.964, Sp = 0.974).

2.5 Software

All analyses were performed in R version 4.4.3 [11]. Frequentist models used base glm() and optim() routines; Bayesian models were estimated using rstan and rstanarm [12, 13].

3 Results

3.1 Sample Characteristics

The study population included 11,452 individuals, whose demographic and serological characteristics are summarized in Table 2. The mean age was 34 years (standard deviation: 14). The sex distribution was balanced, with 5759 men corresponding to 50% of the sample.

Regarding serological results, 155 participants (1.4%) tested reactive for HIV, 905 (7.9%) for syphilis, and 8 (<0.1%) for hepatitis B. Concerning key population categories, the largest proportion corresponded to the general population, with 6574 individuals (57%), followed by men who have sex with men (MSM), with 3248 (28%), and sex workers, with 1213 (11%). Additionally, individuals identified as part of the LGTBIQ community (n = 224; 2.0%) and other populations (n = 193; 1.7%) were included.
3.2 Comparison of Adjusted Prevalence Estimates

Table 3 presents the adjusted prevalence estimates for HIV and syphilis under different regression models with misclassification correction. Notable variations in the estimates were observed depending on the methodological approach used.

For HIV, the crude prevalence was 1.39%. The standard model (STD), which applies an external correction based on known sensitivity and specificity, reduced this estimate to 0.73%. In contrast, the Liu model, which simultaneously estimates diagnostic error parameters, yielded a prevalence of 1.66%, representing a relative increase of 127.69% compared to the STD model.

Table 3: Adjusted prevalence estimates for HIV and syphilis according to different regression models

Disease   Model  Adjusted P  Lower   Upper   Change vs. Crude (%)  Change vs. STD (%)  CI Width
HIV       Crude  0.0139      0.0117  0.0160  –                     –                   0.0043
HIV       STD    0.0073      0.0055  0.0097  -47.42                –                   0.0042
HIV       Liu    0.0166      0.0131  0.0212  19.72                 127.69              0.0081
HIV       BC     0.0095      0.0078  0.0116  -31.57                30.15               0.0037
HIV       BEC    0.0100      0.0078  0.0127  -28.24                36.47               0.0049
Syphilis  Crude  0.0567      0.0514  0.0620  –                     –                   0.0106
Syphilis  STD    0.0277      0.0223  0.0336  -51.21                –                   0.0113
Syphilis  Liu    0.1155      0.1064  0.1252  103.64                317.39              0.0187
Syphilis  BC     0.0340      0.0288  0.0391  -40.08                22.82               0.0103
Syphilis  BEC    0.0414      0.0379  0.0452  -26.94                49.74               0.0073

Note: STD = standard model with external correction; Liu = model with simultaneous estimation of error parameters; BC = Bayesian model without internal correction; BEC = Bayesian model with
misclassification correction.

Table 4: Comparison of regression coefficients for HIV according to the implemented model

Coefficient  STD     Liu     Liu Change (%)  BC      BEC     BEC Change (%)
β0           -5.864  -4.648  -20.73          -5.418  -4.717  -12.93
β1           -0.001  0.002   -325.76         -0.001  -0.015  2798.64
β2           0.991   0.356   -64.09          0.749   0.512   -31.68
β3           1.946   1.492   -23.33          1.782   1.892   6.18
β4           1.547   1.545   -0.14           0.433   0.454   4.86
β5           0.883   0.617   -30.15          0.754   0.717   -4.96
β6           1.214   0.604   -50.23          0.664   0.513   -22.74
β7           0.673   0.428   -36.33          0.326   0.319   -2.01
β8           0.516   0.060   -88.44          0.191   -0.138  -171.98

Meanwhile, the Bayesian model with misclassification correction (BEC) estimated a prevalence of 1.00%, with a confidence interval width of 0.0049.

In the case of syphilis, the crude prevalence was 5.67%. The STD model produced a considerably lower adjusted estimate (2.77%), while the Liu model estimated a prevalence of 11.55%, more than twice the crude value and with the widest confidence interval (0.0187). In contrast, the BEC model estimated a prevalence of 4.14%, with the narrowest confidence interval (0.0073), indicating the highest relative precision among the models compared.

Overall, the Bayesian models demonstrated a better balance between misclassification error adjustment and estimate precision, particularly the BEC model, which yielded intermediate prevalence values and the narrowest confidence intervals for both outcomes.

3.3 Comparison of Regression Coefficients for HIV Models

Table 4 displays the estimated coefficients for the logistic regression model for HIV across four approaches: the standard model (STD), the Liu model (simultaneous adjustment of misclassification parameters), the Bayesian model without internal correction (BC), and the Bayesian model with misclassification correction (BEC). Relative percentage changes with respect to the standard model are also included.
The intercept showed a reduction of 20.73% in the Liu model and 12.93% in the Bayesian model with correction, suggesting differences in the estimation of the baseline prevalence level when diagnostic error is accounted for.

Overall, substantial differences were observed in the magnitude and direction of some coefficients depending on the adjustment method used. The Liu model showed notable reductions compared to the standard model across most coefficients, with a change of up to -88.44% for the coefficient associated with the sex worker population (β8) and -64.09% for male sex (β2), indicating significant adjustment when simultaneously correcting for test sensitivity and specificity.

Table 5: Comparison of standard errors for HIV model coefficients between the Liu model and the Bayesian model with correction (BEC)

Coefficient  SE (Liu)  SE (BEC)  Relative Change
β0           0.694     0.237     7.539
β1           0.004     0.006     -0.639
β2           0.229     0.228     0.008
β3           0.311     0.168     2.424
β4           0.847     0.553     1.345
β5           0.230     0.200     0.327
β6           0.415     0.397     0.089
β7           0.387     0.402     -0.073
β8           0.172     0.311     -0.695

In contrast, the Bayesian model with misclassification correction (BEC) adjusted the magnitude and direction of specific coefficients. The most striking case is β8, which changed from a positive value in the standard model (0.516) to negative (-0.138) in BEC, representing a relative change of approximately 1.7 times compared to
the BC model. Another relevant finding is observed in the coefficient for age group (β1), which changed from -0.001 in the standard model to -0.015 in BEC, representing a relative increase of approximately 3 times compared to its value in the BC model. Although its magnitude remains small, this change reflects the sensitivity of the age effect to how misclassification is modeled.

The covariate associated with syphilis reactivity (β3) maintained a positive direction in all models, with a slight decrease in BEC (1.892) relative to STD (1.946), suggesting the stability of this effect despite adjustment for measurement error.

Table 5 compares the standard errors (SE) of the estimated coefficients for the HIV model between the Liu approach and the Bayesian model with misclassification correction (BEC). Overall, notable differences were observed in estimator precision depending on the methodological approach.

The BEC model showed a marked reduction in the standard error of the intercept (from 0.694 to 0.237), corresponding to a relative precision gain of more than sevenfold, suggesting increased stability in estimating the corrected baseline prevalence. Similarly, reductions were observed in the standard error of the covariate associated with syphilis reactivity, with a 2.42-fold relative precision gain compared to the Liu model. A substantial improvement was also observed for the hepatitis B covariate.

In contrast, some covariates, such as age group and sex worker, showed increased standard errors under the BEC model, which may reflect greater uncertainty associated with their effects, possibly due to low frequency or interactions with other variables.

These findings indicate that the BEC model not only adjusts point estimates of the coefficients but also differentially improves their precision, particularly for the components most relevant to population-level inference.
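As a side note on Table 5, the reported "Relative Change" values are consistent with a change in precision (1/SE²) rather than a change in the SE itself. The short check below is my interpretation, inferred from the reported values and not stated explicitly by the authors; it approximately reproduces the column (small discrepancies come from the SEs being rounded to three decimals).

```python
# Hypothetical reading of Table 5's "Relative Change": (SE_Liu / SE_BEC)**2 - 1,
# i.e. the relative change in precision when moving from the Liu model to BEC.
se_liu = {"b0": 0.694, "b3": 0.311, "b8": 0.172}
se_bec = {"b0": 0.237, "b3": 0.168, "b8": 0.311}

def precision_gain(se_old, se_new):
    # Relative change in precision (1/SE^2) when the SE moves from se_old to se_new
    return (se_old / se_new) ** 2 - 1.0

for k in se_liu:
    print(k, round(precision_gain(se_liu[k], se_bec[k]), 3))
```

Under this reading, "a relative precision gain of more than sevenfold" for the intercept corresponds directly to the tabulated 7.539.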
This reinforces its usefulness in scenarios where diagnostic error may compromise both the validity and precision of traditional models.

3.4 Comparison of Regression Coefficients for Syphilis Models

Table 6 presents the estimated coefficients for the logistic regression model for syphilis under the different models evaluated. Differences in magnitude and direction are observed across approaches, particularly when comparing models corrected for misclassification with the standard model.

Table 6: Comparison of regression coefficients for syphilis according to the implemented model

Coefficient  STD     Liu     Liu Change (%)  BC      BEC     BEC Change (%)
β0           -4.890  -3.164  -35.30          -4.766  -5.322  11.67
β1           0.036   0.022   -38.72          0.035   0.037   5.04
β2           0.615   0.159   -74.15          0.575   0.728   26.62
β3           1.986   1.525   -23.22          1.839   2.061   12.09
β4           2.140   1.743   -18.55          0.747   0.719   -3.71
β5           0.954   0.597   -37.45          0.903   1.004   11.24
β6           1.473   0.672   -54.39          1.222   1.371   12.19
β7           1.470   0.921   -37.39          1.288   1.439   11.77
β8           1.825   0.886   -51.43          1.691   1.990   17.67

Table 7: Comparison of standard errors for syphilis model coefficients: Liu model vs. Bayesian model with correction (BEC)

Coefficient  SE (Liu)  SE (BEC)  Relative Change
β0           0.367     0.166     3.884
β1           0.004     0.003     0.627
β2           0.062     0.165     -0.857
β3           0.187     0.184     0.031
β4           0.675     0.516     0.711
β5           0.109     0.126     -0.255
β6           0.193     0.285     -0.544
β7           0.191     0.230     -0.311
β8           0.168     0.164     0.047

The Liu model showed a systematic reduction in most coefficients, with the most notable decreases observed for male sex (β2), with a change of -74.15%, and the sex worker category (β8), with a decrease of 51.43%. These results may reflect an overestimation of associations in the standard model when misclassification is not accounted for.

In contrast, the Bayesian model with misclassification correction (BEC) preserved the positive direction of most effects and even increased their magnitude compared to the standard model. For example, the coefficient for sex workers increased from 1.825 to 1.990, representing a 17.67% rise, while the effect of male sex increased by 26.62%. These changes suggest that, once misclassification is corrected, the association between these factors and syphilis infection strengthens, an insight that may be epidemiologically and programmatically significant.

Conversely, the effect of hepatitis B reactivity (β4) showed a decrease in the BEC model relative to the standard model, suggesting a potential adjustment toward a more conservative effect. The age group variable (β1) maintained a positive and stable effect across all models.

Table 7 shows moderate variations in the precision of the estimators between the two models, with a pattern differing from that observed in the analysis for HIV.

The BEC model demonstrated a substantial improvement in the intercept's precision, with a reduction in the standard error from 0.367 to 0.166, representing a relative precision gain of more than 3.8-fold. This finding suggests that the estimation of the baseline prevalence for syphilis is more stable under a Bayesian approach with misclassification correction.

However, for several covariates, such as sex, LGTBIQ population type, and other population groups, an increase in the standard error was observed under the BEC model.
For instance, the SE for sex increased from 0.062 to 0.165, which may reflect greater uncertainty in estimating this covariate's effect when diagnostic uncertainty is explicitly incorporated. A similar trend was noted for the population group categories, with relative decreases in precision ranging from -25% to -54%.

In contrast, variables such as age, hepatitis B reactivity, and sex worker status showed comparable or slightly reduced standard errors under the Bayesian model, suggesting that the precision of these estimates was maintained or even improved.

Overall, these results indicate that the Bayesian model with correction adjusts point estimates and alters precision in a covariate-specific manner. The explicit correction of misclassification errors enables a more robust estimation of the intercept (and thus of the adjusted prevalence). However, it may come with trade-offs in the precision of specific individual predictors.

4 Discussion

Several studies have demonstrated that estimating disease prevalence using imperfect diagnostic tests can lead to substantial bias if sensitivity and specificity are not adequately accounted for. The need for such correction has been widely documented in the literature, particularly in settings where prevalence varies across populations. In this regard, Li and
Fine [14] point out that sensitivity, traditionally considered an intrinsic property of the test, is a significant factor influencing prevalence estimates due to factors such as changes in the severity spectrum of cases or variations in the clinical context of diagnosis. Consequently, adjusting for sensitivity and specificity becomes methodologically relevant and necessary to ensure valid and comparable prevalence estimates across studies or heterogeneous populations.

Accurate estimation of prevalence adjusted for individual characteristics becomes especially relevant when the need is not only to obtain an overall population average but also to generate stratified or subgroup-specific estimates defined by demographic or epidemiological covariates. In this context, logistic regression models emerge as an optimal and flexible tool to address this need. Vansteelandt et al. [15] emphasize that when information on individual-level covariates is available, the extension of classical estimators through generalized linear models allows for the direct inclusion of these factors in the analysis, thereby improving the precision and validity of prevalence estimates. This approach is not only adaptable to different sampling design configurations; it is also advantageous in terms of statistical efficiency and cost-effectiveness, as it makes more effective use of the collected information, especially when combined with methodological proposals that expand its scope for estimation purposes [16].

This study compared different methodological approaches for estimating prevalence and associations in the presence of misclassification in the outcome, applied to sexually transmitted infections with low and moderate prevalence. Standard logistic regression models were evaluated, along with the model proposed by Liu et al. [4] and two Bayesian approaches, with and without explicit correction for misclassification [5].
It is important to note that although relevant changes were observed in the coefficients of some covariates across the different models, the magnitude and direction of these changes should be interpreted in light of the underlying distribution of variables in the study population. For instance, coefficients with low absolute values in the standard model may exhibit large relative variations in the corrected models without necessarily implying a substantive change in their effect.

In contrast, the intercept (β0) showed consistent changes across models, suggesting that the primary corrections are reflected in the baseline component of the model, that is, in the prevalence estimation when all covariates are set to zero. In this context, the corrected intercept can be interpreted as approximating the baseline prevalence adjusted for misclassification error, providing a more stable and robust outcome measure without additional risk factors.

These findings reinforce the notion that correcting for misclassification affects both the relative associations and the point estimate of the baseline prevalence. In particular, it was observed that adjusted estimates differ systematically from unadjusted ones. While it cannot be asserted with certainty that these corrections necessarily yield values closer to the true prevalence, it is reasonable to consider that they represent a significant methodological improvement. In this sense, corrected models may offer estimates more consistent with epidemiological assumptions and less biased due
to misclassification error, which is especially relevant in population-based studies and epidemiological surveillance systems.

The results revealed substantial differences in adjusted prevalence estimates and regression coefficients depending on the approach used. In particular, the Bayesian model with misclassification correction (BEC) demonstrated a favorable balance between precision and bias correction, producing intermediate estimates between the standard model and the Liu model while yielding the narrowest confidence intervals. This finding was consistent for both HIV and syphilis.

However, methodological and practical implications must be carefully considered when selecting the statistical approach. The model proposed by Liu et al. [4], especially in its more general formulation that simultaneously estimates the false positive (FP) and false negative (FN) rates, incorporates a substantially larger number of parameters than standard logistic regression. This increased complexity translates into higher sample size requirements to achieve estimation stability and to ensure convergence of the Fisher scoring algorithm. According to the simulations presented in their work, a minimum sample size of approximately 5,000 observations is required to obtain reliable parameter estimates, including the misclassification rates. This requirement becomes even more critical when the outcome prevalence is low, as the limited number of informative events may severely constrain the model's ability to estimate the additional parameters associated with classification error accurately.

In this study, despite having a considerably large sample size (n = 11,452), challenges in terms of precision were observed, expressed as wide confidence intervals and high standard errors for several coefficients in the Liu model.
This phenomenon was particularly evident in covariates associated with very low-prevalence conditions, such as HIV positivity, where the effective number of cases was relatively limited (1.4% of the total sample). These findings suggest that even in settings with large sample sizes, the model's performance may be compromised by the low occurrence of the event, thereby limiting its practical applicability in studies of rare infectious diseases.

Additionally, the Liu model exhibited convergence issues in scenarios where the data did not include both misclassification errors. This aspect, linked to the structural assumptions of the error model, may compromise the validity of the approach. In contrast, Bayesian models allow for the incorporation of prior information on sensitivity and specificity, improving estimation stability in smaller samples and enabling a probabilistic characterization of uncertainty. In this study, that advantage was reflected in substantial improvements in the precision of the intercept, directly related to the corrected baseline prevalence, as well as in key coefficients.

Concerning Bayesian modeling, an important methodological advantage lies in incorporating prior information on the regression coefficients, allowing for the regularization of estimates and reducing variability associated with small samples. In this regard, Newman et al. [7] proposed the formulation of informative prior distributions based on the variance of standardized coefficients and the number of covariates included in the model, a methodology implemented in this study. This approach is beneficial in
contexts where classical estimation yields unstable coefficients and large standard errors.

However, the use of non-informative priors also holds practical applicability, as demonstrated by Gordóvil-Merino et al. [17] through a simulation study. They employed weakly informative priors, obtaining excellent stability in estimating the parameters of logistic regression models applied to small samples. Although the authors acknowledged that this approach does not fully resolve issues arising from asymmetric distributions, they emphasized that including prior knowledge, even minimal, helps mitigate extreme inferences and improve model regularization. This is crucial in studies with small numbers of observations or rare events. Thus, the Bayesian approach emerges as a methodologically sound and flexible alternative for situations where the assumptions of classical models are compromised.

Our findings support using models that explicitly account for misclassification in prevalence and association studies based on imperfect diagnostic tests. While the Liu model represents an improvement over standard logistic regression by addressing systematic bias, its practical and theoretical limitations make it less suitable in specific contexts. In contrast, Bayesian models with misclassification correction, such as the one evaluated in this study, stand out as a robust and flexible alternative, particularly in low-prevalence settings, when diagnostic errors are known or partially known, and when precise estimates are needed to support public health decision-making.

References

[1] L. Magder and J. P. Hughes. Logistic regression when the outcome is measured with uncertainty. American Journal of Epidemiology, 1997.

[2] F. I. Lewis, M. Sanchez-Vazquez, and P. R. Torgerson. Association between covariates and disease occurrence in the presence of diagnostic error. Epidemiology and Infection, 2011.
[3] Clemma J. Muller and Richard F. MacLehose. Estimating predicted probabilities from logistic regression: different methods correspond to different target populations. International Journal of Epidemiology, 43(3):962–970, 2014.

[4] Haiyan Liu and Zhiyong Zhang. Logistic regression with misclassification in binary outcome variables: a method and software. Behaviormetrika, 44:447–476, 2017.

[5] D. Valle, Joanna M. Tucker Lima, J. Millar, P. Amratia, and Ubydul Haque. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches. Malaria Journal, 2015.

[6] Walter J. Rogan and Beth Gladen. Estimating prevalence from the results of a screening test. American Journal of Epidemiology, 107(1):71–76, 1978.

[7] Ken B. Newman, Cristiano Villa, and Ruth King. Logistic regression models: practical induced prior specification, 2025.

[8] J. M. Neuhaus. Bias and efficiency loss due to misclassified responses in binary regression. Biometrika, 86(4):843–855, 1999.

[9] J. A. Hausman, Jason Abrevaya, and F. M. Scott-Morton. Misclassification of the dependent variable in a discrete-response setting. Journal of Econometrics, 87(2):239–269, 1998.

[10] Tim P. Morris, Ian R. White, and Michael J. Crowther. Using simulation studies to evaluate statistical methods. Statistics in Medicine, 38(11):2074–2102, 2019.

[11] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2025.

[12] Stan Development Team. RStan: the
R interface to Stan, 2025. R package version 2.32.7.

[13] Ben Goodrich, Jonah Gabry, Imad Ali, and Sam Brilleman. rstanarm: Bayesian applied regression modeling via Stan, 2024. R package version 2.32.1.

[14] Jialiang Li and Jason P. Fine. Assessing the dependence of sensitivity and specificity on prevalence in meta-analysis. Biostatistics, 12(4):710–722, 2011.

[15] S. Vansteelandt, E. Goetghebeur, and T. Verstraeten. Regression models for disease prevalence with diagnostic tests on pools of serum samples. Biometrics, 56(4):1126–1133, 2000.

[16] Mario Villalobos-Arias. Estimation of population infected by Covid-19 using generalized logistic regression and optimization heuristics, 2020.

[17] Amalia Gordóvil-Merino, Joan Guàrdia-Olmos, and Maribel Peró-Cebollero. Estimation of logistic regression models in small samples: a simulation study using a weakly informative default prior distribution. Psicológica: Revista de metodología y psicología experimental, 33(2):345–361, 2012.
arXiv:2504.15186v1 [math.ST] 21 Apr 2025

Sum of Independent XGamma Distributions

Therrar Kadri (a,b), Rahil Omairi (b), Khaled Smaili (c) and Seifedine Kadry (d)

(a) Department of Science, Northwestern Polytechnic, Grande Prairie, Canada
(b) Department of Art and Science, Lebanese International University, Lebanon
(c) Department of Applied Mathematics, Faculty of Sciences, Lebanese University, Zahle, Lebanon
(d) Department of Applied Data Science, Noroff University College, Norway

ARTICLE HISTORY
Compiled April 22, 2025

ABSTRACT
The XGamma distribution is generated from a mixture of Exponential and Gamma distributions. In many cases the XGamma has more flexibility than the Exponential distribution. In this paper we consider the sum of independent XGamma distributions with different parameters. We show that the probability density function of this distribution is a linear combination of probability density functions of Erlang distributions. As a consequence, we find exact closed expressions for the other related statistical functions. Next, we examine the estimation of the parameters by maximum likelihood estimators. We consider an application to a real data set which shows that this model provides a better fit to the data than the sum of Exponential distributions, the Hypoexponential models.

KEYWORDS
Probability Density Function; Convolution of Random Variables; XGamma Distribution; Hypoexponential Distribution; Maximum Likelihood Estimation

1. Introduction

Some random variables play an important role in queuing and survival problems (Abdelkader, 2003; Bocharov et al., 2011). The sum of independent random variables is one of the most important and most used functions of random variables, taking a worthy part in modeling many events in various fields of science (Trivedi, 1982), Markov processes (Kadri et al., 2015), reliability and performance evaluation (Bolch et al., 2006), and spatial diversity (Khuong & Kong, 2006).
The sum of independent random variables has been studied by many authors over time, including the problem of finding its exact probability density function (PDF), cumulative distribution function (CDF), moment generating function (MGF) and other statistical parameters. Among these works we may cite (Amari & Misra, 1997) on the sum of Exponential random variables and (Moschopoulos, 1985) on the sum of Gamma random variables.

CONTACT T. Kadri. Email: therrar@hotmail.com

The introduction of new lifetime distributions, or modifications of existing ones, has become a valuable method in statistical and biomedical research. In addition, finite mixture distributions that emerge from standard distributions play a better role in modeling real-life phenomena than the standard ones; see (McLachlan & Peel, 2000). In recent years, it has turned out that many well-known distributions used to model data sets do not offer enough flexibility to provide a proper fit. For this reason, new, more flexible distributions are introduced that fit the data properly, maintaining the role of finite mixture distributions in modeling time-to-event data; see (Everitt, 2005). Sen et al. (2016) introduced a new finite mixture of Exponential and Gamma distributions
called the XGamma distribution. They list some of its statistical parameters and present an application in which this model provides an adequate fit for the data set, better than the Exponential distribution. For this reason, they consider this new finite mixture and find that the XGamma model provides a better fit to the data than the Exponential model.

In this paper, we consider the sum of n independent XGamma random variables with different scale parameters. We give an exact closed expression for the PDF of this distribution as a linear combination of Erlang densities. As a consequence, we determine closed expressions for the CDF, MGF and moment of order k of the convolution of XGamma distributions, also as linear combinations over the Erlang random variables. We discuss estimation of the parameters by maximum likelihood estimators. We observe in an application to a real data set that this model is quite flexible, can be used quite effectively in analyzing reliability models, and indicates that the Mixed Erlang distribution is a serious competitor to the others.

1.1. Some Preliminaries

In this section, we state some continuous distributions and give some of their statistical parameters.

The Erlang distribution is the sum of n independent identical Exponential random variables with parameter θ ∈ R+. We write it as E^n_θ ∼ Erl(n, θ). The probability density function of E^n_θ is given as

\[
f_{E^n_\theta}(t) = \begin{cases} \dfrac{(\theta t)^{n-1}\,\theta\, e^{-\theta t}}{(n-1)!} & \text{if } t \ge 0 \\ 0 & \text{if } t < 0 \end{cases} \tag{1}
\]

and the MGF of E^n_θ is

\[
\Phi_{E^n_\theta}(t) = \frac{\theta^n}{(\theta - t)^n}, \quad \text{for } t < \theta. \tag{2}
\]

Moreover, the Laplace transform of the PDF of E^n_θ is

\[
\mathcal{L}\left\{ f_{E^n_\theta}(t) \right\} = \frac{\theta^n}{(\theta + t)^n}, \quad \text{for } t > -\theta. \tag{3}
\]

The Hypoexponential distribution is the distribution of the sum of m ≥ 2 independent Exponential random variables with different parameters, presented by Smaili et al. (2014).
This distribution is used in modeling multiple exponential stages in series; when those stages are not distinct, it is considered a general case of the Hypoexponential distribution. This general case can be expressed as

\[
H_{\vec{k},\vec{\theta}} = \sum_{i=1}^{n} \sum_{j=1}^{k_i} X_{ij},
\]

where the X_ij are Exponential random variables with parameter θ_i, i = 1, 2, ..., n, written as H ∼ Hypoexp(θ⃗, k⃗), where θ⃗ = (θ_1, θ_2, ..., θ_n) ∈ R^n_+, k⃗ = (k_1, k_2, ..., k_n) ∈ N^n and m = Σ_{i=1}^n k_i.

The XGamma distribution is a finite mixture of Exponential and Gamma distributions presented by Sen et al. (2016). We write it as XG_θ ∼ XG(θ). The PDF of this distribution is expressed as f_{XG_θ}(t) = Σ_{i=1}^2 π_i f_i(t), where f_1(t) is the PDF of the Exponential distribution with parameter θ and f_2(t) is the PDF of the Gamma distribution with scale parameter θ and shape parameter 3, with mixing proportions π_1 = θ/(1+θ) and π_2 = 1 − π_1. Moreover, they expressed the PDF of the XGamma distribution as

\[
f_{XG_\theta}(t) = \frac{\theta^2}{1+\theta} \left(1 + \frac{\theta}{2} t^2\right) e^{-\theta t}, \tag{4}
\]

with t, θ > 0, where θ is the scale parameter. Moreover, the MGF of the XGamma distribution is given as

\[
\Phi_{XG_\theta}(t) = \frac{\theta^2 \left((\theta - t)^2 + \theta\right)}{(1+\theta)(\theta - t)^3}. \tag{5}
\]

2. The Exact Expression of the Sum of Independent XGamma Distributions

In this section, we find exact closed expressions for the PDF of the sum of n independent XGamma distributions with different scale parameters. This closed expression is formulated as a linear combination of Erlang distributions.

We suppose that the X_j are n independent XGamma distributions. Let X_j ∼ XG(θ_j) for 1 ≤ j ≤ n. From Eq. (4) the PDFs of the X_j are f_{X_j}(t) = (θ_j^2/(1+θ_j))(1 + (θ_j/2) t^2) e^{-θ_j t}.

Theorem 2.1. Let S_n be the sum of n independent
XGamma distributions, denoted S_n ∼ HypoXG(θ⃗). Then the PDF of S_n is given as

\[
f_{S_n}(t) = \sum_{i=1}^{n} \sum_{k=1}^{3} R_{ik}\, f_{Y_{ik}}(t) \tag{6}
\]

with

\[
R_{ik} = \frac{A_{ik}}{\theta_i^{4-k}} \prod_{l=1}^{n} \frac{\theta_l^2}{1+\theta_l}, \tag{7}
\]

and Y_ik ∼ Erl(4−k, θ_i). Furthermore,

\[
A_{i1} = \theta_i \prod_{j=1,\, j\neq i}^{n} \frac{(\theta_j-\theta_i)^2+\theta_j}{(\theta_j-\theta_i)^3},
\]
\[
A_{i2} = A_{i1} \sum_{j=1,\, j\neq i}^{n} \left( \frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j} - \frac{3}{\theta_j-\theta_i} \right),
\]
\[
A_{i3} = \frac{A_{i1}}{2} \left( \sum_{j=1,\, j\neq i}^{n} \left( \frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j} - \frac{3}{\theta_j-\theta_i} \right) \right)^{2} + \frac{A_{i1}}{2} \left( \frac{2}{\theta_i} + \sum_{j=1,\, j\neq i}^{n} \left( -\frac{2\left((\theta_j-\theta_i)^2-\theta_j\right)}{\left((\theta_j-\theta_i)^2+\theta_j\right)^2} + \frac{3}{(\theta_j-\theta_i)^2} \right) \right). \tag{8}
\]

Proof. Since the X_i are independent, L{f_{S_n}(t)} = ∏_{i=1}^n L{f_{X_i}(t)}. Using the fact that L{f_{X_i}(t)} = Φ_{X_i}(−t), with the MGF of the XGamma given in Eq. (5), we get

\[
\mathcal{L}\{f_{S_n}(t)\} = \prod_{i=1}^{n} \frac{\theta_i^2\left((\theta_i+t)^2+\theta_i\right)}{(1+\theta_i)(\theta_i+t)^3} = \left( \prod_{i=1}^{n} \frac{\theta_i^2}{1+\theta_i} \right) \prod_{i=1}^{n} \frac{(\theta_i+t)^2+\theta_i}{(\theta_i+t)^3}.
\]

Now by using the Heaviside theorem for repeated roots, we have

\[
\prod_{i=1}^{n} \frac{(\theta_i+t)^2+\theta_i}{(\theta_i+t)^3} = \sum_{i=1}^{n} \sum_{k=1}^{3} \frac{A_{ik}}{(\theta_i+t)^{4-k}} \tag{9}
\]

where

\[
A_{ik} = \lim_{t \to -\theta_i} \frac{1}{(k-1)!} \frac{d^{k-1}}{dt^{k-1}} \left( (t+\theta_i)^3 \prod_{j=1}^{n} \frac{(\theta_j+t)^2+\theta_j}{(\theta_j+t)^3} \right) \tag{10}
\]

for k = 1, 2, 3 and i = 1, 2, ..., n.
Next, we can write

\[
(t+\theta_i)^3 \prod_{j=1}^{n} \frac{(\theta_j+t)^2+\theta_j}{(\theta_j+t)^3} = \left((\theta_i+t)^2+\theta_i\right) \prod_{j=1,\, j\neq i}^{n} \frac{(\theta_j+t)^2+\theta_j}{(\theta_j+t)^3}.
\]

By substituting in Eq. (10), we obtain

\[
A_{ik} = \lim_{t \to -\theta_i} \frac{1}{(k-1)!} \frac{d^{k-1}}{dt^{k-1}} \left( \left((\theta_i+t)^2+\theta_i\right) \prod_{j=1,\, j\neq i}^{n} \frac{(\theta_j+t)^2+\theta_j}{(\theta_j+t)^3} \right).
\]

Next, we derive the A_ik for k = 1, 2, 3 and i = 1, 2, ..., n. We have

\[
A_{i1} = \lim_{s \to -\theta_i} \left( \left((\theta_i+s)^2+\theta_i\right) \prod_{j=1,\, j\neq i}^{n} \frac{(\theta_j+s)^2+\theta_j}{(\theta_j+s)^3} \right) = \theta_i \prod_{j=1,\, j\neq i}^{n} \frac{(\theta_j-\theta_i)^2+\theta_j}{(\theta_j-\theta_i)^3}. \tag{11}
\]

On the other hand,

\[
A_{i2} = \lim_{s \to -\theta_i} \left( (s+\theta_i)^3 \prod_{j=1}^{n} \frac{(\theta_j+s)^2+\theta_j}{(\theta_j+s)^3} \right)'. \tag{12}
\]

Let

\[
Y = (s+\theta_i)^3 \prod_{j=1}^{n} \frac{(\theta_j+s)^2+\theta_j}{(\theta_j+s)^3},
\]

so that

\[
\ln Y = 3\ln(s+\theta_i) + \sum_{j=1}^{n} \left( \ln\left((\theta_j+s)^2+\theta_j\right) - 3\ln(\theta_j+s) \right),
\]

which implies

\[
\frac{Y'}{Y} = \frac{3}{s+\theta_i} + \sum_{j=1}^{n} \left( \frac{2(\theta_j+s)}{(\theta_j+s)^2+\theta_j} - \frac{3}{s+\theta_j} \right),
\]

thus

\[
Y' = Y \left( \frac{3}{s+\theta_i} + \sum_{j=1}^{n} \left( \frac{2(\theta_j+s)}{(\theta_j+s)^2+\theta_j} - \frac{3}{s+\theta_j} \right) \right).
\]

By substituting in Eq. (12), we obtain

\[
A_{i2} = \lim_{s \to -\theta_i} Y' = A_{i1} \sum_{j=1,\, j\neq i}^{n} \left( \frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j} - \frac{3}{\theta_j-\theta_i} \right). \tag{13}
\]
Eventually,
$$A_{i3}=\lim_{s\to-\theta_i}\frac{1}{2}Y''. \quad (14)$$
In the same manner, we have
$$Y''=Y'\left(\frac{3}{s+\theta_i}+\sum_{j=1}^{n}\left(\frac{2(\theta_j+s)}{(\theta_j+s)^2+\theta_j}-\frac{3}{s+\theta_j}\right)\right)+Y\left(-\frac{3}{(s+\theta_i)^2}+\sum_{j=1}^{n}\left(-\frac{2\big((\theta_j+s)^2-\theta_j\big)}{\big((\theta_j+s)^2+\theta_j\big)^2}+\frac{3}{(s+\theta_j)^2}\right)\right).$$
Again, by substituting in Eq. (14), we get
$$A_{i3}=\lim_{s\to-\theta_i}\frac{1}{2}Y''=\frac{1}{2}A_{i2}\sum_{j=1,\,j\neq i}^{n}\left(\frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j}-\frac{3}{\theta_j-\theta_i}\right)+\frac{1}{2}A_{i1}\left[\frac{2}{\theta_i}+\sum_{j=1,\,j\neq i}^{n}\left(-\frac{2\big((\theta_j-\theta_i)^2-\theta_j\big)}{\big((\theta_j-\theta_i)^2+\theta_j\big)^2}+\frac{3}{(\theta_j-\theta_i)^2}\right)\right]$$
$$=\frac{1}{2}A_{i1}\left[\sum_{j=1,\,j\neq i}^{n}\left(\frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j}-\frac{3}{\theta_j-\theta_i}\right)\right]^2+\frac{1}{2}A_{i1}\left[\frac{2}{\theta_i}+\sum_{j=1,\,j\neq i}^{n}\left(-\frac{2\big((\theta_j-\theta_i)^2-\theta_j\big)}{\big((\theta_j-\theta_i)^2+\theta_j\big)^2}+\frac{3}{(\theta_j-\theta_i)^2}\right)\right]. \quad (15)$$
Now, by substituting Eq. (9) in the above Laplace transform, we obtain
$$\mathcal L\{f_{S_n}(t)\}=\left(\prod_{i=1}^{n}\frac{\theta_i^2}{1+\theta_i}\right)\sum_{i=1}^{n}\sum_{k=1}^{3}\frac{A_{ik}}{(\theta_i+t)^{4-k}}. \quad (16)$$
But from Eq. (3) we have $\mathcal L\{f_{E^n_\theta}(t)\}=\frac{\theta^n}{(\theta+t)^n}$, and thus we can rewrite Eq. (16) as
$$\mathcal L\{f_{S_n}(t)\}=\left(\prod_{i=1}^{n}\frac{\theta_i^2}{1+\theta_i}\right)\sum_{i=1}^{n}\sum_{k=1}^{3}\frac{A_{ik}}{\theta_i^{4-k}}\frac{\theta_i^{4-k}}{(\theta_i+t)^{4-k}}=\left(\prod_{i=1}^{n}\frac{\theta_i^2}{1+\theta_i}\right)\sum_{i=1}^{n}\sum_{k=1}^{3}\frac{A_{ik}}{\theta_i^{4-k}}\,\mathcal L\big\{f_{E^{4-k}_{\theta_i}}(t)\big\}=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\,\mathcal L\{f_{Y_{ik}}(t)\}.$$
Inverting the Laplace transform gives $f_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(t)$, where $R_{ik}=\left(\prod_{l=1}^{n}\frac{\theta_l^2}{1+\theta_l}\right)\frac{A_{ik}}{\theta_i^{4-k}}$, the $A_{ik}$ are defined in Eqs. (11), (13), (15), and $Y_{ik}\sim\mathrm{Erl}(4-k,\theta_i)$.
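The coefficients of Theorem 2.1 can be checked numerically. The sketch below (Python; the helper name `hypoxg_coeffs` is ours, for illustration) evaluates $A_{ik}$ from Eqs. (11), (13), (15) and verifies the partial-fraction identity (9) at an arbitrary point, together with the fact (proved in Lemma 2.3 below) that the weights $R_{ik}$ sum to one.

```python
import math

def hypoxg_coeffs(theta):
    """Coefficients A[i][k-1] and mixture weights R[i][k-1] (k = 1, 2, 3)
    of Theorem 2.1, computed directly from Eqs. (11), (13), (15)."""
    K = math.prod(t**2 / (1 + t) for t in theta)
    A, R = [], []
    for i, ti in enumerate(theta):
        others = [tj for j, tj in enumerate(theta) if j != i]
        a1 = ti * math.prod(((tj - ti)**2 + tj) / (tj - ti)**3 for tj in others)
        s = sum(2*(tj - ti)/((tj - ti)**2 + tj) - 3/(tj - ti) for tj in others)
        w = 2/ti + sum(-2*((tj - ti)**2 - tj)/((tj - ti)**2 + tj)**2
                       + 3/(tj - ti)**2 for tj in others)
        a3 = 0.5*a1*s**2 + 0.5*a1*w
        A.append((a1, a1*s, a3))
        R.append(tuple(K * a / ti**(4 - k)
                       for k, a in zip((1, 2, 3), (a1, a1*s, a3))))
    return A, R

theta = (1.0, 2.0)
A, R = hypoxg_coeffs(theta)

# Check the partial-fraction identity (9) at an arbitrary point t = 1:
t = 1.0
lhs = math.prod(((ti + t)**2 + ti) / (ti + t)**3 for ti in theta)
rhs = sum(A[i][k-1] / (theta[i] + t)**(4 - k)
          for i in range(len(theta)) for k in (1, 2, 3))
print(lhs, rhs)   # both equal 55/216 = 0.2546296...

# The mixture weights sum to one (Lemma 2.3).
print(sum(r for row in R for r in row))   # 1.0 (up to rounding)
```

Note that individual weights $R_{ik}$ may be negative (here $R_{12}=-14/3$), so the representation is a generalized, not a probabilistic, mixture; only their sum is constrained to one.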
In the next theorem, we give the CDF, MGF, reliability function and hazard function expressions of $S_n$.

Theorem 2.2. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$. Then we have
$$F_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}F_{Y_{ik}}(t) \quad (17)$$
where $R_{ik}$ is defined in Eq. (7) and $Y_{ik}\sim\mathrm{Erl}(4-k,\theta_i)$.

Proof. From Theorem 2.1, the PDF of $S_n$ is $f_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(t)$. On the other hand, the CDF of $S_n$ can be expressed as
$$F_{S_n}(t)=\int_0^t f_{S_n}(x)\,dx=\int_0^t\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(x)\,dx=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\int_0^t f_{Y_{ik}}(x)\,dx=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}F_{Y_{ik}}(t),$$
where $Y_{ik}\sim\mathrm{Erl}(4-k,\theta_i)$.

Lemma 2.3. Let $R_{ik}$ be the coefficients of the HypoXG distribution. Then $\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}=1$.

Proof. Let $F_{Y_{ik}}(t)$ and $F_{S_n}(t)$ be the CDFs of $Y_{ik}\sim\mathrm{Erl}(4-k,\theta_i)$ and $S_n\sim\mathrm{HypoXG}(\vec\theta)$, respectively. For any random variable $X$, $P(X<t)\to 1$ as $t\to+\infty$; thus $\lim_{t\to+\infty}F_{Y_{ik}}(t)=1$ and $\lim_{t\to+\infty}F_{S_n}(t)=1$. But from Theorem 2.2 we have $F_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}F_{Y_{ik}}(t)$, then
https://arxiv.org/abs/2504.15186v1
$$\lim_{t\to+\infty}F_{S_n}(t)=\lim_{t\to+\infty}\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}F_{Y_{ik}}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\lim_{t\to+\infty}F_{Y_{ik}}(t).$$
Hence $\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}=1$.

Theorem 2.4. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$. Then we have
$$\Phi_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\Phi_{Y_{ik}}(t). \quad (18)$$

Proof. Referring to Theorem 2.1, the PDF of $S_n$ is $f_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(t)$. Using the MGF expression, we obtain
$$\Phi_{S_n}(t)=\int_{-\infty}^{+\infty}e^{tx}f_{S_n}(x)\,dx=\int_{-\infty}^{+\infty}e^{tx}\left(\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(x)\right)dx=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\int_{-\infty}^{+\infty}e^{tx}f_{Y_{ik}}(x)\,dx,$$
but $\int_{-\infty}^{+\infty}e^{tx}f_{Y_{ik}}(x)\,dx=\Phi_{Y_{ik}}(t)$, thus $\Phi_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\Phi_{Y_{ik}}(t)$, where $\Phi_{Y_{ik}}(t)$ is defined in Eq. (2).

Theorem 2.5. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$. Then we have
$$R_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}R_{Y_{ik}}(t). \quad (19)$$

Proof. Using the expression of the reliability function, we have $R_{S_n}(t)=1-F_{S_n}(t)=1-\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}F_{Y_{ik}}(t)$; but $F_{Y_{ik}}(t)=1-R_{Y_{ik}}(t)$, so
$$R_{S_n}(t)=1-\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\big(1-R_{Y_{ik}}(t)\big)=1-\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}+\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}R_{Y_{ik}}(t).$$
By Lemma 2.3 we have $\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}=1$; hence $R_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}R_{Y_{ik}}(t)$.

Theorem 2.6. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$. Then we have
$$h_{S_n}(t)=\frac{\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}h_{Y_{ik}}(t)R_{Y_{ik}}(t)}{\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}R_{Y_{ik}}(t)}. \quad (20)$$

Proof. Referring to the expression of the hazard function, we have
$$h_{S_n}(t)=\frac{f_{S_n}(t)}{R_{S_n}(t)}=\frac{\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(t)}{\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}R_{Y_{ik}}(t)},$$
but $f_{Y_{ik}}(t)=h_{Y_{ik}}(t)R_{Y_{ik}}(t)$, thus the result follows.

Theorem 2.7. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$.
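The mixture form of the CDF in Theorem 2.2 can be validated by simulation, using the fact that the XGamma$(\theta)$ distribution is itself a mixture of $\mathrm{Exp}(\theta)$ (with weight $\theta/(1+\theta)$) and $\mathrm{Gamma}(3,\theta)$ (with weight $1/(1+\theta)$) (Sen et al., 2016), which makes $S_n$ easy to simulate. The Python sketch below (helper names illustrative) recomputes the coefficients of Theorem 2.1 inline and compares the closed-form CDF with an empirical one.

```python
import math, random

def hypoxg_cdf(t, theta):
    """CDF of S_n via Theorem 2.2: F(t) = sum_{i,k} R_ik F_{Erl(4-k, theta_i)}(t),
    with coefficients recomputed from Theorem 2.1."""
    K = math.prod(th**2/(1 + th) for th in theta)
    F = 0.0
    for i, ti in enumerate(theta):
        others = [tj for j, tj in enumerate(theta) if j != i]
        a1 = ti*math.prod(((tj - ti)**2 + tj)/(tj - ti)**3 for tj in others)
        s = sum(2*(tj - ti)/((tj - ti)**2 + tj) - 3/(tj - ti) for tj in others)
        w = 2/ti + sum(-2*((tj - ti)**2 - tj)/((tj - ti)**2 + tj)**2
                       + 3/(tj - ti)**2 for tj in others)
        for k, a in ((1, a1), (2, a1*s), (3, 0.5*a1*s**2 + 0.5*a1*w)):
            shape = 4 - k   # Erlang shape; Erlang CDF below
            erl_cdf = 1 - math.exp(-ti*t)*sum((ti*t)**m/math.factorial(m)
                                              for m in range(shape))
            F += K*a/ti**shape * erl_cdf
    return F

# Monte Carlo check: simulate S_n as a sum of independent XGamma draws.
random.seed(1)
theta = (1.0, 2.0)
def rxgamma(th):
    if random.random() < th/(1 + th):
        return random.expovariate(th)        # Exp(theta) component
    return random.gammavariate(3, 1/th)      # Gamma(3, theta) component

draws = [sum(rxgamma(th) for th in theta) for _ in range(20000)]
t0 = 3.0
emp = sum(d <= t0 for d in draws)/len(draws)
print(abs(emp - hypoxg_cdf(t0, theta)) < 0.02)   # True
```

The reliability and hazard functions of Theorems 2.5 and 2.6 follow from the same routine as $R_{S_n}(t)=1-F_{S_n}(t)$ and $h_{S_n}(t)=f_{S_n}(t)/R_{S_n}(t)$.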
Then, for any integer $m\geq1$, we have
$$E[S_n^m]=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}E[Y_{ik}^m]. \quad (21)$$

Proof. From Theorem 2.4, we have $\Phi_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\Phi_{Y_{ik}}(t)$. But the moment of order $m$ of $S_n$ is expressed as $E[S_n^m]=\frac{d^m\Phi_{S_n}(t)}{dt^m}\big|_{t=0}$, so we obtain
$$E[S_n^m]=\frac{d^m\Phi_{S_n}(t)}{dt^m}\bigg|_{t=0}=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}\frac{d^m\Phi_{Y_{ik}}(t)}{dt^m}\bigg|_{t=0}=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}E[Y_{ik}^m].$$

3. Maximum Likelihood Estimation of Parameters in the HypoXGamma Distribution

In this section, we examine the estimation of the parameters of the HypoXGamma distribution by maximum likelihood. We then consider an application to a real data set, which shows that this model provides a better fit to the data compared to the sum of Exponential distributions, i.e., the Hypoexponential model.

Theorem 3.1. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$, $\vec\theta=(\theta_1,\theta_2,\dots,\theta_n)$, and let $t_1,t_2,\dots,t_N$ be the observed values. Then the maximum likelihood estimator of $\theta_p$, for $p=1,2,\dots,n$, denoted $\widehat\theta_p$, satisfies the implicit equation
$$\frac{N(2+\theta_p)}{\theta_p(1+\theta_p)}+\sum_{u=1}^{N}\frac{\displaystyle\sum_{i=1}^{n}\frac{\partial}{\partial\theta_p}\left[e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)\right]}{\displaystyle\sum_{i=1}^{n}e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)}=0 \quad (22)$$
where $K=\prod_{i=1}^{n}\frac{\theta_i^2}{1+\theta_i}$ and $A_{i,1}$, $A_{i,2}$, $A_{i,3}$ are defined in Theorem 2.1.

Proof. Let $S_n\sim\mathrm{HypoXG}(\vec\theta)$ with $\vec\theta=(\theta_1,\theta_2,\dots,\theta_n)$, and let $t_1,t_2,\dots,t_N$ be the observed values.
Then, from Theorem 2.1, the probability density function of $S_n$ is given as
$$f_{S_n}(t)=\sum_{i=1}^{n}\sum_{k=1}^{3}R_{ik}f_{Y_{ik}}(t)=\sum_{i=1}^{n}\left(R_{i1}\frac{\theta_i^3t^2e^{-\theta_i t}}{2}+R_{i2}\theta_i^2te^{-\theta_i t}+R_{i3}\theta_ie^{-\theta_i t}\right).$$
Now the likelihood function is given by
$$L(\vec\theta)=\prod_{u=1}^{N}\sum_{i=1}^{n}\left(R_{i1}\frac{\theta_i^3t_u^2e^{-\theta_i t_u}}{2}+R_{i2}\theta_i^2t_ue^{-\theta_i t_u}+R_{i3}\theta_ie^{-\theta_i t_u}\right)$$
with
$$R_{i1}=K\frac{A_{i,1}}{\theta_i^3},\qquad R_{i2}=K\frac{A_{i,2}}{\theta_i^2},\qquad R_{i3}=K\frac{A_{i,3}}{\theta_i},$$
where $K=\prod_{i=1}^{n}\frac{\theta_i^2}{1+\theta_i}$ and the $A_{ik}$ are defined in Theorem 2.1. Then
$$L(\vec\theta)=\prod_{u=1}^{N}K\sum_{i=1}^{n}e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right),$$
thus the log-likelihood function is given by
$$l(\vec\theta)=\log L(\vec\theta)=N\log K+\sum_{u=1}^{N}\log\left[\sum_{i=1}^{n}e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)\right].$$
The nonlinear log-likelihood equations are, for $p=1,2,\dots,n$,
$$\frac{\partial l(\vec\theta)}{\partial\theta_p}=\frac{\partial}{\partial\theta_p}\big(N\log K\big)+\sum_{u=1}^{N}\frac{\partial}{\partial\theta_p}\log\left[\sum_{i=1}^{n}e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)\right]=0. \quad (23)$$
For the first term, since
$$N\log K=N\big(2\log\theta_p-\log(1+\theta_p)\big)+N\sum_{i=1,\,i\neq p}^{n}\big(2\log\theta_i-\log(1+\theta_i)\big),$$
then
$$\frac{\partial}{\partial\theta_p}\big(N\log K\big)=\frac{N(2+\theta_p)}{\theta_p(1+\theta_p)}.$$
Moreover,
$$\frac{\partial}{\partial\theta_p}\sum_{u=1}^{N}\log\left[\sum_{i=1}^{n}e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)\right]=\sum_{u=1}^{N}\frac{\displaystyle\sum_{i=1}^{n}\frac{\partial}{\partial\theta_p}\left[e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)\right]}{\displaystyle\sum_{i=1}^{n}e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)}.$$
Again, for the numerator we have
$$\frac{\partial}{\partial\theta_p}\left[e^{-\theta_i t_u}\left(A_{i,1}\frac{t_u^2}{2}+A_{i,2}t_u+A_{i,3}\right)\right]=\begin{cases}\dfrac{t_u^2}{2}\dfrac{\partial\big(A_{p,1}e^{-\theta_p t_u}\big)}{\partial\theta_p}+t_u\dfrac{\partial\big(A_{p,2}e^{-\theta_p t_u}\big)}{\partial\theta_p}+\dfrac{\partial\big(A_{p,3}e^{-\theta_p t_u}\big)}{\partial\theta_p} & \text{if }i=p,\\[8pt] e^{-\theta_i t_u}\left(\dfrac{t_u^2}{2}\dfrac{\partial A_{i,1}}{\partial\theta_p}+t_u\dfrac{\partial A_{i,2}}{\partial\theta_p}+\dfrac{\partial A_{i,3}}{\partial\theta_p}\right) & \text{if }i\neq p.\end{cases} \quad (24)$$
Then, if $i=p$, we have from Eq. (11) that $A_{p,1}=\theta_p\prod_{j=1,\,j\neq p}^{n}\frac{(\theta_j-\theta_p)^2+\theta_j}{(\theta_j-\theta_p)^3}$, so
$$\ln A_{p,1}=\ln\theta_p+\sum_{j=1,\,j\neq p}^{n}\big[\ln\big((\theta_j-\theta_p)^2+\theta_j\big)-3\ln(\theta_j-\theta_p)\big],$$
thus
$$\frac{\partial A_{p,1}}{\partial\theta_p}=A_{p,1}\left(\frac{1}{\theta_p}+\sum_{j=1,\,j\neq p}^{n}\frac{(\theta_j-\theta_p)^2+3\theta_j}{\big((\theta_j-\theta_p)^2+\theta_j\big)(\theta_j-\theta_p)}\right),$$
and therefore
$$\frac{\partial\big(A_{p,1}e^{-\theta_p t_u}\big)}{\partial\theta_p}=A_{p,1}e^{-\theta_p t_u}\left(-t_u+\frac{1}{\theta_p}+\sum_{j=1,\,j\neq p}^{n}\frac{(\theta_j-\theta_p)^2+3\theta_j}{\big((\theta_j-\theta_p)^2+\theta_j\big)(\theta_j-\theta_p)}\right).$$
In the same manner, writing $D_j=(\theta_j-\theta_p)^2+\theta_j$ for brevity, we get
$$\frac{\partial\big(A_{p,2}e^{-\theta_p t_u}\big)}{\partial\theta_p}=A_{p,2}e^{-\theta_p t_u}\big(-t_u+V(\theta_p)\big)$$
where
$$V(\theta_p)=\frac{1}{\theta_p}+\sum_{j\neq p}\frac{(\theta_j-\theta_p)^2+3\theta_j}{D_j(\theta_j-\theta_p)}+\frac{\displaystyle\sum_{j\neq p}\left(\frac{4(\theta_j-\theta_p)^2}{D_j^2}-\frac{2}{D_j}-\frac{3}{(\theta_j-\theta_p)^2}\right)}{\displaystyle\sum_{j\neq p}\left(\frac{2(\theta_j-\theta_p)}{D_j}-\frac{3}{\theta_j-\theta_p}\right)},$$
and
$$\frac{\partial\big(A_{p,3}e^{-\theta_p t_u}\big)}{\partial\theta_p}=A_{p,3}e^{-\theta_p t_u}\big(-t_u+G(\theta_p)\big)$$
where
$$G(\theta_p)=\frac{1}{\theta_p}+\sum_{j\neq p}\frac{(\theta_j-\theta_p)^2+3\theta_j}{D_j(\theta_j-\theta_p)}+\frac{\displaystyle 2\sum_{j\neq p}\left(\frac{4(\theta_j-\theta_p)^2}{D_j^2}-\frac{2}{D_j}-\frac{3}{(\theta_j-\theta_p)^2}\right)\sum_{j\neq p}\left(\frac{2(\theta_j-\theta_p)}{D_j}-\frac{3}{\theta_j-\theta_p}\right)-\frac{2}{\theta_p^2}+\sum_{j\neq p}\left(-\frac{8\big((\theta_j-\theta_p)^2-\theta_j\big)(\theta_j-\theta_p)}{D_j^3}+\frac{4(\theta_j-\theta_p)}{D_j^2}+\frac{6}{(\theta_j-\theta_p)^3}\right)}{\displaystyle\left[\sum_{j\neq p}\left(\frac{2(\theta_j-\theta_p)}{D_j}-\frac{3}{\theta_j-\theta_p}\right)\right]^2+\frac{2}{\theta_p}+\sum_{j\neq p}\left(-\frac{2\big((\theta_j-\theta_p)^2-\theta_j\big)}{D_j^2}+\frac{3}{(\theta_j-\theta_p)^2}\right)}.$$
Now, if $i\neq p$, we have
$$A_{i,1}=\theta_i\frac{(\theta_p-\theta_i)^2+\theta_p}{(\theta_p-\theta_i)^3}\prod_{j=1,\,j\neq i,p}^{n}\frac{(\theta_j-\theta_i)^2+\theta_j}{(\theta_j-\theta_i)^3},$$
and only the $j=p$ factor depends on $\theta_p$, so
$$\frac{\partial A_{i,1}}{\partial\theta_p}=\theta_i\prod_{j=1,\,j\neq i,p}^{n}\frac{(\theta_j-\theta_i)^2+\theta_j}{(\theta_j-\theta_i)^3}\cdot\frac{\partial}{\partial\theta_p}\left(\frac{(\theta_p-\theta_i)^2+\theta_p}{(\theta_p-\theta_i)^3}\right)=\theta_i\,\frac{-\theta_i-2\theta_p-(\theta_i-\theta_p)^2}{(\theta_i-\theta_p)^4}\prod_{j=1,\,j\neq i,p}^{n}\frac{(\theta_j-\theta_i)^2+\theta_j}{(\theta_j-\theta_i)^3}. \quad (25)$$
Using the same manner, since $A_{i,2}=A_{i,1}\sum_{j\neq i}\big(\frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j}-\frac{3}{\theta_j-\theta_i}\big)$ and only the $j=p$ term of the sum depends on $\theta_p$, the product rule gives
$$\frac{\partial A_{i,2}}{\partial\theta_p}=\frac{\partial A_{i,1}}{\partial\theta_p}\sum_{j\neq i}\left(\frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j}-\frac{3}{\theta_j-\theta_i}\right)+A_{i,1}\left(\frac{2\theta_p-2(\theta_p-\theta_i)^2-2(\theta_p-\theta_i)}{\big((\theta_p-\theta_i)^2+\theta_p\big)^2}+\frac{3}{(\theta_p-\theta_i)^2}\right).$$
Likewise, writing $A_{i,3}=\frac{A_{i,1}}{2}\big(\Sigma_i\big)^2+\frac{A_{i,1}}{2}W_i$ with $\Sigma_i=\sum_{j\neq i}\big(\frac{2(\theta_j-\theta_i)}{(\theta_j-\theta_i)^2+\theta_j}-\frac{3}{\theta_j-\theta_i}\big)$ and $W_i=\frac{2}{\theta_i}+\sum_{j\neq i}\big(-\frac{2((\theta_j-\theta_i)^2-\theta_j)}{((\theta_j-\theta_i)^2+\theta_j)^2}+\frac{3}{(\theta_j-\theta_i)^2}\big)$ as in Eq. (8), the product rule yields
$$\frac{\partial A_{i,3}}{\partial\theta_p}=\frac{1}{2}\frac{\partial A_{i,1}}{\partial\theta_p}\Big[\big(\Sigma_i\big)^2+W_i\Big]+\frac{A_{i,1}}{2}\left[2\,\Sigma_i\,\frac{\partial\Sigma_i}{\partial\theta_p}+\frac{\partial W_i}{\partial\theta_p}\right],$$
where again only the $j=p$ terms contribute to $\frac{\partial\Sigma_i}{\partial\theta_p}$ and $\frac{\partial W_i}{\partial\theta_p}$, and $\frac{\partial A_{i,1}}{\partial\theta_p}$ and $A_{i,1}$ are defined in Eqs. (25) and (11), respectively.
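The closed-form derivative $\partial A_{p,1}/\partial\theta_p$ obtained by logarithmic differentiation can be checked against a central finite difference; a small Python sketch (helper names are ours, for illustration):

```python
import math

def A1(p, theta):
    """A_{p,1} from Eq. (11)."""
    tp = theta[p]
    return tp*math.prod(((tj - tp)**2 + tj)/(tj - tp)**3
                        for j, tj in enumerate(theta) if j != p)

def dA1_dtheta(p, theta):
    """Closed-form derivative of A_{p,1} with respect to theta_p,
    including the 1/theta_p term coming from the factor theta_p in Eq. (11)."""
    tp = theta[p]
    s = 1/tp + sum(((tj - tp)**2 + 3*tj)/(((tj - tp)**2 + tj)*(tj - tp))
                   for j, tj in enumerate(theta) if j != p)
    return A1(p, theta)*s

theta, p, h = [1.0, 2.0, 3.5], 0, 1e-6
th_plus = theta.copy();  th_plus[p] += h
th_minus = theta.copy(); th_minus[p] -= h
fd = (A1(p, th_plus) - A1(p, th_minus))/(2*h)   # central finite difference
print(abs(fd - dA1_dtheta(p, theta)) < 1e-4)    # True
```

The same finite-difference check can be applied to $\partial A_{p,2}/\partial\theta_p$ and $\partial A_{p,3}/\partial\theta_p$ before plugging the expressions into the score equation (23).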
By substituting all the equations obtained above into (23), for the two cases, we obtain a unified equation. The MLEs are the solutions of (23). These equations are not solvable analytically, but numerical iterative methods, such as the Newton-Raphson method, can be used. The solutions can be approximated numerically using software such as MATHEMATICA, MAPLE, or R. Here we work with MATHEMATICA.

Application

The data set is from Lawless (2011). The data arose in tests on the endurance of deep groove ball bearings: the number of millions of revolutions before failure for each of the 23 ball bearings in the life tests. They are:

17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.48, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40

We expect the proposed HypoXG($\theta_1,\theta_2$) distribution to serve as a competitive model. Using our implicit equations in Eq. (22), solved with the help of MATHEMATICA, we obtain the estimates of $\theta_1$ and $\theta_2$ as
$$\widehat\theta_1=27947.47469372068,\qquad \widehat\theta_2=0.05407088132815127.$$
We compare the estimated distribution HypoXG($\widehat\theta_1,\widehat\theta_2$) with the estimated two-parameter Hypoexponential distribution, Hypoexp($\alpha_1,\alpha_2$). In Chesneau (2018), the method of maximum likelihood was used to estimate the parameters of the Hypoexponential distribution; applied to the survival time data, it gives
$$\widehat\alpha_1=0.027691338927039302,\qquad \widehat\alpha_2=0.027691606617103376.$$
The fitted densities are superimposed
on the histogram of the data set and shown in the following figure:

Figure 1. Fitted pdfs and cdfs to the survival times for the ball bearings data set.

References

Abdelkader, Y. H. (2003). Erlang distributed activity times in stochastic activity networks. Kybernetika, 39(3), 347–358.

Amari, S. V. & Misra, R. B. (1997). Closed-form expressions for distribution of sum of Exponential random variables. IEEE Transactions on Reliability, 46(4), 519–522.

Bocharov, P. P., D'Apice, C., & Pechinkin, A. V. (2011). Queueing Theory. Walter de Gruyter.

Bolch, G., Greiner, S., De Meer, H., & Trivedi, K. (2006). Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications. 2nd Edition. John Wiley & Sons.

Chesneau, C. (2018). A new family of distributions based on the Hypoexponential distribution with fitting reliability data. Statistica, 78(2), 127–147.

Everitt, B. (2005). Encyclopedia of Statistics in Behavioral Science (Vol. 2). Wiley-Blackwell.

Kadri, T., Smaili, K. & Kadry, S. (2015). Markov modeling for reliability analysis using Hypoexponential distribution. In Numerical Methods for Reliability and Safety Assessment (pp. 599–620). Springer, Cham.

Khuong, H. V., & Kong, H. Y. General expression for pdf of a sum of independent Exponential random variables. IEEE Communications Letters, 10(3), 159–161.

Lawless, J. F. (2011). Statistical Models and Methods for Lifetime Data (Vol. 362). John Wiley & Sons.

McLachlan, G. J. & Peel, G. (2000). Finite Mixture Models. New York, NY: John Wiley & Sons.

Moschopoulos, P. G. (1985). The distribution of the sum of independent Gamma random variables. Annals of the Institute of Statistical Mathematics, 37(3), 541–544.

Sen, S., Maiti, S. S., & Chandra, N. (2016). The XGamma distribution: statistical properties and application. Journal of Modern Applied Statistical Methods, 15(1), 774–788.

Smaili, K., Kadri, T., & Kadry, S.
A Modified-Form Expressions for the Hypoexponential Distribution. British Journal of Mathematics & Computer Science, 4(3), 322–332.

Trivedi, K. S. (1982). Probability and Statistics with Reliability, Queuing, and Computer Science Applications (Vol. 13). Englewood Cliffs, NJ: Prentice-Hall.
A STOCHASTIC METHOD TO ESTIMATE A ZERO-INFLATED TWO-PART MIXED MODEL FOR HUMAN MICROBIOME DATA

John Barrera∗
Instituto de Ingeniería Matemática, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso, Chile

Cristian Meza
CIMFAV, Universidad de Valparaíso, Valparaíso, Chile

Ana Arribas-Gil
Departamento de Estadística, Universidad Carlos III de Madrid, Getafe, Spain

ABSTRACT

Human microbiome studies based on genetic sequencing techniques produce compositional longitudinal data of the relative abundances of microbial taxa over time, allowing to understand, through mixed-effects modeling, how microbial communities evolve in response to clinical interventions, environmental changes, or disease progression. In particular, the Zero-Inflated Beta Regression (ZIBR) models jointly and over time the presence and abundance of each microbial taxon, considering the compositional nature of the data, its skewness, and the over-abundance of zeros. However, as for other complex random effects models, maximum likelihood estimation suffers from the intractability of likelihood integrals. Available estimation methods rely on log-likelihood approximation, which is prone to potential limitations such as biased estimates or unstable convergence. In this work we develop an alternative maximum likelihood estimation approach for the ZIBR model, based on the Stochastic Approximation Expectation Maximization (SAEM) algorithm. The proposed methodology allows to model unbalanced data, which is not always possible in existing approaches. We also provide estimates of the standard errors and the log-likelihood of the fitted model. The performance of the algorithm is established through simulation, and its use is demonstrated on two microbiome studies, showing its ability to detect changes in both presence and abundance of bacterial taxa over time and in response to treatment.

Keywords compositional data; longitudinal data; microbiome data; SAEM algorithm; ZIBR model.
1 Introduction

The human microbiome, a complex community of microorganisms, plays a crucial role in the body's functions and health. It influences metabolic pathways, drug metabolism, and contributes to the bioconversion of nutrients, detoxification, and protection against pathogens (Dekaboruah et al., 2020). In particular, the gut microbiota has been shown to interact with the host immune system, influencing the development of some diseases (Clemente et al., 2012). The microbiome significantly impacts the human genome, with genotypes influencing its composition and activity, while the microbiome alters the expression of genetic risk for chronic inflammatory and immune conditions (Jeyakumar et al., 2019). Given its major role in human health, it is crucial to have an accurate and reliable framework for the collection and analysis of human microbiome data. Current collection methods include high-throughput sequencing technologies such as the 16S ribosomal RNA (rRNA) sequencing approach and shotgun sequencing. These benchmark procedures are at the basis of leading studies such as the Human Microbiome Project (Turnbaugh et al., 2007), which characterized microbiomes in five major parts of the human body: airway, skin, oral cavity, digestive tract, and vagina. They produce sequence counts that are not comparable between samples, so traditional approaches consist of normalizing by the total sequence count, producing compositional data (Tyler et al., 2014). If multiple observations are recorded at different points in time, an additional component is introduced, leading to the analysis of longitudinal compositional data.

∗Corresponding author. E-mail: john.barrera@postgrado.uv.cl

arXiv:2504.15411v1 [stat.ME] 21 Apr 2025
https://arxiv.org/abs/2504.15411v1
Regarding data analysis, Kodikara et al. (2022) compiled some recent models developed to study longitudinal microbiome data from sample sequencing, considering both those that model count data and those that model relative abundances. One of the latter is the Zero-Inflated Beta Regression (ZIBR) (Chen and Li, 2016), a two-stage mixed effects model based on the work of Ospina and Ferrari (2012) that allows the inclusion of clinical covariates, both to explain the presence or absence of a certain bacterial taxon and, in case of presence, the influence of these covariates on the relative abundance of the taxon. It also provides a comprehensive approach to analyse longitudinal compositional microbiome data, taking into account its bounded nature, skewness and the over-abundance of zeros. Since its appearance, ZIBR has been successfully applied in several studies (Hu et al., 2022; D'Agata et al., 2019), as it is capable of treating the features mentioned above, explaining within-subject correlations and providing methods to conduct hypothesis tests on the significance of covariates. As for other complex mixed-effects models, the Maximum Likelihood (ML) estimation method for ZIBR proposed by its authors relies on approximating the log-likelihood using Gauss-Hermite quadrature and numerical optimization of this expression. However, there is evidence that in certain scenarios the estimators obtained in this way may be biased and, in the case of generalized mixed models, even be outperformed by other techniques (Handayani et al., 2017). Additionally, the proposed estimation method can only be used for balanced data; that is, with the same number of observations per individual. In clinical studies this is often not the case, resulting in analyses with potentially misleading conclusions (Powney et al., 2014).
Therefore, new strategies are required to address the challenges posed by missing data (Myers, 2000). The ZIBR model can also be thought of as a particular case of the Generalized Additive Model for Location, Scale and Shape (GAMLSS, Rigby and Stasinopoulos, 2005) and its parameters can be estimated in this framework. Although in this case the available maximum likelihood estimation strategy does allow to handle missing data, it still relies on a penalized local log-likelihood approximation, which can cause problems in performing likelihood ratio tests, particularly in complex GLMMs such as the ZIBR (see Stasinopoulos et al. (2017)). All things considered, the versatile features of the ZIBR model make it a promising choice for the analysis of compositional microbiome data, despite possible drawbacks in existing estimation strategies. Therefore, for the final purpose of precisely identifying those taxa responsive to disease onset, changes in environmental conditions or specific interventions, any possible improvement in the estimation method which is able to provide more accurate estimates will amount to significant progress in the understanding of the human microbiome and its relation to human health. Following this aim, in this work we propose a new estimation framework for the ZIBR model on longitudinal compositional data based on the Stochastic Approximation EM (SAEM) algorithm (Delyon et al., 1999). This algorithm provides an exact maximum likelihood estimation strategy in missing
data models for which the EM algorithm (Dempster et al., 1977) is not directly applicable because the complexity of the likelihood function does not allow for exact calculation of its conditional expectation. This would be the case of the ZIBR model. The SAEM algorithm not only preserves the good behaviour of the EM algorithm in terms of convergence, unbiasedness and monotonicity, but has also shown interesting properties in complex mixed models (Márquez et al., 2023; Meza et al., 2012) and includes procedures for statistical inference and hypothesis testing (Samson et al., 2007). Furthermore, it can be combined with MCMC techniques for improved performance (Kuhn and Lavielle, 2005) and can be extended to Restricted Maximum Likelihood (REML) estimation (Meza et al., 2007). In this article we extend the algorithm to distributions not belonging to the exponential family and derive the explicit expressions at all its steps, for both parameter estimation and log-likelihood approximation, once the ML estimators have been obtained. We also obtain approximations of the standard errors of the estimators, by means of the stochastic approximation of the Fisher information matrix. This allows us to provide a comprehensive estimation approach that avoids downsides related to likelihood approximations, is able to incorporate unbalanced data, and facilitates the inference pipeline, from modeling to covariate effects testing, under the same framework. The structure of the document is as follows: in Section 2, we introduce the ZIBR model and develop the SAEM based inference method to be used in our work. In Section 3 we present simulation studies on synthetic data generated under different settings, comparing the results obtained with our approach and those given by estimation based on likelihood approximation or penalization, and in Section 4 we assess the behaviour of the proposed routine on a dataset coming from clinical microbiome studies. 
Finally, Section 5 closes the article with the main conclusions, a discussion of the results, and possible limitations and future developments.

2 Models and methods

In this section we describe the ZIBR model for longitudinal compositional data (Chen and Li, 2016), we revise the foundations of the SAEM algorithm for parameter estimation by maximum likelihood as well as log-likelihood estimation through Importance Sampling, and present its extension to the ZIBR model.

2.1 The ZIBR model for longitudinal compositional data

The ZIBR model describes the presence and abundance of a single bacterial taxon on different individuals over time, and can be subsequently applied to different bacteria. Let $y_{it}$ be the relative abundance of a bacterial taxon in individual $i$ at time $t$, $1\leq i\leq N$, $1\leq t\leq T_i$. The model assumes that $y_{it}$ follows the distribution
$$y_{it}\sim\begin{cases}0 & \text{with prob. }1-p_{it},\\ \mathrm{Beta}\big(u_{it}\phi,(1-u_{it})\phi\big) & \text{with prob. }p_{it}\end{cases} \quad (1)$$
with $\phi>0$ and $0<u_{it},p_{it}<1$. These two last components are characterized by
$$\log\left(\frac{p_{it}}{1-p_{it}}\right)=a_i+X_{it}^T\alpha,\qquad \log\left(\frac{u_{it}}{1-u_{it}}\right)=b_i+Z_{it}^T\beta, \quad (2)$$
where $a_i$ and $b_i$ are individual-specific intercepts, $\alpha$ and $\beta$ are vectors of regression coefficients, and $X_{it}$ and $Z_{it}$ are covariates for each individual and time point. We further consider that each one of the random intercepts follows a normal distribution, independently from each other: $a_i\sim N(a,\sigma_1^2)$, $b_i\sim N(b,\sigma_2^2)$. From Equations 1
and 2, it can be seen that the ZIBR model explicitly includes a component that is responsible for the presence of zeros in the data. It is also clear that conveniently defined covariates $X_{it}$ and $Z_{it}$ can influence both the probability of presence or absence of a bacterial taxon (through the logistic regression that defines $p_{it}$) and the magnitude of its relative abundance (through the $u_{it}$ component in the proposed Beta distribution). Furthermore, the inclusion of a random intercept allows modeling correlations in observations from the same individual. Even though it is easy to expand the definition to consider random slopes, in practice it is enough to consider just a random intercept (Min and Agresti, 2005). The model parameter $\theta=(\phi,a,b,\alpha,\beta,\sigma_1^2,\sigma_2^2)$ can be estimated by maximum likelihood. From Equations 1 and 2, the likelihood function for data $y=(y_{it},1\leq i\leq N,1\leq t\leq T_i)$ is
$$L(\theta;y)=\prod_{i=1}^{N}\int_{\mathbb R}\int_{\mathbb R}\prod_{t=1}^{T_i}(1-p_{it})^{1\{y_{it}=0\}}\big[p_{it}f(y_{it};u_{it},\phi)\big]^{1\{y_{it}>0\}}\,g\big(a_i,b_i\,|\,a,\sigma_1^2,b,\sigma_2^2\big)\,da_i\,db_i \quad (3)$$
where $f(y_{it};u_{it},\phi)$ is the Beta density function with parameters $u_{it}$ and $\phi$ on $y_{it}$:
$$f(y_{it};u_{it},\phi)=\frac{\Gamma(\phi)}{\Gamma(u_{it}\phi)\Gamma\big((1-u_{it})\phi\big)}\,y_{it}^{u_{it}\phi-1}(1-y_{it})^{(1-u_{it})\phi-1},$$
and $g$ is the product of the two univariate normal density functions of the random effects $a_i$ and $b_i$. Given the impossibility of analytical calculation of the integral shown in Equation 3, an approximation can be achieved by means of the Gauss-Hermite quadrature (GHQ). With this approximation, and through numerical optimization, the maximum likelihood estimators for $\theta$ can be found as proposed by Chen and Li (2016). Hypothesis tests for the significance of covariates can also be conducted, in particular the Likelihood Ratio Test (LRT). The implementation of this approach is available in the ZIBR package (Zhang Chen, 2023) developed for the R software.
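The data-generating process of Equations 1 and 2 is straightforward to simulate, which is useful both for intuition and for testing estimation routines. A minimal Python sketch (all parameter values and the single shared covariate are illustrative, not taken from the paper):

```python
import random, math

def simulate_zibr(N=50, T=5, a=-0.5, b=0.0, alpha=1.0, beta=0.5,
                  phi=10.0, s1=0.5, s2=0.5, seed=7):
    """Generate longitudinal compositional data from the ZIBR model
    (Equations 1-2), with one scalar covariate shared by both parts.
    All parameter values here are illustrative."""
    rng = random.Random(seed)
    y, X = [], []
    for i in range(N):
        ai = rng.gauss(a, s1)          # random intercept, logistic part
        bi = rng.gauss(b, s2)          # random intercept, Beta part
        yi, xi = [], []
        for t in range(T):
            x = rng.random()           # covariate X_it = Z_it
            p = 1/(1 + math.exp(-(ai + alpha*x)))   # P(taxon present)
            u = 1/(1 + math.exp(-(bi + beta*x)))    # conditional mean abundance
            if rng.random() < p:
                yi.append(rng.betavariate(u*phi, (1 - u)*phi))
            else:
                yi.append(0.0)         # zero-inflation component
            xi.append(x)
        y.append(yi); X.append(xi)
    return y, X

y, X = simulate_zibr()
flat = [v for yi in y for v in yi]
print(min(flat) >= 0.0 and max(flat) < 1.0)   # True: bounded in [0, 1)
print(any(v == 0.0 for v in flat))            # True: excess zeros are present
```

Unbalanced designs are obtained simply by letting $T_i$ vary with $i$, which is exactly the setting the SAEM approach developed below is designed to handle.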
In addition to this alternative, the gamlss package (Rigby and Stasinopoulos, 2005) can also be used, which, in a similar manner to the aforementioned one, is based on a penalized quasi-likelihood approximation and its optimization by numerical algorithms (Rigby and Stasinopoulos, 1996). A notable advantage of this implementation, however, is its capacity to handle unbalanced data, which renders it a suitable option for a comparative analysis of the results obtained in this study.

2.2 The SAEM algorithm for mixed effects models

The Stochastic Approximation Expectation-Maximization (SAEM) algorithm (Delyon et al., 1999) is a powerful tool for estimating population parameters in complex mixed effects models. This algorithm is applicable for the iterative computation of ML estimates in a wide variety of incomplete data statistical problems in which the Expectation step of the EM algorithm is not explicit; in particular in mixed effects models, where the individual random effects are treated as non-observed data. Let $y=(y_{it},1\leq i\leq N,1\leq t\leq T_i)$ and $\varphi=(\varphi_i,1\leq i\leq N)$ denote the observed and non-observed data, respectively, so that the complete data of the model are $(y,\varphi)$. In this case, the SAEM algorithm consists of replacing the usual E-step of EM with a stochastic approximation procedure. Given an initial point $\theta^{(0)}$, iteration $q$ of the algorithm writes:

• Simulation (S) step: draw a realization $\varphi^{(q)}$ from the conditional distribution $p\big(\cdot\,|\,y;\theta^{(q-1)}\big)$.

• Stochastic Approximation (SA) step: update $s_q(\theta)$, the approximation of the conditional expectation $E\big[\log p\big(y,\varphi^{(q)};\theta\big)$
$\big|\,y,\theta^{(q-1)}\big]$:
$$s_q(\theta)=s_{q-1}(\theta)+\gamma_q\Big(\log p\big(y,\varphi^{(q)};\theta\big)-s_{q-1}(\theta)\Big)$$
where $\{\gamma_q\}_{q\in\mathbb N}$ is a decreasing sequence of stepsizes with $\gamma_1=1$.

• Maximization (M) step: update $\theta^{(q)}$ according to $\theta^{(q)}=\arg\max_\theta s_q(\theta)$.

There are some important remarks on the working details of the SAEM algorithm. In the case of complex mixed effects models, such as ZIBR, the conditional distribution of the non-observed data $p\big(\cdot\,|\,y;\theta^{(q-1)}\big)$ cannot be computed in closed form and simulation from it cannot be carried out directly (Kuhn and Lavielle, 2005). However, a MCMC approach can be used in the Simulation step of the SAEM algorithm described above, consisting in applying the Metropolis-Hastings algorithm (Metropolis et al., 1953) with different proposal kernels, in order to approximate $p\big(\cdot\,|\,y;\theta^{(q-1)}\big)$ with a Markov chain with defined transition probabilities. Also, convergence can be improved by generating more than one Markov chain or realization at simulation and by applying a Monte Carlo scheme. That is, at the Simulation step $m$ realizations $\varphi^{(q,l)}\sim p\big(\cdot\,|\,y;\theta^{(q-1)}\big)$, $1\leq l\leq m$, are drawn, and in the SA step the approximation of the conditional expectation is updated as
$$s_q(\theta)=s_{q-1}(\theta)+\gamma_q\left(\frac{1}{m}\sum_{l=1}^{m}\log p\big(y,\varphi^{(q,l)};\theta\big)-s_{q-1}(\theta)\right).$$
If the complete-data model belongs to the exponential family, that is, if
$$\log p(y,\varphi;\theta)=-\Psi(\theta)+\big\langle S(y,\varphi),\xi(\theta)\big\rangle$$
where $S(y,\varphi)$ represents a sufficient statistic of the data, then the SA step reduces to
$$F_q=F_{q-1}+\gamma_q\left(\frac{1}{m}\sum_{l=1}^{m}S\big(y,\varphi^{(q,l)}\big)-F_{q-1}\right) \quad (4)$$
and $s_q(\theta)=-\Psi(\theta)+\langle F_q,\xi(\theta)\rangle$; that is, the actualization is made only on the sufficient statistic. This scheme can be applied even to models outside the exponential family, provided that a part of the model belongs to this family.
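The exponential-family version of the scheme can be illustrated on a toy linear mixed model $y_{ij}=\varphi_i+\varepsilon_{ij}$ with $\varepsilon_{ij}\sim N(0,\sigma^2)$ known and $\varphi_i\sim N(\mu,\omega^2)$: the sufficient statistics are $\sum_i\varphi_i$ and $\sum_i\varphi_i^2$, and the conditional $p(\varphi_i\,|\,y_i;\theta)$ is Gaussian, so the S-step can even be carried out exactly. A minimal Python sketch (all values illustrative, not from the paper):

```python
import random, math

random.seed(3)
N, J, sig2 = 200, 5, 1.0                       # sig2: residual variance, known
mu_true, om2_true = 2.0, 1.0
phi_true = [random.gauss(mu_true, math.sqrt(om2_true)) for _ in range(N)]
ybar = [sum(random.gauss(p, math.sqrt(sig2)) for _ in range(J))/J
        for p in phi_true]                     # per-individual sample means

mu, om2 = 0.0, 5.0                             # crude starting point theta^(0)
s1 = s2 = 0.0
Q, burn = 300, 100
for q in range(1, Q + 1):
    gamma = 1.0 if q <= burn else 1.0/(q - burn)   # stepsizes: 1, then 1/q
    # S-step: exact draw from the Gaussian conditional p(phi_i | y_i; theta)
    prec = J/sig2 + 1/om2
    phi = [random.gauss((J*yb/sig2 + mu/om2)/prec, math.sqrt(1/prec))
           for yb in ybar]
    # SA-step: stochastic approximation of the sufficient statistics (Eq. 4)
    s1 += gamma*(sum(p for p in phi) - s1)
    s2 += gamma*(sum(p*p for p in phi) - s2)
    # M-step: closed-form update of (mu, omega^2)
    mu = s1/N
    om2 = max(s2/N - mu*mu, 1e-8)

print(round(mu, 1))   # close to the true value 2.0
```

The burn-in phase with $\gamma_q=1$ mirrors the MCEM-like exploration mentioned below, while the decreasing stepsizes afterwards average out the simulation noise and force convergence.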
However, in that case we cannot speak of updating a sufficient statistic, but rather a data summary function. Under general conditions (Delyon et al., 1999; Kuhn and Lavielle, 2005), the convergence of the parameter sequence $\theta^{(q)}$ toward a (local) maximum of the likelihood $\hat\theta$ is guaranteed, regardless of the starting point $\theta^{(0)}$ (Celeux et al., 1995). The sequence of stepsizes $\{\gamma_q\}_{q \in \mathbb{N}}$ is usually set to 1 during the first iterations to avoid getting stuck in local maxima. In this way the first iterations are identical to those of the Monte Carlo EM (MCEM) algorithm (Wei and Tanner, 1990), which is known for its slow convergence rate. To avoid this scenario, in later iterations of SAEM $\gamma_q$ decreases to zero to force convergence within fewer iterations. Details of the application of the SAEM algorithm to complex mixed effects models can be found in Meza et al. (2012); Márquez et al. (2023); Arribas-Gil et al. (2014) and de la Cruz et al. (2024).

2.3 The SAEM algorithm for ZIBR parameter estimation

As we have seen before, a mixed model can be considered as an unobserved-data problem and can therefore be addressed using the SAEM algorithm. Let us consider $\varphi_i = (a_i, b_i)$, $1 \le i \le N$, the non-observed data. By the definition of the ZIBR model, $\varphi_i$ follows the multivariate normal distribution $\varphi_i \sim N(\mu, G)$ with $\mu = (a, b)$ and $G = \mathrm{diag}(\sigma_1^2, \sigma_2^2)$. With the usual notation $y = (y_{it} : 1 \le i \le N,\, 1 \le t \le T_i)$ and $\varphi = (\varphi_i : 1 \le i \le N)$, the complete-data likelihood writes:
$$p(y, \varphi; \theta) = p(y \mid \varphi; \alpha, \beta, \phi)\, p(\varphi \mid \mu, G) \propto |G|^{-N/2} \prod_i \exp\!\left( -\frac{(\varphi_i - \mu)^T G^{-1} (\varphi_i - \mu)}{2} \right) \times \prod_{i,t} (1 - p_{it})^{1\{y_{it}=0\}}\, p_{it}^{1\{y_{it}>0\}}\, f(y_{it}; u_{it}, \phi)^{1\{y_{it}>0\}}. \qquad (5)$$
Like most zero-inflated models, the
ZIBR model cannot be considered part of the exponential family (Eggers, 2015). However, the decomposition presented in Equation 5 allows us to propose a simplified structure for the SAEM algorithm (Equation 4). For the multivariate normal part corresponding to the random effects, the SA-step update is performed on the respective sufficient statistics. For the mixture distribution corresponding to the observed data, $y \mid \varphi; \alpha, \beta, \phi$, maximization of the conditional log-likelihood is followed by estimate updates, as suggested for non-exponential-family models (Comets et al., 2021). The maximum likelihood iterative estimation algorithm for the parameters of the ZIBR model then writes, for a given starting point $\theta^{(0)}$ and at iteration $q$, as:

1. Simulation step: draw $\varphi_i^{(q)}$, $i = 1, \dots, N$, from the distribution $p(\cdot \mid y; \theta^{(q-1)})$.

2. Stochastic Approximation step: update the data summary functions $F_1(y, \varphi)$ and $F_2(y, \varphi)$ with the scheme
$$F_1^{(q)}(y, \varphi) = F_1^{(q-1)}(y, \varphi) + \gamma_q \Big( \sum_i \varphi_i^{(q)} - F_1^{(q-1)}(y, \varphi) \Big),$$
$$F_2^{(q)}(y, \varphi) = F_2^{(q-1)}(y, \varphi) + \gamma_q \Big( \sum_i \varphi_i^{(q)} \varphi_i^{(q)T} - F_2^{(q-1)}(y, \varphi) \Big), \qquad (6)$$
where $\{\gamma_q\}_{q \in \mathbb{N}}$ is a decreasing sequence of stepsizes with $\gamma_1 = 1$.

3. Maximization step: update the parameters of the model with
$$\mu^{(q)} = \frac{F_1^{(q)}(y, \varphi)}{N}, \qquad G^{(q)} = \frac{F_2^{(q)}(y, \varphi)}{N} - \frac{F_1^{(q)}(y, \varphi)\, F_1^{(q)}(y, \varphi)^T}{N^2}. \qquad (7)$$

Given the form of the model definition in the Beta part, steps 2 and 3 are modified by first calculating
$$(\tilde\beta^{(q)}, \tilde\phi^{(q)}) = \arg\max_{\beta, \phi} \sum_{i,t} 1\{y_{it}>0\} \left( \log \frac{\Gamma(\phi)}{\Gamma(u_{it}^{(q)}\phi)\, \Gamma((1 - u_{it}^{(q)})\phi)} + u_{it}^{(q)}\phi \log y_{it} + \phi(1 - u_{it}^{(q)}) \log(1 - y_{it}) \right) \qquad (8)$$
and
$$\tilde\alpha^{(q)} = \arg\max_\alpha \sum_{i,t} \left[ 1\{y_{it}>0\} \log p_{it}^{(q)} + 1\{y_{it}=0\} \log\!\big(1 - p_{it}^{(q)}\big) \right], \qquad (9)$$
where $u_{it}^{(q)} = u_{it}^{(q)}(b_i, \beta)$ and $p_{it}^{(q)} = p_{it}^{(q)}(a_i, \alpha)$ are calculated using $\varphi_i^{(q)}$ and Equation 2.
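The inner maximization (8) has no closed form. As an illustration, the following Python sketch maximizes a conditional Beta log-likelihood numerically on simulated data, holding the random intercepts fixed at their (here known) simulated values; the data-generating values, sample sizes, and the logit link via `expit` are assumptions of the example, and the paper's actual routines are written in R.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(1)

# Hypothetical non-zero part: y_it in (0,1) with mean u_it = expit(b_i + beta*Z_it)
# and precision phi (mean-precision Beta parametrization).
N, T = 50, 5
Z = rng.integers(0, 2, (N, T)).astype(float)
b_i = rng.normal(-0.5, 0.5, (N, 1))           # random intercepts, treated as known
beta_true, phi_true = 0.5, 6.4
u = expit(b_i + beta_true * Z)
y = rng.beta(u * phi_true, (1 - u) * phi_true)

def neg_beta_loglik(params, b_i, Z, y):
    """Negative conditional Beta log-likelihood as in Eq. (8); the extra -1 terms
    in the exponents are constant in (beta, phi) and do not change the argmax."""
    beta, log_phi = params
    phi = np.exp(log_phi)                     # optimize log(phi) to keep phi > 0
    u = expit(b_i + beta * Z)
    ll = (gammaln(phi) - gammaln(u * phi) - gammaln((1 - u) * phi)
          + (u * phi - 1) * np.log(y) + ((1 - u) * phi - 1) * np.log1p(-y))
    return -ll.sum()

res = minimize(neg_beta_loglik, x0=np.array([0.0, np.log(8.0)]),
               args=(b_i, Z, y), method="Nelder-Mead")
beta_hat, phi_hat = res.x[0], np.exp(res.x[1])
```

Optimizing over $\log\phi$ rather than $\phi$ avoids box constraints while keeping the precision positive.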
Maximization in (8) and (9) is achieved numerically. Finally, the values are updated by
$$\phi^{(q)} = \phi^{(q-1)} + \gamma_q(\tilde\phi^{(q)} - \phi^{(q-1)}), \qquad \alpha^{(q)} = \alpha^{(q-1)} + \gamma_q(\tilde\alpha^{(q)} - \alpha^{(q-1)}), \qquad \beta^{(q)} = \beta^{(q-1)} + \gamma_q(\tilde\beta^{(q)} - \beta^{(q-1)}). \qquad (10)$$

Let us discuss the details of this implementation. As mentioned in Section 2.2, the choice of the starting point $\theta^{(0)}$ for SAEM does not affect its convergence; however, it is recommended to use values obtained in previous studies or with other estimation methods. Following the example of the existing implementation in the saemix package (Comets et al., 2017), we use $\gamma_q$ defined as follows:
$$\gamma_q = \begin{cases} 1 & \text{if } q \le K_1, \\ \dfrac{1}{q - K_1} & \text{if } K_1 < q \le K_1 + K_2, \end{cases}$$
where $K_1 + K_2$ is the total number of iterations. We also discussed in Section 2.2 that the performance of the algorithm can be improved by running multiple sequences or Markov chains in the Simulation step and using Monte Carlo averaging in Equations 7 and 10. Furthermore, during the SA step we obtain sequences that allow $E(\varphi_i \mid y_i; \hat\theta)$ and $\mathrm{Var}(\varphi_i \mid y_i; \hat\theta)$ to be estimated, values necessary to approximate the log-likelihood through Importance Sampling, with which the Likelihood Ratio Test (LRT) can be computed, as presented in the following subsection.

2.3.1 Approximation of the log-likelihood using Importance Sampling

The log-likelihood of the observed data cannot be computed in closed form for complex mixed effects models, but its estimation is required to perform the LRT and to compute information criteria for a given model. One approximation method is the Importance Sampling algorithm (Kloek and van Dijk, 1978). Let $LL_y(\hat\theta)$ be the log-likelihood of the model at the vector
of population parameter estimates, that is, $LL_y(\hat\theta) = \log p(y; \hat\theta)$, where $p(y; \hat\theta) = L(\hat\theta; y)$ is the joint probability distribution function of the observed data given $\hat\theta$. Notice that $LL_y(\hat\theta) = \log p(y; \hat\theta) = \sum_{i=1}^N \log p(y_i; \hat\theta)$ and, for some proposal distribution $\tilde p_{\varphi_i}$ absolutely continuous with respect to $p_{\varphi_i}$, we have
$$p(y_i; \hat\theta) = \int p(y_i, \varphi_i; \hat\theta)\, d\varphi_i = \int p(y_i \mid \varphi_i; \hat\theta)\, \frac{p(\varphi_i; \hat\theta)}{\tilde p(\varphi_i; \hat\theta)}\, \tilde p(\varphi_i; \hat\theta)\, d\varphi_i = E_{\tilde p}\!\left[ \frac{p(y_i \mid \varphi_i; \hat\theta)\, p(\varphi_i; \hat\theta)}{\tilde p(\varphi_i; \hat\theta)} \right].$$
That is, $p(y_i; \hat\theta)$ can be expressed as an expectation, which can be approximated as follows:

1. Obtain a random sample $\varphi_i^{(1)}, \dots, \varphi_i^{(K)}$ of size $K$ from the proposal distribution $\tilde p_{\varphi_i}$;
2. Compute the empirical mean $\hat p(i, K) = \frac{1}{K} \sum_{k=1}^K \frac{p(y_i \mid \varphi_i^{(k)}; \hat\theta)\, p(\varphi_i^{(k)}; \hat\theta)}{\tilde p(\varphi_i^{(k)}; \hat\theta)}$.

An optimal proposal distribution would be the conditional distribution $p_{\varphi_i \mid y_i}$, since in that case the estimator of the expectation has zero variance. But since a closed-form expression for this distribution is not available, we choose a proposal close to this optimal distribution, based on the empirically estimated conditional mean and variance, $\mu_i = \hat E[\varphi_i \mid y_i; \hat\theta]$ and $\sigma_i^2 = \widehat{\mathrm{Var}}[\varphi_i \mid y_i; \hat\theta]$, of the random effects simulated during the Simulation step of the SAEM algorithm. The candidate $\varphi_i^{(k)}$, $k = 1, \dots, K$, is then drawn from a rescaled Student's t-distribution, $\varphi_i^{(k)} = \mu_i + \sigma_i \times T_{i,k}$, with $T_{i,k} \sim t_\nu$ i.i.d., where $t_\nu$ denotes a Student's t-distribution with $\nu$ degrees of freedom. In this work, unless otherwise mentioned, the parameters for calculating the log-likelihood will be $\nu = 5$ and $K = 500$.

2.3.2 Stochastic approximation of the standard errors

In addition to providing estimates of the parameters of a model, it is desirable that the estimation method also be capable of estimating their standard errors, with the objective of constructing confidence intervals or performing statistical tests on individual estimators, such as the Wald test.
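Before turning to standard errors, the importance-sampling estimator of Section 2.3.1 can be sketched in Python on a toy Gaussian model whose marginal likelihood is available in closed form, so the estimate can be checked; the choices $\nu = 5$ and $K = 500$ mirror the text, but the model and everything else here are assumptions of the example (the paper's implementation is in R).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy model: y_i | phi_i ~ N(phi_i, 1), phi_i ~ N(0, 1),
# so the exact marginal is y_i ~ N(0, 2) and p(phi_i | y_i) = N(y_i/2, 1/2).
N, K, nu = 40, 500, 5
y = rng.normal(rng.normal(0.0, 1.0, N), 1.0)

mu_i = y / 2.0                 # conditional means (exact here; estimated in SAEM)
sd_i = np.sqrt(0.5)            # conditional standard deviation

loglik = 0.0
for i in range(N):
    t_draws = stats.t.rvs(nu, size=K, random_state=rng)
    phi = mu_i[i] + sd_i * t_draws              # rescaled t proposal
    log_num = (stats.norm.logpdf(y[i], loc=phi, scale=1.0)    # p(y_i | phi)
               + stats.norm.logpdf(phi, loc=0.0, scale=1.0))  # p(phi)
    log_den = stats.t.logpdf(t_draws, nu) - np.log(sd_i)      # proposal density
    w = log_num - log_den
    m = w.max()                                 # log-mean-exp for stability
    loglik += m + np.log(np.mean(np.exp(w - m)))

exact = stats.norm.logpdf(y, loc=0.0, scale=np.sqrt(2.0)).sum()
```

The heavy-tailed $t_\nu$ proposal keeps the importance weights bounded when the target conditional is Gaussian, which is why a rescaled t rather than a normal proposal is used.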
In the case of maximum likelihood estimation, these errors can be calculated asymptotically from the Fisher information matrix of the model, which for complex models cannot be computed in closed form. Based on Louis's missing information principle (Louis, 1982), it is possible to compute an estimate of the Fisher information matrix. According to this principle, we have the identity
$$\partial_\theta^2 \log p(y; \theta) = E\!\left( \partial_\theta^2 \log p(y, \varphi; \theta) \mid y; \theta \right) + \mathrm{Cov}\!\left( \partial_\theta \log p(y, \varphi; \theta) \mid y; \theta \right)$$
where
$$\mathrm{Cov}(\partial_\theta \log p(y, \varphi; \theta) \mid y; \theta) = E\!\left( \partial_\theta \log p(y, \varphi; \theta)\, \partial_\theta \log p(y, \varphi; \theta)^T \mid y; \theta \right) - E(\partial_\theta \log p(y, \varphi; \theta) \mid y; \theta)\, E(\partial_\theta \log p(y, \varphi; \theta) \mid y; \theta)^T.$$
Given this, the second-order derivative of the observed likelihood function with respect to the parameter $\hat\theta$, $\partial_\theta^2 L(\hat\theta; y)$, can be approximated by the sequence $\{H_q\}_{q \in \mathbb{N}}$, calculated at iteration $q$ of the SAEM algorithm as:
$$D_q = D_{q-1} + \gamma_q \left[ \partial_\theta \log p(y, \varphi^{(q)}; \theta^{(q)}) - D_{q-1} \right]$$
$$G_q = G_{q-1} + \gamma_q \left[ \partial_\theta^2 \log p(y, \varphi^{(q)}; \theta^{(q)}) + \partial_\theta \log p(y, \varphi^{(q)}; \theta^{(q)})\, \partial_\theta \log p(y, \varphi^{(q)}; \theta^{(q)})^T - G_{q-1} \right]$$
$$H_q = G_q - D_q D_q^T.$$
At convergence, $-H_q^{-1}$ can be used to approximate the covariance matrix of the parameter estimates (Zhu and Lee, 2002; Cai, 2010), which is useful for deriving hypothesis-testing procedures for the different parameters of the model.

3 Simulation studies

To evaluate the behavior of the proposed estimation method, and to compare it with existing alternatives, we conducted several simulation studies. It is worth noting that the GHQ approach cannot handle a different number of observations per individual, which is possible with our SAEM-based approach and the gamlss package. Therefore, we present simulations with balanced data first. Additional simulations on unbalanced data are provided in Appendix A, Supplementary Materials, in which the performance of
SAEM on the unbalanced datasets is compared with the use of the GHQ algorithm on balanced datasets obtained by imputation, and with gamlss without imputation. Covariate significance analyses based on the LRT and the Wald test are also presented in Appendix B, Supplementary Materials.

3.1 Setup

We use two different settings for generating synthetic data under the ZIBR model (Equations 1 and 2). The parameters for each configuration were chosen as follows:

• Setting 1: $a = b = -0.5$, $\alpha = \beta = 0.5$, $\sigma_1 = 3.2$, $\sigma_2 = 2.6$, $\phi = 6.4$.
• Setting 2: $a = b = -0.5$, $\alpha = \beta = 0.5$, $\sigma_1 = 0.7$, $\sigma_2 = 0.5$, $\phi = 6.4$.

In the balanced scenario, for both Settings 1 and 2 the number of individuals $N = 100$ remains fixed, while the number of observations per individual $T_i$ changes, with $T_i = T$ for $T = 3, 5, 10$. In addition, a variable $X$ is defined that mimics the concept of treatment and control groups, with $X = 0$ for the first half of individuals and $X = 1$ for the other half. Furthermore, we consider the same variable as covariate in both parts of the model, setting $Z = X$. For both Settings 1 and 2, $R = 1000$ datasets were generated, and the SAEM estimation was implemented with $m = 5$ chains and $(K_1, K_2) = (750, 250)$, hence 1000 total iterations, with starting point $\theta_0 = (\phi_0, a_0, b_0, \alpha_0, \beta_0, \sigma_{1,0}, \sigma_{2,0}) = (8, -0.3, -0.2, 0.7, 0.8, 0.38, 0.31)$.

3.2 Results

Table 1 shows the performance analysis of the three estimation methods for Settings 1 and 2 on balanced datasets, evaluated by bias $\left( \frac{1}{R}\sum_{r=1}^R \hat\theta_r - \theta \right)$, mean absolute error $\left( \mathrm{MAE} = \frac{1}{R}\sum_{r=1}^R |\hat\theta_r - \theta| \right)$ and root mean square error $\left( \mathrm{RMSE} = \sqrt{\frac{1}{R}\sum_{r=1}^R (\hat\theta_r - \theta)^2} \right)$.
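For concreteness, the three performance measures can be computed as follows; the replicate estimates here are a random stand-in (an assumed normal perturbation around the true value), not output of any of the three estimators.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, R = 6.4, 1000
theta_hat = theta + rng.normal(0.0, 0.5, R)   # stand-in for R replicate estimates

bias = np.mean(theta_hat - theta)
mae  = np.mean(np.abs(theta_hat - theta))
rmse = np.sqrt(np.mean((theta_hat - theta) ** 2))
```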
Analyzing these results globally, together with the distributions of the estimates in Figures 1 and 2, we can see that the SAEM estimates are always centered (except for a small bias for $\sigma_2$ in Setting 1), whereas the GHQ and GAMLSS methods present strong biases or bimodality for different parameters: $b$, $\beta$, $\sigma_2$ and $\phi$ for GHQ in Setting 1, and $\sigma_1$ (only Setting 1), $\sigma_2$ and $\phi$ (both settings) for GAMLSS. A thorough examination of these results reveals that the SAEM estimation achieves the best bias performance in Setting 1. In Setting 2, the GHQ method exhibits slightly lower bias values. It merits attention, however, that GAMLSS attains the lowest root mean square error (RMSE) and mean absolute error (MAE) values for many parameters across both settings. It is worth noting, nevertheless, that the GAMLSS estimates of $\phi$ are extremely biased in both settings. These two situations are common in estimators obtained by quasi-likelihood methods, as indicated by Nelder and Lee (1992). A remarkable property of SAEM is its tendency to exhibit a consistent decrease in the error measures as the number of observations per individual increases, a phenomenon not consistently observed in the other alternatives analyzed. In Setting 1, where the values of the variance parameters are higher, the worst GHQ results are evidently obtained in the estimation of the parameters associated with the Beta part of the model ($b$, $\beta$, $\sigma_2$ and $\phi$). This component has a complex functional form that may be poorly approximated when integrating by numerical methods; the SAEM approach is advantageous in this regard. In the context of GAMLSS, the estimation of
the parameter controlling the overdispersion of the data, i.e., $\phi$, is particularly challenging (as is the estimation of $\sigma_2$ in Setting 2). According to Stasinopoulos et al. (2017), by defining ZIBR as a model with several random effects, GAMLSS estimates the variance components with a method prone to generating biases. Given that overdispersion is a common feature in real microbiota data, the GAMLSS procedure may not be suitable for modeling this phenomenon. A more detailed analysis of the figures shows that for Setting 1 the estimated densities of $a$ and $\alpha$ are very similar across all methods, while the other parameters show marked differences. In the case of GHQ, the results for $b$ and $\beta$ are very skewed, while the distributions of $\sigma_2$ and $\phi$ show bimodal behavior. This is due to the poor approximation of the log-likelihood, which leads to an erroneous estimation of these parameters. A review of the GAMLSS results reveals that the distribution of $\sigma_1$ deviates significantly from the true parameter, while the distribution of $\phi$ exhibits

Figure 1: Estimated density of the parameters obtained by the SAEM algorithm, the GHQ method and the GAMLSS procedure on artificial balanced datasets simulated under Setting 1. The dotted vertical line represents the true value of the parameter.

Figure 2: Estimated density of the parameters obtained by the SAEM algorithm, the GHQ method and the GAMLSS procedure on artificial balanced datasets simulated under Setting 2. The dotted vertical line represents the true value of the parameter.

Table 1: Summary statistics of the results obtained by the SAEM algorithm, the GHQ procedure and the GAMLSS method on balanced datasets over 1000 simulation runs.
For each parameter value and number of observations per individual, $T_i$, bold numbers indicate the lowest (absolute) value for each of bias, RMSE and MAE.

Setting 1
Parameter (true)  Ti     SAEM: Bias RMSE MAE           GHQ: Bias RMSE MAE            GAMLSS: Bias RMSE MAE
a (-0.5)           3     0.0064 0.5896 0.4629         -0.0010 0.6509 0.4734          0.1460 0.4348 0.3449
a (-0.5)           5     0.0081 0.5626 0.4526          0.0044 0.5634 0.4514          0.1302 0.4281 0.3436
a (-0.5)          10     0.0154 0.5369 0.4306          0.0221 0.5947 0.4749          0.1139 0.4269 0.3469
α (0.5)            3     0.0091 0.8277 0.6519          0.0215 0.9634 0.6587         -0.1343 0.5846 0.4626
α (0.5)            5     0.0046 0.8001 0.6340          0.0138 0.7926 0.6186         -0.1203 0.5822 0.4590
α (0.5)           10    -0.0046 0.7533 0.6007         -0.0253 0.8363 0.6625         -0.1026 0.5827 0.4673
b (-0.5)           3    -0.0414 0.4936 0.3970          0.0970 0.5570 0.4607         -0.0723 0.4744 0.3767
b (-0.5)           5    -0.0532 0.4511 0.3560          0.1752 0.5057 0.4317         -0.0829 0.4416 0.3478
b (-0.5)          10    -0.0552 0.4392 0.3512          0.3197 0.5155 0.4582         -0.0993 0.4375 0.3481
β (0.5)            3    -0.0384 0.6827 0.5372         -0.1442 0.7253 0.5788         -0.0493 0.6529 0.5122
β (0.5)            5    -0.0216 0.6530 0.5092         -0.2532 0.6784 0.5544         -0.0526 0.6160 0.4809
β (0.5)           10    -0.0275 0.6083 0.4825         -0.3828 0.5972 0.5202         -0.0660 0.5980 0.4704
σ1 (3.2)           3     0.0284 0.6423 0.4961          0.0693 1.1306 0.5277         -0.7702 0.8400 0.7754
σ1 (3.2)           5     0.0259 0.4920 0.3904          0.0631 0.5534 0.4238         -0.6914 0.7461 0.6932
σ1 (3.2)          10    -0.0012 0.3951 0.3170          0.0870 0.4433 0.3550         -0.5939 0.6428 0.5956
σ2 (2.6)           3    -0.1720 0.2979 0.2429         -0.3206 0.6612 0.4347         -0.0142 0.2478 0.1957
σ2 (2.6)           5    -0.1639 0.2779 0.2270         -0.4970 0.8065 0.5605         -0.0899 0.2425 0.1932
σ2 (2.6)          10    -0.1699 0.2635 0.2162         -0.8565 1.0651 0.8674         -0.1457 0.2480 0.2018
φ (6.4)            3     0.1301 1.2251 0.9465         -0.3696 2.0104 1.4482          4.6834 5.1779 4.6834
φ (6.4)            5     0.0895 0.8059 0.6283         -0.9741 2.1842 1.4390          2.3698 2.6248 2.3713
φ (6.4)           10     0.0645 0.4940 0.3931         -1.9106 2.8099 2.0202          1.1085 1.2564 1.1139

Setting 2
a (-0.5)           3     0.0106 0.2101 0.1659          0.0026 0.2060 0.1623          0.0376 0.1935 0.1540
a (-0.5)           5     0.0017 0.1813 0.1432         -0.0003 0.1780 0.1404          0.0317 0.1688 0.1340
a (-0.5)          10     0.0033 0.1414 0.1144          0.0023 0.1380 0.1114          0.0276 0.1337 0.1080
α (0.5)            3    -0.0120 0.2912 0.2283         -0.0040 0.2859 0.2243         -0.0385 0.2663 0.2102
α (0.5)            5    -0.0060 0.2551 0.2025         -0.0032 0.2482 0.1973         -0.0349 0.2346 0.1870
α (0.5)           10    -0.0042 0.1997 0.1607         -0.0029 0.1910 0.1528         -0.0282 0.1833 0.1468
b (-0.5)           3     0.0006 0.1348 0.1063         -0.0025 0.1331 0.1043         -0.0212 0.1409 0.1102
b (-0.5)           5    -0.0031 0.1099 0.0873         -0.0028 0.1096 0.0874         -0.0094 0.1116 0.0885
b (-0.5)          10    -0.0032 0.0922 0.0732         -0.0031 0.0922 0.0729         -0.0040 0.0928 0.0737
β (0.5)            3    -0.0089 0.1792 0.1430         -0.0049 0.1753 0.1403          0.0122 0.1829 0.1463
β (0.5)            5    -0.0006 0.1536 0.1223         -0.0011 0.1516 0.1206          0.0050 0.1542 0.1226
β (0.5)           10     0.0015 0.1326 0.1060          0.0012 0.1308 0.1044          0.0012 0.1314 0.1052
σ1 (0.7)           3    -0.1448 0.3944 0.3239         -0.0535 0.3176 0.2501         -0.0845 0.2768 0.2164
σ1 (0.7)           5    -0.0394 0.2107 0.1605         -0.0275 0.1951 0.1499         -0.0643 0.1808 0.1373
σ1 (0.7)          10    -0.0192 0.1137 0.0910         -0.0169 0.1108 0.0883         -0.0489 0.1106 0.0877
σ2 (0.5)           3    -0.0812 0.1918 0.1407         -0.0542 0.1529 0.1139          0.2324 0.2481 0.2328
σ2 (0.5)           5    -0.0359 0.0991 0.0771         -0.0337 0.0963 0.0749          0.1453 0.1610 0.1460
σ2 (0.5)          10    -0.0193 0.0629 0.0499         -0.0190 0.0624 0.0496          0.0715 0.0891 0.0754
φ (6.4)            3    -0.0920 1.2570 1.0054          0.0031 1.1890 0.9402          7.4381 8.0589 7.4381
φ (6.4)            5    -0.0700 0.7283 0.5820         -0.0620 0.7214 0.5767          3.3718 3.6059 3.3718
φ (6.4)           10    -0.0463 0.4423 0.3518         -0.0472 0.4409 0.3510          1.3984 1.5107 1.4002
considerable variability when observations per individual are limited. Figure 2, on the other hand, shows that for Setting 2 SAEM and GHQ are practically equivalent in the estimated densities, while GAMLSS differs only in that it performs poorly for $\sigma_2$ and $\phi$. This indicates that the minor differences in error measures between the methods in favor of GHQ in this simulation scenario (Table 1) are not significant.

Regarding the execution times of the routines, GHQ and GAMLSS are faster than SAEM in the three cases considered in both settings. On a computer with an Intel Core i7-13700HX processor at 2.10 GHz, for $T_i = 3\,(5, 10)$, on average, GAMLSS takes $0.94\,(1.03, 1.20)$ seconds, GHQ takes $1.77\,(3.39, 8.54)$ seconds, while SAEM takes $9.86\,(14.14, 25.30)$ seconds.

4 Case studies

In this section we demonstrate the use of the proposed inference framework on a publicly available microbiome study, focusing on the capabilities of the SAEM algorithm to detect changes in the presence of bacterial taxa in response to treatments. The study of another set of microbiome data can be found in Appendix C of the Supplementary Materials. As we mentioned in
the previous section, given that the ZIBR model has more than one random effect, GAMLSS uses an estimation method that can bias the estimation of the variance parameters, and it also does not allow the evaluation of the tests based on the calculation of the log-likelihood that we will use in the following section. For this reason, the GAMLSS approach will not be applied to the real data considered below.

4.1 Inflammatory bowel disorder pediatric study

The data used in this section come from a study to verify the effectiveness of treatments in pediatric inflammatory bowel disorder (IBD) patients (Lewis et al., 2015). This study includes information from 90 children subjected to three different types of therapy: anti-TNF treatment (TNF: tumor necrosis factor), exclusive enteral nutrition (EEN) and partial enteral nutrition with an ad lib diet (PEN). After filtering the data to discard low sequencing depth samples, genera with low abundance, and taxa with a proportion of zeros higher than 0.9 or lower than 0.1, the information from 18 bacterial genera measured at 4 different time points for each of the 59 individuals (47 anti-TNF and 12 EEN) remained for the analysis. Figure 3 shows the average composition of the intestinal microbiome of the subjects in both groups and its evolution over time.

Figure 3: Average gut microbiome composition of the treatment groups (anti-TNF and EEN) over the observation weeks.

The objective of the study is to verify whether the different treatments influence the presence of the different bacterial taxa in the samples, controlling for time and initial abundance. In addition, we want to compare whether the results obtained by the SAEM algorithm differ from those obtained through the GHQ procedure implemented in the ZIBR package. The initial values for SAEM were the estimates found by the GHQ method, for each model corresponding to each bacterial taxon.
Given this choice of initial values, we used $m = 5$ Markov chains and $(K_1, K_2) = (375, 125)$ iterations, as well as 500 simulated values for the log-likelihood calculation by Importance Sampling. The p-values obtained through the LRT were adjusted using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995) to control the false discovery rate (FDR); the full values are presented in Table 8, Appendix D of the Supplementary Materials.

Figure 4: Bacterial taxa in which the treatment (anti-TNF vs. EEN) has a statistical effect on abundance, as identified by SAEM and GHQ.

After model fitting, the GHQ method detected 11 bacterial taxa in which the treatment influenced the abundances, while SAEM identified 12 taxa: all those identified by GHQ plus Escherichia at FDR = 5%, as shown in Figure 4. A more detailed analysis of the Escherichia data shows that the influence of treatment is greater on the frequency of presence of this bacterium in individuals than on the level of abundance once its presence is confirmed (Figure 5 in Appendix D). This is confirmed by the LRT results for the significance of the treatment in the calculation of the probability $p_{it}$ (FDR p-value
0.03) compared to those of the Beta component of the abundance $u_{it}$ (FDR p-value 0.80), and by the Wald test (Table 9, Appendix D). These results show that at the 5% significance level the treatment is significant in the logistic part but not in the Beta part, showing that the definition of the ZIBR model, combined with SAEM estimation, increases the ability to detect the influence of a given treatment. A detailed figure with the convergence behavior of the estimators across iterations is shown in Figure 6, Appendix D of the Supplementary Materials.

The role of Escherichia in IBD is well documented. There is evidence (Baldelli et al., 2021) that the accumulation of Escherichia coli and other strains of Escherichia in the intestine is related to inflammatory processes, and other works (Mirsepasi-Lauridsen et al., 2019) suggest that a combination of antibiotic and dietary treatments is capable of controlling the overproliferation of E. coli in the digestive system while also reducing the symptoms of IBD, allowing a correlation between these two events to be inferred.

5 Conclusions and discussion

In this article we have developed an exact maximum likelihood estimation strategy for the ZIBR model for the analysis of longitudinal compositional microbiome data using the SAEM algorithm. We have also proposed a method for calculating the log-likelihood of the model, which allows information criteria for the model to be obtained, as well as approximations of the estimators' standard errors, which is not possible under the alternative estimation method based on Gauss-Hermite quadrature likelihood approximation. Moreover, despite the capacity of the GAMLSS approach to estimate the ZIBR model parameters and calculate the standard errors of these estimates, the SAEM method exhibits superior performance in controlling the type I error of the Wald test for the significance of parameters.
Although the results obtained by SAEM conform to the expected theoretical properties of standard errors, it must be noted that the method can still be improved: since it depends on a stochastic approximation, convergence toward coherent values of the Fisher information matrix is not guaranteed, which could introduce bias in the estimation of the standard errors. We are confident that these details can be enhanced in further developments.

Another advantage of the proposed estimation method is its ability to handle unbalanced data, a scenario that can occur both due to the design of the experiment itself and due to external factors, such as individuals dropping out during the study. This aspect was not considered in the development of the original estimation method for ZIBR, so performance comparisons with our method cannot be established in this scenario unless data interpolation is performed before using the GHQ method. The GAMLSS method, another option that allows for handling unbalanced data, has been found to show errors in the estimation of certain important parameters of the model, even when its performance on the rest is adequate. It should be emphasized that unbalanced data is a fairly common situation in medical experiments, in which multiple factors influence patients
abandoning the follow-up. This could be one of the reasons contributing to the high non-publication rate in many medical studies, which according to certain sources could be close to 50% (Chan et al., 2014). Therefore, developing analysis methods that can deal accurately with unbalanced data is of great interest.

The definition of the ZIBR model used throughout this work corresponds to the one originally proposed by Chen and Li (2016) and implemented in the ZIBR package for the R statistical software. However, possible modifications of this definition have been discussed. One of them is the use of random effects for more than one covariate, an aspect that has already been incorporated in the implementation used in this article. Another possibility is the inclusion of cross-correlations in the random effects, proposing a different structure for the variance of these effects. Liu et al. (2019) mention that this inclusion could alter the results of tests on covariates, detecting significance where a simpler structure would not. Although this approach has not been implemented here, the SAEM algorithm could easily be modified to serve this purpose.

In the field of human microbiome analysis, other models have been proposed in addition to ZIBR. Among the most important are ZIBR-SRE (Han et al., 2021), an extension of ZIBR that considers the compositional nature of microbiota data; zero-inflated Gaussian mixed models (ZIGMM) (Zhang et al., 2020), which in addition to managing the overabundance of zeros can work with both proportion data and counts; and the negative binomial mixed model (NBMM) (Zhang et al., 2018), which allows the specification of more general variance structures and can also be modified to deal with zero inflation. It would be interesting to implement the SAEM algorithm for these models and study its potential benefits in estimation.
Finally, an interesting extension of this work would be to obtain Restricted Maximum Likelihood (REML) estimates, a well-known method for reducing bias in the estimation of variance components in mixed effects models (Meza et al., 2007), using Harville's approach, i.e., integrating out the fixed effects, via the SAEM algorithm. We expect that, in the context of longitudinal models for microbiome data, this could improve the results obtained through ML estimation.

Supplementary materials

The supplementary materials include extended simulation studies, tables and figures referenced in Sections 3 and 4, and an additional case study. R code to implement the routines of the SAEM estimation and to reproduce the statistical analysis of Section 4 is available at https://github.com/jbarrera232/saem-zibr .

Declaration of conflicting interests

The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding

The work of the first and second author was supported by ANID MATH-AmSud Project AMSUD 230032-SMILE. The work of the first author was also supported by ANID Becas/Doctorado Nacional 21231659. The third author gratefully acknowledges the support of grants PID2021-123592OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by ERDF - A way of making Europe, and TED2021-129316B-I00 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
References

Arribas-Gil, A., Bertin, K., Meza, C., and Rivoirard, V. (2014). LASSO-type estimators for semiparametric nonlinear mixed-effects models estimation. Statistics and Computing, 24(3):443–460.
Baldelli, V., Scaldaferri, F., Putignani, L., and Del Chierico, F. (2021). The role of Enterobacteriaceae in gut microbiota dysbiosis in inflammatory bowel diseases. Microorganisms, 9(4):697.
Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300.
Cai, L. (2010). High-dimensional exploratory item factor analysis by a Metropolis-Hastings Robbins-Monro algorithm. Psychometrika, 75(1):33–57.
Celeux, G., Chauveau, D., and Diebolt, J. (1995). On Stochastic Versions of the EM Algorithm. Research Report RR-2514, INRIA.
Chan, A. W., Song, F., Vickers, A., Jefferson, T., Dickersin, K., Gøtzsche, P. C., Krumholz, H. M., Ghersi, D., and Van Der Worp, H. B. (2014). Increasing value and reducing waste: addressing inaccessible research. The Lancet, 383(9913):257–266.
Chen, E. Z. and Li, H. (2016). A two-part mixed-effects model for analyzing longitudinal microbiome compositional data. Bioinformatics, 32(17):2611–2617.
Clemente, J. C., Ursell, L. K., Parfrey, L. W., and Knight, R. (2012). The impact of the gut microbiota on human health: an integrative view. Cell, 148(6):1258–1270.
Comets, E., Karimi, B., Delattre, M., Ranke, J., Lavenu, A., Lavielle, M., Chanel, M., Guhl, M., Fayette, L., and Kaisaridi, S. (2021). Saemix user's guide, version 3.0. https://github.com/iame-researchCenter/saemix/blob/7638e1b09ccb01cdff173068e01c266e906f76eb/docsaem.pdf
Comets, E., Lavenu, A., and Lavielle, M. (2017). Parameter estimation in nonlinear mixed effect models using saemix, an R implementation of the SAEM algorithm. Journal of Statistical Software, 80(3):1–41.
de la Cruz, R., Lavielle, M., Meza, C., and Núñez-Antón, V. (2024). A joint analysis proposal of nonlinear longitudinal and time-to-event right-, interval-censored data for modeling pregnancy miscarriage. Computers in Biology and Medicine, 182:109186.
Dekaboruah, E., Suryavanshi, M. V., Chettri, D., and Verma, A. K. (2020). Human microbiome: an academic update on human body site specific surveillance and its possible role. Archives of Microbiology, 202:2147–2167.
Delyon, B., Lavielle, M., and Moulines, E. (1999). Convergence of a stochastic approximation version of the EM algorithm. Annals of Statistics, pages 94–128.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22.
D'Agata, A. L., Wu, J., Welandawe, M. K. V., Dutra, S. V. O., Kane, B., and Groer, M. W. (2019). Effects of early life NICU stress on the developing gut microbiome. Developmental Psychobiology, 61(5):650–660.
Eggers, J. (2015). On Statistical Methods for Zero-Inflated Models. Thesis, Uppsala Universitet.
Han, Y., Baker, C., Vogtmann, E., Hua, X., Shi, J., and Liu, D. (2021). Modeling longitudinal microbiome compositional data: a two-part linear mixed model with shared random effects. Statistics in Biosciences, 13:243–266.
Handayani, D., Notodiputro, K. A., Sadik, K., and Kurnia, A. (2017). A comparative study of approximation methods for maximum likelihood estimation in generalized linear mixed models (GLMM). AIP Conference Proceedings, 1827(1):020033.
Hu,
J., Wang, C., Blaser, M. J., and Li, H. (2022). Joint modeling of zero-inflated longitudinal proportions and time-to-event data with application to a gut microbiome study. Biometrics, 78(4):1686–1698.
Jeyakumar, T., Beauchemin, N., and Gros, P. (2019). Impact of the microbiome on the human genome. Trends in Parasitology, 35(10):809–821.
Kloek, T. and van Dijk, H. K. (1978). Bayesian estimates of equation system parameters: An application of integration by Monte Carlo. Econometrica, 46(1):1–19.
Kodikara, S., Ellul, S., and Lê Cao, K.-A. (2022). Statistical challenges in longitudinal microbiome data analysis. Briefings in Bioinformatics, 23(4):1–18.
Kuhn, E. and Lavielle, M. (2005). Maximum likelihood estimation in nonlinear mixed effects models. Computational Statistics & Data Analysis, 49(4):1020–1038.
Lewis, J. D., Chen, E. Z., Baldassano, R. N., Otley, A. R., Griffiths, A. M., Lee, D., Bittinger, K., Bailey, A., Friedman, E. S., Hoffmann, C., et al. (2015). Inflammation, antibiotics, and diet as environmental stressors of the gut microbiome in pediatric Crohn's disease. Cell Host & Microbe, 18(4):489–500.
Liu, L., Shih, Y.-C. T., Strawderman, R. L., Zhang, D., Johnson, B. A., and Chai, H. (2019). Statistical Analysis of Zero-Inflated Nonnegative Continuous Data: A Review. Statistical Science, 34(2):253–279.
Louis, T. (1982). Finding the observed information matrix when using the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 44(2):226–233.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092.
Meza, C., Jaffrézic, F., and Foulley, J.-L. (2007). REML estimation of variance parameters in nonlinear mixed effects models using the SAEM algorithm.
Biometrical Journal: Journal of Mathematical Methods in Biosciences, 49(6):876–888.
Meza, C., Osorio, F., and De la Cruz, R. (2012). Estimation in nonlinear mixed-effects models using heavy-tailed distributions. Statistics and Computing, 22(1):121–139.
Min, Y. and Agresti, A. (2005). Random effect models for repeated measures of zero-inflated count data. Statistical Modelling, 5(1):1–19.
Mirsepasi-Lauridsen, H. C., Vallance, B. A., Krogfelt, K. A., and Petersen, A. M. (2019). Escherichia coli pathobionts associated with inflammatory bowel disease. Clinical Microbiology Reviews, 32(2):e00060–18.
Myers, W. R. (2000). Handling missing data in clinical trials: an overview. Drug Information Journal: DIJ/Drug Information Association, 34:525–533.
Márquez, M., Meza, C., Lee, D.-J., and De la Cruz, R. (2023). Classification of longitudinal profiles using semi-parametric nonlinear mixed models with P-splines and the SAEM algorithm. Statistics in Medicine, 42(27):4952–4971.
Nelder, J. and Lee, Y. (1992). Likelihood, quasi-likelihood and pseudolikelihood: some comparisons. Journal of the Royal Statistical Society Series B: Statistical Methodology, 54(1):273–284.
Ospina, R. and Ferrari, S. L. (2012). A general class of zero-or-one inflated beta regression models. Computational Statistics & Data Analysis, 56(6):1609–1623.
Powney, M., Williamson, P., Kirkham, J., and Kolamunnage-Dona, R. (2014). A review of the handling of missing longitudinal outcome data in clinical trials. Trials, 15(1):1–11.
Rigby, R. A. and Stasinopoulos, D. (1996). A semi-parametric additive model for variance heterogeneity. Statistics and Computing, 6:57–65.
Rigby, R. A. and Stasinopoulos, D. (2005). Generalized additive models for location, scale and shape (with discussion). Applied Statistics, 54:507–554.
Samson, A., Lavielle, M., and Mentré, F. (2007). The SAEM algorithm for group comparison tests in longitudinal data analysis based on non-linear mixed effects model. Statistics in Medicine, 26(27):4860–4875.
Stasinopoulos, M. D., Rigby, R. A., Heller, G. Z., Voudouris, V., and De Bastiani, F. (2017). Flexible regression and smoothing: using GAMLSS in R. CRC Press, Taylor & Francis Group.
Turnbaugh, P. J., Ley, R. E., Hamady, M., Fraser-Liggett, C. M., Knight, R., and Gordon, J. I. (2007). The human microbiome project. Nature, 449(7164):804–810.
Tyler, A. D., Smith, M. I., and Silverberg, M. S. (2014). Analyzing the human microbiome: a "how to" guide for physicians. American College of Gastroenterology, 109(7):983–993.
Wei, G. C. G. and Tanner, M. A. (1990). A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704.
Zhang, X., Guo, B., and Yi, N. (2020). Zero-inflated gaussian mixed models for analyzing longitudinal microbiome data. PloS ONE, 15(11):e0242073.
Zhang, X., Pei, Y.-F., Zhang, L., Guo, B., Pendegraft, A. H., Zhuang, W., and Yi, N. (2018). Negative binomial mixed models for analyzing longitudinal microbiome data. Frontiers in Microbiology, 9:1683.
Zhang Chen, E. (2023). ZIBR: A Zero-Inflated Beta Random Effect Model. R package version 1.0.2.
Zhu, H.-T. and Lee, S.-Y. (2002). Analysis of generalized linear mixed models via a stochastic approximation algorithm with Markov chain Monte-Carlo method. Statistics and Computing, 12(2):175–183.
Supplementary Material to A stochastic method to estimate a zero-inflated two part mixed model for human microbiome data

John Barrera, Cristian Meza and Ana Arribas-Gil

arXiv:2504.15411v1 [stat.ME] 21 Apr 2025

Appendix A Additional comparative simulation study on unbalanced and interpolated datasets

In this section we report results obtained by applying the ZIBR model in an unbalanced data scenario. In this context, some studies (Abe and Iwasaki, 2007) advise imputing missing values with the individual average, or removing some observations from the data, so that longitudinal data methods that cannot cope with unbalanced designs can still be used. This is therefore the approach we consider here in order to compare GHQ to SAEM in this setting. However, as already mentioned, interpolation can introduce inaccuracies into the estimations. In comparison, the SAEM algorithm can be applied directly to the original unbalanced data. We also compare with the estimation obtained by the GAMLSS implementation, which does allow working with unbalanced data.

A.1 Setup

In the unbalanced data scenario, data generation has two parts. First, we use the parameters of Setting 2 (a = b = −0.5, α = β = 0.5, σ1 = 0.7, σ2 = 0.5, ϕ = 6.4) to generate 1000 datasets, with Ti = 10 and different values for the number of individuals, N ∈ {50, 100, 200}. The variables X and Z are defined as usual. Then, we randomly eliminate 20% of the observations from each dataset; therefore, the median (interquartile range, IQR) of the number of observations per individual in
the three specifications is 8 (IQR: 7 to 9). Given the drop-out method we chose to simulate an unbalanced data situation, we can assume that we are in a case of MCAR (Missing Completely At Random) data (Rubin, 1976). Finally, we will compare the performance of SAEM on unbalanced data with GHQ on interpolated data. For each individual, the interpolation process is carried out as follows:
•if the missing value is between two known observations, linear interpolation is performed; and
•if the missing value is at the end of the observations, it is replaced with the last known value.
As in the main article, we compute the bias, MAE and RMSE of the estimations with all methods and the estimated densities of the parameter estimates.

A.2 Results

First of all, notice that, of the two simulation scenarios used in the analysis of balanced data (Section 3 of the main document), the one used here (Setting 2) was the most favorable to the GHQ method. Table 1 shows the results of the estimation on unbalanced datasets for the SAEM algorithm and the GAMLSS method, and on interpolated datasets with the GHQ procedure. These results show that in most cases the SAEM estimators outperform the GHQ and GAMLSS estimators by having a lower absolute bias. Furthermore, for both SAEM and GAMLSS, increasing the number of individuals reduces the MAE and RMSE values in all cases, something that cannot be said for the estimates obtained by GHQ. Although there are some scenarios in which GHQ obtains better error measures, this advantage is quite small, whereas when SAEM is superior the differences are much more noticeable, as can be clearly seen in Figure 1. In this scenario, the GAMLSS estimates present the best RMSE and MAE values of the three methods in most cases, although this advantage is generally quite small.
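As a rough sketch of the two preprocessing steps of the setup above, MCAR dropout for the unbalanced scenario and per-individual interpolation for GHQ, the following Python fragment illustrates the logic (the function names and the long-format layout are our own illustrations, not part of any package used in the paper):

```python
import numpy as np
import pandas as pd

def drop_mcar(data: pd.DataFrame, frac: float = 0.20, seed: int = 0) -> pd.DataFrame:
    """Delete a fraction of rows completely at random (MCAR dropout)."""
    rng = np.random.default_rng(seed)
    n_drop = int(round(frac * len(data)))
    drop_rows = rng.choice(data.index.to_numpy(), size=n_drop, replace=False)
    return data.drop(index=drop_rows)

def interpolate_individual(y: pd.Series) -> pd.Series:
    """Linear interpolation for interior gaps; missing values at the end of
    the series are replaced with the last known value (LOCF)."""
    return y.interpolate(method="linear", limit_area="inside").ffill()

# Balanced toy design: N = 4 individuals with T_i = 10 observations each.
balanced = pd.DataFrame({
    "id": np.repeat(np.arange(4), 10),
    "time": np.tile(np.arange(10), 4),
})
unbalanced = drop_mcar(balanced, frac=0.20, seed=1)  # 40 rows -> 32 rows

# One individual's trajectory with an interior gap and a trailing gap.
y = pd.Series([1.0, np.nan, 3.0, 4.0, np.nan])
filled = interpolate_individual(y)  # [1.0, 2.0, 3.0, 4.0, 4.0]
```

Because row deletion ignores the observed values entirely, the resulting missingness mechanism is MCAR by construction.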
Additionally, as the sample size increases, these estimates gradually approach those obtained by SAEM.

Table 1: Summary statistics of the results obtained by the SAEM algorithm and GAMLSS method on unbalanced, and the GHQ procedure on interpolated, data sets over 1000 simulation runs. For each parameter value and number of individuals, N, bold numbers indicate the lowest (absolute) value for each of bias, RMSE and MAE.

Parameter  Value           SAEM (unbalanced)            GHQ (interpolated)           GAMLSS (unbalanced)
                           Bias     RMSE    MAE         Bias     RMSE    MAE         Bias     RMSE    MAE
a          -0.5  N = 50    0.0048   0.2083  0.1662      0.1425   0.2726  0.2226      0.0264   0.1874  0.1494
                 N = 100   0.0015   0.1508  0.1199      0.1408   0.2206  0.1826      0.0275   0.1338  0.1081
                 N = 200   0.0041   0.1041  0.0830      0.1459   0.1869  0.1589      0.0283   0.0971  0.0774
α          0.5   N = 50    -0.0107  0.3419  0.2694      0.0329   0.3782  0.2965      -0.0279  0.3075  0.2445
                 N = 100   -0.0052  0.2450  0.1949      0.0358   0.2680  0.2109      -0.0296  0.2125  0.1714
                 N = 200   -0.0069  0.1801  0.1434      0.0296   0.2002  0.1584      -0.0314  0.1610  0.1299
b          -0.5  N = 50    -0.0016  0.1449  0.1156      -0.1495  0.2052  0.1689      -0.0028  0.1340  0.1073
                 N = 100   -0.0047  0.0977  0.0777      -0.1489  0.1765  0.1532      -0.0048  0.0928  0.0736
                 N = 200   -0.0031  0.0685  0.0555      -0.1476  0.1621  0.1481      -0.0045  0.0654  0.0531
β          0.5   N = 50    -0.0118  0.2283  0.1828      -0.0163  0.2171  0.1766      -0.0096  0.2131  0.1714
                 N = 100   0.0073   0.1578  0.1257      -0.0015  0.1497  0.1203      0.0070   0.1484  0.1177
                 N = 200   0.0042   0.1084  0.0864      -0.0054  0.1064  0.0842      0.0022   0.1012  0.0801
σ1         0.7   N = 50    -0.1083  0.2935  0.2164      0.1867   0.2706  0.2205      -0.0787  0.1859  0.1443
                 N = 100   -0.0288  0.1637  0.1226      0.2223   0.2606  0.2281      -0.0507  0.1246  0.0984
                 N = 200   -0.0180  0.1063  0.0841      0.2248   0.2441  0.2253      -0.0447  0.0912  0.0723
σ2         0.5   N = 50    -0.0532  0.1411  0.1034      -0.0038  0.0960  0.0764      0.0626   0.1059  0.0838
                 N = 100   -0.0245  0.0797  0.0616      0.0120   0.0685  0.0546      0.0759   0.0961  0.0811
                 N = 200   -0.0173  0.0554  0.0438      0.0210   0.0526  0.0425      0.0792   0.0897  0.0799
ϕ          6.4   N = 50    0.0405   0.9219  0.7313      0.2200   0.8603  0.6580      1.5476   1.8430  1.5695
                 N = 100   -0.0186  0.6004  0.4772      0.1560   0.5562  0.4419      1.4776   1.6241  1.4802
                 N = 200   -0.0454  0.4161  0.3334      0.1330   0.4030  0.3204      1.4547   1.5270  1.4547

It is interesting to note that the worst results of GHQ are concentrated in the parameters related to the logistic part of the ZIBR model, that is, the part that controls zero inflation. This could be evidence that interpolation affects this component of the model much more than the other. Figure 1, which shows the densities of the estimators obtained by the methods, confirms this behavior; it is also clear that the random components a and b are the ones whose distributions lie furthest from the theoretical values for the GHQ method with interpolation. The variance components also show greater deviation from the true value according to the density plot. For GAMLSS, the densities are comparable to those of SAEM on unbalanced datasets, with the exception of the σ2 and ϕ parameters, which are specifically responsible for regulating the dispersion in the beta part of the ZIBR model.
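The error measures reported in Table 1 are standard Monte Carlo summaries; as a minimal sketch (variable names ours), they can be computed from the vector of estimates across simulation runs as:

```python
import numpy as np

def summarize_estimates(estimates, true_value):
    """Bias, RMSE and MAE of a set of Monte Carlo parameter estimates."""
    err = np.asarray(estimates, dtype=float) - true_value
    return {
        "bias": float(err.mean()),
        "rmse": float(np.sqrt((err ** 2).mean())),
        "mae": float(np.abs(err).mean()),
    }

# Toy example: four estimates of a parameter whose true value is -0.5.
stats = summarize_estimates([-0.4, -0.6, -0.5, -0.3], true_value=-0.5)
```

In the study, `estimates` would hold the 1000 SAEM, GHQ or GAMLSS estimates of one parameter for a given N.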
Lastly, although a comparison with the results obtained on balanced data sets simulated under Setting 2 cannot be directly established, since the number of observations differs, notice that the estimation results obtained with SAEM in the two cases (Figure 1 here and Figure 2 in the main document) are quite similar, whereas for GHQ there is a clear impact of interpolation on the estimation results.

Figure 1: Estimated density of the parameters obtained by the SAEM algorithm on unbalanced datasets and the GHQ procedure on interpolated datasets simulated under Setting 2. The dotted vertical line represents the true value of the parameter.

Appendix B Additional simulation study of hypothesis tests on covariates association

In the context of the ZIBR model, the effect of covariates on the presence or absence and on the abundance of a bacterial taxon must be studied. Therefore, we need procedures to test the null hypotheses H0: α = β = 0, H0: α = 0 and H0: β = 0. A common approach in mixed models is to use the Likelihood Ratio Test (LRT) in this context, as a way of comparing two nested models. An alternative way to test the significance of both fixed- and random-effects parameters is to use the Wald test, for which standard error estimates of each parameter are required.

B.1 Likelihood ratio test

The likelihood ratio test
is performed to test the null hypothesis H0: α = β = 0. We will now analyze its type I error with the SAEM estimation method. As for parameter estimation, we are interested in the performance on both balanced and unbalanced data, and we will compare it with the results of the LRT based on the GHQ procedure, in the balanced data scenario only. However, as can be seen in Stasinopoulos et al. (2017), the calculation of the log-likelihood of the ZIBR model using GAMLSS is not the same as in the other two procedures, since it uses estimators based on a penalized quasi-likelihood method (Breslow and Clayton, 1993). For this reason, the log-likelihood values obtained with GAMLSS are not comparable with those of the other methods and the Likelihood Ratio Test cannot be applied in the same way; therefore, we do not report it in this section. The parameter values used to simulate 1000 datasets are set as follows:
•a = −0.5, b = 0.5,
•α = β = 0,
•σ1 = 0.7, σ2 = 0.5,
•ϕ = 6.4.
The number of individuals in each dataset takes two values, N ∈ {50, 100}, and we keep the number of observations per individual fixed at Ti = 10. The procedure is carried out both on 1000 balanced datasets and on 1000 datasets from which 20% of the observations are dropped out following the MCAR process already described. The SAEM estimation settings were the same as above, while the log-likelihood estimation by Importance Sampling was carried out by simulating K = 500 values. The results shown in Table 2 make evident the similar behavior of the LRT procedure on balanced and unbalanced data. In no case is there a significant difference between the expected and obtained values, and this behavior is not affected by the number of individuals or by the type of data. Both SAEM and GHQ obtain good results in the balanced case, while in the unbalanced case SAEM approximates the theoretical level quite well.
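As a sketch of the mechanics (the log-likelihood values below are made up for illustration): the LRT statistic is twice the log-likelihood gap between the full and the null model, compared against a χ² distribution whose degrees of freedom equal the number of constrained parameters, two for H0: α = β = 0. For two degrees of freedom the χ² survival function has the closed form exp(−x/2), so no statistics library is needed:

```python
import math

def lrt_pvalue_2df(loglik_full: float, loglik_null: float) -> float:
    """LRT p-value when the null hypothesis constrains two parameters.
    The chi-square(2) survival function is exp(-x / 2)."""
    stat = 2.0 * (loglik_full - loglik_null)
    return math.exp(-stat / 2.0)

# Hypothetical fitted log-likelihoods for one simulated dataset.
p = lrt_pvalue_2df(loglik_full=-120.3, loglik_null=-123.9)  # stat = 7.2
```

Repeating this over the 1000 datasets simulated under the null and counting p-values below 0.05 or 0.01 gives the empirical type I errors reported in Table 2.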
Table 2: Type I error for testing H0: α = β = 0 in balanced and unbalanced data with the SAEM algorithm and the GHQ procedure, for nominal significance levels of 0.05 and 0.01.

                        SAEM                      GHQ
                        Significance level        Significance level
Data type               0.05      0.01            0.05      0.01
Balanced     N = 50     0.050     0.007           0.059     0.009
             N = 100    0.048     0.009           0.050     0.012
Unbalanced   N = 50     0.064     0.009
             N = 100    0.045     0.010

B.2 Wald test for individual parameters

Finally, we check the results of the Wald test using the standard errors of the estimators computed by stochastic approximation of the observed Fisher information matrix. It is important to mention that the GHQ method implemented in the ZIBR package does not offer any method for calculating the standard errors of the estimates, so the results obtained using SAEM cannot be compared with those provided by the GHQ method. However, GAMLSS does produce estimates of the standard errors of the estimators, so comparisons can be made between SAEM and GAMLSS.

B.2.1 Fixed effects

The null hypotheses to be tested are H0: α = 0 and H0: β = 0, for which the test statistics are defined by α̂²/Var(α̂) and β̂²/Var(β̂), where Var(α̂) and Var(β̂) are estimated by the procedure described in Section 2.3 of the main document. Under the null hypothesis, these statistics follow an asymptotic χ² distribution with one degree of freedom. We keep the simulation settings used for the LRT. To improve the convergence properties of the SAEM algorithm, we use m = 10 Markov chains in the execution. The results are summarized in Table 3. We can see that the values obtained through simulation are not far from the theoretically expected ones, although slightly above those obtained with the Likelihood Ratio Test. As shown in the table, SAEM achieves type I errors close to the theoretical values of the test, while GAMLSS does not achieve optimal results. Furthermore, the inaccuracies of GAMLSS are more pronounced for testing β = 0 than for testing α = 0, which confirms the tendency of GAMLSS to produce poor results in the beta component of the ZIBR model.

Table 3: Type I error of the Wald test for H0: α = 0 and H0: β = 0 using the SAEM algorithm, for nominal significance levels of 0.05 and 0.01.

                  SAEM                      GAMLSS
                  Significance level        Significance level
                  0.05      0.01            0.05      0.01
H0: α = 0         0.069     0.022           0.120     0.038
H0: β = 0         0.062     0.013           0.296     0.158

B.2.2 Random effects

The null hypotheses are now H0: a = 0 and H0: b = 0, and the corresponding test statistics, â²/Var(â) and b̂²/Var(b̂), each follow a χ² distribution with one degree of freedom if the null hypotheses hold. The simulation settings are now those of Setting 2 with a = 0 and b = 0. As for the fixed-effects test, we use m = 10 Markov chains to accelerate convergence. The results of the test are presented in Table 4.

Table 4: Type I error of the Wald test for H0: a = 0 and H0: b = 0 using the SAEM algorithm, for nominal α-levels of 0.05 and 0.01.
                  SAEM                      GAMLSS
                  Significance level        Significance level
                  0.05      0.01            0.05      0.01
H0: a = 0         0.051     0.014           0.092     0.029
H0: b = 0         0.049     0.011           0.250     0.126

The results of this section, like those of the previous one, indicate that the standard errors computed by SAEM meet the expected statistical properties, while GAMLSS does not obtain similar type I errors. One advantage of this result is that it allows hypothesis tests to be developed in a less computationally demanding way than the Likelihood Ratio Test: the LRT requires the estimation of two different models, while the Wald test only needs one.

Appendix C Additional case study: Pregnancy effect in vaginal microbiome

In this appendix we apply the ZIBR model to the analysis of longitudinal data from a study (Romero et al., 2014) describing the vaginal microbiome of a group of 22 pregnant and 32 non-pregnant women. We try to verify the effect of pregnancy on the distribution of the different bacterial taxa observed, in a similar way to what is done in a recent work (Zhang et al., 2020). However, unlike the cited study, where the
analysis is performed on count data, we will analyse the data as proportions using the ZIBR model with SAEM estimation. Furthermore, a comparison of the results with the GHQ method cannot be established, since the number of time points differs between individuals; that is, the data is unbalanced.

Table 5: Characteristics of the two groups of women, separated by pregnancy status

Variable¹               Non-pregnant (N = 32)   Pregnant (N = 22)
Age (years)             37 (31-43)              24 (20-29)
Time (months)           3.40 (3.52-3.67)        8.13 (8.00-8.43)
N. of observations      24 (21-29)              6 (6-7)
¹Mean (Q1-Q3)

A preliminary review of the data (Table 5) shows large differences between the characteristics of pregnant and non-pregnant women. The time span of the observations is much longer for pregnant women, although they have far fewer readings than non-pregnant women. In addition, the pregnant women are on average much younger than the non-pregnant group. This is why we decided to include age as a covariate in the models to be used, according to the following specifications:
•Model 1: pregnancy, time and age as covariates, taking pregnancy as the factor of interest for testing.
•Model 2: pregnancy, time, age and the interaction between time and pregnancy as covariates, testing the effect of pregnancy and the interaction.
Once we filter out the bacterial taxa with a proportion of zeros outside the range 0.1 to 0.9, as well as those that are absent in either of the two groups of women, we have 897 observations from 54 individuals and 57 taxa. With these data, we performed the LRT for the proposed variables in each model using SAEM at a significance level of 0.05.

Figure 2: Proportion of presence of the taxa in the observations of the two groups of women (pregnant and non-pregnant) and the difference between these values

Figures 2 and 3 show the differences in the presence of the taxa considered and the distribution of the non-zero abundance data.
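The taxon screening step just described, keeping taxa whose proportion of zero observations lies between 0.1 and 0.9, can be sketched as follows; the column layout and function name are our own illustrations:

```python
import pandas as pd

def filter_taxa(props: pd.DataFrame, lo: float = 0.1, hi: float = 0.9) -> pd.DataFrame:
    """Keep taxa (columns) whose fraction of zero observations is in [lo, hi]."""
    zero_frac = (props == 0).mean(axis=0)
    return props.loc[:, zero_frac.between(lo, hi)]

# Toy relative-abundance table: one column per taxon, one row per sample.
props = pd.DataFrame({
    "taxonA": [0.0, 0.0, 0.3, 0.4, 0.5, 0.0, 0.0, 0.2, 0.1, 0.0],  # 50% zeros
    "taxonB": [0.0] * 10,                                           # all zeros
    "taxonC": [0.1] * 10,                                           # no zeros
})
kept = filter_taxa(props)  # only taxonA survives
```

The additional criterion used in the paper, discarding taxa absent in either of the two groups, would further require the pregnancy labels and a per-group presence check.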
Figure 3: Logit of the non-zero abundance of the taxa in the observations of the two groups of women (pregnant and non-pregnant)

From these figures we can see that, on average, there are more bacteria with a higher abundance in non-pregnant women. In pregnant women, however, the dominance of the Lactobacillus genus among the most abundant bacteria is very evident, consistent with previous findings (Walther-António et al., 2014), which also suggest that low variety and high stability are further characteristics of the vaginal microbiome in pregnant women. The LRT indicates that Model 1 was able to detect a higher number of bacteria (51% of all taxa) affected by pregnancy than Model 2 (16%), for which the interaction of time with pregnancy is statistically significant for a higher number of bacteria (26%) than pregnancy alone. Figure 4 details these results further. In particular, for Model 1 the bacteria for which an influence of pregnancy is detected are more commonly found among those that are more abundant in non-pregnant women. In comparison, only 2 of the bacteria most present in pregnant women show a statistically significant effect of pregnancy: Lactobacillus crispatus and Sneathia
sanguinegens. The information in Table 6 also confirms these findings, showing that the coefficients associated with pregnancy for these taxa in the abundance part have different signs. A previous work (Romero et al., 2014) found that bacteria of the genus Sneathia, potentially pathogenic, reduce their presence during pregnancy, while Lactobacillus crispatus abundance in pregnancy is associated with a lower risk of preterm delivery (Veščičík et al., 2020). However, Figure 4b shows that adding the time-pregnancy interaction to the model specification changes the results. First, only a small number of the bacteria that predominate in non-pregnant women show significant dependence on pregnancy or on the interaction between time and pregnancy. But in the other group of bacteria, nine show significance for pregnancy or the interaction, and three of them for both: Lactobacillus jensenii, Prevotella genogroup 1 and Megasphaera sp type 1. Previous studies (Severgnini et al., 2022) report that both Lactobacillus bacteria and those associated with bacterial vaginosis (Prevotella, Sneathia) change their abundance between pregnant and non-pregnant women, and also over time in case of pregnancy. The coefficients associated with the variables can be consulted in Table 7. In view of these results, we can assert that the ZIBR model with SAEM estimation for relative abundance data reaches conclusions similar to both previous research results and the mixed models defined for log-transformed count data.

Figure 4: Negative of the log-transformed p-value of the LRT for the variables of interest in Model 1 (a) and Model 2 (b) for the bacterial taxa.
The horizontal line represents the threshold α= 0.05 8 Table 6: ML estimates calculated by the SAEM algorithm for the parameters of Model 1 on the vaginal microbiome data Logistic part Beta part Taxa Time Pregnancy Age1Time Pregnancy Age1 More present in non-pregnant Actinomycetales 0.1643 -2.9565 -4.3481 0.2711 -0.5074 0.1593 Peptoniphilus -0.5538 -3.3886 -4.9629 0.3549 -0.5902 0.4930 Finegoldia magna 0.2397 -3.0966 -4.3831 -0.2211 -0.4315 -0.0565 Anaerococcus 0.3107 -3.1733 -4.4771 -0.0889 -0.4276 0.1836 Clostridiales Family XI Incertae Sedis 0.6038 -5.0301 -5.4008 -0.2489 -0.1186 0.4467 Prevotella 0.0793 -2.7107 -4.7978 0.0155 -0.5715 0.0004 Anaerococcus vaginalis 0.8104 -4.6513 -5.3721 -0.2472 -0.3762 0.2780 Prevotella genogroup 2 0.5090 -1.6274 -4.0081 0.7375 -1.2336 -0.3287 Dialister 2.2496 -2.9795 -5.6544 0.4761 -0.2715 0.0820 Atopobium 2.0765 -5.5589 -5.2281 -0.2565 -0.5111 -0.0844 Gardnerella vaginalis 0.6939 -2.9834 -4.2448 0.4555 -0.9249 -0.4500 Leptotrichia amnionii 0.5094 -2.7212 -3.0925 -0.2628 -0.4553 -0.3020 Bacteroides 0.6935 -3.2685 -5.5691 -0.9732 0.1607 0.0274 Clostridiales 0.4412 -1.1113 -4.1706 0.6240 -0.8599 0.3289 Streptococcus anginosus 1.1914 -2.7476 -3.2483 -0.4074 -0.3116 -0.3012 Staphylococcus -1.2574 -1.8653 -2.9380 -0.4051 -0.2712 -0.5260 Bacteria -0.8519 -0.0419 -5.5588 -0.2858 -0.2568 0.2760 Peptoniphilus lacrimalis 0.0722 -2.1754 -4.4349 -0.0621 -0.5729 0.1260 Corynebacterium accolens 0.2755 -3.5430 -6.4496 -0.1933 -0.1842 0.1541 Parvimonas micra 1.7007 -1.9734 -4.0355 -0.5192 -0.1024 0.5413 Porphyromonas -0.4227 -2.2421 -5.6416 0.4346 -1.0118 -0.3632 Prevotella bivia 0.6292 -2.7653 -4.0522 0.5408 -0.7516 -0.4583 Bifidobacteriaceae 1.2630 -2.0531 -4.9472 0.2806 -0.1624 -0.1074 Peptoniphilus asaccharolyticus -0.1290 -3.6205 -6.3497 -0.6669 -0.2086 -0.0705 Peptoniphilus harei 0.3274 -2.4012 -6.4962 -0.4734 -0.2088 0.2916 Actinomyces 1.7491 -2.8264 -6.0772 0.2213 -0.5420 -0.0433 Sneathia 0.7347 -4.7077 -4.9721 -0.7067 -0.3644 -0.7778 
Gemella 0.6244
-3.1960 -4.3473 0.0634 -0.3158 0.1018 Coriobacteriaceae 0.7464 -3.6256 -5.1390 -0.6120 -0.2043 -0.2116 Veillonellaceae 1.3996 -3.8063 -6.2298 -0.2779 0.1731 -0.2848 Eggerthella 3.9614 -3.8869 -4.4172 0.2629 -1.0546 -1.0317 Lachnospiraceae 0.4803 -3.3161 -4.0073 -1.3028 -0.0050 -0.7322 Bacteroidales 0.8925 -2.6188 -5.7584 -0.5879 0.1703 -1.1204 Peptostreptococcus -1.5022 -2.0752 -3.6386 -0.7837 -0.3178 -0.3577 Aerococcus christensenii 0.8137 -1.7496 -4.0424 0.6430 -0.6755 -0.7022 Dialister sp type 3 -0.4974 -2.8009 -5.9637 -0.5940 -0.4529 -0.5076 Mobiluncus curtisii 0.6626 -4.2598 -3.8239 -0.8479 -0.1577 -0.7071 Lactobacillales -0.0490 -3.7421 -6.4767 -0.6701 -0.1307 0.4199 Actinomycetaceae -0.2150 0.0529 -4.8075 0.3671 -0.5665 -0.6179 Anaerococcus tetradius -1.8136 -0.3787 -4.3929 0.0235 -0.5861 -0.2021 Prevotella genogroup 3 -3.1242 1.4811 -4.4904 -0.1353 0.7275 0.4336 Firmicutes 0.2795 -1.6765 -5.7859 -1.7511 1.1085 -0.4444 Atopobium vaginae 2.6754 -2.3296 -1.8858 1.0732 -1.7974 -0.7179 Streptococcus 1.4210 -2.3253 -4.1598 -0.6664 -0.1194 -0.6794 Aerococcus 0.6551 -1.6029 -5.4446 0.7116 -1.3571 -3.0284 BVAB1 -2.0914 2.6161 -3.9562 -0.3836 -0.2357 0.1702 Ureaplasma 0.6215 -1.5126 -5.5155 -1.0830 0.0987 -0.2251 More present in pregnant Lactobacillus jensenii 1.1874 2.6978 -2.9872 0.4672 0.1064 -0.6276 Lactobacillus crispatus 1.2249 2.4304 -2.1262 0.0366 1.2101 1.0443 Lactobacillus vaginalis -1.2876 3.1935 -5.5519 0.3050 -0.2523 -1.1646 Lactobacillus gasseri 0.1602 0.3210 -2.8409 -0.8749 0.3214 -0.1971 Lactobacillus 2.8199 0.5166 -2.7068 -0.3749 -0.5104 -1.0172 Lactobacillus iners 3.0629 -0.2200 -0.4422 -0.2367 0.4068 -0.8884 Prevotella genogroup 1 -1.4204 3.3757 -2.3026 -1.4471 -0.7525 -2.6299 Megasphaera sp type 1 0.4711 1.2356 -3.7876 0.3543 -0.5830 -0.2166 Sneathia sanguinegens -0.3651 2.9115 -3.0625 -0.1183 -1.3089 -1.6596 Proteobacteria -0.3355 0.8134 -6.0708 0.2224 -0.2306 0.0222 Note: Bold coefficients represent a statistically significant variable for the 
corresponding taxa according to LRT (α= 0.05). 1Variable scaled to [0 ,1]. 9 Table 7: ML estimates calculated by the SAEM algorithm for the parameters of Model 2 adjusted on the vaginal microbiome data Logistic part Beta part Taxa Time Pregnancy Age1Interaction Time Pregnancy Age1Interaction More present in non-pregnant Actinomycetales 0.3948 -2.2550 1.2636 -4.5273 0.2574 -0.2449 0.4723 -0.1987 Peptoniphilus 0.3700 0.2810 0.9554 -5.1428 0.4979 -0.0130 0.7428 -0.8741 Finegoldia magna 0.9772 -0.3225 -0.6225 -4.4261 -0.1683 -0.4308 -0.0207 0.0193 Anaerococcus 0.8629 -0.6215 1.8259 -4.6912 0.0549 0.0580 0.4514 -0.5976 Clostridiales Family XI Incertae Sedis 0.7207 -10.1685 1.4287 -5.5151 -0.1724 -0.0642 0.5863 0.0701 Prevotella 0.4429 -1.3821 1.6417 -4.8826 0.1590 -0.1239 0.1221 -0.8141 Anaerococcus vaginalis 0.7951 -4.0813 -0.0944 -5.4910 -0.1770 -0.1281 0.4443 -0.1729 Prevotella genogroup 2 0.9482 -0.5006 2.2294 -4.4230 0.9075 -0.3012 0.3405 -1.2531 Dialister 2.4420 -2.6952 0.3451 -5.7329 0.6666 0.2014 0.1664 -0.9042 Atopobium 3.3311 -3.1559 -2.0620 -5.4275 -0.1449 0.0914 0.2228 -0.7851 Gardnerella vaginalis 0.6400 -2.7894 -0.8232 -4.3367 0.5297 -0.7076 -0.2064 -0.4432 Leptotrichia amnionii 1.4334 -0.6080 -0.8310 -3.2125 -0.2064 -0.0218 -0.0096 -0.5715 Bacteroides 0.7602 -5.7682 2.1606 -5.7675 -0.8673 -7.3757 0.2836 9.7135 Clostridiales 0.6560 -0.9829 1.4696 -4.3330 0.9404 1.0325 0.4875 -2.7669 Streptococcus anginosus 2.0122 -0.3166 0.9938 -3.3596 -0.3247 -0.3774 -0.2174 0.1301 Staphylococcus -1.5362 -3.6995 -0.6303 -2.9711 -0.4023 -0.5308 -0.4955 0.3844 Bacteria -0.6773 0.2644 2.3964 -5.5534 -0.4084 -0.5027 0.2986 0.4826 Peptoniphilus lacrimalis -0.0278 -2.0074 3.6833 -4.6556 -0.0100 1.4279 0.4354 -2.5014 Corynebacterium accolens 0.5597 -0.7272 0.1489 -6.5660 -0.1154 -0.5092 0.2607 0.6831 Parvimonas micra 1.4761 -2.9152 2.2397 -4.3283 -0.5811 0.0751 1.1713 0.0643 Porphyromonas -0.1872 1.5996 2.2892 -5.8198 0.4243 -0.6574 -0.1062 -0.2790 Prevotella bivia 1.6976 0.5426 
-1.3359 -4.1394 0.6937 -0.3713 -0.3830 -0.7431 Bifidobacteriaceae 1.7009 -1.9078 -0.6436 -5.0737 0.2531 -0.1764 0.1672
0.0999
Peptoniphilus asaccharolyticus  -0.0138 -5.6375  1.3374 -6.6523 -0.5590 -0.1947  0.3948  0.0481
Peptoniphilus harei              0.7404  1.5115  1.1492 -6.6702 -0.3795 -0.0607  0.5226 -0.0205
Actinomyces                      1.3715 -7.1737  0.7501 -6.2155  0.3464 -0.4793  0.1216 -0.0661
Sneathia                         1.1231  0.1323  0.1920 -5.1472 -0.6467 -0.5812 -0.4221  0.4894
Gemella                          1.4290  0.5889 -0.2813 -4.4960  0.3385  0.2498  0.1827 -1.1261
Coriobacteriaceae                1.0181 -0.9975  1.5135 -5.2714 -0.5296 -0.0980 -0.0432  0.0134
Veillonellaceae                  2.0141  3.2373  1.9231 -6.2525 -0.3010  4.9707 -0.2761 -10.8236
Eggerthella                      7.0001  1.3307  0.1952 -4.6386  0.7614  0.8781 -0.8357 -3.1145
Lachnospiraceae                  0.4188 -6.9168  1.3680 -4.1897 -1.1530 -0.0616 -0.5525  0.0708
Bacteroidales                    0.7965 -4.1245  2.7556 -6.2071 -0.3769  3.4525 -0.4391 -4.0362
Peptostreptococcus              -0.7720  0.7950  0.5703 -3.7720 -0.6830 -0.4789 -0.2520  0.3792
Aerococcus christensenii         1.1763 -1.3857 -1.3277 -4.2673  0.7233 -0.5886 -0.3450  0.0167
Dialister sp type 3             -0.2289 -1.6974  0.2424 -6.1968 -0.4251 -0.2131 -0.2086 -0.2637
Mobiluncus curtisii              0.3987 -4.1474  1.6940 -4.1145 -1.0627  0.1107 -0.1854  0.2001
Lactobacillales                 -0.0543 -4.3852  0.8537 -6.4998 -0.5817 -0.1442  0.4201  0.0153
Actinomycetaceae                 0.2268  1.2868  2.9645 -5.0411  0.3207 -1.4314 -0.2794  1.5222
Anaerococcus tetradius          -1.2706  1.1320  1.5860 -4.6129  0.1162 -0.0854  0.0456 -0.7397
Prevotella genogroup 3          -1.7386  5.3548  3.9245 -4.4041 -0.1681  1.4938  0.3868 -1.8901
Firmicutes                       0.1821 -3.8383  3.1218 -5.9893 -1.6163 -3.2600 -0.1226  5.5245
Atopobium vaginae                3.3931 -1.8104 -0.2133 -2.2539  1.5484 -0.1086 -0.2015 -2.3932
Streptococcus                    2.2390 -0.7838 -2.3284 -4.1217 -1.0025 -1.1272 -0.6405  1.8209
Aerococcus                       1.0920 -0.1265  0.3653 -6.4798  0.4889 -1.1422 -0.9858  1.0428
BVAB1                           -2.4305  1.5174  0.0303 -4.0814 -0.2639  0.2136  0.2991 -0.5858
Ureaplasma                       0.4596 -1.5496 -2.9635 -5.5847 -1.0195 -0.0987 -0.0569  0.2484
More present in pregnant:
Lactobacillus jensenii          -0.4242  1.1176 -1.0204 -3.1727 -0.7723 -1.3386 -0.1073  3.3769
Lactobacillus crispatus          1.5798  2.9938  1.7026 -1.9845  0.0106  0.7945  0.8268  0.4186
Lactobacillus vaginalis         -3.5236 -2.9340 -5.3811 -6.1216 -1.3030 -0.7328  0.3448  3.0030
Lactobacillus gasseri           -1.2666 -1.9058 -1.5392 -2.8785 -0.9620  0.0504 -0.0161  0.2846
Lactobacillus                    2.6612  1.1124 -2.7797 -2.7404 -0.6025 -1.0576 -0.8576  1.0133
Lactobacillus iners              2.9543 -0.1484 -2.8471 -0.4604 -0.2748  0.3779 -0.8028  0.0509
Prevotella genogroup 1           1.4486  5.5025  1.2280 -2.9843 -0.7541  0.5118 -1.8807 -1.8349
Megasphaera sp type 1            3.2792  3.7197  0.4440 -3.9302  0.4912 -0.6149 -0.1020  0.0530
Sneathia sanguinegens            0.9750  4.0012  3.2933 -3.1000 -0.0251 -1.1921 -1.6335 -0.2152
Proteobacteria                  -1.3540 -0.9926  0.9526 -6.1615  0.2327 -0.6051  0.1561  0.5710

Note: Bold coefficients represent a statistically significant variable for the corresponding taxon according to the LRT (α = 0.05). ¹Variable scaled to [0, 1].

Appendix D  Additional figures and tables

D.1  Inflammatory bowel disorder pediatric study

Table 8 shows the adjusted p-values for the variables considered in the model for each bacterial taxon, while Table 9 summarizes the estimates obtained by the ZIBR model for Escherichia, clarifying in which part of the model the effect of the treatment is observed. Furthermore, Figure 5 shows the convergence behavior of the SAEM algorithm for the ZIBR model for this taxon, illustrating one of its main advantages: it does not require a large number of iterations before the estimates descend to their final values. Figure 6 supports the results of Table 9, showing the evolution over time of both the presence or absence of Escherichia and its average abundance in individuals; the behavior differs between treatments and controls in the presence part but not in the abundance part.

Table 8: P-values obtained by the Likelihood Ratio Test
https://arxiv.org/abs/2504.15411v1
based on the SAEM algorithm for the bacterial taxa of the IBD patient data. The p-values were corrected with the Benjamini-Hochberg procedure to control the false discovery rate.

Species           Baseline  Time    Treat
Bacteroides       0.0000    0.1172  0.2133
Ruminococcus      0.0008    0.1203  0.0033
Faecalibacterium  0.0000    0.2645  0.0009
Bifidobacterium   0.0000    0.1621  0.0008
Escherichia       0.0000    0.0293  0.0308
Clostridium       0.0003    0.2905  0.0934
Dialister         0.0008    0.2227  0.0045
Eubacterium       0.0049    0.0106  0.0114
Roseburia         0.0000    0.1587  0.0708
Streptococcus     0.0170    0.1706  0.0000
Dorea             0.0000    0.3773  0.0605
Parabacteroides   0.0000    0.0989  0.2028
Lactobacillus     0.0000    0.2208  0.0020
Veillonella       0.0000    0.7333  0.0053
Haemophilus       0.0000    0.1961  0.0000
Alistipes         0.0000    0.4010  0.0000
Collinsella       0.0000    0.1228  0.0078
Coprobacillus     0.0000    0.5255  0.3298

Table 9: Estimated effects with the SAEM algorithm of the variables in the ZIBR model for Escherichia.

Variable¹   Beta part            Logistic part
Baseline    2.5521*** (0.3656)   323.66*** (89.4662)
Time       -0.0356 (0.0261)      0.1979*** (0.0412)
Treat       0.0746 (0.1576)      3.0186*** (0.8815)

Note: Standard errors of the respective coefficients in parentheses. Symbol * (**, ***) represents significance at the 10% (5%, 1%) level. ¹Statistical significance is calculated with the Wald test.

Figure 5: Convergence of the ML estimates of the parameters of the ZIBR model for the Escherichia genus calculated by the SAEM algorithm. The SAEM routine was implemented with 5 Markov chains and 500 iterations.

Figure 6: Logit of the non-zero abundance (left) and percentage of samples with presence (right) for Escherichia in each treatment group (anti-TNF and EEN) across observation weeks.
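The Benjamini-Hochberg step-up procedure used to adjust the p-values in Table 8 is standard; the following is a minimal generic sketch of it (not the authors' implementation), producing BH-adjusted p-values for a vector of raw p-values.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # p_(i) * m / i for the sorted p-values
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest rank downward, then cap at 1
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.minimum(adj, 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

print(bh_adjust([0.01, 0.04, 0.03, 0.20]))
```

Taxa whose adjusted p-value falls below the chosen FDR level (here 0.05) are declared significant, as in Table 8.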
arXiv:2504.15515v2 [math.ST] 23 Apr 2025

TRANSPORT f-DIVERGENCES

WUCHEN LI

Abstract. We define a class of divergences to measure differences between probability density functions in one-dimensional sample space. The construction is based on a convex function applied to the Jacobi operator of the mapping function that pushforwards one density to the other. We call these information measures transport f-divergences. We present several properties of transport f-divergences, including invariances, convexities, variational formulations, and Taylor expansions in terms of mapping functions. Examples of transport f-divergences in generative models are provided.

1. Introduction

Measuring dissimilarities between probability densities is a crucial problem in machine learning [8] and Bayesian inference [11]. Information theory studies these dissimilarity functionals [6, 9]. In this area, f-divergences, invented by Csiszar-Morimoto [10] and Ali-Silvey [1], form a class of information measurements. Famous examples of f-divergences include the total variation (TV) distance, χ²-divergence, Kullback-Leibler (KL) divergence, Jensen-Shannon (JS) divergence [21], and α-divergences [2, 8]. They have been widely applied in image and signal processing [20], Markov Chain Monte Carlo (MCMC) sampling algorithms [15], Bayesian inverse problems, and generative modeling in artificial intelligence (generative AI) [11].

In recent years, optimal transport [23] has studied other types of distances between probability densities, which have grown popular in image and signal processing [20] and generative AI [5]. In this area, the distance function, called the earth mover's distance or Wasserstein distance, also measures the difference between probability densities, by comparing pushforward mapping functions between densities with respect to a ground cost on the sample space. It is known that the mapping function can measure probability densities with sparse support, such as empirical data distributions, while classical information divergences may not be well-defined [5, 11]. In particular, optimal transport studies a Riemannian metric on the space of probability densities, namely the Wasserstein-2 space [4, 23] or density manifold [13]. Recently, the convexities of information entropy functionals in Wasserstein-2 space have been used in information-theoretic inequalities [4, 22].

The asymmetry between probability densities is an important property in the study of information measures and their variational problems. In f-divergences, the asymmetry is constructed from the ratio between two probability density functions, called the likelihood function. However, this asymmetry is often lacking in classical optimal transport distances, especially with Euclidean distances as ground costs. A natural question arises. What are the analogs of the asymmetry property and of f-divergences in Wasserstein-2 space?

This paper defines a class of divergences in one-dimensional probability space. Let p, q be probability densities supported on Ω = R¹. Consider

  D_{T,f}(p‖q) = ∫_Ω f( q(x) / p(T(x)) ) q(x) dx,

where f: R → R₊ is a positive convex function with f(1) = 0, and T: Ω → Ω is a monotone function, which pushforwards density q to p. We call D_{T,f} the transport f-divergence. We use the ratio between q and p∘T to represent the likelihood function. Several properties of transport f-divergences are studied, including invariances, convexities, variational formulations, local behaviors, and Taylor expansions in Wasserstein-2 space. We also provide several examples of transport f-divergences and their formulas in generative models.

Key words and phrases: Transport f-divergences; Optimal transport; Chain rules.

In literature,
there are joint studies between information divergences and optimal transport distances [7, 12, 19, 22]. On the one hand, [22] applies the second-order derivatives of information divergences in Wasserstein-2 space to prove first-order entropy power inequalities and their generalizations. On the other hand, the analytical and statistical estimation properties of Wasserstein-2 distances have been studied for Gaussian distributions [19]. Compared to previous works, we define transport f-divergences, which generalize KL divergences, Hessian distances, and α-divergences in Wasserstein-2 space [15, 16, 17].

This paper is organized as follows. In section 2, we briefly review f-divergences and their properties. In section 3, we introduce the main result of this paper: we define transport f-divergences and formulate their properties. We present several examples of transport f-divergences in section 4. Several analytical formulas of transport f-divergences in location-scale families and generative models are provided in section 5.

2. Review of f-divergences

In this section, we briefly review f-divergences [1, 10], which measure the difference between probability distributions. Consider a one-dimensional sample space Ω = R¹. Denote the space of smooth positive probability density functions by

  P(Ω) = { p ∈ C^∞(Ω) : ∫_Ω p(x) dx = 1, p(x) > 0 }.

Given two probability density functions p, q ∈ P(Ω), define the f-divergence D_f : P(Ω) × P(Ω) → R₊ by

  D_f(p‖q) = ∫_Ω f( p(x)/q(x) ) q(x) dx,

where f : R → R₊ is a convex function with f(1) = 0. In general, D_f(p‖q) is not symmetric with respect to the densities p, q, i.e., D_f(p‖q) ≠ D_f(q‖p). For this reason, we call D_f a divergence function rather than a distance function.

There are several examples of f-divergences.

(1) Total variation: if f(u) = |u − 1|, then
  D_f(p‖q) = D_TV(p, q) = ∫_Ω |p(x) − q(x)| dx.

(2) χ²-divergence: if f(u) = |u − 1|², then
  D_f(p‖q) = D_{χ²}(p‖q) = ∫_Ω |p(x) − q(x)|² / q(x) dx.

(3) Squared Hellinger distance: if f(u) = ½|1 − √u|², then
  D_f(p‖q) = ½ ∫_Ω ( √p(x) − √q(x) )² dx.

(4) KL divergence: if f(u) = u log u − (u − 1), then
  D_f(p‖q) = D_KL(p‖q) = ∫_Ω p(x) log( p(x)/q(x) ) dx.

(5) JS divergence: if f(u) = ½( u log( 2u/(u+1) ) + log( 2/(u+1) ) ), then
  D_f(p‖q) = D_JS(p, q) = ½( D_KL(p ‖ (p+q)/2) + D_KL(q ‖ (p+q)/2) ).

The f-divergences exhibit several useful properties in estimation and AI sampling algorithms; see [3, 21].

(i) Nonnegativity: the f-divergence is always nonnegative and equals zero if and only if p = q.

(ii) Generalized entropy: let q = 1. Then
  D_f(p‖1) = ∫_Ω f(p(x)) dx.

(iii) Joint convexity: D_f(p‖q) is jointly convex in both variables p and q. For any constant λ ∈ [0, 1],
  D_f( λp₁ + (1−λ)p₂ ‖ λq₁ + (1−λ)q₂ ) ≤ λ D_f(p₁‖q₁) + (1−λ) D_f(p₂‖q₂).

(iv) Additivity and scaling: suppose f₁, f₂ are convex functions and a > 0; then
  D_{f₁ + a f₂}(p‖q) = D_{f₁}(p‖q) + a D_{f₂}(p‖q).

(v) Invariance: the f-divergence is invariant under bijective transformations. Suppose k: Ω → Ω is a bijective mapping function; then
  D_f(p‖q) = D_f(k#p ‖ k#q).
Besides, denoting f̃(u) = f(1/u) u, we have D_f(p‖q) = D_{f̃}(q‖p).

(vi) Variational formulation: assume f is strictly convex; then
  D_f(p‖q) = sup_φ ∫_Ω φ(x) p(x) dx − ∫_Ω f*(φ(x)) q(x) dx,
where the supremum is taken among continuous functions φ ∈ C¹(Ω; R), and f* is the convex conjugate of f, such that f*(v) := sup_{u∈R} { uv − f(u) }.

(vii) Local behaviors: suppose f ∈ C²; then
  lim_{λ→0} (1/λ²) D_f( (1−λ)q + λp ‖ q ) = ( f″(1)/2 ) D_{χ²}(p‖q).

(viii) Taylor expansions: suppose f ∈ C⁴ and f′(1) = 0; then
  D_f(p‖q) = ∫_Ω [ ½ f″(1) (p(x)−q(x))²/q(x) + (1/6) f‴(1) (p(x)−q(x))³/q(x)² ] dx + O( ∫_Ω (p(x)−q(x))⁴/q(x)³ dx ).
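As a quick numerical illustration of the definitions above, one can discretize two densities on a grid and evaluate D_f(p‖q) by a Riemann sum. This is a generic sketch; the grid and the Gaussian test densities are chosen for illustration only.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p||q) = sum f(p_i/q_i) q_i over normalized grid masses."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(f(p / q) * q))

f_tv   = lambda u: np.abs(u - 1.0)             # total variation
f_chi2 = lambda u: (u - 1.0) ** 2              # chi-square
f_kl   = lambda u: u * np.log(u) - (u - 1.0)   # Kullback-Leibler

x = np.linspace(-6.0, 6.0, 2001)
p = np.exp(-0.5 * x**2)             # N(0, 1), unnormalized
q = np.exp(-0.5 * (x - 1.0)**2)     # N(1, 1), unnormalized

print(f_divergence(p, q, f_kl))     # close to 0.5 = (mu_p - mu_q)^2 / 2
print(f_divergence(p, q, f_chi2))   # close to e - 1
print(f_divergence(p, p, f_tv))     # 0: divergence vanishes iff p = q
```

The KL value matches the closed form for unit-variance Gaussians, and the last line illustrates property (i): the divergence is zero exactly when the two densities coincide.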
3. Transport f-divergences

In this section, we recall some facts about optimal transport. Using them, we formulate f-divergences in terms of optimal transport mapping functions. We call them transport f-divergences. We demonstrate several properties of transport f-divergences, including invariances, dualities, local behaviors, and Taylor expansions in Wasserstein-2 space.

3.1. Review of Wasserstein-2 distances. We first review the definition of optimal transport mapping functions in a one-dimensional sample space; see [4, Formula (6.0.3)]. For any two probability densities p, q ∈ P(Ω) with finite second moments, the Wasserstein-2 distance is defined by

  W₂(p, q) := inf_T sqrt( ∫_Ω |T(x) − x|² q(x) dx ),        (1)

where the infimum is taken over all continuous mapping functions T: Ω → Ω that pushforward q to p. We also write the pushforward operation as T#q = p, which means that the Monge-Ampère equation holds:

  p(T(x)) · T′(x) = q(x).        (2)

The optimal mapping T is a monotone function with a closed-form formula. Denote the cumulative distribution functions (CDFs) F_p, F_q of the probability density functions p, q, respectively:

  F_p(x) = ∫_{−∞}^x p(y) dy,   F_q(x) = ∫_{−∞}^x q(y) dy.

Denote the quantile functions of the probability densities p, q by

  Q_p(u) = F_p^{−1}(u),   Q_q(u) = F_q^{−1}(u),

where F_p^{−1}, F_q^{−1} are the inverse CDFs of p, q, respectively. Integrating both sides of equation (2) with respect to x gives F_p(T(x)) = F_q(x). Thus, the optimal transport mapping function satisfies

  T(x) := F_p^{−1}(F_q(x)) = Q_p(F_q(x)).        (3)

Equivalently, the squared Wasserstein-2 distance satisfies

  W₂(p, q)² = ∫_Ω |Q_p(F_q(x)) − x|² q(x) dx = ∫_0^1 |Q_p(u) − Q_q(u)|² du,

where we substitute u = F_q(x) with u ∈ [0, 1]. There is also a Kantorovich formulation and a duality formula for the Wasserstein-2 distance (1).
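The quantile identity above yields a simple sample-based estimator of W₂ in one dimension: for two samples of equal size, the empirical quantile functions are the sorted samples, so the integral over u ∈ [0, 1] becomes a mean of squared differences of order statistics. A minimal sketch (sample sizes and test distributions are illustrative):

```python
import numpy as np

def w2_empirical(xs, ys):
    """Empirical 1-D Wasserstein-2 distance via sorted samples,
    approximating W2(p,q)^2 = int_0^1 |Q_p(u) - Q_q(u)|^2 du."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 100_000)
b = rng.normal(3.0, 1.0, 100_000)
print(w2_empirical(a, b))   # close to 3.0, since W2(N(0,1), N(3,1)) = 3
```

For Gaussians with equal variance, W₂ reduces to the distance between the means, which the sorted-sample estimator recovers up to Monte Carlo error.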
We can write the linear programming formulation of the one-half squared Wasserstein-2 distance:

  ½ W₂(p, q)² = inf_π ∫_Ω ∫_Ω ½ |x − y|² π(x, y) dx dy,

where the infimum is taken among all joint distributions π ∈ L¹(Ω²; R) with marginal densities p, q, respectively, such that

  ∫_Ω π(x, y) dx = p(y),   ∫_Ω π(x, y) dy = q(x),   π(x, y) ≥ 0.

The Kantorovich duality formula means that the Wasserstein-2 distance can be represented by

  ½ W₂(p, q)² = ∫_Ω Φ₁(y) p(y) dy − ∫_Ω Φ₀(x) q(x) dx,

where Φ₀, Φ₁ ∈ C(Ω; R) are a pair of Kantorovich duality variables, corresponding to the densities q, p, respectively, such that Φ₁′(T(x)) = Φ₀′(x) = T(x) − x. Integrating this formula, we have

  Φ₀(x) = ∫_0^x T(y) dy − |x|²/2 + c₀,   Φ₁(x) = |x|²/2 − ∫_0^x T^{−1}(y) dy + c₁.        (4)

Here c₀, c₁ ∈ R are constants, and T^{−1} is the inverse of the optimal mapping function. From equation (3), T^{−1}(x) = Q_q(F_p(x)).

3.2. Transport f-divergences. In this subsection, we define f-divergences in Wasserstein-2 space.

Definition 1 (Transport f-divergence). Given a positive convex function f: R → R₊ with f(1) = 0, define a functional D_{T,f}: P(Ω) × P(Ω) → R₊ by

  D_{T,f}(p‖q) = ∫_Ω f( q(x) / p(T(x)) ) q(x) dx,        (5)

where T is the monotone mapping function such that T#q = p. We call D_{T,f} the transport f-divergence.

We first present several equivalent formulations of transport f-divergences.

Proposition 1 (Equivalent formulations). The following equivalent formulations hold:

(i)
  D_{T,f}(p‖q) = ∫_Ω f(T′(x)) q(x) dx = ∫_Ω f( q(x)/p(T(x)) ) q(x) dx = ∫_Ω f( q(T^{−1}(x))/p(x) ) p(x) dx.        (6)

(ii)
  D_{T,f}(p‖q) = ∫_0^1 f( Q_p′(u) / Q_q′(u) ) du.        (7)
Here Q_p′(u) = (d/du) Q_p(u) and Q_q′(u) = (d/du) Q_q(u) are the quantile density functions of the densities p, q, respectively.

(iii) Let p_ref ∈ P(Ω) be a reference measure. Let T_p, T_q: Ω → Ω be the monotone mapping functions that pushforward p_ref to the probability densities p, q, respectively, i.e., p = (T_p)#p_ref and q = (T_q)#p_ref. Then

  D_{T,f}(p‖q) = ∫_Ω f( T_p′(z) / T_q′(z) ) p_ref(z) dz.        (8)

Proof. (i) The optimal transport mapping function is monotone. From the Monge-Ampère equation (2), we have

  D_{T,f}(p‖q) = ∫_Ω f(T′(x)) q(x) dx = ∫_Ω f( q(x)/p(T(x)) ) q(x) dx.

Similarly, denote (T^{−1})#p = q; then the following Monge-Ampère equation holds:

  q(T^{−1}(x)) (d/dx) T^{−1}(x) = p(x).

Let x = T^{−1}(x̃); then

  D_{T,f}(p‖q) = ∫_Ω f( q(x)/p(T(x)) ) q(x) dx
              = ∫_Ω f( q(T^{−1}(x̃))/p(x̃) ) q(T^{−1}(x̃)) dT^{−1}(x̃)
              = ∫_Ω f( q(T^{−1}(x̃))/p(x̃) ) p(x̃) dx̃.

Hence we derive the result.

(ii) Denote the change of variable u = F_q(x)
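As a numerical sanity check of Definition 1 and formulation (i), consider one-dimensional Gaussians: the monotone map with T#q = p is affine, T(x) = μ_p + (σ_p/σ_q)(x − μ_q), so T′(x) = σ_p/σ_q is constant and D_{T,f}(p‖q) = f(σ_p/σ_q) in closed form. The sketch below is illustrative only; the Monte Carlo step is trivial here because T′ is constant, but the same estimator applies whenever T′ is available.

```python
import numpy as np

def transport_f_divergence_gaussian(mu_p, sig_p, mu_q, sig_q, f, n=100_000, seed=0):
    # For 1-D Gaussians, the monotone map with T#q = p is affine:
    #   T(x) = mu_p + (sig_p / sig_q) * (x - mu_q),  so T'(x) = sig_p / sig_q.
    # Formulation (i): D_{T,f}(p||q) = E_{x~q}[ f(T'(x)) ].
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_q, sig_q, n)
    t_prime = np.full_like(x, sig_p / sig_q)
    return float(np.mean(f(t_prime)))

f_kl = lambda u: u * np.log(u) - (u - 1.0)   # the KL-generating convex f

est = transport_f_divergence_gaussian(0.0, 2.0, 1.0, 1.0, f_kl)
print(est, f_kl(2.0))   # Monte Carlo estimate vs. closed form f(sig_p / sig_q)
```

Note that the transport f-divergence between these two Gaussians depends only on the scale ratio σ_p/σ_q, not on the means, in contrast to the classical f-divergences of section 2.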