text string | source string |
|---|---|
the Einstein notation for the two types of indices $a,b,c,d\in\{1,\dots,d,\gamma\}$ and $i,j,k,l,m\in\{1,\dots,d\}$ throughout this paper. A pair of subscript and superscript indices implies summation over those indices. For example, the terms $\sum_{b=1}^{d+1}g^{ab}\partial_b$ and $\sum_{j=1}^{d}g^{ij}\partial_j$ are abbreviated as $g^{ab}\partial_b$ and $g^{ij}\partial_j$, whe... | https://arxiv.org/abs/2504.21363v1
expansion of the posterior probability $P^n_\pi$ and the frequentist probability $P^n_{\theta,\gamma}$, using a shrinkage argument. Then, we derive the conditions for the probability matching prior on an oTEF as follows. Theorem 2. The probability matching prior $\pi^\gamma_{\mathrm{PM}}(\theta,\gamma)$ for the non-regular parameter $\gamma$ is the solution of the partial differe... | https://arxiv.org/abs/2504.21363v1
one satisfying $\hat\gamma_\pi-\hat\gamma^*_{\mathrm{ML}}=O_p\big(n^{-3}\big)$, where $\hat\gamma^*_{\mathrm{ML}}=\hat\gamma_{\mathrm{ML}}-\frac{1}{n\hat c}$ is the bias-adjusted MLE (Hashimoto, 2019). In case (ii), the moment matching prior $\pi^j_{\mathrm{MM}}$ is defined by $\hat\theta^j_\pi-\hat\theta^j_{\mathrm{ML}}=O_p\big(n^{-3/2}\big)$ for $j=1,\dots,d$ (M. Ghosh & Liu, 2011). Hashimoto (2019) derived these priors in a ... | https://arxiv.org/abs/2504.21363v1
prior $\pi^\gamma_{\mathrm{PM}}$ on the oTEF have the form $L_\chi\big\{\log\pi-\log(\det g_\theta)-\tfrac{1}{2}\log g_{\gamma\gamma}\big\}=0$. (10) On the other hand, the conditions (8) for the moment matching priors $\pi^\gamma_{\mathrm{MM}}$ on the oTEF have the form $L_\chi\big\{\log\pi-\tfrac{1}{2}\log(\det g_\theta)-\log g_{\gamma\gamma}\big\}=0$. (11) Thus, the two matching priors have a common point tha... | https://arxiv.org/abs/2504.21363v1
normal distributions with the fixed $\beta=-1/2$ in the density (4) for simplicity. The density $p(x;\alpha,-1/2,\gamma)$ represents the truncated normal distribution with $\mu=\alpha$ and $\sigma=1$. (Fig. 2: streamline plot of $N(\alpha,1,\gamma)$ in the $(\alpha,\gamma)$-plane.) In this case, the vector field $\chi$ is given by $\chi=\partial_\gamma+\frac{\Psi^{(2)}(\nu)}{1+\Psi^{(2)}(\nu)}\partial_\alpha$. The streamline along $\chi$ shown... | https://arxiv.org/abs/2504.21363v1
$=\frac{1}{\sqrt n}\big(D_\theta\hat l_i\big)^\top u+\frac{1}{n}\partial_\gamma\hat l_i\Big(\frac{t}{\hat c}\Big)+\frac{1}{2n}\big(D_\theta^{\otimes 2}\hat l_i\big)^\top u^{\otimes 2}+\frac{1}{n^{3/2}}\big(D_\theta\partial_\gamma\hat l_i\big)^\top u\Big(\frac{t}{\hat c}\Big)+\frac{1}{3!\,n^{3/2}}\big(D_\theta^{\otimes 3}\hat l_i\big)^\top u^{\otimes 3}+\frac{1}{2n^2}\partial_\gamma\partial_\gamma\hat l_i\Big(\frac{t}{\hat c}\Big)^2+\frac{1}{2n^2}\big(... | https://arxiv.org/abs/2504.21363v1
posterior (A1) from the above calculations. By (A2), (A3) and (A6), it holds that $\pi(u,t;X^n)=\frac{1}{(2\pi)^{d/2}\sqrt{\det\hat g_\theta^{-1}}}\exp\big\{t-u^\top\hat g_\theta u/2\big\}\Big[1+\frac{1}{\sqrt n}B_1(u,t)+\frac{1}{n}B_2(u,t)+O_p\Big(\frac{1}{n^{3/2}}\Big)\Big]$, (A7) where $B_1=\frac{P_1(u)}{\hat\pi}+L_1(u,t)=\frac{1}{\hat\pi}D_\theta^\top\hat\pi\,u+\hat A^{(1,1)\top}u\,\frac{t}{\hat c}... | https://arxiv.org/abs/2504.21363v1
for the regular parameter. Consider the case that $\theta^1$ is the parameter of interest. We will calculate the asymptotic expansion of the posterior probability $P^n_\pi(U^1\le z)$ and the frequentist probability $P^n_{\theta,\gamma}(U^1\le z)$ to derive the conditions of the probability matching prior $\pi^1_{\mathrm{PM}}$. By integrating the expansion of the marginal posterior de... | https://arxiv.org/abs/2504.21363v1
(B12), the posterior mean of $u$ is $E[u\mid X^n]=\int_{\mathbb R^d}u\,\pi(u;X^n)\,du=\int_{\mathbb R^d}u\,\phi_d(u;0,\hat g_\theta^{-1})\,du+\frac{1}{\sqrt n}\int_{\mathbb R^d}u\Big\{u^\top\frac{1}{\hat\pi}D_\theta\hat\pi\Big\}\phi_d(u;0,\hat g_\theta^{-1})\,du-\frac{1}{\sqrt n}\int_{\mathbb R^d}u\Big\{u^\top\frac{1}{\hat c}\hat A^{(1,1)}\Big\}\phi_d(u;0,\hat g_\theta^{-1})\,du+\frac{1}{\sqrt n}\int_{\mathbb R^d}u\Big\{\frac{1}{3!}\hat... | https://arxiv.org/abs/2504.21363v1
T. (1997, April). Exact Structural Inference in Optimal Job-Search Models. Journal of Business & Economic Statistics, 15(2), 165–179, https://doi.org/10.1080/07350015.1997.10524698 Ortega, F.J., & Basulto, J. (2016, August). A generalization of Jeffreys' rule for non-regular models. Communications in Statistics - The... | https://arxiv.org/abs/2504.21363v1
NON-PARAMETRIC MULTIPLE CHANGE-POINT DETECTION. A PREPRINT. Andreas Anastasiou, Department of Mathematics and Statistics, University of Cyprus, anastasiou.andreas@ucy.ac.cy. Piotr Fryzlewicz, Department of Statistics, London School of Economics, p.fryzlewicz@lse.ac.uk. ABSTRACT. We introduce a methodology, labelled Non-Parametri... | https://arxiv.org/abs/2504.21379v1
compute the maximizer of an objective function. In an attempt to improve on the computational cost of that method, [10] use the Pruned Exact Linear Time (PELT) method introduced in [13] in order to find the optimal segmentation. [17] propose a non-parametric approach based ... (arXiv:2504.21379v1 [math.ST] 30 Apr 2025) | https://arxiv.org/abs/2504.21379v1
within $R_1$ and calculates $\tilde B^b_{1,\lambda_T+1}(X_i)$ for each $i\in\{1,2,\dots,T\}$ and each $b\in\{1,2,\dots,\lambda_T\}$. This creates $\lambda_T$ vectors $y_b=\big(\tilde B^b_{1,\lambda_T+1}(X_1),\tilde B^b_{1,\lambda_T+1}(X_2),\dots,\tilde B^b_{1,\lambda_T+1}(X_T)\big)$, $b=1,\dots,\lambda_T$. The next step is to aggregate the contrast function information across $i\in\{1,2,\dots,T\}$ by applying to each... | https://arxiv.org/abs/2504.21379v1
indicated in Figure 1, and they are only related to this specific example, because for cases with a different number of change-points we would have a different number of such phases. At the beginning $s=1$, $e=60$, and we take the expansion parameter $\lambda_T$ to be equal to 10. The first change-point to be isolated is $r_2$; th... | https://arxiv.org/abs/2504.21379v1
..., $T$ is regarded as independent Bernoulli data with probability of success $\hat F(u)$. Working under this framework, the joint profile log-likelihood for a candidate set of change-points is obtained and then maximized in an integrated form (over $\mathbb R$) in order to estimate the change-points. We now describe our method generic... | https://arxiv.org/abs/2504.21379v1
$e_4],\dots,[s_{2K},e_{2K}]$, the end-point is fixed and equal to $e$. After the detection of a change-point, $s$ or $e$ is updated based on whether the change-point was detected in a right- or a left-expanding interval, respectively. The process continues until there are no intervals to be checked. Change-point detection occurs if a... | https://arxiv.org/abs/2504.21379v1
was also obtained in [26] but under stronger assumptions than our (A1). In particular, [26] require that there is an upper bound, denoted by $\bar K_n$, for the true number of change-points. This upper bound, $\bar N$, is such that $\delta_T/\big(\bar N^4(\log\bar N)^2(\log T)^{2+c}\big)\to\infty$ as $T\to\infty$, where $\delta_T$ is the minimum distance between consecutive change-poin... | https://arxiv.org/abs/2504.21379v1
a two-step process in which the number of change-points is first estimated through a version of the Bayesian Information Criterion (BIC), and the locations are estimated next. For $\bar N$ being the maximum permitted number of change-points (the fixed-$N$ scenario), NMCD imposes the restrictive assumption $\delta_T/\big(\bar N^4(\log\bar N)^2(\log T)^{2... | https://arxiv.org/abs/2504.21379v1
NPID is robust to small deviations from the optimal threshold value, misspecification of the threshold can possibly lead to the misestimation of the number and/or the location of change-points. In order to reduce the dependence of our methodology on the threshold choice, we propose in this section an approach which sta... | https://arxiv.org/abs/2504.21379v1 |
$\dots,\ b^{*,j}_{j+1}+j\,p_T$, (13) where $S_T\big(b^{*,j}_0,b^{*,j}_1,\dots,b^{*,j}_{j+1}\big)=T\sum_{i=0}^{j}\sum_{l=2}^{T-1}\frac{b^{*,j}_{i+1}-b^{*,j}_i}{l(T-l)}\Big[\hat F^{b^{*,j}_{i+1}}_{b^{*,j}_i}(X_{[l]})\log\hat F^{b^{*,j}_{i+1}}_{b^{*,j}_i}(X_{[l]})+\Big(1-\hat F^{b^{*,j}_{i+1}}_{b^{*,j}_i}(X_{[l]})\Big)\log\Big(1-\hat F^{b^{*,j}_{i+1}}_{b^{*,j}_i}(X_{[l]})\Big)\Big]$ (14) for $\hat F^{b^{*,j}_{i+1}}_{b^{*,j}_i}(u)=\frac{1}{b^{*,j}_{i+1}-b... | https://arxiv.org/abs/2504.21379v1
a large number of frequently occurring change-points, NPID will keep operating on short intervals, leading to fast computation times. This is not the case for many of the competitors; see for example [18] and [17], where it is explained that the computational time for binary segmentation related approaches and some dyn... | https://arxiv.org/abs/2504.21379v1 |
other continuous distribution), and the fact that we explicitly study both these models below only serves as an extra check of the correctness of our implementation. The models used are: Gaussian: The distribution used is the standard normal. There are no change-points and the lengths of the data sequences are $T\in\{30,... | https://arxiv.org/abs/2504.21379v1
methods used in the simulation study (method notation, reference, R package): ECP [17], ecp; NMCD [26], changepoint.np; NBS [18], –; NWBS [18], –; CPM [20], cpm. Sparse grid: For practical applications, instead of creating $T$ binary sequences by using every empirical quantile (see the pseudocode for our algorithm), one could take $Q$, equ... | https://arxiv.org/abs/2504.21379v1
assumes that observations are independent with finite $\alpha$th absolute moments, for some $\alpha\in(0,2]$ [16]. (Figure panels: three changes in the mean as in (MM_Gauss); three changes as in (MM_Gauss_tr); three changes in t... | https://arxiv.org/abs/2504.21379v1
among all scenarios covered in the models, performs accurately regarding both the number and the locations of the estimated change-points. To be more precise, NPID is always among the top-performing methods when considering accuracy in any aspect (estimated number of change-points and estimated change-point locations);... | https://arxiv.org/abs/2504.21379v1 |
now focus on a comparison of the behavior of the competitors in (MM Gauss) and (MM Pois) to that under (MM Gauss tr) and (MM Pois tr); in the latter two models the exponential function was applied to the data sequences obtained by the former two models. We highlight that NPID remained unaffected under such transformati... | https://arxiv.org/abs/2504.21379v1 |
... 0 50 45 4 1, 0.551, 5.71; NPID: 0 9 91 0 0, 0.131, 0.02. (MM Pois tr): ECP: 1 67 28 4 0, 0.721, 1.83; NMCD: 0 0 86 11 3, 0.098, 0.02; NBS: 0 39 56 3 2, 0.458, 0.22; NWBS: 0 48 48 1 3, 0.531, 6.05. Table 5: Distribution of $\hat N-N$ over 100 simulated data sequences from the structures (MV Gauss), (MV Gauss... | https://arxiv.org/abs/2504.21379v1
results for the NPID and ECP methods. Bottom row: the results for the NMCD and NWBS methods. 5 Real data examples. 5.1 Micro-array data. In this section, we investigate the performance of our method on micro-array data, which consists of individuals with a bladder tumour. The data set can be obtained from the R package ec... | https://arxiv.org/abs/2504.21379v1
.) Figure 7 shows the results for NPID, as well as for the ECP, NMCD, and NWBS algorithms. NPID, ECP and NMCD exhibit a similar behavior, detecting 4, 3, and 5 change-points, respectively, while NWBS does not detect any change-points. It is interesting to see that NPID and NMCD capture, through the last estimated change... | https://arxiv.org/abs/2504.21379v1
for any changes in the distribution of the data. Figure 10 shows the results for the NPID method, as well as for the ECP, NMCD, and NWBS methods. The estimated numbers of change-points for NPID, ECP, NMCD, and NWBS are 8, 7, 17, and 12, respectively. While the changes returned by NPID and ECP are relatively easy to jus... | https://arxiv.org/abs/2504.21379v1 |
mean at 100, 200, 300. The values for the piecewise-constant signal in the four segments are $0, 1, -0.2, -1.3$. On this signal, we add noise following the Poisson(1) distribution. (MM Pois tr) multi mean Pois tr: For this scenario, we transform the data sequences created from (MM Pois) using the exponential function. (MV... | https://arxiv.org/abs/2504.21379v1
the NPID algorithm proceeds, each change-point will get isolated in an interval where its detection will occur with high probability; for this result we again use Lemma 1, available in the main paper. It suffices to restrict our proof to a single change-point detection framework within an interval $[s_j,e_j)$ which contains... | https://arxiv.org/abs/2504.21379v1
2012. [14] C. B. Lee. Nonparametric multiple change-point estimators. Statistics & Probability Letters, 27:295–304, 1996. [15] A. Lung-Yut-Fong, C. Lévy-Leduc, and O. Cappé. Homogeneity and change-point detection tests for multivariate data using rank statistics. Journal de la SFdS, 156:133–162, 2015. [16] D. S. Ma... | https://arxiv.org/abs/2504.21379v1
different scenarios of no change-points, covering scenarios from both continuous and discrete distributions. The models used are given in Section 3.2 of the main paper. Tables S1 and S2 below present the frequency distribution of $\hat N-N$ for all the above scenarios and for both the $L_\infty$ and $L_2$ mean-dominant norms, when their r... | https://arxiv.org/abs/2504.21379v1
$\dots\frac{1}{b^*(e-s+1)}\sum_{t=1}^{b^*(e-s+1)}\mathbf{1}\{X_{t+s-1}\le u\}$, $\hat h^A_{b^*}(u)=\frac{1}{(1-b^*)(e-s+1)}\sum_{t=b^*(e-s+1)+1}^{e-s+1}\mathbf{1}\{X_{t+s-1}\le u\}$. (S2) The above functions $\hat h^B_{b^*}(u)$ and $\hat h^A_{b^*}(u)$ are used to calculate the empirical cumulative distribution at $u\in\mathbb R$ up to and after the point $b^*$, respectively. In addition, for $r_0=0$ and $r_{N+1}=T$,... | https://arxiv.org/abs/2504.21379v1
$\le L\big(\tilde B^b_{s,e}-\tilde F^b_{s,e}\big)$. Therefore, it is straightforward to see that in order to prove Lemma 1, it suffices to show that $P(A^*_T)\ge 1-\frac{12}{T}$, with $A^*_T$ as in (S13). In order to show this result, we use the notation in (S10), and after simple steps we can show that for any $u\in\mathbb R$, $\big\lvert d^{b^*}_{s,e}(u)\big\rvert\le\big\lvert\hat h^A_{b^*}(u)-h^A_{b^*}(u)\big\rvert+\big\lvert h^A_{b^*}(u)... | https://arxiv.org/abs/2504.21379v1
$\frac{1}{2}\epsilon_1^2\Big)$. (S22) In the same way, for $b^*>r^*_j$, $P\Big(\sqrt{b^*(1-b^*)}\sup_{u\in\mathbb R}\Big\lvert\sum_{t=1}^{r^*_j(e-s+1)}\frac{\mathbf{1}\{X_{t+s-1}\le u\}}{b^*(e-s+1)}-\frac{r^*_j}{b^*}F_{r_j}(u)\Big\rvert>\frac{\epsilon_1}{4}\Big)\le 2\exp\Big(-\frac{e-s+1}{8}\epsilon_1^2\Big)$ (S23) and $P\Big(\sqrt{b^*(1-b^*)}\sup_{u\in\mathbb R}\Big\lvert\sum_{t=r^*_j(e-s+1)+1}^{b^*(e-s+1)}\frac{\mathbf{1}\{X_{t+s-1}\le u\}}{b^*(e-s+1)}-\frac{b^*-r^*_j}{b^*}F_{r_{j+1}}(u)\Big\rvert>\frac{\epsilon_1}{4}\Big)\le 2\exp\Big(-\frac{e-s+1}{8}\epsilon_1^2\Big)$. (S24) The results in (S21), ... | https://arxiv.org/abs/2504.21379v1
1) using the Dvoretzky–Kiefer–Wolfowitz inequality. Therefore, for any positive constant $K$ that does not depend on $T$, we have that $P\Big(L\big(B^{r_j}_{s,e}-F^{r_j}_{s,e}\big)>\frac{K\sqrt{\max\{e-r_j,\,r_j-s+1\}}}{\sqrt l}\Big)\le 2\exp\Big(-\frac{K^2\max\{e-r_j,r_j-s+1\}}{2(e-r_j)}\Big)+\exp\Big(-\frac{K^2\max\{e-r_j,r_j-s+1\}}{2(r_j-s+1)}\Big)\le 4\exp\Big(-\frac{K^2}{2}... | https://arxiv.org/abs/2504.21379v1
mentioned, our method, due to its expansion approach, naturally ensures that $\exists k\in\{1,\dots,K\}$ such that $c^r_k\in I^R_1$. There is no other change-point in $[1,c^r_k]$ apart from $r_1$. We will now show that for $b=\operatorname{argmax}_{1\le t\le c^r_k}L\big(\lvert\tilde B^t_{1,c^r_k}\rvert\big)$, we have $L\big(\lvert\tilde B^b_{1,c^r_k}\rvert\big)>\zeta_T$. Using Lemma 1, we have that $L\big(\lvert\tilde B^b_{1,c^r_k}\rvert\big)\ge L\big(\lvert\tilde B^{r_1}_{1,c^r_k}\rvert\big)... | https://arxiv.org/abs/2504.21379v1
$\dots-\tilde F^{r_j}_{s,e}\big)$. Because $l\to\infty$ as $T\to\infty$ means that $\frac{1}{\sqrt l}=o(1)$, we have, using (S29), that for $w(r^*_j)=\sqrt{r^*_j(1-r^*_j)}$, $L\big(D^{r^*_j}_{s,e}\big)\ge w(r^*_j)\tilde C_j/\gamma_{j,T}\ge C^*/\gamma_{j,T}$ (S40) with probability tending to 1 for any arbitrary $C^*\in(0,w(r^*_j))$. For $M^{b^*}_{s,e}=D^{b^*}_{s,e}-\Delta^{b^*}_{s,e}$, we have that $L\big(D^{b^*}_{s,e}\big)=L\big(M^{b^*}_{s,e}-M^{r^*_j}_{s,e}+M^{r^*_j}_{s,e}+\Delta^{b^*}_{s,e}... | https://arxiv.org/abs/2504.21379v1
$b^*\in J_{s,e}\cap[\alpha,1-\alpha]\setminus N_T(d)\big)\to 1$ (S45) as $d\to\infty$. W.l.o.g. we take $b^*\le r^*_j$, and simple calculations yield $M^{b^*}_{s,e}(X_i)-M^{r^*_j}_{s,e}(X_i)=\frac{1}{l}\sum_{t=r^*_jl+1}^{l}\mathbf{1}\{X_{t+s-1}\le X_i\}\Big(\sqrt{\frac{b^*}{1-b^*}}-\sqrt{\frac{r^*_j}{1-r^*_j}}\Big)+\frac{1}{l}\sum_{t=1}^{r^*_jl}\mathbf{1}\{X_{t+s-1}\le X_i\}\Big(\sqrt{\frac{1-r^*_j}{r^*_j}}-\sqrt{\frac{1-b^*}{b^*}}\Big)+\frac{1}{\sqrt{b^*(1-b^*)}\,l}\sum_{t=b^*l+1}^{r^*_jl}\mathbf{1}\{X_{t+s-1}\le X_i\}+\sqrt{r^*_j(... | https://arxiv.org/abs/2504.21379v1
from the result in [3], page 1489, that $P\Big(\max_{m\ge d\gamma^2_{j,T}}\frac{1}{m}\sup_{i=1,\dots,T}\lvert R_m(X_i)\rvert\ge C'''/\gamma_{j,T}\Big)\le\frac{\gamma_{j,T}}{C'''}E\Big[\frac{1}{m_0}\sup_{i=1,\dots,T}\lvert R_{m_0}(X_i)\rvert\Big]$, (S49) where $m_0:=\min\big\{m\in\mathbb N:m\ge d\gamma^2_{j,T}\big\}$. The same result holds for $\tilde R_m(X_i)$ in the place of $R_m(X_i)$. Now, due to the Dvoretzky–Kiefer–Wolfowitz inequality, we have that for any $\epsilon>0$, $P\Big(\frac{1}{\sqrt{m_0}}\sup_{i... | https://arxiv.org/abs/2504.21379v1
will first isolate $r_2$ or $r_N$, depending on whether $r_2-c^r_{k^*}$ is smaller or larger than $T-r_N$. If $T-r_N<r_2-c^r_{k^*}$, then $r_N$ will get isolated first in a left-expanding interval, and the procedure to show its detection is exactly the same as in Step 1.1, where we explained the detection of $r_1$. Therefore, w.l.o.g. and also for the sake o... | https://arxiv.org/abs/2504.21379v1
$\dots L\big(\lvert\tilde B^t_{s,c^r_{k^*}}\rvert\big)$, and $\hat r_1$ is the estimated location for $r_1$, with $\lvert r_1-\hat r_1\rvert/\gamma^2_{1,T}=O_p(1)$. After this, the algorithm continues in $[c^r_{k^*},T]$ and keeps detecting all the change-points as explained in Step 1. It is important to note that there will not be any double-detection issues because, naturally, at each step of the algorith... | https://arxiv.org/abs/2504.21379v1
An Aldous-Hoover type representation for row exchangeable arrays. Evan Donald, University of Central Florida, ev446807@ucf.edu. Jason Swanson, University of Central Florida, jason.swanson@ucf.edu. Abstract. In an array of random variables, each row can be regarded as a single, sequence-valued random variable. In this way, the ... | https://arxiv.org/abs/2504.21584v1
roll of the $i$th globe created by the machine. In situations such as these, it would be natural to assume that $\xi$ has certain symmetries. More specifically, we assume that (i) each row of the array, $\xi_i=\{\xi_{ij}:j\in\mathbb N\}$, is an exchangeable sequence of $S$-valued random variables, and (ii) the sequence of rows, $\{\xi_i:i\in\mathbb N\}$, is an excha... | https://arxiv.org/abs/2504.21584v1
conditional distribution (1.1) is entirely determined by the conditional distributions $\mathcal L(\mu_m\mid X_{mN},\dots,X_{MN},\mu_{m-1})$, (1.2) where $1\le m\le M$. The proof of Theorem 1.2 (see Section 3.3) is constructive and provides us with a way of using (1.2) to compute (1.1). In follow-up work (see [3]), we take on the question of how to a... | https://arxiv.org/abs/2504.21584v1
probability kernel from $\Omega$ to $S$. It can be shown that the function $\mu\mapsto\mu^\infty$ mapping $\mathcal M_1$ to $\mathcal M_1(S^\infty)$ is measurable. Therefore, if $\mu$ is a random probability measure on $S$, then $\mu^\infty$ is a random probability measure on $S^\infty$. In fact, the random variable $\mu^\infty$ is $\sigma(\mu)$-measurable. 2.2 Regular conditional distributions. If $X$ is an $S$-valued random var... | https://arxiv.org/abs/2504.21584v1
then $P\big(\bigcap_{i=1}^n\{\xi_i\in A_i\}\mid\mathcal G\big)=\prod_{i=1}^nP(\xi_i\in A_i\mid\mathcal G)$ a.s., for all $n\in\mathbb N$ and all $A_i\in\mathcal S$. Proof. By Lemma 2.1, $P\big(\bigcap_{i=1}^n\{\xi_i\in A_i\}\mid\mathcal G\big)=\mu^\infty(A_1\times\cdots\times A_n)=\prod_{i=1}^n\mu(A_i)$ a.s. If $A=S^{i-1}\times A_i\times S^\infty$, then $\mu(A_i)=\mu^\infty(A)=P(\xi\in A\mid\mathcal G)=P(\xi_i\in A_i\mid\mathcal G)$ a.s. 2.4 Exchangeability and empirical measures. A sequence $\xi=\{\xi_i:i\in\mathbb N\}$ of $S$-valued random variables is exchangeab... | https://arxiv.org/abs/2504.21584v1
i.i.d. given $\mathcal G$, and $f:T\times S\to\mathbb R$ is a measurable function such that $E\lvert f(Y,\xi_1)\rvert<\infty$. If $Y\in\mathcal G$ and $Z$ is any version of $E[f(Y,\xi_1)\mid\mathcal G]$, then $P\big(\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^nf(Y,\xi_i)=Z\mid\mathcal G\big)=1$ a.s. (2.3) In particular, $\frac{1}{n}\sum_{i=1}^nf(Y,\xi_i)\to E[f(Y,\xi_1)\mid\mathcal G]$ a.s. (2.4) Proof. Taking expectations in (2.3) yields (2.4). It thus suffices to prove (2.3). We fi... | https://arxiv.org/abs/2504.21584v1
distributions as $\xi$. For $n\in\mathbb N$, define $d_n:(0,1)\to\{0,1\}$ by $d_n(x)=\lfloor 2^nx\rfloor-2\lfloor 2^{n-1}x\rfloor$, so that $d_n(x)$ is the $n$-th digit of the canonical binary expansion of $x$. Define $\varphi:(0,1)\to(0,1)^2$ by $\varphi(x)=\Big(\sum_{n=1}^\infty 2^{-n}d_{2n-1}(x),\sum_{n=1}^\infty 2^{-n}d_{2n}(x)\Big)$. Note that $\varphi$ is measurable, and that whenever $U$ is uniform on $(0,1)$, it follows that $\varphi_1(U)$ and $\varphi_2(U)$ are ... | https://arxiv.org/abs/2504.21584v1
can write $\varpi$ as a measurable function of $\mu$, so that $\varpi\in\sigma(\mu)$. Thus, $\prod_{i=1}^m\mu^n_i\in\sigma(\mu^m)\subseteq\sigma(\varpi,\mu^m)\subseteq\sigma(\varpi,\mu)=\sigma(\mu)$. It therefore suffices to show that $P\big(\bigcap_{i=1}^m\bigcap_{j=1}^n\{\xi_{ij}\in A_{ij}\}\mid\mu\big)=\prod_{i=1}^m\prod_{j=1}^n\mu_i(A_{ij})$ a.s. (3.1) By Theorem 3.1, we may assume that $\xi_{ij}=f(\alpha,\beta_i,\lambda_{ij})$, where $f:\mathbb R^3\to S$ is measurable and $\{\alpha,\beta_i,\lambda_{ij}:i,j\in\mathbb N\}$ is an i.i.d. colle... | https://arxiv.org/abs/2504.21584v1
and this proves the claim. Theorem 3.5. Let $\xi$ be row exchangeable. Fix $m,M,N\in\mathbb N$ with $m<M$. Then $X_{mN}$ and $\{(\mu_j,X_{jN})\}_{j=m+1}^M$ are conditionally independent given $\mu_m$. Proof. Let $A_{ij}\in\mathcal S$ for $i\le M$ and $j\le N$, and let $B_i\in\mathcal M_1$ for $m<i\le M$. We adopt the shorthand $A_i=A_{i1}\times\cdots\times A_{iN}$. By Theorem 3.3, $P(\mu_{m+1}\in B_{m+1},\dots,\mu_M\in B_M,X_{1N}\in A_1,\dots... | https://arxiv.org/abs/2504.21584v1
Convergence rate for Nearest Neighbour matching: geometry of the domain and higher-order regularity Simon Viel*, Lionel Truquet†, Ikko Yamane‡ May 1, 2025 Abstract Estimating some mathematical expectations from partially observed data and in particular missing outcomes is a central problem encountered in numerous field... | https://arxiv.org/abs/2504.21633v1 |
reference measure, an expectation of $Z^*$ under $Q$ can be written as an expectation of $Z:=h(X,Y)$ under $P$ weighted by the density ratio $q/p$, where $p$ and $q$ denote respectively the densities of $P$ and $Q$ with respect to the reference measure. Shimodaira [2000] considered such a reweighting for parametric maximum likelihood estimation... | https://arxiv.org/abs/2504.21633v1
both problems can be described in a similar framework. To estimate the expectation $e(h):=E[h(X^*,Y^*)]$, we observe a sample $(X_i,Y_i)$, $1\le i\le n$, of i.i.d. random vectors taking values in $\mathbb R^d\times\mathbb R$, as well as an additional sample of unlabelled i.i.d. random variables $X^*_j$, $1\le j\le m$. We denote by $\mathcal X$ the measurable subset of $\mathbb R^d$ on which... | https://arxiv.org/abs/2504.21633v1
and Singh and Póczos [2016], with a similar convergence rate for the variance. The quality of these matching-type estimators then depends on their bias, the convergence rate of which generally depends on the dimension of the covariate vector. If the regression function $g$ has Lipschitz properties on a bounded domain $\mathcal X$,... | https://arxiv.org/abs/2504.21633v1
of various sufficient conditions for checking our geometric conditions is given in Section IV. In Section V we show that such conditions are sufficient to control higher-order properties of the bias for a local polynomial extension of the previous estimates. Numerical experiments are given in Section VI. A conclusion... | https://arxiv.org/abs/2504.21633v1
satisfied. However, such an assumption is quite restrictive and cannot be used for studying the standard average treatment effect. See Section III for details. Without such a restriction on the target measure $Q$, condition (A) is related to the geometry of the boundary of $\mathcal X$. We refer the reader to Section IV for a discu... | https://arxiv.org/abs/2504.21633v1
also denote by $\Vert\cdot\Vert$ the operator norm on the space of $d\times d$ matrices with real coefficients, corresponding to the norm $\Vert\cdot\Vert$ used on $\mathbb R^d$. We will therefore consider the following two assumptions. In what follows, we denote by $U$ an open, convex and bounded set containing $\mathcal X$, and by $\mathrm{co}(\mathcal X)$ the convex hull of $\mathcal X$. We recall that $\mathrm{co}(\mathcal X)$ is ... | https://arxiv.org/abs/2504.21633v1
Finally, assume that $k^{d+1}=o(n)$, that the source density $p$ is a restriction to $\mathcal X$ of an element $p\in C^1(U)$ with $\nabla p$ being $\eta$-Hölder continuous on $\mathrm{co}(\mathcal X)$ for some $\eta\in(0,1)$, and that the support of $Q$ lies in the interior of $\mathcal X$. Then, for $i=1,2$, we have the following expansion for the first-order bias: $E[B_{i,n}]=C_i(k,P,Q,h)\,n^{-2/d}+o\big((k/n)^{2/... | https://arxiv.org/abs/2504.21633v1
$d$ does not exceed 2. Such an improvement, assuming second-order regularity for the conditional expectation as well as the additional geometric condition (A) on the support $\mathcal X$ of the covariates, is one of the main results of the paper. III. APPLICATION TO AVERAGE TREATMENT EFFECTS. In the setting of average treatment effect... | https://arxiv.org/abs/2504.21633v1
$B$ of the global average treatment effect. With our results, we can prove that under the less restrictive geometric condition (T8) below, the conditional bias term $B$ is of order $(k/N)^{2/d}$, and $(k/N)^{3/2}$ instead if $d=1$. To state the theorem, we consider the following assumptions. (T1) The support $\mathcal X$ of $X$ is a compact subset... | https://arxiv.org/abs/2504.21633v1
Condition (A) but not Condition (X2). 2) Consider $\mathcal X:=\bigcup_{k\in\mathbb N^*}C_k$, where $C_k:=\{x\in\mathbb R^2:a_k\le\Vert x\Vert\le b_k\}$. We choose $a_k:=(k+1)^{-1}$ and $b_k\in(a_k,a_{k-1})$ such that for all $k\ge 1$, $0<a_{k-1}-b_k\le\delta/(k+1)^4$ with $\delta$ small enough. Then $\mathcal X$ satisfies Condition (X2) but not Condition (A). For illustrations of Proposition 2, see Figure 1. In the next theorem, w... | https://arxiv.org/abs/2504.21633v1
checking (A) in the submanifold case, a result already given in Theorem 5. V. EXTENSION TO LOCAL POLYNOMIALS. In this section, we discuss a more general estimator introduced by Holzmann and Meister [2024] to reduce the bias when the regression function $g$ has more regularity. They use local polynomials of order $L\in\mathbb N$ to app... | https://arxiv.org/abs/2504.21633v1
4, the previous theorem leads to the following result. Corollary 3. Suppose that Conditions (T1) to (T5) hold true, that the regression functions $g_0$ and $g_1$ are restrictions of mappings on an open convex neighbourhood of $\mathcal X$ having derivatives of order $l\in\mathbb N$ which are all Hölder continuous of order $\beta\in(0,1]$. Suppose further ... | https://arxiv.org/abs/2504.21633v1
response is generated according to $Y=\lvert X^{(1)}\rvert^3+\varepsilon$, where $\varepsilon$ is a normal noise variable with mean $0$ and standard deviation $0.1$, independent of all the other variables. Setup TN0.5-Cubic-Reversed: The conditional distribution of $Y$ given $X$ is the same as in Setup TN0.5-Cubic. However, we switch the locations of the source and the... | https://arxiv.org/abs/2504.21633v1
(Fig. 2: Visualization of the data distributions used in the experiments; panel (d): Setup TN0.5-Cubic-Reversed, with $(X,Y)\sim P_XP_{Y\mid X}$, $(X^*,Y^*)\sim Q_XP_{Y\mid X}$ and the line $y=-x$.) VIII. PROOFS. A. Proof of Theorem 1. We prove the result when $i=2$; the case $i=1$ follows in the same way using uniform bounds, with respect to $x$, for the derivatives of $\Delta(x,\cdot)$. Recal... | https://arxiv.org/abs/2504.21633v1
$\dots\big(\frac{k}{n+1}\big)^{1+2/d}$, where $C:=2e^{1/4}\Vert\nabla g\Vert_\infty^2\,\Gamma(2+\lfloor 2/d\rfloor)(c_p\lvert V_d\rvert)^{-2/d}$ and $L:=\frac{c_p\lvert V_d\rvert}{2^{d+3}}$. In the series of inequalities above, we have used the equality $\sigma(S^{d-1})=d\lvert V_d\rvert$, and we also used the fact that for all $a\ge 0$, $P(\hat\tau_\ell(x)\ge a)=P(\hat\tau_\ell(x)>a)$, since the distribution $P$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb R^d$. B... | https://arxiv.org/abs/2504.21633v1
13 and then applying Lemma 12 yields $C_{\ell,n}\int_0^{\epsilon/k}(p(x)\lvert V_d\rvert r^d)^{\ell-1}(1-F_x(r))^nr^{d+1}\,dr-C_{\ell,n}\int_0^{\epsilon/k}(p(x)\lvert V_d\rvert r^d)^{\ell-1}\exp(-np(x)\lvert V_d\rvert r^d)r^{d+1}\,dr\le 2np(x)\lvert V_d\rvert C_{\ell,n}\int_0^{\epsilon/k}(p(x)\lvert V_d\rvert r^d)^{\ell-1}O\big(r^{d+1}\exp(-np(x)\lvert V_d\rvert r^dL_k)r^{d+1}\big)\,dr=O\big(n(k/n)^{(2+d+1)/d}\big)=o\big((k/n)^{2/d}\big)$, where we used $k^{d+1}=o(n)$. The leading term of the bias is now reduced to the integral $2... | https://arxiv.org/abs/2504.21633v1
$a+O\big(R_a^{2+\kappa}\big)$, where $\kappa:=\min(\eta,\beta)$. We also used $\int_{S^{d-1}}\theta\,d\sigma(\theta)=0$. Here, $\Psi_2(a):=\frac{1}{\sigma(S^{d-1})}\int_{S^{d-1}}\theta^\top\Big(\frac{\nabla g(a)\nabla p(a)^\top}{p(a)}+\frac{\nabla^2g(a)}{2}\Big)\theta\,d\sigma(\theta)$, as defined in the statement of the theorem. By the same expansion of $p$ and the fact that $\int_{S^{d-1}}\theta\,d\sigma(\theta)=0$, we can express the denominator of $I(a,R_a)$ as $\int_{S^{d-1}}p(a+R_a\theta)\,d\sigma(\theta)=p(a)\sigma(S^{d-1})... | https://arxiv.org/abs/2504.21633v1
$\le\frac{E[Z\mathbf{1}(Z\ge t)]}{E[Z\mathbf{1}(Z\ge t)]-E[(Z-R)\mathbf{1}(Z\ge t)]}\cdot\frac{E[R]}{E[Z]}$. Setting $Z:=\frac{n}{k}Q(A_k(X_1))$, $R:=\frac{n}{k}\int_{\mathcal X}\mathbf{1}(\Vert X_1-x\Vert\le\hat\tau_k(x))\mathbf{1}(\Vert X_1-x\Vert>a)\,dQ(x)$, and $a^d:=\frac{2kt}{3n\bar q\lvert V_d\rvert}$, we first have $Z-R\le\frac{n}{k}\int_{\mathcal X}\mathbf{1}(\Vert X_1-x\Vert\le a)\,dQ(x)\le\frac{n}{k}\bar q\lvert V_d\rvert a^d\le\frac{2t}{3}$. Therefore, $\frac{E[Z\mathbf{1}(Z\ge t)]}{E[Z\mathbf{1}(Z\ge t)]-E[(Z-R)\mathbf{1}(Z\ge t)]}\le\frac{E[Z\mathbf{1}(Z\ge t)]}{E[Z\mathbf{1}(Z\ge t)]-2tP(Z\ge t)/3}$, and because for $\alpha\ge 0$, the mapping $x\mapsto... | https://arxiv.org/abs/2504.21633v1
for some $\delta\in(0,1)$, $E\big[N^aN_w^{-a}\mathbf{1}(N_w\ge 1)\big]\le E\big[N^aN_w^{-a}\mathbf{1}(N_w\le(1-\delta)Np_w)\big]+N^a\big((1-\delta)Np_w\big)^{-a}\le N^aP\big(N_w\le(1-\delta)Np_w\big)+(1-\delta)^{-a}p_w^{-a}\le N^a\exp\Big(-\frac{\delta^2Np_w}{2}\Big)+(1-\delta)^{-a}p_w^{-a}$, which is bounded in $N$. The last inequality is based on the Chernoff concentration bound for binomial random variables; see for instance Boucheron et al. [2013]. The previous... | https://arxiv.org/abs/2504.21633v1
we consider the case for which $j\ge 2$ circles with radius $a_\ell$ intersect the ball $B(x,\varepsilon)$. Suppose that $a_k,\dots,a_{k+j-1}$ are the radii of the circles that intersect $B(x,\varepsilon)$. It automatically means that $\frac{2\varepsilon}{\Vert x\Vert^2-\varepsilon^2}=(\Vert x\Vert-\varepsilon)^{-1}-(\varepsilon+\Vert x\Vert)^{-1}\ge a_{k+j-1}^{-1}-a_k^{-1}=j-1$. We then get $\Vert x\Vert^2\le\varepsilon^2+2\varepsilon/(j-1)$. However, $\lvert B(x,\varepsilon)\cap\mathcal X\rvert\ge\pi\varepsilon^2-\eta\varepsilon\sum_{s=0}^{j}\delta_{k+s}... | https://arxiv.org/abs/2504.21633v1
of the boundary. The vertex angle, and thus the clipped volume, can depend on the location $z$, but its minimum is still positive thanks to the compactness. (Fig. 7: Illustration for the proof of Theorem 5-2 with the Euclidean norm $\Vert\cdot\Vert$.) Without loss of generality, su... | https://arxiv.org/abs/2504.21633v1
$i$th section of the convex set $\mathcal X$ at point $x$. By convexity, these sections are closed intervals, that is, $I_i((x_j)_{j\ne i})=[a((x_j)_{j\ne i}),b((x_j)_{j\ne i})]$ for some real numbers $a((x_j)_{j\ne i})<b((x_j)_{j\ne i})$. We now have the bound $\exp\big(-L\delta(x,\mathcal X^c)^d\big)\le\sum_{i=1}^d\exp\big(-L\eta_dg_i(x)^d\big)$. Moreover, for $i=1,\dots,d$, we have $\int_{a((x_j)_{j\ne i})}^{b((x_j)_{j\ne i})}\exp\big(-L\eta_dg_i(... | https://arxiv.org/abs/2504.21633v1
the mean value inequality, for all $a\in A_i$, $\lvert\varphi_i(a_i(x))-\varphi_i(a)\rvert\le K_i\Vert a_i(x)-a\Vert$. This means that the slope of the boundary does not drop vertically around $z$. We construct the set $C_1$ depicted in Figure 6 as the intersection of the cone with vertex $z$, directed downwards or towards $x$, with slope $K_i$, and the ball $B(x,\varepsilon_A)$ that is in... | https://arxiv.org/abs/2504.21633v1
are bounded sets and thus have a finite Lebesgue measure. 5) If $\mathcal X_1$ and $\mathcal X_2$ satisfy Condition (X2), then it follows directly from the definition that $\mathcal X_1\cup\mathcal X_2$ satisfies Condition (X2). 6) Suppose that $\mathcal X_1$ and $\mathcal X_2$ satisfy Condition (A). We have the inequality, for all $x\in\mathbb R^d$, $\delta\big(x,(\mathcal X_1\cup\mathcal X_2)^c\big)\ge\max\big\{\delta(x,\mathcal X_1^c),\delta(x,\mathcal X_2^c)\big\}$. Therefore if $L_{\mathcal X}:=... | https://arxiv.org/abs/2504.21633v1
$\inf_{\tilde P\in\mathcal P_1}\sum_{a=1}^{K^*}\tilde P(V_a)^2\ge\inf_{\gamma\in\mathbb R^{K^*},\gamma_1=1}\Vert\Xi(V)\gamma\Vert^2\ge\inf_{\gamma\in\mathbb R^{K^*},\Vert\gamma\Vert\ge 1}\Vert\Xi(V)\gamma\Vert^2\ge\lambda_{\min}(\Xi(V))^2\ge\frac{\det(\Xi(V))^2}{\rho(\Xi(V))^{2(K^*-1)}}\ge\frac{\det(\Xi(V))^2}{\Vert\Xi(V)\Vert_F^{2(K^*-1)}}$, where $\lambda_{\min}$ denotes the smallest eigenvalue, $\rho$ the spectral radius and $\Vert\cdot\Vert_F$ the Frobenius norm. Thus, $\Vert\Xi(V)\Vert_F^2=\sum_{a,j=1}^{K^*}V_a^{2\Psi(j)}=\sum_{a=1}^{K^*}\sum_{l=0}^{L}\sum_{\lambda\in\mathbb N^d,\,\lvert\lambda\rvert=l}V_a^{2\lambda}\le\sum_{a=1}^{K^*}\sum_{l=... | https://arxiv.org/abs/2504.21633v1
obtain, for all $C'>0$, $E[u_1\mid\hat\tau_k(x)]\le C'+\frac{\nu+1}{\nu}\frac{\Gamma(b)C^\nu\Gamma(1+b)^{\nu-1}}{\Gamma(b\nu)}\int_{C'/\nu}^\infty t^{-b\nu}\,dt$. But since $b\nu>1$, $\frac{C^\nu\Gamma(1+b)^\nu}{\Gamma(b\nu)}\int_{C'/\nu}^\infty t^{-b\nu}\,dt=\frac{1}{b\nu-1}\frac{(C\Gamma(1+b))^\nu}{\Gamma(b\nu)}\Big(\frac{\nu}{C'}\Big)^{b\nu-1}\le\frac{C'C''\sqrt b}{(b\nu-1)\sqrt{2\pi\nu}}\Big(\frac{C\Gamma(1+b)e^bb^{-b}}{C'^b}\Big)^\nu$, using Stirling's approximation. If we choose $C'$ large enough, the above expression remains bounded as $\nu\to\infty$. Si... | https://arxiv.org/abs/2504.21633v1
Han. Estimation based on nearest neighbor matching: From density ratio to average treatment effect. Econometrica, 91(6):2187–2217, 2023. Marco Loog. Nearest neighbor-based importance weighting. In 2012 IEEE International Workshop on Machine Learning for Signal Processing, pages 1–6. IEEE, 2012. François Portier, Li... | https://arxiv.org/abs/2504.21633v1
three items at once. Let $\varphi:\mathbb R^n\to\mathbb R$ be a bounded and measurable function and let $\pi$ be a permutation of $\{1,\dots,n\}$; then $E\big[\varphi(Z_{(1)},\dots,Z_{(n)})\mathbf{1}(\hat\pi=\pi)\big]=E\big[\varphi(Z_{\pi(1)},\dots,Z_{\pi(n)})\mathbf{1}\big(Z_{\pi(1)}<\cdots<Z_{\pi(n)}\big)\big]=\int_{\mathbb R^n}\varphi(x)\mathbf{1}(x_1<\cdots<x_n)f_Z(x_1)\dots f_Z(x_n)\,dx$, since the vector $(Z_{\pi(1)},\dots,Z_{\pi(n)})$ has the same distribution as the vector $(... | https://arxiv.org/abs/2504.21633v1
$a)\le e^{1/4}\exp\Big(-\frac{n}{k}\frac{c_p\lvert V_d\rvert}{8}a^d\Big)$. Proof. Let $(U_1^n,\dots,U_n^n)$ be an $n$-sample of the uniform distribution on $[0,1]$; then, applying Lemma 3, we know that $P(\hat\tau_k(x)>a)=P(U_{(k)}^n>F_x(a))=P(B\le k-1)$, where $B$ has a binomial distribution with parameters $n$ and $F_x(a)$, with $F_x(a)$ being defined in the proof of Lemma 4. We know from conce... | https://arxiv.org/abs/2504.21633v1
of the $\ell'$-th nearest neighbour and $A_2$ be the set of indexes of the $\ell'-1$ nearest neighbours of $y$. To see $C_{A_1,A_2,i_1,i_2}\subseteq\mathcal D$, we can show that $a\in A_1\sqcup\{i_1\}\iff\Vert X_a-x\Vert\le\Vert X_{i_1}-x\Vert$ and $a\in A_2\sqcup\{i_2\}\iff\Vert X_a-y\Vert\le\Vert X_{i_2}-y\Vert$ using Conditions (i-iii) and the triangle inequality. $A_1$ thus characterizes the $\ell-1$ nearest neighbours of $x$, and $\Vert X_{i_1}-... | https://arxiv.org/abs/2504.21633v1
Borel sets $B(x,R_x)$ and $B(y,R_y)$ are disjoint, and we may apply Lemma 9, which yields $P(\hat\tau_\ell(x)>R_x,\hat\tau_{\ell'}(y)>R_y)\le P(\hat\tau_\ell(x)>R_x)P(\hat\tau_{\ell'}(y)>R_y)$. Now, we can write the expectation of nonnegative random variables thanks to tail distribution functions as follows. Let $\chi^+$ denote the right-continuous generalised inverse of a... | https://arxiv.org/abs/2504.21633v1
$C_{\ell,n}\int_{\mathbb R_+}(p(x)\lvert V_d\rvert r^d)^{\ell-1}r^\kappa\exp(-nL_kp(x)\lvert V_d\rvert r^d)n^{-\ell}r^{d+1}\,dr=O\big(k^{\frac{2+\kappa}{d}}n^{-\frac{2+\kappa}{d}}\big)$. Proof. Using the bound $C_{\ell,n}\le\frac{n^\ell}{\Gamma(\ell)}$ and making the change of variables $s:=nL_kp(x)\lvert V_d\rvert r^d$, we can write $I_1:=C_{\ell,n}\int_{\mathbb R_+}(p(x)\lvert V_d\rvert r^d)^{\ell-1}r^{2+\kappa}\exp(-nL_kp(x)\lvert V_d\rvert r^d)r^{d-1}\,dr\le\frac{n^\ell}{\Gamma(\ell)}\int_{\mathbb R_+}\Big(\frac{s}{nL_k}\Big)^{\ell-1}\Big(\frac{s}{nL_kp(x)\lvert V_d\rvert}\Big)^{\frac{2+\kappa}{d}}\exp(-s)\,(dnL_kp(x)\lvert V_d\rvert)^{-1}\,ds\le Cn^{-\frac{2+\kappa}{d}}L_k^{-\ell-\frac{2+\kappa}{d}}... | https://arxiv.org/abs/2504.21633v1
arXiv:2504.21669v1 [econ.EM] 30 Apr 2025. On the Robustness of Mixture Models in the Presence of Hidden Markov Regimes with Covariate-Dependent Transition Probabilities. Demian Pouzo, Department of Economics, University of California, Berkeley, U.S.A. Email: dpouzo@berkeley.edu. Zacharias Psaradakis, Birkbeck Business Schoo... | https://arxiv.org/abs/2504.21669v1
consistently estimable even if the dependence of the unobservable regime sequence is not taken into account. A condition on the tail behavior of the characteristic function of the (standardized) conditional distribution of the observable responses is also provided under which the pseudo-true parameter for the QML... | https://arxiv.org/abs/2504.21669v1
of interest, and consider QML estimation of the parameters of the outcome equation of a misspecified generalized HMM. Section 3 discusses numerical results from a simulation study. Section 4 summarizes and concludes. 2 Framework, Results and Discussion. 2.1 DGP and Model. Consider a discrete-time stochastic process $\{(X_t,S_t)\}_{t\ge 0}$ such tha... | https://arxiv.org/abs/2504.21669v1
and is not required to be exogenous, in the sense that $Z_t$ may be contemporaneously correlated with $U_{1,t}$. Third, the statistical model is misspecified, in the sense that the DGP is not a member of the family $\{(P_\pi, Q_{\bar\vartheta}) : (\pi, \bar\vartheta) \in \Theta\}$; this is because the dynamic structure of the regimes is misspecified. As already discussed in Se
function in (7) are all $\theta$ such that
$$\sum_{s\in S} \bar\vartheta_s\, \sigma(s)^{-1} f\bigl((\cdot - \mu(s) - \gamma(s)\,\cdot)/\sigma(s)\bigr) = p^*(\cdot\,|\,\cdot).$$
It is straightforward to verify that the equality above holds for $\mu = \mu_1^*$, $\sigma = \sigma_1^*$, $\gamma = \gamma^*$, and $\bar\vartheta^*$ such that $\bar\vartheta^*_s = \Pr^*(S_1 = s)$. Theorem 1 establishes that the true parameters $\pi^*(s) := (\mu_1^*(s), \sigma_1^*(s), \gamma^*(s))$, $s\in S$, associated with
from the results of Pouzo et al. (2022) under suitable differentiability and moment conditions. These conditions are satisfied, for example, in the case where $f$ is Gaussian and $Q^*(s\,|\,z,s') = G(\alpha_{s,s'} + \beta_{s,s'} z)$ for some continuous distribution function $G$ on $\mathbb{R}$ whose support is all of $\mathbb{R}$. In the next subsection, we discuss a suffici
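A covariate-dependent transition mechanism of this form is easy to simulate. The sketch below (our illustration, not the paper's code; parameter values and function names are hypothetical) takes $G$ to be the logistic CDF, one admissible choice of a continuous distribution function with support all of $\mathbb{R}$:

```python
import numpy as np

# Sketch: two-state regime sequence with covariate-dependent transitions,
#   Pr(S_t = 1 | Z_t = z, S_{t-1} = s') = G(a[s'] + b[s'] * z),
# where G is the logistic CDF. All names and values here are illustrative.
def simulate_regimes(z, a, b, seed=0):
    rng = np.random.default_rng(seed)
    G = lambda u: 1.0 / (1.0 + np.exp(-u))  # logistic CDF
    s = np.empty(len(z), dtype=int)
    prev = 0  # arbitrary initial regime
    for t, zt in enumerate(z):
        p1 = G(a[prev] + b[prev] * zt)      # transition probability into state 1
        s[t] = int(rng.random() < p1)
        prev = s[t]
    return s

z = np.random.default_rng(1).normal(size=500)  # an i.i.d. covariate, for illustration
s = simulate_regimes(z, a=[-1.0, 1.0], b=[0.5, 0.5])
```

A simulated path like `s` is exactly the kind of hidden regime sequence whose dependence structure the QML estimator in the text deliberately ignores.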
Gaussian distributions, including distributions with heavy tails (and finite variance). For instance, the result holds if $f$ is the density of a (rescaled) Student-$t$ distribution with degrees of freedom $\upsilon > 2$ (see Example 1 in Holzmann et al. (2006)).

2.4 Discussion on the Main Theorem

The consistency results in (5)–(6) and in
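As a concrete instance of a heavy-tailed $f$ with finite variance, one can rescale the Student-$t$ density so that it has unit variance (a sketch under our own conventions; the degrees of freedom and rescaling below are illustrative, not taken from the paper):

```python
import numpy as np
from math import gamma, sqrt, pi

# Density of a Student-t with nu > 2 degrees of freedom, rescaled to unit
# variance: Var(t_nu) = nu/(nu - 2), so x = c * t with c = sqrt((nu-2)/nu)
# has variance 1 while keeping the polynomial tails.
def rescaled_t_density(x, nu):
    c = sqrt((nu - 2.0) / nu)
    t = np.asarray(x) / c
    norm_const = gamma((nu + 1) / 2) / (sqrt(nu * pi) * gamma(nu / 2))
    return norm_const * (1.0 + t * t / nu) ** (-(nu + 1) / 2) / c

# Crude Riemann check that the rescaled density still integrates to ~1:
xs = np.linspace(-50.0, 50.0, 200001)
dx = xs[1] - xs[0]
mass = float(np.sum(rescaled_t_density(xs, nu=3.0)) * dx)
```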
a result analogous to that in Theorem 1 does not hold when lagged values of $Y_t$ are present as covariates in the outcome equations (1) and (2) (e.g., as in Markov-switching autoregressive models). In this case, misspecification of the dependence structure of the regimes will affect estimation of all the parameters, not ju
beyond the assumption of i.i.d. regimes, the additional source of misspecification being the incorrect assumption of uncorrelatedness of the covariate $W_t$ and the noise variable $U_{1,t}$. The results relating to the accuracy of the estimated standard errors are not substantially different from those obtained with $\omega^* = 0$. As po
the quasi-log-likelihood function associated with it under the assumption that the regime variables $\{S_t\}$ are i.i.d., with $\Pr(S_t = 1) = \bar\vartheta$, and the noise variables $\{\varepsilon_t\}$ are i.i.d., independent of $\{S_t\}$, with $\varepsilon_t \sim \mathcal{N}(0,1)$. The Monte Carlo results reported in Table 2 reveal substantial finite-sample bias in the case of the QML es
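Under i.i.d. regimes and Gaussian noise, each observation is scored as a two-component Gaussian mixture, so the quasi-log-likelihood has a simple closed form. The sketch below is our own minimal rendering of that criterion (parameter names are ours, and the outcome equation is reduced to regime-specific location and scale for illustration):

```python
import numpy as np

# Quasi-log-likelihood that (deliberately) treats the regimes {S_t} as i.i.d.
# with Pr(S_t = 1) = vartheta and eps_t ~ N(0, 1): each y_t is scored as a
# two-component Gaussian mixture. Names and parametrization are illustrative.
def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def quasi_loglik(y, vartheta, mu, sigma):
    # mu = (mu_0, mu_1), sigma = (sigma_0, sigma_1): per-regime outcome parameters
    dens = (1.0 - vartheta) * normal_pdf(y, mu[0], sigma[0]) \
           + vartheta * normal_pdf(y, mu[1], sigma[1])
    return float(np.sum(np.log(dens)))
```

Maximizing this criterion over $(\bar\vartheta, \mu, \sigma)$ gives the QML estimator whose finite-sample behavior the Monte Carlo experiment examines.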
and H. White (1982). Misspecified models with dependent observations. Journal of Econometrics 20, 35–58.
Douc, R. and E. Moulines (2012). Asymptotic properties of the maximum likelihood estimation in misspecified hidden Markov models. Annals of Statistics 40, 2697–2732.
Engel, C. and C. S. Hakkio (1996). The distri
arXiv:2505.00151v1 [math.ST] 30 Apr 2025

A BAYESIAN APPROACH TO INVERSE PROBLEMS IN SPACES OF MEASURES

PHUOC-TRUONG HUYNH

Abstract. In this work, we develop a Bayesian framework for solving inverse problems in which the unknown parameter belongs to a space of Radon measures taking values in a separable Hilbert space
required. In recent years, the Bayesian approach as a means of regularization has become more popular. We recall from [31, 32] that a Bayesian solution to (1.1) is a probability measure $\mu^{z}_{\mathrm{post}}$, namely the posterior distribution of $u$ conditioned on measurement data $z$, as described by Bayes' rule [31] $\frac{\mathrm{d}\mu^{z}_{\mathrm{post}}}{\mathrm{d}\mu_{\mathrm{pr}}}(u)$
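In the infinite-dimensional setting of the cited references, Bayes' rule is typically stated via the Radon–Nikodym derivative of the posterior with respect to the prior. A common form is the following (our rendering of the standard statement, with $\Phi$ a potential/negative log-likelihood and $Z(z)$ a normalizing constant; the paper's exact notation may differ):

```latex
% Bayes' rule as a density of the posterior with respect to the prior:
\frac{\mathrm{d}\mu^{z}_{\mathrm{post}}}{\mathrm{d}\mu_{\mathrm{pr}}}(u)
  \;=\; \frac{1}{Z(z)}\exp\bigl(-\Phi(u;z)\bigr),
\qquad
Z(z) \;=\; \int \exp\bigl(-\Phi(u;z)\bigr)\,\mathrm{d}\mu_{\mathrm{pr}}(u).
```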
(see [5] and the references therein), most existing work considers point processes with real, positive coefficients. We instead consider vector-valued coefficients and show that the resulting point process belongs to $M(\Omega, H)$ $\mu_{\mathrm{pr}}$-almost surely. (2) Given the aforementioned priors, we study the well-posedness of the Bayes