text string | source string |
|---|---|
measures M(Ω, R) ≡ M(Ω), see for instance [4], while H = C^n gives M(Ω, C^n), corresponding to the space of complex vector-valued measures in [27]. Next, we can see that every u ∈ M(Ω, H) is absolutely continuous with respect to |u|, meaning that if B ∈ B(Ω) and |u|(B) = 0, then u(B) = 0_H. Hence, by the Radon–Nikodym Theorem ... | https://arxiv.org/abs/2505.00151v1 |
weak* topology on M(Ω, H). In separable Banach spaces, the two σ-algebras in fact coincide; see [32, Section A.2.2]. However, since M(Ω, H) is not separable, this property does not hold in M(Ω, H); see Remark 2.8. In the following, we show that the σ-algebra B_w* is appropriate for our analysis. We begin by show... | https://arxiv.org/abs/2505.00151v1 |
, f_k; r)), and thus implies the measurability of U as a map from W to M(Ω, H). The proof is complete. □ Hence, in the following, by a measurable map U: W → M(Ω, H) or U: M(Ω, H) → W, we mean it is measurable with respect to B_w* (equivalently, it is also weakly* measurable). Corollary 2.4. Let Q: W → H and Y: W → Ω be measur... | https://arxiv.org/abs/2505.00151v1 |
Hence, we conclude that the two σ-algebras are not equivalent on M(Ω). For simplicity, and when no confusion arises, we will also write M and C to denote M(Ω, H) and C(Ω, H), respectively. 3. Prior distribution on the space of measures. Following the discussion in the previous section, we are now able to define prior distr... | https://arxiv.org/abs/2505.00151v1 |
(3.2) is a well-defined random measure on M(Ω, H) for every sequence {γ_k}_{k∈N}. (2) If K = ∞ almost surely, then (3.2) is a well-defined random measure on M(Ω, H) if {|γ_k|²}_{k∈N} ∈ ℓ^p and {var‖Q_k‖_H}_{k∈N} ∈ ℓ^q with 1/p + 1/q = 1. Proof. We adapt the proof in [18]. First, assume that K < ∞ almost surely. The map ω ↦ u(ω) = Σ_{k=1}^K (... | https://arxiv.org/abs/2505.00151v1 |
the distribution measure of random variables of the form U = Qδ_Y. The proof follows that of [24, Proposition 5.3.1] for the compound Poisson process. This, in particular, implies that the measure µ defining the Poisson point process is an infinitely divisible measure; that is, for each n ∈ N, there exists a Radon probabili... | https://arxiv.org/abs/2505.00151v1 |
measure µ^z_post exists uniquely. (2) Stability: For every z ∈ Y and every sequence {z_n}_{n∈N} ⊂ Y such that z_n → z in Y, there holds d_Hell(µ^{z_n}_post, µ^z_post) → 0. Proof. Our proof adapts that of [23, Theorem 2.5]. Firstly, let z ∈ Y be fixed. We prove that Z(z) > 0. Indeed, since L(z|·) > 0 on M by (A1), we have M = ⋃_{n=1}^∞ M_n wher... | https://arxiv.org/abs/2505.00151v1 |
∫_M L_N(z|u) dµ_pr(u) → ∫_M L(z|u) dµ_pr(u) = Z(z), for all z ∈ Y. Hence, one has √(L_N(z|u)/Z_N(z)) − √(L(z|u)/Z(z)) → 0, for all z ∈ Y, for a.e. u ∈ M. In addition, since |... | https://arxiv.org/abs/2505.00151v1 |
weak* continuous. By Corollary 5.1, Theorem 4.1 is applicable and the Bayesian inverse problem in this setting is well-posed. We define the prior distribution of u through the random measure u = Σ_{k=1}^K Q_k δ_{Y_k}, where we consider the distributions K ∼ Poiss(γ), Q_k ∼ N(µ, σ²) and Y_k ∼ Uniform(Ω). If we know that q_k... [a sampling sketch follows the table] | https://arxiv.org/abs/2505.00151v1 |
edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems. Inverse Probl., 34(4):37, 2018. Id/No 045002. [2] V. I. Bogachev. Measure theory. Vol. I and II. Berlin: Springer, 2007. [3] K. Bredies and S. Fanzon. An optimal transport approach for solving dynamic inverse problems in s... | https://arxiv.org/abs/2505.00151v1 |
2016. [22] S. Lang. Real analysis. 2nd ed. Reading, Massachusetts, etc.: Addison-Wesley Publishing Company, Advanced Book Program/World Science Division. XIV, 533 p., 1983. [23] J. Latz. On the well-posedness of Bayesian inverse problems. SIAM/ASA J. Uncertain. Quantif., 8:451–482, 2020. [24] W. Linde. Probabi... | https://arxiv.org/abs/2505.00151v1 |
Bayesian Discrepancy Measure: Higher-order and Skewed approximations. Elena Bortolato(1,‡), Francesco Bertolino(2,‡), Monica Musio(2,‡), and Laura Ventura(3,‡). (1) Universitat Pompeu Fabra, Barcelona School of Economics, Spain; elena.bortolato@bse.eu (2) University of Cagliari, Cagliari, Italy; bertolin@unica.it, mmusio@unica.it (3) Uni... | https://arxiv.org/abs/2505.00185v1 |
For the third-order approximations, connections with frequentist inference are highlighted when using objective matching priors. Also for multidimensional parameters, while a first-order Gaussian approximation of the posterior distribution can be used to calculate the BDM, it still fails to account for potential post... | https://arxiv.org/abs/2505.00185v1 |
for δ_H. Let θ_m be the posterior median and consider the interval defined as I_E = (θ_0, +∞) if θ_m < θ_0, or as I_E = (−∞, θ_0) if θ_0 < θ_m. Then, the BDM of the hypothesis H_0 can be computed as δ_H = 1 − 2P(θ ∈ I_E | y) = 1 − 2∫_{I_E} π(θ|y) dθ. (3) Note that the quantity 2P(θ ∈ I_E | y) gives the posterior probability of an equi-tailed credible interval ... [a numerical sketch follows the table] | https://arxiv.org/abs/2505.00185v1 |
simply given by δ_H ≐ 2Φ((ψ_0 − ψ̂)/√(j_p(ψ̂)^{−1})) − 1. (10) Thus, to first order, δ_H agrees numerically with 1 − p-value based on the profile Wald statistic w_p(ψ) = (ψ̂ − ψ)/j_p(ψ̂)^{−1/2}. In practice, as for the scalar parameter case, the approximation (5) of δ_H may also be inaccurate, in particular for a small sample size or... | https://arxiv.org/abs/2505.00185v1 |
Optimal Transport map construction); • it naturally reduces to the univariate definition δ_H = |F^±_P(θ_0)| when d = 1. The primary practical difficulty lies in computing the center-outward distribution function F^±_P(·) for an arbitrary posterior distribution π(θ|y), as it typically requires solving a complex Optimal Transport... | https://arxiv.org/abs/2505.00185v1 |
and references therein) ensures that a frequentist p-value coincides with a Bayesian posterior survivor probability to a high degree of approximation, in the marginal posterior density (8). Welch and Peers [25] showed that for a scalar parameter θ the Jeffreys prior is probability-matching, in the sense that posterior... | https://arxiv.org/abs/2505.00185v1 |
starting from the Laplace approximation of the posterior distribution. In particular, let W(θ) = 2(ℓ(θ̂) − ℓ(θ)) be the log-likelihood ratio for θ. Using W(θ), a first-order approximation of the BDM for the hypothesis H_0: θ = θ_0 can be obtained as δ_H ≐ 1 − P(χ²_d ≥ W(θ_0)), (20) where χ²_d is the Chi-squared distribution with d deg... [a chi-squared sketch follows the table] | https://arxiv.org/abs/2505.00185v1 |
derivative of the log-likelihood ℓ(θ), i.e. ℓ^{(k)}(θ) = ∂^k ℓ(θ)/∂θ^k, k = 1, 2, 3, . . .. Moreover, let θ̃ = argmax_{θ∈Θ} {ℓ(θ) + log π(θ)} be the MAP estimate of θ and let h = √n(θ − θ̃) be the rescaled parameter. Using the result (14) of [5] and all the regularity conditions stated there, the skew-symmetric (SKS) approximation for the po... | https://arxiv.org/abs/2505.00185v1 |
. . . , d, and let Ω = (j(θ̃)/n)^{−1} be the inverse of the scaled observed Fisher information matrix evaluated at the MAP. We denote the elements of Ω by Ω_st, and in particular we denote by Ω_11 the element corresponding to the parameter of interest ψ. Moreover, let us denote by ℓ^{(3)}_{stl}(θ) = ∂³ℓ(θ)/(∂θ_s ∂θ_t ∂θ_l) the elem... | https://arxiv.org/abs/2505.00185v1 |
α, Ω, and ξ can be obtained analytically. Ultimately, the marginal distributions are available in closed form as well. Given its tractability, we adopt the derivative-matching approach proposed by [26] to derive SKS approximations for models with multidimensional parameters. For the SN model we can instead easily defi... | https://arxiv.org/abs/2505.00185v1 |
hence its integral is convex. Defining g(Z) = ∫_0^{Z_1} Φ^{−1}(F_SN(t, ξ, ω, α)) dt + (1/2) Σ_{i=2}^d Z_i², then T_2(Z) = ∇g(Z). The composite map T(·), used in (27), is the gradient of a convex function and thus represents the optimal transport map (under quadratic cost) from a SN distribution to a standard normal. 5 Examples of higher-or... | https://arxiv.org/abs/2505.00185v1 |
SN approximations better approximate the true BDM. Furthermore, the SN approximation more accurately captures the tail behavior of the posterior distribution than the SKS approximation. For θ_0 = 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1, 2.4 and n = 6: IO gives 0.93, 0.78, 0.46, 0.00, 0.46, 0.78, 0.93, 0.99; HO gives 1.00, 0.96, 0.62, 0.00, 0.30, 0.57, 0.73, 0.83; SKS gives 1.00... | https://arxiv.org/abs/2505.00185v1 |
the presence of the metabolites. We focus on the most popular regression model for binary data, namely the logistic regression with mean function logit^{−1}(β_0 + β_1x_1 + β_2x_2). As in [5], Bayesian inference is carried out by employing independent, weakly informative Gaussian priors N(0, 25) for the coefficients β = (β_0, β_1, β_2... | https://arxiv.org/abs/2505.00185v1 |
[Figure 4: Marginal posterior distributions π(β_1|y) and π(β_2|y) for the regression parameters of the logistic regression example, comparing I Order, SKS, SN, and the true posterior. The marginal medians are indicated in blue, while the parameters under the null hypothesis are indicated in re... | https://arxiv.org/abs/2505.00185v1 |
research program. São Paulo J. Math. Sci., 2022, 16, 566–584. [14] Peyré, G. and Cuturi, M. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 2019, 11(5-6), 355–607. [15] Pierce, D.A. and Bellio, R. Modern likelihood-frequentist inference. Int. Stat. Rev., 2017... | https://arxiv.org/abs/2505.00185v1 |
arXiv:2505.00215v1 [math.ST] 30 Apr 2025. Algebraic Constraints for Linear Acyclic Causal Models. Cole Gigliotti and Elina Robeva. May 8, 2025. Abstract. In this paper we study the space of second- and third-order moment tensors of random vectors which satisfy a Linear Non-Gaussian Acyclic Model (LiNGAM). In such a causal mo... | https://arxiv.org/abs/2505.00215v1 |
third-order moments of the random vector X uniquely specify the DAG G (see Theorem 3.1). Our constraints contain as a subset the well-known constraints arising from the local Markov property satisfied by the covariance matrix only [Sul18], and also extend recent work on LiNGAM where G is assumed to be a polytree [ADG+23].... | https://arxiv.org/abs/2505.00215v1 |
finite third moment ω^{(3)}_i = E[ε_i³]. No other assumption about their distribution is made and, in particular, the errors need not be Gaussian (in which case we would have E[ε_i³] = 0 by symmetry of the Gaussian distribution). The coefficients λ_ji in (1) are unknown real-valued parameters, and we fill them into a matrix ... | https://arxiv.org/abs/2505.00215v1 |
and the problem is to describe the model M_2(G) consisting of all covariance matrices S which factorize as S = (I − Λ)^{−T} Ω^{(2)} (I − Λ)^{−1} with Ω^{(2)} ∈ PD(R^{|V|}) and Λ ∈ R^E. Conditional independence implies constraints on the entries of S for a given graph G as follows. If sets of vertices A and B are d-separated given set C in the graph G (se... | https://arxiv.org/abs/2505.00215v1 |
graph. More recently, [SRD24] uses rank constraints on matrices which consist of second-, third-, and higher-order moments in order to learn a DAG G with hidden variables. Furthermore, [SD23] uses algebraic constraints for goodness-of-fit tests which determine whether the data arises from a linear non-Gaussian model at ... | https://arxiv.org/abs/2505.00215v1 |
Proof. Recall that X_v = Σ_{u∈pa(v)} λ_uv X_u + ε_v. If w is any non-descendant of v, X_w is independent of ε_v, and so E[ε_v X_w] = 0. We see that s_vw = E[X_v X_w] = Σ_{u∈pa(v)} λ_uv E[X_u X_w] + E[ε_v X_w] = Σ_{u∈pa(v)} λ_uv s_uw. A similar equation can be derived for t_vwz for any z ∈ V: t_vwz = Σ_{u∈pa(v)} λ_uv t_uwz. Then, the vector (−1, λ_{pa(v),v})^T is in the left null ... [a moment-estimation sketch follows the table] | https://arxiv.org/abs/2505.00215v1 |
2 × 2 determinants inside PD(R^{|V|}) × Sym³(R^{|V|}). However, the rank conditions from Theorem 3.1 may involve larger determinants even in the case of polytrees. Thus, we ask here whether we can replace pa(v) and nd(v) to obtain different matrices whose minors also vanish on the model. Proposition 4.2. For any A ⊆ V, define pa(A)... | https://arxiv.org/abs/2505.00215v1 |
a ∈ V \ {v} such that there is a directed path from a to v which does not pass through w. We claim that pa({v} ∪ A) = {w}. Since w is a parent of v and w ∉ A, we have w ∈ pa({v} ∪ A). Now for any u ∈ pa({v} ∪ A), we know that u ∉ {v} ∪ A, and there is an edge from u to either a = v or some element a of A. By definition of A, there is a directed pa... | https://arxiv.org/abs/2505.00215v1 |
T_{v,N_v} drop rank. Here the set N_v ⊆ nd(v) × V is defined as follows: N_v := {(w, z) | w ∈ nd(v) and z ∈ nd(w) ∪ {v, w}}. Proof. This follows from the step in the proof at which we claimed that “by symmetry of Ω^{(3)}, we need only compute Ω^{(3)}_{v,w,z}.” A more detailed analysis shows that we can restrict further to (w, z) ∈ N_v as claimed. | https://arxiv.org/abs/2505.00215v1 |
trials on n samples in which the method returned each vertex as the sink. As the number of samples grows, we observe that the vast majority of times we obtain the correct sink node (node 5). A small proportion of the experiments wrongly label node 4 as the sink node, most likely due to numerical error. [Figure showing a graph on nodes 1–5] Figure... | https://arxiv.org/abs/2505.00215v1 |
potentially more difficult, it would be quite interesting to study the model of second- and third-order moments when the graph G is allowed to have directed cycles. Finding the defining equations in this case would be quite useful in designing a causal discovery algorithm, extending previous work which only applies to ... | https://arxiv.org/abs/2505.00215v1 |
Conformal changepoint localization. Sanjit Dandapanthula (sanjitd@cmu.edu) and Aaditya Ramdas (aramdas@cmu.edu). May 2, 2025. Abstract. Changepoint localization is the problem of estimating the index at which a change occurred in the data generating distribution of an ordered list of data, or declaring that no change occurred.... | https://arxiv.org/abs/2505.00292v1 |
the changepoint under (only) the assumption that the pre-change and post-change distributions are exchangeable. • We describe methods for learning the conformal score function used in the CONCH algorithm from the data, thereby increasing the power of our method. • We demonstrate the CONCH algorithm on a variety of synthe... | https://arxiv.org/abs/2505.00292v1 |
guarantees under minimal statistical assumptions. While originally developed for supervised learning tasks, conformal methods have gradually been extended to changepoint analysis. Early work on connecting conformal prediction to changepoint detection focused primarily on testing for exchangeability violations rather th... | https://arxiv.org/abs/2505.00292v1 |
family of score functions is called a score function. Note that a score function is intuitively a pre-processing transformation intended to separate the pre-change and post-change data points by projecting them into one dimension. In particular, the score function can be learned in any way that uses its... | https://arxiv.org/abs/2505.00292v1 |
while r > t do 11: κ^{(t)}_r ← s^{(1)}_{n−t}(X_r; ⟦X_{t+1}, . . . , X_n⟧, (X_1, . . . , X_t)) 12: Draw θ_r ∼ Unif(0, 1) 13: p^{(t)}_r ← (1/(n−r+1)) Σ_{j=r}^n [1{κ^{(t)}_j > κ^{(t)}_r} + θ_r 1{κ^{(t)}_j = κ^{(t)}_r}] 14: r ← r − 1 15: end 16: F̂_0(z) := (1/t) Σ_{r=1}^t 1{p^{(t)}_r ≤ z} 17: F̂_1(z) := (1/(n−t)) Σ_{r=t+1}^n 1{p^{(t)}_r ≤ z} 18: W^{(0)}_t ← √t KS(F̂_0, u) 19: W^{(1)}_t ← √(n−t) KS(F̂_1, u) 20: Use either the empirical test ... [a p-value sketch follows the table] | https://arxiv.org/abs/2505.00292v1 |
that point’s score, relative to the other points on the same side. Focusing on the t-th row of the MCP, we then refine these p-values as in Figure 2. Focus on a particular index t ∈ [n−1]. Under H_0t, the scores (κ^{(t)}_r)_{r=1}^t are exchangeable, meaning that the p-values (p^{(t)}_r)_{r=1}^t are independent. Therefore, any aggregat... | https://arxiv.org/abs/2505.00292v1 |
_t)_{t=1}^{n−1} using line 1 to line 19 of Algorithm 1 on simulated data (X^{(b)}_t)_{t=1}^n 4: end 5: Draw θ_0, θ_1 ∼ Unif(0, 1) 6: p^left_t ← (1/(B+1)) [θ_0 + Σ_{b=1}^B (1{W^{(0,b)}_t > W^{(0)}_t} + θ_0 1{W^{(0,b)}_t = W^{(0)}_t})] 7: p^right_t ← (1/(B+1)) [θ_1 + Σ_{b=1}^B (1{W^{(1,b)}_t > W^{(1)}_t} + θ_1 1{W^{(1,b)}_t = W^{(1)}_t})] 8: return (p^left_t, p^right_t) 4.1.2 Asymptotic KS test. We can use the asymptotic di... | https://arxiv.org/abs/2505.00292v1 |
log(p^right_t) ∼ χ²_4, assuming that p_0 and p_1 are independent uniform random variables. This method is equivalent to taking the product of p_0 and p_1, which can be a more powerful choice than the minimum when both p_0 and p_1 are expected to be small. • Bonferroni correction (arbitrary dependence): If the left and right p-values ar... [a combination sketch follows the table] | https://arxiv.org/abs/2505.00292v1 |
algorithm can be used if we are not sure about the existence of a changepoint in the dataset. 5 Pre-testing for exchangeability. Suppose we are unsure whether a changepoint exists in the dataset. In this case, it is natural to first test the entire data for exchangeability and only proceed with the CONCH algorithm if... | https://arxiv.org/abs/2505.00292v1 |
hypothesis H_0t: ξ = t against the alternative H_1t: ξ ≠ t is the likelihood ratio test. In particular, the likelihood ratio test rejects H_0t if and only if (for some threshold c_{1−α} > 0): sup_{ξ∈[n]\{t}} [∏_{r=1}^ξ f_0(X_r) ∏_{r=ξ+1}^n f_1(X_r)] / [∏_{r=1}^t f_0(X_r) ∏_{r=t+1}^n f_1(X_r)] ≥ c_{1−α}. Hence, a good choice for the left-hand score function is the likelihood ... | https://arxiv.org/abs/2505.00292v1 |
on the left-hand side of the data exchangeably and the right half non-exchangeably. Therefore, we can train the classifier using a weighted empirical risk minimization which more strongly penalizes misclassifying data on the far right (which is likely to be post-change). Similarly, ĝ_1 can be trained using a weighted e... | https://arxiv.org/abs/2505.00292v1 |
event of probability at most α_0 where the pre-testing algorithm fails, the CONCH algorithm cannot give a sensible confidence interval. However, over 1000 simulations (with a changepoint), the pre-test was always able to detect a deviation from exchangeability. Note that if we know that there is a changepoint in the data... | https://arxiv.org/abs/2505.00292v1 |
conformal p-values (green p-values are valid, red p-values are invalid). Note that all of the p-values are only valid when t = ξ = 400, shown as the yellow row. Observe that only when t = ξ = 400 are the p-values in row t truly uniformly distributed and independent in the left and right halves respectively, as they should be... | https://arxiv.org/abs/2505.00292v1 |
on data to the right of t. Therefore, we use the minimum method to combine the p-values, plotting the resulting p-values (p_t)_{t=1}^{n−1}. [Plot: p-values p_t against t for the MNIST digit change with a pre-trained classifier; the changepoint ξ = 400 and the threshold α = 0.05 are marked.] Figure 7: p-values for MNIS... | https://arxiv.org/abs/2505.00292v1 |
7.3 Sentiment change using large language models (LLMs) Finally, we consider a simulation with a sentiment change in a sequence of text samples, showing that our algorithm is practical for localizing changepoints in language data. We consider the Stanford Sentiment Treebank (SST-2) dataset of movie reviews labeled with... | https://arxiv.org/abs/2505.00292v1 |
[Plot title: p-values for SST-2 mixed sentiment change; changepoint ξ = 400; threshold α = 0.05.] Figure 11: p-values for SST-2 mixed sentiment change at ξ = 400 using DistilBERT trained for sentiment analysis. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green ... | https://arxiv.org/abs/2505.00292v1 |
lighter. arXiv preprint arXiv:1910.01108. Scott, D. W. (1979). On optimal and data-based histograms. Biometrika, 66(3), 605–610. Shafer, G., & Vovk, V. (2008). A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3). Shin, J., Ramdas, A., & Rinaldo, A. (2023). E-detectors: A nonparametric framew... | https://arxiv.org/abs/2505.00292v1 |
to the Kolmogorov-Smirnov statistic. Instead of using the Kolmogorov-Smirnov statistic to measure the discrepancy between the normalized ranks and the uniform distribution, we can use other discrepancy measures such as the Cramér–von Mises statistic, the Anderson-Darling statistic, the Kuiper statistic, or the Wasse... | https://arxiv.org/abs/2505.00292v1 |
, κ^{(t)}_r). Hence, the scores κ^{(t)}_1, . . . , κ^{(t)}_r are exchangeable under H_0t. Now, if Ũ ∼ Unif([r]), then the cdf of X_Ũ conditional on the bag of observations ⟦X_1, . . . , X_r⟧ is F_{X_Ũ}(x) = (1/r) Σ_{j=1}^r 1{κ^{(t)}_j ≤ x}. Conditional on the bag of observations, κ^{(t)}_r is distributed like X_Ũ, so by the randomized probability integra... | https://arxiv.org/abs/2505.00292v1 |
and R (the distribution of X_i for any i ≤ n) are absolutely continuous with respect to Lebesgue measure and have densities q and r. Then, the answer to the above question is the likelihood ratio q(x)/r(x), as we will formalize below. It will be easier to state the result in terms of the normalized rank functional T_n[s] = (1/n) Σ... | https://arxiv.org/abs/2505.00292v1 |
The iterated Dirichlet process and applications to Bayesian inference. Evan Donald, University of Central Florida, ev446807@ucf.edu. Jason Swanson, University of Central Florida, jason.swanson@ucf.edu. Abstract. Consider an i.i.d. sequence of random variables, taking values in some space S, whose underlying distribution is unkn... | https://arxiv.org/abs/2505.00451v1 |
is its mean measure, or base measure, satisfying E[λ(A)] = ρ(A). The number κ ∈ (0, ∞) is its concentration parameter. The smaller κ is, the more likely λ is to be concentrated on only a few points in S. If λ is a Dirichlet process on S with parameters κ and ρ, then its law, which is an element of M_1(M_1(S)), is denoted by D(κρ). No... [a stick-breaking sketch follows the table] | https://arxiv.org/abs/2505.00451v1 |
we let ρ = D(εϱ), where ε > 0 and ϱ is a nonrandom measure on S. As above, if A ⊆ S is measurable, then P(ξ_ij ∈ A) = E[P(ξ_ij ∈ A | µ_i)] = E[µ_i(A)] = ϱ(A), so that ϱ is the prior distribution for ξ_ij. This means that ϖ has distribution D(κD(εϱ)). We call a random measure with a distribution of this form an iterated Dirichlet process (or... | https://arxiv.org/abs/2505.00451v1 |
true for the IDP, not even in the case S = {0, 1} that was treated by Liu in [12]. In other words, the work in [11] is not sufficient to justify its use in [12]. In Theorem 4.7, we provide a new proof that justifies sequential imputation. Our proof avoids the supposition of a full joint density. In doing so, it shows that... | https://arxiv.org/abs/2505.00451v1 |
a random measure on S is a function µ: Ω → M that is (F, M)-measurable. Let M_1 = M_1(S) be the set of all probability measures on S. Note that M_1 = {ν ∈ M : ν(S) = 1}. Hence, M_1 ∈ M and we may define M_1 by M_1 = M|_{M_1}. A random probability measure on S is a function µ: Ω → M_1 that is (F, M_1)-measurable, or equivalently, a random measure taking... | https://arxiv.org/abs/2505.00451v1 |
of dL(X). Let G ⊆ F be a σ-algebra. A regular conditional distribution for X given G is a random probability measure µ on S such that P(X ∈ A | G) = µ(A) a.s. for all A ∈ S. Since S is a complete and separable metric space, regular conditional distributions exist. In fact, such a µ exists whenever X takes values in a standard Borel ... | https://arxiv.org/abs/2505.00451v1 |
Σ_{i=1}^n f(Y, ξ_i) = Z | G) = 1 a.s. In particular, (1/n) Σ_{i=1}^n f(Y, ξ_i) → E[f(Y, ξ_1) | G] a.s. (2.2) 2.4 Bayesian inference for row exchangeable arrays. The following four theorems come from [4]. The first is a de Finetti theorem for row exchangeability ([4, Theorem 3.2]). The final three ([4, Theorems 3.3, 3.4, 3.5]) provide needed ... | https://arxiv.org/abs/2505.00451v1 |
D(α) ∈ M_1(M_1). If B ∈ M_1, then we write D(α, B) for (D(α))(B). Given α as above, let κ = α(S) > 0 and ρ = κ^{−1}α, so that ρ ∈ M_1. We will typically write D(α) = D(κρ), and think of the law of a Dirichlet process as being determined by two parameters, a positive number κ ∈ (0, ∞) and a probability measure ρ ∈ M_1. The measure ρ is called the ... | https://arxiv.org/abs/2505.00451v1 |
η^n = (η_1, . . . , η_n) and x^n = (x_1, . . . , x_n) ∈ S^n. Note that for fixed i, we have P(η_i ∈ A) = E[P(η_i ∈ A | λ)] = E[λ(A)] = ρ(A). (3.4) Thus, ρ represents our prior distribution on the individual η_i's, in the case that we have not observed any of their values. The posterior distribution is given in the following theorem, which c... | https://arxiv.org/abs/2505.00451v1 |
). Hence, P(λ ∈ B, η^n ∈ A, α ∈ C) = E[D(α + Σ_{i=1}^n δ_{η_i}, B) 1_A(η^n) 1_C(α)], which implies P(λ ∈ B | η^n, α) = D(α + Σ_{i=1}^n δ_{η_i}, B). Now let (T, T) be a measurable space. Fix n ∈ N and let Y be a T-valued random variable such that Y and (λ, α) are conditionally independent given η^n. This holds, for example, if Y is a function of η^n and W, wher... | https://arxiv.org/abs/2505.00451v1 |
processes on S with mixing distribution L(α), where α/α(S) is itself a Dirichlet process (see [15, 16]). An IDP is a Dirichlet process on M_1 whose base measure is the law of a Dirichlet process. A hierarchical Dirichlet process, on the other hand, is a Dirichlet process on S whose base measure is a Dirichlet process i... | https://arxiv.org/abs/2505.00451v1 |
this assumption is not satisfied by the IDP. In particular, the work in [11] is not sufficient to justify its use in [12]. In Theorem 4.7, we provide a new proof that justifies this approach in general, including the case S = {0, 1} treated in [12] as well as the general case treated here. 4.1 Importance sampling. Importanc... | https://arxiv.org/abs/2505.00451v1 |
_{k=1} W_k | Y) ≈ Var((1/K_e) Σ_{k=1}^{K_e} f(Z_k) | Y). The right-hand side is K_e^{−1} Var(f(Z) | Y). In [10], it is shown that Var(Σ_{k=1}^K W_k f(Z^{*,k}) / Σ_{k=1}^K W_k | Y) ≈ (1/K) Var(f(Z) | Y) (1 + Var(W/h(Y) | Y)). We therefore define K_e = K / (1 + Var(W/h(Y) | Y)), and call this the effective sample size. By (4.4), we have Var(W/h(Y) | Y) = Var(W | Y)/h(Y)² = Var(W | Y)/E[W |... [an effective-sample-size sketch follows the table] | https://arxiv.org/abs/2505.00451v1 |
_1, . . . , Z^{*,k}_{m−1}). (4.5) In other words, the simulated vector Z^{*,k} = (Z^{*,k}_1, . . . , Z^{*,k}_M) can be constructed sequentially using L(Z_m | Y^m, Z^{m−1}), where in each step the missing data Z^{m−1} is imputed with the previously simulated values Z^{*,k}_1, . . . , Z^{*,k}_{m−1}. To prove that Theorem 4.1 applies in this situation, we m... | https://arxiv.org/abs/2505.00451v1 |
in Proposition 4.5, sequential imputation, as it is presented in Theorem 4.4, does not apply in this case. Namely, Assumption 4.3 is not satisfied. The vector Z = µ^m has no joint density with respect to any product measure. Hence, the proof of Theorem 4.4, which is a rigorous presentation of the proof in [11], does not just... | https://arxiv.org/abs/2505.00451v1 |
k = 1, we obtain L(Z | Y; dz) = (1/f_M(Z)) w(Y, Z) (γ*_1 ··· γ*_M)(Y, dz). Since m* = γ*_1 ··· γ*_M, this proves (4.7). 5 Sequential imputation for IDPs. In this section, we apply sequential imputation, in the form of Theorem 4.7, to an array of samples from an IDP. In Theorem 4.7, we take Z = µ^M and we let Y represent some observations we... | https://arxiv.org/abs/2505.00451v1 |
trivially satisfied whenever Y is discrete. From an applied perspective, this is no restriction at all. Any real-world measurement will have limits to its precision, meaning that only a finite number of measurement outcomes are possible. Theorem 5.2 below describes how to use sequential imputation to compute L(µ_M | Y) whe... | https://arxiv.org/abs/2505.00451v1 |
set of discontinuities of Φ, then L(Φ(µ_M) | Y = y) = lim_{K→∞} [Σ_{k=1}^K V_k δ(Φ(u^{*,k}_M))] / [Σ_{k=1}^K V_k]. (5.10) The proof of Theorem 5.2 will be given in Section 5.5. Corollary 5.3. With the assumptions of Theorem 5.2, we have L(µ_{M+1} | Y = y) = lim_{K→∞} (1/(κ+M)) (κ D(εϱ) + Σ_{m=1}^M [Σ_{k=1}^K V_k δ(u^{*,k}_m)] / [Σ_{k=1}^K V_k]). (5.11) Consequently, if Φ is a measurabl... | https://arxiv.org/abs/2505.00451v1 |
emphasize this dependence, we write u^{*,k}_m = u^{*,k}_m(y) and V_k = V_k(y). Define µ^{*,k}_m = u^{*,k}_m(Y). Note that u^{*,k}_m(y) is independent of Y, whereas µ^{*,k}_m is not. The random measure µ^{*,k}_m is playing the role of Z^{*,k}_m in Section 4.3. To show that we have constructed µ^{*,k}_m correctly, we must show that {µ^{*,k}_M}_{k=1}^∞ | Y ∼ m*(Y)^∞. This... | https://arxiv.org/abs/2505.00451v1 |
Proof of Theorem 5.2. We apply Theorem 4.7. Let γ_m and m* be as in Proposition 5.5. If e is counting measure on T and n_m = L(µ_m), then L(Y, µ_m) ≪ e^M × n_m, so that Assumption 4.6 holds. Let f_m be the density of (Y, µ_m) with respect to e^M × n_m, and recall the notational conventions of Section 4.4. Let w(y, ν) be given by (4.6). Let µ*,... | https://arxiv.org/abs/2505.00451v1 |
Remark 6.2, this special case is easily generalized to the case of an arbitrary S in which our observations are made with limited precision. The outline of this section is as follows. In Section 6.1, we describe how Theorem 5.2 and Corollary 5.3 simplify in the case that S is finite. After that, the remainder of the sect... | https://arxiv.org/abs/2505.00451v1 |
i < m, we have θ*_m = θ*_i with probability t_mi, and, with probability t_mm/(t_m1 + ··· + t_mm), the random vector θ*_m is independent of θ*^{m−1} and has the Dirichlet distribution Dir(εp + y_m). Finally, we define V, the total weight of the simulation. According to (5.8), the total weight should be ∏_{m=1}^M (1/(κ + m − 1)) Σ_{i=1}^m t_mi. (6.3) | https://arxiv.org/abs/2505.00451v1 |
pressed penny machine, like those found in museums or tourist attractions. For a fee, the machine presses a penny into a commemorative souvenir. Now imagine the machine is broken, so that it mangles all the pennies we feed it. Each pressed penny it creates is mangled in its own way. Each has its own probability of land... | https://arxiv.org/abs/2505.00451v1 |
mean, we can plot the distribution function of ν. See Figure 1(a) for a graph of x ↦ ν((0, x]). For a different visualization, we can plot an approximate density for ν. The measure ν has a discrete component, so we obtain an approximate density using Gaussian kernel density estimation, replacing each point mass δ_x by a Ga... | https://arxiv.org/abs/2505.00451v1 |
2(a). We next consider κ = 10, again using K = 10000, which generated an effective sample size of about 388 (compared to 300 in [12] for the same value of κ). This time, using (6.6) gives L(θ_{321,1} | Y = y) ≈ (1/330) (10 Beta(1, 1) + Σ_{m=1}^{320} [Σ_{k=1}^{10000} V_k δ(θ^{*,k}_{m,1})] / [Σ_{k=1}^{10000} V_k]). Note that in this second case, the simulated value... | https://arxiv.org/abs/2505.00451v1 |
... 3, 1 (16 reviews, average 2.94); product 28: 6, 5, 1, 3, 0 (15 reviews, average 2.07); product 29: 6, 6, 1, 2, 0 (15, 1.93); product 30: 0, 8, 2, 4, 0 (14, 2.71); product 31: 8, 5, 1, 0, 0 (14, 1.5); product 32: 5, 0, 8, 0, 1 (14, 2.43); product 33: 0, 0, 13, 0, 0 (13, 3); product 34: 5, 4, 1, 2, 0 (12, 2); product 35: 6, 2, 0, 3, 0 (11, 2); product 36: 4, 7, 0, 0, 0 (11, 1.64); product 37: 0, 1, 6, 4, 0 (11, 3.27); product 38: 5, 5, 0, 1, 0 (11, 1.73); product 39: 5, 6, 0, 0, 0 (11, ... [table columns: product #, counts of 1-star through 5-star reviews, # reviews, average rating] | https://arxiv.org/abs/2505.00451v1 |
effect of these 2 reviews on the expected long-term rating, we apply (6.5) with Φ((x_mℓ)) = A(x_50) to obtain L(A(θ_50) | Y = y) ≈ [Σ_{k=1}^{100000} V_k δ(A(θ^{*,k}_{50}))] / [Σ_{k=1}^{100000} V_k]. [Figure panels over the rating range 1.0–5.0: (a) m = 51, mean 2.54; (b) m = 50, mean 2.83; ...] | https://arxiv.org/abs/2505.00451v1 |
0 and let M have a Pareto distribution with minimum value c and tail index r. That is, P(M > m) = (m/c)^{−r} for m > c. If X | M ∼ Gamma(α, α/M), then X ∼ Gamer(r, c, α). According to Proposition 6.3, the parameter r is the tail index of the mean player scores in the population. However, it is also the tail index of the raw pla... [a sampling sketch follows the table] | https://arxiv.org/abs/2505.00451v1 |
scores they earn each time they play. The 10 friends all have their own usernames that they use when playing the game. The usernames are Asparagus Soda, Goat Radish, Potato Log, Pumpkins, Running Stardust, Sweet Rolls, The Matrix, The Pianist Spider, The Thing, and Vertigo Gal. We will consider three different scenario... | https://arxiv.org/abs/2505.00451v1 |
unusual. The first is Goat Radish. They played only one game and scored a 38, which is a relatively low score compared to the rest of the group. And yet the IDP model has given them an expected long-term average score of 71. Not only is this counterintuitive, it is also inconsistent with how the model treated The Piani... | https://arxiv.org/abs/2505.00451v1 |
Table 7. We use the same L, κ, ε, p, and M as in the first scenario. As before, the number N_m is the number of scores in the m-th row of Table 7, and y_mn is the n-th score in the m-th row. Note, however, that the username in the m-th row has changed in the current scenario. We again used a log scale factor of 42 and g... | https://arxiv.org/abs/2505.00451v1 |
Soda a much higher expected long-term average score than Potato Log. This confirms our intuition that Asparagus Soda is the better player. But because Asparagus Soda played only one game, the model should have a lot more uncertainty surrounding [Table: Username and Scores; Vertigo Gal: 45, 100, 118, 121, 125, 130, 133, 145, 161, ... | https://arxiv.org/abs/2505.00451v1 |
conditional mean of C(θ_9, θ_2) is about 79%, the conditional mode is much higher. Asparagus Soda vs. Pumpkins. We now turn our attention to comparing Asparagus Soda, who played only once, to Pumpkins, who played four times. (See Figure 5.) Looking at Table 7, we see that Asparagus Soda corresponds to m = 9 and Pumpkins c... | https://arxiv.org/abs/2505.00451v1 |
On the Distribution of the Sample Covariance from a Matrix Normal Population. Haoming Wang, School of Mathematics, Sun Yat-sen University, Xingang West Road, 135, Guangzhou, 510275, Guangdong, China. Abstract. This paper discusses the joint distribution of sample variances and covariances, expressed in quadratic forms i... | https://arxiv.org/abs/2505.00470v1 |
independent and identically distributed, assumed in most cases for simplicity to be normal. Dawid (1977) discussed the matrix normal distribution, denoted as N_{n,p}(M; A, B): F_{n,p}(X) = (2π)^{−np/2} |A|^{−p/2} |B|^{−n/2} etr(−(1/2) B^{−1}X′A^{−1}X) when M = 0; the same expression multiplied by etr(−(1/2) B^{−1}M′A^{−1}M) etr(B^{−1}M′A^{−1}X) when M ≠ 0; (1.2) where A and B are n×n and p×p real sy... | https://arxiv.org/abs/2505.00470v1 |
t′_1 A_12 t_2, . . . , t′_1 A_1p t_p, t′_2 t_2, . . . , t′_2 A_2p t_p, . . . , t′_p t_p. It thus requires us to develop mathematical techniques for handling the (1/2)p(p+1) quadratic forms with possibly different coefficients. In practice, it also occurs quite often that the number of parameters to be estimated when we have no prere... | https://arxiv.org/abs/2505.00470v1 |
will be given. 2. Notations and Conventions. 2.1. Gamma, Beta and Hypergeometric Functions. The multivariate Gamma function, denoted by Γ_n(a), is defined to be Γ_n(a) = ∫_{A>0} etr(−A) |A|^{a−(n+1)/2} dA, (2.1) where ℜ(a) > (1/2)(n−1) and A > 0 means the integral is taken over the space of real symmetric positive definite n×n matrices.... | https://arxiv.org/abs/2505.00470v1 |
orthogonal diagonalisable) if it is T1 and there exist further n×1 vectors {a_i} such that Z can be developed into the sum Z = Σ_{i=1}^n Σ_{j=1}^p γ_ij a_i b′_j, where γ_ij = a′_i Z b_j and the random variables γ_ij are pairwise uncorrelated with each other: E γ_ij γ_{i′j′} = c_ij δ_{ii′} δ_{jj′}; T3 (totally diagonalisable) if it is T2 and there exist {τ²_i} an... | https://arxiv.org/abs/2505.00470v1 |
(1.1). This proves (2.9). The Gamma integral involving the hypergeometric function is also used. Lemma 3 (Constantine (1963)). Let Z be a p×p complex symmetric matrix whose real part is positive definite and Y an arbitrary p×p symmetric matrix. Then for any a with ℜ(a) > (1/2)(p−1), ∫_{S>0} etr(−SZ) |S|^{a−(p+1)/2} ₀F₀(SY) dS = Γ_p(a) |Z|^{−a} ... | https://arxiv.org/abs/2505.00470v1 |
. . . = x_n and from the plane x_1 + x_2 + . . . + x_n = n x̄ remain constant. It must therefore lie on the surface of an (n−1)-dimensional sphere which is everywhere at right angles to the radius vector x_1 = x_2 = ··· = x_n. The element of volume is then proportional to (√n s_1)^{n−2} ds_1 dx̄. For the factor of proportionality, we require the entire ... | https://arxiv.org/abs/2505.00470v1 |
^{1/2}/(2π)^n · exp(−(1/2) x′A_11 x − (1/2) y′A_22 y), (3.5) where Σ_{11·2} = A_11 ⊗ E_11 + A_22 ⊗ E_22 and the 2×2 matrix E_ij has the (i, j)-element one and zero elsewise; the n×n matrix A_11 (or A_22) being the inverse of the covariance of the x-variable (or y). Thus, as (3.5) can be rewritten as p(x, y) = (|Σ_{11·2}|^{1/2}/(2π)^n) exp{−q^{−1} x′x + q^{−1} x′(I − (q/2)A_11)x − q^{−... | https://arxiv.org/abs/2505.00470v1 |
contoured. These terminologies are slightly modified from those in Fang and Zhang (1990). The 2×2 real symmetric positive definite matrix S = (s_ij) should be similar to (b), i.e. V_{n,2}(S) = c_{n,2} |S|^{(n−p−1)/2} etr(−q^{−1}B^{−1}S) ₀F₀(I − (1/2)q A^{−1}, q^{−1}B^{−1}S), c_{n,2} = 1/(2^n Γ_2((n−1)/2) |A| |B|^{n/2}), (q > 0). (3.11) This extends the result (3.4) we o... | https://arxiv.org/abs/2505.00470v1 |
by Hsu (1939) to prove the (1/2)p(p+1) variables have the probability density function V_{n,p}(s_ij) = c_{n,p} det(s_ij)^{(n−p−1)/2} exp(−q^{−1} Σ_{i,j=1}^p b_ij s_ij) × ₀F₀(δ_rs − (1/2)q a_rs, q^{−1} Σ_{i,j=1}^p b_ij s_ij), c_{n,p} = det(a_rs)^{p/2} det(b_ij)^{n/2} / (2^{np/2} Γ_p((n−1)/2)), s_ij = Σ_{r=1}^n x_ri x_rj, (q > 0) (3.16) for all p×p real symmetric positive definite matrices (s_ij); elsew... | https://arxiv.org/abs/2505.00470v1 |
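For the random-measure prior row above (arXiv:2505.00151), here is a minimal sketch of sampling u = Σ_{k=1}^K Q_k δ_{Y_k} with K ∼ Poiss(γ), Q_k ∼ N(µ, σ²), Y_k ∼ Uniform(Ω). Taking Ω = [0, 1] and the parameter values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior_measure(gamma=5.0, mu=0.0, sigma=1.0):
    """One draw of u = sum_k Q_k * delta_{Y_k}: returns atom locations and weights."""
    K = rng.poisson(gamma)                 # random number of atoms, K ~ Poiss(gamma)
    Y = rng.uniform(0.0, 1.0, size=K)      # atom locations Y_k ~ Uniform([0, 1])
    Q = rng.normal(mu, sigma, size=K)      # signed atom weights Q_k ~ N(mu, sigma^2)
    return Y, Q

def evaluate_on_interval(Y, Q, a, b):
    """u(B) for B = [a, b]: sum the weights of the atoms that fall in B."""
    return Q[(Y >= a) & (Y <= b)].sum()

Y, Q = sample_prior_measure()
print("u([0, 0.5]) =", evaluate_on_interval(Y, Q, 0.0, 0.5))
```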
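For the BDM definition row above (arXiv:2505.00185, equation (3)), a minimal numerical sketch, assuming a stand-in Gamma posterior in place of the paper's examples:

```python
from scipy import stats

posterior = stats.gamma(a=3.0, scale=0.5)   # hypothetical posterior pi(theta | y)
theta0 = 2.0                                # null value, H0: theta = theta0

# I_E = (theta0, +inf) if the posterior median < theta0, else (-inf, theta0);
# delta_H = 1 - 2 * P(theta in I_E | y).
m = posterior.median()
p_IE = posterior.sf(theta0) if m < theta0 else posterior.cdf(theta0)
delta_H = 1.0 - 2.0 * p_IE
print("BDM:", delta_H)
```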
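For the first-order approximation (20) above (arXiv:2505.00185), a one-line sketch; the value of the log-likelihood ratio W(θ_0) and the dimension d are illustrative placeholders:

```python
from scipy import stats

def bdm_first_order(W0, d):
    """delta_H ~ 1 - P(chi2_d >= W(theta0)), per equation (20)."""
    return 1.0 - stats.chi2.sf(W0, df=d)   # sf gives the upper tail probability

print(bdm_first_order(W0=3.84, d=1))       # ~0.95 at the 5% Wald cutoff
```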
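For the LiNGAM proof row above (arXiv:2505.00215), a minimal sketch of estimating the moments s_vw = E[X_v X_w] and t_vwz = E[X_v X_w X_z] and checking the parent relation on a toy two-node graph 1 → 2. The graph, edge weight, and error law are illustrative assumptions; the errors must be non-Gaussian with nonzero third moment, so centered exponentials are used.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
eps = rng.exponential(1.0, size=(n, 2)) - 1.0   # mean 0, E[eps^3] = 2 != 0

X1 = eps[:, 0]
X2 = 0.8 * X1 + eps[:, 1]                       # toy LiNGAM: 1 -> 2, lambda = 0.8
X = np.column_stack([X1, X2])

S = X.T @ X / n                                 # s_vw = E[X_v X_w]
T = np.einsum('ni,nj,nk->ijk', X, X, X) / n     # t_vwz = E[X_v X_w X_z]

# Both ratios recover lambda = 0.8, matching s_vw = lambda * s_uw and
# t_vwz = lambda * t_uwz for the non-descendant w = 1 (and z = 1):
print(S[1, 0] / S[0, 0], T[1, 0, 0] / T[0, 0, 0])
```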
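For the Algorithm 1 row above (arXiv:2505.00292), a minimal sketch of the randomized conformal p-values of line 13, applied to raw observations as scores (an assumed identity score function; Algorithm 1 itself runs this only over the right half r > t):

```python
import numpy as np

rng = np.random.default_rng(2)

def conformal_pvalues(kappa):
    """p_r = (#{j >= r: k_j > k_r} + theta_r * #{j >= r: k_j = k_r}) / (n - r + 1)."""
    n = len(kappa)
    p = np.empty(n)
    for r in range(n):
        theta = rng.uniform()
        tail = kappa[r:]                   # scores with index j >= r
        p[r] = (np.sum(tail > kappa[r]) + theta * np.sum(tail == kappa[r])) / len(tail)
    return p

x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
print(conformal_pvalues(x)[:5])
```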
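For the p-value combination row above (arXiv:2505.00292), a sketch of the Fisher-style product combination it describes: under independence, −2(log p_left + log p_right) has a χ²_4 distribution, so the combined p-value is the χ²_4 tail probability. The two input p-values are illustrative.

```python
import numpy as np
from scipy import stats

def fisher_combine(p_left, p_right):
    """Combine two independent uniform p-values via the product (Fisher) rule."""
    stat = -2.0 * (np.log(p_left) + np.log(p_right))
    return stats.chi2.sf(stat, df=4)       # P(chi2_4 >= stat)

print(fisher_combine(0.03, 0.08))
```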
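For the Dirichlet-process row above (arXiv:2505.00451), a sketch of one draw of λ via the standard stick-breaking construction (not spelled out in the excerpt, which only fixes the parameters κ and ρ); the truncation level and the N(0, 1) base measure are illustrative. Smaller κ pushes more weight onto fewer atoms, matching the excerpt's description of the concentration parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

def stick_breaking_draw(kappa, base_sampler, truncation=500):
    """Truncated stick-breaking: w_k = beta_k * prod_{j<k}(1 - beta_j), beta_k ~ Beta(1, kappa)."""
    betas = rng.beta(1.0, kappa, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * remaining            # sums to just under 1 under truncation
    atoms = base_sampler(truncation)       # atom locations drawn from rho
    return atoms, weights

atoms, w = stick_breaking_draw(2.0, lambda k: rng.normal(size=k))
print("weight carried by the 5 largest atoms:", np.sort(w)[-5:].sum())
```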
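For the effective-sample-size row above (arXiv:2505.00451), a sketch of K_e = K/(1 + Var(W | Y)/E[W | Y]²) with the conditional moments replaced by sample moments of the importance weights; the log-normal weights stand in for the paper's sequential-imputation weights:

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # stand-in importance weights W_k

K = len(W)
K_e = K / (1.0 + W.var() / W.mean() ** 2)             # K / (1 + Var(W)/E[W]^2)
print("effective sample size:", round(K_e))
```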
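For the Gamer-distribution row above (arXiv:2505.00451), a sampling sketch: M is classical Pareto with minimum c and tail index r, and X | M ∼ Gamma(α, rate α/M), so E[X | M] = M. NumPy's `pareto` draws the Lomax form, hence the shift by 1; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_gamer(r=2.5, c=10.0, alpha=3.0, size=10_000):
    M = c * (1.0 + rng.pareto(r, size=size))   # P(M > m) = (m/c)^{-r} for m > c
    X = rng.gamma(alpha, M / alpha)            # shape alpha, scale M/alpha => E[X|M] = M
    return M, X

M, X = sample_gamer()
print("mean score:", X.mean())                 # heavy-tailed with tail index r
```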