measures $M(\Omega,\mathbb R)\equiv M(\Omega)$, see for instance [4], while $H=\mathbb C^n$ gives $M(\Omega,\mathbb C^n)$, corresponding to the space of complex vector-valued measures in [27]. Next, we can see that every $u\in M(\Omega,H)$ is absolutely continuous with respect to $|u|$, meaning that if $B\in\mathcal B(\Omega)$ and $|u|(B)=0$, then $u(B)=0_H$. Hence, by the Radon-Nikodym theorem for vector measures, see [22, Corollary 12.4.2], there exists a function $u':\Omega\to H$ such that $\|u'\|_H\in L^\infty(\Omega,\mathrm d|u|)$, with $\|u'(x)\|_H=1$ for $|u|$-almost every $x\in\Omega$, and $u$ can be decomposed as
$$u(B)=\int_B \mathrm du=\int_B u'\,\mathrm d|u|,\quad\text{for all } B\in\mathcal B(\Omega).$$
Equivalently, $M(\Omega,H)$ can be characterized as the dual of $C(\Omega,H)$, where $C(\Omega,H)$ is the space of bounded continuous functions on $\Omega$ taking values in $H$. By Singer's representation theorem (see, e.g., [17]), the duality pairing between $M(\Omega,H)$ and $C(\Omega,H)$ is defined by
$$\langle u,f\rangle_{M(\Omega,H),C(\Omega,H)}=\int_\Omega f\,\mathrm du=\int_\Omega (f(x),u'(x))_H\,\mathrm d|u|(x).$$
By a slight abuse of notation, we will simply write $\langle u,f\rangle$ to denote the dual pairing between $u\in M(\Omega,H)$ and $f\in C(\Omega,H)$, unless otherwise stated. Hence, the norm on $M(\Omega,H)$ is also characterized by the dual norm
$$\|u\|_M=\sup\Big\{\langle u,f\rangle : f\in C(\Omega,H),\ \sup_{x\in\Omega}\|f(x)\|_H\le 1\Big\}=\sup\Big\{\int_\Omega (f(x),u'(x))_H\,\mathrm d|u|(x) : f\in C(\Omega,H),\ \sup_{x\in\Omega}\|f(x)\|_H\le 1\Big\}.$$
Since $C(\Omega,H)$ is nonreflexive, the structure of $M(\Omega,H)$ is rather complicated; in particular, $M(\Omega,H)$ itself is both nonreflexive and nonseparable. Nevertheless, when equipped with the weak* topology $w^*$ (i.e., the coarsest topology such that every map $u\mapsto\langle u,f\rangle$ is continuous), the space $M(\Omega,H)$ becomes a locally convex Hausdorff space whose dual is $C(\Omega,H)$.
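To make the pairing and the dual norm concrete, the following minimal Python sketch evaluates $\langle u,f\rangle$ and $\|u\|_M$ for a discrete vector measure $u=\sum_k q_k\delta_{y_k}$, the form of parameter considered throughout; the atom locations, weights, and test function below are purely illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative discrete vector measure u = sum_k q_k delta_{y_k} on Omega = [0, 1]
# with H = R^2; the atoms are made-up demonstration values.
ys = np.array([0.2, 0.5, 0.9])                        # atom locations y_k in Omega
qs = np.array([[1.0, 0.0], [0.0, -2.0], [0.5, 0.5]])  # vector weights q_k in H

def pairing(f, ys, qs):
    """Dual pairing <u, f> = sum_k (f(y_k), q_k)_H for a discrete measure."""
    return sum(np.dot(f(y), q) for y, q in zip(ys, qs))

def total_variation(qs):
    """Dual norm ||u||_M = sum_k ||q_k||_H: the polar decomposition gives
    u'(y_k) = q_k / ||q_k||_H, and |u| puts mass ||q_k||_H at y_k."""
    return sum(np.linalg.norm(q) for q in qs)

# Any f in C(Omega, H) with sup_x ||f(x)||_H <= 1 satisfies <u, f> <= ||u||_M;
# f(y) = (cos y, sin y) has ||f(y)||_H = 1 for every y.
f = lambda y: np.array([np.cos(y), np.sin(y)])
```

For this toy measure, $\|u\|_M = 1 + 2 + \sqrt{1/2}$, and the pairing with any unit-ball $f$ stays below that bound, mirroring the supremum characterization above.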
This property characterizes weak* convergence on $M(\Omega,H)$: a sequence of measures $\{u_k\}_{k\in\mathbb N}$ converges to a limit $u\in M(\Omega,H)$ if $\langle u_k,f\rangle\to\langle u,f\rangle$ as $k\to\infty$, for all $f\in C(\Omega,H)$. Equipped with the weak* topology $w^*$, it can be seen that $(M(\Omega,H),w^*)$ is a complete locally convex Hausdorff topological vector space. In addition, $M(\Omega,H)$ is a Souslin space [29], i.e., the image of a separable, completely metrizable space under a continuous map. For more details on the topological properties of the space of vector measures, we refer to [22, Section 12.3] and [26].

2.2. Measurability on $M(\Omega,H)$. In the following, we recall several measure-theoretic properties of the space $M(\Omega,H)$. Many of these results are classical and can be inferred from properties in standard references such as [15, 22, 2]. However, since precise references for some statements are not readily available, we also include their proofs for completeness.

In order to introduce prior measures on $M(\Omega,H)$, an appropriate Borel $\sigma$-algebra on $M(\Omega,H)$ should be introduced. The following $\sigma$-algebras on $M(\Omega,H)$ are natural to consider:
(1) The strong Borel $\sigma$-algebra, denoted by $\mathcal B$, generated by the strong topology (the norm topology) on $M(\Omega,H)$.
(2) The weak* Borel $\sigma$-algebra, denoted by $\mathcal B_{w^*}$, generated by the
https://arxiv.org/abs/2505.00151v1
weak* topology on $M(\Omega,H)$.

In separable Banach spaces, the two $\sigma$-algebras in fact coincide; see [32, Section A.2.2]. However, since $M(\Omega,H)$ is not separable, this property does not hold in $M(\Omega,H)$; see Remark 2.8. In the following, we show that the $\sigma$-algebra $\mathcal B_{w^*}$ is appropriate for our analysis. We begin by showing that $\mathcal B_{w^*}$ can also be characterized by linear functionals on $M(\Omega,H)$.

Proposition 2.1. The $\sigma$-algebra $\mathcal B_{w^*}$ coincides with the $\sigma$-algebra generated by $C(\Omega,H)$, viewed as a collection of linear functionals on $M(\Omega,H)$ via $u\mapsto\langle u,f\rangle$, $f\in C(\Omega,H)$.

Proof. Denote by $\tilde\sigma$ the $\sigma$-algebra generated by the given set of linear functionals on $M(\Omega,H)$. For every $f\in C(\Omega,H)$, the linear functional $u\mapsto\langle u,f\rangle$ is continuous in $w^*$, and therefore $\mathcal B_{w^*}$-measurable. Consequently, $\tilde\sigma\subset\mathcal B_{w^*}$. Conversely, we know that for every $f\in C(\Omega,H)$ and $r>0$, the set
$$\{u\in M(\Omega,H):\langle u,f\rangle<r\}$$
is $\tilde\sigma$-measurable. Hence, let $u_0\in M(\Omega,H)$ be given. For a finite set $\{f_1,f_2,\dots,f_k\}\subset C(\Omega,H)$ and $r>0$, the set
$$V=V(u_0;f_1,f_2,\dots,f_k;r):=\{u\in M(\Omega,H):|\langle u-u_0,f_i\rangle|<r \text{ for all } i=1,\dots,k\},$$
defining a neighborhood of $u_0$ for the weak* topology, is also $\tilde\sigma$-measurable. As sets of this type form a basis of the weak* topology, this implies $\mathcal B_{w^*}\subset\tilde\sigma$. In conclusion, we have $\mathcal B_{w^*}=\tilde\sigma$ and the proof is complete. □

With the given $\sigma$-algebra on $M(\Omega,H)$, we are ready to introduce several measurable maps from and to $M(\Omega,H)$ which will be useful in the sequel. In the following, we denote by $(W,\mathcal A)$ a measurable space. We first introduce measurability properties of maps from $W$ to $M(\Omega,H)$. For details, we refer to [15, Chapter II], [8] and the references therein.

Definition 2.2. Consider a map $U:W\to M(\Omega,H)$.
(1) $U$ is weakly* measurable if for every $f\in C(\Omega,H)$, the map $w\mapsto\langle U(w),f\rangle$, $w\in W$, is measurable from $W$ to $\mathbb R$ with its Borel $\sigma$-algebra.
(2) $U$ is $\mathcal B_{w^*}$-measurable if it is a measurable map into $(M(\Omega,H),\mathcal B_{w^*})$.

In fact, the two notions of measurability are equivalent, as follows directly from the definitions of the two coinciding $\sigma$-algebras on $M(\Omega,H)$.

Proposition 2.3. A map $U:W\to M(\Omega,H)$ is weakly* measurable if and only if it is $\mathcal B_{w^*}$-measurable.

Proof. Assume that $U$ is $\mathcal B_{w^*}$-measurable. For every $f\in C(\Omega,H)$, the map $u\mapsto\langle u,f\rangle$ is measurable. Hence, as a composition of measurable maps, the map $w\mapsto\langle U(w),f\rangle$ is measurable, so $U$ is weakly* measurable.

Conversely, assume that $U$ is weakly* measurable. Then, for every $f\in C(\Omega,H)$ and $r>0$, the set
$$W(f;r):=\{w\in W:|\langle U(w),f\rangle|<r\}$$
is measurable. Hence, let $u_0\in M(\Omega,H)$. For every $f_1,\dots,f_k\in C(\Omega,H)$ and $r>0$, the set
$$W(u_0;f_1,\dots,f_k;r):=\{w\in W:|\langle U(w)-u_0,f_i\rangle|<r \text{ for all } i=1,\dots,k\}$$
is measurable. This set is in fact
$U^{-1}(V(u_0;f_1,\dots,f_k;r))$, and this implies the measurability of $U$ as a map from $W$ to $M(\Omega,H)$. The proof is complete. □

Hence, in the following, by a measurable map $U:W\to M(\Omega,H)$ or $U:M(\Omega,H)\to W$, we mean that it is measurable with respect to $\mathcal B_{w^*}$ (equivalently, weakly* measurable).

Corollary 2.4. Let $Q:W\to H$ and $Y:W\to\Omega$ be measurable maps. Then the map $U:W\to M(\Omega,H)$ given by $U(w)=Q(w)\delta_{Y(w)}$ is measurable.

Proof. For every $f\in C(\Omega,H)$, we consider the map $\varphi_f:W\to\mathbb R$ given by
$$\varphi_f(w)=\langle U(w),f\rangle=(Q(w),f(Y(w)))_H,\quad\text{for all } w\in W.$$
By the measurability of $Q$ and $Y$, and the continuity of the inner product $(\cdot,\cdot)_H$, it follows that $\varphi_f$ is measurable for every $f\in C(\Omega,H)$. Hence, by Proposition 2.3, the map $U$ is measurable. □

We next study some measurable maps defined on $M(\Omega,H)$.

Proposition 2.5. Let $K$ be a separable Hilbert space. Then a map $G:M(\Omega,H)\to K$ is measurable if and only if the map $u\mapsto(G(u),g)_K$ is measurable for every $g\in K$.

Proof. We proceed as in the proof of Proposition 2.1. If $G$ is measurable, then the map $u\mapsto(G(u),g)_K$ is measurable, being the composition of a measurable map with a continuous map. Conversely, assume that the map $u\mapsto(G(u),g)_K$ is measurable for every $g\in K$. Then the set $G^{-1}(V(g_0,r))$ is measurable for every $g_0\in K$ and $r>0$, where
$$V(g_0,r):=\{g\in K:(g,g_0)_K<r\}.$$
Since sets of this type generate the Borel $\sigma$-algebra on $K$, we conclude that $G$ is measurable. The proof is complete. □

Corollary 2.6. Let $K$ be a Hilbert space and $G:M(\Omega,H)\to K$ be a weak*-to-weak continuous map. Then $G$ is measurable.

This follows from the fact that the map $u\mapsto(G(u),g)_K$ is weak* continuous for every $g\in K$ and therefore measurable.

Proposition 2.7. The norm map $\|\cdot\|:M(\Omega,H)\to\mathbb R$, $u\mapsto\|u\|_{M(\Omega,H)}$, is measurable.

Proof. First, notice that the space $C(\Omega,H)$ is separable since $\Omega$ is compact. Denote by $S$ a countable subset of $C(\Omega,H)$ such that $\sup_{x\in\Omega}\|f(x)\|_H\le 1$ for all $f\in S$ and $S$ is dense in the unit ball of $C(\Omega,H)$.
Since for every $f\in S$ the map $u\mapsto\langle u,f\rangle$ is measurable by definition, the map $u\mapsto\sup_{f\in S}\langle u,f\rangle$ is also measurable. By the definition of the dual norm, one has
$$\|u\|_{M(\Omega,H)}=\sup_{\|f\|\le 1}\langle u,f\rangle=\sup_{f\in S}\langle u,f\rangle.$$
Since the supremum of a countable family of measurable functions is measurable, we conclude that the map $u\mapsto\|u\|_{M(\Omega,H)}$ is measurable. The proof is complete. □

Remark 2.8. While the norm map is measurable, we remark that there exists a set that is measurable in $\mathcal B$ but not in $\mathcal B_{w^*}$. Indeed, we simply consider the space of real-valued Radon measures $M(\Omega)$ and let $E$ be a non-Borel subset of $\Omega$. We consider the set
$$M_E:=\{\delta_x:x\in E\}.$$
Since for every $\delta_{x_1},\delta_{x_2}\in M_E$ one has
$$\|\delta_{x_1}-\delta_{x_2}\|_M=2,\quad\forall x_1\neq x_2,$$
the set contains all of its limit points and is therefore closed in the strong topology. In particular, it is measurable in $\mathcal B$. On the other hand, the map $\varphi:\Omega\to M(\Omega)$, $\varphi(x)=\delta_x$, is weakly* measurable by Proposition 2.3. Hence, the set $M_E$ is not weakly* measurable; otherwise $\varphi^{-1}(M_E)=E$ would be measurable, which is a contradiction.
Hence, we conclude that the two $\sigma$-algebras are not equivalent on $M(\Omega)$.

For simplicity, and when no confusion arises, we will also write $M$ and $C$ to denote $M(\Omega,H)$ and $C(\Omega,H)$, respectively.

3. Prior distribution on the space of measures

Following the discussion in the previous section, we are now able to define prior distributions, or more precisely prior measures, on the space $M(\Omega,H)$, which represent our initial beliefs about the model parameters before observing any data. In the following, we explore several examples of prior measures, illustrating their properties and the motivations behind their choices in different modeling scenarios.

3.1. Random measures. Central objects in defining prior measures on the space of measures are so-called random measures. More precisely, consider a probability space $(\Theta,\mathcal F,\mathbb P)$. A random measure on $M(\Omega,H)$ is a (weakly*) measurable map $U:\Theta\to M(\Omega,H)$. This induces a Borel probability measure $\mu_{\mathrm{pr}}$ on $M(\Omega,H)$ given by
$$\mu_{\mathrm{pr}}(E):=\mathbb P(U(\omega)\in E),\quad E\in\mathcal B_{w^*}. \quad (3.1)$$
In addition, since $M(\Omega,H)$ is a Souslin space [29], every Borel measure on it is Radon [9, Theorem 8.6.13], meaning that every finite Borel measure on it is inner regular. Furthermore, one has the following characterization of Borel probability measures on $M(\Omega,H)$:

Proposition 3.1 ([2, Theorem 7.4.3]). Every Borel measure $\mu$ on $M(\Omega,H)$ is Radon and is concentrated on a countable union of metrizable compact sets. In addition, for every Borel set $B$ and every $\varepsilon>0$, there exists a metrizable compact set $K_\varepsilon\subset B$ such that $|\mu|(B\setminus K_\varepsilon)<\varepsilon$.

Nevertheless, sampling a general random measure is nontrivial due to the nonseparability of the space. This presents a challenging task in practical computation. To address this issue, it is necessary to consider a subset of random measures that not only ensures the well-posedness of the Bayesian inverse problem but also enables efficient sampling and numerical implementation.
This motivates the development of structured priors or parametrizations that restrict the space of random measures to a computationally tractable class, while still capturing the essential features of the underlying inverse problem. As we have seen, since we are interested in parameters of the form (1.2), we consider the so-called class of point processes, defined as follows:

Definition 3.2. Let $K$ be a random variable on $\mathbb N$, $\{Y_k\}_{k\in\mathbb N}$ a sequence of random variables $Y_k:\Theta\to\Omega$, and $\{Q_k\}_{k\in\mathbb N}$ a sequence of i.i.d. $H$-valued random variables. We consider the point process of the form
$$U\sim\sum_{k=1}^{K}\gamma_k Q_k\delta_{Y_k}, \quad (3.2)$$
where $\{\gamma_k\}_{k\in\mathbb N}$ is a fixed sequence of positive coefficients that decay sufficiently fast.

Here, certain conditions on $\{\gamma_k\}_{k\in\mathbb N}$ and $\{Q_k\}_{k\in\mathbb N}$ are needed to ensure that the measure $\mu_{\mathrm{pr}}$, defined as the distribution of $U$ given in (3.1), is a well-defined measure on $M(\Omega,H)$. We remark that here we do not assume that $K$ is independent of $\{(Y_k,Q_k)\}_{k\in\mathbb N}$. When no confusion arises, we also write $u$ to denote $U$. The expression in (3.2) is well-defined as a random measure on $M(\Omega,H)$ according to the following result.

Proposition 3.3. Let $u$ be given by (3.2).
(1) If $K<\infty$ almost surely, then
(3.2) is a well-defined random measure on $M(\Omega,H)$ for every sequence $\{\gamma_k\}_{k\in\mathbb N}$.
(2) If $K=\infty$ almost surely, then (3.2) is a well-defined random measure on $M(\Omega,H)$ if $\{|\gamma_k|^2\}_{k\in\mathbb N}\in\ell^p$ and $\{\operatorname{var}\|Q_k\|_H\}_{k\in\mathbb N}\in\ell^q$ with $1/p+1/q=1$.

Proof. We adapt the proof in [18]. First, assume that $K<\infty$ almost surely. The map
$$\omega\mapsto u(\omega)=\sum_{k=1}^{K(\omega)}\gamma_k Q_k(\omega)\delta_{Y_k(\omega)}$$
is measurable, since for every $n\in\mathbb N$ and $E\in\mathcal B_{w^*}$, the set
$$W_n:=\Big\{\omega\in\Theta: K(\omega)=n \text{ and } u(\omega)=\sum_{k=1}^{n}\gamma_k Q_k(\omega)\delta_{Y_k(\omega)}\in E\Big\}$$
is measurable. Hence, the set $W:=\cup_{n=1}^\infty W_n=u^{-1}(E)$ is measurable. In addition, there holds
$$\|u\|_{M(\Omega,H)}\le\sum_{k=1}^{K(\omega)}|\gamma_k|\,\|Q_k(\omega)\|_H.$$
Since $\mathbb P(\omega:K(\omega)<\infty)=1$, we conclude that $\|u\|_{M(\Omega,H)}<\infty$ almost surely.

Next, we assume that $K=\infty$. In this case, denote
$$u_n:=\sum_{k=1}^{n}\gamma_k Q_k\delta_{Y_k}.$$
It can be seen that $u_n$ is an $H$-valued random measure. In addition, one has
$$\|u_n\|_{M(\Omega,H)}\le\sum_{k=1}^{n}|\gamma_k|\,\|Q_k\|_H=:v_n.$$
We prove that the sequence $\{v_n\}_{n\in\mathbb N}$ is bounded almost surely. Indeed, by Hölder's inequality, we have
$$\sum_{k=1}^{\infty}\operatorname{var}\|\gamma_k Q_k\|_H=\sum_{k=1}^{\infty}|\gamma_k|^2\operatorname{var}\|Q_k\|_H\le\big\|\{|\gamma_k|^2\}_{k\in\mathbb N}\big\|_{\ell^p}\,\big\|\{\operatorname{var}\|Q_k\|_H\}_{k\in\mathbb N}\big\|_{\ell^q}<\infty.$$
Hence, by Kolmogorov's theorem, we have $\sum_{k=1}^{\infty}|\gamma_k|\,\|Q_k\|_H<\infty$ almost surely, so $\{v_n\}_{n\in\mathbb N}$ is bounded almost surely. Finally, since $u_n\rightharpoonup^* u$ almost surely, we have
$$\|u\|_{M(\Omega,H)}\le\liminf_{n\to\infty}\|u_n\|_{M(\Omega,H)}\le\liminf_{n\to\infty}v_n<\infty.$$
The proof is complete. □

We provide some examples of random measures satisfying the assumptions in Proposition 3.5.

Example 3.4 (Poisson point process). Let $\mathcal Q$ be a probability measure on $H$, and $G$ a density function on $\Omega$. We assume that $K$ follows the Poisson distribution $\mathrm{Pois}(\gamma)$, i.e.,
$$\mathbb P(K(\omega)=n)=\frac{\gamma^n\exp(-\gamma)}{n!},\quad n=0,1,\dots$$
Hence $u=\sum_{k=1}^{K}Q_k\delta_{Y_k}$ defines a random variable taking values in $M(\Omega,H)$.
This random measure is closely related to Poisson point processes on $\Omega$; see, for instance, [11]. Typically, the intensity $\gamma$ is specified in terms of a measure $\lambda$ defined on $\Omega$ or on $\Omega\times H$, known as the rate measure. If $\lambda$ is given in terms of $\mathcal Q$ and $Y$, that is, $\mathrm d\lambda=\nu(\mathrm dQ)\cdot\varphi(\mathrm dG)$, then the random variable $K$ is not independent of $Y_k$ or $Q_k$.

Next, we show that $u$ has a finite first moment under certain conditions.

Proposition 3.5. Let $u$ be defined by (3.2) with $\gamma_k=1$ for all $k\in\mathbb N$. Assume that $\mathbb E[K]<\infty$ and $\sup_{k\in\mathbb N}\mathbb E[\|Q_k\|_H]<\infty$. If $K$ and each $Q_k$, $k\in\mathbb N$, are independent, then $\mathbb E_{\mu_{\mathrm{pr}}}[\|u\|_M]<\infty$. In addition, if $\{Q_k\}_{k\in\mathbb N}\sim\mathcal Q$ is a sequence of independent and identically distributed (i.i.d.) $H$-valued random variables, then
$$\mathbb E_{\mu_{\mathrm{pr}}}[\|u\|_M]=\mathbb E[K]\cdot\mathbb E[\|Q_1\|_H]. \quad (3.3)$$

Proof. Denote $M:=\sup_{k\in\mathbb N}\mathbb E[\|Q_k\|_H]<\infty$. Since
$$\|u(\omega)\|_M=\sum_{k=1}^{K(\omega)}\|Q_k(\omega)\|_H,\quad\omega\in\Theta,$$
one has
$$\mathbb E[\|u\|_M]=\mathbb E\Big[\mathbb E\Big[\sum_{k=1}^{K}\|Q_k\|_H\,\Big|\,K\Big]\Big]\le\mathbb E[KM]=\mathbb E[K]\,M<\infty. \quad (3.4)$$
Finally, (3.3) follows from (3.4) by using the i.i.d. property of the sequence $\{Q_k\}_{k\in\mathbb N}$. This completes the proof. □

3.2. Characterization of random measures. In order to characterize a (probability) measure $\mu$ on a topological space, one might make use of its characteristic functional $\hat\mu$; see [2, Section 7.13].

Definition 3.6. Let $\mu$ be a measure on $(M,\mathcal B_{w^*})$. The characteristic functional of $\mu$ is the functional $\hat\mu:C\to\mathbb C$ defined by
$$\hat\mu(f):=\int_M\exp[i\langle u,f\rangle]\,\mathrm d\mu(u),\quad\text{for all } f\in C. \quad (3.5)$$
Since $(M,w^*)$ is a locally convex Hausdorff space, the characteristic functional uniquely determines the measure:

Proposition 3.7 ([2, Proposition 7.13.4]). Assume that $\mu_1$ and $\mu_2$ are Radon measures on $M$. Then $\mu_1=\mu_2$ if and only if $\hat\mu_1=\hat\mu_2$.
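As a numerical complement to Example 3.4 and Proposition 3.5, the following Python sketch draws realizations of the point process (3.2) with $K\sim\mathrm{Pois}(\gamma)$, i.i.d. Gaussian weights, and uniform locations, and checks the first-moment identity (3.3) by Monte Carlo; all parameter values ($\gamma=5$, $H=\mathbb R$, $\Omega=[0,1]$) are illustrative choices, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior(gamma=5.0, omega=(0.0, 1.0)):
    """One draw of u = sum_{k=1}^K Q_k delta_{Y_k} (eq. (3.2), gamma_k = 1):
    K ~ Pois(gamma), Q_k ~ N(0, 1) i.i.d. (H = R), Y_k ~ Uniform(Omega)."""
    K = rng.poisson(gamma)
    Q = rng.normal(size=K)            # weights Q_k
    Y = rng.uniform(*omega, size=K)   # locations Y_k
    return Q, Y

def total_variation(Q):
    # ||u||_M = sum_k |Q_k| for a real-valued discrete measure
    return np.abs(Q).sum()

# Monte Carlo check of (3.3): E||u||_M = E[K] * E|Q_1|; for Q_1 ~ N(0, 1)
# one has E|Q_1| = sqrt(2/pi), so E||u||_M = gamma * sqrt(2/pi).
tv = np.array([total_variation(sample_prior()[0]) for _ in range(20000)])
```

The empirical mean of `tv` should land near $5\sqrt{2/\pi}\approx 3.99$, in line with (3.3); here $K$ is independent of the $(Y_k,Q_k)$, matching the independence hypothesis of Proposition 3.5.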
As a typical example, the Poisson point process in Example 3.4 is characterized by a characteristic functional of the form
$$\hat\mu(f)=\exp\Big(\gamma\int_\Omega\int_H\big(\exp(i(q,f(y))_H)-1\big)\,\mathrm dq\,\mathrm dy\Big)=\exp\Big(\gamma\int_M\big(\exp(i\langle f,u\rangle)-1\big)\,\mathrm d\mu_0(u)\Big),$$
where $\mu_0$ is
the distribution of random variables of the form $U=Q\delta_Y$. The proof follows that of [24, Proposition 5.3.1] for the compound Poisson process. This, in particular, implies that the measure $\mu$ defining the Poisson point process is an infinitely divisible measure; that is, for each $n\in\mathbb N$, there exists a Radon probability measure $\mu^{1/n}$ such that
$$\hat\mu(f)=\big(\widehat{\mu^{1/n}}(f)\big)^n,\quad\text{for all } f\in C.$$
In fact, since $M$ is a complete locally convex space, we have the following representation theorem from [14, Satz 2.2], which characterizes all infinitely divisible measures on $M$. It makes use of the concept of a Lévy measure on a topological space; details are provided in [14]. The representation theorem indeed forms the foundation for constructing our class of random measures.

Theorem 3.8 (Lévy-Khintchine representation theorem). A probability measure on $M$ is infinitely divisible if and only if there exist $u_0\in M$, a covariance operator $R:C\to M$, and a Lévy measure $\nu$ such that
$$\hat\mu(f)=\exp\Big[i\langle u_0,f\rangle-\tfrac12\langle Rf,f\rangle+\int_M\big(\exp(i\langle u,f\rangle)-1-i\langle u,f\rangle\mathbf 1_F(u)\big)\,\mathrm d\nu(u)\Big], \quad (3.6)$$
for all $f\in C$. Here, $F$ is a convex, compact, and balanced neighborhood of $0$ (meaning that $\lambda F\subset F$ for all $\lambda$ with $|\lambda|<1$) such that $\nu(F^c)<\infty$.

We remark that the infinite divisibility property for measures has also been studied in [18], for Radon measures on Banach spaces.

4. Well-posedness of the Bayesian inverse problem

Using the prior measures introduced in Section 3, we are ready to prove the well-posedness of the Bayesian inverse problem (1.1). We recall that for a separable Banach space $Y$, our goal is to determine the posterior measure $\mu^z_{\mathrm{post}}$ given by
$$\frac{\mathrm d\mu^z_{\mathrm{post}}}{\mathrm d\mu_{\mathrm{pr}}}(u)=\frac{L(z|u)}{Z(z)},\quad\text{where } Z(z):=\int_M L(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u), \quad (4.1)$$
where we have $L(z|u):=\exp(-\Psi(u;z))$.
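The Bayes formula (4.1) can be illustrated numerically: $Z(z)$ is a prior expectation of the likelihood, so prior samples yield a self-normalized importance-sampling approximation of the posterior. The sketch below uses a scalar toy parameter and a made-up Gaussian misfit in place of the measure-valued setting; it is not the paper's algorithm, only a minimal illustration of (4.1).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: u is a scalar and Psi(u; z) = 0.5 (u - z)^2, so that
# L(z|u) = exp(-Psi(u; z)) as in (4.1).
def likelihood(z, u):
    return np.exp(-0.5 * (u - z) ** 2)

us = rng.normal(size=100_000)   # draws u_i ~ mu_pr (toy standard normal prior)
z = 1.0
L = likelihood(z, us)

Z = L.mean()                    # Z(z) approximated by (1/M) sum_i L(z|u_i)
w = L / L.sum()                 # self-normalized weights ~ dmu_post/dmu_pr
posterior_mean = np.sum(w * us) # posterior expectation of u via reweighting
```

For this conjugate toy model the exact values are known: $Z(1)=e^{-1/4}/\sqrt2\approx 0.551$ and the posterior mean is $z/2=0.5$, so the Monte Carlo estimates can be checked directly.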
Under certain assumptions on the likelihood function $L$, we establish the well-posedness of the Bayesian inverse problem (1.1). To this end, we make use of the Hellinger distance between probability measures, which is commonly used in comparing probability distributions, especially in the context of Bayesian inverse problems; cf. [31]. We recall its definition for convenience: the Hellinger distance between two probability measures $\mu$ and $\mu'$ on $M$, denoted by $d_{\mathrm{Hell}}$, is given by
$$d_{\mathrm{Hell}}(\mu,\mu')^2=\frac12\int_M\Bigg(\sqrt{\frac{\mathrm d\mu}{\mathrm d\nu}}-\sqrt{\frac{\mathrm d\mu'}{\mathrm d\nu}}\Bigg)^2\mathrm d\nu,$$
where both $\mu$ and $\mu'$ are absolutely continuous with respect to $\nu$.

4.1. Well-posedness of the Bayesian inverse problem. In what follows, we adopt the approach of [23] to present the assumptions that guarantee the well-posedness of the problem.

Assumption 1. We assume that the likelihood function satisfies the following conditions:
(A1) For almost every $u\in M$, the map $L(\cdot|u)$ is strictly positive.
(A2) For every $z\in Y$, $L(z|\cdot)$ is measurable and $L(z|\cdot)\in L^1(M,\mathrm d\mu_{\mathrm{pr}})$.
(A3) There exists $g\in L^1(M,\mathrm d\mu_{\mathrm{pr}})$ such that $L(z|\cdot)\le g$ for every $z\in Y$.
(A4) For every $u\in M$, the function $L(\cdot|u):Y\to\mathbb R$ is continuous.

Under the given assumptions, we provide a general well-posedness result:

Theorem 4.1. Let the assumptions in Assumption 1 hold. Then the Bayesian inverse problem is well-posed, in the sense that:
(1) Existence and uniqueness: For every $z\in Y$, the posterior
measure $\mu^z_{\mathrm{post}}$ exists and is unique.
(2) Stability: For every $z\in Y$ and every sequence $\{z_n\}_{n\in\mathbb N}\subset Y$ such that $z_n\to z$ in $Y$, there holds $d_{\mathrm{Hell}}(\mu^{z_n}_{\mathrm{post}},\mu^z_{\mathrm{post}})\to 0$.

Proof. Our proof adapts that of [23, Theorem 2.5]. First, let $z\in Y$ be fixed. We prove that $Z(z)>0$. Indeed, since $L(z|\cdot)>0$ on $M$ by (A1), we have
$$M=\bigcup_{n=1}^{\infty}M_n,\quad\text{where } M_n=\Big\{u\in M: L(z|u)\ge\frac1n\Big\},\ n\in\mathbb N.$$
As $M_n\subset M_{n+1}$ for all $n\in\mathbb N$, we use the $\sigma$-continuity of $\mu_{\mathrm{pr}}$ to obtain
$$\lim_{n\to\infty}\mu_{\mathrm{pr}}(M_n)=\mu_{\mathrm{pr}}(\cup_{n=1}^\infty M_n)=\mu_{\mathrm{pr}}(M)=1.$$
In particular, there exists $n_0\in\mathbb N$ such that $\mu_{\mathrm{pr}}(M_{n_0})>0$. Hence,
$$Z(z)=\int_M L(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u)\ge\int_{M_{n_0}}L(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u)\ge\frac{\mu_{\mathrm{pr}}(M_{n_0})}{n_0}>0.$$
Using Bayes' theorem for Radon spaces, see [23, Lemma 2.4], we obtain the unique existence of $\mu^z_{\mathrm{post}}$ satisfying (4.1).

To prove the stability property (2), we first prove the continuity of the function $z\mapsto Z(z)$. Indeed, for every sequence $z_k\to z$, one has $L(z_k|u)\to L(z|u)$ for almost every $u\in M$, by assumption (A4). Hence, by (A3) and the dominated convergence theorem, we obtain
$$Z(z_k)=\int_M L(z_k|u)\,\mathrm d\mu_{\mathrm{pr}}(u)\to\int_M L(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u)=Z(z).$$
Now, by the first part, the measure $\mu^{z_k}_{\mathrm{post}}$ is well-defined for every $k\in\mathbb N$. Hence we can write
$$2\,d_{\mathrm{Hell}}(\mu^z_{\mathrm{post}},\mu^{z_k}_{\mathrm{post}})^2=\int_M\Bigg|\sqrt{\frac{L(z|u)}{Z(z)}}-\sqrt{\frac{L(z_k|u)}{Z(z_k)}}\Bigg|^2\mathrm d\mu_{\mathrm{pr}}(u).$$
By the continuity of $L(\cdot|u)$ and of $Z$, for almost every $u\in M$ we have
$$\sqrt{\frac{L(z|u)}{Z(z)}}-\sqrt{\frac{L(z_k|u)}{Z(z_k)}}\to 0\quad\text{as } k\to\infty.$$
On the other hand, we use the fact that $(\sqrt a-\sqrt b)^2\le a+b$ for all $a,b\ge 0$ to obtain
$$\Bigg|\sqrt{\frac{L(z|u)}{Z(z)}}-\sqrt{\frac{L(z_k|u)}{Z(z_k)}}\Bigg|^2\le\frac{L(z|u)}{Z(z)}+\frac{L(z_k|u)}{Z(z_k)}\le\frac{2}{Z(z)}\big(L(z|u)+L(z_k|u)\big)\le\frac{4}{Z(z)}\,g(u),$$
where we have used (A3) and the continuity of $Z$ in the last inequalities. Since $4g(\cdot)/Z(z)\in L^1(M,\mathrm d\mu_{\mathrm{pr}})$ for every $z\in Y$, we again use the dominated convergence theorem to conclude that $d_{\mathrm{Hell}}(\mu^z_{\mathrm{post}},\mu^{z_k}_{\mathrm{post}})\to 0$. The proof is complete. □

4.2. Approximation of the posterior distribution. In the previous section, we derived the well-posedness of the Bayesian inverse problem (1.1). Nevertheless, in practical applications, solving the inverse problem directly in an infinite-dimensional Banach space is not feasible. Hence, approximations are necessary, and it becomes important to study whether the perturbed posterior, arising from the approximation of the forward model, converges to the posterior associated with the exact model. To formalize this, let $L_N$ denote an approximation of the likelihood $L$, obtained, for instance, through discretization of the forward operator or the underlying space. For every fixed $z\in Y$, under certain assumptions on the approximation $L_N$, the existence of the posterior measure corresponding to the likelihood $L_N$ is guaranteed; it is denoted by $\mu^z_{\mathrm{post},N}$. Formally, it is given by
$$\frac{\mathrm d\mu^z_{\mathrm{post},N}}{\mathrm d\mu_{\mathrm{pr}}}=\frac{L_N(z|u)}{Z_N(z)},\quad\text{where } Z_N(z)=\int_M L_N(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u). \quad (4.2)$$
The question whether the sequence of measures $\{\mu^z_{\mathrm{post},N}\}_{N\in\mathbb N}$ converges to $\mu^z_{\mathrm{post}}$, under appropriate conditions, is addressed in the following.

Theorem 4.2. Assume that $L_N$ and $L$ satisfy Assumption 1 for every $N\in\mathbb N$, where the same upper bound $g\in L^1(M,\mathrm d\mu_{\mathrm{pr}})$ as in (A3) is assumed to hold for all $N\in\mathbb N$.
If $|L_N(z|\cdot)-L(z|\cdot)|\to 0$ almost surely in $M$ for every $z\in Y$, then $d_{\mathrm{Hell}}(\mu^z_{\mathrm{post}},\mu^z_{\mathrm{post},N})\to 0$.

Proof. The proof proceeds analogously to that of Theorem 4.1. Let $z\in Y$ be fixed. Since $L_N(z|u)\to L(z|u)$ for a.e. $u\in M$ and $L_N(z|u)\le g(u)$ with $g\in L^1(M,\mathrm d\mu_{\mathrm{pr}})$, the dominated convergence theorem yields
$$Z_N(z)=\int_M L_N(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u)\to\int_M L(z|u)\,\mathrm d\mu_{\mathrm{pr}}(u)=Z(z).$$
Hence, one has
$$\sqrt{\frac{L_N(z|u)}{Z_N(z)}}-\sqrt{\frac{L(z|u)}{Z(z)}}\to 0\quad\text{for a.e. } u\in M.$$
In addition, since
$$\Bigg|\sqrt{\frac{L_N(z|u)}{Z_N(z)}}-\sqrt{\frac{L(z|u)}{Z(z)}}\Bigg|^2\le\frac{L_N(z|u)}{Z_N(z)}+\frac{L(z|u)}{Z(z)}\le\frac{2}{Z(z)}\big(L_N(z|u)+L(z|u)\big)\le\frac{4g(u)}{Z(z)},$$
and $g\in L^1(M,\mathrm d\mu_{\mathrm{pr}})$, we again use the dominated convergence theorem to conclude that
$$2\,d_{\mathrm{Hell}}(\mu^z_{\mathrm{post}},\mu^z_{\mathrm{post},N})^2=\int_M\Bigg|\sqrt{\frac{L(z|u)}{Z(z)}}-\sqrt{\frac{L_N(z|u)}{Z_N(z)}}\Bigg|^2\mathrm d\mu_{\mathrm{pr}}(u)\to 0,$$
which completes the proof. □

Remark 4.3. In Theorem 4.1, which concerns the well-posedness of the Bayesian inverse problem, and Theorem 4.2, which addresses the consistency of the approximations, our convergence results are provided without a convergence rate, following the general setting for Bayesian inverse problems given in [23]. We remark that further assumptions could be imposed in order to obtain local Lipschitz continuity. For instance, the results in [18] could be applied in our setting, by noting that the map $u\mapsto\|u\|_M$ is measurable. A detailed treatment is beyond the scope of this paper and will be the subject of future research.

5. Applications

5.1. Inverse problems with Gaussian noise. Finally, to illustrate the theory, we consider some examples applicable within this framework.
As is typical in parameter identification problems, we consider the problem (1.1), where the observation space $Y$ is finite-dimensional, i.e., $Y=\mathbb R^{N_o}$, and $\xi$ follows a Gaussian distribution, $\xi\sim\mathcal N(0,\Sigma)$, where $\Sigma$ is a positive-definite matrix. In this setting, the likelihood potential function reads
$$\Psi(u;z)=\frac12\|G(u)-z\|^2_\Sigma=\frac12\|\Sigma^{-1/2}(G(u)-z)\|^2_2,$$
and $L(z|u)=\exp(-\Psi(u;z))$. As is typical, we assume that the forward operator $G:M(\Omega,H)\to\mathbb R^{N_o}$ is continuous in the weak* topology. One can then verify that the assumptions on the likelihood function $L(z|u)$ are satisfied.

Corollary 5.1. Assume that the operator $G:M(\Omega,H)\to\mathbb R^{N_o}$ is weak* continuous. Then the assumptions in Assumption 1 hold.

Proof. Continuity of the forward operator implies the measurability of the likelihood function. By definition, the function $L$ is strictly positive for every $u\in M$ and $z\in Y$, it is bounded by $1$, and it is continuous in $z$ for every $u\in M$. The proof is complete. □

5.2. Examples. Finally, we introduce some concrete examples that can be studied in the Bayesian framework.

Example 5.2 (Convolution problems with Gaussian kernels). Let $\Omega\subset\mathbb R^d$, $d\ge 1$, be a compact domain with non-empty interior. We consider the Gaussian kernel depending on $\sigma$, $k=k_\sigma:\Omega\times\Omega\to\mathbb R$, given by
$$k(x,y)=k_\sigma(x,y):=\exp\Big(-\frac{|x-y|^2}{2\sigma^2}\Big),\quad x,y\in\Omega.$$
Let the signal to identify be a real-valued discrete measure, represented as
$$u=\sum_{k=1}^{N_s}q_k\delta_{y_k},\quad q_k\in\mathbb R,\ y_k\in\Omega.$$
On $\Omega$, we fix a finite set of measurement locations $x=(x_1,\dots,x_{N_o})$ and consider the vector kernel
$$k[x,y]:=(k(x_1,y);k(x_2,y);\dots;k(x_{N_o},y)),\quad y\in\Omega,$$
as well as the (linear) forward operator $G:M(\Omega)\to\mathbb R^{N_o}$ defined by
$$Gu=\int_\Omega k[x,y]\,\mathrm du(y),\quad u\in M(\Omega).$$
Together with the additive noise $\xi$, we aim to determine $u$ from the measurement data $z$ through the model $z=Gu+\xi$. Applications of this problem have been studied in, for instance, [25, 7].
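The forward map of Example 5.2 and the Gaussian likelihood potential of Section 5.1 can be sketched in a few lines of Python; the source configuration, sensor grid, kernel width, and noise level below are made-up illustration values (with $\Sigma=\varsigma^2 I$), not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel(x, y, sigma=0.1):
    """Gaussian kernel k_sigma(x, y) of Example 5.2, here on Omega = [0, 1]."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * sigma ** 2))

def forward(qs, ys, xs, sigma=0.1):
    """G u = int k[x, y] du(y) = sum_k q_k k[x, y_k] for u = sum_k q_k delta_{y_k}."""
    return kernel(xs, ys, sigma) @ qs

def potential(qs, ys, xs, z, noise_std=0.05, sigma=0.1):
    """Psi(u; z) = 0.5 ||Sigma^{-1/2}(G u - z)||^2 with Sigma = noise_std^2 I."""
    r = forward(qs, ys, xs, sigma) - z
    return 0.5 * np.dot(r, r) / noise_std ** 2

# Toy data: N_s = 2 sources observed at N_o = 20 sensors on [0, 1]
ys_true = np.array([0.3, 0.7])
qs_true = np.array([1.0, -0.5])
xs = np.linspace(0.0, 1.0, 20)
z = forward(qs_true, ys_true, xs) + 0.05 * rng.normal(size=xs.size)
```

At the true sources the residual is pure noise, so $\Psi$ stays of order $N_o/2$, while a wrong parameter (e.g. zero weights) inflates the misfit sharply; this is the mechanism the likelihood $L(z|u)=\exp(-\Psi(u;z))$ exploits.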
In addition, the recent work [20] addresses the problem of selecting optimal sensor placements in this setting. As the kernel is continuous, the map $G:M(\Omega)\to\mathbb R^{N_o}$ is
weak* continuous. By Corollary 5.1, Theorem 4.1 is applicable and the Bayesian inverse problem in this setting is well-posed. We define the prior distribution of $u$ through the random measure
$$u=\sum_{k=1}^{K}Q_k\delta_{Y_k},$$
where we consider the distributions $K\sim\mathrm{Pois}(\gamma)$, $Q_k\sim\mathcal N(\mu,\sigma^2)$ and $Y_k\sim\mathrm{Uniform}(\Omega)$. If the $q_k$ are known to be positive coefficients, for instance in biological imaging [30], one could consider the log-normal distribution $Q_k\sim\mathrm{LogNormal}(\mu,\sigma^2)$, which is supported on $(0,\infty)$.

Example 5.3 (Sound source localization with the Helmholtz equation). The inverse sound source localization problem seeks to recover an unknown acoustic source $u$, modeled as a superposition of time-harmonic monopoles, from noisy pointwise measurements of the acoustic pressure; that is, $u$ has the form
$$u=\sum_{k=1}^{N_s}q_k\delta_{y_k},\quad q_k\in\mathbb C,\ y_k\in\Omega_S,$$
where $\Omega_S$ denotes the source domain. The problem is governed by the Helmholtz equation on a bounded domain; details can be found in [27]. In this setting, by [27, Lemma 2.4], the solution operator $S:M(\Omega_S,\mathbb C)\to C(\Omega_O,\mathbb C)$ is linear and bounded, where $\Omega_O$ denotes the observation set with $\Omega_O\cap\Omega_S=\emptyset$. This implies that the (linear) observation operator $G:M(\Omega_S,\mathbb C)\to\mathbb C^{N_o}$ given by
$$Gu=(S[u](x_1),\dots,S[u](x_{N_o})),\quad x_1,\dots,x_{N_o}\in\Omega_O,$$
is weak*-to-strong continuous. Hence, Theorem 4.1 can be applied and the Bayesian inverse problem is well-posed.

We remark that the Bayesian inverse problem in this setting has also been studied in [16], where the prior is defined on $\ell^1$ via a sequence of measures $\{\mu_{\mathrm{pr},k}\}_{k\in\mathbb N}$. In our setting, the prior measure is naturally defined through (3.2). Here, one could again consider $K\sim\mathrm{Pois}(\lambda)$ and $Q_k\sim\mathrm{ComplexNormal}(q,\sigma^2,c^2)$, where ComplexNormal denotes the complex Gaussian distribution, and $Y_k\sim\mathrm{Uniform}(\Omega_S)$, for $k\in\mathbb N$.

6. Conclusion and remarks

In this work, we study the Bayesian inverse problem (1.1), where the parameter to be identified belongs to a space of measures and, as such, inherits its sparse structure from the ambient space.
To define a prior distribution on this space, we consider an appropriate topological structure, namely the weak* topology, together with its corresponding Borel $\sigma$-algebra. The priors are characterized via point processes, which are measurable with respect to the underlying structure. With the given priors, we establish the well-posedness of the Bayesian inverse problem, as well as its consistency under approximation of the likelihood function and the prior measure.

Nevertheless, a numerical study should be conducted to demonstrate the practical applicability of the approach. In addition, it is evident that appropriate choices of point processes are essential to ensure accurate reconstruction. These topics will be addressed in future work.

Acknowledgement

This work was supported by the Austrian Science Fund (FWF) under the grant DOC78. The author owes a debt of gratitude to Prof. Barbara Kaltenbacher for her valuable discussions and comments, which significantly contributed to the improvement of this manuscript. He is also grateful to Prof. Daniel Walter for his comments and for his interest in a forthcoming collaboration that builds on this work.

References

[1] S. Agapiou, M. Burger, M. Dashti, and T. Helin. Sparsity-promoting and
edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems. Inverse Probl., 34(4):37, 2018. Id/No 045002.
[2] V. I. Bogachev. Measure theory. Vol. I and II. Berlin: Springer, 2007.
[3] K. Bredies and S. Fanzon. An optimal transport approach for solving dynamic inverse problems in spaces of measures. ESAIM, Math. Model. Numer. Anal., 54(6):2351–2382, 2020.
[4] K. Bredies and H. K. Pikkarainen. Inverse problems in spaces of measures. ESAIM, Control Optim. Calc. Var., 19(1):190–218, 2013.
[5] T. Broderick, A. C. Wilson, and M. I. Jordan. Posteriors, conjugacy, and exponential families for completely random measures. Bernoulli, 24(4B):3181–3221, 2018.
[6] T. Bui-Thanh, O. Ghattas, J. Martin, and G. Stadler. A computational framework for infinite-dimensional Bayesian inverse problems. I: The linearized case, with application to global seismic inversion. SIAM J. Sci. Comput., 35(6):A2494–A2523, 2013.
[7] E. J. Candès and C. Fernandez-Granda. Super-resolution from noisy data. J. Fourier Anal. Appl., 19(6):1229–1254, 2013.
[8] E. Casas, C. Clason, and K. Kunisch. Parabolic control problems in measure spaces with sparse solutions. SIAM J. Control Optim., 51(1):28–63, 2013.
[9] D. L. Cohn. Measure theory. Birkhäuser Adv. Texts, Basler Lehrbüch. New York, NY: Birkhäuser/Springer, 2nd revised ed., 2013.
[10] D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. I: Elementary theory and methods. Probab. Appl. New York, NY: Springer, 2nd ed., 2003.
[11] D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. II: General theory and structure. Probab. Appl. New York, NY: Springer, 2nd revised and extended ed., 2008.
[12] M. Dashti and D.-L. Duong. Stability of particle trajectories of scalar conservation laws and applications in Bayesian inverse problems. Preprint, arXiv:2307.14536 [math.AP], 2023.
[13] M.
Dashti, S. Harris, and A. Stuart. Besov priors for Bay esian inverse problems. Inverse Probl. Imaging , 6(2):183–200, 2012. [14] E. Dettweiler. Grenzwertsätze für Wahrscheinlichkei tsmaße auf Badrikianschen Räumen. Z. Wahrscheinlichkeitstheor. Verw. Geb. , 34:285–311, 1976. [15] J. Diestel and J. J. j. Uhl. Vector measures , volume 15 of Math. Surv. American Mathematical Society (AMS), Providence, RI, 1977. [16] S. Engel, D. Hafemeyer, C. Münch, and D. Schaden. An appl ication of sparse measure valued Bayesian inversion to acoustic sound source identification .Inverse Probl. , 35(7):33, 2019. Id/No 075005. [17] W. Hensgen. A simple proof of Singer’s representation t heorem. Proc. Am. Math. Soc. , 124(10):3211–3212, 1996. [18] B. Hosseini. Well-posed Bayesian inverse problems wit h infinitely divisible and heavy-tailed prior measures. SIAM/ASA J. Uncertain. Quantif. , 5:1024–1060, 2017. [19] B. Hosseini and N. Nigam. Well-posed Bayesian inverse p roblems: priors with exponential tails. SIAM/ASA J. Uncertain. Quantif. , 5:436–465, 2017. [20] P.-T. Huynh, K. Pieper, and D. Walter. Towards optimal s ensor placement for inverse problems in spaces of measures. Inverse Probl. , 40(5):43, 2024. Id/No 055007. [21] K. Kunisch, P. Trautmann, and B. Vexler. Optimal contro l of the undamped linear wave equation with measure-valued controls. SIAM J. Control Optim. , 54(3):1212–1244,
|
https://arxiv.org/abs/2505.00151v1
|
2016. [22] S. Lang. Real analysis. 2nd ed. Reading, Massachusetts , etc.: Addison-Wesley Publishing Com- pany, Advanced Book Program/World Science Division. XIV, 5 33 p., 1983. 16 [23] J. Latz. On the well-posedness of Bayesian inverse prob lems. SIAM/ASA J. Uncertain. Quantif. , 8:451–482, 2020. [24] W. Linde. Probability in Banach spaces - stable and infin itely divisible distributions. 2nd ed. A Wiley-Interscience Publication. Chichester: John Wiley & Sons Ltd. 195 p.; $ 29.95 (1986)., 1986. [25] C. W. McCutchen. Superresolution in microscopy and the abbe resolution limit. J. Opt. Soc. Am. , 57(10):1190–1192, Oct 1967. [26] K. Pieper. Finite element discretization and efficient numerical solut ion of elliptic and parabolic sparse control problems . PhD thesis, Technische Universität München, 2015. [27] K. Pieper, B. Q. Tang, P. Trautmann, and D. Walter. Inver se point source location with the Helmholtz equation on a bounded domain. Computational Optimization and Applications , 77(1):213–249, 2020. [28] R.-D. Reiss. A course on point processes . Springer Ser. Stat. New York: Springer-Verlag, 1993. [29] M.-F. Sainte-Beuve. Some topological properties of ve ctor measures with bounded variation and its applications. Ann. Mat. Pura Appl. (4) , 116:317–379, 1978. [30] G. Schiebinger, E. Robeva, and B. Recht. Superresoluti on without separation. Inf. Inference , 7(1):1–30, 2018. [31] A. M. Stuart. Inverse problems: A Bayesian perspective .Acta Numerica , 19:451–559, 2010. [32] A. M. Stuart and M. Dashti. The Bayesian approach to inve rse problems. In R. Ghanem, D. Higdon, and H. Owhadi, editors, Handbook of Uncertainty Quantification , pages 311–428. Springer, 2017. [33] T. J. Sullivan. Well-posed Bayesian inverse problems a nd heavy-tailed stable quasi-Banach space priors. Inverse Probl. Imaging , 11(5):857–874, 2017. 17
Bayesian Discrepancy Measure: Higher-order and Skewed Approximations

Elena Bortolato¹, Francesco Bertolino², Monica Musio², and Laura Ventura³
¹ Universitat Pompeu Fabra, Barcelona School of Economics, Spain; elena.bortolato@bse.eu
² University of Cagliari, Cagliari, Italy; bertolin@unica.it, mmusio@unica.it
³ University of Padova, Padova, Italy; ventura@stat.unipd.it

May 2, 2025

Abstract

The aim of this paper is to discuss both higher-order asymptotic expansions and skewed approximations for the Bayesian Discrepancy Measure for testing precise statistical hypotheses. In particular, we derive results on third-order asymptotic approximations and skewed approximations for univariate posterior distributions, also in the presence of nuisance parameters, demonstrating improved accuracy in capturing posterior shape with little additional computational cost over simple first-order approximations. For the third-order approximations, connections to frequentist inference via matching priors are highlighted. Moreover, the definition of the Bayesian Discrepancy Measure and the proposed methodology are extended to the multivariate setting, employing tractable skew-normal posterior approximations obtained via derivative matching at the mode. Accurate multivariate approximations for the Bayesian Discrepancy Measure are then derived by defining credible regions based on the Optimal Transport map that transforms the skew-normal approximation to a standard multivariate normal distribution. The performance and practical benefits of these higher-order and skewed approximations are illustrated through two examples.

1 Introduction

Bayesian inference often relies on asymptotic arguments, leading to approximate methods that frequently assume a parametric form for the posterior distribution. In particular, a Gaussian distribution provides a convenient density for a first-order approximation.
However, this approximation fails to capture potential skewness and asymmetry in the posterior distribution. To avoid this drawback, starting from third-order expansions of the Laplace method for posterior distributions (see, e.g., [9], [16], [17], and references therein), possible alternatives are:

• higher-order asymptotic approximations: these offer improved accuracy at minimal additional computational cost compared to first-order approximations, and are applicable to posterior distributions and quantities of interest such as tail probabilities and credible regions (see, e.g., [23], and references therein);
• skewed approximations for the posterior distribution, theoretically justified by a skewed Bernstein-von Mises theorem (see, e.g., [5] and [26], and references therein).

The aim of this contribution is to discuss higher-order expansions and skew-symmetric approximations for the Bayesian Discrepancy Measure (BDM) proposed in [3] for testing precise statistical hypotheses. Specifically, the BDM assesses the compatibility of a given hypothesis with the available information (prior and data). To summarize this information, the posterior median is used, providing a straightforward evaluation of the discrepancy with the null hypothesis. The BDM possesses desirable properties such as consistency and invariance under reparameterization, making it a robust measure of evidence.

For a scalar parameter of interest, even in the presence of nuisance parameters, computing the BDM involves evaluating tail areas of the posterior or marginal posterior distribution. A first-order Gaussian approximation can be used, but it may be inaccurate, especially with small sample sizes or many nuisance parameters, since it fails to account for potential posterior asymmetry and skewness. In this respect, the aim of this paper is to provide higher-order asymptotic approximations and skewed asymptotic approximations for the BDM.
https://arxiv.org/abs/2505.00185v1
For the third-order approximations, connections with frequentist inference are highlighted when using objective matching priors. Also for multidimensional parameters, while a first-order Gaussian approximation of the posterior distribution can be used to calculate the BDM, it still fails to account for potential posterior asymmetry and skewness. In this respect, this paper also addresses higher-order asymptotic approximations and skewed approximations for the BDM. The latter are based on an Optimal Transport map (see [7] and [8]) that transforms the skew-normal approximation to a standard multivariate normal distribution.

The paper is organized as follows. Section 2 provides some background on the BDM for a scalar parameter of interest, also in the presence of nuisance parameters, and extends the definition to the multivariate framework. Section 3 illustrates higher-order Bayesian approximations for the BDM; connections with frequentist inference are highlighted when using objective matching priors. Section 4 discusses skewed approximations for the posterior distribution and for the BDM, theoretically justified by a skewed Bernstein-von Mises theorem, with new insights in the multivariate framework. Two examples are discussed in Section 5. Finally, some concluding remarks are given in Section 6.

2 Background

Consider a sampling model $f(y;\theta)$, indexed by a parameter $\theta\in\Theta\subseteq\mathbb{R}^d$, $d\geq 1$, and let $L(\theta)=L(\theta;y)=\exp\{\ell(\theta)\}$ be the likelihood function based on a random sample $y=(y_1,\dots,y_n)$ of size $n$. Given a prior density $\pi(\theta)$ for $\theta$, Bayesian inference for $\theta$ is based on the posterior density $\pi(\theta\mid y)\propto\pi(\theta)L(\theta)$. In several applications, it is of interest to test the precise (or sharp) null hypothesis
$$H_0:\theta=\theta_0 \qquad (1)$$
against $H_1:\theta\neq\theta_0$. In Bayesian hypothesis testing, the usual approach relies on the well-known Bayes Factor (BF), which measures the ratio of posterior to prior odds in favor of the null hypothesis $H_0$.
Typically, a high BF, or weight of evidence $W=\log(\mathrm{BF})$, provides support for $H_0$. However, improper priors can lead to an undetermined BF, and in the context of precise null hypotheses the BF can be subject to the Jeffreys-Lindley paradox. Furthermore, the BF is not well calibrated, as its finite sampling distribution is generally unknown and may depend on nuisance parameters. To address these limitations, recent research has explored alternative Bayesian measures of evidence for precise null hypothesis testing, including the e-value (see, e.g., [11], [12] and [13] and references therein) and the BDM [3]. In the following we focus on the Bayesian Discrepancy Measure of evidence proposed in [3] (see also [4]).

2.1 Scalar case

The BDM gives an absolute evaluation of a hypothesis $H_0$ in light of prior knowledge about the parameter and observed data. In the absolutely continuous case, for testing (1) the BDM is defined as
$$\delta_H = 1 - 2\min\left\{\int_{-\infty}^{\theta_0}\pi(\theta\mid y)\,d\theta,\; 1-\int_{-\infty}^{\theta_0}\pi(\theta\mid y)\,d\theta\right\}. \qquad (2)$$
The quantity $\min\{\int_{-\infty}^{\theta_0}\pi(\theta\mid y)\,d\theta,\,1-\int_{-\infty}^{\theta_0}\pi(\theta\mid y)\,d\theta\}$ can be interpreted as the posterior probability of a "tail" event concerning only the precise hypothesis $H_0$. Doubling this "tail" probability, one gets a posterior probability assessment of how "central" the hypothesis $H_0$ is, and hence of how strongly it is supported by the prior and the data. This interpretation is related to an alternative definition
for $\delta_H$. Let $\theta_m$ be the posterior median and consider the interval $I_E=(\theta_0,+\infty)$ if $\theta_m<\theta_0$, or $I_E=(-\infty,\theta_0)$ if $\theta_0<\theta_m$. Then the BDM of the hypothesis $H_0$ can be computed as
$$\delta_H = 1-2P(\theta\in I_E\mid y) = 1-2\int_{I_E}\pi(\theta\mid y)\,d\theta. \qquad (3)$$
Note that $2P(\theta\in I_E\mid y)$ is twice the tail probability beyond $\theta_0$, so that $\delta_H$ coincides with the posterior probability of the equi-tailed credible interval for $\theta$ having $\theta_0$ as an endpoint.

The Bayesian Discrepancy Test assesses the hypothesis $H_0$ on the basis of the BDM. High values of $\delta_H$ indicate strong evidence against $H_0$, whereas low values suggest that the data are consistent with $H_0$. Under $H_0$, for large sample sizes, $\delta_H$ is asymptotically uniformly distributed on $[0,1]$. Conversely, when $H_0$ is false, $\delta_H$ tends to 1 in probability. While thresholds can be set to interpret $\delta_H$, in line with the ASA statement, we agree with Fisher that significance levels should be tailored to each case based on evidence and ideas.

The BDM is invariant under invertible monotonic reparametrizations. Under general regularity conditions, and assuming Cromwell's rule for prior selection, $\delta_H$ exhibits the following properties: (1) if $\theta_0=\theta_t$ (the true value of the parameter), $\delta_H$ tends toward a uniform distribution as the sample size increases; (2) if $\theta_0\neq\theta_t$, $\delta_H$ converges to 1 in probability. Furthermore, using a matching prior, $\delta_H$ is exactly uniformly distributed for all sample sizes.

The practical computation of $\delta_H$ requires the evaluation of tail areas of the form
$$P(\theta\geq\theta_0\mid y)=\int_{\theta_0}^{\infty}\pi(\theta\mid y)\,d\theta. \qquad (4)$$
The derivation of a first-order tail area approximation is simple, since it uses a Gaussian approximation. With this approximation, a first-order approximation for $\delta_H$ when testing (1) is simply given by
$$\delta_H \doteq 2\,\Phi\!\left(\frac{\theta_0-\hat\theta}{\sqrt{j(\hat\theta)^{-1}}}\right)-1, \qquad (5)$$
where $\hat\theta$ is the maximum likelihood estimate (MLE) of $\theta$, $j(\theta)=-\ell^{(2)}(\theta)=-\partial^2\ell(\theta)/\partial\theta^2$ is the observed information, the symbol "$\doteq$" indicates that the approximation is accurate to $O(n^{-1/2})$, and $\Phi(\cdot)$ is the standard normal distribution function.
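As a concrete illustration of definition (2) and the first-order approximation (5), the sketch below uses an exponential-rate model with a flat prior; the sample size $n$, sufficient statistic $s=\sum y_i$, and hypothesized value $\theta_0$ are made-up illustrative numbers, not taken from the paper.

```python
# Sketch (illustrative, not from the paper): BDM for a precise hypothesis on
# an exponential rate theta, comparing the exact tail area of definition (2)
# (numerical integration, flat prior) with the first-order approximation (5).
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bdm_exact(n, s, theta0, grid_pts=20000):
    """delta_H from (2); posterior prop. to theta^n * exp(-theta*s) (flat prior)."""
    upper = 10.0 * n / s                       # truncation point for the integral
    h = upper / grid_pts
    xs = [h * (i + 0.5) for i in range(grid_pts)]   # midpoint rule
    ws = [math.exp(n * math.log(x) - x * s) for x in xs]
    F = sum(w for x, w in zip(xs, ws) if x <= theta0) / sum(ws)  # P(theta <= theta0 | y)
    return 1.0 - 2.0 * min(F, 1.0 - F)

def bdm_first_order(n, s, theta0):
    """Approximation (5): MLE n/s, observed information j(theta) = n/theta^2;
    the min in (2) makes this equivalent to using |theta0 - MLE|."""
    mle = n / s
    se = mle / math.sqrt(n)                    # sqrt of j(mle)^{-1}
    return 2.0 * Phi(abs(theta0 - mle) / se) - 1.0

n, s, theta0 = 10, 12.5, 1.3                   # hypothetical data summary and H0
exact = bdm_exact(n, s, theta0)
approx = bdm_first_order(n, s, theta0)
```

In this illustrative configuration the Gamma-shaped posterior is right-skewed, so the Gaussian value overstates the evidence against $H_0$ relative to the exact tail-area computation; this is the kind of inaccuracy that the approximations of Sections 3 and 4 are designed to reduce.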
Thus, to first order, $\delta_H$ agrees numerically with $1-p$-value based on the Wald statistic $w(\theta)=(\hat\theta-\theta)/j(\hat\theta)^{-1/2}$, and also with the first-order approximation of the e-value (see, e.g., [19]). In practice, the approximation (5) of $\delta_H$ may be inaccurate, in particular for a small sample size, because it forces the posterior distribution to be symmetric.

2.2 Nuisance parameters

In most applications, $\theta$ is partitioned as $\theta=(\psi,\lambda)$, where $\psi$ is a scalar parameter of interest and $\lambda$ is a $(d-1)$-dimensional nuisance parameter, and it is of interest to test the precise (or sharp) null hypothesis
$$H_0:\psi=\psi_0 \qquad (6)$$
against $H_1:\psi\neq\psi_0$. In the absolutely continuous case, for testing (6) in the presence of nuisance parameters, the BDM is defined as
$$\delta_H = 1-2\min\left\{\int_{-\infty}^{\psi_0}\pi_m(\psi\mid y)\,d\psi,\;1-\int_{-\infty}^{\psi_0}\pi_m(\psi\mid y)\,d\psi\right\}, \qquad (7)$$
where $\pi_m(\psi\mid y)$ is the marginal posterior density for $\psi$, given by
$$\pi_m(\psi\mid y)=\int\pi(\psi,\lambda\mid y)\,d\lambda \propto \int\pi(\psi,\lambda)L(\psi,\lambda)\,d\lambda. \qquad (8)$$
Also in this framework, the practical computation of $\delta_H$ requires the evaluation of tail areas of the form
$$P_m(\psi\geq\psi_0\mid y)=\int_{\psi_0}^{\infty}\pi_m(\psi\mid y)\,d\psi. \qquad (9)$$
The derivation of a first-order tail area approximation is still simple, since it uses a Gaussian approximation. Let $\ell_p(\psi)=\log L(\psi,\hat\lambda_\psi)$ be the profile loglikelihood for $\psi$, with $\hat\lambda_\psi$ the constrained MLE of $\lambda$ given $\psi$. Moreover, let $(\hat\psi,\hat\lambda)$ be the full MLE, and let $j_p(\psi)=-\ell_p^{(2)}(\psi)=-\partial^2\ell_p(\psi)/\partial\psi^2$ be the profile observed information. A first-order approximation for $\delta_H$ when testing (6) is
simply given by
$$\delta_H \doteq 2\,\Phi\!\left(\frac{\psi_0-\hat\psi}{\sqrt{j_p(\hat\psi)^{-1}}}\right)-1. \qquad (10)$$
Thus, to first order, $\delta_H$ agrees numerically with $1-p$-value based on the profile Wald statistic $w_p(\psi)=(\hat\psi-\psi)/j_p(\hat\psi)^{-1/2}$. In practice, as in the scalar parameter case, the approximation (10) of $\delta_H$ may be inaccurate, in particular for a small sample size or a large number of nuisance parameters, since it fails to account for potential posterior asymmetry and skewness.

2.3 The multivariate case

Extending the definition of the BDM to the multivariate setting, where $\theta\in\Theta\subseteq\mathbb{R}^d$ with $d>1$, presents some challenges. The core concepts of the univariate definition rely on the unique ordering of the real line and the uniquely defined median, which splits the probability mass into two equal halves (tail areas). In $\mathbb{R}^d$, with $d>1$, there is no natural unique ordering, and concepts like the median and "tail areas" relative to a specific point $\theta_0$ lack a single, universally accepted definition. Despite these challenges, the fundamental goal remains the same: to quantify how consistent the hypothesized value $\theta_0$ is with the posterior distribution $\pi(\theta\mid y)$; specifically, measuring how "central" or, conversely, how "extreme" $\theta_0$ lies within the posterior distribution.

Utilizing the notion of center-outward quantile functions ([7], [8]), a concept from recent multivariate statistics, provides a theoretically appealing way to define the multivariate BDM. Let $F^{\pm}_P:\mathbb{R}^d\to\mathbb{B}^d$ be the center-outward distribution function mapping the posterior distribution $P_\theta$ (with density $\pi(\theta\mid y)$) to the uniform distribution $U_d$ on the unit ball $\mathbb{B}^d$. More precisely, the center-outward distribution function $F^{\pm}_P:\mathbb{R}^d\to\mathbb{B}^d$ is defined as the almost everywhere unique gradient of a convex function that pushes the distribution $P_\theta$ forward to the uniform distribution $U_d$ on the unit ball $\mathbb{B}^d$ in $\mathbb{R}^d$. That is,
$$F^{\pm}_P := \nabla g, \quad \text{such that } F^{\pm}_P \,\#\, P_\theta = U_d.$$
The center-outward quantile function $Q^{\pm}_P$ is defined as the (continuous) inverse of $F^{\pm}_P$, i.e. $Q^{\pm}_P := (F^{\pm}_P)^{-1}$.
It maps the open unit ball $\mathbb{B}^d$ (minus the origin) to $\mathbb{R}^d\setminus (F^{\pm}_P)^{-1}(0)$ and satisfies $Q^{\pm}_P \,\#\, U_d = P_\theta$. For $\tau\in(0,1)$, define the center-outward quantile region of order $\tau$ as $R^{\pm}_P(\tau):=Q^{\pm}_P(\tau\mathbb{B}^d)$, and the center-outward quantile contour of order $\tau$ as $C^{\pm}_P(\tau):=Q^{\pm}_P(\tau\mathcal{S}^{d-1})$, where $\mathcal{S}^{d-1}$ is the unit sphere in $\mathbb{R}^d$. When $d=1$, this coincides with the rescaled univariate cumulative distribution function $F^{\pm}_P(x)=2F_P(x)-1$, and the BDM (7) can be expressed as
$$\delta_H = |F^{\pm}_P(\theta_0)|.$$
This measures the (rescaled) distance of the quantile rank of $\theta_0$ from the center point (corresponding to rank 0). Generalizing this, we can define the multivariate BDM for the hypothesis $H_0:\theta=\theta_0$ as
$$\delta_H = \|F^{\pm}_P(\theta_0)\|, \qquad (11)$$
where $\|\cdot\|$ denotes the standard Euclidean norm in $\mathbb{R}^d$. Here, $F^{\pm}_P(\theta_0)$ maps the point $\theta_0$ to a location $u$ within the unit ball $\mathbb{B}^d$. This definition has desirable properties (see [7]):

• it yields a value between 0 and 1;
• $\delta_H=0$ if $\theta_0$ corresponds to the geometric center of the distribution (mapped to $\mathbf{0}$ by $F^{\pm}_P$);
• $\delta_H$ increases as $\theta_0$ moves away from the center towards the "boundary" of the distribution, approaching 1 for points mapped near the surface of the unit sphere $\mathcal{S}^{d-1}$;
• it is invariant under suitable classes of transformations (affine transformations if $P_\theta$ is elliptically contoured, and more generally under monotone transformations linked to an
Optimal Transport map construction);
• it naturally reduces to the univariate definition $\delta_H=|F^{\pm}_P(\theta_0)|$ when $d=1$.

The primary practical difficulty lies in computing the center-outward distribution function $F^{\pm}_P(\cdot)$ for an arbitrary posterior distribution $\pi(\theta\mid y)$, as it typically requires solving a complex Optimal Transport problem (see [14]).

3 Beyond Gaussian I: higher-order asymptotic approximations

3.1 Scalar case

In order to improve on the first-order approximation (5) of $\delta_H$, it may be useful to resort to higher-order approximations based on tail area approximations (see, e.g., [17], [23], and references therein). Applying the tail area argument to the posterior density, we can derive the $O(n^{-3/2})$ approximation
$$P(\theta\geq\theta_0\mid y) \mathrel{\ddot=} \Phi(r^*(\theta_0)), \qquad (12)$$
where the symbol "$\ddot=$" indicates that the approximation is accurate to $O(n^{-3/2})$ and
$$r^*(\theta) = r(\theta) + \frac{1}{r(\theta)}\log\frac{q(\theta)}{r(\theta)},$$
with $r(\theta)=\mathrm{sign}(\hat\theta-\theta)\,[2(\ell(\hat\theta)-\ell(\theta))]^{1/2}$ the likelihood root and
$$q(\theta) = \ell^{(1)}(\theta)\, j(\hat\theta)^{-1/2}\, \frac{\pi(\hat\theta)}{\pi(\theta)}.$$
In the expression of $q(\theta)$, $\ell^{(1)}(\theta)=\partial\ell(\theta)/\partial\theta$ is the score function. Using the tail area approximation (12), a third-order approximation of the BDM (2) can be computed as
$$\delta_H \mathrel{\ddot=} 1-2\min\{\Phi(r^*(\theta_0)),\,1-\Phi(r^*(\theta_0))\} = 2\,\Phi(|r^*(\theta_0)|)-1. \qquad (13)$$
Note that the higher-order approximation (13) does not impose any condition on the prior $\pi(\theta)$, i.e. it can also be improper, and it is available at a negligible additional computational cost over the simple first-order approximation. Note also that, using $r^*(\theta)$, a $(1-\alpha)$ equi-tailed credible interval for $\theta$ can be computed as $CI=\{\theta : |r^*(\theta)|\leq z_{1-\alpha/2}\}$, where $z_{1-\alpha/2}$ is the $(1-\alpha/2)$-quantile of the standard normal distribution, and in practice it can reflect asymmetries of the posterior. Moreover, from (12), the posterior median can be computed as the solution in $\theta$ of the estimating equation $r^*(\theta)=0$.
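A minimal sketch of the third-order approximation (13), again for an illustrative exponential-rate model with a flat prior (so the prior ratio $\pi(\hat\theta)/\pi(\theta)$ in $q(\theta)$ equals one); the numbers are hypothetical, not from the paper.

```python
# Sketch (illustrative numbers, not from the paper): third-order
# approximation (13) for an exponential rate with a flat prior.
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bdm_third_order(n, s, theta0):
    mle = n / s
    ell = lambda t: n * math.log(t) - t * s           # loglikelihood
    # likelihood root r(theta0), with the sign of (mle - theta0)
    r = math.copysign(math.sqrt(2.0 * (ell(mle) - ell(theta0))), mle - theta0)
    score = n / theta0 - s                            # l^(1)(theta0)
    q = score * (mle / math.sqrt(n))                  # score * j(mle)^{-1/2}
    r_star = r + math.log(q / r) / r                  # modified likelihood root
    return 2.0 * Phi(abs(r_star)) - 1.0

delta3 = bdm_third_order(n=10, s=12.5, theta0=1.3)
```

For these illustrative values the third-order result is much closer to the exact tail-area computation than the first-order Gaussian value (5), at essentially no extra cost: only the loglikelihood and its score are needed.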
3.2 Nuisance parameters

When $\theta$ is partitioned as $\theta=(\psi,\lambda)$, where $\psi$ is a scalar parameter of interest and $\lambda$ is a $(d-1)$-dimensional nuisance parameter, in order to improve on the first-order approximation (10) of $\delta_H$, applying the tail area argument to the marginal posterior density we can derive the $O(n^{-3/2})$ approximation (see, e.g., [17] and [23])
$$P_m(\psi\geq\psi_0\mid y) \mathrel{\ddot=} \Phi(r^*_B(\psi_0)), \qquad (14)$$
where
$$r^*_B(\psi)=r_p(\psi)+\frac{1}{r_p(\psi)}\log\frac{q_B(\psi)}{r_p(\psi)},$$
with $r_p(\psi)=\mathrm{sign}(\hat\psi-\psi)\,[2(\ell_p(\hat\psi)-\ell_p(\psi))]^{1/2}$ the profile likelihood root and
$$q_B(\psi)=\ell_p^{(1)}(\psi)\,|j_p(\hat\psi)|^{-1/2}\,\frac{|j_{\lambda\lambda}(\psi,\hat\lambda_\psi)|^{1/2}}{|j_{\lambda\lambda}(\hat\psi,\hat\lambda)|^{1/2}}\,\frac{\pi(\hat\psi,\hat\lambda)}{\pi(\psi,\hat\lambda_\psi)}.$$
In the expression of $q_B(\psi)$, $\ell_p^{(1)}(\psi)$ is the profile score function and $j_{\lambda\lambda}(\psi,\lambda)$ is the $(\lambda,\lambda)$-block of the observed information $j(\psi,\lambda)$. Using the tail area approximation (14), a third-order approximation of the BDM (7) can be computed as
$$\delta_H \mathrel{\ddot=} 1-2\min\{\Phi(r^*_B(\psi_0)),\,1-\Phi(r^*_B(\psi_0))\} = 2\,\Phi(|r^*_B(\psi_0)|)-1. \qquad (15)$$
Note that the higher-order approximation (15) does not impose any condition on the prior $\pi(\psi,\lambda)$, i.e. it can also be improper. Note also that, using $r^*_B(\psi)$, a $(1-\alpha)$ equi-tailed credible interval for $\psi$ can be computed as $CI=\{\psi:|r^*_B(\psi)|\leq z_{1-\alpha/2}\}$. Moreover, from (14), the posterior median of (8) can be computed as the solution in $\psi$ of the estimating equation $r^*_B(\psi)=0$.

3.2.1 Approximations with matching priors

The order of the approximations in the previous sections refers to the posterior distribution function, and may depend more or less strongly on the choice of prior. A so-called strong matching prior (see [6],
and references therein) ensures that a frequentist p-value coincides with a Bayesian posterior survivor probability to a high degree of approximation, in the marginal posterior density (8).

Welch and Peers [25] showed that for a scalar parameter $\theta$ the Jeffreys prior is probability matching, in the sense that posterior survivor probabilities agree with frequentist probabilities, and credible intervals of a chosen width coincide with frequentist confidence intervals. With the Jeffreys prior we have
$$q(\theta)=\ell^{(1)}(\theta)\, j(\hat\theta)^{-1/2}\,\frac{i(\hat\theta)^{1/2}}{i(\theta)^{1/2}},$$
and the corresponding $r^*(\theta)$ coincides with the frequentist modified likelihood root of [2]. In this case, using the tail area approximation (12), a third-order approximation of the BDM of the hypothesis $H_0:\theta=\theta_0$ coincides with $1-p^*$, where $p^*$ is the p-value based on $r^*(\theta)$. Thus, when using the Jeffreys prior and higher-order asymptotics in the scalar case, Bayesian and frequentist hypothesis testing agree.

In the presence of nuisance parameters, following [23], when using a strong matching prior, the marginal posterior density can be written as
$$\pi_m(\psi\mid y) \mathrel{\ddot\propto} \exp\left\{-\frac{1}{2}r_p^*(\psi)^2\right\}\frac{s_p(\psi)}{r_p(\psi)}, \qquad (16)$$
where $s_p(\psi)=\ell_p^{(1)}(\psi)/j_p(\hat\psi)^{1/2}$ is the profile score statistic. Moreover, the tail area of the marginal posterior for $\psi$ can be approximated to third order as
$$P_m(\psi\geq\psi_0\mid y) \mathrel{\ddot=} \Phi(r_p^*(\psi_0)), \qquad (17)$$
where $r_p^*(\psi)$ is the modified profile likelihood root
$$r_p^*(\psi)=r_p(\psi)+\frac{1}{r_p(\psi)}\log\frac{q_p(\psi)}{r_p(\psi)}, \qquad (18)$$
which has a third-order standard normal null distribution. In (18), the quantity $q_p(\psi)$ is a suitably defined correction term (see, e.g., [2] and [20], Chapter 9). A remarkable advantage of (16) and (17) is that their expressions automatically include the matching prior, without requiring its explicit computation.

Using (17), an asymptotic equi-tailed credible interval for $\psi$ can be computed as $CI=\{\psi:|r_p^*(\psi)|\leq z_{1-\alpha/2}\}$, i.e., as a confidence interval for $\psi$ based on (18) with approximate level $(1-\alpha)$.
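The exact matching property can be checked by simulation. For the normal mean with known unit variance and the flat (Jeffreys) prior, the posterior is $N(\bar y, 1/n)$, so $\delta_H = 2\Phi(\sqrt{n}\,|\bar y-\theta_0|)-1$ is exactly Uniform$(0,1)$ under $H_0$ for every $n$; a Monte Carlo sanity check with illustrative settings:

```python
# Monte Carlo sanity check (illustrative settings): with a matching prior,
# delta_H is exactly Uniform(0,1) under H0. Here: normal mean, known unit
# variance, flat Jeffreys prior, so delta_H = 2*Phi(sqrt(n)*|ybar-theta0|)-1.
import math
import random

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(1)
n, theta0, reps = 5, 0.0, 20000
deltas = []
for _ in range(reps):
    ybar = sum(random.gauss(theta0, 1.0) for _ in range(n)) / n  # data under H0
    deltas.append(2.0 * Phi(math.sqrt(n) * abs(ybar - theta0)) - 1.0)

mean = sum(deltas) / reps                          # should be near 1/2
frac_below = sum(d < 0.25 for d in deltas) / reps  # should be near 1/4
```

The empirical mean and quartile frequency are close to the Uniform$(0,1)$ values even at $n=5$, illustrating the "for all sample sizes" part of the matching-prior property.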
Note from (17) that the posterior median of $\pi_m(\psi\mid y)$ can be computed as the solution in $\psi$ of the estimating equation $r_p^*(\psi)=0$, and thus it coincides with the frequentist estimator defined as the zero-level confidence interval based on $r_p^*(\psi)$. Such an estimator has been shown to be a refinement of the MLE $\hat\psi$. Using the tail area approximation (17), a third-order approximation of the BDM of the hypothesis $H_0:\psi=\psi_0$ is
$$\delta_H^* \mathrel{\ddot=} 1-2\min\{\Phi(r_p^*(\psi_0)),\,1-\Phi(r_p^*(\psi_0))\} = 2\,\Phi(|r_p^*(\psi_0)|)-1. \qquad (19)$$
In this case, (19) coincides with $1-p_r^*$, where $p_r^*$ is the p-value based on (18). Thus, when using strong matching priors and higher-order asymptotics, Bayesian and frequentist hypothesis testing, point estimation and interval estimation agree.

From a practical point of view, the computation of (19) can be easily carried out using the likelihoodAsy package [15] of the statistical software R. The advantage of this package is that it does not require the function $q_p(\psi)$ explicitly, but only code for computing the loglikelihood function and for generating data from the assumed model. Some examples can be found in [19].

3.3 Multidimensional parameters

When $\theta$ is multidimensional, a first-order tail area approximation and a first-order approximation for $\delta_H$ are still simple to derive
starting from the Laplace approximation of the posterior distribution. In particular, let $W(\theta)=2(\ell(\hat\theta)-\ell(\theta))$ be the loglikelihood ratio for $\theta$. Using $W(\theta)$, a first-order approximation of the BDM for the hypothesis $H_0:\theta=\theta_0$ can be obtained as
$$\delta_H \doteq 1-P(\chi^2_d \geq W(\theta_0)), \qquad (20)$$
where $\chi^2_d$ is the Chi-squared distribution with $d$ degrees of freedom. This approximation is asymptotically equivalent to the first-order approximation
$$\delta_H \doteq 1-P\left(\chi^2_d \geq (\theta_0-\hat\theta)^{\top} j(\hat\theta)(\theta_0-\hat\theta)\right). \qquad (21)$$
Higher-order approximations based on modifications of the loglikelihood ratio are available also for multidimensional parameters of interest, both with and without nuisance parameters (see [20], [21] and [23], and references therein). As in the scalar parameter case, these results are based on the asymptotic theory of modified loglikelihood ratios [21], they require only routine maximization output for their implementation, and they are constructed for arbitrary prior distributions. For instance, paralleling the scalar parameter case, a credible region for a $d$-dimensional parameter of interest $\theta$ with approximately $100(1-\alpha)\%$ coverage in repeated sampling can be computed as $CR=\{\theta : W^*(\theta)\leq \chi^2_{d;1-\alpha}\}$, where $W^*(\theta)$ is a suitable modification of the loglikelihood ratio $W(\theta)$ or of the profile loglikelihood ratio (see [20] and [21]), and $\chi^2_{d;1-\alpha}$ is the $(1-\alpha)$ quantile of the $\chi^2_d$ distribution. In practice, the region $CR$ can be interpreted as the extension to the multidimensional case of the equi-tailed set $CI$, i.e. it is computed as the multidimensional counterpart of $CI$ based on the Chi-squared approximation. As in the scalar case, the region $CR$ can reflect departures from symmetry with respect to the first-order approximation based on the Wald statistic. Some simulation studies on $CR$ based on $W^*(\theta)$ can be found in [24]. Using $W^*(\theta)$, a higher-order approximation of the BDM for the hypothesis $H_0:\theta=\theta_0$ can be obtained as
$$\delta_H \mathrel{\ddot=} 1-P(\chi^2_d \geq W^*(\theta_0)). \qquad (22)$$
The major drawback of this approximation is that the signed root loglikelihood ratio transformation $W^*(\theta)$ in general depends on the chosen parameter ordering. Moreover, its computation can be cumbersome when $d$ is large.

4 Beyond Gaussian II: skewed approximations

A major limitation of standard first-order Gaussian approximations, such as (5) and (10), is their reliance on symmetric densities, which simplifies inference but can misrepresent key posterior features like skewness and heavy tails. Indeed, even simple parametric models can yield asymmetric posteriors, leading to biased and inaccurate approximations.

To overcome this, recent work has introduced flexible families of approximating posterior densities that can capture shape and skewness ([5, 26, 22]). In particular, [5] develop a class of closed-form deterministic approximations using a third-order extension of the Laplace approximation. This approach yields tractable skewed approximations that better capture the actual shape of the target posterior while remaining computationally efficient.

The skewed approximations, like the higher-order approximations discussed in Section 3, rely on higher-order expansions and derivatives. They start from a symmetric Gaussian approximation, but centered at the Maximum a Posteriori (MAP) estimate, and introduce skewness through the Gaussian distribution function combined with a cubic term driven by the third derivative of the loglikelihood function.

4.1 Scalar case

Let us denote by $\ell^{(k)}(\theta)$ the $k$-th
derivative of the loglikelihood $\ell(\theta)$, i.e. $\ell^{(k)}(\theta)=\partial^k\ell(\theta)/\partial\theta^k$, $k=1,2,3,\dots$. Moreover, let $\tilde\theta=\mathrm{argmax}_{\theta\in\Theta}\{\ell(\theta)+\log\pi(\theta)\}$ be the MAP estimate of $\theta$, and let $h=\sqrt{n}(\theta-\tilde\theta)$ be the rescaled parameter. Using result (14) of [5] and all the regularity conditions stated there, the skew-symmetric (SKS) approximation of the posterior density for $\theta$ is
$$\pi_{SKS}(\theta\mid y)\propto 2\,\phi(h;0,\tilde\omega)\,\Phi(\tilde\alpha(h)), \qquad (23)$$
where $\phi(h;0,\tilde\omega)$ is the normal density function with mean 0 and variance $\tilde\omega=n\,j(\tilde\theta)^{-1}$, and
$$\tilde\alpha(h)=\frac{\ell^{(3)}(\tilde\theta)\sqrt{2\pi}}{12\,n^{3/2}}\,h^3$$
is the skewness component, expressed as a cubic function of $h$, reflecting the influence of the third derivative of the loglikelihood on the shape of the posterior distribution.

Equation (23) provides a practical skewed second-order approximation of the target posterior density, centered at its mode. This approach is known as the SKS, or skew-modal, approximation. Compared to the classical first-order Gaussian approximation derived from the Laplace method, the SKS approximation remains similarly tractable while providing significantly greater accuracy. Note that this approximation depends on the prior distribution through the MAP.

Using (23) and the expansion
$$2\,\phi(h;0,\tilde\omega)\left(\frac{1}{2}+\frac{1}{\sqrt{2\pi}}\tilde\alpha(h)\right) = 2\,\phi(h;0,\tilde\omega)\,\Phi(\tilde\alpha(h)) + O(n^{-1}),$$
we can derive the approximation
$$P_{SKS}(\theta\geq\theta_0\mid y)=\frac{\displaystyle\int_{h_0}^{\infty}2\,\phi(h;0,\tilde\omega)\left(\frac{1}{2}+\frac{1}{\sqrt{2\pi}}\tilde\alpha(h)\right)dh}{\displaystyle\int_{-\infty}^{\infty}2\,\phi(h;0,\tilde\omega)\left(\frac{1}{2}+\frac{1}{\sqrt{2\pi}}\tilde\alpha(h)\right)dh}$$
for the tail area (4), where $h_0=\sqrt{n}(\theta_0-\tilde\theta)$. Note that the denominator is simply equal to 1, due to the symmetry of $\phi(\cdot)$ and the oddness of $\tilde\alpha(h)$. The numerator can be split into two integrals:
$$\int_{h_0}^{\infty}2\,\phi(h;0,\tilde\omega)\left(\frac{1}{2}+\frac{1}{\sqrt{2\pi}}\tilde\alpha(h)\right)dh=\int_{h_0}^{\infty}\phi(h;0,\tilde\omega)\,dh+\frac{\sqrt{2}}{\sqrt{\pi}}\int_{h_0}^{\infty}\phi(h;0,\tilde\omega)\,\tilde\alpha(h)\,dh.$$
The first integral is the standard Gaussian tail
$$\int_{h_0}^{\infty}\phi(h;0,\tilde\omega)\,dh=1-\Phi\!\left(\frac{h_0}{\sqrt{\tilde\omega}}\right),$$
while the second involves the skewness term and can be expressed as
$$\frac{\sqrt{2}}{\sqrt{\pi}}\int_{h_0}^{\infty}\phi(h;0,\tilde\omega)\,\tilde\alpha(h)\,dh=\frac{\ell^{(3)}(\tilde\theta)}{6\,n^{3/2}}\int_{h_0}^{\infty}h^3\phi(h;0,\tilde\omega)\,dh.$$
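The remaining Gaussian integral admits a closed form via the identity $\int_{z_0}^{\infty}z^3\phi(z)\,dz=\phi(z_0)(z_0^2+2)$, which follows from $\frac{d}{dz}\left[-\phi(z)(z^2+2)\right]=z^3\phi(z)$. A quick stdlib-only numerical check of this identity:

```python
# Numerical check (midpoint rule) of the Gaussian integral identity
#   int_{z0}^inf z^3 phi(z) dz = phi(z0) * (z0^2 + 2),
# which follows from d/dz[ -phi(z)*(z^2 + 2) ] = z^3 * phi(z).
import math

def std_phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def lhs(z0, upper=12.0, steps=200000):
    """Midpoint-rule approximation of the truncated integral."""
    h = (upper - z0) / steps
    return h * sum((z0 + (i + 0.5) * h) ** 3 * std_phi(z0 + (i + 0.5) * h)
                   for i in range(steps))

def rhs(z0):
    return std_phi(z0) * (z0 * z0 + 2.0)

checks = [abs(lhs(z0) - rhs(z0)) for z0 in (-1.5, 0.0, 0.7, 2.0)]
```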
Substituting $z=h/\sqrt{\tilde\omega}$ into the integral $\int_{h_0}^{\infty}h^3\phi(h;0,\tilde\omega)\,dh$, we have
$$\int_{h_0}^{\infty}h^3\phi(h;0,\tilde\omega)\,dh=\int_{h_0}^{\infty}h^3\,\frac{1}{\sqrt{2\pi\tilde\omega}}\exp\!\left(-\frac{h^2}{2\tilde\omega}\right)dh=\tilde\omega^{3/2}\int_{h_0/\sqrt{\tilde\omega}}^{\infty}z^3\,\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{z^2}{2}\right)dz.$$
Using the identity $\int_{z_0}^{\infty}z^3\phi(z;0,1)\,dz=\phi(z_0;0,1)(z_0^2+2)$, with $z_0=h_0/\sqrt{\tilde\omega}$, and $\int_{-\infty}^{z_0}z^3\phi(z;0,1)\,dz=-\phi(z_0;0,1)(z_0^2+2)$, we obtain
$$\int_{h_0}^{\infty}h^3\phi(h;0,\tilde\omega)\,dh=\tilde\omega^{3/2}\,\phi\!\left(\frac{h_0}{\sqrt{\tilde\omega}};0,1\right)\left(\frac{h_0^2}{\tilde\omega}+2\right).$$
The resulting SKS approximation to $P(\theta\geq\theta_0\mid y)$ is then
$$P_{SKS}(\theta\geq\theta_0\mid y)=1-\Phi\!\left(\frac{h_0}{\sqrt{\tilde\omega}}\right)+\frac{\ell^{(3)}(\tilde\theta)}{6\,n^{3/2}}\,\tilde\omega^{3/2}\,\phi\!\left(\frac{h_0}{\sqrt{\tilde\omega}};0,1\right)\left(\frac{h_0^2}{\tilde\omega}+2\right).$$
Finally, substituting this approximation into (2), we get the SKS approximation of the BDM, given by
$$\delta_H^{SKS}=2\,\Phi\!\left(\frac{h_0}{\sqrt{\tilde\omega}}\right)-2\,\mathrm{sign}(h_0)\,\frac{\ell^{(3)}(\tilde\theta)}{6\,n^{3/2}}\,\tilde\omega^{3/2}\,\phi\!\left(\frac{h_0}{\sqrt{\tilde\omega}};0,1\right)\left(\frac{h_0^2}{\tilde\omega}+2\right)-1. \qquad (24)$$
Note that the first term of this approximation differs from that in (5), since it is evaluated at the MAP and not at the MLE.

4.2 Nuisance parameters

As in Subsection 2.2, suppose that the parameter is partitioned as $\theta=(\psi,\lambda)$, where $\psi$ is a scalar parameter of interest and $\lambda$ a nuisance parameter of dimension $d-1$. A SKS approximation is also available for the marginal posterior distribution $\pi_m(\psi\mid y)$ (see [5], Section 4.2). Adopting index notation, let us denote by $j(\theta)=-[\ell^{(2)}_{st}(\theta)]$ the observed Fisher information matrix, where $\ell^{(2)}_{st}(\theta)=\partial^2\ell(\theta)/\partial\theta_s\partial\theta_t$, $s,t=1,\dots,d$, and let $\Omega=(j(\tilde\theta)/n)^{-1}$ be the inverse of the scaled observed Fisher information matrix evaluated at the MAP. We denote the elements of $\Omega$ by $\Omega_{st}$, and in particular we denote by $\Omega_{11}$ the element corresponding to the parameter of interest $\psi$. Moreover, let us denote by $\ell^{(3)}_{stl}(\theta)=\partial^3\ell(\theta)/\partial\theta_s\partial\theta_t\partial\theta_l$ the elements of the third derivative of the loglikelihood, with $s,t,l=1,\dots,d$. Finally, let us define the two quantities
$$v_{1,1}=3\sum_{i=1}^{d}\sum_{j=1}^{d}\ell^{(3)}_{1ij}(\tilde\theta)\,\Omega_{ij}+3\sum_{i=1}^{d}\sum_{j=1}^{d}\sum_{k=1}^{d}\ell^{(3)}_{ijk}(\tilde\theta)\,\Omega_{ij}\Omega_{k1}$$
and
$$v_{3,111}=\ell^{(3)}_{111}(\tilde\theta)+3\sum_{i=1}^{d}\ell^{(3)}_{11i}(\tilde\theta)\,\Omega_{i1}+3\sum_{i=1}^{d}\sum_{j=1}^{d}\ell^{(3)}_{1ij}(\tilde\theta)\,\Omega_{ij}\Omega_{j1}+\sum_{i=1}^{d}\sum_{j=1}^{d}\sum_{k=1}^{d}\ell^{(3)}_{ijk}(\tilde\theta)\,\Omega_{ij}\Omega_{k1}\Omega_{11}.$$
Then, following formula (23) in [5], the SKS approximation of the marginal posterior density $\pi_m(\psi\mid y)$ can be expressed as
$$\pi_{mSKS}(\psi\mid y)\propto 2\,\phi(h_\psi;0,\Omega_{11})\,\Phi(\alpha_\psi(h_\psi)), \qquad (25)$$
where $h_\psi=\sqrt{n}(\psi-\tilde\psi)$ is the rescaled parameter of interest, $\phi(\cdot;0,\Omega_{11})$ is the density of a Gaussian distribution with mean 0 and variance $\Omega_{11}$, and the skewness component $\alpha_\psi(h_\psi)$ is defined as
$$\alpha_\psi(h_\psi)=\frac{\sqrt{2\pi}}{12\,n^{3/2}}\left(v_{1,1}\,h_\psi+v_{3,111}\,h_\psi^3\right).$$
Using (25), we can derive the SKS tail area approximation of (9), given by
$$P_{mSKS}(\psi\geq\psi_0\mid y)=\int_{h_{\psi_0}}^{\infty}2\,\phi(h_\psi;0,\Omega_{11})\,\Phi(\alpha_\psi(h_\psi))\,dh_\psi,$$
where $h_{\psi_0}=\sqrt{n}(\psi_0-\tilde\psi)$. Finally, the marginal SKS approximation of the BDM is given by
$$\delta_H^{mSKS}=1-2\min\{P_{mSKS}(\psi\geq\psi_0\mid y),\,1-P_{mSKS}(\psi\geq\psi_0\mid y)\}. \qquad (26)$$
The marginal SKS tail area approximation $P_{mSKS}(\psi\geq\psi_0\mid y)$, and thus also $\delta_H^{mSKS}$, can be computed numerically.

4.3 Multidimensional parameters

While the SKS approximation is theoretically elegant, similarly to the higher-order modification of the loglikelihood ratio $W^*(\theta)$, it has two main drawbacks. The first is that it relies only on local information around the mode. The second is that it is computationally intensive, because it relies on third-order derivatives (i.e., a tensor of derivatives) of the loglikelihood.
The size of this derivative tensor increases cubically with the number of parameters, leading to substantial memory and computational demands, particularly in models with many parameters. Furthermore, quantities such as the moments, marginal distributions, and quantiles of the SKS approximation are not available in closed form, even in the scalar case. To address these challenges, [26] propose a class of approximations based on the standard skew-normal (SN) distribution. Their method matches posterior derivatives, aiming to preserve the ability to model skewness while employing more computationally tractable structures. It uses local information around the MAP by matching the mode $m$, the negative Hessian at the mode, i.e. $j(\tilde\theta)$, and the vector $t \in \mathbb{R}^d$ of third-order unmixed derivatives of the log-posterior. The goal is to find the parameters of the multivariate SN distribution $SN_d(\xi, \Omega, \alpha)$ that best match these quantities. The notation $SN_d(\xi, \Omega, \alpha)$ indicates a $d$-dimensional SN distribution (see, e.g., [1] and references therein), with location parameter $\xi$, scale matrix $\Omega$, and shape parameter $\alpha$. The matching equations are given by
$$0 = -\Omega^{-1}(m - \xi) + \zeta_1(\kappa)\,\alpha, \qquad j(\tilde\theta) = \Omega^{-1} - \zeta_2(\kappa)\,\alpha\alpha^\top, \qquad t = \zeta_3(\kappa)\,\alpha^{\circ 3}, \qquad \kappa = \alpha^\top(m - \xi),$$
where $\zeta_k(\kappa)$ denotes the $k$-th derivative of $\log\Phi(\kappa)$ and $\alpha^{\circ 3}$ denotes the element-wise (Hadamard) third power of $\alpha$. The solution proceeds by reducing the system to a one-dimensional root-finding problem in $\kappa$, after which
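The functions $\zeta_k$ admit well-known closed forms: $\zeta_1(\kappa) = \varphi(\kappa)/\Phi(\kappa)$, $\zeta_2(\kappa) = -\zeta_1(\kappa)(\kappa + \zeta_1(\kappa))$, and $\zeta_3(\kappa) = -\zeta_2(\kappa)(\kappa + \zeta_1(\kappa)) - \zeta_1(\kappa)(1 + \zeta_2(\kappa))$. A small Python sketch checks these against central finite differences of $\log\Phi$ at an arbitrary test point (the step size and tolerances below are illustrative choices):

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def zeta1(k):
    return phi(k) / Phi(k)

def zeta2(k):
    return -zeta1(k) * (k + zeta1(k))

def zeta3(k):
    return -zeta2(k) * (k + zeta1(k)) - zeta1(k) * (1.0 + zeta2(k))

def logPhi(x):
    return math.log(Phi(x))

# Independent check: central finite differences of log Phi.
k, h = 0.8, 1e-3
d1 = (logPhi(k + h) - logPhi(k - h)) / (2 * h)
d2 = (logPhi(k + h) - 2 * logPhi(k) + logPhi(k - h)) / h ** 2
d3 = (logPhi(k + 2 * h) - 2 * logPhi(k + h) + 2 * logPhi(k - h) - logPhi(k - 2 * h)) / (2 * h ** 3)
print(abs(d1 - zeta1(k)), abs(d2 - zeta2(k)), abs(d3 - zeta3(k)))  # all tiny
```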
$\alpha$, $\Omega$, and $\xi$ can be obtained analytically. Ultimately, the marginal distributions are available in closed form as well. Given its tractability, we adopt the derivative-matching approach proposed by [26] to derive SKS approximations for models with multidimensional parameters. For the SN model, moreover, we can easily define multivariate quantiles. As suggested in [7, 8], an effective approach to defining quantiles in the multidimensional case is to identify the Optimal Transport (OT) map between the spherical uniform distribution and the desired multivariate SN distribution. Considering the inherent relationship between the standard multivariate Gaussian distribution and the spherical uniform distribution, we explore the OT map linking a multivariate SN distribution to a multivariate standard normal distribution. Indeed, given a multivariate standard normal $S$ in $\mathbb{R}^d$, it is well known that $U = S/\|S\|$ is uniformly distributed on the sphere of radius $\sqrt{d}$ in $\mathbb{R}^d$. Furthermore, $2(\Phi(\|S\|) - 0.5)$ is uniform in $(0, 1)$. Thus, the OT map and the quantiles of the multivariate standard Gaussian are coherently defined as a bijection of the norm of the multivariate standard normal vector $S$ (the distance from the origin). In particular, we use the canonical multivariate SN distribution, obtained by applying a rotational transformation, and we consider a component-wise transformation based on the univariate SN distribution function and the standard normal quantile function, which defines a transport map represented as the gradient of a convex function. From $X \sim SN_d(\xi, \Omega, \alpha)$, let $\delta = \Omega\alpha/\sqrt{1 + \alpha^\top\Omega\alpha}$. We define a rotation $T_1(X) = QX$ by means of a matrix $Q \in \mathbb{R}^{d\times d}$ such that:

- $Z = Q^\top(X - \xi)$ aligns the skewness with the first coordinate;
- in the rotated space, $Z_1 \sim SN_1(0, \omega^2, \|\alpha\|)$, with $\omega^2 = [Q^\top\Omega Q]_{1,1}$, and $Z_{2:d}$ are Gaussian.

The matrix $Q$ is obtained by applying a (rectangular) QR decomposition to the vector $\alpha$. The vector of means is $E(Z) = Q^\top\delta\,\sqrt{2/\pi}$ and the covariance matrix is $V = Q^\top\bigl(\Omega - \tfrac{2}{\pi}\,\delta\delta^\top\bigr)Q$.
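The rotation step can be illustrated in two dimensions: the orthogonal factor of a QR decomposition of the $d \times 1$ matrix $\alpha$ has $\alpha/\|\alpha\|$ as its first column, so $Q^\top$ maps $\alpha$ to $(\|\alpha\|, 0)^\top$, aligning the skewness direction with the first coordinate. A pure-Python sketch with a hypothetical shape vector (a numerical library would obtain $Q$ via its QR routine):

```python
import math

# Hypothetical 2-dimensional shape parameter (illustrative values only).
alpha = [3.0, -4.0]
norm = math.hypot(alpha[0], alpha[1])  # ||alpha|| = 5

# Q's first column is alpha/||alpha||; the second column completes an
# orthonormal basis, as in the orthogonal factor of a QR decomposition.
Q = [[alpha[0] / norm, -alpha[1] / norm],
     [alpha[1] / norm,  alpha[0] / norm]]

# Q^T alpha should equal (||alpha||, 0): skewness aligned with coordinate 1.
Qt_alpha = [Q[0][0] * alpha[0] + Q[1][0] * alpha[1],
            Q[0][1] * alpha[0] + Q[1][1] * alpha[1]]
print(Qt_alpha)  # close to [5.0, 0.0]
```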
Moreover, the scale parameter of $Z_1$ is $\sigma = \sqrt{[Q^\top\Omega Q]_{1,1}}$, and we denote by $\mu_1 = E[Z_1]$ and $V_1 = \mathrm{Var}(Z_1)$ its mean and variance.

We define the transport map $T_2(X)$ in the rotated space as
$$T_2(X) = \begin{pmatrix} \Phi^{-1}\bigl(F_{SN}(X_1; 0, \sigma^2, Q^\top\alpha);\ \mu_1, V_1\bigr) \\ X_2 \\ \vdots \\ X_d \end{pmatrix},$$
where $F_{SN}(\cdot)$ is the univariate SN cumulative distribution function and $\Phi^{-1}(\cdot)$ is the standard normal quantile function. In practice, we transform the first component using the univariate SN cumulative distribution function ($F_{SN}$) and the standard normal quantile function ($\Phi^{-1}$) to remove its skewness, while leaving the other components unchanged. Note that the SN distribution is closed under linear transformations; in particular, after the rotation, the skewness of the variable $Z$ becomes $Q^\top\alpha$ (see [1]). The variable $Z' = T_2(Z)$ is now approximately multivariate normal. Finally, we apply an affine transformation to standardize the result. More precisely, consider
$$T_3(X) = V^{-1/2}\bigl(X - Q^\top\delta\,\sqrt{2/\pi}\bigr),$$
and set $U = T_3(Z')$. The resulting $U$ is distributed as a standard normal (see Figure 1). It follows that, using the SN approximation $\pi_{SN}(\theta \mid y)$ for the posterior distribution of $\theta$, the SN approximation of the BDM can be expressed as
$$\delta_H^{SN} = 1 - \Pr(\chi^2_d \geq \|T(\theta_0)\|), \quad (27)$$
where $T(x) = T_3 \circ T_2 \circ T_1$. The map $T$ is the OT map, as it is the gradient of a convex function. In particular, $T_1$ and $T_3$ are affine transformations, and the function $\Phi^{-1}(F_{SN}(z; \xi, \omega, \alpha))$ is monotonically increasing in $z$,
hence its integral is convex. Defining
$$g(Z) = \int_0^{Z_1} \Phi^{-1}\bigl(F_{SN}(t; \xi, \omega, \alpha)\bigr)\,dt + \frac{1}{2}\sum_{i=2}^{d} Z_i^2,$$
then $T_2(Z) = \nabla g(Z)$. The composite map $T(\cdot)$, used in (27), is the gradient of a convex function and thus represents the optimal transport map (under quadratic cost) from an SN distribution to a standard normal.

5 Examples of higher-order and of skewed approximations

In the following, we focus on assessing the performance of the higher-order approximations and of the skewed approximations of the BDM in two examples, discussed also in [3] and in [5].

5.1 Exponential model

We revisit Example 1 in [3], where the model for data $y_1, \ldots, y_n$ is an exponential distribution with scale parameter $\theta$, meaning $E(Y) = \theta$. Employing Jeffreys' prior, $\pi(\theta) \propto \theta^{-1}$, the resulting posterior distribution is an Inverse Gamma with shape and rate parameters equal to $n$ and $t_n$, respectively, where $t_n = \sum_i y_i$. The quantities for the SKS approximation of the posterior distribution are available in [5] (see Section 3.1), while for the higher-order approximation we have that $q(\theta)$ coincides with the score statistic, i.e. $q(\theta) = \ell^{(1)}(\theta)/i(\theta)^{1/2}$. We analyze how well the two approximations align with the true BDM for growing sample sizes ($n = 6, 12, 20, 40$), keeping the MLE fixed at $\hat\theta = 1.2$. The MAP is 1.03 ($n = 6$), 1.11 ($n = 12$), 1.14 ($n = 20$), 1.17 ($n = 40$).

Figure 1: First panel: original SN approximation of a bivariate posterior distribution, with the mode in red and the skewness direction indicated by the black line. Second panel: rotated SN distribution aligning the skewness with the first coordinate; red dashed lines show quantiles of the first rotated component. Third panel: symmetrized distribution after applying a univariate marginal transformation.
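For the exponential model just described, the exact BDM can be computed directly: if $\theta \mid y \sim \mathrm{IG}(n, t_n)$, then $1/\theta \sim \mathrm{Gamma}(n, t_n)$, whose CDF for integer shape $n$ is the Erlang formula $1 - e^{-t_n x}\sum_{k=0}^{n-1} (t_n x)^k/k!$. A Python sketch reproducing part of the true-BDM row for $n = 6$ (MLE fixed at $\hat\theta = t_n/n = 1.2$, so $t_n = 7.2$):

```python
import math

def gamma_cdf_int_shape(x, shape, rate):
    # Erlang CDF: P(X <= x) for X ~ Gamma(shape, rate) with integer shape.
    lam = rate * x
    return 1.0 - math.exp(-lam) * sum(lam ** k / math.factorial(k) for k in range(shape))

def bdm_exponential(n, t_n, theta0):
    # theta | y ~ InvGamma(n, t_n), hence P(theta >= theta0) = P(1/theta <= 1/theta0)
    # with 1/theta ~ Gamma(n, rate t_n).
    p = gamma_cdf_int_shape(1.0 / theta0, n, t_n)
    return 1.0 - 2.0 * min(p, 1.0 - p)

n, t_n = 6, 7.2  # MLE = t_n / n = 1.2
for theta0 in (0.9, 1.2, 1.5):
    print(theta0, f"{bdm_exponential(n, t_n, theta0):.2f}")  # 0.62, 0.11, 0.30
```

These values match the BDM row for $n = 6$ at $\theta_0 = 0.9, 1.2, 1.5$ in Table 1.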
Fourth panel: final standardized and centered normal distribution. Bottom panel: visualization of the Optimal Transport (OT) map.

Figures 2 and 3 and Table 1 report the approximations of the BDM for several candidate values of $\theta_0$. In particular, the first-order (IO) approximation (5), the higher-order (HO) approximation (13), the SKS approximation (24), a direct numerical tail-area calculation (SKS-num) of (23), and the SN approximation (27) are considered. Figures 2 and 3 also display the approximations of the corresponding posterior distributions, where the HO approximation is derived numerically by inverting the tail area. Also, note that the SKS approximation of the BDM is not guaranteed to lie in $(0, 1)$, so in practice we bound it to this interval.

The results confirm that the HO and the SKS approximations yield remarkable improvements over the first-order counterpart for every $n$. Moreover, they show that the HO approximation of the BDM is almost perfectly superimposed on the true BDM, especially for values of $\theta_0$ far from the MLE. When the value under the null hypothesis is closer to the MLE, the SKS approximation, the numerical tail area from the SKS, and
the SN approximation approximate the true BDM more closely. Furthermore, the SN approximation more accurately captures the tail behavior of the posterior distribution than the SKS approximation.

θ0            0.3    0.6    0.9    1.2    1.5    1.8    2.1    2.4

n = 6
IO            0.93   0.78   0.46   0.00   0.46   0.78   0.93   0.99
HO            1.00   0.96   0.62   0.00   0.30   0.57   0.73   0.83
SKS           1.00   1.00   0.80   0.20   0.32   0.73   0.94   0.99
SKS-num       1.00   0.94   0.53   0.07   0.58   0.91   0.99   1.00
SN            1.00   0.94   0.52   0.06   0.51   0.78   0.91   0.97
BDM           1.00   0.96   0.62   0.11   0.30   0.57   0.73   0.83

n = 12
IO            0.99   0.92   0.61   0.00   0.61   0.92   0.99   1.00
HO            1.00   0.99   0.75   0.00   0.48   0.78   0.91   0.96
SKS           1.00   1.00   0.74  -0.00   0.61   0.91   0.99   1.00
SKS-num       1.00   0.99   0.72   0.01   0.64   0.95   1.00   1.00
SN            1.00   1.00   0.77   0.04   0.62   0.89   0.98   1.00
BDM           1.00   0.99   0.75   0.08   0.48   0.78   0.91   0.96

n = 20
IO            1.00   0.97   0.74   0.00   0.74   0.97   1.00   1.00
HO            1.00   1.00   0.85   0.00   0.62   0.90   0.97   0.99
SKS           1.00   1.00   0.91   0.08   0.66   0.96   1.00   1.00
SKS-num       1.00   1.00   0.84   0.02   0.73   0.98   1.00   1.00
SN            1.00   1.00   0.94   0.02   0.72   0.95   1.00   1.00
BDM           1.00   1.00   0.85   0.06   0.62   0.90   0.97   0.99

n = 40
IO            1.00   1.00   0.89   0.00   0.89   1.00   1.00   1.00
HO            1.00   1.00   0.95   0.00   0.81   0.98   1.00   1.00
SKS           1.00   1.00   0.99   0.05   0.83   1.00   1.00   1.00
SKS-num       1.00   1.00   0.96   0.03   0.87   1.00   1.00   1.00
SN            1.00   1.00   1.00   0.02   0.87   0.99   1.00   1.00
BDM           1.00   1.00   0.95   0.04   0.81   0.98   1.00   1.00

Table 1: BDM for a series of values $\theta_0$ of the parameter and increasing sample sizes in the Exponential example. The values of the true BDM and the best approximation(s) in each configuration are highlighted in bold.

Figure 2: Exact posterior (in green) and approximate posteriors for $n = 6, 12$ in the Exponential model (panels 1-2).
The blue vertical line indicates the posterior median. BDM for a series of parameter values (panels 3-4).

Figure 3: Exact posterior (in green) and approximate posteriors for $n = 20, 40$ in the Exponential model (panels 1-2). The blue vertical line indicates the posterior median. BDM for a series of parameter values (panels 3-4).

5.2 Logistic regression model

We now consider a real-data application on the Cushings dataset (see [5], Section 5.2), openly available in the R library MASS. The data are obtained from a medical study on $n = 27$ individuals, aimed at investigating the relation between Cushing's syndrome and two steroid metabolites, namely Tetrahydrocortisone and Pregnanetriol. We define a binary response variable $Y$, which takes value 1 when the patient is affected by bilateral hyperplasia, and 0 otherwise. The two observed covariates $x_1$ and $x_2$ are two dummy variables representing
the presence of the metabolites. We focus on the most popular regression model for binary data, namely logistic regression with mean function $\mathrm{logit}^{-1}(\beta_0 + \beta_1 x_1 + \beta_2 x_2)$. As in [5], Bayesian inference is carried out by employing independent, weakly informative Gaussian priors $N(0, 25)$ for the coefficients $\beta = (\beta_0, \beta_1, \beta_2)$. Figure 4 displays the marginal posterior distributions for $\beta_1$ and $\beta_2$ obtained via MCMC sampling (black curves), along with the first-order, the SKS, and the SN approximations. The MAP values for the two parameters are -0.031 and -0.286, respectively. We aim to test the two null hypotheses $H_0: \beta_1 = 0$ and $H_0: \beta_2 = 0$, corresponding to the null effect of the metabolites' presence in determining Cushing's syndrome (red vertical lines in Figure 4). The exact BDM gives the values 0.592 and 0.932, respectively, indicating that the hypothesized value may support the null hypothesis for the first parameter $\beta_1$, whereas the second value suggests a weak disagreement with the assumed value for $H_0: \beta_2 = 0$. The SKS approximations of the BDM for the considered hypotheses are 0.612 and 0.935, respectively; the SN approximations are 0.584 and 0.870; the first-order approximations are 0.512 and 0.891; the higher-order approximations provide 0.611 and 0.998; and the approximations based on the matching priors are 0.477 and 0.862. The skewed approximations (SKS, SN) thus provide the best results. For the composite hypothesis $H_0: \beta_1 = \beta_2 = 0$, the ground truth is not available, although in the presence of low correlation between the components one can roughly estimate it as the geometric mean of the two marginal measures, which is 0.743. The first-order approximation of the BDM gives 0.300, while the SN approximation gives 0.760, revealing that the value under the null is more extreme (see also Figure 5).
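As a quick arithmetic check, the rough composite estimate quoted above is indeed the geometric mean of the two exact marginal BDM values:

```python
import math

# Geometric mean of the two exact marginal BDM values reported in the text.
rough_composite = math.sqrt(0.592 * 0.932)
print(round(rough_composite, 3))  # 0.743
```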
6 Concluding remarks

Although the higher-order and skewed approximations described in this paper are derived from asymptotic considerations, they perform well in moderate or even small sample situations. Moreover, they represent an accurate method for computing posterior quantities and for approximating $\delta_H$, and they make it quite straightforward to assess the effect of changing priors (see, e.g., [18]). When using objective Bayesian procedures based on strong matching priors and higher-order asymptotics, there is agreement between Bayesian and frequentist point and interval estimation, and also in significance measures. This is not true in general with the e-value, as discussed in [19].

A significant contribution of this work is the extension to multivariate hypotheses. We proposed a formal definition of the multivariate BDM based on center-outward Optimal Transport maps, providing a theoretically sound generalization of the univariate concept. By utilizing either the multivariate normal or multivariate SN approximations of the posterior distribution, we can formulate the multivariate quantiles in closed form, thereby allowing us to derive the BDM for composite hypotheses. Nonetheless, precisely determining or defining these quantiles on the true posterior is challenging, as the transport map may not be available in closed form and requires solving a complex optimization problem. However, the SN approximation, as well as the derived OT map, continues to be manageable in high-dimensional settings, whereas
typical OT methods generally do not scale efficiently with increasing dimensions.

Figure 4: Marginal posterior distributions for the regression parameters of the logistic regression example. The marginal medians are indicated in blue, while the parameters under the null hypothesis are indicated in red.

Figure 5: Joint posterior for ($\beta_1, \beta_2$) in the logistic regression example with the first-order (IOrder) and skew-normal (SN) approximations. The point (0, 0) is marked with a cross.

As a final remark, the higher-order procedures proposed and described here are tailored to continuous posterior distributions, and their extension to models with discrete or mixed-type parameters warrants further study. Moreover, although the higher-order and skewed methods, alongside SN-based OT maps, offer a useful means for approximating posterior distributions and computing tail areas, their application might fail in handling complex or irregular posterior landscapes. In such cases, employing integrated computational procedures to find the transport map [10] and utilizing the direct definition of the multivariate BDM could be more appropriate.

Abbreviations

The following abbreviations are used in this manuscript:

BDM  Bayesian Discrepancy Measure
BF   Bayes Factor
MAP  Maximum a Posteriori
MLE  Maximum Likelihood Estimate
OT   Optimal Transport
SKS  SKew-Symmetric
SN   Skew-Normal

References

[1] Azzalini, A., Capitanio, A. Statistical applications of the multivariate skew normal distribution. J. Roy. Statist. Soc. B 1999, 61, 579–602.
[2] Barndorff-Nielsen, O.E., Chamberlin, S.R. Stable and invariant adjusted directed likelihoods. Biometrika 1994, 81, 485–499.
[3] Bertolino, F., Manca, M., Musio, M., Racugno, W., Ventura, L. A new Bayesian discrepancy measure. Stat. Meth. & App. 2024, 33, 381–405.
[4] Bertolino, F., Columbu, S., Manca, M., Musio, M. Comparison of two coefficients of variation: a new Bayesian approach. Comm. Statist. - Sim. Comp. 2024, 53, 6260–6273.
[5] Durante, D., Pozza, F., Szabo, B. Skewed Bernstein–von Mises theorem and skew-modal approximations. Ann. Statist. 2024, 52, 2714–2737.
[6] Fraser, D.A.S., Reid, N. Strong matching of frequentist and Bayesian parametric inference. J. Stat. Plan. Inf. 2002, 103, 263–285.
[7] Hallin, M., Del Barrio, E., Cuesta-Albertos, J., Matrán, C. Distribution and quantile functions, ranks and signs in dimension d: A measure transportation approach. Ann. Statist. 2021, 49, 1139–1165.
[8] Hallin, M., Konen, D. Multivariate Quantiles: Geometric and Measure-Transportation-Based Contours. In Applications of Optimal Transport to Economics and Related Topics 2024, 61–78. Cham: Springer Nature Switzerland.
[9] Kass, R.E., Tierney, L., Kadane, J. The validity of posterior expansions based on Laplace's method. In: Bayesian and likelihood methods in statistics and econometrics 1990, 473–488.
[10] Li, K., Han, W., Wang, Y., Yang, Y. Optimal Transport-Based Generative Models for Bayesian Posterior Sampling. 2025, arXiv preprint arXiv:2504.08214.
[11] Madruga, M., Pereira, C., Stern, J. Bayesian evidence test for precise hypotheses. J. Stat. Plan. Inf. 2003, 117, 185–198.
[12] Pereira, C., Stern, J.M. Evidence and Credibility: Full Bayesian Significance Test for Precise Hypotheses. Entropy 1999, 1, 99–110.
[13] Pereira, C., Stern, J.M. The e-value: a fully Bayesian significance measure for precise statistical hypotheses and its
research program. Sao Paulo J. Math. Sci. 2022, 16, 566–584.
[14] Peyré, G., Cuturi, M. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning 2019, 11(5-6), 355–607.
[15] Pierce, D.A., Bellio, R. Modern likelihood-frequentist inference. Int. Stat. Rev. 2017, 85, 519–541.
[16] Reid, N. Likelihood and Bayesian approximation methods. Bayesian Stat. 1995, 5, 351–368.
[17] Reid, N. The 2000 Wald memorial lectures: asymptotics and the theory of inference. Ann. Statist. 2003, 31, 1695–1731.
[18] Reid, N., Sun, Y. Assessing Sensitivity to Priors Using Higher Order Approximations. Comm. Statist. - Th. Meth. 2010, 39, 1373–1386.
[19] Ruli, E., Ventura, L. Can Bayesian, confidence distribution and frequentist inference agree? Stat. Meth. & App. 2021, 30, 359–373.
[20] Severini, T.A. Likelihood methods in statistics. Oxford University Press, Oxford, 2000.
[21] Skovgaard, I.M. Likelihood Asymptotics. Scand. J. Stat. 2001, 28, 3–32.
[22] Tan, L.S., Chen, A. Variational inference based on a subclass of closed skew normals. J. Statist. Comput. Simul. 2024, 1–15.
[23] Ventura, L., Reid, N. Approximate Bayesian computation with modified loglikelihood ratios. Metron 2014, 7, 231–245.
[24] Ventura, L., Ruli, E., Racugno, W. A note on approximate Bayesian credible sets based on modified log-likelihood ratios. Stat. Prob. Lett. 2013, 83, 2467–2472.
[25] Welch, B.L., Peers, H.W. On formulae for confidence points based on integrals of weighted likelihoods. J. Roy. Statist. Soc. B 1963, 25, 318–329.
[26] Zhou, J., Grazian, C., Ormerod, J.T. Tractable skew-normal approximations via matching. J. Statist. Comput. Simul. 2024, 94, 1016–1034.
arXiv:2505.00215v1 [math.ST] 30 Apr 2025

Algebraic Constraints for Linear Acyclic Causal Models

Cole Gigliotti and Elina Robeva

May 8, 2025

Abstract

In this paper we study the space of second- and third-order moment tensors of random vectors which satisfy a Linear Non-Gaussian Acyclic Model (LiNGAM). In such a causal model, each entry $X_i$ of the random vector $X$ corresponds to a vertex $i$ of a directed acyclic graph $G$ and can be expressed as a linear combination of its direct causes $\{X_j : j \to i\}$ and random noise. For any directed acyclic graph $G$, we show that a random vector $X$ arises from a LiNGAM with graph $G$ if and only if certain easy-to-construct matrices, whose entries are second- and third-order moments of $X$, drop rank. This determinantal characterization extends previous results proven for polytrees and generalizes the well-known local Markov property for Gaussian models.

1 Introduction

Structural equation models (SEMs) [MDLW18] capture cause-effect relationships among a set of random variables $\{X_i, i \in V\}$ by hypothesizing that each variable is a noisy function of its direct causes. Given a directed acyclic graph (DAG) $G = (V, E)$ with one random variable $X_i$ associated to each vertex $i \in V$, the directed edges $j \to i \in E$ correspond to direct causes. A linear structural equation model with graph $G$ then hypothesizes that
$$X_i = \sum_{j \to i \in E} \lambda_{ji} X_j + \varepsilon_i, \quad i \in V, \quad (1)$$
where the random variables $\varepsilon_i$ are mutually independent and represent random noise. Classically, the noise terms $\varepsilon_i$ are assumed to be Gaussian, in which case one aims to learn the graph $G$ solely from the covariance matrix of $X$. In this scenario, one can only learn $G$ up to a Markov equivalence class. On the other hand, linear non-Gaussian acyclic models (LiNGAM) [SHHK06] assume that the error terms $\varepsilon_i$ are non-Gaussian. They have sparked a wide range of interest, since they allow one to learn the true directed acyclic graph $G$, rather than its Markov equivalence class, from observational data [SHHK06].
One way of learning the graph $G$, both in the Gaussian and non-Gaussian settings, is via the method of moments. In such methods, one obtains insights about the algebraic structure of the moments of the random vector $X$ for each graph [DM17, Sul18, WD19, WD23, SD23], and then devises an algorithm that utilizes these insights and learns the graph. The algebraic relations that hold among the entries of the covariance matrix of $X$ have been a major topic of interest in the algebraic statistics community [DRW20, Sul08, vOM17, DST13, STD10]. In this work we focus on the set of second- and third-order moments of a random vector $X$ which arises from a LiNGAM for a given DAG $G$. These moments are enough to identify the graph $G$ and to learn it efficiently, even in the high-dimensional setting [WD19]. Specific low-degree determinantal relationships among the second- and third-order moments of a linear non-Gaussian causal model have given rise to efficient algorithms for learning the graph $G$ from data [SRD24, WD19, WD23, DGLN+25], but a complete algebraic characterization of the model of second- and third-order moments is only known when the graph $G$ is a polytree [ADG+23]. We here complete this characterization for any DAG $G$. We show that rank constraints on certain matrices whose entries are second- or
third-order moments of the random vector $X$ uniquely specify the DAG $G$ (see Theorem 3.1). Our constraints contain as a subset the well-known constraints arising from the local Markov property satisfied by the covariance matrix alone [Sul18], and also extend recent work on LiNGAM where $G$ is assumed to be a polytree [ADG+23]. We first illustrate our result in an example.

Figure 1: The complete DAG on 3 vertices.

Example 1.1. Consider the graph $G$ with vertices $V = \{1, 2, 3\}$ and edges $\{1 \to 2, 1 \to 3, 2 \to 3\}$, the simplest DAG which is not a polytree (see Figure 1). Our Theorem 3.1 implies that if $X$ lies in the LiNGAM for some DAG, then it lies in the LiNGAM for this particular DAG $G$ if and only if its set of second-order moments $s_{ij} = E[X_i X_j]$ and third-order moments $t_{ijk} = E[X_i X_j X_k]$ is such that the matrices
$$M_2 = \begin{pmatrix} s_{11} & t_{111} & t_{112} & t_{113} \\ s_{12} & t_{112} & t_{122} & t_{123} \end{pmatrix}, \qquad M_3 = \begin{pmatrix} s_{11} & s_{12} & t_{111} & t_{112} & t_{113} & t_{122} & t_{123} \\ s_{12} & s_{22} & t_{112} & t_{122} & t_{123} & t_{222} & t_{223} \\ s_{13} & s_{23} & t_{113} & t_{123} & t_{133} & t_{223} & t_{233} \end{pmatrix}$$
drop rank, i.e., they have ranks 1 and 2, respectively.

The rest of this paper is organized as follows. We begin in Section 2 with a description of linear non-Gaussian acyclic models, as well as the parametrization they imply for the second- and third-order moments of the random vector $X$. We include a review of relevant prior work on linear Gaussian and non-Gaussian models. In Section 3, we present our main result, Theorem 3.1, which exhibits the constraints that characterize the set of second- and third-order moments corresponding to a given DAG $G$. We note that our result includes the constraints arising from the local Markov property in the Gaussian case [Sul18]. In Section 4, we show how to derive additional polynomial equations from the ones implied by Theorem 3.1. In particular, we show how our result generalizes the characterization of the defining equations for polytrees in the non-Gaussian case [ADG+23].
While previous work [WD19] uses algebraic constraints to recover the sources of a DAG, in Section 6 we show how to use our result for the recovery of sink nodes. We conclude in Section 7 with a discussion and further questions of interest. The proofs of our results are located in Section 5.

2 Background

In this section we introduce the mathematical formulation of our problem and prior work related to it.

2.1 Preliminaries

Let $G = (V, E)$ be a DAG, and let $(X_i, i \in V)$ be a collection of random variables indexed by the vertices in $V$. A vertex $j \in V$ is a parent of a vertex $i$ if there is an edge pointing from $j$ to $i$, i.e., if $(j, i) \in E$, which we will also write as $j \to i \in E$. A vertex $j \in V$ is a non-descendant of a vertex $i$ if there are no directed paths from $i$ to $j$. We denote the set of all parents of vertex $i$ by $\mathrm{pa}(i)$ and the set of all non-descendants of $i$ by $\mathrm{nd}(i)$. As described in (1), the graph $G$ gives rise to the linear structural equation model consisting of the joint distributions of all random vectors $X = (X_i, i \in V)$ such that
$$X_i = \sum_{j \in \mathrm{pa}(i)} \lambda_{ji} X_j + \varepsilon_i, \quad i \in V,$$
where the $\varepsilon_i$ are mutually independent random variables representing stochastic errors. The errors are assumed to have expectation $E[\varepsilon_i] = 0$, finite variance $\omega^{(2)}_i = E[\varepsilon_i^2] > 0$, and
finite third moment $\omega^{(3)}_i = E[\varepsilon_i^3]$. No other assumption about their distribution is made and, in particular, the errors need not be Gaussian (in which case we would have $E[\varepsilon_i^3] = 0$ by symmetry of the Gaussian distribution). The coefficients $\lambda_{ji}$ in (1) are unknown real-valued parameters, and we fill them into a matrix $\Lambda = (\lambda_{ji}) \in \mathbb{R}^{|V| \times |V|}$ by adding a zero entry when $(j, i) \notin E$. We denote the set of all such sparse matrices by $\mathbb{R}^E$. We note that for simplicity, and without loss of generality, the equations in (1) do not include a constant term, so we have $E[X_i] = 0$ for all $i \in V$.

The structural equations (1) can be rewritten in matrix-vector form as $X = \Lambda^T X + \varepsilon$, and thus $X = (I - \Lambda)^{-T}\varepsilon$, where we note that the matrix $I - \Lambda$ is always invertible when the graph $G$ is acyclic [Sul18]. Let $\Omega^{(2)} = (E[\varepsilon_i \varepsilon_j])$ and $\Omega^{(3)} = (E[\varepsilon_i \varepsilon_j \varepsilon_k])$ be the covariance matrix and the tensor of third-order moments of $\varepsilon$, respectively. Both $\Omega^{(2)}$ and $\Omega^{(3)}$ are diagonal, with diagonal entries $\Omega^{(2)}_{ii} = \omega^{(2)}_i = E[\varepsilon_i^2] > 0$ and $\Omega^{(3)}_{iii} = \omega^{(3)}_i = E[\varepsilon_i^3]$.

Lemma 2.1. The covariance matrix and the third-order moment tensor of the solution $X$ of (1) are equal to
$$S = (s_{ij}) = (I - \Lambda)^{-T}\Omega^{(2)}(I - \Lambda)^{-1}, \qquad T = (t_{ijk}) = \Omega^{(3)} \bullet (I - \Lambda)^{-1} \bullet (I - \Lambda)^{-1} \bullet (I - \Lambda)^{-1},$$
respectively. Here $\bullet$ denotes the Tucker product [KB09].

This fact follows from standard results on how moments of random vectors change under linear transformations. For a complete proof, see, e.g., [ADG+23, Proposition 1.2]. As we are assuming positive error variances, $E[\varepsilon_i^2] > 0$, the matrix $\Omega^{(2)}$ is positive definite, and the same is true for the covariance matrix $S$ of $X$. Since $\Omega^{(3)}$ is diagonal, the third-order moment tensor $T$ of $X$ is a symmetric tensor of symmetric tensor rank at most $|V|$; this need not be the case for a general $|V| \times |V| \times |V|$ tensor [CGLM08]. In the sequel, we write $PD(\mathbb{R}^{|V|})$ for the positive definite cone in $\mathbb{R}^{|V| \times |V|}$ and $\mathrm{Sym}_3(\mathbb{R}^{|V|})$ for the space of symmetric tensors in $\mathbb{R}^{|V| \times |V| \times |V|}$.

Definition 2.2. Let $G = (V, E)$ be a DAG.
The second- and third-order moment model of $G$ is the set $\mathcal{M}_{\leq 3}(G)$ that comprises all pairs of covariance matrices $S$ and third-order moment tensors $T$ that are realizable under the linear structural equation model given by $G$. That is,
$$\mathcal{M}_{\leq 3}(G) = \bigl\{\bigl(\underbrace{(I - \Lambda)^{-T}\Omega^{(2)}(I - \Lambda)^{-1}}_{S},\ \underbrace{\Omega^{(3)} \bullet (I - \Lambda)^{-1} \bullet (I - \Lambda)^{-1} \bullet (I - \Lambda)^{-1}}_{T}\bigr) :$$
$$\Omega^{(2)} \in PD(\mathbb{R}^{|V|}) \text{ diagonal},\ \Omega^{(3)} \in \mathrm{Sym}_3(\mathbb{R}^{|V|}) \text{ diagonal},\ \Lambda \in \mathbb{R}^E\bigr\} \subseteq PD(\mathbb{R}^{|V|}) \times \mathrm{Sym}_3(\mathbb{R}^{|V|}).$$
Furthermore, the second- and third-order moment ideal of $G$ is the ideal $I_{\leq 3}(G)$ of polynomials in the entries $S = (s_{ij})$ and $T = (t_{ijk})$ that vanish when $(S, T) \in \mathcal{M}_{\leq 3}(G)$.

The problem which we solve here is as follows.

Problem 2.3. Assume that $S \in PD(\mathbb{R}^{|V|})$ is a positive definite matrix and $T \in \mathrm{Sym}_3(\mathbb{R}^{|V|})$ is a symmetric tensor. Given a DAG $G$, find polynomial constraints in the entries of $S$ and $T$ which are satisfied if and only if $(S, T)$ lies in $\mathcal{M}_{\leq 3}(G)$.

2.2 Prior work

Here we summarize relevant prior work on the algebraic description of the model of interest $\mathcal{M}_{\leq 3}(G)$.

Linear Gaussian models. This problem has classically been studied in the algebraic statistics literature in the case of Gaussian graphical models [Sul18]. Here, one considers a linear structural equation model with Gaussian error terms $\varepsilon_i$. As a result, the third-order moments all vanish,
and the problem is to describe the model $\mathcal{M}_2(G)$ consisting of all covariance matrices $S$ which factorize as $S = (I - \Lambda)^{-T}\Omega^{(2)}(I - \Lambda)^{-1}$ with $\Omega^{(2)} \in PD(\mathbb{R}^{|V|})$ and $\Lambda \in \mathbb{R}^E$. Conditional independence implies constraints on the entries of $S$ for a given graph $G$ as follows. If sets of vertices $A$ and $B$ are d-separated given a set $C$ in the graph $G$ (see, e.g., [MDLW18] for the definition of d-separation), then the Global Markov Property implies that $X_A$ is conditionally independent of $X_B$ given $X_C$. For a Gaussian distribution this is equivalent to the submatrix $S_{A \cup C, B \cup C}$ of the covariance matrix $S$, with rows indexed by $A \cup C$ and columns indexed by $B \cup C$, having rank at most $|C|$, the size of $C$, as shown in [Sul08]. Furthermore, the model $\mathcal{M}_2(G)$ is cut out inside the positive definite cone $PD(\mathbb{R}^{|V|})$ by precisely these rank constraints, arising from all the different d-separation statements which hold for $G$ [Sul08]. The seminal paper [STD10] then asks what other rank constraints hold for submatrices of the covariance matrix $S$ in a Gaussian graphical model, and the answer comes via trek-separation.

Definition 2.4 ([STD10, RS21]). Given $k \geq 2$ vertices $v_1, \ldots, v_k$, a $k$-trek $T$ between them is an ordered tuple of directed paths $(P_1, \ldots, P_k)$ which have a common source node $t$, called the top of $T$ ($\mathrm{top}(T)$), and the path $P_i$ has sink $v_i$ for each $i = 1, \ldots, k$. A 2-trek is usually known as a trek.

Let $A, B \subseteq V$ be two subsets of vertices. The pair of sets $(L, R)$ trek-separates $A$ and $B$ if for every trek $T = (P_1, P_2)$ between a vertex $a \in A$ and a vertex $b \in B$, either $P_1$ contains a vertex from $L$ or $P_2$ contains a vertex from $R$.

The main result in [STD10] states that the submatrix $S_{A,B}$ has rank at most $r$ if and only if there exist sets of vertices $L, R \subseteq V$ such that $(L, R)$ trek-separates $(A, B)$ and $|L| + |R| \leq r$. Therefore, all rank constraints on $S$ correspond to trek-separation in the graph. When the graph $G$ has hidden variables, however, rank constraints are not enough to cut out the model $\mathcal{M}_2(G)$ [Sul08].
Such constraints, also known as Verma constraints, can sometimes be expressed in the form of nested determinants [DRW20], but a general way of obtaining all of them is not known.

Linear Non-Gaussian Models. Such a thorough study has not been done for the LiNGAM models $\mathcal{M}_{\leq 3}(G)$ which we consider here. Instead of only looking at the covariance matrix $S$, we now also have access to the third-order moment tensor $T$. The constraints which cut out the model $\mathcal{M}_{\leq 3}(G)$ have been discovered in the case when $G$ is a polytree [ADG+23]. For arbitrary DAGs $G$, a generalized version of trek-separation has been found [RS21], but a complete characterization of the constraints that cut out $\mathcal{M}_{\leq 3}(G)$ was not known prior to the present work. On the algorithmic side, methods based on algebraic constraints which hold among the second- and higher-order moments of the random vector $X$ have found success. The work [WD19] develops a high-dimensional algorithm for learning the DAG $G$ based on testing the rank of certain $2 \times 2$ matrices (see Corollary 4.4), in order to successively find source nodes and remove them from the
https://arxiv.org/abs/2505.00215v1
graph. More recently, [SRD24] uses rank constraints on matrices which consist of second-, third-, and higher-order moments in order to learn a DAG G with hidden variables. Furthermore, [SD23] uses algebraic constraints for goodness-of-fit tests which determine whether the data arises from a linear non-Gaussian model at all.

2.3 Notation

We will be interested in matrices whose entries consist of blocks of S and blocks of T. Thus, we require notation for blocking out sections of matrices and tensors. For two subsets A, B ⊆ V, we define S_{A,B} to be the matrix with entries S_{a,b} with a ∈ A and b ∈ B. Here a is the row label and b is the column label. We will need a similar operation for T. For three subsets A, B, C ⊆ V, we define T_{A,B×C} to be the matrix with entries T_{a,b,c}, where a ∈ A, b ∈ B, and c ∈ C, flattened so that a is the row label and (b, c) is the column label. As an example, let A = B = C = {1, 2}. Then T_{A,B×C} is the matrix with rows labeled 1, 2 and columns labeled (1,1), (1,2), (2,1), (2,2):

T_{{1,2},{1,2}×{1,2}} = [ T_{1,1,1} T_{1,1,2} T_{1,2,1} T_{1,2,2} ; T_{2,1,1} T_{2,1,2} T_{2,2,1} T_{2,2,2} ].

3 Algebraic characterization of M≤3(G)

We are now ready for our main result. The following theorem gives explicit constraints that cut out the set M≤3(G) of pairs (S, T) that arise from the LiNGAM with DAG G.

Theorem 3.1. Let G = (V, E) be a DAG, S ∈ PD(R^{|V|}), and T ∈ Sym^3(R^{|V|}). Then (S, T) lies in the model M≤3(G) if and only if for every vertex v ∈ V, the following matrix has rank equal to |pa(v)|:

M_v := [ S_{pa(v),nd(v)}  T_{pa(v),nd(v)×V} ; S_{v,nd(v)}  T_{v,nd(v)×V} ].  (2)

Remark 3.2. The rank constraints given in (2) are equivalent to the vanishing of all (|pa(v)|+1)-minors of the matrix M_v. Since S is assumed to be positive definite, the submatrix S_{pa(v),pa(v)} is invertible, and therefore the rank constraints are also equivalent to the last row of M_v being a linear combination of its other rows.

Example 3.3 (Example 1.1 continued). Consider the graph G with vertices V = {1, 2, 3} and edges {1→2, 1→3, 2→3} (Figure 1).
Using Macaulay2, we can define the ideal I≤3(G), which equals the kernel of the parametrization of the moments S and T in terms of the parameters Λ, Ω^{(2)}, and Ω^{(3)} from Lemma 2.1. We can then confirm that this ideal equals the ideal generated by the 2-minors of the matrix M_2 and the 3-minors of the matrix M_3 that arise from our Theorem 3.1,

M_2 = [ s_{11} t_{111} t_{112} t_{113} ; s_{12} t_{112} t_{122} t_{123} ],
M_3 = [ s_{11} s_{12} t_{111} t_{112} t_{113} t_{122} t_{123} ; s_{12} s_{22} t_{112} t_{122} t_{123} t_{222} t_{223} ; s_{13} s_{23} t_{113} t_{123} t_{133} t_{223} t_{233} ],

saturated by the principal minors of the matrix S. Indeed, according to Theorem 3.1, the model M≤3(G) is cut out by the minors of these matrices inside PD(R^{|V|}) × Sym^3(R^{|V|}), and so the principal minors of S are always nonzero.

Remark 3.4. Fix v ∈ V. The block of M_v containing only entries of S is [ S_{pa(v),pa(v)}  S_{pa(v),nd(v)\pa(v)} ; S_{v,pa(v)}  S_{v,nd(v)\pa(v)} ], with rows labeled by pa(v) and v, and columns labeled by pa(v) and nd(v)\pa(v). The fact that it drops rank represents exactly the Local Markov Property, which implies that X_v is conditionally independent of X_{nd(v)\pa(v)} given X_{pa(v)} [MDLW18, Sul18].

The following lemma is the necessity direction of Theorem 3.1.

Lemma 3.5. Let G = (V, E) be a DAG. If (S, T) lies in the model M≤3(G), then for every vertex v ∈ V, the following matrix has rank |pa(v)|:

M_v := [ S_{pa(v),nd(v)}  T_{pa(v),nd(v)×V} ; S_{v,nd(v)}  T_{v,nd(v)×V} ].  (3)

Since these matrices drop rank, all (|pa(v)|+1) × (|pa(v)|+1) minors must vanish.
Proof. Recall that X_v = Σ_{u∈pa(v)} λ_{uv} X_u + ε_v. If w is any non-descendant of v, then X_w is independent of ε_v, and so E[ε_v X_w] = 0. We see that

s_{vw} = E[X_v X_w] = Σ_{u∈pa(v)} λ_{uv} E[X_u X_w] + E[ε_v X_w] = Σ_{u∈pa(v)} λ_{uv} s_{uw}.

A similar equation can be derived for t_{vwz} for any z ∈ V:

t_{vwz} = Σ_{u∈pa(v)} λ_{uv} t_{uwz}.

Then the vector (λ_{pa(v),v}, −1)^T is in the left null space of the matrix M_v, which means that M_v has rank at most |pa(v)|. In a DAG, pa(v) ⊆ nd(v), and so S_{pa(v),pa(v)} is a submatrix of M_v, which does not drop rank since S is positive definite. Therefore M_v is a (|pa(v)|+1) × |nd(v)|(|V|+1) matrix of rank exactly |pa(v)|.

The next lemma is the more difficult step in the proof of Theorem 3.1. We show that if the matrices in equation (2) drop rank, then we can construct appropriate Λ, Ω^{(2)}, Ω^{(3)} which parametrize S and T. This guarantees that (S, T) lies in the model M≤3(G).

Lemma 3.6. Let G = (V, E) be a directed acyclic graph and let (S, T) ∈ PD(R^{|V|}) × Sym^3(R^{|V|}). For each vertex v ∈ V, let M_v be the matrix defined in equation (2), and suppose that each M_v has rank |pa(v)|. Then there exists Λ ∈ R^E such that

Ω^{(2)} := (I−Λ)^T S (I−Λ),  (4)
Ω^{(3)} := T • (I−Λ) • (I−Λ) • (I−Λ)  (5)

are diagonal.

Proof sketch. Fix v ∈ V. By Remark 3.2, the bottom row of the matrix M_v is a linear combination of the other rows. We define λ_{iv} to be the coefficient of the i-th row in this linear combination, and set all other λ_{kv} = 0. In this way, we define the matrix Λ ∈ R^E. A calculation then shows that Ω^{(2)} and Ω^{(3)} are both diagonal.

Theorem 3.1 is established by combining Lemma 3.5 and Lemma 3.6. The full proofs are located in Section 5.

Computing the vanishing ideal of M≤3(G) While in Theorem 3.1 we found algebraic constraints that cut out our model M≤3(G), we have not discussed its vanishing ideal. We conjecture that the vanishing ideal of the model M≤3(G) equals the ideal generated by the minors of the matrices M_v for v ∈ V, saturated by the product of all principal minors of the matrix S.
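Lemma 3.5 is easy to sanity-check numerically. The sketch below is our own illustration with made-up parameters (not the authors' code): writing X = M ε with M = (I−Λ)^{-T}, independent mean-zero errors give S_{ij} = Σ_a M_{ia} M_{ja} ω^{(2)}_a and T_{ijk} = Σ_a M_{ia} M_{ja} M_{ka} ω^{(3)}_a, and for the DAG 1→2, 1→3, 2→3 of Example 3.3 the matrix M_3 should have rank |pa(3)| = 2.

```python
import numpy as np

# DAG 1->2, 1->3, 2->3 with hypothetical edge weights and error moments.
n = 3
Lam = np.zeros((n, n))
Lam[0, 1], Lam[0, 2], Lam[1, 2] = 0.8, -0.5, 1.3
w2 = np.array([1.0, 2.0, 0.7])     # diagonal of Omega^(2): error variances
w3 = np.array([0.4, -1.1, 2.2])    # diagonal of Omega^(3): error third moments

# X = M eps with M = (I - Lam)^{-T}; population moments of the LiNGAM:
M = np.linalg.inv(np.eye(n) - Lam).T
S = np.einsum('ia,ja,a->ij', M, M, w2)
T = np.einsum('ia,ja,ka,a->ijk', M, M, M, w3)

# Build M_v of equation (2) for v = 3 (index 2): pa(3) = nd(3) = {1, 2}.
v, pa, nd = 2, [0, 1], [0, 1]
top = np.hstack([S[np.ix_(pa, nd)], T[np.ix_(pa, nd)].reshape(len(pa), -1)])
bot = np.hstack([S[v, nd], T[v, nd, :].reshape(-1)])
Mv = np.vstack([top, bot])
print(Mv.shape, np.linalg.matrix_rank(Mv))   # (3, 8), rank 2 = |pa(3)|
```

The bottom row is λ_{13} times the first row plus λ_{23} times the second, exactly as in the proof of Lemma 3.5, so the rank is |pa(3)| rather than 3.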
The paper [BS24] describes a principled way of finding this ideal exactly, assuming the parameters Λ, Ω^{(2)}, Ω^{(3)} are identifiable from (S, T). This is indeed the case for us, and Theorem 3.11 of [BS24] can be applied. (We would follow their Section 4.2, augmented by the variables t_{ijk}. To express our parametrization as a birational map, we include all entries of Ω^{(3)} in the domain.) However, the equations that one obtains using Theorem 3.11 from [BS24] appear more complicated than the principal minors of our matrices M_v. Therefore, we leave it as future work to compute the vanishing ideal of the model M≤3(G).

4 Additional equations

While we have derived enough equations in Theorem 3.1 to cut out our model M≤3(G), in this section we show how to find additional equations which also vanish on our model but are not minors of the matrices M_v defined in Theorem 3.1.

Definition 4.1. For A, B ⊆ V, define R_{A,B} := [ S_{A,B}  T_{A,B×V} ].

In the statement of Theorem 3.1, we have matrices of the form M_v = [ R_{pa(v),nd(v)} ; R_{v,nd(v)} ]. The paper [ADG+23] shows that the model M≤3(G) corresponding to a polytree G is cut out by only
2×2 determinants inside PD(R^{|V|}) × Sym^3(R^{|V|}). However, the rank conditions from Theorem 3.1 may involve larger determinants even in the case of polytrees. Thus, we ask here whether we can replace pa(v) and nd(v) to obtain different matrices whose minors also vanish on the model.

Proposition 4.2. For any A ⊆ V, define

pa(A) := ( ⋃_{a∈A} pa(a) ) \ A = {all vertices outside A with an edge pointing into A},
nd(A) := ⋂_{a∈A} nd(a) = {all common non-descendants of the vertices in A}.

Let v ∈ V and A ⊆ V \ {v}. Then the last row of the following matrix is a linear combination of the other rows: [ R_{pa({v}∪A),nd({v}∪A)} ; R_{v,nd({v}∪A)} ].

Remark 4.3. Given a collection of vertices A, we can think of the above equations as corresponding to removing the set A from the structural equation model. As an example, let v ∈ V, and let a ∈ V be a parent of v. Then X_v = λ_{av} X_a + · · · + ε_v. Removing the vertex a from the structural equation model would require replacing X_a with its expression in terms of its parents, and augmenting ε_v to ε_v + λ_{av} ε_a. In this way, we have expanded pa(v) to pa({v, a}). Since ε_v + λ_{av} ε_a is only independent of those X_w for which w is a non-descendant of both v and a, we are forced to shrink nd(v) to nd({v, a}). For more details, see the proof sketch of Lemma 3.5 located in Section 3.

Proposition 4.2 can be applied to generate more equations. For example, if w ∈ V is a source vertex, we might ask if there is a collection of equations which determine this fact.

Corollary 4.4 (Sources). Let w ∈ V be a source of the graph G and v ∈ V \ {w}. Then the following matrix has rank 1: [ s_{w,w}  t_{w,w,w} ; s_{v,w}  t_{v,w,w} ].

Proof. Set A = V \ {v, w}. Then {w} ⊇ pa({v} ∪ A) and w ∈ nd({v} ∪ A). Applying Proposition 4.2 gives the result.

This is the precise equation used in the algorithm proposed in [WD19], which efficiently recovers the DAG G by recursively identifying source nodes.

Polytrees A polytree G is a directed graph whose undirected skeleton is a tree. Such a graph is automatically acyclic.
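The generalized parent and non-descendant sets of Proposition 4.2 are straightforward to compute. Below is a small self-contained helper (our own sketch; the function names and the dictionary-of-children graph encoding are hypothetical, not from the paper), illustrated on the DAG 1→2, 1→3, 2→3 of Example 3.3.

```python
# pa(A) = (union of pa(a) for a in A) \ A; nd(A) = intersection of nd(a),
# where nd(a) = V \ descendants(a) and a vertex counts as its own descendant.
def descendants(G, v):
    seen, stack = {v}, [v]
    while stack:
        for c in G.get(stack.pop(), []):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def pa_set(G, A):
    pa = {u for u in G for c in G[u] if c in A}   # vertices with an edge into A
    return pa - set(A)

def nd_set(G, A):
    V = set(G)
    out = V - descendants(G, next(iter(A)))
    for a in A:
        out &= V - descendants(G, a)
    return out

G = {1: [2, 3], 2: [3], 3: []}   # DAG 1->2, 1->3, 2->3
print(pa_set(G, {3}))            # {1, 2}
print(nd_set(G, {2, 3}))         # {1}
```

For A = {2, 3} the only common non-descendant is the source 1, matching the "shrink nd(v)" intuition of Remark 4.3.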
Polytrees also have the property that for any two v, w ∈ V, there is at most one simple trek (a trek whose left and right sides do not share any edges) between v and w. We denote the top of this trek by top(v, w). The ideal which cuts out the model M≤3(G) of a polytree G inside PD(R^{|V|}) × Sym^3(R^{|V|}) is given in [ADG+23].

Proposition 4.5 ([ADG+23, Lemma 3.4(b)]). Let G = (V, E) be a polytree and v, w ∈ V. If there is an edge between v and w, then the 2-minors of the following trek-matrix vanish:

[ s_{vk_1} . . . s_{vk_r}  t_{vℓ_1m_1} . . . t_{vℓ_qm_q} ; s_{wk_1} . . . s_{wk_r}  t_{wℓ_1m_1} . . . t_{wℓ_qm_q} ],

where
• k_1, . . . , k_r are vertices such that top(v, k_a) = top(w, k_a) for a = 1, . . . , r, and
• (ℓ_1, m_1), . . . , (ℓ_q, m_q) are such that top(v, ℓ_b, m_b) = top(w, ℓ_b, m_b) for b = 1, . . . , q.

We show how to recover the result of Proposition 4.5 using Proposition 4.2.

Proof. Let v, w ∈ V be such that there is an edge between them. Without loss of generality, assume that w is a parent of v. Let A ⊂ V be all
a ∈ V \ {v} such that there is a directed path from a to v which does not pass through w. We claim that pa({v} ∪ A) = {w}. Since w is a parent of v and w ∉ A, we have w ∈ pa({v} ∪ A). Now for any u ∈ pa({v} ∪ A), we know that u ∉ {v} ∪ A, and there is an edge from u to either a = v or some element a of A. By definition of A, there is a directed path γ from a to v which does not go through w. Thus, u → γ is a directed path from u to v, which can only pass through w if u = w. Since u ∉ {v} ∪ A, it follows that u = w.

We also claim that nd({v} ∪ A) contains all u ∈ V such that top(w, u) = top(v, u). Let u be such a vertex. If u is a descendant of v, then top(v, u) = v but top(w, u) = w. Therefore, u is not a descendant of v. If there exists a ∈ A such that u is a descendant of a, then top(v, u) ∈ A. Since the skeleton of G is a tree, there is only one (undirected) path between w and u, which must go through v. Thus, there is no trek between w and u, and so top(w, u) does not exist. Hence u ∈ nd({v} ∪ A).

Applying Proposition 4.2 shows that the following matrix has rank at most 1:

[ S_{w,nd({v}∪A)}  T_{w,nd({v}∪A)×V} ; S_{v,nd({v}∪A)}  T_{v,nd({v}∪A)×V} ].  (6)

By the arguments above, the matrix in Proposition 4.5 is a submatrix of the matrix in equation (6) with the same number of rows and possibly fewer columns. Therefore, it has rank at most 1 as well.

5 Proofs

In this section we prove our main results, Theorem 3.1 and Proposition 4.2.

5.1 Proof of Theorem 3.1

We can now finish the proof of Theorem 3.1. Lemma 3.6 shows that if the matrices M_v all drop rank, then we can parametrize (S, T) in the way which guarantees that (S, T) ∈ M≤3(G).

Proof of Lemma 3.6. In a directed acyclic graph, pa(v) ⊆ nd(v). Thus, S_{pa(v),pa(v)} is a submatrix of M_v. Since S_{pa(v),pa(v)} is invertible, the bottom row of M_v is a linear combination of the top |pa(v)| rows. Let λ_{iv} be the coefficients of this linear combination. Set the remaining λ_{kv} to 0 and define Λ to be the matrix containing these coefficients. Then Λ ∈ R^E. Now, let Ω^{(2)} = (I−Λ)^T S (I−Λ) as in equation (4).
We will show that Ω^{(2)}_{v,w} = 0 for v ≠ w. Since Ω^{(2)} is symmetric, without loss of generality let w be a non-descendant of v. Then so are all of its parents. Therefore,

Ω^{(2)}_{v,w} = [ (S − Λ^T S) − (S − Λ^T S) Λ ]_{v,w}
= ( S_{v,w} − Σ_{u∈pa(v)} S_{u,w} λ_{u,v} ) − Σ_{y∈pa(w)} ( S_{v,y} − Σ_{u∈pa(v)} S_{u,y} λ_{u,v} ) λ_{y,w}
= 0.

The same observation applies to Ω^{(3)} defined in equation (5) as follows. Let z ∈ V and let w be a non-descendant of v, in which case so are all of its parents y ∈ pa(w). By symmetry of Ω^{(3)}, we need only compute Ω^{(3)}_{v,w,z}:

Ω^{(3)}_{v,w,z} = [ ((T − T•Λ) − (T − T•Λ)•Λ) • (I−Λ) ]_{v,w,z}
= Σ_{x∈V} [ ( T_{v,w,x} − Σ_{u∈pa(v)} T_{u,w,x} λ_{u,v} ) − Σ_{y∈pa(w)} ( T_{v,y,x} − Σ_{u∈pa(v)} T_{u,y,x} λ_{u,v} ) λ_{y,w} ] [I−Λ]_{x,z}
= Σ_{x∈V} [ 0 − Σ_{y∈pa(w)} 0 · λ_{y,w} ] [I−Λ]_{x,z}
= 0.

Thus, every off-diagonal entry of Ω^{(2)} and Ω^{(3)} is 0.

Remark 5.1. In Lemma 3.6, we require that each M_v defined in equation (2) drops rank. However, we still get the conclusion of Lemma 3.6 by instead ensuring that the smaller matrices

M′_v := [ S_{pa(v),nd(v)}  T_{pa(v),N_v} ; S_{v,nd(v)}  T_{v,N_v} ]

drop rank. Here the set N_v ⊆ nd(v) × V is defined as N_v := {(w, z) | w ∈ nd(v) and z ∈ nd(w) ∪ {v, w}}.

Proof. This follows from the step in the proof at which we claimed that "by symmetry of Ω^{(3)}, we need only compute Ω^{(3)}_{v,w,z}." A more detailed analysis shows that we can restrict further to (w, z) ∈ N_v as claimed.

5.2 Proof of Proposition 4.2

Let v ∈ V and A ⊆ V \ {v}. Recall that for any w ∈ nd({v} ∪ A) ⊆ nd(v) and z ∈ V,

s_{vw} = Σ_{u∈pa(v)} λ_{uv} s_{uw},  (7)
t_{vwz} = Σ_{u∈pa(v)} λ_{uv} t_{uwz}.  (8)

If u ∈ pa(v) ∩ A, then w ∈ nd(u), so we have

s_{uw} = Σ_{x∈pa(u)} λ_{xu} s_{xw},  (9)
t_{uwz} = Σ_{x∈pa(u)} λ_{xu} t_{xwz}.  (10)

We can therefore replace all terms indexed by u ∈ pa(v) ∩ A in equations (7) and (8) using equations (9) and (10):

s_{vw} = Σ_{x∈P_{1,v}} λ^{(1)}_{xv} s_{xw},  t_{vwz} = Σ_{x∈P_{1,v}} λ^{(1)}_{xv} t_{xwz},

where λ^{(1)}_{xv} is some polynomial in the λ_{ij}'s, and the set P_{1,v} ⊆ V is defined as follows. For each u ∈ pa(v): if u ∉ A, then include u in P_{1,v}; otherwise, if u ∈ A, then include pa(u) in P_{1,v}. Repeating this procedure, for each n ≥ 2 we obtain sets P_{n,v} and equations

s_{vw} = Σ_{x∈P_{n,v}} λ^{(n)}_{xv} s_{xw},  t_{vwz} = Σ_{x∈P_{n,v}} λ^{(n)}_{xv} t_{xwz},

where the set P_{n,v} is defined as follows: for each u ∈ P_{n−1,v}, if u ∉ A, then u ∈ P_{n,v}; otherwise, if u ∈ A, then P_{n,v} contains pa(u). Since G is acyclic, there is no infinite chain of vertices (u_n) such that u_{n+1} ∈ pa(u_n), and thus the sequence of sets P_{n,v} terminates. The terminating set contains only elements of V which are parents of either v or A and do not lie in A. Thus, it is contained in pa({v} ∪ A). Hence the equations have the form

s_{vw} = Σ_{x∈pa({v}∪A)} λ^{(∞)}_{xv} s_{xw},  t_{vwz} = Σ_{x∈pa({v}∪A)} λ^{(∞)}_{xv} t_{xwz},

for some coefficients λ^{(∞)}_{xv}, some of which may be 0. This implies that the last row of [ R_{pa({v}∪A),nd({v}∪A)} ; R_{v,nd({v}∪A)} ] is a linear combination of its other rows.

6 Applications

In this section we apply our results to the problem of recovering a sink node, and we examine the sensitivity of our rank determinations to the threshold placed on the condition numbers. We sample from a LiNGAM with graph G = (V, E) and ask the question: "Can we recover a sink node from the data?"
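As a preview of the rank test this question suggests, here is a minimal sketch (our own illustration, not the authors' code). For determinism we use exact population moments rather than sample estimates, and we read c_i as the reciprocal condition number σ_min/σ_max (an assumption on our part, consistent with picking the minimal value below), so a small c_i signals a rank drop.

```python
import numpy as np

# Complete DAG 1->2, 1->3, 2->3 with hypothetical parameters; vertex 3 is the sink.
n = 3
Lam = np.zeros((n, n))
Lam[0, 1], Lam[0, 2], Lam[1, 2] = 0.9, 0.4, -1.1
w2 = np.array([1.0, 1.5, 0.8])      # error variances
w3 = np.array([2.0, -0.6, 1.2])     # error third moments
M = np.linalg.inv(np.eye(n) - Lam).T
S = np.einsum('ia,ja,a->ij', M, M, w2)
T = np.einsum('ia,ja,ka,a->ijk', M, M, M, w3)

def c(i):
    # Matrix of equation (11): rows V \ {i} on top, row i last;
    # columns from S and the flattened (V\{i}) x (V\{i}) slice of T.
    rest = [j for j in range(n) if j != i]
    rows = rest + [i]
    block = np.hstack([S[np.ix_(rows, rest)],
                       T[np.ix_(rows, rest, rest)].reshape(len(rows), -1)])
    sv = np.linalg.svd(block, compute_uv=False)
    return sv[-1] / sv[0]            # sigma_min / sigma_max

scores = [c(i) for i in range(n)]
print(int(np.argmin(scores)))        # index 2, i.e. vertex 3, the sink
```

With data one would replace S and T by moment estimates, in which case the minimal score is only approximately zero and a threshold is needed, as studied later in this section.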
We first compute estimators Ŝ and T̂ for S and T, respectively. For each vertex i ∈ V, we create the matrix

M̂_i = [ Ŝ_{(V\i),(V\i)}  T̂_{(V\i),(V\i)×(V\i)} ; Ŝ_{i,(V\i)}  T̂_{i,(V\i)×(V\i)} ].  (11)

By Theorem 3.1, the true matrix M_i will drop rank if and only if i is not an ancestor of any other vertex in the graph. For each i we then compute the singular value decomposition (SVD) of M̂_i and return its condition number c_i. We obtain a guess for the sink of our unknown graph G by picking the vertex î = i for which c_i is minimal, that is, for which the matrix M̂_i is most likely to drop rank.

Let G be the graph given in Figure 2. To produce Figure 3, we perform 50 runs, where each error is sampled from a Γ(5, 1) distribution. Using 100 different sample sizes, we compute Ŝ and T̂ and produce an index î ∈ {1, . . . , 5}, an estimate of a sink node. For each vertex, we record the number of times it was output and the number of samples used in the computation. The result is a graph, Figure 3, which at each number of samples n records the proportion of
trials on n samples in which the method returned each vertex as the sink. As the number of samples grows, we observe that the vast majority of the time we obtain the correct sink node (node 5). A small proportion of the experiments wrongly label node 4 as a sink node, most likely due to numerical error.

Figure 2: The complete DAG on 5 vertices (1, 2, 3, 4, 5).

Figure 3: At each number of samples n, we record the number of times each vertex was assigned as the sink out of the entire data set. The graph is a bar chart; each bar represents 200 data points.

Figure 4: The line DAG on 3 vertices (1, 2, 3).

Figure 5: True positive rate plotted against the false positive rate for different threshold values. We use 5000 samples to construct Ŝ and T̂ for each run. Each data point represents one threshold value and uses 50 runs to approximate the true and false positive rates.

Sensitivity to Changes of Threshold Values As above, we sample from a LiNGAM, now with the graph given in Figure 4. Picking a threshold value t, we guess that a vertex v is a sink of G if the rank of the matrix in equation (11), computed numerically with threshold t, drops. The rate at which this method correctly chooses 3 as a sink versus choosing either of 1 or 2 as a sink for different values of t is given in Figure 5. The number of samples used to estimate S and T is fixed at 5000. Note that the method could label more than one vertex as a sink. Figure 5 shows a nicely shaped ROC curve, which would allow us to pick an appropriate threshold value. For instance, if we wanted to ensure a false positive rate of at most 20%, then we can pick a threshold with which we would expect a true positive rate between 60% and 80%.

7 Discussion

In this paper we studied the set M≤3(G) of second- and third-order moments S and T of a random vector which satisfies a linear structural equation model with respect to a directed acyclic graph G. We derived explicit polynomial constraints in the entries of S and T which cut out the set M≤3(G) (Theorem 3.1).
We then showed that our equations generalize earlier work in the case of polytrees (Section 4). Furthermore, we noted that, when restricted to the covariance matrix S, our equations imply the Local Markov Property.

This work opens up more interesting questions in the field of algebraic statistics. The existing cumulant-based algorithm of [SRD24] for recovering a latent-variable LiNGAM suggests that one could obtain a similar characterization of the set of second-, third-, and potentially higher-order cumulants in a latent-variable LiNGAM. Furthermore, even when all variables are observed, if the underlying distribution of the coordinates of the random vector X is symmetric around 0, then higher-order moments would be needed in order to uniquely recover the graph. We believe that extending our results to cumulants of order higher than 3 should be completely analogous to the proof of Theorem 3.1. While
potentially more difficult, it would be quite interesting to study the model of second- and third-order moments when the graph G is allowed to have directed cycles. Finding the defining equations in this case would be quite useful in designing a causal discovery algorithm, extending previous work which only applies to cycle-disjoint graphs [DGLN+25]. We also believe that the determinantal equations arising from Theorem 3.1 could potentially shed light on linear Gaussian hidden-variable models which are not cut out by determinantal constraints, such as those involving Verma constraints, the Pentad, and others [DRW20]. Adding the third-order moments to such models (and then eliminating them from the defining ideal) should shed light on how to obtain their characterization in general.

8 Acknowledgments

We would like to thank Mathias Drton for helpful discussions. Elina Robeva was supported by a Canada CIFAR AI Chair and an NSERC Discovery Grant (DGECR-2020-00338).

References

[ADG+23] Carlos Améndola, Mathias Drton, Alexandros Grosdos, Roser Homs, and Elina Robeva. Third-order moment varieties of linear non-Gaussian graphical models. Information and Inference: A Journal of the IMA, 12(3):1405–1436, 2023.
[BS24] Tobias Boege and Liam Solus. Real birational implicitization for statistical models, 2024.
[CGLM08] Pierre Comon, Gene Golub, Lek-Heng Lim, and Bernard Mourrain. Symmetric tensors and symmetric tensor rank. SIAM Journal on Matrix Analysis and Applications, 30(3):1254–1279, 2008.
[DGLN+25] Mathias Drton, Marina Garrote-López, Niko Nikov, Elina Robeva, and Samuel Y. Wang. Causal discovery for linear non-Gaussian models with disjoint cycles. Submitted to UAI 2025, 2025.
[DM17] Mathias Drton and Marloes Maathuis. Structure learning in graphical modeling. Annual Review of Statistics and Its Application, 4:365–393, 2017.
[DRW20] Mathias Drton, Elina Robeva, and Luca Weihs.
Nested covariance determinants and restricted trek separation in Gaussian graphical models. Bernoulli, 26(4):2503–2540, 2020.
[DST13] Jan Draisma, Seth Sullivant, and Kelli Talaska. Positivity for Gaussian graphical models. Adv. in Appl. Math., 50(5):661–674, 2013.
[KB09] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[MDLW18] Marloes Maathuis, Mathias Drton, Steffen Lauritzen, and Martin Wainwright. Handbook of graphical models. CRC Press, 2018.
[RS21] Elina Robeva and Jean-Baptiste Seby. Multi-trek separation in linear structural equation models. SIAM Journal on Applied Algebra and Geometry, 5(2):278–303, 2021.
[SD23] Daniela Schkoda and Mathias Drton. Goodness-of-fit tests for linear non-Gaussian structural equation models, 2023.
[SHHK06] Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
[SRD24] Daniela Schkoda, Elina Robeva, and Mathias Drton. Causal discovery of linear non-Gaussian causal models with unobserved confounding. arXiv:2408.04907, 2024.
[STD10] Seth Sullivant, Kelli Talaska, and Jan Draisma. Trek separation for Gaussian graphical models. Annals of Statistics, 38(3):1665–1685, 2010.
[Sul08] Seth Sullivant. Algebraic geometry of Gaussian Bayesian networks. Advances in Applied Mathematics, 40(4):482–513, 2008.
[Sul18] Seth Sullivant. Algebraic Statistics, volume 194 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2018.
[vOM17] Thijs van Ommen and Joris M. Mooij. Algebraic equivalence
Conformal changepoint localization

Sanjit Dandapanthula∗ sanjitd@cmu.edu
Aaditya Ramdas† aramdas@cmu.edu

May 2, 2025

Abstract

Changepoint localization is the problem of estimating the index at which a change occurred in the data generating distribution of an ordered list of data, or declaring that no change occurred. We present the broadly applicable CONCH (CONformal CHangepoint localization) algorithm, which uses a matrix of conformal p-values to produce a confidence interval for a (single) changepoint under the mild assumption that the pre-change and post-change distributions are each exchangeable. We exemplify the CONCH algorithm on a variety of synthetic and real-world datasets, including using black-box pre-trained classifiers to detect changes in sequences of images or text.

1 Introduction

In offline changepoint localization, we are (informally) given some ordered list of data and are told that there may have been a change in the data generating distribution at some unknown index, called the changepoint. Suppose, for example, that the data is drawn independently from a density f0 before the changepoint and is drawn independently from another density f1 ≠ f0 post-change. Then, the goal of changepoint localization is to estimate the index at which the change occurred, or to declare that no change occurred. In fact, using the tools of conformal prediction, we are able to non-trivially localize the changepoint without any further assumptions about f0 and f1.

Now, we are ready to formally describe the offline changepoint localization problem in a more general setting. In all of the following, we will let [K] = {1, . . . , K} for K ∈ N and use M(S) to denote the set of probability measures over S. Furthermore, we use d= to denote equality in distribution.

∗Carnegie Mellon University, Department of Statistics
†Carnegie Mellon University, Department of Statistics and Machine Learning Department
arXiv:2505.00292v1 [math.ST] 1 May 2025
We are given a list of X-valued random variables (X_t)_{t=1}^n for some n ∈ N. Furthermore, there exists an unknown changepoint ξ ∈ [n] such that (X_t)_{t=1}^ξ ∼ P0 and (X_t)_{t=ξ+1}^n ∼ P1 are respectively sampled from the pre-change distribution P0 ∈ M(X^ξ) and the post-change distribution P1 ∈ M(X^{n−ξ}). We use ξ = n to denote the case where no change occurs. Let P = P0 × P1; we assume that the pre-change data is independent of the post-change data.

We make the very general assumption that P0 is exchangeable. This means that P0 is invariant to permutations in the following sense: for any permutation π : [ξ] → [ξ], we have (X_1, . . . , X_ξ) d= (X_{π(1)}, . . . , X_{π(ξ)}). For instance, it could be the case that P0 corresponds to i.i.d. observations according to some cumulative distribution function F0. Similarly, we assume that the post-change distribution P1 is exchangeable as well (which occurs if P1 corresponds to i.i.d. observations according to some cumulative distribution function F1).

We summarize our main contributions below.

• We present the CONCH algorithm, a novel and widely applicable method which uses a matrix of conformal p-values (MCP) to produce a confidence interval for a changepoint under the mild assumption that the pre-change and post-change distributions are each exchangeable.
• We show that the CONCH algorithm is able to produce both finite-sample and asymptotically valid confidence sets for
https://arxiv.org/abs/2505.00292v1
the changepoint under (only) the assumption that the pre-change and post-change distributions are exchangeable.
• We describe methods for learning the conformal score function used in the CONCH algorithm from the data, thereby increasing the power of our method.
• We demonstrate the CONCH algorithm on a variety of synthetic and real-world datasets, including using black-box classifiers to detect changes in sequences of images or text. We show that the CONCH algorithm is able to produce narrow confidence sets for the changepoint, even when the change is difficult to detect.

We begin by reviewing prior work on changepoint localization and conformal approaches to change detection in Section 2. We introduce the CONCH algorithm in Section 3. We then show how to construct confidence sets for the changepoint in Section 4 and prove the validity of the resulting confidence sets. Next, we discuss how the score function should be chosen in Section 6. Finally, we apply the CONCH algorithm on a variety of synthetic and real-world datasets in Section 7 to demonstrate its effectiveness and flexibility.

2 Related work

Offline changepoint localization has been extensively studied in the fields of statistics and theoretical computer science; in this section, we review some related work and discuss the novelty of our algorithm in the literature.

2.1 Classical approaches to changepoint localization

Here, we give a review of several classical methods for offline changepoint analysis; further details can be found in Truong et al. (2020). Traditional changepoint localization has largely been dominated by parametric approaches based on the generalized likelihood ratio; however, these parametric methods often require knowing the pre-change and post-change distributions to achieve optimality. Likelihood ratio based methods (such as the CUSUM procedure of Page (1955)) can be efficient and powerful, but they are sensitive to model misspecification.
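For concreteness, here is a minimal sketch of the classical CUSUM idea for a known mean shift, the parametric setting referenced above; the data, means, and function name are made up for illustration, and real applications would estimate or scan over the post-change mean.

```python
# CUSUM with Gaussian log-likelihood-ratio increments for N(mu0, s2) vs
# N(mu1, s2); the changepoint estimate is the start of the best-scoring run.
def cusum_changepoint(xs, mu0, mu1, sigma2=1.0):
    best, run, run_start, cp = 0.0, 0.0, 0, None
    for t, x in enumerate(xs):
        if run <= 0:                 # restart the running LLR sum at zero
            run, run_start = 0.0, t
        run += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma2
        if run > best:               # best run so far began at run_start
            best, cp = run, run_start
    return cp, best                  # changepoint estimate, peak statistic

xs = [0.1, -0.3, 0.2, 0.0, 1.9, 2.2, 1.8, 2.1]   # shift begins at index 4
print(cusum_changepoint(xs, mu0=0.0, mu1=2.0))
```

Note the sensitivity this paragraph mentions: if the assumed mu1 is far from the true post-change mean, the increments are miscalibrated and the estimate degrades, which is precisely the model-misspecification issue that motivates nonparametric and conformal alternatives.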
Bayesian approaches, which assign prior distributions to changepoint locations and model parameters, provide natural uncertainty quantification through posterior distributions; as an example, see Fearnhead (2006). However, their guarantees depend on the correctness of the prior specification and model assumptions, and there may not be a natural choice of prior distribution for many practical problems.

Nonparametric methods make fewer assumptions about data distributions, and are typically based on an iterative two-sample test. Rank-based nonparametric tests, such as those developed by Pettitt (1979) and Ross and Adams (2012), offer distribution-free control of Type I error rates but have significant limitations in their statistical guarantees. While such methods can be used to provide a confidence interval for the changepoint with very light distributional assumptions, their power guarantees deteriorate significantly for more complex changes due to the statistical challenge of two-sample testing. In contrast, CONCH addresses these limitations by allowing the score function to be learned in a way that achieves nontrivial power, while also providing finite-sample valid confidence sets for the changepoint location under minimal distributional assumptions.

2.2 Conformal inference for change detection

The application of conformal prediction techniques to changepoint problems is relatively recent compared to traditional approaches. Conformal prediction, as surveyed in Shafer and Vovk (2008), provides a framework for constructing valid prediction regions with distribution-free
guarantees under minimal statistical assumptions. While originally developed for supervised learning tasks, conformal methods have gradually been extended to changepoint analysis.

Early work on connecting conformal prediction to changepoint detection focused primarily on testing for exchangeability violations rather than precise localization. Vovk et al. (2003) proposed a general approach for testing the exchangeability assumption in an online setting using conformal martingales, laying important groundwork for changepoint detection. The conformal martingale approach is a special case of the e-detector framework laid out in Shin et al. (2023), which provides an extremely general and powerful method for sequential change detection. While much of the prior work using conformal prediction for change detection is in the sequential (online) setting, here we study the problem in the offline setting. Furthermore, our work is focused on localization of the changepoint in addition to detection.

Building on this foundation, Vovk (2021) and Vovk et al. (2021) introduced conformal test martingales specifically designed for changepoint detection; for them, the motivation was to determine when the data generating distribution changes and a predictive algorithm needs to be retrained. They introduced conformal versions of the CUSUM and Shiryaev-Roberts procedures, which control false alarm rates through martingale properties. However, the primary emphasis remained on detection rather than localization of the changepoint. Volkhonskiy et al. (2017) developed a more computationally efficient conformal test martingale, called the inductive conformal martingale, for change detection. Furthermore, they provided conformity measures and betting functions tailored specifically for change detection. However, their method does not provide formal guarantees for the localization task. More recently, Nouretdinov et al.
(2021) investigated conformal changepoint detection under the assumption that the data generating distribution is continuous, showing that the conformal martingale is statistically efficient but without addressing the question of confidence interval construction for the changepoint location.

Despite these recent advances, existing conformal changepoint methods focus on detection (whether a change occurred) rather than localization (precise estimation of where it occurred). The primary novelty of our approach lies in leveraging conformal prediction techniques to construct valid confidence sets for the changepoint location, as well as allowing for the score function to be learned from the data. By bridging the gap between conformal prediction and classical changepoint localization using two-sample testing, our approach opens new possibilities for reliable changepoint analysis in complex, high-dimensional data settings where classical methods fail.

3 The CONCH algorithm using a matrix of conformal p-values

In this section, we discuss our CONCH algorithm for conformal changepoint localization in technical detail. We begin by defining score functions, which are a tool used in conformal prediction to reduce the dimensionality of the data; for more details on conformal prediction, see Shafer and Vovk (2008). In the following definition, we use ⟦X^m⟧ to denote the set of unordered bags of m data points (which may contain repetitions)¹. We will denote an element of ⟦X^m⟧ by ⟦Y_1, . . . , Y_m⟧, where the Y_i are X-valued random variables.

Definition 3.1 (Score functions). A family of score functions is a list (s_t)_{t=1}^T of functions s_t : X × ⟦X^t⟧ × X^{n−t} → R. Each element in a
https://arxiv.org/abs/2505.00292v1
family of score functions is called a score function. Intuitively, a score function is a pre-processing transformation intended to separate the pre-change and post-change data points by projecting them into one dimension. In particular, the score function can be learned in any way that uses its second argument exchangeably, while its third argument can be used non-exchangeably. Now, we discuss our algorithm for changepoint localization when the post-change distribution is known to be exchangeable. The goal of our algorithms is to simultaneously test the null hypotheses that a change occurs at time $t \in [n-1]$.

¹Such an unordered bag is often called a multiset.

Figure 1: $H_{0t} : \xi = t$ states that $(X_k)_{k=1}^{t} \sim P_0$ and $(X_k)_{k=t+1}^{n} \sim P_1$. (The figure shows the ordered data from $1$ to $n$, split at $t$ into pre-change and post-change segments.)

Then, the associated alternative hypotheses are of the form $H_{1t} : \xi \neq t$. Ultimately, our algorithms will output a list of statistics $(W^{(0)}_t, W^{(1)}_t)_{t=1}^{n-1}$ which do not depend on $P_0$ and $P_1$ under $(H_{0t})_{t=1}^{n-1}$; we say these statistics are distribution-free. The statistics $W^{(0)}_t$ and $W^{(1)}_t$ will then be combined into a single p-value $p_t$, which will be used to test $H_{0t}$ against $H_{1t}$ for $t \in [n-1]$. Fix $\alpha \in (0,1)$; we can then simultaneously invert these hypothesis tests to construct a $1-\alpha$ confidence set for the changepoint:
$$\mathcal{C}_{1-\alpha} = \{ t \in [n-1] : p_t > \alpha \}.$$
Note that since $H_{0t}$ is only a test of exchangeability on either side of the changepoint, the above confidence interval is not valid if there is no changepoint in the data. However, we describe a method to pre-test the data for exchangeability in Section 5, which we empirically observe does not hurt the performance of our algorithms. On the other hand, if a point estimator for the changepoint is desired, we can output the estimate $\hat{\xi} = \arg\max_{t \in [n-1]} p_t$. In practice, one may not know the quantiles of $W^{(0)}_t$ and $W^{(1)}_t$ exactly, so we can use any of the methods detailed in Section 4 to estimate the quantile. As described in Section 1, we assume that $P_0$ and $P_1$ are exchangeable.
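The inversion step above is simple enough to state in a few lines of code. The following is a minimal sketch (function name is ours), assuming the p-values $p_1, \dots, p_{n-1}$ have already been computed:

```python
def invert_tests(p_values, alpha):
    """Invert the per-t hypothesis tests into a 1 - alpha confidence set.

    p_values[t - 1] is the p-value for H_{0t}, for t = 1, ..., n - 1.
    Returns the confidence set and the argmax point estimate for the changepoint.
    """
    conf_set = [t for t, p in enumerate(p_values, start=1) if p > alpha]
    # Point estimate: the candidate changepoint with the largest p-value.
    point_est = max(range(1, len(p_values) + 1), key=lambda t: p_values[t - 1])
    return conf_set, point_est
```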
Then, we have the following algorithm for changepoint localization, which we call the CONCH (CONformal CHangepoint localization) algorithm. In the following algorithm, let $u$ denote the cumulative distribution function of a $\mathrm{Unif}(0,1)$ distribution, given by:
$$u(z) = \begin{cases} 0 & z < 0, \\ z & 0 \le z \le 1, \\ 1 & z > 1. \end{cases}$$
Furthermore, recall that the Kolmogorov-Smirnov (KS) distance between two cumulative distribution functions $F$ and $G$ is given by:
$$\mathrm{KS}(F, G) = \sup_{z \in \mathbb{R}} |F(z) - G(z)|.$$
Our algorithm is motivated by the Kolmogorov-Smirnov test for goodness of fit, described in Massey Jr (1951). Note that our algorithm does not specially depend on the choice of the KS distance; we detail several extensions in Appendix A. Now, we are ready to give the CONCH algorithm.

Algorithm 1: CONCH: conformal changepoint localization algorithm
Input: $(X_t)_{t=1}^n$ (dataset), $(s^{(0)}_t)_{t=1}^n$ and $(s^{(1)}_t)_{t=1}^n$ (left and right score function families)
Output: $\mathcal{C}_{1-\alpha}$ (a level $1-\alpha$ confidence set for $\xi$)
1  for $t \in [n-1]$ do
2      $r \leftarrow 1$
3      while $r \le t$ do
4          $\kappa^{(t)}_r \leftarrow s^{(0)}_t(X_r; \llbracket X_1, \dots, X_t \rrbracket, (X_{t+1}, \dots, X_n))$
5          Draw $\theta_r \sim \mathrm{Unif}(0,1)$
6          $p^{(t)}_r \leftarrow \frac{1}{r} \sum_{j=1}^{r} \left( \mathbf{1}_{\kappa^{(t)}_j > \kappa^{(t)}_r} + \theta_r \mathbf{1}_{\kappa^{(t)}_j = \kappa^{(t)}_r} \right)$
7          $r \leftarrow r + 1$
8      end
9      $r \leftarrow n$
10     while $r > t$ do
11         $\kappa^{(t)}_r \leftarrow s^{(1)}_{n-t}(X_r; \llbracket X_{t+1}, \dots, X_n \rrbracket, (X_1, \dots, X_t))$
12         Draw $\theta_r \sim \mathrm{Unif}(0,1)$
13         $p^{(t)}_r \leftarrow \frac{1}{n-r+1} \sum_{j=r}^{n} \left( \mathbf{1}_{\kappa^{(t)}_j > \kappa^{(t)}_r} + \theta_r \mathbf{1}_{\kappa^{(t)}_j = \kappa^{(t)}_r} \right)$
14         $r \leftarrow r - 1$
15     end
16     $\hat{F}_0(z) := \frac{1}{t} \sum_{r=1}^{t} \mathbf{1}_{p^{(t)}_r \le z}$
17     $\hat{F}_1(z) := \frac{1}{n-t} \sum_{r=t+1}^{n} \mathbf{1}_{p^{(t)}_r \le z}$
18     $W^{(0)}_t \leftarrow \sqrt{t}\, \mathrm{KS}(\hat{F}_0, u)$
19     $W^{(1)}_t \leftarrow \sqrt{n-t}\, \mathrm{KS}(\hat{F}_1, u)$
20     Use either the empirical test of Section 4.1.1 or the asymptotic test of Section 4.1.2 to map $(W^{(0)}_t, W^{(1)}_t)$ to left and right p-values $(p^{\mathrm{left}}_t, p^{\mathrm{right}}_t)$
21     Combine $p^{\mathrm{left}}_t$ and $p^{\mathrm{right}}_t$ into a single p-value $p_t$ (Section 4.2)
22 end
23 $\mathcal{C}_{1-\alpha} \leftarrow \{ t \in [n-1] : p_t > \alpha \}$
24 return $\mathcal{C}_{1-\alpha}$

Here is some intuition for Algorithm 1. For each value of $t \in [n-1]$, one can think of the CONCH algorithm in four stages. First, the ordered list is split at some index $t$ into the left and right halves. Next, each data point is mapped through a score function into a one-dimensional statistic $\kappa^{(t)}_r$. Then, the (normalized) rank statistics of the scores are sequentially computed on the left and right halves. If there was no change in the left segment, the ranks for that segment would each be distributed like $\mathrm{Unif}(0,1)$. In particular, as $t \to \infty$, we know by the Glivenko-Cantelli theorem that $\sup_{z \in \mathbb{R}} |\hat{F}_0(z) - u(z)| \to 0$. Similarly, if there was no change in the right segment, we would have $\sup_{z \in \mathbb{R}} |\hat{F}_1(z) - u(z)| \to 0$.

Note that because $W^{(0)}_t$ and $W^{(1)}_t$ only depend on the normalized ranks $p^{(t)}_r$ for $r \in [n]$, they are distribution-free statistics under $H_{0t}$ and we can use them for hypothesis testing. Furthermore, suppose we have samples $p_r \sim \mathrm{Unif}(0,1)$ for $1 \le r \le t$, and let $(\varphi_s)_{s \in [0,1]}$ denote the Brownian bridge ($\varphi_s = B_s - s B_1$, where $(B_s)_{s \ge 0}$ is a standard Brownian motion on $\mathbb{R}$). Then, we have the following central limit theorem, due to Donsker (1951):
$$\sqrt{t} \sup_{z \in \mathbb{R}} |\hat{F}_0(z) - u(z)| = \sqrt{t}\, \mathrm{KS}(\hat{F}_0, u) \xrightarrow{d} \sup_{s \in [0,1]} |\varphi_s|.$$
Notice that normalizing by $\sqrt{t}$ stabilizes the KS distance, and we have an explicit form for the asymptotic distribution. A similar argument motivates the term $\sqrt{n-t}\, \mathrm{KS}(\hat{F}_1, u)$.
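The core of the per-$t$ computation (the sequential randomized ranks of lines 2-8 and the KS statistic of lines 16-19 of Algorithm 1) can be sketched as follows. This is a minimal sketch assuming the scores $\kappa^{(t)}_r$ for one side have already been computed; function names are ours, not from the paper's implementation:

```python
import math
import random

def randomized_rank_pvalues(scores, rng):
    """Sequential randomized rank p-values for a list of scores.

    pvals[r] is the randomized rank of scores[r] among scores[0..r]; under
    exchangeability of the scores, each p-value is Unif(0, 1).
    """
    pvals = []
    for r in range(len(scores)):
        theta = rng.random()
        greater = sum(1 for j in range(r + 1) if scores[j] > scores[r])
        ties = sum(1 for j in range(r + 1) if scores[j] == scores[r])
        pvals.append((greater + theta * ties) / (r + 1))
    return pvals

def ks_to_uniform(pvals):
    """sqrt(m) * KS distance between the empirical CDF of pvals and Unif(0,1)."""
    m = len(pvals)
    s = sorted(pvals)
    # For a one-sample KS test, the sup is attained at one of the sample points.
    d = max(max((i + 1) / m - s[i], s[i] - i / m) for i in range(m))
    return math.sqrt(m) * d
```

The right half is processed identically, with the scores traversed from the end of the sequence toward $t$.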
Essentially, $W^{(0)}_t$ and $W^{(1)}_t$ are discrepancy scores for the left and right halves respectively, measuring how far the normalized ranks in the left and right halves of the ordered list are from being uniform. We can perform asymptotically valid hypothesis tests using these statistics when $t$ and $n-t$ are both large. Near the boundaries of the ordered list (when $t$ or $n-t$ is small) we can use simulations to perform approximate finite-sample tests using $W^{(0)}_t$ and $W^{(1)}_t$. The resulting p-values can be combined for each $t$ and the test can be inverted to form a confidence set by the Neyman construction.

Algorithm 1 effectively computes the p-values $p^{(t)}_r$, which can be written as an $n \times (n-1)$ matrix; we call this the matrix of conformal p-values (MCP):
$$\mathrm{MCP} := \begin{pmatrix} p^{(1)}_1 & p^{(2)}_1 & \cdots & p^{(n-1)}_1 \\ p^{(1)}_2 & p^{(2)}_2 & \cdots & p^{(n-1)}_2 \\ \vdots & \vdots & \ddots & \vdots \\ p^{(1)}_n & p^{(2)}_n & \cdots & p^{(n-1)}_n \end{pmatrix}.$$
Note that $\mathrm{MCP}_{rt} = p^{(t)}_r$. For each candidate changepoint $t \in [n-1]$ we construct $n$ p-values for $H_{0t}$ (corresponding to a column in the MCP), where each p-value corresponds to one of the data points. Recall from Algorithm 1 that these p-values are constructed by considering the randomized rank of
that point's score, relative to the other points on the same side. Focusing on the $t$th row of the MCP, we then refine these p-values as in Figure 2.

Focus on a particular index $t \in [n-1]$. Under $H_{0t}$, the scores $(\kappa^{(t)}_r)_{r=1}^t$ are exchangeable, meaning that the p-values $(p^{(t)}_r)_{r=1}^t$ are independent. Therefore, any aggregation of them (for instance, the normalized KS distance from their empirical cdf to uniform used in $W^{(i)}_t$) is distribution-free and we can use it to perform statistical tests. Similarly, the p-values $(p^{(t)}_r)_{r=t+1}^n$ are independent, and we can aggregate them using the KS distance into a single distribution-free statistic used for testing $H_{0t}$. In Section 4.1 and Section 4.2, we discuss the subroutines of the CONCH algorithm in greater depth. Then, in Section 4.3, we will prove theoretical guarantees about the validity of our confidence sets.

4 Constructing confidence sets for the changepoint

In this section, we describe several methods for constructing confidence sets for the changepoint using the CONCH algorithm. All of these methods will be based on the Neyman construction, which inverts the hypothesis test of $H_{0t} : \xi = t$ against $H_{1t} : \xi \neq t$. Using the algorithms in this section, we will produce two p-values for this test, and combine them into one.

Figure 2: Refinement of the p-values in the $t$th row of the matrix of conformal p-values (MCP) into the discrepancy statistics $W^{(0)}_t$ and $W^{(1)}_t$. The p-values for data points before and at the candidate changepoint $t$ are aggregated into $W^{(0)}_t = \sqrt{t}\, \mathrm{KS}(\hat{F}_0, u)$ using the Kolmogorov-Smirnov distance between their empirical CDF and the uniform distribution. Similarly, the p-values for data points after the candidate changepoint are aggregated into $W^{(1)}_t = \sqrt{n-t}\, \mathrm{KS}(\hat{F}_1, u)$.
In particular, we will construct a $1-\alpha$ confidence set for the changepoint by simultaneously inverting the hypothesis tests for all $t \in [n-1]$.

4.1 Hypothesis testing: generating left and right p-values

In this section, we discuss how we can consolidate the p-values from the CONCH algorithm into two p-values for $H_{0t}$, which we call the left and right p-values; this consolidation can be done using empirical or asymptotic tests, which we will describe further in Section 4.1.1 and Section 4.1.2 respectively. Although we have observed the empirical and asymptotic tests in this section to perform the best in practice, we also detail an alternative method based on permutation testing in Appendix A.

4.1.1 Empirical testing

We can use simulations to construct confidence sets for the changepoint by estimating the level $1-\alpha$ quantile of $W^{(0)}_t$ and $W^{(1)}_t$ under the null hypothesis that there is no changepoint. This method is valid for all values of $t$. We use the following algorithm to construct left and right p-values.

Algorithm 2: Empirical test
Input: $(W^{(0)}_t, W^{(1)}_t)$ (discrepancy scores), $B \in \mathbb{N}$ (sample size)
Output: $(p^{\mathrm{left}}_t, p^{\mathrm{right}}_t)$ (a pair of p-values for the left and right data)
1  for $b \in [B]$ do
2      $(X^{(b)}_t)_{t=1}^n \sim \mathrm{Unif}([0,1]^n)$
3      Compute $(W^{(0,b)}_t)_{t=1}^{n-1}$ and $(W^{(1,b)}_t)_{t=1}^{n-1}$ using line 1 to line 19 of Algorithm 1 on simulated data $(X^{(b)}_t)_{t=1}^n$
4  end
5  Draw $\theta_0, \theta_1 \sim \mathrm{Unif}(0,1)$
6  $p^{\mathrm{left}}_t \leftarrow \frac{1}{B+1} \left( \theta_0 + \sum_{b=1}^{B} \left( \mathbf{1}_{W^{(0,b)}_t > W^{(0)}_t} + \theta_0 \mathbf{1}_{W^{(0,b)}_t = W^{(0)}_t} \right) \right)$
7  $p^{\mathrm{right}}_t \leftarrow \frac{1}{B+1} \left( \theta_1 + \sum_{b=1}^{B} \left( \mathbf{1}_{W^{(1,b)}_t > W^{(1)}_t} + \theta_1 \mathbf{1}_{W^{(1,b)}_t = W^{(1)}_t} \right) \right)$
8  return $(p^{\mathrm{left}}_t, p^{\mathrm{right}}_t)$

4.1.2 Asymptotic KS test

We can use the asymptotic distribution of $W_t$ to construct confidence sets for the changepoint. This method is valid when $t$ and $n-t$ are both large. Recall that $\varphi_s = B_s - s B_1$ (where $(B_s)_{s \ge 0}$ is a standard Brownian motion on $\mathbb{R}$) is the Brownian bridge. Define $F_\varphi(z) := \mathbb{P}\left( \sup_{s \in [0,1]} |\varphi_s| \le z \right)$. Then, by a central limit theorem due to Donsker (1951), we can compute the asymptotically valid p-values
$$p^{\mathrm{left}}_t = 1 - F_\varphi(W^{(0)}_t), \qquad p^{\mathrm{right}}_t = 1 - F_\varphi(W^{(1)}_t).$$
Note that in general, one can use the following approximation to the $1-\alpha$ quantile of $\sup_{s \in [0,1]} |\varphi_s|$ (which corresponds to the asymptotic quantile of $W^{(0)}_t$ and $W^{(1)}_t$), as originally observed by Knuth (2014):
$$\hat{q}_{1-\alpha} \approx \sqrt{-\tfrac{1}{2} \log \tfrac{\alpha}{2}}.$$
Observe also that the asymptotic test of Section 4.1.2 is much faster to run than the empirical test of Section 4.1.1, but it is only valid when $t$ and $n-t$ are both large. In practice, we can use the empirical test in Section 4.1.1 to test $H_{0t}$ when $t$ and $n-t$ are small, and use the asymptotic test in Section 4.1.2 when $t$ and $n-t$ are large. Next, we will discuss how to combine the left and right p-values from Section 4.1 into a single p-value for $H_{0t}$.

4.2 Combining p-values and constructing confidence sets

In this section, we discuss how we can combine the left and right p-values into a single p-value $p_t$ for $H_{0t}$. At the changepoint, we expect $p_\xi$ to be exactly uniformly distributed. Away from the changepoint, we would expect either the left or right p-value to be small. Here are some ways to combine the left and right p-values under various independence assumptions on $p^{\mathrm{left}}_t$ and $p^{\mathrm{right}}_t$.
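The asymptotic test needs no special libraries: the law of $\sup_{s} |\varphi_s|$ is the Kolmogorov distribution, whose survival function admits a rapidly converging series, $\mathbb{P}(\sup |\varphi| > z) = 2 \sum_{k \ge 1} (-1)^{k-1} e^{-2 k^2 z^2}$. A minimal sketch using this series identity (function names are ours):

```python
import math

def kolmogorov_sf(z, terms=100):
    """Survival function P(sup_{s in [0,1]} |phi_s| > z) of the Kolmogorov law."""
    if z <= 0:
        return 1.0
    return 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * z * z)
                     for k in range(1, terms + 1))

def asymptotic_pvalue(w):
    """Asymptotic p-value 1 - F_phi(W) for a discrepancy score W."""
    return kolmogorov_sf(w)

def quantile_approx(alpha):
    """Knuth-style approximation to the 1 - alpha quantile of sup |phi|."""
    return math.sqrt(-0.5 * math.log(alpha / 2.0))
```

For $\alpha = 0.05$, the approximate quantile is about $1.358$, and plugging it back into the survival function recovers a tail probability close to $0.05$.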
For instance, the left and right p-values are independent if the score functions $s^{(0)}_t$ and $s^{(1)}_t$ have no dependence on their last arguments; we call such score functions non-adaptive. Intuitively, a non-adaptive left score function $s^{(0)}_t$ is one which does not use data to the right of $t$. On the other hand, it is clear that the left and right p-values will not necessarily be independent if the score functions are adaptive ($s^{(0)}_t$ or $s^{(1)}_t$ depends nontrivially on its last argument).

• Minimum (requires independence): The choice $p_t = 1 - (1 - \min\{p^{\mathrm{left}}_t, p^{\mathrm{right}}_t\})^2$ is a powerful combining method for change detection, since away from the changepoint, we would expect either the left or right p-value to be small. Under the null hypothesis $H_{0t}$, we would expect $p^{\mathrm{left}}_t$ and $p^{\mathrm{right}}_t$ to be independent uniform random variables, so that $p_t$ is distribution-free and uniformly distributed.

• Fisher's method (requires independence): Fisher (1970) observed that the combining method $p_t = 1 - F_{\chi^2_4}\!\left( -2 \log p^{\mathrm{left}}_t - 2 \log p^{\mathrm{right}}_t \right)$ can be a good choice, letting $F_{\chi^2_4}$ denote the cumulative distribution function of a $\chi^2_4$ random variable. This choice works because $-2 \log p^{\mathrm{left}}_t - 2 \log p^{\mathrm{right}}_t \sim \chi^2_4$, assuming that $p^{\mathrm{left}}_t$ and $p^{\mathrm{right}}_t$ are independent uniform random variables. This method is equivalent to taking the product of $p^{\mathrm{left}}_t$ and $p^{\mathrm{right}}_t$, which can be a more powerful choice than the minimum when both p-values are expected to be small.

• Bonferroni correction (arbitrary dependence): If the left and right p-values are dependent, then we can use the Bonferroni correction to combine the p-values: $p_t = \min\{2 p^{\mathrm{left}}_t, 2 p^{\mathrm{right}}_t, 1\}$. This p-value is not distribution-free, but it is stochastically larger than uniform under the null hypothesis, so we can use it for testing.

In practice, we have found the minimum and the Bonferroni correction to be good choices of combining function, under independence and dependence respectively. Using the p-values $(p_t)_{t=1}^{n-1}$ constructed from any of the above methods, we can construct a $1-\alpha$ confidence set for the changepoint by simultaneously inverting the hypothesis tests for all $t \in [n-1]$ with the standard Neyman construction $\mathcal{C}_{1-\alpha} = \{ t \in [n-1] : p_t > \alpha \}$, as described in Algorithm 1. Next, we will discuss the theoretical guarantees of the CONCH algorithm.

4.3 Theoretical guarantees

In this section, we give finite-sample and asymptotic coverage guarantees for the CONCH algorithm which hold under very general conditions; all proofs can be found in Appendix B. Although it is equivalent, we take a slightly different approach to randomized conformal p-values than the one typically taken in the conformal prediction literature. For an alternative proof, see Angelopoulos et al. (2024) and Vovk et al. (2003). We begin with a useful lemma.

Lemma 4.1 (Randomized probability integral transform). Suppose $X$ is a real-valued random variable with cdf $F_X$ and let $V \sim \mathrm{Unif}(0,1)$ be drawn independently of $X$. If we define
$$U = \lim_{y \uparrow X} F_X(y) + V \left( F_X(X) - \lim_{y \uparrow X} F_X(y) \right),$$
then $U \sim \mathrm{Unif}(0,1)$. Here, $U$ is called the randomized probability integral transform of $X$.

Using Lemma 4.1, we can prove coverage guarantees for our confidence sets.
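The three combining rules can each be written in one or two lines. For the $\chi^2_4$ tail we use the closed form $e^{-x/2}(1 + x/2)$, which is exact for four degrees of freedom; function names are ours:

```python
import math

def combine_min(p_left, p_right):
    """Minimum rule (valid when the two p-values are independent)."""
    return 1.0 - (1.0 - min(p_left, p_right)) ** 2

def combine_fisher(p_left, p_right):
    """Fisher's rule: tail probability of chi^2_4 at -2 log p_left - 2 log p_right."""
    stat = -2.0 * math.log(p_left) - 2.0 * math.log(p_right)
    return math.exp(-stat / 2.0) * (1.0 + stat / 2.0)  # closed-form chi^2_4 tail

def combine_bonferroni(p_left, p_right):
    """Bonferroni rule (valid under arbitrary dependence)."""
    return min(2.0 * p_left, 2.0 * p_right, 1.0)
```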
Theorem 4.2 (Coverage guarantee (empirical test)). Suppose that $(X_t)_{t=1}^n$ is an exchangeable sequence of random variables with a changepoint $\xi \in [n]$. Let $\mathcal{C}_{1-\alpha}$ be the level $1-\alpha$ confidence set constructed with Algorithm 1, using the empirical test described in Section 4.1.1 to construct left and right p-values and using any algorithm in Section 4.2 to combine these p-values. Let $\Gamma$ denote the following sub-bags of observations:
$$\Gamma = \left( (\llbracket X_1, \dots, X_r \rrbracket)_{r=1}^n,\; (\llbracket X_{n-r+1}, \dots, X_n \rrbracket)_{r=1}^n \right).$$
Then, the coverage of $\mathcal{C}_{1-\alpha}$ is at least $1-\alpha$, conditional on the sub-bags $\Gamma$:
$$\mathbb{P}(\xi \in \mathcal{C}_{1-\alpha} \mid \Gamma) \ge 1 - \alpha.$$

Similarly, we have an asymptotic coverage guarantee for the CONCH algorithm.

Theorem 4.3 (Asymptotic coverage guarantee (asymptotic KS test)). Suppose that $(X_t)_{t=1}^n$ is an exchangeable sequence of random variables with a changepoint $\xi = \lfloor \gamma n \rfloor$ for some $\gamma \in (0,1)$. Let $\mathcal{C}_{1-\alpha}$ be the level $1-\alpha$ confidence set constructed with Algorithm 1, using the asymptotic test described in Section 4.1.2 to construct left and right p-values and using any algorithm in Section 4.2 to combine these p-values. Let $\Gamma$ denote the following sub-bags of observations:
$$\Gamma = \left( (\llbracket X_1, \dots, X_r \rrbracket)_{r=1}^n,\; (\llbracket X_{n-r+1}, \dots, X_n \rrbracket)_{r=1}^n \right).$$
Then, the coverage of $\mathcal{C}_{1-\alpha}$ converges to at least $1-\alpha$, conditional on the sub-bags $\Gamma$:
$$\lim_{n \to \infty} \mathbb{P}(\xi \in \mathcal{C}_{1-\alpha} \mid \Gamma) \ge 1 - \alpha.$$

Next, we show the ways in which our
algorithm can be used if we are not sure about the existence of a changepoint in the dataset.

5 Pre-testing for exchangeability

Suppose we are unsure whether a changepoint exists in the dataset. In this case, it is natural to first test the entire dataset for exchangeability and only proceed with the CONCH algorithm if we conclude that the data is not exchangeable. In doing so, we observe that this method does not affect the performance of our algorithms too much in practice; see Section 7.1 for related simulations. So how should we test the entire dataset for exchangeability? Formally, we are testing
$$H_0 : (X_1, \dots, X_n) \text{ are exchangeable} \quad \text{against} \quad H_1 : (X_1, \dots, X_n) \text{ are not exchangeable}.$$
Following Vovk (2021), using a score function $s$, we compute the sequential (randomized) ranks for $t \in [n]$ as
$$\tilde{p}_t = \frac{1}{t} \sum_{j=1}^{t} \left( \mathbf{1}_{s(X_j) > s(X_t)} + \theta_t \mathbf{1}_{s(X_j) = s(X_t)} \right),$$
where the $\theta_t \sim \mathrm{Unif}(0,1)$ are i.i.d. and independent of the data. Under exchangeability, the $\tilde{p}_t \sim \mathrm{Unif}(0,1)$ are independent, so we can perform a one-sample KS test from uniform using $(\tilde{p}_1, \dots, \tilde{p}_n)$. In particular, if $\hat{F}$ is the empirical cdf of $(\tilde{p}_1, \dots, \tilde{p}_n)$ and $u$ is the cdf of $\mathrm{Unif}(0,1)$, we can compute the statistic $\sqrt{n}\, \mathrm{KS}(\hat{F}, u)$ and reject when this statistic is above a threshold $t_{\alpha_0}$, which is chosen so that the Type I error of the test does not exceed $\alpha_0 \in (0,1)$. Here, the threshold can be chosen by empirically simulating uniform random variables or through asymptotics, as in Section 4.1. The p-value associated with this test is termed the "forward p-value". We also recommend computing the sequential ranks in reverse by
$$\tilde{p}^{\,\mathrm{b}}_t = \frac{1}{t} \sum_{j=n-t+1}^{n} \left( \mathbf{1}_{s(X_j) > s(X_t)} + \theta_t \mathbf{1}_{s(X_j) = s(X_t)} \right)$$
and calculating a "backward p-value" using a one-sample KS test from uniform on $(\tilde{p}^{\,\mathrm{b}}_1, \dots, \tilde{p}^{\,\mathrm{b}}_n)$. Then, the forward and backward p-values obtained from the respective KS tests can be combined using the Bonferroni method (rejecting at level $\alpha_0$ when either of the p-values is at most $\alpha_0 / 2$).
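Given forward and backward KS statistics computed as above, the pre-test decision can be sketched as follows. We approximate the null distribution by Monte Carlo, using the fact that under exchangeability the sequential randomized ranks are i.i.d. uniform, so it suffices to simulate uniforms directly. This is a sketch with hypothetical names, not the paper's implementation:

```python
import math
import random

def ks_stat_to_uniform(vals):
    """sqrt(m) * KS distance from the empirical CDF of vals to Unif(0,1)."""
    m = len(vals)
    v = sorted(vals)
    return math.sqrt(m) * max(max((i + 1) / m - v[i], v[i] - i / m)
                              for i in range(m))

def pretest_exchangeability(stat_fwd, stat_bwd, n, alpha0, B=1000, seed=0):
    """Bonferroni-combined pre-test: reject exchangeability iff either the
    forward or the backward p-value is at most alpha0 / 2."""
    rng = random.Random(seed)
    # Null distribution: sequential ranks of exchangeable data are i.i.d.
    # Unif(0,1), so the null statistic is the KS statistic of n uniforms.
    null = [ks_stat_to_uniform([rng.random() for _ in range(n)])
            for _ in range(B)]
    p_fwd = (1 + sum(s >= stat_fwd for s in null)) / (B + 1)
    p_bwd = (1 + sum(s >= stat_bwd for s in null)) / (B + 1)
    return min(p_fwd, p_bwd) <= alpha0 / 2
```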
Hence, it suffices to choose a good score function to test exchangeability, for which several methods are provided in Shafer and Vovk (2008). Note that if we pre-test the data for exchangeability using a test of level $\alpha_0 \in (0,1)$, then our confidence-interval algorithm will lose its theoretical guarantees on the event (of probability at most $\alpha_0$) where the pre-test makes a Type I error, but we show in Section 7 that this does not affect the empirical performance of the CONCH algorithm. Next, we discuss how the score function can be learned from the data to tighten our confidence sets.

6 How should the score function be chosen?

The choice of score function is crucial to the performance of the CONCH algorithm, and the score function should be chosen to maximize the power of the test. In this section, we will discuss how the score function should be chosen in practice. Suppose that $P_0 = \otimes_{t=1}^{\xi} P^*_0$ and $P_1 = \otimes_{t=\xi+1}^{n} P^*_1$, where $P^*_0$ and $P^*_1$ have densities $f_0$ and $f_1$ respectively. Then, if $f_0$ and $f_1$ are known, the Neyman-Pearson lemma states that the uniformly most powerful test for the
hypothesis $H_{0t} : \xi = t$ against the alternative $H_{1t} : \xi \neq t$ is the likelihood ratio test. In particular, the likelihood ratio test rejects $H_{0t}$ if and only if (for some threshold $c_{1-\alpha} > 0$):
$$\frac{\sup_{\xi \in [n] \setminus \{t\}} \prod_{r=1}^{\xi} f_0(X_r) \prod_{r=\xi+1}^{n} f_1(X_r)}{\prod_{r=1}^{t} f_0(X_r) \prod_{r=t+1}^{n} f_1(X_r)} \ge c_{1-\alpha}.$$
Hence, a good choice for the left-hand score function is the likelihood ratio statistic (we formally motivate this choice in Appendix D):
$$s^{(0)}_t(x_r; \llbracket x_1, \dots, x_t \rrbracket, (x_{t+1}, \dots, x_n)) = \frac{f_1(x_r)}{f_0(x_r)}.$$
Analogously, we could choose the right-hand score function:
$$s^{(1)}_t(x_r; \llbracket x_{t+1}, \dots, x_n \rrbracket, (x_1, \dots, x_t)) = \frac{f_0(x_r)}{f_1(x_r)}.$$
In practice, the choice of score function is crucial to the performance of the CONCH algorithm. The score function should be chosen to maximize the power of the test, although the validity of the test does not depend on the choice of score function. Here are several ways to choose the score function:

• Likelihood ratio: If the pre-change and post-change distributions are known, the likelihood ratio is an extremely powerful score function for the changepoint localization problem:
$$s^{(0)}_t(x_r; \llbracket x_1, \dots, x_t \rrbracket, (x_{t+1}, \dots, x_n)) = \frac{f_1(x_r)}{f_0(x_r)}, \qquad s^{(1)}_t(x_r; \llbracket x_{t+1}, \dots, x_n \rrbracket, (x_1, \dots, x_t)) = \frac{f_0(x_r)}{f_1(x_r)}.$$

• Density estimators: If the pre-change and post-change distributions are unknown, one can estimate the likelihood ratio using density estimators on the left and right-hand sides respectively. For instance, suppose one has trained a density estimator $\hat{f}_0$ using the data $(x_1, \dots, x_t)$ and another density estimator $\hat{f}_1$ using the data $(x_{t+1}, \dots, x_n)$. Furthermore, suppose one has trained a density estimator $\hat{f}^{\mathrm{exch}}_0$ using the data $\llbracket x_1, \dots, x_t \rrbracket$ and another density estimator $\hat{f}^{\mathrm{exch}}_1$ using the data $\llbracket x_{t+1}, \dots, x_n \rrbracket$. The density estimators $\hat{f}_0$ and $\hat{f}_1$ can use their data non-exchangeably to learn, while we restrict $\hat{f}^{\mathrm{exch}}_0$ and $\hat{f}^{\mathrm{exch}}_1$ to use their training data exchangeably.
For instance, one could use a weighted ERM to learn $\hat{f}_0$ which places more weight on earlier samples (which are more likely to come from the pre-change distribution $P_0$). Then, we define the score functions as follows:
$$s^{(0)}_t(x_r; \llbracket x_1, \dots, x_t \rrbracket, (x_{t+1}, \dots, x_n)) = \frac{\hat{f}_1(x_r)}{\hat{f}^{\mathrm{exch}}_0(x_r)}, \qquad s^{(1)}_t(x_r; \llbracket x_{t+1}, \dots, x_n \rrbracket, (x_1, \dots, x_t)) = \frac{\hat{f}_0(x_r)}{\hat{f}^{\mathrm{exch}}_1(x_r)}.$$
For example, one could use a kernel density estimator or a neural network to learn these density estimators.

• Classifier-based likelihood ratio: One can use a classifier to estimate the likelihood ratio, which allows us to use our method even when the likelihood is intractable. Suppose we have trained classifiers $\hat{g}_0 : \mathcal{X} \to [0,1]$ and $\hat{g}_1 : \mathcal{X} \to [0,1]$ which each output an estimated probability that $x \in \mathcal{X}$ was drawn from $P_1$ (instead of $P_0$). Then, we can use these classifiers to estimate the likelihood ratio:
$$s^{(0)}_t(x_r; \llbracket x_1, \dots, x_t \rrbracket, (x_{t+1}, \dots, x_n)) = \frac{\hat{g}_0(x_r)}{1 - \hat{g}_0(x_r)}, \qquad s^{(1)}_t(x_r; \llbracket x_{t+1}, \dots, x_n \rrbracket, (x_1, \dots, x_t)) = \frac{1 - \hat{g}_1(x_r)}{\hat{g}_1(x_r)}.$$
In fact, note that $\hat{g}_0$ can depend
on the left-hand side of the data exchangeably and the right half non-exchangeably. Therefore, we can train the classifier using a weighted empirical risk minimization which more strongly penalizes misclassifying data on the far right (which is likely to be post-change). Similarly, $\hat{g}_1$ can be trained using a weighted empirical risk minimization which more strongly penalizes misclassifying data on the far left (which is likely to be pre-change).

• Pretrained multi-class classifier: We can use a pre-trained multi-class classifier to create a score function, which allows our algorithm to form narrow confidence sets in extremely general settings. Suppose we have a pre-trained multi-class classifier $\hat{g} : \mathcal{X} \to \Delta_{\mathcal{S}}$, where $\mathcal{S}$ is a discrete set of labels and $\Delta_{\mathcal{S}}$ represents the probability simplex over $\mathcal{S}$:
$$\Delta_{\mathcal{S}} = \left\{ p \in [0,1]^{|\mathcal{S}|} : \sum_{s \in \mathcal{S}} p_s = 1 \right\}.$$
Furthermore, assume that $f_0$ is a density over $g^{-1}(s_0)$ for some $s_0 \in \mathcal{S}$, where $g$ is the ground-truth classifier. Similarly, assume that $f_1$ is a density over $g^{-1}(s_1)$ for some $s_1 \in \mathcal{S}$. We begin by estimating the indices of the pre-change and post-change distributions:
$$\hat{s}_0 = \arg\max_{s \in \mathcal{S}} \left| \{ 1 \le i \le t : \hat{g}(x_i)_s \ge \hat{g}(x_i)_{s'} \text{ for all } s' \in \mathcal{S} \} \right|, \qquad \hat{s}_1 = \arg\max_{s \in \mathcal{S}} \left| \{ t+1 \le i \le n : \hat{g}(x_i)_s \ge \hat{g}(x_i)_{s'} \text{ for all } s' \in \mathcal{S} \} \right|.$$
Then, we can use these indices to estimate the likelihood ratio:
$$s^{(0)}_t(x_r; \llbracket x_1, \dots, x_t \rrbracket, (x_{t+1}, \dots, x_n)) = \frac{\hat{g}(x_r)_{\hat{s}_0}}{1 - \hat{g}(x_r)_{\hat{s}_0}}, \qquad s^{(1)}_t(x_r; \llbracket x_{t+1}, \dots, x_n \rrbracket, (x_1, \dots, x_t)) = \frac{1 - \hat{g}(x_r)_{\hat{s}_1}}{\hat{g}(x_r)_{\hat{s}_1}}.$$
Finally, we demonstrate the performance of our algorithms through simulations.

7 Experiments

In this section, we demonstrate that the CONCH algorithm can be used to construct narrow confidence sets for the changepoint in a wide variety of settings, and that our confidence sets have good empirical coverage and width. All code for these experiments, including generic implementations of the CONCH algorithm, can be found in the following GitHub repository: https://www.github.com/sanjitdp/conformal-change-localization.
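The classifier-based constructions above reduce to a few lines once a probabilistic classifier is in hand. Below is a minimal sketch: `clf_prob` and `clf_probs` stand in for any trained classifiers, and all names are ours rather than the paper's implementation:

```python
from collections import Counter

def classifier_score(x, clf_prob, side):
    """Classifier-based likelihood-ratio score.

    clf_prob(x) is the classifier's estimated probability that x is post-change
    (drawn from P1 rather than P0). Left score: g/(1-g); right score: (1-g)/g.
    """
    g = min(max(clf_prob(x), 1e-12), 1.0 - 1e-12)  # clamp away from 0 and 1
    return g / (1.0 - g) if side == "left" else (1.0 - g) / g

def estimate_label(points, clf_probs):
    """Majority predicted label on one side (the \\hat{s}_0 / \\hat{s}_1 step).

    clf_probs(x) returns a dict mapping each label to its predicted probability.
    """
    votes = Counter()
    for x in points:
        probs = clf_probs(x)
        votes[max(probs, key=probs.get)] += 1
    return votes.most_common(1)[0][0]
```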
7.1 Gaussian mean change

We begin with a Gaussian mean change simulation. Suppose that we observe data of length $n = 1000$ and the changepoint is $\xi = 400$. Furthermore, suppose that $P_0 = \otimes_{t=1}^{\xi} N(-1, 1)$ and $P_1 = \otimes_{t=\xi+1}^{n} N(1, 1)$. First, as a baseline, suppose that we have the oracle likelihood ratio score function, as described in Section 6. We can then use the CONCH algorithm to find discrepancy scores $((W^{(i)}_t)_{i=0}^{1})_{t=1}^{n-1}$ and transform them into left and right p-values using Algorithm 2. Finally, we use the methods described in Section 4.2 to combine the left and right p-values into a single p-value for $H_{0t}$ using the minimum combining function, and construct a $1-\alpha$ confidence set for the changepoint by inverting the test. We plot the resulting p-values $(p_t)_{t=1}^{n-1}$ below.

Note that even if we did not know that the data contained a changepoint, we could pre-test the whole dataset for exchangeability, as discussed in Section 5. When we pre-test the data for exchangeability at level $\alpha_0 = 0.01$ using the identity score function and only the forward p-values, we estimate over 1000 simulations (with no changepoint) that the Type I error rate of our pre-testing algorithm is 0.009, which aligns with the theoretical Type I error of $\alpha_0 = 0.01$. On the
event of probability at most $\alpha_0$ where the pre-testing algorithm fails, the CONCH algorithm cannot give a sensible confidence interval. However, over 1000 simulations (with a changepoint), the pre-test was always able to detect a deviation from exchangeability. Note that if we know that there is a changepoint in the data, the pre-testing algorithm is not needed.

Table 1: Type I error and power of the pre-test for exchangeability of the entire sequence.
    Type I error    0.009
    Power           1.000

For all future simulations, assume that we do not use a pre-test for exchangeability. On the other hand, suppose that we do not have access to the oracle score function. Then, as described in Section 6, we can use a kernel density estimator to estimate the likelihood ratio using the data before and after $t$ (for each $t \in [n-1]$), using a Gaussian kernel. We choose bandwidth $10^{-1}$ when $t \in \{1, n-1\}$ and bandwidth $r^{-1/5}$ otherwise, where $r$ is the number of samples used to learn each KDE; this is known as Scott's rule (originally due to Scott (1979)). Then, we repeat the process described above to compute the left and right p-values. However, the left and right p-values are dependent in this case (since the score function depends on data from both sides). Therefore, we use the Bonferroni correction to combine the p-values, plotting the resulting p-values $(p_t)_{t=1}^{n-1}$.

Figure 3: p-values for a Gaussian mean change at $\xi = 400$, using (a) the oracle score function family and (b) the learned score function family. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green line ($\alpha = 0.05$) corresponds to our 95% confidence set.
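For this particular mean change, the oracle likelihood ratio has a simple closed form: with $f_0$ the $N(-1,1)$ density and $f_1$ the $N(1,1)$ density, the quadratic terms cancel and $f_1(x)/f_0(x) = e^{2x}$, so the oracle score is monotone in $x$. A quick sketch verifying this (names are ours):

```python
import math

def gauss_pdf(x, mu):
    """Density of N(mu, 1)."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def oracle_score(x):
    """Oracle left score f1(x) / f0(x) for the N(-1,1) -> N(1,1) mean change.

    Algebraically this equals exp(2x), since the quadratic terms cancel.
    """
    return gauss_pdf(x, 1.0) / gauss_pdf(x, -1.0)
```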
In particular, our confidence interval does not suffer much (compared to the oracle likelihood ratio score function) even if our score is learned using a kernel density estimator. Hence, we plot the associated confidence sets for the changepoint using the oracle and learned scores.

Figure 4: 95% and 50% confidence sets for the Gaussian mean change using an oracle score function family and a learned score function family. The dashed red line indicates the true changepoint.

To give more intuition for the CONCH algorithm, we can also visualize the matrix of conformal p-values (MCP) as described in Section 3.

Figure 5: (a) Matrix of conformal p-values (MCP) for a Gaussian mean change using the oracle score function. The true changepoint occurs at $\xi = 400$. (b) Validity of p-values in the matrix of
conformal p-values (green p-values are valid, red p-values are invalid). Note that all of the p-values are valid only when $t = \xi = 400$, shown as the yellow row.

Observe that only when $t = \xi = 400$ are the p-values in row $t$ truly uniformly distributed and independent in the left and right halves respectively, as they should be under $H_{0t}$. For instance, if $t < \xi$, then the right p-values will be invalid and the Kolmogorov-Smirnov statistic $W^{(1)}_t$ used in the CONCH algorithm (Algorithm 1) will detect their deviation from uniformity.

Running the same simulations for 1000 trials, we report the empirical average width and coverage of our confidence sets at various levels, as well as the bias and mean absolute deviation of our point estimator $\hat{\xi} = \arg\max_{t \in [n-1]} p_t$.

Table 2: Average width and coverage of confidence sets for the Gaussian mean change.
    Method                           Avg. width   Coverage       Bias     Mean absolute deviation
    Oracle score (50%)               22.56        0.50 ± 0.03   -1.01     13.13
    Parametric learned score (50%)   24.05        0.51 ± 0.03    0.22     13.10
    Learned score (50%)              28.08        0.56 ± 0.03   12.33     35.15
    Oracle score (95%)               74.33        0.94 ± 0.02   -1.01     13.13
    Parametric learned score (95%)   75.40        0.95 ± 0.02    0.22     13.10
    Learned score (95%)              75.96        0.97 ± 0.02   12.33     35.15

We know from our theoretical results (Section 4.3) that the true coverage probability is exactly 50% and 95% respectively for a 50% or 95% confidence interval constructed using our method; this is what we observe empirically as well. Importantly, the average width of the confidence sets using the learned score function is very comparable to the average width of the confidence sets from using the oracle score function, showing that our method performs well even when the likelihood of the data is not available. Furthermore, though we do not give theoretical guarantees for our point estimator, we observe that the point estimator is empirically close to the true changepoint.
7.2 MNIST digit change

Next, we consider a simulation of a digit change from the MNIST handwritten digit dataset (Deng, 2012); even when the data is high-dimensional with complex structure, our algorithm can be used to localize the changepoint. Suppose that we observe data of length $n = 1000$ and the changepoint is $\xi = 400$. Suppose that the pre-change class $P_0$ consists of i.i.d. draws from the set of handwritten "3" digits and the post-change class $P_1$ consists of i.i.d. draws from the set of handwritten "7" digits.

Figure 6: Partial sample of the MNIST digit change from "3" to "7" with a changepoint at $\xi = 400$ (frames $t = 398$ through $t = 402$).

As described in Section 6, we can use a pretrained multi-class classifier to estimate the likelihood ratio using the data before and after $t$ (for each $t \in [n-1]$); in this case, we use a simple convolutional neural network; see Appendix C for details about how the network was trained. We can then use the CONCH algorithm to find discrepancy scores $((W^{(i)}_t)_{i=0}^{1})_{t=1}^{n-1}$ and transform them into left and right p-values using Algorithm 2. In this case, the left and right p-values are independent, since the left score function $s^{(0)}_t$ only depends on data to the left of $t$ and the right score function $s^{(1)}_t$ only depends
on data to the right of $t$. Therefore, we use the minimum method to combine the p-values, plotting the resulting p-values $(p_t)_{t=1}^{n-1}$.

Figure 7: p-values for the MNIST digit change at $\xi = 400$ using a convolutional neural network trained for digit prediction. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green line ($\alpha = 0.05$) corresponds to our 95% confidence set.

In particular, the resulting confidence set is [385, 423], which is completely nontrivial and obtained only using a classifier learned on the entire MNIST dataset. Now, suppose we had 100 additional certified pre-change samples and 100 additional certified post-change samples, which we add to the left and right samples respectively. In this case, we are able to obtain better results.

Figure 8: p-values for the MNIST digit change at $\xi = 400$ using 100 certified calibration samples (a) on the left side only and (b) on each side. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green line ($\alpha = 0.05$) corresponds to our 95% confidence set.

When we have additional pre-change samples, we get the confidence interval [389, 413], so the additional pre-change samples allow us to significantly tighten our confidence interval.
If we assume we have access to 100 pre-change and post-change samples (i.e., we can learn the nature of the change that will happen), we obtain the even tighter confidence interval [396, 406].

Finally, it is interesting to note that our algorithms give a nontrivial confidence interval even when the pre-trained classifier was trained for an entirely different but related task. If we replace our convolutional neural network with a ResNet-18 image classification model introduced in He et al. (2016) (without fine-tuning on MNIST), we obtain the following result.

Figure 9: p-values for MNIST digit change at ξ = 400 using the large ResNet-18 image model trained for image classification. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green line (α = 0.05) corresponds to our 95% confidence set.

The resulting confidence set is [327, 426], which is much wider than the ones obtained using a classifier trained specifically for MNIST. In fact, the ResNet-18 model outputs one of 1000 possible classes in the ImageNet database. This shows that our methods are robust to model misspecification, since CONCH gives a nontrivial confidence set as long as the left and right scores are not exchangeable with one another.
7.3 Sentiment change using large language models (LLMs)

Finally, we consider a simulation with a sentiment change in a sequence of text samples, showing that our algorithm is practical for localizing changepoints in language data. We consider the Stanford Sentiment Treebank (SST-2) dataset of movie reviews labeled with a binary sentiment, introduced in Socher et al. (2013). Suppose that we would like to detect a change from a generally positive sentiment in movie reviews to a negative sentiment. Although we analyze a simple dataset containing movie reviews, such a scenario arises frequently in practice when one is interested in localizing the time at which an arbitrary sentiment change occurred, given a sequence of language data. For instance, this may be a change in customer sentiment toward a product or general approval of a political leader or government office. Although we show an example with only two sentiments, in general, it is easy to pre-train models for general sentiment classification using a large language model.

Suppose that we observe data of length n = 1000 and the changepoint is ξ = 400. Suppose that the pre-change class P_0 consists of i.i.d. draws from the set of positive reviews and the post-change class P_1 consists of i.i.d. draws from the set of negative reviews. For instance, here is a sample of the data before and after the changepoint.

t = 398 (positive): "invigorating , surreal , and resonant with a rainbow of emotion ."
t = 399 (positive): "a generous , inspiring film that unfolds with grace and humor and gradually becomes a testament to faith ."
t = 400 (positive): "a fascinating , bombshell documentary "
t = 401 (negative): "a fragment of an underdone potato "
t = 402 (negative): "is a disaster , with cloying messages and irksome characters "

As described in Section 6, we can use a pretrained multi-class classifier to estimate the likelihood ratio using the data before and after t (for each t ∈ [n−1]).
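One standard way to turn a classifier into a likelihood-ratio estimate (the paper's Section 6 construction may differ in details) is the density-ratio-via-classification identity: train a probabilistic classifier to separate pre-t from post-t samples, then convert its predicted probability into a ratio. A sketch with illustrative names:

```python
def likelihood_ratio_estimate(c_prob: float, n_pre: int, n_post: int) -> float:
    """Estimate the post/pre density ratio q(x)/r(x) from a classifier.

    c_prob : classifier's predicted probability that x comes from the
             post-change (right) sample rather than the pre-change (left) one
    n_pre, n_post : training sample sizes of the two classes

    By Bayes' rule, c(x) = n_post*q(x) / (n_post*q(x) + n_pre*r(x)),
    so q(x)/r(x) = (c(x) / (1 - c(x))) * (n_pre / n_post).
    """
    return (c_prob / (1.0 - c_prob)) * (n_pre / n_post)
```

With balanced classes (n_pre = n_post), a prediction of 0.75 corresponds to an estimated likelihood ratio of 3.
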
For this simulation, we use the DistilBERT base model, fine-tuned for sentiment analysis on the uncased SST-2 dataset (for more on this model, see Sanh et al. (2019)). Using this model, we obtain the following results.

Figure 10: p-values for SST-2 sentiment change at ξ = 400 using DistilBERT trained for sentiment analysis. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green line (α = 0.05) corresponds to our 95% confidence set.

The resulting confidence set is [386, 408]; note that we were able to obtain such a tight confidence interval without even knowing the nature of the change that would happen. In fact, consider the more realistic (but much more difficult to localize) scenario where general sentiment changes from being 60% positive pre-change to 40% positive post-change. In this case, we are still able to form a nontrivial confidence set.
Figure 11: p-values for SST-2 mixed sentiment change at ξ = 400 using DistilBERT trained for sentiment analysis. The dashed red line indicates the true changepoint, and the region on the horizontal axis where the p-values lie above the dotted green line (α = 0.05) corresponds to our 95% confidence set.

Even though the change is extremely subtle and we impose very general modeling assumptions, we are able to obtain the 95% confidence set [363, 491] for the changepoint.

8 Conclusion

We introduced the CONCH algorithm, a novel method for constructing confidence sets for the changepoint, assuming that the pre-change and post-change distributions are each exchangeable. We demonstrated that the CONCH algorithm can be used to construct nontrivial confidence sets for the changepoint in a variety of settings, which enjoy finite-sample and asymptotic coverage guarantees. Furthermore, we demonstrated through simulations and experiments that the algorithm has good empirical coverage and width. We also discussed how the parameters of the CONCH algorithm, including the score function, should be chosen in practice to maximize the power of the test.

Acknowledgments

We thank Carlos M. M. Padilla for some early conversations and discussions, and Jing Lei for discussions relating to Appendix D.

References

Angelopoulos, A. N., Barber, R. F., & Bates, S. (2024). Theoretical foundations of conformal prediction. arXiv preprint arXiv:2411.11824.
Deng, L. (2012). The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29, 141–142. https://doi.org/10.1109/MSP.2012.2211477
Donsker, M. (1951). An invariance principle for certain probability limit theorems. American Mathematical Society Memoirs.
Dunn, O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association, 56(293), 52–64.
Fearnhead, P. (2006).
Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 16, 203–213.
Fisher, R. A. (1970). Statistical methods for research workers. In Breakthroughs in statistics: Methodology and distribution (pp. 66–70). Springer.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Knuth, D. E. (2014). The art of computer programming: Seminumerical algorithms, volume 2. Addison-Wesley Professional.
Massey Jr, F. J. (1951). The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253), 68–78.
Nouretdinov, I., Vovk, V., & Gammerman, A. (2021). Conformal changepoint detection in continuous model situations. Conformal and Probabilistic Prediction and Applications, 300–302.
Page, E. S. (1955). A test for a change in a parameter occurring at an unknown point. Biometrika, 42(3/4), 523–527.
Pettitt, A. N. (1979). A non-parametric approach to the change-point problem. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(2), 126–135.
Ramdas, A., & Wang, R. (2024). Hypothesis testing with e-values. arXiv preprint arXiv:2410.23614.
Ross, G. J., & Adams, N. M. (2012). Two nonparametric control charts for detecting arbitrary distribution changes. Journal of Quality Technology, 44(2), 102–116.
Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and
lighter. arXiv preprint arXiv:1910.01108.
Scott, D. W. (1979). On optimal and data-based histograms. Biometrika, 66(3), 605–610.
Shafer, G., & Vovk, V. (2008). A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3).
Shin, J., Ramdas, A., & Rinaldo, A. (2023). E-detectors: A nonparametric framework for sequential change detection. The New England Journal of Statistics in Data Science, 2(2), 229–260. https://doi.org/10.51387/23-NEJSDS51
Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A. Y., & Potts, C. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1631–1642.
Truong, C., Oudre, L., & Vayatis, N. (2020). Selective review of offline change point detection methods. Signal Processing, 167, 107299.
Volkhonskiy, D., Burnaev, E., Nouretdinov, I., Gammerman, A., & Vovk, V. (2017). Inductive conformal martingales for change-point detection. Conformal and Probabilistic Prediction and Applications, 132–153.
Vovk, V. (2021). Testing randomness online. Statistical Science, 36(4), 595–611.
Vovk, V., Nouretdinov, I., & Gammerman, A. (2003). Testing exchangeability on-line. Proceedings of the 20th International Conference on Machine Learning (ICML-03), 768–775.
Vovk, V., Petej, I., Nouretdinov, I., Ahlberg, E., Carlsson, L., & Gammerman, A. (2021). Retrain or not retrain: Conformal test martingales for change-point detection. Conformal and Probabilistic Prediction and Applications, 191–210.

A Extensions of the CONCH algorithm

Here, we describe several alternatives to the CONCH algorithm described in Section 3 and Section 4.
A.1 Permutation testing

Instead of using the empirical test or asymptotic test (Section 4.1.1 and Section 4.1.2 respectively), we can use permutation tests to construct confidence sets for the changepoint by using the distribution of W_t^{(0)} and W_t^{(1)} under permutations of the data before and after t respectively. Under the null hypothesis H_{0t}, the data before and after t is exchangeable, so the distribution of (W_t^{(0)}, W_t^{(1)}) under permutations of the data before and after t should remain the same. This method is valid for all values of t, and we can use the following algorithm to construct confidence sets for the changepoint using permutation tests.

Algorithm 3: Permutation test
Input: (W_t^{(0)}, W_t^{(1)}) (discrepancy scores), B ∈ N (sample size)
Output: (p_t^left, p_t^right) (a pair of p-values for the left and right data)
1  for b ∈ [B] do
2      (X_t^{(b)})_{t=1}^{n} ~ Unif([0,1]^n)
3      Compute (W_t^{(0,b)})_{t=1}^{n-1} and (W_t^{(1,b)})_{t=1}^{n-1} using Line 1 to Line 19 of Algorithm 1 on permuted data (X_t^{(b)})_{t=1}^{n}
4  end
5  Draw θ_0, θ_1 ~ Unif(0,1)
6  p_t^left ← (1/(B+1)) (θ_0 + Σ_{b=1}^{B} (1{W_t^{(0,b)} > W_t^{(0)}} + θ_0 · 1{W_t^{(0,b)} = W_t^{(0)}}))
7  p_t^right ← (1/(B+1)) (θ_1 + Σ_{b=1}^{B} (1{W_t^{(1,b)} > W_t^{(1)}} + θ_1 · 1{W_t^{(1,b)} = W_t^{(1)}}))
8  return (p_t^left, p_t^right)

In practice, we have observed that permutation testing performs very similarly to the empirical test, but it is much slower to run. Therefore, we recommend using the empirical test and asymptotic test over the permutation test. In addition, the empirical test can be run once to obtain empirical distributions for the discrepancy scores, and those distributions can be reused across datasets; this is not possible with the permutation test, which must be run on each dataset.

A.2 Alternatives
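Lines 5 to 7 of Algorithm 3 compute a randomized (tie-broken) p-value from the B permutation replicates; a minimal Python sketch of that step (the function name and argument layout are illustrative):

```python
import random

def randomized_perm_pvalue(w_obs, w_perm, theta=None, rng=random):
    """Randomized permutation p-value, as in lines 5-7 of Algorithm 3.

    w_obs  : observed discrepancy score W_t
    w_perm : list of B scores computed on permuted data
    theta  : tie-breaking Unif(0,1) draw (drawn here if not supplied)
    """
    if theta is None:
        theta = rng.random()
    B = len(w_perm)
    greater = sum(1 for w in w_perm if w > w_obs)
    ties = sum(1 for w in w_perm if w == w_obs)
    return (theta + greater + theta * ties) / (B + 1)
```

The θ draw breaks ties uniformly so that the p-value is exactly (not just super-) uniform under the null.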
to the Kolmogorov-Smirnov statistic

Instead of using the Kolmogorov-Smirnov statistic to measure the discrepancy between the normalized ranks and the uniform distribution, we can use other discrepancy measures such as the Cramér-von Mises statistic, the Anderson-Darling statistic, the Kuiper statistic, or the Wasserstein p-distance. These statistics all give rise to valid hypothesis tests of H_{0t}, and we can use them to construct confidence sets for the changepoint using the same methods described in Section 4 using the CONCH algorithm. However, in our experiments, we have not observed a significant difference in performance between the Kolmogorov-Smirnov statistic and these other statistics.

B Proofs of main results

In this section, we present proofs of our main results.

Proof of Lemma 4.1. Let F_X^{-1}(x) = inf{z ∈ R : F_X(z) ≥ x} denote the quantile transformation of X, and write F_X(x^-) = lim_{y↑x} F_X(y). We then fix u ∈ [0,1] and decompose

P(U ≤ u) = P(U ≤ u, X < F_X^{-1}(u)) + P(U ≤ u, X = F_X^{-1}(u)) + P(U ≤ u, X > F_X^{-1}(u)).  (1)

We proceed by estimating these three quantities. Note that for any x < F_X^{-1}(u), we have F_X(x^-) ≤ F_X(x) < u. This means that if X < F_X^{-1}(u), we have

U = F_X(X^-) + V (F_X(X) − F_X(X^-)) < F_X(X^-) + V (u − F_X(X^-)) = (1 − V) F_X(X^-) + V u < (1 − V) u + V u = u.

Therefore, the first term in Equation (1) is

P(U ≤ u, X < F_X^{-1}(u)) = P(X < F_X^{-1}(u)) = F_X(F_X^{-1}(u)^-).

Next, suppose that X = F_X^{-1}(u). If F_X is continuous at X, then it is obvious that U = u. If not, then we have

U = F_X(X^-) + V (F_X(X) − F_X(X^-)) ≤ u  ⟺  V ≤ (u − F_X(X^-)) / (F_X(X) − F_X(X^-)).

Hence, we find that

P(U ≤ u, X = F_X^{-1}(u)) = P(X = F_X^{-1}(u)) · P(V ≤ (u − F_X(X^-)) / (F_X(X) − F_X(X^-)) | X = F_X^{-1}(u))
= (F_X(F_X^{-1}(u)) − F_X(F_X^{-1}(u)^-)) · P(V ≤ (u − F_X(F_X^{-1}(u)^-)) / (F_X(F_X^{-1}(u)) − F_X(F_X^{-1}(u)^-))).

Note that F_X(F_X^{-1}(u)^-) < u ≤ F_X(F_X^{-1}(u)) by definition, so since V ~ Unif(0,1) we obtain

(F_X(F_X^{-1}(u)) − F_X(F_X^{-1}(u)^-)) · (u − F_X(F_X^{-1}(u)^-)) / (F_X(F_X^{-1}(u)) − F_X(F_X^{-1}(u)^-)) = u − F_X(F_X^{-1}(u)^-).

Whether F_X is continuous at X or not, the second term in Equation (1) is given by

P(U ≤ u, X = F_X^{-1}(u)) = u − F_X(F_X^{-1}(u)^-).

Finally, for all x > F_X^{-1}(u), we have F_X(x^-) ≥ F_X(F_X^{-1}(u)) ≥ u. This means that whenever X > F_X^{-1}(u), we have

U = F_X(X^-) + V (F_X(X) − F_X(X^-)) > u.

We have shown P(U ≤ u, X > F_X^{-1}(u)) = 0, leading to the final estimate P(U ≤ u) = u, thereby showing that U ~ Unif(0,1) as desired.

Proof of Theorem 4.2. First, we demonstrate that p_r^{(t)} is always a true p-value under the null H_{0t} (conditional on the bag ⟦X_1, …, X_r⟧), for all 1 ≤ r ≤ t. We may assume without loss of generality that r ≤ t, since the case t < r ≤ n follows by a symmetric argument. Since X_1, …, X_t are exchangeable under H_{0t}, for any permutation π: [r] → [r] we have

(κ_{π(1)}^{(t)}, …, κ_{π(r)}^{(t)}) = (s_t^{(0)}(X_{π(1)}; ⟦X_1, …, X_r⟧, (X_{t+1}, …, X_n)), …, s_t^{(0)}(X_{π(r)}; ⟦X_1, …, X_r⟧, (X_{t+1}, …, X_n))) = π(κ_1^{(t)}, …
|
https://arxiv.org/abs/2505.00292v1
|
, κ_r^{(t)}).

Hence, the scores κ_1^{(t)}, …, κ_r^{(t)} are exchangeable under H_{0t}. Now, if Ũ ~ Unif([r]), then the cdf of X_Ũ conditional on the bag of observations ⟦X_1, …, X_r⟧ is

F_{X_Ũ}(x) = (1/r) Σ_{j=1}^{r} 1{κ_j^{(t)} ≤ x}.

Conditional on the bag of observations, κ_r^{(t)} is distributed like X_Ũ, so by the randomized probability integral transform and the fact that 1 − θ_r ~ Unif(0,1), we find that (conditional on ⟦X_1, …, X_r⟧)

U = (1/r) Σ_{j=1}^{r} 1{κ_j^{(t)} < κ_r^{(t)}} + ((1 − θ_r)/r) Σ_{j=1}^{r} 1{κ_j^{(t)} = κ_r^{(t)}} ~ Unif(0,1).

In particular, this means that (conditional on ⟦X_1, …, X_r⟧)

1 − U = (1/r) Σ_{j=1}^{r} 1{κ_j^{(t)} ≥ κ_r^{(t)}} − ((1 − θ_r)/r) Σ_{j=1}^{r} 1{κ_j^{(t)} = κ_r^{(t)}}
     = (1/r) Σ_{j=1}^{r} 1{κ_j^{(t)} > κ_r^{(t)}} + (θ_r/r) Σ_{j=1}^{r} 1{κ_j^{(t)} = κ_r^{(t)}}
     = p_r^{(t)} ~ Unif(0,1),

and p_r^{(t)} is a p-value for H_{0t} conditional on Γ as desired, such that {p_r^{(t)}}_{r=1}^{t} are independent (by exchangeability of the κ_r^{(t)}). Of course, this means that under the null hypothesis (conditional on Γ) the discrepancy scores ((W_t^{(i)})_{i=0}^{1})_{t=1}^{n-1} are distributed as if the data was all i.i.d. from Unif(0,1). Another application of the randomized probability integral transform (Lemma 4.1) gives that p_t^left and p_t^right are exactly uniformly distributed (conditional on Γ), and any of the methods in Section 4.2 will yield a valid confidence interval by the Neyman construction.

Proof of Theorem 4.3. The proof of this fact is completely analogous to the proof of Theorem 4.2, except that the test used is only asymptotically valid as t → ∞ and n − t → ∞ simultaneously. Letting φ_t = B_t − t B_1 (where (B_t)_{t≥0} denotes a standard Brownian motion on R), the validity of the test follows from the following theorem of Donsker (1951):

√t · sup_{z∈R} |F̂_0(z) − u(z)| = √t · KS(F̂_0, u) →_d sup_{t∈[0,1]} |φ_t|.
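The asymptotic test thus compares √t times the one-sample Kolmogorov-Smirnov distance to the supremum of a Brownian bridge. A pure-Python sketch of the KS statistic against Unif(0,1) (not the paper's implementation):

```python
def ks_uniform(xs):
    """One-sample Kolmogorov-Smirnov statistic against Unif(0,1).

    For order statistics x_(1) <= ... <= x_(n) in [0, 1], the statistic is
    max_i max(i/n - x_(i), x_(i) - (i-1)/n), the sup-distance between the
    empirical cdf and the identity.
    """
    xs = sorted(xs)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
```

√n · ks_uniform(sample) can then be compared with the asymptotic 95% critical value of sup_t |φ_t|, which is approximately 1.358.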
C Convolutional neural network architecture

We trained a convolutional neural network from scratch on the MNIST handwritten digit dataset, using the following architecture:

• Convolutional layer (1 input channel, 32 output channels, 3×3 kernel, stride 1)
• ReLU activation
• Convolutional layer (32 input channels, 64 output channels, 3×3 kernel, stride 1)
• ReLU activation
• Max pooling layer (2×2)
• Dropout (p = 0.25)
• Flattening layer
• Linear layer (output size 128)
• ReLU activation
• Dropout (p = 0.5)
• Linear layer (output size 10)

We train for one epoch using the Adam optimizer and cross-entropy loss, and achieve 98.63% accuracy on the test dataset.

D Optimal conformal scores via a Neyman-Pearson-type lemma

In this section, we prove a Neyman-Pearson lemma for conformal prediction. Suppose we observe independent X-valued data X_1, …, X_{n+1} and we wish to test the null

H_0: X_1, …, X_{n+1} are i.i.d.

against the alternative

H_1: X_1, …, X_n are i.i.d., but X_{n+1} has a different distribution.

Suppose further that we are forced to use a conformal p-value to perform this test. To elaborate, we must choose a non-conformity score function s: X → R, calculate the p-value

p_n[s] = (1/(n+1)) Σ_{i=1}^{n+1} 1{s(X_i) ≥ s(X_{n+1})},

and reject the null at level α when p_n[s] ≤ α. The natural question then is: what is the optimal (oracle) score function for this task? Assume further that Q (the distribution of X_{n+1})
and R (the distribution of X_i for any i ≤ n) are absolutely continuous with respect to Lebesgue measure and have densities q and r. Then, the answer to the above question is the likelihood ratio q(x)/r(x), as we will formalize below. It will be easier to state the result in terms of the normalized rank functional

T_n[s] = (1/n) Σ_{i=1}^{n} 1{s(X_i) ≤ s(X_{n+1})}.

Then, we reject H_0 if T_n[s] ≥ t_{n,1−α} for some threshold t_{n,1−α}.

Theorem D.1 (Conformal Neyman-Pearson lemma). Let s*(x) = q(x)/r(x) denote the likelihood ratio non-conformity score. Then, for any other s, we have the optimality result E[T_n[s*]] ≥ E[T_n[s]].

Proof. Let F_Y denote the cdf of the random variable Y and let F_Y^{-1}(y) = inf{z ∈ R : F_Y(z) ≥ y} denote the quantile transformation. Now, it is clear from the definitions that

E[T_n[s]] = (1/n) Σ_{i=1}^{n} P(s(X_i) ≤ s(X_{n+1})) = P(s(X_1) ≤ s(X_{n+1})).

By the probability integral transform (a special case of Lemma 4.1), we obtain

P(s(X_1) ≤ s(X_{n+1})) = P(F_{s(X_{n+1})}(s(X_1)) ≤ U),

for an independent uniform U ~ Unif(0,1). Let P(F_{s(X_{n+1})}(s(X_1)) ≤ u | U = u) be a regular conditional probability, which exists and is unique for Lebesgue-almost every u ∈ (0,1) by the disintegration of measure. Hence, the above expression equals

E_U[P(F_{s(X_{n+1})}(s(X_1)) ≤ u | U = u)] = ∫_0^1 P(s(X_1) ≤ F_{s(X_{n+1})}^{-1}(u)) du.

Fix u ∈ (0,1) and suppose we want to test H̃_0: X_1 ~ Q against H̃_1: X_1 ~ R, rejecting H̃_0 whenever s(X_1) ≤ F_{s(X_{n+1})}^{-1}(u). Under H̃_0, the cdf of s(X_1) is exactly F_{s(X_{n+1})}, so the probability integral transform (Lemma 4.1) gives that the Type I error of our test is u:

P_{H̃_0}(s(X_1) ≤ F_{s(X_{n+1})}^{-1}(u)) = u.

The power of our test is P_{H̃_1}(s(X_1) ≤ F_{s(X_{n+1})}^{-1}(u)). By the usual Neyman-Pearson lemma, the power is maximized by the choice s = s*. Integrating the bound from 0 to 1 gives the desired result.

Note that the proof of Theorem D.1 also shows a stronger result; define the conditional power

β_s(u) = E[T_n[s] | s(X_{n+1}) = F_{s(X_{n+1})}^{-1}(u)],

which exists and is unique for Lebesgue-almost every u ∈ (0,1) by the disintegration of measure. Then, we have that (for Lebesgue-almost every u ∈ (0,1)) β_{s*}(u) ≥ β_s(u). In this sense, the likelihood ratio nonconformity score yields the uniformly most powerful conformal test.

While the above reasoning shows that conformal p-values are optimized by the likelihood ratio score, it turns out that conformal e-values are optimized by the same score as well. Indeed, in order to test exchangeability of X_1, …, X_{n+1} against the alternative that their joint distribution is given by R^n × Q, one can show (Ramdas and Wang, 2024, Section 6.7.7) that the log-optimal e-variable is given by

[q(X_{n+1}) Π_{i=1}^{n} r(X_i)] / [(1/(n+1)) Σ_{i=1}^{n+1} q(X_i) Π_{j≠i} r(X_j)].

Dividing numerator and denominator by Π_{i=1}^{n+1} r(X_i), the above e-variable becomes

[q(X_{n+1})/r(X_{n+1})] / [(1/(n+1)) Σ_{i=1}^{n+1} q(X_i)/r(X_i)].

In fact, every e-variable for testing exchangeability must be of the form

s(X) / ((1/(n+1)!) Σ_π s(X_π)),

where the sum is taken over all (n+1)! permutations π, X = (X_1, …, X_{n+1}), and X_π = (X_{π(1)}, …, X_{π(n+1)}), for a positive score function s; thus, we see once again that the log-optimal e-variable is obtained using the likelihood ratio score s(X) = q(X_{n+1})/r(X_{n+1}).

The above results motivate the use of likelihood-ratio-type score functions in our experiments; indeed, even when the likelihood ratio is not known exactly, one can hope to learn it and approximate the optimal tests.
The iterated Dirichlet process and applications to Bayesian inference

Evan Donald, University of Central Florida, ev446807@ucf.edu
Jason Swanson, University of Central Florida, jason.swanson@ucf.edu

Abstract

Consider an i.i.d. sequence of random variables, taking values in some space S, whose underlying distribution is unknown. In problems of Bayesian inference, one models this unknown distribution as a random measure, and the law of this random measure is the prior. When S = {0,1}, a commonly used prior is the uniform distribution on [0,1], or more generally, the beta distribution. When S is finite, the analogous choice is the Dirichlet distribution. For a general space S, we are led naturally to the Dirichlet process (see Ferguson [5]). Here, we consider an array of random variables, and in so doing are led to what we call the iterated Dirichlet process (IDP). We define the IDP and then show how to compute the posterior distribution, given a finite set of observations, using the method of sequential imputation. Ordinarily, this method requires the existence of certain joint density functions, which the IDP lacks. We therefore present a new, more general proof of the validity of sequential imputation, and show that the hypotheses of our proof are satisfied by the IDP.

AMS subject classifications: Primary 62G05; secondary 60G57, 62D10, 62M20
Keywords and phrases: exchangeability, Dirichlet processes, Bayesian inference, importance sampling, sequential imputation

1 Introduction

Let S be a complete and separable metric space and {η_i : i ∈ N} an exchangeable sequence of S-valued random variables. By de Finetti's theorem, there exists a random measure λ on S such that, given λ, the sequence η_1, η_2, … is conditionally i.i.d. with distribution λ. To compute the conditional distribution of η_{N+1}, η_{N+2}, … given η_1, …, η_N, we must, among other things, assign a prior distribution to λ. The random measure λ is an element of M_1 = M_1(S), the space of probability measures on S.
A prior distribution for λ is therefore an element of M_1(M_1(S)). In the case S = {0,1}, we can identify a measure ν ∈ M_1(S) with its projected value ν({1}) ∈ [0,1]. In this way, we can identify M_1(M_1(S)) with M_1([0,1]). Hence, a prior distribution for λ is simply a probability measure on [0,1]. A common choice for a non-informative prior in this case is the uniform distribution, or its generalization, the beta distribution.

In the case S = {0, …, d}, we can identify a measure ν ∈ M_1(S) with its projected values (ν({0}), …, ν({d})) ∈ Δ_d, where Δ_d is the standard d-simplex,

Δ_d = {t = (t_0, …, t_d) ∈ R^{d+1} : t_j ≥ 0 and Σ_{j=0}^{d} t_j = 1}.  (1.1)

In this way, a prior distribution for λ is simply a probability measure on Δ_d. A common choice in this case is the Dirichlet distribution.

In the case of a general S, a common choice for a non-informative, non-parametric prior is to let λ be a Dirichlet process on S. The Dirichlet process was first introduced by Thomas S. Ferguson in [5]. Its definition and relevant properties are formally summarized in Section 3. A Dirichlet process is almost surely a discrete measure. It has two parameters, κ and ρ. The probability measure ρ ∈ M_1(S)
https://arxiv.org/abs/2505.00451v1
is its mean measure, or base measure, satisfying E[λ(A)] = ρ(A). The number κ ∈ (0,∞) is its concentration parameter. The smaller κ is, the more likely λ is to be concentrated on only a few points in S. If λ is a Dirichlet process on S with parameters κ and ρ, then its law, which is an element of M_1(M_1(S)), is denoted by D(κρ).

Now let {ξ_ij : i, j ∈ N} be an array of S-valued random variables. In [4], such arrays were intended to model a situation in which there are several agents, all from the same population, each undertaking a sequence of actions. The space S represents the set of possible actions and ξ_ij represents the jth action of the ith agent. A simple example presented in [4] was the following. Imagine a pressed penny machine, like those found in museums or tourist attractions. For a fee, the machine presses a penny into a commemorative souvenir. Now imagine the machine is broken, so that it mangles all the pennies we feed it. Each pressed penny it creates is mangled in its own way, and has its own probability of landing on heads when flipped. The machine, though, might have certain tendencies. For instance, it might tend to produce pennies that are biased toward heads. In this situation, the agents are the pennies and the actions are the heads and tails that they produce. In this example, then, we have S = {0,1} and the random variable ξ_ij ∈ S denotes the result of the jth flip of the ith penny created by the machine.

We assume that ξ is row exchangeable, which means that (i) each row of the array, ξ_i = {ξ_ij : j ∈ N}, is an exchangeable sequence of S-valued random variables, and (ii) the sequence of rows, {ξ_i : i ∈ N}, is an exchangeable sequence of S^∞-valued random variables. It is easy to see that ξ is row exchangeable if and only if {ξ_{σ(i),τ_i(j)}} and {ξ_ij} have the same finite-dimensional distributions whenever σ and τ_i are (finite) permutations of N. As a matter of notation, we will sometimes write ξ(i) and ξ(i, j) for ξ_i and ξ_ij, respectively.
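The role of the concentration parameter can be seen concretely in Sethuraman's stick-breaking construction of a Dirichlet process (summarized in Section 3): the random measure puts weight w_k = β_k Π_{j<k}(1 − β_j) on the kth atom, where β_k are i.i.d. Beta(1, κ) and the atoms are i.i.d. draws from the base measure. A truncated sampler for the weights, as a sketch (truncation level and parameter values are illustrative):

```python
import random

def stick_breaking_weights(kappa, num_atoms, rng):
    """Truncated stick-breaking weights for a Dirichlet process.

    beta_k ~ Beta(1, kappa); w_k = beta_k * prod_{j<k} (1 - beta_j).
    Small kappa breaks off large sticks early, so the measure
    concentrates on only a few atoms.
    """
    weights, remaining = [], 1.0
    for _ in range(num_atoms):
        beta = rng.betavariate(1.0, kappa)
        weights.append(beta * remaining)
        remaining *= 1.0 - beta
    return weights

rng = random.Random(1)
w = stick_breaking_weights(kappa=2.0, num_atoms=200, rng=rng)
```

Pairing each weight with an independent draw from the base measure ρ gives a (truncated) sample of the random measure λ; the leftover mass after truncation shrinks geometrically.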
According to a de Finetti theorem for row exchangeable arrays (see [4, Theorem 3.2]), there exists a sequence of random measures µ_1, µ_2, … on S and a random measure ϖ on M_1(S) such that

• given ϖ, the sequence µ_1, µ_2, … is i.i.d. with distribution ϖ, and
• for each i, given µ_i, the sequence ξ_{i1}, ξ_{i2}, … is i.i.d. with distribution µ_i.

The random measures µ_i are called the row distributions of ξ, and ϖ is called the row distribution generator. In the present work, we are concerned with performing Bayesian inference on row exchangeable arrays. The first step toward this is to assign a prior distribution to ξ. According to the above, this is equivalent to assigning a prior distribution to ϖ. Following Ferguson, we would like to define ϖ to be a Dirichlet process. Since ϖ is a random measure on M_1(S), the law of ϖ should have the form D(κρ), where κ > 0 and ρ is a nonrandom measure on M_1(S). If B ⊆ M_1(S) is measurable, then

P(µ_i ∈ B) = E[P(µ_i ∈ B | ϖ)] = E[ϖ(B)] = ρ(B).

In other words, ρ is the prior distribution for µ_i, which is a random measure on S. Again turning to the Dirichlet process,
we let ρ = D(εϱ), where ε > 0 and ϱ is a nonrandom measure on S. As above, if A ⊆ S is measurable, then

P(ξ_ij ∈ A) = E[P(ξ_ij ∈ A | µ_i)] = E[µ_i(A)] = ϱ(A),

so that ϱ is the prior distribution for ξ_ij. This means that ϖ has distribution D(κD(εϱ)). We call a random measure with a distribution of this form an iterated Dirichlet process (or IDP) on S with parameters κ, ε, and ϱ. We call κ and ε the column and row concentrations, respectively, and we call ϱ the base measure of ϖ. The effect of κ and ε on the behavior of ϖ is discussed in Section 3.5.

Having assigned a prior, we turn our attention to inference. We adopt the notation used in [4]: X_{in} = (ξ_{i1}, …, ξ_{in}) denotes the ith row truncated at length n, X_{mn} denotes the m × n array with rows X_{1n}, …, X_{mn}, and µ_m = (µ_1, …, µ_m).

Our objective is to make inferences about the future of the process ξ based on past observations. That is, if M, N, N′ ∈ N and N < N′, then we wish to compute

L(ξ_ij : i ≤ M, N < j ≤ N′ | X_{MN}).  (1.2)

Here, the notation L(X | Y) denotes the regular conditional distribution of X given Y. The exact computation of (1.2) is intractable. For example, consider the simplest nontrivial case, M = 2 and S = {0,1}. When S = {0,1}, an IDP on S can be regarded as simply a Dirichlet process on [0,1] whose base measure is a beta distribution. Such a process is considered in [3]. There, the authors provide us with a formula for

P(ξ_{2,N+1} = 1 | ξ_ij = k_ij, i = 1, 2, j = 1, …, N).

To state this formula, let ϱ_k = ϱ({k}), α_k = εϱ_k, and y_i = Σ_{j=1}^{N} k_ij. Using the notation t^{(n)} = Γ(t + n)/Γ(t) = Π_{k=0}^{n-1}(t + k), define

r = κ · [ε^{(2N)} / (ε^{(N)})^2] · [α_1^{(y_1)} α_1^{(y_2)} / α_1^{(y_1+y_2)}] · [α_0^{(N−y_1)} α_0^{(N−y_2)} / α_0^{(2N−y_1−y_2)}].

We then have

P(ξ_{2,N+1} = 1 | ξ_ij = k_ij, i = 1, 2, j = 1, …, N)
= [r/(1+r)] · (ε/(ε+N) · ϱ_1 + N/(ε+N) · y_2/N) + [1/(1+r)] · (ε/(ε+2N) · ϱ_1 + 2N/(ε+2N) · (y_1+y_2)/(2N)).

The case M = 3 and S = {0,1} is also outlined in [3], and involves roughly twice as many computations. To deal with larger values of M, as well as more general spaces S, we need to adopt a different approach to the computation of (1.2). In [4], we showed that (1.2) can be computed in terms of the distributions

L(µ_m | X_{mN}, …
, X_{MN}, µ_{m−1}),  (1.3)

where 1 ≤ m ≤ M (see Section 2.4). But the same issue that makes (1.2) intractable also hinders the computation of (1.3). Namely, in both cases, we are trying to condition on more than one row of the array ξ. If, instead of (1.3), we wanted to compute

L(µ_m | X_{mN}, µ_{m−1}),  (1.4)

for 1 ≤ m ≤ M, then this can be done (see Section 5.3). The problem, then, is to find a way to use (1.4) to determine (1.2). This was done by Liu in [12] for the special case S = {0,1}, using a method known as sequential imputation. The proofs in [12] cite [11], in which Kong, Liu, and Wong first developed the method of sequential imputation. In [11, Section 2], the authors give their core argument, demonstrating the validity of the method. This argument evidently presupposes that the underlying random variables have densities with respect to some product measure. In Proposition 4.5, we show that this is not
true for the IDP, not even in the case S = {0,1} that was treated by Liu in [12]. In other words, the work in [11] is not sufficient to justify its use in [12]. In Theorem 4.7, we provide a new proof that justifies sequential imputation. Our proof avoids the supposition of a full joint density. In doing so, it shows that sequential imputation can be validly applied to an IDP on any complete and separable metric space. This includes the special case S = {0,1} treated in [12] as well as the general case treated here. When we apply sequential imputation to the IDP, we arrive at our main result, Theorem 5.2. Combined with [4, Theorem 3.4], which is presented below as Theorem 2.4, this allows us to compute (1.2).

The outline of this paper is as follows. In Section 2, we present some basic background results and establish notation. We also include the main results from [4] on row exchangeable arrays. In Section 3, we give the necessary background on Dirichlet processes, including Ferguson's definition, the Sethuraman stick-breaking construction, mixtures of Dirichlet processes, and the formulas of inference for both ordinary observations as well as noisy observations. In Section 3.5, we take a closer look at the iterated Dirichlet process, which we defined above. Namely, we discuss the relationship between the parameters κ, ε, and ϱ and the behavior of ϖ. It should be noted that the iterated Dirichlet process is not the same thing as the hierarchical Dirichlet process, a process that appears elsewhere in the literature (see, for example, [15, 16]). The latter is a mixture of Dirichlet processes, whereas the IDP is not. For a more detailed explanation of the differences between the two, see Remark 3.6. In Section 4, as described above, we discuss the method of sequential imputation. We then apply this in Section 5 to the iterated Dirichlet process, obtaining our main results, which are Theorem 5.2 and Corollary 5.3.
Finally, in Section 6, we present several hypothetical examples to illustrate the use of our main results.

2 Background and notation

2.1 Random measures and kernels

If (X, U) is a topological space, then B(X) = σ(U) denotes the Borel σ-algebra on X. We may sometimes denote B(X) by BX. We use Rd to denote the Borel σ-algebra on R^d with the Euclidean topology. If (S, S) is a measurable space and B ∈ S, then S|B = {A ∩ B : A ∈ S} = {A ∈ S : A ⊆ B}. We let R∗ = [−∞, ∞] denote the extended real line, equipped with the topology generated by the metric ρ(x, y) = |tan⁻¹(x) − tan⁻¹(y)|. We use R∗ to denote B(R∗). Note that R∗ = {A ⊆ R∗ : A ∩ R ∈ R}.

From this point forward, S denotes a complete and separable metric space with metric d, and S = B(S). Let M = M(S) denote the space of σ-finite measures on S. For B ∈ S, we define the projection πB : M → R∗ by πB(ν) = ν(B). If T is a set and µ : T → M, then we adopt the notation µ(t, B) = (µ(t))(B). Let M = M(S) be the σ-algebra on M defined by M = σ({πB : B ∈ S}). Note that M is generated by the collection of all sets of the form {ν ∈ M : ν(B) ∈ A}, where B ∈ S and A ∈ B([0, ∞]). Given a probability space (Ω, F, P),
arandom measure on Sis a function µ: Ω→Mthat is ( F,M)-measurable. LetM1=M1(S) be the set of all probability measures on S. Note that M1={ν∈M: ν(S)∈ {1}}. Hence, M1∈ M and we may define M1byM1=M|M1. Arandom probability measure on Sis a function µ: Ω→M1that is ( F,M1)-measurable, or equivalently, a random measure taking values in M1. We equip M1with the Prohorov metric, π, which metrizes weak convergence. Since S is complete and separable, M1is complete and separable under π. It can be shown that M1=B(M1). (See, for example, [6, Theorem 2.3].) Unless otherwise specified, we will equip S∞with the metric d∞defined by d∞(x, y) =∞X n=1d(xn, yn)∧1 2n. Note that x→yinS∞if and only if xn→yninSfor all n. In particular, the metric d∞ induces the product topology on S∞. Consequently, S∞=B(S∞) and S∞is separable. In fact, ( S∞,d∞) is a complete and separable metric space. Let ( T,T) be a measurable space. A function µ:T→Mis called a kernel from T toSifµ(·, B) is (T,R∗)-measurable for all B∈ S. By [7, Lemma 1.37], given a function µ:T→Mand a π-system Csuch that S=σ(C), the following are equivalent: (i)µis a kernel. (ii)µ(·, B) is (T,R∗)-measurable for all B∈ C. 5 (iii)µis (T,M)-measurable. Aprobability kernel is a kernel taking values in M1. Note that a random measure on Sis just a kernel from Ω to S, and a random probability measure on Sis just a probability kernel from Ω to S. It can be shown that the function µ7→µ∞mapping M1toM1(S∞) is measurable. Therefore, if µis a random probability measure on S, then µ∞is a random probability measure on S∞. In fact, the random variable µ∞isσ(µ)-measurable. LetS′be another complete and separable metric space. Let γbe a probability kernel from TtoSandγ′a probability kernel from T×StoS′. (We allow the possibility that T is a singleton, in which case γis a probability measure on Sandγ′is a probability kernel from StoS′.) 
We write γγ′to denote the probability kernel from TtoS×S′characterized by (γγ′)(y, A×A′) =Z Aγ′(y, z, A′)γ(y, dz), where y∈T,z∈S,A∈ S, and A′∈ S′. In particular, this means Z S×S′f(z, z′) (γγ′)(y, dz dz′) =Z SZ S′f(z, z′)γ′(y, z, dz′)γ(y, dz). As shorthand for this equation, we write (γγ′)(y, dz dz′) =γ′(y, z, dz′)γ(y, dz). IfTis a singleton, then γγ′is a probability measure and ( γγ′)(dz dz′) =γ′(z, dz′)γ(dz). 2.2 Regular conditional distributions IfXis an S-valued random variable and µis a probability measure on S, then we write X∼µ, as usual, to mean that Xhas distribution µ. We also use L(X) to denote the distribution of X, so that X∼µandL(X) =µare synonymous. If νis a finite, nonzero measure on S, then we write L(X)∝νto mean that L(X) =ν/ν(S). We will sometimes combine this with differential notation. For example, dL(X)∝f(x)ν(dx) means that for some constant c∈(0,∞), the function cfis the Radon-Nikodym derivative of L(X) with respect to ν. A naked differential indicates Lebesgue measure, so that in the case S=R, the notation dL(X) =f(x)dxmeans that fis the density of Xwith respect to Lebesgue measure. If L(X) =µandB∈ S, then we use L(X;B) to denote µ(B). In this way, we may sometimes write L(X;dx) instead
of dL(X).

Let G ⊆ F be a σ-algebra. A regular conditional distribution for X given G is a random probability measure µ on S such that P(X ∈ A | G) = µ(A) a.s. for all A ∈ S. Since S is a complete and separable metric space, regular conditional distributions exist. In fact, such a µ exists whenever X takes values in a standard Borel space. Moreover, µ is unique in the sense that if µ and µ̃ are two such random measures, then µ = µ̃ a.s. That is, with probability one, µ(B) = µ̃(B) for all B ∈ S. The random measure µ may be chosen so that it is G-measurable. We use the notation L(X | G) to refer to the regular conditional distribution of X given G. If µ is a random probability measure, the notation X | G ∼ µ means L(X | G) = µ a.s. If ν is a random measure that is a.s. finite and nonzero, then L(X | G) ∝ ν means X | G ∼ ν/ν(S). As with unconditional distributions, we will combine this with differential and semicolon notation, so that, for instance, L(X | G; A) = P(X ∈ A | G) a.s.

Let (T, T) be a measurable space and Y a T-valued random variable. Then there exists a probability kernel µ from T to S such that X | σ(Y) ∼ µ(Y). Moreover, µ is unique in the sense that if µ and µ̃ are two such probability kernels, then µ = µ̃, L(Y)-a.e. In this case, we will typically omit the σ and write, for instance, that L(X | Y) = µ(Y). Note that for fixed y ∈ T, the probability measure µ(y, ·) is not uniquely determined, since the kernel µ is only unique L(Y)-a.e. Nonetheless, if a particular µ has been fixed, we will use the notation L(X | Y = y) to denote the probability measure µ(y, ·).

2.3 Exchangeability

A sequence ξ = {ξi : i ∈ N} of S-valued random variables is conditionally i.i.d. if there exist a σ-algebra G ⊆ F and a random probability measure ν on S such that P(ξ ∈ A | G) = ν^∞(A) a.s., for all A ∈ S^∞. It can be shown that a sequence ξ is conditionally i.i.d. if and only if there is a random probability measure µ such that

P(ξ ∈ A | µ) = µ^∞(A) a.s., (2.1)

and in this case, µ can be chosen so that µ ∈ G and µ = ν a.s. (see [4, Lemma 2.1]).
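As a toy illustration of the conditionally i.i.d. structure (not from the paper; it assumes S = {0, 1} and a uniformly distributed parameter), the sketch below first draws a directing measure µ and then draws i.i.d. samples from it. The empirical frequency then recovers the random parameter of µ, rather than the marginal mean 1/2:

```python
import random

rng = random.Random(1)

def exchangeable_sequence(n, rng):
    """Generate a conditionally i.i.d. {0,1}-valued sequence:
    first draw the directing measure mu = Bernoulli(p) with p ~ Uniform(0,1),
    then draw xi_1, ..., xi_n i.i.d. from mu."""
    p = rng.random()  # the random probability measure, identified with its parameter
    xi = [1 if rng.random() < p else 0 for _ in range(n)]
    return p, xi

p, xi = exchangeable_sequence(100_000, rng)
emp = sum(xi) / len(xi)  # the empirical measure mu_n evaluated at {1}
# mu_n({1}) tracks the conditional parameter p, not the marginal mean 1/2.
print(abs(emp - p))
```

The resulting sequence is exchangeable, and the convergence of the empirical frequency to p is a special case of the convergence µn → µ discussed below.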
A sequence ξ = {ξi : i ∈ N} of S-valued random variables is exchangeable if (ξk1, . . . , ξkm) =d (ξ1, . . . , ξm) whenever k1, . . . , km are distinct elements of N. A function σ : N → N is called a (finite) permutation if σ is bijective and there exists n0 ∈ N such that σ(n) = n for all n ≥ n0. The sequence ξ is exchangeable if and only if {ξσ(i)} and {ξi} have the same finite-dimensional distributions whenever σ is a permutation. By [8, Theorem 1.1], we have that ξ is conditionally i.i.d. if and only if ξ is exchangeable. In that case, the random measure µ in (2.1) is a.s. unique, σ(ξ)-measurable, and satisfies µn(B) → µ(B) a.s., for all B ∈ S, where

µn = n^{−1} \sum_{i=1}^n δ_{ξi}

are the empirical measures (see [8, Proposition 1.4]). In fact, we have that µn → µ a.s. in M1 under the Prohorov metric (see [4, Proposition 2.4]). In [4, Proposition 2.5], the following conditional version of the law of large numbers is presented.

Proposition 2.1. Let (T, T) be a measurable space. Suppose Y is a T-valued random variable, ξ is conditionally i.i.d. given G, and f : T × S → R is a measurable function such that E|f(Y, ξ1)| < ∞. If Y ∈ G and Z is any version of E[f(Y, ξ1) | G], then P lim n→∞1 nnX
i=1f(Y, ξi) =Z G! = 1 a.s.

In particular,

n^{−1} \sum_{i=1}^n f(Y, ξi) → E[f(Y, ξ1) | G] a.s. (2.2)

2.4 Bayesian inference for row exchangeable arrays

The following four theorems come from [4]. The first is a de Finetti theorem for row exchangeability ([4, Theorem 3.2]). The final three ([4, Theorems 3.3, 3.4, 3.5]) provide needed tools for performing Bayesian inference on row exchangeable arrays.

Theorem 2.2. Suppose ξ is a row exchangeable, infinite array of S-valued random variables. Then there exist an a.s. unique random probability measure ϖ on M1 and an a.s. unique sequence µ = {µi : i ∈ N} of random probability measures on S such that µ | ϖ ∼ ϖ^∞ and ξi | µi ∼ µi^∞ for all i ∈ N.

Theorem 2.3. If ξ is row exchangeable, then L(Xmn | ϖ, µm) = L(Xmn | µm) = \prod_{i=1}^m µi^n a.s., for all m, n ∈ N.

Theorem 2.4. Let ξ be row exchangeable. Fix M ∈ N. For each i ∈ {1, . . . , M}, let Ni, Oi ∈ N with Ni < Oi, and let Aij ∈ S for i ≤ M and Ni < j ≤ Oi. Let G be a sub-σ-algebra of σ(X1N1, X2N2, . . . , XMNM). Then

P( \bigcap_{i=1}^M \bigcap_{j=Ni+1}^{Oi} {ξij ∈ Aij} | G ) = E[ \prod_{i=1}^M \prod_{j=Ni+1}^{Oi} µi(Aij) | G ].

Theorem 2.5. Let ξ be row exchangeable. Fix m, M, N ∈ N with m < M. Then XmN and {(µj, XjN)}_{j=m+1}^M are conditionally independent given µm.

According to Theorem 2.4 with Ni = N, Oi = N′, and G = σ(XMN), we can compute the posterior distribution (1.2), provided we can compute L(µM | XMN). Note that

L(µM | XMN; dν) = L(µ2, . . . , µM | XMN, µ1 = ν1; dν2 · · · dνM) L(µ1 | XMN; dν1).

Iterating this, we see that we can determine L(µM | XMN) if we know

L(µm | XMN, µm−1), 1 ≤ m ≤ M. (2.3)

Theorem 2.5 shows that in (2.3), the first m − 1 rows of XMN can be omitted. Hence, the conditional distribution (1.2) is entirely determined by the conditional distributions given in (1.3).

3 Overview of the Dirichlet process

3.1 Definition

The gamma distribution on R is defined by Gamma(α, λ)(dx) ∝ 1_{(0,∞)}(x) x^{α−1} e^{−λx} dx for α > 0 and λ > 0, and Gamma(0, λ) = δ0 for λ > 0. The beta distribution on R is defined by Beta(α, β)(dx) ∝ 1_{(0,1)}(x) x^{α−1} (1 − x)^{β−1} dx for α > 0 and β > 0, Beta(0, β) = δ0 for β > 0, and Beta(α, 0) = δ1 for α > 0. Let α = (α0, . . .
, αd) ∈ [0, ∞)^{d+1} with αj > 0 for some j. Let Z0, . . . , Zd be independent with Zj ∼ Gamma(αj, 1), and define Yj = Zj / (\sum_{i=0}^d Zi). Then the Dirichlet distribution with parameter α is defined to be the distribution of (Y0, . . . , Yd). We denote it by Dir(α). It is a probability measure on the simplex ∆d defined in (1.1). If αj = 0, then Yj = 0 a.s. If αj > 0 for all j, then Dir(α) is the probability measure on ∆d proportional to t0^{α0−1} · · · td^{αd−1} dt. Here, dt denotes the surface measure on ∆d. For fixed j, we have Yj ∼ Beta(αj, \sum_{i≠j} αi).

Given a nonzero, finite measure α on S, we define a Dirichlet process on S with parameter α to be a random probability measure λ on S that satisfies

L(λ(B0), . . . , λ(Bd)) = Dir(α(B0), . . . , α(Bd)), (3.1)

whenever {B0, . . . , Bd} ⊆ S is a partition of S. Let D(α) denote the law of a Dirichlet process with parameter α. Since a Dirichlet process is an M1-valued random variable, it follows that D(α) is a probability measure on (M1, M1). That is,
D(α)∈M1(M1). IfB∈ M 1, then we writeD(α, B) for (D(α))(B). Given αas above, let κ=α(S)>0 and ρ=κ−1α, so that ρ∈M1. We will typically writeD(α) =D(κρ), and think of the law of a Dirichlet process as being determined by two parameters, a positive number κ∈(0,∞) and a probability measure ρ∈M1. The measure ρ is called the base measure , orbase distribution , and the number κis called the concentration parameter . The following theorem shows that ρis the mean measure of λand can be found in [5, Theorem 4.3]. Theorem 3.1. Letφ:S→Rbe measurable. If φ∈L1(ρ), then EZ φ dλ =Z φ dρ. (3.2) Taking φ= 1Agives the special case, E[λ(A)] =ρ(A)for all A∈ S. Written analytically, (3.2) takes the form Z M1Z Sφ(x)ν(dx)D(κρ, dν ) =Z Sφ(x)ρ(dx). Taking φ= 1A, we have Z M1ν(A)D(κρ, dν ) =ρ(A), (3.3) for all A∈ S. 9 3.2 The stick-breaking construction The Dirichlet process can be built explicitly through the Sethuraman stick-breaking construction, which we outline below. See [14] for further details. LetV={Vk}∞ k=1be i.i.d. (0 ,1)-valued random variables with Vk∼Beta(1 , κ). (If κ= 1, then each Vkis uniformly distributed. The larger κis, the smaller the Vk’s will tend to be; and conversely, the smaller κis, the larger the Vk’s will tend to be.) Note that by the second Borel-Cantelli lemma,P kVk=∞a.s. Choose A∈ F such that P(A) = 1 andP kVk(ω) =∞for all ω∈A. We define a sequence R={Rk}∞ k=1of random variables as follows. On Ac, letR1= 1 andRk= 0 for k >1. On A, recursively define R1=V1and Rk+1= 1−kX j=1Rj! Vk+1. TheRk’s can be thought of as the lengths of pieces of a stick of length 1. Initially, we break off a piece of length V1. After we have broken off npieces, we take what remains and break off a proportion of it, where Vk+1is that proportion. On A, the Rk’s can be expressed explicitly as Rk=Vkk−1Y j=1(1−Vj). Note thatP∞ k=1Rk= 1. This is trivially the case on Ac. To see that it also holds on A, note that the above two displays give 1−kX j=1Rj=Rk+1 Vk+1=kY j=1(1−Vj). 
Using results about infinite products, this shows that 0 <P∞ j=1Rj≤1, andP∞ j=1Rj= 1 if and only ifP∞ j=1Vj=∞. Let{Uk}∞ k=1be i.i.d. S-valued random variables with Uk∼ρ, and define λ=∞X k=1RkδUk, so that λ: Ω→M1is a random probability measure that satisfies (3.1). Note that if κis small, then the maximum of the Rk’s will tend to be large, meaning thatλis likely to be concentrated on only a few points in S. Conversely, if κis large, then theRk’s will be small, so that the mass of λis spread out over many points in S. It can be shown, using the stick-breaking construction, that the function ( κ, ρ)7→ D(κρ) is continuous and therefore a kernel from (0 ,∞)×M1toM1. Moreover, if ρis not a point mass, then D(κρ) has no discrete component. That is, D(κρ,{ν}) = 0 for all ν∈M1. 10 3.3 Samples from a Dirichlet process A sequence of samples from a Dirichlet process λ∼ D(κρ) is a sequence η={ηi}∞ i=1that satisfies η|λ∼λ∞. We adopt the notation
ηn= (η1, . . . , η n) and xn= (x1, . . . , x n)∈Sn. Note that for fixed i, we have P(ηi∈A) =E[P(ηi∈A|λ)] =E[λ(A)] =ρ(A). (3.4) Thus, ρrepresents our prior distribution on the individual ηi’s, in the case that we have not observed any of their values. The posterior distribution is given in the following theorem, which can be found in [5, Theorem 3.1]. Theorem 3.2. With notation as above, we have L(λ|ηn) =D α+nX i=1δηi ,and (3.5) L(ηn+1|ηn) =κ κ+nρ+n κ+nbρn, (3.6) wherebρn=n−1Pn i=1δηiis the empirical distribution of ηn. The next proposition expresses (3.5) in a purely analytic form. Proposition 3.3. Letαbe a nonzero, finite measure on S. Then Z Bνn(A)D(α, dν ) =Z M1Z AD α+nX i=1δxi, B νn(dxn)D(α, dν ), (3.7) for every A∈ Snand every B∈ M 1. Proof. Letλandηbe as in Theorem 3.2. Then P(λ∈B,ηn∈A) =E[1B(λ)P(ηn∈A|λ)] =E[1B(λ)λn(A)] =Z Bνn(A)D(α, dν ). On the other hand, by (3.5), we have P(λ∈B,ηn∈A) =E[1A(ηn)P(λ∈B|ηn)] =E 1A(ηn)D α+nX i=1δηi, B =E E 1A(ηn)D α+nX i=1δηi, B λ =EZ AD α+nX i=1δxi, B λn(dxn) =Z M1Z AD α+nX i=1δxi, B νn(dxn)D(α, dν ), which proves (3.7). 11 We can also use (3.6) to obtain a recursive formula for the distribution of ηn. Let ρn=L(ηn). Suppose f:Sn+1→Ris bounded and measurable. Then Z Sn+1f dρ n+1=E[f(ηn+1)] =E[E[f(ηn+1)|ηn]] =EZ Sf(ηn, xn+1)L(ηn+1|ηn;dxn+1) . Using (3.6), this gives Z Sn+1f dρ n+1=κ κ+nEZ Sf(ηn, xn+1)ρ(dxn+1) +1 κ+nnX i=1E[f(ηn, ηi)]. Hence, Z Sn+1f dρ n+1=κ κ+nZ SnZ Sf(xn+1)ρ(dxn+1)ρn(dxn) +1 κ+nnX i=1Z Snf(xn, xi)ρn(dxn).(3.8) 3.4 Mixtures and noisy observations Ifµis a kernel from TtoSandνis a measure on ( T,T), then A7→R Tµ(t, A)ν(dt) is a measure on S. We denote this measure byR µ dν. Ifµis a probability kernel and νis a probability measure, thenR µ dν is a probability measure. In particular, if µis a random probability measure on S, then E[µ] =R µ dP is a probability measure on S. Letαbe a nonzero, finite random measure on S. That is, αis a random measure on Ssuch that α(S)∈(0,∞) a.s. 
Let λbe a random probability measure on Ssuch that λ|α∼ D(α). (This notation is meaningful, since α7→ D(α) is a probability kernel.) In this case we call λamixture of Dirichlet processes on Swith mixing distribution L(α). We also letκ=α(S) and ρ=α/α(S), so that κandρare random variables taking values in (0 ,∞) andM1, respectively. Letλbe a mixture of Dirichlet processes as above. A sequence of samples from λis a sequence η={ηi}∞ i=1that satisfies η|λ, α∼λ∞. The following proposition generalizes (3.5) to mixtures. Proposition 3.4. With notation as above, we have L(λ|ηn, α) =D α+nX i=1δηi , for any n∈N. 12 Proof. LetA∈ SnandB, C∈ M 1. Using Proposition 3.3, we have P(λ∈B,ηn∈A, α∈C) =E[1C(α)1B(λ)P(ηn∈A|λ, α)] =E[1C(α)1B(λ)λn(A)] =E[1C(α)E[1B(λ)λn(A)|α]] =E 1C(α)Z Bνn(A)D(α, dν ) =E 1C(α)Z M1Z AD α+nX i=1δxi, B νn(dxn)D(α, dν ) . On the other hand, E D α+nX i=1δηi, B 1A(ηn) 1C(α) =E 1C(α)E 1A(ηn)D α+nX i=1δηi, B λ, α =E 1C(α)Z AD α+nX i=1δxi, B λn(dxn) =E 1C(α)EZ AD α+nX i=1δxi, B λn(dxn) α =E 1C(α)Z M1Z AD α+nX i=1δxi, B νn(dxn)D(α, dν
) . Hence, P(λ∈B,ηn∈A, α∈C) =E D α+nX i=1δηi, B 1A(ηn) 1C(α) , which implies P(λ∈B|ηn, α) =D(α+Pn i=1δηi, B). Now let ( T,T) be a measurable space. Fix n∈Nand let Ybe a T-valued random variable such that Yand ( λ, α) are conditionally independent given ηn. This holds, for example, if Yis a function of ηnandW, where Wis some noise that is independent of (λ, α). In other words, we can think of Yas a noisy observation of ηn. The following result extends (3.5) to noisy observations of data generated by a Dirichlet mixture. A special case of this appears as [1, Theorem 3]. Theorem 3.5. With notation as above, we have L(λ|Y) =Z (0,∞)×M1×SnD tν+nX i=1δxi L(κ, ρ,ηn|Y;dt dν dx ), (3.9) In particular, if αis not random, as in the setting of Theorem 3.2, so that Yandλare conditionally independent given ηn, then L(λ|Y) =Z SnD α+nX i=1δxi L(ηn|Y;dxn) (3.10) 13 Proof. LetB∈ M 1. Then P(λ∈B|Y) =E[P(λ∈B|Y,ηn)|Y] =E[P(λ∈B|ηn)|Y], (3.11) since Yandλare independent given ηn. Similarly, P(λ∈B|ηn) =E[P(λ∈B|ηn, α)|ηn] =E[P(λ∈B|ηn, α)|Y,ηn], (3.12) since Yandαare independent given ηn. Substituting (3.12) in (3.11), we have P(λ∈B|Y) =E[E[P(λ∈B|ηn, α)|Y,ηn]|Y] =E[P(λ∈B|ηn, α)|Y]. By Proposition 3.4, this gives P(λ∈B|Y) =E D α+nX i=1δηi, B Y =Z (0,∞)×M1×SnD tν+nX i=1δxi, B L(κ, ρ,ηn|Y;dt dν dx ), which is (3.9). In the case that αis not random, this reduces to (3.10). 3.5 Understanding the IDP parameters Letϖ∼ D(κD(εϱ)) be an IDP and ξ={ξmn}m,n∈Nan array of samples from ϖ. By this we mean there exists a sequence of random measures µ={µm}m∈Nsuch that µ|ϖ∼ϖ∞ andξm|µm∼µ∞ m. Note that each µmis a function of ξm. More specifically, the random measure µmis the almost sure weak limit of the empirical distributions n−1Pn j=1δξmj. As noted in Section 1, we call the µm’s the row distributions, and we call ϖthe row distribution generator. Since ξm|µm∼µ∞ mandµm∼ D(εϱ), we may apply the results of Section 3.3. For example, by (3.4), we have P(ξmn∈A) =ϱ(A). 
In other words, the base measure ϱis the prior distribution on the samples ξmn. Similarly, by (3.6), we have P(ξm,n+1∈A|Xmn) =ε ε+nϱ(A) +1 ε+nnX j=1δξmj(A). As in Section 3.3, if the row concentration εis small, then our prior expectation on the empirical distributions of the rows ξmis that they are concentrated on only a few points in S. Conversely, if εis large, then we expect those empirical distributions to be spread out over many points. By Theorem 2.3 and (3.6) we have P(ξm+1,n∈A|µm) =E[P(ξm+1,n∈A|µm+1)|µm] =E[µm+1(A)|µm] =Z M1ν(A)L(µm+1|µm;dν) =κ κ+mZ M1ν(A)D(εϱ, dν ) +1 κ+mmX i=1µi(A). 14 Hence, (3.3) gives P(ξm+1,n∈A|µm) =κ κ+mϱ(A) +1 κ+mmX i=1µi(A). As above, if the column concentration κis small, then we expect the empirical distribution of the µm’s to be concentrated on only a few points in M1. In other words, we expect the µ’s to take on only a handful of different possible values. Conversely, if κis large, then we expect the µm’s to take on many different values. Remark 3.6.The IDP bears some superficial resemblance to what is called a hierarchical Dirichlet process . A hierarchical Dirichlet process is a mixture of Dirichlet
processes on S with mixing distribution L(α), where α/α(S) is itself a Dirichlet process (see [15, 16]). An IDP is a Dirichlet process on M1 whose base measure is the law of a Dirichlet process. A hierarchical Dirichlet process, on the other hand, is a Dirichlet process on S whose base measure is itself a Dirichlet process. These are actually quite different. In fact, the two processes have different state spaces. A hierarchical Dirichlet process takes values in M1, whereas an IDP takes values in M1(M1). For example, if we take κ′ to be random and ρ′ ∼ D(εϱ), then we can define a hierarchical Dirichlet process λ by the conditional distribution λ | κ′, ρ′ ∼ D(κ′ρ′). Note that ρ′ is an M1-valued random variable. In contrast, for the IDP, we take κ and ρ to be nonrandom, with ρ = D(εϱ). In this case, ρ is a nonrandom element of M1(M1). We then define an IDP ϖ by the unconditional distribution ϖ ∼ D(κρ).

4 Sequential imputation

Recall that our goal is to compute (1.2), which we can do by computing (1.3). As we saw in Section 1, it would be unreasonable to calculate (1.3) exactly, at least when m < M, since in that case we would need to simultaneously condition on multiple rows of observations. If, however, we condition on only one row of observations, then we can make progress. (See Section 5.3.) Suppose, for instance, that we can compute (1.4) for 1 ≤ m ≤ M. If we can use these instead of (1.3) to compute (1.2), then we can avoid the issues that arose in the exact calculation of (1.3). By Theorem 2.4, we must find a way to use (1.4) to compute L(µM | XMN). If we do this via simulation, then we must find a way to use (1.4) to simulate µM according to the conditional distribution L(µM | XMN). One approach would be to simulate µ1 according to the distribution L(µ1 | X1N), and then use that simulated value to simulate µ2 according to L(µ2 | X2N, µ1), and so on.
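The row-by-row scheme just described can be sketched structurally. In the toy chain below, `sample_mu` is a hypothetical stand-in for a draw from L(µm | XmN, µm−1); it is not an actual IDP posterior. The point is only the shape of the recursion: each step conditions on its own row and on the previously simulated value alone:

```python
import random

rng = random.Random(2)

def sample_mu(m, row_data, prev_mu, rng):
    """Hypothetical stand-in for a draw from L(mu_m | X_mN, mu_{m-1}).
    Here: a toy update that mixes the row mean with the previous draw."""
    row_mean = sum(row_data) / len(row_data)
    center = row_mean if prev_mu is None else 0.5 * (row_mean + prev_mu)
    return rng.gauss(center, 0.1)

# Three toy rows of 50 observations each, centered at 1, 2, 3.
rows = [[rng.gauss(m, 1.0) for _ in range(50)] for m in range(1, 4)]

mu = None
for m, row in enumerate(rows, start=1):
    # Each step sees only its own row and the previously simulated mu.
    mu = sample_mu(m, row, mu, rng)
print(mu)
```

The final draw never sees rows 1 and 2 directly, which is exactly the defect described next.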
However, if we do that, then we would not be simulating µMaccording to its correct conditional distribution, since our simulation of µmwould not take into account observations from higher-numbered rows. One way to fix this is to do many such incorrect simulations of µM. Let Kbe the number of incorrect simulations we generate. Some of these Ksimulations will be “more incorrect” than others. We then assign the Ksimulated values weights according to their 15 level of correctness, and choose one of them randomly, with probabilities proportional to those weights. If we assign the weights appropriately, then the distribution of the chosen value will converge to L(µM|XMN) asKtends to infinity. This is the method of sequential imputation, first introduced in [11]. It is an application of the more general method of importance sampling that originated in [9]. In this section, we show how to apply sequential imputation to the IDP. This was done in [12] in the special caseS={0,1}. The work in [12] was based on the work in [11], which evidently assumed that the underlying random variables have densities with respect to some product measure. In Proposition 4.5, we show that
this assumption is not satisfied by the IDP. In particular, the work in [11] is not sufficient to justify its use in [12]. In Theorem 4.7, we provide a new proof that justifies this approach in general, including the case S={0,1}treated in [12] as well as the general case treated here. 4.1 Importance sampling Importance sampling is a method of approximating a particular probability distribution using samples from a different distribution. The samples themselves will vary in how “important” they are in determining the distribution of interest. This is modeled by assigning different weights to the samples. We begin by presenting, without commentary, the formal statement of the method of importance sampling in Theorem 4.1 below. We then describe in Remark 4.2 the intuitive interpretation of the method. Let ( T,T) be a measurable space. Let Zbe an S-valued random variable and Y aT-valued random variable. Let m∗be a probability kernel from TtoSsuch that L(Z|Y)≪m∗(Y) a.s. Assume there exist measurable functions w:T×S→Rand h:T→[0,∞) such that h(Y)>0 a.s., Eh(Y)<∞, and w(Y,·) =h(Y)dL(Z|Y) dm∗(Y)a.s. (4.1) Define Z∗so that Z∗|Y∼m∗(Y) and let W=w(Y, Z∗). Let {(Z∗,k, Wk)}∞ k=1be copies of (Z∗, W) that are i.i.d. given Y. Define eZKso that eZK|Z∗,1, . . . , Z∗,K, Y∝KX k=1WkδZ∗,k (4.2) Theorem 4.1. With the notation and assumptions given above, we have L(eZK|Z∗,1, . . . , Z∗,K, Y)→ L(Z|Y)a.s. (4.3) asK→ ∞ . Remark 4.2.The interpretation of Theorem 4.1 is the following. We observe Yand we wish to determine L(Z|Y). Unfortunately, for one reason or another, this is not directly possible. Instead, we are only able to determine a different distribution, m∗(Y), which we call the simulation measure. Using m∗(Y), we generate an i.i.d. collection of samples, Z∗,1, . . . , Z∗,K. 16 Thek-th sample, Z∗,k, gets assigned the weight Wk=w(Y, Z∗,k), where wis some function satisfying (4.1). We then use these weights to randomly choose one of the Ksamples. 
The randomly chosen sample is denoted by eZK. Theorem 4.1 says that if Kis large, then the law of eZKis close to L(Z|Y). Proof of Theorem 4.1. First note that E[w(Y, Z∗,1)|Y] =Z Sw(Y, z)m∗(Y, dz) =h(Y). (4.4) Hence, Ew(Y, Z∗,1) =Eh(Y)<∞. Now let f:S→Rbe bounded and measurable. By Proposition 2.1, we have 1 KKX k=1w(Y, Z∗,k)f(Z∗,k)→E[w(Y, Z∗,1)f(Z∗,1)|Y] a.s. But E[w(Y, Z∗,1)f(Z∗,1)|Y] =Z Sw(Y, z)f(z)m∗(Y, dz) =h(Y)Z Sf(z)L(Z|Y;dz) =h(Y)E[f(Z)|Y]. Therefore, since h(Y)>0 a.s., E[f(eZK)|Z∗,1, . . . , Z∗,K, Y] =PK k=1w(Y, Z∗,k)f(Z∗,k)PK k=1w(Y, Z∗,k) →h(Y)E[f(Z)|Y] h(Y)E[1|Y] =E[f(Z)|Y], and this proves (4.3). 4.2 Effective sample size Letf:S→Rbe continuous and bounded. By (4.3), if Kis large, then PK k=1Wkf(Z∗,k)PK k=1Wk≈E[f(Z)|Y]. On the other hand, by (2.2), if {Zk}∞ k=1are copies of Zthat are i.i.d. given Y, then 1 KKX k=1f(Zk)≈E[f(Z)|Y]. This latter estimate of E[f(Z)|Y] is presumably more efficient, in the sense that smaller K values are needed. This is because in the latter estimate, we are generating values directly fromL(Z|Y), rather than from the modified distribution m∗(Y). 17 In an effort to measure this difference in efficiency, let Kbe a given number of weighted samples. We wish to find a number Kesuch that VarPK k=1Wkf(Z∗,k)PK
k=1Wk Y ≈Var1 KeKeX k=1f(Zk) Y . The right-hand side is K−1 eVar(f(Z)|Y). In [10], it is shown that VarPK k=1Wkf(Z∗,k)PK k=1Wk Y ≈1 KVar(f(Z)|Y) 1 + VarW h(Y) Y . We therefore define Ke=K 1 + Var W h(Y) Y, and call this the effective sample size . By (4.4), we have VarW h(Y) Y =Var(W|Y) h(Y)2=Var(W|Y) E[W|Y]2. Therefore, Ke=K 1 +Var(W|Y) E[W|Y]2−1 . If we approximate E[W|Y] by the sample mean, W=K−1PK k=1Wk, and Var( W|Y) by the population variance, eS2= (K−1PK k=1W2 k)−W2, then we have Ke≈K′ e, where K′ e=K 1 +eS2/W2= PK k=1Wk2 PK k=1W2 k On the other hand, if we use the sample variance, S2=K(K−1)−1eS2, then we have Ke≈K′′ e, where K′′ e=K 1 +S2/W2=K(K−1) K−1 +KeS2/W2=K(K−1) K−1 +K(K/K′ e−1)=K−1 K−K′ e/K K′ e. 4.3 Sequential imputation and the simulation measure Now fix M∈N. Let Z= (Z1, . . . , Z M) be an SM-valued random variable and let z= (z1, . . . , z M) denote an element of SM. We adopt the notation Zm= (Z1, . . . , Z m) and we use zm= (z1, . . . , z m) for an element of Sm. Note that ZM=ZandzM=z. We also let Y= (Y1, . . . , Y M) be a TM-valued random variable and adopt similar notation in that case. We think of Yas observed values and Zas unobserved. In this sense, Zis regarded as “missing data.” We wish to determine L(Z|Y). Suppose, however, that we are only able to 18 determine L(Zm|Ym,Zm−1) for 1 ≤m≤M. (By convention, a variable with a 0 subscript is omitted. Hence, when m= 1, we have L(Zm|Ym,Zm−1) =L(Z1|Y1).) We describe here a method of using L(Zm|Ym,Zm−1) to approximate L(Z|Y). This method is called sequential imputation and first appeared in [11]. Consider, for the moment, the case M= 2. By conditioning on Z1, we could determine L(Z|Y) sequentially, if we could compute (i)L(Z1|Y) and (ii)L(Z2|Y, Z 1). The second of these is available to us, but the first is not. Instead of (i), we can only compute L(Z1|Y1). The idea in sequential imputation is to use L(Z1|Y1) to simulate Z1, then use this simulated value in (ii) to determine the law of Z2. 
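The estimate K′e derived in Section 4.2 above reduces to a one-line computation from the weights alone. A minimal sketch (equal weights recover K′e = K, while a single dominant weight collapses it):

```python
def effective_sample_size(weights):
    """Weight-based effective sample size: K'_e = (sum W_k)^2 / sum W_k^2."""
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / s2

print(effective_sample_size([1.0] * 100))           # equal weights: K'_e = 100.0
print(effective_sample_size([1.0] * 99 + [100.0]))  # one dominant weight: K'_e is small
```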
We are substituting the missing data Z1with its (incorrectly) simulated value. This kind of substitution is called imputation. Since we are using L(Z1|Y1) instead of the correct distribution in (i), we must combine this with the method of importance sampling presented in Theorem 4.1. To apply Theorem 4.1, we first construct the simulation measure m∗. Let γmbe a probability kernel from Tm×Sm−1toSwith Zm|Ym,Zm−1∼γm(Ym,Zm−1). Let γ∗ mbe the probability kernel from TM×Sm−1toSgiven by γ∗ m(y,zm−1) =γm(ym,zm−1). Note thatγ∗ M−1is a probability kernel from TM×SM−2toSandγ∗ Mis a probability kernel from TM×SM−1toS. Hence, γ∗ M−1γ∗ Mis a probability kernel from TM×SM−2toS2. Iterating this, if we define m∗=γ∗ 1···γ∗ M, then m∗is a probability kernel from TMtoSM. In Theorem 4.1, we have Z∗,k|Y∼m∗(Y). Hence, Z∗,k m|Y, Z∗,k 1, . . . , Z∗,k m−1∼γ∗ m(Y, Z∗,k 1, . . . , Z∗,k m−1) =γm(Ym, Z∗,k
1, . . . , Z∗,k m−1). (4.5)

In other words, the simulated vector Z^{∗,k} = (Z^{∗,k}_1, . . . , Z^{∗,k}_M) can be constructed sequentially using L(Zm | Ym, Zm−1), where in each step the missing data Zm−1 is imputed with the previously simulated values Z^{∗,k}_1, . . . , Z^{∗,k}_{m−1}. To prove that Theorem 4.1 applies in this situation, we must find a weight function w satisfying (4.1).

4.4 A simulation density

As noted earlier, sequential imputation first appeared in [11]. There, a weight function was constructed using density functions. The proof and construction in [11] did not specify the codomain of the random variables Y and Z, nor did it specify the measures with respect to which they have joint and conditional densities. In Theorem 4.4 below, we give a rigorous formulation of the proof in [11]. First, we clarify the assumptions about the existence of densities, and then show how this relates to the simulation measure m∗.

Assumption 4.3. There exist σ-finite measures n and ñ on S and T, respectively, such that L(Y, Z) ≪ ñ^M × n^M.

If Assumption 4.3 holds, then we may let f = dL(Y, Z)/d(ñ^M × n^M) be a density of (Y, Z) with respect to ñ^M × n^M. If we write f with omitted arguments, it is assumed that they have been integrated out. For example,

f(y1, zm) = \int_{S^{M−m}} \int_{T^{M−1}} f(y, z) ñ^{M−1}(dy2 · · · dyM) n^{M−m}(dz_{m+1} · · · dz_M).

In other words, such functions are the marginal densities. By changing f on a set of measure zero, we may assume f ∈ [0, ∞) everywhere, and if the value of a marginal density at a point is 0, then f at that point is 0 for all values of the omitted arguments. We use | to denote conditional densities. For example,

f(z_{m+1} | ym, zm) = f(ym, z_{m+1}) / f(ym, zm).

As usual, we adopt the convention that a variable with a 0 subscript is omitted. For instance, if m = 1, then f(ym, zm−1) = f(y1) and f(zm | ym, zm−1) = f(z1 | y1). With this notation, we may write

γ∗_m(y, zm−1, dzm) = f(zm | ym, zm−1) n(dzm).

We also have

(γ∗_{M−1} γ∗_M)(y, z_{M−2}, dz_{M−1} dz_M) = γ∗_M(y, z_{M−1}, dz_M) γ∗_{M−1}(y, z_{M−2}, dz_{M−1}) = f(z_M | y_M, z_{M−1}) f(z_{M−1} | y_{M−1}, z_{M−2}) n(dz_M) n(dz_{M−1}).
Iterating this, we obtain m∗(y, dz) =f∗(y, z)nM(dz), where f∗(y, z) =MY m=1f(zm|ym,zm−1). 4.5 A proof using densities We now define the weight function and show that sequential imputation leads asymptotically toL(Z|Y). For ( y, z)∈T×S, define w(y, z) =MY m=1f(ym|ym−1,zm−1). Define Z∗,kandeZKas in (4.2). Theorem 4.4. If Assumption 4.3 holds and f(y)∈L2(enM), then L(eZK|Z∗,1, . . . , Z∗,K, Y)→ L(Z|Y)a.s. Proof. By Theorem 4.1, it suffices to show that L(Z|Y)≪m∗(Y) a.s., f(Y)>0 a.s., Ef(Y)<∞, and w(Y,·) =f(Y)dL(Z|Y) dm∗(Y)a.s. Since f(y) is the density of Ywith respect to enM, we have P(f(Y) = 0) =Z f−1({0})f(y)enM(dy) = 0 , 20 so that f(Y)>0 a.s. Since f(y)∈L2(enM), we also have Ef(Y) =Z TMf(y)2enM(dy)<∞. Finally, w(y, z)f∗(y, z) =MY m=1f(ym,zm−1) f(ym−1,zm−1)f(ym,zm) f(ym,zm−1)=f(y, z). Hence, L(Z|Y=y;dz) =f(z|y)nM(dz) =f(y, z) f(y)nM(dz) =w(y, z) f(y)f∗(y, z)nM(dz) =w(y, z) f(y)m∗(y, dz). Therefore, dL(Z|Y)/dm∗(Y) =w(Y,·)/f(Y) a.s. 4.6 The IDP has no density We wish to apply sequential imputation to determine L(µM|XMN), using the computable distributions (1.4). In this case, we would naturally take Zm=µmandYm=XmN= (ξm1, ξm2, . . . , ξ mN). In [12], the author did exactly this in the special case S={0,1}. Unfortunately, as we see below
https://arxiv.org/abs/2505.00451v1
in Proposition 4.5, sequential imputation, as it is presented in Theorem 4.4, does not apply in this case. Namely, Assumption 4.3 is not satisfied: the vector $Z = \boldsymbol{\mu}_M$ has no joint density with respect to any product measure. Hence, the proof of Theorem 4.4, which is a rigorous presentation of the proof in [11], does not justify the use of sequential imputation in this setting. This includes not only the general setting that we are working with, but also the special case $S = \{0, 1\}$ that was treated in [12].

Proposition 4.5. If $\varrho$ is not a point mass, then there do not exist $\sigma$-finite measures $n_1$ and $n_2$ on $M_1$ such that $\mathcal{L}(\boldsymbol{\mu}_2) \ll n_1 \times n_2$.

Proof. Suppose $\varrho$ is not a point mass. Assume that $n_1$ and $n_2$ are $\sigma$-finite measures on $M_1$ and $\mathcal{L}(\boldsymbol{\mu}_2) \ll n_1 \times n_2$. Let $A = \{\gamma \in M_1 : n_1(\{\gamma\}) > 0\}$, so that $A$ is countable. Let $D = \{(\gamma, \gamma) \in M_1^2 : \gamma \notin A\}$. Then
\[
(n_1 \times n_2)(D) = \int_{A^c} n_1(\{\gamma\})\, n_2(d\gamma) = 0.
\]
Therefore, by hypothesis, $P(\boldsymbol{\mu}_2 \in D) = 0$.

Since $\varrho$ is not a point mass, the measure $\rho = \mathcal{D}(\varepsilon\varrho)$ has no discrete component. Therefore, $P(\mu_1 \in A) = \rho(A) = 0$, since $A$ is countable. Hence, by [7, Theorem 5.4] and (3.6), we have
\begin{align*}
P(\boldsymbol{\mu}_2 \in D) &= P(\mu_1 \notin A,\ \mu_1 = \mu_2) = P(\mu_1 = \mu_2) = E[P(\mu_1 = \mu_2 \mid \mu_1)] = E[\mathcal{L}(\mu_2 \mid \mu_1; \{\mu_1\})] \\
&= E\left[\frac{\kappa}{\kappa + 1}\rho(\{\mu_1\}) + \frac{1}{\kappa + 1}\delta_{\mu_1}(\{\mu_1\})\right] = \frac{1}{\kappa + 1} > 0,
\end{align*}
a contradiction.

4.7 A proof without a simulation density

In order to prove that sequential imputation applies to the IDP, we must give a new proof based on a new assumption. We continue to let the simulation measure $m^*$ be defined as in Section 4.3, but we must drop Assumption 4.3, and consequently drop the assumption that $m^*$ has a density with respect to a product measure. We cannot drop densities altogether, though, since they are essential to defining the weight function.

Assumption 4.6. There exist $\sigma$-finite measures $n_1, n_2, \ldots, n_M$ and $\widetilde{n}$ on $S, S^2, \ldots, S^M$ and $T$, respectively, such that $\mathcal{L}(Y, \mathbf{Z}_m) \ll \widetilde{n}^M \times n_m$ for every $m$.

Under Assumption 4.6, we may let $f_m$ be a density of $(Y, \mathbf{Z}_m)$ with respect to $\widetilde{n}^M \times n_m$. We adopt the same assumptions and notational conventions for $f_m$ as we did for $f$ in Section 4.4.
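The marginal and conditional density conventions above can be made concrete on finite spaces, where the reference measures can be taken to be counting measures. The following sketch is illustrative only (the spaces and the joint pmf are invented, not from the paper): it builds a joint pmf of $(Y_1, Y_2, Z_1, Z_2)$, forms marginals by summing out omitted arguments, and checks that the resulting conditional density of $Z_2$ given $(Y_1, Z_1)$ is a probability mass function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint pmf of (Y1, Y2, Z1, Z2) on finite spaces T = {0,1,2}, S = {0,1}.
# Counting measure plays the role of the sigma-finite reference measures.
f = rng.random((3, 3, 2, 2))
f /= f.sum()  # normalize so f is a probability mass function

# Marginal densities, in the "omitted arguments" convention:
# omitted variables are summed (integrated) out.
f_y1_z1 = f.sum(axis=(1, 3))   # f(y1, z1)
f_y1_z12 = f.sum(axis=1)       # f(y1, z1, z2)

# Conditional density f(z2 | y1, z1) = f(y1, z1, z2) / f(y1, z1).
f_z2_given = f_y1_z12 / f_y1_z1[:, :, None]

# Each conditional slice sums to 1 over z2.
print(f_z2_given.sum(axis=2))
```

The same index bookkeeping is what the densities $f_m$ of Assumption 4.6 encode in the general, not necessarily product, setting.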
For $(y, z) \in T^M \times S^M$, define
\[
w(y, z) = f_1(y_1) \prod_{m=1}^{M-1} f_m(y_{m+1} \mid \mathbf{y}_m, \mathbf{z}_m). \tag{4.6}
\]
Define $Z^{*,k}$ and $\widetilde{Z}^K$ as in (4.2).

Theorem 4.7. If Assumption 4.6 holds and $f_M(y) \in L^2(\widetilde{n}^M)$, then $\mathcal{L}(\widetilde{Z}^K \mid Z^{*,1}, \ldots, Z^{*,K}, Y) \to \mathcal{L}(Z \mid Y)$ a.s.

Proof. The first part of the proof of Theorem 4.4 carries over, so we need only show that
\[
w(Y, \cdot) = f_M(Y)\, \frac{d\mathcal{L}(Z \mid Y)}{dm^*(Y)} \quad \text{a.s.} \tag{4.7}
\]
Let $k \in \{1, \ldots, M-1\}$ and let $A \in \mathcal{T}$, $B \in \mathcal{S}^k$, and $C \in \mathcal{S}$. Then
\begin{align*}
P(Y_{k+1} \in A, \mathbf{Z}_k \in B, Z_{k+1} \in C \mid \mathbf{Y}_k, \mathbf{Z}_k)
&= 1_B(\mathbf{Z}_k)\, P(Y_{k+1} \in A, Z_{k+1} \in C \mid \mathbf{Y}_k, \mathbf{Z}_k) \\
&= 1_B(\mathbf{Z}_k)\, E[P(Y_{k+1} \in A, Z_{k+1} \in C \mid \mathbf{Y}_{k+1}, \mathbf{Z}_k) \mid \mathbf{Y}_k, \mathbf{Z}_k] \\
&= 1_B(\mathbf{Z}_k)\, E[1_A(Y_{k+1})\, \gamma_{k+1}(\mathbf{Y}_{k+1}, \mathbf{Z}_k, C) \mid \mathbf{Y}_k, \mathbf{Z}_k] \\
&= 1_B(\mathbf{Z}_k) \int_A \gamma_{k+1}(\mathbf{Y}_k, y_{k+1}, \mathbf{Z}_k, C)\, f_k(y_{k+1} \mid \mathbf{Y}_k, \mathbf{Z}_k)\, \widetilde{n}(dy_{k+1}).
\end{align*}
Hence,
\begin{align*}
P(Y_{k+1} \in A, \mathbf{Z}_k \in B, Z_{k+1} \in C \mid \mathbf{Y}_k)
&= E\left[\left. 1_B(\mathbf{Z}_k) \int_A \gamma_{k+1}(\mathbf{Y}_k, y_{k+1}, \mathbf{Z}_k, C)\, f_k(y_{k+1} \mid \mathbf{Y}_k, \mathbf{Z}_k)\, \widetilde{n}(dy_{k+1}) \,\right|\, \mathbf{Y}_k \right] \\
&= \int_B \int_A \gamma_{k+1}(\mathbf{Y}_k, y_{k+1}, \mathbf{z}_k, C)\, f_k(y_{k+1} \mid \mathbf{Y}_k, \mathbf{z}_k)\, \widetilde{n}(dy_{k+1})\, f_k(\mathbf{z}_k \mid \mathbf{Y}_k)\, n_k(d\mathbf{z}_k) \\
&= \int_A \int_B \int_C f_k(y_{k+1} \mid \mathbf{Y}_k, \mathbf{z}_k)\, \gamma_{k+1}(\mathbf{Y}_k, y_{k+1}, \mathbf{z}_k, dz_{k+1})\, f_k(\mathbf{z}_k \mid \mathbf{Y}_k)\, n_k(d\mathbf{z}_k)\, \widetilde{n}(dy_{k+1}).
\end{align*}
On the other hand,
\[
P(Y_{k+1} \in A, \mathbf{Z}_k \in B, Z_{k+1} \in C \mid \mathbf{Y}_k) = \int_A \int_{B \times C} f_{k+1}(y_{k+1}, \mathbf{z}_{k+1} \mid \mathbf{Y}_k)\, n_{k+1}(d\mathbf{z}_{k+1})\, \widetilde{n}(dy_{k+1}).
\]
Hence,
\[
\int_{B \times C} f_{k+1}(y_{k+1}, \mathbf{z}_{k+1} \mid \mathbf{Y}_k)\, n_{k+1}(d\mathbf{z}_{k+1}) = \int_B \int_C f_k(y_{k+1} \mid \mathbf{Y}_k, \mathbf{z}_k)\, \gamma_{k+1}(\mathbf{Y}_k, y_{k+1}, \mathbf{z}_k, dz_{k+1})\, f_k(\mathbf{z}_k \mid \mathbf{Y}_k)\, n_k(d\mathbf{z}_k),
\]
for $\widetilde{n}$-a.e. $y_{k+1} \in T$. In particular, with probability one, we have
\begin{align*}
\mathcal{L}(\mathbf{Z}_{k+1} \mid \mathbf{Y}_{k+1}; d\mathbf{z}_{k+1}) &= f_{k+1}(\mathbf{z}_{k+1} \mid \mathbf{Y}_{k+1})\, n_{k+1}(d\mathbf{z}_{k+1}) = \frac{f_{k+1}(Y_{k+1}, \mathbf{z}_{k+1} \mid \mathbf{Y}_k)}{f_{k+1}(Y_{k+1} \mid \mathbf{Y}_k)}\, n_{k+1}(d\mathbf{z}_{k+1}) \\
&= \frac{f_{k+1}(\mathbf{Y}_k)}{f_{k+1}(\mathbf{Y}_{k+1})}\, f_k(Y_{k+1} \mid \mathbf{Y}_k, \mathbf{z}_k)\, \gamma_{k+1}(\mathbf{Y}_{k+1}, \mathbf{z}_k, dz_{k+1})\, f_k(\mathbf{z}_k \mid \mathbf{Y}_k)\, n_k(d\mathbf{z}_k).
\end{align*}
Since $f_{k+1}(\mathbf{y}_k)$ and $f_k(\mathbf{y}_k)$ are both densities of $\mathbf{Y}_k$ with respect to $\widetilde{n}^k$, we have $f_{k+1}(\mathbf{y}_k) = f_k(\mathbf{y}_k)$, $\widetilde{n}^k$-a.e. In particular, $f_{k+1}(\mathbf{Y}_k) = f_k(\mathbf{Y}_k)$ a.s. Thus,
\[
f_{k+1}(\mathbf{z}_{k+1} \mid \mathbf{Y}_{k+1})\, n_{k+1}(d\mathbf{z}_{k+1}) = \frac{f_k(\mathbf{Y}_k)}{f_{k+1}(\mathbf{Y}_{k+1})}\, f_k(Y_{k+1} \mid \mathbf{Y}_k, \mathbf{z}_k)\, \gamma^*_{k+1}(Y, \mathbf{z}_k, dz_{k+1})\, f_k(\mathbf{z}_k \mid \mathbf{Y}_k)\, n_k(d\mathbf{z}_k),
\]
almost surely. Note that $\gamma^*_1(Y, dz_1) = \gamma_1(Y_1, dz_1) = f_1(z_1 \mid Y_1)\, n_1(dz_1)$. Hence, starting with $k = M - 1$ and iterating backwards to
$k = 1$, we obtain
\[
\mathcal{L}(Z \mid Y; dz) = \frac{w(Y, z)}{f_M(Y)}\, (\gamma^*_1 \cdots \gamma^*_M)(Y, dz).
\]
Since $m^* = \gamma^*_1 \cdots \gamma^*_M$, this proves (4.7).

5 Sequential imputation for IDPs

In this section, we apply sequential imputation, in the form of Theorem 4.7, to an array of samples from an IDP. In Theorem 4.7, we take $Z = \boldsymbol{\mu}_M$ and we let $Y$ represent some observations we have made about the samples $X_{MN}$. In Section 5.1, we show that sequential imputation cannot, in general, be used on direct observations $Y = X_{MN}$. Starting in Section 5.2, therefore, we assume that $Y$ is a discrete function of $X_{MN}$.

Section 5.2 contains our main result, Theorem 5.2. This theorem shows how to use sequential imputation to compute $\mathcal{L}(\boldsymbol{\mu}_M \mid Y)$. The chief challenge is to construct the simulated row distributions, $\boldsymbol{\mu}^{*,k}_M$. According to (4.5), these should be constructed using the single-row conditional distributions, $\mathcal{L}(\mu_m \mid \mathbf{Y}_m, \boldsymbol{\mu}_{m-1})$. In Section 5.3, we compute $\mathcal{L}(\mu_m \mid \mathbf{Y}_m, \boldsymbol{\mu}_{m-1})$. In Section 5.4, we use these to generate $\boldsymbol{\mu}^{*,k}_M$. Then, in Section 5.5, we prove Theorem 5.2.

5.1 The impossibility of direct observations

A direct observation of the samples would be represented by taking $Y = X_{MN}$. In general, though, we cannot treat the case $Y = X_{MN}$ with sequential imputation, because Assumption 4.6 may fail. More specifically, if $Y = X_{MN}$ and the base measure $\varrho$ is not discrete, then Assumption 4.6 will fail, according to Proposition 5.1 below.

To prove Proposition 5.1, we begin by establishing two formulas that we will need later. Let $A \in \mathcal{S}^n$ and $B \in \mathcal{M}_1$. By Theorem 2.3,
\[
P(\mu_m \in B, X_{mn} \in A \mid \boldsymbol{\mu}_{m-1}) = E[1_B(\mu_m)\, P(X_{mn} \in A \mid \mu_m) \mid \boldsymbol{\mu}_{m-1}] = E[1_B(\mu_m)\, \mu_m^n(A) \mid \boldsymbol{\mu}_{m-1}].
\]
By (3.6), this gives
\[
P(\mu_m \in B, X_{mn} \in A \mid \boldsymbol{\mu}_{m-1}) = \frac{1}{\kappa + m - 1}\left(\kappa\, E[1_B(\mu_m)\, \mu_m^n(A)] + \sum_{i=1}^{m-1} 1_B(\mu_i)\, \mu_i^n(A)\right). \tag{5.1}
\]
In particular, since $P(X_{mn} \in A) = E[\mu_m^n(A)]$, we have
\[
P(X_{mn} \in A \mid \boldsymbol{\mu}_{m-1}) = \frac{1}{\kappa + m - 1}\left(\kappa\, P(X_{mn} \in A) + \sum_{i=1}^{m-1} \mu_i^n(A)\right). \tag{5.2}
\]

Proposition 5.1. Suppose $\varrho$ is not discrete. That is, there exists $B \in \mathcal{S}$ such that $\varrho(B) > 0$ and $\varrho(\{x\}) = 0$ for all $x \in B$. Then there do not exist $\sigma$-finite measures $\widetilde{n}_1$ and $\widetilde{n}_2$ on $S$ such that $\mathcal{L}(\xi_{11}, \xi_{21}) \ll \widetilde{n}_1 \times \widetilde{n}_2$.

Proof.
Assume $\mathcal{L}(\xi_{11}, \xi_{21}) \ll \widetilde{n}_1 \times \widetilde{n}_2$. Let $A = \{x \in S : \widetilde{n}_1(\{x\}) > 0\}$, so that $A$ is countable. Let $D = \{(x, x) \in S^2 : x \in B \cap A^c\}$. Then
\[
(\widetilde{n}_1 \times \widetilde{n}_2)(D) = \int_{B \cap A^c} \widetilde{n}_1(\{x\})\, \widetilde{n}_2(dx) = 0.
\]
Therefore, by hypothesis,
\[
P(\xi_{21} = \xi_{11},\ \xi_{11} \in B \cap A^c) = P((\xi_{11}, \xi_{21}) \in D) = 0. \tag{5.3}
\]
On the other hand, by Theorem 2.5 and (5.2),
\begin{align*}
P(\xi_{21} \in C \mid \xi_{11}) &= E[P(\xi_{21} \in C \mid \mu_1, \xi_{11}) \mid \xi_{11}] = E[P(\xi_{21} \in C \mid \mu_1) \mid \xi_{11}] \\
&= E\left[\left. \frac{1}{\kappa + 1}\big(\kappa\, P(\xi_{21} \in C) + \mu_1(C)\big) \,\right|\, \xi_{11}\right] = \frac{\kappa}{\kappa + 1}\varrho(C) + \frac{1}{\kappa + 1} E[\mu_1(C) \mid \xi_{11}].
\end{align*}
By (3.5) and Theorem 3.1, this gives
\[
P(\xi_{21} \in C \mid \xi_{11}) = \frac{\kappa}{\kappa + 1}\varrho(C) + \frac{1}{\kappa + 1}\left(\frac{\varepsilon}{\varepsilon + 1}\varrho(C) + \frac{1}{\varepsilon + 1}\delta_{\xi_{11}}(C)\right).
\]
Hence,
\[
\mathcal{L}(\xi_{21} \mid \xi_{11}) = \frac{\tau}{\tau + 1}\varrho + \frac{1}{\tau + 1}\delta_{\xi_{11}},
\]
where $\tau = (\kappa + 1)(\varepsilon + 1) - 1$. We therefore have
\[
P(\xi_{21} = \xi_{11},\ \xi_{11} \in B \cap A^c) = E[1_{B \cap A^c}(\xi_{11})\, P(\xi_{21} = \xi_{11} \mid \xi_{11})] = E\left[1_{B \cap A^c}(\xi_{11})\left(\frac{\tau}{\tau + 1}\varrho(\{\xi_{11}\}) + \frac{1}{\tau + 1}\right)\right].
\]
Note that $\xi_{11} \in B$ implies $\varrho(\{\xi_{11}\}) = 0$. Thus,
\[
P(\xi_{21} = \xi_{11},\ \xi_{11} \in B \cap A^c) = \frac{1}{\tau + 1} P(\xi_{11} \in B \cap A^c) = \frac{1}{\tau + 1}\varrho(B \cap A^c).
\]
Since $A$ is countable,
\[
\varrho(B \cap A^c) = \varrho(B) - \sum_{x \in A \cap B} \varrho(\{x\}) = \varrho(B) > 0,
\]
which contradicts (5.3).

5.2 Sequential imputation with discrete observations

If $\varrho$ is discrete, then we may assume without loss of generality that $S$ is countable. Therefore, if $S$ is uncountable and $Y_m = X_{mN} = (\xi_{m1}, \ldots, \xi_{mN})$, then Proposition 5.1 shows that Assumption 4.6 fails. In other words, if $S$ is uncountable and we wish to use sequential imputation, then we cannot observe the data $\xi_{ij}$ directly. On the other hand, we can observe discrete functions of $\xi_{ij}$. This is because Assumption 4.6 is
trivially satisfied whenever $Y$ is discrete. From an applied perspective, this is no restriction at all. Any real-world measurement will have limits to its precision, meaning that only a finite number of measurement outcomes are possible.

Theorem 5.2 below describes how to use sequential imputation to compute $\mathcal{L}(\boldsymbol{\mu}_M \mid Y)$ when our observations $Y$ are discrete. The proof of Theorem 5.2 will be given in Section 5.5. Before stating Theorem 5.2, we first establish some notation.

Let $T$ be a countable set, fix $N \in \mathbb{N}$, and let $\varphi_m : S^N \to T$. Define $Y_m = \varphi_m(X_{mN})$. We adopt the notation of Section 4.3, so that $Y = (Y_1, \ldots, Y_M)$, $\mathbf{Y}_m = (Y_1, \ldots, Y_m)$, $y = (y_1, \ldots, y_M) \in T^M$, and $\mathbf{y}_m = (y_1, \ldots, y_m) \in T^m$. We will apply Theorem 4.7 with $M_1$ in place of $S$ and $\boldsymbol{\mu}_M$ in place of $Z$. We therefore change notation from $z$ to $\nu$. That is, $\nu = (\nu_1, \ldots, \nu_M) \in M_1^M$ and $\boldsymbol{\nu}_m = (\nu_1, \ldots, \nu_m) \in M_1^m$.

Let $\varrho_n = \mathcal{L}(X_{mn})$, so that $\varrho_1 = \varrho$. Using (3.8) with $\varepsilon\varrho$ instead of $\kappa\rho$ gives us a recursive way to compute $\varrho_n$. In particular, for $B_n \in \mathcal{S}^n$ and $B \in \mathcal{S}$, we have
\[
\varrho_{n+1}(B_n \times B) = \frac{\varepsilon}{\varepsilon + n}\varrho_n(B_n)\varrho(B) + \frac{1}{\varepsilon + n}\sum_{i=1}^n \varrho_n(B_n \cap \pi_i^{-1}B),
\]
where $\pi_i : S^n \to S$ is the projection onto the $i$th coordinate.

For $m \in \{1, \ldots, M\}$ and $y_m \in T$, let $A_m = \varphi_m^{-1}(\{y_m\}) \in \mathcal{S}^N$. Then $Y_m = y_m$ if and only if $X_{mN} \in A_m$. Therefore, the prior likelihoods, $P(Y_m = y_m)$, satisfy
\[
P(Y_m = y_m) = \varrho_N(A_m). \tag{5.4}
\]
Although the notation does not explicitly indicate it, we must remember that the set $A_m$ depends on the vector $y_m$.

Now fix $y = (y_1, \ldots, y_M) \in T^M$. Using $y$, we will construct a weighted simulation of $\boldsymbol{\mu}_M$, which is a pair $(t, \mathbf{u}^*_M)$, where $t = \{t_{mi} : 1 \le m \le M,\ 1 \le i \le m\}$ is a triangular array of $[0, \infty)$-valued random variables and $\mathbf{u}^*_M = (u^*_1, \ldots, u^*_M)$ is a vector of $M_1$-valued random variables, all of which are independent of $Y$. The rows of $t$, which we denote by $t_m = (t_{m1}, \ldots, t_{mm})$, are called the row weights of the weighted simulation, and the random measures $u^*_m$ are called the simulated row distributions. We construct $t_m$ and $u^*_m$ by recursion on $m$ as follows.
Let
\[
t_{mi} = \begin{cases} (u^*_i)^N(A_m) & \text{if } 1 \le i < m, \\ \kappa\varrho_N(A_m) & \text{if } i = m, \end{cases} \tag{5.5}
\]
and
\[
\mathcal{L}(u^*_m \mid \mathbf{u}^*_{m-1}) \propto t_{mm} \int_{S^N} \mathcal{D}\left(\varepsilon\varrho + \sum_{n=1}^N \delta_{x_n}\right) \varrho_N(dx \mid A_m) + \sum_{i=1}^{m-1} t_{mi}\, \delta_{u^*_i}, \tag{5.6}
\]
where $\mathbf{u}^*_{m-1} = (u^*_1, \ldots, u^*_{m-1})$ and $\varrho_N(A \mid A_m) = P(X_{mN} \in A \mid X_{mN} \in A_m)$. In other words,
\[
P(u^*_m = u^*_i \mid \mathbf{u}^*_{m-1}) = \frac{t_{mi}}{t_{m1} + \cdots + t_{mm}}, \quad \text{for } 1 \le i < m,
\]
and, with probability $t_{mm}/(t_{m1} + \cdots + t_{mm})$, the random measure $u^*_m$ is independent of $\mathbf{u}^*_{m-1}$ and has distribution
\[
\int_{S^N} \mathcal{D}\left(\varepsilon\varrho + \sum_{n=1}^N \delta_{x_n}\right) \varrho_N(dx \mid A_m). \tag{5.7}
\]
Finally, having constructed the weighted simulation $(t, \mathbf{u}^*_M)$, we define the total weight of the weighted simulation to be
\[
V = \prod_{m=1}^M \frac{1}{\kappa + m - 1} \sum_{i=1}^m t_{mi}. \tag{5.8}
\]

Theorem 5.2. Let $\{(t^k, \mathbf{u}^{*,k}_M)\}_{k=1}^K$ be $K$ independent weighted simulations as above, with corresponding total weights $V_k$. Then
\[
\mathcal{L}(\boldsymbol{\mu}_M \mid Y = y) = \lim_{K \to \infty} \frac{\sum_{k=1}^K V_k\, \delta(\mathbf{u}^{*,k}_M)}{\sum_{k=1}^K V_k}, \tag{5.9}
\]
where $\delta(\mathbf{u}^{*,k}_M)$ is the point mass measure on $M_1^M$ centered at $\mathbf{u}^{*,k}_M$. Consequently, if $\Phi$ is a measurable function on $M_1^M$ taking values in a metric space and $P(\boldsymbol{\mu}_M \in D^c \mid Y = y) = 1$, where $D \subseteq M_1^M$ is the
set of discontinuities of $\Phi$, then
\[
\mathcal{L}(\Phi(\boldsymbol{\mu}_M) \mid Y = y) = \lim_{K \to \infty} \frac{\sum_{k=1}^K V_k\, \delta(\Phi(\mathbf{u}^{*,k}_M))}{\sum_{k=1}^K V_k}. \tag{5.10}
\]
The proof of Theorem 5.2 will be given in Section 5.5.

Corollary 5.3. With the assumptions of Theorem 5.2, we have
\[
\mathcal{L}(\mu_{M+1} \mid Y = y) = \lim_{K \to \infty} \frac{1}{\kappa + M}\left(\kappa\, \mathcal{D}(\varepsilon\varrho) + \sum_{m=1}^M \frac{\sum_{k=1}^K V_k\, \delta(u^{*,k}_m)}{\sum_{k=1}^K V_k}\right). \tag{5.11}
\]
Consequently, if $\Phi$ is a measurable function on $M_1$ taking values in a metric space and $P(\mu_{M+1} \in D^c \mid Y = y) = 1$, where $D \subseteq M_1$ is the set of discontinuities of $\Phi$, then
\[
\mathcal{L}(\Phi(\mu_{M+1}) \mid Y = y) = \lim_{K \to \infty} \frac{1}{\kappa + M}\left(\kappa\, \mathcal{D}(\varepsilon\varrho) \circ \Phi^{-1} + \sum_{m=1}^M \frac{\sum_{k=1}^K V_k\, \delta(\Phi(u^{*,k}_m))}{\sum_{k=1}^K V_k}\right). \tag{5.12}
\]

Proof. Let $\Psi : M_1 \to \mathbb{R}$ be continuous and bounded. By Theorem 2.5 and (3.6), we have
\begin{align*}
E[\Psi(\mu_{M+1}) \mid Y] &= E[E[\Psi(\mu_{M+1}) \mid \boldsymbol{\mu}_M, Y] \mid Y] = E[E[\Psi(\mu_{M+1}) \mid \boldsymbol{\mu}_M] \mid Y] \\
&= E\left[\left. \frac{\kappa}{\kappa + M} \int_{M_1} \Psi(\nu)\, \mathcal{D}(\varepsilon\varrho, d\nu) + \frac{1}{\kappa + M} \sum_{m=1}^M \Psi(\mu_m) \,\right|\, Y\right] \\
&= \frac{\kappa}{\kappa + M} \int_{M_1} \Psi(\nu)\, \mathcal{D}(\varepsilon\varrho, d\nu) + \frac{1}{\kappa + M} \sum_{m=1}^M E[\Psi(\mu_m) \mid Y].
\end{align*}
By (5.10), this gives
\[
E[\Psi(\mu_{M+1}) \mid Y] = \frac{\kappa}{\kappa + M} \int_{M_1} \Psi(\nu)\, \mathcal{D}(\varepsilon\varrho, d\nu) + \frac{1}{\kappa + M} \sum_{m=1}^M \lim_{K \to \infty} \frac{\sum_{k=1}^K V_k\, \Psi(u^{*,k}_m)}{\sum_{k=1}^K V_k}.
\]
Since $\Psi$ was arbitrary, this proves (5.11), and (5.12) follows immediately.

5.3 Conditioning on a single row

We will prove Theorem 5.2 by applying Theorem 4.7. To do this, we must, among other things, compute the conditional distribution $\gamma_m$ described in Section 4.3. This is done below, and the result is presented in (5.14).

Theorem 5.4. Fix $m \in \{1, \ldots, M\}$. Let $\gamma_m$ be the probability kernel from $T^m \times M_1^{m-1}$ to $M_1$ with $\mu_m \mid \mathbf{Y}_m, \boldsymbol{\mu}_{m-1} \sim \gamma_m(\mathbf{Y}_m, \boldsymbol{\mu}_{m-1})$. Fix $\mathbf{y}_m \in T^m$ and $\boldsymbol{\nu}_{m-1} \in M_1^{m-1}$. For $1 \le i \le m$, let
\[
q_i = q^m_i(\boldsymbol{\nu}_{m-1}) = \begin{cases} \nu_i^N(A_m) & \text{if } 1 \le i < m, \\ \kappa\varrho_N(A_m) & \text{if } i = m, \end{cases} \tag{5.13}
\]
and let $p_i = q_i/(q_1 + \cdots + q_m)$. Then
\[
\gamma_m(\mathbf{y}_m, \boldsymbol{\nu}_{m-1}) = p_m \int_{S^N} \mathcal{D}\left(\varepsilon\varrho + \sum_{j=1}^N \delta_{x_j}\right) \varrho_N(dx \mid A_m) + \sum_{i=1}^{m-1} p_i\, \delta_{\nu_i}. \tag{5.14}
\]

Proof. Let $\gamma$ be the probability kernel on the right-hand side of (5.14) and let $B \in \mathcal{M}_1$. By Theorem 2.5, it suffices to show that $P(\mu_m \in B \mid \mathbf{Y}_m, \boldsymbol{\mu}_{m-1}) = \gamma(\mathbf{Y}_m, \boldsymbol{\mu}_{m-1}, B)$. Define the kernel $\vartheta_m$ from $T \times M_1^{m-1}$ to $M_1$ by
\[
\vartheta_m(y_m, \boldsymbol{\nu}_{m-1}, d\nu_m) = \kappa\, \nu_m^N(A_m)\, \mathcal{L}(\mu_m; d\nu_m) + \sum_{i=1}^{m-1} \nu_m^N(A_m)\, \delta_{\nu_i}(d\nu_m). \tag{5.15}
\]
Then (5.1) gives
\[
P(\mu_m \in B, Y_m = y_m \mid \boldsymbol{\mu}_{m-1}) = \frac{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, B)}{\kappa + m - 1}. \tag{5.16}
\]
Now let $C \in \sigma(\mathbf{Y}_m, \boldsymbol{\mu}_{m-1})$.
Without loss of generality, we may assume that $C$ is of the form $C = \{\mathbf{Y}_m \in D\} \cap \{\boldsymbol{\mu}_{m-1} \in F\}$ for some $D \subseteq T^m$ and $F \in \mathcal{M}_1^{m-1}$. Then (5.16) gives
\begin{align*}
E[1_B(\mu_m) 1_C] &= P(\mu_m \in B, \mathbf{Y}_m \in D, \boldsymbol{\mu}_{m-1} \in F) = E\left[1_F(\boldsymbol{\mu}_{m-1}) \sum_{\mathbf{y}_m \in D} P(\mu_m \in B, \mathbf{Y}_m = \mathbf{y}_m \mid \boldsymbol{\mu}_{m-1})\right] \\
&= E\left[1_F(\boldsymbol{\mu}_{m-1}) \sum_{\mathbf{y}_m \in D} \frac{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, P(\mathbf{Y}_m = \mathbf{y}_m \mid \boldsymbol{\mu}_{m-1})\right],
\end{align*}
where in the last line we have used (5.16) with $B = M_1$. Hence,
\begin{align*}
E[1_B(\mu_m) 1_C] &= \sum_{\mathbf{y}_m \in D} E\left[1_F(\boldsymbol{\mu}_{m-1})\, \frac{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, E[1_{\{\mathbf{y}_m\}}(\mathbf{Y}_m) \mid \boldsymbol{\mu}_{m-1}]\right] \\
&= \sum_{\mathbf{y}_m \in D} E\left[E\left[\left. 1_F(\boldsymbol{\mu}_{m-1})\, \frac{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, 1_{\{\mathbf{y}_m\}}(\mathbf{Y}_m) \,\right|\, \boldsymbol{\mu}_{m-1}\right]\right] \\
&= \sum_{\mathbf{y}_m \in D} E\left[1_F(\boldsymbol{\mu}_{m-1})\, \frac{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, 1_{\{\mathbf{y}_m\}}(\mathbf{Y}_m)\right].
\end{align*}
We can rewrite this as
\begin{align*}
E[1_B(\mu_m) 1_C] &= \sum_{\mathbf{y}_m \in D} E\left[1_F(\boldsymbol{\mu}_{m-1})\, \frac{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, 1_{\{\mathbf{y}_m\}}(\mathbf{Y}_m)\right] \\
&= E\left[1_F(\boldsymbol{\mu}_{m-1})\, \frac{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, M_1)} \sum_{\mathbf{y}_m \in D} 1_{\{\mathbf{y}_m\}}(\mathbf{Y}_m)\right] \\
&= E\left[1_F(\boldsymbol{\mu}_{m-1})\, \frac{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, 1_D(\mathbf{Y}_m)\right] = E\left[\frac{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, M_1)}\, 1_C\right].
\end{align*}
Hence,
\[
P(\mu_m \in B \mid \mathbf{Y}_m, \boldsymbol{\mu}_{m-1}) = \frac{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, B)}{\vartheta(Y_m, \boldsymbol{\mu}_{m-1}, M_1)}.
\]
It remains to show that $\gamma(\mathbf{y}_m, \boldsymbol{\mu}_{m-1}) = \vartheta(y_m, \boldsymbol{\mu}_{m-1})/\vartheta(y_m, \boldsymbol{\mu}_{m-1}, M_1)$. Note that
\[
P(\mu_m \in B, Y_m = y_m) = E[1_B(\mu_m)\, P(X_{mN} \in A_m \mid \mu_m)] = E[1_B(\mu_m)\, \mu_m^N(A_m)] = \int_B \nu_m^N(A_m)\, \mathcal{L}(\mu_m; d\nu_m),
\]
which shows that
\[
\mathcal{L}(\mu_m \mid Y_m = y_m; d\nu_m) = \frac{1}{P(Y_m = y_m)}\, \nu_m^N(A_m)\, \mathcal{L}(\mu_m; d\nu_m).
\]
Thus, (5.15) becomes
\[
\vartheta(y_m, \boldsymbol{\nu}_{m-1}) = \kappa\, P(Y_m = y_m)\, \mathcal{L}(\mu_m \mid Y_m = y_m) + \sum_{i=1}^{m-1} \nu_i^N(A_m)\, \delta_{\nu_i}.
\]
By (3.10),
\[
\mathcal{L}(\mu_m \mid Y_m = y_m) = \int_{S^N} \mathcal{D}\left(\varepsilon\varrho + \sum_{n=1}^N \delta_{x_n}\right) \mathcal{L}(X_{mN} \mid Y_m = y_m; dx).
\]
Since $\{Y_m = y_m\} = \{X_{mN} \in A_m\}$ and $\varrho_N = \mathcal{L}(X_{mN})$, we can combine these last two equations to arrive at
\[
\vartheta(y_m, \boldsymbol{\nu}_{m-1}) = \kappa\varrho_N(A_m) \int_{S^N} \mathcal{D}\left(\varepsilon\varrho + \sum_{n=1}^N \delta_{x_n}\right) \varrho_N(dx \mid A_m) + \sum_{i=1}^{m-1} \nu_i^N(A_m)\, \delta_{\nu_i}.
\]
It therefore follows from (5.14) that $\gamma(\mathbf{y}_m, \boldsymbol{\mu}_{m-1}) = \vartheta(y_m, \boldsymbol{\mu}_{m-1})/\vartheta(y_m, \boldsymbol{\mu}_{m-1}, M_1)$.

5.4 Generating the simulations

Having computed $\gamma_m$ in Theorem 5.4, we can now compute the simulation measure $m^*$ described in Section 4.3. In (5.6) and (5.8), the terms $u^*_m$ and $V$ depend on $y$. To
emphasize this dependence, we write $u^{*,k}_m = u^{*,k}_m(y)$ and $V_k = V_k(y)$. Define $\mu^{*,k}_m = u^{*,k}_m(Y)$. Note that $u^{*,k}_m(y)$ is independent of $Y$, whereas $\mu^{*,k}_m$ is not. The random measure $\mu^{*,k}_m$ is playing the role of $Z^{*,k}_m$ in Section 4.3. To show that we have constructed $\mu^{*,k}_m$ correctly, we must show that $\{\boldsymbol{\mu}^{*,k}_M\}_{k=1}^\infty \mid Y \sim m^*(Y)^\infty$. This is done below in Proposition 5.5.

To prove Proposition 5.5, we use the following explicit construction of $u^{*,k}_m(y)$. Define $H \subseteq \mathbb{R}^m$ by $H = [0, \infty)^m \setminus \{(0, \ldots, 0)\}$. Let $U = \{U_{mk}(t) : 1 \le m \le M,\ 1 \le k \le K,\ t \in H\}$ be an independent collection of random variables, where $U_{mk}(t)$ takes values in $\{1, \ldots, m\}$ and satisfies $P(U_{mk}(t) = i) = t_i/(t_1 + \cdots + t_m)$. Let $\lambda = \{\lambda_{mk} : 1 \le m \le M,\ 1 \le k \le K\}$ be an independent collection of random measures on $S$, where $\lambda_{mk}$ is a Dirichlet mixture satisfying
\[
\lambda_{mk} \sim \int_{S^N} \mathcal{D}\left(\varepsilon\varrho + \sum_{n=1}^N \delta_{x_n}\right) \varrho_N(dx \mid A_m). \tag{5.17}
\]
Assume $U$, $\lambda$, and $Y$ are independent. Define $t^k_m = (t^k_{m1}, \ldots, t^k_{mm}) \in \mathbb{R}^m$ and $\theta(m, k) \in \{1, \ldots, m\}$ recursively as follows. Let $t^k_{11} = \kappa\varrho_N(A_1)$ and $\theta(1, k) = 1$. For $m > 1$, let
\[
t^k_{mi} = \begin{cases} \lambda^N_{\theta(i,k),k}(A_m) & \text{if } 1 \le i < m, \\ \kappa\varrho_N(A_m) & \text{if } i = m, \end{cases} \tag{5.18}
\]
and
\[
\theta(m, k) = \begin{cases} \theta(U_{mk}(t^k_m), k) & \text{if } 1 \le U_{mk}(t^k_m) < m, \\ m & \text{if } U_{mk}(t^k_m) = m. \end{cases} \tag{5.19}
\]
With this construction, we may write $u^{*,k}_m(y) = \lambda_{\theta(m,k),k}$.

In the proof of Proposition 5.5, we also use the notation $\mathcal{F} \vee \mathcal{G} = \sigma(\mathcal{F} \cup \mathcal{G})$, whenever $\mathcal{F}$ and $\mathcal{G}$ are $\sigma$-algebras on a common set.

Proposition 5.5. Let $\gamma_m$ be as in Theorem 5.4 and $m^* = \gamma^*_1 \cdots \gamma^*_M$, where $\gamma^*_m(y, \boldsymbol{\nu}_{m-1}) = \gamma_m(\mathbf{y}_m, \boldsymbol{\nu}_{m-1})$. Then $\{\boldsymbol{\mu}^{*,k}_M\}_{k=1}^\infty \mid Y \sim m^*(Y)^\infty$.

Proof. As noted in (4.5), it suffices to show that $\mu^{*,k}_m \mid Y, \boldsymbol{\mu}^{*,k}_{m-1} \sim \gamma_m(\mathbf{Y}_m, \boldsymbol{\mu}^{*,k}_{m-1})$. We first note that if $\mathcal{F}_{mk} = \sigma(U_{1k}, \ldots, U_{mk}, \lambda_{1k}, \ldots, \lambda_{mk})$, then $t^k_m \in \mathcal{F}_{m-1,k}$ and $\theta(m, k) \in \mathcal{F}_{m-1,k} \vee \sigma(U_{mk})$. This follows from (5.18) and (5.19) by induction. Also, by (5.19), we have
\[
u^{*,k}_m(y) = \begin{cases} u^{*,k}_i(y) & \text{if } U_{mk}(t^k_m) = i < m, \\ \lambda_{mk} & \text{if } U_{mk}(t^k_m) = m. \end{cases} \tag{5.20}
\]
Hence, $u^{*,k}_m(y) \in \mathcal{F}_{mk}$. In particular, $U_{mk}$, $\lambda_{mk}$, and $\mathbf{u}^{*,k}_{m-1}(y)$ are independent. Now let $B \in \mathcal{M}_1$ and $C \in \sigma(Y, \boldsymbol{\mu}^{*,k}_{m-1})$. Without loss of generality, we may assume that $C$ is of the form $C = \{Y \in D\} \cap \{\boldsymbol{\mu}^{*,k}_{m-1} \in F\}$ for some $D \subseteq T^M$ and $F \in \mathcal{M}_1^{m-1}$.
We then have
\begin{align*}
E[1_B(\mu^{*,k}_m) 1_C] &= P(Y \in D,\ \boldsymbol{\mu}^{*,k}_m \in F \times B) = \sum_{y \in D} P(Y = y,\ \mathbf{u}^{*,k}_m(y) \in F \times B) \\
&= \sum_{y \in D} P(Y = y)\, P(\mathbf{u}^{*,k}_m(y) \in F \times B) \\
&= \sum_{y \in D} P(Y = y)\, E[1_F(\mathbf{u}^{*,k}_{m-1}(y))\, P(u^{*,k}_m(y) \in B \mid \mathbf{u}^{*,k}_{m-1}(y))]. \tag{5.21}
\end{align*}
Using (5.20), we obtain
\[
P(u^{*,k}_m(y) \in B \mid \mathbf{u}^{*,k}_{m-1}(y)) = P(U_{mk}(t^k_m) = m,\ \lambda_{mk} \in B \mid \mathbf{u}^{*,k}_{m-1}(y)) + \sum_{i=1}^{m-1} P(U_{mk}(t^k_m) = i,\ u^{*,k}_i(y) \in B \mid \mathbf{u}^{*,k}_{m-1}(y)).
\]
From (5.18) and (5.13), it follows that $t^k_{mi} = q^m_i(\mathbf{u}^{*,k}_{m-1}(y))$. Since $U_{mk}$, $\lambda_{mk}$, and $\mathbf{u}^{*,k}_{m-1}(y)$ are independent, the above becomes
\[
P(u^{*,k}_m(y) \in B \mid \mathbf{u}^{*,k}_{m-1}(y)) = p^m_m(\mathbf{u}^{*,k}_{m-1}(y))\, P(\lambda_{mk} \in B) + \sum_{i=1}^{m-1} p^m_i(\mathbf{u}^{*,k}_{m-1}(y))\, \delta_{u^{*,k}_i(y)}(B).
\]
It follows from (5.17) and (5.14) that $P(u^{*,k}_m(y) \in B \mid \mathbf{u}^{*,k}_{m-1}(y)) = \gamma_m(\mathbf{y}_m, \mathbf{u}^{*,k}_{m-1}(y), B)$. Substituting this into (5.21) and noting that $\mathbf{u}^{*,k}_M(y)$ and $Y$ are independent, we have
\begin{align*}
E[1_B(\mu^{*,k}_m) 1_C] &= \sum_{y \in D} P(Y = y)\, E[1_F(\mathbf{u}^{*,k}_{m-1}(y))\, \gamma_m(\mathbf{y}_m, \mathbf{u}^{*,k}_{m-1}(y), B)] \\
&= \sum_{y \in D} E[1_{\{y\}}(Y)\, 1_F(\mathbf{u}^{*,k}_{m-1}(y))\, \gamma_m(\mathbf{y}_m, \mathbf{u}^{*,k}_{m-1}(y), B)] \\
&= \sum_{y \in D} E[1_{\{y\}}(Y)\, 1_F(\boldsymbol{\mu}^{*,k}_{m-1})\, \gamma_m(\mathbf{Y}_m, \boldsymbol{\mu}^{*,k}_{m-1}, B)] \\
&= E[1_D(Y)\, 1_F(\boldsymbol{\mu}^{*,k}_{m-1})\, \gamma_m(\mathbf{Y}_m, \boldsymbol{\mu}^{*,k}_{m-1}, B)] = E[\gamma_m(\mathbf{Y}_m, \boldsymbol{\mu}^{*,k}_{m-1}, B)\, 1_C],
\end{align*}
showing that $P(\mu^{*,k}_m \in B \mid Y, \boldsymbol{\mu}^{*,k}_{m-1}) = \gamma_m(\mathbf{Y}_m, \boldsymbol{\mu}^{*,k}_{m-1}, B)$.

5.5 Proof of the main result

Having established Theorem 5.4 and Proposition 5.5, we are now ready to prove the main result.
Proof of Theorem 5.2. We apply Theorem 4.7. Let $\gamma_m$ and $m^*$ be as in Proposition 5.5. If $\widetilde{n}$ is counting measure on $T$ and $n_m = \mathcal{L}(\boldsymbol{\mu}_m)$, then $\mathcal{L}(Y, \boldsymbol{\mu}_m) \ll \widetilde{n}^M \times n_m$, so that Assumption 4.6 holds. Let $f_m$ be the density of $(Y, \boldsymbol{\mu}_m)$ with respect to $\widetilde{n}^M \times n_m$, and recall the notational conventions of Section 4.4. Let $w(y, \nu)$ be given by (4.6).

Let $\boldsymbol{\mu}^{*,k}_M$ be as in Proposition 5.5, so that $\{\boldsymbol{\mu}^{*,k}_M\}_{k=1}^\infty \mid Y \sim m^*(Y)^\infty$. We define the weights $W_k = w(Y, \boldsymbol{\mu}^{*,k}_M)$. We first prove that $W_k = V_k(Y)$, where, according to (5.8), we have
\[
V_k(y) = \prod_{m=1}^M \frac{1}{\kappa + m - 1} \sum_{i=1}^m t^k_{mi}. \tag{5.22}
\]
Note that $f_1(y_1) = P(Y_1 = y_1) = \varrho_N(A_1)$. For the other factors in (4.6), we use Theorem 2.5, (5.2), and (5.4) to obtain
\[
P(Y_{m+1} = y_{m+1} \mid \mathbf{Y}_m, \boldsymbol{\mu}_m) = P(X_{m+1,N} \in A_{m+1} \mid \boldsymbol{\mu}_m) = \frac{1}{\kappa + m}\left(\kappa\varrho_N(A_{m+1}) + \sum_{i=1}^m \mu_i^N(A_{m+1})\right),
\]
so that
\[
f_m(y_{m+1} \mid \mathbf{y}_m, \boldsymbol{\nu}_m) = \frac{\kappa}{\kappa + m}\varrho_N(A_{m+1}) + \frac{1}{\kappa + m}\sum_{i=1}^m \nu_i^N(A_{m+1}).
\]
Substituting this into (4.6) gives
\[
w(y, \nu) = \varrho_N(A_1) \prod_{m=1}^{M-1}\left(\frac{\kappa}{\kappa + m}\varrho_N(A_{m+1}) + \frac{1}{\kappa + m}\sum_{i=1}^m \nu_i^N(A_{m+1})\right),
\]
which can be rewritten as
\[
w(y, \nu) = \prod_{m=1}^M \left(\frac{\kappa}{\kappa + m - 1}\varrho_N(A_m) + \frac{1}{\kappa + m - 1}\sum_{i=1}^{m-1} \nu_i^N(A_m)\right). \tag{5.23}
\]
In the proof of Proposition 5.5, we noted that $t^k_{mi} = q^m_i(\mathbf{u}^{*,k}_{m-1}(y))$. Hence, by (5.22) and (5.13), we have
\[
V_k(y) = \prod_{m=1}^M \frac{1}{\kappa + m - 1} \sum_{i=1}^m q^m_i(\mathbf{u}^{*,k}_{m-1}(y)) = \prod_{m=1}^M \frac{1}{\kappa + m - 1}\left(\kappa\varrho_N(A_m) + \sum_{i=1}^{m-1} (u^{*,k}_i(y))^N(A_m)\right).
\]
It follows from (5.23) that $V_k(y) = w(y, \mathbf{u}^{*,k}_M(y))$, so that $W_k = V_k(Y)$.

Finally, we construct $\widetilde{\boldsymbol{\mu}}^K_M$ so that
\[
\widetilde{\boldsymbol{\mu}}^K_M \mid \boldsymbol{\mu}^{*,1}_M, \ldots, \boldsymbol{\mu}^{*,K}_M, Y \propto \sum_{k=1}^K W_k\, \delta(\boldsymbol{\mu}^{*,k}_M). \tag{5.24}
\]
Since $f_M(y) = P(Y = y)$, we have
\[
\int_{T^M} f_M(y)^2\, \widetilde{n}^M(dy) = \sum_{y \in T^M} P(Y = y)^2 \le \sum_{y \in T^M} P(Y = y) = 1,
\]
so that $f_M(y) \in L^2(\widetilde{n}^M)$. Hence, by Theorem 4.7,
\[
\mathcal{L}(\boldsymbol{\mu}_M \mid Y) = \lim_{K \to \infty} \mathcal{L}(\widetilde{\boldsymbol{\mu}}^K_M \mid \boldsymbol{\mu}^{*,1}_M, \ldots, \boldsymbol{\mu}^{*,K}_M, Y).
\]
Applying (5.24) to the above gives
\[
\mathcal{L}(\boldsymbol{\mu}_M \mid Y) = \lim_{K \to \infty} \frac{\sum_{k=1}^K W_k\, \delta(\boldsymbol{\mu}^{*,k}_M)}{\sum_{k=1}^K W_k}.
\]
Since $W_k = V_k(Y)$ and $\boldsymbol{\mu}^{*,k}_M = \mathbf{u}^{*,k}_M(Y)$, this proves (5.9), and (5.10) follows immediately.

6 Examples

In this section, we present four hypothetical applications to illustrate the use of the IDP. See https://github.com/jason-swanson/idp for the code used to generate the simulations. The framework for each of these examples was described in Section 1.
Namely, we consider a situation in which there are several agents, all from the same population. Each agent undertakes a sequence of actions. These actions are chosen according to the agent's particular tendencies. Although different agents have different tendencies, there may be patterns in the population. We observe a certain set of agents over a certain amount of time. Based on these observations, we want to make probabilistic forecasts about two things:

• The future behavior of the agents we have observed.
• The behavior of a new (unobserved) agent from the population.

We model this situation with an IDP. More specifically, let $S$ be a complete and separable metric space. Let $\xi$ be a row exchangeable array of $S$-valued random variables whose row distribution generator (the random measure $\varpi$ in Theorem 2.2) is an IDP. That is, $\varpi \sim \mathcal{D}(\kappa\mathcal{D}(\varepsilon\varrho))$, where $\kappa > 0$ and $\varepsilon > 0$ are the column and row concentration parameters, respectively, and $\varrho \in M_1$ is the base measure. The value of $\xi_{ij}$ represents the $j$th action of the $i$th agent. In this way, the space $S$ represents the set of possible actions that the agents may undertake.

All the examples in this section involve a finite state space $S$. But, as we describe in
Remark 6.2, this special case is easily generalized to the case of an arbitrary $S$ in which our observations are made with limited precision.

The outline of this section is as follows. In Section 6.1, we describe how Theorem 5.2 and Corollary 5.3 simplify in the case that $S$ is finite. After that, the remainder of the section is devoted to the examples. Our simplest example is in Section 6.2, and concerns the malfunctioning pressed penny machine described in Section 1. Section 6.3 presents a similar example, but with significantly more data. This is the same example treated in [12] (originally considered in [2]) and is concerned with the flicking of thumbtacks. Section 6.4 applies the IDP model to the analysis of Amazon reviews. The final example, found in Section 6.6, is about video game leaderboards. To prepare for that example, a custom prior distribution, which we call the "gamer" distribution, is presented in Section 6.5.

6.1 IDPs on a finite state space

Let $L \ge 2$ be an integer and suppose that $S = \{0, \ldots, L-1\}$. Let $p_\ell = \varrho(\{\ell\})$, so that we may identify $\varrho$ with the vector $p = (p_0, \ldots, p_{L-1})$. We assume that $p_\ell > 0$ for all $\ell \in S$.

Let $y = \{y_{mn} : 1 \le m \le M,\ 1 \le n \le N_m\}$ be a jagged array of elements in $S$. The array $y$ denotes our observed data. That is, we observe $\xi_{mn} = y_{mn}$ for $1 \le m \le M$ and $1 \le n \le N_m$, and we wish to compute the conditional distribution of $\xi$ given these observations. Define the row counts $\mathbf{y} = \{\mathbf{y}_{m\ell} : 1 \le m \le M,\ 0 \le \ell \le L-1\}$ by $\mathbf{y}_{m\ell} = |\{n : y_{mn} = \ell\}|$. Since $\xi$ is row exchangeable, all of our calculations will depend on $y$ only through the array $\mathbf{y}$. We use $\mathbf{y}_m$ to denote the vector $(\mathbf{y}_{m1}, \ldots, \mathbf{y}_{m,L-1})$.

To apply Theorem 5.2, let $N = \max\{N_1, \ldots, N_M\}$ and $T = \bigcup_{n=1}^N S^n$. Let $\varphi_m : S^N \to T$ be the projection onto the first $N_m$ components, so that $\varphi_m(x_1, \ldots, x_N) = (x_1, \ldots, x_{N_m})$. Then $Y = (Y_1, \ldots, Y_M)$, where $Y_m = \varphi_m(X_{mN}) = X_{mN_m}$. Note that $A_m = \{Y_m = y_m\} = \{X_{mN_m} = y_m\}$. Therefore, if we define $\theta_{m\ell} = \mu_m(\{\ell\})$, then the prior likelihoods satisfy
\[
\varrho_N(A_m) = P(X_{mN_m} = y_m) = E[P(X_{mN_m} = y_m \mid \mu_m)] = E\left[\prod_{\ell=0}^{L-1} \theta_{m\ell}^{\mathbf{y}_{m\ell}}\right].
\]
From (3.1) it follows that $(\theta_{m0}, \ldots, \theta_{m,L-1}) \sim \mathrm{Dir}(\varepsilon p_0, \ldots, \varepsilon p_{L-1})$. This gives
\[
\varrho_N(A_m) = E\left[\prod_{\ell=0}^{L-1} \theta_{m\ell}^{\mathbf{y}_{m\ell}}\right] = \frac{1}{B(\varepsilon p)} \int_{\Delta_{L-1}} \prod_{\ell=0}^{L-1} t_\ell^{\mathbf{y}_{m\ell} + \varepsilon p_\ell - 1}\, dt = \frac{B(\varepsilon p + \mathbf{y}_m)}{B(\varepsilon p)}, \tag{6.1}
\]
where $B(x) = \Gamma\big(\sum_{\ell=0}^{L-1} x_\ell\big)^{-1} \prod_{\ell=0}^{L-1} \Gamma(x_\ell)$ is the multivariate Beta function.

Having computed the prior likelihoods, we turn our attention to the weighted simulations. From (3.10), it follows that (5.7) is equal to $\mathcal{L}(\mu_m \mid Y_m = y_m)$. But $Y_m = X_{mN_m}$, so by (3.5) we can rewrite (5.5) and (5.6) as
\[
t_{mi} = \begin{cases} \prod_{n=1}^{N_m} u^*_i(y_{mn}) & \text{if } 1 \le i < m, \\ \kappa\varrho_N(A_m) & \text{if } i = m, \end{cases}
\]
and
\[
\mathcal{L}(u^*_m \mid \mathbf{u}^*_{m-1}) \propto t_{mm}\, \mathcal{D}\left(\varepsilon\varrho + \sum_{n=1}^{N_m} \delta_{y_{mn}}\right) + \sum_{i=1}^{m-1} t_{mi}\, \delta_{u^*_i}. \tag{6.2}
\]
If we define $\theta^*_{m\ell} = u^*_m(\{\ell\})$, then we can rewrite the row weights as
\[
t_{mi} = \begin{cases} \prod_{\ell=0}^{L-1} (\theta^*_{i\ell})^{\mathbf{y}_{m\ell}} & \text{if } 1 \le i < m, \\ \kappa\varrho_N(A_m) & \text{if } i = m. \end{cases}
\]
In this case, we can identify $u^*_m$ with the vector $\theta^*_m = (\theta^*_{m0}, \ldots, \theta^*_{m,L-1})$, and (6.2) becomes
\[
\mathcal{L}(\theta^*_m \mid \boldsymbol{\theta}^*_{m-1}) \propto t_{mm}\, \mathrm{Dir}(\varepsilon p + \mathbf{y}_m) + \sum_{i=1}^{m-1} t_{mi}\, \delta_{\theta^*_i},
\]
where $\boldsymbol{\theta}^*_m = (\theta^*_1, \ldots, \theta^*_m)$. In other words, for
$i < m$, we have $\theta^*_m = \theta^*_i$ with probability $t_{mi}/(t_{m1} + \cdots + t_{mm})$, and, with probability $t_{mm}/(t_{m1} + \cdots + t_{mm})$, the random vector $\theta^*_m$ is independent of $\boldsymbol{\theta}^*_{m-1}$ and has the Dirichlet distribution, $\mathrm{Dir}(\varepsilon p + \mathbf{y}_m)$.

Finally, we define $V$, the total weight of the simulation. According to (5.8), the total weight should be
\[
\prod_{m=1}^M \frac{1}{\kappa + m - 1} \sum_{i=1}^m t_{mi}. \tag{6.3}
\]
But in Theorem 5.2, we see that the weights are all relative to their sum, so we are free to multiply this value by any constant that does not depend on $k$. Leaving it as it is will produce a very small number, on the order of $1/M!$. For computational purposes, then, we multiply (6.3) by $c^M M!$, where $c$ is a nonrandom constant. The total weight of our simulation is then
\[
V = \prod_{m=1}^M \frac{cm}{\kappa + m - 1} \sum_{i=1}^m t_{mi}. \tag{6.4}
\]
We call $\log c$ the log scale factor of the simulation. In the examples covered later in this section, we used $c = 1$ unless otherwise specified.

Now, if $\theta = (\theta_{m\ell}) \in \mathbb{R}^{M \times L}$ and $\Phi : \mathbb{R}^{M \times L} \to \mathbb{R}$ is continuous, then (5.10) gives
\[
\mathcal{L}(\Phi(\theta) \mid Y = y) = \lim_{K \to \infty} \frac{\sum_{k=1}^K V_k\, \delta(\Phi(\theta^{*,k}_M))}{\sum_{k=1}^K V_k}. \tag{6.5}
\]
Similarly, if $\Phi : \mathbb{R}^L \to \mathbb{R}$ is continuous, then (5.12) gives
\[
\mathcal{L}(\Phi(\theta_{M+1}) \mid Y = y) = \lim_{K \to \infty} \frac{1}{\kappa + M}\left(\kappa\, \mathrm{Dir}(\varepsilon p) \circ \Phi^{-1} + \sum_{m=1}^M \frac{\sum_{k=1}^K V_k\, \delta(\Phi(\theta^{*,k}_m))}{\sum_{k=1}^K V_k}\right). \tag{6.6}
\]

Remark 6.1. In the case $S = \{0, 1\}$, the base measure $\varrho$ is entirely determined by the number $p = \varrho(\{1\})$, and we may define a single row count for each row, $\mathbf{y}_m = |\{n : y_{mn} = 1\}|$. In this case, letting $a = \varepsilon p$ and $b = \varepsilon(1 - p)$, we can rewrite (6.1) as
\[
\varrho_N(A_m) = \frac{B(a + \mathbf{y}_m,\ b + N_m - \mathbf{y}_m)}{B(a, b)}.
\]
Defining $\theta^*_m = u^*_m(\{1\})$, the row weights of the weighted simulations become
\[
t_{mi} = \begin{cases} (\theta^*_i)^{\mathbf{y}_m} (1 - \theta^*_i)^{N_m - \mathbf{y}_m} & \text{if } 1 \le i < m, \\ \kappa\varrho_N(A_m) & \text{if } i = m, \end{cases}
\]
and (6.2) becomes
\[
\mathcal{L}(\theta^*_m \mid \boldsymbol{\theta}^*_{m-1}) \propto t_{mm}\, \mathrm{Beta}(a + \mathbf{y}_m,\ b + N_m - \mathbf{y}_m) + \sum_{i=1}^{m-1} t_{mi}\, \delta_{\theta^*_i}.
\]

Remark 6.2. Let us return for the moment to the general setting, where $S$ is an arbitrary complete and separable metric space. Let $S'$ be another complete and separable metric space and let $\psi : S \to S'$ be measurable. Let $\xi' = \{\xi'_{ij}\}$, where $\xi'_{ij} = \psi(\xi_{ij})$.
It is straightforward to verify that $\xi'$ is a row exchangeable array of $S'$-valued random variables whose row distribution generator $\varpi'$ satisfies $\varpi' \sim \mathcal{D}(\kappa\mathcal{D}(\varepsilon\varrho'))$, where $\varrho' = \varrho \circ \psi^{-1}$.

We can apply this to $S' = \{0, \ldots, L-1\}$, where $L \ge 2$. For each $\ell \in S' = \{0, \ldots, L-1\}$, choose $B_\ell \in \mathcal{S}$ so that $\{B_\ell : \ell \in S'\}$ is a partition of $S$. Define $\psi : S \to S'$ by $\psi = \sum_{\ell=0}^{L-1} \ell\, 1_{B_\ell}$ and let $\xi' = \{\xi'_{ij}\}$ where $\xi'_{ij} = \psi(\xi_{ij})$. Suppose we can only observe the process $\xi'$. That is, we can only observe the values of $\xi$ with enough precision to tell which piece of the partition those values lie in. Based on some set of these observations, we wish to make probabilistic inferences about $\xi'$. Since $\xi'$ is an array of samples from an IDP on $S'$, we may do this using the simplified formulas in this section.

Coin # | 1st Flip | 2nd Flip | 3rd Flip | 4th Flip | 5th Flip
1 | H | H | H | H | T
2 | H | T | H | H | H
3 | T | H | H | T | H
4 | H | H | T | H | H
5 | T | T | T | H | T
6 | T | H | H | H | H
7 | H | T | T | H | H

Table 1: Results of flipping seven different mangled pennies

6.2 The pressed penny machine

Imagine a
pressed penny machine, like those found in museums or tourist attractions. For a fee, the machine presses a penny into a commemorative souvenir. Now imagine the machine is broken, so that it mangles all the pennies we feed it. Each pressed penny it creates is mangled in its own way. Each has its own probability of landing on heads when flipped. In this situation, the agents are the pennies and the actions are the heads and tails that they produce.

Now suppose we create seven mangled pennies and flip each one 5 times, giving us the results in Table 1. Of the 35 flips, 23 of them (or about 65.7%) were heads. In fact, 6 of the 7 coins landed mostly on heads. The machine clearly seems predisposed to creating pennies that are biased towards heads. Coin 5, though, produced only one head. Is this coin different from the others and actually biased toward tails? Or was it mere chance that its flips turned out that way? For instance, suppose all 7 coins had a 60% chance of landing on heads. In that case, there would still be a 43% chance that at least one of them would produce four tails. How should we balance these competing explanations and arrive at some concrete probabilities?

One way to answer this is to model the example with an IDP as in Section 6.1. We take $L = 2$, so that $S = \{0, 1\}$, where 0 represents tails and 1 represents heads. We then take $\kappa = \varepsilon = 1$ and $p_0 = p_1 = 1/2$. From the table above, we have $M = 7$, $N_m = 5$ for all $m$, and
\[
y = \begin{pmatrix}
1 & 1 & 1 & 1 & 0 \\
1 & 0 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 1 \\
1 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 \\
1 & 0 & 0 & 1 & 1
\end{pmatrix}.
\]
With $K = 10000$, we generated the weighted simulations $(t^k, \theta^{*,k}_7)$ for $1 \le k \le K$, and computed their corresponding total weights, $V_k$. In this case, the effective sample size of our simulations (denoted by $K''_\varepsilon$ in Section 4.2) was approximately 6067.

Before addressing Coin 5 directly, let us ask a different question. If we were to get a new coin from this machine, how would we expect it to behave?
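As an aside, the 43% figure above can be checked directly. Reading "produce four tails" as exactly four tails among the five flips, the count of tails for a single coin with heads probability 0.6 is binomial, and the seven coins are independent. A quick sketch:

```python
from math import comb

p_heads = 0.6
# P(exactly 4 tails in 5 flips) for a single coin
p_four_tails = comb(5, 4) * (1 - p_heads) ** 4 * p_heads
# P(at least one of 7 independent coins shows exactly 4 tails)
p_at_least_one = 1 - (1 - p_four_tails) ** 7
print(round(p_at_least_one, 3))  # about 0.43, matching the 43% figure in the text
```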
The new coin would have some random probability of heads, which is denoted by $\theta_{8,1}$. Taking $\Phi : \mathbb{R}^2 \to \mathbb{R}$ to be the projection, $\Phi(x_0, x_1) = x_1$, we can use (6.6) to approximate the distribution of $\theta_{8,1}$, giving $\mathcal{L}(\theta_{8,1} \mid Y = y) \approx \nu$, where
\[
\nu = \frac{1}{8}\left(\mathrm{Beta}(1/2, 1/2) + \sum_{m=1}^7 \frac{\sum_{k=1}^{10000} V_k\, \delta(\theta^{*,k}_{m,1})}{\sum_{k=1}^{10000} V_k}\right).
\]
Using this, we have $P(\xi_{8,1} = 1 \mid Y = y) = E[\theta_{8,1} \mid Y = y] \approx 0.633$, so that, given our observations, the first flip of a new coin has about a 63.3% chance of landing heads.

[Figure 1: Approximate distribution and density functions for $\theta_{m,\ell}$. Panels: (a) distribution function of $\theta_{8,1}$; (b) density of $\theta_{8,1}$ with $h \approx 0.119$; (c) density of $\theta_{8,1}$ with $h = 0.001$; (d) density of $\theta_{5,1}$.]

To visualize the distribution of $\theta_{8,1}$ rather than simply its
mean, we can plot the distribution function of $\nu$. See Figure 1(a) for a graph of $x \mapsto \nu((0, x])$. For a different visualization, we can plot an approximate density for $\nu$. The measure $\nu$ has a discrete component, so we obtain an approximate density using Gaussian kernel density estimation, replacing each point mass $\delta_x$ by a Gaussian measure with mean $x$ and standard deviation $h$, where $h$ is the "bandwidth" of the density estimation. For the measure $\nu$, we used Python's scipy.stats.gaussian_kde class to compute the bandwidth according to Scott's Rule (see [13]). In this case, we obtained $h \approx 0.119$, yielding the graph in Figure 1(b). For a coarser estimate, see Figure 1(c), which uses $h = 0.001$. In all the remaining examples in this section, we will default to using the bandwidth determined by Scott's Rule.

Turning back to the question of Coin 5, if we define $\Phi : \mathbb{R}^{7 \times 2} \to \mathbb{R}$ by $\Phi((x_{m,\ell})) = x_{5,1}$, then (6.5) gives
\[
\mathcal{L}(\theta_{5,1} \mid Y = y) \approx \frac{\sum_{k=1}^{10000} V_k\, \delta(\theta^{*,k}_{5,1})}{\sum_{k=1}^{10000} V_k}.
\]
An approximate density for this measure is given in Figure 1(d). Using this, we can compute the probability that a sixth flip of Coin 5 lands on heads, which is $P(\xi_{5,6} = 1 \mid Y = y) = E[\theta_{5,1} \mid Y = y] \approx 0.461$. We can also compute the probability that Coin 5 is biased toward tails, which is given by $P(\theta_{5,1} < 1/2 \mid Y = y) \approx 0.481$.

6.3 Flicking thumbtacks

In [12], the following situation is considered. Imagine a box of 320 thumbtacks. We flick each thumbtack 9 times. If it lands point up, we call it a success. Point down is a failure. Because of the imperfections, each thumbtack has its own probability of success.
The results (that is, the number of successes) for these 320 thumbtacks are given by
\begin{align*}
r = (&7,4,6,6,6,6,8,6,5,8,6,3,3,7,8,4,5,5,7,8,5,7,6,5,3,2,7,7,9,6,4,6,\\
&4,7,3,7,6,6,6,5,6,6,5,6,5,6,7,9,9,5,6,4,6,4,7,6,8,7,7,2,7,7,4,6,\\
&2,4,7,7,2,3,4,4,4,6,8,8,5,6,6,6,5,3,8,6,5,8,6,6,3,5,8,5,5,5,5,6,\\
&3,6,8,6,6,6,8,5,6,4,6,8,7,8,9,4,4,4,4,6,7,1,5,6,7,2,3,4,7,5,6,5,\\
&2,7,8,6,5,8,4,8,3,8,6,4,7,7,4,5,2,3,7,7,4,5,2,3,7,4,6,8,6,4,6,2,\\
&4,4,7,7,6,6,6,8,7,4,4,8,9,4,4,3,6,7,7,5,5,8,5,5,5,6,9,1,7,3,3,5,\\
&7,7,6,8,8,8,8,7,5,8,7,8,5,5,8,8,7,4,6,5,9,8,6,8,9,9,8,8,9,5,8,6,\\
&3,5,9,8,8,7,6,8,5,9,7,6,5,8,5,8,4,8,8,7,7,5,4,2,4,5,9,8,8,5,7,7,\\
&2,6,2,7,6,5,4,4,6,9,3,9,4,4,1,7,4,4,5,9,4,7,7,8,4,6,7,8,7,4,3,5,\\
&7,7,4,4,6,4,4,2,9,9,8,6,8,8,4,5,7,5,4,6,8,7,6,6,8,6,9,6,7,6,6,6).
\end{align*}
This data originally came from an experiment described in [2]. In the original experiment, there were not 320 thumbtacks. Rather, there were 16 thumbtacks, 2 flickers, and 10 surfaces. We follow [12], however, in treating the data as if it came from 320 distinct thumbtacks.

To model this example we take $L = 2$, so that $S = \{0, 1\}$, where 0 represents failure (point down) and 1 represents success (point up). To match the modeling in [12], we take $\varepsilon = 2$ and $p_0 = p_1 = 1/2$, so that $\mathcal{D}(\varepsilon\varrho) \circ \pi_1^{-1} = \mathrm{Beta}(1, 1)$, where $\pi_1 : M_1 \to [0, 1]$ is the projection, $\nu \mapsto \nu(\{1\})$. We will use and compare two different values of $\kappa$ (which is denoted by $c$ in [12]). For the data, we have $M = 320$ and $N_m = 9$ for all $m$. Our row counts, $\mathbf{y}$, are given by $\mathbf{y}_{m1} = r_m$ and $\mathbf{y}_{m0} = 9 - r_m$.

We first consider $\kappa = 1$. As in [12], we generated $K = 10000$ weighted simulations. In this case, our effective sample size was approximately 244. (For comparison, in [12], Liu reported an effective sample size of 227 for the case $\kappa = 1$.)

[Figure 2: Approximate density of $\mathcal{L}(\theta_{321,1} \mid Y = y)$. Panels: (a) $\kappa = 1$; (b) $\kappa = 10$.]
The unknown probability of success for a new thumbtack is given by $\theta_{321,1}$, and (6.6) gives
\[
\mathcal{L}(\theta_{321,1} \mid Y = y) \approx \frac{1}{321}\left(\mathrm{Beta}(1, 1) + \sum_{m=1}^{320} \frac{\sum_{k=1}^{10000} V_k\, \delta(\theta^{*,k}_{m,1})}{\sum_{k=1}^{10000} V_k}\right).
\]
An approximate density for this measure is given in Figure
|
https://arxiv.org/abs/2505.00451v1
2(a). We next consider κ = 10, again using K = 10000, which generated an effective sample size of about 388 (compared to 300 in [12] for the same value of κ). This time, using (6.6) gives

L(θ_{321,1} | Y = y) ≈ \frac{1}{330}\left(10\,\mathrm{Beta}(1,1) + \sum_{m=1}^{320} \frac{\sum_{k=1}^{10000} V_k\, \delta(\theta^{*,k}_{m,1})}{\sum_{k=1}^{10000} V_k}\right).

Note that in this second case, the simulated values θ^{*,k}_{m,1} and their corresponding weights V_k were all regenerated. An approximate density for this measure is given in Figure 2(b). As in the previous example, these approximate densities were constructed using Gaussian kernel density estimation. Their respective bandwidths are h ≈ 0.105 and h ≈ 0.096. The graphs in Figure 2 are qualitatively similar to their counterparts in [12], but with minor differences. It is difficult, though, to make a direct comparison. Although Gaussian kernel smoothing was also used in [12], details about the smoothing were not provided. For instance, the bandwidths used to produce the graphs in [12] were not reported therein.

6.4 Amazon reviews

The model in [12] only covers agents with two possible actions, such as coins and thumbtacks. The IDP, though, can handle agents whose range of possible actions is arbitrary. Imagine, then, that we discover a seller on Amazon that has 50 products. Their products have an average rating of 2.4 stars out of 5. Some products have almost 100 ratings, while others have only a few. On average, the products have 23 ratings each. In this case, the agents are the products and the actions are the ratings that each product earns. Each individual rating must be a whole number of stars between 1 and 5, inclusive. Hence, each action has 5 possible outcomes. The data used for this hypothetical seller is given in Table 2.
product #  1 star  2 stars  3 stars  4 stars  5 stars  # reviews  average
1    9   25  15  41   0   90  2.98
2   21   28  18   1   3   71  2.11
3   16   11  21  11   0   59  2.46
4    3    9  37   0   3   52  2.83
5   11    0  36   0   5   52  2.77
6   16   16   4  15   0   51  2.35
7   30    3  15   0   0   48  1.69
8   12    9  17   1   7   46  2.61
9   13   13  18   1   0   45  2.16
10  23    2   0  14   0   39  2.13
11  11    4   6   7  10   38  3.03
12   6    3  21   0   5   35  2.86
13  14    9   0   5   2   30  2.07
14   4   25   0   0   0   29  1.86
15   8    7   2  10   0   27  2.52
16   5    4   6  10   0   25  2.84
17   6   10   9   0   0   25  2.12
18  11    1   2   3   7   24  2.75
19  20    3   0   0   0   23  1.13
20   6    9   4   2   1   22  2.23
21   5    1   3   8   1   18  2.94
22   9    1   5   2   1   18  2.17
23   5    7   3   1   1   17  2.18
24   0    3  12   0   2   17  3.06
25   1   11   1   3   1   17  2.53
26   0    3   0   6   7   16  4.06
27   2    2   8
3   1   16  2.94
28   6    5   1   3   0   15  2.07
29   6    6   1   2   0   15  1.93
30   0    8   2   4   0   14  2.71
31   8    5   1   0   0   14  1.5
32   5    0   8   0   1   14  2.43
33   0    0  13   0   0   13  3
34   5    4   1   2   0   12  2
35   6    2   0   3   0   11  2
36   4    7   0   0   0   11  1.64
37   0    1   6   4   0   11  3.27
38   5    5   0   1   0   11  1.73
39   5    6   0   0   0   11  1.55
40   1    2   2   4   1   10  3.2
41   4    1   3   1   0    9  2.11
42   4    1   1   0   0    6  1.5
43   3    1   0   1   0    5  1.8
44   3    0   1   0   0    4  1.5
45   1    2   0   0   0    3  1.67
46   0    1   2   0   0    3  2.67
47   2    1   0   0   0    3  1.33
48   0    0   2   0   0    2  3
49   0    1   0   1   0    2  3
50   0    0   1   1   0    2  3.5

Table 2: Reviews for 50 different products from a given seller

To model this example we take L = 5, so that S = {0, 1, 2, 3, 4}, where ℓ ∈ S represents an (ℓ+1)-star review. We take κ = 10, ε = 5, and p_ℓ = 1/5 for each ℓ ∈ S. For the data, we have M = 50 and the number N_m is the total number of reviews given to the mth product. For our row counts, the number y_{mℓ} is the total number of (ℓ+1)-star reviews given to the mth product. In this example, we generated K = 100000 weighted simulations, and obtained an effective sample size of about 561. In computing the simulation weights as in (6.4), we used a log scale factor of 28.8. As with the pressed penny machine, we begin by considering a hypothetical new product from this seller. The quality of this 51st product can be characterized by the vector θ_{51} = (θ_{51,0}, θ_{51,1}, θ_{51,2}, θ_{51,3}, θ_{51,4}), since θ_{51,ℓ} is the (unknown) probability that the product will receive an (ℓ+1)-star review. The long-term average rating of this product over many reviews will be A(θ_{51}), where A(x) = \sum_{ℓ=0}^{4} (ℓ+1)\, x_ℓ. According to (6.6), we have

L(A(θ_{51}) | Y = y) ≈ \frac{1}{60}\left(10\,\mathrm{Dir}(1,1,1,1,1) ∘ A^{-1} + \sum_{m=1}^{50} \frac{\sum_{k=1}^{100000} V_k\, \delta(A(\theta^{*,k}_m))}{\sum_{k=1}^{100000} V_k}\right).

Using this, we have E[A(θ_{51}) | Y = y] ≈ 2.54, meaning that the expected long-term average rating of a new product is a little more than 2.5. For a more informative look at the quality of a new product, an approximate density for L(A(θ_{51}) | Y = y) is given in Figure 3(a).
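Estimates such as E[A(θ_{51}) | Y = y] ≈ 2.54 are self-normalized importance-sampling averages, Σ_k V_k f(θ^{*,k}) / Σ_k V_k. A minimal sketch of the two ingredients (function names and the star values are ours, not from the paper's code):

```python
def long_run_average(theta, values):
    """A-type functional: expected outcome value sum_l values[l] * theta[l]
    of a single draw from the outcome distribution theta."""
    return sum(v * t for v, t in zip(values, theta))

def weighted_posterior_mean(fvals, weights):
    """Self-normalized importance-sampling estimate sum_k V_k f_k / sum_k V_k."""
    return sum(w * f for w, f in zip(weights, fvals)) / sum(weights)

stars = [1, 2, 3, 4, 5]                      # an (l+1)-star review for l in S
print(long_run_average([0.2] * 5, stars))    # uniform quality → average 3 stars
```

In the paper's setting, `fvals` would hold A(θ^{*,k}_m) for the simulated quality vectors and `weights` the simulation weights V_k.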
The graph in Figure 3(a) shows a bimodal distribution, meaning that we can expect the average ratings of future products to cluster around 2 and 3 stars. After having considered a hypothetical new product, we turn our attention to the 50 products that have already received reviews. Consider, for instance, the 50th product. This product has a 3.5-star average rating, but only 2 reviews. To see the
effect of these 2 reviews on the expected long-term rating, we apply (6.5) with Φ((x_{mℓ})) = A(x_{50}) to obtain

L(A(θ_{50}) | Y = y) ≈ \frac{\sum_{k=1}^{100000} V_k\, \delta(A(\theta^{*,k}_{50}))}{\sum_{k=1}^{100000} V_k}.

[Figure 3: Approximate densities for L(A(θ_m) | Y = y); (a) m = 51, mean 2.54; (b) m = 50, mean 2.83; (c) m = 26, mean 3.8.]

This gives E[A(θ_{50}) | Y = y] ≈ 2.83, and an approximate density for L(A(θ_{50}) | Y = y) is given in Figure 3(b). According to the model, the 50th product’s two reviews (a 3-star and a 4-star review) have transformed the graph in Figure 3(a) to the graph in 3(b), and increased its expected long-term average rating from 2.54 to 2.83. We can similarly look at the 26th product. This product has an average rating of 4.06, but it only has 16 reviews. Using (6.5) as above, we obtain E[A(θ_{26}) | Y = y] ≈ 3.8, and an approximate density for L(A(θ_{26}) | Y = y) as given in Figure 3(c).

6.5 The gamer distribution

In our previous examples, we took p to be a uniform measure. That is, we took p_ℓ = 1/L for all ℓ. Our final example will be presented in Section 6.6 and is concerned with video game leaderboards. In that example, to have plausible results that match our intuition about video games, it will not be sufficient to let p be uniform. Instead, we will construct p from the continuous distribution described in this section. Let r, c, and α be positive real numbers. A nonnegative random variable X is said to have the gamer distribution with parameters r, c, and α, denoted by X ∼ Gamer(r, c, α), if X has density

f(x) = \frac{r c^r}{\alpha^r \Gamma(\alpha)}\, x^{-r-1} \int_0^{\alpha x/c} y^{\alpha+r-1} e^{-y}\, dy,   (6.7)

for x > 0. The fact that this is a probability density function is shown in Proposition 6.3 below. Note that if X ∼ Gamer(r, c, α) and s > 0, then sX ∼ Gamer(r, sc, α). The gamer distribution is meant to model the score of a random player in a particular single-player game.
The game is assumed to have a structure in which the player engages in a sequence of activities that can result in success or failure. Successes increase the player’s score. Failures bring the player closer to a termination event, which causes the game to end. The distribution of scores at the higher end of the player skill spectrum has a power law decay with exponent r. More specifically, there is a constant K such that P(X > x) ≈ K x^{-r} for large values of x. For small values of x, the distribution of X looks like a gamma distribution. The parameter c indicates the average score of players at the lower end of the skill spectrum, who make up the bulk of the player base. The parameter α is connected to the structure of the game. Higher values of α indicate a more forgiving game in which the termination event is harder to trigger. See below for more on the meaning of these parameters. The gamer distribution can be seen as a mixture of gamma distributions, where the mixing distribution is Pareto. More specifically, it is straightforward to prove the following.

Proposition 6.3. Let r, c >
0 and let M have a Pareto distribution with minimum value c and tail index r. That is, P(M > m) = (m/c)^{-r} for m > c. If X | M ∼ Gamma(α, α/M), then X ∼ Gamer(r, c, α).

According to Proposition 6.3, the parameter r is the tail index of the mean player scores in the population. However, it is also the tail index of the raw player scores. To see this, let γ(β, u) = \int_0^u y^{\beta-1} e^{-y}\, dy denote the lower incomplete gamma function. Then (6.7) can be rewritten as

f(x) = \frac{r c^r}{\alpha^r \Gamma(\alpha)}\, x^{-r-1}\, \gamma\!\left(\alpha + r, \frac{\alpha x}{c}\right).   (6.8)

Since γ(β, u) → Γ(β) as u → ∞, we have

f(x) \sim \frac{\Gamma(\alpha+r)}{\alpha^r \Gamma(\alpha)}\, \frac{r c^r}{x^{r+1}}

as x → ∞. In other words, the density of X is asymptotically proportional to the density of M as x → ∞. For small values of x, note that γ(β, u) is asymptotically proportional to u^β e^{-u} as u → 0. Hence, if we introduce the parameter λ = α/c, then

f(x) \sim \frac{r}{\lambda^r \Gamma(\alpha)}\, x^{-r-1} (\lambda x)^{\alpha+r} e^{-\lambda x} = \frac{r \lambda^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\lambda x}

as x → 0. In other words, the density of X is asymptotically proportional to the density of Gamma(α, α/c) as x → 0. Since Gamma(α, α/c) has mean c, the parameter c can be understood as the average score of players at the lower end of the skill spectrum. To understand α, we look to the fact that X | M ∼ Gamma(α, α/M). Given M, we can think of X as being driven by α exponential clocks, each with mean M/α. Each clock represents a time to failure, and when all clocks expire, the player has reached the termination event. Since α denotes the number of such clocks, a higher value of α indicates that more failures are needed to trigger the end of the game. We also have Var(X | M) = M²/α. Hence, α can also be understood through the fact that 1/√α is the coefficient of variation of X given M. For computational purposes, it may be more efficient to rewrite (6.8) in terms of the logarithm of the gamma function and the regularized lower incomplete gamma function, P(β, u) = γ(β, u)/Γ(β). In this case, we have

f(x) = \frac{r c^r}{\alpha^r}\, \exp\big(\log \Gamma(\alpha+r) − \log \Gamma(\alpha)\big)\, x^{-r-1}\, P\!\left(\alpha + r, \frac{\alpha x}{c}\right)

for x > 0.
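As a sketch (not the paper's code), the numerically stable form of the density and the Proposition 6.3 mixture representation translate directly into SciPy, whose `gammainc` is the regularized P(β, u) and `gammaln` is log Γ:

```python
import numpy as np
from scipy.special import gammaln, gammainc  # gammainc(b, u) = P(b, u)

def gamer_pdf(x, r, c, alpha):
    # f(x) = r (c/alpha)^r exp(lgamma(alpha+r) - lgamma(alpha))
    #        * x^(-r-1) * P(alpha+r, alpha x / c), for x > 0.
    x = np.asarray(x, dtype=float)
    log_coef = (np.log(r) + r * (np.log(c) - np.log(alpha))
                + gammaln(alpha + r) - gammaln(alpha))
    return np.exp(log_coef - (r + 1.0) * np.log(x)) * gammainc(alpha + r, alpha * x / c)

def gamer_sample(size, r, c, alpha, rng):
    # Proposition 6.3 as a sampler: M ~ Pareto(minimum c, tail index r),
    # then X | M ~ Gamma(shape alpha, scale M/alpha), i.e. rate alpha/M.
    m = c * rng.random(size) ** (-1.0 / r)  # inverse-CDF draw: P(M > m) = (m/c)^(-r)
    return rng.gamma(shape=alpha, scale=m / alpha)
```

Since E[X | M] = M, the mean of Gamer(r, c, α) is E[M] = cr/(r−1) for r > 1, which can be used as a quick sanity check on the sampler.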
6.6 Video game leaderboards

For our final example, we consider a single-player video game in which an individual player tries to score as many points as possible before the game ends. If X is a random score of a random player, then we will assume that X ∼ Gamer(r, c, α), where r = 7/3, c = 28, and α = 3. The values of these parameters are arbitrarily chosen for the sake of the example. The values of r and c give the distribution a mean of about 50 and a decay rate that approximately matches the decay rate in the global Tetris leaderboard (see https://kirjava.xyz/tetris-leaderboard/ ). The choice α = 3 indicates a game in which the player has 3 “lives,” which is a typical gaming structure, especially in classic arcade video games. Finally, we assume that the actual score displayed by the game is rounded to the nearest integer and capped at 499. A group of 10 friends get together and play this game. Each friend plays the game a different number of times. In this case, the agents are the players and the actions are the
scores they earn each time they play. The 10 friends all have their own usernames that they use when playing the game. The usernames are Asparagus Soda, Goat Radish, Potato Log, Pumpkins, Running Stardust, Sweet Rolls, The Matrix, The Pianist Spider, The Thing, and Vertigo Gal. We will consider three different scenarios for this example.

6.6.1 Players with matching scores

In our first scenario, the 10 friends generate the scores given in Table 3. Note that in that table, the scores are listed in increasing order. To get an overview of the data, we can place the 10 players in a leaderboard, ranked by their high score, as shown in Table 4. To model this scenario, we take L = 500, so that S = {0, 1, . . . , 499}. We take κ = ε = 1 and let

p_ℓ = F(ℓ + 0.5) − F(ℓ − 0.5)  if 0 ≤ ℓ < 499,
p_ℓ = 1 − F(498.5)             if ℓ = 499,

where F is the distribution function of a Gamer(7/3, 28, 3) distribution.

Username            Scores
Pumpkins            12, 21, 25, 25, 26, 27, 30, 33, 34, 34, 36, 42, 44, 44, 48, 55, 67, 69
Potato Log          18, 21, 21, 22, 23, 25, 29, 29, 32, 33, 47, 53, 54, 56, 57, 65, 75
The Thing           10, 16, 16, 19, 19, 25, 25, 26, 29, 32, 35, 37, 42, 44, 59, 60
Running Stardust    23, 38, 62, 71, 138, 149, 151
Sweet Rolls         15, 23, 56, 71, 98, 130
Vertigo Gal         10, 30, 40, 56, 87, 92
Asparagus Soda      17, 43, 55
The Matrix          11, 15
Goat Radish         38
The Pianist Spider  3

Table 3: Player scores for Video Game Scenario 1

rank  name                hi score  avg score  IDP avg  # games
1     Running Stardust    151       90         80       7
2     Sweet Rolls         130       66         55       6
3     Vertigo Gal         92        52         52       6
4     Potato Log          75        39         39       17
5     Pumpkins            69        37         38       18
6     The Thing           60        31         32       16
7     Asparagus Soda      55        38         40       3
8     Goat Radish         38        38         71       1
9     The Pianist Spider  32        32         37       1
10    The Matrix          15        13         43       2

Table 4: Leaderboard for Video Game Scenario 1

For the data, we have M = 10, the number N_m is the number of scores in the mth row of Table 3, and y_{mn} is the nth score in the mth row.
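The discretization of F into p_ℓ can be sketched by cell-by-cell numerical integration of the Gamer density (a sketch under our assumptions; quadrature over each cell stands in for an explicit closed-form CDF):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, gammainc

def gamer_pdf(x, r, c, alpha):
    # Gamer(r, c, alpha) density in the stable log-gamma / regularized-P form.
    log_coef = (np.log(r) + r * np.log(c / alpha)
                + gammaln(alpha + r) - gammaln(alpha))
    return np.exp(log_coef - (r + 1.0) * np.log(x)) * gammainc(alpha + r, alpha * x / c)

def discretized_gamer(L, r, c, alpha):
    # p_l = F(l + 0.5) - F(l - 0.5) for 0 <= l < L - 1 (with F(-0.5) = 0),
    # and the remaining mass 1 - F(L - 1.5) lumped into the capped top score.
    cells = [quad(lambda x: gamer_pdf(x, r, c, alpha),
                  max(l - 0.5, 0.0), l + 0.5)[0] for l in range(L - 1)]
    cells.append(1.0 - sum(cells))
    return cells
```

By construction the p_ℓ sum to one, and the lump at ℓ = 499 mirrors the capping of displayed scores at 499.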
Note that since the model only depends on y_{mn} through the row counts y_{mℓ}, the order in which the scores are listed in the vector y_m is not relevant. In this scenario, we generated K = 40000 weighted simulations, and obtained an effective sample size of about 326. In computing the simulation weights as in (6.4), we used a log scale factor of 42. The long-term average score of the player in the mth row of Table 3 will be A(θ_m), where A(x) = \sum_{ℓ=0}^{499} ℓ\, x_ℓ. For example, using (6.5), the expected long-term average score of Running Stardust is E[A(θ_4) | Y = y] ≈ 79.65. These conditional expectations, rounded to the nearest integer, are shown in the “IDP avg” column of Table 4. Looking at these averages, we can see at least two players whose numbers seem
unusual. The first is Goat Radish. They played only one game and scored a 38, which is a relatively low score compared to the rest of the group. And yet the IDP model has given them an expected long-term average score of 71. Not only is this counterintuitive, it is also inconsistent with how the model treated The Pianist Spider. The reason for this behavior can be seen in Table 3. There is only one other player that managed to score exactly 38 in one of their games: Running Stardust. So from the model’s perspective, there is a reasonable chance that Goat Radish and Running Stardust have similar scoring tendencies. Since Running Stardust happens to be the top player, this leads to an unusually high long-term estimate for Goat Radish. Our intuition is able to dismiss this line of reasoning because we know, for instance, that there is very little difference between a score of 38 and a score of 39. Had Goat Radish scored a 39 instead, our predictions should not change that much. But we only know this because we are viewing the positive real numbers as more than just a set. We are viewing them as a totally ordered set with the Euclidean metric. The IDP model is not designed to utilize these properties of the state space. From its perspective, the number “38” is just a label. It is nothing more than the name of a particular element of the state space, and it happens to be an element that only two players were able to hit. We see similar behavior in the model’s forecast for The Matrix, who scored an 11 and a 15 in their two games. No one else scored an 11, but exactly one other player managed to score exactly 15, and that was Sweet Rolls, who happens to be the second-best player. Just as with Goat Radish, this causes the model to generate an unintuitively high value for The Matrix’s long-term average score. To test this explanation, we are led to our second scenario.
6.6.2 Matching scores removed

The scores in our second scenario are the same as in our first, but we changed Goat Radish’s 38 to a 39, and The Matrix’s 15 to a 14. (See Table 5.) The scores 14 and 39 are unique in that no other player achieved exactly those scores. We reran the model, again generating K = 40000 weighted simulations. This time, we obtained an effective sample size of about 22.3. To save time, we deleted the two heaviest simulations, leaving K = 39998 simulations with an effective sample size of about 1099. The new expected long-term averages are shown in Table 6. We now see that Goat Radish and The Matrix have lower, more reasonable long-term averages according to the model. Likewise, Running Stardust and Sweet Rolls have slightly higher averages. In the first scenario, their averages were brought down because of their associations with Goat Radish and The Matrix.

6.6.3 Players with only a few games

In our third scenario, the ten friends generated the scores in
Table 7. We use the same L, κ, ε, p, and M as in the first scenario. Also as it was there, the number N_m is the number of scores in the mth row of Table 7, and y_{mn} is the nth score in the mth row. Note, however, that the username in the mth row has changed in the current scenario. We again used a log scale factor of 42 and generated K = 40000 weighted simulations, obtaining an effective sample size of about 39. This time, we deleted the 26 heaviest simulations, leaving K = 39974 simulations and an effective sample size of about 207. As before, the resulting long-term expected averages, E[A(θ_m) | Y = y], are shown in Table 8.

Username            Scores
Pumpkins            12, 21, 25, 25, 26, 27, 30, 33, 34, 34, 36, 42, 44, 44, 48, 55, 67, 69
Potato Log          18, 21, 21, 22, 23, 25, 29, 29, 32, 33, 47, 53, 54, 56, 57, 65, 75
The Thing           10, 16, 16, 19, 19, 25, 25, 26, 29, 32, 35, 37, 42, 44, 59, 60
Running Stardust    23, 38, 62, 71, 138, 149, 151
Sweet Rolls         15, 23, 56, 71, 98, 130
Vertigo Gal         10, 30, 40, 56, 87, 92
Asparagus Soda      17, 43, 55
The Matrix          11, 14
Goat Radish         39
The Pianist Spider  3

Table 5: Player scores for Video Game Scenario 2

rank  name                hi score  avg score  IDP avg  # games
1     Running Stardust    151       90         84       7
2     Sweet Rolls         130       66         62       6
3     Vertigo Gal         92        52         51       6
4     Potato Log          75        39         39       17
5     Pumpkins            69        37         38       18
6     The Thing           60        31         31       16
7     Asparagus Soda      55        38         39       3
8     Goat Radish         38        38         43       1
9     The Pianist Spider  32        32         37       1
10    The Matrix          15        13         28       2

Table 6: Leaderboard for Video Game Scenario 2

In this example, we focus our attention on Asparagus Soda, who is situated at No. 4 on the leaderboard, but played the game only once. The question is, does he deserve to be at No. 4? Is he truly the fourth-best player among the ten friends? For example, Potato Log, who is at No. 3, played the game 20 times and only managed to get a high score of 87. Asparagus Soda almost matched that high score in a single attempt.
Intuitively, it seems clear that Asparagus Soda is the better player and should rank higher than Potato Log. It is less clear how Asparagus Soda compares to Pumpkins, the No. 2 player. Neither of them made a lot of attempts, but Asparagus Soda has the higher average score. Which one is more likely to have the higher long-term average score? If they had a contest where they each played a single game and the higher score wins, who should we bet on?

Asparagus Soda vs. Potato Log. Looking at Table 7, we see that Asparagus Soda corresponds to m = 9 and Potato Log corresponds to m = 2. Table 8 shows us that E[A(θ_9) | Y = y] ≈ 67 and E[A(θ_2) | Y = y] ≈ 31. In other words, the IDP model gives Asparagus
Soda a much higher expected long-term average score than Potato Log. This confirms our intuition that Asparagus Soda is the better player. But because Asparagus Soda played only one game, the model should have a lot more uncertainty surrounding Asparagus Soda’s forecasted mean.

Username            Scores
Vertigo Gal         45, 100, 118, 121, 125, 130, 133, 145, 161, 173, 173, 187, 190, 192, 193, 200, 220, 223, 256, 275, 314, 354, 388, 475, 524
Potato Log          4, 13, 13, 16, 19, 19, 19, 19, 23, 24, 25, 26, 31, 38, 41, 43, 44, 47, 51, 87
The Thing           4, 6, 9, 19, 25, 27, 28, 38, 39, 40
The Matrix          13, 15, 17, 32, 32, 61, 78
Running Stardust    21, 23, 51, 61, 65
Goat Radish         23, 25, 34, 51
Pumpkins            49, 65, 84, 117
Sweet Rolls         26, 65
Asparagus Soda      86
The Pianist Spider  62

Table 7: Player scores for Video Game Scenario 3

rank  name                hi score  avg score  IDP avg  # games
1     Vertigo Gal         475       207        198      25
2     Pumpkins            117       79         72       4
3     Potato Log          87        30         31       20
4     Asparagus Soda      86        86         67       1
5     The Matrix          78        35         37       7
6     Running Stardust    65        44         45       5
6     Sweet Rolls         65        46         52       2
8     The Pianist Spider  62        62         56       1
9     Goat Radish         51        33         34       4
10    The Thing           40        24         26       10

Table 8: Leaderboard for Video Game Scenario 3

To see this, we can compare approximate densities for L(A(θ_9) | Y = y) and L(A(θ_2) | Y = y). (See Figure 4.) As is visually evident, Asparagus Soda’s density is supported on a much wider interval. In this way, the model acknowledges the possibility that Asparagus Soda’s actual long-term average score is lower than Potato Log’s. The probability that this is the case is P(A(θ_9) < A(θ_2) | Y = y). If we define Φ : R^{10×500} → R by Φ((x_{mℓ})) = A(x_9) − A(x_2), then we can use (6.5) to obtain P(A(θ_9) < A(θ_2) | Y = y) ≈ 0.049. In other words, according to the model, there is a 95% chance that Asparagus Soda is a better player than Potato Log. Now suppose the two of them had a contest in which they each played the game once and the higher score wins. What is the probability that Asparagus Soda would win this contest?
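A quantity of the form C(x, y) = Σ_{ℓ>ℓ′} x_ℓ y_{ℓ′} gives the chance that an independent draw from the outcome distribution x strictly exceeds one from y. For plug-in distributions it can be computed in a single pass (a sketch; the function name is ours):

```python
def win_probability(p, q):
    """P(score_p > score_q) = sum over l > l' of p[l] * q[l'], for independent
    draws from outcome distributions p and q over the same ordered state space."""
    total = 0.0
    cum_q = 0.0              # running sum q[0] + ... + q[l-1]
    for l in range(len(p)):
        total += p[l] * cum_q
        cum_q += q[l]
    return total
```

Note that win_probability(p, q) + win_probability(q, p) + P(tie) = 1, so ties (possible on a discrete score space) are not split between the players.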
If we define C : R^{500} × R^{500} → R by C(x, y) = \sum_{ℓ > ℓ′} x_ℓ\, y_{ℓ′}, then C(θ_9, θ_2) is the (unknown) probability that Asparagus Soda beats Potato Log in this single-game contest. The actual probability, given our observations Y = y, is then E[C(θ_9, θ_2) | Y = y], which, according to (6.5), is approximately 0.786. That is, Asparagus Soda has about a 79% chance of beating Potato Log in a contest involving a single play of the game. To visualize the uncertainty around this probability, we can graph an approximate density for L(C(θ_9, θ_2) | Y = y). This is done in Figure 4(c).

[Figure 4: Asparagus Soda (m = 9) vs. Potato Log (m = 2); (a) density for L(A(θ_9) | Y = y), (b) density for L(A(θ_2) | Y = y), (c) density for L(C(θ_9, θ_2) | Y = y).]

The graph shows that although the
conditional mean of C(θ_9, θ_2) is about 79%, the conditional mode is much higher.

Asparagus Soda vs. Pumpkins. We now turn our attention to comparing Asparagus Soda, who played only once, to Pumpkins, who played four times. (See Figure 5.) Looking at Table 7, we see that Asparagus Soda corresponds to m = 9 and Pumpkins corresponds to m = 7. Table 8 shows us that E[A(θ_9) | Y = y] ≈ 67 and E[A(θ_7) | Y = y] ≈ 72. We can visualize the model’s uncertainty around Pumpkins’ expected long-term average by graphing an approximate density for L(A(θ_7) | Y = y). This is done in Figure 5(b). Visually comparing this graph with the corresponding one for Asparagus Soda in Figure 5(a), we see that the two long-term averages have comparable degrees of uncertainty. Using (6.5), we have P(A(θ_9) < A(θ_7) | Y = y) ≈ 0.625, meaning there is a 62% chance that Pumpkins is the better player. We can also consider a single-game contest between Asparagus Soda and Pumpkins. As above, we can use (6.5) to compute E[C(θ_9, θ_7) | Y = y] ≈ 0.484, meaning that Asparagus Soda has a 48% chance of beating Pumpkins in a single-game contest. To visualize the uncertainty around this probability, we can graph an approximate density for L(C(θ_9, θ_7) | Y = y). (See Figure 5(c).)

[Figure 5: Asparagus Soda (m = 9) vs. Pumpkins (m = 7); (a) density for L(A(θ_9) | Y = y), (b) density for L(A(θ_7) | Y = y), (c) density for L(C(θ_9, θ_7) | Y = y).]

References

[1] Charles E. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann. Statist., 2:1152–1174, 1974.

[2] Laurel Beckett and Persi Diaconis. Spectral analysis for discrete longitudinal data. Adv. Math., 103(1):107–128, 1994.

[3] Donald A. Berry and Ronald Christensen. Empirical Bayes estimation of a binomial parameter via mixtures of Dirichlet processes. Ann. Statist., 7(3):558–568, 1979.
[4] Evan Donald and Jason Swanson. An Aldous-Hoover type representation for row exchangeable arrays. Preprint, https://arxiv.org/abs/2504.21584, 2025.

[5] Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems. Ann. Statist., 1:209–230, 1973.

[6] Marie Gaudard and Donald Hadwin. Sigma-algebras on spaces of probability measures. Scand. J. Statist., 16(2):169–175, 1989.

[7] Olav Kallenberg. Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, 1997.

[8] Olav Kallenberg. Probabilistic symmetries and invariance principles. Probability and its Applications (New York). Springer, New York, 2005.

[9] T. Kloek and H. K. van Dijk. Bayesian estimates of equation system parameters: An application of integration by Monte Carlo. Econometrica, 46(1):1–19, 1978.

[10] Augustine Kong. A note on importance sampling using standardized weights. Technical Report 348, Chicago, Illinois 60637, July 1992.

[11] Augustine Kong, Jun S. Liu, and Wing Hung Wong. Sequential imputations and Bayesian missing data problems. J. Amer. Statist. Assoc., 89(425):278–288, March 1994.

[12] Jun S. Liu. Nonparametric hierarchical Bayes via sequential imputations. Ann. Statist., 24(3):911–930, 1996.

[13] David W. Scott. Multivariate density estimation: Theory, practice, and visualization. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons, Inc., New York, 1992.

[14] Jayaram Sethuraman. A constructive definition of Dirichlet priors.
On the Distribution of the Sample Covariance from a Matrix Normal Population

Haoming Wang
School of Mathematics, Sun Yat-sen University, Xingang West Road, 135, Guangzhou, 510275, Guangdong, China

Abstract

This paper discusses the joint distribution of sample variances and covariances, expressed in quadratic forms in a matrix population arising in comparing the differences among groups under homogeneity of variance. One major concern of this article is to compare K different populations, by assuming that the mean values of x^{(k)}_{11}, x^{(k)}_{12}, . . . , x^{(k)}_{1p}, x^{(k)}_{21}, x^{(k)}_{22}, . . . , x^{(k)}_{2p}, . . . , x^{(k)}_{n1}, x^{(k)}_{n2}, . . . , x^{(k)}_{np} in each population are M^{(k)} (n × p), k = 1, 2, . . . , K, and M (n × p) a fixed matrix, with this hypothesis

H_0 : M^{(1)} = M^{(2)} = · · · = M^{(K)} = M,

when the inter-group covariances are neglected and the intra-group covariances are equal. The N intra-group variances and ½N(N−1) intra-group covariances, where N = np, are classified into four categories T_1, T_{1½}, T_2 and T_3 according to the spectral forms of the precision matrix. The joint distribution of the sample variances and covariances is derived under these four scenarios. Besides, the moment generating function and the joint distribution of latent roots are explicitly calculated. As an application, we consider a classification problem in the discriminant analysis where the two populations should have different intra-group covariances. The distribution of the ratio of two quadratic forms is considered both in the central and non-central cases, with their exact power tabulated for different n and p.

Keywords: Product moment distribution, Matrix normal distribution, Elliptical contoured distribution, Matrix-variate analysis of variance, Quadratic discriminant analysis
2020 MSC: Primary 62H10; Secondary 33B15, 33C20

Preprint submitted to arXiv (https://arxiv.org/abs/2410.14490), May 2, 2025. arXiv:2505.00470v1 [math.ST] 1 May 2025.

1.
Introduction

In the early 20th century, Wishart (1928) studied the joint distribution of sample variances and covariances, leading to the product moment distribution in independent identically distributed samples X_1, X_2, . . . , X_n from a multivariate normal population N_p(0, B), where B is a p × p real symmetric positive definite matrix. His result, in combination with Anderson (1946), to be deduced later by replacing the central mean with any non-central matrix M (n × p) (rank(M) ≤ 2), and the result in James (1955) with M at any rank, equals

W_p(S; n) = \frac{1}{2^{np/2}\, \Gamma_p(\frac{n-1}{2})\, |B|^{n/2}}\, \mathrm{etr}\big(-\tfrac{1}{2}B^{-1}S\big)\, |S|^{\frac{n-p-1}{2}}  when M = 0;
          × \mathrm{etr}\big(-\tfrac{1}{2}\Omega\big)\; {}_0F_1\big(\tfrac{n}{2};\ \tfrac{1}{4}\Omega B^{-1}S\big)  when M ≠ 0;   (1.1)

where Γ_p(a) is the multivariate Gamma function, etr(Z) = exp tr(Z), and {}_0F_1(a; Z) is the hypergeometric function in a positive constant a and a univariate matrix argument Z. In (1.1), n is the size of independent identically normally distributed samples, |S| the determinant of the symmetric matrix S = (s_{ij}) of the ½p(p+1) product moment coefficients of X = [X′_1; X′_2; . . . ; X′_n], and Ω = (ω_{ij}) the non-central parameter, Ω = B^{-1}M′M, S = X′X. Thus, if x_1, . . . , x_n are the sample values, this reduces to the constant (n − 1) times Pearson’s non-central χ²-distribution, with

n\bar{x} = \sum_{1}^{n} x,   \qquad n s^2 = \sum_{1}^{n} (x - \bar{x})^2.

After the 1960s, various writers turned their attention to the problems that arise when samples are not
independent and identically distributed, assumed in most cases for simplicity to be normal. Dawid (1977) discussed the matrix normal distribution, denoted as N_{n,p}(M; A, B),

F_{n,p}(X) = \frac{1}{(2\pi)^{np/2}\, |A|^{p/2}\, |B|^{n/2}}\, \mathrm{etr}\big(-\tfrac{1}{2}B^{-1}X'A^{-1}X\big)  when M = 0;
           × \mathrm{etr}\big(-\tfrac{1}{2}B^{-1}M'A^{-1}M\big)\, \mathrm{etr}\big(B^{-1}M'A^{-1}X\big)  when M ≠ 0;   (1.2)

where A and B are n × n and p × p real symmetric positive definite matrices, respectively, and M an arbitrary n × p real matrix, which has been treated as a special case of the vector elliptical contoured distribution in Fang and Zhang (1990). The quadratic form S = X′X, by introducing a new variable X = A^{1/2}Y, will become ½p(p+1) quadratic forms in Y = (y_1, . . . , y_p),

y′_1Ay_1, y′_1Ay_2, . . . , y′_1Ay_p,
          y′_2Ay_2, . . . , y′_2Ay_p,
                    . . .
                          y′_pAy_p,

while the vectors y_1, . . . , y_p are not independent in most cases. When M = 0, though a bit difficult, the joint distribution of sample variances and covariances, first given by Khatri (1966), with the aid of the hypergeometric function, is

V_{n,p}(S) = \frac{|S|^{\frac{n-p-1}{2}}\, \mathrm{etr}(-q^{-1}B^{-1}S)}{2^{np/2}\, \Gamma_p(\frac{n-1}{2})\, |A|^{p/2}\, |B|^{n/2}}\; {}_0F_0\big(I - \tfrac{q}{2}A^{-1},\ q^{-1}B^{-1}S\big),   (1.3)

where q is an arbitrary positive constant. When A = I, the density (1.3) for the quadratic form reduces to the central Wishart density (1.1) based on samples from a multivariate normal population. Despite significant progress, e.g., Srivastava and Khatri (1979), Jensen and Good (1981), and Läuter et al. (1998) over the years, the non-central distribution for the ½p(p+1) quadratic forms hasn’t been obtained yet. In modern analysis of variance, we usually want to compare the means between different groups by assuming the intra-group covariances of the populations are equal and known, where the populations are assumed to be scalars or vectors. This article now seeks to move forward from vectors to matrices. Formally speaking, we may consider K different populations under homogeneity of variance, assuming that the mean values of x^{(k)}_{11}, x^{(k)}_{12}, . . . , x^{(k)}_{1p}, x^{(k)}_{21}, x^{(k)}_{22}, . . . , x^{(k)}_{2p}, . . . , x^{(k)}_{n1}, x^{(k)}_{n2},
. . . , x^{(k)}_{np} in each population are M^{(k)} (n × p), k = 1, 2, . . . , K, have the common mean M (n × p), or testing the hypothesis

H_0 : M^{(1)} = M^{(2)} = · · · = M^{(K)} = M.

For example, if n = 1 and we have K groups of vectors x^{(k)} with common intra-group covariances and without inter-group covariances, then the hypothesis for the mean values µ^{(k)} of x^{(k)}, k = 1, 2, . . . , K,

H′_0 : µ^{(1)} = µ^{(2)} = · · · = µ^{(K)} = µ,

is widely used in the multivariate analysis of variance, by assuming normality. For K = 2 and general n, now let (x_1, x_2, . . . , x_p) and (y_1, y_2, . . . , y_p) be two such different groups according to the left-spherical distribution in Fang and Zhang (1990). There are usually ½p(p+1) intra-group covariances of size n by the assumption of homogeneity of variance for each group, when the inter-group covariances are neglected, e.g., the two groups are uncorrelated. If we put x_i = Cov(x_i, x_i)^{1/2}\, t_i and A_{ji} = Corr(x_j, x_i)^{-1}, then this can also be visualised by the ½p(p+1) quadratic forms t′_1t_1, t′
_1A_{12}t_2, . . . , t′_1A_{1p}t_p,
          t′_2t_2, . . . , t′_2A_{2p}t_p,
                    . . .
                          t′_pt_p.

This requires us to develop mathematical techniques for handling the ½p(p+1) quadratic forms with possibly different coefficients. In practice, it also occurs quite often that the number of parameters to be estimated, when we have no prerequisite knowledge of the normal sample, is too large. For example, if the N = np values are assumed to be drawn from N distinct normal populations, there are N variances and ½N(N−1) pairwise correlation coefficients or regression coefficients, amounting to ½N(N+1), approximately O(N²), unknown parameters. It is impossible to draw any meaningful conclusion without a reduction in parameters of the population sampled. In order to handle this problem, we introduce four covariance structures, i.e., T_1, T_{1½}, T_2, and T_3, of a matrix population, equivalent to four nested types of spectral decompositions of the precision matrix. Under these four scenarios, the non-central distribution of these ½p(p+1) quadratic forms is explicitly calculated. For the population T_3, these results extend (1.3), and the others generalise the ½p(p+1) quadratic forms to different correlation matrices. The matrix-variate analysis of variance is equivalent to the linear discriminant analysis only when K = 2. The Behrens-Fisher problem in quadratic discriminant analysis with unequal covariances is quite challenging, particularly due to the distribution of the ratio S = S_1S_0^{-1} of two quadratic forms S_1 and S_0.
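A Monte Carlo view of the matrix normal law in (1.2): X = M + A^{1/2} Z B^{1/2}, with Z an n × p matrix of iid standard normals, has the N_{n,p}(M; A, B) distribution, since Cov(vec X) = B ⊗ A. A minimal sampler sketch (not from the paper; a Cholesky factor serves as the square root):

```python
import numpy as np

def sample_matrix_normal(M, A, B, rng):
    """Draw X ~ N_{n,p}(M; A, B) as X = M + L_A Z L_B', where A = L_A L_A'
    and B = L_B L_B' are Cholesky factorizations and Z has iid N(0,1) entries,
    so that Cov(vec X) = B (Kronecker) A."""
    n, p = M.shape
    L_a = np.linalg.cholesky(A)
    L_b = np.linalg.cholesky(B)
    Z = rng.standard_normal((n, p))
    return M + L_a @ Z @ L_b.T
```

In particular, Var(X_{ij}) = A_{ii} B_{jj} and Cov(X_{ij}, X_{il}) = A_{ii} B_{jl}, which is a convenient check on any claimed density or quadratic-form computation.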
In the trivial case, e.g., when both quadratic forms S_1 and S_0 are independently distributed according to (1.1), the distribution of the latent roots of the ratio statistic S, considered in Chikuse (1981) and simplified by the result of Gupta and Kabe (2004) using an identity of Bingham (1974), is known to possess the form

p(S) = \frac{1}{4^{np}\, B_p(\tfrac12 n, \tfrac12 n)}\, |B|^{\frac{3n}{2}}\, |I-S|^{\frac{n+p+1}{2}}\, |S|^{\frac{n-p-1}{2}}\, {}_1F_0\!\left(n;\, S(I-S)^{-1}\right), \quad (0 < S < I), \tag{1.4}

where B_p(a, b) is the multivariate Beta function and both S and I−S are positive definite. In fact, the result holds in a more general class of left-spherical distributions, e.g., Läuter et al. (1998), when the samples are central. In order to determine the distribution of the ratio statistic under the alternative hypothesis, the non-central distribution of the above quadratic forms should first be derived. This will be shown in this article to differ only slightly from the central distribution, containing a similar hypergeometric term of two matrix arguments.

What is now asserted is that all such problems depend, in the first instance, on the determination of a fundamental joint distribution of sample variances and covariances, which is a generalization of equation (1.1), first obtained by Wishart (1928) in the case of independent identically distributed samples from a multivariate normal population. It will, in fact, be the simultaneous distribution of the N = np sample variances and covariances under the presumed structures T_1, T_{1½}, T_2, and T_3 in the matrix population. It is the purpose of the present paper to give this generalised distribution. The planar case of n × 2 variables will first be considered in detail, and thereafter a proof for the general n × p case
will be given.

2. Notations and Conventions

2.1. Gamma, Beta and Hypergeometric Functions

The multivariate Gamma function, denoted by Γ_n(a), is defined to be

\Gamma_n(a) = \int_{A>0} \operatorname{etr}(-A)\, |A|^{a-\frac{n+1}{2}}\, dA, \tag{2.1}

where ℜ(a) > (n−1)/2 and A > 0 means the integral is taken over the space of real symmetric positive definite n × n matrices. The multivariate Beta function, denoted by B_n(a, b), is defined to be

B_n(a, b) = \int_{0<X<I} |X|^{a-\frac{n+1}{2}}\, |I-X|^{b-\frac{n+1}{2}}\, dX, \tag{2.2}

where ℜ(a), ℜ(b) > (n−1)/2, and the integral is taken over all n × n real symmetric matrices X such that both X and I−X are positive definite. The multivariate Beta function is related to the multivariate Gamma function by the formula

B_n(a, b) = \frac{\Gamma_n(a)\,\Gamma_n(b)}{\Gamma_n(a+b)}. \tag{2.3}

The hypergeometric function of a matrix argument is defined by a recursive relation

\int_{0<X<I} |X|^{a-\frac{n+1}{2}} |I-X|^{b-\frac{n+1}{2}}\, {}_pF_q\!\left({a_1,\dots,a_p \atop b_1,\dots,b_q};\, TX\right) dX = \frac{1}{B_n(b-a, a)}\, {}_{p+1}F_{q+1}\!\left({a, a_1,\dots,a_p \atop b, b_1,\dots,b_q};\, T\right), \tag{2.4}

and two limit equalities

\lim_{\gamma\to\infty} {}_{p+1}F_q\!\left({a_1,\dots,a_p,\gamma \atop b_1,\dots,b_q};\, \gamma^{-1}T\right) = {}_pF_q\!\left({a_1,\dots,a_p \atop b_1,\dots,b_q};\, T\right), \tag{2.5}

\lim_{\gamma\to\infty} {}_pF_{q+1}\!\left({a_1,\dots,a_p \atop b_1,\dots,b_q,\gamma};\, \gamma T\right) = {}_pF_q\!\left({a_1,\dots,a_p \atop b_1,\dots,b_q};\, T\right), \tag{2.6}

where ℜ(b−a), ℜ(a) > (n−1)/2, and T is an n × n complex symmetric matrix such that ℜ(T) > 0. In addition, the initial condition is

{}_0F_0(T) = \operatorname{etr}(T). \tag{2.7}

The hypergeometric function of two matrix arguments is defined through zonal polynomials, a topic treated in Hua (1958) and Muirhead (1982); we do not reproduce the definition here. However, when the two matrix arguments are of the same size, we have

\int_{O(n)} {}_pF_q\!\left({a_1,\dots,a_p \atop b_1,\dots,b_q};\, XHYH'\right) dH = {}_pF_q\!\left({a_1,\dots,a_p \atop b_1,\dots,b_q};\, X, Y\right), \tag{2.8}

where X and Y are both n × n symmetric matrices, and the integral is taken over the space of n × n orthogonal matrices.

2.2. T_1, T_{1½}, T_2 and T_3

Consider an n × p matrix population with normal entries.
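As a quick numerical sanity check of the special-function identities in Section 2.1 (not part of the original text), the Gamma–Beta relation (2.3) and the scalar case of the limit (2.5) can be verified directly; this sketch assumes NumPy and SciPy, where `scipy.special.multigammaln(a, d)` returns log Γ_d(a).

```python
import numpy as np
from scipy.special import multigammaln, gammaln, betaln

def multibetaln(a, b, d):
    """log B_d(a, b) via the Gamma relation (2.3)."""
    return multigammaln(a, d) + multigammaln(b, d) - multigammaln(a + b, d)

# For d = 1 the relation reduces to the ordinary Beta function.
assert np.isclose(multibetaln(3.0, 4.0, 1), betaln(3.0, 4.0))

# Product formula: Gamma_d(a) = pi^{d(d-1)/4} * prod_{i=1}^d Gamma(a - (i-1)/2).
d, a = 3, 5.0
direct = d * (d - 1) / 4 * np.log(np.pi) + sum(
    gammaln(a - (i - 1) / 2) for i in range(1, d + 1))
assert np.isclose(multigammaln(a, d), direct)

# Scalar case of the limit (2.5): 1F0(g; T/g) = (1 - T/g)^(-g) -> e^T as g -> inf.
g, T = 1e7, 0.5
assert np.isclose((1 - T / g) ** (-g), np.exp(T), rtol=1e-6)
```

These checks exercise only the scalar and log-Gamma reductions; the matrix-argument integrals themselves are taken from the references cited above.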
A question that usually arises is that, if we assume only each entry to be normal, there will be np(np+1)/2 unknown parameters, including variances and pairwise correlation coefficients or regression coefficients, to be estimated, so we need some assumptions on the population sampled. To make this clear, let us block an np × np covariance matrix Σ as

\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} & \cdots & \Sigma_{1n} \\ \Sigma_{21} & \Sigma_{22} & \cdots & \Sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma_{n1} & \Sigma_{n2} & \cdots & \Sigma_{nn} \end{pmatrix},

where each Σ_{ij} is a Hermitian square matrix of order p.

Definition 1 (T_1, T_{1½}, T_2 and T_3). An n × p complex-valued matrix population Z is said to be

T_1 (partially diagonalisable) if there exist orthonormal 1 × p vectors {b_j} such that Z can be developed into the sum

Z = \sum_{j=1}^{p} Z_j b_j,

where Z_j = Z b'_j and E Z_j(i) Z'_{j'}(i) = c_j(i) δ_{jj'}; in a word, the random coefficients Z_j are pairwise uncorrelated at each row index i;

T_{1½} (partially orthogonally diagonalisable) if it is T_1 and the coefficients Z_j are totally uncorrelated with each other: E Z_j(i) Z'_{j'}(i') = c_j(i, i') δ_{jj'};

T_2 (totally orthogonally diagonalisable) if it is T_1 and there exist further n × 1 vectors {a_i} such that Z can be developed into the sum

Z = \sum_{i=1}^{n} \sum_{j=1}^{p} \gamma_{ij}\, a_i b_j,

where γ_{ij} = a'_i Z b'_j and the random variables γ_{ij} are pairwise uncorrelated with each other: E γ_{ij} γ_{i'j'} = c_{ij} δ_{ii'} δ_{jj'};

T_3 (totally diagonalisable) if it is T_2 and there exist {σ²_i} and {τ²_j} such that E γ_{ij} γ_{i'j'} = σ²_i τ²_j δ_{ii'} δ_{jj'};

(**) (degenerate into rank one) if it is T_3 and, additionally, there exist random variables α_i, β_j such that γ_{ij} = α_i β_j and

Z = \left(\sum_{i=1}^{n} \alpha_i a_i\right)\left(\sum_{j=1}^{p} \beta_j b_j\right),

where E α_i α_{i'} = σ²_i δ_{ii'} and E β_j β_{j'} = τ²_j δ_{jj'}.

Denote c_j(i) = E|Z_j(i)|², c_j(i, i') = E Z_j(i) Z_j(i'), and c_{ij} = E|γ_{ij}|². We tabulate some equivalent conditions for these assumptions to hold explicitly:

Table 1: Comparison of T_1, T_{1½}, T_2 and T_3.

T_1 ⇔ Σ = Σ_{i=1}^{p} Σ_{j=1}^{p} A_{ij} ⊗ B_{ij}, where B_{ij} = b'_i b_j.
    ⇔ B_{ii} = b'_i b_i consists of common eigenvectors b'_i of Σ_{kk}, independent of k, corresponding to eigenvalues A_{ii}(k, k).
T_{1½} ⇔ Σ = Σ_{i=1}^{p} A_i ⊗ B_i, where B_i = b'_i b_i.
    ⇔ B_i = b'_i b_i consists of common eigenvectors b'_i of Σ_{kl}, independent of k, l, corresponding to eigenvalues A_i(k, l).
T_2 ⇔ Σ = Σ_{i=1}^{n} Σ_{j=1}^{p} γ_{ij} A_i ⊗ B_j, where A_i = a_i a'_i and B_j = b'_j b_j.
    ⇔ A_i = a_i a'_i, B_j = b'_j b_j consist of eigenvectors a_i ⊗ b'_j of Σ, corresponding to eigenvalues γ_{ij}.
T_3 ⇔ Σ = A ⊗ B, with A = Σ_{i=1}^{n} α_i a_i a'_i and B = Σ_{j=1}^{p} β_j b'_j b_j.
    ⇔ a_i ⊗ b'_j are eigenvectors of Σ, corresponding to eigenvalues α_i β_j.
(**) ⇔ there exist a column vector X and a row vector Y such that Z = XY and Σ = A ⊗ B, where A = E XX' and B = E Y'Y.

A matrix with the tensor forms in Table 1 is said to be T_1, T_{1½}, T_2, and T_3, respectively. Using these terminologies, we can restate the definitions above as follows:

Definition 2 (The matrix normal distributions T'_1, T'_{1½}, T'_2, and T'_3). An n × p population is said to be matrix normal T'_1 if its probability density function is

\frac{|\Sigma_1|^{\frac12}}{(2\pi)^{\frac{np}{2}}} \operatorname{etr}\!\left(-\frac12 \sum_{i=1}^{p}\sum_{j=1}^{p} A_{ij}\, X B_{ij} X'\right),

where Σ_1 is a T_1 positive definite matrix with A_{ij}, B_{ij} as defined in Table 1;

T'_{1½} if its probability density function is

\frac{|\Sigma_{1\frac12}|^{\frac12}}{(2\pi)^{\frac{np}{2}}} \operatorname{etr}\!\left(-\frac12 \sum_{i=1}^{p} A_i X B_i X'\right)
, where Σ_{1½} is a T_{1½} positive definite matrix similar to the above;

T'_2 if its probability density function is

\frac{|\Sigma_2|^{\frac12}}{(2\pi)^{\frac{np}{2}}} \operatorname{etr}\!\left(-\frac12 \sum_{i=1}^{n}\sum_{j=1}^{p} \gamma_{ij}\, A_i X B_j X'\right),

where Σ_2 is a T_2 positive definite matrix similar to the above;

T'_3 if its probability density function is

\frac{1}{(2\pi)^{\frac{np}{2}}\, |A|^{\frac{p}{2}} |B|^{\frac{n}{2}}} \operatorname{etr}\!\left(-\frac12 A^{-1} X B^{-1} X'\right),

where A and B are n × n and p × p Hermitian positive definite matrices.

2.3. Some lemmas involving hypergeometric functions

Lemma 1 (Dykstra (1970)). Let X be a matrix normal distribution T'_1, T'_{1½}, T'_2, or T'_3. The matrix X'X is positive definite with probability 1 if and only if n ≥ p.

Proof of Lemma 1. This follows immediately from the fact that the normal distribution has a continuous density, so that X lies in an (n−1)-dimensional subspace with probability zero.

Lemma 2 (Khatri (1966)). Let A be an n × n symmetric matrix and B a p × p symmetric positive definite matrix with n ≥ p. Let X be an n × p matrix. Then

\int_{X'X=S} \operatorname{etr}(AXBX')\, dX = \frac{\pi^{\frac{np}{2}}}{\Gamma_p(\frac{n}{2})}\, |S|^{\frac{n-p-1}{2}}\, {}_0F_0(A, BS). \tag{2.9}

Proof of Lemma 2. From the definition of the hypergeometric function we have etr(AXBX') = {}_0F_0(AXBX'). Now let

g(A) = \int_{X'X=S} {}_0F_0(AXBX')\, dX.

Since g(A) is a homogeneous symmetric function in A, by taking the transformation A ↦ HAH' and integrating H over O(n), we get

g(A) = \int_{X'X=S} {}_0F_0(A, BX'X)\, dX = \frac{\pi^{\frac{np}{2}}}{\Gamma_p(\frac{n}{2})}\, |S|^{\frac{n-p-1}{2}}\, {}_0F_0(A, BS),

using the density formula
(1.1). This proves (2.9).

The Gamma integral involving the hypergeometric function is also used.

Lemma 3 (Constantine (1963)). Let Z be a p × p complex symmetric matrix whose real part is positive definite and Y an arbitrary p × p symmetric matrix. Then for any a with ℜ(a) > (p−1)/2,

\int_{S>0} \operatorname{etr}(-SZ)\, |S|^{a-\frac{p+1}{2}}\, {}_0F_0(SY)\, dS = \Gamma_p(a)\, |Z|^{-a}\, {}_1F_0(a;\, YZ^{-1}). \tag{2.10}

The following is a widely known lemma concerning the distribution of the latent roots of a symmetric positive definite matrix; we omit the proof.

Lemma 4. If S is a p × p symmetric positive definite random matrix with probability density function p(S), then the joint density function of the latent roots l_1, …, l_p of S is

\frac{\pi^{\frac{p^2}{2}}}{\Gamma_p(\frac{p}{2})} \prod_{i<j}^{p} (l_i - l_j) \int_{O(p)} p(HLH')\, dH, \tag{2.11}

where L = diag(l_1, …, l_p), l_1 > l_2 > ⋯ > l_p > 0; elsewhere zero.

3. Generalised Product Moment Distributions

In this article, we are going to prove the following main theorem:

Theorem 1. Suppose X is an n × p real matrix from the matrix normal population T'_3.

1. The probability density distribution of the p(p+1)/2 variables in the p × p real symmetric matrix S = X'X = (s_{ij}), i ≤ j, is

\frac{\operatorname{etr}(-q^{-1}B^{-1}S)\, |S|^{\frac{n-p-1}{2}}}{2^{\frac{np}{2}}\, \Gamma_p(\frac{n-1}{2})\, |A|^{\frac{p}{2}} |B|^{\frac{n}{2}}}\, {}_0F_0\!\left(I - \tfrac12 qA^{-1},\, q^{-1}B^{-1}S\right),

where q is an arbitrary positive constant; this holds for all p × p real symmetric positive definite matrices S = (s_{ij}), and the density is zero elsewhere.

2. The moment generating function E etr(RS) for S = X'X is

|A|^{-\frac{p}{2}} |B|^{-\frac{n}{2}}\, {}_1F_0\!\left(\tfrac{n}{2};\, U, W^{-1}\right),

where U = I − A^{-1} and W = I − B^{1/2} R B^{1/2}.

3. The joint distribution of the latent roots l_1, l_2, …, l_p of S = X'X is

\frac{1}{\Gamma_p(\frac{n}{2})\, \Gamma_p(\frac{p}{2})\, |A|^{\frac{p}{2}} |B|^{\frac{n}{2}}}\, \frac{\pi^{\frac{p^2}{2}}}{2^{\frac{np}{2}}} \prod_{i=1}^{p} l_i^{\frac{n-p-1}{2}} \prod_{i<j}^{p} (l_i - l_j)\, {}_0F_0(A^{-1}, LB^{-1}),

where L = diag(l_i) and l_1 > l_2 > ⋯ > l_p > 0; elsewhere zero.

When p = 1, Theorem 1 reduces to known results about the non-central χ²-distribution. We shall begin by proving it for p = 2 with a maximally intuitive approach, and then use induction to give another, mathematically rigorous, proof for any p ≥ 2.

3.1.
The special case: p = 2

Let the frequency distribution of the population sampled be

p_r(x_r, y_r) = \frac{|B_r|^{\frac12}}{2\pi} \exp\!\left(-\tfrac12 b_{r\cdot11} x_r^2 - b_{r\cdot12} x_r y_r - \tfrac12 b_{r\cdot22} y_r^2\right), \tag{3.1}

where B_r = (b_{r\cdot ij}) is the inverse of the 2 × 2 covariance matrix of (x, y).

3.1.1. X has independent rows

Now let x_1, x_2, …, x_n represent the sample values of the x-variate, and y_1, y_2, …, y_n the corresponding values of the y-variate. Then the chance that x_1, y_1, x_2, y_2, …, x_n, y_n should fall within the infinitesimal regions dx_1, dy_1, …, dx_n, dy_n is

p(x, y) = \frac{\prod_{r=1}^{n} |B_r|^{\frac12}}{(2\pi)^n} \exp\!\left(\sum_{r=1}^{n}\left(-\tfrac12 b_{r\cdot11} x_r^2 - b_{r\cdot12} x_r y_r - \tfrac12 b_{r\cdot22} y_r^2\right)\right). \tag{3.2}

The following statistics are now to be calculated from the sample:

n\bar{x} = \sum_{1}^{n} x, \quad n\bar{y} = \sum_{1}^{n} y, \quad n s_1^2 = \sum_{1}^{n} (x - \bar{x})^2, \quad n s_2^2 = \sum_{1}^{n} (y - \bar{y})^2, \quad n r_{12} s_1 s_2 = \sum_{1}^{n} (x - \bar{x})(y - \bar{y}).

In order to transform the element of volume, we employ the geometrical reasoning of Wishart. The n values of x may be regarded geometrically as specifying a point P in an n-dimensional space whose coordinates are x_1 − \bar{x}, x_2 − \bar{x}, …, x_n − \bar{x}. Similarly, the n values of y specify a point Q in the same space. When \bar{x} and s_1 are fixed, as when a particular sample is chosen, P is constrained to move so that its perpendicular distances from the line x_1 = x_2 = ⋯
= x_n and from the plane x_1 + x_2 + ⋯ + x_n = n\bar{x} remain constant. It must therefore lie on the surface of an (n−1)-dimensional sphere which is everywhere at right angles to the radius vector x_1 = x_2 = ⋯ = x_n. The element of volume is then proportional to (\sqrt{n}\, s_1)^{n-2}\, ds_1\, d\bar{x}. For the factor of proportionality, we require the entire area of the surface of a sphere in n−1 dimensions. For radius r this is, according to known results in Tumura (1965),

\frac{2\, \pi^{\frac{n-1}{2}}}{\Gamma\!\left(\frac{n-1}{2}\right)}\, r^{n-2}.

Thus, we have a contribution to the transformed element of volume of

\frac{2\, \pi^{\frac{n-1}{2}}\, n^{\frac{n-2}{2}}}{\Gamma\!\left(\frac{n-1}{2}\right)}\, s_1^{n-2}\, ds_1\, d\bar{x}.

By similar reasoning, Q must lie on concentric spheres in the same space, and there will be corresponding contributions to the transformed element of volume. Let the radius vectors OP and OQ be cut by the unit sphere centred at O in the points A and B. Then AB is a spherical curve, specified by the sample. To find the chance that this particular curve should be chosen, we note that, P being fixed, the chance that Q should fall within the elementary range dθ (θ being the angle π/2 − ∠AOB) is equal to

\frac{1}{\sqrt{\pi}} \cos^{n-2}\theta\, d\theta.

The point Q is connected to P by the cosine relation that, if D is the common point of the two straight lines tangent to the ellipse at P and Q respectively, and ϕ is the angle PDQ,

\cos\phi = r_{12\cdot3} = \frac{r_{12} - r_{13} r_{23}}{\sqrt{1 - r_{13}^2}\, \sqrt{1 - r_{23}^2}}.

Now OP being fixed, the chance that OQ should fall between the angles ϕ and ϕ + dϕ, measured from OP, is equal to

\frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\sqrt{\pi}\, \Gamma\!\left(\frac{n-2}{2}\right)} \sin^{n-3}\phi\, d\phi.

The transformed volume element will consist of the product of all the above probabilities. The exponential term in (3.2) is easily expressed in terms of s_1, s_2, s_3, and the r's, and we have

dp = \frac{\prod_{r=1}^{n} |B_r|^{\frac12}}{\pi\, \Gamma(\frac{n-1}{2})\, \Gamma(\frac{n-2}{2})}\, \frac{n^{n-2}}{2^{n-1}} \exp\!\left(\sum_{r=1}^{n}\left(-\tfrac12 b_{r\cdot11} x_r^2 - b_{r\cdot12} x_r y_r - \tfrac12 b_{r\cdot22} y_r^2\right)\right) s_1^{n-2} s_2^{n-2} \cos^{n-2}\theta\, \sin^{n-3}\phi\; d\bar{x}\, d\bar{y}\, ds_1\, ds_2\, d\theta\, d\phi. \tag{3.3}

Figure 1: The connection between the spherical curve AB and the elliptical curve PQ.
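The surface-area constant used in this geometric argument can be checked numerically; the following minimal sketch (not from the paper) assumes NumPy/SciPy, with `dim` the dimension of the ambient space, so the paper's case is dim = n − 1.

```python
import numpy as np
from scipy.special import gamma

def sphere_surface(dim, r):
    # Surface area of the sphere of radius r in `dim`-dimensional space:
    # 2 * pi^(dim/2) * r^(dim-1) / Gamma(dim/2).
    return 2 * np.pi ** (dim / 2) * r ** (dim - 1) / gamma(dim / 2)

# Familiar special cases.
assert np.isclose(sphere_surface(3, 1.0), 4 * np.pi)  # ordinary sphere, area 4*pi
assert np.isclose(sphere_surface(2, 2.0), 4 * np.pi)  # circle of radius 2, length 4*pi
```

With dim = n − 1 this reproduces the factor 2π^{(n−1)/2} r^{n−2}/Γ((n−1)/2) appearing above.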
Thus, if the samples are identically distributed and the correlation coefficient in the common population is ρ, with variances σ_1² and σ_2², then integrating out the variables \bar{x}, \bar{y} by converting to polar coordinates, we get the product moment distribution for independent identically distributed samples from a 2-variate normal population:

p(a, b, h) = \frac{1}{\pi^{\frac12}\, \Gamma(\frac{n-1}{2})\, \Gamma(\frac{n-2}{2})} \begin{vmatrix} A & H \\ H & B \end{vmatrix}^{\frac{n-1}{2}} e^{-Aa - Bb - 2Hh} \begin{vmatrix} a & h \\ h & b \end{vmatrix}^{\frac{n-1}{2}}, \tag{3.4}

where

A = \frac{n}{2\sigma_1^2(1-\rho^2)}, \quad B = \frac{n}{2\sigma_2^2(1-\rho^2)}, \quad H = \frac{n\rho}{2\sigma_1\sigma_2(1-\rho^2)}, \quad a = s_1^2, \quad b = s_2^2, \quad h = r s_1 s_2.

For other values of p ≥ 2, the product moment distribution is easily derived by the above method when the samples are independent identically distributed from a multivariate normal population. These results are well summarised in books such as Anderson (1958), Srivastava and Khatri (1979), and Muirhead (1982).

3.1.2. X has independent columns

Problems arise when the populations in (3.2) are correlated. There are, for example, new quantities: the partial and multiple correlations, and the partial regression coefficients. Thus, if the sample of values x_1, x_2, …, x_n are jointly normally distributed, and likewise for y, a simplified form of the sample distribution may be

p(x, y) = \frac{|\Sigma_{1\frac12}|^{\frac12}}{(2\pi)^n} \exp\!\left(-\tfrac12 x'A_{11}x - \tfrac12 y'A_{22}y\right), \tag{3.5}

where Σ_{1½} = A_{11} ⊗ E_{11} + A_{22} ⊗ E_{22} and the 2 × 2 matrix E_{ij} has (i, j)-element one and zeros elsewhere; the n × n matrix A_{11} (or A_{22}) is the inverse of the covariance of the x-variable (or y). Since (3.5) can be rewritten as

p(x, y) = \frac{|\Sigma_{1\frac12}|^{\frac12}}{(2\pi)^n} \exp\!\left\{-q^{-1}x'x + q^{-1}x'\!\left(I - \tfrac{q}{2}A_{11}\right)\!x - q^{-1}y'y + q^{-1}y'\!\left(I - \tfrac{q}{2}A_{22}\right)\!y\right\}, \quad (q > 0), \tag{3.6}

we can integrate out the nuisance terms q^{-1}x'(I − (q/2)A_{11})x (and likewise for y) in the exponential by Lemma 2, using these properties of the hypergeometric function for p = 2:

\operatorname{etr}(X) = {}_0F_0(X), \qquad \int_{O(n)} {}_0F_0(AH_1BH_1')\, dH = {}_0F_0(A, B), \quad H = [H_1, H_2], \text{ where } H_1 \text{ is } n \times 2,

\int_{X'X=S} \operatorname{etr}(AXBX')\, dX = \frac{\pi^n}{\Gamma(\frac{n-1}{2})\, \Gamma(\frac{n-2}{2})}\, |S|^{\frac{n-3}{2}}\, {}_0F_0(A, BS), \tag{3.7}

where the integral in the second equality actually runs over the space of n × n orthogonal matrices O(n). This result, writing Z = (x, y) and integrating q^{-1}Z'(I − (q/2)A)Z (assuming A_{11} = A_{22} = A), has a neat form expressed by matrices:

p(Z'Z) = \frac{|A|}{2^n\, \Gamma_2(\frac{n-1}{2})}\, \operatorname{etr}\!\left(-q^{-1}Z'Z\right) |Z'Z|^{\frac{n-3}{2}}\, {}_0F_0\!\left(I - \tfrac{q}{2}A,\, q^{-1}Z'Z\right). \tag{3.8}

If A_{11} ≠ A_{22}, we can similarly put s_{11} = s_1^2 = x'x, s_{22} = s_2^2 = y'y, s_{12} = ρ s_1 s_2 = x'y; then for q_{11}, q_{22} > 0, this becomes

p(s_{11}, s_{12}, s_{22}) = \frac{|\Sigma_{1\frac12}|^{\frac12}}{2^n\, \Gamma_2(\frac{n-1}{2})} \exp\!\left(-q_{11}^{-1}s_{11} - q_{22}^{-1}s_{22}\right) |s_{11}|^{\frac{n-3}{2}} |s_{22}|^{\frac{n-3}{2}}\, {}_0F_0\!\left(I - \tfrac{q_{11}}{2}A_{11},\, q_{11}^{-1}s_{11}\right) {}_0F_0\!\left(I - \tfrac{q_{22}}{2}A_{22},\, q_{22}^{-1}s_{22}\right). \tag{3.9}

This should not be confused with (3.4). Therefore, under the assumption that every column is jointly normal, with the presumed form (3.5), the probability density distribution of the products s_{11}, s_{22}, s_{12} from normal populations is calculated explicitly. This also indicates the approach for calculating the probability density distribution of these coefficients for T_1.

3.1.3. X has general normal entries

However, if we assume only that each entry in the sample is normal, things become more complicated. There are usually 2n unknown variances and n(2n−1) unknown pairwise correlation coefficients or regression coefficients to be estimated.
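As a cross-check (not part of the original argument), the T'_3 density of Definition 2 coincides with the density of vec(X) ~ N(0, B ⊗ A) under column-major vectorisation. A minimal numerical sketch, assuming NumPy and SciPy and arbitrarily chosen positive definite A and B:

```python
import numpy as np
from scipy.stats import multivariate_normal

n, p = 3, 2
A = np.eye(n) + 0.3 * np.ones((n, n))        # hypothetical n x n row covariance
B = np.array([[2.0, 0.5], [0.5, 1.0]])       # hypothetical p x p column covariance
X = np.arange(1.0, n * p + 1).reshape(n, p)  # arbitrary evaluation point

# T'_3 density: (2*pi)^(-np/2) |A|^(-p/2) |B|^(-n/2) etr(-1/2 A^{-1} X B^{-1} X').
quad = np.trace(np.linalg.inv(A) @ X @ np.linalg.inv(B) @ X.T)
dens = np.exp(-0.5 * quad) / (
    (2 * np.pi) ** (n * p / 2)
    * np.linalg.det(A) ** (p / 2)
    * np.linalg.det(B) ** (n / 2))

# Same density via vec(X) ~ N(0, B kron A), using column-major ("F") vectorisation.
vec_dens = multivariate_normal(mean=np.zeros(n * p), cov=np.kron(B, A)).pdf(X.flatten("F"))
assert np.isclose(dens, vec_dens)
```

The agreement rests on tr(A^{-1}XB^{-1}X') = vec(X)'(B ⊗ A)^{-1} vec(X) and |B ⊗ A| = |B|^n |A|^p.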
The two tables below synthesise the results for T'_1, T'_{1½}, T'_2 and T'_3, based on tensor decompositions of the precision matrix. More discussion of this is given in Appendix A.

Table 2: Characterizations of left-spherical (LS), multivariate spherical (MS), and vector spherical (VS) distributions and their extensions to elliptically contoured distributions.

Class | Definition | M.G.F. | Class | M.G.F.
LS | ΓX =_d X, Γ ∈ O(n) | ϕ(T'T) | LE | ϕ(t'_i A_{ij} t_j)
MS | P_i x_i =_d x_i, P_i ∈ O(n) | ϕ(diag(T'T)) | ME | ϕ(t'_i A_{ii} t_i)
VS | P vec(X) =_d vec(X), P ∈ O(N) | ϕ(tr(T'T)) | VE | ϕ(Σ_i t'_i A_{ii} t_i)

where T = (t_1, …, t_n) and X = (x_1, …, x_n) are both n × p matrices and the m.g.f. is E exp(Σ_{r,i} t_{ri} x_{ri}).

Thus, if the population is

p_{n,2}(X) = \frac{1}{(2\pi)^n\, |A|\, |B|^{\frac{n}{2}}} \operatorname{etr}\!\left(-\tfrac12 A^{-1} X B^{-1} X'\right), \tag{3.10}

for some 2 × 2 and n × n positive definite matrices B = (b_{ij}) and A = (a_{rs}), the joint distribution of the three product moment coefficients s_{11}, s_{12}, s_{22} in the 2 × 2 real symmetric positive definite matrix S = (s_{ij}) should be of a similar form, i.e.

Table 3: Four Common Matrix Normal Populations T'_1, T'_{1½}, T'_2 and T'_3.

Type | −2 log(m.g.f.) | Parameters | Class
T'_1 | Σ_{i=1}^{p} Σ_{j=1}^{p} t'_i A_{ij} t_j | n(n−1)p²/2 + np | LE
T'_{1½} | Σ_{i=1}^{p} t'_i A_{ii} t_i | n(n+1)p/2 | ME
T'_2 | Σ_{i=1}^{n} γ_{ij} t'_j t_j | n(n+1)/2 + p(p+1)/2 + np | ME (*)
T'_3 | Σ_{i=1}^{n} Σ_{j=1}^{p} α_i β_j t'_j t_j | n(n+1)/2 + p(p+1)/2 | VE (*)

(*) T'_2 is a simultaneously diagonalizable multivariate elliptically contoured distribution, not necessarily vector elliptically contoured. These terminologies are slightly modified from those in Fang and Zhang (1990).

V_{n,2}(S) = c_{n,2}\, |S|^{\frac{n-p-1}{2}}\, \operatorname{etr}\!\left(-q^{-1}B^{-1}S\right) {}_0F_0\!\left(I - \tfrac12 qA^{-1},\, q^{-1}B^{-1}S\right), \quad c_{n,2} = \frac{1}{2^n\, \Gamma_2(\frac{n-1}{2})\, |A|\, |B|^{\frac{n}{2}}}, \quad (q > 0). \tag{3.11}

This extends the result (3.4) obtained earlier.

3.1.4. Moment generating function for X'X

In order to calculate the moment generating function of the three product moment coefficients s_{11}, s_{12}, s_{22} from the population (3.10), the geometric method is still useful. After an affine transformation fixing the origin in the three-dimensional space, i.e. a transformation Z = (x, y) ↦ (λ_1 x, λ_2 y) between the two reciprocal orthogonal transformations with determinants ±1, this becomes the quadratic case of 3.1.2. Thus the moment generating function, after a transformation Z ↦ ZB^{1/2}, becomes

E \operatorname{etr}\!\left(\sum_{i \le j} \gamma_{ij} s_{ij}\right) = \frac{1}{(2\pi)^n\, |A|} \int \operatorname{etr}\!\left(\tfrac12 Z'UZ - \tfrac12 WZ'Z\right) dZ, \tag{3.12}

where U = I − A^{-1}, W = I − B^{1/2}RB^{1/2}, 2R = Γ + I, and Γ = (γ_{ij}) is symmetric. Let ϕ_k, k = 1, 2, …, n, be the characteristic roots of U. Since S = Z'Z is invariant under left multiplication of Z by an orthogonal matrix, we can consider U to be a diagonal matrix with the ϕ_k as diagonal elements. Then (3.12) can be rewritten as

\frac{1}{(2\pi)^n\, |A|} \prod_{k=1}^{n} \iint \exp\!\left(-\tfrac12 (w_{11} - \phi_k)x^2 - w_{12}xy - \tfrac12 (w_{22} - \phi_k)y^2\right) dx\, dy,

and, introducing v_{i1}w_{1j} + v_{i2}w_{2j} = δ_{ij}, this equals

|A|^{-1} |B|^{-\frac{n}{2}}\, {}_1F_0\!\left(\tfrac{n}{2};\, U, W^{-1}\right), \quad \text{where } {}_1F_0\!\left(\tfrac{n}{2};\, U, W^{-1}\right) = \prod_{k=1}^{n} |\delta_{ij} - \phi_k v_{ij}|^{-\frac12}. \tag{3.13}

3.1.5. Distribution of latent roots for X'X

To obtain the joint distribution of the characteristic roots l_1, l_2, …, l_p of S = X'X, the basic Lemma 4 applies. Thus, given the probability density distribution in the real symmetric positive definite matrix S = (s_{ij}) of p(p+1)/2 variables,

p(S) = \frac{\operatorname{etr}(-q^{-1}B^{-1}S)\, |S|^{\frac{n-p-1}{2}}}{2^{\frac{np}{2}}\, \Gamma_p(\frac{n-1}{2})\, |A|^{\frac{p}{2}} |B|^{\frac{n}{2}}}\, {}_0F_0\!\left(I - \tfrac12 qA^{-1},\, q^{-1}B^{-1}S\right),

by integrating H in p(HLH') over O(p), we find the joint distribution of l_1, l_2, …
, l_p is

\frac{1}{\Gamma_p(\frac{n}{2})\, \Gamma_p(\frac{p}{2})\, |A|^{\frac{p}{2}} |B|^{\frac{n}{2}}}\, \frac{\pi^{\frac{p^2}{2}}}{2^{\frac{np}{2}}} \prod_{i=1}^{p} l_i^{\frac{n-p-1}{2}} \prod_{i<j}^{p} (l_i - l_j)\, {}_0F_0(A^{-1}, LB^{-1}). \tag{3.14}

We have discussed the product moment distribution for a matrix normal population T'_1, T'_{1½}, T'_2 or T'_3, at least implicitly. However, this has only been done in the central case, i.e., when the mean of the matrix is elementwise zero. The non-central distribution, arising in the general problem of matrix-variate analysis of variance, still plays an important role. The related distribution will be discussed in the final section for completeness.

3.2. The general case: any p

Let us adopt another notational system, such as p(x_{ri}) for the joint distribution of (x_{ri}) and det(a_{rs}) for the determinant of (a_{rs}), as a reminder of the change of size in rows and columns.

We can anticipate the form the general result may be expected to take by comparing equation (3.11) with the corresponding result (1.1) when A = I. In fact, we have for simplicity the probability density function for the np variables (x_{ri}), with n × n and p × p real symmetric matrices (a_{rs}) and (b_{ij}),

p(x_{ri}) = \frac{\det(a_{rs})^{\frac{p}{2}}\, \det(b_{ij})^{\frac{n}{2}}}{(2\pi)^{\frac{np}{2}}} \exp\!\left(-\tfrac12 \sum_{i,j=1}^{p} b_{ij} t_{ij}\right), \qquad t_{ij} = \sum_{r,s=1}^{n} a_{rs} x_{ri} x_{sj}, \tag{3.15}

analogous to (3.10). We shall begin with the induction method employed
by Hsu (1939) to prove that the p(p+1)/2 variables have the probability density function

V_{n,p}(s_{ij}) = c_{n,p}\, \det(s_{ij})^{\frac{n-p-1}{2}} \exp\!\left(-q^{-1} \sum_{i,j=1}^{p} b_{ij} s_{ij}\right) {}_0F_0\!\left(\delta_{rs} - \tfrac12 q\, a_{rs},\; q^{-1} \sum_{i,j=1}^{p} b_{ij} s_{ij}\right),

c_{n,p} = \frac{\det(a_{rs})^{\frac{p}{2}}\, \det(b_{ij})^{\frac{n}{2}}}{2^{\frac{np}{2}}\, \Gamma_p(\frac{n-1}{2})}, \qquad s_{ij} = \sum_{r=1}^{n} x_{ri} x_{rj}, \quad (q > 0), \tag{3.16}

for all p × p real symmetric positive definite matrices (s_{ij}); elsewhere the density is zero. When p = 1, the theorem is true, since (3.16) represents the χ²-distribution with n degrees of freedom. Let us assume that the theorem is true for p−1 variables, and then establish its validity for p variables; the proof by induction is then complete. Let us perform the transformation

y_{li} = \sum_{r=1}^{n} c_{rl} x_{ri} \quad (l = 1, 2, \dots, n-1), \qquad y_{ni} = \tfrac12 q\, t_{pp}^{-\frac12} \sum_{r,s=1}^{n} a_{rs} x_{ri} x_{sp}, \tag{3.17}

for i = 1, 2, …, p−1, where the c_{rl} are chosen so as to make (3.17) an orthogonal transformation of the variables (x_{1i}, x_{2i}, …, x_{ni}) to the new variables (y_{1i}, y_{2i}, …, y_{ni}) for each fixed i (1 < i ≤ p−1). This gives

p(y_{ri}, x_{rp}) = \frac{\det(a_{rs})^{\frac{p}{2}}\, \det(b_{ij})^{\frac{n}{2}}}{(2\pi)^{\frac{np}{2}}} \exp\!\left(-q^{-1} \sum_{i,j=1}^{p-1} b_{ij} s'_{ij}\right) \exp\!\left\{-\tfrac14 q \sum_{i,j=1}^{p-1} b_{ij} y_{ni} y_{nj} - \tfrac12 q\, t_{pp}^{\frac12} \sum_{i=1}^{p-1} b_{pi} y_{ni} - \tfrac12 b_{pp} t_{pp}\right\},

t_{ij} = 2q^{-1} s'_{ij} + \tfrac12 q\, y_{ni} y_{nj}, \qquad t_{ip} = \tfrac12 q\, t_{pp}^{\frac12}\, y_{ni} \quad (i, j = 1, 2, \dots, p-1), \qquad s'_{ij} = \tfrac12 q \sum_{r=1}^{n-1} y_{ri} y_{rj} \quad (i, j = 1, 2, \dots, p-1). \tag{3.18}

By the assumption, the s'_{ij} are jointly distributed with the density V_{n-1,p-1}(s'_{ij}). Also, for the variables x_{1p}, x_{2p}, …, x_{np}, we can introduce polar coordinates, namely t_{pp} and n−1 angles, and get rid of the angles by integration. We then obtain

p(s'_{ij}, y_{ni}, t_{pp}) = c_{n,p}\, t_{pp}^{\frac{n-2}{2}}\, \det(s'_{ij})^{\frac{n-p-1}{2}} \exp\!\left\{-\sum_{i,j=1}^{p-1} b_{ij}\left(q^{-1}s'_{ij} + \tfrac14 q\, y_{ni}y_{nj}\right) - \tfrac12 q\, t_{pp}^{\frac12} \sum_{i=1}^{p-1} b_{ip} y_{ni} - \tfrac12 b_{pp} t_{pp}\right\}, \tag{3.19}

wherever (s'_{ij}) is a positive definite matrix, and zero otherwise. Now we can introduce the final set of variables s_{ij} by the transformation

s'_{ij} = s_{ij} - \frac{s_{ip} s_{jp}}{s_{pp}}, \qquad y_{ni} = \frac{s_{ip}}{\sqrt{s_{pp}}} \quad (i, j = 1, 2, \dots, p-1).

The Jacobian of this transformation is s_{pp}^{-\frac{p-1}{2}}. We obtain (3.16) by the determinant formula for block matrices, i.e., det(s_{ij}) = s_{pp} · det(s_{ij} − s_{ip}s_{jp}/s_{pp}).

4.
Non-central Distributions and Two-Sample Discriminant Analysis

4.1. Non-central Distributions

Theorem 2. Suppose X is an n × p real matrix from the matrix normal distribution T'_3, and M is an arbitrary real n × p matrix. Let Y = X + M and S = Y'Y.

1. The probability density distribution of S is the central distribution in Theorem 1.1 multiplied by

\operatorname{etr}\!\left(-\tfrac12 \Omega\right) {}_0F_1\!\left(\tfrac{n}{2};\, \tfrac14 \Psi S\right),

where Ω = M'A^{-1}MB^{-1} and Ψ = B^{-1}M'A^{-2}MB^{-1}.

2. The moment generating function is the central function in Theorem 1.2 multiplied by

\operatorname{etr}\!\left(-\tfrac12 \Omega\right) \operatorname{etr}\!\left(\tfrac14 \Psi W^{-1}\right),

where W is defined in Theorem 1.2.

3. The joint distribution of the latent roots l_1, l_2, …, l_p of S is the central distribution in Theorem 1.3 multiplied by

\operatorname{etr}\!\left(-\tfrac12 \Omega\right) {}_0F_1\!\left(\tfrac{n}{2};\, \tfrac14 \Psi, L\right),

where L = diag(l_i), l_1 > l_2 > ⋯ > l_p; elsewhere zero.

Proof of Theorem 2. The proof of (1) is direct, applying the James (1955) integral

\int_{O(n,p)} \operatorname{etr}(XH')\, dH = {}_0F_1\!\left(\tfrac12 n;\, \tfrac14 X'X\right),

where the integral runs over the Stiefel manifold of n × p matrices, i.e. {H ∈ R^{n×p} : H'H = I_p}, assuming n ≥ p. Similarly, for (2) we have

\operatorname{etr}\!\left(-\tfrac12 \Omega\right) \int_{O(n)} dH_1 \int_{O(p)} \operatorname{etr}\!\left((B^{-1}M'A^{-1}) H_1 X H_2\right) dH_2
= \operatorname{etr}\!\left(-\tfrac12 \Omega\right) \int_{O(n)} {}_0F_1\!\left((B^{-1}M'A^{-2}MB^{-1}) H_1 XX' H_1'\right) dH_1
= \operatorname{etr}\!\left(-\tfrac12