URL Source: https://arxiv.org/html/2211.16467
Linear Causal Disentanglement via Interventions
Chandler Squires * 1 2 Anna Seigal * 1 3 Salil Bhate 1 Caroline Uhler 1 2
*Equal contribution. ¹Broad Institute of MIT and Harvard. ²Laboratory for Information and Decision Systems, MIT. ³School of Engineering and Applied Sciences, Harvard University. Correspondence to: Chandler Squires <csquires@mit.edu>.
Abstract
Causal disentanglement seeks a representation of data involving latent variables that are related via a causal model. A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique. In this paper, we study observed variables that are a linear transformation of a linear latent causal model. Data from interventions are necessary for identifiability: if one latent variable is missing an intervention, we show that there exist distinct models that cannot be distinguished. Conversely, we show that a single intervention on each latent variable is sufficient for identifiability. Our proof uses a generalization of the RQ decomposition of a matrix that replaces the usual orthogonal and upper triangular conditions with analogues depending on a partial order on the rows of the matrix, with partial order determined by a latent causal model. We corroborate our theoretical results with a method for causal disentanglement. We show that the method accurately recovers a latent causal model on synthetic and semi-synthetic data and we illustrate a use case on a dataset of single-cell RNA sequencing measurements.
1 Introduction
The goal of representation learning is to find a description of data that is interpretable, useful for reasoning, and generalizable. Such a representation disentangles the data into conceptually distinct variables. Traditionally, conceptual distinctness of variables has meant statistical independence. This is the setting of independent component analysis (Comon, 1994). However, human reasoning often involves variables that are not statistically independent: for example, the presence of a sink and the presence of a mirror in an image. It is therefore natural to generalize conceptual distinctness to variables that are causally autonomous, i.e., variables on which interventions can be performed separately. This motivates causal disentanglement (Yang et al., 2021), the recovery of causally autonomous variables from data.
In this paper, we study the identifiability of causal disentanglement; i.e., its uniqueness. We adopt a generative perspective, as in (Bengio et al., 2013; Moran et al., 2021). We assume that the observed variables are generated in two steps. First, latent variables 𝑍 are sampled from a distribution ℙ ( 𝑍 ) . Then, the observed variables 𝑋 are the image of the latent variables under a deterministic mixing function. We assume that the latent variables are generated according to a linear structural equation model and that the mixing function is an injective linear map. Recent work has studied identifiability of various settings in representation learning (Khemakhem et al., 2020; Ahuja et al., 2021). A common assumption for identifiability is that variables are observed across multiple contexts, each affecting the latent distribution ℙ ( 𝑍 ) but not the mixing function. In our setup, each context is either an intervention on a latent variable, or is observational, i.e., has no interventions. We use the same terminology for interventions as Squires & Uhler (2022). From most to least general, a soft intervention on 𝑍 𝑖 changes the dependency of 𝑍 𝑖 on its direct causes, a perfect intervention removes this dependency but allows for stochasticity of 𝑍 𝑖 , and a do-intervention sets 𝑍 𝑖 to a deterministic value.
Our main result is that our linear causal disentanglement setup is identifiable if, in addition to an observational context, for each latent variable 𝑍 𝑖 , there is a context where 𝑍 𝑖 is the intervened variable under a perfect intervention; see Section 3.2. This is a sufficient condition for identifiability. Furthermore, we show that the condition of at least one intervention per latent node is necessary in the worst case: if some latent node is not intervened in any context, then there exist latent causal representations that are not identifiable; see Section 3.3. Our focus in this paper is on identifiability guarantees. Nonetheless, we convert our proofs into a method for causal disentanglement in the finite-sample setting. In Section 4, we apply the method to synthetic and semi-synthetic data and show that it recovers the generative model, and we compute a linear causal disentanglement on a single-cell RNA sequencing dataset.
1.1 Motivating Example
Consider two latent variables $Z = (Z_1, Z_2)$. Assume that $X = (X_1, X_2)$ is observed in two contexts $k \in \{0, 1\}$, that $X = GZ$ in both contexts, and that in context $k$, $Z = A^k Z + (\Omega^k)^{1/2} \varepsilon$ for $\varepsilon \sim \mathcal{N}(0, I)$.
Let

$$A^0 = A^1 = \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}, \quad \Omega^0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \Omega^1 = \begin{bmatrix} 1 & 0 \\ 0 & 1/4 \end{bmatrix}, \quad G = \begin{bmatrix} -2 & 2 \\ 2 & -1 \end{bmatrix}.$$
Context $k = 1$ is an intervention on $Z_2$ that changes its variance. The covariances of $X$ in contexts $k = 0$ and $k = 1$ are, respectively,

$$\Sigma^0 = \begin{bmatrix} 20 & -16 \\ -16 & 13 \end{bmatrix} \quad \text{and} \quad \Sigma^1 = \begin{bmatrix} 8 & -7 \\ -7 & 25/4 \end{bmatrix},$$

since $\Sigma^k = G (I - A^k)^{-1} \Omega^k (I - A^k)^{-\top} G^\top$. However, the following parameters give the same covariance matrices:
$$\hat{A}^0 = \hat{A}^1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad \hat{\Omega}^0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \hat{\Omega}^1 = \begin{bmatrix} 1 & 0 \\ 0 & 1/4 \end{bmatrix}, \quad \hat{G} = \begin{bmatrix} -2 & 4 \\ 2 & -3 \end{bmatrix}.$$
The second set of parameters implies independence of $Z_1$ and $Z_2$, since $(\hat{A}^0)_{1,2} = 0$, whereas the original parameters imply non-independence, since $(A^0)_{1,2} \neq 0$. This non-identifiability holds for generic $A^0$, $A^1$, $\Omega^0$, $\Omega^1$, and $G$, where $A^1$ comes from an intervention on $Z_2$; see Section 3.3. This non-identifiability extends to any number of latent variables $d$: we show that, in the worst case, non-identifiability holds when fewer than $d + 1$ contexts are observed.
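The covariance computation in this example is easy to reproduce numerically. The following NumPy sketch (our own illustration, with the example's matrices hard-coded) confirms that both parameterizations yield the same $\Sigma^0$ and $\Sigma^1$:

```python
import numpy as np

def cov(G, A, Om):
    # Sigma^k = G (I - A^k)^{-1} Omega^k (I - A^k)^{-T} G^T
    M = np.linalg.inv(np.eye(2) - A)
    return G @ M @ Om @ M.T @ G.T

A = np.array([[0., -1.], [0., 0.]])      # A^0 = A^1
Om0 = np.eye(2)                          # Omega^0
Om1 = np.diag([1., 0.25])                # Omega^1
G = np.array([[-2., 2.], [2., -1.]])

A_hat = np.zeros((2, 2))                 # alternative model: no latent edges
G_hat = np.array([[-2., 4.], [2., -3.]])

Sigma0, Sigma1 = cov(G, A, Om0), cov(G, A, Om1)
same0 = np.allclose(Sigma0, cov(G_hat, A_hat, Om0))
same1 = np.allclose(Sigma1, cov(G_hat, A_hat, Om1))
```

Both `same0` and `same1` come out true, matching the displayed matrices.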
1.2 Related Work
The growing field of causal representation learning blends techniques from multiple lines of work. Chief among these are identifiable representation learning, causal structure learning, and latent DAG (directed acyclic graph) learning.
Identifiable representation learning. The identifiability of the linear independent component analysis (ICA) model was given in Comon (1994). Identifiability of a nonlinear ICA model is studied in (Hyvarinen et al., 2019; Khemakhem et al., 2020), in the presence of auxiliary variables. The ICA model imposes the stringent condition that the latent variables are independent (in the linear setting) or conditionally independent given the auxiliary variables (in the nonlinear setting). Recent works on identifiable representation learning (Ahuja et al., 2021; Zimmermann et al., 2021) introduce structure on the data generating process to circumvent the independence condition. However, they do not consider latent variables that are causally related.
| | Setting | Graphical conditions | Identification result |
|---|---|---|---|
| Silva et al. (2006) | LNG | All children pure | Identified up to Markov equivalence. |
| Halpern et al. (2015) | Dis | 1 pure child per latent | Identified up to Markov equivalence. |
| Cai et al. (2019) | LNG | 2 pure children per latent | Fully identified. |
| Xie et al. (2020) | LNG | 2 pure children per latent | Fully identified. |
| Xie et al. (2022) | LNG | 2 pure children per latent* | Fully identified. |
| Kivva et al. (2021) | Dis | No twins | Identified up to Markov equivalence. |
| Liu et al. (2022) | LG | None | Fully identified from 2\|𝑉\| perfect interventions if \|𝐸\| ≤ \|𝑉\|. |
| Ahuja et al. (2022b) | Poly | None | Fully identified from \|𝑉\| do-interventions. |
| This paper | LNG or LG | None | Fully identified from \|𝑉\| perfect interventions. |

Table 1: Settings from prior works on learning latent DAG models. LNG is short for linear non-Gaussian, LG for linear Gaussian, Dis for discrete, and Poly for polynomial mixing. In *, pure children are allowed to be latent. The number of nodes and the number of edges in the latent graph are denoted |𝑉| and |𝐸|, respectively.
Causal Structure Learning. Causal structure is identifiable up to an equivalence class that depends on the available interventional data (Verma & Pearl, 1990; Hauser & Bühlmann, 2012; Yang et al., 2018; Squires et al., 2020; Jaber et al., 2020). See Squires & Uhler (2022) for a recent review. A key line of work (Eberhardt et al., 2005; Hyttinen et al., 2013) characterizes the interventions necessary and sufficient to ensure that the causal structure is fully identifiable; i.e., that the equivalence class is of size one. In particular, Eberhardt et al. (2005) showed that 𝑑 − 1 interventions are in the worst case necessary to fully identify a causal DAG model on 𝑑 nodes. The current paper extends this line of work to DAG models over latent variables.
Learning latent DAG models. The task of learning a DAG over latent variables dates back at least to Silva et al. (2006), who introduced the notion of a pure child: an observed variable 𝑋 𝑖 with only one latent parent; such an 𝑋 𝑖 is also called an anchor (Halpern et al., 2015; Saeed et al., 2020). The method of Silva et al. (2006) requires that all observed variables are pure children. Recent works relax this assumption by studying the linear non-Gaussian setting, where all latent and observed variables are linear functions of their parents plus independent non-Gaussian noise. For example, Cai et al. (2019) propose a method that learns a latent DAG under the assumption that each latent variable has at least two pure children. The pure child assumption can be extended to allow subsets of latent variables with the same observed children, as in Xie et al. (2020), which introduces the Generalized Independent Noise condition. This condition was used by Xie et al. (2022) to permit latent variables with no observed children; i.e., a hierarchical latent model.
Other works consider the discrete setting, requiring that the latent and observed variables are discrete (Halpern et al., 2015) or that the latent variables are discrete (Kivva et al., 2021). The paper Kivva et al. (2021) relaxes the pure child assumption, as follows. The children of node 𝑍 𝑖 are the variables with a directed edge from 𝑍 𝑖 . The no twins assumption says that the observed children of any two latent nodes are distinct sets. A similar assumption called strong non-redundancy appears in Adams et al. (2021), which considers models whose latent variables can be downstream of observed variables. See Appendix A. These works require sparsity in the map between latent and observed variables: they do not allow all observed variables to be children of all latent variables, which is the setting of the present paper.
A number of recent works discard the sparsity requirement. Ahuja et al. (2022a) and Brehmer et al. (2022) learn a latent DAG from paired counterfactual data. In contrast, we study unpaired data, which is more realistic in applications such as biology (Stark et al., 2020). To the best of our knowledge, only two works consider unpaired data without sparsity assumptions. Liu et al. (2022) study a linear Gaussian model over 𝑑 latent variables, a nonlinear mixing function, and vector-valued contexts. Their identifiability result only applies to our setting if the latent graph has at most as many edges as nodes; see Appendix K. In that case, their result implies that 2𝑑 interventions suffice for identifiability. We strengthen this result, showing that (i) 𝑑 interventions are sufficient and (ii) no restrictions on the latent graph are required. Moreover, we show that 𝑑 interventions are, in the worst case, necessary. Such necessary conditions do not appear in prior work on identifying latent DAGs. Finally, contemporaneous work (Ahuja et al., 2022b) shows that a latent DAG is identifiable from the more restricted class of do-interventions, while allowing non-linear relationships. See Table 1 for a summary of prior work.
2 Setup
We consider $d$ latent variables $Z = (Z_1, \ldots, Z_d)$, generated according to a linear structural equation model. We index contexts by $k \in \{0\} \cup [K]$, where $[K] := \{1, \ldots, K\}$. The linear structural equation models in each context are related: context $k = 0$ is observational, while contexts $k \in [K]$ are interventional. We now state the assumptions for our model; see also Fig. 1.
Assumption 1.

(a) Linear latent model: Let $\mathcal{G}$ be a DAG with nodes ordered so that an edge $j \to i$ implies $j > i$. The latent variables $Z$ follow a linear structural equation model: in context $k$, the latent variables $Z$ satisfy

$$Z = A^k Z + (\Omega^k)^{1/2} \varepsilon, \qquad \mathrm{Cov}(\varepsilon) = I_d,$$

where $I_d \in \mathbb{R}^{d \times d}$ is the identity matrix, $\Omega^k \in \mathbb{R}^{d \times d}$ is diagonal with positive entries, and $A^k \in \mathbb{R}^{d \times d}$ has $(A^k)_{ij} \neq 0$ if and only if there is an edge $j \to i$ in $\mathcal{G}$. That is, in context $k$,

$$Z = (B^k)^{-1} \varepsilon, \quad \text{where } B^k = (\Omega^k)^{-1/2} (I_d - A^k). \tag{1}$$

(b) Generic single-node interventions: For each $k \in [K]$, there exists $i_k \in \{1, \ldots, d\}$ such that

$$B^k = B^0 + \mathbf{e}_{i_k} \mathbf{c}_k^\top;$$

further, $(B^k)^\top \mathbf{e}_{i_k}$ is not a multiple of $(B^0)^\top \mathbf{e}_{i_k}$ unless $i_k$ has no parents in $\mathcal{G}$.

(c) Linear observations: Fix $p \geq d$. There is a full rank matrix $G \in \mathbb{R}^{p \times d}$ such that $X = GZ$ in every context $k$. Let $H := G^\dagger$ denote its Moore-Penrose pseudoinverse. We set the entry of largest absolute value in each row of $H$ to 1. If multiple entries in a row have the same absolute value, we set the leftmost entry to be positive.
Our strongest results hold under one additional assumption.
Assumption 2.

Perfect interventions: For each $k \in [K]$, there exists $i_k \in \{1, \ldots, d\}$ such that

$$B^k = B^0 + \mathbf{e}_{i_k} \mathbf{c}_k^\top, \quad \text{where } \mathbf{c}_k = \lambda_k \mathbf{e}_{i_k} - (B^0)^\top \mathbf{e}_{i_k} \text{ for some } \lambda_k > 0.$$
Remark 1 (The parts of Assumption 1 that hold without loss of generality).

Taking $\mathrm{Var}(\varepsilon_i) = 1$ for all $i$ holds without loss of generality, since scaling can be absorbed into the matrix $\Omega^k$. A linear structural equation model is causally sufficient if $\varepsilon_i \perp\!\!\!\perp \varepsilon_j$ for all $i \neq j$. Thus, for a causally sufficient linear structural equation model, we have $\mathrm{Cov}(\varepsilon) = I_d$ in Assumption 1(a) without loss of generality. The ordering of nodes in Assumption 1(a) is also without loss of generality: a permutation of the latent nodes can be absorbed into $G$. Our ordering makes the matrices $A^k$ upper triangular.
The scaling of $H$ in Assumption 1(c) is without loss of generality, as follows. If $\{B^k\}_{k=0}^K$ and $H$ satisfy Assumption 1, then $X = (B^k H)^\dagger \varepsilon$. Consider the matrices $\{B^k \Lambda\}_{k=0}^K$ and $\Lambda^{-1} H$, for $\Lambda$ diagonal with positive entries. Observe that $X' = (B^k \Lambda \Lambda^{-1} H)^\dagger \varepsilon$ has the same distribution as $X$ in context $k$. The alternative matrices satisfy Assumption 1, except for the scaling condition on $H$. Assumption 1(c) therefore fixes the scaling indeterminacy of each node.
The genericity condition in Assumption 1(b) automatically holds for perfect interventions. It fails to hold only for soft interventions that change the variance but not the edge weights, i.e., $(\Omega^k)_{i_k, i_k} \neq (\Omega^0)_{i_k, i_k}$ and $(A^k)^\top \mathbf{e}_{i_k} = (A^0)^\top \mathbf{e}_{i_k}$. We show the importance of the genericity assumption for the identifiability of causal disentanglement in Appendix B.
Figure 1: The proposed setup. The latent variables $Z = (Z_1, \ldots, Z_d)$ follow a linear DAG model, with contexts $k = 1, \ldots, K$ being single-node interventions on targets $i_1, \ldots, i_K$. The observed variables $X = (X_1, \ldots, X_p)$ are an injective linear function $X = GZ$ of the latent variables, where $G \in \mathbb{R}^{p \times d}$ does not vary across contexts.
We give an example of a setting in which Assumption 1 might apply. Suppose 𝑍 is the internal state of a cell (e.g., the concentrations of proteins, the locations of organelles, etc.) and that each context is an exposure to a different small molecule. Assumption 1(b) posits that each small molecule has a highly selective effect, modifying only one cellular mechanism. Assumption 2 posits that each small molecule completely disrupts the modified mechanism. While one does not expect all small molecules to be highly selective, one could filter based on selectivity.
In Appendix G, we describe a hypothesis test for the implications of Assumption 1(b), and we show empirically that this test effectively determines model membership.
The covariance of $X$ in context $k$ is rank deficient when $d < p$, since $X = GZ$. We therefore define the precision matrix of $X$ in context $k$ to be the pseudoinverse of the covariance matrix, $\Theta^k := \mathbb{E}[X X^\top]^\dagger$. Then

$$\Theta^k = H^\top (B^k)^\top B^k H, \tag{2}$$

by Prop. 6 in Appendix C, since $\mathrm{Cov}_k(Z) = ((B^k)^\top B^k)^{-1}$.
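Equation (2) can be checked numerically on random parameters. The sketch below (our own, assuming NumPy) draws an upper-triangular $B$ and a generic $G$, then compares the pseudoinverse of the covariance of $X$ with $H^\top B^\top B H$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 3, 5

# upper-triangular B with positive diagonal, as in Equation (1)
B = np.triu(rng.normal(size=(d, d)))
np.fill_diagonal(B, np.abs(np.diag(B)) + 1.0)
G = rng.normal(size=(p, d))        # generic full-rank mixing
H = np.linalg.pinv(G)              # H = G^+

cov_Z = np.linalg.inv(B.T @ B)     # Cov(Z) = (B^T B)^{-1}
cov_X = G @ cov_Z @ G.T            # rank d < p, hence rank deficient
Theta = np.linalg.pinv(cov_X)      # precision matrix of X
match = np.allclose(Theta, H.T @ B.T @ B @ H)
```

Here `match` is true, and `cov_X` indeed has rank $d = 3$.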
We consider an unknown latent DAG. Each candidate DAG has unknown weights on its edges, unknown variances on its nodes, unknown new weights under each intervention, and an unknown mixing map to the observed variables. That is, our goal is to decompose the precision matrices $\{\Theta^k\}_{k=0}^K$ in Equation (2) to recover $H$ and $\{B^k\}_{k=0}^K$.
We recall some graph-theoretic notions. The parents of node $i$ are $\mathrm{pa}_{\mathcal{G}}(i) = \{j \mid j \to i \text{ in } \mathcal{G}\}$, and we define $\overline{\mathrm{pa}}_{\mathcal{G}}(i) := \mathrm{pa}_{\mathcal{G}}(i) \cup \{i\}$. Similarly, $\mathrm{an}_{\mathcal{G}}(i)$ denotes the ancestors of $i$ in $\mathcal{G}$, the vertices $j$ with a directed path from $j$ to $i$. We define $\overline{\mathrm{an}}_{\mathcal{G}}(i) := \mathrm{an}_{\mathcal{G}}(i) \cup \{i\}$ and $\overline{\mathrm{an}}_{\mathcal{G}}(\mathcal{I}) := \cup_{i \in \mathcal{I}} \overline{\mathrm{an}}_{\mathcal{G}}(i)$. The source nodes of $\mathcal{G}$ are the nodes $i$ with $\mathrm{pa}_{\mathcal{G}}(i) = \emptyset$. We drop the subscript $\mathcal{G}$ when the graph is clear from context.
The transitive closure of $\mathcal{G}$, denoted $\overline{\mathcal{G}}$, is the DAG with $\mathrm{pa}_{\overline{\mathcal{G}}}(i) = \mathrm{an}_{\mathcal{G}}(i)$. Given a DAG $\mathcal{G}$, define the partial order $\prec_{\mathcal{G}}$ by $i \prec_{\mathcal{G}} j$ if and only if $j \in \mathrm{an}_{\mathcal{G}}(i)$. Thus, the transitive closure is the graph with $j \to i$ whenever $i \prec_{\mathcal{G}} j$.
To decompose the precision matrices in Equation (2), we introduce a matrix decomposition defined on a partial order. Recall that the RQ decomposition writes $H \in \mathbb{R}^{d \times p}$ as $H = RQ$ for an upper triangular $R \in \mathbb{R}^{d \times d}$ and orthogonal $Q \in \mathbb{R}^{d \times p}$. We generalize the RQ decomposition. (The RQ decomposition is used here, rather than the more familiar QR decomposition, because it gives an expression for the rows of $H$, which each correspond to one latent variable.)
Definition 1 (The partial order RQ decomposition).

Given a partial order $\prec$, the partial order RQ decomposition writes $H \in \mathbb{R}^{d \times p}$ as $H = RQ$, where $R \in \mathbb{R}^{d \times d}$ satisfies $R_{ii} \geq 0$ and $R_{ij} = 0$ unless $i \preceq j$, and where $\mathbf{q}_i$, the $i$-th row of $Q \in \mathbb{R}^{d \times p}$, has norm one and is orthogonal to $\langle \mathbf{q}_j : i \prec j \rangle$.
We recover the usual reduced RQ decomposition (Trefethen & Bau III, 1997) when $\prec$ is the total order $1 < \cdots < d$. We construct the partial order RQ decomposition in Appendix D. Finally, given a positive definite matrix $M \in \mathbb{R}^{d \times d}$, the Cholesky factor $U \in \mathbb{R}^{d \times d}$, denoted $\mathrm{cholesky}(M)$, is the unique upper triangular matrix with positive diagonal such that $M = U^\top U$.
3 Identifiability of Causal Disentanglement
We establish the sufficiency and worst case necessity of one intervention per latent node for identifiability of our causal disentanglement problem. The following result describes recovery of 𝒢 . Later, we discuss identifiability of the parameters in our setup. Since the labeling of latent nodes is unimportant, 𝒢 is recovered if it is found up to relabeling.
Theorem 1.

Assume the setup in Assumption 1 with $d$ latent variables. Then $d$ interventions are sufficient and, in the worst case, necessary to recover $\overline{\mathcal{G}}$ from $\{\Theta^k\}_{k \in \{0\} \cup [K]}$. If Assumption 2 also holds, then $d$ interventions are sufficient and, in the worst case, necessary to recover $\mathcal{G}$ from $\{\Theta^k\}_{k \in \{0\} \cup [K]}$.
3.1 Preliminaries
We note the following basic fact, where $\mathbf{v}^{\otimes 2} := \mathbf{v}\mathbf{v}^\top$:

Fact 1.

Let $B \in \mathbb{R}^{d \times d}$. Then $B^\top B = \sum_{i=1}^d (B^\top \mathbf{e}_i)^{\otimes 2}$.
We give a proof in Appendix F. This fact gives the key identity that drives our identifiability results.
Proposition 1.

Consider the setup in Assumption 1. Then, for any $k \in [K]$,

$$\Theta^k - \Theta^0 = (H^\top (B^k)^\top \mathbf{e}_{i_k})^{\otimes 2} - (H^\top (B^0)^\top \mathbf{e}_{i_k})^{\otimes 2}.$$

In particular, the rank of the difference $\Theta^k - \Theta^0$ is 1 if $i_k$ is a source node in $\mathcal{G}$, and 2 otherwise.
Proof.

By Assumption 1, $(B^k)^\top \mathbf{e}_i = (B^0)^\top \mathbf{e}_i$ for all $i \neq i_k$. Using Fact 1, we have

$$(B^k)^\top B^k - (B^0)^\top B^0 = ((B^k)^\top \mathbf{e}_{i_k})^{\otimes 2} - ((B^0)^\top \mathbf{e}_{i_k})^{\otimes 2}.$$

Recall from Equation (2) that $\Theta^k = H^\top (B^k)^\top B^k H$. The result follows from left-multiplying both sides of the displayed equation by $H^\top$ and right-multiplying by $H$. This shows that $\mathrm{rank}(\Theta^k - \Theta^0) \leq 2$. For a source node, both vectors $(B^k)^\top \mathbf{e}_{i_k}$ and $(B^0)^\top \mathbf{e}_{i_k}$ have just one nonzero entry, and $\mathrm{rank}(\Theta^k - \Theta^0) = 1$. Otherwise, the vectors have more than one nonzero entry and, by the genericity condition in Assumption 1(b), the difference $\Theta^k - \Theta^0$ has rank two. ∎
We can reduce a more general causal disentanglement problem to our setting, as we explain in Appendix F. First, we can count the latent dimension, since $\mathrm{rank}(\Theta^k) = d$ for any $k$. Second, we can identify which environments correspond to interventions on the same intervention target; see Prop. 9. Finally, we can identify which environment is observational using rank constraints; see Prop. 10. Thus, we assume without loss of generality that $d$ is known, that the observational environment is known, and that each node is intervened on in only one context.
3.2 Sufficiency
We define $S(\mathcal{G})$ to be the set of permutations on $d$ letters such that $\sigma(j) > \sigma(i)$ for all edges $j \to i$. For example, if $\mathcal{G}$ is a complete graph, then $S(\mathcal{G})$ contains only the identity. If $\mathcal{G}$ has no edges, then $S(\mathcal{G})$ is the group of all permutations on $d$ letters. The permutation matrix corresponding to permutation $\sigma$ is $P_\sigma \in \mathbb{R}^{d \times d}$ with $(P_\sigma)_{ij} = \mathbb{1}\{i = \sigma(j)\}$. Our main sufficiency result is the following.
Theorem 2.

Assume the set-up in Assumptions 1 and 2 with one intervention on each latent node. Then the graph $\mathcal{G}$, the intervention targets $i_k$, and the parameters are identifiable up to $S(\mathcal{G})$: given a solution $(B^0, \ldots, B^K, H)$, the set of solutions is $\{(P_\sigma B^0 P_\sigma^\top, \ldots, P_\sigma B^K P_\sigma^\top, P_\sigma H) : \sigma \in S(\mathcal{G})\}$.

Theorem 2 says that solutions to the causal disentanglement problem are unique up to permutations of the latent nodes that preserve the property that $j \to i$ implies $j > i$. First, we verify that each permutation in $S(\mathcal{G})$ gives a solution.
Proposition 2.

Assume the set-up in Assumption 1. Given a solution $(B^0, \ldots, B^K, H)$ to Equation (2) for $k \in \{0\} \cup [K]$, the matrices $(P_\sigma B^0 P_\sigma^\top, \ldots, P_\sigma B^K P_\sigma^\top, P_\sigma H)$ are a valid solution whenever $\sigma \in S(\mathcal{G})$.

Proof.

Let $\{B^k\}_{k=0}^K$ and $H$ satisfy Assumption 1 and Equation (2). Define $B^k(\sigma) = P_\sigma B^k P_\sigma^\top$ and $H(\sigma) = P_\sigma H$ for $\sigma \in S(\mathcal{G})$. Then $\Theta^k = H(\sigma)^\top B^k(\sigma)^\top B^k(\sigma) H(\sigma)$. The matrices $B^k(\sigma)$ are upper triangular, as follows. For all $i, j \in [d]$, we have $(B^k(\sigma))_{\sigma(i), \sigma(j)} = (B^k)_{ij}$. Hence $B^k(\sigma)$ is upper triangular when $(B^k)_{ij} = 0$ for all $i, j \in [d]$ with $\sigma(i) > \sigma(j)$. This holds since $\sigma \in S(\mathcal{G})$. Moreover, these matrices also satisfy Assumption 1(b) with intervention target $\sigma(i_k)$ in context $k$. Finally, $H(\sigma)$ satisfies Assumption 1(c), since we only permute the rows of $H$. ∎
Algorithm 1 ID-Ancestors

1: Input: $\Theta^k$, $\Theta^0$, $\{\hat{\mathbf{q}}_i\}_{i \in \mathcal{I}}$
2: Output: vector $\hat{\mathbf{q}}_k$, ancestor set $\mathcal{A}$
3: Let $\mathcal{A} = \mathcal{I}$
4: for $i \in \mathcal{I}$ do
5: Let $W_{\neg i} = \langle \hat{\mathbf{q}}_j : j \in \mathcal{I} \setminus \{i\} \rangle$
6: Let $V_{\neg i} = \mathrm{proj}_{W_{\neg i}^\perp} \mathrm{rowspan}(\Theta^k - \Theta^0)$
7: If $\dim(V_{\neg i}) = 1$, let $\mathcal{A} = \mathcal{A} \setminus \{i\}$
8: end for
9: Let $W = \langle \hat{\mathbf{q}}_a : a \in \mathcal{A} \rangle$
10: Let $V = \mathrm{proj}_{W^\perp} \mathrm{rowspan}(\Theta^k - \Theta^0)$
11: Take $\hat{\mathbf{q}}_k$ with first nonzero entry positive and $\|\hat{\mathbf{q}}_k\|_2 = 1$, such that $\langle \hat{\mathbf{q}}_k \rangle = V$
12: return $\hat{\mathbf{q}}_k$, $\mathcal{A}$

Algorithm 2 ID-PartialOrder

1: Input: precision matrices $(\Theta^0, \Theta^1, \ldots, \Theta^K)$, rank $d$
2: Output: factor $\hat{Q}$, partial order $\prec$
3: Let $\mathcal{I}_0 = \{\}$, $\hat{Q} = \mathbf{0}_{d \times d}$
4: for $t = 1, \ldots, K$ do
5: Let $W_t = \langle \hat{\mathbf{q}}_i : i \in \mathcal{I}_{t-1} \rangle$
6: Let $V_k = \mathrm{proj}_{W_t^\perp} \mathrm{rowspan}(\Theta^k - \Theta^0)$ for $k \notin \mathcal{I}_{t-1}$
7: Pick $k$ such that $\dim(V_k) = 1$
8: Let $\hat{\mathbf{q}}_k, \mathcal{A} = $ ID-Ancestors$(\Theta^k, \Theta^0, \{\hat{\mathbf{q}}_i\}_{i \in \mathcal{I}_{t-1}})$
9: Add $a' \succ k$ for any $a' \succeq a$, $a \in \mathcal{A}$
10: Let $\mathcal{I}_t = \mathcal{I}_{t-1} \cup \{k\}$, $\hat{Q}_t = [\hat{\mathbf{q}}_k; \hat{Q}_{t-1}]$
11: end for
12: return $\hat{Q}$, $\prec$
We give a constructive proof of Theorem 2 via an algorithm to recover $H$ and $\{B^k\}_{k=0}^K$ from $\{\Theta^k\}_{k=0}^K$. (We only use the second moment of $X$: we do not use the first moment since we assume $\mathbb{E}[\varepsilon] = 0$, and we do not use higher moments since in the worst case, Gaussian noise, they contain no additional information.) The computational complexity of the algorithm is given in Appendix H. The bulk of the algorithm is devoted to recovering $H$. First, we recover the partial order $\prec_{\mathcal{G}}$ (i.e., the DAG $\overline{\mathcal{G}}$), together with the matrix $Q$ from a partial order RQ decomposition of $H$, up to signs and permutations of rows, in Algorithm 2. The subroutine Algorithm 1 recovers the ancestors of a node and its corresponding row of $Q$. Then we recover $R$ in Algorithm 3. Having recovered $H$, the matrices $\{B^k\}_{k=0}^K$ are found via the Cholesky decomposition.
We show that Algorithm 2 returns 𝑄 from the partial order RQ decomposition of 𝐻 , up to a permutation 𝜎 ∈ 𝑆 ( 𝒢 ) and a matrix in Sig 𝑑 , the 𝑑 × 𝑑 diagonal matrices with diagonal entries ± 1 .
Proposition 3.

Assume the setup in Assumption 1, and that every latent node is intervened on; i.e., $\{i_k\}_{k=1}^K = [d]$. Let $(\hat{Q}, \prec)$ be the output of Algorithm 2. Then $\prec$ is the partial order $\prec_{\mathcal{G}}$. Moreover, let $H = RQ$ be the partial order RQ decomposition of $H$ for the partial order $\prec_{\mathcal{G}}$. Then $\hat{Q} = S P_\sigma Q$ for some $\sigma \in S(\mathcal{G})$ and $S \in \mathrm{Sig}_d$.
The following lemma relates the partial order of 𝒢 to the linear spaces rowspan ( Θ 𝑘 − Θ 0 ) .
Lemma 1.

Assume the setup in Assumption 1. Let $H = RQ$ be a partial order RQ decomposition of $H$. Let $i_k$ be the intervention target of context $k$ and let $\mathcal{I} \subseteq [d]$. Then

(a) $\mathrm{rowspan}(\Theta^k - \Theta^0) \subseteq \langle \mathbf{h}_i : i \in \mathcal{I} \rangle$ if and only if $\overline{\mathrm{pa}}(i_k) \subseteq \mathcal{I}$,

(b) $\mathrm{rowspan}(\Theta^k - \Theta^0) \subseteq \langle \mathbf{q}_i : i \in \overline{\mathrm{an}}(i_k) \rangle$, and

(c) $\mathrm{rowspan}(\Theta^k - \Theta^0) \not\subseteq \langle \mathbf{q}_i : i \in \mathcal{I} \rangle$ if $\overline{\mathrm{pa}}(i_k) \not\subseteq \mathcal{I}$.
Proof.

Let $\mathbf{h}_i$ denote the $i$-th row of $H$, and let $\boldsymbol{\alpha}_{j, i_k} := \sum_{i \in \overline{\mathrm{pa}}(i_k)} (B^j)_{i_k, i} \mathbf{h}_i$. By Prop. 1, we have

$$\mathrm{rowspan}(\Theta^k - \Theta^0) = \langle \boldsymbol{\alpha}_{k, i_k}, \boldsymbol{\alpha}_{0, i_k} \rangle. \tag{3}$$

Equation (3) shows that $\mathrm{rowspan}(\Theta^k - \Theta^0) \subseteq \langle \mathbf{h}_i : i \in \overline{\mathrm{pa}}(i_k) \rangle$. This linear space is contained in $\langle \mathbf{h}_i : i \in \mathcal{I} \rangle$ whenever $\overline{\mathrm{pa}}(i_k) \subseteq \mathcal{I}$. Conversely, assume there exists $j \in \overline{\mathrm{pa}}(i_k)$ with $j \notin \mathcal{I}$. Then containment of $\mathrm{rowspan}(\Theta^k - \Theta^0)$ in $\langle \mathbf{h}_i : i \in \mathcal{I} \rangle$ cannot hold: containment implies $(B^0)_{i_k, j} \mathbf{h}_j \in \langle \mathbf{h}_i : i \in [d] \setminus \{j\} \rangle$, a contradiction since $H$ has full row rank and $(B^0)_{i_k, j} \neq 0$. Hence (a) holds.
By definition of the partial order RQ decomposition, we have 𝐡 𝑖 ∈ ⟨ 𝐪 𝑗 ∣ 𝑗 ∈ an ¯ ( 𝑖 ) ⟩ . Thus, by transitivity of the ancestorship relation, (b) holds. Conversely, assume there exists 𝑗 ∈ pa ¯ ( 𝑖 𝑘 ) with 𝑗 ∉ ℐ . Then rowspan ( Θ 𝑘 − Θ 0 ) ⊆ ⟨ 𝐪 𝑖 : 𝑖 ∈ ℐ ⟩ implies that
𝐡 𝑗 ∈ ⟨ 𝐪 𝑖 : 𝑖 ∈ ℐ ⟩ + ⟨ 𝐡 𝑖 : 𝑖 ∈ pa ¯ ( 𝑖 𝑘 ) ∖ { 𝑗 } ⟩ ,
since ( 𝐵 0 ) 𝑖 𝑘 , 𝑗 ≠ 0 . We partition ℐ into the descendants and non-descendants of 𝑗 : let ℐ 𝑑 := { 𝑖 ∈ ℐ : 𝑗 ∈ an ( 𝑖 ) } and let ℐ 𝑛 𝑑 := { 𝑖 ∈ ℐ : 𝑗 ∉ an ( 𝑖 ) } . By definition of the partial order RQ decomposition, we have 𝐪 𝑖 ⟂ 𝐡 𝑗 whenever 𝑗 ∈ an ( 𝑖 ) . Thus 𝐡 𝑗 ∈ ⟨ 𝐪 𝑖 : 𝑖 ∈ ℐ 𝑛 𝑑 ⟩ + ⟨ 𝐡 𝑖 : 𝑖 ∈ pa ¯ ( 𝑖 𝑘 ) ∖ { 𝑗 } ⟩ . Inverting the partial order RQ decomposition gives 𝐪 𝑖 ∈ ⟨ 𝐡 𝑖 ′ : 𝑖 ′ ∈ an ¯ ( 𝑖 ) ⟩ . Hence
𝐡 𝑗
∈ ⟨ 𝐡 𝑖 : 𝑖 ∈ an ¯ ( ℐ 𝑛 𝑑 ) ∪ ( pa ¯ ( 𝑖 𝑘 ) ∖ { 𝑗 } ) ⟩ ,
a contradiction, since 𝑗 ∉ an ¯ ( ℐ 𝑛 𝑑 ) and 𝐻 is full rank. ∎
Proof of Prop. 3.

Assume that $\hat{Q}_{t-1}$ consists of the last $t-1$ rows of $S P_\sigma Q$, for some $\sigma \in S(\mathcal{G})$ and $S \in \mathrm{Sig}_d$. Then $W_t = \langle \hat{\mathbf{q}}_i : i \in \mathcal{I}_{t-1} \rangle$. At step $t$, we pick $k$ such that $V_k = \mathrm{proj}_{W_t^\perp} \mathrm{rowspan}(\Theta^k - \Theta^0)$ has dimension one. Lemma 1 implies that such $k$ are those with $\mathrm{pa}(i_k) \subseteq \mathcal{I}_{t-1}$. Algorithm 1 returns a set $\mathcal{A}$ with $\mathrm{pa}(i_k) \subseteq \mathcal{A} \subseteq \mathrm{an}_{\mathcal{G}}(i_k)$. Thus Line 9 adds the relation $k \prec a'$ if and only if $a' \in \mathrm{an}_{\mathcal{G}}(i_k)$. Hence $\prec$ is the partial order $\prec_{\mathcal{G}}$. Line 8 picks $\hat{\mathbf{q}}_k$ orthogonal to $\{\mathbf{q}_i : i \succ_{\mathcal{G}} i_k\}$, such that $\mathrm{rowspan}(\hat{Q}_t)$ contains $\mathrm{rowspan}(\Theta^k - \Theta^0)$. By Lemma 1 and Definition 1, this is $\pm \mathbf{q}_{i_k}$. Thus, $\hat{Q}_t$ equals the last $t$ rows of $S' P_{\sigma'} Q$, for some $\sigma' \in S(\mathcal{G})$ and $S' \in \mathrm{Sig}_d$. Repeating for $t = 1, \ldots, K$ gives the result. ∎
Algorithm 3 IterativeDifferenceProjection

1: Input: precision matrices $(\Theta^0, \Theta^1, \ldots, \Theta^K)$
2: Output: $\hat{H}$, $(\hat{B}^0, \hat{B}^1, \ldots, \hat{B}^K)$
3: Let $d = \mathrm{rank}(\Theta^0)$
4: Let $\hat{Q}, \prec = $ ID-PartialOrder$((\Theta^0, \Theta^1, \ldots, \Theta^K), d)$
5: Let $\hat{C}^k = \mathrm{cholesky}((\hat{Q}^\dagger)^\top \Theta^k \hat{Q}^\dagger)$ for $k = 0, \ldots, K$
6: Let $\hat{R} = I_d$
7: for $k = 1, \ldots, K$ do
8: Let $\hat{D}^k = \hat{C}^k - \hat{C}^0$
9: Let $\hat{i}_k$ be the index of the only nonzero row of $\hat{D}^k$
10: Let $\hat{R}_{\hat{i}_k} = (\hat{D}^k)_{\hat{i}_k} + (\hat{C}^0)_{\hat{i}_k}$
11: end for
12: Let $\hat{H}' = \hat{R} \hat{Q}$
13: Let $\hat{H} = \hat{\Lambda} \hat{H}'$, for $\hat{\Lambda}$ diagonal such that $\hat{H}$ satisfies the conditions on $H$ in Assumption 1(c)
14: Let $\hat{B}^0 = \mathrm{cholesky}((\hat{H}^\dagger)^\top \Theta^0 \hat{H}^\dagger)$
15: Let $\hat{B}^k = \hat{B}^0 + \mathbf{e}_{\hat{i}_k} (|\hat{\Lambda}_{\hat{i}_k, \hat{i}_k}| \mathbf{e}_{\hat{i}_k} - (\hat{B}^0)^\top \mathbf{e}_{\hat{i}_k})^\top$ for $k = 1, \ldots, K$
16: return $\hat{H}$, $(\hat{B}^0, \hat{B}^1, \ldots, \hat{B}^K)$
We prove Theorem 2 by proving the following result, which is its constructive analogue.
Theorem 3.

Assume the setup in Assumptions 1 and 2, and that every latent node is intervened on; i.e., $\{i_k\}_{k=1}^K = [d]$. Let $\hat{H}$ and $\{\hat{B}^k\}_{k=0}^K$ be the output of Algorithm 3. Then $\hat{H} = P_\sigma H$ and $\hat{B}^k = P_\sigma B^k P_\sigma^\top$ for all $k$, for some $\sigma \in S(\mathcal{G})$.
Proof.

Let $H = RQ$. Then $\Theta^k = Q^\top R^\top (B^k)^\top B^k R Q$ by Equation (2), and $\hat{Q} = S P_\sigma Q$ for some $\sigma \in S(\mathcal{G})$ and $S \in \mathrm{Sig}_d$ by Prop. 3. Hence $(\hat{Q}^\dagger)^\top \Theta^k \hat{Q}^\dagger$ equals

$$S (P_\sigma R^\top P_\sigma^\top)(P_\sigma (B^k)^\top P_\sigma^\top)(P_\sigma B^k P_\sigma^\top)(P_\sigma R P_\sigma^\top) S.$$

Let $C^k(\sigma) = S (P_\sigma B^k P_\sigma^\top)(P_\sigma R P_\sigma^\top) S$. The matrix $C^k(\sigma)$ is upper triangular, since it is a product of four upper triangular matrices, by the definition of $S(\mathcal{G})$ and of the partial order RQ decomposition, where we use that $i \prec_{\mathcal{G}} j$ implies $i < j$. Moreover, $C^k(\sigma)$ has positive diagonal, since the matrices $R$ and $B^k$ have positive diagonal, by Definition 1 and Assumption 1(a), respectively. Hence $\hat{C}^k := C^k(\sigma)$ is the Cholesky factor of $(\hat{Q}^\dagger)^\top \Theta^k \hat{Q}^\dagger$. The differences $\hat{D}^k := \hat{C}^k - \hat{C}^0$ equal $S P_\sigma (B^k - B^0) R P_\sigma^\top S = S P_\sigma \mathbf{e}_{i_k} \mathbf{c}_k^\top R P_\sigma^\top S$, by Assumption 1(b). The intervention target $\sigma(i_k)$ indexes the only nonzero row of $\hat{D}^k$; i.e., $\hat{i}_k = \sigma(i_k)$. Observe that $(\hat{D}^k)_{\hat{i}_k} + (\hat{C}^0)_{\hat{i}_k} = S_{\sigma(i_k), \sigma(i_k)} \lambda_k (R P_\sigma^\top)_{\sigma(i_k)}$. Thus, we have recovered $\hat{R} = \hat{\Lambda} P_\sigma R P_\sigma^\top$ for $\hat{\Lambda}$ diagonal with $\hat{\Lambda}_{\hat{i}_k, \hat{i}_k} = \pm \lambda_k$. This gives $\hat{H}' = \hat{\Lambda} P_\sigma R P_\sigma^\top P_\sigma Q = \hat{\Lambda} P_\sigma H$. The scaling in Line 13 recovers $\hat{H} = P_\sigma H$ and $\hat{\Lambda}$. We have $(\hat{H}^\dagger)^\top \Theta^0 \hat{H}^\dagger = P_\sigma (B^0)^\top P_\sigma^\top P_\sigma B^0 P_\sigma^\top$, where $P_\sigma B^0 P_\sigma^\top$ is upper triangular, and thus we recover $\hat{B}^0 = P_\sigma B^0 P_\sigma^\top$ from the Cholesky decomposition. Finally, since $|\hat{\Lambda}_{\hat{i}_k, \hat{i}_k}| = \lambda_k$, Line 15 gives us $\hat{B}^k = P_\sigma B^k P_\sigma^\top$. ∎
Theorem 3 requires Assumption 2, see Appendix J. In Appendix K, we compare our identifiability condition to that of Liu et al. (2022). We show that Liu et al. (2022) requires that the latent graph has fewer than 𝑑 edges. In contrast, our condition imposes no constraints on the latent graph.
Figure 2: The adapted version of Algorithm 3 is consistent for recovering $H$, $B^0$, and $\{i_k\}_{k=1}^K$ from noisy data. At each sample size, we generate 500 random models; note the logarithmic scale on the x-axis. In (a), we plot the median of $\|\hat{H} - H\|_2$, the error in Frobenius norm. In (b), we plot the median of $\|\hat{B}^0 - B^0\|_2$. In (c), we plot the fraction of models where all intervention targets were correctly estimated.

3.3 Worst-case Necessity
We show that one intervention per latent node is necessary for identifiability of our setup, in the worst case. It is clear that observational data does not suffice for identifiability: from just observational data, we always have a solution with independent latent variables, as follows. Let $\hat{H} = \Lambda B_0 H$ and $\hat{B}_0 = \Lambda^{-1}$, for $\Lambda$ a diagonal matrix with positive entries such that Assumption 1(c) holds. Then $\hat{B}_0 \hat{H} = B_0 H$; i.e., $\hat{B}_0, \hat{H}$ solve the causal disentanglement problem. The new solution has independent latent nodes, since $\hat{B}_0$ is diagonal.
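This construction is easy to sanity-check numerically. Below is a minimal sketch with random parameter values (for simplicity we use an arbitrary positive diagonal $\Lambda$, ignoring the normalization of Assumption 1(c)):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 3, 5

B0 = np.triu(rng.uniform(0.5, 1.5, (d, d)))   # upper triangular, positive diagonal
H = rng.uniform(-1, 1, (d, p))

Lam = np.diag(rng.uniform(0.5, 2.0, d))       # any positive diagonal scaling
H_hat = Lam @ B0 @ H
B0_hat = np.linalg.inv(Lam)                   # diagonal: independent latent nodes

# The alternative pair generates the same product, hence the same observed law.
assert np.allclose(B0_hat @ H_hat, B0 @ H)
```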
The next result, which follows from prior work in causal structure learning, says that 𝑑 − 1 interventions are required in the worst case, for a fully observed model. This is the special case of our setup where 𝐻 is a permutation matrix.
Proposition 4.
Assume the setup in Assumptions 1 and 2, with $H$ a permutation matrix. Let $K < d - 1$ with all intervention targets distinct. Then, in the worst case over intervention targets $\{i_k\}_{k=1}^K$, $B_0$ and $H$ are not identifiable.
Proof.
In the linear Gaussian setting with unknown-target interventions, the structure of a DAG is only identifiable up to its interventional Markov equivalence class (MEC), see e.g. Proposition 3.3(ii) of Castelletti & Peluso (2022). For single-node interventions, 𝑑 − 1 interventions are in the worst case necessary to ensure that the interventional MEC has size one, by Theorem 3.7 of Eberhardt et al. (2005). ∎
We show that 𝑑 interventions are necessary, in the worst case, when 𝐻 is a general matrix. The proof is in Appendix I.
Proposition 5.
Assume the setup in Assumptions 1 and 2, with $K < d$. Then there exist $H$ and $\{B_k\}_{k=0}^K$ and a set of $K$ distinct intervention targets such that (i) $H$ and $B_k$ are not identifiable up to $S(\mathcal{G})$ and (ii) $\prec_{\mathcal{G}}$ is not identifiable.
Example 1.
Proposition 5 generalizes our motivating example from Section 1. Fix $H \in \mathbb{R}^{2 \times 2}$ with entries $H_{ij}$, and fix upper triangular $B_0, B_1 \in \mathbb{R}^{2 \times 2}$ with entries $(B_0)_{ij}$ and $(B_1)_{ij}$, respectively. Assume $i_1 = 2$; i.e., $(B_0)_{11} = (B_1)_{11}$ and $(B_0)_{12} = (B_1)_{12}$. Let

$$\hat{B}_0 = \begin{bmatrix} 1 & 0 \\ 0 & (B_0)_{22} \end{bmatrix}, \quad \hat{B}_1 = \begin{bmatrix} 1 & 0 \\ 0 & (B_1)_{22} \end{bmatrix}, \quad \hat{H} = \begin{bmatrix} (B_0)_{11} H_{11} + (B_0)_{12} H_{21} & (B_0)_{11} H_{12} + (B_0)_{12} H_{22} \\ H_{21} & H_{22} \end{bmatrix}.$$

Then for $k \in \{0, 1\}$, we have $\hat{B}_k \hat{H} = B_k H$, both equal to

$$\begin{bmatrix} (B_0)_{11} H_{11} + (B_0)_{12} H_{21} & (B_0)_{11} H_{12} + (B_0)_{12} H_{22} \\ (B_k)_{22} H_{21} & (B_k)_{22} H_{22} \end{bmatrix}.$$
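The identity $\hat{B}_k \hat{H} = B_k H$ in Example 1 can be verified numerically; a minimal sketch with random parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

H = rng.uniform(-1, 1, (2, 2))
B0 = np.triu(rng.uniform(0.5, 1.5, (2, 2)))
B1 = B0.copy()
B1[1, 1] = rng.uniform(0.5, 1.5)        # intervention target i_1 = 2: only row 2 changes

# The alternative solution from Example 1.
B0h = np.diag([1.0, B0[1, 1]])
B1h = np.diag([1.0, B1[1, 1]])
Hh = np.vstack([B0[0] @ H, H[1]])

assert np.allclose(B0h @ Hh, B0 @ H)
assert np.allclose(B1h @ Hh, B1 @ H)
```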
Remark 2.
Prop. 5 pertains to the worst case over intervention targets. It is natural to ask if there exists a better choice of $K$ intervention targets, for $K < d$, such that $H$ and $\{B_k\}_{k=0}^K$ are identifiable. For example, when $d = 2$, consider $\mathcal{G} = \{2 \to 1\}$, with an intervention on $Z_1$; i.e., $i_1 = 1$. Then $\mathrm{rowspan}(\Theta_1 - \Theta_0) \subseteq \langle \mathbf{h}_i : i \in \mathcal{I} \rangle$ if and only if $\mathcal{I} = \{1, 2\}$, by Lemma 1(a). Thus, $\Theta_1 - \Theta_0$ has rank 2, and we can detect that $i_1 = 1$ and $\mathcal{G} = \{2 \to 1\}$; otherwise, we would have $\mathrm{rank}(\Theta_1 - \Theta_0) = 1$. While $i_1$ and $\mathcal{G}$ are identifiable, preliminary computational evidence suggests that the entries of $B_0$, $B_1$, and $H$ are not identifiable.
Proof of Theorem 1.
The necessity of $d$ interventions is Prop. 5. Under Assumptions 1 and 2, the sufficiency of $d$ interventions follows from Theorem 2. Under Assumption 1 alone, the sufficiency for recovering $\prec_{\mathcal{G}}$ follows from Prop. 3. ∎
4 Experimental Results
We adapt our proof of Theorem 3 into a method for causal disentanglement in the finite-sample setting. We modify our methods to (1) use matrices instead of vector spaces, (2) use scores based on singular values to test rank, and (3) choose a nonzero row based on norms. The adapted algorithms are in Appendix L. In this section, we investigate the performance of the method in a simulation study. There is a single hyperparameter $\gamma \in [0, 1]$, the fraction of spectral energy associated to the largest singular value, above which we consider a matrix to have rank one. We use $\gamma = 0.99$.
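As an illustration of the singular-value rank test behind $\gamma$, here is a minimal sketch (the helper name `looks_rank_one` is hypothetical, and this is not the paper's Algorithm 6, 5, or 7):

```python
import numpy as np

def looks_rank_one(M, gamma=0.99):
    """Does the top singular value carry at least a gamma fraction of the
    spectral energy (the sum of squared singular values)?"""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2) >= gamma

u, v = np.array([1.0, 2.0, 3.0]), np.array([2.0, -1.0, 0.5])
assert looks_rank_one(np.outer(u, v))   # exactly rank one: all energy in sigma_1
assert not looks_rank_one(np.eye(3))    # energy split equally across three values
```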
4.1 Synthetic Data Generation
We generate 500 random models following Assumption 1 for $d = 5$ latent and $p = 10$ observed variables, as follows. We sample the graph $\mathcal{G}$ from an Erdős-Rényi random graph model with density 0.75. We sample the nonzero entries of $A_0$ independently from $\mathsf{Unif}(\pm[0.25, 1])$, and the nonzero entries of $\Omega_0$ independently from $\mathsf{Unif}([2, 4])$. We sample uniformly among permutations to generate the intervention targets $i_k$. In context $k$, we have $A_k = A_0 - \mathbf{e}_{i_k} (A_0^\top \mathbf{e}_{i_k})^\top$; i.e., all entries in row $i_k$ are 0. We change $(\Omega_0)_{i_k, i_k}$ into a new value $(\Omega_k)_{i_k, i_k}$, sampled from $\mathsf{Unif}([6, 8])$ to ensure a non-negligible change. Finally, the entries of $H$ are sampled independently from $\mathsf{Unif}([-2, 2])$. We consider sample sizes $n$ from 2500 to 250000 and use as input the sample precision matrices. All code for data generation and for our adapted versions of Algorithms 1, 2, and 3 (that is, Algorithms 6, 5, and 7) can be found at the link in Appendix M.
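The generation recipe above can be sketched as follows (a simplified illustration under our reading of the setup; the row/column orientation of $A_0$ and the upper-triangular edge support are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, density = 5, 10, 0.75

# Erdos-Renyi DAG support on d ordered nodes (upper triangular adjacency).
support = np.triu(rng.random((d, d)) < density, k=1)

# Nonzero entries of A0 from Unif(+/-[0.25, 1]); noise precisions from Unif([2, 4]).
A0 = np.where(support, rng.uniform(0.25, 1.0, (d, d)) * rng.choice([-1, 1], (d, d)), 0.0)
Omega0 = np.diag(rng.uniform(2, 4, d))

# Perfect intervention on node i_k: zero out row i_k, resample the noise precision.
i_k = 2
A_k = A0.copy()
A_k[i_k, :] = 0.0
Omega_k = Omega0.copy()
Omega_k[i_k, i_k] = rng.uniform(6, 8)

# Mixing matrix from latent to observed variables.
H = rng.uniform(-2, 2, (d, p))

assert np.all(A_k[i_k] == 0) and Omega_k[i_k, i_k] >= 6
```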
4.2 Synthetic Data Results
We show the results of applying our method in the finite-sample setting, presented in Fig. 2. We measure the error in estimating the parameters $H$ and $B_0$ and the intervention targets $\{i_k\}_{k=1}^K$. Since the problem is only identifiable up to the partial order $\prec_{\mathcal{G}}$, we align our estimated $\hat{H}$, $\hat{B}_0$, and $\{\hat{i}_k\}_{k=1}^K$ with $H$, $B_0$, and $\{i_k\}_{k=1}^K$ by picking $\sigma \in S(\mathcal{G})$ to maximize $\sum_{k=1}^K \mathbb{1}\{i_k = \sigma(\hat{i}_k)\}$. For small $d$, this optimization can be solved by enumerating over $S(\mathcal{G})$. For large $d$, we use the integer linear program in Appendix M.1, implemented in gurobipy. As expected, the estimation errors for $H$ and $B_0$ decrease as the sample size increases, while the fraction of models with all intervention targets correctly estimated increases.
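For small $d$, the enumeration-based alignment can be sketched as follows (an illustration only; we assume $S(\mathcal{G})$ consists of the permutations preserving the partial order, and `align_targets` is a hypothetical helper, not the paper's ILP of Appendix M.1):

```python
from itertools import permutations

def align_targets(targets_hat, targets, order_pairs, d):
    """Pick the order-preserving permutation maximizing target matches.

    order_pairs lists pairs (i, j) with i preceding j; we take S(G) to be
    the permutations sigma with sigma(i) < sigma(j) for every such pair.
    """
    best, best_score = None, -1
    for sigma in permutations(range(d)):
        if any(sigma[i] >= sigma[j] for i, j in order_pairs):
            continue
        score = sum(t == sigma[th] for t, th in zip(targets, targets_hat))
        if score > best_score:
            best, best_score = sigma, score
    return best, best_score

# Chain partial order 0 < 1 < 2 forces the identity permutation.
sigma, score = align_targets([0, 1, 2], [0, 1, 2], [(0, 1), (1, 2)], d=3)
assert sigma == (0, 1, 2) and score == 3
```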
4.3 Biological Data Results

Figure 3: The latent graph and intervention targets learned from scRNA-seq data. Edges with weight of magnitude above 0.2 are shown. Boxes represent context indicators, corresponding to KRAS mutations, with edges to their respective targets. Red nodes are significantly associated with survival outcomes in the TCGA dataset.
We evaluate our method on a dataset from Ursu et al. (2022). This single-cell RNA sequencing (scRNA-seq) dataset consists of 90,000 cells from a lung cancer cell line, with 83 different nonsynonymous mutations of the KRAS oncogene overexpressed.
Semi-synthetic analysis. The authors divide the mutations into five categories based on the genes that they affect, and compute a score for the impact of each mutation. Taking the two highest-impact mutations from each category gives $K = 10$ contexts. The wild-type KRAS gene is taken as the observational context. We select $p = 100$ observational features to be the most variable genes from the proliferation regulation category in the Gene Ontology (Ashburner et al., 2000), which are significant modulators of cancer activity such as oncogenes and tumor suppressor genes. We compute the sample precision matrices $\hat\Theta_0, \hat\Theta_1, \dots, \hat\Theta_{10}$ and use them as input to Algorithm 7 with $\gamma = 0.99$.
Given estimates $\{\hat{B}_k\}_{k=1}^K$ and $\hat{H}$ from our algorithm, we let $\tilde{B}_k = M_k \odot \hat{B}_k$, where $(M_k)_{ij} = \mathbb{1}\{|(\hat{B}_k)_{ij}| \geq 0.04\}$; i.e., we truncate the entries away from zero. We treat the resulting parameters as a new generative model. This tests our method in a more realistic setting, with parameters based on real data, while retaining the ability to measure performance. Since the entries of the matrices $\tilde{B}_k$ are smaller than for our synthetic data, we consider larger sample sizes $n \in \{10^6, 5 \times 10^6, 10^7, 5 \times 10^7, 10^8\}$. As seen in Fig. 6, Appendix M.2, we successfully recover the generative model.
Hypothetical workflow. We illustrate a hypothetical workflow of our method on biological data. If we run our algorithm for all mutations ($K = 83$) on the $p = 83$ most variable genes, we obtain the graph in Fig. 3. We see that G12R, G12V, G13R, G13V, and G12I all perturb highly-connected latent nodes with several descendants. The G12 and G13 positions in the KRAS protein are key functional residues whose mutations are known to be causal drivers of cancer (Huang et al., 2021). This indicates that the learned graph can highlight influential biological pathways, which may be useful for prioritizing therapeutic development. The matrix $\hat{H}$ from our algorithm gives a mapping from genes to latent variables that can be transferred across datasets and related to other observations. For example, we compute estimates of the latent variables for the 589 lung cancer patients in the Cancer Genome Atlas (Liu et al., 2018) and relate these variables to the patients' survival outcomes. We find that 5 latent variables, those targeted by the M170L, I163S, A55G, G12R, and Q25H mutations, are significantly associated with survival outcomes (at significance level 0.1, after multiple testing correction). See Appendix M.2 for details. Note that the output of our method should be treated as exploratory; further theoretical and methodological development is required to better match complex real-world data.
5 Discussion
In this paper, we showed that a latent causal model is identifiable when data is available from an intervention on each latent variable. Conversely, we showed that, in the worst case, such data is necessary for identifiability of the latent representation. Our proof is constructive, consisting of an algorithm for recovering the latent representation, which can be adapted to the noisy setting. Our algorithm was developed for maximal clarity of our identifiability result, and leaves open several directions for future work.
Theory of latent interventional Markov equivalence. We established sufficient and (worst-case) necessary conditions for the complete identifiability of the parameters $H$ and $\{B_k\}_{k=0}^K$. However, in many settings, it is expected that these parameters (and corresponding combinatorial structures) are only partially identifiable. Indeed, Proposition 5 suggests that the problem parameters may be partially recoverable. In future work, it would be interesting to develop a theory of identifiability for $K < p$ interventions.
Non-linear setting. Our results require that both the latent linear structural equation model and the mixing function are linear. We expect that the insights developed here may apply when one or both of these elements are non-linear. Notably, the contemporaneous work of Ahuja et al. (2022b) shows that, under certain conditions, the case of polynomial mixing can be reduced to the case of linear, which can be immediately applied to extend our result.
Statistical analysis of causal disentanglement. A next step beyond identifiability is to investigate the statistical properties of the setup. This includes lower bounds on the accuracy of recovering the parameters $H$ and $\{B_k\}_{k=0}^K$, along with corresponding combinatorial structures such as $\mathcal{G}$ and the matching between $k$ and $i_k$, and computationally efficient algorithms that match these lower bounds. Our identifiability result suggests that the differences of precision matrices may play a role. These differences appear in Varici et al. (2021), which uses techniques for directly estimating differences between precision matrices. Moreover, it would be interesting to develop a score-based approach, e.g., (penalized) maximum likelihood estimation. Our problem formulation suggests a natural parameterization for such an approach, which could be addressed using combinatorial optimization or gradient-based search.
Acknowledgements
We thank Nils Sturma, Mathias Drton, Jiaqi Zhang, and Alvaro Ribot for helpful discussions. We thank our anonymous reviewers for helpful suggestions that improved the paper. Chandler Squires was partially supported by an NSF Graduate Research Fellowship. Anna Seigal was supported by the Society of Fellows at Harvard University. Salil Bhate was supported by the Eric and Wendy Schmidt Center at the Broad Institute. In addition, this work was supported by NCCIH/NIH (1DP2AT012345), ONR (N00014-22-1-2116), NSF (DMS-1651995), the MIT-IBM Watson AI Lab, and a Simons Investigator Award to Caroline Uhler.
References

Adams, J., Hansen, N., and Zhang, K. Identification of partially observed linear causal models: graphical conditions for the non-Gaussian and heterogeneous cases. Advances in Neural Information Processing Systems, 34:22822–22833, 2021.
Ahuja, K., Hartford, J., and Bengio, Y. Properties from mechanisms: an equivariance perspective on identifiable representation learning. In International Conference on Learning Representations, 2021.
Ahuja, K., Hartford, J., and Bengio, Y. Weakly supervised representation learning with sparse perturbations. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022a. URL https://openreview.net/forum?id=6ZI4iF_T7t.
Ahuja, K., Wang, Y., Mahajan, D., and Bengio, Y. Interventional causal representation learning. In NeurIPS 2022 Workshop on Neuro Causal and Symbolic AI (nCSI), 2022b.
Ashburner, M., Ball, C. A., Blake, J. A., Botstein, D., Butler, H., Cherry, J. M., Davis, A. P., Dolinski, K., Dwight, S. S., Eppig, J. T., et al. Gene Ontology: tool for the unification of biology. Nature Genetics, 25(1):25–29, 2000.
Bengio, Y., Courville, A., and Vincent, P. Representation learning: a review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
Brehmer, J., Haan, P. D., Lippe, P., and Cohen, T. Weakly supervised causal representation learning. In ICLR 2022 Workshop on the Elements of Reasoning: Objects, Structure and Causality, 2022. URL https://openreview.net/forum?id=rUXOUBuUcg5.
Cai, R., Xie, F., Glymour, C., Hao, Z., and Zhang, K. Triad constraints for learning causal structure of latent variables. Advances in Neural Information Processing Systems, 32, 2019.
Castelletti, F. and Peluso, S. Network structure learning under uncertain interventions. Journal of the American Statistical Association, pp. 1–12, 2022.
Comon, P. Independent component analysis, a new concept? Signal Processing, 36(3):287–314, 1994.
Davidson-Pilon, C. lifelines: survival analysis in Python. Journal of Open Source Software, 4(40):1317, 2019.
Drton, M., Sturmfels, B., and Sullivant, S. Algebraic factor analysis: tetrads, pentads and beyond. Probability Theory and Related Fields, 138:463–493, 2007.
Drton, M., Massam, H., and Olkin, I. Moments of minors of Wishart matrices. 2008.
Eberhardt, F., Glymour, C., and Scheines, R. On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pp. 178–184, 2005.
Greville, T. N. E. Note on the generalized inverse of a matrix product. SIAM Review, 8(4):518–521, 1966.
Halpern, Y., Horng, S., and Sontag, D. Anchored discrete factor analysis. arXiv preprint arXiv:1511.03299, 2015.
Hauser, A. and Bühlmann, P. Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs. The Journal of Machine Learning Research, 13(1):2409–2464, 2012.
Huang, L., Guo, Z., Wang, F., and Fu, L. KRAS mutation: from undruggable to druggable in cancer. Signal Transduction and Targeted Therapy, 6(1):386, 2021.
Hyttinen, A., Eberhardt, F., and Hoyer, P. O. Experiment selection for causal discovery. Journal of Machine Learning Research, 14:3041–3071, 2013.
Hyvarinen, A., Sasaki, H., and Turner, R. Nonlinear ICA using auxiliary variables and generalized contrastive learning. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 859–868. PMLR, 2019.
Jaber, A., Kocaoglu, M., Shanmugam, K., and Bareinboim, E. Causal discovery from soft interventions with unknown targets: characterization and learning. Advances in Neural Information Processing Systems, 33:9551–9561, 2020.
Khemakhem, I., Kingma, D., Monti, R., and Hyvarinen, A. Variational autoencoders and nonlinear ICA: a unifying framework. In International Conference on Artificial Intelligence and Statistics, pp. 2207–2217. PMLR, 2020.
Kivva, B., Rajendran, G., Ravikumar, P., and Aragam, B. Learning latent causal graphs via mixture oracles. Advances in Neural Information Processing Systems, 34:18087–18101, 2021.
Kuleshov, M. V., Jones, M. R., Rouillard, A. D., Fernandez, N. F., Duan, Q., Wang, Z., Koplev, S., Jenkins, S. L., Jagodnik, K. M., Lachmann, A., et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Research, 44(W1):W90–W97, 2016.
Liu, J., Lichtenberg, T., Hoadley, K. A., Poisson, L. M., Lazar, A. J., Cherniack, A. D., Kovatich, A. J., Benz, C. C., Levine, D. A., Lee, A. V., et al. An integrated TCGA pan-cancer clinical data resource to drive high-quality survival outcome analytics. Cell, 173(2):400–416, 2018.
Liu, Y., Zhang, Z., Gong, D., Gong, M., Huang, B., Hengel, A. v. d., Zhang, K., and Shi, J. Q. Weight-variant latent causal models. arXiv preprint arXiv:2208.14153, 2022.
Moran, G. E., Sridhar, D., Wang, Y., and Blei, D. M. Identifiable deep generative models via sparse decoding. arXiv preprint arXiv:2110.10804, 2021.
Saeed, B., Belyaeva, A., Wang, Y., and Uhler, C. Anchored causal inference in the presence of measurement error. In Conference on Uncertainty in Artificial Intelligence, pp. 619–628. PMLR, 2020.
Seabold, S. and Perktold, J. statsmodels: econometric and statistical modeling with Python. In 9th Python in Science Conference, 2010.
Silva, R., Scheines, R., Glymour, C., Spirtes, P., and Chickering, D. M. Learning the structure of linear latent variable models. Journal of Machine Learning Research, 7(2), 2006.
Squires, C. and Uhler, C. Causal structure learning: a combinatorial perspective. Foundations of Computational Mathematics, 2022.
Squires, C., Wang, Y., and Uhler, C. Permutation-based causal structure learning with unknown intervention targets. In Conference on Uncertainty in Artificial Intelligence, pp. 1039–1048. PMLR, 2020.
Squires, C., Yun, A., Nichani, E., Agrawal, R., and Uhler, C. Causal structure discovery between clusters of nodes induced by latent factors. In Conference on Causal Learning and Reasoning, pp. 669–687. PMLR, 2022.
Stark, S. G., Ficek, J., Locatello, F., Bonilla, X., Chevrier, S., Singer, F., Rätsch, G., and Lehmann, K.-V. SCIM: universal single-cell matching with unpaired feature sets. Bioinformatics, 36(Supplement_2):i919–i927, 2020.
Trefethen, L. N. and Bau III, D. Numerical Linear Algebra, volume 50. SIAM, 1997.
Ursu, O., Neal, J. T., Shea, E., Thakore, P. I., Jerby-Arnon, L., Nguyen, L., Dionne, D., Diaz, C., Bauman, J., Mosaad, M. M., et al. Massively parallel phenotyping of coding variants in cancer with Perturb-seq. Nature Biotechnology, pp. 1–10, 2022.
Varici, B., Shanmugam, K., Sattigeri, P., and Tajer, A. Scalable intervention target estimation in linear models. Advances in Neural Information Processing Systems, 34:1494–1505, 2021.
Verma, T. and Pearl, J. Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, pp. 255–270, 1990.
Xie, F., Cai, R., Huang, B., Glymour, C., Hao, Z., and Zhang, K. Generalized independent noise condition for estimating latent variable causal graphs. Advances in Neural Information Processing Systems, 33:14891–14902, 2020.
Xie, F., Huang, B., Chen, Z., He, Y., Geng, Z., and Zhang, K. Identification of linear non-Gaussian latent hierarchical structure. In International Conference on Machine Learning, pp. 24370–24387. PMLR, 2022.
Yang, K., Katcoff, A., and Uhler, C. Characterizing and learning equivalence classes of causal DAGs under interventions. In International Conference on Machine Learning, pp. 5541–5550. PMLR, 2018.
Yang, M., Liu, F., Chen, Z., Shen, X., Hao, J., and Wang, J. CausalVAE: disentangled representation learning via neural structural causal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9593–9602, 2021.
Zimmermann, R. S., Sharma, Y., Schneider, S., Bethge, M., and Brendel, W. Contrastive learning inverts the data generating process. In International Conference on Machine Learning, pp. 12979–12990. PMLR, 2021.
Contents of Appendix
1 Introduction
 1.1 Motivating Example
 1.2 Related Work
2 Setup
3 Identifiability of Causal Disentanglement
 3.1 Preliminaries
 3.2 Sufficiency
 3.3 Worst-case Necessity
4 Experimental Results
 4.1 Synthetic Data Generation
 4.2 Synthetic Data Results
 4.3 Biological Data Results
5 Discussion
A Additional related work
B Non-generic soft interventions
C Pseudoinverse of a covariance matrix
D The partial order RQ decomposition
E Further preliminaries for identifiability and reduction
F Reduction
 F.1 Reducing to one intervention per node
 F.2 Reducing to a known observational context
G Hypothesis testing a necessary condition for model membership
H Computational complexity
I Proofs
 I.1 Proof of non-identifiability for one missing intervention
J Non-identifiability for imperfect interventions
 J.1 Parameter non-identifiability
 J.2 Graph non-identifiability
K Comparison to Liu et al. (2022)
L Finite-sample algorithms
M Code and Data
 M.1 Optimizing over permutations
 M.2 Additional information on real data

Appendix A Additional related work
Fig. 4 shows the two graphical conditions assumed in some prior works.
(a) Pure children (b) No twins

Figure 4: Graphical conditions assumed in prior works. In (a), the orange edges link pure children ($X_1$ and $X_3$) to their parents ($Z_1$ and $Z_3$, respectively). In (b), the no twins assumption is satisfied since the observed children of $Z_1$, $Z_2$, and $Z_3$ are, respectively, $\{X_1, X_2\}$, $\{X_1, X_2, X_3\}$, and $\{X_3\}$, and these three sets are distinct.

Appendix B Non-generic soft interventions
We discuss the genericity condition in Assumption 1(b). We show that for soft interventions in which this genericity condition fails to hold, identifiability of the causal disentanglement problem as in Theorem 1 may fail. The following matrices satisfy all of Assumption 1, except for the genericity condition in Assumption 1(b), since $B_1^\top \mathbf{e}_1 = 2 B_0^\top \mathbf{e}_1$:

$$B_0 = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 2 & 2 \\ 0 & 1 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}, \quad G = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

Consider the alternative matrices

$$\hat{B}_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \hat{B}_1 = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad \hat{B}_2 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \quad \hat{G} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}.$$

These do not differ from the original tuple of matrices via a permutation. However, one can check that they are a valid solution, since

$$\Theta_0 = \hat{\Theta}_0 = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, \quad \Theta_1 = \hat{\Theta}_1 = \begin{bmatrix} 4 & 4 \\ 4 & 5 \end{bmatrix}, \quad \Theta_2 = \hat{\Theta}_2 = \begin{bmatrix} 1 & 1 \\ 1 & 5 \end{bmatrix}.$$
Appendix C Pseudoinverse of a covariance matrix

Proposition 6.
Let $X = GZ$ for $G \in \mathbb{R}^{p \times d}$ with full column rank. Assume $\mathrm{Cov}(Z)$ is invertible and let $K := \mathrm{Cov}(Z)^{-1}$. Then $\mathrm{Cov}(X)^\dagger = H^\top K H$, where $H := G^\dagger$.

Proof.
The covariance matrices $\mathrm{Cov}(X)$ and $\mathrm{Cov}(Z)$ are related via $\mathrm{Cov}(X) = G \cdot \mathrm{Cov}(Z) \cdot G^\top$. The property $(UV)^\dagger = V^\dagger U^\dagger$ holds whenever $U$ has full column rank and $V$ has full row rank (Greville, 1966). The matrix $G$ has full column rank, $\mathrm{Cov}(Z)$ has full rank, and $G^\top$ has full row rank. Hence $\mathrm{Cov}(X)^\dagger = (G^\top)^\dagger \mathrm{Cov}(Z)^\dagger G^\dagger = H^\top K H$. ∎
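Proposition 6 is straightforward to check numerically; a minimal sketch with a random full-column-rank $G$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 6, 3

G = rng.normal(size=(p, d))           # full column rank, generically
A = rng.normal(size=(d, d))
cov_Z = A @ A.T + np.eye(d)           # an invertible covariance for Z
cov_X = G @ cov_Z @ G.T               # Cov(X) for X = G Z, which has rank d < p

H = np.linalg.pinv(G)
K = np.linalg.inv(cov_Z)

# Prop. 6: the pseudoinverse of Cov(X) equals H^T K H.
assert np.allclose(np.linalg.pinv(cov_X), H.T @ K @ H)
```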
Appendix D The partial order RQ decomposition
Recall the partial order RQ decomposition from Definition 1. We present Algorithm 4 to find the partial order RQ decomposition of a matrix. In Line 7, the normalize operator is $\mathrm{normalize}(\mathbf{v}) := \mathbf{v} / \|\mathbf{v}\|_2$. We let $\mathbf{0}_{d \times p}$ denote the $d \times p$ matrix of zeros and $Q_{j \succeq i}$ denote the submatrix of $Q$ on rows $j$ with $j \succeq i$. We say that a partial order $\prec$ is consistent with the total order $1, 2, \dots, d$ if $i \prec j$ implies $i < j$. Any partial order can be put in this form by relabelling.
Proposition 7.
Let 𝐻 ∈ ℝ 𝑑 × 𝑝 be full rank and fix a partial order ≺ over [ 𝑑 ] . Then there exists a unique partial order RQ decomposition of 𝐻 . If ≺ is consistent with the total order 1 , 2 , … , 𝑑 , the decomposition is returned by Algorithm 4.
Proof.
The matrix $Q_{j \succeq i}$ has fewer rows than columns and $\mathbf{h}_i \in \langle \mathbf{q}_j : j \succeq i \rangle$, by its construction in Algorithm 4. Hence the vector $\mathbf{r}$ of non-zero entries in the $i$-th row of $R$ is the unique solution to $Q_{j \succeq i}^\top \mathbf{r} = \mathbf{h}_i$. By construction, we see that $H = RQ$ and, moreover, $R_{ij} = 0$ if $j \not\succeq i$, and $\mathbf{q}_i$ is orthogonal to $\mathbf{q}_j$ for $i \prec j$. Furthermore, the entry $R_{ii}$ is positive, since $\mathbf{q}_i$ is the (normalized) projection of $\mathbf{h}_i$ onto $W_i^\perp$. ∎
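The procedure of Algorithm 4 can be sketched in numpy as follows. This is an illustration, not the paper's code; we process indices in decreasing order so that every row $\mathbf{q}_j$ with $j \succ i$ is available before row $i$ is computed (an assumption about the intended loop order under the consistency convention):

```python
import numpy as np

def partial_order_rq(H, succ):
    """Partial order RQ decomposition, H = R @ Q (illustrative sketch).

    succ[i] lists the indices j with j succeeding i in the partial order
    (all j > i under the consistency convention), so we iterate i from
    d-1 down to 0 to have every q_j with j > i available.
    """
    d, p = H.shape
    R, Q = np.zeros((d, d)), np.zeros((d, p))
    for i in reversed(range(d)):
        h = H[i]
        if succ[i]:
            W = Q[succ[i]]                        # rows q_j spanning W_i
            h = h - np.linalg.pinv(W) @ (W @ h)   # project onto the complement of W_i
        Q[i] = h / np.linalg.norm(h)
        idx = [i] + list(succ[i])                 # indices j with j >= i
        R[i, idx] = np.linalg.pinv(Q[idx].T) @ H[i]
    return R, Q

# Toy check with a total order on three nodes, i.e. the usual RQ case.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 5))
succ = {0: [1, 2], 1: [2], 2: []}
R, Q = partial_order_rq(H, succ)
assert np.allclose(R @ Q, H)
assert np.all(np.diag(R) > 0)
```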
Algorithm 4 Partial Order RQ Decomposition
1: Input: Matrix $H \in \mathbb{R}^{d \times p}$, partial order $\prec$ over $[d]$ consistent with the total order $1, 2, \dots, d$
2: Output: A partial order RQ decomposition $R, Q$
3: Let $R = \mathbf{0}_{d \times d}$, $Q = \mathbf{0}_{d \times p}$
4: for $i = 1, \dots, d$ do
5: Let $\mathbf{h}_i$ be the $i$-th row of $H$
6: Let $W_i = \langle \mathbf{q}_j : j \succ i \rangle$
7: Let $\mathbf{q}_i = \mathrm{normalize}(\mathrm{proj}_{W_i^\perp} \mathbf{h}_i)$ be the $i$-th row of $Q$
8: Let $\mathbf{r} = (Q_{j \succeq i}^\top)^\dagger \mathbf{h}_i$
9: For $i \preceq j$, let $R_{ij} = r_j$
10: end for
11: return $R, Q$

Appendix E Further preliminaries for identifiability and reduction
We prove Fact 1 and discuss the reduction described in Section 3.1.
Proof of Fact 1.
We have $B^\top B = B^\top \left( \sum_{i=1}^d \mathbf{e}_i \mathbf{e}_i^\top \right) B = \sum_{i=1}^d B^\top (\mathbf{e}_i \mathbf{e}_i^\top) B = \sum_{i=1}^d (B^\top \mathbf{e}_i)^{\otimes 2}$. ∎
Proposition 8.
Consider the setup in Assumption 1. For $k, \ell \in [K]$, we have
$$B_k^\top B_k - B_\ell^\top B_\ell = (B_k^\top \mathbf{e}_{i_k})^{\otimes 2} - (B_0^\top \mathbf{e}_{i_k})^{\otimes 2} - (B_\ell^\top \mathbf{e}_{i_\ell})^{\otimes 2} + (B_0^\top \mathbf{e}_{i_\ell})^{\otimes 2}, \qquad (4)$$
and thus
$$\Theta_k - \Theta_\ell = (H^\top B_k^\top \mathbf{e}_{i_k})^{\otimes 2} - (H^\top B_0^\top \mathbf{e}_{i_k})^{\otimes 2} - (H^\top B_\ell^\top \mathbf{e}_{i_\ell})^{\otimes 2} + (H^\top B_0^\top \mathbf{e}_{i_\ell})^{\otimes 2}.$$

Proof.
The proof follows the same steps as Prop. 1, together with the fact that
$$B_k^\top B_k - B_\ell^\top B_\ell = (B_k^\top B_k - B_0^\top B_0) - (B_\ell^\top B_\ell - B_0^\top B_0). \qquad ∎$$
Appendix F Reduction
In this section, we show that a more general causal disentanglement problem can be simplified to one that satisfies our assumptions. We consider an unknown observational context, and multiple contexts with the same target. We focus here on the case of perfect interventions.
F.1 Reducing to one intervention per node
We identify which contexts correspond to interventions on the same node. Thus, we can reduce to the case of one intervention per node by removing any redundant contexts. We do not use knowledge of which context is the observational context here.
Proposition 9.
Consider the setup in Assumptions 1 and 2. Assume generic parameters for $B_0$, $\lambda_k$, and $\lambda_\ell$. For $k, \ell \in [K]$, we have $r_{k,\ell} = 1$ if and only if $i_k = i_\ell$, where $r_{k,\ell} := \mathrm{rank}(\Theta_k - \Theta_\ell)$.

Proof.
Since $H$ is full rank, we have $r_{k,\ell} = \mathrm{rank}(B_k^\top B_k - B_\ell^\top B_\ell)$. Thus, we consider $\mathrm{rank}(B_k^\top B_k - B_\ell^\top B_\ell)$. Suppose $i_k = i_\ell = i$. We have
$$B_k^\top B_k - B_\ell^\top B_\ell = (B_k^\top \mathbf{e}_i)^{\otimes 2} - (B_\ell^\top \mathbf{e}_i)^{\otimes 2},$$
by Equation (4) in Prop. 8. Both $B_k^\top \mathbf{e}_i$ and $B_\ell^\top \mathbf{e}_i$ have a single nonzero entry, at the $i$-th coordinate, by Assumption 2. Thus $r_{k,\ell} = 1$.
Suppose $i_k \neq i_\ell$ and assume without loss of generality that $i_k < i_\ell$. Given a matrix $M$, let $M_U$ denote the submatrix of $M$ with rows and columns indexed by the elements of the set $U$. We have
$$(B_k^\top B_k - B_\ell^\top B_\ell)_{\{i_k, i_\ell\}} = \begin{bmatrix} \lambda_k^2 - (B_0)_{i_k, i_k}^2 & -(B_0)_{i_k, i_k} (B_0)_{i_k, i_\ell} \\ -(B_0)_{i_k, i_k} (B_0)_{i_k, i_\ell} & -\lambda_\ell^2 + (B_0)_{i_\ell, i_\ell}^2 - (B_0)_{i_k, i_\ell}^2 \end{bmatrix},$$
by Equation (4) in Prop. 8 and Assumption 2. For generic parameters, this submatrix has rank two, so the full matrix has rank at least two; i.e., $r_{k,\ell} \geq 2$. ∎
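Proposition 9's rank criterion can be illustrated numerically; a minimal sketch with perfect interventions (`intervene` and `theta` are hypothetical helpers, and the precision convention $\Theta = H^\top B^\top B H$ follows Prop. 6):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 4, 7

H = rng.normal(size=(d, p))                      # full row rank, generically
B0 = np.triu(rng.uniform(0.5, 1.5, (d, d)))      # upper triangular, positive diagonal

def intervene(B, i, lam):
    """Perfect intervention on latent node i: row i becomes lam * e_i."""
    Bk = B.copy()
    Bk[i, :] = 0.0
    Bk[i, i] = lam
    return Bk

def theta(B):
    return H.T @ B.T @ B @ H                     # observed precision matrix

Bk = intervene(B0, 1, 2.3)                       # two interventions on the same node...
Bl = intervene(B0, 1, 3.1)
Bm = intervene(B0, 2, 2.7)                       # ...and one on a different node

assert np.linalg.matrix_rank(theta(Bk) - theta(Bl)) == 1
assert np.linalg.matrix_rank(theta(Bk) - theta(Bm)) >= 2
```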
F.2 Reducing to a known observational context
The previous section explains how to reduce to the case with one intervention per latent node. We may also reduce to the case with only one observational context: if more than one context is observational, they will all have the same inverse covariance matrix, so we may select only one of these contexts to serve as the observational context $k = 0$. Next we show that, with one intervention per node and one observational context, we can identify the observational context. We show that the observational context has the "sparsest" changes from the other contexts. We formalize this intuition with the following definition.
Definition 2.
The deviation score of context $k$ is
$$r_k := \sum_{\ell \in [K] \setminus \{k\}} r_{k,\ell},$$
where $r_{k,\ell} := \mathrm{rank}(\Theta_k - \Theta_\ell)$ for all $k, \ell \in [K]$.
Proposition 10.
Consider the setup in Assumption 1. Then $k^* \in \{0\} \cup [K]$ is an observational context if and only if $k^* = \arg\min_{k \in \{0\} \cup [K]} r_k$.
Proof.
Let $\mathrm{source}(\mathcal{G})$ denote the set of source nodes in $\mathcal{G}$. By Prop. 1, $r_{0,\ell} = 1 + \mathbb{1}\{\mathrm{pa}(i_\ell) \neq \emptyset\}$ for all $\ell \in [K]$. Thus, $r_0 = 2K - |\mathrm{source}(\mathcal{G})|$.
For $k \neq 0$, we have $r_{k,\ell} \geq 2 + \mathbb{1}\{\mathrm{pa}(i_\ell) \neq \emptyset\}$ for all $\ell \in [K] \setminus \{k\}$. Thus, $r_k \geq 2K$. Since $\mathcal{G}$ must have at least one source node, we see that $r_k > r_0$ for all $k \neq 0$. ∎
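The deviation score of Definition 2 and Proposition 10 can be illustrated as follows (a minimal sketch; we include every other context in each score's sum, which we take to be the intended convention):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, K = 4, 7, 4

H = rng.normal(size=(d, p))
B0 = np.triu(rng.uniform(0.5, 1.5, (d, d)))

# Contexts 1..K: one perfect intervention per latent node.
Bs = [B0]
for node in range(K):
    Bk = B0.copy()
    Bk[node, :] = 0.0
    Bk[node, node] = rng.uniform(2, 3)
    Bs.append(Bk)

thetas = [H.T @ B.T @ B @ H for B in Bs]

# Deviation score r_k: sum of rank(Theta_k - Theta_l) over the other contexts.
r = [sum(np.linalg.matrix_rank(thetas[k] - thetas[l])
         for l in range(K + 1) if l != k)
     for k in range(K + 1)]

# The observational context (index 0) has the smallest deviation score.
assert int(np.argmin(r)) == 0
```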
Appendix G Hypothesis testing a necessary condition for model membership
We define the null hypothesis
$$H_0: \mathrm{rank}(\Theta_k - \Theta_0) \leq 2 \quad \forall k \in [K].$$
Assumption 1(b) implies that 𝐻 0 holds, by Prop. 1. The null hypothesis 𝐻 0 is a necessary condition for membership of ( Θ 0 , Θ 1 , … , Θ 𝐾 ) in the model defined by Assumption 1. However, 𝐻 0 is not a sufficient condition for model membership: we may have rank ( Θ 𝑘 − Θ 0 ) ≤ 2 for all 𝑘 ∈ [ 𝐾 ] , despite some interventions not targeting single nodes. For example, if 𝒢 is the empty graph, and all interventions have two targets, then 𝐻 0 holds. These cases may be ruled out with other conditions implied by model membership. We leave a membership test for our model to future work. Here, we focus on developing a test for 𝐻 0 .
Prior work on testing latent variable models (Drton et al., 2007; Squires et al., 2022) uses such rank constraints. To test whether a matrix $M \in \mathbb{R}^{p \times p}$ has rank $k$, one can test that all minors of size $k+1$ vanish; i.e., the collection of hypotheses
$$H_{A,B}: t_{A,B} = 0, \qquad A, B \subseteq [p], \; |A| = |B| = k + 1,$$
where $t_{A,B} := \det(M_{A,B})$.
For example, Squires et al. (2022) use this to test whether certain submatrices of a covariance matrix are rank one, as follows. Let $\hat{M}$ be the sample covariance matrix computed from $n$ samples. If the underlying distribution is multivariate Gaussian, it is well known that $\hat{M}$ follows a Wishart distribution. Now, for each pair of subsets $A, B$, compute the empirical minor $\hat{t}_{A,B} := \det(\hat{M}_{A,B})$. Then, compute an estimate $\widehat{\mathrm{Var}}(t_{A,B})$. Such an estimate can be obtained by evaluating the expression for $\mathrm{Var}(t_{A,B})$ in Drton et al. (2008), which characterizes the moments of minors of Wishart matrices. Given this estimate, compute the z-score $z_{A,B} = \hat{t}_{A,B} / \widehat{\mathrm{Var}}(t_{A,B})^{1/2}$. By standard asymptotic theory, $z_{A,B} \to \mathcal{N}(0, 1)$ as $n \to \infty$, so we can use the z-score to compute an asymptotically correct p-value. Finally, the p-values for all pairs of subsets $A, B$ can be aggregated into a single p-value using a multiple hypothesis testing procedure such as Bonferroni correction or Sidak adjustment.
In principle, a similar procedure can be used to test our null hypothesis $H_0$. However, under a Gaussianity assumption, $\Theta_k$ and $\Theta_0$ follow inverse Wishart distributions rather than Wishart distributions, so this would require expressions for the moments of minors of inverse Wishart matrices. We leave such a hypothesis test, which could give guarantees on false discovery rate control, to future work. Instead, we demonstrate the performance of a simple hypothesis test based on the singular values of the matrix $\hat\Theta_k - \hat\Theta_0$. Let
$$\rho_2(M) := \frac{\sigma_1^2(M) + \sigma_2^2(M)}{\sum_{i=1}^{p} \sigma_i^2(M)}.$$
If $\mathrm{rank}(M) \le 2$, then $\rho_2(M) = 1$; otherwise, $\rho_2(M) < 1$. Thus, we may test $H_0$ by checking whether $\rho_2(\hat\Theta_k - \hat\Theta_0) \ge \tau$ for some threshold $\tau$ near 1.
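As a concrete illustration, the score and the thresholding test can be written as follows (our own sketch, with numpy assumed; function names are hypothetical).

```python
import numpy as np

def rho2(M):
    """Fraction of squared spectral energy in the top two singular values.

    Equals 1 exactly when rank(M) <= 2, and is strictly less than 1 otherwise."""
    s = np.linalg.svd(M, compute_uv=False)
    return (s[0]**2 + s[1]**2) / np.sum(s**2)

def accept_h0(theta_k_hat, theta_0_hat, tau=0.99):
    """Accept H0 when the precision-matrix difference looks rank <= 2."""
    return bool(rho2(theta_k_hat - theta_0_hat) >= tau)
```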
We demonstrate the performance of this procedure for testing model membership. We generate 500 random models following Assumption 1, using the same hyperparameters as in Section 4; these models satisfy $H_0$. We also generate 500 random models in which each intervention targets two nodes instead of one: for each $k$, we pick intervention targets $I_k \subset [d]$ with $|I_k| = 2$, uniformly at random among all subsets of size two, holding all other hyperparameters of the simulation fixed. We consider only $n = 2500$ samples, the smallest sample size used in Section 4, and vary the threshold $\tau$ from 0.97 to 0.999, linearly spaced over 20 values. The singular-value-based test determines model membership at a rate well above random guessing; see Fig. 5.
Figure 5: Performance of our singular-value-based hypothesis test for $H_0$, the hypothesis that all precision matrix differences have rank at most two. The gray line indicates the performance of random guessing. Our test thresholds the value $\rho_2(\hat\Theta_k - \hat\Theta_0)$; here, we vary the threshold from 0.97 to 0.999.

Appendix H Computational complexity
Algorithm 3 takes as input 𝐾 precision matrices in ℝ 𝑝 × 𝑝 . If these are each computed from at most 𝑛 samples, then the total cost is 𝒪 ( 𝐾 𝑝 3 + 𝐾 𝑛 𝑝 2 ) . Algorithm 2 runs for 𝐾 rounds and computes at most 2 𝐾 projections per round. Each projection costs 𝒪 ( 𝑝 3 ) , so the cost of this step is 𝒪 ( 𝐾 2 𝑝 3 ) . In the remainder of Algorithm 3, we perform 𝒪 ( 𝐾 ) matrix multiplications and Cholesky decompositions. Each matrix multiplication costs 𝒪 ( 𝑝 2 ) and each Cholesky decomposition costs 𝒪 ( 𝑝 3 ) . The other operations (e.g. selecting rows) in Algorithm 3 are negligible. Therefore, the overall runtime of Algorithm 3 is dominated by Algorithm 2, with a total cost of 𝒪 ( 𝐾 2 𝑝 3 ) .
Appendix I Proofs

I.1 Proof of Prop. 5

Proof.
For (i), it suffices to find $\hat H$ and $\{\hat B_k\}_{k=1}^{K}$ such that $\hat B_k \hat H = B_k H$ for all $k \in [K]$, and such that there is no $\sigma \in S(\mathcal{G})$ satisfying $\hat B_k = P_\sigma B_k P_\sigma^\top$ and $\hat H = P_\sigma H$, by Equation (2). Suppose $i_k \neq 1$ for all $k$. Let
$$\hat B_k = \begin{bmatrix} \mathbf{e}_1^\top \\ (B_k)_{2:d,\,1:d} \end{bmatrix}, \qquad \hat H = \begin{bmatrix} (B_0)_1^\top H \\ H_{2:d,\,1:p} \end{bmatrix}.$$
Then, for all $k$, we have $\hat B_k \hat H = B_k H$. Suppose that $Z_1$ has at least one parent, i.e., $(B_0)_{1j} \neq 0$ for at least one $j \neq 1$. Then the first row of $\hat H$ is a nonzero combination of at least two rows of $H$. Hence it is not equal to a single row of $H$, since $H$ is full rank. Thus, $\hat H$ is not equal to $H$ up to any permutation of rows.
For (ii), observe that for the stated example, the partial order $\prec$ given by $\hat B_0$ differs in general from the partial order $\prec_{\mathcal{G}}$, since $Z_1$ has no predecessors in $\prec$. ∎
Appendix J Non-identifiability for imperfect interventions

J.1 Parameter non-identifiability
In this section, we show that Assumption 2 is necessary to identify $H$. Let
$$B_0 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} \\ 0 & (B_0)_{22} \end{bmatrix}, \quad B_1 = \begin{bmatrix} (B_1)_{11} & (B_1)_{12} \\ 0 & (B_0)_{22} \end{bmatrix}, \quad B_2 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} \\ 0 & (B_2)_{22} \end{bmatrix}, \quad H = \begin{bmatrix} 1 & H_{12} \\ 0 & 1 \end{bmatrix}.$$
Then, for any value $\hat H_{12} \in \mathbb{R}$, we have $B_k H = \hat B_k \hat H$ for all $k$, where
$$\hat B_0 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} + (B_0)_{11} H_{12} - (B_0)_{11} \hat H_{12} \\ 0 & (B_0)_{22} \end{bmatrix}, \quad \hat B_1 = \begin{bmatrix} (B_1)_{11} & (B_1)_{12} + (B_1)_{11} H_{12} - (B_1)_{11} \hat H_{12} \\ 0 & (B_0)_{22} \end{bmatrix},$$
$$\hat B_2 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} + (B_0)_{11} H_{12} - (B_0)_{11} \hat H_{12} \\ 0 & (B_2)_{22} \end{bmatrix}, \quad \hat H = \begin{bmatrix} 1 & \hat H_{12} \\ 0 & 1 \end{bmatrix}.$$
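The identity $B_k H = \hat B_k \hat H$ above can be checked numerically for random parameter values. The following sketch (ours, with numpy assumed; variable names are hypothetical) does exactly that.

```python
import numpy as np

rng = np.random.default_rng(0)
b011, b012, b022 = rng.standard_normal(3)   # entries of B0
b111, b112 = rng.standard_normal(2)         # intervened row of B1
b222 = rng.standard_normal()                # intervened entry of B2
H12, H12h = rng.standard_normal(2)          # true and alternative H12

H  = np.array([[1.0, H12],  [0.0, 1.0]])
Hh = np.array([[1.0, H12h], [0.0, 1.0]])

def hat_row(a, b):
    # first row of the hatted matrix, as in the display above
    return [a, b + a * H12 - a * H12h]

B0, B0h = np.array([[b011, b012], [0, b022]]), np.array([hat_row(b011, b012), [0, b022]])
B1, B1h = np.array([[b111, b112], [0, b022]]), np.array([hat_row(b111, b112), [0, b022]])
B2, B2h = np.array([[b011, b012], [0, b222]]), np.array([hat_row(b011, b012), [0, b222]])

for B, Bh in [(B0, B0h), (B1, B1h), (B2, B2h)]:
    assert np.allclose(B @ H, Bh @ Hh)  # same observed parameters, distinct models
```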
J.2 Graph non-identifiability
Suppose that 𝒢 has edges 2 → 1 , 3 → 2 , and 3 → 1 . Then the weight matrices have the form
$$B_0 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} & (B_0)_{13} \\ 0 & (B_0)_{22} & (B_0)_{23} \\ 0 & 0 & (B_0)_{33} \end{bmatrix}, \quad B_1 = \begin{bmatrix} (B_1)_{11} & (B_1)_{12} & (B_1)_{13} \\ 0 & (B_0)_{22} & (B_0)_{23} \\ 0 & 0 & (B_0)_{33} \end{bmatrix}, \quad B_2 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} & (B_0)_{13} \\ 0 & (B_2)_{22} & (B_2)_{23} \\ 0 & 0 & (B_0)_{33} \end{bmatrix},$$
$$B_3 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} & (B_0)_{13} \\ 0 & (B_0)_{22} & (B_0)_{23} \\ 0 & 0 & (B_3)_{33} \end{bmatrix}, \quad H = \begin{bmatrix} 1 & H_{12} & H_{13} \\ 0 & 1 & H_{23} \\ 0 & 0 & 1 \end{bmatrix}.$$
Let $\hat{\mathcal{G}}$ be the DAG with edges $2 \to 1$ and $3 \to 2$. We show that there exist $\hat H$ and matrices $\hat B_k$, with supports contained in $\hat{\mathcal{G}}$, such that $B_k H = \hat B_k \hat H$ for all $k$. Note that $\mathcal{G}$ and $\hat{\mathcal{G}}$ have the same transitive closure. Let $\hat H_{12} \in \mathbb{R}$ be arbitrary, and let $\hat H_{13}, \hat H_{23}$ be a solution to the system of equations
$$\begin{bmatrix} (B_0)_{11} & (B_0)_{12} + (B_0)_{11}(H_{12} - \hat H_{12}) \\ (B_1)_{11} & (B_1)_{12} + (B_1)_{11}(H_{12} - \hat H_{12}) \end{bmatrix} \begin{bmatrix} \hat H_{13} \\ \hat H_{23} \end{bmatrix} = \begin{bmatrix} (B_0)_{11} H_{13} + (B_0)_{12} H_{23} + (B_0)_{13} \\ (B_1)_{11} H_{13} + (B_1)_{12} H_{23} + (B_1)_{13} \end{bmatrix}.$$
This system generically has a solution, since for generic parameters the matrix on the left-hand side has rank two. A solution, in which every matrix $\hat B_k$ has vanishing $(1,3)$ entry, is then as follows.
$$\hat B_0 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} + (B_0)_{11}(H_{12} - \hat H_{12}) & 0 \\ 0 & (B_0)_{22} & (B_0)_{23} + (B_0)_{22}(H_{23} - \hat H_{23}) \\ 0 & 0 & (B_0)_{33} \end{bmatrix}, \quad \hat B_1 = \begin{bmatrix} (B_1)_{11} & (B_1)_{12} + (B_1)_{11}(H_{12} - \hat H_{12}) & 0 \\ 0 & (B_0)_{22} & (B_0)_{23} + (B_0)_{22}(H_{23} - \hat H_{23}) \\ 0 & 0 & (B_0)_{33} \end{bmatrix},$$
$$\hat B_2 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} + (B_0)_{11}(H_{12} - \hat H_{12}) & 0 \\ 0 & (B_2)_{22} & (B_2)_{23} + (B_2)_{22}(H_{23} - \hat H_{23}) \\ 0 & 0 & (B_0)_{33} \end{bmatrix}, \quad \hat B_3 = \begin{bmatrix} (B_0)_{11} & (B_0)_{12} + (B_0)_{11}(H_{12} - \hat H_{12}) & 0 \\ 0 & (B_0)_{22} & (B_0)_{23} + (B_0)_{22}(H_{23} - \hat H_{23}) \\ 0 & 0 & (B_3)_{33} \end{bmatrix}.$$
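As in Appendix J.1, this construction can be verified numerically. The sketch below (ours; numpy assumed, taking $B_1$ to intervene on the first row, as the rank-two condition on the system requires) solves the 2×2 system for $\hat H_{13}, \hat H_{23}$ and checks $B_k H = \hat B_k \hat H$ for all $k$.

```python
import numpy as np

rng = np.random.default_rng(1)
b0 = dict(zip(['11', '12', '13', '22', '23', '33'], rng.standard_normal(6)))
b1_11, b1_12, b1_13 = rng.standard_normal(3)   # intervened row of B1
b2_22, b2_23 = rng.standard_normal(2)          # intervened row of B2
b3_33 = rng.standard_normal()                  # intervened entry of B3
H12, H13, H23, H12h = rng.standard_normal(4)

# Solve the 2x2 linear system above for H13_hat, H23_hat.
lhs = np.array([[b0['11'], b0['12'] + b0['11'] * (H12 - H12h)],
                [b1_11,    b1_12    + b1_11    * (H12 - H12h)]])
rhs = np.array([b0['11'] * H13 + b0['12'] * H23 + b0['13'],
                b1_11    * H13 + b1_12    * H23 + b1_13])
H13h, H23h = np.linalg.solve(lhs, rhs)

H  = np.array([[1, H12,  H13],  [0, 1, H23],  [0, 0, 1.0]])
Hh = np.array([[1, H12h, H13h], [0, 1, H23h], [0, 0, 1.0]])

def hatted(r1, r2, r3):
    # build a hatted matrix with vanishing (1,3) entry, as displayed above
    (a, b), (c, d), e = r1, r2, r3
    return np.array([[a, b + a * (H12 - H12h), 0],
                     [0, c, d + c * (H23 - H23h)],
                     [0, 0, e]])

B0 = np.array([[b0['11'], b0['12'], b0['13']], [0, b0['22'], b0['23']], [0, 0, b0['33']]])
B1 = np.array([[b1_11, b1_12, b1_13], [0, b0['22'], b0['23']], [0, 0, b0['33']]])
B2 = np.array([[b0['11'], b0['12'], b0['13']], [0, b2_22, b2_23], [0, 0, b0['33']]])
B3 = np.array([[b0['11'], b0['12'], b0['13']], [0, b0['22'], b0['23']], [0, 0, b3_33]])

B0h = hatted((b0['11'], b0['12']), (b0['22'], b0['23']), b0['33'])
B1h = hatted((b1_11, b1_12),       (b0['22'], b0['23']), b0['33'])
B2h = hatted((b0['11'], b0['12']), (b2_22, b2_23),       b0['33'])
B3h = hatted((b0['11'], b0['12']), (b0['22'], b0['23']), b3_33)

for B, Bh in [(B0, B0h), (B1, B1h), (B2, B2h), (B3, B3h)]:
    assert np.allclose(B @ H, Bh @ Hh)
```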
Appendix K Comparison to Liu et al. (2022)
We compare Theorem 2 to the main result of Liu et al. (2022), which we restate for convenience in our notation.
Theorem (Theorem 4.1 of Liu et al. (2022)).
Let $Z = A_k Z + \varepsilon_k$ and $X = g(Z) + \varepsilon_x$. Let $\boldsymbol\eta_k$ be the sufficient statistic for the distribution of $Z$ in environment $k$. That is, $\boldsymbol\eta_k = \mathrm{vec}(\tilde\Theta_k)$, where $\tilde\Theta_k$ denotes the inverse covariance matrix of $Z$ in the $k$th setting and $\mathrm{vec}$ denotes the vectorization of a matrix, which we assume ignores zeros and repetitions. Assume that
(i) $\{x \in \mathcal{X} \mid \varphi_{\varepsilon_x}(x) = 0\}$ has measure zero, where $\varphi_{\varepsilon_x}$ is the characteristic function of $\varepsilon_x$,

(ii) $g$ is bijective, and

(iii) there exist $K+1$ environments such that the following matrix is invertible:
$$L = \begin{bmatrix} \boldsymbol\eta_1 - \boldsymbol\eta_0 & \boldsymbol\eta_2 - \boldsymbol\eta_0 & \cdots & \boldsymbol\eta_K - \boldsymbol\eta_0 \end{bmatrix}.$$
Then we can recover 𝑔 up to permutation and scaling.
First, we show that (i) and (ii) hold in our setting. Our assumption that $X = GZ$ for $G$ invertible guarantees (ii). Our assumption that $X$ is a deterministic function of $Z$ corresponds to taking $\varepsilon_x \sim \delta_0$, i.e., $\varepsilon_x = 0$ with probability one. The characteristic function is then $\varphi_{\varepsilon_x}(t) \equiv 1$, satisfying (i).
We now show that (iii) is satisfied in our setting only when the number of edges in the latent graph is at most $d$. The vector $\mathrm{vec}(\tilde\Theta_k - \tilde\Theta_0)$ has length $d + |E|$, where $|E|$ is the number of edges in the graphical model defined by $\Theta_0$. To be invertible, $L$ must be a square matrix, and hence we require $K \ge d + |E|$. If $|E| > d$, then $K > 2d$, and by pigeonhole there must be an intervention target $i$ such that $i = i_k$ for at least three values of $k$. We have $\tilde\Theta_k = B_k^\top B_k$, and thus
$$\tilde\Theta_k - \tilde\Theta_1 = (\lambda_k \mathbf{e}_{i_k})^{\otimes 2} - (B_0^\top \mathbf{e}_{i_k})^{\otimes 2} - (\lambda_1 \mathbf{e}_{i_1})^{\otimes 2} + (B_0^\top \mathbf{e}_{i_1})^{\otimes 2}.$$
Given $k_1$, $k_2$, and $k_3$ such that $i_{k_1} = i_{k_2} = i_{k_3} = i$, we see that $\boldsymbol\eta_{k_1}$, $\boldsymbol\eta_{k_2}$, and $\boldsymbol\eta_{k_3}$ differ only at position $(i,i)$. The space of vectors that differ in at most one entry is at most two-dimensional. Thus $\boldsymbol\eta_{k_1}$, $\boldsymbol\eta_{k_2}$, and $\boldsymbol\eta_{k_3}$ are not linearly independent, and $L$ is not invertible.
Algorithm 5 IdentifyPartialOrderFiniteSample
1: Hyperparameters: γ
2: Input: Precision matrices (Θ_0, Θ_1, …, Θ_K)
3: Output: Factor Q̂, partial order ≺
4: Let ℐ_0 = {}, Q̂_0 = 𝟎_{d×d}
5: for t = 1, …, K do
6:   Let M_k = proj_{Q̂_{t−1}^⊥}(Θ_k − Θ_0) for each k ∉ ℐ_{t−1}
7:   Let ρ_k = σ_1²(M_k) / (∑_{i=1}^{p} σ_i²(M_k)) for each k ∉ ℐ_{t−1}
8:   Pick k ∈ argmax_{k ∉ ℐ_{t−1}} ρ_k
9:   Let (q̂_k, 𝒜) = IdentifyAncestorsFiniteSample(Θ_k, Θ_0, {q̂_i}_{i ∈ ℐ_{t−1}}; γ)
10:  Add a′ ≻ k for any a′ ⪰ a, a ∈ 𝒜
11:  Let ℐ_t = ℐ_{t−1} ∪ {k}, Q̂_t = [q̂_k; Q̂_{t−1}]
12: end for
13: return Q̂, ≺

Algorithm 6 IdentifyAncestorsFiniteSample
1: Hyperparameters: γ
2: Input: Θ_k, Θ_0, {q̂_i}_{i ∈ ℐ}
3: Output: Vector q̂_k, ancestor set 𝒜
4: Let 𝒜 = ℐ
5: for i ∈ ℐ do
6:   Let W_{¬i} = [q̂_j : j ∈ ℐ ∖ {i}]
7:   Let M_{¬i} = proj_{W_{¬i}^⊥}(Θ_k − Θ_0)
8:   Let ρ_{¬i} = σ_1²(M_{¬i}) / (∑_{j=1}^{p} σ_j²(M_{¬i}))
9:   If ρ_{¬i} ≥ γ, let 𝒜 = 𝒜 ∖ {i}
10: end for
11: Let W = [q̂_a : a ∈ 𝒜]
12: Let M = proj_{W^⊥}(Θ_k − Θ_0)
13: Let q̂_k be the (normalized) leading left singular vector of M
14: return q̂_k, 𝒜

Appendix L Finite-sample algorithms
Matrix rank scoring. In Line 7 of Algorithm 1 and Line 7 of Algorithm 2, we check whether a subspace is rank one. In the finite-sample setting, we represent these subspaces by matrices and measure how close the matrices are to rank one.
We use the score $\rho(M) := \sigma_1^2(M) / \sum_{i=1}^{p} \sigma_i^2(M)$, where $\sigma_i(M)$ is the $i$th largest singular value of $M$; $\rho(M)$ can be interpreted as the fraction of spectral energy associated to the largest singular value of $M$. Using this score to choose the next element of the partial order does not require hyperparameters; see Line 8 of Algorithm 5. In contrast, using this score to prune the set of ancestors requires a hyperparameter to determine whether a matrix is close enough to rank one; see $\gamma$ in Line 9 of Algorithm 6. Larger values of $\gamma$ (e.g., 0.999) make the algorithm more conservative, so that it outputs a denser latent graph, while smaller values of $\gamma$ (e.g., 0.8) prune the latent graph more aggressively.
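The scoring step can be sketched as follows. This is our own illustration, not the released code, and it assumes that $\mathrm{proj}_{W^\perp}$ means projecting the columns of its argument onto the orthogonal complement of the span of the vectors in $W$.

```python
import numpy as np

def rho(M):
    """Fraction of squared spectral energy in the top singular value.

    Equals 1 exactly when rank(M) <= 1."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0]**2 / np.sum(s**2)

def project_out(M, vectors):
    """Project the columns of M onto the orthogonal complement of span(vectors)."""
    if not vectors:
        return M
    Q, _ = np.linalg.qr(np.column_stack(vectors))  # orthonormal basis
    return M - Q @ (Q.T @ M)

def pick_next(deltas, q_hats, remaining):
    """Cf. Lines 6-8 of Algorithm 5: among remaining interventions k,
    pick the one whose projected difference matrix looks most rank-one.
    `deltas[k]` holds Theta_k - Theta_0."""
    return max(remaining, key=lambda k: rho(project_out(deltas[k], q_hats)))
```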
Picking 𝐪 𝑘 . The matrix 𝑀 in Line 12 of Algorithm 6 is not guaranteed to be rank one in the finite-sample case. We instead select the leading left singular vector of 𝑀 .
Picking nonzero rows. In the finite-sample case, the matrix $\hat D_k$ in Line 10 of Algorithm 7 will usually have more than one nonzero row. We estimate the intervention target $i_k$ by picking the row of largest norm. Since we assume that the targets $i_k$ are distinct for distinct $k$, we maintain a set $\mathcal{N}$ of candidate intervention targets and do not allow repeats.
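This selection step might look as follows (a sketch under our own naming, with numpy assumed, not the released implementation):

```python
import numpy as np

def estimate_targets(C_hats, C0):
    """Estimate intervention targets from Cholesky factors: for each k,
    take the largest-norm row of D_k = C_k - C_0 among unused candidates."""
    candidates = set(range(C0.shape[0]))
    targets = []
    for C_k in C_hats:
        norms = np.linalg.norm(C_k - C0, axis=1)
        i_k = max(candidates, key=lambda i: norms[i])
        targets.append(i_k)
        candidates.remove(i_k)   # each target may be used only once
    return targets
```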
Algorithm 7 IterativeDifferenceProjectionFiniteSample
1: Hyperparameters: γ
2: Input: Precision matrices (Θ_0, Θ_1, …, Θ_K)
3: Output: Ĥ, (B̂_0, B̂_1, …, B̂_K)
4: Let d = K
5: Let (Q̂, ≺) = IdentifyPartialOrderFiniteSample((Θ_0, Θ_1, …, Θ_K); γ)
6: Let Ĉ_k = cholesky((Q̂^†)^⊤ Θ_k Q̂^†) for k = 0, …, K
7: Let 𝒩 = [p]
8: Let R̂ = I_d
9: for k = 1, …, K do
10:  Let D̂_k = Ĉ_k − Ĉ_0
11:  Pick î_k ∈ argmax_{i ∈ 𝒩} ‖(D̂_k)_i‖_2
12:  Let 𝒩 = 𝒩 ∖ {î_k}
13:  Let R̂_{î_k} = (D̂_k)_{î_k} + (Ĉ_0)_{î_k}
14: end for
15: Let Ĥ′ = R̂ Q̂
16: Let Ĥ = Λ Ĥ′, for Λ diagonal such that Ĥ satisfies the conditions on H in Assumption 1(c)
17: Let B̂_0 = cholesky((Ĥ^†)^⊤ Θ_0 Ĥ^†)
18: Let B̂_k = B̂_0 + e_{î_k}(|Λ_{î_k,î_k}| e_{î_k} − B̂_0^⊤ e_{î_k})^⊤ for k = 1, …, K
19: return Ĥ, (B̂_0, B̂_1, …, B̂_K)

Appendix M Code and Data
Our code can be found at
https://github.com/csquires/linear-causal-disentanglement-via-interventions.
M.1 Optimizing over 𝑆 ( 𝒢 )
Consider a partial order $\prec_{\mathcal{G}}$, a set of true intervention targets $i_1, \dots, i_K$, and a set of estimated intervention targets $\hat i_1, \dots, \hat i_K$. The integer linear program (5) computes the topological order $\pi^*$ consistent with $\prec_{\mathcal{G}}$ that maximizes the number of agreements between $i_k$ and $\hat i_k$. The topological order $\pi^*$ is recovered by letting $\pi^*(i) = j$ for the unique $j$ such that $A_{ij} = 1$. The first two lines of constraints ensure this uniqueness, and that $\pi^*(i) \neq \pi^*(i')$ for $i \neq i'$. The final line of constraints ensures that $\pi^*$ is consistent with $\prec_{\mathcal{G}}$: if $\pi^*$ were not consistent with $\prec_{\mathcal{G}}$, there would exist $i, i', j$ such that $i \prec_{\mathcal{G}} i'$, $A_{i'j} = 1$, and $A_{ij'} = 0$ for all $j' \le j$, violating the constraint $\sum_{j' \le j} (A_{ij'} - A_{i'j'}) \ge 0$.
$$\begin{aligned} \max_{A \in \{0,1\}^{d \times d}} \quad & \sum_{k=1}^{K} A_{i_k \hat i_k} \qquad (5) \\ \text{s.t.} \quad & \sum_{i=1}^{d} A_{ij} = 1 \quad \forall j \in [d] \\ & \sum_{j=1}^{d} A_{ij} = 1 \quad \forall i \in [d] \\ & \sum_{j' \le j} (A_{ij'} - A_{i'j'}) \ge 0 \quad \forall i \prec_{\mathcal{G}} i',\ \forall j \in [d] \end{aligned}$$
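For the small $d$ used in our experiments, the program (5) can be cross-checked by brute force. The sketch below is our own (standard library only; 0-indexed nodes): it enumerates permutations consistent with the partial order and maximizes the same agreement count as the ILP objective.

```python
from itertools import permutations

def best_consistent_order(d, precedes, true_targets, est_targets):
    """Brute-force analogue of the ILP (5): over topological orders pi
    consistent with the partial order (pi(i) < pi(i2) whenever (i, i2)
    is in `precedes`), maximize the number of k with pi(i_k) == i_hat_k."""
    best_pi, best_score = None, -1
    for perm in permutations(range(d)):
        pi = {node: pos for pos, node in enumerate(perm)}
        if any(pi[i] >= pi[j] for i, j in precedes):
            continue  # violates the partial order
        score = sum(pi[i] == j for i, j in zip(true_targets, est_targets))
        if score > best_score:
            best_pi, best_score = pi, score
    return best_pi, best_score
```

This is only feasible for small $d$; the ILP formulation scales to the regime where enumeration over $d!$ orders does not.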
M.2 Additional information on real data
The scRNA-seq dataset of Ursu et al. (2022) is available at https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE161824. The TCGA dataset of Liu et al. (2018) is available at https://gdc-hub.s3.us-east-1.amazonaws.com/download/TCGA-LUAD.survival.tsv and https://gdc-hub.s3.us-east-1.amazonaws.com/download/TCGA-LUAD.htseq_fpkm.tsv.gz.
Processing. We use EnrichR (Kuleshov et al., 2016) to pick the $p = 100$ and $p = 83$ most variable genes in the proliferation regulation gene set from the Gene Ontology. We use the values from the processed dataset of Ursu et al. (2022); the only additional processing removes cells assigned to synonymous mutations (i.e., those that do not change any amino acids and hence do not have structural effects).
Semi-synthetic analysis. Our algorithm recovers the problem parameters for the semi-synthetic data, see Fig. 6.
Comparison to TCGA dataset. Our survival analysis is performed using the Cox proportional hazards model from the lifelines package (Davidson-Pilon, 2019). To correct for multiple hypothesis testing, we use the Benjamini-Hochberg procedure from the statsmodels package (Seabold & Perktold, 2010).
Figure 6: (Semi-synthetic) The adapted version of Algorithm 3 is consistent for recovering $H$, $B_0$, and $\{i_k\}_{k=1}^{K}$ from semi-synthetic data. At each sample size, we generate 50 datasets. Note the logarithmic scale on the x-axis. In (a), we plot the mean of $\|\hat H - H\|_2$, the error in Frobenius norm. In (b), we plot the mean of $\|\hat B_0 - B_0\|_2$. In (c), we plot the fraction of models in which all intervention targets were correctly estimated.