causal inference
Causal Inference: Ignorability and Collider
https://stats.stackexchange.com/questions/541853/causal-inference-ignorability-and-collider
<p>I've encountered lots of causal inference terms and jargon (under the Neyman-Rubin potential outcome framework), and I had questions regarding ignorability.</p> <p>Is it the case that <strong>ignorability</strong> is always a condition of no <strong>sample selection bias</strong>?</p> <p>And is this term equivalent to saying there is no <strong>collider</strong>?</p> <p>Related Questions:</p> <p><a href="https://stats.stackexchange.com/questions/541852/causal-inference-moderation-and-mediation">Causal Inference: Moderation and Mediation</a></p> <p><a href="https://stats.stackexchange.com/questions/541807/causal-inference-selection-bias-and-endogeneity">Causal Inference: Selection Bias and Endogeneity</a></p> <p><a href="https://stats.stackexchange.com/questions/541855/pearls-front-door-and-back-door">Pearl's Front-door and Back-door</a></p>
700
causal inference
Causal Inference: Moderation and Mediation
https://stats.stackexchange.com/questions/541852/causal-inference-moderation-and-mediation
<p>I've encountered lots of causal inference terms and jargon (under the Neyman-Rubin potential outcome framework), and I had a question regarding mediators and moderators.</p> <p>Is it the case that <strong>moderation</strong> / moderators (interaction terms) necessarily implies unobserved causal <strong>mediation</strong>?</p> <p>For example, both gender and race can have an impact on income. In this case, if we include a (significant) multiplicative <strong>interaction term</strong> for gender and race in the regression model (i.e., gender <strong>moderates</strong> the impact of race, or vice versa), does it imply that there exists unobserved <strong>causal mediation</strong> (i.e., indirect effects) and that we are trying to use this interaction as a proxy for it?</p> <p>Related Questions:</p> <p><a href="https://stats.stackexchange.com/questions/541853/causal-inference-ignorability-and-collider">Causal Inference: Ignorability and Collider</a></p> <p><a href="https://stats.stackexchange.com/questions/541807/causal-inference-selection-bias-and-endogeneity">Causal Inference: Selection Bias and Endogeneity</a></p> <p><a href="https://stats.stackexchange.com/questions/541855/pearls-front-door-and-back-door">Pearl's Front-door and Back-door</a></p>
<p>Mediation and moderation are two distinct concepts that can, but do not always, occur together.</p> <p>Mediation occurs when the effect of one variable on another passes through a third variable, e.g., <span class="math-container">$A \rightarrow M \rightarrow Y$</span>. <span class="math-container">$M$</span> is a mediator of the relationship between <span class="math-container">$A$</span> and <span class="math-container">$Y$</span>. <span class="math-container">$A$</span> may also have a direct effect on <span class="math-container">$Y$</span> or other indirect effects through other mediators.</p> <p>Moderation occurs when the effect of one variable varies across levels of another variable, e.g., <span class="math-container">$E[Y^1-Y^0|X=1] \ne E[Y^1-Y^0|X=2]$</span>. <span class="math-container">$X$</span> is a moderator of the effect of the treatment on <span class="math-container">$Y$</span>.</p> <p>Moderation can occur without mediation when the moderator and the treatment do not cause each other, such as in a randomized experiment where both the treatment and the moderator are randomized. There is no mediator because neither variable is caused by the other, but there can still be moderation if the effect of the treatment varies across levels of the moderator.</p>
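A minimal simulation can make the last paragraph concrete (my own illustrative sketch, not part of the original answer; all variable names and effect sizes are invented): the treatment and the moderator are independently randomized, so neither can mediate the other, yet the treatment effect differs across levels of the moderator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# T (treatment) and X (moderator) are independently randomized,
# so neither variable causes the other and mediation is impossible.
T = rng.integers(0, 2, n)
X = rng.integers(0, 2, n)
# True effect of T is 1 when X = 0 and 3 when X = 1: pure moderation.
Y = T * (1 + 2 * X) + rng.normal(0, 1, n)

def effect_given(x):
    """Difference in mean outcomes between treated and untreated, within X = x."""
    m = X == x
    return Y[m & (T == 1)].mean() - Y[m & (T == 0)].mean()

effect_x0, effect_x1 = effect_given(0), effect_given(1)
print(round(effect_x0, 1), round(effect_x1, 1))  # close to 1.0 and 3.0
```

Because both variables are randomized, the subgroup contrasts are unbiased for the conditional treatment effects, and their difference is the moderation.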
701
causal inference
Statistics and causal inference?
https://stats.stackexchange.com/questions/2245/statistics-and-causal-inference
<p>In his 1984 paper <a href="http://www-unix.oit.umass.edu/~stanek/pdffiles/causal-holland.pdf">"Statistics and Causal Inference"</a>, Paul Holland raised one of the most fundamental questions in statistics:</p> <blockquote> <p>What can a statistical model say about causation?</p> </blockquote> <p>This led to his motto:</p> <blockquote> <p>NO CAUSATION WITHOUT MANIPULATION</p> </blockquote> <p>which emphasized the importance of restrictions around experiments that consider causation. Andrew Gelman makes <a href="http://www.stat.columbia.edu/~cook/movabletype/archives/2010/08/no_understandin.html">a similar point</a>:</p> <blockquote> <p>"To find out what happens when you change something, it is necessary to change it."...There are things you learn from perturbing a system that you'll never find out from any amount of passive observation.</p> </blockquote> <p>His ideas are summarized in <a href="http://www.stat.columbia.edu/~gelman/research/published/causalreview4.pdf">this article</a>.</p> <p>What considerations should be made when making a causal inference from a statistical model?</p>
<p>This is a broad question, but given that the Box, Hunter and Hunter quote is true, I think what it comes down to is:</p> <ol> <li><p>The quality of the experimental design:</p> <ul> <li>randomization, sample sizes, control of confounders, ...</li> </ul></li> <li><p>The quality of the implementation of the design:</p> <ul> <li>adherence to protocol, measurement error, data handling, ...</li> </ul></li> <li><p>The quality of the model to accurately reflect the design:</p> <ul> <li>blocking structures are accurately represented, proper degrees of freedom are associated with effects, estimators are unbiased, ...</li> </ul></li> </ol> <p>At the risk of stating the obvious, I'll try to hit on the key points of each:</p> <ol> <li><p>is a large sub-field of statistics, but in its most basic form I think it comes down to the fact that when making causal inference we ideally start with identical units that are monitored in identical environments other than being assigned to a treatment. Any systematic differences between groups after assignment are then logically attributable to the treatment (we can infer cause). But the world isn't that nice: units differ prior to treatment, and environments during experiments are not perfectly controlled. So we "control what we can and randomize what we can't", which helps to ensure that there won't be systematic bias due to the confounders that we controlled or randomized. One problem is that experiments tend to be difficult (or impossible) and expensive, and a large variety of designs have been developed to efficiently extract as much information as possible in as carefully controlled a setting as possible, given the costs. Some of these are quite rigorous (e.g. in medicine the double-blind, randomized, placebo-controlled trial) and others less so (e.g. various forms of 'quasi-experiments').</p></li> <li><p>is also a big issue and one that statisticians generally don't think about... though we should. 
In applied statistical work I can recall instances where 'effects' found in the data were spurious results of inconsistencies in data collection or handling. I also wonder how often information on true causal effects of interest is lost due to these issues (I believe students in the applied sciences generally have little-to-no training about the ways data can become corrupted - but I'm getting off topic here...)</p></li> <li><p>is another large technical subject, and another necessary step in objective causal inference. To a certain degree this is taken care of because the design crowd develop designs and models together (since inference from a model is the goal, the attributes of the estimators drive design). But this only gets us so far, because in the 'real world' we end up analysing experimental data from non-textbook designs, and then we have to think hard about things like the appropriate controls, how they should enter the model, what the associated degrees of freedom should be, whether assumptions are met (and, if not, how to adjust for violations), and how robust the estimators are to any remaining violations.</p></li> </ol> <p>Anyway, hopefully some of the above helps in thinking about considerations in making causal inference from a model. Did I forget anything big?</p>
702
causal inference
What distinction is there between statistical inference and causal inference?
https://stats.stackexchange.com/questions/233235/what-distinction-is-there-between-statistical-inference-and-causal-inference
<p>Be it on a practical or theoretical level, what would you say are the key differences between statistical inference and causal inference?</p> <p>I've been trying to learn more about causal inference and don't see a key difference in most instances.</p> <p>If anything, I'd say that statistical inference is about finding associations, while causal inference uses counterfactuals/DAGs to infer causal patterns. So there are differences in terms of techniques and a greater emphasis on things like omitted variable bias.</p>
<p>Causal inference is the process of <em>ascribing causal relationships</em> to associations between variables. Statistical inference is the process of using statistical methods to <em>characterize the association</em> between variables. Causality is at the root of scientific explanation, which is considered to be causal explanation. However, establishing causal relationships is extremely difficult in spite of the substantial advances made during the past decades. Statistical inference works like a black box and generates the best possible characterization of the relationships between variables. It provides estimates of the associations between variables, but of course, association does not imply causation, so there is little that statistical inference alone can provide to establish causation. That is not to say that statistical tools cannot be used to establish causal relationships, but for that purpose a number of rules must be taken into account. These rules are what is generally known as the <em>covering laws</em>, of which statistical inference is the method used in the model of statistical relevance designed to establish scientific explanations. As scientific explanations are causal explanations, a delicate relationship is established between statistical inference and causal inference. For a review of these concepts see Judea Pearl's "Causal Inference in Statistics: An Overview" (<a href="http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf" rel="noreferrer">http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf</a>).</p>
703
causal inference
Causal Inference: Selection Bias and Endogeneity
https://stats.stackexchange.com/questions/541807/causal-inference-selection-bias-and-endogeneity
<p>I've encountered lots of causal inference terms and jargon (under the Neyman-Rubin potential outcome framework), and I had questions regarding their relationships.</p> <p>I know that exogeneity E(e|X) = 0 is a regression assumption that can be violated by omitted variable bias, and that selection bias is an omitted variable issue (such as Angrist &amp; Pischke's hospital example).</p> <p>How do <strong>selection bias</strong> and <strong>endogeneity</strong> relate to each other? Is the latter a proper subset of the former? Or are they equivalent? Or are they merely overlapping?</p> <p>Thank you very much for your help in advance!</p> <p>Related Questions:</p> <p><a href="https://stats.stackexchange.com/questions/541852/causal-inference-moderation-and-mediation">Causal Inference: Moderation and Mediation</a></p> <p><a href="https://stats.stackexchange.com/questions/541853/causal-inference-ignorability-and-collider">Causal Inference: Ignorability and Collider</a></p> <p><a href="https://stats.stackexchange.com/questions/541855/pearls-front-door-and-back-door">Pearl's Front-door and Back-door</a></p>
704
causal inference
Causal inference for additive multiple treatments
https://stats.stackexchange.com/questions/432298/causal-inference-for-additive-multiple-treatments
<p>I encountered a causal inference problem in practice and want to find out whether there is a previously established statistical toolset that can be applied to my problem.</p> <p>My problem is characterized as follows:</p> <ul> <li>My goal is to characterize the causal effect of each of the <span class="math-container">$T$</span> treatments <span class="math-container">$X_1, \cdots, X_T$</span> on outcome <span class="math-container">$Y$</span>, where the values of treatments and outcomes are binary (0 or 1).</li> <li>Unlike a typical multiple treatment setting, multiple treatments can be active simultaneously. For example, it is possible for a sample to have <span class="math-container">$X_1 =1$</span> and <span class="math-container">$X_2 =1$</span>.</li> <li>If necessary, the additivity assumption can be introduced. For example, the causal effect of having <span class="math-container">$(X_1, X_2)=(1, 1)$</span> is the sum of the causal effects of the two cases <span class="math-container">$(X_1, X_2)=(1,0)$</span> and <span class="math-container">$(X_1, X_2)=(0, 1)$</span>.</li> <li>As in typical causal inference settings such as that of propensity score matching, there can be multiple common cause variables.</li> </ul> <p>Q1. Is there a specific term describing the problem setting described above? If there is such a term, could you cite a few pedagogical materials?</p> <p>Q2. If the problem has not been studied before, how can it be tackled using existing causal inference methods?</p>
<p>A1: A term sometimes used is "joint interventions". However, joint interventions explicitly refers to multiple treatments, <em>not</em> multiple treatments with the additivity assumption. <a href="https://www.who.int/publications/cra/chapters/volume2/2191-2230.pdf" rel="nofollow noreferrer">This chapter</a> may be a useful resource to refer to. Ideally, we would avoid the additivity assumption, and I wouldn't recommend using a different term to shorten the phrase (it is clearer to describe the assumption being made than to use some singular term).</p> <p>A2: This problem has been tackled through a variety of approaches, as detailed in the chapter above. One of my favorite applied examples is <a href="https://academic.oup.com/ije/article/38/6/1599/669228" rel="nofollow noreferrer">Taubman et al. 2009</a>. The authors apply the parametric g-formula in the time-varying treatment/exposure setting (which is more complicated but demonstrates the concept). <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3710547/" rel="nofollow noreferrer">McCaffrey et al. 2013</a> describe multiple treatments in the context of propensity scores.</p>
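To illustrate what the additivity assumption buys, here is a toy simulation of my own (not from the cited chapter or papers; all variable names and effect sizes are hypothetical): when the structural equation for the outcome really is additive in the two treatments, a single regression that adjusts for the common cause recovers both effects at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# A common cause C drives both binary treatments X1 and X2.
C = rng.normal(size=n)
X1 = (C + rng.normal(size=n) > 0).astype(float)
X2 = (0.5 * C + rng.normal(size=n) > 0).astype(float)
# Additive structural equation: the joint effect of (X1, X2) = (1, 1)
# is just the sum of the individual effects 2.0 and -1.0.
Y = 2.0 * X1 - 1.0 * X2 + C + rng.normal(size=n)

# Under additivity and no unmeasured confounding, adjusting for C in one
# linear model identifies both treatment effects simultaneously.
design = np.column_stack([np.ones(n), X1, X2, C])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(beta[1].round(2), beta[2].round(2))  # close to 2.0 and -1.0
```

Without additivity one would instead have to estimate the effect of each joint treatment combination, which is exactly why the assumption is attractive but should be stated explicitly.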
705
causal inference
Causal inference and Propensity score
https://stats.stackexchange.com/questions/642327/causal-inference-and-propensity-score
<p>I am trying to understand Rubin's causal model but I cannot make the connection between certain notions. The problem of causal inference lies in calculating the counterfactual, i.e. knowing what the outcome would have been in the absence of / with the treatment.</p> <p>The causal effect is individual (and unobservable), so we are more interested in two aggregate effects: the ATE and the ATT. If treatment allocation is independent of the potential outcomes, then selection bias is eliminated, so ATT = ATE (randomization). In causal inference methods, we try to reproduce randomization (controlling selection bias as much as possible).</p> <p>To do this, we can calculate a propensity score and apply various methods (matching, IPTW, stratification).</p> <p>I cannot make the connection between ATT/ATE and the propensity score method:</p> <ol> <li>Why does logistic regression (followed by matching, IPTW or stratification) reduce this bias?</li> <li>How does this relate to the counterfactual? Shouldn't we try to estimate it?</li> <li>Why are there different methods depending on whether you want to estimate the ATT or the ATE? Because if you remove the bias, then ATT = ATE.</li> </ol> <p>In my mind, it is difficult to make the link between these methods and Rubin's general model.</p> <p>Thank you for enlightening me on the subject.</p>
<p>Strictly speaking, propensity score (PS) analysis is not a causal method. It is just a “confounder concentrator” or data reduction method. It allows you to use fewer parameters in the outcome model and still capture confounding. You still must adjust for outcome heterogeneity in addition to differences in baseline variables across groups, so the PS is not enough to do the right analysis. Covariate adjustment, adding a spline function of the logit of the PS, is a good approach. But I seldom see a dataset where I need the PS. I can do straight covariate adjustment, sometimes with penalization if there are too many covariates to model.</p> <p>PS analysis assumes that all confounders are measured and are part of the PS model. This is a quite silly assumption for many problems.</p>
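The covariate-adjustment idea can be sketched in a few lines (a toy example of my own, not the answer author's code; it uses a plain linear term in the logit of the PS rather than the recommended spline, and assumes the single confounder C is measured):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical observational data: one measured confounder C affects
# both treatment assignment and the outcome.
C = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * C))           # true propensity score
T = (rng.random(n) < p_treat).astype(float)
Y = 1.5 * T + 2.0 * C + rng.normal(size=n)     # true treatment effect: 1.5

# Fit the propensity model with a few Newton-Raphson steps of
# logistic regression of T on C.
Xd = np.column_stack([np.ones(n), C])
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xd @ b))
    W = p * (1 - p)
    b += np.linalg.solve(Xd.T @ (W[:, None] * Xd), Xd.T @ (T - p))
logit_ps = Xd @ b

# The naive difference in means is confounded; a regression of Y on the
# treatment plus logit(PS) removes the confounding.
naive = Y[T == 1].mean() - Y[T == 0].mean()
Z = np.column_stack([np.ones(n), T, logit_ps])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(round(naive, 2), round(beta[1], 2))  # naive is biased upward; adjusted is near 1.5
```

In this toy setup the logit of the PS is a linear function of C, so adjusting for it is equivalent to adjusting for C directly; with several confounders or nonlinearity, the spline-of-logit(PS) approach described above becomes the data-reduction device.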
706
causal inference
Causal inference with (only) interval uncertainty?
https://stats.stackexchange.com/questions/626310/causal-inference-with-only-interval-uncertainty
<p>The modern popular frameworks that I am aware of for causal inference (i.e. potential outcomes or Pearlian) are based on a premise of uncertainty quantified as probability. There's nothing particularly wrong with that, but I like to explore tools and use cases.</p> <p><a href="https://en.wikipedia.org/wiki/Interval_arithmetic" rel="nofollow noreferrer">Interval arithmetic</a> (IA) is a way of quantifying/qualifying uncertainty. Some advantages I have found with implementing interval arithmetic include:</p> <ul> <li>It tends to be more comprehensible to non-technical audiences than probability.</li> <li>Intervals can be assigned according to background information (similar to causal inference and Bayesian inference).</li> <li>Calculations for propagation of uncertainty are <em>often</em> simple (esp. for monotonic functions).</li> </ul> <p>IA is not a general replacement for probability, however, and has some disadvantages of its own:</p> <ul> <li>Multiplication is only subdistributive over addition (which can confuse people).</li> <li>The <a href="https://en.wikipedia.org/wiki/Interval_arithmetic#Dependency_problem" rel="nofollow noreferrer">dependency problem</a>.</li> <li>Division with intervals covering zero requires splitting the problem into multiple subproblems, which can be tedious.</li> <li>Finding the tightest interval for the image of a function is tantamount to solving an optimization problem, and optimization problems can get very technical or even infeasible.</li> <li>It ignores the degree of uncertainty within or outside the interval.</li> </ul> <p>Further, it can be combined with probability theory (e.g. <a href="https://en.wikipedia.org/wiki/Probability_box" rel="nofollow noreferrer">pboxes</a>).</p> <p>But I was thinking about causal inference with intervals only. Has this been researched? If so, I would love a starting paper/book to start reading up on that literature.</p>
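The subdistributivity and dependency problems listed above can be demonstrated with a minimal interval class (an illustrative sketch only; it handles just addition, subtraction, and multiplication, and ignores outward rounding and division):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        # the product interval is bounded by the four endpoint products
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

x, y, z = Interval(-1, 1), Interval(2, 3), Interval(-3, -2)
# Subdistributivity: x*(y+z) is contained in, but can be strictly
# narrower than, x*y + x*z.
print(x * (y + z))    # Interval(lo=-1, hi=1)
print(x * y + x * z)  # Interval(lo=-6, hi=6)
# Dependency problem: the same quantity subtracted from itself is not [0, 0],
# because each occurrence of x is treated as an independent interval.
print(x - x)          # Interval(lo=-2, hi=2)
```

Both effects are consequences of the same issue: interval operations forget that two occurrences of a variable are correlated, which is exactly why tight images require the optimization view mentioned above.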
<p>I'm not aware of anything using Interval Arithmetic, but perhaps the idea behind <a href="https://arxiv.org/abs/1501.01332" rel="nofollow noreferrer">invariant causal prediction</a> comes close. As described and implemented in the article, it's still very much rooted in probabilities, but the fundamental idea of invariant prediction could probably also be encoded using another measure of invariance that does not rely on probability distributions.</p>
707
causal inference
What is the relation between causal inference and prediction?
https://stats.stackexchange.com/questions/56909/what-is-the-relation-between-causal-inference-and-prediction
<p>What are the relationships and the differences between causal inference and prediction (both classification and regression)?</p> <p>In the prediction context, we have predictor/input variables and response/output variables. Does that mean that there is a causal relation between the input and output variables? So, does prediction belong to causal inference?</p> <p>If I understand correctly, causal inference aims to estimate the conditional distribution of one random variable given another random variable, and often uses graphical models to represent the conditional independence between random variables. So, causal inference, in this sense, isn't prediction, is it?</p>
<p>Causal inference is focused on knowing what happens to <span class="math-container">$Y$</span> when you change <span class="math-container">$X$</span>. Prediction is focused on knowing the next <span class="math-container">$Y$</span> given <span class="math-container">$X$</span> (and whatever else you've got).</p> <p>Usually, in causal inference, you want an unbiased estimate of the effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span>. In prediction, you're often more willing to accept a bit of bias if you can reduce the variance of your prediction.</p>
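A small simulation of that tradeoff (my own sketch, not part of the original answer; the sample sizes, signal strength, and ridge penalty are arbitrary): ridge regression yields biased coefficient estimates, yet with a weak signal and few observations its predictions beat unbiased OLS on average test MSE.

```python
import numpy as np

rng = np.random.default_rng(3)

def avg_test_mse(lam, reps=500, n=30, p=10):
    """Average test MSE of ridge regression with penalty lam over many
    simulated train/test splits (lam = 0 is ordinary least squares)."""
    beta = np.full(p, 0.3)  # weak true signal
    mses = []
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        y = X @ beta + rng.normal(size=n)
        X_test = rng.normal(size=(n, p))
        y_test = X_test @ beta + rng.normal(size=n)
        b = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        mses.append(((y_test - X_test @ b) ** 2).mean())
    return float(np.mean(mses))

mse_ols = avg_test_mse(lam=0.0)     # unbiased, higher variance
mse_ridge = avg_test_mse(lam=10.0)  # biased, lower variance
print(round(mse_ols, 2), round(mse_ridge, 2))  # ridge should come out lower
```

For causal inference the shrunken ridge coefficients would be a poor estimate of the effect of any single regressor, which is exactly the asymmetry the answer describes.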
708
causal inference
Regression in Causal Inference
https://stats.stackexchange.com/questions/479432/regression-in-causal-inference
<p>I was recently introduced to the topic of causal inference in statistics and I am currently learning about the importance of the backdoor criterion (BDC), as applied to the following DAG. Interest lies in assessing the causal effect of the treatment <span class="math-container">$X$</span> upon the outcome <span class="math-container">$Y$</span>. It is easily established that the sets of variables <span class="math-container">$\lbrace U_1, U_3\rbrace$</span>, <span class="math-container">$\lbrace U_2, U_3\rbrace$</span> and <span class="math-container">$\lbrace U_1, U_2, U_3\rbrace$</span> all satisfy the requirements of the BDC.</p> <p>My confusion lies in understanding how a causal effect is modeled. Often I have seen references to OLS regression and regression with inverse probability weighting (IPW). However, I have seen very little in the way of literature describing how these can be applied to a situation such as that described in the DAG below, what conditioning on variables (or sets of variables) means in a regression model, and indeed how to establish which of the three sets of variables given above should be conditioned on.</p> <p>A concise explanation of the above concepts as applied to an example DAG, such as the one I have given, would be very much appreciated.</p> <p><a href="https://i.sstatic.net/VkU87.png" rel="noreferrer"><img src="https://i.sstatic.net/VkU87.png" alt="enter image description here" /></a></p>
<p>Just to add to the excellent answers by Adrian and Noah, there is the residual question of:</p> <blockquote> <p>how to establish which of the three sets of variables given above should be conditioned on.</p> </blockquote> <p>First, let's recap how the backdoor criterion is applied to this particular DAG, which I'm reposting here:</p> <p><a href="https://i.sstatic.net/VkU87.png" rel="noreferrer"><img src="https://i.sstatic.net/VkU87.png" alt="enter image description here" /></a></p> <p>Usually we are interested in the &quot;average causal effect&quot; (ACE), which is the expected increase of <span class="math-container">$Y$</span> for a unit change in <span class="math-container">$X$</span>. This means that we must allow all causal paths from <span class="math-container">$X$</span> to <span class="math-container">$Y$</span> to remain open, but we must block any backdoor paths between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.</p> <p>What makes this DAG quite intriguing is that <span class="math-container">$U_3$</span> appears to be a confounder for <span class="math-container">$X \rightarrow Y$</span> but is also a collider (having 2 direct causes, <span class="math-container">$U_1$</span> and <span class="math-container">$U_2$</span>). So a simplistic approach would be to say that we need to condition on it to block the backdoor path <span class="math-container">$Y \leftarrow U_3 \rightarrow X$</span>, but then we don't want to condition on it, because that will open up the backdoor path <span class="math-container">$Y \leftarrow U_2 \rightarrow U_3 \leftarrow U_1 \rightarrow X$</span>. 
This is easily resolved by blocking that path by additionally conditioning on either <span class="math-container">$U_2$</span> or <span class="math-container">$U_1$</span>, or indeed both.</p> <p>Thus we have arrived at the 3 candidate adjustment sets <span class="math-container">$\lbrace U_1, U_3\rbrace$</span>, <span class="math-container">$\lbrace U_2, U_3\rbrace$</span> and <span class="math-container">$\lbrace U_1, U_2, U_3\rbrace$</span>.</p> <p>All 3 sets will give us an unbiased estimate of the causal effect, so how do we choose between them?</p> <p>We could reject the larger set <span class="math-container">$\lbrace U_1, U_2, U_3\rbrace$</span> on two grounds. First, model parsimony. Second, <span class="math-container">$U_2$</span> and <span class="math-container">$U_3$</span> are correlated, and this correlation could be very high, leading to instability in the estimation procedure that is used to fit the model. If they are not highly correlated then we might still consider this set, but with the additional considerations below:</p> <ul> <li><p>We choose the set which gives us the most precise estimate of the causal effect - in a multivariable regression model this would be the estimate with the smallest standard error.</p> </li> <li><p><span class="math-container">$\lbrace U_2, U_3\rbrace$</span> will yield the most precise estimate because, conditional on them, <span class="math-container">$U_1$</span> is an instrument and therefore should not be adjusted for. Adjusting for <span class="math-container">$U_2$</span> reduces the residual variance of <span class="math-container">$Y$</span> more than adjusting for <span class="math-container">$U_1$</span> would. Thanks to Noah for pointing this out in the comments. 
Here is a Monte Carlo simulation in R of this DAG that demonstrates this:</p> </li> </ul> <pre><code>library(dplyr)    # provides %&gt;%
library(ggplot2)

set.seed(15)
nsim &lt;- 1000
se_1 &lt;- numeric(nsim)
se_2 &lt;- numeric(nsim)
N &lt;- 500

for (i in 1:nsim) {
  # simulate the DAG
  U1 &lt;- rnorm(N, 10, 2)
  U2 &lt;- -U1 + rnorm(N, 10, 2)
  U3 &lt;- U1 + U2 + rnorm(N, 10, 2)
  X  &lt;- U1 + U3 + rnorm(N, 10, 2)
  Y  &lt;- X + U3 + U2 + rnorm(N, 10, 2)

  # standard error of the coefficient of X when adjusting for {U1, U3}
  coefs_1 &lt;- lm(Y ~ X + U3 + U1) %&gt;% summary() %&gt;% coef()
  se_1[i] &lt;- coefs_1[6]

  # standard error of the coefficient of X when adjusting for {U2, U3}
  coefs_2 &lt;- lm(Y ~ X + U3 + U2) %&gt;% summary() %&gt;% coef()
  se_2[i] &lt;- coefs_2[6]
}

# collect the simulated standard errors for plotting
df &lt;- data.frame(SE = c(se_1, se_2),
                 U  = rep(c(&quot;U1&quot;, &quot;U2&quot;), each = nsim))

ggplot(df, aes(x = SE, group = U, color = U)) +
  geom_histogram(aes(y = ..density..), alpha = 0.7,
                 position = &quot;identity&quot;, bins = 30) +
  geom_density()
</code></pre> <p><a href="https://i.sstatic.net/QFyJv.png" rel="noreferrer"><img src="https://i.sstatic.net/QFyJv.png" alt="enter image description here" /></a></p> <p>As we can see, conditioning on <span class="math-container">$U_2$</span> gives consistently lower standard errors than conditioning on <span class="math-container">$U_1$</span>.</p>
709
causal inference
Causal inference from a cross sectional study design
https://stats.stackexchange.com/questions/147443/causal-inference-from-a-cross-sectional-study-design
<p>As far as I know, causal inference can be made only from longitudinal study designs. Is there any way to make causal inferences from a cross-sectional study design? If yes, how can I do this? Please share any literature that is available.</p>
<p>You could also use the pcalg package if you are interested in network analysis (graphical modeling) and creating directed causal networks. pcalg has several algorithms for observational (cross-sectional) data. Under the assumption of no hidden variables, you could use the "pc" algorithm in the package to estimate the equivalence class of a directed acyclic graph (DAG) from observational data. Depending on whether your variables have a Gaussian distribution or are discrete (ordinal) or binary, you could use different conditional independence tests in the package. For example, using the datasets that come with the package, you could do the following for the three types of variables mentioned above (these are modified examples from the package PDF):</p> <pre><code>library(pcalg)

##################################################
## Using Gaussian data
##################################################
## Load predefined data
data(gmG)
n &lt;- nrow(gmG8$x)
V &lt;- colnames(gmG8$x)  # labels aka node names

## estimate CPDAG
pc.fit &lt;- pc(suffStat = list(C = cor(gmG8$x), n = n),
             indepTest = gaussCItest,  ## indep. test: partial correlations
             alpha = 0.01, labels = V, verbose = TRUE)

if (require(Rgraphviz)) {
  ## show estimated CPDAG
  plot(pc.fit, main = "Estimated CPDAG")
  ## CPDAG stands for completed partially directed acyclic graph,
  ## i.e. what the pc algorithm computes for you.
}

##################################################
## Using discrete data
##################################################
## Load data
data(gmD)
V &lt;- colnames(gmD$x)

## define sufficient statistics
suffStat &lt;- list(dm = gmD$x, nlev = c(3, 2, 3, 4, 2), adaptDF = FALSE)

## estimate CPDAG
pc.D &lt;- pc(suffStat,
           indepTest = disCItest,  ## independence test: G^2 statistic
           alpha = 0.01, labels = V, verbose = TRUE)

if (require(Rgraphviz)) {
  plot(pc.D, main = "Estimated CPDAG")
  ## plot(gmD$g, main = "True DAG")
}

##################################################
## Using binary data
##################################################
## Load binary data
data(gmB)
V &lt;- colnames(gmB$x)

## estimate CPDAG
pc.B &lt;- pc(suffStat = list(dm = gmB$x, adaptDF = FALSE),
           indepTest = binCItest,
           alpha = 0.01, labels = V, verbose = TRUE)
pc.B

if (require(Rgraphviz)) {
  plot(pc.B, main = "Estimated CPDAG")
  ## plot(gmB$g, main = "True DAG")
}
</code></pre>
710
causal inference
Machine learning for causal inference
https://stats.stackexchange.com/questions/565090/machine-learning-for-causal-inference
<p>I have a multiclass classification problem where the target variable is actually different categories of causes, and the dataset is observational. I know of causal inference, and I would like to learn more about it, but if I do I would need to justify it. So: is it justified to believe that a causal approach would yield more accurate classification results than classical machine learning (ML)?</p> <p><strong>EDIT FOR CLARIFICATION</strong></p> <p>Causal methods should not be used simply because the target variable is called &quot;cause&quot;. However, causes are a special kind of target variable, because different causes might not be fully independent due to confounding or mediation. Do such structural considerations affect classification accuracy enough that methods that model them would be more accurate?</p>
<p>Start with the <a href="https://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning">The Two Cultures: statistics vs. machine learning?</a> thread. Machine learning is about finding patterns or correlations in data. Causal inference, like statistics, is about inference. As others already noticed in the comments, those are different problems. When your aim is to study if smoking causes cancer, your aim is <em>not</em> classification accuracy, but confirming or rejecting the existence of the causal relationship. On another hand, if you want to accurately predict that someone will get cancer, you may throw many different variables that directly or indirectly relate to cancer to maximize the predictive performance. Sure, causal reasoning could and should inspire your decisions on which features to consider, but you wouldn't make the prediction based only on the fact that somebody smokes cigarettes because there is a causal relationship. Keep in mind that there could be multiple causal relationships (many things cause cancer), we may not be able to measure all of them, the causal relationship also doesn't mean that something is certain (not everyone who smokes gets cancer), they can be of varying strength, and the data would also be noisy, so having the features that are causally related does not give you classification performance guarantees.</p> <p>Answering your question with a metaphor, using a causal model for classification is like if your task was to transport goods from A to B and you approached it with designing your own lorry. Sure, the custom lorry may be faster and more efficient for your problem, but is it worth it? Same with causal inference, if you used it, it would mean that you need to spend at least twice as much time on the problem since you would be solving two problems (a) causal inference, (b) classification. 
It is also non-trivial how you would inject the causal knowledge into the classification model, so this might be a third research problem to solve. If you are working on a high-stakes problem (e.g. medicine) and have a budget that allows you to do the research, sure, it might be worth it. But in most cases, if your aim is only to do classification, it is enough to have a machine learning model that finds by itself an approximate representation of the data that is good enough for classification.</p>
711
causal inference
Bias-Variance tradeoff in prediction versus causal inference
https://stats.stackexchange.com/questions/620053/bias-variance-tradeoff-in-prediction-versus-causal-inference
<p>In prediction, accepting a little more bias in exchange for a lot less variance is the very name of the game - we'll choose the model with minimal test MSE without regard for its composition (bias squared versus variance). In causal inference, we rarely - if ever - are willing to make this tradeoff. The emphasis/weight placed in textbooks preoccupied with causal inference (e.g. statistics, econometrics) on theorems such as Gauss-Markov or Cramer-Rao seems to support this point. Take for example this sentence from <em>Peter Kennedy's A Guide to Econometrics</em> - <em>&quot;In practice, the MSE criterion is not usually adopted unless the best unbiased criterion is unable to produce estimates with small variances. The problem of multicollinearity, discussed in chapter 11, is an example of such a situation.&quot;</em> In causal inference, is an unbiased estimator with minimal variance the holy grail? Or is that just the starting point in our search? When are we willing to accept a little more bias in exchange for a lot less variance in causal inference?</p> <p>In machine learning a lot of time is spent on choosing amongst models (e.g. Nearest Neighbor Model versus Linear Regression Model) and less time on choosing the estimator given a model; in statistics/econometrics - causal inference - it seems that less time is spent on choosing the statistical model (e.g. Linear Regression Model) and more time is spent on choosing the best estimator (e.g. Least Squares versus Maximum Likelihood) given a statistical model and a causal model that we have in mind. That distinction seems very relevant here and I would love it if the answer to this question would address that directly. If I made some incorrect statements in asking this question, please correct me; there are clearly some gaps in my understanding of how these topics interrelate.</p>
<h4>Estimators should be judged as normal (so biased estimators are not ruled out), but with appropriate experimental protocols to deal with causality</h4> <p>I disagree with some of the other answers here. When conducting causal analysis, there is still a distinction between attempting to make inferences about unknown parameters, and attempting to make predictions for new instances of data (presumably subject to some intervention, since this is causal analysis). This means that we could be conducting causal inference or we could be attempting to predict new outcomes, taking account of causality. In either case, I see no particular reason why we would restrict ourselves to unbiased estimators/predictors, particularly if there are superior estimators/predictors with some bias but much better MSE (or other properties that make them superior estimators). Texts in econometrics include discussion of unbiased estimators and MVUEs, etc., for the same reason that standard statistics books include these --- because they are useful parts of estimation theory. That does not mean that they are the only admissible estimators, nor that causal analysis has some special requirement to restrict to these estimators.</p> <p>When undertaking causal analysis, the only real difference to regular (non-intervention based) statistical analysis is that we make an effort to make inferences about causal effects that account for the underlying causal structure of the problem (e.g., colliders, confounding, etc.) and we impose additional experimental protocols to sever certain problematic causal effects that might exist (e.g., confounding) to allow us to interpret statistical associations as causal effects. Aside from that, all of the standard distinctions and principles of statistics apply to the estimation of the underlying model from the data. 
An unbiased estimator is not always a superior estimator, and when you compare it to an estimator with small bias but much smaller MSE, the unbiased estimator will usually be <em>further away</em> from the true parameter of interest (i.e., it will usually have <em>more error</em>). There may be cases when an available unbiased estimator has poor performance (e.g., high MSE) but another available biased estimator has good performance (e.g., low bias, low MSE), but, to be clear, we also need not choose the estimator with minimum MSE; all estimators should be on the table and should be considered based on the totality of their properties and relevant trade-offs.</p> <p>On this matter, it is also notable that if biased estimation were ruled inadmissible in causal analysis, this would effectively rule out Bayesian methods in this field. Bayesian estimators are almost always biased, due to the incorporation of prior information. Nevertheless, Bayesian models are known to have many good estimation properties --- their estimators are admissible, consistent (under correct model specification), and they incorporate prior information according to the principles of probability. Estimators from Bayesian analysis (e.g., the posterior mode) may have superior performance to unbiased estimators in various circumstances.</p>
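The point that a slightly biased estimator can beat an unbiased one in MSE can be checked with a small analytic sketch. The numbers are invented, and the optimal shrinkage factor below uses the unknown mean, so this is an oracle illustration of the trade-off, not a usable estimator:

```python
# Analytic MSE comparison for estimating the mean of N(mu, sigma^2) from n
# observations: the unbiased sample mean xbar versus a shrunken version c*xbar.
# MSE(c) = bias^2 + variance = ((1 - c) * mu)^2 + c^2 * sigma2 / n.
mu, sigma2, n = 1.0, 1.0, 10   # illustrative values only

def mse(c):
    bias = (1 - c) * mu
    variance = c**2 * sigma2 / n
    return bias**2 + variance

mse_unbiased = mse(1.0)                    # plain sample mean: sigma2 / n = 0.1
c_star = mu**2 / (mu**2 + sigma2 / n)      # MSE-optimal shrinkage (needs mu!)
mse_shrunk = mse(c_star)                   # strictly smaller than mse_unbiased
```

The shrunken estimator is biased toward zero, yet its MSE (1/11 ≈ 0.0909) is below the unbiased estimator's 0.1; in practice the same effect motivates feasible shrinkage methods (ridge, empirical Bayes) that estimate the shrinkage weight from data.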
712
causal inference
Causal inference for continuous exposures
https://stats.stackexchange.com/questions/535388/causal-inference-for-continuous-exposures
<p>I am new to the causal inference world and want to find the correct statistical procedure to apply to my data. I found a number of predictors 𝑋<sup>1...n</sup> which are associated with a continuous outcome 𝑌 in a cross-sectional setting (N<sub>samples</sub>~1000), both the predictors and the outcome being age-dependent and associated with some confounders (e.g., sex). For a subset of this sample (N<sub>samples</sub>~250), I have both variables measured at two time-points (on average 5 years apart), and I want to identify predictors showing evidence for a causal relationship. I am wondering what is the best way to tackle this problem. I came across the propensity score methods but, as far as I know, these methods have been mostly used for binary exposure variables. Thanks</p>
713
causal inference
AIC for Causal Inference
https://stats.stackexchange.com/questions/398740/aic-for-causal-inference
<p>I read <a href="https://stats.stackexchange.com/questions/78295/using-aic-to-test-the-direction-of-causality">a post</a> explaining why the Akaike Criterion cannot be used for deciding if A cause B or B caused A.</p> <p>I'm curious about a more general case of using AIC for causal inference (with observational data only). Consider the case of 3 different events labelled A,B,C and we have a non-trivial joint probability distribution over them. For sanity, let's assume we have a partial order (<span class="math-container">$C&lt;A$</span> and <span class="math-container">$C&lt;B$</span>). So the potential causal structures (excluding the possibility of latent variables) are C causes A, C causes A and C causes B, C causes B, A causes B, B causes A, C causes A and C causes B and A causes B, etc. </p> <p>For each causal structure we could assign a parameterized causal model, the parameters determining the dependencies and initial distribution for any exogenous variables. Ex. if A,B all take on N values and C takes on M, the causal model for the "C causes A" case would have M*(N-1)+(M-1)+(N-1) parameters.</p> <p>Then you assume independent normally distributed errors in your observed probabilities. Restrict yourself to one causal structure and find the parameters to minimize the Chi-squared. You then compare the minimums of each structure using an parameter number penalty like AIC.</p> <p>Ex. 
let's say the lowest Chi-squared value for the &quot;C causes A&quot; case is 150 and the lowest for &quot;C causes A and C causes B&quot; is 140, but the second model is more complex, so when the parameter penalizations are accounted for it has a worse score.</p> <p>Could an AIC criterion of sorts work for the problem posed in this way?</p> <p>I recognize in some cases there will be multiple structures that could give rise to the same joint probability distribution and therefore potentially the same minimum Chi-squared score; however, in those cases the AIC criterion would select the least complicated structure, or, if both have the same complexity, it cannot distinguish the two and ranks them equally. This I believe is the <a href="https://stats.stackexchange.com/questions/384330/is-causal-inference-only-from-data-possible">issue raised</a> when thinking about if A causes B or B causes A, since any joint distribution can be obtained either way and both have the same complexity, but this would not be the case for two general causal models.</p>
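The comparison the question describes can be sketched for binary variables. Maximized multinomial log-likelihood is substituted here for the question's chi-squared objective (a simplification, not what is literally proposed), with AIC = 2k − 2 log L and parameter counts matching the question's M*(N−1)-style bookkeeping:

```python
import math, random

random.seed(0)

# Ground truth: C causes both A and B (all binary).
data = []
for _ in range(2000):
    c = random.random() < 0.5
    a = random.random() < (0.8 if c else 0.2)   # A depends on C
    b = random.random() < (0.7 if c else 0.3)   # B depends on C
    data.append((int(c), int(a), int(b)))

def cond_loglik(child_idx, parent_idx):
    """Maximized log-likelihood of P(child | parents) from empirical counts."""
    counts = {}
    for row in data:
        key = tuple(row[i] for i in parent_idx)
        counts.setdefault(key, [0, 0])[row[child_idx]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        n = n0 + n1
        for k in (n0, n1):
            if k > 0:
                ll += k * math.log(k / n)
    return ll

# Structure 1: C -> A only.  Params: 1 for P(C) + 2 for P(A|C) + 1 for P(B) = 4.
ll1 = cond_loglik(0, ()) + cond_loglik(1, (0,)) + cond_loglik(2, ())
aic1 = 2 * 4 - 2 * ll1

# Structure 2: C -> A and C -> B.  Params: 1 + 2 + 2 = 5.
ll2 = cond_loglik(0, ()) + cond_loglik(1, (0,)) + cond_loglik(2, (0,))
aic2 = 2 * 5 - 2 * ll2
```

Because structure 2 nests structure 1, its maximized log-likelihood can never be lower; AIC then asks whether the fit gain is worth the extra parameter, and with a genuine C→B dependence in the data it is. The Markov-equivalence caveat in the question still applies: structures implying the same set of conditional independencies will tie.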
714
causal inference
Time length for causal inference experiments
https://stats.stackexchange.com/questions/518475/time-length-for-causal-inference-experiments
<p>Let's say that I want to run a causal inference experiment, that is, an experiment on historical data for an intervention that we were not able to perform a randomized controlled trial for. In the case of something like a difference-in-differences (DD), or even just a basic linear/logit regression, for the purpose of estimating the causal impact (marginal effects in this case) of some intervention, is there a rule of thumb for attempting to control for the length of time to use in the pre-intervention period? In the past, I've tried to at least compare full weeks, in order to incorporate any weekday impact.</p>
<blockquote> <p>In the case of something like a difference-in-differences (DD), or even just a basic linear/logit regression, ... is there a rule of thumb for attempting to control for the length of time to use in the pre-intervention period?</p> </blockquote> <p>In a difference-in-differences (DD) setting, we often want serial observations of our units <em>before</em> the treatment/policy of interest. Three or more pre-treatment time periods is desirable. The purpose of maximizing the temporal dimension pre-shock is to demonstrate, visually, parallel group trends. In particular, the treatment group and the control group should be moving in tandem <em>before</em> the treatment/policy goes into effect.</p> <blockquote> <p>In the past, I've tried to at least compare full weeks, in order to incorporate any weekday impact.</p> </blockquote> <p>It depends.</p> <p>You may be observing cyclical and/or idiosyncratic shocks in the raw data. If they influence both groups, then we often model the common shocks with period dummies (i.e., time effects). However, if you’re observing a divergence in trend emerging over a long time horizon, then we may want to augment the DD equation to account for this. Suppose the treatment group and the control group were slowly proceeding on different growth trajectories in the pre-policy epoch. In practice, we may want to model this by giving each unit its own linear, or even quadratic, time trend. Note, this approach is only useful when you amass sufficient pre-policy data. Three or more time periods isn't enough in my opinion. In fact, group-specific time trends often require far more than three pre-treatment periods; the pre-period should be long enough that we could reasonably extrapolate the group-specific trend into the post-treatment period.</p> <p>Suppose a policy is enacted in a subset of U.S. states to curb traffic fatalities. 
To investigate whether the policy actually resulted in fewer fatal crashes, you acquire <em>yearly</em> fatality data in all states before <em>and</em> after the policy. Assume the policy was introduced in 2018 and you were lucky enough to get your hands on state-level vehicular fatalities from 2000–2020. This results in 18 pre-treatment time periods, which is more than enough to graphically inspect the group trends and also sufficient to allow each state to have its own unique time trend. Peruse the top answer <a href="https://stats.stackexchange.com/questions/319298/control-for-time-trend-in-difference-in-differences">here</a> for more information on how to model this in practice.</p> <p>Even with a surplus of pre-event data, including group-specific time trends is still very restrictive as a modeling strategy, so don't overdo it. In practice, I would juxtapose your DD estimates across the restricted and unrestricted models. If the causal estimand is insensitive to alternative specifications, then your results appear more credible. Alternatively, it's quite possible the group-specific trends may completely absorb your treatment effect, which is unfortunate.</p> <p>In sum, it is sensible to acquire three or more pre-treatment observations in a DD application. Any less would invite skepticism.</p>
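For concreteness, the basic DD contrast can be sketched on simulated data with parallel trends built in; all numbers (group levels, common shock, effect size) are invented for illustration:

```python
import random

random.seed(1)

# Simulated 2x2 difference-in-differences: 200 units per group/period cell,
# parallel trends by construction, true treatment effect = 2.0.
TRUE_EFFECT = 2.0

def outcome(treated, post):
    group_effect = 1.0 if treated else 0.0   # fixed level gap between groups
    time_effect = 0.5 if post else 0.0       # common shock hitting both groups
    effect = TRUE_EFFECT if (treated and post) else 0.0
    return group_effect + time_effect + effect + random.gauss(0, 1)

def group_mean(treated, post, n=200):
    return sum(outcome(treated, post) for _ in range(n)) / n

# DD = (treated post - treated pre) - (control post - control pre).
did = (group_mean(True, True) - group_mean(True, False)) \
    - (group_mean(False, True) - group_mean(False, False))
```

The double differencing nets out both the fixed group gap and the common time shock, leaving the treatment effect; with many pre-periods the same logic extends to the two-way fixed effects regression and the trend checks described above.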
715
causal inference
Causal Inference in Mortality Rates
https://stats.stackexchange.com/questions/419136/causal-inference-in-mortality-rates
<p>I was wondering how one studies the average treatment effect in scenarios such as mortality rates.</p> <p>For example: suppose we want to study the effect that a certain medicine has on the mortality rates of the patients. How can we do a study such as Difference-In-Differences or Propensity Scores if the differences before the treatment are zero? (For a patient to receive the treatment or not, he/she has to not have died before being given the treatment, so the pre-treatment mortality rates of the control and treatment groups are zero.)</p> <p>Can someone help me understand causal inference in these situations where there's no difference before the treatment is implemented?</p> <p>Thank you!</p>
<p>Patients will likely differ in terms of measurable pre-treatment attributes. If you have access to these covariates, they should be an input to your adjustment method of choice (e.g. inverse propensity weighting). What do you mean by "differences before the treatment are zero?" If by that you mean that treatment assignment is independent of the potential outcomes conditioned on all attributes (treatment is as good as random), then correlation is causation and the causal effect is easily calculated. But that is not likely to be the case.</p>
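A toy sketch of the inverse propensity weighting mentioned above, with a single binary pre-treatment attribute as the confounder (the data-generating numbers are invented for illustration):

```python
import random
from statistics import fmean

random.seed(2)

# Confounded observational data: X raises both treatment uptake and the
# outcome.  True treatment effect = 1.0.
TRUE_EFFECT = 1.0
rows = []
for _ in range(20000):
    x = int(random.random() < 0.5)
    t = int(random.random() < (0.8 if x else 0.2))   # confounded assignment
    y = TRUE_EFFECT * t + 2.0 * x + random.gauss(0, 1)
    rows.append((x, t, y))

# Naive difference in means: biased upward by the confounder X.
naive = fmean(y for x, t, y in rows if t == 1) - fmean(y for x, t, y in rows if t == 0)

# IPW with propensities estimated within strata of X (Hajek-style weighting).
p_hat = {x0: fmean(t for x, t, y in rows if x == x0) for x0 in (0, 1)}
w = [1 / p_hat[x] if t == 1 else 1 / (1 - p_hat[x]) for x, t, y in rows]
ipw = (sum(wi * y for wi, (x, t, y) in zip(w, rows) if t == 1)
       / sum(wi for wi, (x, t, y) in zip(w, rows) if t == 1)
       - sum(wi * y for wi, (x, t, y) in zip(w, rows) if t == 0)
       / sum(wi for wi, (x, t, y) in zip(w, rows) if t == 0))
```

The weights rebuild a pseudo-population in which treatment is independent of X, so the weighted contrast lands near the true effect of 1.0 while the naive contrast is off by roughly the confounder-induced gap.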
716
causal inference
Causal Inference: Meta Learners usage
https://stats.stackexchange.com/questions/645660/causal-inference-meta-learners-usage
<p>I have been running causal inference using the Econ ML package on my data. I have a dataset of customers divided into treatment and control groups, along with many other features. I ran matching on those and obtained a matched dataset that contains the matched treated and control units. If I calculate the difference in the average outcome Y between the two matched groups, I get an ATE of 3. Now my question is: if I train a meta-learner (e.g. an X-learner) on the data before matching and then use it to estimate the ATE on the matched dataset, am I supposed to get an ATE very close to 3? If not, what is the reason? This is the part that is not clear to me.</p>
<p><strong>Disclaimer</strong> I have only just now read about the Econ ML package, very briefly. I also am not sure how &quot;matching&quot; comes into the model. Take my answer with a grain of salt, I may have misunderstood.</p> <p><strong>TL;DR</strong> Your first estimate (with the &quot;matching&quot;) is closer to the reality because the &quot;matching&quot; is likely to correct for the unobserved heterogeneity.</p> <p>My understanding is that Econ ML (links below, please check if I got the right package and models) is an extension of the traditional instrumental variables estimation. If so, then if the instruments are highly correlated with the controls, and the instrument itself does not suffer from an omitted variable bias, then the two ATE estimations should be closer to each other. Otherwise, they can be different. (Alternatively, if treatment and control are assigned randomly, then the two estimations will also be similar. However, from reading the model, this does not seem to be the case.)</p> <p>While I am not sure about what &quot;matching&quot; means, it sounds like you are running two different procedures, where the first procedure corrects for the omitted variable bias through &quot;matching&quot;. If this is the case, then consider the following for an intuition: Try to predict the likelihood of death within the next year with only one variable, whether a person has visited a hospital. The coefficient will be positive, and you could make the argument that going to the hospital causes the likelihood of dying to increase, that the ATE is a positive number. Clearly this is incorrect: that's because going to the hospital was not randomly assigned; rather, people with underlying issues self-selected into going to the hospital. You do not observe this underlying condition, so the model suffers from omitted variable bias. Now consider that you want to correct for the fact that you failed to include the unobserved variable. 
You may want to use distance from the hospital as an instrument (bad idea in reality but good for an example). Or you may want to &quot;match&quot; the people in a control group to similar people in the treatment group, perhaps using the distance from a hospital as your measure of similarity. Now you are beginning to control for the data you have been missing (omitted variable bias). So you may get closer to the actual ATE, assuming that you chose a &quot;good&quot; instrument.</p> <p>In other words, your first estimate (with the &quot;matching&quot;) is closer to the reality because the &quot;matching&quot; is likely to correct for the unobserved heterogeneity.</p> <p><a href="https://econml.azurewebsites.net/spec/motivation.html" rel="nofollow noreferrer">https://econml.azurewebsites.net/spec/motivation.html</a></p> <p><a href="https://econml.azurewebsites.net/spec/api.html" rel="nofollow noreferrer">https://econml.azurewebsites.net/spec/api.html</a></p>
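The hospital intuition can be sketched numerically: with self-selection on an underlying condition, the naive contrast is badly biased, while exact matching (here, stratification on the condition) recovers the true effect, which is zero by construction. All numbers are invented:

```python
import random
from statistics import fmean

random.seed(3)

# Self-selection into "hospital visit" (T) driven by an underlying condition X,
# mirroring the answer's example.  The true causal effect of T on Y is 0.
rows = []
for _ in range(20000):
    x = int(random.random() < 0.3)                    # underlying condition
    t = int(random.random() < (0.9 if x else 0.1))    # sicker people visit more
    y = 3.0 * x + random.gauss(0, 1)                  # T itself does nothing
    rows.append((x, t, y))

# Naive contrast: hospital visitors look far more likely to die.
naive = fmean(y for x, t, y in rows if t) - fmean(y for x, t, y in rows if not t)

# Exact matching on X: within-stratum contrasts, weighted by each stratum's
# share of treated units (an ATT-style estimate).
att, n_treated = 0.0, sum(t for x, t, y in rows)
for x0 in (0, 1):
    treated = [y for x, t, y in rows if t and x == x0]
    control = [y for x, t, y in rows if not t and x == x0]
    att += (fmean(treated) - fmean(control)) * len(treated) / n_treated
```

The matched estimate sits near zero because, once units are compared only within the same value of X, the self-selection channel is shut; this only works when the matching variable actually captures the omitted condition.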
717
causal inference
Exchangeability, causal inference, and partial pooling
https://stats.stackexchange.com/questions/560761/exchangeability-causal-inference-and-partial-pooling
<p>In Statistical Rethinking, Richard McElreath writes the following concerning the use of partial pooling (i.e. varying/random effects) in Bayesian hierarchical models:</p> <blockquote> <p>Could we also use partial pooling on the treatment effects? Yes, we could. Some people will scream “No!” at this suggestion, because they have been taught that varying effects are only for variables that were not experimentally controlled. Since treatment was “fixed” by the experiment, the thinking goes, we should use un-pooled “fixed” effects. This is all wrong. The reason to use varying effects is because they provide better inferences. It doesn’t matter how the clusters arise. If the individual units are <strong>exchangeable</strong> — the index values could be reassigned without changing the meaning of the model— then partial pooling could help.</p> </blockquote> <p>My understanding of the concept of <strong>exchangeablity</strong> is that it is not so much a property of a variable <em>per se</em> but rather a property of observations in relation to a variable, given a causal model. That is, exchangeability obtains when the group membership <span class="math-container">$X$</span> of individual observations <span class="math-container">$x_i$</span> can be reshuffled without altering the predicted distribution of the response variable <span class="math-container">$Y$</span>. Or, to rephrase it (correct me if I'm wrong), exchangeability obtains when the group membership of <span class="math-container">$x_i$</span> can be reshuffled without altering the inferred effect <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span>.</p> <p>Thus, McElreath argues, it doesn't matter whether <span class="math-container">$X$</span> is a &quot;nuisance&quot; variable (e.g. study site, subject ID, experimental block, etc.) or a substantive variable (e.g. sex, religious identity, or even experimental treatment). 
If you can reshuffle <span class="math-container">$x_i$</span> across <span class="math-container">$X$</span> without altering the predicted distribution of <span class="math-container">$Y$</span>, you should estimate <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span> using partial pooling.</p> <p>If I understand this correctly, the only reason why exchangeability would <em>not</em> obtain for <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span> is if the effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> were confounded by some other variable. Given a confound, reassigning <span class="math-container">$x_i$</span> across <span class="math-container">$X$</span> <em>could</em> change the distribution of <span class="math-container">$Y$</span> because the observations would be differently affected by the confounding variable. Thus, it would seem that the concept of exchangeability merges with the concept of causal inference, and the decision of whether to use partial pooling hinges on whether <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span> is unconfounded (either marginally or conditionally).</p> <p>Am I understanding this correctly? 
To summarize, I have the following questions:</p> <ol> <li>If, given a causal model, the total effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is unconfounded — either marginally or conditional on a set of covariates — is <span class="math-container">$x_i$</span> <em>necessarily exchangeable</em> with respect to <span class="math-container">$X$</span>?</li> <li>If the total effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is unconfounded — implying exchangeability — should the effect <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span> <em>always</em> be estimated using partial pooling?</li> <li>Are &quot;exchangeability&quot; and &quot;causal inference&quot; just two sides of the same coin with respect to a given <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span> model? If so, any model claiming to make causal inference with respect to <span class="math-container">$Y$</span> ~ <span class="math-container">$X$</span> can and should estimate the model using partial pooling, right?</li> </ol> <p>Thank you for your input. I know exchangeability is much-discussed on SE, but I have never found an answer to this particular formulation of questions.</p>
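For intuition on what partial pooling of treatment effects does mechanically, here is a small empirical-Bayes-style shrinkage sketch. The group means, group sizes, and both variance components are invented and assumed known; a real hierarchical model would estimate them from the data:

```python
from statistics import fmean

# Hypothetical per-arm means from an experiment (illustrative numbers only).
group_means = {"A": 4.0, "B": 6.5, "C": 5.2, "D": 9.0}
group_sizes = {"A": 40, "B": 10, "C": 25, "D": 5}
sigma2 = 4.0   # within-group variance (assumed known here)
tau2 = 1.0     # between-group variance (assumed known here)

grand = fmean(group_means.values())

# Precision-weighted shrinkage toward the grand mean, as in a varying-effects
# model: noisy (small-n) groups are pulled in more than well-estimated ones.
pooled = {}
for g, m in group_means.items():
    shrink = tau2 / (tau2 + sigma2 / group_sizes[g])   # in [0, 1]
    pooled[g] = shrink * m + (1 - shrink) * grand
```

Each partially pooled estimate lies between its raw group mean and the grand mean, with the smallest groups shrunk the most; exchangeability is what licenses borrowing strength across the groups in this way.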
718
causal inference
Where does multilevel modeling fit in with causal inference?
https://stats.stackexchange.com/questions/617895/where-does-multilevlel-modeling-fit-in-with-causal-inference
<p>I am just now exploring the world of multilevel modeling and I am wondering how to contextualize MLM within the broader toolkit of causal inference techniques. In one of my graduate econometrics courses, I was taught the fixed effects v. random effects dichotomy that <a href="https://theeffectbook.net/ch-FixedEffects.html#advanced-random-effects" rel="noreferrer">Huntington-Klein</a> helpfully breaks down and criticizes (random effects are only plausible with no correlation between fixed effects and right-hand side variables). In my brief exploration of Bayesian statistics via McElreath's <a href="https://xcelab.net/rm/statistical-rethinking/" rel="noreferrer">Statistical Rethinking</a>, he argues that MLMs should probably be the default over the standard regression model in most disciplines.</p> <p>Conceptually, some of these ideas are fairly new to me, especially with how MLMs fit into the causal inference toolkit. As a result, I have three questions on the topic:</p> <ol> <li>If MLMs incorporate fixed effects, should one consider using fixed effects anymore as a tool to make causal inferences?</li> <li>If MLMs provide more detail on group or unit-specific intercepts and slopes, should users consider using standard regression adjustment anymore?</li> <li>Where do MLMs fit into the broader causal inference toolkit? Given that regression is still used for estimation with most strategies (matching, DID, IV, RDD, etc.), can one use MLMs instead of the standard regression model for these contexts? <em>Should</em> one consider using MLMs?</li> </ol>
<p>I think this question is conflating a few distinct issues.</p> <p>First of all, the terms &quot;multilevel modeling,&quot; &quot;random effects,&quot; and &quot;fixed effects&quot; are all used in different ways by different people. <a href="https://stats.stackexchange.com/questions/4700/what-is-the-difference-between-fixed-effect-random-effect-and-mixed-effect-mode/4702#4702">This post</a> outlines FIVE different ways people define the difference between fixed and random effects.</p> <p>Second, the most common use of MLM is for when you have observations &quot;nested&quot; at multiple levels (so students nested within schools, or observations nested within people in a longitudinal dataset). The question there is how you should deal with the higher level &quot;units&quot; (schools or people). One approach is to treat them as &quot;fixed effects&quot; (basically include a dummy variable for each &quot;group&quot;). On one hand, this approach controls for ALL possible bias at the group level, so that's good. On the other hand, precisely because of that, it doesn't allow you to actually analyze the effect of any group level variable (like school size, or &quot;race&quot; in a longitudinal dataset). Treating the groups as &quot;random effects&quot; (allowing the intercept and/or one or more coefficients to vary randomly at the group level) allows you to control for other group level variables (and to do various other cool things like empirical Bayes estimation of group level characteristics), but also opens you up to group level bias if you haven't controlled for all of the important group level factors (which is always the case to some extent).</p> <p>So in a nutshell, that's the trade off between fixed and random effects for using MLM to analyze clustered data. 
How you navigate that trade off depends on your research question and how the data are set up.</p> <p>Now, as you note, some Bayesians (like Andrew Gelman and perhaps also McElreath) advocate using MLM (the &quot;random effects&quot; approach) even when there is no &quot;nesting&quot; of data, because Bayesians see <em>all</em> model parameters as inherently &quot;random.&quot; But this is a more complicated approach and, in my experience, isn't yet super common among day-to-day statisticians due to various philosophical and logistical issues.</p> <p>Also, any time you run a normal OLS model and include dummy variables for race, you could also correctly say that you are including &quot;fixed effects&quot; for race....but people don't usually consider that &quot;multilevel modeling.&quot;</p> <p>What does all of this have to do with causal inference? Nothing and everything. Causal inference is really tough, and &quot;running a regression model&quot; on observational data is generally regarded as a pretty suboptimal way to establish causality...although sometimes it's all we've got. The extent to which we can interpret the results of a model in causal terms depends both on the model specification and the underlying theory behind it. MLM is just one way of specifying models to deal with particular problems that might contribute to bias or error in our estimates of coefficients and/or standard errors. If deployed well, MLM might make a causal interpretation of a particular coefficient in a particular model more defensible, or it might not. But like any kind of model specification, MLM (either fixed or random effects) has no <em>inherent</em> power to make model results causally interpretable, any more than &quot;including an interaction term&quot; or &quot;including a control for age,&quot; or any other way we might modify the specification of a model.</p>
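The fixed-effects trade-off above can be sketched with simulated panel data, using the within-group demeaning form of the fixed-effects estimator (equivalent to including a dummy per group); all numbers are invented:

```python
import random
from statistics import fmean

random.seed(4)

# Panel with group-level confounding: each group's intercept equals its index
# g, and x levels also rise with g, so pooled OLS is biased while the
# within-group ("fixed effects") estimator recovers the true slope of 1.5.
TRUE_SLOPE = 1.5
data = []   # (group, x, y)
for g in range(10):
    for _ in range(50):
        x = g + random.uniform(-1, 1)                    # x level varies by group
        y = TRUE_SLOPE * x + g + random.gauss(0, 0.5)    # group effect confounds
        data.append((g, x, y))

def slope(pairs):
    xs, ys = zip(*pairs)
    mx, my = fmean(xs), fmean(ys)
    return sum((a - mx) * (b - my) for a, b in pairs) / sum((a - mx) ** 2 for a in xs)

pooled = slope([(x, y) for g, x, y in data])    # ignores the group structure

# Demean x and y within each group, then run one regression on demeaned data.
within_pairs = []
for g0 in range(10):
    grp = [(x, y) for g, x, y in data if g == g0]
    mx, my = fmean(x for x, y in grp), fmean(y for x, y in grp)
    within_pairs += [(x - mx, y - my) for x, y in grp]
within = slope(within_pairs)
```

Demeaning absorbs every group-level variable, good and bad alike: the bias from the group intercepts disappears, but so would the coefficient on any group-level regressor of interest, which is exactly the limitation the answer describes.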
719
causal inference
Variable type name in causal inference
https://stats.stackexchange.com/questions/546510/variable-type-name-in-causal-inference
<p>Causal inference language distinguishes different variable types: confounders, mediators, colliders, moderators.</p> <p>Some time ago I encountered a quite rare variable name which I cannot remember. The idea was that only a part of the confounding variable caused the outcome and the variable of interest, while the other part was irrelevant. This variable, as far as I remember, carried information about (was caused by) this irrelevant part.</p> <p>I may have messed something up here due to lack of memory, but I would gladly appreciate the name, which could lead me to some literature describing this situation.</p>
<p>You are probably thinking of a component cause, part of the sufficient component causal model. It is described briefly <a href="https://sphweb.bumc.bu.edu/otlt/mph-modules/ep/ep713_causality/ep713_causality4.html" rel="nofollow noreferrer">here</a>.</p>
720
causal inference
Regression Methods in Causal Inference
https://stats.stackexchange.com/questions/601289/regression-methods-in-causal-inference
<p>In the most basic regression methods of causal inference (randomized experiment case), it's known that we can use covariates to predict the observed outcome, i.e. <span class="math-container">$Y^{obs}$</span> and the model is <span class="math-container">$$ Y^{obs}_i=\alpha+\tau W_i+\beta X+\epsilon_i $$</span> in which <span class="math-container">$\tau$</span> is ATE and <span class="math-container">$W_i$</span> is assignment. And we know that the causality of regression coefficients is guaranteed by randomized experiments. In other words, even if the linear relationship is wrong, we can still get the correct ATE. Regression helps to reduce the variance of ATE.</p> <p>Therefore, my question is <strong>whether we can use another model</strong>, such as Tree or Neural Networks to add the covariates, that is <span class="math-container">$$ Y^{obs}_i=\tau W_i+f(X)+\epsilon_i $$</span></p> <p>If so, can this model still reduce the variance? How to prove it and how to reduce possible overfitting?</p>
<p>Yes, we can use a non-linear estimator to reduce the variance and get more accurate results. There are many different techniques. To start off, for example, BART (<a href="https://projecteuclid.org/journals/annals-of-applied-statistics/volume-4/issue-1/BART-Bayesian-additive-regression-trees/10.1214/09-AOAS285.full" rel="nofollow noreferrer">Bayesian Additive Regression Trees</a>) has been found to be an excellent algorithm for such &quot;out-of-the-box&quot; causal inference tasks; see <a href="https://arxiv.org/abs/1707.02641" rel="nofollow noreferrer"><em>Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition</em></a> by Dorie et al. (2017) for a more detailed investigation. In the last 5 to 6 years, representation learning-based methods have also blossomed (starting with the work of Johansson et al. (2016), <a href="https://arxiv.org/abs/1605.03661" rel="nofollow noreferrer"><em>Learning Representations for Counterfactual Inference</em></a>), often offering very competitive results too.</p>
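The variance-reduction claim can be checked by simulation: across repeated randomized experiments, a covariate-adjusted estimate (here a simple residualize-then-difference scheme, not the estimators from the cited papers) targets the same ATE as the raw difference in means but with much smaller variance. Parameters are invented for illustration:

```python
import random
from statistics import fmean, pvariance

random.seed(5)

# Repeated randomized experiments: Y = tau*W + 3*X + noise, with tau = 1.
def one_experiment(n=200):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ws = [1] * (n // 2) + [0] * (n // 2)
    random.shuffle(ws)                               # randomized assignment
    ys = [1.0 * w + 3.0 * x + random.gauss(0, 1) for w, x in zip(ws, xs)]

    # Plain difference in means (unbiased under randomization).
    diff = fmean(y for y, w in zip(ys, ws) if w) - fmean(y for y, w in zip(ys, ws) if not w)

    # Adjusted: residualize Y on the prognostic covariate X, then difference.
    mx, my = fmean(xs), fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    res = [y - b * x for x, y in zip(xs, ys)]
    adj = fmean(r for r, w in zip(res, ws) if w) - fmean(r for r, w in zip(res, ws) if not w)
    return diff, adj

estimates = [one_experiment() for _ in range(300)]
var_plain = pvariance([d for d, a in estimates])
var_adj = pvariance([a for d, a in estimates])
```

Because W is randomized, the covariate explains outcome noise without confounding the contrast, so the adjusted estimator stays centered on the true effect while its variance drops sharply; flexible learners like BART play the same role when f(X) is non-linear.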
721
causal inference
Causal inference on time-series data: is intervention needed?
https://stats.stackexchange.com/questions/631678/causal-inference-on-time-series-data-is-intervention-needed
<p>I'm working on the topic of causal inference, and I use time-series data. I have two scenarios in front of me and I don't understand the difference:</p> <ul> <li>Given time-series features X and Y, I would like to know whether X (e.g. average income) causes Y (e.g. hotel reservations).</li> <li>Given a time-series feature X and an intervention, I'm curious to see how the intervention affects X. As an example, I publish a new web interface and look at the amount of purchases.</li> </ul> <p>Are both of these causal inference? What is the difference between them in practice? A good tool for the second is Google CausalImpact. Could you give me examples of the estimation methods in both cases?</p> <p>Earlier, I used causal inference on cross-sectional data sets and that was straightforward for me, because I could use DoWhy and various matching- and scoring-based estimation methods.</p>
<p>Both are identical problems. The purpose of the intervention is to measure the causal effect size between X and Y. For example, if income were increased at some point in time, would the same unit (person or family) consume more hotel reservation services? In do-calculus notation, <span class="math-container">$P(Y|do(X))$</span> provides a tool to impose an intervention on X, i.e., increasing salary. This is the so-called <a href="https://plato.stanford.edu/entries/causal-models/index.html#CausDiscInte" rel="nofollow noreferrer">Causal Discovery with Interventions</a></p>
722
causal inference
Causal Inference Short Time Series
https://stats.stackexchange.com/questions/555626/causal-inference-short-time-series
<p>I am trying to analyse causal inference associated with an intervention using either Difference-in-Differences or Interrupted Time Series Analysis. I have a discrete time series consisting of data covering a four year period, which could either be aggregated by month [allowing for 24 observations in both the pre- and post-intervention periods] or quarterly [8 observations in each period]. Ideally, I assume that aggregation by month would be preferable... however, there is a distinct possibility that there will be a large number of zero-value observations, which I assume would make the use of regression techniques more complex.</p> <p>Were aggregation to be undertaken by quarter, there would likely be fewer zeros, but potentially insufficient observations for ITS [as the preferred method]. Does anyone know of reliable models for use with data that has either a small number of observations or a large number of zero values?</p> <p>Any help hugely appreciated.</p>
723
causal inference
A Short Video to Explain Causal Inference to Non-Technical Audiences
https://stats.stackexchange.com/questions/638811/a-short-video-to-explain-causal-inference-to-non-technical-audiences
<p>I thought this question might be too much like a shopping question so I initially asked in <a href="https://chat.stackexchange.com/transcript/message/65133449#65133449">Ten Fold</a>, but it was <a href="https://chat.stackexchange.com/transcript/message/65135899#65135899">suggested that I ask here</a>. Here is my original question:</p> <blockquote> <p>Anyone have suggestions on short videos I can send clients that explains causal inference? It seems like a difficult subject to reduce to a handful of sentences.</p> </blockquote> <p>I cannot assume that my clients have a lot of time to spend on understanding causal inference the way that I do, and I cannot assume that they have any understanding of university-level mathematics (e.g. mathematical statistics or graph theory). But my clients are sometimes interested in it. I just need to distill the subject into a short introduction. I love McElreath's <a href="https://www.youtube.com/watch?v=KNPYUVmY3NM" rel="noreferrer">Science Before Statistics: Causal Inference</a> but it is far too long and too technical.</p> <p>What I need is a video around 5 minutes (maybe 10 minutes max) that I can send to the client to digest. The goal of such a video would be to communicate:</p> <ol> <li>What is causal inference?</li> <li>Why does causal inference help us make decisions under uncertainty?</li> <li>How they would need to participate in helping form the causal assumptions of the modelling.</li> </ol> <p>An accurate, concise, and clear explanation for a non-technical audience would provide tangible value in my communications.</p>
724
causal inference
Do-calculus and causal inference for continuous random variables
https://stats.stackexchange.com/questions/604373/do-calculus-and-causal-inference-for-continuous-random-variables
<p>Typical treatments of do-calculus and causal inference use discrete random variables. For example, the first rule of do-calculus in Pearl states:</p> <p><a href="https://i.sstatic.net/GHK33.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GHK33.png" alt="enter image description here" /></a></p> <p>I'm curious about how the do-calculus and causal inference formulas would change if all variables were continuous random variables. Of course, (3.31) as written would always be true as the probability of a particular y is always zero in a continuous setting. Do we simply take (3.31) and other do-calculus formulas to refer to the probability density rather than the probability?</p> <p>If anyone has a reference text which formalizes do-calculus in the context of continuous random variables that would also be very helpful.</p>
725
causal inference
Why isn&#39;t causal inference a simple specialized regression problem?
https://stats.stackexchange.com/questions/464470/why-isnt-causal-inference-a-simple-specialized-regression-problem
<p>I am often told that the crucial difficulty in causal inference is that we only observe one value between <span class="math-container">$Y(1)$</span> and <span class="math-container">$Y(0)$</span> while we want to estimate <span class="math-container">$E[Y(1) - Y(0)]$</span>. There is always an unobserved value.</p> <p>Here is my problem: why don't we simply use the samples with treatment <span class="math-container">$z_i = 1$</span> to regress <span class="math-container">$y(1) \sim x$</span> , and similarly use the samples with treatment <span class="math-container">$z_i = 0$</span> to regress <span class="math-container">$y(0) \sim x$</span>, and combine them to estimate <span class="math-container">$E[Y(1) - Y(0)]$</span>?</p> <p>From this perspective, causal inference is just two regression problems and needn't be treated as a special area. I am sure that there must be something wrong, but what is it?</p>
<p>A real-life example for how you run into problems: People with prior heart attacks take various drugs like beta blockers. The more severe the patient's state, the more likely it is that they are prescribed the drug. If you do not know all that much about patients and just take a bunch of patients with a heart attack in the recent past, you will find that people who take beta blockers have worse outcomes (even though randomized trials show benefits from beta blockers). This issue is called confounding by indication.</p> <p>You now have to somehow account for the fact that people who are prescribed the drug on average have a much worse expected outcome without treatment than those who are not prescribed the drug.</p> <p>Appropriately dealing with that is what we are trying to do, and formulating this problem in terms of counterfactual outcomes helps with understanding what is going on. Essentially, you need to take the prognosis for the patient (from the eyes of the treating physician) into account. Very often, one big problem here is data availability. Even if you have some measurements available that you can somehow take into account as going into the prognosis, you may be missing out on information that is not captured in your database or is very hard to translate into something quantitative (e.g. free-text descriptions).</p>
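To make this concrete, here is a small simulated sketch of confounding by indication (all numbers and variable names are invented for illustration): a drug that truly helps looks harmful in a naive comparison because sicker patients are more likely to receive it, while adjusting for the confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent severity drives both prescription and (bad) outcomes.
severity = rng.normal(size=n)

# Sicker patients are more likely to be prescribed the drug.
p_treat = 1 / (1 + np.exp(-2 * severity))
treated = rng.random(n) < p_treat

# The drug truly helps (effect -0.5), but severity worsens outcomes (+1.0).
outcome = 1.0 * severity - 0.5 * treated + rng.normal(size=n)

# Naive comparison: the drug looks harmful because treated patients are sicker.
naive_effect = outcome[treated].mean() - outcome[~treated].mean()

# Adjusting for the confounder (OLS on [1, treated, severity]) recovers -0.5.
X = np.column_stack([np.ones(n), treated, severity])
adjusted_effect = np.linalg.lstsq(X, outcome, rcond=None)[0][1]
```

In this toy model the naive contrast comes out positive (the drug "hurts") while the severity-adjusted coefficient is close to the true -0.5.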
726
causal inference
Positivity assumption in causal inference with continuous covariates
https://stats.stackexchange.com/questions/582471/positivity-assumption-in-causal-inference-with-continuous-covariates
<p>In causal inference, studies usually require several assumptions (e.g., unconfoundedness) to make valid causal statements. One of these assumptions is the 'Positivity' assumption (sometimes referred to as 'Common Support' / 'Overlap'). With measured covariates L, this assumption can be defined as:</p> <blockquote> <p>&quot;the probability of receiving every value of treatment conditional on L is greater than zero, i.e., positive&quot;<br /> [Hernán MA, Robins JM (2020). Causal Inference: What If.]</p> </blockquote> <p>Consider a binary treatment <strong>T</strong>. My understanding is that by conditioning on L we get several subsets. Inside these subsets we need to have data for both treatment values (<strong>T=1 and T=0</strong>). I.e., for each subset we need to have data for both treatment values (<strong>T=1 and T=0</strong>) with <strong>all of the covariates having the exact same values</strong>. This seems highly unlikely to me in settings with multiple continuous covariates (especially if, additionally, we have only little data).</p> <p><strong>My questions are:</strong></p> <ol> <li>Do I understand 'Positivity' correctly, or have I misunderstood something?</li> <li>How likely is this assumption to hold in settings with multiple continuous covariates?</li> <li>Assuming positivity is violated, would we need to take an approach (e.g., trimming) to adjust for the violation in order to continue our analysis? (i.e., to ensure positivity holds in our 'adjusted' data set)</li> </ol>
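As an illustration of how positivity is assessed in practice with continuous covariates (a hypothetical sketch; the data-generating process and the 0.05 trimming threshold are invented), analysts typically replace the exact-values reading with an estimated propensity score and look for scores piling up near 0 or 1:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 10_000
x = rng.normal(size=(n, 3))                          # continuous covariates L
# Strong selection on the first covariate creates poor overlap.
t = rng.random(n) < 1 / (1 + np.exp(-3 * x[:, 0]))

# Estimated propensity score e(L) = P(T=1 | L).
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# With continuous L, "P(T=t | L) > 0 for all L" is checked via the score:
# many units with e(L) near 0 or 1 signal a practical positivity violation.
eps = 0.05
share_near_boundary = np.mean((ps < eps) | (ps > 1 - eps))

# Trimming: drop units with extreme scores before further analysis.
keep = (ps >= eps) & (ps <= 1 - eps)
```

Here a sizable fraction of the sample sits near the boundaries, so the analysis would proceed on the trimmed subset (at the cost of changing the target population).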
727
causal inference
Is causal inference only from data possible?
https://stats.stackexchange.com/questions/384330/is-causal-inference-only-from-data-possible
<p>Suppose we are given a dataset but not the capability of performing some A/B testing. We do some regression using X as predictor and Y as response and get a model. Can we actually say something about the causal relationship between X and Y? Or is it simply impossible to say anything about the causal relationship at all?</p> <p>For example, suppose the data we have are simply the fathers' heights and the sons' heights, and also suppose mothers' height has no influence on sons' height. We can get a good linear relationship using sons' height as X and fathers' height as Y. However, we cannot say that lower sons' height causes fathers' height to be lower.</p> <p>In other words, I feel that causal inference has to eventually resort to some physical/mechanical mechanism instead of just looking at the data. Am I missing something here?</p>
<blockquote> <p>Suppose we are given a dataset but not the capability of performing some AB testing. We do some regression using X as predictor and Y as response and get a model. Can we actually say something about the causal relationship between X and Y?</p> </blockquote> <p>No, you can't, even when all variables are observed, <a href="https://stats.stackexchange.com/questions/373385/is-a-regression-causal-if-there-are-no-omitted-variables/373388#373388">see here for instance.</a> If you are given only distributional information about the data (i.e., you know the joint distribution of the observed variables), but no information about how the data was generated (<a href="https://stats.stackexchange.com/questions/211008/dox-operator-meaning/312130#312130">a causal model</a>), causal inference is impossible. In short, <a href="https://stats.stackexchange.com/questions/2245/statistics-and-causal-inference/310919#310919">you need causal assumptions to get causal conclusions</a>. You can get started on learning causal inference <a href="https://stats.stackexchange.com/questions/45999/introduction-to-causal-analysis/298744#298744">with the references here.</a></p> <p>It is easy to understand why that is the case by constructing an example where <em>different causal models entail the same observed joint probability distribution</em>. Consider that you have observed the joint probability distribution <span class="math-container">$P(x, y)$</span> of two random variables. Here, imagine you have no sampling uncertainty---so you have perfect knowledge of <span class="math-container">$P(x, y)$</span>, which entails perfect knowledge of the regression function and so on. 
To simplify things, consider that, in your data, <span class="math-container">$P(x,y)$</span> was found to be jointly normal with mean <span class="math-container">$0$</span>, variance 1 and covariance <span class="math-container">$\sigma_{xy}$</span> (this is without loss of generality, you can always standardize the data). What can you say about the causal effect of <span class="math-container">$x$</span> on <span class="math-container">$y$</span> or vice-versa?</p> <p>With only this information, <em>nothing</em>. The reason here is that there are several causal models that would create the same observed distribution, yet have different <a href="https://stats.stackexchange.com/questions/357255/does-statistical-independence-mean-lack-of-causation/357275#357275">interventional (and counterfactual) distributions</a>. Here I will show three such models. Notice that all of them give you the same observed <span class="math-container">$\sigma_{xy}$</span>, but their causal conclusions are different: in the first model <span class="math-container">$X$</span> causes <span class="math-container">$Y$</span>, in the second model <span class="math-container">$Y$</span> causes <span class="math-container">$X$</span>, and, in the third model, neither causes the other --- <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> share the unobserved common cause <span class="math-container">$Z$</span>.</p> <p><strong>Model 1</strong></p> <p><span class="math-container">$$ X = u_{x}\\ Y= \sigma_{xy}x + u_{y} $$</span></p> <p>Where <span class="math-container">$U_{x} \sim \mathcal{N}(0, 1)$</span> and <span class="math-container">$U_{y} \sim \mathcal{N}(0, 1 - \sigma_{xy}^2)$</span>. 
</p> <p><strong>Model 2</strong></p> <p><span class="math-container">$$ Y = u_{y}\\ X = \sigma_{xy}y + u_{x} $$</span></p> <p>Where <span class="math-container">$U_{x} \sim \mathcal{N}(0, 1 - \sigma_{xy}^2)$</span> and <span class="math-container">$U_{y} \sim \mathcal{N}(0, 1)$</span>. </p> <p><strong>Model 3</strong></p> <p><span class="math-container">$$ Z = U_{z}\\ X = \alpha Z + U_{x}\\ Y = \beta Z + U_{y} $$</span></p> <p>Where <span class="math-container">$\alpha\beta= \sigma_{xy}$</span>, <span class="math-container">$U_{z} \sim \mathcal{N}(0, 1)$</span>, <span class="math-container">$U_{x} \sim \mathcal{N}(0, 1- \alpha^2)$</span> and <span class="math-container">$U_{y} \sim \mathcal{N}(0, 1- \beta^2)$</span>.</p>
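This can be checked numerically. The following sketch (with the covariance arbitrarily set to 0.5; not from the original answer) simulates all three models and confirms that they produce the same observed joint distribution, even though their causal structures differ:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500_000
s = 0.5  # the observed covariance sigma_xy

# Model 1: X causes Y.
x1 = rng.normal(0, 1, n)
y1 = s * x1 + rng.normal(0, np.sqrt(1 - s**2), n)

# Model 2: Y causes X.
y2 = rng.normal(0, 1, n)
x2 = s * y2 + rng.normal(0, np.sqrt(1 - s**2), n)

# Model 3: unobserved common cause Z, with alpha * beta = s.
a = b = np.sqrt(s)
z = rng.normal(0, 1, n)
x3 = a * z + rng.normal(0, np.sqrt(1 - a**2), n)
y3 = b * z + rng.normal(0, np.sqrt(1 - b**2), n)

# All three pairs have (approximately) unit variances and covariance 0.5.
covs = [np.cov(x, y)[0, 1] for x, y in [(x1, y1), (x2, y2), (x3, y3)]]
```

Any analysis that sees only the joint distribution (regressions included) cannot distinguish these three worlds.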
728
causal inference
Equation 3.6 Elements of Causal Inference
https://stats.stackexchange.com/questions/618819/equation-3-6-elements-of-causal-inference
<p>I am reading Elements of Causal Inference by Peters et al.</p> <p>On page 36 they are giving an example with the following SCM:</p> <p><span class="math-container">$$T := N_T$$</span> <span class="math-container">$$B := T\cdot N_B + (1 - T)\cdot(1 - N_B)$$</span></p> <p>On the equation 3.6, when talking about counterfactuals, they condition the SCM <span class="math-container">$\mathfrak{C}$</span> on <span class="math-container">$\mathfrak{C} | B=1, T=1$</span>, with the SCM becoming:</p> <p><span class="math-container">$$T := 1$$</span> <span class="math-container">$$B := T\cdot 1 + (1 - T)\cdot(1 - 1)=T$$</span></p> <p>Which doesn't make sense to me anymore. When we condition on <span class="math-container">$B=1$</span> shouldn't the SCM simply become</p> <p><span class="math-container">$$T := 1$$</span> <span class="math-container">$$B := 1$$</span></p> <p>?</p> <p>Shouldn't we be conditioning on <span class="math-container">$N_B = 1$</span> and <span class="math-container">$N_T = 1$</span> only?</p> <p>Furthermore, had we conditioned on <span class="math-container">$T=0, B=1$</span> following the model of the book, wouldn't we fall into a contradiction where <span class="math-container">$B:=T$</span> and <span class="math-container">$B=1$</span> and <span class="math-container">$T=0$</span>?</p>
<p>Your second and third pair of equations are the same, so there is no contradiction as far as I can see. The conditioning on <span class="math-container">$B, T$</span> is done to determine the value of <span class="math-container">$N_B$</span>, which is otherwise unknown, so there is no way to condition on it. If <span class="math-container">$T=0$</span> and <span class="math-container">$B=1$</span>, <span class="math-container">$N_B$</span> must have been equal to 0.</p>
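The abduction/action/prediction steps can be made explicit in a few lines (a sketch of the book's example, not code from the book):

```python
def scm(n_t, n_b):
    """The structural equations: T := N_T, B := T*N_B + (1 - T)*(1 - N_B)."""
    t = n_t
    b = t * n_b + (1 - t) * (1 - n_b)
    return t, b

# Abduction: observing T=1, B=1 pins down the noise terms, since B = N_B
# whenever T = 1; hence N_T = 1 and N_B = 1.
n_t, n_b = 1, 1
assert scm(n_t, n_b) == (1, 1)

# Action + prediction: replace the equation T := N_T by T := 0 while keeping
# the inferred N_B = 1, and re-evaluate the second structural equation.
t_cf = 0
b_cf = t_cf * n_b + (1 - t_cf) * (1 - n_b)  # counterfactual B is 0
```

Note that conditioning determines the noise values, and the counterfactual value of B then follows from the (modified) structural equations rather than being fixed at its observed value.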
729
causal inference
Textbook recommendations covering machine learning techniques for causal inference?
https://stats.stackexchange.com/questions/548929/textbook-recommendations-covering-machine-learning-techniques-for-causal-inferen
<p>Over the past 15 years there has been progress in adapting machine learning methods for causal inference. For example: targeted learning, double machine learning, causal trees.</p> <p>Is there a textbook that covers the current range of techniques? I haven't seen anything on Amazon, perhaps there are texts available on other sites? Or will be published soon?</p>
<p>I follow this area pretty closely, but I think this subfield is so new no textbook exists (yet).</p> <p>However, there are some course videos that are fairly good:</p> <ol> <li><a href="https://youtube.com/playlist?list=PLxq_lXOUlvQAoWZEqhRqHNezS30lI49G-" rel="noreferrer">Machine Learning &amp; Causal Inference: A Short Course</a> at Stanford (accompanying <a href="https://www.youtube.com/redirect?event=playlist_description&amp;redir_token=QUFFLUhqa0o2bGc4bzJ2YnFHQV84NnNWbWFpVUxFRGNNQXxBQ3Jtc0tsV0NWc2YyaFA3T3RsTW9IV3pZWFVrS0tmVXJtb1JkNnV6RHc4UDFNdkFqdVV1MDIwc0xfeGl0cmJ6Y3BxRGt5M0pEcHMtT3BCcU9ud3lpdlRPUE5TWW5nVTQ4ZnR4NEE4S0w4WWk2ZUx2Y3hxbzNZUQ&amp;q=https%3A%2F%2Fbookdown.org%2Fconnect%2F%23%2Fapps%2F3e3ee3cb-b53e-4956-b8d3-a3243e663162%2Faccess%2F1618" rel="noreferrer">tutorial</a>)</li> <li><a href="http://Summer%20Institute%20in%20Machine%20Learning%20in%20Economics%20(MLESI21)" rel="noreferrer">Summer Institute in Machine Learning in Economics (MLESI21)</a> at University of Chicago</li> </ol> <p>There is also a nice survey paper: <a href="https://doi.org/10.1146/annurev-economics-080217-053433" rel="noreferrer">&quot;Machine learning methods that economists should know about&quot;</a> by Susan Athey, Guido Imbens in the <em>Annual Review of Economics</em> (<a href="https://arxiv.org/abs/1903.10075" rel="noreferrer">link to draft</a>)</p>
730
causal inference
Why should we care about DAGs for causal inference?
https://stats.stackexchange.com/questions/565808/why-should-we-care-about-dags-for-causal-inference
<p>I am not acquainted with Pearl's approach to causal inference. From what I have seen so far, causality is inferred from directed acyclic graphs (DAGs).</p> <p>Rubin's Causal Inference, Sec. 7.5, proves a theorem establishing the asymptotic unbiasedness of the OLS estimator for the superpopulation treatment effect.</p> <p>By Rubin's result, if the sample is so large that we have very small bias, the estimation of the treatment effect can be done using OLS with a few covariates. From this, under a large-sample assumption, I can just perform ordinary linear regression to estimate the treatment effect.</p> <p>If one is inferring such a treatment effect, why does one need DAGs to estimate the treatment effect, given the asymptotic unbiasedness provided by Rubin's result? It seems to me that DAGs should be a special case of Rubin's theorem.</p>
731
causal inference
Should predictive analysis be tackled with causal inference in mind?
https://stats.stackexchange.com/questions/561704/should-predictive-analysis-be-tackled-with-causal-inference-in-mind
<p>Say I am trying to predict depression from anxiety. I collect data, fit a model by maximum likelihood, and obtain r=0.9. To me, this is great, so I push the model to production. 4 months later, I realise that the &quot;rate of unemployment&quot; is a confounder that acts on both variables.<br /> I conclude that I should not merely look at correlation, but rather tackle predictive analysis from a causal inference perspective.<br /> However, when learning about predictive analysis, I rarely see mention of causal inference. It is often just about &quot;finding the best model to fit your data&quot;.</p>
<p>As Vladimir mentions, the answer is &quot;it depends&quot;. If you build a well-calibrated correlational (i.e., predictive) model on units randomly sampled from your population, then that model should generalize to other members of the population. If you have a model of depression and anxiety, then applying that model to other units in the population will be accurate. The fact that there are confounding variables says nothing about the validity or generalizability of the model results (to the same population).</p> <p>For example, if you calibrate a model well to predict depression from participants' reported anxiety, the fact that there are confounders of that relationship doesn't affect the quality of the prediction when applied to new data. Of course, the presence of a confounder implies that there is a variable predictive of the outcome that you are missing, in which case your predictions would be improved if you included it in your model (the same as any prognostic variable).</p> <p>Where you need causal inference is when you want to predict outcomes after (hypothetically) manipulating a feature. For example, if you manipulate participants' anxiety by giving them an anti-anxiety medication, your previous model for depression is no longer valid because the outcome and feature are confounded in the original data. Your model would need to account for all confounding factors and not condition on any confounding-inducing variables to be able to predict the outcome without bias. You need a statistical theory of causal inference to tell you which features to include and exclude in your model to arrive at an unbiased model for the outcome given manipulation of the treatment.</p>
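A small simulation can illustrate this distinction (a toy model; the coefficients and variable names are invented): the observational slope of depression on anxiety is perfectly good for prediction, yet it overstates what manipulating anxiety would actually do.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def sample(do_anxiety=None):
    """Toy SCM: U -> anxiety, U -> depression, anxiety -> depression."""
    u = rng.normal(size=n)                          # confounder (e.g. unemployment)
    if do_anxiety is None:
        a = 0.8 * u + rng.normal(size=n)            # observational regime
    else:
        a = np.full(n, float(do_anxiety))           # do(A = a0): cuts the U -> A arrow
    d = 0.5 * a + 0.8 * u + rng.normal(size=n)      # true causal slope is 0.5
    return a, d

# Predictive (observational) slope of depression on anxiety: it absorbs the
# confounded path through U, so it exceeds the causal slope of 0.5.
a_obs, d_obs = sample()
predictive_slope = np.cov(a_obs, d_obs)[0, 1] / np.var(a_obs)

# Interventional contrast: set anxiety by fiat; the per-unit effect is 0.5.
_, d1 = sample(do_anxiety=1.0)
_, d0 = sample(do_anxiety=0.0)
causal_effect = d1.mean() - d0.mean()
```

The predictive slope (around 0.89 under these coefficients) is the right number for forecasting on observational data, while 0.5 is the right number for predicting the result of an intervention.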
732
causal inference
Can cross validation be used for causal inference?
https://stats.stackexchange.com/questions/3893/can-cross-validation-be-used-for-causal-inference
<p>In all contexts I am familiar with, cross-validation is used solely with the goal of increasing predictive accuracy. Can the logic of cross-validation be extended to estimating the unbiased relationships between variables?</p> <p>While <a href="http://dx.doi.org/10.1007/s10940-009-9077-7" rel="noreferrer">this</a> paper by Richard Berk demonstrates the use of a hold-out sample for parameter selection in the "final" regression model (and demonstrates why step-wise parameter selection is not a good idea), I still don't see how that ensures unbiased estimates of the effect X has on Y any more than choosing a model based on logic and prior knowledge of the subject.</p> <p>I ask that people cite examples in which one used a hold-out sample to aid in causal inference, or general essays that may help my understanding. I also don't doubt my conception of cross-validation is naive, so if it is, say so. It seems offhand that the use of a hold-out sample would be amenable to causal inference, but I do not know of any work that does this or how they would do it.</p> <p>Citation for the Berk paper:</p> <p><a href="http://dx.doi.org/10.1007/s10940-009-9077-7" rel="noreferrer">Statistical Inference After Model Selection</a> by Richard Berk, Lawrence Brown, Linda Zhao. Journal of Quantitative Criminology, Vol. 26, No. 2 (1 June 2010), pp. 217-236.</p> <p>PDF version <a href="http://www-stat.wharton.upenn.edu/~lzhao/papers/MyPublication/StatInfAfterMS_JQC_2010.pdf" rel="noreferrer">here</a></p> <p><a href="https://stats.stackexchange.com/q/3252/1036">This</a> question on exploratory data analysis in small sample studies by chl prompted this question.</p>
<p>I think it's useful to review what we know about cross-validation. Statistical results around CV fall into two classes: efficiency and consistency.</p> <p>Efficiency is what we're usually concerned with when building predictive models. The idea is that we use CV to determine a model with asymptotic guarantees concerning the loss function. The most famous result here is due to <a href="https://iri.columbia.edu/%7Etippett/cv_papers/Stone1977.pdf" rel="nofollow noreferrer">An asymptotic equivalence of choice of model by cross‐validation and Akaike's criterion</a> (Stone 1977) and shows that LOO CV is asymptotically equivalent to AIC. But, Brett provides a good example where you can find a predictive model which doesn't inform you on the causal mechanism.</p> <p>Consistency is what we're concerned with if our goal is to find the &quot;true&quot; model. The idea is that we use CV to determine a model with asymptotic guarantees that, given that our model space includes the true model, we'll discover it with a large enough sample. The most famous result here is due to <a href="https://www.libpls.net/publication/MCCV_Shao_1993.pdf" rel="nofollow noreferrer">Linear Model Selection by Cross-Validation</a> (Shao 1993) concerning linear models, but as he states in his abstract, his &quot;shocking discovery&quot; is the opposite of the result for LOO. For linear models, you can achieve consistency using LKO CV as long as <span class="math-container">$k/n \rightarrow 1$</span> as <span class="math-container">$n \rightarrow \infty$</span>. Beyond linear models, it's harder to derive statistical results.</p> <p>But suppose you can meet the consistency criteria and your CV procedure leads to the true model: <span class="math-container">$Y = \beta X + e$</span>. What have we learned about the causal mechanism? 
We simply know that there's a well-defined correlation between <span class="math-container">$Y$</span> and <span class="math-container">$X$</span>, which doesn't say much about causal claims. From a traditional perspective, you need to bring in experimental design with the mechanism of control/manipulation to make causal claims. From the perspective of Judea Pearl's framework, you can bake causal assumptions into a structural model and use the probability-based calculus of counterfactuals to derive some claims, but you'll need to satisfy <a href="http://bayes.cs.ucla.edu/BOOK-2K/jw.html" rel="nofollow noreferrer">certain properties</a>.</p> <p>Perhaps you could say that CV can help with causal inference by identifying the true model (provided you can satisfy consistency criteria!). But it only gets you so far; CV by itself isn't doing any of the work in either framework of causal inference.</p> <p>If you're interested further in what we can say with cross-validation, I would recommend Shao 1997 over the widely cited 1993 paper:</p> <ul> <li><a href="http://www3.stat.sinica.edu.tw/statistica/j7n2/j7n21/j7n21.htm" rel="nofollow noreferrer">An Asymptotic Theory for Linear Model Selection</a> (Shao, 1997)</li> </ul> <p>You can skim through the major results, but it's interesting to read the discussion that follows. I thought the comments by Rao &amp; Tibshirani, and by Stone, were particularly insightful. But note that while they discuss consistency, no claims are ever made regarding causality.</p>
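To see the limits concretely, here is a minimal sketch (invented data, not from the answer) in which k-fold CV correctly prefers models that contain the x term, while nothing in the procedure distinguishes "x causes y" from "y causes x" or a common cause:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)  # the "true model" is linear

def kfold_mse(degree, k=10):
    """k-fold cross-validated MSE for a polynomial fit of the given degree."""
    idx = rng.permutation(n)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

# Constant, linear, and overfit quintic candidates.
cv_mse = {d: kfold_mse(d) for d in (0, 1, 5)}
# CV rewards predictive fit; it is silent on the causal direction of x and y.
```

The linear model's CV error sits near the noise variance (0.25) and beats the constant model by a wide margin, but the identical scores would result if the data were generated with y causing x.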
733
causal inference
Matching vs simple regression for causal inference?
https://stats.stackexchange.com/questions/431939/matching-vs-simple-regression-for-causal-inference
<p>This is a really simple, newbie question. I am really confused about the notion of matching and when it can be used instead of a multiple regression.</p> <p>Assume I have listed all the confounding variables (X), and my outcome (Y) and treatment assignment (A) are binary.</p> <p>Can I reach causal inference only by running the following logistic regression: <code>Y ~ A + X</code> and focusing on "A"'s coefficient, SE, and p-value? Doesn't it provide better power than "matching", which would result in losing a bunch of data?</p> <p>Any clarification would be really helpful.</p> <p>EDIT: assume A is not randomly assigned (the analysis is observational)</p>
<p>Your question rightly acknowledges that throwing away cases can lose useful information and power. It doesn't, however, acknowledge the danger in using regression as the alternative: what if your regression model is incorrect?</p> <p>Are you sure that the log-odds of outcome are linearly related to treatment and to the covariate values as they are entered into your logistic regression model? Might some continuous predictors like age need to be modeled with logs/polynomials/splines instead of just with linear terms? Might the effects of treatment depend on some of those covariate values? Even if you account for that last possibility with treatment-covariate interaction terms, how do you know that you accounted for it properly with the linear interaction terms you included?</p> <p>A perfectly matched set of treatment and control cases would get around those potential problems with regression.* That leads to the next practical problem: exact matching is seldom possible, so you have to use some approximation. There are several approaches to inexact matching; see <a href="https://stats.stackexchange.com/q/415571/28500">this page</a> for some discussion. <a href="https://en.wikipedia.org/wiki/Propensity_score_matching" rel="noreferrer">Matching based on propensity scores</a>, the probability of being in a treatment group given the covariate values for a case, is one frequently used method. </p> <p>You can also combine matching with regression. You could include covariates in a regression model of matched cases; some argue that you should do this in any event, as noted on <a href="https://stats.stackexchange.com/q/406379/28500">this page</a>. You can go even further to potentially include all cases: weighting cases according to their treatment/control propensity scores (inversely) in your regression model. 
<a href="https://stats.stackexchange.com/q/293960/28500">This page</a> nicely outlines matching versus weighting; <a href="https://stats.stackexchange.com/q/206748/28500">this page</a> goes into more details.</p> <p>Both regression and matching have strengths and weaknesses. You need not think of them necessarily as alternatives; combining them intelligently can sometimes work better than either alone.</p> <hr> <p>*Even a data set perfectly matched on the known covariates can't rule out the problem posed by unknown covariates that might affect outcome directly or change the effect of treatment on outcome. That's why randomized trials, which in principle average out those unknown effects, can be so important.</p>
734
causal inference
Causal Inference for experiment
https://stats.stackexchange.com/questions/573032/causal-inference-for-experiment
<p>I'm working through a textbook (Regression and Other Stories) and have come across a particular problem that I am having difficulty convincing myself I understand.</p> <p>I am specifically interested in part (b), but I include (a) as context.</p> <p>It is as follows:</p> <p>'Before-after comparisons: The folder Sesame contains data from an experiment in which a randomly selected group of children was encouraged to watch the television program Sesame Street and the randomly selected control group was not.</p> <p>(a) The goal of the experiment was to estimate the effect on child cognitive development of watching more Sesame Street. In the experiment, encouragement but not actual watching was randomized. Briefly explain why you think this was done. Think of practical as well as statistical reasons.</p> <p>(b) Suppose that the investigators instead had decided to test the effectiveness of the program simply by examining how test scores changed from before the intervention to after. What assumption would be required for this to be an appropriate causal inference? Use data on just the control group from this study to examine how realistic this assumption would have been.'</p> <p>I think this question is hinting at the problem of attaining an unbiased estimate of the ATE from a difference in means of pre-post scores when there are systematic differences in pre-treatment variables between the treated and control groups, leading to differences in the outcome that depend on these differences rather than on the effect of treatment. This could arise through sampling bias (for example, more intelligent children are encouraged to watch the TV program because their parents are aware of its supposed effects) or through heterogeneous treatment effects driven by factors such as socioeconomic status.</p> <p>The question suggests looking at control group data - which I have done, but I am not entirely sure what I am looking for in the data to test the above assumptions, given that the question suggests only looking at control data.</p>
<p>The validity of the post-pre estimator in the intervention group depends on there being no change in the control group. If there was change in the control group, then any changes you see in the treatment group could be due either to the treatment or to whatever caused changes in the control group. For example, if cognitive performance naturally increases in this period of a child's life, even in the absence of watching Sesame Street, then even if the intervention had no effect, you would expect to see increases in scores in the intervention group. Only by also observing the control group and noting the test score changes in the absence of the intervention can one ascribe causality to the intervention for score changes in the intervention group.</p> <p>So, what you should do is to look in the control group for changes in scores between post and pre. If there are no changes, this suggests that just using the differences in the intervention group is sufficient to establish causality. If there are changes, then just using the differences in the intervention group is not sufficient to establish causality; the competing explanation that the observed changes are due simply to the same forces that change scores in the control group remains, and you cannot say the intervention was effective. Subtracting the post-pre changes in the control group from the change in the intervention group would give you a causal effect; this is also known as the method of difference-in-differences.</p> <p>In an experiment, this is not the best way to adjust for pre-intervention scores, but it does not invalidate the inference. If the experiment was randomized, comparing the mean post-treatment scores between groups (assuming perfect compliance) is sufficient to estimate the causal effect. Adjusting for pre-treatment scores using regression/ANCOVA is even better. Things, of course, are a bit more complicated in the face of non-compliance.</p>
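The comparison described in the answer can be sketched in a few lines. The group means below are invented, standing in for summaries that would be computed from the Sesame data:

```python
# Difference-in-differences with hypothetical pre/post test-score means.
# The Sesame data would supply these numbers; the values here are made up.

def did(pre_treat, post_treat, pre_control, post_control):
    """Change in the treatment group minus change in the control group."""
    return (post_treat - pre_treat) - (post_control - pre_control)

# Suppose scores rise even without the intervention (natural development):
control_change = 32.0 - 25.0   # post - pre in the control group: 7.0
treat_change = 41.0 - 24.0     # post - pre in the treatment group: 17.0

# Naive post-pre estimator in the treatment group alone:
naive = treat_change                   # overstates the effect
# Difference-in-differences subtracts the control-group trend:
effect = did(24.0, 41.0, 25.0, 32.0)   # 17.0 - 7.0 = 10.0

print(naive, effect)
```

If the control-group change were zero, the naive estimator and the difference-in-differences estimator would coincide, which is exactly the assumption the exercise asks you to check.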
735
causal inference
Online resources for philosophy of causation for causal inference
https://stats.stackexchange.com/questions/62025/online-resources-for-philosophy-of-causation-for-causal-inference
<p>Can you recommend any books, articles, essays, online tutorials/courses, etc. that would be interesting and useful for an epidemiologist/biostatistician to learn about the philosophy of causation/causal inference?</p> <p>I know quite a bit about actually doing causal inference from an epi and biostats framework, but I would like to learn something about the philosophy which underlies and motivates this work. For example, it's my understanding that Hume first talked about ideas that could be interpreted as counterfactuals. </p> <p>I have basically no training or experience with philosophy, so I need something relatively introductory to start off with, but I would also be interested in recommendations for more complex but important/foundational texts/authors (but please indicate that they are not introductory). </p> <p>I hope this isn't too off-topic for cross-validated, but I'm hoping that some of you will have been in the same boat as me before and will be able to share your favorite resources.</p>
<p>Without wanting to delve into specific papers, I think an excellent resource for something like that would be the <a href="http://plato.stanford.edu/" rel="nofollow noreferrer">Stanford Encyclopedia of Philosophy</a>. The lemmas on <a href="http://plato.stanford.edu/entries/causation-probabilistic/" rel="nofollow noreferrer">Probabilistic Causation</a> and <a href="http://plato.stanford.edu/entries/causation-mani/" rel="nofollow noreferrer">Causation and Manipulability</a> are peer reviewed, meticulously annotated, and give great pointers on where to focus your research next.</p> <p>Just to cite two papers: two extremely enjoyable articles on the matter are <a href="https://www.maths.ed.ac.uk/%7Ev1ranick/papers/wigner.pdf" rel="nofollow noreferrer">The Unreasonable Effectiveness of Mathematics in the Natural Sciences</a> by Wigner (1960) and (lighter and definitely more recent) <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf" rel="nofollow noreferrer">The Unreasonable Effectiveness of Data</a> by Halevy, Norvig, and Pereira (2009).</p>
736
causal inference
What should I study after finishing &#39;Causal Inference in Statistics: A Primer&#39;?
https://stats.stackexchange.com/questions/576913/what-should-i-study-after-finishing-causal-inference-in-statistics-a-primer
<p>I have almost finished studying 'Causal Inference in Statistics: A Primer', but I still feel that I need to learn more.<br /> I considered 'Causality' (Pearl, 2009), but there seem to be several good learning resources about DAGs (e.g., review papers).<br /> What should I study after finishing 'Causal Inference in Statistics: A Primer'?</p>
737
causal inference
Why use causal inference if coefficients are same in an OLS?
https://stats.stackexchange.com/questions/617367/why-use-causal-inference-if-coefficients-are-same-in-an-ols
<p>I was reading this <a href="https://towardsdatascience.com/the-fwl-theorem-or-how-to-make-all-regressions-intuitive-59f801eb3299" rel="nofollow noreferrer">amazing article</a> about the FWL theorem and its application to causal inference.</p> <p>In the article, there are some examples showing that the coefficients of an OLS estimator are the same when estimating the coefficients using the FWL theorem. If that's the case, what is the point of using causal inference to reduce multivariate regressions into univariate ones?</p> <p>The article mentions a couple of reasons, but if the coefficient in question is the same for both approaches, I'm having difficulty seeing the benefits of it. TIA!</p>
<h2>Causal inference is all about what to estimate, not about how to estimate it</h2> <p>The point of causal inference is not to reduce multivariate regressions into univariate ones. The point of causal inference is to identify <strong>what</strong> estimand to estimate to begin with. The article in question gives you multiple ways of running the same regression. That's great, but how would one know what regression to run in the first place? That's the question causal graphical models help answer, and it's also how it's used in the article you cited. Without the causal graph shown in their example, we would not know what to include in our regression model, and the FWL theorem could not help us. Once we are given the causal graph, we know what we would like to estimate and we can then use different techniques (e.g. FWL) to do so.</p>
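The coefficient equivalence the question refers to can be checked numerically in pure Python. The toy data below are invented, and the outcome is exactly linear so both routes recover the same coefficient:

```python
# Frisch-Waugh-Lovell check: the coefficient on x1 from the full regression
# y ~ x1 + x2 equals the slope from regressing the residuals of y (after x2)
# on the residuals of x1 (after x2). Toy data with an exact linear outcome.

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sum of cross-products of deviations (unnormalized covariance)."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b))

def simple_resid(y, x):
    """Residuals from a simple regression of y on x (with intercept)."""
    slope = cov(y, x) / cov(x, x)
    intercept = mean(y) - slope * mean(x)
    return [yi - (intercept + slope * xi) for yi, xi in zip(y, x)]

x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]
y = [3 * a + 2 * b + 1 for a, b in zip(x1, x2)]  # true coefficient on x1 is 3

# Coefficient on x1 from the full two-regressor OLS (normal equations):
b1_full = (cov(x1, y) * cov(x2, x2) - cov(x2, y) * cov(x1, x2)) / (
    cov(x1, x1) * cov(x2, x2) - cov(x1, x2) ** 2
)

# FWL: residualize y and x1 on x2, then run the univariate regression:
ry, rx1 = simple_resid(y, x2), simple_resid(x1, x2)
b1_fwl = cov(rx1, ry) / cov(rx1, rx1)

print(b1_full, b1_fwl)
```

The two estimates agree, which is the FWL theorem. As the answer stresses, nothing in this computation tells us whether x2 was the right variable to partial out; that decision comes from the causal model.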
738
causal inference
Correlation, Regression and Causal inference
https://stats.stackexchange.com/questions/260677/correlation-regression-and-causal-inference
<p>Based on several posts i read on stack exchange I now know that neither correlation nor regression indicate causation, </p> <p>then why is it said that the 2 main uses of regression are 1)prediction 2)causal analysis and inference ??</p> <p>Reference to the following article by Dr Paul Allison </p> <p><a href="http://statisticalhorizons.com/prediction-vs-causation-in-regression-analysis" rel="nofollow noreferrer">http://statisticalhorizons.com/prediction-vs-causation-in-regression-analysis</a></p>
<blockquote> <p>In a causal analysis, the independent variables are regarded as causes of the dependent variable. The aim of the study is to determine whether a particular independent variable really affects the dependent variable, and to estimate the magnitude of that effect, if any.</p> </blockquote> <p>If your knowledge about the world teaches you that a dependence should be in one direction (maybe because you have experimental data where you changed one parameter willingly), then regression is a worthy tool to investigate that relationship more closely. Therefore it is used in the investigation of a relationship, but in itself it cannot decide on the direction of causality. Pure observation cannot do that; experiments can. The mathematics of regression is the same in both cases. </p>
739
causal inference
Causal inference using regression for multiple covariates
https://stats.stackexchange.com/questions/458211/causal-inference-using-regression-for-multiple-covariates
<p>I am reading a lot of material regarding causal inference using regression analysis, but I am unable to resolve my doubt.</p> <p>Suppose I have data with outcome <strong><em>Y</em></strong>, treatment <strong><em>Tr</em></strong> and covariates <strong><em>X1, X2, X3, X4, ....</em></strong> </p> <p>I need to find the average treatment effect using regression analysis for my data, with three models.</p> <p><strong>First</strong> with only outcome and treatment</p> <pre><code>model1 &lt;- lm(Y~Tr, data) </code></pre> <p><strong>Second</strong> with outcome, treatment, and covariates</p> <pre><code>model2 &lt;- lm(Y~Tr+X1+X2+X3+X4+...., data) </code></pre> <p><strong>Third</strong> with outcome, treatment, covariates and interactions between covariates and treatment</p> <pre><code>model3 &lt;- lm(Y~Tr+X1+X2+X3+X4+....+X1*Tr + X2*Tr + X3*Tr + X4*Tr +......, data) </code></pre> <p><strong>I know that for <code>model1</code> the average treatment effect (ATE) is the coefficient of Tr. For <code>model2</code> I think the ATE is still the coefficient of Tr, but I am not sure. I am really confused about what the ATE will be in our third model, i.e. <code>model3</code>.</strong></p>
<p>For model 3, the coefficient of <code>Tr</code> alone is no longer the ATE, because the effect of treatment now varies with the covariates. Averaging the treatment effect over the covariate distribution gives</p> <pre><code>ATE = tau + gamma1*mean(X1) + gamma2*mean(X2) + gamma3*mean(X3) + ...
</code></pre> <p>where <code>tau</code> is the coefficient of <code>Tr</code> and <code>gammak</code> is the coefficient of the interaction <code>Xk*Tr</code>.</p>
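One way to recover the ATE under a model with treatment-covariate interactions (the question's model 3) is g-computation: predict each unit's outcome with Tr set to 1 and to 0, then average the difference. All coefficients and covariate values below are hypothetical stand-ins for fitted lm output:

```python
# g-computation for a linear model with treatment-covariate interactions.
# The coefficients below are made up, standing in for fitted values from
# something like lm(Y ~ Tr + X1 + X2 + X1*Tr + X2*Tr, data).

alpha = 0.3          # intercept
tau = 2.0            # coefficient on Tr
beta = [1.0, -0.5]   # coefficients on X1, X2
gamma = [0.5, 1.0]   # coefficients on X1*Tr, X2*Tr

X = [[1.0, 2.0], [3.0, 0.0], [2.0, 4.0]]  # toy covariate rows (X1, X2)

def predict(x, tr):
    """Predicted outcome for covariates x with treatment set to tr."""
    return (alpha + tau * tr
            + sum(b * xi for b, xi in zip(beta, x))
            + sum(g * xi * tr for g, xi in zip(gamma, x)))

# Average over units of the predicted difference Y(Tr=1) - Y(Tr=0):
ate = sum(predict(x, 1) - predict(x, 0) for x in X) / len(X)

# Closed form for a linear model: tau + sum_k gamma_k * mean(X_k)
mean_x = [sum(col) / len(X) for col in zip(*X)]
ate_closed = tau + sum(g * m for g, m in zip(gamma, mean_x))

print(ate, ate_closed)
```

For a linear model the two computations agree exactly; g-computation has the advantage of carrying over unchanged to nonlinear outcome models.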
740
causal inference
How do we select model for causal inference?
https://stats.stackexchange.com/questions/565783/how-do-we-select-model-for-causal-inference
<p>I am reading Rubin's Causal Inference, Sec. 7.5, in the context of a completely randomized experiment.</p> <ol> <li><p>It says performing linear regression will produce an asymptotically unbiased estimate of the causal effect, regardless of whether the model is misspecified.</p> </li> <li><p>However, in a later section, it says incorporating interactions between covariates (i.e., predictive features) and treatment will yield higher precision. (This probably needs some prior knowledge.)</p> </li> </ol> <p>How do I select a model in this case to infer the causal effect? I have a bunch of linear models. The estimate of the causal effect is asymptotically consistent, and consistency is independent of misspecification. I cannot tell why 2 is necessary given 1. From 1 alone, I would expect that using other models should not asymptotically increase my precision unless the residuals decrease. However, I could run into overfitting by introducing too many interactions.</p> <p>How do I resolve my conflicting thoughts on 1 and 2 here? Statement 1 suggests there is no need for interactions, while statement 2 seems to rest on the premise that decreasing the residuals boosts precision.</p>
741
causal inference
Why do we need a consistency assumption in causal inference?
https://stats.stackexchange.com/questions/631817/why-do-we-need-a-consistency-assumption-in-causal-inference
<p>Why do we need a consistency assumption in causal inference? I think the consistency assumption is quite obvious and it is more like a definition for the observed outcome.</p> <p><a href="https://i.sstatic.net/8nyuQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8nyuQ.png" alt="enter image description here" /></a></p>
<p>In one <a href="https://www.nature.com/articles/ijo200882" rel="noreferrer">article</a>, Hernán indeed writes</p> <blockquote> <p>&quot;Consistency may seem so obvious as to hardly deserve any attention. As a consequence, the condition of consistency is often taken for granted [...]&quot;</p> </blockquote> <p>Then, he goes on to demonstrate why it's not so simple.</p> <p>Problems can in particular arise in observational studies, where the &quot;treatment&quot; <span class="math-container">$A$</span>, e.g., gender, might not be unambiguously linked to an intervention. It is then hard to conceive what is meant by a statement like &quot;had the person been assigned to the other gender&quot;, i.e., by an expression like</p> <p><span class="math-container">$$ Y_{A=0}|A = 1. $$</span></p> <p>What is the intervening assignment mechanism of gender here that we are interested in? Is a change in name enough? (If we are looking at hiring discrimination from CVs, that might actually be enough.) Or do we need to think of a person with all the same personality characteristics, but different biological sex? Or a person born to the other sex and raised accordingly, including all socially learned gender roles? The same ambiguity makes it unclear what in this case we even mean by</p> <p><span class="math-container">$$ Y_{A=0}|A = 0. $$</span></p> <p>There is no well-defined intervention <span class="math-container">$A$</span>, and hence no well-defined causal contrast. Practically, this has downstream problems for data analysis, like the selection of adjustment variables. It is also linked to problems in evaluating the positivity assumption (see the article).</p> <p>In short, evaluating the consistency assumption guides us towards well-defined causal questions.</p> <p>Edit: see also the much more elaborate response <a href="https://stats.stackexchange.com/questions/304799/in-causal-inference-in-statistics-how-do-you-interpret-the-consistency-assumpti?rq=1">here</a>.</p>
742
causal inference
In what cases is identification not possible in causal inference?
https://stats.stackexchange.com/questions/617240/in-what-cases-is-identification-not-possible-in-causal-inference
<p>Step one in Judea Pearl's causal inference book is to define your graphical causal model. The second step is identification of the estimand for estimation in step 3. Are there any cases where identification may not be possible, i.e., where our do-expression cannot be expressed in terms of conditional expectations?</p>
<p>The causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is not identifiable in a number of cases. Pearl's <em>Causality: Models, Reasoning, and Inference, 2nd Ed.</em> (2009), on p. 90, has three examples. The simplest possible such example is the graph consisting of two vertices <span class="math-container">$X\to Y$</span> together with a confounding bow between <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> (represented by bidirectional dashed arrows). In such a case, a do-expression will not be reducible to an expression containing only ordinary conditional probabilities that you can estimate from the (right) data.</p>
743
causal inference
Causal inference - propensity score balancing sufficient for potential outcome balancing?
https://stats.stackexchange.com/questions/632249/causal-inference-propensity-score-balancing-sufficient-for-potential-outcome-b
<p>I am trying to make some causal inference estimates in a dataset and was hoping someone here could help me out with a question I have coming out of my background reading.</p> <p>It seems that a very prevalent technique is to use a propensity score (as described by Rosenbaum, the probability of receiving the treatment as a function of a valid set of confounders) as the balancing mechanism when the set of confounders is very high-dimensional.</p> <p>My understanding is that doing so would ensure that treatment is essentially random given any combination of the values of the confounders, thus mimicking a randomized controlled trial.</p> <p>However, this randomization of treatment seems to me to be only part of what makes an RCT the gold standard of causal inference. In an RCT, randomization of treatment also ensures the random distribution of potential outcomes between treatment and control groups.</p> <p>And this is where I am struggling. In certain sets of observational data, it seems entirely possible that balancing on treatment propensity would not ensure balance on potential outcomes propensity. In other words, within any given stratified propensity score bucket, the treated audience could have a different potential outcomes propensity than the control audience. And thus the difference between the two groups' averages would be a biased causal effect estimate.</p> <p>Is there something that I am missing here, and thus this isn’t a concern in practice? If not, are there other causal inference techniques out there that attempt to address this potential introduction of bias?</p> <p>Any help here is much appreciated!</p>
<p>You might be missing the key assumption required for propensity score methods (and all methods that rely on covariate adjustment, of which propensity score methods are a set, and likely not even among the best methods) to yield valid estimates of the causal effect: strong ignorability. Strong ignorability says that the potential outcomes are functions of the variables you are adjusting for (i.e., balancing) and possibly other variables that are independent of the treatment. That means that by balancing the covariates, you are balancing the potential outcomes (at least, the components of the potential outcomes that are otherwise associated with treatment). If strong ignorability is not met, then methods of covariate adjustment do not balance the potential outcomes. Some argue strong ignorability can never be met in empirical applications, implying covariate adjustment methods are never valid for estimating causal effects.</p> <p>My answer <a href="https://stats.stackexchange.com/a/474669/116195">here</a> on potential outcomes might be helpful.</p>
744
causal inference
Conditional expectation function and causal inference
https://stats.stackexchange.com/questions/637141/conditional-expectation-function-and-causal-inference
<p><strong>!For the question itself skip to the last paragraph!</strong></p> <p>It is my understanding that iff we have a model of the form <span class="math-container">$$Y = m(X) + e$$</span> and <span class="math-container">$E[e|X] = 0$</span> we know that <span class="math-container">$m(X)$</span> is the conditional expectation function, thus: <span class="math-container">$m(X) = E[Y|X]$</span>. If we observe <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, their relationship is linear in the parameters, and we know what this relationship looks like, then we can estimate the CEF using OLS. Our estimator will then be consistent and unbiased for the true CEF. If either of the assumptions is violated, we only estimate a linear projection model (which may or may not be close to the true CEF).</p> <p>In texts on causal inference (e.g. Angrist and Pischke 2009), we define the average causal effect as <span class="math-container">$$E[Y|X, T = 1] - E[Y|X, T = 0] = E[Y(1) - Y(0)|X]$$</span> (<span class="math-container">$T$</span> denotes randomly assigned treatment). If <span class="math-container">$T$</span> is binary, I can see that we can fully saturate (include all possible interactions and polynomials, i.e. ensure we capture the correct functional form) the model (assuming <span class="math-container">$X$</span> only holds binary variables or is only a constant), which implies that any structural model of the form <span class="math-container">$$Y = \gamma T + X'\beta + e$$</span> is in fact the true CEF, and we can estimate our causal effect well (assuming large enough sample, finiteness of moments, and no perfect collinearity).</p> <p>BUT what if <span class="math-container">$T$</span> is continuous? Or random assignment only holds conditional on <span class="math-container">$X$</span> and <span class="math-container">$X$</span> has continuous components? In that case, how can we know that we are actually estimating the true CEF? 
If we aren't, the definition of the causal effect above still holds in the population model, but we can't estimate it using OLS. So why do we still end up using it in cases like this in applied research (I am speaking for econometrics, I don't know how other fields deal with this)?</p>
<h1>Linearity is a property of the function, not of the RVs</h1> <p>Given the comments I think I'm beginning to see what the confusion is, so I'll attempt an answer.</p> <p>The assumption of linearity does not rest upon the variable of intervention (treatment), the covariates, or the target variable being binary random variables. Quite simply, linearity is a property of the function, not of the variables. This is the case for standard linear regression just as much as for estimating a treatment effect.</p> <p>You are right to note that we may mis-estimate a linear slope parameter from finite data, even if the estimator is unbiased in the sample limit. This is why the rate of convergence to the true parameter and the <a href="https://en.wikipedia.org/wiki/Efficiency_(statistics)#:%7E:text=An%20efficient%20estimator%20is%20characterized,in%20the%20L2%20norm%20sense." rel="nofollow noreferrer">efficiency</a> of the estimator are of interest in many statistical applications.</p> <p><strong>Why do we use OLS?</strong></p> <p>Well, trying to estimate a non-deterministic function from finite data comes with the possibility of error. The best one can do is find a good estimator. Under the appropriate assumption OLS is the best linear unbiased estimator. So this, and the fact that it's a rather simple and interpretable estimator, makes it a popular choice. (Regarding your point about binary variables - even in this case we still could not be sure to arrive at the true relationship given finite data as long as there is randomness.)</p>
745
causal inference
Portfolio Optimization using causal inferences
https://stats.stackexchange.com/questions/605148/portfolio-optimization-using-causal-inferences
<p>I'm trying to use causal inference in portfolio optimization, and I used the CausalImpact library in Python because it deals with time series. I wanted to check the effect of covid19 on the daily closing prices, so I selected the prior and post periods as the periods before and after 2019-03-01, respectively. Since there are no companies that weren't affected by covid19, I used the S&amp;P500 stock market index as the control series. Is this feasible, or are there any other alternatives I can use?</p> <p>Even though I predicted the counterfactual time series as above, I'm confused about how to use these causal estimates in portfolio optimization. How can I apply them to portfolio optimization?</p>
<p>The index is a combination of stocks that were all touched by the pandemic, as you admitted, so it cannot serve as a valid counterfactual.</p>
746
causal inference
Difference between exchangeability and independence in causal inference
https://stats.stackexchange.com/questions/558195/difference-between-exchangeability-and-independence-in-causal-inference
<p>When inferring causal effects from observational studies, one of the assumptions that's generally required is the exchangeability assumption. Suppose <span class="math-container">$A \in \{0, 1\}$</span> is a binary treatment, and let <span class="math-container">$Y^a$</span> denote the counterfactual outcome under treatment <span class="math-container">$A=a$</span>. The exchangeability over <span class="math-container">$A$</span> assumption is: <span class="math-container">$$Y^a\perp\!\!\!\perp A$$</span></p> <p>which says that <span class="math-container">$Y^a$</span> is independent of <span class="math-container">$A$</span>.</p> <p>My question is, why is this assumption called the &quot;exchangeability&quot; assumption when it's a statement about independence?</p> <p>I know that exchangeable random variables have a joint probability distribution that does not change when the positions in the sequence in which they appear are altered. And two random variables are independent if the realization of one does not affect the probability distribution of the other. But what is the relationship between exchangeability and independence in the causal inference context?</p>
<blockquote> <p>My question is, why is this assumption called the &quot;exchangeability&quot; assumption when it's a statement about independence?</p> </blockquote> <p>Exchangeability is the assumption of being able to <strong>exchange</strong> groups without changing the outcome of the study. Why? Because the relationship between treatment and outcome is not confounded. Why? Because treatment assignment is independent of everything else.</p> <p>If you have people with a more severe version of the disease in one group, if you exchange the treatment group with the control group, your results will be different, so exchangeability, in this case, would have been violated.</p> <p>You can also run into this assumption with a different name, depending on what source you're checking. One example is <em>unconfoundedness</em>. If you made treatment and outcome independent by adjusting for the appropriate variables <span class="math-container">$Z$</span> (check backdoor criterion, for example), you can also see this as <em>conditional exchangeability</em>, e.g., <span class="math-container">$Y^a \perp\!\!\!\perp A \mid Z $</span>.</p>
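The disease-severity example in the answer can be made concrete with a tiny deterministic calculation (all numbers invented): when a confounder drives treatment assignment, the naive contrast is biased, while standardizing over the confounder recovers the true effect, which is conditional exchangeability at work:

```python
# Deterministic illustration of (conditional) exchangeability with a binary
# confounder Z. All numbers are invented. Potential outcomes:
#   Y(0) = 2*Z and Y(1) = Y(0) + 1, so the true ATE is exactly 1.
# Treatment A is more likely when Z = 1, so marginal exchangeability fails.

p_z1 = 0.5                       # P(Z = 1)
p_a1_given_z = {0: 0.2, 1: 0.8}  # P(A = 1 | Z)

def y(a, z):
    """Observed outcome equals the potential outcome under the received a."""
    return 2 * z + a

def e_y_given_a(a):
    """E[Y | A = a], weighting over Z via Bayes' rule."""
    w = {z: (p_a1_given_z[z] if a == 1 else 1 - p_a1_given_z[z])
            * (p_z1 if z == 1 else 1 - p_z1) for z in (0, 1)}
    tot = w[0] + w[1]
    return sum(y(a, z) * w[z] / tot for z in (0, 1))

naive = e_y_given_a(1) - e_y_given_a(0)      # biased: 2.2 rather than 1

# Standardization over Z (relies on conditional exchangeability given Z):
adjusted = sum((y(1, z) - y(0, z)) * (p_z1 if z == 1 else 1 - p_z1)
               for z in (0, 1))              # recovers the true ATE of 1

print(naive, adjusted)
```

Exchanging the treated and untreated groups here would change the naive result, because the treated group has more Z = 1 units; within strata of Z, the groups are exchangeable and the contrast is unbiased.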
747
causal inference
Non-obvious real-world datasets for observational causal inference
https://stats.stackexchange.com/questions/347899/non-obvious-real-world-datasets-for-observational-causal-inference
<p>I am working on a project involving inference of causal direction from purely observational data, and not time series (given several assumptions, of course). I've been using the <a href="https://webdav.tuebingen.mpg.de/cause-effect/" rel="nofollow noreferrer">CauseEffectPairs</a> database to validate my method, but the ground truth in these datasets is based on the obviousness of true causal direction (e.g. altitude causes air temperature, age causes income). However, causal inference would be useful in the real world if it could identify a true causal direction in cases where it is not intuitively obvious.</p> <p>Let $X$ and $Y$ be collections of $L$ observations of variables with dimension $n$ and $m$, respectively. I want to evaluate my method on datasets with the following characteristics:</p> <ol> <li>There exists a non-confounded, unidirectional causal relationship between $X$ and $Y$.</li> <li>The causal direction is not obvious. That is, reasonable mechanisms could be proposed both for the case where $X$ causes $Y$ and for the case where $Y$ causes $X$, such that an experimental intervention would be necessary to choose between the two hypotheses.</li> <li>There exists a simple, non-costly experimental intervention that would point to the true causal direction.</li> </ol> <p>The data can be continuous, discrete, categorical, mixed, etc.</p> <p>The best answers would describe a set of measurements that could be easily carried out that would produce a dataset that meets the above criteria, or list some publically available data that could be used to create such a dataset.</p>
<p>I am not sure about what types of datasets you would be looking at, but I can suggest a measure.</p> <p>Somers' D is an asymmetric measure of association. It distinguishes &quot;it is raining, therefore there must be clouds&quot; from &quot;there are clouds, therefore it must be raining&quot;. It won't always work, however. Particularly if the set is small, it is likely there won't be enough variability to create a difference. If the effect is not obvious and the set is small enough, the power to determine the likely direction won't be there. This is because the measure is ordinal, and so some information is stripped away by losing the magnitudes.</p> <blockquote> <p>Somers, R. H. (1962). "A new asymmetric measure of association for ordinal variables". American Sociological Review. 27 (6). </p> </blockquote>
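A from-scratch sketch of the asymmetry, computing Somers' D in both directions from pair counts (the rain/clouds data are invented for illustration; in practice an established implementation such as scipy.stats.somersd would be the usual choice):

```python
# Somers' D computed from concordant/discordant/tied pair counts.
# d(y|x) treats y as the dependent variable: pairs tied on x are
# excluded from the denominator, pairs tied only on y are kept.

from itertools import combinations

def somers_d(x, y):
    """Somers' d(y|x) = (C - D) / (number of pairs not tied on x)."""
    conc = disc = tied_x = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        if x1 == x2:
            tied_x += 1                    # excluded from the denominator
        elif (x1 - x2) * (y1 - y2) > 0:
            conc += 1
        elif (x1 - x2) * (y1 - y2) < 0:
            disc += 1
        # pairs tied on y only fall through but stay in the denominator
    n_pairs = len(x) * (len(x) - 1) // 2
    return (conc - disc) / (n_pairs - tied_x)

rain = [0, 0, 1, 1]     # toy data: every rainy day is cloudy,
clouds = [0, 1, 1, 1]   # but not every cloudy day is rainy

d_rain_given_clouds = somers_d(clouds, rain)   # clouds as the predictor
d_clouds_given_rain = somers_d(rain, clouds)   # rain as the predictor

print(d_rain_given_clouds, d_clouds_given_rain)
```

The two directions give different values on the same data, which is the asymmetry the answer describes; with only four observations, though, the caveat about small sets clearly applies.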
748
causal inference
Pearl&#39;s Causal Inference In Statistics, equation 3.11 - Calculation of group specific causal effects
https://stats.stackexchange.com/questions/602716/pearls-causal-inference-in-statistics-equation-3-11-calculation-of-group-spe
<p>In the book <em>Causal Inference In Statistics</em> by <em>Pearl</em>, page 63, while referring to the below DAG, it says -</p> <blockquote> <p>Thus to compute the <span class="math-container">$w$</span>-specific causal effect, written <span class="math-container">$P(y|do(x),w)$</span>, we adjust for <span class="math-container">$T$</span>, and obtain</p> <p><span class="math-container">$P(Y=y|do(X=x),W=w)$</span> <span class="math-container">$=$</span> <span class="math-container">$\sum_t {P(Y=y|X=x,W=w,T=t)P(T=t|X=x,W=w)}$</span> (3.11)</p> </blockquote> <p><a href="https://i.sstatic.net/u17IN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u17IN.png" alt="enter image description here" /></a></p> <p>I have the following queries -</p> <ol> <li>Why does it say - &quot;<strong>to compute the <span class="math-container">$w$</span>-specific causal effect, written <span class="math-container">$P(y|do(x),w)$</span></strong>&quot;? Given the definition of <span class="math-container">$do(x)$</span> presented <a href="https://stats.stackexchange.com/a/312130/331772">here</a>, it cannot be guaranteed that <span class="math-container">$P(y|do(x),w)$</span> calculates the respective causal effect, when conditioning on <span class="math-container">$w$</span> opens up a non-causal path (highlighted in pink in the figure). Am I understanding the definition of <span class="math-container">$do(x)$</span> incorrectly here?</li> <li>In the equation if the summation on the right-hand side is performed, <span class="math-container">$\sum_t {P(Y=y|X=x,W=w,T=t)P(T=t|X=x,W=w)}$</span><br> <span class="math-container">$=\sum_t {P(Y=y,T=t|X=x,W=w)}$</span><br> <span class="math-container">$=P(Y=y|X=x,W=w)$</span><br> which should not be the causal-effect as it seems to be including the association rising from the non-causal path. What am I missing here?</li> </ol>
<p>The <span class="math-container">$w$</span>-specific causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is quite distinct from the causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y.$</span> The causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is just <span class="math-container">$P(y|do(x)).$</span> The <span class="math-container">$w$</span>-specific causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is as you've written: <span class="math-container">$P(y|do(x),w).$</span> Essentially, you are stratifying the causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> by values of <span class="math-container">$w.$</span> Now, in the particular case in question, there are backdoor or other non-causal paths opened up by conditioning on <span class="math-container">$w,$</span> so it is necessary to stop those up to get the true <span class="math-container">$w$</span>-specific causal effect (just condition on <span class="math-container">$Z$</span> or <span class="math-container">$T$</span> as well to stop up the undesired path - hence the first formula in your question). I don't think you're necessarily misunderstanding the <span class="math-container">$do$</span> operator.</p> <p>In your second question, you need to understand what's meant by &quot;including the association rising from the non-causal path&quot;. You could include that association in more than one way. In this example, we're &quot;including&quot; that association by adjusting for it so that it does <em>not</em> bias our results.</p>
749
causal inference
Is there sense in applying causal inference methods to variables with low correlation?
https://stats.stackexchange.com/questions/187008/is-there-sense-in-applying-causal-inference-methods-to-variables-with-low-correl
<p>This question is somehow similar to <a href="https://stats.stackexchange.com/questions/26300/does-causation-imply-correlation">Does causation imply correlation?</a>, but what I would like to know is there any sense in applying a causal inference methods when we have a low correlation level. I'm very interested in a several particular cases, for the one of them the correlation coefficient equals 0.40, the scatter plot is attached. <a href="https://i.sstatic.net/in0p2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/in0p2.png" alt="enter image description here"></a></p> <p>Will the results of causal inference be meaningful, or the variable contains too much noise?</p>
<p>Statements about how "meaningful" something is necessarily involve a subjective assessment. So, it depends: a not very precise estimate for a very relevant topic, for which there is no more precise measure yet, is probably very meaningful, as long as it is interpreted with the right degree of caution.</p> <p>Moreover, what counts as a high or low correlation depends on the context. For example, for a survey in the social sciences I would consider a correlation of .4 suspiciously high; not impossible, but something I would check very, very closely before believing it.</p>
750
causal inference
Bayesian Networks vs traditional stats approaches to Causal Inference?
https://stats.stackexchange.com/questions/554690/bayesian-networks-vs-traditional-stats-approaches-to-causal-inference
<p>I've been reading the 'book of why' by Judea Pearl and come to understand that Bayesian Networks can be used to establish causality given a directed acyclic graph (DAG) and that the methods are non-parametric. Throughout the book, the author drags Pearson and Fisher through the mud; it can be hard to tell what is an emotional reaction to resistance from the stats community vs genuine criticisms/improvements to traditional stats approaches to causal inference.</p> <p>My question is: How are traditional approaches from stats different?</p>
751
causal inference
Causal inference of impact of a university bankruptcy
https://stats.stackexchange.com/questions/656592/causal-inference-of-impact-of-a-university-bankruptcy
<p>I have taken one introductory course in causal inference, but I'm very new to this.</p> <p>I have one problem I'm thinking of tackling. There are 124 electoral districts in Ontario. ED boundaries were the same from 2018 to 2022, but there was a high-profile university bankruptcy in one remote district in 2021. I would like to estimate the causal effect of that bankruptcy on the incumbent party's electoral support. In effect, how much of a hit did the governing party take because it oversaw this basically unpopular and controversial bankruptcy?</p> <p>Thinking this through, the district was a northern district with a university within its boundaries and a significant French-Canadian population that was served by the university.</p> <p>So, would I do a difference-in-differences strategy with a dichotomous variable for each district (1 if northern, 0 if not), perhaps a continuous variable for employment in the university sector in the district (# of professors in the district), and a continuous variable for the proportion of French Canadians? Have I got that about right?</p> <p>Thanks.</p>
752
causal inference
Is the emmeans R package performing causal inference G-computation?
https://stats.stackexchange.com/questions/520389/is-the-emmeans-r-package-performing-causal-inference-g-computation
<p>So I am trying to get an understanding of causal inference and how it differs from the usual contrasts. I regularly use the emmeans package in R, and I am wondering: when the function emmeans() mentions it has averaged over the covariates, is this essentially performing G-computation? At least for regular OLS or identity-link GLM models, that seems to be the case.</p> <p>One possible difference I see is that G-computation takes place on the response scale, so I wonder: if you use the transform argument in emmeans when working with GLMs with a non-identity link, would it then be performing G-computation? It seems like it does the delta method to convert the SEs from the link scale to the response scale.</p> <p>And then if I modeled the treatment probability, inverted it, and used it as a covariate in a model, and then used emmeans -- would this be doubly robust estimation?</p> <p>As my reference I am using the book here <a href="https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/" rel="nofollow noreferrer">https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/</a> pages 163-167.</p> <p>Edit 09/07/22: Anyone wondering about the connection -- it is written about here <a href="https://vincentarelbundock.github.io/marginaleffects/articles/gformula.html" rel="nofollow noreferrer">https://vincentarelbundock.github.io/marginaleffects/articles/gformula.html</a>. The marginaleffects package can do this.</p>
<p>Re-reading your question, my understanding is that you are asking if <code>emmeans()</code> does G-computation as part of what it <em>ordinarily</em> does. And based on my very limited understanding of causal models and G-computation, I would say the answer is <em>NO</em>. That is simply because we don't treat covariates in any special way. For a numerical covariate, the default action is to compute its mean and use that as a reference value for all subsequent estimates, regardless of whether it is regarded as a mediator or not. We just treat it as a direct effect.</p> <p>There may be some options in <code>emmeans()</code> that do allow the user to treat covariates in a different way. For example, we can fit a model <code>y ~ treat + M</code> where <code>treat</code> is a treatment and <code>M</code> is a mediator. Then suppose we subsequently do</p> <pre><code>emmeans(model, &quot;treat&quot;, cov.reduce = M ~ treat) </code></pre> <p>This instructs <code>emmeans</code> to <em>not</em> use the average value of <code>M</code>, but rather to use <code>lm()</code> to fit the model <code>M ~ treat</code> (with the same dataset) and use its predictions for the value of <code>M</code>. In that way, the reference value of <code>M</code> is different for each treatment level. This is equivalent to creating a covariate <code>C</code> that is equal to the residuals of the <code>M ~ treat</code> model, fitting the model <code>y ~ treat + C</code>, and using <code>emmeans()</code> in the ordinary way by using <code>C</code>'s mean (which is zero) as the reference value. Perhaps this is similar to what G-computation does -- I am not sure, but perhaps someone else can shed some light on this. 
But at least it does something special with covariates thought to be mediators, and that seems more akin to what is needed in causal inference.</p> <h2>Addendum</h2> <p>A comment to this answer suggests doing something like <code>emmeans(model, &quot;treat&quot;, cov.reduce = FALSE, weights = &quot;prop&quot;)</code> but that this is very inefficient as it creates a huge reference grid. I believe that the following may do the same thing:</p> <pre><code>emmeans(model, &quot;treat&quot;, submodel = ~ treat) </code></pre> <p>The above puts a linear constraint on the estimates whereby all the effects other than those of <code>treat</code> are replaced by predictions of those effects from the given submodel. See <code>vignette(&quot;xplanations&quot;, &quot;emmeans&quot;)</code> for the gory details. But in words, what happens is that we are trying to obtain the predictions we would have obtained from the submodel, while still accounting for the reduction in error variance achieved by including the covariate in the model. I think this in fact does relate to some causal-inference methods, but I lack the depth of knowledge in that area to be sure.</p> <p>In the case of mixed models and generalized linear models, the <code>submodel</code> constraint will not be quite the same as would be obtained by fitting the submodel with the same method. To accomplish this (or at least get closer), one can use a new feature in version 1.6.0 of <strong>emmeans</strong> to bring in covariate predictions from an external model. Suppose <code>model</code> was fitted using something like <code>model &lt;- lmer(y ~ M + treat + (1|SUBJ), ...)</code></p> <pre><code>Mmod &lt;- lmer(M ~ treat + (1|SUBJ), ...)
Mpred &lt;- function(grid) list(M = predict(Mmod, newdata = grid, re.form = ~ 0)) emmeans(model, &quot;treat&quot;, cov.reduce = extern ~ Mpred) </code></pre> <p>This is like the original <code>cov.reduce = M ~ treat</code>, except it uses <code>Mmod</code> instead of <code>lm(M ~ treat)</code> to do the predictions of <code>M</code>.</p>
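For readers wondering what G-computation actually computes in the identity-link case discussed above, here is a minimal, hypothetical Python sketch (numpy only, simulated data; all variable names are illustrative): fit an outcome model, predict every unit's outcome with the treatment forced to 1 and then to 0, and average the difference. For a linear model this reproduces the treatment coefficient exactly, which is why averaging over covariates (as `emmeans()` does by default) and averaging predictions coincide there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)                          # a confounder
t = (rng.normal(size=n) + x > 0).astype(float)  # treatment depends on x
y = 2.0 * t + 3.0 * x + rng.normal(size=n)      # true treatment effect = 2

# Fit the outcome model y ~ 1 + t + x by ordinary least squares.
X = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# G-computation: predict for everyone with t forced to 1, then to 0,
# and average the difference on the response scale.
pred1 = np.column_stack([np.ones(n), np.ones(n), x]) @ beta
pred0 = np.column_stack([np.ones(n), np.zeros(n), x]) @ beta
ate = (pred1 - pred0).mean()                    # equals beta[1] for a linear model
```

With a non-identity link (as the question raises), the two approaches diverge, because averaging on the response scale is not the same as plugging in average covariates.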
753
causal inference
Variable selection in causal inference regression models when $p &gt; n$?
https://stats.stackexchange.com/questions/623100/variable-selection-in-causal-inference-regression-models-when-p-n
<p>Are there accepted techniques for selecting variables in causal inference (not prediction) where the number of variables exceeds our sample size, making a standard OLS regression impossible?</p> <p>Assume treatment, outcome, and covariate variables have been carefully selected with a causal diagram, based on subject area knowledge or information from prior randomized controlled trials.</p> <p>That is, only variables which close back-door paths from the outcome to the treatment variables have been selected. There may also be moderators (such as age and gender) or risk factors having a close causal relationship with the outcome but not the treatment. Therefore, all of the selected variables are important in reducing bias or increasing precision of treatment effect estimates.</p>
754
causal inference
What are key papers discussing causal inference from a missing data perspective?
https://stats.stackexchange.com/questions/222939/what-are-key-papers-discussing-causal-inference-from-a-missing-data-perspective
<p>The Rubin Causal Model (RCM), also called the Potential Outcome Framework, assumes any unit in a population has potential outcomes under any treatment relevant in a study. For example, $Y_1$ denotes the outcome under treatment and $Y_0$ the outcome under control. In a non-randomized experiment, the fact that the quantity</p> <p>$$E(Y_1|T=1) - E(Y_0|T=0)$$</p> <p>is what is observed in expectation as the mean difference between treated and untreated units, where $T$ denotes the treatment assignment, introduces selection bias relative to the average treatment effect</p> <p>$$E(Y_1-Y_0).$$</p> <p>The key problem is that part of the data needed for causal inference is unobserved, in particular $E(Y_1|T=0)$ and $E(Y_0|T=1)$. There are several approaches to inference about this treatment effect, prominently weighting, stratification, or some other form of matching on the propensity score.</p> <p>An alternative approach tries to estimate the unobserved potential outcome distributions $P(Y_1,Y_0)$ directly using Bayesian techniques, such as multiple imputation. What are key papers that attempt causal inference by solving the missing data problem by multiple imputation or other Bayesian techniques?</p>
<p>$$E(Y_1−Y_0)$$ is the quantity we would like to learn about. Counterfactuals per se are not observed, so we need to make further assumptions to write this counterfactual quantity in terms of the observed variables $Y$ and $T$. One way is to assume that $$Y_1, Y_0 \perp T,$$ for example because treatment is randomized. If one further assumes that the observed variable Y obeys $Y = Y_1\cdot T + Y_0 \cdot (1 - T)$, we can write $$E(Y_1−Y_0) = E(Y_1|T = 1) − E(Y_0|T = 0) = E(Y|T = 1) - E(Y|T = 0).$$ The last expression can be estimated in a myriad of ways, because we can actually observe $Y$ and $T$. So the "causal inference"-step itself consists only of justifying and using counterfactual assumptions, and is not directly related to any estimation procedure like the ones you mention. Judea Pearl makes this point very forcefully (e.g., in "Causality", 2009, Cambridge University Press).</p> <p>To me, multiple imputation of the potential outcome distribution does not make sense. I am also not aware of any paper trying to do this. If one has a sample on $Y$ and $T$, one can make inferences about their population distributions, and if one makes the right assumption (as for example above), these quantities also tell you something about causal effects. An entirely different topic is how to deal with missing values in $Y$ and $T$ (not $Y_t$), and to what extent this is a problem for causal inference. For that, see [1] for a gentle and intuitive, and [2] for a very comprehensive treatment.</p> <p>[1] Pearl, Judea. "Linear models: A useful “microscope” for causal analysis." Journal of Causal Inference 1.1 (2013): 155-170.</p> <p>[2] Shpitser, Ilya, Karthika Mohan, and Judea Pearl. Missing data as a causal and probabilistic problem. No. TR-R-454. CALIFORNIA UNIV LOS ANGELES DEPT OF COMPUTER SCIENCE, 2015.</p>
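The identification step in this answer can be checked with a tiny simulation. The sketch below (hypothetical, numpy only) generates both potential outcomes, applies the consistency equation $Y = Y_1 T + Y_0(1-T)$ under randomized $T$, and confirms that the observable difference in means recovers $E(Y_1 - Y_0)$ even though only one potential outcome per unit is ever observed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y0 = rng.normal(1.0, 1.0, n)     # potential outcome under control
y1 = y0 + 2.0                    # potential outcome under treatment; ATE = 2
t = rng.binomial(1, 0.5, n)      # randomization: (Y1, Y0) independent of T

y = y1 * t + y0 * (1 - t)        # consistency: only one outcome is observed

ate = (y1 - y0).mean()           # needs the unobservable counterfactuals
diff_in_means = y[t == 1].mean() - y[t == 0].mean()  # needs only (Y, T)
```

If `t` instead depended on `y0` or `y1`, the independence assumption would fail and `diff_in_means` would no longer target the ATE.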
755
causal inference
Causal inference in python - where to start?
https://stats.stackexchange.com/questions/545054/causal-inference-in-python-where-to-start
<p><strong>Point 1</strong>: I'm not sure if this question should be asked here, as it may not seem to be about the &quot;science&quot; itself at first glance! At second glance, though, I think in practice many newcomers face this question, and it is a public benefit to have it here for reference.</p> <p><strong>Point 2</strong>: If the community thinks that this is an inappropriate place for this question, I will delete the whole question when I get an answer. Otherwise, I will edit the question, remove these two points, and let the question stay here for the reference of others.</p> <p>So I'm new to the world of causal inference. I am learning some elementary concepts such as DAGs, matching on covariates/propensity scores, etc. The problem is that I don't know where to start in terms of available packages in Python. I have found several packages: DoWhy, EconML, Causalib, CausalML, CausalNex, etc. The point is that I don't know where to start for simple tasks such as building a DAG or matching in Python. Does anyone know of a &quot;meta-reference&quot; on how to work through these libraries?</p>
<p>Here are a few good websites/books that I am fond of that use DAGs, and have code examples in R, Python, and Stata on github or packaged up.</p> <ul> <li><p><a href="https://mixtape.scunning.com/index.html" rel="nofollow noreferrer">Causal Inference: The Mixtape</a> and <a href="https://github.com/scunning1975/mixtape/" rel="nofollow noreferrer">its github</a></p> </li> <li><p><a href="https://gabors-data-analysis.com/" rel="nofollow noreferrer">Data Analysis for Business, Economics, and Policy</a> and <a href="https://github.com/gabors-data-analysis/da_case_studies" rel="nofollow noreferrer">its github</a>.</p> </li> <li><p><a href="https://theeffectbook.net/" rel="nofollow noreferrer">The Effect</a>, with examples in packages:</p> <ul> <li><code>install.packages('causaldata')</code> in R</li> <li><code>ssc install causaldata</code> in Stata</li> <li><code>pip install causaldata</code> in Python.</li> </ul> </li> <li><p><a href="http://www.upfie.net/" rel="nofollow noreferrer">Using Python for Introductory Econometrics</a> by Florian Heiss and Daniel Brunner.</p> </li> </ul> <p>This is not exactly the cutting-edge stuff, but the foundation you need to get started.</p> <p>I am an economist at a tech company who uses and teaches these methods.</p>
756
causal inference
When is it valid to use race/ethnicity in causal inference?
https://stats.stackexchange.com/questions/366301/when-is-it-valid-to-use-race-ethnicity-in-causal-inference
<p>It seems that often in social science, race is examined in causal terms, as researchers are interested in the differences between various ethnic groups in outcomes when controlling for other covariates. However, my understanding is that we actually can't use race for causal inference due to the omitted variable bias and the fact that essentially everything you control for is like a "post-treatment effect". </p> <p>So why do social scientists do this? Is it valid in certain contexts?</p>
<p>Race and ethnicity are variables that cannot be &quot;controlled&quot; in experiments, since it is not possible for the researcher to assign or change this characteristic of the study participant.<span class="math-container">$^\dagger$</span> For this reason, causal inference relating to race and ethnicity cannot generally rely on randomised controlled trials, and must instead fall back on uncontrolled observational studies. As with other uncontrolled studies on any other topic, this comes with all the regular drawbacks and caveats on causal interpretation of results, including the possibility that there may be omitted &quot;lurking variables&quot; that affect analysis. As a general principle, causal inference from uncontrolled studies is not reliable, and tends to be reasonable only in cases where the predictor in question is shown to have a statistical relationship conditional on a wide array of covariates, and tends to retain its predictive ability under variations in covariates that are not themselves intermediate causes.</p> <p>Many studies in the social sciences include race/ethnicity as covariates, and the goal is to filter out these variables to find some other causal or statistical relationship. There may be some studies where race/ethnicity is of direct interest as a predictor, and in this case the researcher needs to be careful to distinguish predictive effects from causal effects, as in any uncontrolled observational study. There is certainly no scientific problem with including race/ethnicity as variables in social science studies; the problems, if any, arise in regard to interpretation of results. 
There is a good discussion of the causal interpretation of race variables in <a href="https://uncch.pure.elsevier.com/en/publications/on-the-causal-interpretation-of-race-in-regressions-adjusting-for" rel="nofollow noreferrer">VanderWeele and Robinson (2014)</a>.</p> <p>For the most part, all of this is just a matter of applying general statistical principles to a particular set of variables. However, one issue that arises specifically in regard to causal inference regarding race and ethnicity is competing theories of whether any causality is direct (i.e., genetic/hereditary) or indirect through a mediator variable (i.e., due to discrimination). This aspect of the problem has been discussed at length by the economist Thomas Sowell in a series of books discussing statistical disparities among racial groups (see esp., <a href="https://rads.stackoverflow.com/amzn/click/com/067930262X" rel="nofollow noreferrer" rel="nofollow noreferrer">Sowell 1975</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/0465058728" rel="nofollow noreferrer" rel="nofollow noreferrer">Sowell 2013</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/154164560X" rel="nofollow noreferrer" rel="nofollow noreferrer">Sowell 2018</a>). Sowell notes that historically, there was an excessive tendency to ascribe all racial disparities to genetic causes in the nineteenth and early twentieth centuries, and since the late twentieth century there is now an excessive tendency to ascribe all racial disparities to discrimination. Both of these constitute a failure to properly apply statistical reasoning relating to causality, and both tend to occur due to a conflation of correlation and cause. 
In any case, if you have not already read these works, they may give you a better understanding of the difficulties that arise in making causal inferences from statistical disparities among racial and ethnic groups.</p> <p>It is difficult to answer your specific question without seeing a particular example of the kind of inference that concerns you. There are a wide variety of cases where social science researchers &quot;use race for causal inference&quot; and the validity would depend on the nature of the data and the resulting inference. (It is not even clear from that framing of the question whether race is the predictor of interest or just a covariate.)</p> <hr /> <p><span class="math-container">$^\dagger$</span> Note that there are some randomised experiments where the <em>appearance</em> of race is controlled via some experimental mechanism. For example, many studies on ethnic discrimination in employment use randomised 'correspondence tests' where the researchers control (and randomise) the markers of race and ethnicity in submitted job applications (see e.g., <a href="https://www.tandfonline.com/doi/abs/10.1080/1369183X.2015.1133279" rel="nofollow noreferrer">Zschirnt and Ruedin 2015</a>).</p>
757
causal inference
How to understand and model Causal Inference from regression?
https://stats.stackexchange.com/questions/549892/how-to-understand-and-model-causal-inference-from-regression
<p>I'm fairly new to causal inference. I know that regression is used to identify linear relationships between the dependent and independent variables, and that this doesn't necessarily mean causality.</p> <p>I have recently come across some quasi-experimental methods, such as Diff-in-Diff and PSM, which use regression to identify causal relationships.</p> <p>I tried to go through some online articles about this and learned that the zero conditional mean assumption (the error term has zero mean given any value of the explanatory variables) is crucial for causal inference. I'll be honest, I am having a hard time understanding this assumption.</p> <p>Currently I'm having a disconnect on what assumptions/conditions make a regression model identify causal vs. merely linear relations.</p> <p>Can anyone please help me understand the following:</p> <ul> <li><p>What are the differences in assumptions when setting up a regression for linear vs. causal relationships? (What's the zero conditional mean assumption in simple terms, with an example if possible?)</p> </li> <li><p>How do I come up with good confounding variables for setting up a regression model to identify causality? (Do we need to look at any distributions / perform any tests, etc.?)</p> </li> </ul> <p>Also, please provide some useful links if there are any good articles available on this topic.</p>
758
causal inference
How does BART (Bayesian Additive regression tree) help with causal inference?
https://stats.stackexchange.com/questions/446416/how-does-bart-bayesian-additive-regression-tree-help-with-causal-inference
<p>I have recently learned about using BART for causal inference from observational studies. So, I read that if we want to see the causal effect of a variable Z (binary) on Y in the presence of covariates X, then we can get factual (putting Z=0 for all points) and counterfactual (putting Z=1 for all points) predictions for Y from the trained BART model and see the difference in the two generated distributions. (Please correct this method if I am wrong.)</p> <p>My question is why BART allows for this kind of inference and not a regular model (such as a decision tree, logistic regression, etc.), as we can also produce factual and counterfactual probabilities using any other model? </p> <p>Please help.</p>
<p>BART is a regression method, just like generalized linear models (e.g., linear or logistic regression), decision trees, or many other machine learning methods. BART has a few advantages for causal inference that distinguish it from other methods.</p> <p>First, because the tuning parameters correspond to Bayesian priors, each predicted value has a posterior distribution, from which credible intervals can be directly computed. Although for some regression methods there is theory for how to construct valid confidence intervals, the general advice for g-computation is to bootstrap, which does not always work with machine learning-based methods. There is some evidence that the credible intervals from BART do not reach nominal confidence levels, especially for the ATT, but there are ways to improve coverage. Note that confidence intervals can also be estimated for treatment effects estimated using other machine learning methods by using targeted minimum loss-based estimation (TMLE).</p> <p>Second, BART is extremely flexible and is able to account for nonlinearities and interactions without overfitting due to the Bayesian priors. In addition, the default tuning parameters are effective in many cases. This is in contrast with generalized linear models, which involve a fairly strict parametric structure, and decision trees, which can easily overfit, may not capture smoothness very well, and require a procedure to set the tuning parameters.</p> <p>Third, it's possible to incorporate substantive information into the priors, for example, on which variables to perform a split or for which variables the treatment effect should vary. This is less possible with some other methods that use fully data-driven splitting criteria and variable selection.</p> <p>BART's potential is still being explored, but its growing popularity in causal inference is due at least partly to its performance in causal inference competitions, such as that described by Dorie et al. 
(2019), which details many of the points I've brought up here. Conceptually, BART does just what many other regression methods do, as you have identified, but some of its unique characteristics make it particularly well suited for causal inference.</p> <hr> <p>Dorie, V., Hill, J., Shalit, U., Scott, M., &amp; Cervone, D. (2019). Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition. Statistical Science, 34(1), 43–68. <a href="https://doi.org/10.1214/18-STS667" rel="noreferrer">https://doi.org/10.1214/18-STS667</a></p>
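The recipe described in the question (train an outcome model on the treatment plus covariates, then predict with $Z$ forced to 1 and to 0 for every unit) is g-computation, with BART serving as the outcome model. The hypothetical numpy-only sketch below uses a deliberately crude stand-in model (cell means over a binary confounder) purely to show the mechanics; BART's contribution, per the answer above, is to replace this stand-in with a flexible fit that also yields posterior intervals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40_000
x = rng.binomial(1, 0.5, n)                       # binary confounder
z = rng.binomial(1, np.where(x == 1, 0.7, 0.3))   # treatment depends on x
y = 1.0 + 2.0 * z + 1.5 * x + rng.normal(size=n)  # true effect of z is 2

naive = y[z == 1].mean() - y[z == 0].mean()       # confounded contrast

# Stand-in "outcome model": mean of y in each (z, x) cell.
mu = {(zv, xv): y[(z == zv) & (x == xv)].mean()
      for zv in (0, 1) for xv in (0, 1)}

# Counterfactual predictions: force z=1, then z=0, for every unit, and average.
pred1 = np.where(x == 1, mu[(1, 1)], mu[(1, 0)])
pred0 = np.where(x == 1, mu[(0, 1)], mu[(0, 0)])
effect = (pred1 - pred0).mean()
```

Here `naive` overstates the effect because treated units disproportionately have `x = 1`, while the model-based contrast recovers it; any regression method can plug into this recipe, which is exactly the question's point.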
759
causal inference
Is this a breach of the consistency principle in causal inference?
https://stats.stackexchange.com/questions/610079/is-this-a-breach-of-the-consistency-principle-in-causal-inference
<p>My understanding of the consistency principle is that the observed outcome is equal to the potential outcome: i.e., let T = treatment; if T=1, then the observed outcome Y is equal to the potential outcome, i.e., Y(1) = Y. This implies that there can't be 'multiple versions of the same treatment' that would lead to different outcomes.</p> <p>I'm struggling to understand this concept. E.g., I want to find out if owning a cat increases happiness. In my data I see that the type of cat can vary. If I observe that owning a British Blue increases happiness in my sample data but owning a Maine Coon decreases happiness, does this mean a <strong>violation</strong>? What if I also see that a Maine Coon increases happiness in some subjects too? E.g., my data looks like:</p> <pre><code>treatment | type cat   | outcome
1         | British    | increase
1         | Maine Coon | increase
1         | Maine Coon | decrease
</code></pre> <p>E.g., if I had the above data, would I be able to run any causal inference experiments?</p> <p>If I include the 'type of cat' in my structural causal model, would consistency naturally follow? How could I embed this variable 'type of cat' into an SCM?</p> <p><em>Please let me know if I am understanding this correctly!</em></p> <p><a href="https://i.sstatic.net/eVxKh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVxKh.png" alt="enter image description here" /></a></p> <p>source: <a href="https://www.lesswrong.com/posts/JDWTro62tRAHzvhEH/causal-inference-sequence-part-1-basic-terminology-and-the" rel="nofollow noreferrer">https://www.lesswrong.com/posts/JDWTro62tRAHzvhEH/causal-inference-sequence-part-1-basic-terminology-and-the</a></p>
760
causal inference
Causal Inference for Pandemic Impact on Energy Fraud (All Units Treated)
https://stats.stackexchange.com/questions/661099/causal-inference-for-pandemic-impact-on-energy-fraud-all-units-treated
<p>I'm writing my MSc thesis and need some help understanding how to make a causal estimation of the COVID-19 pandemic's impact on energy fraud.</p> <p><strong>Context:</strong> I have a dataset of commercial losses, also known as non-technical losses, reported monthly by different energy distributors in Brazil. These distributors operate across large areas within the same state, and I cannot obtain data at the city level.</p> <p>Non-technical losses are typically attributed to energy theft or measurement errors. I hypothesize that the COVID-19 pandemic led to an increase in these losses, and I want to demonstrate this using causal inference.</p> <p><strong>Problem:</strong> I've already applied CausalImpact and CausalArima to this data and obtained some results. However, these results aren't strong enough to support my argument, so I'd like to explore different approaches, such as difference-in-differences (DID) or other causal inference methods. The challenge is that the &quot;treatment&quot;—the COVID-19 pandemic—affected all units (distributors/states). Currently, I have data on average income by state and quarter, as well as the percentage of unemployment. I'm trying to use this socioeconomic data to differentiate states that were more severely affected and potentially use this as a basis for treatment and control groups. I'm unsure how to proceed with this approach or other suitable methods given that the pandemic was a widespread event.</p>
761
causal inference
Should I use causal inference for this usecase?
https://stats.stackexchange.com/questions/605368/should-i-use-causal-inference-for-this-usecase
<p>I have a historical dataset of several million sales, some of which are marked as returned. I have multiple variables, such as product, customer, creation date, etc. My goal is to determine the cause of the returned orders, such as whether it's a combination of a particular product type with a specific customer.</p> <ul> <li>I have analyzed my dataset along multiple dimensions, for example estimating the percentage of returns by customer and by product type. This gave me some insights, but with all these dimensions I couldn't really find a combination of variables that causes the returns.</li> <li>I have also tried to build a tree-based model to predict whether an order is a return or not, then plotted the feature importances, but it isn't enough.</li> </ul> <p>So, I'd like to know whether it would be appropriate to use causal inference here. I've seen several packages, such as EconML and CausalML, but I am unsure what intervention would be suitable in my case.</p> <p>Thanks!</p>
762
causal inference
Bayesian Methods for Causal Inference with Observational Panel Data
https://stats.stackexchange.com/questions/633722/bayesian-methods-for-causal-inference-with-observational-panel-data
<p>How comprehensive is the toolkit for Bayesian inference when trying to make causal inferences with observational panel data?</p> <p>I can see an easy application with the incorporation of fixed effects or the ADL model, but these models have well-documented problems.</p> <p>I also understand that there are Bayesian applications for difference-in-differences and synthetic controls, but each method is generally suited to a very particular design (parallel trends for DID, one/few treated units for synthetic control, one (preferably) time period of treatment for both, etc.)</p> <p>Two methods that I have grown fond of are panel matching and marginal structural models. However, the developers of the {PanelMatch} package, to my knowledge, have not implemented a Bayesian framework for their methodology. In addition, it seems the development of a Bayesian marginal structural model is in the <a href="https://github.com/ajnafa/Latent-Bayesian-MSM/tree/main" rel="nofollow noreferrer">works</a>, but is not yet complete.</p> <p>Say that I have N &gt; 50 units and T &gt; 20 time periods where treatment statuses can change over time, there is no single initial time at which treatment is assigned to all treated units, and I want to operate under a Bayesian framework to attempt to estimate a causal effect of treatment. What would one do under this scenario?</p>
763
causal inference
How to deal with cross-elasticity and time series for optimal pricing with causal inference?
https://stats.stackexchange.com/questions/623406/how-to-deal-with-cross-elasticity-and-time-series-for-optimal-pricing-with-causa
<p>I have a problem in which the prices of an &quot;item&quot; will change for specific hours of the day. I was leveraging the concept of price elasticity, which includes the self- and cross-elasticity coefficients (which are not directly observed), to evaluate the impact of that change.</p> <p>As there can be other factors that affect the purchasing of this item (temperature, hour of the day, precipitation, ...), I was trying to build a model based on causal inference techniques, so that the effect of price on demand is properly captured. I am dealing with a time series: I have pre-treatment data for some months (no price changes during the day), and the treatment data generated over the next months will depend on how I set the prices during the day. I cannot run a randomized trial, in which the treatment would only be given to certain random customers, due to business reasons.</p> <p>As I wanted these prices to be changed optimally (e.g., in the most favorable hours to maximize profits), I was hoping to use a causal model within an optimization problem, in which the causal model would be retrained periodically as more data becomes available. There are examples of using ML models within causal inference, even in the context of price elasticity of demand (e.g., <a href="https://matheusfacure.github.io/python-causality-handbook/20-Plug-and-Play-Estimators.html" rel="nofollow noreferrer">https://matheusfacure.github.io/python-causality-handbook/20-Plug-and-Play-Estimators.html</a>).</p> <p>However, I have not found examples that deal with causal inference for time series together with cross-elasticity concerns. The cross-elasticity is critical, as in this setting customers can purchase this &quot;item&quot; in different hours due to a change in the price of another hour. Within this context, do you know of any examples/techniques that may be helpful?
Do you know if any available package for causal inference (e.g., EconML, CausalML, DoWhy, DoubleML) would already account for this somehow (e.g., should I just use prices in different hours as features for an ML model that can be used in one of those packages)?</p> <p>Thank you.</p> <p>I tried analyzing the different packages available for causal inference, but no similar examples were found. There are also tutorials on causal inference that deal with price elasticity of demand (e.g., <a href="https://matheusfacure.github.io/python-causality-handbook/20-Plug-and-Play-Estimators.html" rel="nofollow noreferrer">https://matheusfacure.github.io/python-causality-handbook/20-Plug-and-Play-Estimators.html</a>) but they do not account for cross-elasticity.</p>
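One simple way to make the cross-hour substitution concrete, before reaching for any of those packages, is a log-log demand model in which quantity in one hour depends on its own price and on the price in a neighboring hour; the coefficients are then the own- and cross-elasticities. Below is a minimal sketch (the data-generating process and all numbers are invented for illustration) that fits such a model by ordinary least squares via the normal equations:

```python
import random

random.seed(1)

# Hypothetical DGP: log demand in hour A depends on its own (log) price and on
# the (log) price in a neighboring hour B. E_CROSS > 0 encodes substitution:
# customers shift purchases between hours when relative prices change.
A0, E_OWN, E_CROSS = 3.0, -1.5, 0.8
rows = []
for _ in range(2000):
    lp_a = random.uniform(0.0, 1.0)   # log price, hour A
    lp_b = random.uniform(0.0, 1.0)   # log price, hour B
    lq = A0 + E_OWN * lp_a + E_CROSS * lp_b + random.gauss(0, 0.05)
    rows.append((1.0, lp_a, lp_b, lq))  # (intercept, x1, x2, y)

# OLS via the normal equations X'X b = X'y, solved by Gaussian elimination.
k = 3
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
xty = [sum(r[i] * r[3] for r in rows) for i in range(k)]
for col in range(k):                      # forward elimination
    for r2 in range(col + 1, k):
        f = xtx[r2][col] / xtx[col][col]
        for c in range(k):
            xtx[r2][c] -= f * xtx[col][c]
        xty[r2] -= f * xty[col]
beta = [0.0] * k
for i in range(k - 1, -1, -1):            # back substitution
    beta[i] = (xty[i] - sum(xtx[i][j] * beta[j] for j in range(i + 1, k))) / xtx[i][i]

print([round(b, 2) for b in beta])  # recovers [3.0, -1.5, 0.8] up to noise
```

With real data, the confounders mentioned in the question (temperature, hour of day, precipitation) would enter as additional columns, and one of the listed causal packages could replace the plain OLS step.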
764
causal inference
Why does propensity score matching work for causal inference?
https://stats.stackexchange.com/questions/206748/why-does-propensity-score-matching-work-for-causal-inference
<p>Propensity score matching is used for make causal inferences in observational studies (see the <a href="http://faculty.smu.edu/Millimet/classes/eco7377/papers/rosenbaum%20rubin%2083a.pdf" rel="noreferrer">Rosenbaum / Rubin paper</a>). What's the simple intuition behind why it works?</p> <p>In other words, why if we make sure the probability of participating in the treatment is equal for the two groups, the confounding effects disappear, and we can use the result to make causal conclusions about the treatment?</p>
<p>I'll try to give you an intuitive understanding with minimal emphasis on the mathematics. </p> <p>The main problem with observational data and analyses that stem from it is confounding. Confounding occurs when a variable affects not only the treatment assigned but also the outcomes. When a randomized experiment is performed, subjects are randomized to treatments so that, on average, the subjects assigned to each treatment should be similar with respect to the covariates (age, race, gender, etc.). As a result of this randomization, it's unlikely (especially in large samples) that differences in the outcome are due to any covariates, but due to the treatment applied, since, on average, the covariates in the treatment groups are similar. </p> <p>On the other hand, with observational data there is no random mechanism that assigns subjects to treatments. Take for example a study to examine the survival rates of patients following a new heart surgery compared to a standard surgical procedure. Typically one cannot randomize patients to each procedure for ethical reasons. As a result patients and doctors self-select into one of the treatments, often due to a number of reasons related to their covariates. For example the new procedure might be somewhat riskier if you are older, and as a result doctors might recommend the new treatment more often to younger patients. If this happens and you look at survival rates, the new treatment might appear to be more effective, but this would be misleading since younger patients were assigned to this treatment and younger patients tend to live longer, all else being equal. This is where propensity scores come in handy.</p> <p>Propensity scores help with the fundamental problem of causal inference -- that you may have confounding due to the non-randomization of subjects to treatments and this may be the cause of the "effects" you are seeing rather than the intervention or treatment alone. 
If you were able to somehow modify your analysis so that the covariates (say age, sex, gender, health status) were “balanced” between the treatment groups, you would have strong evidence that the difference in outcomes is due to the intervention/treatment rather than these covariates. Propensity scores determine each subject’s probability of being assigned to the treatment that they received, given the set of observed covariates. If you then match on these probabilities (propensity scores), then what you have done is taken subjects who were equally likely to be assigned to each treatment and compared them with one another, effectively comparing apples to apples. </p> <p>You may ask why not exactly match on the covariates (e.g. make sure you match 40 year old men in good health in treatment 1 with 40 year old men in good health in treatment 2)? This works fine for large samples and a few covariates, but it becomes nearly impossible to do when the sample size is small and the number of covariates is even moderately sized (see the curse of dimensionality on Cross-Validated for why this is the case). </p> <p>Now, all this being said, the Achilles heel of propensity scores is the assumption of no unobserved confounders. This assumption states that you have not failed to include any covariates in your adjustment that are potential confounders. Intuitively, the reason behind this is that if you haven’t included a confounder when creating your propensity score, how can you adjust for it? There are also additional assumptions such as the stable unit treatment value assumption, which states that the treatment assigned to one subject does not affect the potential outcome of the other subjects. </p>
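The intuition above can be checked with a minimal simulation (pure Python; all numbers are hypothetical). A confounder drives both treatment assignment and the outcome, so the naive comparison is biased, while 1:1 nearest-neighbor matching on the propensity score recovers the true effect:

```python
import bisect
import random

random.seed(0)
TAU = 2.0  # true treatment effect

data = []
for _ in range(5000):
    x = random.random()                     # confounder (e.g., age, rescaled)
    e_x = 0.2 + 0.6 * x                     # true propensity score, known here
    w = 1 if random.random() < e_x else 0   # "doctors" favor treating high-x subjects
    y = TAU * w + 5.0 * x + random.gauss(0, 0.1)
    data.append((x, w, y))

treated = [(x, y) for x, w, y in data if w == 1]
control = sorted((x, y) for x, w, y in data if w == 0)

# Naive comparison: biased upward, because treated units have higher x.
naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in control) / len(control))

# 1:1 nearest-neighbor matching. Since e(x) = 0.2 + 0.6x is monotone in x,
# matching on x is equivalent to matching on the propensity score e(x).
def nearest_control(x):
    i = bisect.bisect(control, (x,))
    cands = control[max(0, i - 1):i + 1]
    return min(cands, key=lambda c: abs(c[0] - x))

matched = sum(y - nearest_control(x)[1] for x, y in treated) / len(treated)
print(round(naive, 2), round(matched, 2))  # naive overstates TAU; matched ~ TAU
```

Here the propensity score is known by construction; in practice it would be estimated first, e.g. with a logistic regression of treatment on the covariates, before matching.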
765
causal inference
Is there relationship between propensity score based causal inference and sampling weights?
https://stats.stackexchange.com/questions/622563/is-there-relationship-between-propensity-score-based-causal-inference-and-sampli
<p>Consider an observational study with a single outcome <span class="math-container">$Y$</span>, single covariate <span class="math-container">$X$</span> and treatment assignment variable <span class="math-container">$W$</span>. Under the unconfounded treatment assignment assumption, <span class="math-container">$E_{sp}[Y(1)]=E[\frac{Y_i^{obs}W_i}{e(X_i)}]$</span> as shown in Imbens and Rubin's <em>Causal Inference</em>, Section 12.4, where <span class="math-container">$E[\cdot]$</span> means expectation taken over both randomization of treatment assignments and sampling from the super population.</p> <p>If I denote <span class="math-container">$R_i$</span> as a sampling indicator showing that subject <span class="math-container">$i$</span> is sampled from the super population, then technically <span class="math-container">$E[\frac{Y_i^{obs}W_i}{e(X_i)}]$</span> should be written as <span class="math-container">$E[\frac{R_iY_i^{obs}W_i}{e(X_i)}]$</span> as I sampled subject <span class="math-container">$i$</span> from some fixed super population.</p> <p>It seems that <span class="math-container">$e(X_i)$</span> looks like a sampling weight as in survey sampling.</p> <p><span class="math-container">$Q:$</span> Is it correct to think of <span class="math-container">$e(X_i)$</span> as a sampling weight? If so, what is the interpretation? What is the &quot;population&quot; sampled from with weights <span class="math-container">$e(X_i)$</span> here? This would be different from the observed data population for sure, if this had not been an RCT.</p> <p><span class="math-container">$Q':$</span> If <span class="math-container">$Q$</span> is correct, then were causal inference in the book and Robins' IPTW based upon design-based inference? Hence this would invalidate lm regression results on matched data with weights.</p>
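A quick numeric check of the identity in the question (the data-generating process below is hypothetical): each treated unit is up-weighted by $1/e(X_i)$, Horvitz-Thompson style, so the treated units alone reconstruct the mean of $Y(1)$ over the whole population:

```python
import random

random.seed(2)
TAU, N = 2.0, 20000

total_ipw = total_y1 = 0.0
for _ in range(N):
    x = random.random()                        # covariate X ~ U(0, 1)
    e_x = 0.2 + 0.6 * x                        # propensity score e(X)
    w = 1 if random.random() < e_x else 0      # treatment assignment W
    y1 = TAU + 5.0 * x + random.gauss(0, 0.1)  # potential outcome Y(1)
    y0 = 5.0 * x + random.gauss(0, 0.1)        # potential outcome Y(0)
    y_obs = y1 if w else y0                    # only Y(W) is observed
    total_ipw += y_obs * w / e_x               # Y_i^obs * W_i / e(X_i)
    total_y1 += y1                             # oracle: we simulated Y(1) for all

# Both averages target E[Y(1)] = TAU + 5 * E[X] = 4.5.
print(round(total_ipw / N, 2), round(total_y1 / N, 2))
```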
766
causal inference
How come the BART results are this good at the 2016 Atlantic causal inference competition?
https://stats.stackexchange.com/questions/470754/how-come-the-bart-results-are-this-good-at-the-2016-atlantic-causal-inference-co
<p>The famous paper <a href="https://arxiv.org/abs/1707.02641" rel="noreferrer">Dorie, 2017</a> shows that BART performs dramatically well in causal inference. In my replication, MSE in BART can be 40% lower than MSE in other machine learning methods.</p> <p>But all machine learning methods just regress <span class="math-container">$Y(0) \sim X$</span> on the control group, get a predictor <span class="math-container">$f_0(X)$</span>, and use it to predict <span class="math-container">$Y(0)$</span> on the treatment group; similarly, they regress <span class="math-container">$Y(1) \sim X$</span> on the treatment group, get a predictor <span class="math-container">$f_1(X)$</span>, and use it to predict <span class="math-container">$Y(1)$</span> on the control group; finally, they use <span class="math-container">$f_1(x) - y(0)$</span> (on the control group) or <span class="math-container">$y(1) - f_0(x)$</span> (on the treatment group) to estimate causal effects. It seems that the accuracy of causal estimation depends on the accuracy of the regression <span class="math-container">$Y \sim X$</span>, so the great performance of BART in causal inference implies that BART also performs well on the general regression problem.</p> <p>But BART is not very famous for the general regression problem; for example, Breiman's random forest paper has 50 times as many citations as Chipman's BART paper. So, is BART just ignored in machine learning, or is BART particularly accurate in causal inference? If BART is particularly accurate, WHY? </p>
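The two-regression recipe the question describes (sometimes called a T-learner) can be sketched with a plain linear fit standing in for BART; everything in the simulation below is hypothetical:

```python
import random

random.seed(3)

def fit_line(pts):
    """Least-squares fit of y = a + b * x over a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    a = my - b * mx
    return lambda x: a + b * x

data = []
for _ in range(4000):
    x = random.random()
    w = 1 if random.random() < 0.2 + 0.6 * x else 0      # confounded assignment
    y = (2.0 + x) * w + 5.0 * x + random.gauss(0, 0.1)   # true effect is 2 + x
    data.append((x, w, y))

f0 = fit_line([(x, y) for x, w, y in data if w == 0])  # regress Y(0) ~ X on controls
f1 = fit_line([(x, y) for x, w, y in data if w == 1])  # regress Y(1) ~ X on treated

# Impute the missing potential outcome for each unit, exactly as in the question:
# f1(x) - y on the control group, y - f0(x) on the treatment group.
ate = (sum(f1(x) - y for x, w, y in data if w == 0)
       + sum(y - f0(x) for x, w, y in data if w == 1)) / len(data)
print(round(ate, 2))  # close to 2.5 = E[2 + X]
```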
767
causal inference
Clarification on Counterfactual Outcomes in Causal Inference
https://stats.stackexchange.com/questions/652874/clarification-on-counterfactual-outcomes-in-causal-inference
<p>I’m studying <a href="https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/" rel="noreferrer">the textbook <em>Causal Inference: What If</em> by Miguel A. Hernán, James M. Robins</a>. On page 4, I came across a passage that seems nonsensical. The authors claim that, for each individual, the counterfactual outcome corresponding to the treatment they actually received is equal to the observed outcome:</p> <blockquote> <p>To make our causal intuition amenable to mathematical and statistical analysis we will introduce some notation. Consider a dichotomous treatment variable <span class="math-container">$A$</span> (1: treated, 0: untreated) and a dichotomous outcome variable <span class="math-container">$Y$</span> (1: death, 0: survival). In this book we refer to variables such as <span class="math-container">$A$</span> and <span class="math-container">$Y$</span> that may have different values for different individuals as random variables. Let <span class="math-container">$Y^{a=1}$</span> (read <span class="math-container">$Y$</span> under treatment <span class="math-container">$a = 1$</span>) be the outcome variable that would have been observed under the treatment value <span class="math-container">$a = 1$</span>, and <span class="math-container">$Y^{a=0}$</span> (read <span class="math-container">$Y$</span> under treatment <span class="math-container">$a = 0$</span>) the outcome variable that would have been observed under the treatment value <span class="math-container">$a = 0$</span>. <span class="math-container">$Y^{a=1}$</span> and <span class="math-container">$Y^{a=0}$</span> are also random variables. 
Zeus has <span class="math-container">$Y^{a=1} = 1$</span> and <span class="math-container">$Y^{a=0} = 0$</span> because he died when treated but would have survived if untreated, while Hera has <span class="math-container">$Y^{a=1} = 0$</span> and <span class="math-container">$Y^{a=0} = 0$</span> because she survived when treated and would also have survived if untreated.</p> <p>We can now provide a formal definition of a causal effect for an individual: The treatment <span class="math-container">$A$</span> has a causal effect on an individual’s outcome <span class="math-container">$Y$</span> if <span class="math-container">$Y^{a=1} \neq Y^{a=0}$</span> for the individual. Thus, the treatment has a causal effect on Zeus’s outcome because <span class="math-container">$Y^{a=1} = 1 \neq 0 = Y^{a=0}$</span>, but not on Hera’s outcome because <span class="math-container">$Y^{a=1} = 0 = Y^{a=0}$</span>. The variables <span class="math-container">$Y^{a=1}$</span> and <span class="math-container">$Y^{a=0}$</span> are referred to as potential outcomes or as counterfactual outcomes. Some authors prefer the term “potential outcomes” to emphasize that, depending on the treatment that is received, either of these two outcomes can be potentially observed. Other authors prefer the term “counterfactual outcomes” to emphasize that these outcomes represent situations that may not actually occur (that is, counter-to-the-fact situations).</p> <p>For each individual, one of the counterfactual outcomes—the one that corresponds to the treatment value that the individual did receive—is actually factual. For example, because Zeus was actually treated <span class="math-container">$(A = 1)$</span>, his counterfactual outcome under treatment <span class="math-container">$Y^{a=1} = 1$</span> is equal to his observed (actual) outcome <span class="math-container">$Y = 1$</span>. 
That is, an individual with observed treatment <span class="math-container">$A$</span> equal to <span class="math-container">$a$</span>, has observed outcome <span class="math-container">$Y$</span> equal to his counterfactual outcome <span class="math-container">$Y^a$</span>. This equality can be succinctly expressed as <span class="math-container">$Y = Y^a$</span> where <span class="math-container">$Y^a$</span> denotes the counterfactual <span class="math-container">$Y^a$</span> evaluated at the value <span class="math-container">$a$</span> corresponding to the individual’s observed treatment <span class="math-container">$A$</span>. The equality <span class="math-container">$Y = Y^a$</span> is referred to as consistency.</p> </blockquote> <p>This seems nonsensical to me because, by definition, a counterfactual outcome is what would have happened <em>if the opposite treatment had been received</em>. Are the authors correct here (am I misunderstanding)? Or should I be looking for another textbook?</p>
<p>Different fields sometimes adopt different terminology for the same concepts. This can be very annoying when you read papers from other fields, so I might be biased in favour of “potential outcomes” simply because that’s the term used within my own field (economics).</p> <p>That being said, the third paragraph</p> <blockquote> <p>For each individual, one of the counterfactual outcomes—the one that corresponds to the treatment value that the individual did receive—is actually factual...</p> </blockquote> <p>does demonstrate that “counterfactual outcomes” is a rather awkward choice which can lead to misunderstandings.</p> <p>If you can look past this, I’ve heard that it is a good book.</p> <p><strong>Edit</strong>: A brief explanation of “potential outcomes”. For a binary treatment, each individual has two potential outcomes. One without treatment (<span class="math-container">$Y_{0i}$</span>) and one with treatment (<span class="math-container">$Y_{1i}$</span>). We will only ever observe one outcome (<span class="math-container">$Y_{i}$</span>). If the individual is treated, <span class="math-container">$Y_i = Y_{1i}$</span>; if not treated, <span class="math-container">$Y_i = Y_{0i}$</span>.</p> <p>In their example, Jupiter (only epidemiologists say Zeus) would die if treated (<span class="math-container">$Y_{1i}=1$</span>) but survive if untreated (<span class="math-container">$Y_{0i}=0$</span>), making the treatment effect on Jupiter <span class="math-container">$Y_{1i}-Y_{0i} = 1$</span>. Again, we will never be able to observe this individual treatment effect, but using this notation, it is possible to show that when, e.g., treatment is randomly assigned, the difference in means between treatment and control group will yield <span class="math-container">$E[Y_{1i}-Y_{0i}]$</span>.</p>
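The consistency point in the quoted passage can be checked mechanically. Here is a tiny sketch encoding Zeus's and Hera's potential outcomes from the book's example (the dictionary layout is just an illustration):

```python
# Potential outcomes (Y^{a=1}, Y^{a=0}) and received treatment A,
# transcribed from the book's Zeus/Hera example.
people = {
    "Zeus": {"Y1": 1, "Y0": 0, "A": 1},  # died when treated; would have survived untreated
    "Hera": {"Y1": 0, "Y0": 0, "A": 0},  # survives either way
}

def observed(p):
    """Consistency: the observed Y equals the potential outcome Y^a
    evaluated at the treatment a the individual actually received."""
    return p["Y1"] if p["A"] == 1 else p["Y0"]

def effect(p):
    """Individual causal effect Y^{a=1} - Y^{a=0} (never fully observable)."""
    return p["Y1"] - p["Y0"]

print(observed(people["Zeus"]), effect(people["Zeus"]))  # 1 1: effect on Zeus
print(observed(people["Hera"]), effect(people["Hera"]))  # 0 0: no effect on Hera
```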
768
causal inference
Causal inference where potential outcome is somehow &quot;violated&quot;?
https://stats.stackexchange.com/questions/555975/causal-inference-where-potential-outcome-is-somehow-violated
<p>The fundamental problem of causal inference says that only one potential outcome is observed for each unit.</p> <p>What happens if both outcomes from control and treatment can be observed? Can we still make use of analysis tools like causal trees to understand heterogeneous treatment effects?</p> <p>As a concrete example, suppose we are an online search engine and want to better understand how to serve ads. Each time a user enters a search query (request), we pick an ad from a collection of <span class="math-container">$N$</span> ads and show it to the user. For each of these <span class="math-container">$N$</span> ads, there are 2 versions, one with an image and one without. We randomize users into a control group and a treatment group (same distribution of users in each group), where users in the control group will see ads without an image and users in the treatment group will see ads with an image. By the end of the experiment, for each ad we record the total number of clicks by users in the control group as well as in the treatment group.</p> <p>In this particular case, each ad is an experimental unit, and we are able to observe outcomes from both the treatment group and the control group. We want to understand if we should add an image to the ads or not.</p> <p>In addition, each ad has its feature (e.g., associated company, promoted product, etc.), and we want to understand which group of ads will benefit the most from the addition of images. My question is, in such a case, can we follow the routines in treatment effect analysis? If not, what's a more suitable framework?</p>
<p>I agree that there is some confusion about the &quot;unit&quot; of analysis here. It's neither the ad nor the viewer, though; it's the instance of showing an ad to a viewer. And there is only one potential outcome observed because that instance can only either have an image or not. Because you randomly assigned, you don't have to worry about confounding, which is nice, but that's not the same thing as having both potential outcomes for each unit.</p> <p>It happens to be that instances are nested within specific ads, but the specific ad is a characteristic of the instance.</p> <p>You can estimate a number of quantities from this design. You can estimate the average treatment effect of pictures by simply comparing the outcomes between the instance with pictures and the instance without. You should additionally control for the specific ad and any user-level qualities as well to increase the precision of your estimate and improve estimation of the standard error. To do this, you could fit a fixed effects or random effects model with the treatment as the primary predictor and the specific ad as the fixed or random effect grouping variable, e.g., <code>Y ~ treat + (1|ad)</code> if using <code>lme4</code> for random effects or <code>Y ~ treat | ad</code> if using <code>fixest</code> for fixed effects (the results should be similar).</p> <p>You can also estimate the ad-specific treatment effect, which is the effect of showing a picture for a specific ad. This is no different from a subgroup average treatment effect; it is essentially interpreted as if you only had one ad but showed it several times with and without the picture. You can estimate these effects in a single model using the following syntax in R: <code>Y ~ ad/treat - 1</code> in <code>lm()</code>. This gives you a treatment effect for each ad. 
This would only make sense if you had many instances of each ad with both a picture and no picture.</p> <p>If you are interested not in specific ads but perhaps the effect of showing the picture for other user- or ad-level characteristics, you can estimate heterogeneous treatment effects using causal trees in the standard way; you just would not include the specific ad as a predictor if you were looking at ad-level characteristics. If you had specific hypotheses, you could also test them in the models above by including the predictor of interest in an interaction with treatment.</p>
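A hypothetical simulation of the design described above (Python standing in for the R formulas, since the specific numbers and ad names are invented): with randomized treatment, the ad-specific effect that `Y ~ ad/treat - 1` reports is just the within-ad difference in means:

```python
import random
from collections import defaultdict

random.seed(4)

# Hypothetical per-ad baselines and picture effects.
AD_BASE = {"ad_a": 2.0, "ad_b": 3.0, "ad_c": 1.0}
AD_EFFECT = {"ad_a": 0.5, "ad_b": -0.2, "ad_c": 1.0}

# Per ad: [sum_treated, n_treated, sum_control, n_control]
cells = defaultdict(lambda: [0.0, 0, 0.0, 0])
for _ in range(30000):                      # instances of showing an ad
    ad = random.choice(sorted(AD_BASE))
    treat = random.randint(0, 1)            # randomized: with or without image
    y = AD_BASE[ad] + AD_EFFECT[ad] * treat + random.gauss(0, 0.5)
    c = cells[ad]
    if treat:
        c[0] += y
        c[1] += 1
    else:
        c[2] += y
        c[3] += 1

# Ad-specific treatment effect: within-ad difference in means.
est = {ad: c[0] / c[1] - c[2] / c[3] for ad, c in cells.items()}
print({ad: round(e, 2) for ad, e in sorted(est.items())})
```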
769
causal inference
What are downsides to &quot;genetic matching,&quot; particularly outside of causal inference settings?
https://stats.stackexchange.com/questions/645101/what-are-downsides-to-genetic-matching-particularly-outside-of-causal-inferen
<p>Multivariate matching methods typically involve two steps. First, the user computes <span class="math-container">$D$</span>, a matrix of the multivariate distances between units. Second, the user applies a matching function (e.g., 1:1 nearest neighbor) to input <span class="math-container">$D$</span> to generate matches between units. Users commonly base <span class="math-container">$D$</span> on <strong>Euclidean Distance</strong> or <strong>Mahalanobis Distance</strong> because the latter down-weights correlated variables, i.e. those providing redundant (Fisher) information.</p> <p>One matching method that has gained traction is &quot;<strong>Genetic Matching</strong>&quot; (<a href="https://direct.mit.edu/rest/article-abstract/95/3/932/58101/Genetic-Matching-for-Estimating-Causal-Effects-A?redirectedFrom=fulltext_" rel="nofollow noreferrer">Diamond and Sekhon 2013</a>). One of the contributions of Genetic Matching is to compute <span class="math-container">$D$</span> via the <strong>Generalized Mahalanobis Distance (GMD)</strong>, which is distinguished by adding a matrix <span class="math-container">$W$</span> that weights each variable's contribution to the distance calculation. Given Genetic Matching's origins in causal inference, <span class="math-container">$W$</span> is computed via a &quot;genetic algorithm&quot; that searches for weights minimizing a summary measure of covariate imbalance (e.g., mean or sum of distances) across matches between treated and untreated units. 
Once the algorithm finds a <span class="math-container">$W$</span>, the <span class="math-container">$D$</span> matrix is computed and units are matched using the same matching function.</p> <h1>Question</h1> <p>My understanding is that Genetic Matching is appealing because it (1) uses <strong>GMD</strong>, which allows more flexible weighting of variables to find optimal matches, and (2) uses a relatively fast (genetic) algorithm to find the optimal <span class="math-container">$W$</span>.</p> <ul> <li>Is there any non-computational downside to using <strong>GMD</strong>? Some sort of overfitting? I could see the desire for EXACT matching on certain variables leading one to favor exact matching instead, but don't see much otherwise.</li> <li>How can we use genetic matching outside of causal inference? Suppose we were interested in unsupervised clustering to find similar units rather than matching to promote covariate balance in causal inference. In this case, we're interested in finding matches that minimize some summary measure of dissimilarity across matches. A reasonable example might be where we want to minimize the total distance between matched units, using GMD as our distance measure. In this case, is genetic matching still going to converge on the optimal <span class="math-container">$W$</span>? Is there actually a unique (set of) solutions to the genetic algorithm?</li> </ul>
<p>It's worth remembering that genetic matching is not a matching method (despite its name); it is a method of computing the distance between units that is supplied to a matching method for use in creating groups similar on the distributions of covariates. I describe genetic matching in my blog post <a href="https://ngreifer.github.io/blog/genetic-matching/" rel="nofollow noreferrer">here</a>, which I recommend reading to fully understand my answer.</p> <p>So, genetic matching should be compared to other methods of computing the distance between units, and its role in causal effect estimation should be compared to other methods of causal effect estimation.</p> <p>Genetic matching involves three components: the original distance matrix, which is adjusted using the weights; the matching method that is used to match units based on the distance matrix; and the imbalance metric that is used as the objective function for use in computing the weights. Note that the weights are applied to covariates in the distance matrix, not to individual units (I mention this because the concept of weights is used in other areas of causal effect estimation, and the imbalance metric is computed using the matching weights that result from the matching method). In principle, any distance matrix that can be adjusted using a set of weights can be used, though the literature often refers to the &quot;generalized Mahalanobis distance&quot; (GMD) matrix (note that the <code>Matching</code> package, which implements genetic matching in R, uses a scaled Euclidean distance matrix treating the off-diagonal elements as zero). Any matching method that involves pairwise distances can be used, but in practice nearest neighbor matching is used. 
The choice of which imbalance metric to use is a matter of debate, but theoretically depends on unseen features of the potential outcome-generating model.</p> <p>Given a distance matrix and matching method, the difference between genetic matching and matching without optimizing the weights is that without genetic matching, the weights are fixed to a specific value. If you get that value &quot;right&quot; without optimization, then genetic matching will be worse, because it has to estimate those weights. What &quot;right&quot; means is ambiguous, though; it generally means you achieve balance on the right terms in the potential outcome-generating model. Genetic matching can perform worse if the imbalance metric to be optimized does not balance the right terms in the potential outcome-generating model. For example, if you request that genetic matching optimizes balance on the means but imbalance on the variance causes the most bias, genetic matching may not do better than an arbitrarily chosen set of weights. That means it is important to select a good imbalance metric. Usually, this will be one that penalizes imbalance on the whole covariate distribution, not just the means, and not just the univariate distributions. The energy distance and kernel distance are two imbalance metrics that have been studied that tend to do well in practice because they penalize imbalance on the full covariate distribution.</p> <p>In practice, optimizing the weights using genetic matching will uniformly outperform treating the weights as fixed (e.g., assigning a weight of 1 to the propensity score and 0 to all other covariates, which is just propensity score matching). As long as minimizing your imbalance metric reduces bias, using it in the genetic matching algorithm will reduce bias in your estimate.</p> <p>However, genetic matching is limited in that it relies on matching methods that require pairwise distance. 
Not all matching methods do, and sometimes easing that restriction can yield better balance in the matched sample than is possible using pairwise distance matching.</p> <p>Genetic matching could definitely be used outside causal inference as long as the application is appropriate. I think your idea is fine; choose weights that yield a distance matrix that yields clusters that optimize some dissimilarity metric. The genetic algorithm and its cousins are used in many applications to optimize a non-smooth surface. However, the utility of the method depends on the appropriateness of the objective function; if you optimize a dissimilarity metric that is not known to recover the true clusters, this method may perform poorly.</p>
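As a toy illustration of the weight-search idea (a coarse grid search standing in for the genetic algorithm, and a diagonal weighted Euclidean distance standing in for the GMD; all numbers hypothetical):

```python
import math
import random

random.seed(5)

# Toy data: x1 strongly drives treatment selection, x2 does not.
units = []
for _ in range(600):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    t = 1 if random.random() < 1 / (1 + math.exp(-2 * x1)) else 0
    units.append((t, x1, x2))
treated = [u for u in units if u[0] == 1]
control = [u for u in units if u[0] == 0]

def imbalance(w1, w2):
    """Mean covariate imbalance after 1:1 nearest-neighbor matching
    under a diagonal weighted Euclidean distance with weights (w1, w2)."""
    d1 = d2 = 0.0
    for _, tx1, tx2 in treated:
        m = min(control,
                key=lambda c: w1 * (c[1] - tx1) ** 2 + w2 * (c[2] - tx2) ** 2)
        d1 += tx1 - m[1]
        d2 += tx2 - m[2]
    return abs(d1 / len(treated)) + abs(d2 / len(treated))

# Crude stand-in for the genetic algorithm: search a small weight grid,
# keeping the weights that minimize the post-match imbalance metric.
grid = [(1, 1), (1, 4), (4, 1), (1, 10), (10, 1)]
best = min(grid, key=lambda w: imbalance(*w))
print(best, round(imbalance(*best), 3), round(imbalance(1, 1), 3))
```

Because the equal-weight point is included in the searched set, the optimized imbalance can never be worse than the fixed-weight baseline, mirroring the point above that optimizing the weights does at least as well as treating them as fixed.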
770
causal inference
causal inference exercise - &quot;covariate-specific effect&quot;
https://stats.stackexchange.com/questions/523729/causal-inference-exercise-covariate-specific-effect
<p><a href="https://i.sstatic.net/4AjuN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4AjuN.png" alt="enter image description here" /></a></p> <p>This graph and the questions come from: <em>CAUSAL INFERENCE IN STATISTICS A Primer</em> - Pearl, Glymour, and Jewell (2016).</p> <p>We are interested in the effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span>. In order to identify it we look for sets satisfying the <em>backdoor criterion</em>; for example <span class="math-container">$[A Z]$</span> and <span class="math-container">$[A Z C]$</span> seem to me to be among them. For simplicity we assume that the SCM is linear and the path coefficients are named: <span class="math-container">$\beta_{y,w}$</span>, <span class="math-container">$\beta_{y,z}$</span>, …, <span class="math-container">$\beta_{z,c}$</span>, … and so on.</p> <p>So the causal effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is: <span class="math-container">$\beta_{y,w} \beta_{w,x}$</span></p> <p>I ask whether my solutions to the study question above (a bit extended/modified by me) are correct.</p> <p>Point (a)</p> <p>We are looking for <span class="math-container">$P(Y=y|do(X=x),C=c)$</span></p> <p>I read that, given <span class="math-container">$(X,Y)$</span>, we need a control set that includes <span class="math-container">$C$</span> and satisfies the backdoor criterion. So <span class="math-container">$[A Z C]$</span> is a compliant set. The expanded expression is given in the book, but I want to understand whether, given the linear simplification, a regression like:</p> <p><span class="math-container">$Y = \theta_1 X + \theta_2 C + \theta_3 A + \theta_4 Z + r$</span></p> <p>is what we need, and in particular whether <span class="math-container">$\theta_1 + \theta_2$</span> represents the <em>c-specific effect</em>. 
In terms of path coefficients we have <span class="math-container">$\beta_{y,w} \beta_{w,x} + \beta_{y,d} \beta_{d,c} $</span>. Is it correct?</p> <p>Point (b)</p> <p><span class="math-container">$[Z A B C D]$</span> seems to me a correct set, and a regression similar to the previous one is what we need; the z-specific effect is: <span class="math-container">$\beta_{y,w} \beta_{w,x} + \beta_{y,z}$</span>. Is it correct?</p> <p>Point (c)</p> <p>Let me say that, for example, <span class="math-container">$\beta_{y,w} \beta_{w,x} = 3.6$</span> and <span class="math-container">$\beta_{y,z}=0.9$</span></p> <p>so <span class="math-container">$E[Y|do(x),z] = 3.6 x + 0.9 z$</span></p> <p>if <span class="math-container">$z \in \{1,2\}$</span> we have <span class="math-container">$x = 0$</span>, and if <span class="math-container">$z \in \{3,4,5\}$</span> we have <span class="math-container">$x = 1$</span></p> <p>if <span class="math-container">$z = 1$</span></p> <p><span class="math-container">$E[Y|do(x),z] = 3.6*0 + 0.9*1 = 0.9$</span></p> <p>if <span class="math-container">$z = 2$</span></p> <p><span class="math-container">$E[Y|do(x),z] = 3.6*0 + 0.9*2 = 1.8$</span></p> <p>if <span class="math-container">$z = 3$</span></p> <p><span class="math-container">$E[Y|do(x),z] = 3.6*1 + 0.9*3 = 6.3$</span></p> <p>if <span class="math-container">$z = 4$</span></p> <p><span class="math-container">$E[Y|do(x),z] = 3.6*1 + 0.9*4 = 7.2$</span></p> <p>if <span class="math-container">$z = 5$</span></p> <p><span class="math-container">$E[Y|do(x),z] = 3.6*1 + 0.9*5 = 8.1$</span></p> <p>Is it correct?</p> <p>If I'm wrong, what are the correct solutions?</p> <p><strong>EDIT</strong>: In order to clarify my doubts I add something. 
I’m sure that in linear models the quantity involved in the equation below (the letters have no relation to the example above):</p> <p><a href="https://i.sstatic.net/F0jOJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0jOJ.png" alt="enter image description here" /></a></p> <p>can be translated into regression terms as follows. I have to perform the regression:</p> <p><span class="math-container">$Y=\theta_1 X + \theta_2 W + r$</span></p> <p>and <span class="math-container">$\theta_1$</span> gives us the total effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span>, which is what I am looking for.</p> <p>Now the z-specific effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is representable as: <a href="https://i.sstatic.net/CG9Vp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CG9Vp.png" alt="enter image description here" /></a></p> <p>But in linear regression terms, what regression do I have to perform? Which coefficient am I interested in?</p> <p>In a slightly different case, the w-specific effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span> is also representable as: <a href="https://i.sstatic.net/qZttA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qZttA.png" alt="enter image description here" /></a></p> <p>But in linear regression terms, what regression do I have to perform? Which coefficient am I interested in?</p>
<p><span class="math-container">$\newcommand{\op}[1]{\operatorname{#1}} \newcommand{\doop}{\op{do}}$</span> Here's my answer.</p> <p><strong>a.</strong> To get the <span class="math-container">$c$</span>-specific effect of <span class="math-container">$X$</span> on <span class="math-container">$Y,$</span> which is <span class="math-container">$P(Y=y|\doop(X=x),C=c),$</span> we analyze as follows. The set <span class="math-container">$S$</span> in Rule 2 is actually the <span class="math-container">$\{Z,C\}$</span> nodes, minimally. Hence, we have that <span class="math-container">$$P(Y=y|\doop(X=x),C=c)=\sum_z P(Y=y|X=x,Z=z,C=c)\,P(Z=z|C=c).$$</span></p> <p><strong>b.</strong> We would have to measure <span class="math-container">$X,Y,Z,$</span> and one of <span class="math-container">$A,B,C,$</span> or <span class="math-container">$D.$</span> I'll pick <span class="math-container">$A$</span>, so that the expression becomes <span class="math-container">$$P(Y=y|\doop(X=x), Z=z)=\sum_aP(Y=y|X=x,A=a,Z=z)\,P(A=a|Z=z).$$</span></p> <p><strong>c.</strong> We have that <span class="math-container">$$ X= \begin{cases} 0,&amp;Z=1,2\\ 1,&amp;Z=3,4,5. \end{cases} $$</span> Here <span class="math-container">$Z\in\{1,2,3,4,5\}.$</span> Now the desired quantity is <span class="math-container">$E(Y)$</span> under the <span class="math-container">$Z$</span> strategy. That is, we want <span class="math-container">\begin{align*} E(Y) &amp;=\sum_y\left[y P(Y=y|\doop(X=g(Z)))\right]\\ &amp;=\sum_y\left[y \sum_zP(Y=y|\doop(X=x),Z=z)|_{x=g(z)}\,P(Z=z)\right]\\ &amp;=\sum_y\left[y \sum_z\left[\sum_aP(Y=y|X=x,A=a,Z=z)\,P(A=a|Z=z)\right]_{x=g(z)}\,P(Z=z)\right]\\ &amp;=\sum_{a,y}\sum_z\left[y\,P(Y=y|X=g(z),A=a,Z=z)\,P(A=a|Z=z)\,P(Z=z)\right]\\ &amp;=\sum_{a,y}\Bigg\{ \sum_{z=1}^2\left[y\,P(Y=y|X=0,A=a,Z=z)\,P(A=a|Z=z)\,P(Z=z)\right]\\ &amp;\qquad+\sum_{z=3}^5\left[y\,P(Y=y|X=1,A=a,Z=z)\,P(A=a|Z=z)\,P(Z=z)\right]\Bigg\}. 
\end{align*}</span> That's about as far as we can get without knowing the probability distributions more exactly.</p>
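In the linear special case the asker is interested in, these adjustment formulas reduce to regression coefficients, which is easy to check by simulation. Below is a minimal sketch, with an invented chain X -> W -> Y and a confounder Z (all coefficients are made up for illustration): regressing Y on X alone is confounded, while adjusting for the back-door set {Z} recovers the total effect beta_yw * beta_wx.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Invented linear SEM: Z confounds X and Y, and X affects Y only
# through W, so the total effect of X on Y is b_yw * b_wx = 3.6.
b_xz, b_wx, b_yw, b_yz = 0.7, 1.2, 3.0, 0.9
Z = rng.normal(size=n)
X = b_xz * Z + rng.normal(size=n)
W = b_wx * X + rng.normal(size=n)
Y = b_yw * W + b_yz * Z + rng.normal(size=n)

# Naive regression of Y on X alone is biased through Z.
naive = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), Y, rcond=None)[0][0]

# Adjusting for the back-door set {Z}: the coefficient on X recovers 3.6.
adjusted = np.linalg.lstsq(np.column_stack([X, Z, np.ones(n)]), Y, rcond=None)[0][0]

print(round(naive, 2), round(adjusted, 2))
```

With a large simulated sample, `adjusted` lands close to 3.6 while `naive` is visibly biased upward.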
771
causal inference
Why do we do matching for causal inference vs regressing on confounders?
https://stats.stackexchange.com/questions/544926/why-do-we-do-matching-for-causal-inference-vs-regressing-on-confounders
<p>I'm new to the area of causal inference. From what I understand, one of the main concerns that causal inference tries to address is the effect of confounders.</p> <p>For the sake of reference, let's denote the feature that we are interested in (a.k.a. treatment or exposure) by <strong>A</strong>, other features by <strong>X</strong> (let's say some of them are confounders), and the outcome by <strong>Y</strong>.</p> <p>I focus on the case where all our confounders are <strong>observable</strong>. I also limit myself to the case where we want to estimate the <strong>average treatment effect</strong>.</p> <p>We have the simple DAG below:</p> <p><a href="https://i.sstatic.net/wWrey.png" rel="noreferrer"><img src="https://i.sstatic.net/wWrey.png" alt="enter image description here" /></a></p> <p>So there could be two cases:</p> <ol> <li>A is categorical</li> <li>A is continuous</li> </ol> <p>As I conceptually understand it, the whole idea of matching is to marginalize the effect of treatment <strong>A</strong> on outcome <strong>Y</strong> so that the ignorability assumption holds, which conceptually would be like replicating a randomized trial by making the distributions of covariates <strong>similar</strong> across groups in observational data.</p> <p>I am wondering: isn't this conceptually the same as running a multiple regression on all variables? The interpretation of each coefficient would then be the <strong>marginal</strong> effect of each feature on the outcome.</p> <p>What is the point that I am missing? What prevents us from using multiple regression to control for confounders instead of turning to matching?</p>
<p>As I see it, there are two related reasons to consider matching instead of regression. The first is assumptions about functional form, and the second is about proving to your audience that functional form assumptions do not affect the resulting effect estimate. The first is a statistical matter and the second is epistemic. Consider the tale below that attempts to illustrate how the choice between matching and regression could play out.</p> <p>We'll assume you have measured a sufficient adjustment set to satisfy the backdoor criterion (i.e., all relevant confounders have been measured) with no measurement error or missing data, and that your goal is to estimate the marginal treatment effect of the treatment on an outcome. We'll also assume the standard assumptions of positivity and SUTVA hold. We'll consider a continuous outcome first, but much of the discussion extends to general outcomes.</p> <h2>Part 1: Regression</h2> <p>You decide to run a regression of the outcome on the treatment and confounders as a way to control for confounding by these variables because that is what linear regression is supposed to do. However, the effect estimate is only unbiased under extremely strict circumstances. First, that the treatment effect is constant across levels of the confounders, and second, that the linear model describes the conditional relationship between the outcome and the confounders. For the first, you might include an interaction between the treatment and each confounder, allowing for heterogeneous treatment effects while estimating the marginal effect. This is equivalent to g-computation (1), which involves using the fitted regression model to generate predicted values under treatment and control for all units and using the difference in the means of these predicted values as the effect estimate.</p> <p>That still assumes a linear model for the outcomes under treatment and control. Okay, we'll use a flexible machine-learning method like random forests instead. 
Well, now we can't claim our estimator is unbiased, only possibly consistent, and it still requires the specific machine learning model to approach the truth at a certain rate. Okay, we'll use Superlearner (2), a stacking method that takes on the rate of convergence of the fastest of its included models. Well, now we don't have a way to conduct inference, and the model might still be wrong. Okay, we'll use a semiparametric efficient doubly-robust estimator like augmented inverse probability weighting (AIPW) (3) or targeted minimum loss-based estimation (TMLE) (4). Well, that's only consistent if the true models fall in the Donsker class of models. Okay, we'll use cross-fitting with AIPW or TMLE to relax that requirement (5).</p> <p>Great. You've taken regression to its extreme, relaxing as many assumptions as possible and landing with a multiply-robust estimator (multiply-robust in the sense that if one of many models is correct, the estimator is consistent) with generally good inference properties (but it can be bootstrapped so getting the variance exactly right isn't a big problem). Have we solved causal inference?</p> <p>You submit the results of your cross-fit TMLE estimate using Superlearner for the propensity score and potential outcome models with a full library including highly adaptive lasso and many other models, which, under weak assumptions, are all that are required for a truly consistent estimator that converges at a parametric rate.</p> <p>A reviewer reads the paper and says, &quot;I don't believe the results of this model.&quot;</p> <p>&quot;Why not?&quot; you say. &quot;I used the optimal estimator with the best properties; it is consistent and semiparametric efficient with few, if any, assumptions on the functional forms of the models.&quot;</p> <p>&quot;Your estimator is consistent,&quot; says the reviewer, &quot;but not unbiased. That means I can only trust its results in general and as N goes to infinity. 
How do I know you have successfully eliminated <em>bias</em> in the effect estimate in <em>this</em> dataset?&quot;</p> <p>&quot;...&quot;</p> <h2>Part 2: Matching to the Rescue</h2> <p>You read about a hot new method called &quot;propensity score matching&quot; (6). It was big in 1983, and, even in 2021, you see it in almost every paper published in specialized medical journals. You come across King and Nielsen's influential paper &quot;Why Propensity Scores Should Not Be Used for Matching&quot; (7) and Noah's <a href="https://stats.stackexchange.com/questions/481110/propensity-score-matching-what-is-the-problem/481130#481130">answer</a> on CV describing the many drawbacks to using propensity score matching. Okay, you'll use genetic matching instead (8), and minimize the energy distance between the samples (9), including a flexibly estimated propensity score as a covariate to match on. You find that balance can be improved by using substantive knowledge to incorporate exact matching and caliper constraints that prioritize balance on covariates known to be important to the outcome. You decide to use full matching to relax the requirement of 1:1 matching to include more units in the analysis (10).</p> <p>You estimate the treatment effect using a simple linear regression of the outcome on the treatment and the covariates, including the matching weights in the regression and using a cluster-robust standard error to account for pair membership (11). You resubmit the result of your full matching analysis using exact matching and calipers for prognostically important variables and a distance matrix estimated using genetic matching on the covariates and a flexibly estimated propensity score.</p> <p>The reviewer reads your new manuscript. &quot;Wow, you've learned a lot. But I still don't believe you've removed bias in the effect estimate.&quot;</p> <p>&quot;Look at the balance tables,&quot; you say. 
&quot;The covariate distributions are almost identical.&quot;</p> <p>&quot;I see low standardized mean differences,&quot; says the reviewer, &quot;but imbalances could remain on other features of the covariate distribution.&quot;</p> <p>&quot;Look at the balance tables in the appendix which contain balance statistics for pairwise interactions, polynomials up to the 5th power of each covariate, and Kolmogorov-Smirnov statistics to compare the full covariate distributions. There are no meaningful differences between the samples, and no differences at all on the most highly prognostic covariates because of the exact matching constraints and calipers.&quot;</p> <p>&quot;I see...&quot;</p> <p>&quot;Also, I used Branson's randomization test (12) with the energy distance as the balance statistic to show that my sample is better balanced not only than a hypothetical randomized trial using the same data, but also a block randomized trial, and even a covariate balance-constrained randomized trial.&quot;</p> <p>&quot;Wow, I guess I don't have much to say...&quot;</p> <p>&quot;My outcome regression estimator isn't just consistent, it's truly <em>unbiased</em> in <em>this</em> sample. Also, because I incorporated pair membership into the analysis, my standard errors are smaller and more accurate and the resulting estimate is less sensitive to unobserved confounding* (13).&quot;</p> <p>&quot;I get it!&quot;</p> <h2>Part 3: The criticism</h2> <p>Frank Harrell <a href="https://stats.stackexchange.com/a/481620/116195">bursts</a> into the room. &quot;Wait, by discarding so many units in matching, you have thrown away so much useful data and needlessly decimated your precision.&quot; Mark van der Laan follows. &quot;Wait, by using substantive 'expertise' you are not letting the analysis method find the true patterns in the data that might have eluded researchers, and your estimator does not converge at a known rate, let alone a parametric one! 
And there is no guarantee that your inference is valid!&quot; I, your humble narrator, too, join in on the dogpile. &quot;Wait, by using exact matching constraints and calipers, you have shifted your estimand away from the ATE or any <em>a priori</em> describable estimand (14)! Your effect estimate may be unbiased, but unbiased for <em>what</em>?&quot;</p> <p>You stand there, bewildered, defeated, feeling like you have come nowhere since you asked your simple question on CrossValidated what felt like years ago, no closer to understanding whether you should use matching or regression to estimate causal effects.</p> <p>The curtains close.</p> <h2>Part 4: Epilogue</h2> <p>In the face of uncertainty and scarcity, we are left with tradeoffs. The choice between a regression-based method and matching to estimate a causal effect depends on how you and your audience choose to manage those tradeoffs and prioritize the advantages and drawbacks of each method.</p> <p>Standard regression requires strong functional form assumptions, but with advanced methods, those can be relaxed, at the cost of giving up on bias and focusing on consistency and asymptotic inference. Many of these advanced methods work best in large samples, and they still require many choices along the way (e.g., which specific estimator to use, which machine learning methods to include in the Superlearner library, how many folds to use for cross-validation and cross-fitting, etc.). Although the multiply-robust methods may guarantee consistency and fast convergence rates in general data, it is not immediately clear how you can assess how well they eliminated bias in your dataset, potentially leaving one skeptical of their actual performance in your one instance.</p> <p>Matching methods require few functional form assumptions because no models are required (e.g., when using a distance matrix that doesn't depend solely on the propensity score, like that resulting from genetic matching). 
You can control confounding by adjusting the specification of the match, focusing more effort on hard-to-balance or prognostically important variables. You can come close to guaranteeing unbiasedness by ensuring you have achieved covariate balance, which can and should be measured extremely broadly with a skeptic in mind. You can use tools for analyzing randomized trials and trials with more powerful and robust designs. This comes at the cost of possibly decimating your precision by discarding huge amounts of data, changing your estimand so that your effect estimate doesn't generalize to a meaningful population and isn't replicable, and relying on ad hoc, &quot;artisanal&quot; methods with no clear path for valid inference.</p> <p>The advantage matching has over regression, and the reason why I think it is so valuable and why I devoted my graduate training to understanding and improving matching and its use by applied researchers as the author of the R packages <code>cobalt</code>, <code>WeightIt</code>, <code>MatchIt</code>, and others, is an epistemic advantage. With matching, you can more effectively convince a reader that what you have done is trustworthy and that you have accounted for all possible objections to the observed result, and can at least point to specific assumptions and explain how their violation might affect results. This all centers on covariate balance, the similarity between covariate distributions across the treatment groups. By reporting balance broadly and submitting the resulting matched data to a battery of tests and balance measures, you can convince yourself and your readers that the resulting effect estimate is unbiased and therefore trustworthy (given the assumptions mentioned at the beginning, though these may be tenuous, and neither matching nor regression can solve that problem).</p> <p>However, not everyone agrees that this advantage is so important, or more important than consistency and valid asymptotic inference. 
There can never be consensus on this matter, because consensus requires knowing the truth, and science (including statistics research) is about searching for an inherently unknowable truth (i.e., the true parameters that govern or describe our world). That is, if we knew the true causal effect, we could know the best method to estimate it, but we don't, so we can't. We can only do our best using the knowledge we have and try to manage the inherent constraints and tradeoffs as well as we can as we fumble around in the dark using the pinpoint of light the universe has shown us.</p> <hr /> <p>*Only when using a special method of inference for matched samples.</p> <ol> <li>Snowden JM, Rose S, Mortimer KM. Implementation of G-Computation on a Simulated Data Set: Demonstration of a Causal Inference Technique. Am J Epidemiol. 2011;173(7):731–738.</li> <li>van der Laan MJ, Polley EC, Hubbard AE. Super Learner. Statistical Applications in Genetics and Molecular Biology [electronic article]. 2007;6(1). (<a href="https://www.degruyter.com/view/j/sagmb.2007.6.issue-1/sagmb.2007.6.1.1309/sagmb.2007.6.1.1309.xml" rel="noreferrer">https://www.degruyter.com/view/j/sagmb.2007.6.issue-1/sagmb.2007.6.1.1309/sagmb.2007.6.1.1309.xml</a>). (Accessed October 8, 2019)</li> <li>Daniel RM. Double Robustness. In: Wiley StatsRef: Statistics Reference Online. American Cancer Society; 2018 (Accessed November 9, 2018):1–14.(<a href="http://onlinelibrary.wiley.com/doi/abs/10.1002/9781118445112.stat08068" rel="noreferrer">http://onlinelibrary.wiley.com/doi/abs/10.1002/9781118445112.stat08068</a>). (Accessed November 9, 2018)</li> <li>Gruber S, van der Laan MJ. Targeted Maximum Likelihood Estimation: A Gentle Introduction. 2009;17.</li> <li>Zivich PN, Breskin A. Machine Learning for Causal Inference: On the Use of Cross-fit Estimators. Epidemiology. 2021;32(3):393–401.</li> <li>Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 
1983;70(1):41–55.</li> <li>King G, Nielsen R. Why Propensity Scores Should Not Be Used for Matching. Polit. Anal. 2019;1–20.</li> <li>Diamond A, Sekhon JS. Genetic matching for estimating causal effects: A general multivariate matching method for achieving balance in observational studies. Review of Economics and Statistics. 2013;95(3):932–945.</li> <li>Huling JD, Mak S. Energy Balancing of Covariate Distributions. arXiv:2004.13962 [stat] [electronic article]. 2020;(<a href="http://arxiv.org/abs/2004.13962" rel="noreferrer">http://arxiv.org/abs/2004.13962</a>). (Accessed December 22, 2020)</li> <li>Stuart EA, Green KM. Using full matching to estimate causal effects in nonexperimental studies: Examining the relationship between adolescent marijuana use and adult outcomes. Developmental Psychology. 2008;44(2):395–406.</li> <li>Abadie A, Spiess J. Robust Post-Matching Inference. Journal of the American Statistical Association. 2020;0(ja):1–37.</li> <li>Branson Z. Randomization Tests to Assess Covariate Balance When Designing and Analyzing Matched Datasets. Observational Studies. 2021;7:44–80.</li> <li>Zubizarreta JR, Paredes RD, Rosenbaum PR. Matching for balance, pairing for heterogeneity in an observational study of the effectiveness of for-profit and not-for-profit high schools in Chile. The Annals of Applied Statistics. 2014;8(1):204–231.</li> <li>Greifer N, Stuart EA. Choosing the Estimand When Matching or Weighting in Observational Studies. arXiv:2106.10577 [stat] [electronic article]. 2021;(<a href="http://arxiv.org/abs/2106.10577" rel="noreferrer">http://arxiv.org/abs/2106.10577</a>). (Accessed September 17, 2021)</li> </ol>
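The g-computation estimator described in Part 1 (fit an outcome regression with treatment-by-covariate interactions, predict each unit's outcome under treatment and under control, and average the difference) can be sketched in a few lines. The data-generating process below is invented purely for illustration; the true marginal ATE is 1 by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented data: one confounder x, binary treatment a (more likely
# for large x), and a heterogeneous effect (1 + x), so the true
# marginal ATE is E[1 + x] = 1 for standard-normal x.
x = rng.normal(size=n)
a = rng.binomial(1, 1 / (1 + np.exp(-x)))
y = 2 * x + (1 + x) * a + rng.normal(size=n)

# g-computation: fit y ~ 1 + a + x + a:x, then predict each unit's
# outcome with a set to 1 and to 0, and average the difference.
D = np.column_stack([np.ones(n), a, x, a * x])
beta = np.linalg.lstsq(D, y, rcond=None)[0]

D1 = np.column_stack([np.ones(n), np.ones(n), x, x])             # a := 1
D0 = np.column_stack([np.ones(n), np.zeros(n), x, np.zeros(n)])  # a := 0
ate = np.mean(D1 @ beta - D0 @ beta)

print(round(ate, 2))
```

Note that simply reading off the coefficient on `a` would give the conditional effect at x = 0, which only coincides with the marginal ATE here because x has mean zero; the predict-and-average step is what makes the estimate marginal.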
772
causal inference
Causal Inference of Between-Group Differences in Time Series Data
https://stats.stackexchange.com/questions/476700/causal-inference-of-between-group-differences-in-time-series-data
<p>I'm relatively new to time series data/causal inference (am working my way through Mostly Harmless Econometrics as we speak). I'm still not sure, though, how to appropriately test between-group differences in time series.</p> <p>Basically, I want to test if the &quot;red group&quot; is statistically different from the &quot;blue group&quot;. Here is an image of the two groups over time with log first differences on the y axis and time on the x axis:</p> <p><a href="https://i.sstatic.net/ua1nbl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ua1nbl.png" alt="log first differences between groups" /></a></p> <p>My first thought was that we could possibly test differences in the mean value given the distribution of the values on the y-axis looks like this:</p> <p><a href="https://i.sstatic.net/AlvAdl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AlvAdl.png" alt="enter image description here" /></a></p> <p>But I'm not sure that makes sense. Can anyone provide suggestions for testing differences between these groups and cite sources I can read up on?</p>
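Not a causal analysis per se, but one simple way to formalize "is the mean log first difference different between the groups" while allowing for serial dependence is a block bootstrap on the per-period difference. A sketch with fabricated series (the series, block length, and simulated effect are all invented stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated stand-ins for the two series of log first differences;
# the "red" series has a slightly higher mean by construction.
t = 300
red = 0.05 + rng.normal(0, 0.3, t)
blue = rng.normal(0, 0.3, t)
d = red - blue  # per-period difference

def block_boot_means(x, n_boot=2000, block=10):
    """Circular block bootstrap of the sample mean, to respect autocorrelation."""
    n = len(x)
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n, size=n // block + 1)
        idx = (starts[:, None] + np.arange(block)).ravel()[:n] % n
        means[b] = x[idx].mean()
    return means

# Two-sided p-value for H0: the mean difference is zero.
boots = block_boot_means(d - d.mean())  # recenter under the null
pval = np.mean(np.abs(boots) >= abs(d.mean()))
print(round(d.mean(), 3), round(pval, 3))
```

The block length should be chosen to cover the serial correlation in the data; with no autocorrelation this reduces to an ordinary bootstrap of the mean.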
773
causal inference
What are some current research areas of interest in machine learning and causal inference?
https://stats.stackexchange.com/questions/328602/what-are-some-current-research-areas-of-interest-in-machine-learning-and-causal
<p>I am wondering if anyone has any references or material that relates to a survey or summary of current topics of research either in machine learning or at the intersection of machine learning and causal inference. </p>
<p>Susan Athey and Guido Imbens have kindly put their lecture notes for the various 2018 causal ML courses they have taught on <a href="https://drive.google.com/drive/mobile/folders/1SEEOMluxBcSAb_tsDYgcLFtOQaeWtkLp?usp=drive_open" rel="nofollow noreferrer">a public Google Drive folder</a>. </p>
774
causal inference
How to do Causal Inference for Observational Data [Supply Chain]?
https://stats.stackexchange.com/questions/594330/how-to-do-causal-inference-for-observational-data-supply-cain
<p><strong>Problem statement:</strong> Understand what factors impact the different operational times in a supply chain warehouse operation. I have observational data (past 1 year) containing the number of orders, packages, and products, the time taken to push a product out of the warehouse, etc.</p> <p>I want to create a model to understand which factors 'cause' the increase/decrease in the time taken to push a product out of the warehouse. I also want to understand with what magnitude those factors impact the time taken.</p> <p>I tried a predictive approach where my predictor variables were # of orders, # of products shipped, etc., and my response variable was the time taken to push out a product. I trained a random forest on this and used SHAP values to infer which factor is most important and which factors play an important part in causing the time to increase or decrease.</p> <p>However, I feel that this approach is not correct since I did not model the causal part. I came across a Python library for causal inference, 'dowhy', but I am unable to understand how to identify confounder variables in my use case.</p> <p>Can anyone help?</p>
775
causal inference
Using a Bayesian Additive Regression Trees model for causal inference
https://stats.stackexchange.com/questions/521925/using-a-bayesian-additive-regression-trees-model-for-causal-inference
<h1><strong>Some Context:</strong></h1> <p>I've read this <a href="https://cds.nyu.edu/wp-content/uploads/2014/04/causal-and-data-science-and-BART.pdf" rel="nofollow noreferrer">presentation</a> about using a BART model to find out the causal effect of a certain variable with respect to a target variable (say, how much a specific medicine actually helps in treating a certain disease).</p> <p>I'm still grasping the main and essential causal inference concepts, but I'm already familiar with the idea that in observational data, if we want to find out the average causal effect of a treatment, we need to make sure we're conditioning or stratifying our target variable according to some possible confounders. In the medicine example, that could be age, weight, etc. That is, we need to guarantee some form of conditional ignorability.</p> <p>I also learned that there are structural causal models, which can help in choosing which input variables to condition on. For example, there may be an input variable that is a collider and won't really help to find out the causal effect of, say, T -&gt; Y.</p> <h1><strong>The Question:</strong></h1> <p>In this scenario, I'm assuming that a non-parametric model such as a BART model could help estimate the ACE (Average Causal Effect) by calculating the intervention data (say, the counterfactuals) and then using that to estimate the ACE.</p> <p>However, I'm not sure how one would use BART if you're not sure on what input variables you should condition. Does the model handle that?</p> <p>In fact, in a more general approach, if I only have a handful of variables (e.g.: A, B, C, D, E, Y) and I want to find out if any of the A-E variables have a causal effect on Y, what should I do? Then, assuming there IS a causal effect between any of those variables, which one is the largest? How can I determine that?</p> <h2>Bonus:</h2> <p>If there's any solution to the questions I'm posing, is that solution possible to implement in R? 
Are there any examples of that?</p>
<p>BART is a method of estimating <span class="math-container">$E[E[Y|A=1,X]] - E[E[Y|A=0,X]]$</span> in a highly flexible way, where <span class="math-container">$Y$</span> is the outcome, <span class="math-container">$A$</span> is the treatment, and <span class="math-container">$X$</span> are covariates. BART is one of many such methods that estimate the same quantity, including inverse probability weighting, propensity score methods, TMLE, causal random forests, etc. This is a totally separate matter from causal inference. There is nothing special about BART for causal inference. It is just a regression method. I discuss some of its statistical advantages <a href="https://stats.stackexchange.com/questions/446416/how-does-bart-bayesian-additive-regression-tree-help-with-causal-inference?rq=1">here</a>.</p> <p>Why do we care about estimating <span class="math-container">$E[E[Y|A=1,X]] - E[E[Y|A=0,X]]$</span>? When certain assumptions are met, including strong ignorability, <span class="math-container">$E[E[Y|A=1,X]] - E[E[Y|A=0,X]] = E[Y^1] - E[Y^0]$</span>, which is the average causal effect of <span class="math-container">$A$</span> on <span class="math-container">$Y$</span>. The assumptions required are about the causal status of the variables in <span class="math-container">$X$</span> with respect to <span class="math-container">$A$</span> and <span class="math-container">$Y$</span>. In particular, they must be a sufficient set of variables required for nonparametric identification of the causal effect. (Note that &quot;nonparametric identification&quot; has almost nothing to do with &quot;nonparametric&quot; used as a descriptor for an estimation method like BART). Some of those rules include, e.g., that no colliders are included, that all backdoor paths between the treatment and outcome are closed, etc. There are a set of rules required for nonparametric identification of the causal effect. 
These are discussed in most texts on causal inference.</p> <p>BART can't tell you which variables are required for nonparametric identification of a causal effect. It can only estimate the causal effect once you provide it with the correct set of <span class="math-container">$X$</span>. No method (to my knowledge) can tell you the causal structure of the variables without any assumptions because each joint distribution of covariates is compatible with many causal structures. Strictly speaking, there is no reason for BART to be in a talk like the one you presented except that the author of the presentation also developed the use of BART in causal effect estimation. Again, there is nothing special about BART in terms of causality. It is a highly effective method for estimating a conditional association, which is what a causal effect is under certain assumptions, but it cannot verify those assumptions.</p> <hr /> <p>I should note I am a huge fan of BART and this post is not meant to insult it or Dr. Hill, but just to point out that BART has the same causal status as all other regression methods, which is to say, none at all, except that it can be used to estimate a conditional association, which can, under certain (unverifiable) assumptions, be interpreted as a causal effect.</p>
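To make the estimand in this answer concrete: <span class="math-container">$E[E[Y|A=1,X]] - E[E[Y|A=0,X]]$</span> is just a standardized conditional association, and any regression fit, from plain stratum means to BART, can supply the inner conditional expectations. A toy sketch with invented numbers, using stratum means as the "flexible" fit (the true ATE is 2 by construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Invented example: discrete covariate x in {0, 1, 2} drives both
# treatment assignment and the outcome; the effect of a is (1 + x),
# so the true ATE is E[1 + x] = 2 for uniform x.
x = rng.integers(0, 3, size=n)
a = rng.binomial(1, 0.2 + 0.25 * x)
y = x + (1 + x) * a + rng.normal(size=n)

# Standardization E[E[Y|A=1,X]] - E[E[Y|A=0,X]], with the inner
# conditional expectations estimated by stratum means.
ate = 0.0
for v in (0, 1, 2):
    m1 = y[(x == v) & (a == 1)].mean()
    m0 = y[(x == v) & (a == 0)].mean()
    ate += (m1 - m0) * np.mean(x == v)

print(round(ate, 2))
```

Swapping the stratum means for BART predictions changes the estimation method, not the estimand, and nothing in the computation tells you whether x is a sufficient adjustment set; that must come from causal assumptions.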
776
causal inference
Econometrics: What are the assumptions of logistic regression for causal inference?
https://stats.stackexchange.com/questions/357915/econometrics-what-are-the-assumptions-of-logistic-regression-for-causal-inferen
<p>I'm trying to understand the assumptions for logistic regression when you intend to interpret a parameter as causal. The assumptions for causal OLS regressions are well known, but I can't find a good source for similar assumptions for logistic regressions.</p> <p>From what I can find on the internet, I think the following assumptions need to hold:</p> <ol> <li>Errors are distributed according to a logistic distribution and are independent of each other</li> <li>No multicollinearity</li> </ol> <p>My intuition tells me that the independent variables should not be correlated with the error term (no endogeneity), as in the case of OLS regressions, but I can't find support for this anywhere. Does anyone have a mathematical argument for this? As in, where would estimation go wrong? </p> <ul> <li>On the same point, when you're interested in the parameter in front of X1 as the causal parameter and X1 is not correlated with the error term, but X2 is correlated with the error term, although you're not interested in the parameter in front of X2 in a causal sense, can you still run this logistic regression and interpret the coefficient in front of X1 as causal? i.e., would the endogeneity of X2 mess up the parameter estimate in front of X1?</li> </ul> <p>I also read that the errors are not identically distributed, but I'm not sure why. Can anyone explain why this is true?</p> <p>Are there any other assumptions for logistic regressions when you want to use it for causal inference?</p>
<p>The capacity to interpret regression relationships as causal generally depends on experimental protocols rather than the assumed structure of the statistical model. Regression models allow us to relate the explanatory variables statistically to the response variable, where this relationship is made conditional on all the explanatory variables in the model. As a default position, that is still just a predictive relationship, and should not be interpreted causally. That is the case in standard linear regression using OLS estimation, and it is also true in logistic regression.</p> <p>Suppose we want to interpret a regression relationship causally ---e.g., we have an explanatory variable <span class="math-container">$x_k$</span> and we want to interpret its regression relationship with the response variable <span class="math-container">$Y$</span> as a causal relationship (the former causing the latter). The thing we are scared of here is the possibility that the predictive relationship might actually be due to a relationship with some <em>confounding factor</em>, which is an additional variable outside the regression that is statistically related to <span class="math-container">$x_k$</span> and is the real cause of <span class="math-container">$Y$</span>. If such a confounding factor exists, it will induce a statistical relationship between these variables that we will see in our regression. (The other mistake you can make is to condition on a mediator variable, which also leads to an incorrect causal inference.)</p> <p>So, in order to interpret regression relationships causally, we want to be confident that what we are seeing is not the result of confounding factors outside our analysis. The best way to ensure this is to use controlled experimentation to set <span class="math-container">$x_k$</span> via randomisation/blinding, thereby severing any statistical link between this explanatory variable and any would-be confounding factor. 
In the absence of this, the next best thing is to use uncontrolled analysis, but try to bring in as many possible confounding factors as we can, to filter them out in the regression. (No guarantees that we have found them all!) There are also other methods, such as using instrumental variables, but these generally hinge on strong assumptions about the nature of those variables.</p> <p>None of the assumptions you mention are necessary or sufficient to infer causality. Those are just model assumptions for the logistic regression, and if they do not hold you can vary your model accordingly. The main assumption you need for causal inference is to assume that <em>confounding factors are absent</em>. That can be done by using a randomisation/blinding protocol in your experiment, or it can be left as a (hope-and-pray) assumption.</p>
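A small simulation illustrates the answer's point: all of the logistic model's distributional assumptions can hold exactly and a coefficient can still be non-causal if a confounding factor is omitted. Everything below (variable names, coefficients) is invented; logistic regression is fit by a plain Newton-Raphson loop rather than any particular library routine.

```python
import numpy as np

def logit_fit(D, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    beta = np.zeros(D.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(D @ beta)))
        grad = D.T @ (y - p)
        hess = (D * (p * (1 - p))[:, None]).T @ D
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(3)
n = 200_000

# u confounds x1 and y; x1 has NO causal effect on y by construction.
u = rng.normal(size=n)
x1 = u + rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-u)))

ones = np.ones(n)
b_naive = logit_fit(np.column_stack([ones, x1]), y)[1]   # u omitted: biased away from 0
b_adj = logit_fit(np.column_stack([ones, x1, u]), y)[1]  # u included: near 0

print(round(b_naive, 2), round(b_adj, 2))
```

One further wrinkle specific to logistic regression: even without any confounding, adjusted and unadjusted coefficients generally differ because the odds ratio is non-collapsible, which is a separate issue from causality.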
777
causal inference
What are the best empirical studies comparing causal inference with experimental, quasi-experimental, and non-experimental techniques?
https://stats.stackexchange.com/questions/142212/what-are-the-best-empirical-studies-comparing-causal-inference-with-experimental
<p>The Issue: People attempt to draw causal inferences using many different statistical techniques (e.g. regression, propensity score matching, regression discontinuity, instrumental variables, etc.). One great way to learn about the strengths and weaknesses of different statistical techniques for causal inference is to compare them on the same data. Since randomized experiments are the so called &quot;gold standard&quot; for causal inference, they are obviously an excellent benchmark.</p> <p>I have seen several studies of this last type, but I could only recall two. LaLonde's classic: &quot;<a href="https://www3.nd.edu/%7Ewevans1/class_papers/lalonde_aer.pdf" rel="nofollow noreferrer">Evaluating the Econometric Evaluations of Training Programs with Experimental Data</a>&quot; and Aiken et. al. &quot;<a href="http://erx.sagepub.com/content/22/2/207.short" rel="nofollow noreferrer">Comparison of a Randomized and Two Quasi-Experimental Designs in a Single Outcome Evaluation: Efficacy of a University-Level Remedial Writing Program</a>.&quot;</p> <p>Do you know of other examples of this type of study?</p>
<p>The type of study you are referring to is called a within-study comparison. An early example that produced a lot of discussion is <a href="http://users.nber.org/~rdehejia/papers/dehejia_wahba_jasa.pdf" rel="nofollow">Dehejia and Wahba</a> (1999; JASA) using Lalonde's (1986) NSW data in which they compared the results based on PSA to the randomized experimental benchmark. The Lalonde data set is now included in PSA packages in R such as Matching and twang, for example. There was an ongoing workshop at Northwestern (not sure if they still do it) that has an archived website with a reference list you will find useful <a href="http://www.ipr.northwestern.edu/workshops/past-workshops/design-implementation-and-analysis-of-within-study-comparisons/2012/selected-references.html" rel="nofollow">(link)</a>. </p> <p>One interesting example is the 2008 JASA paper by <a href="http://stat-athens.aueb.gr/~jpan/Shadish-JASA2008(1334-1356)-17mr09.pdf" rel="nofollow">Shadish, Clark, and Steiner</a> in which they randomized participants to be in either an observational study or a randomized experiment and then used the results from the randomized experiment as a benchmark, as you say. The more typical design is three arm (randomized treatment gp, randomized comparison group, observational comparison group). Shadish, Clark, and Steiner's design was four arm (randomized treatment gp, randomized comparison group, observational treatment gp, observational comparison group).</p>
778
causal inference
when are rational expectations a threat to causal inference?
https://stats.stackexchange.com/questions/466872/when-are-rational-expectations-a-threat-to-causal-inference
<p>Consider the impact government policy has had on deaths from COVID19. I think the potential relationships are </p> <p><a href="https://i.sstatic.net/2Sp8S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Sp8S.png" alt="enter image description here"></a></p> <p>If the relationships are as given in the above diagram, and I regress covid deaths at t+1 on policy at t-1, is the ONLY threat to causal inference that policy maker's rational expectations of covid deaths at t+1 is affecting policy choice at t-1? Specifically, do I not need to worry about the bi-direction of the relationship between policy and the epidemic because the relationship between the epidemic and covid deaths is one directional and government policy affects covid deaths only indirectly via the epidemic? </p> <p>I think it is fairly safe to assume that: 1) policy does not affect covid deaths directly, but only through its impact on the epidemic. 2) the epidemic causes covid deaths, but not vice versa. 3) the relationship between policy and epidemic runs both ways. 4) nations choose their own policies and experience their own epidemic. </p> <p>nb: I am investigating 5 measures of government policy aggregated to the national level, but for clarity referred to them as "policy" </p>
<p>I think I would add another node for policy at <span class="math-container">$t+1,$</span> so as not to violate the fundamental law of cause and effect (causes must precede effects). Let <span class="math-container">$E(t)$</span> be the epidemic at <span class="math-container">$t,$</span> <span class="math-container">$D(t)$</span> be COVID deaths at <span class="math-container">$t,$</span> and <span class="math-container">$G(t)$</span> be government policy at <span class="math-container">$t.$</span> The real question is whether we will allow modeling (accurate or not) to affect government policy. I think we should, since that's obviously happening. So I would propose this alteration to your model:</p> <p><a href="https://i.sstatic.net/Isb29.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Isb29.png" alt="enter image description here"></a></p> <p>So this is saying that government policy at <span class="math-container">$t+1$</span> has inertia (arrow <span class="math-container">$G(t-1)\to G(t+1)$</span>), and that modeling of the epidemic (as well as actual data) and the deaths affect government policy at <span class="math-container">$t+1.$</span> COVID deaths are only <em>immediately</em> affected by the epidemic, as you said.</p> <p>I would caution you that the effect of government policy on the epidemic is hotly contested, and is (inherently) a highly political topic. </p> <p>But I'm very excited that you're thinking to apply causal diagrams to this incredibly important question! I've been wanting someone to do this, because we need clarity of thought!</p>
779
causal inference
Order in which covariates are measured in an observational study - causal inference
https://stats.stackexchange.com/questions/656860/order-in-which-covariates-are-measured-in-an-observational-study-causal-infere
<p>I want to model hba1c levels for a group of type 1 diabetes patients. I have data which are extracted from a register, and my goal is to answer whether a treatment intervention decreases hba1c levels on average. I am (trying) to use causal inference, where the average treatment effect is calculated using potential outcomes. Hence my research question is <span class="math-container">$$ E\{E (Y|A=1,W)-E(Y|A=0,W) \}, $$</span> where <span class="math-container">$Y$</span> is outcome, <span class="math-container">$A$</span> is binary treatment and <span class="math-container">$W$</span> are baseline variables.</p> <p>The assumptions are <span class="math-container">$$ \begin{array}{cllc} 1 &amp; \text{ Consistency:} &amp; A=a \Rightarrow Y=Y^a,\\ 2 &amp; \text{ Exchangeability:} &amp; A \perp Y^a|W (\perp \textit{read } \text{ "independence"}),\\ 3 &amp; \text{ Positivity:} &amp; P(A=a|W=w)&gt;0 \text{ when } P(W=w)&gt;0. \end{array} $$</span> I believe these assumptions are commonly used in causal inference.</p> <p>Now I present my issue, which concerns the order in which the data are measured.</p> <p>As I wrote, the data come from a registry and aren't from a randomized controlled trial. The causal chain obviously requires that the outcome (hba1c levels) is measured after the treatment intervention (for those who receive it), and not before - otherwise it would be meaningless to try to assess an effect of treatment. Also, baseline variables are measured before the observed outcome.</p> <p>BUT for various records in the data, the treatment intervention is given <span class="math-container">$\textit{before}$</span> some baseline variables are measured (i.e. non-deterministic variables such as blood pressure etc. which change over time). Thus there is a chance that the treatment effect is hidden or exaggerated through baseline variables. Is there a way to overcome this? I'm guessing this problem has been encountered before within register analysis.</p> <p>Best</p>
<p>If I understand your setup correctly, you’re working with a binary point exposure variable <span class="math-container">$A$</span>, an outcome <span class="math-container">$Y$</span>, and an adjustment set <span class="math-container">$\{W, X\}$</span> that is sufficient for blocking confounding based on either background knowledge or causal graphical assumptions. However, for some units, <span class="math-container">$X$</span> may have been measured in the post-exposure period only, creating a potential issue since <span class="math-container">$X$</span> could in fact represent two distinct causal nodes: pre-exposure <span class="math-container">$X_0$</span> and post-exposure <span class="math-container">$X_1$</span>.</p> <p>Overlooking this time-order distinction can introduce post-exposure bias when adjusting for <span class="math-container">$X_1$</span> if it's causally affected by <span class="math-container">$A$</span> itself, is a mediator, or a descendant of one, as in the following causal graph: <span class="math-container">$$ (W, X_0) \rightarrow A \rightarrow X_1 \rightarrow Y $$</span> <span class="math-container">$$ A \rightarrow Y \leftarrow (W, X_0) \rightarrow X_1 $$</span></p> <p>Adjusting for <span class="math-container">$X_1$</span> would not induce bias if it’s not a descendant of <span class="math-container">$A$</span>. Yet, the main issue arises from missing data on <span class="math-container">$X_0$</span> for some units, especially if <span class="math-container">$W$</span> alone is not sufficient to block confounding without <span class="math-container">$X_0$</span>.</p> <p>In this case, you might treat <span class="math-container">$X_0$</span> as a confounder <em>subjected to a missingness mechanism</em> and incorporate a missing data model to augment the causal graph. 
Two common models are:</p> <h4>Missing Completely at Random (MCAR)</h4> <p>Let <span class="math-container">$R_0$</span> be a binary indicator for missingness on <span class="math-container">$X_0$</span>, where <span class="math-container">$R_0=1$</span> implies <span class="math-container">$X_0$</span> is observed, and <span class="math-container">$R_0=0$</span> means it’s missing. If <span class="math-container">$R_0$</span> is independent of all other nodes, then <span class="math-container">$R_0 \perp X_0$</span> and so we can ignore units without <span class="math-container">$X_0$</span>. That is, one can perform <em>complete case analysis</em> (CCA), which identifies the ATE as: <span class="math-container">$$ \mathbb{E}\left\{\Delta_a \mathbb{E}[Y\mid A=a, W, X_0, R_0=1] \mid R_0=1\right\}, $$</span></p> <p>where <span class="math-container">$\Delta_a f(a) = f(1) - f(0)$</span>.</p> <h4>Missing at Random (MAR)</h4> <p>Suppose the missingness indicator <span class="math-container">$R_0$</span> depends (non-deterministically) on (some of) the fully observed covariates <span class="math-container">$W$</span>. For instance, <span class="math-container">$R_0$</span> might vary by gender, and men could have a higher probability than women of having <span class="math-container">$X$</span> measured post-exposure rather than pre-exposure. 
In this case, suppose that <span class="math-container">$R_0 \perp X_0 \mid W$</span>, as shown in the following causal graph:</p> <p><span class="math-container">$$ (W, X_0) \rightarrow A \rightarrow X_1 \rightarrow Y $$</span> <span class="math-container">$$ A \rightarrow Y \leftarrow (W, X_0) \rightarrow X_1 $$</span> <span class="math-container">$$ X_0 \leftarrow W \rightarrow R_0 $$</span></p> <p>Then the ATE is identifiable by marginalizing in two steps:</p> <p><span class="math-container">$$ \mathbb{E}\left\{\mathbb{E}\left\{\Delta_a \mathbb{E}[Y \mid A=a, W, X_0, R_0=1] \mid W, R_0=1\right\}\right\} $$</span></p> <p>where the second expectation standardizes over <span class="math-container">$p(X_0 \mid W, R_0=1)$</span>, and the outermost expectation is over <span class="math-container">$p(W)$</span>, leveraging the factorization <span class="math-container">$p(X_0 \mid W, R_0=1)\, p(W) = p(X_0, W)$</span>.</p> <p>Alternatively, you might impute missing values of <span class="math-container">$X_0$</span> and treat the imputed values as observed, but care should be taken in specifying the imputation model. For instance, you could use <span class="math-container">$X_1$</span> in the imputation model for <span class="math-container">$X_0$</span> if you have enough units for which you measured both pre- and post-exposure <span class="math-container">$X$</span>.</p>
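As a sanity check on the MAR identification formula, here is a plug-in sketch on simulated data (the data-generating process and all coefficients are invented for illustration; the true ATE is 1, and `X0` goes missing with a probability that depends only on `W`):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Invented DGP obeying the MAR assumptions: binary W, X0, A; true ATE = 1;
# X0 goes missing (R0 = 0) with a probability that depends on W only.
W  = rng.binomial(1, 0.5, n)
X0 = rng.binomial(1, 0.3 + 0.4 * W, n)
A  = rng.binomial(1, 0.2 + 0.3 * W + 0.3 * X0, n)
Y  = 1.0 * A + W + 2.0 * X0 + rng.normal(size=n)
R0 = rng.binomial(1, 0.9 - 0.4 * W, n)

cc = R0 == 1  # complete cases

def mu(a, w, x0):
    """Empirical E[Y | A=a, W=w, X0=x0, R0=1]."""
    m = cc & (A == a) & (W == w) & (X0 == x0)
    return Y[m].mean()

# Two-step standardization: inner over p(X0 | W, R0=1), outer over p(W).
ate = 0.0
for w in (0, 1):
    pw = np.mean(W == w)                      # p(W = w): W is fully observed
    for x0 in (0, 1):
        p = np.mean(X0[cc & (W == w)] == x0)  # p(X0 = x0 | W = w, R0 = 1)
        ate += pw * p * (mu(1, w, x0) - mu(0, w, x0))

print(ate)  # close to the true ATE of 1 despite the missing X0 values
```

Note that the outer weights `pw` use all units (since `W` is fully observed), while the inner conditional distribution of `X0` and the outcome regression use complete cases only, exactly as the two-step formula prescribes.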
780
causal inference
Rather Than Framing Causal Inference as &quot;How Much X Causes Y to Change&quot;, Can You Frame Causal Inference As &quot;X Explains _% of the Variation in Y&quot;
https://stats.stackexchange.com/questions/650526/rather-than-framing-causal-inference-as-how-much-x-causes-y-to-change-can-you
<p>Most causal research designs seek to estimate a causal effect and interpret that causal effect as a marginal effect (a 1-unit shift in X leads to a _ amount of change in Y).</p> <p>However, as I've spent more time applying causal inference practices in industry, it seems like stakeholders want to know more than just this. For example, how does the effect of X1 compare to that of X2? How impactful is X1 compared to the entire range of variables that might cause a change in Y?</p> <p>These are questions that don't seem to be easily answered by the paradigm that I'm used to where we are focused on a very narrow estimand rather than how our estimate fits into the entire DGP that explains Y. Something I've seen done a lot in older studies is &quot;X explains _% of the variation in Y&quot;. Due to their age and sometimes poor methodology, I'm always skeptical that these studies are actually estimating how much of Y is caused by X. Still, it has got me thinking. Are there methods to go about estimating comparative effect sizes? (By comparative, I mean the effect of X compared to all other things that impact Y to gauge substantive significance of a given treatment)?</p>
781
causal inference
Pearl, Causal Inference in Statistics Q3.5.1 (Backdoor criterion)
https://stats.stackexchange.com/questions/582291/pearl-causal-inference-in-statistics-q3-5-1-backdoor-criterion
<p>This is a question about backdoor criterion (as per J. Pearl) on finding causal effects. It is linked to a specific exercise in a specific book, but I hope it will be sufficiently generic and self-contained to be of general use.</p> <h2>Problem statement</h2> <p>I am self-studying Pearl, Glymour, Jewell <em>Causal Inference in Statistics, A Primer</em>. Not quite sure about Q3.5.1 b. There we are given a causal diagram</p> <p><a href="https://i.sstatic.net/d28hW.png" rel="noreferrer"><img src="https://i.sstatic.net/d28hW.png" alt="enter image description here" /></a></p> <p>And asked to find z-specific effect of X on Y, i.e.:</p> <p><span class="math-container">$$ P\left[Y=y\, \Big|\,do\left(X=x\right),\,Z=z\right] $$</span></p> <p>As soon as we condition on <span class="math-container">$Z$</span>, we are creating constraint that correlates <span class="math-container">$B$</span> and <span class="math-container">$C$</span> and thus opens a back-door <span class="math-container">$XABZCDY$</span>. To estimate effect of <span class="math-container">$X$</span> on <span class="math-container">$Y$</span>, that backdoor path needs to be broken. In my understanding, this can be done by conditioning on any of the <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span> or <span class="math-container">$D$</span>.</p> <h2>Model solution</h2> <p>I also have model solutions (found online). There, only one option is mentioned - to condition on <span class="math-container">$C$</span>:</p> <p><span class="math-container">$$ P\left[Y=y\, \Big|\,do\left(X=x\right),\,Z=z\right]=\sum_{c} P\left[Y=y\, \Big|\,X=x,\,Z=z,\,C=c\right]\cdot P\left[C=c\right] $$</span></p> <h2>Attempt to explain the model solution</h2> <p>Is there a reason why conditioning on <span class="math-container">$C$</span> is given as a sole solution? 
I can rule out conditioning on <span class="math-container">$A$</span> or <span class="math-container">$D$</span> since those are not independent variables. That leaves a question of whether I could condition on <span class="math-container">$B$</span>.</p> <p>One way I can think of explaining why conditioning on <span class="math-container">$B$</span> would not work is by noting that causal effect corresponds to conditional probability on a modified diagram:</p> <p><a href="https://i.sstatic.net/okHC7.png" rel="noreferrer"><img src="https://i.sstatic.net/okHC7.png" alt="enter image description here" /></a></p> <p><span class="math-container">$$ P\left[Y=y\, \Big|\,do\left(X=x\right),\,Z=z\right]=P_m\left[Y=y\, \Big|\,X=x,\,Z=z\right] $$</span></p> <p>Now, I can express this as:</p> <p><span class="math-container">$$ \begin{align} P_m\left[Y=y\, \Big|\,X=x,\,Z=z\right] &amp;= \sum_{b} P_m\left[Y=y\, \Big|\,X=x,\,Z=z,\,B=b\right]\cdot P_m\left[B=b\right] \\ &amp;=\sum_{c} P_m\left[Y=y\, \Big|\,X=x,\,Z=z,\,C=c\right]\cdot P_m\left[C=c\right] \end{align} $$</span></p> <p>Since <span class="math-container">$B$</span> is independent, its probability would not be affected by modification of the diagram, so <span class="math-container">$P_m\left[B=b\right]=P\left[B=b\right]$</span>, and same for <span class="math-container">$C$</span>.</p> <p>When it comes to conditional probability, we can use the fact that with fixed <span class="math-container">$Z=z$</span> there is no causal link from <span class="math-container">$C$</span> to <span class="math-container">$X$</span>, thus conditioning on <span class="math-container">$C$</span> is the same on both the original and the modified diagram: <span class="math-container">$P_m\left[Y=y\, \Big|\,X=x,\,Z=z,\,C=c\right]=P\left[Y=y\, \Big|\,X=x,\,Z=z,\,C=c\right]$</span>. 
This logic would not work for <span class="math-container">$B$</span> since <span class="math-container">$B$</span> does affect <span class="math-container">$X$</span> on the original diagram. Therefore we can only condition on <span class="math-container">$C$</span>:</p> <p><span class="math-container">\begin{align} P\left[Y=y\, \Big|\,do\left(X=x\right),\,Z=z\right]&amp;=P_m\left[Y=y\, \Big|\,X=x,\,Z=z\right] \\ &amp;=\sum_{c} P_m\left[Y=y\, \Big|\,X=x,\,Z=z,\,C=c\right]\cdot P_m\left[C=c\right] \\ &amp;=\sum_{c} P\left[Y=y\, \Big|\,X=x,\,Z=z,\,C=c\right]\cdot P\left[C=c\right] \end{align}</span></p> <p><strong>Does this make sense?</strong></p>
<p>No you were right to begin with, you can control for any variable along the back door path so long as it doesn’t open up new such paths.</p> <p>You can try it for yourself for the specific diagram here (set Z to adjusted and some other one to see only the causal path remain colored): <a href="http://dagitty.net/dags.html?id=331" rel="noreferrer">http://dagitty.net/dags.html?id=331</a></p>
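Beyond the dagitty check, the claim can be verified numerically. Below is a linear-Gaussian stand-in for the exercise's graph with made-up coefficients (true effect of X on Y set to 1.5): conditioning on Z alone biases the OLS coefficient on X because the collider opens the path X-A-B-Z-C-D-Y, while additionally conditioning on either C or B recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Linear-Gaussian stand-in for the exercise's graph (coefficients invented):
# B -> A -> X -> Y,  B -> Z <- C,  C -> D -> Y.  True effect of X on Y: 1.5.
B = rng.normal(size=n)
C = rng.normal(size=n)
A = B + rng.normal(size=n)
X = A + rng.normal(size=n)
Z = B + C + rng.normal(size=n)
D = C + rng.normal(size=n)
Y = 1.5 * X + D + rng.normal(size=n)

def coef_on_x(*covs):
    """OLS coefficient on X when regressing Y on X plus the given covariates."""
    M = np.column_stack([np.ones(n), X, *covs])
    return np.linalg.lstsq(M, Y, rcond=None)[0][1]

c_z, c_zc, c_zb = coef_on_x(Z), coef_on_x(Z, C), coef_on_x(Z, B)
print(c_z)   # biased: conditioning on the collider Z opens X-A-B-Z-C-D-Y
print(c_zc)  # close to 1.5: adding C blocks the opened path (model solution)
print(c_zb)  # close to 1.5: adding B blocks it as well
```

Here adjusting for B works just as well as adjusting for C, consistent with the point that any variable blocking the opened back-door path suffices.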
782
causal inference
Eye disease counterfactual example from Elements of Causal Inference
https://stats.stackexchange.com/questions/663641/eye-disease-counterfactual-example-from-elements-of-causal-inference
<p>I'm reading this example from <em>Elements of Causal Inference</em> by Peters, Janzing, and Schölkopf.</p> <hr /> <p><strong>Example 3.4 (Eye disease)</strong></p> <p>There exists a rather effective treatment for an eye disease. For 99% of all patients, the treatment works and the patient gets cured <span class="math-container">$(B= 0)$</span>; if untreated, these patients turn blind within a day <span class="math-container">$(B = 1)$</span>. For the remaining 1%, the treatment has the opposite effect and they turn blind <span class="math-container">$(B = 1)$</span> within a day. If untreated, they regain normal vision <span class="math-container">$(B = 0)$</span>.</p> <p>Which category a patient belongs to is controlled by a rare condition <span class="math-container">$(N_B = 1)$</span> that is unknown to the doctor, whose decision whether to administer the treatment <span class="math-container">$(T = 1)$</span> is thus independent of <span class="math-container">$N_B$</span>. We write it as a noise variable <span class="math-container">$N_T$</span>.<br /> Assume the underlying Structural Causal Model <span class="math-container">$\mathcal{C}$</span>:</p> <p><span class="math-container">$$ T := N_T \\ B := T\cdot N_B + (1 - T)\cdot (1-N_B) $$</span></p> <p>with Bernoulli distributed <span class="math-container">$N_B \sim Ber(0.01)$</span>; note that the corresponding causal graph is <span class="math-container">$T\rightarrow B$</span>.</p> <p>Now imagine a specific patient with poor eyesight comes to the hospital and goes blind <span class="math-container">$(B = 1)$</span> after the doctor administers the treatment <span class="math-container">$(T = 1)$</span>. We can now ask the counterfactual question “What would have happened had the doctor administered treatment <span class="math-container">$T = 0$</span>?” Surprisingly, this can be answered. 
The observation <span class="math-container">$B = T = 1$</span> implies with (3.5) that for the given patient, we had <span class="math-container">$N_B = 1$</span>. This, in turn, lets us calculate the effect of <span class="math-container">$do(T := 0)$</span>.</p> <p>...</p> <hr /> <p>The example goes on. However, <em>I am perplexed by the relation described by</em> <span class="math-container">$B:=T\cdot N_B + (1-T)\cdot (1- N_B)$</span>.</p> <p>In language, it looks like it describes the probability of (eventually) being blind is</p> <p><span class="math-container">$\text{(test admin)}\cdot \text{(prob of disease)} + \text{(test not admin)}\cdot \text{(prob not disease)}$</span>.</p> <p>But shouldn't this be the other way (e.g. switch the <span class="math-container">$(\text{prob..)}$</span> terms)? If the test is not administered, the prob of blind is <span class="math-container">$N_B$</span>, and if the test <em>is</em> administered, the prob of blind is <span class="math-container">$(1-N_B)$</span>.</p> <p>Am I missing something here?</p> <h3>Edit (Solved)</h3> <p>The confusion is that <span class="math-container">$N_B$</span> is <em>not</em> the condition that the patient has the disease, rather it is the condition that the patient has the peculiar inverse treatment property.</p> <p>Also see Noah's answer below</p>
<p>None of the terms in the structural model are probabilities; they are all indicator functions. It is simply a compact way to write the following function: <span class="math-container">$$ B := \begin{cases} 1, &amp; \text{if $T=1,N_B=1$} \\ 1, &amp; \text{if $T=0,N_B=0$} \\ 0, &amp; \text{if $T=1,N_B=0$} \\ 0, &amp; \text{if $T=0,N_B=1$} \end{cases} = \begin{cases} 1, &amp; \text{if $T=N_B$} \\ 0, &amp; \text{if $T\ne N_B$} \end{cases} $$</span> That is, <span class="math-container">$B=1$</span> if <span class="math-container">$T=N_B=1$</span> (patient goes blind if the treatment is given and the patient has the condition) or <span class="math-container">$T=N_B=0$</span> (patient goes blind if treatment is not given and patient doesn't have the condition). <span class="math-container">$B=0$</span> otherwise (patient does not go blind otherwise, i.e., they are given the treatment but don't have the condition or are not given the treatment but do have the condition).</p>
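Since the structural equation is deterministic given the noise, the book's counterfactual reasoning can be traced in a few lines following the abduction / action / prediction recipe (a sketch, not code from the book):

```python
def structural_b(t, n_b):
    # B := T*N_B + (1-T)*(1-N_B), i.e. B = 1 exactly when T == N_B
    return t * n_b + (1 - t) * (1 - n_b)

t_obs, b_obs = 1, 1  # factual data: treated, and the patient went blind

# Abduction: recover the noise value N_B consistent with the observation.
# Here B = T = 1 forces N_B = 1 (the rare inverse-reaction condition).
n_b = next(nb for nb in (0, 1) if structural_b(t_obs, nb) == b_obs)

# Action + prediction: replay the model under do(T := 0), keeping N_B fixed.
b_cf = structural_b(0, n_b)
print(n_b, b_cf)  # 1 0 -> untreated, this patient would have kept their sight
```

The key step is abduction: the factual observation pins down this patient's `N_B`, which is then held fixed while the treatment is counterfactually changed.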
783
causal inference
How do I interpret the identification step logs in Causal Inference using DoWhy?
https://stats.stackexchange.com/questions/623285/how-do-i-interpret-the-identification-step-logs-in-causal-inference-using-dowhy
<p>I am running Causal Inference to determine whether the mass of a vehicle affects the Co2 emissions. I understand that DoWhy follows a particular structure that is modeling-&gt; identification -&gt; estimation -&gt; refutations. I was logging the outputs of each step in Python. I am having trouble understanding and interpreting the results of the Identification step in the code I am running. This is the specific log I am having trouble with:</p> <p><a href="https://i.sstatic.net/oHZ6B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oHZ6B.png" alt="Snippet of my print statements when running the identification step" /></a></p> <p>How should I approach reading this output? And what information is it telling me?</p>
784
causal inference
How to determine an appropriate &quot;closeness&quot; threshold when matching for causal inference?
https://stats.stackexchange.com/questions/489880/how-to-determine-an-appropriate-closeness-threshold-when-matching-for-causal-i
<p>Say I have a [yes/no] treatment variable (e.g. the customer complained about their order) and I want to estimate the causal impact of this &quot;treatment&quot; on the average customer's future spend. To do so, I match tens of thousands of observations in such a way as to minimize their Mahalanobis distance as calculated across a dozen covariates. To estimate the average treatment effect, I prepare a difference-of-means t-test, but before I implement this test across the &quot;treated&quot; and &quot;control&quot; groups, I need to prune my observations of pairs that are insufficiently similar to serve as an effective control -i.e. I need to make a judgement call on the maximum distance a pair of observations can have before being dropped. It goes without saying, the results of the t-test vary drastically as a function of this threshold.</p> <p>How do I rigorously determine an appropriate &quot;closeness&quot; threshold in the context of causal inference matching?</p>
<p>There are two qualities on which matched samples should be assessed: covariate balance and remaining (effective) sample size. Covariate balance is the degree to which the covariate distributions are the same between the treatment groups in the matched sample. Remaining sample size is the number of units remaining after discarding unmatched units. Covariate balance is required to eliminate bias due to confounding, and remaining sample size is required to achieve a precise estimate. In many cases, there is a trade-off: discarding units can improve balance but reduces remaining sample size. This is an instance of the fundamental bias-variance trade-off that is ubiquitous in statistics.</p> <p>Another potentially important feature of the matched dataset is the degree to which it represents the population to which you want your effect to generalize. If you discard units in such a way that the remaining matched sample does not resemble your target population, the estimated effect will not be valid for that population. In general, discarding units moves your sample further from the target population. In some cases, this is not so important because the target population itself may be poorly defined or arbitrary, in which case you can say a treatment effect may exist for <em>some</em> population, but not a specific one. I discuss this a bit in my answer <a href="https://stats.stackexchange.com/questions/478486/how-does-propensity-score-matching-that-uses-only-a-small-proportion-of-eligible/478507#478507">here</a>.</p> <p>So, the answer to your question is to find the cutoff that ensures balance, retains many units, and ensures the sample resembles the target population. There is no magic number, and the optimal value will vary from dataset to dataset and is in principle unknown to the analyst. A commonly used criterion is to disallow pairs of units that are more than .2 standard deviations of the logit of the propensity score apart from each other. 
Typically, rather than perform a match and then discard distant pairs, you incorporate this criterion, which is known as a &quot;caliper&quot;, into the matching itself; that way you don't discard a unit that might have been a good match for a different unit. Calipers are optional in matching; if your matched sample is well-balanced, there is no need to impose a restriction on the distance between paired units.</p>
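A minimal sketch of the caliper idea, assuming propensity scores have already been estimated (the data here are simulated, and greedy nearest-neighbor matching without replacement is only one of several algorithms; dedicated packages such as MatchIt implement better ones):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical inputs: estimated propensity scores `ps` and treatment flags.
ps = rng.uniform(0.05, 0.95, size=1000)
treat = rng.binomial(1, ps)

# Conventional caliper: 0.2 standard deviations of the logit of the
# propensity score.
lps = np.log(ps / (1.0 - ps))
caliper = 0.2 * lps.std()

# Greedy 1:1 nearest-neighbor matching on the logit, enforcing the caliper
# during matching (not by pruning distant pairs afterwards).
controls = {j: lps[j] for j in np.flatnonzero(treat == 0)}
pairs = []
for i in np.flatnonzero(treat == 1):
    if not controls:
        break
    j = min(controls, key=lambda k: abs(controls[k] - lps[i]))
    if abs(controls[j] - lps[i]) <= caliper:
        pairs.append((i, j))
        del controls[j]  # match without replacement

print(len(pairs), "matched pairs, caliper =", round(caliper, 3))
```

After matching like this, covariate balance should still be checked; the caliper constrains only the propensity-score distance, not balance on each covariate.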
785
causal inference
How is the counterfactual estimated in Judea Pearls book based on causal inference ? (dowhy)
https://stats.stackexchange.com/questions/580963/how-is-the-counterfactual-estimated-in-judea-pearls-book-based-on-causal-inferen
<p>I have started looking into causal inference, in particular the DoWhy package, based on Judea Pearl's Book of Why. What I don't understand is how the counterfactual is estimated.</p> <p>My understanding is that the DoWhy package (based on Judea Pearl's book) addresses counterfactuals by creating a Bayesian graphical model (at a very high level), but I don't understand the math of how this is done. Can anybody point me in the right direction of how it is estimated?</p> <p>Step 2 in the 4 step process also confuses me. When we say step 2 is 'identification', do we mean that the graphical model tries to write the structure of the model we feed it in a probabilistic way?</p> <p>Any light somebody could shed on how the counterfactual is created the DoWhy/Judea Pearl way would be great.</p> <p>4 STEPS:</p> <p>The four steps of causal inference.</p> <ol> <li>Modelling: create a causal graph to encode assumptions. This is about creating a causal graph that encodes the assumptions we are bringing in from our domain knowledge and our knowledge about how the world works, to augment the data we are using</li> <li>Identification: formulate what to estimate. Here we take the assumptions in the causal graph and model and use them to formulate what we need to estimate</li> <li>Estimation: compute the estimate. Given all the realities of the dataset, what is the best way to trade off bias and variance for whatever task we're trying to do, to get a causal estimate of the impact of doing one thing vs the other on the outcomes we care about</li> <li>Refutation: validate the assumptions. Let's see if we can refute the estimate, i.e. come up with a reason not to trust it</li> </ol>
<p>The &quot;identification&quot; step refers to identifying an estimator for the quantity of interest and whether or not it can be computed given the causal model. For example, if there are unknown confounders, or undirected edges in a causal graph, an effect may not be identifiable.</p> <p>A counterfactual is estimated by changing the value of a given set of nodes, leaving the rest in their &quot;factual&quot; state, and re-computing the state of the system. It is similar to an intervention, with the important difference that we are interested in the alternative state of the system, rather than only the change due to the intervention. This requires knowledge about the values of the other system variables that are not intervened upon.</p>
786
causal inference
Causal Inference with unconfoundedness
https://stats.stackexchange.com/questions/466961/causal-inference-with-unconfoundedness
<p>Consider an observational study with a binary treatment. I denote the treatment variable as <span class="math-container">$z_i$</span>, the observed outcome as <span class="math-container">$y_i$</span>, the potential outcomes as <span class="math-container">$y_i(1),y_i(0)$</span>, and the covariates as <span class="math-container">$x_i$</span>. Assume unconfoundedness, <span class="math-container">$P(Z | Y,X) = P(Z|X)$</span>, holds in this problem.</p> <p>I want to infer the sample average treatment effect on the treated group, i.e. <span class="math-container">$\frac{1}{\left| \{i|z_i = 1\} \right|} \sum_{i,z_i = 1} \left[Y_i(1) - Y_i(0)\right]$</span>. A way to do this is to regress <span class="math-container">$Y(0) \sim X$</span> using data from the control group, directly apply the regression function <span class="math-container">$\hat{f}(X)$</span> to the treatment group to estimate <span class="math-container">$Y_i(0)$</span>, and finally use <span class="math-container">$\frac{1}{\left| \{i|z_i = 1\} \right|} \sum_{i,z_i = 1} \left[Y_i(1) - \hat{f}(x_i)\right]$</span> to estimate <span class="math-container">$\frac{1}{\left| \{i|z_i = 1\} \right|} \sum_{i,z_i = 1} \left[Y_i(1) - Y_i(0)\right]$</span>.</p> <p>The method sounds weird: I actually want to estimate <span class="math-container">$Y(0)$</span> on the treatment group, but I only use data from the control group in the regression. How can I improve it?</p>
<p>The units in the control group provide information about the relationship between the <span class="math-container">$X$</span> and <span class="math-container">$Y(0)$</span>, because for those units, <span class="math-container">$Y$</span> (the observed outcome) is equal to <span class="math-container">$Y(0)$</span>. If you want <span class="math-container">$Y(0)$</span> in the treated group, you can use that information to estimate it from <span class="math-container">$X$</span> in the treated group, just as you describe.</p> <p>For the treated units, you have <span class="math-container">$Y_i(1)$</span> because <span class="math-container">$Y_i = Y_i(1)$</span> for them. It's not an estimate of <span class="math-container">$Y_i(1)$</span>; it <em>is</em> <span class="math-container">$Y_i(1)$</span>. From the treated group alone, we have half of the information required to estimate <span class="math-container">$\frac{1}{|{i|z_i=1}|} \sum_{i,z_i=1} [Y_i(1)-Y_i(0)]$</span>, i.e., the SATT, because we have <span class="math-container">$Y_i(1)$</span> for <span class="math-container">$i$</span> where <span class="math-container">$z_i=1$</span> without doing any modeling. However, for these units, <span class="math-container">$Y_i(0)$</span> is missing. We don't know what would have happened had the treated units received control. </p> <p>How could we get <span class="math-container">$Y_i(0)$</span> for the treated units? For those units, it's unobserved, so no modeling of just the treated units could tell you the relationship between <span class="math-container">$X$</span> and <span class="math-container">$Y_i(0)$</span>. That information <em>only</em> exists in the control group. 
So even though we're interested in a quantity defined on the treated group (i.e., the treated-group SATT), we <em>have</em> to consult the control group, because only the control group contains information that can help us fill in the other half of the estimate of the SATT.</p> <p>There are, of course, other methods of estimating the SATT, including weighting and matching, that don't require you to estimate <span class="math-container">$Y_i(0)$</span> for each treated unit. But, because <span class="math-container">$Y_i(0)$</span> is only observed for the control units, we have to include them in the analysis.</p>
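The imputation estimator described in the question can be sketched in a few lines. The DGP below is hypothetical (one covariate, constant effect of 2, treatment depending on the covariate); under unconfoundedness given x, fitting on controls and predicting for the treated recovers the SATT:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Hypothetical DGP: one confounder x, constant treatment effect of 2
x = rng.normal(size=n)
z = rng.binomial(1, 1 / (1 + np.exp(-x)))        # treatment depends on x
y0 = 1 + 3 * x + rng.normal(scale=0.5, size=n)   # potential outcome under control
y = np.where(z == 1, y0 + 2.0, y0)               # observed outcome; true SATT = 2

# Step 1: fit Y(0) ~ X using *controls only*, where Y = Y(0)
Xc = np.column_stack([np.ones((z == 0).sum()), x[z == 0]])
beta, *_ = np.linalg.lstsq(Xc, y[z == 0], rcond=None)

# Step 2: impute Y(0) for the treated and average Y(1) - Yhat(0)
y0_hat = beta[0] + beta[1] * x[z == 1]
satt_hat = np.mean(y[z == 1] - y0_hat)
print(round(satt_hat, 2))  # close to the true value of 2
```

The control units are doing exactly the job described above: they supply the only available information about the X–Y(0) relationship.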
787
causal inference
What is the mode of inference for frequentist IPTW estimation in the causal inference context
https://stats.stackexchange.com/questions/611834/what-is-the-mode-of-inference-for-frequentist-iptw-estimation-in-the-causal-infe
<p>In Rubin 1990, Donald Rubin describes four different modes of statistical inference for causal effects:</p> <ol> <li>Randomization-based tests of sharp-null hypotheses - in the tradition of Fisher, if you've got an unconfounded assignment mechanism combined with a sharp null hypothesis of no treatment effect, you compute the value of a test statistic in your sample and compare that to the sampling distribution of the test statistic under the null to get a p-value (which can also give you a confidence interval by inverting the null hypothesis test)</li> <li>Randomization-based inference for sampling distributions of estimands (aka repeated sampling randomization-based inference) - in the tradition of Neyman for survey sampling, where you define an estimand of interest Q, select a statistic <span class="math-container">$\hat{Q}$</span> that is an unbiased estimator of the estimand, find a statistic <span class="math-container">$\hat{V}$</span> that is an unbiased estimator of the variance of <span class="math-container">$\hat{Q}$</span>, assume the randomization distribution of <span class="math-container">$(Q - \hat{Q}) \sim N(0, \hat{V})$</span>, and perform inference using that distribution (sometimes you assume a t-distribution instead of a normal distribution).</li> <li>Bayesian inference (aka Bayesian model-based inference) - take the assignment mechanism from the potential outcomes framework, supplement it with a joint probability model <span class="math-container">$Pr(X,Y)$</span> (factored in such a way that <span class="math-container">$Pr(X,Y) = \int \prod_{i=1}^{N} f(X_i, Y_i | \theta) Pr(\theta) d\theta$</span>, where <span class="math-container">$\theta$</span> is a parameter such that it's straightforward to compute the causal estimand Q as a function of <span class="math-container">$\theta$</span>) for your covariates and outcome, specify the prior distribution of the parameter <span class="math-container">$Pr(\theta)$</span> and calculate the 
posterior distribution of the causal estimand of interest Q.</li> <li>Superpopulation frequency inference (aka repeated-sampling model-based inference)- take the assignment mechanism and the probability model <span class="math-container">$\prod_{i=1}^{N} f(X_i, Y_i | \theta)$</span>, but discard the prior distribution and draw frequency inferences about <span class="math-container">$\theta$</span> using tools of mathematical statistics like maximum likelihood, likelihood ratios, etc.</li> </ol> <p>Suppose I am using an inverse probability of treatment weighted (IPTW) estimator to fit a marginal structural model to estimate the average treatment effect (ATE) of an active treatment relative to a control treatment using observational data. Let's make the usual assumptions needed to do this kind of inference (treatment version irrelevance, no interference, positivity, conditional exchangeability/no unmeasured confounders, no measurement error in X, correct specification of the nuisance model to estimate the weights, any missing data satisfy the stratified MCAR assumption).</p> <p>If I want to do frequentist inference and get a 95% confidence interval for the ATE, am I appealing to inference mode 2 (repeated sampling randomization-based inference) or 4 (repeated sampling model-based inference)? Or is there some other argument used in this setting to justify variance estimation.</p> <p>Rubin, Donald B. (1990). Formal modes of statistical inference for causal effects. <em>Journal of Statistical Planning and Inference.</em> 25. 279-292.</p>
788
causal inference
Causal Inference After Feature Selection
https://stats.stackexchange.com/questions/502082/causal-inference-after-feature-selection
<p>I am interested in this forum's thoughts concerning the use of LASSO for feature selection in a high dimensional dataset and subsequent OLS regression to adjust for confounding on the most frequently selected variables (I'm using 100 random draws). I'm aware that feature selection does not take into account the causal relationship of the selected variables. Therefore, I want to use OLS to adjust for potential confounding after feature selection. Are there potential issues that may arise? Has this been done in the literature before? Citations appreciated.</p>
<p>Data mining for potential predictors and causal inference don't go together well. A few problems may arise:</p> <p><em>Identification problems:</em></p> <ul> <li><p><strong>Confounding:</strong> If some unobserved common causes are not included in the data, the estimators are still biased.</p> </li> <li><p><strong>Bad control:</strong> Including too many variables is harmful. You can include colliders and unwanted mediators. This helps predictions, but harms causal inference badly. Estimators may be biased in any magnitude and direction, especially after a procedure such as LASSO.</p> </li> <li><p><strong>Mismeasurement:</strong> If the variables are not measured exactly as they appear in the DGP (the real world), their estimators may be biased towards zero (attenuation bias). Moreover, if you mismeasure control variables, they do not fully control for potential confounding.</p> </li> <li><p><strong>Sampling issues:</strong> Unless the sample is 'representative', you may run into unwanted conditioning on colliders.</p> </li> </ul> <p><em>Estimation problems:</em></p> <ul> <li><p><strong>False significance:</strong> Whatever significance level you choose, that proportion of true nulls will be falsely rejected. In theory you can try to adjust p-values by Bonferroni (or other) procedures, but this costs power through the exclusion of some previously significant predictors.</p> </li> <li><p><strong>Functional form:</strong> Unless the functional relationship between variables is perfectly described in the model, issues similar to mismeasurement arise. This is hard to deal with when you perform mass scanning of the data.</p> </li> </ul> <p>In theory, with some work, you can try to automatically solve the estimation problems, but not the identification ones. 
This group of problems requires careful inquiry into the DGP, and possibly some additional assumptions about how the data was generated.</p> <hr /> <p><strong>EDIT:</strong></p> <p><a href="https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2020/11/ciwhatif_hernanrobins_23nov20.pdf" rel="noreferrer">Hernán and Robins (2020)</a> make similar points in chapter 18 of their book.</p> <p>In section 18.2 they emphasise the role of bias-inducing variables (what I referred to as <em>bad control</em>). They then argue that <strong>the decision whether to control for a variable must be based on information outside the data</strong>, and that <strong>therefore this decision cannot be made by any automated procedure that relies exclusively on statistical associations</strong>.</p> <p>They also point out that this problem has already been studied in depth. They provide criticism, an introduction to some basic solutions, and additional citations.</p> <hr /> <p>Hernán MA, Robins JM (2020). <em>Causal Inference: What If.</em> Boca Raton: Chapman &amp; Hall/CRC.</p>
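The "bad control" point is easy to demonstrate with a small simulation. The DGP below is hypothetical (my own illustration, not from the cited book): X has no causal effect on Y, yet "controlling" for a collider M manufactures a strong spurious effect — exactly the kind of variable an automated selector would happily keep, since it predicts Y well:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical DGP: X has NO causal effect on Y; M is a collider (X -> M <- Y)
x = rng.normal(size=n)
y = rng.normal(size=n)
m = x + y + rng.normal(size=n)

def slope_on_x(design, target):
    """OLS coefficient on the second column of the design matrix."""
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef[1]

# Correct model (no collider): coefficient on X is ~0
b_good = slope_on_x(np.column_stack([np.ones(n), x]), y)
# Adding the collider induces a spurious coefficient of about -0.5
b_bad = slope_on_x(np.column_stack([np.ones(n), x, m]), y)
print(round(b_good, 2), round(b_bad, 2))  # ~0.0 and ~-0.5
```

No amount of in-sample fit diagnostics would flag M as harmful — that knowledge has to come from outside the data.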
789
causal inference
Understanding the Intersection Between Causal and Statistical Inference
https://stats.stackexchange.com/questions/627417/understanding-the-intersection-between-causal-and-statistical-inference
<p>Assume a simple example motivating a causal research design. Say that I collect a data set on rural counties in Texas and I wish to understand if rainfall causes a change in crop sales. Working with this observational data, I run a regression, conditioning on a necessary adjustment set (to the best of my capability) and run a sensitivity analysis to examine how the point estimate shifts when exposed to varying levels of hypothetical unobserved confounding.</p> <p>Under such a scenario, I will still estimate information &quot;quantifying uncertainty&quot; (confidence intervals, credible intervals, etc.). However, if:</p> <ul> <li>1: Identifying assumptions ensure that, with satisfaction of these assumptions, an estimate is causal and</li> <li>2: Under selection on observables, I can quantify uncertainty of the satisfaction of identifying assumptions</li> </ul> <p>then what information are statistical inferential tools providing? It seems to me that P(H) is determined by the satisfaction of identifying assumptions, not by any sort of Bayesian or frequentist quantification of uncertainty. If I wish to evaluate the uncertainty of P(H), does not a sensitivity analysis of identifying assumptions do this very thing?</p> <p>I can see the value of quantifying the uncertainty of P(D) given that the data used to estimate a causal effect may not be representative of the broader population and may be insufficient to approximate the causal parameter that exists in the broader population. However, this caveat does not seem to satisfy my understanding of how statistical inference interacts with causal inference when either:</p> <ul> <li>1: The data is the population itself (so P(D) may not be of interest anymore).</li> <li>2: One pursues a Bayesian approach where one is interested in P(H|D) rather than P(D|<span class="math-container">$H_0$</span>). 
Again, does not the satisfaction of identifying assumptions and sensitivity analyses of these assumptions address P(H)?</li> </ul> <p>I recognize that these topics are not competing and that there is simply confusion on my end for how these goals interact. I appreciate any feedback and suggested readings on the topic.</p>
790
causal inference
How is it reasonable that randomised controlled trials can be used to perform causal inference?
https://stats.stackexchange.com/questions/638977/how-is-it-reasonable-that-randomised-controlled-trials-can-be-used-to-perform-ca
<p>I understand that <a href="https://en.wikipedia.org/wiki/Randomized_controlled_trial" rel="noreferrer">randomised controlled trials (RCTs)</a> are used to perform causal inference, but I'm a confused about how this is reasonable. Let's say that we have a treatment, and we want to find out if this treatment &quot;works&quot;. We randomly allocate participants into two groups, and one group is allocated the treatment and the other the placebo/control. But these people are not clones of each other operating in identical, controlled environments. And my understanding of RCTs is that, often, these people go about their normal lives whilst undertaking the trial – they're not isolated into some totally controlled, hermetic environment. So we have people who are vastly different, who then operate in vastly different, uncontrolled environments during the trial. How can this possibly reasonably allow us to perform causal inference? How does randomly allocating participants into two groups and randomly allocating them to treatment/control make up for all of these innumerable other possible factors that could influence variables of interest? How do we know whether any effects, either in one direction or another, were caused by the treatment, or some other factor (of which, as I said, there are innumerable)? It kind of seems to me like trying to find a needle in a haystack in a hurricane – not reasonable/realistic.</p>
<p>The randomization of participants into different groups, e.g., treatment and control, is central to the inference of causality about the treatment. Randomization makes the treatment groups closely similar, on average; larger randomized groups are more similar.</p> <p>High variation among the participants is essential to concluding that any treatment will work in the larger population, with its high environmental, social, and even genetic variation. Designers of trials make a serious effort to include human variation in their study populations. Randomization makes the two groups as similar as possible in the presence of wide individual variation.</p> <p>There is a chance, of course, as with any experiment, that the appearance of a real difference between treatment and control with a significant statistical test is wrong. This chance is reflected in the p-value. If it is small, then the chance of a false result is small. This supports the conclusion that the treatment caused the difference.</p> <p>Many clinical trials are done on groups of people with one strong similarity, the presence of a disease, such as a specific diagnosis of cancer. Otherwise, participants often reflect the variety found among humans. That variety is necessary to conclude that any effect can be relied on in other people with the same disease.</p> <p>A large clinical trial is usually planned after a treatment is found to be reasonably safe and promising in a small number of people.</p> <p>If a study is well designed with adequate power and does not yield a significant difference, then this is usually taken to mean the effect of the treatment is small or nonexistent, certainly not as large as originally thought.</p>
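The balancing property described above can be seen in a small simulation (all numbers hypothetical): even with many individual traits that are never measured or controlled, random assignment leaves the two arms nearly identical on average on every one of them:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 1000, 50                       # 1000 participants, 50 unmeasured traits

traits = rng.normal(size=(n, p))      # individual variation we never control for
assign = rng.permutation(np.repeat([0, 1], n // 2))  # random allocation to arms

# Mean difference between arms on every single trait
gaps = traits[assign == 1].mean(axis=0) - traits[assign == 0].mean(axis=0)
print(round(np.abs(gaps).max(), 2))   # small relative to the trait SD of 1
```

This is why the innumerable other factors don't need to be enumerated: randomization balances all of them at once, in expectation, whether or not we know they exist.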
791
causal inference
Can autocorrelation confound causal inference?
https://stats.stackexchange.com/questions/474179/can-autocorrelation-confound-causal-inference
<p>I'm working with a weekly aggregated time series that has autocorrelation and I'm trying to find out why the trend has been decreasing by regressing other features onto - I noticed that when I use an ARIMA to account for autocorrelation, it masks some features that wouldn't have been masked from OLS.</p> <p>In the case of this time series, there's certainly yearly seasonality, but when it comes to short term lags there's really no reason to believe that they have a causal influence on eachother, its more likely just caused by the fact that they occur within the same seasonality.</p> <p>Is it better to use something like OLS in this case and ignore the fact that there's autocorrelation in the errors? Or is there justification for still accounting for the autocorrelation? If so, what is it?</p>
<p>You want your model to be not only theoretically adequate but also statistically adequate. For a methodological discussion on that, see the Probabilistic Reduction methodology by Aris Spanos; I have summarized it in my earlier post <a href="https://stats.stackexchange.com/questions/303887">&quot;Effects of model selection and misspecification testing on inference: Probabilistic Reduction approach (Aris Spanos)&quot;</a>.</p> <p>If you have autocorrelated errors while your model assumes uncorrelated ones, you violate statistical adequacy. You can do that and may still get consistent estimators (though not always! see <a href="https://stats.stackexchange.com/questions/384791/proof-of-contemporaneous-exogeneity-and-its-implications-for-an-ar1-model/384879#384879">Christoph Hanck's answer</a>) and in such cases even do valid inference if you adjust your standard errors for autocorrelation, but you lose efficiency (i.e. your point estimates could be improved upon); a cleaner approach is to model the structure in the errors explicitly. Francis X. Diebold also advocates that; see <a href="https://stats.stackexchange.com/search?q=diebold+emperor">these threads</a> (e.g. start by <a href="https://stats.stackexchange.com/questions/246559/ols-hac-std-err-vs-conditional-mean-equation-from-garch/246620#246620">this</a>) for references and examples of applying his argument.</p>
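One concrete justification for not ignoring the autocorrelation: with positively autocorrelated errors, the textbook OLS standard-error formula can badly understate the true sampling variability of the trend estimate. A simulation sketch with hypothetical AR(1) errors (not the asker's data):

```python
import numpy as np

rng = np.random.default_rng(3)
T, reps, rho = 200, 1000, 0.8
xt = np.arange(T) / T
X = np.column_stack([np.ones(T), xt])

slopes, naive_se = [], []
for _ in range(reps):
    e = np.zeros(T)
    for s in range(1, T):              # AR(1) errors the OLS formula ignores
        e[s] = rho * e[s - 1] + rng.normal()
    y = 1 + 2 * xt + e
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (T - 2)
    cov = s2 * np.linalg.inv(X.T @ X)  # textbook (iid-error) OLS covariance
    slopes.append(b[1])
    naive_se.append(np.sqrt(cov[1, 1]))

# The naive SE is far smaller than the actual spread of the slope estimates
print(round(np.std(slopes), 2), round(np.mean(naive_se), 2))
```

HAC-corrected standard errors would repair the inference here; modeling the error structure explicitly (as the answer recommends) additionally recovers efficiency.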
792
causal inference
Causal Inference using Linear Regression
https://stats.stackexchange.com/questions/442256/causal-inference-using-linear-regression
<p>I have been reading recently on fitting linear regression to evaluate the causal effect of some treatment. Let's call the variable in the model representing treatment Xj.</p> <p>From what I have read, we need to make sure to include in the model other variables that affect <strong>both</strong> the response variable 'y' <strong>and</strong> the treatment variable Xj. I understand that only variables that affect both will impact the coefficient of Xj.</p> <p>However, if a variable Xi impacts the response variable 'y' but is independent of Xj, isn't it still important to include it in the model, since it can reduce the error? It won't change the coefficient of Xj, but it will affect its standard error, which is important when trying to establish whether the treatment effect is significant or not. </p> <p>Is my logic incorrect? Do we only need to worry about adding variables that affect both the response and treatment variable?</p>
<p>The following holds generally, but the exact relationships may differ depending on the data.</p> <p>Including a variable that is a predictor of the outcome and the treatment will reduce the bias and variance of the effect estimate. The more related to the outcome and the less related to the treatment, the more variance will be reduced.</p> <p>Including a variable that is a predictor of the outcome and unrelated to the treatment will reduce variance without affecting bias.</p> <p>Including a variable that is a predictor of the treatment and unrelated to the outcome (i.e., an instrument) will increase variance without affecting bias, as long as there is no unmeasured confounding. If there is unmeasured confounding, including instruments will increase the bias.</p> <p>There are many other phenomena that can occur, too. Omitting a variable whose coefficient is smaller than its standard error can reduce the mean squared error even though doing so induces bias (Rao, 1971). Including variables with various properties can increase or decrease bias depending on those properties (Steiner &amp; Kim, 2016). In general, though, your reasoning is correct.</p> <hr> <p>Rao, P. (1971). Some Notes on Misspecification in Multiple Regressions. The American Statistician, 25(5), 37–39. <a href="https://doi.org/10.2307/2686082" rel="nofollow noreferrer">https://doi.org/10.2307/2686082</a></p> <p>Steiner, P. M., &amp; Kim, Y. (2016). The Mechanics of Omitted Variable Bias: Bias Amplification and Cancellation of Offsetting Biases. Journal of Causal Inference, 4(2). <a href="https://doi.org/10.1515/jci-2016-0009" rel="nofollow noreferrer">https://doi.org/10.1515/jci-2016-0009</a></p>
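The second point — an outcome predictor unrelated to treatment reduces variance without affecting bias — can be checked in a quick simulation. The DGP below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(11)
reps, n = 2000, 200
est_short, est_long = [], []

for _ in range(reps):
    z = rng.binomial(1, 0.5, size=n)            # randomized treatment
    x = rng.normal(size=n)                      # predicts y, independent of z
    y = 1 + 2 * z + 3 * x + rng.normal(size=n)  # true effect of z is 2

    Xs = np.column_stack([np.ones(n), z])       # short model: y ~ z
    est_short.append(np.linalg.lstsq(Xs, y, rcond=None)[0][1])

    Xl = np.column_stack([np.ones(n), z, x])    # long model: y ~ z + x
    est_long.append(np.linalg.lstsq(Xl, y, rcond=None)[0][1])

# Both are centered on 2 (unbiased), but the long model is far more precise
print(round(np.mean(est_short), 1), round(np.mean(est_long), 1))
print(round(np.std(est_short), 2), round(np.std(est_long), 2))
```

The sampling standard deviation of the short-model estimate is several times larger, which is exactly the standard-error gain the question asks about.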
793
causal inference
Causal Inference on test scores
https://stats.stackexchange.com/questions/499169/causal-inference-on-test-scores
<p>I administered a test and wanted to know if the exam scores were influenced by watching videos. The participants were randomly entered into 2 arms: a control arm that did not watch videos, and a second arm that did watch videos. I administered a pretest to the groups, had the second arm watch the videos, and then administered a post-test, acquiring their scores. After tallying them, I combined the two scores into a data set with the pre-test data first and the post-test data afterward. I ran causalimpact in R on the set. Here are the results that I got.</p> <pre><code>Posterior tail-area probability p:   0.0111
Posterior prob. of a causal effect:  98.89%
</code></pre> <p>I wanted to know if I implemented causal impact appropriately, given that I combined both the pretest and posttest data into one set and split based upon when the pre-test data ended and the post-test data started. I also wanted to know if my assumption of causality would make sense in this instance.</p>
<p>I'm new to causal inference, but this sounds like a straightforward application of linear models.</p> <p>You're interested in computing</p> <p><span class="math-container">$$ Pr(\mbox{Score} \vert do(\mbox{Videos}) ) $$</span></p> <p>By randomizing, you have severed any arrows in your DAG from confounders to the treatment. So it should be, in principle, as easy as performing a t-test between the group scores.</p> <p>However, you might say...</p> <blockquote> <p>But Demetri, what if the subjects in the experimental group all happen to be poor test takers? I need to adjust for pre-test score.</p> </blockquote> <p>The randomization addresses this issue. If pre-intervention ability had led you to select which subjects got the intervention, then that would be a different story. In that case, you would have classic fork confounding, where pretest ability causes both treatment and the outcome. Conditioning on pre-test ability would be the right thing to do in that scenario, but it is not needed in the one you describe.</p>
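In code, the suggested analysis could be as simple as a difference-in-means test on the post-test scores. The scores below are simulated placeholders, not the asker's data (scipy.stats.ttest_ind would give the same statistic plus a p-value):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated post-test scores for the two randomized arms (hypothetical numbers)
control = rng.normal(70, 10, size=50)
videos = rng.normal(75, 10, size=50)

# Welch two-sample t statistic, computed by hand
diff = videos.mean() - control.mean()
se = np.sqrt(videos.var(ddof=1) / 50 + control.var(ddof=1) / 50)
t_stat = diff / se
print(round(t_stat, 2))
```

Because assignment was randomized, this simple comparison already targets the causal effect; the pre-test scores could still be used as a covariate for precision, but not as a confounding adjustment.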
794
causal inference
How to use causal inference models when we don&#39;t know the structure of the graph?
https://stats.stackexchange.com/questions/562738/how-to-use-causal-inference-models-when-we-dont-know-the-structure-of-the-graph
<p>I have recently started reading some materials in Causal Inference. Based on readings, we assume a graph that explains the relationship between treatment, outcome, and confounders. Then, they propose some methods like inverse probability weighting to compute the ATE. In many cases, we don't have access to the graph and so don't know how to find which variables to use for the adjustment formula. In this situation, how can we use IPW? should we consider all covariates?</p>
795
causal inference
What does it mean to &quot;non-parametrically&quot; identify a causal effect within the super-population perspective in causal inference?
https://stats.stackexchange.com/questions/405086/what-does-it-mean-to-non-parametrically-identify-a-causal-effect-within-the-su
<p>I am wondering, within the context of causal inference, what it means to "non-parametrically" identify a causal effect within the super-population perspective. For example, in Hernan/Robins Causal Inference Book Draft:</p> <p><a href="https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2019/02/hernanrobins_v1.10.38.pdf" rel="noreferrer">https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2019/02/hernanrobins_v1.10.38.pdf</a></p> <p>It defines non-parametric identification on pg. 43 and 123 as:</p> <blockquote> <p>...identification that does not require any modeling assumptions when the size of the study population is quasi-infinite. By acting as if we could obtain an unlimited number of individuals for our studies, we could ignore random fluctuations and could focus our attention on systematic biases due to confounding, selection, and measurement. Statisticians have a name for problems in which we can assume the size of the study population is effectively infinite: identification problems.</p> </blockquote> <p>I understand the <strong>identification</strong> part to mean that under the strong ignorability assumption, there is only ONE way for the observed data to correspond to a causal effect estimand. What confuses me is why we need to assume the size of the study is quasi-infinite. </p> <p>For example, in the book it gives an example of a 20 person study where <strong>each</strong> subject was representative of 1 billion identical subjects, and to view the hypothetical super-population as that of 20 billion people. Specifically, on pg. 13 it states that:</p> <blockquote> <p>... we will assume that counterfactual outcomes are deterministic and that we have recorded data on every subject in a very large (perhaps hypothetical) super-population. 
This is equivalent to viewing our population of 20 subjects as a population of 20 billion subjects in which 1 billion subjects are identical to the 1st subject, 1 billion subjects are identical to the 2nd subject, and so on.</p> </blockquote> <p>My confusion here is what it means to assume a single person is representative of 1 billion identical individuals. Is it assuming that each of the 1 billion are identical with respect to their outcomes and treatment only, but differ with respect to the covariates? Or is it assuming the individual is a summary measure of the 1 billion? My instinct is that the notion of the 1 billion is entertaining the fact we may draw many times without having a case where we have a lack of samples. I.e., small sample sizes result in more unstable estimates. </p> <p>Essentially, what is so crucial about assuming there are many identical individuals in the "background", if they are just going to be the same as a patient you observe? What happens or breaks down if instead of the 1 billion, we only had 2 identical individuals?</p> <p>Thank you for any insight. </p>
<p>Thank you for bringing this interesting book to our attention. Below are my two cents.</p> <blockquote> <p>...we will assume that counterfactual outcomes are deterministic and that we have recorded data on every subject in a very large (perhaps hypothetical) super-population.</p> </blockquote> <p>The above seems to merely be referring to the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem" rel="nofollow noreferrer">Central Limit Theorem</a> and the <a href="https://en.wikipedia.org/wiki/Law_of_large_numbers" rel="nofollow noreferrer">Law of Large Numbers</a>. In other words, as the sample size <code>N</code> increases, from a frequentist perspective, your standard error around an <code>effect modifier</code> (or equally <code>causal risk ratio</code>, <code>risk difference</code>, etc.) estimate is shrinking to virtually zero; or from a Bayesian perspective, your <a href="https://en.wikipedia.org/wiki/Credible_interval" rel="nofollow noreferrer">credible interval</a> is collapsing to a single point. In other words, the posterior distribution becomes a point estimate. So in theory a deterministic value as opposed to a random variable.</p> <blockquote> <p>Is it assuming that each of the 1 billion are identical with respect to their outcomes and treatment only, but differ with respect to the covariates? Or is it assuming the individual is a summary measure of the 1 billion? My instinct is that the notion of the 1 billion is entertaining the fact we may draw many times without having a case where we have a lack of samples. I.e., small sample sizes result in more unstable estimates.</p> </blockquote> <p>I agree with your intuition. See also the side note on Pg. 
9:</p> <blockquote> <p>Technically, when <span class="math-container">$i$</span> refers to a specific individual, such as Zeus, <span class="math-container">$Y_i^a$</span> is not a random variable because we are assuming that individual counterfactual outcomes are deterministic.</p> </blockquote> <p>In addition, on Pg. 14 they note:</p> <blockquote> <p>However, for <strong>pedagogic</strong> reasons, we will continue to largely ignore random error until Chapter 10. Specifically, we will assume that counterfactual outcomes are deterministic</p> </blockquote> <p>So all of the slightly confusing language (around 1 observation representing 1 billion, etc.) is a simplification that ignores "random error" in these estimates and focuses attention on <a href="https://sciencing.com/difference-between-systematic-random-errors-8254711.html" rel="nofollow noreferrer">systematic biases</a> (errors) due to confounding, selection, and measurement.</p>
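The "quasi-infinite population" idea can be made concrete: as the sample grows, the standard error of, say, a risk difference collapses toward zero, which is what licenses treating the estimand as a deterministic quantity and worrying only about systematic bias. A sketch with made-up true risks:

```python
import numpy as np

rng = np.random.default_rng(8)
p_treated, p_control = 0.3, 0.2          # hypothetical true risks; true RD = 0.1

for n in (100, 10_000, 1_000_000):
    # Estimated risk difference from a sample of size n per arm
    rd = (rng.binomial(1, p_treated, n).mean()
          - rng.binomial(1, p_control, n).mean())
    # Analytic standard error of the risk difference
    se = np.sqrt(p_treated * (1 - p_treated) / n
                 + p_control * (1 - p_control) / n)
    print(n, round(rd, 3), round(se, 4))  # SE shrinks toward 0 as n grows
```

At n in the millions the random error is negligible next to any plausible confounding or measurement bias, which is the book's point in pretending each subject stands for a billion identical copies.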
796
causal inference
Finding causal Inference from sentiment analysis
https://stats.stackexchange.com/questions/553578/finding-causal-inference-from-sentiment-analysis
<p>I am conducting a sentiment analysis on thousands of social media posts by unemployed manufacturing workers to see how online sentiment of the group members I am analyzing has changed after an announcement of an economic policy program aimed at helping that group. Specifically, I am interested in moving beyond descriptive statistics of sentiment analysis, towards measuring whether there is a causal relationship between the feelings/sentiment of the group behavior online, before and after the policy's introduction.</p> <p>A sample of the dataset is shown below:</p> <pre><code>date &lt;S3: POSIXct&gt;   sentiment &lt;chr&gt;   year &lt;dbl&gt;   treatment_implementation &lt;chr&gt;
2011-12-01           neutral           2011         pre
2011-12-01           negative          2011         pre
2011-12-01           negative          2011         pre
2011-12-03           negative          2011         pre
2011-12-03           negative          2011         pre
2011-12-04           negative          2011         pre
2011-12-07           negative          2011         pre
2011-12-15           positive          2011         post
2011-12-20           positive          2011         post
2011-12-20           positive          2011         post
2011-12-22           positive          2011         post
2011-12-23           positive          2011         post
2011-12-27           positive          2011         post
2011-12-27           positive          2011         post
</code></pre> <p>What methods can I use to measure whether there is a causal relationship between my binary outcome variable: positive/negative sentiment, and the introduction of a government program aimed directly to support the group of jobless manufacturing workers.</p> <p>Is it possible to use logistic or multinomial regression in this case?</p>
797
causal inference
Is Sensitivity Analysis for Making Causal Inferences Only for Backdoor Adjustment?
https://stats.stackexchange.com/questions/607068/is-sensitivity-analysis-for-making-causal-inferences-only-for-backdoor-adjustmen
<p>I am wondering if sensitivity analysis for causal inference is only applicable when doing backdoor adjustment/selecting on observables.</p> <p>Conventionally, sensitivity analysis evaluates the threat of an unknown confounder to a causal estimate in observational studies. In studies where we select on the observables, this type of sensitivity analysis makes sense since adjusting for observable confounders is the adjustment method.</p> <p>However, in front-door adjustment methods (instrumental variables, regression discontinuity, etc.) confounders are not incorporated into the adjustment method. As a result, any sensitivity analysis that seeks to quantify the damage a hypothetical unknown confounder would cause does not seem applicable.</p> <p>Am I correct in this thinking? If so, are there alternative sensitivity analysis methods for front-door adjustment methods?</p>
798
causal inference
In causal inference in statistics, how do you interpret the consistency assumption in mathematical terms?
https://stats.stackexchange.com/questions/304799/in-causal-inference-in-statistics-how-do-you-interpret-the-consistency-assumpti
<p>In causal inference, the consistency assumption states that there are no multiple versions of treatment. Specifically, for the potential outcome $Y_i$ of unit $i$ and a binary treatment vector $\mathbf{Z}$, </p> <p>$$ Y_i(\mathbf{Z})=Y_i(\mathbf{Z'}) \ \ \forall \ \mathbf{Z},\mathbf{Z'}:\mathbf{Z}=\mathbf{Z'} $$ In the literature, it is said that <em>"This says that the mechanism used to assign the treatments does not matter and assigning the treatments in a different way does not constitute a different treatment."</em></p> <p>I am wondering how to make sense of this equation. What is it actually trying to say?</p>
<p>Let me use $X$ for the treatment, $Y$ for the observed outcome and $Y(x)$ for the potential outcome under $X = x$. </p> <p>Consistency means that for an individual $i$, his observed outcome $Y_i$ when $X_i = x$ is his potential outcome $Y_{i}(x)$. Or, more formally:</p> <p>$$X_i = x \implies Y_i(x) = Y_i$$</p> <p>When the treatment is binary ($X \in \{0,1\}$) consistency translates to the well-known equation:</p> <p>$$ Y_i = X_i Y_i(1) +(1-X_i)Y_i(0) $$</p> <p>In an informal way, that's what people intend to convey when stating that the "way" $X$ is assigned doesn't matter, that is, that there aren't multiple versions of the treatment. For if there were multiple potential outcomes for the same $x$, when you see $X_i = x$, then which potential outcome is the observed outcome $Y_i$? And if it doesn't matter how the treatment is assigned, then when $X_i = x$, $Y_i(x)$ is well defined and usually <em>assumed</em> to be equal to $Y_i$ (but note that, even without multiple versions of the treatment, you still <em>need</em> to assume that $X_i = x \implies Y_i(x) = Y_i$).</p> <p><strong>What would happen without consistency?</strong></p> <p>Consistency is what connects the potential outcomes with the observed data. That is, it's consistency that allows us to write things like:</p> <p>$$ E[Y(x)|X = x] = E[Y|X = x] $$</p> <p>This transforms expressions of counterfactual quantities $Y(x)$ into expressions of observed quantities $Y$.</p> <p>Without consistency, all of your potential outcomes data would be "missing". To make this clear, consider again a binary treatment. If consistency holds, when $X_i = 1$ you observe $Y_i = Y_i(1)$ and when $X_i = 0$ you observe $Y_i = Y_i(0)$. 
The other two potential outcomes are unobserved, so your potential outcomes table would look like this:</p> <p>\begin{array} {|r|r|r|r|} \hline &amp;Y_i(1) &amp; Y_i(0) \\ \hline X_i=1 &amp;Y_i &amp;unobserved\\ \hline X_i=0 &amp; unobserved &amp; Y_i\\ \hline \end{array}</p> <p>If consistency did not hold, this is what you would get --- you don't observe any potential outcome:</p> <p>\begin{array} {|r|r|r|r|} \hline &amp;Y_i(1) &amp; Y_i(0) \\ \hline X_i=1 &amp;unobserved &amp;unobserved\\ \hline X_i=0 &amp; unobserved &amp; unobserved\\ \hline \end{array}</p> <p><strong>Consistency, potential outcomes, axioms of counterfactuals, and structural causal models</strong></p> <p>In some of the potential outcomes literature, consistency is not explicitly defined, and casually considered together with other substantive assumptions such as SUTVA. In the axiomatization of counterfactuals, consistency is seen as a corollary of the axiom of composition. However, once you properly define a structural causal model (SCM) and you define counterfactuals as derived from interventional submodels, consistency is simply a natural consequence that automatically holds for all SCMs <a href="http://ftp.cs.ucla.edu/pub/stat_ser/R250.pdf" rel="noreferrer">(see Galles and Pearl</a> and also <a href="http://amzn.to/2hJPaxY" rel="noreferrer">Pearl's Causality, Chapter 7)</a>. </p> <p>Finally, whether consistency really "holds" in your model when compared to the real world is a practical/modeling issue. That is, to make any inference you always need consistency, so what you are questioning is not the rule itself, but the modeling assumptions. For example, do you think you have properly defined the treatment assignment $X$? Or are there other relevant features of the assignment of $X$ that would matter for the outcome but that you didn't model? These are the questions you need to think about to judge whether your model is a good approximation for the problem you are investigating.</p>
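The binary-treatment consistency equation above can be illustrated with a small simulation. This is a hedged sketch, not part of the original answer: the data-generating process, the constant treatment effect of 2, and the use of NumPy are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Both potential outcomes exist for every unit, but only one is ever observed.
y0 = rng.normal(0.0, 1.0, n)      # Y_i(0)
y1 = y0 + 2.0                     # Y_i(1), assuming a constant effect of 2
x = rng.binomial(1, 0.5, n)       # randomized binary treatment X_i

# Consistency: Y_i = X_i * Y_i(1) + (1 - X_i) * Y_i(0)
y = x * y1 + (1 - x) * y0

# Consistency links counterfactual to observed quantities:
# E[Y(1) | X = 1] = E[Y | X = 1], and likewise for X = 0.
assert np.allclose(y[x == 1], y1[x == 1])
assert np.allclose(y[x == 0], y0[x == 0])

# Under randomization, the observed difference in means recovers the effect (≈ 2).
print(y[x == 1].mean() - y[x == 0].mean())
```

Note that the simulation can only populate both potential-outcome columns because it is a simulation; with real data, the "unobserved" cells of the table above stay missing, and consistency is what lets us fill in the observed diagonal.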
799