In cluster analysis, how does Gaussian mixture model differ from K Means when we know the clusters are spherical?
In short, $k$-means can be viewed as the limiting case of Expectation-Maximization for spherical Gaussian mixture models as the trace of the covariance matrices goes to zero. What follows is a presentation of portions of sections 9.1 and 9.3 of Pattern Recognition and Machine Learning.

$K$-means

$K$-means seeks a binary assignment matrix $[r_{j,k}]$ with exactly one non-zero value in each row, one row for each of $N$ observations, and one column for each of $K$ clusters. The algorithm amounts to picking initial mean vectors $\mu_k$ and then alternating between the following two steps:

E-step: For each observation $j$, set $r_{j,k^*}=1$ and $r_{j, k} = 0$ for $k \neq k^*$, where $k^*$ is the index of the closest cluster center: \begin{align} k^* = \underset{k}{\text{argmin}}~ ||x_j - \mu_k||^2 \end{align}

M-step: For each cluster $k$, re-estimate the cluster center as the mean of the points assigned to that cluster: \begin{align} \mu_k^{\text{new}} = \frac{\sum_{j=1}^N r_{j,k}x_j}{\sum_{j=1}^N r_{j,k}} \end{align}

Expectation-Maximization for Gaussian Mixture Models

Next, consider the standard Expectation-Maximization steps for Gaussian mixture models, after picking initial mean vectors $\mu_k$, covariances $\Sigma_k$, and mixing coefficients $\pi_k$:

E-step: For each observation $j$, evaluate the "responsibility" of each cluster $k$ for that observation: \begin{align} r_{j,k} & = \frac{\pi_k \mathcal{N}(x_j | \mu_k, \Sigma_k)}{\sum_{i=1}^K\pi_i \mathcal{N}(x_j | \mu_i, \Sigma_i)} \end{align}

M-step: For each cluster $k$, re-estimate the parameters $\mu_k$, $\Sigma_k$, $\pi_k$ as weighted averages using the responsibilities as weights: \begin{align} \mu_k^{\text{new}} & = \frac{1}{\sum_{j=1}^N r_{j, k}} \sum_{j=1}^N r_{j,k} x_j \\ \Sigma_k^{\text{new}} & = \frac{1}{\sum_{j=1}^N r_{j, k}} \sum_{j=1}^N r_{j,k}( x_j - \mu_k^{\text{new}})(x_j - \mu_k^{\text{new}})^T \\ \pi_k^{\text{new}} & = \frac{\sum_{j=1}^N r_{j, k}}{N} \end{align}

Comparing these update equations to those for $K$-means, we see that in both, $r_{j,k}$ serves as a probability distribution over clusters for each observation. The primary difference is that in $K$-means, $r_{j,\cdot}$ is a distribution that gives zero probability to all but one cluster, while EM for GMMs gives non-zero probability to every cluster.

Now consider EM for Gaussians in which we treat the covariance matrices as known and of the form $\epsilon\textbf{I}$. Because $\mathcal{N}(x | \mu, \epsilon\textbf{I}) \propto \exp\left(-\frac{1}{2\epsilon}||x - \mu||^2\right)$, the E-step now computes the responsibilities as: \begin{align} r_{j,k} & = \frac{\pi_k \exp\left(-\frac{1}{2\epsilon}||x_j - \mu_k||^2\right)}{ \sum_{i=1}^K \pi_i \exp\left(-\frac{1}{2\epsilon}||x_j - \mu_i||^2\right) } \end{align}

Because of the exponentials, $r_{j, k}$ here approaches the $K$-means $r_{j, k}$ as $\epsilon$ goes to zero. Moreover, since we now treat the covariances $\Sigma_k$ as known, there is no need to re-estimate them; each is simply $\epsilon\textbf{I}$.
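To make the limiting behavior concrete, here is a small pure-Python sketch (my own illustration, not part of the original answer; the function name is mine) that evaluates the spherical-GMM responsibilities for one observation at shrinking values of $\epsilon$:

```python
import math

def responsibilities(x, centers, weights, eps):
    """Spherical-GMM E-step responsibilities r_{j,k} for one 1-D observation x,
    with covariance eps*I, so ||x - mu||^2 reduces to (x - mu)^2."""
    # Work in log space and subtract the max exponent for numerical stability.
    logs = [math.log(w) - (x - m) ** 2 / (2 * eps)
            for m, w in zip(centers, weights)]
    mx = max(logs)
    unnorm = [math.exp(l - mx) for l in logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

centers = [-1.0, 1.0]
weights = [0.5, 0.5]
x = 0.3  # closer to the center at +1

for eps in (1.0, 0.1, 0.01):
    r = responsibilities(x, centers, weights, eps)
    print(eps, [round(v, 4) for v in r])
# As eps shrinks, the responsibility concentrates on the nearest center,
# recovering the hard K-means assignment.
```

With $\epsilon=1$ the point is only softly assigned; by $\epsilon=0.01$ essentially all responsibility sits on the nearest center, which is exactly the $K$-means E-step.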
@ThomasLumley's answer is excellent. For a concrete difference, consider that the only thing you get from $k$-means is a partition. The output from fitting a GMM can include much more than that. For example, you can compute the probability a given point came from each of the different fitted components. A GMM can also fit and return overlapping clusters, whereas $k$-means necessarily imposes a hard break between clusters.
$K$-means can be derived as a maximum likelihood (ML) estimator of a fixed partition model with Gaussian distributions with equal and spherical covariance matrices. A fixed partition model has a parameter for every observation saying to which cluster it belongs. Note that this is not an i.i.d. model, because the distribution differs between observations that belong to different clusters. Also note that this is not a standard ML problem, because the number of parameters grows with the number of points, so standard asymptotic results for ML estimators do not hold. In fact, $K$-means is a counterexample to the claim that all ML estimators are consistent.

If you have one-dimensional data, 50% from a ${\cal N}(-1,1)$-distribution and 50% from a ${\cal N}(1,1)$-distribution, the true difference between the means is 2, but $K$-means will overestimate it, because for $n\to\infty$ it will assign all observations smaller than 0 to the lower-mean cluster and all larger than 0 to the higher-mean cluster. The estimated means will then be means of truncated Gaussians (e.g., on the lower side, the left part of the lower-mean Gaussian truncated at 0 plus the left part of the higher-mean Gaussian truncated at 0), not of the original Gaussians. See P. G. Bryant and J. Williamson, "Asymptotic behaviour of classification maximum likelihood estimates," Biometrika, 65 (1978), pp. 273-281.

The Gaussian mixture model models the data as i.i.d., with a probability of $\pi_k$ (using fkpate's notation) for each observation to have come from cluster $k$. It estimates the cluster means as weighted means, not assigning observations in a crisp manner to one of the clusters. In this way it avoids the problem explained above and is consistent as an ML estimator (in general this is problematic because of degeneration of the covariance matrices, though not if you assume them spherical and equal).

In practice, if you generate observations from a number of Gaussians with the same spherical covariance matrix and different means, $K$-means will therefore overestimate the distances between the means, whereas the ML estimator for the mixture model will not. The mixture estimator will be much slower, though, if you have a big dataset, because crisp point assignment makes the $K$-means algorithm much faster (if somewhat less stable, but you can repeat it umpteen times before the Gaussian mixture EM has finished).
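The truncation effect described above shows up readily in a quick simulation. The following pure-Python sketch (my own, not from the answer) draws the 50/50 mixture of ${\cal N}(-1,1)$ and ${\cal N}(1,1)$ and runs Lloyd's algorithm with $K=2$ in one dimension; the estimated gap between the centers comes out near 2.33 rather than the true 2.

```python
import random

random.seed(0)

# 1-D data: half from N(-1, 1), half from N(1, 1); true mean gap is 2.
n = 20000
data = [random.gauss(-1, 1) for _ in range(n // 2)] + \
       [random.gauss(1, 1) for _ in range(n // 2)]

# Plain Lloyd's algorithm with K = 2 in one dimension.
lo, hi = -0.5, 0.5
for _ in range(100):
    left = [x for x in data if abs(x - lo) <= abs(x - hi)]
    right = [x for x in data if abs(x - lo) > abs(x - hi)]
    new_lo, new_hi = sum(left) / len(left), sum(right) / len(right)
    if (new_lo, new_hi) == (lo, hi):
        break  # partition stabilized, so the means no longer change
    lo, hi = new_lo, new_hi

print(hi - lo)  # roughly 2.33: the gap between the means of the two
                # 0-truncated mixture halves, well above the true gap of 2
```

The fixed point is exactly the truncated-Gaussian situation from the text: each center converges to the mean of the mixture restricted to one side of the midpoint.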
Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?
In my experience, "insufficient evidence" is the least ambiguous and most commonly used way to describe a failure to reject $H_0$. The reasoning, in my mind, is that in statistics we hardly ever deal with absolutes. That said, this is more an interpretation of language. We can think of a test that fails to reject $H_0$ as having no evidence in its current state (given the current data, the specific test, and the set thresholds). The problem with this is that at first glance (to someone not too familiar with hypothesis testing, for instance) it glosses over the fact that our test is only as precise or correct as our data/test/threshold allow it to be. That is why I agree with you that "insufficient" is a better way of communicating a failure to reject, though this may be a difference in language between fields. One thing to note: I feel that your reasoning about switching $\alpha$ with regard to evidence is not entirely correct. A significance level is set before the test occurs and stays set; otherwise the test's conclusions become muddled. One way to gain more evidence is to collect more data related to what is being tested.
The sentence "... evidence to reject $H_0$" does not make much sense to me, because you either reject $H_0$ when $p\leq\alpha$ or you don't. It is your decision to reject or not reject. "Rejection" is not an inherent property of the $p$-value, because it requires an additional criterion set by the researcher. What makes more sense is to talk about the evidence against the null hypothesis provided by the $p$-value.

If we adopt the view$^{[1,2]}$ that the $p$-value is a continuous measure of compatibility between our data and the model (including the null hypothesis), it makes sense to talk about various degrees of evidence against $H_0$. Personally, I like the approach of Rafi & Greenland$^{[1]}$ of transforming the $p$-value into (Shannon) surprise as $s=-\log_2(p)$ (a.k.a. Shannon information). For an extensive discussion of the distinction between $p$-values for decisions and $p$-values as compatibility measures, see the recent paper by Greenland$^{[2]}$.

This provides an absolute scale on which to view the information that a specific $p$-value provides. A single fair coin toss provides $1$ bit of information, so a $p$-value of, say, $0.05$ provides $s=-\log_2(0.05)=4.32$ bits of information against the null hypothesis. In other words: a $p$-value of $0.05$ is roughly as surprising as seeing all heads in four tosses of a fair coin.

This approach makes it very clear that the evidence provided by a $p$-value is nonlinear. For example: a $p$-value of $0.10$ provides $3.32$ bits of information, whereas a $p$-value of $0.15$ provides $2.74$ bits. The first $p$-value thus provides roughly $21\%$ more evidence against $H_0$ than the second. In a second example, a $p$-value of $0.001$ provides roughly $132\%$ more evidence than a $p$-value of $0.051$, despite the absolute difference between them being the same as in the first example ($0.05$).
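The transformation is trivial to compute; here is a small Python sketch (the function name is mine) reproducing the numbers quoted above:

```python
import math

def surprise_bits(p):
    """S-value: Shannon surprise of a p-value, s = -log2(p)."""
    return -math.log2(p)

for p in (0.15, 0.10, 0.051, 0.05, 0.001):
    print(p, round(surprise_bits(p), 2))

# The evidence scale is nonlinear: 0.10 vs 0.15 differ by ~21% in bits,
# while 0.001 vs 0.051 differ by ~132%, despite the same 0.05 absolute gap.
ratio = surprise_bits(0.001) / surprise_bits(0.051)
print(round(ratio, 2))  # about 2.32, i.e. ~132% more bits
```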
Paper $[1]$ contains an illustration of this surprise scale. To answer the question: as long as the $p$-value is smaller than $1$, it provides some evidence against the null hypothesis, because it shows some incompatibility between the data and the model. To say "no evidence" would therefore not be entirely accurate.

References

$[1]$ Rafi, Z., & Greenland, S. (2020). Semantic and cognitive tools to aid statistical science: replace confidence and significance by compatibility and surprise. BMC Med Res Methodol, 20, 244. https://doi.org/10.1186/s12874-020-01105-9

$[2]$ Greenland, S. (2023). Divergence versus decision P-values: A distinction worth making in theory and keeping in practice: Or, how divergence P-values measure evidence even when decision P-values do not. Scand J Statist, 50(1), 54-88. https://doi.org/10.1111/sjos.12625
It might be helpful to distinguish between the "objective" and "subjective" parts of statistical testing. You assume a null hypothesis $H_0$, observe data, compute a statistic, and obtain a $p$-value. You might not have used the "optimal" statistic, obtained the sharpest probabilistic bounds, etc., but there is a fixed process that transforms the data into a $p$-value based on $H_0$. At this point, the $p$-value is your "evidence," and its strength is inversely proportional to its magnitude. Now, "rejecting" the null hypothesis based on a pre-chosen value of $\alpha$ is somewhat subjective, as it is based on your intuition about "how much evidence is enough evidence". Picking $\alpha$ after seeing the $p$-value is problematic because you willingly influence the outcome by varying $\alpha$, i.e., you are able to "move the goalposts". Ultimately, I'd agree with 392781's answer that there is "insufficient evidence," provided you have defined in advance what "sufficient evidence" would look like, in the form of picking $\alpha$. Still, it's helpful to remember that "evidence" is not a perfect word here, because it is often used to refer to discrete, objective reasoning rather than probabilistic heuristics.
This is to some extent similar to some other answers, but I feel it is still worth saying. What I teach (and have seen elsewhere) is to either test at a fixed level $\alpha$ or to use more graded "evidence language". If we fix a level, I'd just say "we do not reject at level $\alpha$" (or we do, of course), or maybe, if you want to bring the term evidence in, "there is no significant evidence at level $\alpha$" (unless there is). Alternatively, I'd interpret test results in a non-binary way, saying there is very strong / strong / modest / weak / no evidence for $p<0.001$ / $p<0.01$ / $p<0.05$ / $p<0.1$ / $p\geq 0.1$, respectively. I don't like the term "insufficient", as it seems to suggest that we wanted to reject but failed to do so (same with the wording "fail to reject" in the question), whereas I think a scientist should be open to any result rather than hoping for significance (even though in many cases it may arguably be more honest to say something like "I wanted significance so much but didn't get it, boohoo", in which case the researcher probably better says it this way so that people know what to think of the researcher's neutrality...).
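The graded wording amounts to a simple lookup over the cutpoints just given; a minimal sketch (function name is mine, cutpoints are the answer's):

```python
def evidence_label(p):
    """Map a p-value to graded evidence language using the
    0.001 / 0.01 / 0.05 / 0.1 cutpoints described above."""
    if p < 0.001:
        return "very strong"
    if p < 0.01:
        return "strong"
    if p < 0.05:
        return "modest"
    if p < 0.1:
        return "weak"
    return "no"

for p in (0.0004, 0.03, 0.2):
    print(p, evidence_label(p) + " evidence against H0")
```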
The two sentences have nearly the same meaning. The phrase with "insufficient" just places more stress on the idea that there is a gradual range of evidence and that a "boundary for the amount of evidence" has not been passed. The other phrase can be regarded as a shortened/abbreviated version saying more or less the same thing: "we have no evidence (that is sufficient)". It carries the same meaning, just stated in a different way.
Unless the experiment or study result showed a parameter exactly equal to the null-hypothesis value, you do have some evidence against the null. If you have established a threshold for what p-value counts as "sufficient" evidence, and the p-value is greater than your threshold, then you have "insufficient" evidence. The p-value is really a feature of the data.

It was Neyman and Pearson who formulated hypothesis testing as an accept-reject paradigm. Up to that point (the 1940s?) the Fisherian formalism was to report the p-value and let it speak for itself. Fisher attacked the N-P formalism vociferously. And that didn't end the argument, because the Bayesians were yet to be heard.

I think it's interesting that this is generating some discrepant responses: so far 6 upvotes and 7 downvotes. I thought it was a well-established principle that the null is almost never "true".
Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?
The accept/reject procedure of a hypothesis test is only designed to yield the long-run error rate properties of the test. It deals with 'evidence' in the data only vaguely, and only to the extent that it gives a decision that the evidence is strong enough (according to the pre-data-specified level of alpha) to require the null hypothesis to be discarded, or not strong enough. It does not, by itself or by design, provide for any evidential assessment beyond that. However...

The hypothesis test method published by Neyman & Pearson did not depend on a p-value (and did not provide one), but modern usage of hypothesis tests almost always involves comparing a p-value to a threshold rather than looking to see if the test statistic falls in a "critical region". It is the p-value that lets you make statements about the strength of the evidence in the data against the null hypothesis, according to the statistical model. The p-value is best understood as a product of a (neo-)Fisherian significance test rather than a hypothesis test or the hybrid thing often called 'NHST'. To some the distinction seems subtle and rather pointless, but if you want to know what the testing procedures allow you to know and the types of inferences that they support, I think the distinction is essential. See here for my extended take on the topic: https://link.springer.com/chapter/10.1007/164_2019_286

If you want to talk of evidence and to persist with the conventional accept/reject approach, then you need to know that, depending on the alpha that you choose and the experimental design, you may be rejecting the null hypothesis with fairly weak evidence or with very strong evidence.
Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?
My preference is to use "no evidence".

The testing in a classical hypothesis test is a binary decision, so in this context I prefer to use "no evidence" vs "evidence". It is best not to conflate the decision to reject the null hypothesis (which is fixed by the data and has no uncertainty) with the underlying truth or falsity of the hypotheses (which is uncertain). For that reason I would recommend you avoid talking about "evidence to reject" and instead use wording that either refers to evidence in favour of the alternative hypothesis, or the actual rejection decision that was made:

- We found no evidence in favour of $H_A$ at the significance level $\alpha$.
- We do not reject $H_0$ in favour of $H_A$ at the significance level $\alpha$.
- We found evidence in favour of $H_A$ at the significance level $\alpha$.
- We reject $H_0$ in favour of $H_A$ at the significance level $\alpha$.

Alternatively, you can build in the "statistically significant" description:

- We found no statistically significant evidence in favour of $H_A$ (at the $\alpha$ level).
- We found statistically significant evidence in favour of $H_A$ (at the $\alpha$ level).

Alternatively, in many contexts it is more sensible to just state the relevant p-value and characterise the evidence without use of a specific significance level:$^\dagger$

- We found no evidence in favour of $H_A$ ($p=0.3255$).
- We found weak evidence in favour of $H_A$ ($p=0.0341$).
- We found strong evidence in favour of $H_A$ ($p=0.0076$).
- We found very strong evidence in favour of $H_A$ ($p=0.0008$).

The main reason I prefer not to use "insufficient evidence" is that it suggests some evidence in favour of the alternative hypothesis when that may not be the case. For example, if you have a p-value of $p=0.3255$, that means that if the null hypothesis is true, almost one-third of the time you would see a result that is at least that conducive to the alternative hypothesis. My view is that this is accurately characterised as "no evidence", not "insufficient evidence to reject".

$^\dagger$ Here I use my own assessments of the strength of evidence, to wit: "weak" for a p-value between 0.01 and 0.05, "strong" for a p-value between 0.001 and 0.01, "very strong" for a p-value of 0.001 or lower. Others may take a different view of the appropriate correspondence, but so long as you state the p-value, it should be fine.
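For reporting, the correspondence above is easy to encode. Here is a minimal sketch; the function name is made up, and the cutoffs (0.05, 0.01, 0.001) are just the conventions stated in this answer, not universal standards:

```python
def describe_evidence(p):
    """Map a p-value to the evidence wording used in this answer.

    The cutoffs (0.05, 0.01, 0.001) are the author's own conventions;
    other analysts may draw the lines differently.
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p-value must lie in [0, 1]")
    if p >= 0.05:
        return "no evidence"
    if p >= 0.01:
        return "weak evidence"
    if p >= 0.001:
        return "strong evidence"
    return "very strong evidence"

# the four example p-values from the text
for p in (0.3255, 0.0341, 0.0076, 0.0008):
    print(p, "->", describe_evidence(p))
```

Whatever cutoffs you adopt, the important part is reporting the p-value itself alongside the verbal description.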
Widespread overfitting in health domain research?
You are correct that overfitting is a rampant problem in health research, just as it is in all other fields in which sample sizes are not huge. One of the biggest mistakes being made in recent years is to assume that machine learning algorithms somehow fix this problem. While algorithms can be tuned with cross-validation to not overfit, many, such as random forests, typically result in massive overfitting.

It is not correct to use one method of supervised learning to select features to promote for use in another method. The second method has lost the context and does not know how to apply the proper amount of shrinkage. In addition, the first method has a very low chance of finding the "right" features. For example, many practitioners think that the lasso finds the right features when in fact it usually fails miserably at that task. I go into many of these issues at length in RMS and BBR.

The most general-purpose, safest, and interpretable solution is heavy use of unsupervised learning (sparse principal components; regular principal components after doing variable clustering, etc.). This allows either traditional or ML supervised learning to be used on the reduced, combined features with much stability and without much overfitting.

Data are not capable of informing us about which variables are important, so we should stop trying to use data to do that. A simple simulation in RMS shows that. With a limited number of candidate features, a high signal:noise ratio, and zero collinearity, stepwise variable selection still has a very low chance of selecting the right variables. Same with methods such as the lasso. If it can't work in an ideal setting, it can't work on real data.
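The "ideal setting" claim can be checked with a small Monte Carlo along these lines. The sketch below is my own hypothetical setup (the sample sizes, the greedy search, and the AIC stopping rule are illustrative choices, not the RMS simulation): orthogonal predictors, a strong signal in three of them, and we count how often forward selection recovers exactly the true set. Even in this friendly setting the printed fraction tends to be well below 1, because noise predictors sneak in.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_select(X, y):
    """Greedy forward selection with an AIC stopping rule --
    a toy stand-in for stepwise variable selection."""
    n, p = X.shape

    def aic(cols):
        if cols:
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ beta) ** 2)
        else:
            rss = np.sum(y ** 2)
        return n * np.log(rss / n) + 2 * len(cols)

    selected, best = [], aic([])
    while len(selected) < p:
        score, j = min((aic(selected + [j]), j)
                       for j in range(p) if j not in selected)
        if score >= best:
            break
        selected.append(j)
        best = score
    return sorted(selected)

true_set = [0, 1, 2]
n_sim, hits = 100, 0
for _ in range(n_sim):
    X = rng.standard_normal((50, 30))                 # zero collinearity
    y = X[:, true_set].sum(axis=1) + 0.5 * rng.standard_normal(50)  # high SNR
    hits += forward_select(X, y) == true_set
exact_recovery = hits / n_sim
print(f"fraction of runs selecting exactly the right variables: {exact_recovery:.2f}")
```

The strong true predictors are essentially always found; the failure mode is the inclusion of spurious ones, which is exactly why a subsequently fitted model on the "selected" features inherits overfitting.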
Widespread overfitting in health domain research?
It is common to find flawed data analysis in health research, not only flawed machine learning analysis but also flawed standard statistical analysis.

Cross-validation just helps you to estimate out-of-sample prediction; it won't help you to correct standard errors or p-values of individual coefficients in a model. You can use the bootstrap for that, where feature elimination and model selection steps are repeated in each bootstrap repetition.

Machine learning is not magic. If you are afraid of over-fitting some standard statistical model, it won't help to use an even more flexible model. Various feature-importance metrics are not improvements on your standard statistical tools but rather bad approximations. Usually they just tell you what is happening in your model, but they cannot be used to make inferences about effects in the population. Also, the goal of a statistical model is not just to find any relationship between variables but to estimate an effect of a variable while correcting for other variables or other known structures in your data.

Backward selection is not a recommended tool, and statisticians actively warn against it. If you care about valid statistical inference, then the solution is not to exchange bad statistics for machine learning but to use good statistics. I am not saying that machine learning doesn't have its place in health research, or that there cannot be interesting research questions answered using ML methods, but it is not an automatic solution to statistical problems.
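The point about repeating the selection steps inside each bootstrap draw can be sketched as follows. Everything here is hypothetical (the data, the crude |t| >= 2 screening step, the sample sizes); the thing to notice is that the resampled pipeline includes the selection step, so the bootstrap spread reflects selection uncertainty too, which naive post-selection standard errors ignore.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical data set: 100 subjects, 5 candidate predictors,
# only the first one truly matters
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X[:, 0] + rng.standard_normal(n)

def fit_with_selection(Xb, yb, thresh=2.0):
    """Toy pipeline: fit OLS, drop predictors with |t| < thresh,
    refit on the survivors. Dropped coefficients are reported as 0."""
    beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
    resid = yb - Xb @ beta
    sigma2 = resid @ resid / (len(yb) - Xb.shape[1])
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xb.T @ Xb)))
    keep = np.abs(beta / se) >= thresh
    out = np.zeros(Xb.shape[1])
    if keep.any():
        beta_k, *_ = np.linalg.lstsq(Xb[:, keep], yb, rcond=None)
        out[keep] = beta_k
    return out

# bootstrap the *entire* pipeline, selection step included
B = 500
boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = fit_with_selection(X[idx], y[idx])

se_boot = boot.std(axis=0)   # SEs that account for the selection step
print("bootstrap SEs per coefficient:", np.round(se_boot, 3))
```

Bootstrapping only the final refit, without redoing the screening, would understate the variability and reproduce the inference problem this answer warns about.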
Widespread overfitting in health domain research?
As a complement to Frank Harrell's excellent answer, there are now a number of studies that basically find exactly what one would expect:

- Evidence of Inflated Prediction Performance: A Commentary on Machine Learning and Suicide Research
- A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models
- Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal

While not fixing the problem, appropriate (especially: external) evaluation techniques at least help one be aware of it, as Frank also argued in this excellent paper. Finally, as a shameless self-plug, we go into a number of the related technical challenges in a recent survey of ours concerning responsible ML for medicine.
Why are parametric tests more powerful than non-parametric tests?
This answer is mostly going to reject the premises in the question. I'd have made it a comment calling for a rephrasing of the question so as not to rely on those premises, but it's much too long, so I guess it's an answer.

Why are parametric tests more powerful than non-parametric tests?

As a general statement, the title premise is false. Parametric tests are not in general more powerful than nonparametric tests. Some books make such general claims, but it makes no sense unless we are very specific about which parametric tests and which nonparametric tests under which parametric assumptions. In fact, it's typically only true if we specifically choose the circumstances under which a parametric test has the highest power relative to any other test -- and even then, there may often be nonparametric tests that have equivalent power in very large samples (with small effect sizes).

Is the word choice of "power" the same as statistical power?

Yes. However, to compute power we need to specify a precise set of assumptions and a specific alternative.

I don't understand how this relates to statistical tests based on the normal distribution specifically.

Neither the term "parametric" nor "nonparametric" relates specifically to the normal distribution. See the opening paragraph here: https://en.wikipedia.org/wiki/Parametric_statistics

Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modeled by a probability distribution that has a fixed set of parameters.$^{[1]}$ Conversely a non-parametric model does not assume an explicit (finite-parametric) mathematical form for the distribution when modeling the data. However, it may make some assumptions about that distribution, such as continuity or symmetry.

Some textbooks (particularly ones written for students in some application areas, typically by academics in those areas) get this definition quite wrong.
Beware; in my experience, if this term is misused, much else will tend to be wrong as well.

Can we make a true statement that says something like what's in your question?

Yes, but it requires heavy qualification. If we use the uniformly most powerful test (should such a test exist) under some specific distributional assumption, and that distributional assumption is exactly correct, and all the other assumptions hold, then a nonparametric test will not exceed that power (otherwise the parametric test would not have been uniformly most powerful after all).

However -- in spite of stacking the deck in favour of the parametric test like that -- in many cases you can find a nonparametric test that has the same large-sample power in exactly that stacked-deck situation; it just won't be one of the common rank-based tests you're likely to have seen before.

What we're doing in the parametric case is choosing a test statistic which has all the information in the statistic about the difference from the null, given the distributional assumption and the specific form of alternative. If you optimize power under some set of assumptions, obviously you can't beat it under those assumptions, and that's the situation we're in.

Conover's book Practical Nonparametric Statistics has a section discussing tests with an asymptotic relative efficiency (ARE) of 1, relative to tests that assume normality -- where the ARE is computed under that normal assumption. He focuses there on normal scores tests (score-based rank tests which I would tend to avoid in most typical situations for other reasons), but it does help to illustrate that the claimed advantages for parametric tests may not always be so clear. It's the next section (on permutation tests, under "Fisher's Method of Randomization") where I tend to focus.

In any case, such stacking of the deck in favour of the parametric assumption still doesn't universally beat nonparametric tests.
Of course, in a real-world testing situation such neatly 'stacked decks' don't occur. The parametric model is not a fact about our real data, but a model -- a convenient approximation. As George Box put it, "All models are wrong."

In this case the questions we would want to ask are (a) "is there a nonparametric test that's essentially as powerful as this parametric test in the situation where the parametric assumption holds?" (to which the answer is often 'yes') and (b) "how far do we need to modify the exact parametric assumption before it is less powerful than some suitable nonparametric test?" (to which the answer is often "hardly at all"). In that case, if you don't know which of the two sets of circumstances you're in, why would you prefer the parametric test?

Let me address a common test. Consider the two-sample equal-variance t-test, which is uniformly most powerful for a one-sided test of a shift in mean when the population is exactly normal.

(a) Is it more powerful than every nonparametric test? Well, no, in the sense that there are nonparametric tests whose asymptotic relative efficiency is 1 (that is, if you look at the ratio of sample sizes required to achieve the same power at a given significance level, that ratio goes to 1 in large samples); specifically, there are permutation tests (e.g. based on the same statistic) with this property. The asymptotic power is also a good guide to the relative power at typical sample sizes (if you make sure the tests are being performed at the same actual significance level).

(b) Do you need to modify the situation much before some non-parametric test has better power? As I suggested above, in this location-test-under-normality case, hardly at all.
Even if we restrict consideration to just the most commonly used rank tests (which limits our potential power), you don't need to make the distribution very much more heavy-tailed than the normal before the Wilcoxon-Mann-Whitney test typically has better power. If we're allowed to choose something with better power at the normal (though the Wilcoxon-Mann-Whitney has excellent performance there), the advantage can kick in even quicker.

It can be extremely hard to tell whether you're sampling from a population with a very slightly heavier tail than the one you assumed, so having slightly better power (at best) in a situation you cannot be confident holds may be an extremely dubious advantage. In any case you should not try to tell which situation you're in by looking at the sample you're conducting the test on (at least not if it will affect your choice of test), since that data-based test choice will impact the properties of your subsequently chosen test.
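To make the permutation-test point concrete, here is a minimal sketch of a one-sided two-sample permutation test using the difference in sample means, which orders the permuted datasets the same way as the pooled t-statistic when the group sizes are fixed. The data and Monte Carlo settings are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

def perm_test(x, y, n_perm=2000):
    """One-sided permutation p-value for H_A: mean(x) > mean(y),
    using the difference in sample means as the test statistic."""
    obs = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        z = rng.permutation(pooled)
        count += z[:n].mean() - z[n:].mean() >= obs
    # add-one adjustment keeps the estimated p-value away from exactly zero
    return (count + 1) / (n_perm + 1)

x = rng.normal(1.5, 1.0, size=30)   # sample with a genuine mean shift
y = rng.normal(0.0, 1.0, size=30)
p_val = perm_test(x, y)
print(f"permutation p-value: {p_val:.4f}")
```

Nothing in the procedure assumes normality; its significance level is exact under the null of exchangeability, yet at the normal it shares the asymptotic efficiency of the t-test it mimics.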
Why are parametric tests more powerful than non-parametric tests?
This answer is mostly going to reject the premises in the question. I'd have made it a comment calling for a rephrasing of the question so as not to rely on those premises, but it's much too long, so
Why are parametric tests more powerful than non-parametric tests? This answer is mostly going to reject the premises in the question. I'd have made it a comment calling for a rephrasing of the question so as not to rely on those premises, but it's much too long, so I guess it's an answer. Why are parametric tests more powerful than non-parametric tests? As a general statement, the title premise is false. Parametric tests are not in general more powerful than nonparametric tests. Some books make such general claims but it makes no sense unless we are very specific about which parametric tests and which nonparametric tests under which parametric assumptions, and we find that in fact it's typically only true if we specifically choose the circumstances under which a parametric test has the highest power relative to any other test -- and even then, there may often be nonparametric tests that have equivalent power in very large samples (with small effect sizes). Is the word choice of "power" the same as statistical power? Yes. However, to compute power we need to specify a precise set of assumptions and a specific alternative. I don't understand how this relates to statistical tests based on the normal distribution specifically. Nothing about the terms "parametric" nor "nonparametric" relate specifically to the normal distribution. see the opening paragraph here: https://en.wikipedia.org/wiki/Parametric_statistics Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modeled by a probability distribution that has a fixed set of parameters.$^{[1]}$ Conversely a non-parametric model does not assume an explicit (finite-parametric) mathematical form for the distribution when modeling the data. However, it may make some assumptions about that distribution, such as continuity or symmetry. 
Some textbooks (particularly ones written for students in some application areas, typically by academics in those areas) get this definition quite wrong. Beware; in my experience, if this term is misused, much else will tend to be wrong as well. Can we make a true statement that says something like what's in your question? Yes, but it requires heavy qualification. If we use the uniformly most powerful test (should such a test exist) under some specific distributional assumption, and that distributional assumption is exactly correct, and all the other assumptions hold, then a nonparametric test will not exceed that power (otherwise the parametric test would not have been uniformly most powerful after all). However - in spite of stacking the deck in favour of the parametric test like that - in many cases you can find a nonparametric test that has the same large sample power in exactly that stacked-deck situation -- it just won't be one of the common rank-based tests you're likely to have seen before. What we're doing is in the parametric case choosing a test statistic which has all the information in the statistic about the difference from the null, given the distributional assumption and the specific form of alternative. If you optimize power under some set of assumptions, obviously you can't beat it under those assumptions, and that's the situation we're in. Conover's book Practical Nonparametric Statistics has a section discussing tests with an asymptotic relative efficiency (ARE) of 1, relative to tests that assume normality. This ARE is while under that normal assumption. He focuses there on normal scores tests (score-based rank-tests which I would tend to avoid in most typical situations for other reasons), but it does help to illustrate that the claimed advantages for parametric tests may not always be so clear. It's the next section (on permutation tests, under "Fishers Method of Randomization") where I tend to focus. 
In any case, such stacking of the deck in favour of the parametric assumption still doesn't universally beat nonparametric tests.

Of course, in a real-world testing situation such neatly 'stacked decks' don't occur. The parametric model is not a fact about our real data, but a model -- a convenient approximation. As George Box put it, all models are wrong.

In this case the questions we would want to ask are (a) "is there a nonparametric test that's essentially as powerful as this parametric test in the situation where the parametric assumption holds?" (to which the answer is often 'yes') and (b) "how far do we need to modify the exact parametric assumption before it is less powerful than some suitable nonparametric test?" (which is often "hardly at all"). In that case, if you don't know which of the two sets of circumstances you're in, why would you prefer the parametric test?

Let me address a common test. Consider the two-sample equal-variance t-test, which is uniformly most powerful for a one-sided test of a shift in mean when the population is exactly normal.

(a) Is it more powerful than every nonparametric test? Well, no, in the sense that there are nonparametric tests whose asymptotic relative efficiency is 1 (that is, if you look at the ratio of sample sizes required to achieve the same power at a given significance level, that ratio goes to 1 in large samples); specifically there are permutation tests (e.g. based on the same statistic) with this property. The asymptotic power is also a good guide to the relative power at typical sample sizes (if you make sure the tests are being performed at the same actual significance level).

(b) Do you need to modify the situation much before some nonparametric test has better power? As I suggested above, in this location-test-under-normality case, hardly at all.
Even if we restrict consideration to just the most commonly used rank tests (which is limiting our potential power), you don't need to make the distribution very much more heavy-tailed than the normal before the Wilcoxon-Mann-Whitney test typically has better power. If we're allowed to choose something with better power at the normal (though the Wilcoxon-Mann-Whitney has excellent performance there), it can kick in even quicker. It can be extremely hard to tell whether you're sampling from a population with a very slightly heavier tail than the one you assumed, so having slightly better power (at best) in a situation you cannot be confident holds may be an extremely dubious advantage. In any case you should not try to tell which situation you're in by looking at the sample you're conducting the test on (at least not if it will affect your choice of test), since that data-based test choice will impact the properties of your subsequently chosen test.
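The point about the Wilcoxon-Mann-Whitney test overtaking the t-test under slightly heavier tails can be sketched in a short simulation. This is a rough illustration of my own, not code from the answer: the helper functions, the choice of n = 30 per group, the shifts, and the use of a t(3) alternative are all assumptions made for the sketch, and both tests use large-sample normal approximations rather than exact critical values.

```python
import numpy as np

rng = np.random.default_rng(1)

def reject_t(x, y, crit=2.0):
    # Pooled two-sample t statistic; crit ~ two-sided 5% critical value for df = 58
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    t = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))
    return abs(t) > crit

def reject_wmw(x, y, crit=1.96):
    # Wilcoxon-Mann-Whitney via the normal approximation to U (no ties here)
    nx, ny = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1
    u = ranks[:nx].sum() - nx * (nx + 1) / 2
    z = (u - nx * ny / 2) / np.sqrt(nx * ny * (nx + ny + 1) / 12)
    return abs(z) > crit

def power(sampler, test, shift, n=30, nsim=4000):
    # Monte Carlo power: fraction of simulated two-sample datasets rejected
    return float(np.mean([test(sampler(n) + shift, sampler(n))
                          for _ in range(nsim)]))

normal = rng.standard_normal
heavy = lambda n: rng.standard_t(3, n)  # symmetric, but heavier-tailed than normal

pt_norm = power(normal, reject_t, 0.5)    # t-test at the normal: near-optimal
pw_norm = power(normal, reject_wmw, 0.5)  # WMW loses only a little here
pt_heavy = power(heavy, reject_t, 0.7)    # t-test under t(3) tails
pw_heavy = power(heavy, reject_wmw, 0.7)  # WMW typically wins clearly here
```

Under normality the two powers come out close (the WMW test's ARE at the normal is about 0.955), while under the t(3) alternative the rank test's advantage is substantial, consistent with the claim above.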
Why are parametric tests more powerful than non-parametric tests?
You apply parametric tests under the assumption that the parametric model is right. This always greatly constrains the set of possibilities you are considering; hence the power. Consider parametric bootstrapping, where you constrain all possible distributions to a particular family, such as the normal. So instead of the infinite set of all possible distributions, you are only looking at Gaussians with just two parameters. Naturally, the tests you can come up with are going to be sharper. Remember that the power comes from the assumptions, and unfortunately, assumptions are often wrong. If your assumption is wrong, then the power evaporates. No free lunch here.
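Both halves of this answer can be shown numerically. The sketch below is my own illustration with made-up settings (estimating a 90th percentile, n = 40, normal vs lognormal data): when the normal assumption holds, the model-based estimate is less variable than the model-free sample quantile; when the assumption is wrong, the "sharpness" turns into bias.

```python
import numpy as np

rng = np.random.default_rng(7)
z90 = 1.2816  # 90th percentile of the standard normal

# Case 1: the normal assumption is correct
est_par, est_npar = [], []
for _ in range(4000):
    data = rng.normal(10, 2, size=40)
    est_par.append(data.mean() + z90 * data.std(ddof=1))  # uses the normal model
    est_npar.append(np.quantile(data, 0.9))               # model-free
var_par, var_npar = np.var(est_par), np.var(est_npar)

# Case 2: the data are lognormal, but we still (wrongly) assume normality
true_q = np.exp(z90)  # true 90th percentile of a standard lognormal
bias_par, bias_npar = [], []
for _ in range(4000):
    data = rng.lognormal(0, 1, size=40)
    bias_par.append(data.mean() + z90 * data.std(ddof=1) - true_q)
    bias_npar.append(np.quantile(data, 0.9) - true_q)
```

With the assumption right, the constrained (two-parameter) estimator wins on variance; with it wrong, its systematic error dwarfs that of the model-free estimate.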
Why are parametric tests more powerful than non-parametric tests?
Parametric methods can be more powerful than non-parametric in some circumstances, but are not universally so. Even when the circumstances most strongly favour the parametric approach, the power advantage is often minor or even trivial.

When parametric methods have an advantage in power it comes from one or both of two things:

- more information in the statistical model (e.g. knowing the distribution type of the population can allow the sample estimates to be more efficiently used);
- more information extracted from the data (e.g. when it is appropriate, using the actual numbers is more informative than using ranks).

It is important to know that when the statistical model assumptions are invalid then the parametric methods may have less power than a well-chosen non-parametric approach.
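The first point, and the final caveat, can be seen in a classic estimator comparison. This is my own quick numeric sketch (hypothetical settings, pure NumPy): under normality the sample mean exploits the model's information and beats the median, but under heavier tails the ordering reverses.

```python
import numpy as np

rng = np.random.default_rng(0)
sims, n = 5000, 50

# Under normality: mean is the efficient estimator of the center
normal = rng.standard_normal((sims, n))
v_mean_norm = normal.mean(axis=1).var()
v_med_norm = np.median(normal, axis=1).var()  # ratio tends to pi/2 ~ 1.57

# Under heavy tails (t with 3 df): the "robust" median now wins
heavy = rng.standard_t(3, (sims, n))
v_mean_heavy = heavy.mean(axis=1).var()
v_med_heavy = np.median(heavy, axis=1).var()
```

The same trade-off drives test power: the estimator (or statistic) that leans on the model is better exactly when the model is right, and can be worse when it isn't.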
Probability that number of heads exceeds sum of die rolls
Another way is by simulating a million match-offs between $X$ and $Y$ to approximate $P(X > Y) = 0.9907\pm 0.0002.$ [Simulation in R.]

set.seed(825)
d = replicate(10^6, sum(sample(1:6, 100, rep=T)) - rbinom(1, 600, .5))
mean(d > 0)
[1] 0.990736
2*sd(d > 0)/1000
[1] 0.0001916057   # aprx 95% margin of simulation error

Notes per @AntoniParellada's Comment: In R, the function sample(1:6, 100, rep=T) simulates 100 rolls of a fair die; the sum of this simulates $X$. Also rbinom is R code for simulating a binomial random variable; here it's $Y.$ The difference is $D = X - Y.$ The procedure replicate makes a vector of a million differences d. Then (d > 0) is a logical vector of a million TRUEs and FALSEs, the mean of which is its proportion of TRUEs--our Answer. Finally, the last statement gives the margin of error of a 95% confidence interval of the proportion of TRUEs (using 2 instead of 1.96), as a reality check on the accuracy of the simulated Answer. [With a million iterations one ordinarily expects 2 or 3 decimal places of accuracy for probabilities--sometimes more for probabilities so far from 1/2.]
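For readers more at home in Python, the same match-off simulation can be sketched with NumPy. This is my translation, not the answer's code; I use fewer iterations (10^5) to keep the vectorized dice matrix small in memory, so the margin of error is a bit wider.

```python
import numpy as np

rng = np.random.default_rng(825)
sims = 10**5

# X: totals of 100 fair dice; Y: heads in 600 fair flips
dice_totals = rng.integers(1, 7, size=(sims, 100)).sum(axis=1)
heads = rng.binomial(600, 0.5, size=sims)

win = dice_totals > heads
p_hat = win.mean()                    # estimate of P(X > Y), near 0.9908
moe = 2 * win.std() / np.sqrt(sims)   # aprx 95% margin of simulation error
```

As in the R run, the estimate lands within simulation error of the exact value 0.99079.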
Probability that number of heads exceeds sum of die rolls
It is possible to do exact calculations. For example in R:

rolls <- 100
flips <- 600
ddice <- rep(1/6, 6)
for (n in 2:rolls) {
  ddice <- (c(0,ddice,0,0,0,0,0) + c(0,0,ddice,0,0,0,0) + c(0,0,0,ddice,0,0,0) +
            c(0,0,0,0,ddice,0,0) + c(0,0,0,0,0,ddice,0) + c(0,0,0,0,0,0,ddice))/6
}

sum(ddice * (1-pbinom(1:flips, flips, 1/2)))   # probability coins more
# 0.00809003
sum(ddice * dbinom(1:flips, flips, 1/2))       # probability equality
# 0.00111972
sum(ddice * pbinom(0:(flips-1), flips, 1/2))   # probability dice more
# 0.99079025

with this last figure matching BruceET's simulation. The interesting parts of the probability mass functions look like this (coin flips in red, dice totals in blue)
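The same exact computation is easy to replicate in Python. The sketch below is my own translation of the R logic above: the dice PMF is built by repeated convolution, and the Binomial(600, 1/2) PMF is built from log-gamma so that no statistics library is needed.

```python
import numpy as np
from math import lgamma, log

# PMF of the sum of 100 fair dice via repeated convolution
die = np.full(6, 1/6)        # faces 1..6
dice = np.array([1.0])       # point mass at 0 (sum of zero dice)
for _ in range(100):
    dice = np.convolve(dice, die)
# now dice[k] = P(total = k + 100), for k = 0..500

# Binomial(600, 1/2) PMF via log-gamma, then its CDF
k = np.arange(601)
logpmf = np.array([lgamma(601) - lgamma(i + 1) - lgamma(601 - i)
                   for i in k]) - 600 * log(2)
coin_pmf = np.exp(logpmf)
coin_cdf = np.cumsum(coin_pmf)   # P(heads <= j) at index j

totals = np.arange(100, 601)
p_dice_more = float(dice @ coin_cdf[totals - 1])  # sum_s P(X=s) P(Y <= s-1)
p_equal = float(dice @ coin_pmf[totals])          # sum_s P(X=s) P(Y = s)
```

This reproduces the R figures: about 0.99079025 for the dice winning and 0.00111972 for a tie.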
Probability that number of heads exceeds sum of die rolls
A bit more precise: The variance of a sum or difference of two independent random variables is the sum of their variances. The dice total has mean $350$ and the number of heads has mean $300$, so the difference has a distribution with mean equal to $50$ and standard deviation $\sqrt{292 + 150} \approx 21$. If we want to know how often we expect this variable to be below 0, we can try to approximate our difference by a normal distribution, and we need to look up the $z$-score for $z = \frac{50}{21} \approx 2.38$. Of course, our actual distribution will be a bit wider (since we convolve a binomial pdf with a uniform distribution pdf), but hopefully this will not be too inaccurate. The probability that our total will be positive, according to a $z$-score table, is about $0.992$.

I ran a quick experiment in Python, running 10000 iterations, and I got $\frac{9923}{10000}$ positives. Not too far off. My code:

import numpy as np
c = np.random.randint(0, 2, size = (10000, 100, 6)).sum(axis=-1)
d = np.random.randint(1, 7, size = (10000, 100))
(d.sum(axis=-1) > c.sum(axis=-1)).sum()
# --> 9923
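The table lookup can be done directly with the error function. As a small aside of my own (not part of the answer), adding a continuity correction, since $P(D > 0) = P(D \ge 1)$ for integer $D$, brings the normal approximation strikingly close to the exact value of about 0.99079.

```python
from math import erf, sqrt

var_dice = 100 * 35 / 12   # variance of one die is 35/12, so ~291.7 in total
var_coin = 600 * 0.25      # variance of one fair flip is 1/4
mu = 350 - 300             # E[dice total] - E[heads] = 50
sd = sqrt(var_dice + var_coin)

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

p_plain = phi(mu / sd)        # z ~ 2.38, as in the answer: ~0.991
p_cc = phi((mu - 0.5) / sd)   # continuity-corrected: very close to 0.99079
```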
Probability that number of heads exceeds sum of die rolls
The following answer is a bit boring but seems to be the only one to date that contains the genuinely exact answer! Normal approximation or simulation or even just crunching the exact answer numerically to a reasonable level of accuracy, which doesn't take long, are probably the better way to go - but if you want the "mathematical" way of getting the exact answer, then:

Let $X$ denote the sum of dots we see in $100$ die rolls, with probability mass function $p_X(x)$. Let $Y$ denote the number of heads in $600$ coin flips, with probability mass function $p_Y(y)$. We seek $P(X > Y) = P(X - Y > 0) = P(D > 0)$ where $D = X - Y$ is the difference between sum of dots and number of heads.

Let $Z = -Y$, with probability mass function $p_Z(z) = p_Y(-z)$. Then the difference $D = X - Y$ can be rewritten as a sum $D = X + Z$ which means, since $X$ and $Z$ are independent, we can find the probability mass function of $D$ by taking the discrete convolution of the PMFs of $X$ and $Z$:

$$p_D(d) = \Pr(X + Z = d) = \sum_{k =-\infty}^{\infty} \Pr(X = k \cap Z = d - k) = \sum_{k =-\infty}^{\infty} p_X(k) p_Z(d-k) $$

In practice the sum only needs to be done over values of $k$ for which the probabilities are non-zero, of course. The idea here is exactly what @IlmariKaronen has done, I just wanted to write up the mathematical basis for it.

Now I haven't said how to find the PMF of $X$, which is left as an exercise, but note that if $X_1, X_2, \dots, X_{100}$ are the number of dots on each of 100 independent dice rolls, each with discrete uniform PMFs on $\{1, 2, 3, 4, 5, 6\}$, then $X = X_1 + X_2 + \dots + X_{100}$ and so...

# Store the PMFs of variables as dataframes with "value" and "prob" columns.
# Important the values are consecutive and ascending for consistency when convolving,
# so include intermediate values with probability 0 if needed!

# Function to check if dataframe conforms to above definition of PMF
# Use message_intro to explain what check is failing
is.pmf <- function(x, message_intro = "") {
  if(!is.data.frame(x)) {stop(paste0(message_intro, "Not a dataframe"))}
  if(!nrow(x) > 0) {stop(paste0(message_intro, "Dataframe has no rows"))}
  if(!"value" %in% colnames(x)) {stop(paste0(message_intro, "No 'value' column"))}
  if(!"prob" %in% colnames(x)) {stop(paste0(message_intro, "No 'prob' column"))}
  if(!is.numeric(x$value)) {stop(paste0(message_intro, "'value' column not numeric"))}
  if(!all(is.finite(x$value))) {stop(paste0(message_intro, "Does 'value' contain NA, Inf, NaN etc?"))}
  if(!all(diff(x$value) == 1)) {stop(paste0(message_intro, "'value' not consecutive and ascending"))}
  if(!is.numeric(x$prob)) {stop(paste0(message_intro, "'prob' column not numeric"))}
  if(!all(is.finite(x$prob))) {stop(paste0(message_intro, "Does 'prob' contain NA, Inf, NaN etc?"))}
  if(!all.equal(sum(x$prob), 1)) {stop(paste0(message_intro, "'prob' column does not sum to 1"))}
  return(TRUE)
}

# Function to convolve PMFs of x and y
# Note that to convolve in R we need to reverse the second vector
# name1 and name2 are used in error reporting for the two inputs
convolve.pmf <- function(x, y, name1 = "x", name2 = "y") {
  is.pmf(x, message_intro = paste0("Checking ", name1, " is valid PMF: "))
  is.pmf(y, message_intro = paste0("Checking ", name2, " is valid PMF: "))
  x_plus_y <- data.frame(
    value = seq(from = min(x$value) + min(y$value),
                to = max(x$value) + max(y$value), by = 1),
    prob = convolve(x$prob, rev(y$prob), type = "open")
  )
  return(x_plus_y)
}

# Let x_i be the score on individual dice throw i
# Note PMF of x_i is the same for each i=1 to i=100
x_i <- data.frame(
  value = 1:6,
  prob = rep(1/6, 6)
)

# Let t_i be the total of x_1, x_2, ..., x_i
# We'll store the PMFs of t_1, t_2, ... in a list
t_i <- list()
t_i[[1]] <- x_i  # t_1 is just x_1 so has same PMF

# PMF of t_i is convolution of PMFs of t_(i-1) and x_i
for (i in 2:100) {
  t_i[[i]] <- convolve.pmf(t_i[[i-1]], x_i,
                           name1 = paste0("t_i[[", i-1, "]]"), name2 = "x_i")
}

# Let x be the sum of the scores of all 100 independent dice rolls
x <- t_i[[100]]
is.pmf(x, message_intro = "Checking x is valid PMF: ")

# Let y be the number of heads in 600 coin flips, so has Binomial(600, 0.5) distribution:
y <- data.frame(value = 0:600)
y$prob <- dbinom(y$value, size = 600, prob = 0.5)
is.pmf(y, message_intro = "Checking y is valid PMF: ")

# Let z be the negative of y (note we reverse the order to keep the values ascending)
z <- data.frame(value = -rev(y$value), prob = rev(y$prob))
is.pmf(z, message_intro = "Checking z is valid PMF: ")

# Let d be the difference, d = x - y = x + z
d <- convolve.pmf(x, z, name1 = "x", name2 = "z")
is.pmf(d, message_intro = "Checking d is valid PMF: ")

# Prob(X > Y) = Prob(D > 0)
sum(d[d$value > 0, "prob"])
# [1] 0.9907902

Try it online!

Not that it matters practically if you're just after reasonable accuracy, since the above code runs in a fraction of a second anyway, but there is a shortcut to do the convolutions for the sum of 100 independent identically distributed variables: since 100 = 64 + 32 + 4 when expressed as the sum of powers of 2, you can keep convolving your intermediate answers with themselves as much as possible. Writing the subtotals for the first $i$ dice rolls as $T_i = \sum_{k=1}^{k=i}X_k$ we can obtain the PMFs of $T_2 = X_1 + X_2$, $T_4 = T_2 + T_2'$ (where $T_2'$ is independent of $T_2$ but has the same PMF), and similarly $T_8 = T_4 + T_4'$, $T_{16} = T_8 + T_8'$, $T_{32} = T_{16} + T_{16}'$ and $T_{64} = T_{32} + T_{32}'$. We need two more convolutions to find the total score of all 100 dice as the sum of three independent variables, $X = T_{100} = ( T_{64} + T_{32}'' ) + T_4''$, and a final convolution for $D = X + Z$.
So I think you only need nine convolutions in all - and for the final one, you can just restrict yourself to the parts of the convolution giving a positive value for $D$. Or if it's less hassle, the parts that give the non-positive values for $D$ and then take the complement. Provided you pick the most efficient way, I reckon that means your worst case is effectively eight-and-a-half convolutions. EDIT: and as @whuber suggests, this isn't necessarily optimal either! Using the nine-convolution method I identified, with the gmp package so I could work with bigq objects and writing a not-at-all-optimised loop to do the convolutions (since R's built-in method doesn't deal with bigq inputs), it took just a couple of seconds to work out the exact simplified fraction: 1342994286789364913259466589226414913145071640552263974478047652925028002001448330257335942966819418087658458889485712017471984746983053946540181650207455490497876104509955761041797420425037042000821811370562452822223052224332163891926447848261758144860052289/1355477899826721990460331878897812400287035152117007099242967137806414779868504848322476153909567683818236244909105993544861767898849017476783551366983047536680132501682168520276732248143444078295080865383592365060506205489222306287318639217916612944423026688 which does indeed round to 0.9907902. Now for the exact answer, I wouldn't have wanted to do that with too many more convolutions, I could feel the gears of my laptop starting to creak!
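The repeated-squaring shortcut is just binary exponentiation with convolution in place of multiplication. Here's what it looks like in code, as a sketch of my own (Python/NumPy rather than R, and using generic square-and-multiply rather than the specific 64 + 32 + 4 decomposition described above):

```python
import numpy as np

def pmf_power(pmf, n):
    # PMF of the sum of n iid copies, by square-and-multiply over convolution.
    # Convention: array index = value, so index 0 of `result` is the empty sum.
    result = np.array([1.0])
    base = np.asarray(pmf, dtype=float)
    while n:
        if n & 1:
            result = np.convolve(result, base)
        base = np.convolve(base, base)  # square the base at each binary digit
        n >>= 1
    return result

die = np.zeros(7)   # index = face value; index 0 has probability 0
die[1:] = 1 / 6

fast = pmf_power(die, 100)   # a handful of convolutions of growing length
slow = np.array([1.0])
for _ in range(100):         # the naive 100-fold convolution, for comparison
    slow = np.convolve(slow, die)
ok = np.allclose(fast, slow)
```

Both routes give the same length-601 PMF over totals 0..600 (all mass is of course on 100..600); the fast route just needs far fewer convolutions.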
Probability that number of heads exceeds sum of die rolls
The exact answer is easy enough to compute numerically — no simulation needed. For educational purposes, here's an elementary Python 3 script to do so, using no premade statistical libraries.

from collections import defaultdict

# define the distributions of a single coin and die
coin = tuple((i, 1/2) for i in (0, 1))
die = tuple((i, 1/6) for i in (1, 2, 3, 4, 5, 6))

# a simple function to compute the sum of two random variables
def add_rv(a, b):
    sum = defaultdict(float)
    for i, p in a:
        for j, q in b:
            sum[i + j] += p * q
    return tuple(sum.items())

# compute the sums of 600 coins and 100 dice
coin_sum = dice_sum = ((0, 1),)
for _ in range(600):
    coin_sum = add_rv(coin_sum, coin)
for _ in range(100):
    dice_sum = add_rv(dice_sum, die)

# calculate the probability of the dice sum being higher
prob = 0
for i, p in dice_sum:
    for j, q in coin_sum:
        if i > j:
            prob += p * q

print("probability of 100 dice summing to more than 600 coins = %.10f" % prob)

Try it online!

The script above represents a discrete probability distribution as a list of (value, probability) pairs, and uses a simple pair of nested loops to compute the distribution of the sum of two random variables (iterating over all possible values of each of the summands). This is not necessarily the most efficient possible representation, but it's easy to work with and more than fast enough for this purpose. (FWIW, this representation of probability distributions is also compatible with the collection of utility functions for modelling more complex dice rolls that I wrote for a post on our sister site a while ago.)

Of course, there are also domain-specific libraries and even entire programming languages for calculations like this.
Using one such online tool, called AnyDice, the same calculation can be written much more compactly:

X: 100d6
Y: 600d{0,1}
output X > Y named "1 if X > Y, else 0"

Under the hood, I believe AnyDice calculates the result pretty much like my Python script does, except maybe with slightly more optimizations. In any case, both give the same probability of 0.9907902497 for the sum of the dice being greater than the number of heads.

If you want, AnyDice can also plot the distributions of the two sums for you. To get similar plots out of the Python code, you'd have to feed the dice_sum and coin_sum lists into a graph plotting library like pyplot.
Probability that number of heads exceeds sum of die rolls
14,323
Posterior distribution and MCMC [duplicate]
If this was not a clear conflict of interest, I would suggest you invest more time in the topic of MCMC algorithms and read a whole book rather than a few (6?) articles that can only provide a partial perspective.

How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution?

MCMC is based on the assumption that the product $$\pi(\theta)f(x^\text{obs}|\theta)$$ can be numerically computed (hence is known) for a given $\theta$, where $x^\text{obs}$ denotes the observation, $\pi(\cdot)$ the prior, and $f(x^\text{obs}|\theta)$ the likelihood. This does not imply an in-depth knowledge of this function of $\theta$. Still, from a mathematical perspective the posterior density is completely and entirely determined by $$\pi(\theta|x^\text{obs})=\dfrac{\pi(\theta)f(x^\text{obs}|\theta)}{\int_\Theta \pi(\theta)f(x^\text{obs}|\theta)\,\text{d}\theta}\tag{1}$$ Thus, it is not particularly surprising that simulation methods can be found using solely the input of the product $$\pi(\theta)\times f(x^\text{obs}|\theta)$$ The amazing feature of Monte Carlo methods is that some of them, like Markov chain Monte Carlo (MCMC) algorithms, do not formally require anything further than this computation of the product, when compared with accept-reject algorithms for instance, which call for an upper bound. Related software like Stan operates on this input and still delivers high-end performance with tools like NUTS and HMC, including numerical differentiation.

A side comment, written later in the light of some of the other answers, is that the normalising constant $$\mathfrak{Z}=\int_\Theta \pi(\theta)f(x^\text{obs}|\theta)\,\text{d}\theta$$ is not particularly useful for conducting Bayesian inference in that, were I to "know" its exact numerical value in addition to the function in the numerator of (1), $\mathfrak{Z}=3.17232\times 10^{-23}$ say, I would not have made any progress towards finding Bayes estimates or credible regions. (The only exception where this constant matters is in conducting Bayesian model comparison.)

When teaching about MCMC algorithms, my analogy is that in a videogame we have a complete map (the posterior) and a moving player that can only illuminate a portion of the map at once. Visualising the entire map and spotting the highest regions is possible with enough attempts (and a perfect remembrance of things past!). A local and primitive knowledge of the posterior density (up to a constant) is therefore sufficient to learn about the distribution.

Again, how can you determine which parameter estimate "fits your data better" without first knowing your posterior distribution?

Again, the distribution is known in a mathematical or numerical sense. The Bayes parameter estimates provided by MCMC, if needed, are based on the same principle as most simulation methods, the law of large numbers. More generally, Monte Carlo based (Bayesian) inference replaces the exact posterior distribution with an empirical version. Hence, once more, a numerical approach to the posterior, one value at a time, is sufficient to build a convergent representation of the associated estimator. The only restriction is the available computing time, i.e., the number of terms one can call upon in the law of large numbers approximation.

If you already know the properties of your posterior distribution (as is indicated by 1) and 2)), then what's the point of using this method in the first place?

It is the very paradox of (1) that this is a perfectly well-defined mathematical object even though most integrals related to (1), including its denominator, may be out of reach of analytical and numerical methods. Exploiting the stochastic nature of the object by simulation methods (Monte Carlo integration) is a natural and manageable alternative that has proven immensely helpful.

Connected X validated questions:
- Confusion related to MCMC technique
- What are Monte Carlo simulations?
- Is Markov chain based sampling the "best" for Monte Carlo sampling? Are there alternative schemes available?
- MCMC; Can we be sure that we have a ''pure'' and ''large enough'' sample from the posterior? How can it work if we are not?
- How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
- How to do MC integration from Gibbs sampling of posterior?
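(Editorial addition, not part of the answer above: to make concrete the point that the unnormalised product is all MCMC needs, here is a minimal random-walk Metropolis sketch whose target density is deliberately defined only up to a constant. The specific target, a N(2, 0.5^2) density with its constant dropped, is a made-up example.)

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical unnormalised target: prior times likelihood, known only up to
# a constant (here: a N(2, 0.5^2) density with its constant dropped)
def unnorm(theta):
    return np.exp(-0.5 * ((theta - 2.0) / 0.5) ** 2)

theta = 0.0                      # arbitrary starting value
draws = []
for t in range(50000):
    prop = theta + rng.normal()  # random-walk proposal
    # Metropolis acceptance ratio: any constant in the target cancels here
    if rng.uniform() < unnorm(prop) / unnorm(theta):
        theta = prop
    if t >= 5000:                # discard burn-in
        draws.append(theta)

draws = np.array(draws)
print(draws.mean(), draws.std())  # should be near 2 and 0.5
```

The normalising constant is never computed, yet the draws recover the target's mean and spread.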
14,324
Posterior distribution and MCMC [duplicate]
How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution? In Bayesian analysis we usually know that the posterior distribution is proportional to some known function (the likelihood multiplied by the prior) but we don't know the constant of integration that would give us the actual posterior density: $$\pi( \theta | \mathbb{x} ) = \frac{\overbrace{L_\mathbb{x}(\theta) \pi(\theta)}^{\text{Known}}}{\underbrace{\int L_\mathbb{x}(\theta) \pi(\theta) d\theta}_{\text{Unknown}}} \overset{\theta}{\propto} \overbrace{L_\mathbb{x}(\theta) \pi(\theta)}^{\text{Known}}.$$ So we actually do know one major property of the distribution; that it is proportional to a particular known function. Now, in the context of MCMC analysis, a Markov chain takes in a starting value $\theta_{(0)}$ and produces a series of values $\theta_{(1)}, \theta_{(2)}, \theta_{(3)}, ...$ for this parameter. The Markov chain has a stationary distribution which is the distribution that preserves itself if you run it through the chain. Under certain broad assumptions (e.g., the chain is irreducible, aperiodic), the stationary distribution will also be the limiting distribution of the Markov chain, so that regardless of how you choose the starting value, this will be the distribution that the outputs converge towards as you run the chain longer and longer. It turns out that it is possible to design a Markov chain with a stationary distribution equal to the posterior distribution, even though we don't know exactly what that distribution is. That is, it is possible to design a Markov chain that has $\pi( \theta | \mathbb{x} )$ as its stationary limiting distribution, even if all we know is that $\pi( \theta | \mathbb{x} ) \propto L_\mathbb{x}(\theta) \pi(\theta)$. There are various ways to design this kind of Markov chain, and these various designs constitute available MCMC algorithms for generating values from the posterior distribution. 
Once we have designed an MCMC method like this, we know that we can feed in any arbitrary starting value $\theta_{(0)}$ and the distribution of the outputs will converge to the posterior distribution (since this is the stationary limiting distribution of the chain). So we can draw (non-independent) samples from the posterior distribution by starting with an arbitrary starting value, feeding it into the MCMC algorithm, waiting for the chain to converge close to its stationary distribution, and then taking the subsequent outputs as our draws. This usually involves generating $\theta_{(1)}, \theta_{(2)}, \theta_{(3)}, ..., \theta_{(M)}$ for some large value of $M$, and discarding $B < M$ "burn-in" iterations to allow the convergence to occur, leaving us with draws $\theta_{(B+1)}, \theta_{(B+2)}, \theta_{(B+3)}, ..., \theta_{(M)} \sim \pi( \theta | \mathbb{x} )$ (approximately). If you already know the properties of your posterior distribution ... then what's the point of using this method in the first place? Use of MCMC simulation allows us to go from a state where we know that the posterior distribution is proportional to some given function (the likelihood multiplied by the prior) to actually simulating from this distribution. From these simulations we can estimate the constant of integration for the posterior distribution, and then we have a good estimate of the actual distribution. We can also use these simulations to estimate other aspects of the posterior distribution, such as its moments. Now, bear in mind that MCMC is not the only way we can do this. Another method would be to use some other method of numerical integration to try to find the constant of integration for the posterior distribution. MCMC goes directly to simulation of the values, rather than attempting to estimate the constant of integration, which is why it is a popular method.
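(Editorial addition, not part of the answer above: the "stationary limiting distribution plus burn-in" idea can be seen in something simpler than a full MCMC sampler. A Gaussian AR(1) chain has a known stationary distribution, and even from an absurd starting value the draws settle into it once burn-in is discarded. The chain and numbers here are illustrative, not from the answer.)

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain: theta_t = rho * theta_{t-1} + N(0, 1) noise.
# Its stationary (and limiting) distribution is N(0, 1 / (1 - rho^2)).
rho = 0.9
theta = 50.0            # deliberately terrible starting value
draws = []
for t in range(20000):
    theta = rho * theta + rng.standard_normal()
    if t >= 1000:       # discard burn-in iterations
        draws.append(theta)

draws = np.array(draws)
print(draws.mean())     # near 0
print(draws.std())      # near sqrt(1/(1-0.81)), about 2.294
```

The same logic applies to an MCMC chain: the design guarantees what the limiting distribution is, even when no closed form for it is available.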
14,325
Posterior distribution and MCMC [duplicate]
Your confusion is understandable. Surely, if you already know $p(\theta|X)$, why would you need to draw samples of $\theta$ under this distribution? The answer is usually that the distribution is multivariate, and you want to marginalize over some dimensions of $\theta$ but not others. So for instance, $\theta$ might be a vector of 10 parameters, and you're interested in the marginal distribution $p(\theta_1|X)=\int p(\theta|X)d\theta_{2:10}$. The integrals required to do this marginalization are often very hard to compute exactly. They may be analytically intractable, and (deterministic) numerical integration is often cumbersome in high dimensions. This is where MCMC can help. As long as you know $p(\theta|X)$ up to a multiplicative constant, you can generate samples of $\theta$ that follow this distribution. Then, given a sufficient number of such samples, you can simply look at the distribution of sampled values of $\theta_1$ (e.g. by making a histogram), and those samples will approximate the desired marginal distribution. Compared to numerical integration methods, MCMC is more efficient because it spends more time exploring parts of the distribution where more of the probability mass is concentrated. Also, many MCMC algorithms (such as the classic Metropolis-Hastings algorithm) only require that you know the target distribution up to a constant of proportionality, which is helpful if you don't know the normalization constant required to make the distribution proper (which is very often the case, because computing that constant itself often requires computing a multivariate integral just as complex as the one you're interested in). Edit: it occurred to me that this perhaps doesn't fully answer your first question. The answer to this is that MCMC only requires that you can calculate the posterior probability (density) of a certain parameter value (up to a constant of proportionality). 
So all you need is a function where, if you put a parameter value in, it gives you its probability under the target distribution (or a value proportional to that probability). That is the sense in which the target distribution must be 'known'. But you don't need to know anything else about it. You can be blissfully ignorant about the mean & covariance of the distribution, or about the little squiggles and bumps that it has here or there, or any number of other things (although some of those things can be helpful to know in order to make MCMC run more smoothly).
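(Editorial addition, not part of the answer above: the marginalization point is simple to see in code. Once you have draws from a joint distribution, the marginal of one coordinate is obtained by simply ignoring the others. The bivariate normal here is a made-up stand-in for draws an MCMC run would produce.)

```python
import numpy as np

rng = np.random.default_rng(1)

# pretend these draws came out of an MCMC run targeting a 2-D posterior;
# here we cheat and draw them directly from a known bivariate normal
cov = [[1.0, 0.8],
       [0.8, 1.0]]
draws = rng.multivariate_normal([0.0, 0.0], cov, size=50000)

# marginal p(theta_1 | X): keep column 0, discard the other coordinate
theta1 = draws[:, 0]
print(theta1.mean(), theta1.std())   # analytic marginal is N(0, 1)
# np.histogram(theta1, bins=50) would give a density estimate of the marginal
```

No integral over the discarded coordinates is ever computed; the sampling does the marginalization for free.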
14,326
Posterior distribution and MCMC [duplicate]
Just one example to address part (1). Sometimes you can evaluate the posterior up to a partition function only. For example, you know that $p(x)= \frac{1}{z}f(x)$, but $z$ is unknown.

The Metropolis-Hastings algorithm:

- Initialize $x_0$
- Choose some distribution $q$
- Repeat:
  - Sample $y$ from $q(x_{i-1})$
  - Accept $y$ if $p(y)$ is large (essentially) via an "acceptance rule"
  - If accepted, set $x_i = y$

But at each step we don't know $p(y)$; we only know $f(y)$, because $z$ is unknown. However, the acceptance rule can be written (essentially) as a ratio of $p(x_{i-1})$ and $p(y)$, so $z$ cancels. The final output of the sampling then provides $p(x)$, $z$ included, but you never had to compute (or know) $z$.
14,327
What kind of curve (or model) should I fit to my percentage data?
Another way to go about this would be to use a Bayesian formulation. It can be a bit heavy going to start with, but it tends to make it much easier to express the specifics of your problem, as well as giving better ideas of where the "uncertainty" is.

Stan is a Monte Carlo sampler with a relatively easy to use programmatic interface. Libraries are available for R and others, but I'm using Python here.

We use a sigmoid like everybody else: it has biochemical motivations as well as being mathematically very convenient to work with. A nice parameterization for this task is:

    import numpy as np

    def sigfn(x, alpha, beta):
        return 1 / (1 + np.exp(-(x - alpha) * beta))

where alpha defines the midpoint of the sigmoid curve (i.e. where it crosses 50%) and beta defines the slope; values nearer zero are flatter.

To show what this looks like, we can pull in your data and plot it with:

    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns

    df = pd.read_table('raw_data.txt', delim_whitespace=True)
    df.columns = ['subsample', 'virus', 'coverage', 'copies']
    df.coverage /= 100

    x = np.logspace(-1, 6, 201)
    plt.semilogx(x, sigfn(np.log(x), 5.5, 3), label='sigfn', color='C2')
    sns.scatterplot(df.copies, df.coverage, hue=df.virus, edgecolor='none')

where raw_data.txt contains the data you gave, and I transformed the coverage to something more useful. The coefficients 5.5 and 3 look nice and give a plot very much like the other answers.

To "fit" this function using Stan, we need to define our model using its own language, a mix between R and C++. A simple model would be something like:

    data {
      int<lower=1> N;  // number of rows
      vector[N] log_copies;
      vector<lower=0,upper=1>[N] coverage;
    }
    parameters {
      real alpha;
      real beta;
      real<lower=0> sigma;
    }
    model {
      vector[N] mu;
      mu = 1 ./ (1 + exp(-(log_copies - alpha) * beta));

      sigma ~ cauchy(0, 0.1);
      alpha ~ normal(0, 5);
      beta ~ normal(0, 5);
      coverage ~ normal(mu, sigma);
    }

which hopefully reads OK. 
We have a data block that defines the data we expect when we sample the model, parameters defines the things that are sampled, and model defines the likelihood function. You tell Stan to "compile" the model, which takes a while, and then you can sample from it with some data. For example:

    import pystan

    model = pystan.StanModel(model_code=code)
    fit = model.sampling(data=dict(
        N=len(df),
        log_copies=np.log(df.copies),
        coverage=df.coverage,
    ), iter=10000, chains=4, thin=10)

    import arviz
    arviz.plot_trace(fit)

arviz makes nice diagnostic plots easy, while printing the fit gives you a nice R-style parameter summary:

    4 chains, each with iter=10000; warmup=5000; thin=10;
    post-warmup draws per chain=500, total post-warmup draws=2000.

            mean se_mean    sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
    alpha   5.51  6.0e-3  0.26  4.96  5.36  5.49  5.64  6.12  1849  1.0
    beta    2.89    0.04  1.71  1.55  1.98  2.32  2.95  8.08  1698  1.0
    sigma   0.08  2.7e-4  0.01  0.06  0.07  0.08  0.09   0.1  1790  1.0
    lp__   57.12    0.04  1.76  52.9  56.1 57.58 58.51 59.19  1647  1.0

The large standard deviation on beta says that the data really doesn't provide much information about this parameter; some of the answers giving 10+ significant digits in their model fits are overstating things somewhat.

Because some answers noted that each virus might need its own parameters, I extended the model to allow alpha and beta to vary by "virus". It all gets a bit fiddly, but the two viruses almost certainly have different alpha values (i.e. you need more copies/μL of RRAV for the same coverage). In the resulting plot the data is the same as before, but I've drawn a curve for 40 samples of the posterior. UMAV seems relatively well determined, while RRAV could follow the same slope and need a higher copy count, or have a steeper slope and a similar copy count. 
Most of the posterior mass is on needing a higher copy count, but this uncertainty might explain some of the differences between other answers finding different things.

I mostly used answering this as an exercise to improve my knowledge of Stan, and I've put a Jupyter notebook of this here in case anyone is interested/wants to replicate this.
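(Editorial addition, not part of the answer above: a cheap non-Bayesian cross-check of the same sigmoid is possible with scipy.optimize.curve_fit. Since raw_data.txt is not reproduced here, this sketch generates synthetic data using the ballpark values alpha = 5.5, beta = 3 from the answer; the point is only that least squares on the same parameterization should land in the same neighbourhood.)

```python
import numpy as np
from scipy.optimize import curve_fit

# same parameterization as in the answer, applied to log copy counts
def sigfn(logx, alpha, beta):
    return 1 / (1 + np.exp(-(logx - alpha) * beta))

# synthetic stand-in for the real data: 40 points, noise sd 0.05
rng = np.random.default_rng(0)
log_copies = rng.uniform(2.0, 9.0, size=40)
coverage = sigfn(log_copies, 5.5, 3.0) + rng.normal(scale=0.05, size=40)

popt, pcov = curve_fit(sigfn, log_copies, coverage, p0=[5.0, 1.0])
perr = np.sqrt(np.diag(pcov))
print("alpha = %.2f +/- %.2f" % (popt[0], perr[0]))
print("beta  = %.2f +/- %.2f" % (popt[1], perr[1]))
```

Unlike the Stan fit, this gives only point estimates with asymptotic standard errors, with no posterior to inspect, which is exactly the trade-off the answer is arguing about.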
14,328
What kind of curve (or model) should I fit to my percentage data?
(Edited taking into account comments below. Thanks to @BenBolker & @WeiwenNg for helpful input.) Fit a fractional logistic regression to the data. It is well suited to percentage data that is bounded between 0 and 100% and is well-justified theoretically in many areas of biology. Note that you might have to divide all values by 100 to fit it, since programs frequently expect the data to range between 0 and 1. And as Ben Bolker recommends, to address possible problems caused by the binomial distribution's strict assumptions regarding variance, use a quasibinomial distribution instead. I've made some assumptions based on your code, such as that there are 2 viruses you are interested in and they may show different patterns (i.e. there may be an interaction between virus type and number of copies). First, the model fit:

dat <- read.csv('Book1.csv')
dat$logcopies <- log10(dat$Copies_per_uL)
dat$Genome_cov_norm <- dat$Genome_cov/100

fit <- glm(Genome_cov_norm ~ logcopies * Virus, data = dat, family = quasibinomial())
summary(fit)

Call:
glm(formula = Genome_cov_norm ~ logcopies * Virus, family = quasibinomial(), data = dat)

Deviance Residuals:
     Min        1Q    Median        3Q       Max
-0.55073  -0.13362   0.07825   0.20362   0.70086

Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
(Intercept)          -5.9702     2.8857  -2.069   0.0486 *
logcopies             2.3262     1.0961   2.122   0.0435 *
VirusUMAV             2.6147     3.3049   0.791   0.4360
logcopies:VirusUMAV  -0.6028     1.3173  -0.458   0.6510
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for quasibinomial family taken to be 0.6934319)

    Null deviance: 30.4473  on 29  degrees of freedom
Residual deviance:  2.7033  on 26  degrees of freedom

If you trust the p-values, the output does not suggest that the two viruses differ meaningfully. This is in contrast to @NickCox's results below, though we used different methods. I'd not be very confident either way with 30 data points.
Second, the plotting: It's not hard to code up a way to visualize the output yourself, but there appears to be a ggPredict package that will do most of the work for you (can't vouch for it, I haven't tried it myself). The code will look something like:

library(ggiraphExtra)
ggPredict(fit) + theme_bw(base_size = 20) + geom_line(size = 2)

Update: I no longer recommend the code or the ggPredict function more generally. After trying it out I found that the plotted points don't exactly reflect the input data but instead are changed for some bizarre reason (some of the plotted points were above 1 and below 0). So I recommend coding it up yourself, though that is more work.
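As a back-of-envelope interpretation of the reported coefficients: the fitted baseline (RRAV) curve crosses 50% coverage exactly where the linear predictor is zero. This small Python sketch just plugs in the coefficient values printed above:

```python
import numpy as np

# Reported RRAV (baseline) coefficients from the quasibinomial fit above
intercept, slope = -5.9702, 2.3262

# The curve crosses 50% coverage where intercept + slope * log10(copies) == 0
log10_mid = -intercept / slope      # log10(copies/µL) at 50% coverage
copies_mid = 10 ** log10_mid        # roughly 370 copies/µL

def inv_logit(eta):
    return 1 / (1 + np.exp(-eta))

cov_at_mid = inv_logit(intercept + slope * log10_mid)  # should be 0.5
```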
14,329
What kind of curve (or model) should I fit to my percentage data?
This isn't a different answer from @mkt but graphs in particular won't fit into a comment. I first fit a logistic curve in Stata (after logging the predictor) to all data and get this graph. An equation is

100 invlogit(-4.192654 + 1.880951 log10(Copies))

Now I fit curves separately for each virus in the simplest scenario of virus defining an indicator variable. Here for the record is a Stata script:

clear
input id str9 Subsample str4 Virus Genome_cov Copies_per_uL
 1 S1.1_RRAV RRAV 100     92500
 2 S1.2_RRAV RRAV 100     95900
 3 S1.3_RRAV RRAV 100     92900
 4 S2.1_RRAV RRAV 100     4049.54
 5 S2.2_RRAV RRAV 96.9935 3809
 6 S2.3_RRAV RRAV 94.5054 3695.06
 7 S3.1_RRAV RRAV 3.7235  86.37
 8 S3.2_RRAV RRAV 11.8186 84.2
 9 S3.3_RRAV RRAV 11.0929 95.2
10 S4.1_RRAV RRAV 0       2.12
11 S4.2_RRAV RRAV 5.0799  2.71
12 S4.3_RRAV RRAV 0       2.39
13 S5.1_RRAV RRAV 4.9503  0.16
14 S5.2_RRAV RRAV 0       0.08
15 S5.3_RRAV RRAV 4.4147  0.08
16 S1.1_UMAV UMAV 5.7666  1.38
17 S1.2_UMAV UMAV 26.0379 1.72
18 S1.3_UMAV UMAV 7.4128  2.52
19 S2.1_UMAV UMAV 21.172  31.06
20 S2.2_UMAV UMAV 16.1663 29.87
21 S2.3_UMAV UMAV 9.121   32.82
22 S3.1_UMAV UMAV 92.903  627.24
23 S3.2_UMAV UMAV 83.0314 615.36
24 S3.3_UMAV UMAV 90.3458 632.67
25 S4.1_UMAV UMAV 98.6696 11180
26 S4.2_UMAV UMAV 98.8405 12720
27 S4.3_UMAV UMAV 98.7939 8680
28 S5.1_UMAV UMAV 98.6489 318200
29 S5.2_UMAV UMAV 99.1303 346100
30 S5.3_UMAV UMAV 98.8767 345100
end

gen log10Copies = log10(Copies)
gen Genome_cov_pr = Genome_cov / 100
encode Virus, gen(virus)

set seed 2803
fracreg logit Genome_cov_pr log10Copies i.virus, vce(bootstrap, reps(10000))

twoway function invlogit(-5.055519 + 1.961538 * x), lc(orange) ra(log10Copies) ///
|| function invlogit(-5.055519 + 1.233273 + 1.961538 * x), ra(log10Copies) lc(blue) ///
|| scatter Genome_cov_pr log10Copies if Virus == "RRAV", mc(orange) ms(Oh) ///
|| scatter Genome_cov_pr log10Copies if Virus == "UMAV", mc(blue) ms(+) ///
legend(order(4 "UMAV" 3 "RRAV") pos(11) col(1) ring(0)) ///
xla(-1 "0.1" 0 "1" 1 "10" 2 "100" 3 "10{sup:3}" 4 "10{sup:4}" 5 "10{sup:5}") ///
yla(0 .25 "25" .5 "50" .75 "75" 1 "100", ang(h)) ///
ytitle(Genome coverage (%)) xtitle(Genome copies / {&mu}L) scheme(s1color)

This is pushing hard on a tiny dataset but the P-value for virus looks supportive of fitting two curves jointly.

Fractional logistic regression                  Number of obs   =         30
                                                Replications    =     10,000
                                                Wald chi2(2)    =      48.14
                                                Prob > chi2     =     0.0000
Log pseudolikelihood = -6.9603063               Pseudo R2       =     0.6646

-------------------------------------------------------------------------------
              |   Observed   Bootstrap                 Normal-based
Genome_cov_pr |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
  log10Copies |   1.961538   .2893965     6.78   0.000     1.394331    2.528745
              |
        virus |
         UMAV |   1.233273   .5557609     2.22   0.026     .1440018    2.322544
        _cons |  -5.055519   .8971009    -5.64   0.000    -6.813805   -3.297234
-------------------------------------------------------------------------------
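As a worked interpretation of the coefficients above: the virus indicator shifts the linear predictor by a constant, which is equivalent to a horizontal shift of the curve along the log10(copies) axis. A small Python sketch of that arithmetic:

```python
# Reported coefficients from the fractional logit fit above
b_x, b_umav = 1.961538, 1.233273

# A vertical shift of b_umav in the linear predictor equals a horizontal
# shift of b_umav / b_x decades: UMAV reaches a given coverage at lower copies.
shift_decades = b_umav / b_x
fold_change = 10 ** shift_decades  # RRAV needs roughly this many times more copies
```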
14,330
What kind of curve (or model) should I fit to my percentage data?
Try a sigmoid function. There are many formulations of this shape, including the logistic curve. Hyperbolic tangent is another popular choice. Given the plots, I can't rule out a simple step function either. I'm afraid you will not be able to differentiate between a step function and any number of sigmoid specifications. You don't have any observations where your percentage is in the 50% range, so the simple step formulation may be the most parsimonious choice that performs no worse than more complex models.
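To see why the data cannot separate the two: a step function is the limiting case of a logistic sigmoid as the slope grows, and numerically the two become indistinguishable well before infinity. A small sketch with illustrative values:

```python
import numpy as np

def sigfn(x, alpha, beta):
    # logistic sigmoid with midpoint alpha and slope beta
    return 1 / (1 + np.exp(-(x - alpha) * beta))

x = np.array([1.0, 2.0, 3.0, 4.0])   # no observations near the midpoint
alpha = 2.5
step = (x > alpha).astype(float)      # Heaviside step at alpha

# With a very steep slope, the sigmoid matches the step to machine precision
# at these x values, so such data cannot tell the two models apart.
steep = sigfn(x, alpha, beta=50.0)
gap = np.max(np.abs(steep - step))
```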
14,331
What kind of curve (or model) should I fit to my percentage data?
Here are the 4PL (4-parameter logistic) fits, both constrained and unconstrained, with the equation as per C.A. Holstein, M. Griffin, J. Hong, P.D. Sampson, “Statistical Method for Determining and Comparing Limits of Detection of Bioassays”, Anal. Chem. 87 (2015) 9795-9801. The 4PL equation is shown in both figures and the parameter meanings are as follows: a = lower asymptote, b = slope factor, c = inflection point, and d = upper asymptote. Figure 1 constrains a to equal 0% and d to equal 100%; Figure 2 has no constraints on the 4 parameters in the 4PL equation. This was fun; I make no pretence of knowing anything biological, and it will be interesting to see how it all settles out!
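For reference, one common parameterization of the 4PL is sketched below; the exact form used in the cited paper may differ slightly, so treat this as illustrative. It checks the parameter meanings listed above: at x = c the curve sits halfway between the asymptotes a and d.

```python
import numpy as np

def four_pl(x, a, b, c, d):
    # a: lower asymptote, b: slope factor, c: inflection point, d: upper asymptote
    return d + (a - d) / (1 + (x / c) ** b)

a, b, c, d = 0.0, 2.0, 100.0, 100.0   # illustrative values only
y_mid = four_pl(c, a, b, c, d)        # halfway between the asymptotes
y_low = four_pl(1e-6, a, b, c, d)     # approaches the lower asymptote a
y_high = four_pl(1e9, a, b, c, d)     # approaches the upper asymptote d
```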
14,332
What kind of curve (or model) should I fit to my percentage data?
I extracted the data from your scatterplot, and my equation search turned up a 3-parameter logistic type equation as a good candidate: "y = a / (1.0 + b * exp(-1.0 * c * x))", where "x" is the log base 10 per your plot. The fitted parameters were a = 9.0005947126706630E+01, b = 1.2831794858584102E+07, and c = 6.6483431489473155E+00 for my extracted data; a fit of the original (log10 x) data should yield similar results if you re-fit the original data using my values as initial parameter estimates. My parameter values are yielding R-squared = 0.983 and RMSE = 5.625 on the extracted data. EDIT: Now that the question has been edited to include the actual data, here is a plot using the above 3-parameter equation and initial parameter estimates.
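Using the reported parameter values, the curve's half-maximum can be located in closed form, since y = a/2 exactly where b * exp(-c * x) = 1, i.e. at x = ln(b)/c:

```python
import numpy as np

# Fitted parameters reported above
a = 9.0005947126706630E+01
b = 1.2831794858584102E+07
c = 6.6483431489473155E+00

def logistic3(x):
    return a / (1.0 + b * np.exp(-1.0 * c * x))

# Half-maximum: b*exp(-c*x) == 1  =>  x = ln(b) / c  (in log10-copies units)
x_half = np.log(b) / c
y_half = logistic3(x_half)   # equals a/2
```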
14,333
What kind of curve (or model) should I fit to my percentage data?
Since I had to open my big mouth about Heaviside, here are the results. I set the transition point to log10(viruscopies) = 2.5. Then I calculated the standard deviations of the two halves of the data set -- that is, the Heaviside is assuming the data on either side has all derivatives = 0.

RH side std dev = 4.76
LH side std dev = 7.72

Since it turns out there are 15 samples in each batch, the overall std dev is the mean, or 6.24. Assuming the "RMSE" quoted in other answers is "RMS error" overall, the Heaviside function would appear to do at least as well as, if not better than, most of the "Z-curve" (borrowed from photographic response nomenclature) fits here. edit Useless graph, but requested in comments:
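One caveat worth checking numerically: averaging the two standard deviations is not quite the same as pooling the two variances, and the two only coincide when the halves have equal spread. With equal group sizes (15 each) the root-mean-square of the variances is the more directly comparable figure:

```python
import numpy as np

sd_rh, sd_lh = 4.76, 7.72   # standard deviations of the two halves, 15 points each

mean_sd = (sd_rh + sd_lh) / 2                  # 6.24, the figure quoted above
rms_sd = np.sqrt((sd_rh**2 + sd_lh**2) / 2)    # pooling the variances instead (~6.41)
```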
14,334
What does linear stand for in linear regression?
Linear refers to the relationship between the parameters that you are estimating (e.g., $\beta$) and the outcome (e.g., $y_i$). Hence, $y=e^x\beta+\epsilon$ is linear, but $y=e^\beta x + \epsilon$ is not. A linear model means that your estimate of your parameter vector can be written $\hat{\beta} = \sum_i{w_iy_i}$, where the $\{w_i\}$ are weights determined by your estimation procedure. Linear models can be solved algebraically in closed form, while many non-linear models need to be solved by numerical maximization using a computer.
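The "weighted sum of the $y_i$" form can be verified directly: the OLS weights $w_i$ depend only on the design matrix, not on $y$, so $\hat{\beta} = Wy$ is a linear function of the outcomes. A small numpy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])   # intercept + one predictor
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=50)

# Closed-form OLS: beta_hat = (X'X)^{-1} X'y
W = np.linalg.inv(X.T @ X) @ X.T   # weight matrix; depends on X only
beta_hat = W @ y                   # the estimate is a linear function of y
```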
14,335
What does linear stand for in linear regression?
This post at minitab.com provides a very clear explanation: A model is linear when it can be written in this format: Response = constant + parameter * predictor + ... + parameter * predictor That is, when each term (in the model) is either a constant or the product of a parameter and a predictor variable. So both of these are linear models: $Y = B_0 + B_1X_1$ (This is a straight line) $Y = B_0 + B_1X_1^2$ (This is a curve) If the model cannot be expressed using the above format, it is non-linear. Examples of non-linear models: $Y = B_0 + X_1^{B_1}$ $Y = B_0 \centerdot \cos (B_1 \centerdot X_1)$
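The practical payoff is that a model like $Y = B_0 + B_1X_1^2$, although curved in $X_1$, can be fit with ordinary least squares simply by treating $X_1^2$ as a column of the design matrix. A numpy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=100)
y = 1.5 + 0.8 * x**2 + rng.normal(scale=0.05, size=100)

# Y = B0 + B1*X^2 is linear in the parameters (B0, B1), so OLS applies directly
A = np.column_stack([np.ones_like(x), x**2])
b0, b1 = np.linalg.lstsq(A, y, rcond=None)[0]   # recovers roughly (1.5, 0.8)
```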
14,336
What does linear stand for in linear regression?
I would be careful in asking this as an "R linear regression" question versus a "linear regression" question. Formulas in R have rules that you may or may not be aware of. For example: http://wiener.math.csi.cuny.edu/st/stRmanual/ModelFormula.html Assuming you're asking if the following equation is linear: a = coeff0 + (coeff1 * b) + (coeff2 * c) + (coeff3 * (b*c)) The answer is yes, if you assemble a new independent variable such as: newv = b * c Substituting the above newv equation into the original equation probably looks like what you're expecting for a linear equation: a = coeff0 + (coeff1 * b) + (coeff2 * c) + (coeff3 * newv) As far as references go, Google "r regression", or whatever you think might work for you.
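The same substitution works in any least-squares routine, not just R's formula interface; here is a numpy sketch with synthetic data (the variable names a, b, c, newv follow the example above and are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
b = rng.normal(size=200)
c = rng.normal(size=200)
a = 1.0 + 2.0 * b - 1.0 * c + 0.5 * (b * c) + rng.normal(scale=0.01, size=200)

newv = b * c   # the interaction, assembled as just another predictor
X = np.column_stack([np.ones_like(b), b, c, newv])
coeffs = np.linalg.lstsq(X, a, rcond=None)[0]   # ~ (coeff0..coeff3)
```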
14,337
What does linear stand for in linear regression?
You can write out the linear regression as a (linear) matrix equation. $ \left[ \matrix{a_1 \\a_2 \\a_3 \\a_4 \\a_5 \\ ... \\ a_n} \right] = \left[ \matrix{b_1 & c_1 & b_1*c_1 \\ b_2 & c_2 & b_2*c_2 \\b_3 & c_3 & b_3*c_3 \\b_4 & c_4 & b_4*c_4 \\b_5 & c_5 & b_5*c_5 \\ &...& \\ b_n & c_n & b_n*c_n } \right] \times \left[\matrix{\alpha_b \\ \alpha_c \\ \alpha_{b*c}} \right] + \left[ \matrix{\epsilon_1 \\\epsilon_2 \\\epsilon_3 \\\epsilon_4 \\\epsilon_5 \\ ... \\ \epsilon_n} \right] $ or if you collapse this: $\mathbf{a} = \alpha_b \mathbf{b} + \alpha_c \mathbf{c} + \alpha_{b*c} \mathbf{b*c} + \mathbf{\epsilon} $ This linear regression is equivalent to finding the linear combination of vectors $\mathbf{b}$, $\mathbf{c}$ and $\mathbf{b*c}$ that is closest to the vector $\mathbf{a}$. (This also has a geometrical interpretation as finding the projection of $\mathbf{a}$ onto the span of the vectors $\mathbf{b}$, $\mathbf{c}$ and $\mathbf{b*c}$. For a problem with two column vectors and three measurements this can still be drawn as a figure, for instance as shown here: http://www.math.brown.edu/~banchoff/gc/linalg/linalg.html ) Understanding this concept is also important in non-linear regression. For instance it is much easier to solve $y=a e^{ct} + b e^{dt}$ than $y=u(e^{c(t-v)}+e^{d(t-v)})$ because the first parameterization allows solving for the $a$ and $b$ coefficients with the techniques for linear regression.
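The projection interpretation can be checked numerically: the least-squares residual is orthogonal to every column of the design matrix. A small numpy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.normal(size=30)
c = rng.normal(size=30)
M = np.column_stack([b, c, b * c])   # the design matrix from the equation above
a = M @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=30)

alpha = np.linalg.lstsq(M, a, rcond=None)[0]
resid = a - M @ alpha

# Least squares projects a onto the column span of M, so the residual
# is orthogonal to every column: M'resid == 0 (to machine precision).
orth = M.T @ resid
```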
14,338
What does linear stand for in linear regression?
The specific answer to the question is "yes, that is a linear model". In R the "*" operator used in a formula creates what is known as an interaction. If those two variables are both continuous, then the new variable created will be a mathematical product, but it also has meanings when one or both of the variables are categorical (known as factors in R parlance). The reason it is called a linear model is that the formula implies a relationship between the left-hand side and the right-hand side that is determined by parameters for each term that are "linear" or "constants", which are solved for to minimize the total deviation of the data from the model. The answer from Sextus Empiricus lays that out formally: $\mathbf{a} = \alpha_b \mathbf{b} + \alpha_c \mathbf{c} + \alpha_{b*c} \mathbf{b*c} + \mathbf{\epsilon} $ In R the variables a, b, and c can be defined in a manner that will produce a "non-planar" interaction. (I choose that term because the phrase "non-linear" would conflict with its meaning in regression terminology.) The best-fit interaction model will be a twisted surface.

    c = runif(100)
    b = runif(100)
    a = 3*b + 6*c - 8*b*c + rnorm(100)  # higher combined values of b & c will be
                                        # lower than without the interaction
    ls.fit <- lm(a ~ b + c + b*c)       # formula could have been just a ~ b*c
    summary( lm(a ~ b + c + b*c) )
    #--------------------
    Call:
    lm(formula = a ~ b + c + b * c)

    Residuals:
         Min       1Q   Median       3Q      Max
    -2.61259 -0.50276  0.09259  0.69230  2.11442

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept)  -0.6965     0.3540  -1.967    0.052 .
    b             3.7147     0.6363   5.838 7.18e-08 ***
    c             7.3041     0.6500  11.237  < 2e-16 ***
    b:c          -9.5917     1.2091  -7.933 3.94e-12 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.9264 on 96 degrees of freedom
    Multiple R-squared: 0.5973, Adjusted R-squared: 0.5847
    F-statistic: 47.46 on 3 and 96 DF,  p-value: < 2.2e-16
    #-------------------
    library(lattice)  # for wireframe()
    y <- predict( lm(a ~ b + c + b*c),  # predict idealized values from rectangular grid
                  newdata = expand.grid(b = seq(0, 1, length = 20),
                                        c = seq(0, 1, length = 20)) )
    png()
    wireframe( y ~ b + c,
               data = data.frame(y, expand.grid(b = seq(0, 1, length = 20),
                                                c = seq(0, 1, length = 20))),
               screen = list(z = 90, x = -60) )
    dev.off()  # now insert it in answer
14,339
What's the maximum value of Kullback-Leibler (KL) divergence
Or even with the same support, when one distribution has a much fatter tail than the other. Take $$KL(P\vert\vert Q) = \int p(x)\log\left(\frac{p(x)}{q(x)}\right) \,\text{d}x$$ when $$p(x)=\overbrace{\frac{1}{\pi}\,\frac{1}{1+x^2}}^\text{Cauchy density}\qquad q(x)=\overbrace{\frac{1}{\sqrt{2\pi}}\,\exp\{-x^2/2\}}^\text{Normal density}$$ then $$KL(P\vert\vert Q) = \int \frac{1}{\pi}\,\frac{1}{1+x^2} \log p(x) \,\text{d}x + \int \frac{1}{\pi}\,\frac{1}{1+x^2} [\log(2\pi)/2+x^2/2]\,\text{d}x$$ and $$\int \frac{1}{\pi}\,\frac{1}{1+x^2} x^2/2\,\text{d}x=+\infty$$ There exist other distances that remain bounded, such as the $L^1$ distance (equivalent to the total variation distance), the Wasserstein distances, and the Hellinger distance.
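The divergence of that last integral is easy to see numerically: its truncated version grows roughly like $T/\pi$ as the truncation point $T$ increases, since $\int_{-T}^{T} \frac{x^2/2}{\pi(1+x^2)}\,dx = (T - \arctan T)/\pi$. A small sketch (assuming SciPy is available):

```python
import math
from scipy.integrate import quad

# standard Cauchy density
cauchy = lambda x: 1 / (math.pi * (1 + x**2))

# truncated quadratic term of KL(Cauchy || Normal): it never levels off
vals = []
for T in [10, 100, 1000]:
    v, _ = quad(lambda x: cauchy(x) * x**2 / 2, -T, T, limit=200)
    vals.append(v)
    print(T, v)  # grows roughly like T / pi, without bound
```

No matter how far out you integrate, the running value keeps increasing, which is the numerical face of the infinite KL divergence.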
14,340
What's the maximum value of Kullback-Leibler (KL) divergence
For distributions which do not have the same support, KL divergence is not bounded. Look at the definition: $$KL(P\vert\vert Q) = \int_{-\infty}^{\infty} p(x)\ln\left(\frac{p(x)}{q(x)}\right) dx$$ if P and Q do not have the same support, there exists some point $x'$ where $p(x') \neq 0$ and $q(x') = 0$, making KL go to infinity. This is also applicable for discrete distributions, which is your case. Edit: Maybe a better choice to measure divergence between probability distributions would be the so-called Wasserstein distance, which is a metric and has better properties than KL divergence. It has become quite popular due to its applications in deep learning (see WGAN networks).
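For the discrete case mentioned above, this is easy to see directly. A minimal sketch using only the standard library (the example distributions are made up for illustration):

```python
import math

def kl_discrete(p, q):
    """KL(P||Q) for discrete distributions given as aligned probability lists."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue          # convention: 0 * log(0/q) = 0
        if qi == 0:
            return math.inf   # p puts mass where q has none -> unbounded
        total += pi * math.log(pi / qi)
    return total

print(kl_discrete([0.5, 0.5, 0.0], [0.5, 0.25, 0.25]))  # finite
print(kl_discrete([0.5, 0.25, 0.25], [0.5, 0.5, 0.0]))  # inf
```

SciPy's `scipy.special.rel_entr` follows the same elementwise conventions, so summing it over the support gives the same answer.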
14,341
What's the maximum value of Kullback-Leibler (KL) divergence
To add to the excellent answers by Carlos and Xi'an, it is also interesting to note that a sufficient condition for the KL divergence to be finite is for both random variables to have the same compact support, and for the reference density to be bounded. This result also establishes an implicit bound for the maximum of the KL divergence (see theorem and proof below). Theorem: If the densities $p$ and $q$ have the same compact support $\mathscr{X}$ and the density $p$ is bounded on that support (i.e., it has a finite upper bound) then $KL(P||Q) < \infty$. Proof: Since $q$ has compact support $\mathscr{X}$ this means that there is some positive infimum value: $$\underline{q} \equiv \inf_{x \in \mathscr{X}} q(x) > 0.$$ Similarly, since $p$ has compact support $\mathscr{X}$ this means that there is some positive supremum value: $$\bar{p} \equiv \sup_{x \in \mathscr{X}} p(x) > 0.$$ Moreover, since these are both densities on the same support, and the latter is bounded, we have $0 < \underline{q} \leqslant \bar{p} < \infty$. This means that: $$\sup_{x \in \mathscr{X}} \ln \Bigg( \frac{p(x)}{q(x)} \Bigg) \leqslant \ln ( \bar{p}) - \ln(\underline{q}).$$ Now, letting $\underline{L} \equiv \ln ( \bar{p}) - \ln(\underline{q})$ be the latter upper bound, we clearly have $0 \leqslant \underline{L} < \infty$ so that: $$\begin{equation} \begin{aligned} KL(P||Q) &= \int \limits_{\mathscr{X}} \ln \Bigg( \frac{p(x)}{q(x)} \Bigg) p(x) dx \\[6pt] &\leqslant \sup_{x \in \mathscr{X}} \ln \Bigg( \frac{p(x)}{q(x)} \Bigg) \int \limits_{\mathscr{X}} p(x) dx \\[6pt] &\leqslant (\ln ( \bar{p}) - \ln(\underline{q})) \int \limits_{\mathscr{X}} p(x) dx \\[6pt] &= \underline{L} < \infty. \\[6pt] \end{aligned} \end{equation}$$ This establishes the required upper bound, which proves the theorem. $\blacksquare$
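The bound can be illustrated with a hypothetical pair of densities on $[0,1]$ (my own example, not from the theorem above): take $p(x) = 2x$, so $\bar{p} = 2$, and $q$ uniform, so $\underline{q} = 1$; then $KL(P||Q) = \ln 2 - 1/2$, comfortably below the bound $\ln\bar{p} - \ln\underline{q} = \ln 2$. A numerical sketch (assuming SciPy is available):

```python
import math
from scipy.integrate import quad

p = lambda x: 2 * x      # density on [0, 1], sup p = 2
q = lambda x: 1.0        # uniform density on [0, 1], inf q = 1

# KL integrand, using the convention 0 * log(0) = 0 at x = 0
integrand = lambda x: p(x) * math.log(p(x) / q(x)) if p(x) > 0 else 0.0
kl, _ = quad(integrand, 0, 1)

bound = math.log(2) - math.log(1)   # ln(sup p) - ln(inf q)
print(kl, bound)                    # kl = ln 2 - 1/2, below the bound ln 2
```

The exact value $\int_0^1 2x \ln(2x)\,dx = \ln 2 - 1/2 \approx 0.193$ matches the numerical integral.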
14,342
What's the maximum value of Kullback-Leibler (KL) divergence
An answer is here https://arxiv.org/abs/2008.05932 You must define an L-shaped distribution by transforming the probability distribution into a multiplicity distribution, calculating the quantum of the distribution, and going back to a probability distribution. The maximum of the KL for your distribution P is KL(P||L).
14,343
What is the expected value of the logarithm of Gamma distribution?
This one (maybe surprisingly) can be done with easy elementary operations (employing Richard Feynman's favorite trick of differentiating under the integral sign with respect to a parameter). We are supposing $X$ has a $\Gamma(\alpha,\beta)$ distribution and we wish to find the expectation of $Y=\log(X).$ First, because $\beta$ is a scale parameter, its effect will be to shift the logarithm by $\log\beta.$ (If you use $\beta$ as a rate parameter, as in the question, it will shift the logarithm by $-\log\beta.$) This permits us to work with the case $\beta=1.$ After this simplification, the probability element of $X$ is $$f_X(x) = \frac{1}{\Gamma(\alpha)} x^\alpha e^{-x} \frac{\mathrm{d}x}{x}$$ where $\Gamma(\alpha)$ is the normalizing constant $$\Gamma(\alpha) = \int_0^\infty x^\alpha e^{-x} \frac{\mathrm{d}x}{x}.$$ Substituting $x=e^y,$ which entails $\mathrm{d}x/x = \mathrm{d}y,$ gives the probability element of $Y$, $$f_Y(y) = \frac{1}{\Gamma(\alpha)} e^{\alpha y - e^y} \mathrm{d}y.$$ The possible values of $Y$ now range over all the real numbers $\mathbb{R}.$ Because $f_Y$ must integrate to unity, we obtain (trivially) $$\Gamma(\alpha) = \int_\mathbb{R} e^{\alpha y - e^y} \mathrm{d}y.\tag{1}$$ Notice $f_Y(y)$ is a differentiable function of $\alpha.$ An easy calculation gives $$\frac{\mathrm{d}}{\mathrm{d}\alpha}e^{\alpha y - e^y} \mathrm{d}y = y\, e^{\alpha y - e^y} \mathrm{d}y = \Gamma(\alpha) y\,f_Y(y).$$ The next step exploits the relation obtained by dividing both sides of this identity by $\Gamma(\alpha),$ thereby exposing the very object we need to integrate to find the expectation; namely, $y f_Y(y):$ $$\eqalign{ \mathbb{E}(Y) &= \int_\mathbb{R} y\, f_Y(y) = \frac{1}{\Gamma(\alpha)} \int_\mathbb{R} \frac{\mathrm{d}}{\mathrm{d}\alpha}e^{\alpha y - e^y} \mathrm{d}y \\ &= \frac{1}{\Gamma(\alpha)} \frac{\mathrm{d}}{\mathrm{d}\alpha}\int_\mathbb{R} e^{\alpha y - e^y} \mathrm{d}y\\ &= \frac{1}{\Gamma(\alpha)} \frac{\mathrm{d}}{\mathrm{d}\alpha}\Gamma(\alpha)\\ 
&= \frac{\mathrm{d}}{\mathrm{d}\alpha}\log\Gamma(\alpha)\\ &=\psi(\alpha), }$$ the logarithmic derivative of the gamma function (the digamma function). The integral was computed using identity $(1).$ Re-introducing the factor $\beta$ shows the general result is $$\mathbb{E}(\log(X)) = \log\beta + \psi(\alpha)$$ for a scale parameterization (where the density function depends on $x/\beta$) or $$\mathbb{E}(\log(X)) = -\log\beta + \psi(\alpha)$$ for a rate parameterization (where the density function depends on $x\beta$).
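The closing identity is easy to sanity-check by simulation. A sketch assuming SciPy, with arbitrary parameter values; note that scipy parameterizes the gamma by a scale, so a rate $\beta$ enters as scale $= 1/\beta$:

```python
import numpy as np
from scipy import stats
from scipy.special import digamma

alpha, beta = 3.5, 2.0   # shape and *rate* (arbitrary illustrative values)

# scipy's gamma uses a scale parameter, so rate beta -> scale 1/beta
x = stats.gamma(a=alpha, scale=1 / beta).rvs(size=200_000, random_state=0)

mc = np.log(x).mean()                     # Monte Carlo estimate of E[log X]
exact = digamma(alpha) - np.log(beta)     # psi(alpha) - log(beta), rate form
print(mc, exact)
```

With 200,000 draws the Monte Carlo average agrees with $\psi(\alpha) - \log\beta$ to about two decimal places.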
14,344
What is the expected value of the logarithm of Gamma distribution?
The answer by @whuber is quite nice; I will essentially restate his answer in a more general form which connects (in my opinion) better with statistical theory, and which makes clear the power of the overall technique. Consider a family of distributions $\{F_\theta : \theta \in \Theta\}$ which constitute an exponential family, meaning they admit a density $$ f_\theta(x) = \exp\left\{s(x)\theta - A(\theta) + h(x)\right\} $$ with respect to some common dominating measure (usually, Lebesgue or counting measure). Differentiating both sides of $$ \int f_\theta(x) \ dx = 1 $$ with respect to $\theta$ we arrive at the score equation $$ \int f'_\theta(x) = \int \frac{f'_\theta(x)}{f_\theta(x)} f_\theta(x) = \int u_\theta(x) \, f_\theta(x) \ dx = 0 \tag{$\dagger$} $$ where $u_\theta(x) = \frac d {d\theta} \log f_\theta(x)$ is the score function and we have defined $f'_\theta(x) = \frac{d}{d\theta} f_\theta(x)$. In the case of an exponential family, we have $$ u_\theta(x) = s(x) - A'(\theta) $$ where $A'(\theta) = \frac d {d\theta} A(\theta)$; this is sometimes called the cumulant function, as it is evidently very closely related to the cumulant-generating function. It follows now from $(\dagger)$ that $E_\theta[s(X)] = A'(\theta)$. We now show this helps us compute the required expectation. We can write the gamma density with fixed $\beta$ as an exponential family $$ f_\theta(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x} = \exp\left\{\log(x) \alpha + \alpha \log \beta - \log \Gamma(\alpha) - \beta x \right\}. $$ This is an exponential family in $\alpha$ alone with $s(x) = \log x$ and $A(\alpha) = \log \Gamma(\alpha) - \alpha \log \beta$. It now follows immediately by computing $\frac d {d\alpha} A(\alpha)$ that $$ E[\log X] = \psi(\alpha) - \log \beta. $$
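The identity $E_\theta[s(X)] = A'(\theta)$ can be checked numerically by differencing the gamma family's cumulant function. A sketch assuming SciPy, with arbitrary parameter values:

```python
import math
from scipy.special import gammaln, digamma

alpha, beta = 3.5, 2.0   # arbitrary illustrative values

# cumulant function of the gamma family in alpha (beta fixed):
# A(alpha) = log Gamma(alpha) - alpha * log(beta)
A = lambda a: gammaln(a) - a * math.log(beta)

h = 1e-6
numeric = (A(alpha + h) - A(alpha - h)) / (2 * h)  # central difference for A'(alpha)
exact = digamma(alpha) - math.log(beta)            # the claimed E[log X]
print(numeric, exact)
```

The finite difference of $A$ and the closed form $\psi(\alpha) - \log\beta$ agree to many decimal places, exactly as the score equation predicts.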
14,345
How to find the mode of a probability density function?
Saying "the mode" implies that the distribution has one and only one. In general a distribution may have many modes, or (arguably) none. If there's more than one mode you need to specify if you want all of them or just the global mode (if there is exactly one). Assuming we restrict ourselves to unimodal distributions*, so we can speak of "the" mode, they're found in the same way as finding maxima of functions more generally. *note that page says "as the term "mode" has multiple meanings, so does the term "unimodal"" and offers several definitions of mode -- which can change what, exactly, counts as a mode, whether there are 0, 1, or more -- and also alters the strategy for identifying them. Note in particular how general the phrasing of what unimodality means is in the opening paragraph: "unimodality means there is only a single highest value, somehow defined". One definition offered on that page is: A mode of a continuous probability distribution is a value at which the probability density function (pdf) attains its maximum value. So given a specific definition of the mode you find it as you would find that particular definition of "highest value" when dealing with functions more generally (assuming that the distribution is unimodal under that definition). There are a variety of strategies in mathematics for identifying such things, depending on circumstances. See the "Finding functional maxima and minima" section of the Wikipedia page on Maxima and minima, which gives a brief discussion. For example, if things are sufficiently nice -- say we're dealing with a continuous random variable, where the density function has continuous first derivative -- you might proceed by trying to find where the derivative of the density function is zero, and checking which type of critical point it is (maximum, minimum, horizontal point of inflexion). If there's exactly one such point which is a local maximum, it should be the mode of a unimodal distribution. 
However, in general things are more complicated (e.g. the mode may not be a critical point), and the broader strategies for finding maxima of functions come in. Sometimes, finding where derivatives are zero algebraically may be difficult or at least cumbersome, but it may still be possible to identify maxima in other ways. For example, it may be that one might invoke symmetry considerations in identifying the mode of a unimodal distribution. Or one might invoke some form of numerical algorithm on a computer, to find a mode numerically. Here are some cases that illustrate typical things that you need to check for - even when the function is unimodal and at least piecewise continuous. So, for example, we must check endpoints (center diagram), points where the derivative changes sign (but may not be zero; first diagram), and points of discontinuity (third diagram). In some cases, things may not be so neat as these three; you have to try to understand the characteristics of the particular function you're dealing with. I haven't touched on the multivariate case, where even when functions are quite "nice", just finding local maxima may be substantially more complex (e.g. the numerical methods for doing so can fail in a practical sense, even when they logically must succeed eventually).
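As a concrete instance of the "sufficiently nice" case, a mode can be located numerically by maximizing the density. A sketch assuming SciPy; the Gamma example is my own illustration, chosen because its mode is known in closed form ($\alpha - 1$ for shape $\alpha > 1$, scale 1):

```python
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

alpha = 4.0  # Gamma(alpha, scale=1): unimodal with a smooth interior maximum

# maximize the pdf by minimizing its negative on a bracketing interval
res = minimize_scalar(lambda x: -gamma.pdf(x, a=alpha),
                      bounds=(0, 20), method="bounded")

print(res.x)  # numerical mode, close to alpha - 1 = 3
```

Note the caveats above still apply: a bounded search like this only finds an interior maximum, so endpoint modes and discontinuities (the center and right diagrams) need separate checks.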
How to find the mode of a probability density function?
Saying "the mode" implies that the distribution has one and only one. In general a distribution may have many modes, or (arguably) none. If there's more than one mode you need to specify if you want a
How to find the mode of a probability density function? Saying "the mode" implies that the distribution has one and only one. In general a distribution may have many modes, or (arguably) none. If there's more than one mode you need to specify if you want all of them or just the global mode (if there is exactly one). Assuming we restrict ourselves to unimodal distributions*, so we can speak of "the" mode, they're found in the same way as finding maxima of functions more generally. *note that page says "as the term "mode" has multiple meanings, so does the term "unimodal"" and offers several definitions of mode -- which can change what, exactly, counts as a mode, whether there is 0 1 or more -- and also alters the strategy for identifying them. Note particularly how general the "more general" phrasing of what unimodality is in the opening paragraph "unimodality means there is only a single highest value, somehow defined" One definition offered on that page is: A mode of a continuous probability distribution is a value at which the probability density function (pdf) attains its maximum value So given a specific definition of the mode you find it as you would find that particular definition of "highest value" when dealing with functions more generally, (assuming that the distribution is unimodal under that definition). There are a variety of strategies in mathematics for identifying such things, depending on circumstances. See, the "Finding functional maxima and minima" section of the Wikipedia page on Maxima and minima which gives a brief discussion. For example, if things are sufficiently nice -- say we're dealing with a continuous random variable, where the density function has continuous first derivative -- you might proceed by trying to find where the derivative of the density function is zero, and checking which type of critical point it is (maximum, minimum, horizontal point of inflexion). 
If there's exactly one such point which is a local maximum, it should be the mode of a unimodal distribution. However, in general things are more complicated (e.g. the mode may not be a critical point), and the broader strategies for finding maxima of functions come into play. Sometimes, finding where derivatives are zero algebraically may be difficult or at least cumbersome, but it may still be possible to identify maxima in other ways. For example, one might invoke symmetry considerations in identifying the mode of a unimodal distribution. Or one might invoke some form of numerical algorithm on a computer, to find a mode numerically. Here are some cases that illustrate typical things that you need to check for -- even when the function is unimodal and at least piecewise continuous. So, for example, we must check endpoints (center diagram), points where the derivative changes sign (but may not be zero; first diagram), and points of discontinuity (third diagram). In some cases, things may not be so neat as these three; you have to try to understand the characteristics of the particular function you're dealing with. I haven't touched on the multivariate case, where even when functions are quite "nice", just finding local maxima may be substantially more complex (e.g. the numerical methods for doing so can fail in a practical sense, even when they logically must succeed eventually).
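To make the derivative-based strategy concrete, here is a small illustration of my own (not part of the answer above): for a Beta(2, 5) density, proportional to $x(1-x)^4$ on $(0,1)$, setting the derivative to zero gives the mode at $x = (a-1)/(a+b-2) = 0.2$, and a crude grid search over the density kernel agrees.

```python
def beta_kernel(x, a=2, b=5):
    # Unnormalised Beta(a, b) density: x^(a-1) * (1 - x)^(b-1)
    return x ** (a - 1) * (1 - x) ** (b - 1)

# Crude numerical mode: evaluate the kernel on a fine grid, take the argmax
grid = [i / 10000 for i in range(1, 10000)]
numeric_mode = max(grid, key=beta_kernel)

analytic_mode = (2 - 1) / (2 + 5 - 2)  # (a-1)/(a+b-2) = 0.2 for a=2, b=5
```

The normalising constant is irrelevant here, since it does not change where the maximum occurs.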
14,346
How to find the mode of a probability density function?
This answer focuses entirely on mode estimation from a sample, with emphasis on one particular method. If there is any strong sense in which you already know the density, analytically or numerically, then the preferred answer is, in brief, to look for the single maximum or multiple maxima directly, as in the answer from @Glen_b. "Half-sample modes" may be calculated using recursive selection of the half-sample with the shortest length. Although it has longer roots, an excellent presentation of this idea was given by Bickel and Frühwirth (2006). The idea of estimating the mode as the midpoint of the shortest interval that contains a fixed number of observations goes back at least to Dalenius (1965). See also Robertson and Cryer (1974), Bickel (2002) and Bickel and Frühwirth (2006) on other estimators of the mode. The order statistics of a sample of $n$ values of $x$ are defined by $ x_{(1)} \le x_{(2)} \le \cdots \le x_{(n-1)} \le x_{(n)}$. The half-sample mode is here defined using two rules.

Rule 1. If $n = 1$, the half-sample mode is $x_{(1)}$. If $n = 2$, the half-sample mode is $(x_{(1)} + x_{(2)}) / 2$. If $n = 3$, the half-sample mode is $(x_{(1)} + x_{(2)}) / 2$ if $x_{(1)}$ and $x_{(2)}$ are closer than $x_{(2)}$ and $x_{(3)}$, $(x_{(2)} + x_{(3)}) / 2$ if the opposite is true, and $x_{(2)}$ otherwise.

Rule 2. If $n \ge 4$, we apply recursive selection until left with $3$ or fewer values. First let $h_1 = \lfloor n / 2\rfloor$. The shortest half of the data from rank $k$ to rank $k + h_1$ is identified to minimise $x_{(k + h_1)} - x_{(k)}$ over $k = 1, \cdots, n - h_1$. Then the shortest half of those $h_1 + 1$ values is identified using $h_2 = \lfloor h_1 / 2\rfloor$, and so on. To finish, use Rule 1.

The idea of identifying the shortest half is applied in the "shorth" named by J.W. 
Tukey and introduced in the Princeton robustness study of estimators of location by Andrews, Bickel, Hampel, Huber, Rogers and Tukey (1972, p.26) as the mean of the shortest half $x_{(k)}, \cdots, x_{(k + h)}$ for $h = \lfloor n / 2 \rfloor$. Rousseeuw (1984), building on a suggestion by Hampel (1975), pointed out that the midpoint of the shortest half $(x_{(k)} + x_{(k + h)}) / 2$ is the least median of squares (LMS) estimator of location for $x$. See Rousseeuw (1984) and Rousseeuw and Leroy (1987) for applications of LMS and related ideas to regression and other problems. Note that this LMS midpoint is also called the shorth in some more recent literature (e.g. Maronna, Martin and Yohai 2006, p.48). Further, the shortest half itself is also sometimes called the shorth, as the title of Grübel (1988) indicates. For a Stata implementation and more detail, see shorth from SSC.

Some broad-brush comments follow on advantages and disadvantages of half-sample modes, from the standpoint of practical data analysts as much as mathematical or theoretical statisticians. Whatever the project, it will always be wise to compare results with standard summary measures (e.g. medians or means, including geometric and harmonic means) and to relate results to graphs of distributions. Moreover, if your interest is in the existence or extent of bimodality or multimodality, it will be best to look directly at suitably smoothed estimates of the density function.

Mode estimation By summarizing where the data are densest, the half-sample mode adds an automated estimator of the mode to the toolbox. More traditional estimates of the mode based on identifying peaks on histograms or even kernel density plots are sensitive to decisions about bin origin or width or kernel type and kernel half-width, and are more difficult to automate in any case. 
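Incidentally, the LMS midpoint described above is simple to compute; here is a sketch of my own (`lms_location` is a hypothetical name), taking the midpoint of the shortest half $x_{(k)}, \cdots, x_{(k+h)}$ with $h = \lfloor n/2 \rfloor$:

```python
def lms_location(xs):
    # Midpoint of the shortest half x_(k) .. x_(k+h), h = floor(n/2);
    # ties are broken here by taking the first (lowest-k) shortest half.
    xs = sorted(xs)
    h = len(xs) // 2
    k = min(range(len(xs) - h), key=lambda k: xs[k + h] - xs[k])
    return (xs[k] + xs[k + h]) / 2
```

Note how the large outlier in a sample like [0, 1, 2, 3, 100] is simply never covered by the shortest half, so it cannot drag the estimate upward.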
When applied to distributions that are unimodal and approximately symmetric, the half-sample mode will be close to the mean and median, but more resistant than the mean to outliers in either tail. When applied to distributions that are unimodal and asymmetric, the half-sample mode will typically be much nearer the mode identified by other methods than either the mean or the median.

Simplicity The idea of the half-sample mode is fairly simple and easy to explain to students and researchers who do not regard themselves as statistical specialists.

Graphic interpretation The half-sample mode can easily be related to standard displays of distributions such as kernel density plots, cumulative distribution and quantile plots, histograms and stem-and-leaf plots.

At the same time, note that:

Not useful for all distributions When applied to distributions that are approximately J-shaped, the half-sample mode will approximate the minimum of the data. When applied to distributions that are approximately U-shaped, the half-sample mode will be within whichever half of the distribution happens to have higher average density. Neither behaviour seems especially interesting or useful, but equally there is little call for single mode-like summaries for J-shaped or U-shaped distributions. For U shapes, bimodality makes the idea of a single mode moot, if not invalid.

Ties The shortest half may not be uniquely defined. Even with measured data, rounding of reported values may frequently give rise to ties. What to do with two or more shortest halves has been little discussed in the literature. Note that tied halves may either overlap or be disjoint. The procedure adopted in the Stata implementation hsmode given $t$ ties is to use the middlemost in order, except that that is in turn not uniquely defined unless $t$ is odd. The middlemost is arbitrarily taken to have position $\lceil t/ 2\rceil$ in order, counting upwards. This is thus the 1st of 2, the 2nd of 3 or 4, and so forth. 
This tie-break rule has some quirky consequences. Thus with values $-9, -4, -1, 0, 1, 4, 9$, the rules yield $-0.5$ as the half-sample mode, not $0$ as would be natural on all other grounds. Otherwise put, this problem can arise because for a window to be placed symmetrically the window length $1 + \lfloor n / 2\rfloor$ must be odd for odd $n$ and even for even $n$, which is difficult to achieve given other desiderata, notably that window length should never decrease with sample size. We prefer to believe that this is a minor problem with datasets of reasonable size.

Rationale for window length Why half is taken to mean $1 + \lfloor n / 2\rfloor$ also does not appear to be discussed. Evidently we need a rule that yields a window length for both odd and even $n$; it is preferable that the rule be simple; and there is usually some slight arbitrariness in choosing a rule of this kind. It is also important that any rule behave reasonably for small $n$: even if a program is not deliberately invoked for very small sample sizes the procedure used should make sense for all possible sizes. Note that, given $n = 1,$ the half-sample mode is just the single sample value, and, given $n = 2$, it is the average of the two sample values. A further detail about this rule is that it always defines a slight majority, thus enforcing democratic decisions about the data. However, there seems no strong reason not to use $\lceil n / 2\rceil$ as an even simpler rule, except that if it makes much difference, then it is likely that your sample size or variable is unsuitable for the purpose.

Robertson and Cryer (1974, p.1014) reported 35 measurements of uric acid (in mg/100 ml): $1.6, 3.11, 3.95, 4.2, 4.2, 4.62, 4.62, 4.62, 4.7, 4.87, 5.04, 5.29, 5.3, 5.38, 5.38, 5.38, 5.54, 5.54, 5.63, 5.71, 6.13, 6.38, 6.38, 6.67, 6.69, 6.97, 7.22, 7.72, 7.98, 7.98, 8.74, 8.99, 9.27, 9.74, 10.66.$ The Stata implementation hsmode reports a mode of 5.38. 
Robertson and Cryer's own estimates using a rather different procedure are $5.00, 5.02, 5.04$. Compare with your favourite density estimation procedure.

Andrews, D.F., P.J. Bickel, F.R. Hampel, P.J. Huber, W.H. Rogers and J.W. Tukey. 1972. Robust estimates of location: survey and advances. Princeton, NJ: Princeton University Press.
Bickel, D.R. 2002. Robust estimators of the mode and skewness of continuous data. Computational Statistics & Data Analysis 39: 153-163.
Bickel, D.R. and R. Frühwirth. 2006. On a fast, robust estimator of the mode: comparisons to other estimators with applications. Computational Statistics & Data Analysis 50: 3500-3530.
Dalenius, T. 1965. The mode - A neglected statistical parameter. Journal, Royal Statistical Society A 128: 110-117.
Grübel, R. 1988. The length of the shorth. Annals of Statistics 16: 619-628.
Hampel, F.R. 1975. Beyond location parameters: robust concepts and methods. Bulletin, International Statistical Institute 46: 375-382.
Maronna, R.A., R.D. Martin and V.J. Yohai. 2006. Robust statistics: theory and methods. Chichester: John Wiley.
Robertson, T. and J.D. Cryer. 1974. An iterative procedure for estimating the mode. Journal, American Statistical Association 69: 1012-1016.
Rousseeuw, P.J. 1984. Least median of squares regression. Journal, American Statistical Association 79: 871-880.
Rousseeuw, P.J. and A.M. Leroy. 1987. Robust regression and outlier detection. New York: John Wiley.

This account is based on documentation for Cox, N.J. 2007. HSMODE: Stata module to calculate half-sample modes, http://EconPapers.repec.org/RePEc:boc:bocode:s456818. See also David R. Bickel's website here for information on implementations in other software.
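Rules 1 and 2 above translate into a short sketch (my own, not the Stata hsmode code; ties here are broken by taking the first shortest half, which happens to coincide with the "middlemost" rule when there are exactly two ties):

```python
def hsm(xs):
    # Half-sample mode per Rules 1 and 2 above
    xs = sorted(xs)
    h = len(xs) // 2                       # h_1 = floor(n / 2)
    while len(xs) > 3:                     # Rule 2: recursive selection
        k = min(range(len(xs) - h), key=lambda k: xs[k + h] - xs[k])
        xs = xs[k:k + h + 1]               # keep the shortest half (h + 1 values)
        h //= 2                            # h_{i+1} = floor(h_i / 2)
    if len(xs) == 1:                       # Rule 1: 1, 2, or 3 values left
        return xs[0]
    if len(xs) == 2:
        return (xs[0] + xs[1]) / 2
    if xs[1] - xs[0] < xs[2] - xs[1]:
        return (xs[0] + xs[1]) / 2
    if xs[2] - xs[1] < xs[1] - xs[0]:
        return (xs[1] + xs[2]) / 2
    return xs[1]
```

This sketch reproduces the quirky example discussed above: hsm([-9, -4, -1, 0, 1, 4, 9]) returns -0.5 rather than 0.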
14,347
How to find the mode of a probability density function?
If you have samples from the distribution in a vector "x", I would do:

mymode <- function(x) {
  d <- density(x)
  d$x[which.max(d$y)]
}

You should tune the density function so it is smooth enough on the top ;-). If you have only the density of the distribution, I would use an optimiser to find the mode (e.g. Nelder-Mead/simplex or L-BFGS-B). Note that optim() minimises by default, so set fnscale = -1 to maximise the density:

fx <- function(x) {some density equation}
mode <- optim(inits, fx, control = list(fnscale = -1))

Or use a Monte-Carlo sampler to get some samples from the distribution (package rstan) and use the procedure above. (In any case, the Stan package has an "optimizing" function to get the mode of a distribution.)
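For readers without R, here is a rough pure-Python analogue of the same sample-based idea (all names are my own, not from the answer): evaluate a Gaussian kernel density estimate on a grid and return the grid point where the estimate is largest. The bandwidth plays the role of the smoothing that needs tuning.

```python
import math

def kde_mode(sample, bandwidth=0.3, grid_size=512):
    # Grid argmax of an (unnormalised) Gaussian kernel density estimate
    lo = min(sample) - 3 * bandwidth
    hi = max(sample) + 3 * bandwidth
    grid = [lo + (hi - lo) * i / (grid_size - 1) for i in range(grid_size)]

    def dens(x):
        # Sum of Gaussian kernels centred at the sample points
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in sample)

    return max(grid, key=dens)
```

As with density() in R, the result is sensitive to the bandwidth: too small and sampling noise creates spurious peaks, too large and the true peak gets flattened.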
14,348
How to find the mode of a probability density function?
Step 1: Find the first derivative of the function, set it equal to zero, and solve for x. Step 2: Find the second derivative of the function. If the second derivative is negative at the value of x obtained in Step 1, then that x is the mode, and the function attains its maximum there. The case is slightly different when the function is a cubic: the first derivative is then a quadratic, so solving it gives two values of x. Substitute each of these into the second derivative; the value of x at which the second derivative is negative is a local maximum, and hence the mode of that cubic. Lastly, note that the value of x obtained must lie in the range of x given in the question; otherwise the function has no mode there. Thank you.
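A worked instance of the two steps (my example, not the answer's): for the Beta(3, 2) density $f(x) = 12x^2(1-x)$ on $(0,1)$, Step 1 gives $f'(x) = 24x - 36x^2 = 0$, so $x = 0$ or $x = 2/3$; Step 2 gives $f''(2/3) = 24 - 48 = -24 < 0$, so the mode is $2/3$.

```python
# Beta(3, 2) density f(x) = 12 x^2 (1 - x) and its first two derivatives
def f(x):
    return 12 * x ** 2 * (1 - x)

def f_prime(x):        # f'(x) = 24x - 36x^2
    return 24 * x - 36 * x ** 2

def f_second(x):       # f''(x) = 24 - 72x
    return 24 - 72 * x

x_star = 2 / 3         # the root of f'(x) = 0 lying inside (0, 1)
```

The other root, $x = 0$, sits at the boundary of the support and has $f''(0) = 24 > 0$, so it is not a maximum.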
14,349
Probability of drawing a given word from a bag of letters in Scrabble
A formula is requested. Unfortunately, the situation is so complicated it appears that any formula will merely be a roundabout way of enumerating all the possibilities. Instead, this answer offers an algorithm which is (a) tantamount to a formula involving sums of products of binomial coefficients and (b) can be ported to many platforms. To obtain such a formula, break down the possibilities into mutually disjoint groups in two ways: according to how many letters not in the word are selected in the rack (let this be $m$) and according to how many wildcards (blanks) are selected (let this be $w$). When there are $r=7$ tiles in the rack, $N$ available tiles, $M$ available tiles with letters not in the word, and $W=2$ blanks available, the number of possible choices given by $(m,w)$ is $$\binom{M}{m}\binom{W}{w}\binom{N-M-W}{r-m-w}$$ because the choices of non-word letters, blanks, and word letters are independent conditional on $(m,w,r).$ This reduces the problem to finding the number of ways to spell a word when selecting only from the tiles representing the word's letters, given that $w$ blanks are available and $r-m-w$ tiles will be selected. The situation is messy and no closed formula seems available. For instance, when $w=0$ blanks and $m=3$ out-of-word letters are drawn, there will be precisely four tiles left to spell "boot", drawn from the "b", "o", and "t" tiles. Given there are $2$ "b"'s, $8$ "o"'s, and $6$ "t"'s in the Scrabble tile set, there are positive probabilities of drawing (multisets) "bboo", "bbot", "bbtt", "booo", "boot", "bott", "bttt", "oooo", "ooot", "oott", "ottt", and "tttt", but only one of these spells "boot". And that was the easy case! For example, supposing the rack contains five tiles chosen randomly from the "o", "b", and "t" tiles, together with both blanks, there are many more ways to spell "boot"--and not to spell it. For instance, "boot" can be spelled from "__boott" and "__bbttt", but not from "__ttttt". 
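As a sanity check on the displayed formula (my own check; it assumes the standard Scrabble set of $N = 100$ tiles), summing $\binom{M}{m}\binom{W}{w}\binom{N-M-W}{r-m-w}$ over all $(m, w)$ must recover the total number of racks $\binom{N}{r}$, by a Vandermonde-type identity:

```python
from math import comb

# For "boot" the in-word tiles are 2 b's + 8 o's + 6 t's = 16,
# W = 2 blanks, N = 100 tiles in all, r = 7 tiles in the rack.
N, W, r = 100, 2, 7
M = N - W - 16                    # tiles that are neither blanks nor b/o/t
total = sum(
    comb(M, m) * comb(W, w) * comb(N - M - W, r - m - w)
    for m in range(r + 1)
    for w in range(W + 1)
    if r - m - w >= 0
)
```

Every rack is counted exactly once, since the groups indexed by $(m, w)$ are mutually disjoint.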
This counting--the heart of the problem--can be handled recursively. I will describe it with an example. Suppose we wish to count the ways of spelling "boot" with one blank and four more tiles from the collection of "b", "o", and "t" tiles (whence the remaining two tiles show non-blank letters not in {"b", "o", "t"}). Consider the first letter, "b": A "b" can be drawn in $\binom{2}{1}$ ways from the two "b" tiles available. This reduces the problem to counting the number of ways of spelling the suffix "oot" using both blanks and just three more tiles from the collection of "o" and "t" tiles. One blank can be designated as a "b". This reduces the problem to counting the number of ways of spelling "oot" using the remaining blank and just three more tiles from the collection of "o" and "t" tiles. In general, steps (1) and (2)--which are disjoint and therefore contribute additively to the probability calculations--can be implemented as a loop over the possible number of blanks that might be used for the first letter. The reduced problem is solved recursively. The base case occurs when there's one letter left, there is a certain number of tiles with that letter available, and there may be some blanks in the rack, too. We only have to make sure that the number of blanks in the rack plus the number of available tiles will be enough to obtain the desired quantity of that last letter. Here is R code for the recursive step. rack usually equals $7$, word is an array of counts of the letters (such as c(b=1, o=2, t=1)), alphabet is a similar structure giving the numbers of available tiles with those letters, and wild is the number of blanks assumed to occur in the rack. 
f <- function(rack, word, alphabet, wild) {
  if (length(word) == 1) {
    return(ifelse(word > rack + wild, 0, choose(alphabet, rack)))
  }
  n <- word[1]
  if (n <= 0) return(0)
  m <- alphabet[1]
  x <- sapply(max(0, n - wild):min(m, rack), function(i) {
    choose(m, i) * f(rack - i, word[-1], alphabet[-1], wild - max(0, n - i))
  })
  return(sum(x))
}

An interface to this function specifies the standard Scrabble tiles, converts a given word into its multiset data structure, and performs the double sum over $m$ and $w$. Here is where the binomial coefficients $\binom{M}{m}$ and $\binom{W}{w}$ are computed and multiplied.

scrabble <- function(sword, n.wild = 2, rack = 7,
                     alphabet = c(a=9, b=2, c=2, d=4, e=12, f=2, g=3, h=2, i=9, j=1, k=1, l=4, m=2,
                                  n=6, o=8, p=2, q=1, r=6, s=4, t=6, u=4, v=2, w=2, x=1, y=2, z=1),
                     N = sum(alphabet) + n.wild) {
  word = sort(table(strsplit(sword, NULL)))  # Sorting speeds things a little
  a <- sapply(names(word), function(s) alphabet[s])
  names(a) <- names(word)
  x <- sapply(0:n.wild, function(w) {
    sapply(sum(word):rack-w, function(i) {
      f(i, word, a, wild=w) * choose(n.wild, w) * choose(N - n.wild - sum(a), rack - w - i)
    })
  })
  return(list(numerator = sum(x), denominator = choose(N, rack),
              value = sum(x) / choose(N, rack)))
}

Let's try out this solution and time it as we go. The following test uses the same inputs employed in the simulations by @Rasmus Bååth:

system.time(x <- sapply(c("boot", "red", "axe", "zoology"), scrabble))

This machine reports $0.05$ seconds total elapsed time: reasonably quick. The results?

> x
            boot        red         axe         zoology
numerator   114327888   1249373480  823897928   11840
denominator 16007560800 16007560800 16007560800 16007560800
value       0.007142118 0.07804896  0.0514693   7.396505e-07

The probability for "boot" of $114327888/16007560800$ exactly equals the value $2381831/333490850$ obtained in my other answer (which uses a similar method but couches it in a more powerful framework requiring a symbolic algebra computing platform). 
The probabilities for all four words are reasonably close to Bååth's simulations (which could not be expected to give an accurate value for "zoology" due to its low probability of $11840/16007560800,$ which is less than one in a million).
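One can quickly confirm (my check, not part of the answer) that the two exact fractions quoted for "boot" agree:

```python
from fractions import Fraction

# The answer quotes the "boot" probability in two forms; they must
# reduce to the same rational number.
p1 = Fraction(114327888, 16007560800)
p2 = Fraction(2381831, 333490850)
```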
Probability of drawing a given word from a bag of letters in Scrabble
A formula is requested. Unfortunately, the situation is so complicated it appears that any formula will merely be a roundabout way of enumerating all the possibilities. Instead, this answer offers a
Probability of drawing a given word from a bag of letters in Scrabble A formula is requested. Unfortunately, the situation is so complicated it appears that any formula will merely be a roundabout way of enumerating all the possibilities. Instead, this answer offers an algorithm which is (a) tantamount to a formula involving sums of products of binomial coefficients and (b) can be ported to many platforms. To obtain such a formula, break down the possibilities into mutually disjoint groups in two ways: according to how many letters not in the word are selected in the rack (let this be $m$) and according to how many wildcards (blanks) are selected (let this be $w$). When there are $r=7$ tiles in the rack, $N$ available tiles, $M$ available tiles with letters not in the word, and $W=2$ blanks available, the number of possible choices given by $(m,w)$ is $$\binom{M}{m}\binom{W}{w}\binom{N-M-W}{r-m-w}$$ because the choices of non-word letters, blanks, and word letters are independent conditional on $(m,w,r).$ This reduces the problem to finding the number of ways to spell a word when selecting only from the tiles representing the word's letters, given that $w$ blanks are available and $r-m-w$ tiles will be selected. The situation is messy and no closed formula seems available. For instance, with $w=0$ blanks and $m=3$ out-of-word letters are drawn there will be precisely four letters left to spell "boot" that were drawn from the "b", "o", and "t" tiles. Given there are $2$ "b"'s, $8$ "o"'s, and $6$ "t"'s in the Scrabble tile set, there are positive probabilities of drawing (multisets) "bboo", "bbot", "bbtt", "booo", "boot", "bott", "bttt", "oooo", "ooot", "oott", "ottt", and "tttt", but only one of these spells "boot". And that was the easy case! For example, supposing the rack contains five tiles chosen randomly from the "o", "b", and "t" tiles, together with both blanks, there are many more ways to spell "boot"--and not to spell it. 
For instance, "boot" can be spelled from "__boott" and "__bbttt", but not from "__ttttt". This counting--the heart of the problem--can be handled recursively. I will describe it with an example. Suppose we wish to count the ways of spelling "boot" with one blank and four more tiles from the collection of "b", "o", and "t" tiles (whence the remaining two tiles show non-blank letters not in {"b", "o", "t"}). Consider the first letter, "b": A "b" can be drawn in $\binom{2}{1}$ ways from the two "b" tiles available. This reduces the problem to counting the number of ways of spelling the suffix "oot" using both blanks and just three more tiles from the collection of "o" and "t" tiles. One blank can be designated as a "b". This reduces the problem to counting the number of ways of spelling "oot" using the remaining blank and just three more tiles from the collection of "o" and "t" tiles. In general, steps (1) and (2)--which are disjoint and therefore contribute additively to the probability calculations--can be implemented as a loop over the possible number of blanks that might be used for the first letter. The reduced problem is solved recursively. The base case occurs when there's one letter left, there is a certain number of tiles with that letter available, and there may be some blanks in the rack, too. We only have to make sure that the number of blanks in the rack plus the number of available tiles will be enough to obtain the desired quantity of that last letter. Here is R code for the recursive step. rack usually equals $7$, word is an array of counts of the letters (such as c(b=1, o=2, t=1)), alphabet is a similar structure giving the numbers of available tiles with those letters, and wild is the number of blanks assumed to occur in the rack. 
f <- function(rack, word, alphabet, wild) {
  if (length(word) == 1) {
    return(ifelse(word > rack + wild, 0, choose(alphabet, rack)))
  }
  n <- word[1]
  if (n <= 0) return(0)
  m <- alphabet[1]
  x <- sapply(max(0, n - wild):min(m, rack), function(i) {
    choose(m, i) * f(rack - i, word[-1], alphabet[-1], wild - max(0, n - i))
  })
  return(sum(x))
}

An interface to this function specifies the standard Scrabble tiles, converts a given word into its multiset data structure, and performs the double sum over $m$ and $w$. Here is where the binomial coefficients $\binom{M}{m}$ and $\binom{W}{w}$ are computed and multiplied.

scrabble <- function(sword, n.wild=2, rack=7,
                     alphabet=c(a=9,b=2,c=2,d=4,e=12,f=2,g=3,h=2,i=9,j=1,k=1,l=4,m=2,
                                n=6,o=8,p=2,q=1,r=6,s=4,t=6,u=4,v=2,w=2,x=1,y=2,z=1),
                     N=sum(alphabet)+n.wild) {
  word = sort(table(strsplit(sword, NULL)))  # Sorting speeds things a little
  a <- sapply(names(word), function(s) alphabet[s])
  names(a) <- names(word)
  x <- sapply(0:n.wild, function(w) {
    sapply(sum(word):rack-w, function(i) {
      f(i, word, a, wild=w) * choose(n.wild, w) *
        choose(N-n.wild-sum(a), rack-w-i)
    })
  })
  return(list(numerator = sum(x), denominator = choose(N, rack),
              value = sum(x) / choose(N, rack)))
}

Let's try out this solution and time it as we go. The following test uses the same inputs employed in the simulations by @Rasmus Bååth:

system.time(x <- sapply(c("boot", "red", "axe", "zoology"), scrabble))

This machine reports $0.05$ seconds total elapsed time: reasonably quick. The results?

> x
            boot        red         axe         zoology
numerator   114327888   1249373480  823897928   11840
denominator 16007560800 16007560800 16007560800 16007560800
value       0.007142118 0.07804896  0.0514693   7.396505e-07

The probability for "boot" of $114327888/16007560800$ exactly equals the value $2381831/333490850$ obtained in my other answer (which uses a similar method but couches it in a more powerful framework requiring a symbolic algebra computing platform).
The probabilities for all four words are reasonably close to Bååth's simulations (which could not be expected to give an accurate value for "zoology" due to its low probability of $11840/16007560800,$ which is less than one in a million).
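The recursion and the double sum over $m$ and $w$ port directly to other languages. Here is a sketch in Python (function and variable names are my own; `math.comb` plays the role of R's `choose`), which reproduces the exact numerator and denominator for "boot":

```python
from collections import Counter
from math import comb

def ways(rack, need, avail, wild):
    # Ways to pick `rack` tiles from the word's own letter tiles so that,
    # together with `wild` blanks, they cover the letter counts in `need`.
    if len(need) == 1:
        return 0 if need[0] > rack + wild else comb(avail[0], rack)
    n, m = need[0], avail[0]
    return sum(
        comb(m, i) * ways(rack - i, need[1:], avail[1:], wild - max(0, n - i))
        for i in range(max(0, n - wild), min(m, rack) + 1)
    )

def scrabble(word, tile_counts, n_wild=2, rack=7):
    need = Counter(word)
    letters = sorted(need)
    need_v = [need[c] for c in letters]           # letter counts in the word
    avail = [tile_counts[c] for c in letters]     # matching tile counts
    N = sum(tile_counts.values()) + n_wild        # 100 tiles in all
    other = N - n_wild - sum(avail)               # tiles with out-of-word letters
    num = sum(
        ways(i, need_v, avail, w) * comb(n_wild, w) * comb(other, rack - w - i)
        for w in range(n_wild + 1)                # w blanks in the rack
        for i in range(sum(need_v) - w, rack - w + 1)  # i in-word letter tiles
    )
    return num, comb(N, rack)

tile_counts = dict(a=9, b=2, c=2, d=4, e=12, f=2, g=3, h=2, i=9, j=1, k=1,
                   l=4, m=2, n=6, o=8, p=2, q=1, r=6, s=4, t=6, u=4, v=2,
                   w=2, x=1, y=2, z=1)
num, den = scrabble("boot", tile_counts)
print(num, den)   # 114327888 16007560800
```

A small bonus of the port: Python's `range` is empty when its lower bound exceeds its upper bound, which guards the edge case that R's `:` operator would turn into a descending sequence.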
14,350
Probability of drawing a given word from a bag of letters in Scrabble
Answers to the referenced question apply here directly: create a dictionary consisting only of the target word (and its possible wildcard spellings), compute the chance that a random rack cannot form the target, and subtract that from $1$. This computation is fast. Simulations (shown at the end) support the computed answers.

Details

As in the previous answer, Mathematica is used to perform the calculations. Specify the problem: the word (or words, if you like), the letters, their counts, and the rack size. Because all letters not in the word act the same, it greatly speeds the computation to replace them all by a single symbol $\chi$ representing "any letter not in the word."

word = {b, o, o, t};
letters = {b, o, t, \[Chi], \[Psi]};
tileCounts = {2, 8, 6, 82, 2};
rack = 7;

Create a dictionary of this word (or words) and augment it to include all possible wildcard spellings.

dict[words_, nWild_Integer] := Module[{wildcard, w},
  wildcard = {xx___, _, yy___} -> {xx, \[Psi], yy};
  w = Nest[Flatten[ReplaceList[#, wildcard] & /@ #, 1] &, words, nWild];
  Union[Times @@@ Join[w, Times @@@ words]]];
dictionary = dict[{word}, 2]

$\left\{b o^2 t, b o^2 \psi ,b o t \psi ,o^2 t \psi ,b o \psi ^2,o^2 \psi ^2,b t \psi ^2,o t \psi ^2\right\}$

Compute the nonwords:

alphabet = Plus @@ letters;
nonwords = Nest[PolynomialMod[# alphabet, dictionary] &, 1, rack]

$b^7 + 7 b^6 o + 21 b^5 o^2 + \cdots +7 \chi \psi ^6+\psi ^7$

(There are $185$ non-words in this case.) Compute the chances. For sampling with replacement, just substitute the tile counts for the variables:

chances = (Transpose[{letters, tileCounts/(Plus @@ tileCounts)}] /. {a_, b_} -> a -> b);
q = nonwords /. chances;
1 - q

$\frac{207263413}{39062500000}$

This value is approximately $0.00756036.$ For sampling without replacement, use factorial powers instead of powers:

multiplicities = MapThread[Rule, {letters, tileCounts}];
chance[m_] := (ReplaceRepeated[m, Power[xx_, n_] -> FactorialPower[xx, n]] /. multiplicities);
histor = chance /@ MonomialList[nonwords];
q0 = Plus @@ histor / FactorialPower[Total[tileCounts], rack];
1 - q0

$\frac{2381831}{333490850}$

This value is approximately $0.00714212.$ The calculations were practically instantaneous.

Simulation results

Results of $10^6$ iterations with replacement:

simulation = RandomChoice[tileCounts -> letters, {10^6, 7}];
u = Tally[Times @@@ simulation];
(p = Total[Cases[Join[{PolynomialMod[u[[All, 1]], dictionary]}\[Transpose], u, 2],
       {0, _, a_} :> a]] / Length[simulation]) // N

$0.007438$

Compare it to the computed value relative to its standard error:

(p - (1 - q)) / Sqrt[q (1 - q) / Length[simulation]] // N

$-1.41259$

The agreement is fine, strongly supporting the computed result. Results of $10^6$ iterations without replacement:

tilesAll = Flatten[MapThread[ConstantArray[#1, #2] &, {letters, tileCounts}]];
simulation = Table[RandomSample[tilesAll, 7], {i, 1, 10^6}];
u = Tally[Times @@@ simulation];
(p0 = Total[Cases[Join[{PolynomialMod[u[[All, 1]], dictionary]}\[Transpose], u, 2],
       {0, _, a_} :> a]] / Length[simulation]) // N

$0.00717$

Make the comparison:

(p0 - (1 - q0)) / Sqrt[q0 (1 - q0) / Length[simulation]] // N

$0.331106$

The agreement in this simulation was excellent. The total time for simulation was $12$ seconds.
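For readers without Mathematica, the complement count can be cross-checked by direct enumeration in Python (my own sketch, not part of the original answer): classify each of the $100$ tiles as "b", "o", "t", any other letter ($\chi$), or blank ($\psi$), enumerate the possible rack compositions, and keep those whose blanks cover the letter deficit.

```python
from math import comb
from fractions import Fraction
from itertools import product

# Tile classes: b, o, t, any other letter (chi), blank (psi) -- 100 tiles total
avail = {'b': 2, 'o': 8, 't': 6, 'chi': 82, 'psi': 2}
need = {'b': 1, 'o': 2, 't': 1}   # letter counts required for "boot"

num = 0
for nb, no, nt, npsi in product(range(3), range(8), range(7), range(3)):
    nchi = 7 - nb - no - nt - npsi        # remaining rack slots for other letters
    if nchi < 0:
        continue
    deficit = sum(max(0, need[c] - k) for c, k in (('b', nb), ('o', no), ('t', nt)))
    if deficit <= npsi:                   # blanks can stand in for missing letters
        num += (comb(avail['b'], nb) * comb(avail['o'], no) * comb(avail['t'], nt)
                * comb(avail['chi'], nchi) * comb(avail['psi'], npsi))

p = Fraction(num, comb(100, 7))
print(num, p)   # 114327888 2381831/333490850
```

This reproduces both the numerator $114327888$ from the companion answer and the exact without-replacement probability $2381831/333490850$ above.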
14,351
Probability of drawing a given word from a bag of letters in Scrabble
So this is a Monte Carlo solution, that is, we are going to simulate drawing the tiles a zillion times and then we are going to calculate how many of those simulated draws resulted in us being able to form the given word. I've written the solution in R, but you could use any other programming language, say Python or Ruby. I'm first going to describe how to simulate one draw. First let's define the tile frequencies.

# The tile frequency used in English Scrabble, using "_" for blank.
tile_freq <- c(2, 9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6, 4, 2, 2, 1, 2, 1)
tile_names <- as.factor(c("_", letters))
tiles <- rep(tile_names, tile_freq)

## [1] _ _ a a a a a a a a a b b c c d d d d e e e e e e
## [26] e e e e e e f f g g g h h i i i i i i i i i j k l
## [51] l l l m m n n n n n n o o o o o o o o p p q r r r
## [76] r r r s s s s t t t t t t u u u u v v w w x y y z
## 27 Levels: _ a b c d e f g h i j k l m n o p q r ... z

Then encode the word as a vector of letter counts.

word <- "boot"
# A vector of the counts of the letters in the word
word_vector <- table(factor(strsplit(word, "")[[1]], levels=tile_names))

## _ a b c d e f g h i j k l m n o p q r s t u v w x y z
## 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 1 0 0 0 0 0 0

Now draw a sample of seven tiles and encode them in the same way as the word.

tile_sample <- table(sample(tiles, size=7))

## _ a b c d e f g h i j k l m n o p q r s t u v w x y z
## 1 0 0 0 0 1 0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 1 0 1 0 0 0

At last, calculate what letters are missing...

missing <- word_vector - tile_sample
missing <- ifelse(missing < 0, 0, missing)

## _ a b c d e f g h i j k l m n o p q r s t u v w x y z
## 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0

... and sum the number of missing letters and subtract the number of available blanks. If the result is zero or less we succeeded in spelling the word.

sum(missing) - tile_sample["_"] <= 0

## FALSE

In this particular case we didn't though... Now we just need to repeat this many times and calculate the percentage of successful draws. All this is done by the following R function:

word_prob <- function(word, reps = 50000) {
  tile_freq <- c(2, 9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6, 4, 2, 2, 1, 2, 1)
  tile_names <- as.factor(c("_", letters))
  tiles <- rep(tile_names, tile_freq)
  word_vector <- table(factor(strsplit(word, "")[[1]], levels=tile_names))

  successful_draws <- replicate(reps, {
    tile_sample <- table(sample(tiles, size=7))
    missing <- word_vector - tile_sample
    missing <- ifelse(missing < 0, 0, missing)
    sum(missing) - tile_sample["_"] <= 0
  })
  mean(successful_draws)
}

Here reps is the number of simulated draws. Now we can try it out on a number of different words.

> word_prob("boot")
[1] 0.0072
> word_prob("red")
[1] 0.07716
> word_prob("axe")
[1] 0.05088
> word_prob("zoology")
[1] 2e-05
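A rough port of the same Monte Carlo to Python (my translation; pure standard library, with the same missing-letters test) looks like this:

```python
import random
from collections import Counter

def can_spell(rack, word):
    """True if the rack (a list of letters, '_' = blank) covers `word`."""
    have = Counter(rack)
    missing = sum(max(0, n - have[c]) for c, n in Counter(word).items())
    return missing <= have['_']

# Standard English tile counts, '_' for the two blanks (100 tiles in all)
freq = {'_': 2, 'a': 9, 'b': 2, 'c': 2, 'd': 4, 'e': 12, 'f': 2, 'g': 3,
        'h': 2, 'i': 9, 'j': 1, 'k': 1, 'l': 4, 'm': 2, 'n': 6, 'o': 8,
        'p': 2, 'q': 1, 'r': 6, 's': 4, 't': 6, 'u': 4, 'v': 2, 'w': 2,
        'x': 1, 'y': 2, 'z': 1}
tiles = [c for c, n in freq.items() for _ in range(n)]

def word_prob(word, reps=50_000, seed=1):
    random.seed(seed)
    hits = sum(can_spell(random.sample(tiles, 7), word) for _ in range(reps))
    return hits / reps

p = word_prob("boot")
print(p)   # close to the exact 0.00714 from the combinatorial answers
```

With $50{,}000$ replications the standard error is about $0.0004$, so individual runs will wobble around the exact value just as the R results above do.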
14,352
Probability of drawing a given word from a bag of letters in Scrabble
For the word "BOOT" with no wildcards: $$ p_0=\frac{\binom{n_b}{1}\binom{n_o}{2}\binom{n_t}{1}\binom{n-4}{3}}{\binom{n}{7}} $$ With wildcards, it becomes more tedious. Let $p_k$ indicate the probability of being able to play "BOOT" with $k$ wildcards: $$ \begin{eqnarray*} p_0&=&\frac{\binom{n_b}{1}\binom{n_o}{2}\binom{n_t}{1}\binom{n-4}{3}}{\binom{n}{7}} \\ p_1&=&p_0 +\frac{\binom{n_*}{1}\binom{n_o}{2}\binom{n_t}{1}\binom{n-4}{3}}{\binom{n}{7}} + \frac{\binom{n_b}{1}\binom{n_o}{1}\binom{n_*}{1}\binom{n_t}{1}\binom{n-4}{3}}{\binom{n}{7}} + \frac{\binom{n_b}{1}\binom{n_o}{2}\binom{n_*}{1}\binom{n-4}{3}}{\binom{n}{7}}\\ &=&p_0 +\frac{\binom{n_*}{1}\binom{n-4}{3}}{\binom{n}{7}}(\binom{n_o}{2}\binom{n_t}{1} + \binom{n_b}{1}\binom{n_o}{1}\binom{n_t}{1} + \binom{n_b}{1}\binom{n_o}{2})\\ p_2&=&p_1 + \frac{\binom{n_*}{2}\binom{n-4}{3}}{\binom{n}{7}}(\binom{n_b}{1}\binom{n_o}{1} + \binom{n_b}{1}\binom{n_t}{1} + \binom{n_o}{2} + \binom{n_o}{1}\binom{n_t}{1})\\ p_3&=&p_2 + \frac{\binom{n_*}{3}\binom{n-4}{3}}{\binom{n}{7}}(\binom{n_b}{1} + \binom{n_o}{1} + \binom{n_t}{1})\\ p_4&=&p_3 + \frac{\binom{n_*}{4}\binom{n-4}{3}}{\binom{n}{7}}\\ p_i&=&p_4, i\ge4 \end{eqnarray*} $$
14,353
Probability of drawing a given word from a bag of letters in Scrabble
It's been a while since I looked at how I built my project, and my math below may be entirely incorrect, or correct. I may have it backwards; honestly, I forget. BUT! This uses only binomial combinations, without taking the blank tiles into account, which throws the entire thing out of whack: it is the simple combination solution without wildcards.

I asked these questions myself, and built my own Scrabble words probability dictionary because of it. You don't need a dictionary of possible words, only the math behind it and the available letters in the tile bag. The array of English rules is below. I spent weeks developing the math just to answer this question for all English words that can be used in a game, including words that can not be used in a game. It may all be incorrect.

The probability of drawing a given word from a bag of letters in Scrabble requires how many tiles are available in the bag for each letter (A-Z), and whether we're using the wildcards as an addition to the math. The blank tiles are included in this math - assuming 100 tiles, 2 of which are blank. Also, how many tiles are available differs based on the language of the game and the game rules from around the world: English Scrabble differs from Arabic Scrabble, obviously. Just alter the available letters, and the math should do the work. If anyone finds errors, I will be sure to update and resolve them.

Boot: The probability of Boot in a game of Scrabble is about 0.000386 (a chance of 67 out of 173,758 hands), as shown on the word page for boot.

English Tiles

all is the array of letters in the bag, count is the array of available tiles for each letter, and point is the point value of the letter.

// All arranged by letter, number of tiles in a Scrabble game, and points for the letter.
$all = array("a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
             "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z");
$count = array("9", "2", "2", "4", "12", "2", "3", "2", "9", "1", "1", "4", "2",
               "6", "8", "2", "1", "6", "4", "6", "4", "2", "2", "1", "2", "1");
$point = array("1", "3", "3", "2", "1", "4", "2", "4", "1", "8", "5", "1", "3",
               "1", "1", "3", "10", "1", "1", "1", "1", "4", "4", "8", "4", "10");

There are 100 tiles in an English Scrabble game (the sum of $count is 98; the two blanks bring it to 100). It does not matter in which order the tiles are pulled, so this is a combination, not a permutation.

The Math I Used

Determine how many letters are in the word, what letters are in the word, and how many of those letters are available in the tile bag (count for each letter, unique and allchars). Take the binomial coefficient for each letter, and divide the product by the binomial coefficient for the length of the word.

Determine the binomial combinations available: let C(n,r) be the binomial coefficient n!/[r!(n-r)!], or 0 if r > n. For each letter, what is the binomial coefficient?

There is 1 "B" in the word; there are 2 available in the bag (about a 2% chance of pulling a b).
There are 2 "O"s; there are 8 available (about an 8% chance of pulling an o).
There is 1 "T"; there are 6 available (about a 6% chance of pulling a t).

BOOT is a 4-letter word, taken from a 100-tile set with blanks, 98 without, so n = 98, the number of tiles without blanks in the English set.

$B = {2 \choose 1} = \frac{2!}{1!(2-1)!}$
$O = {8 \choose 2} = \frac{8!}{2!(8-2)!}$
$T = {6 \choose 1} = \frac{6!}{1!(6-1)!}$

${B \times O \times T}$ divided by the binomial coefficient of the tile count, $\binom{98}{\rm length} = \frac{98!}{{\rm length}!\,(98-{\rm length})!}$.
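As a concrete check of the arithmetic just described (a Python sketch; note that this computes the chance that four tiles drawn from the $98$ letter tiles are exactly one "b", two "o"s, and one "t" - a different and much rarer event than being able to spell "boot" from a seven-tile rack):

```python
from math import comb

num = comb(2, 1) * comb(8, 2) * comb(6, 1)   # ways to pick b, oo, t
den = comb(98, 4)                            # all 4-tile draws, blanks excluded
print(num, den, num / den)                   # 336 3612280 ≈ 9.3e-05
```

Comparing this with the exact rack probability of about $0.00714$ from the other answers shows how much the seven-tile rack and the blanks matter.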
14,354
Can data cleaning worsen the results of statistical analysis?
It actually depends on the purpose of your research. In my opinion, there could be several:

1. You want to understand the typical factors that cause cases and deaths and that are not affected by epidemic periods, and the factors that cause epidemics (so you are interested in typical, not force-majeure, probabilities) - in this case you obviously need to remove the epidemic periods from the data, as for the purpose of this research they are outliers relative to what you would like to conclude.

2. You may want to include epidemic changes in your models (regime-switching models, for instance; any good links and model suggestions from the community are welcome here), because you want to know the probability of an epidemic period occurring (and also how long it will last), to test stability, and to forecast - in this case you do not exclude the epidemic periods, but search for more complicated models rather than reach for a hammer-econometric-tool like $OLS$ or something similar.

3. Your primary goal IS to detect epidemic periods and monitor for them in real time - it's a special field of econometrics that a number of my colleagues are working on at Vilnius University (and there you definitely want to have a lot of epidemic observations to deal with).

So if your primary goal is something like 2, cleaning the data will lead to wrong conclusions about future forecasts, i.e. inaccurate forecasting performance. It is also true that the 2nd case does not necessarily provide better forecasts, but you could at least draw conclusions about the probabilities of epidemic periods and their lengths. This IS vitally important for actuarial mathematicians, so maybe you are one?
14,355
Can data cleaning worsen the results of statistical analysis?
I personally wouldn't call this "data cleaning". I think of data cleaning more in the sense of data editing - cleaning up inconsistencies in the data set (e.g. a record has a reported age of 1000, or a person aged 4 is a single parent, etc.). The presence of a real effect in your data does not make it "messy" (on the contrary, the presence of real effects makes it rich) - although it can make your mathematical task more involved. I would suggest that the data be "cleaned" in this way only if it is the only feasible way to get a prediction. If there is a feasible way that doesn't throw away information, then use that. It sounds like you may benefit from some sort of cyclical analysis, given that you say this effect comes around periodically (kind of like a "business cycle"). From my point of view, if you are looking at forecasting something, then removing a genuine effect from the data can only make your predictions worse, because you have effectively "thrown away" the very information that you wish to predict! The other point is that it may be difficult to determine how much of a set of deaths was due to the epidemic and how much was caused by ordinary fluctuations. In statistical terminology, it sounds like the epidemic is, from your point of view, a "nuisance" to what you actually want to analyse. So you aren't particularly interested in it, but you need to somehow account for it in your analysis. One "quick and dirty" way to do this in a regression setting is to include an indicator for the epidemic years/periods as a regressor variable. This will give you an average estimate of the effect of epidemics (and implicitly assumes the effect is the same for each epidemic). However, this approach only works for describing the effect, because in forecasting your regressor variable is unknown (you don't know which periods in the future will be epidemic ones).
Another way to account for the epidemic is to use a mixture model with two components: one model for the epidemic part and one model for the "ordinary" part. The model then proceeds in two steps: 1) classify a period as epidemic or normal, then 2) apply the model to which it was classified.
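The "quick and dirty" indicator-regressor idea can be sketched in a few lines of Python. Everything here - the death counts, the trend, the epidemic years, and the effect size - is simulated purely for illustration:

```python
# Sketch of the indicator-regressor approach described above; the annual
# death counts, trend, and epidemic years are all simulated.
import numpy as np

rng = np.random.default_rng(0)
n_years = 30
epidemic = np.zeros(n_years)
epidemic[[5, 6, 17, 18]] = 1.0        # hypothetical epidemic years

trend = 100.0 + 2.0 * np.arange(n_years)          # "ordinary" mortality
deaths = trend + 40.0 * epidemic + rng.normal(0, 5, n_years)

# OLS with an intercept, a linear trend, and the epidemic indicator
X = np.column_stack([np.ones(n_years), np.arange(n_years), epidemic])
beta, *_ = np.linalg.lstsq(X, deaths, rcond=None)

# beta[2] estimates the average epidemic effect (true value here: 40); as
# noted above, this describes the effect but cannot forecast with it,
# since future epidemic periods are unknown.
print(beta)
```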
14,356
Can data cleaning worsen the results of statistical analysis?
To give you a general answer to your question, let me paraphrase one of my old general managers: the opportunities for research are found in the outliers of the model you are fitting. The situation is similar to the experiment performed by Robert Millikan in determining the charge of the electron. Decades after he won the Nobel prize for the experiment, his notes were examined and it was found that he had thrown out a large number of data points because they disagreed with the results he was looking for. Is that bad science? If you find a few outliers, then maybe they are due to "statistical aberrations". However, if you find more than a few outliers, you need to explore your data more closely. If you cannot attribute a cause to the aberrations, then you do not understand the process, and a statistical model will not solve your problem. The purpose of a model is to summarize a process; a model will not magically summarize a process the experimenter does not understand.
14,357
Can data cleaning worsen the results of statistical analysis?
The role of "data cleansing" is to identify when "our laws (model) do not work". Adjusting for outliers or abnormal data points serves to allow us to get "robust estimates" of the parameters of the current model we are entertaining. These "outliers", if untreated, permit an unwanted distortion in the model parameters, as estimation is "driven to explain these data points" that are "not behaving according to our hypothesized model". In other words, there is a lot of payback in terms of explained sum of squares from focusing on the "baddies". The empirically identified points that require cleansing should be carefully scrutinized in order to potentially develop/suggest cause factors that are not in the current model. The identified level shift in STATE1 for the data you presented in the question below is an example of "knowledge waiting to be discovered": How to assess effect of intervention in one state versus another using annual case fatality rate? To do science is to search for repeated patterns. To detect anomalies is to identify values that do not follow repeated patterns. How else would you know that a point violated the model? In fact, the process of growing, understanding, finding, and examining outliers must be iterative. This isn't a new thought. Sir Francis Bacon, writing in Novum Organum about 400 years ago, said: "Errors of Nature, Sports and Monsters correct the understanding in regard to ordinary things, and reveal general forms. For whoever knows the ways of Nature will more easily notice her deviations; and, on the other hand, whoever knows her deviations will more accurately describe her ways." We change our rules by observing when the current rules fail.
If indeed the identified outliers are all pulses with similar effects (size), then we suggest the following (quoted from another poster): "One "quick and dirty" way to do this in a regression setting is to include an indicator for the epidemic years/periods as a regressor variable. This will give you an average estimate of the effect of epidemics (and implicitly assumes the effect is the same for each epidemic). However, this approach only works for describing the effect, because in forecasting, your regression variable is unknown (you don't know which periods in the future will be epidemic ones)." This of course requires that the individual anomalies (pulse years) have similar effects. If they differ, then the portmanteau variable described above would be incorrect.
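One way to check whether the flagged pulse years really do share a similar effect - the condition required before pooling them into a single portmanteau indicator - is to fit a separate dummy for each flagged year and compare the coefficients. A minimal sketch on simulated data (the series, the pulse years, and the effect sizes are invented for illustration):

```python
# Sketch: fit one dummy per flagged pulse year and compare their
# estimated effects before pooling them into one indicator.
import numpy as np

rng = np.random.default_rng(1)
n = 40
y = 50.0 + rng.normal(0, 2, n)
pulses = [10, 25]                     # hypothetical anomaly years
y[10] += 20.0                         # two pulses of similar size
y[25] += 22.0

# Intercept plus one dummy per pulse year
X = np.column_stack([np.ones(n)] +
                    [(np.arange(n) == p).astype(float) for p in pulses])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# If beta[1] and beta[2] are close, pooling into a single portmanteau
# indicator is a reasonable simplification; if they differ, keep them separate.
print(beta[1:])
```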
14,358
Can data cleaning worsen the results of statistical analysis?
One of the most commonly used methods for finding epidemics in retrospective data is actually to look for outliers - many flu researchers, for example, primarily focus on the residuals of their fitted models, rather than on the models themselves, to see where the "day in, day out" predictions of the model fail - one of the ways the model can fail is with the appearance of an epidemic. It's imperative, however, that you distinguish between hunting down outliers in your results - probably not the greatest idea ever - and what most people refer to as "data cleaning". Here, you are looking for outliers not because they represent a statistical problem, but because they raise data-quality issues. For example, in a data set I have, there is a variable for onset of disease. For one subject, this date is in November of 1929. Do I think this is correct? No. This indicates a data-quality problem that needs to be fixed - in this case, correcting the date based on other information about the subject. This type of data cleaning will actively improve the quality of your statistical results.
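The residual-based idea described here can be sketched as follows: fit a seasonal baseline to routine counts, then flag periods whose residuals are extreme. The weekly counts, the seasonal form, and the 3-robust-SD threshold are all illustrative assumptions, not a standard surveillance algorithm:

```python
# Rough sketch of residual-based epidemic detection on simulated data.
import numpy as np

rng = np.random.default_rng(2)
weeks = np.arange(104)                                  # two years, weekly
baseline = 200 + 30 * np.sin(2 * np.pi * weeks / 52)    # ordinary seasonality
counts = baseline + rng.normal(0, 10, weeks.size)
counts[60:64] += 80                                     # injected "epidemic"

# Least-squares fit of the baseline: intercept plus annual sin/cos terms
X = np.column_stack([np.ones(weeks.size),
                     np.sin(2 * np.pi * weeks / 52),
                     np.cos(2 * np.pi * weeks / 52)])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
residuals = counts - X @ beta

# Flag weeks whose residual exceeds 3 robust standard deviations (MAD-based)
scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
flagged = np.where(residuals > 3 * scale)[0]
print(flagged)
```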
14,359
Strong ignorability: confusion on the relationship between outcomes and treatment
I'll try to break it down a bit. I think most of the confusion when studying potential outcomes (i.e. $Y_0,Y_1$) comes from realizing that $Y_0,Y_1$ are different from $Y$, even without bringing in the covariate $X$. The key is to realize that every individual $i$ has potential outcomes $(Y_{i1},Y_{i0})$, but you only observe $Y_{iT}$ in the data. Ignorability says $$(Y_0,Y_1) \perp \!\!\! \perp T|X$$ which says that conditional on $X$, the potential outcomes are independent of treatment $T$. It is not saying that $Y$ is independent of $T$. As you point out, that makes no sense. In fact, a classic way to re-write $Y$ is as $$Y = Y_1T + Y_0(1-T)$$ which tells us that for every individual we observe $Y_i$, which is either $Y_{i1}$ or $Y_{i0}$ depending on the value of treatment $T_i$. The reason for potential outcomes is that we want to know the effect $Y_{i1} - Y_{i0}$ but only observe one of the two objects for everyone. The question is: what would $Y_{i0}$ have been for the individuals $i$ who have $T_i=1$ (and vice versa)?

Ignoring the "conditional on $X$" part, the ignorability assumption essentially says that treatment $T$ can certainly affect $Y$ by virtue of $Y$ being equal to $Y_1$ or $Y_0$, but that $T$ is unrelated to the values of $Y_0,Y_1$ themselves. To motivate this, consider a simple example where we have only two types of people: weak people and strong people. Let treatment $T$ be receiving medication, and let $Y$ be the health of the patient (higher $Y$ means healthier). Strong people are far healthier than weak people. Now suppose that receiving medication makes everyone healthier by a fixed amount.

First case: suppose that only unhealthy people seek out medication. Then those with $T=1$ will be mostly the weak people, since they are the unhealthy people, and those with $T=0$ will be mostly strong people. But then ignorability fails, since the values of $(Y_1,Y_0)$ are related to treatment status $T$: in this case, both $Y_1$ and $Y_0$ will be lower for $T=1$ than for $T=0$, since the $T=1$ group is filled with mostly weak people and we stated that weak people are just less healthy overall.

Second case: suppose that we randomly assign medication to our pool of strong and weak people. Here, ignorability holds, since $(Y_1,Y_0)$ are independent of treatment status $T$: weak and strong people are equally likely to receive treatment, so the values of $Y_1$ and $Y_0$ are on average the same for $T=0$ and $T=1$. However, since $T$ makes everyone healthier, clearly $Y$ is not independent of $T$ - it has a fixed effect on health in my example! In other words, ignorability allows that $T$ directly affects whether you receive $Y_1$ or $Y_0$, but treatment status is not related to these values. In this case, we can figure out what $Y_0$ would have been for those who got treatment by looking at those who didn't get treatment! We get a treatment effect by comparing those who get treatment to those who don't, but we need a way to make sure that those who get treatment are not fundamentally different from those who don't, and that's precisely what the ignorability condition assumes.

We can illustrate with two other examples. A classic case where this holds is randomized controlled trials (RCTs), where you randomly assign treatment to individuals. Then clearly those who get treatment may have a different outcome because treatment affects your outcome (unless treatment really has no effect on the outcome), but those who get treatment are randomly selected, so treatment receipt is independent of the potential outcomes, and you indeed do have that $(Y_0,Y_1) \perp \!\!\! \perp T$. The ignorability assumption holds.

For an example where this fails, let treatment $T$ be an indicator for finishing high school or not, let the outcome $Y$ be income in 10 years, and define $(Y_0,Y_1)$ as before. Then $(Y_0,Y_1)$ is not independent of $T$, since presumably the potential outcomes for those with $T=0$ are fundamentally different from those with $T=1$. Maybe people who finish high school have more perseverance than those who don't, or are from wealthier families, and these in turn imply that if we could have observed a world where individuals who finished high school had not finished it, their outcomes would still have been different from the observed pool of individuals who did not finish high school. As such, the ignorability assumption likely does not hold: treatment is related to potential outcomes, and in this case, we may expect that $E[Y_0 \mid T_i = 1] > E[Y_0 \mid T_i = 0]$.

The "conditional on $X$" part is simply for cases where ignorability holds conditional on some controls. In your example, it may be that treatment is independent of these potential outcomes only after conditioning on patient history. For an example where this may happen, suppose that individuals with a heavier patient history $X$ are both sicker and more likely to receive treatment $T$. Then without $X$, we run into the same problem as described above: the unrealized $Y_0$ for those who receive treatment may be lower than the realized $Y_0$ for those who did not receive treatment, because the former are more likely to be unhealthy individuals, and so comparing those with and without treatment will cause issues since we are not comparing the same people. However, if we control for patient history, we can instead assume that conditional on $X$, treatment assignment is again unrelated to the potential outcomes, and we are good to go again.

Edit: As a final note, based on a chat with the OP, it may be helpful to relate the potential outcomes framework to the DAG in the OP's post (Noah's response covers a similar setting with more formality, so it is definitely also worth checking out). In these types of DAGs, we fully model the relationships between variables. Forgetting about $X$ for a bit, suppose we just have $T \rightarrow Y$. What does this mean? It means that the only effect of $T$ is through $T = 1$ or $T = 0$, and through no other channels, so we immediately have that $T$ affects $Y_1T+ Y_0(1-T)$ only through the value of $T$. You may think "well, what if $T$ affects $Y$ through some other channel?", but by saying $T \rightarrow Y$ we are saying there are no other channels. Next, consider your case of $X \rightarrow T \rightarrow Y \leftarrow X$. Here, $T$ directly affects $Y$, but $X$ also directly affects both $T$ and $Y$. Why does ignorability fail? Because $T$ can be 1 through the effect of $X$, which also affects $Y$, and so $T = 1$ is informative about $Y_0$ and $Y_1$ for the group where $T=1$. Thus $T$ affects $Y_1T + Y_0(1-T)$ both through (1) the direct effect of the value of $T$, and (2) the fact that $X$ affects $Y$ and $T$ at the same time.
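The weak/strong medication example above can be checked with a small simulation: the true effect of medication is +5 for everyone, and the naive treated-vs-untreated comparison recovers it only when treatment is randomized. All numbers are invented for illustration:

```python
# Simulation of the weak/strong example: ignorability fails under
# self-selection (case 1) and holds under randomization (case 2).
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
strong = rng.random(n) < 0.5
y0 = np.where(strong, 70.0, 40.0)       # health without medication
y1 = y0 + 5.0                           # medication helps everyone by 5

# Case 1: mostly the unhealthy (weak) seek out medication -> ignorability fails
t_sel = rng.random(n) < np.where(strong, 0.1, 0.9)
y_obs = np.where(t_sel, y1, y0)
naive_selected = y_obs[t_sel].mean() - y_obs[~t_sel].mean()

# Case 2: medication assigned at random -> ignorability holds
t_rnd = rng.random(n) < 0.5
y_obs = np.where(t_rnd, y1, y0)
naive_random = y_obs[t_rnd].mean() - y_obs[~t_rnd].mean()

print(naive_selected)   # strongly negative: the bias swamps the true +5
print(naive_random)     # close to the true effect, +5
```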
14,360
Strong ignorability: confusion on the relationship between outcomes and treatment
Doubled has a fantastic answer, but I wanted to follow up with some intuitions that have helped me. First, think of potential outcomes as pre-treatment covariates. I know this seems like a strange thing to do since the word "outcome" is in their name, but considering it this way clarifies some issues. They represent two combinations of the actual covariates, $X$. So, let's rewrite them as such: $$Y_0 = f_0(X) \\ Y_1 = f_1(X)$$ (Seeing them this helps divorce them from the observed outcome, $Y$, which we'll get to shortly.) Importantly, if we could observe both of these values, we would not need to assign treatment to anyone. The causal effect of interest is $Y_1 - Y_0$; nowhere in that definition is $T$, the actual treatment assigned, mentioned. This is because we can define the causal effect independently of the actual treatment assignment $T$. Now, think of $T$, the actual treatment received, as revealing one of the two potential outcomes. The treatment doesn't create the potential outcomes; it merely reveals one of them. The potential outcomes exist in a hidden state prior to treatment receipt, and receiving treatment reveals one of them and leaves the other one hidden. The revealed potential outcome is what we call $Y$, the observed outcome. However, to understand strong ignorability, we don't need to even get to the step where the treatment reveals one of the potential outcomes. Strong ignorability is about potential outcomes (the pretreatment covariates that act as two separate combinations of $X$), not about the observed outcomes. $Y$ does not need to exist (yet) to talk about strong ignorability; it only concerns pre-treatment covariates (including $Y_0$ and $Y_1$) and the mechanism of the assignment of the actual treatment received. So, prior to one of the potential outcomes being revealed, let's take stock of what we have. We have $X$, the set of pretreatment covariates, $f_0(X)$ and $f_1(X)$, two combinations of $X$, and $T$, the treatment. 
Unconditional strong ignorability states that $f_0(X)$ and $f_1(X)$ are unrelated to $T$. This would occur if $T$ were randomly assigned or depended only on factors unrelated to $f_0(X)$ and $f_1(X)$. If $T$ depends on $X$, then clearly $f_0(X)$ and $f_1(X)$ are not unrelated to $T$, because both $T$ and $f_0(X)$ and $f_1(X)$ depend on the same variables, namely, $X$.

Conditional strong ignorability (which Rubin calls strong ignorability) simply states that we have observed the set of $X$ that goes into $f_0(X)$, $f_1(X)$, and $T$. Conditional on $X$, $f_0(X)$ and $f_1(X)$ are just constants (potentially plus random noise), and conditional on $X$, $T$ is a random process. It is under this circumstance that we can use specific statistical methods to arrive at a consistent estimate of the causal effect of treatment.

Potential outcomes are confusing. They are typically not taught in an intuitive way, and if they are taught after you have learned about statistics, it is very easy to confuse them with the concepts of the observed treatment $T$ and the observed outcome $Y$, which is what data analysts actually deal with, and to confuse the causal effect with a parameter in a regression model rather than seeing it as a contrast between two unobserved quantities. Potential outcomes are abstract quantities that serve primarily as explanatory tools. However, because they are confusing, they are not very good explanatory tools. The graphical (DAG) approach to causal inference is far more intuitive because it relies on the notions of $Y$ and $T$ as they are understood by data analysts. The concept of strong ignorability is isomorphic to d-separation of $T$ and $Y$ in DAG language. Consider reading Pearl's Book of Why to help cement these ideas in an intuitive but still rigorous way.
In response to comments: The system of structural equations that my description of potential outcomes conforms to is the following (with all variables that are not dependent variables considered exogenous and independent): \begin{align} Y_0 &= f_0(X, U_0) \\ Y_1 &= f_1(X, U_1) \\ T &= f_T(X, U_T) \\ Y &= f_Y(Y_0, Y_1, T) = T Y_1 + (1-T)Y_0 \end{align} This is displayed in the DAG below:

Strong ignorability is that $\{U_0, U_1\} \perp U_T$, which is equivalent to d-separation of $T$ and $Y$ given $X$. Note that this DAG is simply a graphical translation of the system of structural equations. There are other ways of displaying potential outcomes in DAGs, one of which is the single-world intervention graph (SWIG).
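As a small numerical sketch of these ideas (not part of the original answer), the structural equations above can be simulated with arbitrary choices for $f_0$, $f_1$, and $f_T$. The switching equation $Y = T Y_1 + (1-T) Y_0$ is applied literally, and with a randomized $T$ (so strong ignorability holds unconditionally), the observed difference in means recovers the causal effect that was defined without any reference to $T$:

```python
import numpy as np

# Hypothetical simulation of the structural equations; f_0, f_1 are
# arbitrary illustrative choices, with a constant causal effect of 3.
rng = np.random.default_rng(0)
n = 200_000

X = rng.normal(size=n)
Y0 = 2 * X + rng.normal(size=n)        # f_0(X, U_0)
Y1 = 2 * X + 3 + rng.normal(size=n)    # f_1(X, U_1)

# Randomized treatment: T is independent of (Y0, Y1).
T = rng.binomial(1, 0.5, size=n)

# The switching equation: treatment merely reveals one potential outcome.
Y = T * Y1 + (1 - T) * Y0

# The causal effect is defined without reference to T ...
ate = (Y1 - Y0).mean()
# ... and under randomization the observed contrast recovers it.
diff = Y[T == 1].mean() - Y[T == 0].mean()
print(ate, diff)  # both close to 3
```

Both quantities come out near 3; the point is that `ate` is computed from the (normally unobservable) pair $(Y_0, Y_1)$, while `diff` uses only the observed $(T, Y)$.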
Strong ignorability: confusion on the relationship between outcomes and treatment
First, you ask a question about $Y_0,Y_1$, but you have a DAG that depends only on $Y$. This may hinder your understanding. Anyway, simply put, $(Y_0,Y_1) \perp \!\!\! \perp T|X$ means that there is no hidden confounder, i.e. no factors that both influence the treatment $T$ and the (value of the) outcome $Y$.

On a DAG, that simply means that there is no confounder $C$ that points to both $Y$ and $T$. You can have a $C$ that points to only one of them, and that's ok. In order to get a better understanding of conditional probabilities on DAGs, I recommend you read this (you can see a screenshot of this pdf below).

BONUS (for asking a great question)! Your initial condition ($(Y_0,Y_1) \perp \!\!\! \perp T|X$) plus the condition of overlap/positivity/common support ($0 \lt P(T=1 | X) \lt 1$) together are called the strong ignorability condition. Strong ignorability is a sufficient condition for identifying the individual treatment effect (ITE). In other words: if no factor influences both taking the treatment and the outcome (no confounding), and all subgroups of the data with different covariates have some probability of receiving any value of treatment (positivity), you can, theoretically, determine the effect of the treatment on any individual from your data.

Edit: if you want to draw a DAG with $Y_0, Y_1$ instead of $Y$, it should look like below (the yellow part is the confounder part)
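To make the "no hidden confounder" point concrete, here is a hypothetical simulation (my own illustration, not from the answer) in which $T$ depends on a binary covariate $X$ that also drives the potential outcomes. Ignorability fails unconditionally, so the naive contrast is badly biased, but it holds given $X$, and positivity holds in both strata, so stratifying on $X$ recovers the true effect:

```python
import numpy as np

# Illustrative data-generating process: X confounds T and (Y0, Y1).
rng = np.random.default_rng(1)
n = 400_000

X = rng.binomial(1, 0.5, size=n)       # binary confounder
Y0 = 5.0 * X + rng.normal(size=n)
Y1 = Y0 + 2.0                          # true treatment effect: 2

# Positivity: 0 < P(T=1 | X) < 1 in both strata of X.
p = np.where(X == 1, 0.8, 0.2)
T = rng.binomial(1, p)
Y = T * Y1 + (1 - T) * Y0

# Naive contrast mixes the effect with the confounding by X.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Conditioning on X (here, averaging the within-stratum contrasts)
# removes the bias.
adjusted = np.mean([Y[(T == 1) & (X == x)].mean()
                    - Y[(T == 0) & (X == x)].mean() for x in (0, 1)])
print(naive, adjusted)  # naive far from 2, adjusted close to 2
```

The naive estimate is pulled toward roughly 5 because treated units are disproportionately drawn from the $X=1$ stratum, where outcomes are higher regardless of treatment.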
Extracting slopes for cases from a mixed effects model (lme4)
The model:

library(lme4)
data(sleepstudy)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)

The function coef is the right approach for extracting individual differences.

> coef(fm1)$Subject
    (Intercept)       Days
308    253.6637 19.6662581
309    211.0065  1.8475834
310    212.4449  5.0184067
330    275.0956  5.6529540
331    273.6653  7.3973908
332    260.4446 10.1951151
333    268.2455 10.2436611
334    244.1725 11.5418622
335    251.0714 -0.2848735
337    286.2955 19.0955694
349    226.1950 11.6407008
350    238.3351 17.0814915
351    255.9829  7.4520286
352    272.2687 14.0032989
369    254.6806 11.3395025
370    225.7922 15.2897513
371    252.2121  9.4791308
372    263.7196 11.7513155

These values are a combination of the fixed effects and the variance components (random effects). You can use summary and coef to obtain the coefficients of the fixed effects.

> coef(summary(fm1))[ , "Estimate"]
(Intercept)        Days
  251.40510    10.46729

The intercept is 251.41 and the slope (associated with Days) is 10.47. These coefficients are the averages across all subjects. To obtain the random effects, you can use ranef.

> ranef(fm1)$Subject
    (Intercept)        Days
308   2.2585637   9.1989722
309 -40.3985802  -8.6197026
310 -38.9602496  -5.4488792
330  23.6905025  -4.8143320
331  22.2602062  -3.0698952
332   9.0395271  -0.2721709
333  16.8404333  -0.2236248
334  -7.2325803   1.0745763
335  -0.3336936 -10.7521594
337  34.8903534   8.6282835
349 -25.2101138   1.1734148
350 -13.0699598   6.6142055
351   4.5778364  -3.0152574
352  20.8635944   3.5360130
369   3.2754532   0.8722166
370 -25.6128737   4.8224653
371   0.8070401  -0.9881551
372  12.3145406   1.2840295

These values are the variance components of the subjects. Every row corresponds to one subject. Inherently the mean of each column is zero, since the values correspond to the differences in relation to the fixed effects.

> colMeans(ranef(fm1)$Subject)
  (Intercept)          Days
 4.092529e-13 -2.000283e-13

Note that these values are essentially zero; the deviations from zero are due to the imprecision of floating-point number representation.
The result of coef(fm1)$Subject incorporates the fixed effects into the random effects, i.e., the fixed effect coefficients are added to the random effects. The results are individual intercepts and slopes.
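The relationship coef = fixef + ranef can be checked numerically against the printed output above. This quick check (my addition, using subject 308's values) just confirms the arithmetic:

```python
# Values copied from the lmer output above for subject 308.
fixef = {"intercept": 251.40510, "days": 10.46729}      # fixed effects
ranef_308 = {"intercept": 2.2585637, "days": 9.1989722} # subject deviation
coef_308 = {"intercept": 253.6637, "days": 19.6662581}  # coef(fm1)$Subject row

# coef(fm1) = fixef(fm1) + ranef(fm1), up to printing precision.
for k in fixef:
    assert abs(fixef[k] + ranef_308[k] - coef_308[k]) < 1e-3
print("coef = fixef + ranef holds for subject 308")
```

The same identity holds row by row for every subject in the table.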
Extracting slopes for cases from a mixed effects model (lme4)
I know it is an old question, but this may be useful (at least for later me anyway): you can use ranef() and fixef() from lme4, and this is less verbose than the previous answer.

> ranef(fm1)
$Subject
    (Intercept)        Days
308   2.2585509   9.1989758
309 -40.3987381  -8.6196806
310 -38.9604090  -5.4488565
330  23.6906196  -4.8143503
331  22.2603126  -3.0699116
332   9.0395679  -0.2721770
333  16.8405086  -0.2236361
334  -7.2326151   1.0745816
335  -0.3336684 -10.7521652
337  34.8904868   8.6282652
349 -25.2102286   1.1734322
350 -13.0700342   6.6142178
351   4.5778642  -3.0152621
352  20.8636782   3.5360011
369   3.2754656   0.8722149
370 -25.6129993   4.8224850
371   0.8070461  -0.9881562
372  12.3145921   1.2840221

with conditional variances for "Subject"

> fixef(fm1)
(Intercept)        Days
  251.40510    10.46729
Question on how to normalize regression coefficient
Although I cannot do justice to the question here--that would require a small monograph--it may be helpful to recapitulate some key ideas.

The question

Let's begin by restating the question and using unambiguous terminology. The data consist of a list of ordered pairs $(t_i, y_i)$. Known constants $\alpha_1$ and $\alpha_2$ determine values $x_{1,i} = \exp(\alpha_1 t_i)$ and $x_{2,i} = \exp(\alpha_2 t_i)$. We posit a model in which $$y_i = \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$$ for constants $\beta_1$ and $\beta_2$ to be estimated, where the $\varepsilon_i$ are random and--to a good approximation anyway--independent with a common variance (whose estimation is also of interest).

Background: linear "matching"

Mosteller and Tukey refer to the variables $x_1 = (x_{1,1}, x_{1,2}, \ldots)$ and $x_2$ as "matchers." They will be used to "match" the values of $y = (y_1, y_2, \ldots)$ in a specific way, which I will illustrate. More generally, let $y$ and $x$ be any two vectors in the same Euclidean vector space, with $y$ playing the role of "target" and $x$ that of "matcher". We contemplate systematically varying a coefficient $\lambda$ in order to approximate $y$ by the multiple $\lambda x$. The best approximation is obtained when $\lambda x$ is as close to $y$ as possible. Equivalently, the squared length of $y - \lambda x$ is minimized.

One way to visualize this matching process is to make a scatterplot of $x$ and $y$ on which is drawn the graph of $x \to \lambda x$. The vertical distances between the scatterplot points and this graph are the components of the residual vector $y - \lambda x$; the sum of their squares is to be made as small as possible. Up to a constant of proportionality, these squares are the areas of circles centered at the points $(x_i, y_i)$ with radii equal to the residuals: we wish to minimize the sum of the areas of all these circles.
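The minimizing $\lambda$ described above has the familiar closed form $\lambda^* = \langle x, y\rangle / \langle x, x\rangle$ (regression through the origin). As a quick numpy sketch of this (my addition, with made-up data), the closed form can be checked against a brute-force search over nearby values of $\lambda$:

```python
import numpy as np

# Illustrative target y and matcher x; the true slope 3 is arbitrary.
rng = np.random.default_rng(17)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)

# Closed-form matching coefficient minimizing ||y - lambda * x||^2.
lam_star = x @ y / (x @ x)

def ss(lam):
    """Sum of squared residuals for a candidate coefficient."""
    r = y - lam * x
    return r @ r

# No lambda on a fine grid around lam_star does better.
grid = np.linspace(lam_star - 1, lam_star + 1, 2001)
assert all(ss(lam_star) <= ss(l) + 1e-9 for l in grid)
print(lam_star)  # close to 3
```

Geometrically, $\lambda^* x$ is the orthogonal projection of $y$ onto the line spanned by $x$, which is why the residual $y - \lambda^* x$ is orthogonal to $x$.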
Here is an example showing the optimal value of $\lambda$ in the middle panel:

The points in the scatterplot are blue; the graph of $x \to \lambda x$ is a red line. This illustration emphasizes that the red line is constrained to pass through the origin $(0,0)$: it is a very special case of line fitting.

Multiple regression can be obtained by sequential matching

Returning to the setting of the question, we have one target $y$ and two matchers $x_1$ and $x_2$. We seek numbers $b_1$ and $b_2$ for which $y$ is approximated as closely as possible by $b_1 x_1 + b_2 x_2$, again in the least-distance sense. Arbitrarily beginning with $x_1$, Mosteller & Tukey match the remaining variables $x_2$ and $y$ to $x_1$. Write the residuals for these matches as $x_{2\cdot 1}$ and $y_{\cdot 1}$, respectively: the $_{\cdot 1}$ indicates that $x_1$ has been "taken out of" the variable. We can write $$y = \lambda_1 x_1 + y_{\cdot 1}\text{ and }x_2 = \lambda_2 x_1 + x_{2\cdot 1}.$$

Having taken $x_1$ out of $x_2$ and $y$, we proceed to match the target residuals $y_{\cdot 1}$ to the matcher residuals $x_{2\cdot 1}$. The final residuals are $y_{\cdot 12}$. Algebraically, we have written $$\eqalign{ y_{\cdot 1} &= \lambda_3 x_{2\cdot 1} + y_{\cdot 12}; \text{ whence} \\ y &= \lambda_1 x_1 + y_{\cdot 1} = \lambda_1 x_1 + \lambda_3 x_{2\cdot 1} + y_{\cdot 12} =\lambda_1 x_1 + \lambda_3 \left(x_2 - \lambda_2 x_1\right) + y_{\cdot 12} \\ &=\left(\lambda_1 - \lambda_3 \lambda_2\right)x_1 + \lambda_3 x_2 + y_{\cdot 12}. }$$

This shows that the $\lambda_3$ in the last step is the coefficient of $x_2$ in a matching of $x_1$ and $x_2$ to $y$. We could just as well have proceeded by first taking $x_2$ out of $x_1$ and $y$, producing $x_{1\cdot 2}$ and $y_{\cdot 2}$, and then taking $x_{1\cdot 2}$ out of $y_{\cdot 2}$, yielding a different set of residuals $y_{\cdot 21}$.
This time, the coefficient of $x_1$ found in the last step--let's call it $\mu_3$--is the coefficient of $x_1$ in a matching of $x_1$ and $x_2$ to $y$.

Finally, for comparison, we might run a multiple (ordinary least squares) regression of $y$ against $x_1$ and $x_2$. Let those residuals be $y_{\cdot lm}$. It turns out that the coefficients in this multiple regression are precisely the coefficients $\mu_3$ and $\lambda_3$ found previously, and that all three sets of residuals, $y_{\cdot 12}$, $y_{\cdot 21}$, and $y_{\cdot lm}$, are identical.

Depicting the process

None of this is new: it's all in the text. I would like to offer a pictorial analysis, using a scatterplot matrix of everything we have obtained so far. Because these data are simulated, we have the luxury of showing the underlying "true" values of $y$ on the last row and column: these are the values $\beta_1 x_1 + \beta_2 x_2$ without the error added in.

The scatterplots below the diagonal have been decorated with the graphs of the matchers, exactly as in the first figure. Graphs with zero slopes are drawn in red: these indicate situations where the matcher gives us nothing new; the residuals are the same as the target. Also, for reference, the origin (wherever it appears within a plot) is shown as an open red circle: recall that all possible matching lines have to pass through this point.

Much can be learned about regression through studying this plot. Some of the highlights are:

- The matching of $x_2$ to $x_1$ (row 2, column 1) is poor. This is a good thing: it indicates that $x_1$ and $x_2$ are providing very different information; using both together will likely be a much better fit to $y$ than using either one alone.

- Once a variable has been taken out of a target, it does no good to try to take that variable out again: the best matching line will be zero. See the scatterplots for $x_{2\cdot 1}$ versus $x_1$ or $y_{\cdot 1}$ versus $x_1$, for instance.

- The values $x_1$, $x_2$, $x_{1\cdot 2}$, and $x_{2\cdot 1}$ have all been taken out of $y_{\cdot lm}$.

- Multiple regression of $y$ against $x_1$ and $x_2$ can be achieved by first computing $y_{\cdot 1}$ and $x_{2\cdot 1}$. These scatterplots appear at (row, column) = $(8,1)$ and $(2,1)$, respectively. With these residuals in hand, we look at their scatterplot at $(4,3)$. These three one-variable regressions do the trick.

As Mosteller & Tukey explain, the standard errors of the coefficients can be obtained almost as easily from these regressions, too--but that's not the topic of this question, so I will stop here.

Code

These data were (reproducibly) created in R with a simulation. The analyses, checks, and plots were also produced with R. This is the code.

#
# Simulate the data.
#
set.seed(17)
t.var <- 1:50                                  # The "times" t[i]
x <- exp(t.var %o% c(x1=-0.1, x2=0.025))       # The two "matchers" x[1,] and x[2,]
beta <- c(5, -1)                               # The (unknown) coefficients
sigma <- 1/2                                   # Standard deviation of the errors
error <- sigma * rnorm(length(t.var))          # Simulated errors
y <- (y.true <- as.vector(x %*% beta)) + error # True and simulated y values
data <- data.frame(t.var, x, y, y.true)
par(col="Black", bty="o", lty=0, pch=1)
pairs(data)                                    # Get a close look at the data
#
# Take out the various matchers.
#
take.out <- function(y, x) {fit <- lm(y ~ x - 1); resid(fit)}
data <- transform(transform(data,
           x2.1 = take.out(x2, x1),
           y.1 = take.out(y, x1),
           x1.2 = take.out(x1, x2),
           y.2 = take.out(y, x2)
         ),
         y.21 = take.out(y.2, x1.2),
         y.12 = take.out(y.1, x2.1)
       )
data$y.lm <- resid(lm(y ~ x - 1))              # Multiple regression for comparison
#
# Analysis.
#
# Reorder the dataframe (for presentation):
data <- data[c(1:3, 5:12, 4)]

# Confirm that the three ways to obtain the fit are the same:
pairs(subset(data, select=c(y.12, y.21, y.lm)))

# Explore what happened:
panel.lm <- function (x, y, col=par("col"), bg=NA, pch=par("pch"),
                      cex=1, col.smooth="red", ...) {
  box(col="Gray", bty="o")
  ok <- is.finite(x) & is.finite(y)
  if (any(ok)) {
    b <- coef(lm(y[ok] ~ x[ok] - 1))
    col0 <- ifelse(abs(b) < 10^-8, "Red", "Blue")
    lwd0 <- ifelse(abs(b) < 10^-8, 3, 2)
    abline(c(0, b), col=col0, lwd=lwd0)
  }
  points(x, y, pch = pch, col="Black", bg = bg, cex = cex)
  points(matrix(c(0,0), nrow=1), col="Red", pch=1)
}
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nB], 0, breaks[-1], y, ...)
}
par(lty=1, pch=19, col="Gray")
pairs(subset(data, select=c(-t.var, -y.12, -y.21)), col="Gray", cex=0.8,
      lower.panel=panel.lm, diag.panel=panel.hist)

# Additional interesting plots:
par(col="Black", pch=1)
#pairs(subset(data, select=c(-t.var, -x1.2, -y.2, -y.21)))
#pairs(subset(data, select=c(-t.var, -x1, -x2)))
#pairs(subset(data, select=c(x2.1, y.1, y.12)))

# Details of the variances, showing how to obtain multiple regression
# standard errors from the OLS matches.
norm <- function(x) sqrt(sum(x * x))
lapply(data, norm)
s <- summary(lm(y ~ x1 + x2 - 1, data=data))
c(s$sigma, s$coefficients["x1", "Std. Error"] * norm(data$x1.2)) # Equal
c(s$sigma, s$coefficients["x2", "Std. Error"] * norm(data$x2.1)) # Equal
c(s$sigma, norm(data$y.12) / sqrt(length(data$y.12) - 2))        # Equal
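For readers without R at hand, the central claim--that sequential matching reproduces the coefficients and residuals of multiple regression--can also be verified in a few lines of numpy. This is my own sketch, mirroring the simulation above (same design, different random errors), not a translation endorsed by the original author:

```python
import numpy as np

# Same design as the R simulation: two exponential matchers of t = 1..50.
rng = np.random.default_rng(17)
t = np.arange(1, 51)
x1, x2 = np.exp(-0.1 * t), np.exp(0.025 * t)
y = 5 * x1 - 1 * x2 + 0.5 * rng.normal(size=t.size)

def take_out(target, matcher):
    """Match through the origin and return the residuals."""
    lam = matcher @ target / (matcher @ matcher)
    return target - lam * matcher

# Sequential matching: take x1 out of x2 and y, then match the residuals.
x2_1 = take_out(x2, x1)
y_1 = take_out(y, x1)
lam3 = x2_1 @ y_1 / (x2_1 @ x2_1)   # coefficient of x2 in the final step
y_12 = y_1 - lam3 * x2_1            # final residuals

# Multiple regression of y on (x1, x2), no intercept, for comparison.
X = np.column_stack([x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# lam3 equals the multiple-regression coefficient of x2, and the
# residuals y_12 equal the multiple-regression residuals.
assert np.isclose(lam3, b[1])
assert np.allclose(y_12, y - X @ b)
print(b, lam3)
```

This is the Frisch–Waugh–Lovell theorem in miniature: partialling $x_1$ out of both $x_2$ and $y$, then fitting the residuals, yields exactly the multiple-regression coefficient of $x_2$ and the full-model residuals.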
Question on how to normalize regression coefficient
Although I cannot do justice to the question here--that would require a small monograph--it may be helpful to recapitulate some key ideas. The question Let's begin by restating the question and using
Question on how to normalize regression coefficient Although I cannot do justice to the question here--that would require a small monograph--it may be helpful to recapitulate some key ideas. The question Let's begin by restating the question and using unambiguous terminology. The data consist of a list of ordered pairs $(t_i, y_i)$ . Known constants $\alpha_1$ and $\alpha_2$ determine values $x_{1,i} = \exp(\alpha_1 t_i)$ and $x_{2,i} = \exp(\alpha_2 t_i)$. We posit a model in which $$y_i = \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$$ for constants $\beta_1$ and $\beta_2$ to be estimated, $\varepsilon_i$ are random, and--to a good approximation anyway--independent and having a common variance (whose estimation is also of interest). Background: linear "matching" Mosteller and Tukey refer to the variables $x_1$ = $(x_{1,1}, x_{1,2}, \ldots)$ and $x_2$ as "matchers." They will be used to "match" the values of $y = (y_1, y_2, \ldots)$ in a specific way, which I will illustrate. More generally, let $y$ and $x$ be any two vectors in the same Euclidean vector space, with $y$ playing the role of "target" and $x$ that of "matcher". We contemplate systematically varying a coefficient $\lambda$ in order to approximate $y$ by the multiple $\lambda x$. The best approximation is obtained when $\lambda x$ is as close to $y$ as possible. Equivalently, the squared length of $y - \lambda x$ is minimized. One way to visualize this matching process is to make a scatterplot of $x$ and $y$ on which is drawn the graph of $x \to \lambda x$. The vertical distances between the scatterplot points and this graph are the components of the residual vector $y - \lambda x$; the sum of their squares is to be made as small as possible. Up to a constant of proportionality, these squares are the areas of circles centered at the points $(x_i, y_i)$ with radii equal to the residuals: we wish to minimize the sum of areas of all these circles. 
Here is an example showing the optimal value of $\lambda$ in the middle panel: The points in the scatterplot are blue; the graph of $x \to \lambda x$ is a red line. This illustration emphasizes that the red line is constrained to pass through the origin $(0,0)$: it is a very special case of line fitting. Multiple regression can be obtained by sequential matching Returning to the setting of the question, we have one target $y$ and two matchers $x_1$ and $x_2$. We seek numbers $b_1$ and $b_2$ for which $y$ is approximated as closely as possible by $b_1 x_1 + b_2 x_2$, again in the least-distance sense. Arbitrarily beginning with $x_1$, Mosteller & Tukey match the remaining variables $x_2$ and $y$ to $x_1$. Write the residuals for these matches as $x_{2\cdot 1}$ and $y_{\cdot 1}$, respectively: the $_{\cdot 1}$ indicates that $x_1$ has been "taken out of" the variable. We can write $$y = \lambda_1 x_1 + y_{\cdot 1}\text{ and }x_2 = \lambda_2 x_1 + x_{2\cdot 1}.$$ Having taken $x_1$ out of $x_2$ and $y$, we proceed to match the target residuals $y_{\cdot 1}$ to the matcher residuals $x_{2\cdot 1}$. The final residuals are $y_{\cdot 12}$. Algebraically, we have written $$\eqalign{ y_{\cdot 1} &= \lambda_3 x_{2\cdot 1} + y_{\cdot 12}; \text{ whence} \\ y &= \lambda_1 x_1 + y_{\cdot 1} = \lambda_1 x_1 + \lambda_3 x_{2\cdot 1} + y_{\cdot 12} =\lambda_1 x_1 + \lambda_3 \left(x_2 - \lambda_2 x_1\right) + y_{\cdot 12} \\ &=\left(\lambda_1 - \lambda_3 \lambda_2\right)x_1 + \lambda_3 x_2 + y_{\cdot 12}. }$$ This shows that the $\lambda_3$ in the last step is the coefficient of $x_2$ in a matching of $x_1$ and $x_2$ to $y$. We could just as well have proceeded by first taking $x_2$ out of $x_1$ and $y$, producing $x_{1\cdot 2}$ and $y_{\cdot 2}$, and then taking $x_{1\cdot 2}$ out of $y_{\cdot 2}$, yielding a different set of residuals $y_{\cdot 21}$. 
This time, the coefficient of $x_1$ found in the last step--let's call it $\mu_3$--is the coefficient of $x_1$ in a matching of $x_1$ and $x_2$ to $y$. Finally, for comparison, we might run a multiple (ordinary least squares regression) of $y$ against $x_1$ and $x_2$. Let those residuals be $y_{\cdot lm}$. It turns out that the coefficients in this multiple regression are precisely the coefficients $\mu_3$ and $\lambda_3$ found previously and that all three sets of residuals, $y_{\cdot 12}$, $y_{\cdot 21}$, and $y_{\cdot lm}$, are identical. Depicting the process None of this is new: it's all in the text. I would like to offer a pictorial analysis, using a scatterplot matrix of everything we have obtained so far. Because these data are simulated, we have the luxury of showing the underlying "true" values of $y$ on the last row and column: these are the values $\beta_1 x_1 + \beta_2 x_2$ without the error added in. The scatterplots below the diagonal have been decorated with the graphs of the matchers, exactly as in the first figure. Graphs with zero slopes are drawn in red: these indicate situations where the matcher gives us nothing new; the residuals are the same as the target. Also, for reference, the origin (wherever it appears within a plot) is shown as an open red circle: recall that all possible matching lines have to pass through this point. Much can be learned about regression through studying this plot. Some of the highlights are: The matching of $x_2$ to $x_1$ (row 2, column 1) is poor. This is a good thing: it indicates that $x_1$ and $x_2$ are providing very different information; using both together will likely be a much better fit to $y$ than using either one alone. Once a variable has been taken out of a target, it does no good to try to take that variable out again: the best matching line will be zero. See the scatterplots for $x_{2\cdot 1}$ versus $x_1$ or $y_{\cdot 1}$ versus $x_1$, for instance. 
- The values $x_1$, $x_2$, $x_{1\cdot 2}$, and $x_{2\cdot 1}$ have all been taken out of $y_{\cdot lm}$.
- Multiple regression of $y$ against $x_1$ and $x_2$ can be achieved first by computing $y_{\cdot 1}$ and $x_{2\cdot 1}$. These scatterplots appear at (row, column) = $(8,1)$ and $(2,1)$, respectively. With these residuals in hand, we look at their scatterplot at $(4,3)$. These three one-variable regressions do the trick. As Mosteller & Tukey explain, the standard errors of the coefficients can be obtained almost as easily from these regressions, too--but that's not the topic of this question, so I will stop here.

Code

These data were (reproducibly) created in R with a simulation. The analyses, checks, and plots were also produced with R. This is the code.

#
# Simulate the data.
#
set.seed(17)
t.var <- 1:50                                  # The "times" t[i]
x <- exp(t.var %o% c(x1=-0.1, x2=0.025))       # The two "matchers" x[1,] and x[2,]
beta <- c(5, -1)                               # The (unknown) coefficients
sigma <- 1/2                                   # Standard deviation of the errors
error <- sigma * rnorm(length(t.var))          # Simulated errors
y <- (y.true <- as.vector(x %*% beta)) + error # True and simulated y values
data <- data.frame(t.var, x, y, y.true)
par(col="Black", bty="o", lty=0, pch=1)
pairs(data)                                    # Get a close look at the data
#
# Take out the various matchers.
#
take.out <- function(y, x) {fit <- lm(y ~ x - 1); resid(fit)}
data <- transform(transform(data,
                            x2.1 = take.out(x2, x1),
                            y.1  = take.out(y, x1),
                            x1.2 = take.out(x1, x2),
                            y.2  = take.out(y, x2)),
                  y.21 = take.out(y.2, x1.2),
                  y.12 = take.out(y.1, x2.1))
data$y.lm <- resid(lm(y ~ x - 1))              # Multiple regression for comparison
#
# Analysis.
#
# Reorder the dataframe (for presentation):
data <- data[c(1:3, 5:12, 4)]
# Confirm that the three ways to obtain the fit are the same:
pairs(subset(data, select=c(y.12, y.21, y.lm)))
# Explore what happened:
panel.lm <- function (x, y, col=par("col"), bg=NA, pch=par("pch"),
                      cex=1, col.smooth="red", ...)
{
  box(col="Gray", bty="o")
  ok <- is.finite(x) & is.finite(y)
  if (any(ok)) {
    b <- coef(lm(y[ok] ~ x[ok] - 1))
    col0 <- ifelse(abs(b) < 10^-8, "Red", "Blue")
    lwd0 <- ifelse(abs(b) < 10^-8, 3, 2)
    abline(c(0, b), col=col0, lwd=lwd0)
  }
  points(x, y, pch = pch, col="Black", bg = bg, cex = cex)
  points(matrix(c(0,0), nrow=1), col="Red", pch=1)
}
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nB], 0, breaks[-1], y, ...)
}
par(lty=1, pch=19, col="Gray")
pairs(subset(data, select=c(-t.var, -y.12, -y.21)), col="Gray", cex=0.8,
      lower.panel=panel.lm, diag.panel=panel.hist)
# Additional interesting plots:
par(col="Black", pch=1)
#pairs(subset(data, select=c(-t.var, -x1.2, -y.2, -y.21)))
#pairs(subset(data, select=c(-t.var, -x1, -x2)))
#pairs(subset(data, select=c(x2.1, y.1, y.12)))
# Details of the variances, showing how to obtain multiple regression
# standard errors from the OLS matches.
norm <- function(x) sqrt(sum(x * x))
lapply(data, norm)
s <- summary(lm(y ~ x1 + x2 - 1, data=data))
c(s$sigma, s$coefficients["x1", "Std. Error"] * norm(data$x1.2)) # Equal
c(s$sigma, s$coefficients["x2", "Std. Error"] * norm(data$x2.1)) # Equal
c(s$sigma, norm(data$y.12) / sqrt(length(data$y.12) - 2))        # Equal
Question on how to normalize regression coefficient

Although I cannot do justice to the question here--that would require a small monograph--it may be helpful to recapitulate some key ideas.

The question

Let's begin by restating the question and using
14,365
Why is controlling for too many variables considered harmful?
There is no such thing as a "sweet spot" for the number of variables to control for in order to get an unbiased estimate of the causal effect. Since we are talking about confounding, we must have in mind the estimation of the causal effect of a particular variable. You use a graphical tool called a DAG (directed acyclic graph) to map out the causal relationships and then you condition on a set of variables that will yield you the causal effect.

Conditioning on variables generally blocks the flow of association, but conditioning on a collider (a common effect) will induce association between variables that are not causally related. The more variables you condition on, the more likely you are to condition on a collider and thus induce association without causation; that said, the more variables you condition on, the more backdoor paths you also block, including those that contain colliders. The reasoning here should not revolve around "how many variables?" but around "which variables?" to condition on.

Below is an example where not conditioning on anything is what you want in order to estimate the direct causal effect of A on B. On the other hand, conditioning on the set {D} or {C,D} will bias the direct causal effect of A on B because it conditions on the collider D and opens backdoor path(s). This post here can serve as a good introduction to causal reasoning with DAGs.
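The collider mechanism is easy to see in a small simulation. The following sketch (Python, with invented numbers; not part of the original answer) builds two causally unrelated variables and their common effect, then shows that selecting on the common effect manufactures an association:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A and B are independent causes; D is their common effect (a collider).
a = rng.normal(size=n)
b = rng.normal(size=n)
d = a + b + rng.normal(size=n)

# Marginally, A and B are essentially uncorrelated.
r_marginal = np.corrcoef(a, b)[0, 1]

# "Conditioning" on the collider (here: restricting to a slice of D)
# induces a spurious negative association between A and B;
# for the slice at d = 0, theory gives a correlation of -1/2.
mask = np.abs(d) < 0.5
r_conditional = np.corrcoef(a[mask], b[mask])[0, 1]

print(r_marginal, r_conditional)
```

Nothing causal links A and B, yet within any slice of D they trade off against each other: exactly the association-without-causation described above.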
14,366
Why is controlling for too many variables considered harmful?
I would point out three things:

(1) Generally (related to the estimation of causal effects)

Usually you want to explain phenomena out there in the world with parsimonious models including variables deduced from some theory. You may just add any variable that comes to your mind to a regression model and end up with an almost perfect fit, but you will not have learned anything about (or may even have fundamentally distorted) the relationship (aka causal/treatment effects) you are actually interested in (also see the DAGs @ColorStatistics pointed to). (Literature e.g.: "Causal Inference in Statistics" by Judea Pearl).

(2) Specifically (more related to the overspecified model term)

You can perceive adding irrelevant variables to a regression model as estimating coefficients on irrelevant variables that are truly zero. If you do this, the estimators of your regression coefficients are still unbiased, but inefficient, since you did not impose the (true) zero restrictions on the coefficients of the irrelevant variables. Hence, inference stays valid but confidence intervals become broader. (Literature: basically any econometrics textbook, e.g. Wooldridge).

(3) Additionally (related to prediction)

If you are solely interested in the prediction performance of a model based on your training data, then adding 'irrelevant' variables to your model is less harmful (irrelevant in the sense of being not causal, not in the sense of having true zero restrictions on the coefficients), since the overspecification of your model only becomes problematic if you want to do inference (broader confidence intervals). (Have a look in the causal machine learning literature.)
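Point (2) can be checked directly by Monte Carlo. In this Python sketch (sample sizes and coefficients are arbitrary, chosen only for illustration), regressors with true coefficient zero are added to a simple model: the estimate of the coefficient of interest stays centred on its true value, but its sampling variability grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 2000

def simulate(k_extra):
    """Sampling distribution of the OLS estimate of beta1
    when k_extra irrelevant regressors are included."""
    estimates = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        X = np.column_stack([x1] + [rng.normal(size=n) for _ in range(k_extra)])
        y = 2.0 * x1 + rng.normal(size=n)   # true coefficient on every extra is 0
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(b[0])
    return np.mean(estimates), np.std(estimates)

results = {k: simulate(k) for k in (0, 10, 30)}
for k, (m, s) in results.items():
    print(f"{k:2d} irrelevant regressors: mean = {m:.3f}, sd = {s:.3f}")
```

The mean stays near the true value 2 in every case (unbiasedness), while the standard deviation of the estimator - and hence the width of confidence intervals - increases with the number of irrelevant regressors.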
14,367
Why is controlling for too many variables considered harmful?
Well, it's related to the concept of p-hacking. Given a sufficient number of plausible confounding variables to add to the study, it's possible to find a combination of them that yields significant results (you just add and remove variables until you get significant results, and you report those). There's a very nice post at FiveThirtyEight about this so you can experience the idea for yourself, where you can even obtain contradictory results depending on which variables you choose to "correct for".
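The arithmetic behind this is ordinary multiple testing. A toy Python simulation (not tied to any particular study design; all numbers invented) shows how often "something significant" turns up when you get many tries at a pure null effect:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
reps, n_tests, n = 1000, 20, 30

false_positives = 0
for _ in range(reps):
    pvals = []
    for _ in range(n_tests):
        x = rng.normal(size=n)                  # data generated under the null
        z = x.mean() / (x.std(ddof=1) / math.sqrt(n))
        # Two-sided p-value, normal approximation to the t statistic:
        pvals.append(math.erfc(abs(z) / math.sqrt(2)))
    if min(pvals) < 0.05:
        false_positives += 1

rate = false_positives / reps
print(rate)   # well over half the time, despite a nominal 5% per-test rate
```

With 20 independent shots at a true null, the chance of at least one "significant" result is about $1 - 0.95^{20} \approx 0.64$; searching over covariate combinations gives correlated rather than independent tries, but the same inflation is at work.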
14,368
Why is controlling for too many variables considered harmful?
The term you are looking for is overfitting. Wikipedia has a good explanation.
14,369
Why is controlling for too many variables considered harmful?
There are some helpful mathsy explanations, but I thought perhaps this could use an intuitive example.

Suppose that you're investigating (perhaps for an insurance company) whether hair colour has an impact on crash risk. You look at the data, and at first pass you see that brunettes are 10% more likely to crash than blondes. But in the same data you see that brunettes are also more likely to get caught speeding. If you do the controls to take out the effect of speeding, you'd find that the effect of hair on crash risk drops below significance.

That would probably be an example of an inappropriate thing to control. It is likely that the fact that our brunettes speed more is the mechanism by which they're more likely to crash. As such, if you insist on zeroing out that mechanism, you're forcing yourself to see no effect even if it's obviously there. Intuitively, "Actually brunettes are very safe drivers considering how much they speed" is very obviously an unreasonable defence to make!

Conversely, suppose we look at the dataset again and find that people who are bald are 50% more likely to crash than those with red hair. But it also happens that the bald people in the dataset were typically older men, and younger women were underrepresented. Again you throw statistical controls at the situation and the effect disappears. This is probably a good thing to control, not least because your insurance company already asks about age and gender, so you don't want to double count any effects. Again intuitively, saying "We already knew that age and gender are known to have an impact on road safety and on baldness prevalence. These data show bald young women are just as safe as hairy young women and bald old men are just as safe as hairy old men." seems like a very reasonable clarification.

(This example is entirely made up, and isn't an allegation that people with any particular hair type are dangerous drivers in the real world!)
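The hair/speeding story can be mocked up numerically. In this hedged Python sketch (every number is invented), hair colour influences crashes only through speeding, so "controlling" for the mediator makes a real effect vanish:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

brunette = rng.integers(0, 2, size=n).astype(float)
speeding = 0.8 * brunette + rng.normal(size=n)   # hair -> speeding
crash    = 0.5 * speeding + rng.normal(size=n)   # speeding -> crash
# In this toy world, hair affects crashes ONLY through speeding.

def ols(columns, y):
    # OLS with an intercept; returns the coefficient vector.
    X = np.column_stack([np.ones(len(y)), *columns])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_marginal   = ols([brunette], crash)[1]             # ~0.4: the real total effect
b_controlled = ols([brunette, speeding], crash)[1]   # ~0: mediator controlled away
print(b_marginal, b_controlled)
```

The total effect of hair (0.8 × 0.5 = 0.4) is genuine, yet it disappears entirely once the mechanism that carries it is held fixed: controlling for a mediator answers a different question than the one posed.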
14,370
What's in a name: hyperparameters
The term hyperparameter is pretty vague. I will use it to refer to a parameter that is in a higher level of the hierarchy than the other parameters. For an example, consider a regression model with a known variance (1 in this case) $$ y \sim N(X\beta,I) $$ and then a prior on the parameters, e.g. $$ \beta \sim N(0,\lambda I) $$ Here $\lambda$ determines the distribution of $\beta$ and $\beta$ determines the distribution of $y$. When I want to refer just to $\beta$ I may call it the parameter, and when I want to refer just to $\lambda$, I may call it the hyperparameter.

The naming gets more complicated when parameters show up on multiple levels or when there are more hierarchical levels (and you don't want to use the term hyperhyperparameters). It is best if the authors specify exactly what is meant when they use the term hyperparameter--or parameter, for that matter.
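To make the hierarchy concrete, here is a small generative sketch in Python (values invented; unit noise variance as in the model above). $\lambda$ sits one level above $\beta$; with $\lambda$ fixed, the conditional posterior mean of $\beta$ takes the ridge-type form $(X^tX + I/\lambda)^{-1}X^ty$.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 3
lam = 2.0                                     # hyperparameter: prior variance of beta

# Level 1: parameters drawn given the hyperparameter.
beta = rng.normal(0.0, np.sqrt(lam), size=p)

# Level 2: data drawn given the parameters (known unit variance).
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

# Given lam, the conditional posterior mean of beta is shrunk toward the
# prior mean 0 by an amount the hyperparameter controls.
beta_hat = np.linalg.solve(X.T @ X + np.eye(p) / lam, X.T @ y)
print(beta, beta_hat)
```

Small $\lambda$ pulls `beta_hat` hard toward zero; large $\lambda$ leaves it close to ordinary least squares - the hyperparameter shapes the parameters, which in turn shape the data.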
14,371
What's in a name: hyperparameters
A hyperparameter is simply a parameter that impacts, completely or partly, other parameters. Hyperparameters do not directly solve the optimization problem you face, but rather optimize parameters that can solve the problem (hence the hyper, because they are not part of the optimization problem but rather are "add-ons"). From what I've seen, but I have no reference, this relationship is unidirectional (a hyperparameter cannot be influenced by the parameters it has influence on, hence also the hyper). They are usually introduced in regularization or meta-optimization schemes.

For example, your $\lambda$ parameter can freely impact $\mu$ and $\sigma$ to adjust for the regularization cost (but $\mu$ and $\sigma$ have no influence on $\lambda$). Thus, $\lambda$ is a hyperparameter for $\mu$ and $\sigma$. If you had an additional $\tau$ parameter influencing $\lambda$, it would be a hyperparameter for $\lambda$, and a hyperhyperparameter for $\mu$ and $\sigma$ (I've never seen that nomenclature, but I wouldn't feel it was wrong if I saw it).

I found the hyperparameter concept very useful for cross-validation, because it reminds you of the hierarchy of parameters, while also reminding you that if you are still modifying (hyper-)parameters, you are still cross-validating and not generalizing, so you must remain careful about your conclusions (to avoid circular thinking).
14,372
What's in a name: hyperparameters
The other explanations are a bit vague; here's a more concrete explanation that should clarify it. Hyperparameters are parameters of the model only, not of the physical process that is being modeled. You introduce them "artificially" to make your model "work" in the presence of finite data and/or finite computation time. If you had infinite power to measure or compute anything, hyperparameters would no longer exist in your model, since they wouldn't be describing any physical aspect of the actual system. Regular parameters, on the other hand, are those that describe the physical system, and aren't merely modeling artifacts.
14,373
What's in a name: hyperparameters
It's not a precisely defined term, so I'll go ahead and give you yet another definition that seems to be consistent with common usage.

A hyperparameter is a quantity estimated in a machine learning algorithm that does not participate in the functional form of the final predictive function.

Let me unwind that with an example, ridge regression. In ridge regression we solve the following pair of optimization problems:

$$ \beta^*(\lambda) = \text{argmin}_{\beta} \left( (y - X\beta)^t (y - X\beta) + \lambda \beta^t \beta \right)$$

$$ \lambda^* = \text{argmin}_{\lambda} \left( y' - X'\beta^*(\lambda) \right)^t \left( y' - X'\beta^*(\lambda) \right), \qquad \beta^* = \beta^*(\lambda^*) $$

In the first problem $X, y$ is the training data, and in the second $X', y'$ is a hold-out data set. The final functional form of the model, which I called above the predictive function, is $$ f(X) = X \beta^* $$ in which $\lambda$ does not appear. This makes $\beta$ a parameter vector, and $\lambda$ a hyperparameter.
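As a sketch of this two-level scheme (Python, simulated data; not from the original answer): an inner closed-form solve for $\beta^*(\lambda)$ on the training data, and an outer search over $\lambda$ scored on the hold-out set.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 20

beta_true = rng.normal(size=p)
X  = rng.normal(size=(n, p)); y  = X  @ beta_true + 2.0 * rng.normal(size=n)  # train
Xh = rng.normal(size=(n, p)); yh = Xh @ beta_true + 2.0 * rng.normal(size=n)  # hold-out

def ridge(X, y, lam):
    # Inner problem: closed-form beta*(lambda) on the training data.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Outer problem: choose the hyperparameter by hold-out squared error.
lams = [0.01, 0.1, 1.0, 10.0, 100.0]
errs = [float(np.sum((yh - Xh @ ridge(X, y, lam)) ** 2)) for lam in lams]
lam_star  = lams[int(np.argmin(errs))]
beta_star = ridge(X, y, lam_star)

# Only beta* survives into the final predictive function f(X) = X beta*;
# lambda has done its job and no longer appears.
print(lam_star)
```

The design choice matches the definition: $\lambda$ is estimated, but once $\beta^*$ is fixed, predictions never touch it again.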
14,374
What's in a name: hyperparameters
As precisely pointed out by @jaradniemi, one use of the term hyperparameter comes from hierarchical or multilevel modeling, where you have a cascade of statistical models, one built over/under the others, usually using conditional probability statements. But the same terminology arises in other contexts with different meanings as well. For instance, I have seen the term hyperparameter being used to refer to the parameters of the simulation (run length, number of independent replications, number of interacting particles in each replication, etc.) of a stochastic model, which did not result from multilevel modeling.
14,375
Fixed effect vs random effect when all possibilities are included in a mixed effects model
The general problem with "fixed" and "random" effects is that they are not defined in a consistent way. Andrew Gelman quotes several of the definitions:

(1) Fixed effects are constant across individuals, and random effects vary. For example, in a growth study, a model with random intercepts $a_i$ and fixed slope $b$ corresponds to parallel lines for different individuals $i$, or the model $y_{it} = a_i + b_t$. Kreft and De Leeuw (1998) thus distinguish between fixed and random coefficients.

(2) Effects are fixed if they are interesting in themselves or random if there is interest in the underlying population. Searle, Casella, and McCulloch (1992, Section 1.4) explore this distinction in depth.

(3) “When a sample exhausts the population, the corresponding variable is fixed; when the sample is a small (i.e., negligible) part of the population the corresponding variable is random.” (Green and Tukey, 1960)

(4) “If an effect is assumed to be a realized value of a random variable, it is called a random effect.” (LaMotte, 1983)

(5) Fixed effects are estimated using least squares (or, more generally, maximum likelihood) and random effects are estimated with shrinkage (“linear unbiased prediction” in the terminology of Robinson, 1991). This definition is standard in the multilevel modeling literature (see, for example, Snijders and Bosker, 1999, Section 4.2) and in econometrics.

and notices that they are not consistent. In his book Data Analysis Using Regression and Multilevel/Hierarchical Models he generally avoids using those terms; in that work he focuses on intercepts and slopes that are fixed or that vary between groups, because

Fixed effects can be viewed as special cases of random effects, in which the higher-level variance (in model (1.1), this would be $\sigma^2_\alpha$) is set to $0$ or $\infty$. Hence, in our framework, all regression parameters are “random,” and the term “multilevel” is all-encompassing.
This is especially true in the Bayesian framework - commonly used for mixed models - where all the effects are random per se. If you are thinking Bayesian, you are not really concerned with "fixed" effects and point estimates, and have no problem with treating all the effects as random.

The more I read on this topic, the more I am convinced that this is rather an ideological discussion about what we can (or should) estimate and what we can only predict (here I could refer also to your own answer). You use random effects if you have a random sample of possible outcomes, so you are not concerned about individual estimates; you care about the population effects rather than the individuals. So the answer to your question also depends on whether you think you want to, or can, estimate the fixed effects given your data. If all the possible levels are included in your data, you can estimate fixed effects. Also, as in your example, the number of levels could be small, which would generally not be good for estimating random effects; there are some minimal requirements for their number.

Best case scenario argument

Say you have unlimited amounts of data and unlimited computational power. In this case you could imagine estimating every effect as fixed, since fixed effects give you more flexibility (they enable us to compare the individual effects). However, even in this case, most of us would be reluctant to use fixed effects for everything.

For example, imagine that you want to model exam results of schools in some region and you have data on all the 100 schools in the region. In this case you could treat schools as fixed - since you have data on all the levels - but in practice you would probably rather think of them as random. Why is that? One reason is that generally in this kind of case you are not interested in the effects of individual schools (and it is hard to compare all of them), but rather in the general variability between schools.

Another argument here is model parsimony.
Generally you are not interested in "every possible influence" model, so in your model you include few fixed effects that you want to test and control for the other possible sources of variability. This makes mixed effects models fit the general way of thinking about statistical modeling where you estimate something and control for other things. With complicated (multilevel or hierarchical) data you have many effects to include, so you threat some as "fixed" and some as "random" so to control for them. In this scenario you also wouldn't think of the schools as each having its own, unique, influence on the results, but rather as about schools having some influence in general. So this argument would be that we believe that is is not really possible to estimate the unique effects of individual schools and so we threat them as random sample of possible schools effects. Mixed effects models are somewhere in between "everything fixed" and "everything random" scenarios. The data we encounter makes us to lower our expectations about estimate everything as fixed effects, so we decide what effects we want to compare and what effects we want to control, or have general feeling about their influence. It is not only about what the data is, but also how we think of the data while modeling it.
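Definition (5) above - fixed effects estimated without pooling, random effects estimated with shrinkage - can be illustrated with a small base-R sketch. This is my own illustration with made-up variance components, not from Gelman's book: for a balanced one-way layout with known variances, the BLUP of a group effect shrinks the raw group mean toward the grand mean by the factor $\sigma^2_\alpha/(\sigma^2_\alpha + \sigma^2/n_j)$; the "fixed" estimate is the no-shrinkage limit $\sigma^2_\alpha \to \infty$, and a constant model is the complete-shrinkage limit $\sigma^2_\alpha \to 0$.

```r
# Simulate exam scores for 100 schools from a one-way random-intercept model.
set.seed(42)
n_schools <- 100; n_per <- 20
sigma_a <- 2    # between-school sd (assumed known here, for simplicity)
sigma_e <- 8    # within-school sd
alpha   <- rnorm(n_schools, sd = sigma_a)             # true school effects
y       <- matrix(rnorm(n_schools * n_per,
                        mean = rep(alpha, each = n_per),
                        sd   = sigma_e), ncol = n_schools)

raw    <- colMeans(y) - mean(y)                        # "fixed effect" estimates
shrink <- sigma_a^2 / (sigma_a^2 + sigma_e^2 / n_per)  # BLUP shrinkage factor
blup   <- shrink * raw                                 # "random effect" predictions

# Shrinkage trades a little bias for a lot of variance, so the partially
# pooled predictions are closer to the true effects on average:
mean((raw  - alpha)^2)   # larger
mean((blup - alpha)^2)   # smaller
```

The gap between the two mean squared errors is exactly why one often prefers the "random" treatment even when all 100 schools are in the data: the individual fixed estimates are noisy, and pooling borrows strength across schools.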
Fixed effect vs random effect when all possibilities are included in a mixed effects model
Executive summary

It is indeed often said that if all possible factor levels are included in a mixed model, then this factor should be treated as a fixed effect. This is not necessarily true, FOR TWO DISTINCT REASONS:

(1) If the number of levels is large, then it can make sense to treat the [crossed] factor as random.

I agree with both @Tim and @RobertLong here: if a factor has a large number of levels that are all included in the model (such as e.g. all countries in the world; or all schools in a country; or maybe the entire population of subjects is surveyed, etc.), then there is nothing wrong with treating it as random --- this could be more parsimonious, could provide some shrinkage, etc.

    lmer(size ~ age + subjectID)      # fixed effect
    lmer(size ~ age + (1|subjectID))  # random effect

(2) If the factor is nested within another random effect, then it has to be treated as random, independent of its number of levels.

There was a huge confusion in this thread (see comments) because the other answers are about case #1 above, but the example you gave is an example of a different situation, namely this case #2. Here there are only two levels (i.e. not at all "a large number"!) and they do exhaust all possibilities, but they are nested inside another random effect, yielding a nested random effect.

    lmer(size ~ age + (1|subject) + (1|subject:side))  # side HAS to be random

Detailed discussion of your example

Sides and subjects in your imaginary experiment are related like classes and schools in the standard hierarchical model example. Perhaps each school (#1, #2, #3, etc.) has class A and class B, and these two classes are supposed to be roughly the same. You would not model classes A and B as a fixed effect with two levels; that would be a mistake. But you would not model classes A and B as a "separate" (i.e. crossed) random effect with two levels either; that would be a mistake too. Instead, you would model classes as a random effect nested inside schools. See here: Crossed vs nested random effects: how do they differ and how are they specified correctly in lme4?

In your imaginary foot-size study, subject and side are random effects and side is nested inside subject. This essentially means that a combined variable is formed, e.g. John-Left, John-Right, Mary-Left, Mary-Right, etc., and there are two crossed random effects: subjects and subjects-sides. So for subject $i=1\ldots n$ and for side $j=1,2$ we would have:

$$\text{Size}_{ijk} = \mu+\alpha\cdot\text{Height}_{ijk}+\beta\cdot\text{Weight}_{ijk}+\gamma\cdot\text{Age}_{ijk}+\epsilon_i + \color{red}{\epsilon_{ij}} + \epsilon_{ijk}$$
$$\epsilon_i\sim\mathcal N(0,\sigma^2_\mathrm{subjects}),\quad\quad\text{Random intercept for each subject}$$
$$\color{red}{\epsilon_{ij}}\sim\mathcal N(0,\sigma^2_\text{subject-side}),\quad\quad\text{Random intercept for side nested in subject}$$
$$\epsilon_{ijk}\sim\mathcal N(0,\sigma^2_\text{noise}),\quad\quad\text{Error term}$$

As you wrote yourself, "there is no reason to believe that right feet will on average be larger than left feet". So there should be no "global" effect (neither fixed nor random crossed) of right or left foot at all; instead, each subject can be thought of as having "one" foot and "another" foot, and this variability should be included in the model. These "one" and "another" feet are nested within subjects, hence nested random effects.

More details in response to the comments. [Sep 26]

My model above includes Side as a random effect nested within Subjects. Here is an alternative model, suggested by @Robert, where Side is a fixed effect:

$$\text{Size}_{ijk} = \mu+\alpha\cdot\text{Height}_{ijk}+\beta\cdot\text{Weight}_{ijk}+\gamma\cdot\text{Age}_{ijk} + \color{red}{\delta\cdot\text{Side}_j}+\epsilon_i + \epsilon_{ijk}$$

I challenge @RobertLong or @gung to explain how this model can take care of the dependencies existing between consecutive measurements of the same Side of the same Subject, i.e. the dependencies between data points with the same $ij$ combination. It cannot. The same is true for @gung's hypothetical model with Side as a crossed random effect:

$$\text{Size}_{ijk} = \mu+\alpha\cdot\text{Height}_{ijk}+\beta\cdot\text{Weight}_{ijk}+\gamma\cdot\text{Age}_{ijk} +\epsilon_i + \color{red}{\epsilon_j} + \epsilon_{ijk}$$

It fails to account for these dependencies as well.

Demonstration via a simulation [Oct 2]

Here is a direct demonstration in R. I generate a toy dataset with five subjects measured on both feet for five consecutive years. The effect of age is linear. Each subject has a random intercept. And each subject has one foot (either the left or the right) larger than the other.

    set.seed(17)
    demo = data.frame(expand.grid(age = 1:5, side = c("Left", "Right"),
                                  subject = c("Subject A", "Subject B", "Subject C",
                                              "Subject D", "Subject E")))
    demo$size = 10 + demo$age + rnorm(nrow(demo))/3
    for (s in unique(demo$subject)){
      # adding a random intercept for each subject
      demo[demo$subject==s,]$size = demo[demo$subject==s,]$size + rnorm(1)*10
      # making the two feet of each subject different
      for (l in unique(demo$side)){
        demo[demo$subject==s & demo$side==l,]$size =
          demo[demo$subject==s & demo$side==l,]$size + rnorm(1)*7
      }
    }
    plot(1:50, demo$size)

Apologies for my awful R skills. Here is how the data look (each consecutive run of five dots is one foot of one person measured over the years; each consecutive run of ten dots is the two feet of the same person):

Now we can fit a bunch of models:

    require(lme4)
    summary(lmer(size ~ age + side + (1|subject), demo))
    summary(lmer(size ~ age + (1|side) + (1|subject), demo))
    summary(lmer(size ~ age + (1|subject/side), demo))

All models include a fixed effect of age and a random effect of subject, but treat side differently.

Model 1: fixed effect of side. This is @Robert's model. Result: age comes out not significant ($t=1.8$), residual variance is huge (29.81).

Model 2: crossed random effect of side. This is @gung's "hypothetical" model from the OP. Result: age comes out not significant ($t=1.4$), residual variance is huge (29.81).

Model 3: nested random effect of side. This is my model. Result: age is very significant ($t=37$, yes, thirty-seven), residual variance is tiny (0.07).

This clearly shows that side should be treated as a nested random effect.

Finally, in the comments @Robert suggested including the global effect of side as a control variable. We can do this while keeping the nested random effect:

    summary(lmer(size ~ age + side + (1|subject/side), demo))
    summary(lmer(size ~ age + (1|side) + (1|subject/side), demo))

These two models do not differ much from #3. Model 4 yields a tiny and insignificant fixed effect of side ($t=0.5$), and Model 5 yields an estimate of side variance equal to exactly zero.
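The within-(subject, side) dependence that Models 1 and 2 fail to capture can also be seen directly without lme4. The following base-R sketch is my own addition, not part of the original answer: it regenerates data with the same structure as the simulation above (subject intercepts plus per-foot intercepts) and then shows that residuals from a model omitting the nested term remain almost perfectly correlated within each foot.

```r
# Simulate a foot-size dataset with subject intercepts (sd 10) and
# per-(subject, side) intercepts (sd 7), as in the demo above.
set.seed(1)
d <- expand.grid(age = 1:5, side = c("Left", "Right"), subject = 1:5)
subj_eff <- rnorm(5, sd = 10)                     # subject intercepts
foot_eff <- rnorm(10, sd = 7)                     # one intercept per foot
foot_id  <- as.integer(interaction(d$subject, d$side))
d$size   <- 10 + d$age + subj_eff[d$subject] + foot_eff[foot_id] +
            rnorm(nrow(d), sd = 1/3)

# Fit a model with age and subject but NO (subject, side) term, as in
# Models 1 and 2, and inspect its residuals:
r <- resid(lm(size ~ age + factor(subject), data = d))

# Residual variance within each foot is noise-sized, while the total
# residual variance is dominated by the omitted foot effects: the
# intraclass correlation within feet is close to 1.
within_var <- mean(tapply(r, foot_id, var))
total_var  <- var(r)
icc <- 1 - within_var / total_var
icc   # close to 1
```

A high intraclass correlation of the residuals within (subject, side) groups is exactly the dependence that the nested random effect `(1|subject/side)` is designed to absorb.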
Fixed effect vs random effect when all possibilities are included in a mixed effects model
To add to the other answers: I don't think you are logically obliged to always use a fixed effect in the manner described in the OP. Even when the usual definitions/guidelines for when to treat a factor as random are not met, I might be inclined to still model it as random when there are a large number of levels, so that treating the factor as fixed would consume many degrees of freedom and result in a cumbersome and less parsimonious model.
Fixed effect vs random effect when all possibilities are included in a mixed effects model
If you're talking about the situation where you know all possible levels of a factor of interest, and you also have data to estimate their effects, then you definitely don't need to represent the levels with random effects. The reason you would assign a random effect to a factor is that you wish to make inferences about the effects of all levels of that factor, which are typically unknown. To make that kind of inference, you impose the assumption that the effects of all levels jointly follow a normal distribution. But given your problem setting, you can estimate the effects of all levels directly, so there is certainly no need to set up random effects and impose that additional assumption. It is like a situation where you are able to observe every value in the population (and thus know the true mean), but you nevertheless take a large sample from the population and use the central limit theorem to approximate the sampling distribution and make inferences about the true mean.
Fixed effect vs random effect when all possibilities are included in a mixed effects model
Following the above discussion, I thought that side could also be modelled as a random slope across subjects, that is, with the following LME model [model 4, or lme4 below]:

    lmer(size ~ age + (1+side|subject), demo)

(because, as you said, there is random variation of size across subjects and, in addition to this, random variation of the side effect across subjects). I checked this model on the simulation above. It gives the same result as model 3 (denoted lme3 below), where side was modelled as a random factor nested in subject: t for age = 37, residual variance = 0.07. A dotplot helps in understanding the similarity between the two models:

    dotplot(ranef(lme3, condVar=T))$side  # for model 3
    dotplot(ranef(lme4, condVar=T))       # for model 4

I found this very enlightening and thought I would share it. This is my first participation here, so I hope I am not missing the point. Best, N
How is the kurtosis of a distribution related to the geometry of the density function?
The moments of a continuous distribution, and functions of them like the kurtosis, tell you extremely little about the graph of its density function.

Consider, for instance, the following graphs. Each of these is the graph of a non-negative function integrating to $1$: they are all PDFs. Moreover, they all have exactly the same moments--every last one of the infinitely many of them. Thus they share a common kurtosis (which happens to equal $-3+3 e^2+2 e^3+e^4$).

The formulas for these functions are

$$f_{k,s}(x) = \frac{1}{\sqrt{2\pi}x} \exp\left(-\frac{1}{2}(\log(x))^2\right)\left(1 + s\sin(2 k \pi \log(x))\right)$$

for $x \gt 0,$ $-1\le s\le 1,$ and $k\in\mathbb{Z}.$ The figure displays values of $s$ at the left and values of $k$ across the top. The left-hand column shows the PDF of the standard lognormal distribution. Exercise 6.21 in Kendall's Advanced Theory of Statistics (Stuart & Ord, 5th edition) asks the reader to show that these all have the same moments.

One can similarly modify any pdf to create another pdf of radically different shape but with the same second and fourth central moments (say), which therefore would have the same kurtosis. From this example alone it should be abundantly clear that kurtosis is not an easily interpretable or intuitive measure of symmetry, unimodality, bimodality, convexity, or any other familiar geometric characterization of a curve.

Functions of moments, therefore (and kurtosis as a special case) do not describe geometric properties of the graph of the pdf. This intuitively makes sense: because a pdf represents probability by means of area, we can almost freely shift probability density around from one location to another, radically changing the appearance of the pdf, while fixing any finite number of pre-specified moments.
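As a quick numerical sanity check of the claim (my own sketch, not part of the original answer), one can verify that the perturbed densities $f_{k,s}$ share the lognormal moments $E[X^n]=e^{n^2/2}$. Substituting $t=\log x$ turns the $n$-th raw moment into an integral against the standard normal pdf, which base R's `integrate` handles directly:

```r
# n-th raw moment of f_{k,s}, via the substitution t = log(x):
#   E[X^n] = integral of exp(n*t) * dnorm(t) * (1 + s*sin(2*k*pi*t)) dt
# The sine term integrates to exp(n^2/2) * exp(-2*k^2*pi^2) * sin(2*k*pi*n),
# which vanishes for every integer n -- so all moments match the lognormal.
raw_moment <- function(n, k, s) {
  integrate(function(t) exp(n * t) * dnorm(t) * (1 + s * sin(2 * k * pi * t)),
            -Inf, Inf, rel.tol = 1e-9, subdivisions = 2000L)$value
}

m0 <- raw_moment(0, k = 1, s = 1)   # total probability
m4 <- raw_moment(4, k = 1, s = 1)   # fourth raw moment
m0            # ~ 1
m4 / exp(8)   # ~ 1: same fourth moment as the lognormal
```

Repeating this for other integer $k$ and $|s|\le 1$ gives the same moments every time, even though the densities look wildly different.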
How is the kurtosis of a distribution related to the geometry of the density function?
The moments of a continuous distribution, and functions of them like the kurtosis, tell you extremely little about the graph of its density function. Consider, for instance, the following graphs. Eac
How is the kurtosis of a distribution related to the geometry of the density function? The moments of a continuous distribution, and functions of them like the kurtosis, tell you extremely little about the graph of its density function. Consider, for instance, the following graphs. Each of these is the graph of a non-negative function integrating to $1$: they are all PDFs. Moreover, they all have exactly the same moments--every last infinite number of them. Thus they share a common kurtosis (which happens to equal $-3+3 e^2+2 e^3+e^4$.) The formulas for these functions are $$f_{k,s}(x) = \frac{1}{\sqrt{2\pi}x} \exp\left(-\frac{1}{2}(\log(x))^2\right)\left(1 + s\sin(2 k \pi \log(x))\right)$$ for $x \gt 0,$ $-1\le s\le 1,$ and $k\in\mathbb{Z}.$ The figure displays values of $s$ at the left and values of $k$ across the top. The left-hand column shows the PDF for the standard lognormal distribution. Exercise 6.21 in Kendall's Advanced Theory of Statistics (Stuart & Ord, 5th edition) asks the reader to show that these all have the same moments. One can similarly modify any pdf to create another pdf of radically different shape but with the same second and fourth central moments (say), which therefore would have the same kurtosis. From this example alone it should be abundantly clear that kurtosis is not an easily interpretable or intuitive measure of symmetry, unimodality, bimodality, convexity, or any other familiar geometric characterization of a curve. Functions of moments, therefore (and kurtosis as a special case) do not describe geometric properties of the graph of the pdf. This intuitively makes sense: because a pdf represents probability by means of area, we can almost freely shift probability density around from one location to another, radically changing the appearance of the pdf, while fixing any finite number of pre-specified moments.
[NB this was written in response to another question on site; the answers were merged to the present question. This is why this answer seems to respond to a differently worded question. However much of the post should be relevant here.]

Kurtosis doesn't really measure the shape of distributions. Within some distribution families, perhaps, you can say it describes the shape, but more generally kurtosis doesn't tell you terribly much about the actual shape. Shape is impacted by many things, including things unrelated to kurtosis.

If one does image searches for kurtosis, quite a few images like this one show up: which instead seem to be showing changing variance, rather than increasing kurtosis. For comparison, here are three normal densities I just drew (using R) with different standard deviations: as you can see, they look almost identical to the previous picture. These all have exactly the same kurtosis.

By contrast, here's an example that is probably nearer to what the diagram was aiming for. The green curve is both more peaked and heavier tailed (though this display isn't well suited to seeing how much heavier the tail actually is). The blue curve is less peaked and has very light tails (indeed it has no tails at all beyond $\sqrt{6}$ standard deviations from the mean). This is usually what people mean when they talk about kurtosis indicating the shape of the density.

However, kurtosis can be subtle -- it doesn't have to work like that. For example, at a given variance higher kurtosis can actually occur with a lower peak. One must also beware the temptation (and in quite a few books it's openly stated) to conclude that zero excess kurtosis implies normality. There are distributions with excess kurtosis 0 that are nothing like normal. Here's an example: indeed, that also illustrates the previous point. I could readily construct a similar-looking distribution with higher kurtosis than the normal but which is still zero at the center - a complete absence of peak.
There are a number of posts on site that describe kurtosis further. One example is here.
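The claim that normal densities with very different spreads (and hence very different-looking graphs) share the same kurtosis is easy to verify by simulation. A small sketch, mine rather than the answer's (Python with NumPy assumed):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized sample moment minus 3 (0 for a normal population)."""
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

rng = np.random.default_rng(0)
n = 1_000_000
# Very different spreads, hence very different-looking densities,
# yet all three have population excess kurtosis exactly 0.
for sd in (0.5, 1.0, 3.0):
    assert abs(excess_kurtosis(rng.normal(0.0, sd, n))) < 0.05
```

The tolerance reflects sampling noise only: the standard error of sample excess kurtosis for a normal is roughly $\sqrt{24/n}\approx 0.005$ here.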
For symmetric distributions (that is, those for which the even centred moments are meaningful) kurtosis measures a geometric feature of the underlying pdf. It is not true that kurtosis measures (or is in general related to) the peakedness of a distribution. Rather, kurtosis measures how far the underlying distribution is from being symmetric and bimodal (algebraically, a perfectly symmetric and bimodal distribution will have a kurtosis of 1, which is the smallest possible value the kurtosis can have)[0].

In a nutshell[1], if you define $$k=E(X-\mu)^4/\sigma^4$$ with $E(X)=\mu,V(X)=\sigma^2$, then $$k=V(Z^2)+1\ge1$$ for $Z=(X-\mu)/\sigma$. This implies that $k$ can be seen as a measure of the dispersion of $Z^2$ around its expectation 1. In other words, if you have a geometrical interpretation of the variance and the expectation, then that of the kurtosis follows.

[0] R. B. Darlington (1970). Is Kurtosis Really "Peakedness?". The American Statistician, Vol. 24, No. 2.

[1] J. J. A. Moors (1986). The Meaning of Kurtosis: Darlington Reexamined. The American Statistician, Volume 40, Issue 4.
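The decomposition $k = V(Z^2) + 1$ can be checked on any sample. With population-style standardization (sample moments with denominator $n$), $E(Z^2)=1$ holds exactly, so the identity holds up to floating point. A hypothetical check in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(size=10_000)        # any data with a finite fourth moment

z = (x - x.mean()) / x.std()            # standardize with denominator-n moments
k = (z**4).mean()                       # sample kurtosis E(Z^4)

# Since E(Z^2) = 1 exactly here, E(Z^4) = V(Z^2) + E(Z^2)^2 = V(Z^2) + 1
assert abs(k - (np.var(z**2) + 1.0)) < 1e-10
assert k >= 1.0                         # the lower bound mentioned above
```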
A different kind of answer: we can illustrate kurtosis geometrically, using ideas from http://www.quantdec.com/envstats/notes/class_06/properties.htm: graphical moments.

Start with the definition of kurtosis: $$ \DeclareMathOperator{\E}{\mathbb{E}} k = \E \left( \frac{X-\mu}{\sigma} \right)^4 =\int \left(\frac{x-\mu}{\sigma}\right)^4 f(x) \; dx $$ where $f$ is the density of $X$, and $\mu, \sigma^2$ are respectively its expectation and variance. The nonnegative function under the integral sign integrates to the kurtosis, and gives the contribution to the kurtosis from around $x$. We can call it the kurtosis density, and plotting it shows the kurtosis graphically. (Note that in this post we are not using the excess kurtosis $k_e=k-3$ at all.)

In the following I will show a plot of the graphical kurtosis for some symmetric distributions, all centered at zero and scaled to have variance 1. Note the virtual absence of contribution to the kurtosis from the center, showing that kurtosis does not have much to do with "peakedness".
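As a numerical illustration of this idea (my sketch, not part of the original answer), integrating the kurtosis density $z^4\varphi(z)$ of the standard normal shows just how little the central region contributes:

```python
import numpy as np

z = np.linspace(-8.0, 8.0, 400_001)
h = z[1] - z[0]
phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
kurt_density = z**4 * phi                  # its total area is the kurtosis

total = h * kurt_density.sum()             # = 3 for the standard normal
center = h * kurt_density[np.abs(z) < 1.0].sum()

assert abs(total - 3.0) < 1e-6
assert center / total < 0.05               # the peak region contributes < 5%
```

So under 5% of the normal's kurtosis comes from the whole region within one standard deviation of the mean.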
Kurtosis is not related to the geometry of the distribution at all, at least not in the central portion of the distribution. In the central portion of the distribution (within the $\mu \pm \sigma$ range) the geometry can show an infinite peak, a flat peak, or bimodal peaks, both in cases where the kurtosis is infinite, and in cases where the kurtosis is less than that of the normal distribution. Kurtosis measures tail behavior (outliers) only. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4321753/

Edit 11/23/2018: Since writing this post, I have developed some geometrical perspectives on kurtosis. One is that excess kurtosis can indeed be visualized geometrically in terms of deviations from the expected 45 degree line in the tails of the normal quantile-quantile plot; see Does this Q-Q plot indicates leptokurtic or platykurtic distribution?

Another (perhaps more physical than geometrical) interpretation of kurtosis is that kurtosis can be visualized as the point of balance of the distribution $p_V(v)$, where $V = \{(X - \mu)/\sigma \}^4$. Note that (non-excess) kurtosis of $X$ is equal to $E(V)$. Thus, the distribution of $V$ balances at the kurtosis of $X$.

Another result that shows that geometry in the $\mu \pm \sigma$ range is nearly irrelevant to kurtosis is given as follows. Consider the pdf of any RV $X$ having finite fourth moment. (Thus the result applies to all empirical distributions.) Replace the mass (or geometry) within the $\mu \pm \sigma$ range arbitrarily to get a new distribution, but keep the mean and standard deviation of the resulting distribution equal to $\mu$ and $\sigma$ of the original $X$. Then the maximum difference in kurtosis for all such replacements is $\le 0.25$. On the other hand, if you replace the mass outside the $\mu \pm \sigma$ range, keeping the center mass as well as $\mu$, $\sigma$ fixed, the difference in kurtosis is unbounded for all such replacements.
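The tail-behavior point can be made concrete with a scale-contaminated normal mixture (my example, not the answer's): moving a tiny fraction of mass into a wide component barely changes the center of the density but changes the kurtosis enormously. The population quantities are available in closed form:

```python
def mixture_excess_kurtosis(p, sigma):
    """Excess kurtosis of the scale mixture (1-p)*N(0,1) + p*N(0, sigma^2)."""
    var = (1 - p) + p * sigma**2
    m4 = 3 * ((1 - p) + p * sigma**4)   # E X^4 = 3 s^4 for N(0, s^2)
    return m4 / var**2 - 3

assert abs(mixture_excess_kurtosis(0.0, 10.0)) < 1e-12   # pure normal: 0
assert mixture_excess_kurtosis(0.005, 10.0) > 60         # 0.5% contamination
```

Contaminating half a percent of the mass takes the excess kurtosis from 0 to over 60, while the density near the mode is visually unchanged.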
Is multicollinearity really a problem?
It's a problem for causal inference - or rather, it indicates difficulties in causal inference - but it's not a particular problem for prediction/forecasting (unless it's so extreme that it prevents model convergence or results in singular matrices, and then you won't get predictions anyway). This, I think, is the meaning of that blog post, as well. It sounds like you may be insisting on a yes-or-no answer when the answer is that it depends. Here's what it depends on, and why it can at least be said that (non-perfect) multicollinearity is never a reason to drop a variable from a model - any problems that multicollinearity indicates won't go away just because you dropped a variable and stopped seeing the collinearity.

Predictors that are highly correlated with each other just don't do as good a job of improving your predictions as they would if they were not collinear but still separately correlated with the outcome variable; neither one is doing much more work than the other one is already doing and would do on its own anyway. Maybe they're so strongly related to each other because they are capturing basically the same underlying construct, in which case neither one is adding much more on top of the other for good reason, and it would be impossible to separate them out ontologically for predictive purposes anyway, by manipulating the units of observation to have different values on each of the two predictor variables so that they work better as predictors. But that doesn't mean that including both of them in your model as-is is bad or wrong. Throw them both in, why not - any random measurement error that's involved will also be partly addressed by the fact that you basically have two separate measurements of the same thing, so you might have a marginal increase in predictive power for that good reason (and in this sense I think I disagree with Kjetil's comment above).

When it comes to causal inference, it's a problem simply because it prevents us from being able to tell, confidently at least, which of the collinear predictors is doing the predicting, and therefore the explaining and, presumably, the causing. With enough observations, you will eventually be able to identify the separate effects of even highly collinear (but never perfectly collinear) variables. This is why Rob Franzese of UMich likes to call multicollinearity "micronumerosity." There's always some collinearity between predictors; that's one of the reasons why we generally just need lots of observations - sometimes an impossible amount, for our causal-inference needs. But the problem is the complexity of the world and the unfortunate circumstances that prevent us from observing a wider variety of situations where different factors vary more in relation to each other. Multicollinearity is the symptom of that lack of useful data, and multivariate regression is the (imperfect) cure. Yet so many people seem to think of multicollinearity as something they're doing wrong with their model, and as if it's a reason to doubt what findings they do have.
It's not an issue for predictive modeling when all you care about is the forecast and nothing else. Consider this simple model: $$y=\beta+\beta_xx+\beta_zz+\varepsilon$$ Suppose that $z=\alpha x$. We have perfectly collinear regressors, and a typical OLS solution will not exist because $(X^TX)^{-1}$ has a singularity. However, let's plug one equation into the other: $$y=\beta+\beta_xx+\beta_z\alpha x+\varepsilon= \beta+\beta_2 x+\varepsilon,$$ where $\beta_2\equiv \beta_x+\beta_z\alpha$.

So, clearly, we can estimate $\hat\beta_2$ by usual OLS methods, i.e. there is a solution. The only problem's that it's not unique! We can choose any $\hat\beta_z$, which would give us $\hat\beta_x=\hat\beta_2-\alpha\hat\beta_z$: we have an infinite number of pairs $(\hat\beta_x,\hat\beta_z)$ that correspond to the unique solution $\hat\beta_2$. Obviously, any of these pairs is as good as any other for prediction of $\hat y$. Moreover, all these pairs are as good as the unique $\hat\beta_2$ coefficient for the purpose of forecasting.

The only problem is the inference. If you want to know how $x$ impacts $y$, your typical analysis of the $\hat\beta_x$ coefficient and its variance will be pointless.
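This non-uniqueness is easy to demonstrate numerically. A sketch (assuming NumPy; `np.linalg.lstsq` returns the minimum-norm least-squares solution even though $X^TX$ is singular here):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 2.0
x = rng.normal(size=100)
z = alpha * x                                 # perfectly collinear with x
X = np.column_stack([np.ones(100), x, z])
y = 1.0 + 3.0 * x + rng.normal(scale=0.1, size=100)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimum-norm least squares
b2 = beta[1] + alpha * beta[2]                # the identified combination beta_2

# Every pair (b_x, b_z) with b_x + alpha*b_z = b2 gives identical predictions
for b_z in (-5.0, 0.0, 7.0):
    b_x = b2 - alpha * b_z
    y_hat = beta[0] + b_x * x + b_z * z
    assert np.allclose(y_hat, X @ beta)
```

The individual coefficients are arbitrary; only $\hat\beta_2$ (and hence the forecast) is pinned down by the data.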
Multicollinearity is generally not the best scenario for regression analysis. Our life would be much easier if all predictors were orthogonal.

It's a problem for model interpretation (trying to understand the data):

- Multicollinearity affects the variance of the coefficient estimators, and therefore estimation precision. Thus, it would be harder to reject a null hypothesis (because of the higher standard errors): we have a Type II error problem.
- The addition or deletion of just a few sample observations can substantially change the estimated coefficients.
- The signs of the estimated coefficients can be the opposite of those expected.

Imagine if you have to write a report to your boss about your data. You build a near-perfect multicollinearity model, and tell your boss about the model. You might say "my first predictor is positively correlated with the response... I'm going to tell you more about why..." Your boss is happy, but asks you to try again without a few data points. The coefficients in your new model are now... very different; the coefficient for your first predictor is now negative! Your boss won't trust you anymore! Your model is not robust.

Multicollinearity is still a problem for predictive power. Your model will overfit and be less likely to generalize to out-of-sample data. Fortunately, your $R^2$ will be unaffected and your coefficients will still be unbiased.
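The variance-inflation point can be illustrated numerically. In the two-predictor case the variance inflation factor for $x_1$ is $1/(1-R^2)$, where $R^2$ comes from regressing $x_1$ on the other predictor. A sketch (my own construction, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x1 = rng.normal(size=n)

def vif_of_x1(x2):
    """VIF of x1 given x2: 1/(1 - R^2) from regressing x1 on x2 (+ intercept)."""
    X = np.column_stack([np.ones(n), x2])
    coef, *_ = np.linalg.lstsq(X, x1, rcond=None)
    resid = x1 - X @ coef
    r2 = 1.0 - resid.var() / x1.var()
    return 1.0 / (1.0 - r2)

x2_orthogonal = rng.normal(size=n)              # unrelated predictor
x2_collinear = x1 + 0.1 * rng.normal(size=n)    # highly collinear predictor

assert vif_of_x1(x2_orthogonal) < 1.1           # essentially no inflation
assert vif_of_x1(x2_collinear) > 50             # variance inflated ~100-fold
```

A VIF near 100 means the standard error of that coefficient is roughly ten times what it would be with an orthogonal design, which is exactly the Type II error problem described above.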
I'd argue that if the correlation between a variable and another variable (or linear combination of variables) changes between the in-sample and out-of-sample data, you can start to see multicollinearity affecting the accuracy of out-of-sample predictions. Multicollinearity just adds another assumption (consistent correlation) that must be reasonably met for your model to keep performing well.
What is the difference between moment generating function and probability generating function?
The probability generating function is usually used for (nonnegative) integer valued random variables, but is really only a repackaging of the moment generating function. So the two contain the same information.

Let $X$ be a non-negative random variable. Then (see https://en.wikipedia.org/wiki/Probability-generating_function) the probability generating function is defined as $$ \DeclareMathOperator{\P}{\mathbb{P}} \DeclareMathOperator{\E}{\mathbb{E}} G(z) = \E z^X $$ and the moment generating function is $$ M_X(t) = \E e^{t X} $$ Now define $\log z=t$ so that $e^t=z$. Then $$ G(z)=\E z^X = \E (e^t)^X = \E e^{t X} =M_X(t)=M_X(\log z) $$ So, to conclude, the relationship is simple: $$ G(z) = M_X(\log z) $$

EDIT: @Carl writes in a comment about my formula "... which is true, except when it is false", so I need to add some comments. Of course, the equality $ G(z) = M_X(\log z)$ assumes that both are defined, and a domain for the variable $z$ needs to be given. I thought the post was clear enough without those formalities, but yes, sometimes I am too informal.

But there is another point: yes, the probability generating function is mostly used for (nonnegative integer valued) probability mass functions, wherefrom the name comes. But there is nothing in the definition which assumes this; it can just as well be used for any nonnegative random variable! As an example, take the exponential distribution with rate 1; we can calculate $$G(z)=\E z^X=\int_0^\infty z^x e^{-x}\; dx=\dots=\frac1{1-\log z} $$ which could be used for all the purposes we use the moment generating function for, and you can check that the relationship between the two functions is fulfilled. Normally we do not do this; it is probably more practical to use the same definitions with (possibly) negative as well as with nonnegative variables. But it is not forced by the mathematics.
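The relationship $G(z) = M_X(\log z)$ can be sanity-checked for a concrete distribution, say Poisson($\lambda$), where $G(z)=e^{\lambda(z-1)}$ and $M_X(t)=e^{\lambda(e^t-1)}$. A small sketch in pure Python, computing $G(z)$ directly from the pmf rather than from the closed form:

```python
import math

lam = 2.5

def pgf_poisson(z, terms=60):
    """G(z) = E z^X computed term by term from the Poisson pmf."""
    total, term = 0.0, math.exp(-lam)     # k = 0 term: e^{-lam}
    for k in range(terms):
        total += term
        term *= z * lam / (k + 1)         # next pmf term, times another z
    return total

def mgf_poisson(t):
    return math.exp(lam * (math.exp(t) - 1))

# G(z) = M_X(log z), checked at a few points
for z in (0.3, 0.9, 1.7):
    assert abs(pgf_poisson(z) - mgf_poisson(math.log(z))) < 1e-12
```

The same check works for the exponential example in the answer, with $M_X(t)=1/(1-t)$ for $t<1$ and the computed $G(z)=1/(1-\log z)$ for $z<e$.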
14,390
What is the difference between moment generating function and probability generating function?
Let us define both first and then specify the difference. 1) In probability theory and statistics, the moment-generating function (mgf) of a real-valued random variable is an alternative specification of its probability distribution. 2) In probability theory, the probability generating function (pgf) of a discrete random variable is a power series representation (the generating function) of the probability mass function of the random variable. The mgf can be regarded as a generalization of the pgf. The difference, among other things, is that the probability generating function applies to discrete random variables whereas the moment generating function applies to discrete random variables and also to some continuous random variables. For example, both could be applied to the Poisson distribution as it is discrete. Indeed, they yield a result of the same form: $e^{\lambda(z - 1)}$. Only the mgf would apply to a normal distribution, and neither the mgf nor the pgf applies to the Cauchy distribution, but for slightly different reasons. Edit As @kjetilbhalvorsen points out, the pgf applies to non-negative rather than only discrete random variables. Thus, the current Wikipedia entry on probability generating functions has a mistake of omission and should be improved.
14,391
Why is there always at least one policy that is better than or equal to all other policies?
Just past the quoted part, the same paragraph actually tells you what this policy is: it is the one that takes the best action in every state. In an MDP, the action we take in one state does not affect rewards for actions taken in others, so we can simply maximize the policy state-by-state.
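This state-by-state argument can be sketched concretely. In the Python example below (the two-state, two-action MDP is a made-up illustration, not anything from the book), we compute $v_\pi = (I - \gamma P_\pi)^{-1} r_\pi$ for every deterministic policy and observe that one policy is at least as good as all the others in every state:

```python
import numpy as np
from itertools import product

gamma = 0.9
# Hypothetical MDP: T[a][s] is the next-state distribution when taking
# action a in state s; R[s][a] is the immediate reward.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
R = np.array([[0.0, 1.0], [2.0, 0.5]])

def policy_value(pi):
    # Solve the Bellman equation v = r_pi + gamma * P_pi v exactly.
    P = np.array([T[pi[s], s] for s in range(2)])
    r = np.array([R[s, pi[s]] for s in range(2)])
    return np.linalg.solve(np.eye(2) - gamma * P, r)

values = {pi: policy_value(pi) for pi in product([0, 1], repeat=2)}
# The policy maximizing total value turns out to dominate in *every* state.
best = max(values, key=lambda pi: values[pi].sum())
dominant = all(np.all(values[best] >= v - 1e-9) for v in values.values())
print(best, dominant)
```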
14,392
Why is there always at least one policy that is better than or equal to all other policies?
The existence of an optimal policy is not obvious. To see why, note that the value function provides only a partial ordering over the space of policies. This means: $$\pi' \geq \pi \iff v_{\pi'}(s) \geq v_{\pi}(s), \forall s \in S $$ Since this is only a partial ordering, there could be a case where two policies, $\pi_1$ and $\pi_2$, are not comparable. In other words, there are subsets of the state space, $S_1$ and $S_2$, such that: $$v_{\pi_1}(s) \geq v_{\pi_2}(s), \forall s \in S_1$$ $$v_{\pi_2}(s) \geq v_{\pi_1}(s),\forall s \in S_2$$ In this case, we can't say that one policy is better than the other. But if we are dealing with finite MDPs with bounded value functions, this difficulty never materializes: there is always at least one policy whose value is at least as large as that of every other policy in every state. There is exactly one optimal value function, though there might be multiple optimal policies. For a proof of this, you need to understand the Banach fixed point theorem. For a detailed analysis, please refer.
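The last point — one optimal value function but possibly several optimal policies — is easy to see on a toy example. In the Python sketch below (a made-up MDP of my own), state 1 has two duplicate actions, so two distinct policies are optimal, yet they share a single value function:

```python
import numpy as np
from itertools import product

gamma = 0.9
# Made-up MDP: actions 0 and 1 are identical in state 1 (same reward,
# same transitions), so more than one policy attains the optimum.
T = np.array([[[0.8, 0.2], [0.5, 0.5]],
              [[0.1, 0.9], [0.5, 0.5]]])   # T[a][s] = next-state distribution
R = np.array([[1.0, 0.0], [0.5, 0.5]])     # R[s][a]

def policy_value(pi):
    P = np.array([T[pi[s], s] for s in range(2)])
    r = np.array([R[s, pi[s]] for s in range(2)])
    return np.linalg.solve(np.eye(2) - gamma * P, r)

values = {pi: policy_value(pi) for pi in product([0, 1], repeat=2)}
v_star = np.max(np.array(list(values.values())), axis=0)  # optimal value fn
optimal = [pi for pi, v in values.items() if np.allclose(v, v_star)]
print(optimal)   # several optimal policies, all with value v_star
```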
14,393
Why is there always at least one policy that is better than or equal to all other policies?
$\newcommand{\mc}{\mathcal} \newcommand{\mb}{\mathbb}$ Setting We consider the following setting: Discrete actions Discrete states Bounded rewards Stationary policy Infinite horizon The optimal policy is defined as: $$ \pi^\ast \in \arg \max_\pi V^\pi(s), \forall s \in \mc{S} \tag{1} $$ and the optimal value function is: $$ V^\ast(s) = \max_\pi V^\pi (s), \forall s \in \mc S \tag{2} $$ There can be a set of policies which achieve the maximum. But there is only one optimal value function: $$ V^\ast = V^{\pi^\ast} \tag{3} $$ The question How to prove that there exists at least one $\pi^\ast$ which satisfies (1) simultaneously for all $s \in \mc{S}$ ? Outline of proof Construct the optimal equation to be used as a temporary surrogate definition of the optimal value function, which we will prove in step 2 to be equivalent to the definition via Eq.(2). $$ V^\ast(s) = \max_{a \in \mc A} [ R(s, a) + \gamma \, \sum_{s^\prime \in \mc S} T(s, a, s^\prime) V^\ast(s^\prime)] \tag{4} $$ Derive the equivalence of defining the optimal value function via Eq.(4) and via Eq.(2). (Note in fact we only need the necessity direction in the proof, because the sufficiency is obvious since we constructed Eq.(4) from Eq.(2).) Prove that there is a unique solution to Eq.(4). By step 2, we know that the solution obtained in step 3 is also a solution to Eq.(2), so it is an optimal value function. From an optimal value function, we can recover an optimal policy by choosing the maximizing action in Eq.(4) for each state. Details of the steps 1 Since $V^\ast(s) = V^{\pi^\ast}(s) = \mb E_a [Q^{\pi^\ast}(s, a)]$, we have $V^{\pi^\ast}(s) \le \max_{a \in \mc A} Q^{\pi^\ast} (s, a)$. And if there is any $\tilde{s}$ such that $V^{\pi^\ast}(\tilde{s}) \neq \max_{a \in \mc A} Q^{\pi^\ast} (\tilde{s}, a)$, we can choose a better policy by maximizing $Q^{\ast} (s, a) = Q^{\pi^\ast} (s, a)$ over $a$. 2 (=>) Follows by step 1. (<=) i.e. 
If $\tilde V$ satisfies $\tilde V(s) = \max_{a \in \mc A} [ R(s, a) + \gamma \, \sum_{s^\prime \in \mc S} T(s, a, s^\prime) \tilde V(s^\prime)]$, then $\tilde V(s) = V^\ast(s) = \max_\pi V^\pi(s), \forall s \in \mc S$. Define the optimal Bellman operator as $$ \mc T V(s) = \max_{a \in \mc A} [ R(s, a) + \gamma \, \sum_{s^\prime \in \mc S} T(s, a, s^\prime) V(s^\prime)] \tag{5} $$ So our goal is to prove that if $\tilde V = \mc T \tilde V$, then $\tilde V = V^\ast$. We show this by combining two results, following Puterman[1]: a) If $\tilde V \ge \mc T \tilde V$, then $\tilde V \ge V^\ast$. b) If $\tilde V \le \mc T \tilde V$, then $\tilde V \le V^\ast$. Proof: a) For any $\pi = (d_1, d_2, ...)$, $$ \begin{align} \tilde V &\ge \mc T \tilde V = \max_{d} [ R_d + \gamma \, P_d \tilde V] \\ &\ge R_{d_1} + \gamma \, P_{d_1} \tilde V \\ \end{align} $$ Here $d$ is the decision rule(action profile at specific time), $R_d$ is the vector representation of immediate reward induced from $d$ and $P_d$ is transition matrix induced from $d$. By induction, for any $n$, $$ \tilde V \ge R_{d_1} + \sum_{i=1}^{n-1} \gamma^i P_\pi^i R_{d_{i+1}} + \gamma^n P_\pi^n \tilde V $$ where $P_\pi^j$ represents the $j$-step transition matrix under $\pi$. Since $$ V^\pi = R_{d_1} + \sum_{i=1}^{\infty}\gamma^i P_\pi^i R_{d_{i+1}} $$ we have $$ \tilde V - V^\pi \ge \underbrace{\gamma^n P_\pi^n \tilde V -\sum_{i=n}^{\infty}\gamma^i P_\pi^i R_{d_{i+1}}}_{\rightarrow 0 \ \text{as}\ n\rightarrow \infty} $$ So we have $\tilde V \ge V^\pi$. And since this holds for any $\pi$, we conclude that $$ \tilde V \ge \max_\pi V^\pi = V^\ast $$ b) Follows from step 1. 3 The optimal Bellman operator is a contraction in $L_\infty$ norm, cf. [2]. 
Proof: For any $s$, $$ \begin{align} \left\vert \mc T V_1(s) - \mc T V_2(s) \right\vert &= \left\vert \max_{a \in \mc A} [ R(s, a) + \gamma \, \sum_{s^\prime \in \mc S} T(s, a, s^\prime) V_1(s^\prime)] -\max_{a^\prime \in \mc A} [ R(s, a^\prime) + \gamma \, \sum_{s^\prime \in \mc S} T(s, a^\prime, s^\prime) V_2(s^\prime)]\right\vert \\ &\overset{(*)}{\le} \left\vert \max_{a \in \mc A} [\gamma \, \sum_{s^\prime \in \mc S} T(s, a, s^\prime) (V_1(s^\prime) - V_2(s^\prime))] \right\vert \\ &\le \gamma \Vert V_1 - V_2 \Vert_\infty \end{align} $$ where in (*) we used the fact that $$ \max_a f(a) - \max_{a^\prime} g(a^\prime) \le \max_a [f(a) - g(a)] $$ Thus by the Banach fixed point theorem it follows that $\mc T$ has a unique fixed point. References [1] Puterman, Martin L.. “Markov Decision Processes : Discrete Stochastic Dynamic Programming.” (2016). [2] A. Lazaric. http://researchers.lille.inria.fr/~lazaric/Webpage/MVA-RL_Course14_files/slides-lecture-02-handout.pdf
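The contraction property in step 3, and the resulting unique fixed point, are easy to observe numerically. Here is a Python sketch on a randomly generated finite MDP (all the sizes and random numbers are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, nS, nA = 0.9, 5, 3
T = rng.dirichlet(np.ones(nS), size=(nS, nA))  # T[s, a, :] transition probs
R = rng.standard_normal((nS, nA))              # R[s, a] rewards

def bellman(V):
    # (T V)(s) = max_a [ R(s,a) + gamma * sum_s' T(s,a,s') V(s') ]
    return np.max(R + gamma * T @ V, axis=1)

# gamma-contraction in the sup norm:
V1, V2 = rng.standard_normal(nS), rng.standard_normal(nS)
lhs = np.max(np.abs(bellman(V1) - bellman(V2)))
rhs = gamma * np.max(np.abs(V1 - V2))
print(lhs <= rhs + 1e-12)

# Unique fixed point: iteration converges to the same V* from any start.
Va, Vb = np.zeros(nS), 100.0 * np.ones(nS)
for _ in range(500):
    Va, Vb = bellman(Va), bellman(Vb)
print(np.max(np.abs(Va - Vb)))   # essentially zero
```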
14,394
Why is there always at least one policy that is better than or equal to all other policies?
I spent a bit of time researching this for my master's thesis. Even though this question is a bit old, I've decided to share my findings anyway, since they would have helped me earlier when I ran into this page and didn't think the answers so far gave a full picture. The short story: The question is non-trivial for general state and action spaces. Here are a couple of results: Let $A(s) \subseteq \mathcal{A}$ denote the set of available actions at a state $s \in \mathcal{S}$. Thm: If $\mathcal{S}$ is countable, $\mathcal{A}$ is standard Borel, $A(s)$ is compact for all $s \in \mathcal{S}$, and for every $s \in \mathcal{S}$ the reward function $r(s, \cdot)$ is upper semicontinuous, then there exists a deterministic optimal policy $\pi^*$. Thm: If $\mathcal{S}$ is standard Borel and $A(s)$ is finite for all $s \in \mathcal{S}$, then there exists a deterministic optimal policy $\pi^*$. The long story: Here are some references that I found useful: A good source for an overview, and basis for the short story above: Feinberg: Total Expected Discounted Reward MDPs: Existence of Optimal Policies. The most 'deep' of the results comes, as far as I can see, from this article: Schäl: On dynamic programming: Compactness of the space of policies. Another good source giving its own comprehensive account is Bertsekas & Shreve: Stochastic Optimal Control: The Discrete Time Case, chapters 7, 8 and 9. Cheers.
14,395
Kullback-Leibler divergence - interpretation [duplicate]
Because I compute slightly different values of the KL divergence than reported here, let's start with my attempt at reproducing the graphs of these PDFs: The KL distance from $F$ to $G$ is the expectation, under the probability law $F$, of the difference in logarithms of their PDFs. Let us therefore look closely at the log PDFs. The values near 0 matter a lot, so let's examine them. The next figure plots the log PDFs in the region from $x=0$ to $x=0.10$: Mathematica computes that KL(red, blue) = 0.574461 and KL(red, green) = 0.641924. In the graph it is clear that between 0 and 0.02, approximately, log(green) differs far more from log(red) than does log(blue). Moreover, in this range there is still substantially large probability density for red: its logarithm is greater than -1 (so the density is greater than about 1/2). Take a look at the differences in logarithms. Now the blue curve is the difference log(red) - log(blue) and the green curve is log(red) - log(green). The KL divergences (w.r.t. red) are the expectations (according to the red pdf) of these functions. (Note the change in horizontal scale, which now focuses more closely near 0.) Very roughly, it looks like a typical vertical distance between these curves is around 10 over the interval from 0 to 0.02, while a typical value for the red pdf is about 1/2. Thus, this interval alone should add about 10 * 0.02 /2 = 0.1 to the KL divergences. This just about explains the difference of .067. Yes, it's true that the blue logarithms are further away than the green logs for larger horizontal values, but the differences are not as extreme and the red PDF decays quickly. In brief, extreme differences in the left tails of the blue and green distributions, for values between 0 and 0.02, explain why KL(red, green) exceeds KL(red, blue). Incidentally, KL(blue, red) = 0.454776 and KL(green, red) = 0.254469. 
Code Specify the distributions red = GammaDistribution[1/.85, 1]; green = InverseGaussianDistribution[1, 1/3.]; blue = InverseGaussianDistribution[1, 1/5.]; Compute KL Clear[kl]; (* Numeric integration between specified endpoints. *) kl[pF_, qF_, l_, u_] := Module[{p, q}, p[x_] := PDF[pF, x]; q[x_] := PDF[qF, x]; NIntegrate[p[x] (Log[p[x]] - Log[q[x]]), {x, l, u}, Method -> "LocalAdaptive"] ]; (* Integration over the entire domain. *) kl[pF_, qF_] := Module[{p, q}, p[x_] := PDF[pF, x]; q[x_] := PDF[qF, x]; Integrate[p[x] (Log[p[x]] - Log[q[x]]), {x, 0, \[Infinity]}] ]; kl[red, blue] kl[red, green] kl[blue, red, 0, \[Infinity]] kl[green, red, 0, \[Infinity]] Make the plots Clear[plot]; plot[{f_, u_, r_}] := Plot[Evaluate[f[#, x] & /@ {blue, red, green}], {x, 0, u}, PlotStyle -> {{Thick, Darker[Blue]}, {Thick, Darker[Red]}, {Thick, Darker[Green]}}, PlotRange -> r, Exclusions -> {0}, ImageSize -> 400 ]; Table[ plot[f], {f, {{PDF, 4, {Full, {0, 3}}}, {Log[PDF[##]] &, 0.1, {Full, Automatic}}}} ] // TableForm Plot[{Log[PDF[red, x]] - Log[PDF[blue, x]], Log[PDF[red, x]] - Log[PDF[green, x]]}, {x, 0, 0.04}, PlotRange -> {Full, Automatic}, PlotStyle -> {{Thick, Darker[Blue]}, {Thick, Darker[Green]}}]
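For readers without Mathematica, the KL computation can be sketched in Python with SciPy. Note the parameter mapping is my own translation and should be double-checked: Wolfram's GammaDistribution[α, β] is scipy's gamma(a=α, scale=β), and InverseGaussianDistribution[μ, λ] corresponds to scipy's invgauss(mu=μ/λ, scale=λ).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma, invgauss

red = gamma(a=1 / 0.85, scale=1.0)       # GammaDistribution[1/.85, 1]
green = invgauss(mu=3.0, scale=1 / 3.0)  # InverseGaussianDistribution[1, 1/3]
blue = invgauss(mu=5.0, scale=1 / 5.0)   # InverseGaussianDistribution[1, 1/5]

def kl(p, q):
    # KL(p || q) = E_p[log p(X) - log q(X)], by numerical integration,
    # split at x = 1 to handle the integrable singularity near 0.
    f = lambda x: p.pdf(x) * (p.logpdf(x) - q.logpdf(x))
    return quad(f, 0, 1, limit=200)[0] + quad(f, 1, np.inf, limit=200)[0]

print(kl(red, blue), kl(red, green))   # ~0.5745 and ~0.6419, as in the text
```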
14,396
Kullback-Leibler divergence - interpretation [duplicate]
KL divergence measures how difficult it is to fake one distribution with another one. Assume that you draw an i.i.d. sample of size $n$ from the red distribution and that $n$ is large. It may happen that the empirical distribution of this sample mimicks the blue distribution. This is rare but this may happen... albeit with a probability which is vanishingly small, and which behaves like $\mathrm{e}^{-nH}$. The exponent $H$ is the KL divergence of the blue distribution with respect to the red one. Having said that, I wonder why your KL divergences are ranked in the order you say they are.
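The $\mathrm{e}^{-nH}$ rate can be checked in a simple coin-flip setting (the numbers below are my own illustrative choices, not from the answer): the probability that $n$ fair coin flips show an empirical frequency of exactly $0.6$ decays at the rate given by the KL divergence of Bernoulli(0.6) with respect to Bernoulli(0.5).

```python
import numpy as np
from scipy.stats import binom

n, p_fake, p_true = 2000, 0.6, 0.5
# Exact log-probability that the empirical frequency equals p_fake.
log_prob = binom.logpmf(int(n * p_fake), n, p_true)
rate = -log_prob / n
# KL divergence of Bernoulli(p_fake) w.r.t. Bernoulli(p_true), in nats.
H = (p_fake * np.log(p_fake / p_true)
     + (1 - p_fake) * np.log((1 - p_fake) / (1 - p_true)))
print(rate, H)   # close; the gap shrinks like log(n)/n
```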
14,397
When the Central Limit Theorem and the Law of Large Numbers disagree
The error here is likely in the following fact: convergence in distribution only requires that $F_n(x)$ converge to $F(x)$ at points of continuity of $F(x)$. As the limit distribution is that of a constant random variable, it has a jump discontinuity at $x=1$, hence it is incorrect to conclude that the CDF converges to $F(x)=1$.
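A minimal illustration (my own toy example): the constants $X_n = 1 + 1/n$ converge in distribution to the constant $1$, yet $F_n(1) = P(X_n \le 1) = 0$ for every $n$, while the limiting CDF has $F(1) = 1$ at its jump. Convergence does hold at every continuity point, such as $x = 1.5$.

```python
# CDF of the degenerate variable X_n = 1 + 1/n: a step at 1 + 1/n.
def F_n(x, n):
    return 1.0 if x >= 1 + 1 / n else 0.0

# Limiting CDF: a step at 1.
def F(x):
    return 1.0 if x >= 1 else 0.0

at_jump = [F_n(1.0, n) for n in (1, 10, 1000)]   # all 0, yet F(1) = 1
at_cont = [F_n(1.5, n) for n in (2, 10, 1000)]   # all 1 = F(1.5)
print(at_jump, at_cont)
```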
14,398
When the Central Limit Theorem and the Law of Large Numbers disagree
For iid random variables $X_i$ with $E[X_i]= \operatorname{var}(X_i)=1$ define \begin{align}Z_n &= \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i,\\ Y_n &= \frac{1}{{n}}\sum_{i=1}^n X_i. \end{align} Now, the CLT says that for every fixed real number $z$, $F_{Z_n}(z) \approx \Phi(z-\sqrt{n})$ for large $n$. The OP applies the CLT to evaluate $$\lim_{n\to\infty}P\left(Z_n \leq \sqrt{n}\right) = \lim_{n\to\infty}P\left(Y_n \leq 1\right) = \Phi(0) = \frac 12.$$ As the other answers as well as several of the comments on the OP's question have pointed out, it is the OP's LLN-based evaluation of $\lim_{n\to\infty} P(Y_n \leq 1)$ as $1$ that is suspect. Consider the special case when the iid $X_i$ are discrete random variables taking on values $0$ and $2$ with equal probability $\frac 12$. Now, $\sum_{i=1}^n X_i$ can take on all even integer values in $[0,2n]$, and so when $n$ is odd, $\sum_{i=1}^n X_i$ cannot take on value $n$ and hence $Y_n = \frac 1n \sum_{i=1}^n X_i$ cannot take on value $1$. Furthermore, since the distribution of $Y_n$ is symmetric about $1$, we have that $P(Y_n \leq 1) = F_{Y_n}(1)$ has value $\frac 12$ whenever $n$ is odd. Thus, the sequence of numbers $$P(Y_1 \leq 1), P(Y_2 \leq 1), \ldots, P(Y_n \leq 1), \ldots$$ contains the subsequence $$P(Y_1 \leq 1), P(Y_3 \leq 1), \ldots, P(Y_{2k-1} \leq 1), \ldots$$ in which all the terms have value $\frac 12$. For even $n = 2k$, writing the number of $2$'s as a $\text{Binomial}(2k, \frac 12)$ random variable gives $$P(Y_{2k} \leq 1) = \frac 12 + \frac 12\binom{2k}{k}2^{-2k},$$ which decreases to $\frac 12$ as $k \to \infty$. Hence $\lim_{n\to\infty} P(Y_n \leq 1) = \frac 12$, not $1$, and the claimed convergence of $P(Y_n\leq 1)$ to $1$ via the LLN must be rejected.
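The $\{0, 2\}$ example admits an exact computation: $Y_n \leq 1$ exactly when the number of $2$'s, $K \sim \text{Binomial}(n, \frac 12)$, satisfies $K \leq n/2$. A short sketch tabulating $P(Y_n \leq 1)$ exactly:

```python
from math import comb

def p_mean_at_most_one(n):
    """Exact P(Y_n <= 1) for iid X_i uniform on {0, 2}.
    Y_n <= 1  <=>  K <= n/2, where K = number of 2's ~ Binomial(n, 1/2)."""
    return sum(comb(n, k) for k in range(n // 2 + 1)) / 2 ** n

for n in (1, 2, 3, 4, 10, 11, 100, 101):
    # odd n: exactly 1/2 by symmetry; even n: 1/2 plus a vanishing central-atom excess
    print(n, p_mean_at_most_one(n))
```

The even-$n$ excess is half the probability of the central binomial atom, which shrinks like $1/\sqrt{\pi k}$.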
14,399
When the Central Limit Theorem and the Law of Large Numbers disagree
Your first result is the correct one. Your error occurs in the second part, in the following erroneous statement: $$\lim_{n \rightarrow \infty} F_{\bar{X}_n}(1) = 1.$$ This statement is false (the right-hand-side should be $\tfrac{1}{2}$) and it does not follow from the law of large numbers as asserted. The weak law of large numbers (which you invoke) says that: $$\lim_{n \rightarrow \infty} \mathbb{P} \Big( |\bar{X}_n - 1| \leqslant \varepsilon \Big) = 1 \quad \quad \text{for all } \varepsilon > 0.$$ For all $\varepsilon > 0$ the condition $|\bar{X}_n - 1| \leqslant \varepsilon$ spans some values where $\bar{X}_n \leqslant 1$ and some values where $\bar{X}_n > 1$. Hence, it does not follow from the LLN that $\lim_{n \rightarrow \infty} \mathbb{P} ( \bar{X}_n \leqslant 1 ) = 1$.
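The distinction between the event the weak LLN controls, $|\bar{X}_n - 1| \leqslant \varepsilon$, and the event the OP computed, $\bar{X}_n \leqslant 1$, shows up clearly in simulation. A Monte Carlo sketch, assuming for illustration $X_i \sim \text{Exponential}(1)$ (so that $E[X_i] = \operatorname{var}(X_i) = 1$; the specific distribution is an assumption, not stated in this answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, eps = 400, 10000, 0.15

# reps independent sample means, each of n iid Exponential(1) variables
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

p_within_eps = np.mean(np.abs(means - 1) <= eps)  # what the weak LLN controls: -> 1
p_at_most_1 = np.mean(means <= 1)                 # NOT controlled by the LLN: stays near 1/2

print(p_within_eps, p_at_most_1)
```

The first probability approaches $1$ as $n$ grows, while the second hovers near $\tfrac{1}{2}$, matching the CLT answer.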
14,400
When the Central Limit Theorem and the Law of Large Numbers disagree
Convergence in probability implies convergence in distribution. But... to what distribution? If the limiting distribution has a jump discontinuity then the limits become ambiguous (because multiple values are possible at the discontinuity). The OP wrote: "where $F_{\bar X_n}(\cdot)$ is the distribution function of the sample mean $\bar X_n$, which by the LLN converges in probability (and so also in distribution) to the constant $1$". This is not right, and it is also easy to show that it cannot be right (independently of the apparent disagreement between the CLT and the LLN). The limiting distribution (which can be seen as the limit of a sequence of normal distributions) should be: $$F_{\bar{X}_\infty}(x) = \begin{cases} 0 & \text{for } x<1 \\ 0.5& \text{for } x=1\\ 1 & \text{for } x>1 \end{cases}$$ For this function you have that, for any $\epsilon>0$ and every $x$, the difference $|F_{\bar{X}_n}(x)-F_{\bar{X}_\infty}(x)|<\epsilon$ for sufficiently large $n$. This would fail if $F_{\bar{X}_\infty}(1)=1$ instead of $F_{\bar{X}_\infty}(1)=0.5$. Limit of a normal distribution It may be helpful to explicitly write out the sum used to invoke the law of large numbers; for large $n$ it is approximately $$\bar{X}_n=\frac{1}{n}\sum_{i=1}^n X_i \sim N\left(1,\frac{1}{n}\right). $$ The limit $n\to \infty$ of the density of $\bar{X}_n$ is the Dirac delta function, represented as the limit of normal densities with variance going to zero. Using that expression it is easier to see what is going on under the hood than by using the ready-made CLT and LLN, which obscure the reasoning behind them. 
Convergence in probability The law of large numbers gives you 'convergence in probability': $$\lim_{n \to \infty} P(|\bar{X}_n-1|>\epsilon) =0 $$ for every $\epsilon > 0$. An equivalent statement can be made in terms of the CLT variable: $\lim_{n \to \infty} P\left(\left|\frac{1}{\sqrt{n}}\sum \left( X_i-1 \right)\right|>\sqrt{n}\,\epsilon\right) =0$. It is wrong to state that this implies $$\lim_{n \to \infty} P(|\bar{X}_n-1|>0) =0 $$ This question was also cross-posted early (confusing, yet interesting to see the different discussions/approaches, math vs. stats). The answer by Michael Hardy on the math Stack Exchange deals with it very effectively in terms of the strong law of large numbers (the same principle as the accepted answer from drhab in the cross-posted question and Dilip's answer here). We are almost sure that the sequence $\bar{X}_1, \bar{X}_2, \bar{X}_3, \ldots$ converges to $1$, but this does not mean that $\lim_{n \to \infty} P(\bar{X}_n = 1)$ equals $1$. The dice example in the comments by Tomasz shows this very nicely from a different angle: the mean of a sequence of dice rolls will converge to the mean of the die, yet the probability of being exactly equal to that mean goes to zero. Heaviside step function and Dirac delta function The CDF of $\bar{X}_n$ is (approximately) the following: $$F_{\bar{X}_n}(x) = \frac{1}{2} \left(1 + \operatorname{erf} \frac{x-1}{\sqrt{2/n}} \right)$$ with, in the limit, $\lim_{n \to \infty} F_{\bar{X}_n}(1) = 0.5$ (related to the Heaviside step function, the integral of the Dirac delta function when viewed as the limit of normal distributions). 
I believe that this view intuitively resolves your question regarding 'show that it is wrong', or at least it shows that understanding the cause of this apparent disagreement between the CLT and the LLN is equivalent to understanding the integral of the Dirac delta function, or of a sequence of normal distributions with variance decreasing to zero.
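The pointwise limit of that erf-based CDF can be checked directly; a small sketch evaluating $F_{\bar{X}_n}$ at points below, at, and above the jump:

```python
from math import erf, sqrt

def cdf_xbar(x, n):
    """CDF of the N(1, 1/n) approximation to the sample mean, written via erf."""
    return 0.5 * (1 + erf((x - 1) / sqrt(2 / n)))

for n in (10, 100, 10000):
    # below the jump -> 0, at the jump -> always exactly 1/2, above the jump -> 1
    print(n, cdf_xbar(0.9, n), cdf_xbar(1.0, n), cdf_xbar(1.1, n))
```

At $x = 1$ the value is $\tfrac{1}{2}$ for every $n$ (since $\operatorname{erf}(0)=0$), while away from $1$ the values converge to the Heaviside step.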