Dataset columns: idx (int64, 1 to 56k); question (string, 15 to 155 characters); answer (string, 2 to 29.2k characters); question_cut (string, 15 to 100 characters); answer_cut (string, 2 to 200 characters); conversation (string, 47 to 29.3k characters); conversation_cut (string, 47 to 301 characters).
30,001
What is the meaning of an F value less than 1 in one-way ANOVA?
Your question in the title is an interesting one that crossed my mind today too. I just want to add a correction. The F-ratio is: $$\frac{MS_{treatment}}{MS_{residual}}=\frac{\frac{SS_{treatment}}{t-1}}{\frac{SS_{residual}}{t(r-1)}}$$ What you wrote is $$\frac{E(MS_{treatment})}{E(MS_{residual})}$$ While the first fraction (the observed F ratio) can be less than 1, the second fraction (the ratio of expected mean squares) cannot. That is not a contradiction: the second is a quotient of expectations, not the observed statistic, so an observed F below 1 is perfectly possible.
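For illustration, here is a minimal R sketch (not part of the original answer) showing that the observed F ratio can indeed come out below 1 when there are no true group differences:

    set.seed(1)
    y <- rnorm(30)                     # no true group differences
    g <- factor(rep(1:3, each = 10))   # t = 3 groups, r = 10 per group
    anova(lm(y ~ g))                   # the "F value" column is the observed MS ratio;
                                       # under H0 it lands below 1 quite often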
30,002
What is the meaning of an F value less than 1 in one-way ANOVA?
Note that while values of the F statistic less than 1 can occur by chance when the null hypothesis is true (or nearly true), as others have explained, values close to 0 can indicate violations of the assumptions that ANOVA depends on. Some analysts will look at the area to the left of the statistic in the F-distribution as a p-value for checking assumption violations. Some of the violations that lead to small F-statistics include unequal variances, improper randomization, lack of independence, or just faking the data.
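A hedged sketch of that left-tail check in R (Fstat, df1 and df2 below are placeholder values, not numbers from the answer):

    Fstat <- 0.10; df1 <- 2; df2 <- 27    # example values only
    pf(Fstat, df1, df2)                   # area to the LEFT of the observed statistic
    # A very small left-tail area flags an F statistic suspiciously close to 0.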
30,003
What is the meaning of an F value less than 1 in one-way ANOVA?
The issue here is that hypothesis testing involves a null AND an alternative hypothesis, and therefore the rejection region is determined by both hypotheses. Consider a simpler example. If you are investigating a process that possibly has a MEAN of zero, but could not have a mean of less than zero, then you might be interested in performing the following test \begin{equation} \begin{array}{c} H_{0}: \mu = 0 \\ H_{1}: \mu > 0 \end{array} \nonumber \end{equation} at level $\alpha$. Your rejection region for the null hypothesis lies to the right of zero. It is not impossible for you to get a sample mean that is negative, albeit with a small probability. If you were to get a negative sample mean in your experiment, you would not question the veracity of the experiment. Now consider your question. The reason that the rejection region for the F-statistic is on the right is the alternative hypothesis in the one-way ANOVA. You are testing the hypotheses \begin{equation} \begin{array}{c} H_{0}: \sum \tau_{i}^{2} = 0 \\ H_{1}: \sum \tau_{i}^{2} \ne 0 \end{array} \nonumber \end{equation} (and since $\sum \tau_{i}^{2}$ cannot be negative, the alternative is effectively $\sum \tau_{i}^{2} > 0$). The null hypothesis dictates that you use the central F distribution, and the alternative hypothesis shifts the distribution of the statistic to the right when it is true, which means that all of the Type I error probability must be located on the right. Is it possible for the test statistic to be less than one? When the null hypothesis is true, it is certainly possible; just as in the previous example it was possible for the test statistic to be negative even if the MEAN of the data is zero.
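To make the last point concrete (an addition, not from the original answer): under the null the central F distribution puts a sizable share of its probability below 1. For instance, with 3 groups of 10 observations each:

    pf(1, df1 = 2, df2 = 27)   # roughly 0.62: under H0, F < 1 in about 62% of experiments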
30,004
What is the meaning of an F value less than 1 in one-way ANOVA?
After looking in a folder I hadn't looked in for years (a real folder, not a computer folder) I found this paper, which may be of interest for this question: Voelkle, M. C., Ackerman, P. L., & Wittmann, W. W. (2007). Effect Sizes and F Ratios < 1.0. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 3(1), 35–46. doi:10.1027/1614-2241.3.1.35 The abstract says: Standard statistics texts indicate that the expected value of the $F$ ratio is $1.0$ (more precisely: $N/(N-2)$) in a completely balanced fixed-effects ANOVA, when the null hypothesis is true. Even though some authors suggest that the null hypothesis is rarely true in practice (e.g., Meehl, 1990), $F$ ratios $< 1.0$ are reported quite frequently in the literature. However, standard effect size statistics (e.g., Cohen's $f$) often yield positive values when $F < 1.0$, which appears to create confusion about the meaningfulness of effect size statistics when the null hypothesis may be true. Given the repeated emphasis on reporting effect sizes, it is shown that in the face of $F < 1.0$ it is misleading to only report sample effect size estimates as often recommended. Causes of $F$ ratios $< 1.0$ are reviewed, illustrated by a short simulation study. The calculation and interpretation of corrected and uncorrected effect size statistics under these conditions are discussed. Computing adjusted measures of association strength and incorporating effect size confidence intervals are helpful in an effort to reduce confusion surrounding results when sample sizes are small. Detailed recommendations are directed to authors, journal editors, and reviewers.
30,005
What is the meaning of an F value less than 1 in one-way ANOVA?
A student asked me a similar question today. The short answer is that F is < 1 when there is more variance within groups than between them. The following is an example of this:

    Group 1 values: 25, 50, 75
    Group 2 values: 26, 50, 75
    Group 3 values: 27, 50, 75

There is little difference between the group means:

    Group 1 mean = 50.00
    Group 2 mean = 50.33
    Group 3 mean = 50.67

But the differences of the values within each group from their group mean are relatively large; most scores are roughly 25 points from the mean:

    Group 1 deviations: -25, 0, 25
    Group 2 deviations: -24.33, -0.33, 24.67
    Group 3 deviations: -23.67, -0.67, 24.33

This scenario leads to a large amount of variance within groups (600.55) and little variance between groups (0.33). The result is an F-ratio of 0.00055!
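A quick check of these numbers in R (an addition, not from the original answer):

    y <- c(25, 50, 75,  26, 50, 75,  27, 50, 75)
    g <- factor(rep(1:3, each = 3))
    anova(lm(y ~ g))
    # Mean Sq for g is about 0.33, for Residuals about 600.6,
    # giving F of roughly 0.00056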
30,006
What is the meaning of an F value less than 1 in one-way ANOVA?
If the F value is less than one, the mean square for treatments is smaller than the mean square for error. In that case there is no evidence against the null hypothesis: the differences among the sample means are no larger than would be expected from the within-group variation alone, so the test will not reject at any conventional level.
30,007
Are linear combinations of independent random variables again independent?
Yes, for the content of your question; and No, for the title, in general. Yes: Your $a_1,\dots, a_n$ are just some constant numbers. Then the independence of $X_1,\dots, X_n$ implies that the $a_iX_i$ are also independent. In fact, for any functions $g_1,\dots, g_n$ you would find that $g_1(X_1), g_2(X_2),\dots, g_n(X_n)$ are independent. From independence it follows that $E\prod g_i(X_i) = \prod E g_i(X_i)$. Your calculations are correct. No: Now look at $Y = \sum_{i=1}^n a_iX_i$ and $Z = \sum_{i=1}^n b_iX_i$. The linear combinations $Y$ and $Z$ of the $X_i$ are in general not independent. They are independent when $b_i=0$ for all $i$ where $a_i\neq 0$, and $a_j=0$ for all $j$ with $b_j\neq 0$. The simplest case where this is not fulfilled is $a_i=b_i$ for all $i$: then $Y=Z$. Normally distributed variables $X_i$: When the $X_i$ are normally distributed (with a common variance), the condition "$a_i\neq 0 \implies b_i=0$" can be relaxed. In that case the linear combinations $Y$ and $Z$ are independent whenever $\sum a_ib_i = 0$, i.e., when the vectors $a=(a_1,\dots, a_n)$ and $b=(b_1,\dots,b_n)$ are orthogonal (with unequal variances $\sigma_i^2$, the condition becomes $\sum a_ib_i\sigma_i^2 = 0$). This is a fact that contributes to the popularity of the normal distribution for modelling. One of the consequences of this fact is that the estimates $\bar x$ for the mean and $s^2$ for the variance of normally distributed r.v.s are independent :-)
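A small R illustration of both cases (an addition, not from the original answer): with standard normal $X_1, X_2$, the orthogonal combinations $X_1+X_2$ and $X_1-X_2$ are independent, while taking $a=b$ gives the degenerate case $Y=Z$:

    set.seed(42)
    x1 <- rnorm(1e5); x2 <- rnorm(1e5)
    cor(x1 + x2, x1 - x2)   # near 0; for normals with equal variance, truly independent
    cor(x1 + x2, x1 + x2)   # exactly 1: the a = b case, where Y = Z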
30,008
Are linear combinations of independent random variables again independent?
If $X_1,...,X_n$ are mutually independent then $a_1 X_1,...,a_n X_n$ are also mutually independent (i.e., using scalar multiples does not get rid of independence). However, the quantity $Y_n = \sum a_i X_i$ is typically going to be dependent on $X_1,...,X_n$. This means that you can indeed express the MGF of $Y_n$ the way you write it in your post, but if you were to look at the relationship between $Y_n$ and any of the $X_i$ values, you would have to deal with the dependence between these quantities.
30,009
Are linear combinations of independent random variables again independent?
I appreciate the answers here. If we assume that the random variables $X_1,X_2,\ldots,X_n$ are independent, then the joint CDF factors: $F_{X_1,\ldots,X_n}(x_1,\ldots,x_n)=F_{X_1}(x_1)F_{X_2}(x_2)\cdots F_{X_n}(x_n)$. The same argument applies to the scaled variables $a_1X_1,\ldots,a_nX_n$ (taking $a_j>0$ for simplicity): $F_{a_1X_1,\ldots,a_nX_n}(c_1,\ldots,c_n)=\mathbb{P}(a_1X_1\leq c_1,\ldots,a_nX_n\leq c_n)=\mathbb{P}(X_1\leq \frac{c_1}{a_1},\ldots,X_n\leq \frac{c_n}{a_n})=F_{X_1}(\frac{c_1}{a_1})\cdots F_{X_n}(\frac{c_n}{a_n})$, which again factors by our assumption. So the scaled variables are independent as well, and the MGF of $Y_n=\sum_{j=1}^{n}a_jX_j$ factors into the product of the individual MGFs.
30,010
How do you draw structural equation/MPLUS models?
I use OpenMx for SEM modeling where I simply use the omxGraphViz function to return a dotfile. I haven't found it too inflexible -- the default output looks pretty good and though I've rarely needed to modify the dotfile, it's not hard to do. Update By the way, Graphviz can output SVG files, which can be imported into Inkscape, giving you the best of both worlds. :)
30,011
How do you draw structural equation/MPLUS models?
Onyx is a free program for drawing and estimating Structural Equation Models. It can import/export models from/to OpenMx; with limitations, also to Mplus, and (soon) to lavaan. Export to bitmaps (JPEG) and LaTeX vector formats is possible. Onyx can be downloaded here: http://onyx.brandmaier.de/
30,012
How do you draw structural equation/MPLUS models?
I use the psych R package for CFA and John Fox's sem package with simple SEM. Note that the graphical backend is graphviz. I don't remember if the lavaan package provides similar or better facilities. Otherwise, the Mx software for genetic modeling features a graphical interface in its Windows flavour, and you can export the model with path coefficients.
30,013
How do you draw structural equation/MPLUS models?
I am currently developing the semPlot package for R, which is aimed at visualizing models and parameter estimates for SEM models from various packages, including Mplus. Its first version is on CRAN. It has a few bugs, though, which have mostly been solved in the developmental version on GitHub (https://github.com/SachaEpskamp/semPlot). For some examples, see: http://sachaepskamp.com/semPlot.
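For context, a minimal sketch of how the package is typically used with a lavaan model (an addition, not from the original answer; function and argument names are from memory of the packages and should be checked against their current documentation):

    library(lavaan)
    library(semPlot)
    # one-factor CFA on lavaan's built-in Holzinger-Swineford data
    model <- ' visual =~ x1 + x2 + x3 '
    fit <- cfa(model, data = HolzingerSwineford1939)
    semPaths(fit, what = "std")   # path diagram with standardized estimates on the edges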
30,014
How do you draw structural equation/MPLUS models?
I have worked with graphviz, which is also the graphics engine behind R's sem package (my understanding is that John Fox designed the syntax to be as closely compatible with graphviz as possible, so it would be easy to convert one syntax to another). It gets cumbersome quite quickly, and these days I draw my SEM path diagrams in Dia.
30,015
How do you draw structural equation/MPLUS models?
I answered another question on the site, Software for drawing Bayesian networks (graphical models), suggesting the TikZ library in $\LaTeX$. One of the nice properties of the TikZ code for drawing these models is that the RAM path notation is functionally similar to how you define nodes and edges in TikZ. It is not as nice if you want to automatically draw models you have already estimated in MPLUS or whatever (as would be the case for some of the R programs), but in my admittedly brief attempt at making some of the graphs in R or graphviz, I had a much easier time creating what I wanted in TikZ. As another potential solution in R, the qgraph library has some nice examples.
30,016
How do you draw structural equation/MPLUS models?
I used LISREL, AMOS, and Mplus before, but now I use only R. In R, one can do almost every step of fitting a SEM to data, from exploring patterns to fitting the model and improving it. Recently (2012), many new and updated R packages have appeared that allow us to fit SEMs intuitively. Moreover, R is free and open-source software. Here is a review of using R to fit SEMs, which is still being updated: http://pairach.com/2011/08/13/r-packages-for-structural-equation-model/
30,017
How do you draw structural equation/MPLUS models?
I would recommend trying yEd, http://www.yworks.com/en/products_yed_about.html. It is a very versatile program and I've used it to draw path diagrams, flowcharts, timelines, etc. It helps you get figures aligned, equal distances between boxes, and so on. Give it a try!
30,018
How do you draw structural equation/MPLUS models?
I think OmniGraffle is the best for drawing (only). It is fantastic! Far easier than any other program I have seen, and it produces beautiful output.
30,019
How do you draw structural equation/MPLUS models?
THANK-YOU!! I tried a few of these, but the free software Dia is all I need to draw my structural equation model (4 latent variables). I viewed a few YouTube tutorials and went to the wiki as needed (https://wiki.gnome.org/Apps/Dia/Documentation). I did this in an evening: in about 3 hours I had my full model developed and edited.
30,020
Given two samples that have the same mean, standard deviation, and N: are the values in each sample identical?
Given two samples that have the same mean, standard deviation, and N: are the values in each sample identical? In general, not unless N=2 in both samples. If N is larger than 2, they can differ. You can see this simply by trying it with some simple cases. Perhaps the easiest case is to take an asymmetric sample of size $N=3$ and flip it around its mean ($2,3$ and $7$ have a mean of $4$; if you take a new sample of $6,5$ and $1$, respectively, it has the same mean and the same magnitude of deviations from the mean as the original sample, so it will have the same variance and hence standard deviation). For another example, consider these three samples of size three that have the same standard deviation:

Set A: $-1, 0, 1$
Set B: $-a, -a, 2a$, where $a = \sqrt{\frac13}$ (i.e. approximately -0.57735, -0.57735, 1.1547)
Set C: $-2b, -b, 3b$, where $b = \sqrt{\frac17}$ (i.e. approximately -0.7559289, -0.3779645, 1.1338934)

These all have mean 0 and sd 1. You can make any other mean and sd from these by multiplying by the desired standard deviation and then adding the desired mean. If not, are there any restrictions that would need to be imposed to ensure that the values of the two samples would be identical? Sure, you need additional restrictions that reduce the available degrees of freedom for the data down to 0. These restrictions might take many forms, e.g. specifying skewness, or the median, or the sample maximum, etc. Not all additional restrictions will always work to reduce the free dimensions by one (with some sets of existing restrictions, some additional restrictions may be redundant), but that's usually what it takes.
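A quick numerical check of the three sets in R (an addition, not from the original answer):

    a <- sqrt(1/3); b <- sqrt(1/7)
    sets <- list(A = c(-1, 0, 1), B = c(-a, -a, 2*a), C = c(-2*b, -b, 3*b))
    sapply(sets, mean)   # all 0
    sapply(sets, sd)     # all 1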
30,021
Given two samples that have the same mean, standard deviation, and N: are the values in each sample identical?
No. Many data sets can yield the same mean, SD and n. The figure referred to here is Figure 1 from: Weissgerber, T.L., Milic, N.M., Winham, S.J., and Garovic, V.D. (2015). Beyond bar and line graphs: time for a new data presentation paradigm. PLoS Biol 13: e1002128. In that figure, the three data sets on the left of each panel all share the same mean, SD and n, and so do the three data sets on the right of each panel.
30,022
Is the value of a probability density function for a given input a point, a range, or both?
The citation is true. When you plug $x=0$ to the PDF function, you do NOT get the probability of taking this particular value. The resulting number is probability density which is not a probability. The probability of taking exactly $x=0$ is zero (consider the infinite number of similarly-likely values in the tiny interval $x\in[0,10^{-100}]$). To further convince yourself that this $\varphi(x)$ cannot be a probability, consider decreasing the standard deviation of your normal distribution from $\sigma = 1$ to $\sigma = \frac{1}{100}$. Now, $\varphi(0)=\frac{100}{\sqrt{2\pi}}$ - much more than one. Not a probability.
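The same point in R (an addition, not from the original answer): the density at a point can exceed 1, while probabilities come from integrating the density over an interval:

    dnorm(0, mean = 0, sd = 1/100)                        # about 39.9 -- not a probability
    integrate(dnorm, -0.01, 0.01, mean = 0, sd = 1/100)   # about 0.68 -- an actual probability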
30,023
Is the value of a probability density function for a given input a point, a range, or both?
Elaborating a bit on Trisoloriansunscreen's answer: it's very much true that you only have a probability density function. I'd like to draw an analogy for you. Imagine you have a 3D object, say some complex spaceship, and you know the mass density at every point. For example, some parts of the spaceship might contain water, which has a mass density of $997 \frac{\text{g}}{\text{l}}$. Does this already tell you anything about the mass of the whole spaceship? No, it does not! Precisely because you only know this value at a specific point. You've got no information on how much water there actually is. It might be $1\ \text{ml}$ or $1\ \text{l}$. Now suppose you know the amount of water, let's say $2\ \text{l}$. By simple multiplication, $997 \frac{\text{g}}{\text{l}} \cdot 2\ \text{l}$, you get roughly $1994\ \text{g}$. I would like to make the point that you just did integration in disguise! Picture the mass density plotted against the amount of water: the mass you calculated is just the shaded rectangular area under that plot. This was only doable as a simple multiplication because the mass density was constant over the amount of water considered, so the area is a rectangle. What if you had mixed forms of water, e.g. some gaseous, some liquid, some at varying temperatures, and so on? Then the density curve would vary, and for computing the mass you would need to integrate that mass density function over the amount of water. Do you see the parallel to probability density functions now? To get an actual probability (cf. mass) you need to integrate the probability density (cf. mass density) over some domain.
30,024
How to calculate the p.value of an odds ratio in R?
You can use Fisher's exact test, which inputs a contingency table and outputs a p-value, with a null hypothesis that the odds ratio is 1 and an alternative hypothesis that the odds ratio is not equal to 1.

    (tab <- matrix(c(38, 25, 162, 75), nrow=2))
    #      [,1] [,2]
    # [1,]   38  162
    # [2,]   25   75
    fisher.test(tab)
    #
    #         Fisher's Exact Test for Count Data
    #
    # data:  tab
    # p-value = 0.2329
    # alternative hypothesis: true odds ratio is not equal to 1
    # 95 percent confidence interval:
    #  0.3827433 1.3116294
    # sample estimates:
    # odds ratio
    #  0.7045301

In this case the p-value is 0.23.
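If you only need the number itself, the test object can be queried directly (an addition to the answer above):

    fisher.test(tab)$p.value   # 0.2329, as printed above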
30,025
How to calculate the p.value of an odds ratio in R?
Another way to do it (other than Fisher's exact test) is to put the values into a binomial GLM:

    d <- data.frame(g=factor(1:2), s=c(25,75), f=c(38,162))
    g <- glm(s/(s+f) ~ g, weights=s+f, data=d, family="binomial")
    coef(summary(g))["g2", c("Estimate","Pr(>|z|)")]
    ##   Estimate   Pr(>|z|)
    ## -0.3513979  0.2303337

To get the likelihood ratio test (slightly more accurate than the Wald $p$-value shown above), do

    anova(g, test="Chisq")

which gives

    ##      Df Deviance Resid. Df Resid. Dev Pr(>Chi)
    ## NULL                    1     1.4178
    ## g     1   1.4178         0     0.0000   0.2338

(LRT $p=0.2338 \approx$ Wald $p=0.2303337 \approx$ Fisher $p=0.2329$ in this case because the sample is fairly large)
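To report the result on the odds-ratio scale (an addition, not from the original answer), exponentiate the coefficient:

    exp(coef(g)["g2"])   # about 0.70, in line with the Fisher-test estimate above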
30,026
How to calculate the p.value of an odds ratio in R?
It's better to generalize the solution and use the likelihood ratio $\chi^2$ test from a statistical model such as the logistic model. The LR test provides fairly accurate $P$-values. This also handles cases where you need to test more than one parameter, e.g., 3-group problems, continuous effects that are nonlinear, etc. The LR test for the overall model (which is all that's needed in this example since there are no adjustment variables) may be easily obtained in base R or using the rms package, e.g.

    f <- lrm(y ~ groups, weights=freqs)
    f   # prints LR chi-sq, d.f., P, many other quantities

Here the nested models are this model and an intercept-only model.
30,027
How to calculate the p.value of an odds ratio in R?
An even simpler approach is to fit a Poisson regression directly:

    library(tidyverse)
    library(broom)
    d <- data.frame(g=factor(1:2), s=c(25,75), f=c(38,162))
    d_long <- pivot_longer(d, cols = 2:3)
    tidy(glm(value ~ name*g, data=d_long, family = 'poisson'))

Some explanations. The Poisson (log-linear) model models the expected cell count $\mu_i$ as \begin{equation} \ln(\mu_{i}) = \mathbf{X_{i}} \beta \end{equation} i.e. the log of the expected count is a simple linear combination of the predictors. The logistic model instead works on the log-odds ("logit") scale, \begin{equation} \text{logit}(\pi_{i}) = \ln \left( \frac{\pi_{i}}{1 - \pi_{i}} \right) = \mathbf{X_{i}} \beta, \end{equation} but the two are linked: in the log-linear model fitted to the contingency table, the interaction coefficient equals the log odds ratio of the logistic model. You can check this equivalence with a simple simulation. Start with the linear model \begin{equation} y_{i} = 2u_{i} + \epsilon_{i} \end{equation}

    set.seed(123)
    n <- 100
    u <- rbinom(n, 1, 0.2)
    y <- u*2 + rnorm(n, 0, 0.5)
    y_prob <- plogis(y)                  # transform log(odds) into a probability
    y_bin  <- rbinom(n, 1, prob = y_prob)

You can model the data directly with a logistic regression

    glm(y_bin ~ u, family = "binomial")

    Coefficients:
    (Intercept)            u
       -0.09764      2.93085

This is the log odds ratio, which we can also retrieve from a table

    data.frame(table(u, y_bin)) %>% mutate(p = Freq/sum(Freq))

$$ \begin{array}{rllrr} \hline & u & y\_bin & Freq & p \\ \hline 1 & 0 & 0 & 43 & 0.43 \\ 2 & 1 & 0 & 1 & 0.01 \\ 3 & 0 & 1 & 39 & 0.39 \\ 4 & 1 & 1 & 17 & 0.17 \\ \hline \end{array} $$

The coefficient of your logistic regression is the log odds ratio of this table: \begin{equation} \ln \left( \frac{\frac{0.17}{0.39}}{\frac{0.01}{0.43}} \right) = 2.930852 \end{equation}

    log((0.17/0.39) / (0.01/0.43))
    ## 2.930852

This is exactly the same as modelling the table of counts directly with a Poisson regression. Just prepare the table,

    df_poiss <- data.frame(table(u, y_bin))

then run the Poisson model; the interaction gives the log odds ratio between the outcome and the predictor u:

    glm(Freq ~ y_bin*u, data = df_poiss, family = "poisson")

    Coefficients:
    (Intercept)      y_bin1          u1   y_bin1:u1
        3.76120    -0.09764    -3.76120     2.93085

One caveat: you can always aggregate dummy variables into counts for the purpose of using a Poisson model, but you often cannot go from aggregated counts back to the individual-level dummies needed for a logistic regression. A final note: if you look closely at the two equations, exponentiating the logistic model (the logit scale) gives you odds, while exponentiating the Poisson model gives you expected counts, so ratios of its main-effect coefficients are rate (risk) ratios rather than odds ratios. However, the interaction term in the Poisson model is a log odds ratio, the same quantity as in the logistic model.
30,028
Mathematica's random number generator deviating from binomial probability?
The mishap is the use of strict less than. With ten tosses, the only way to get a proportion-of-heads outcome strictly between 0.4 and 0.6 is if you get exactly 5 heads. That has a probability of about 0.246 ($\binom{10}{5}\left(\tfrac{1}{2}\right)^{10}\approx 0.246$), which is about what your simulations (correctly) give. If you include 0.4 and 0.6 in your limits (i.e. 4, 5 or 6 heads in 10 tosses), the result has a probability of about 0.656, much as you expected. Your first thought should not be a problem with the random number generator. That kind of problem would have been obvious in a heavily used package like Mathematica long before now.
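A quick numerical check of the two probabilities quoted above, written as a small R sketch (R is used here only because it appears elsewhere in this document; the arithmetic is the same in any language):

dbinom(5, size = 10, prob = 0.5)          # strictly between 0.4 and 0.6: only 5 heads, ~0.2461
sum(dbinom(4:6, size = 10, prob = 0.5))   # 0.4 to 0.6 inclusive: 4, 5 or 6 heads, ~0.6563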
Mathematica's random number generator deviating from binomial probability?
The mishap is the use of strict less than. With ten tosses, the only way to get a proportion-of-heads outcome strictly between 0.4 and 0.6 is if you get exactly 5 heads. That has a probability of abou
Mathematica's random number generator deviating from binomial probability? The mishap is the use of strict less than. With ten tosses, the only way to get a proportion-of-heads outcome strictly between 0.4 and 0.6 is if you get exactly 5 heads. That has a probability of about 0.246 (${{_{10}}\choose{^5}}(\frac{_1}{^2})^{10}\approx 0.246$), which is about what your simulations (correctly) give. If you include 0.4 and 0.6 in your limits, (i.e. 4, 5 or 6 heads in 10 tosses) the result has a probability of about 0.656, much as you expected. Your first thought should not be a problem with the random number generator. That kind of problem would have been obvious in a heavily used package like Mathematica long before now.
Mathematica's random number generator deviating from binomial probability? The mishap is the use of strict less than. With ten tosses, the only way to get a proportion-of-heads outcome strictly between 0.4 and 0.6 is if you get exactly 5 heads. That has a probability of abou
30,029
Mathematica's random number generator deviating from binomial probability?
Some comments about the code you wrote: You defined experiment[n_] but never used it, instead repeating its definition in trialheadcount[n_]. experiment[n_] could be much more efficiently programmed (without using the built-in command BinomialDistribution) as Total[RandomInteger[{0,1},n]]/n, and this would also make X unnecessary. Counting the number of cases where experiment[n_] is strictly between 0.4 and 0.6 is more efficiently accomplished by writing Length[Select[Table[experiment[10],{10^6}], 0.4 < # < 0.6 &]]. But, for the actual question itself, as Glen_b points out, the binomial distribution is discrete. Out of 10 coin tosses with $x$ observed heads, the probability that the sample proportion of heads $\hat p = x/10$ is strictly between 0.4 and 0.6 is actually just the case $x = 5$; i.e., $$\Pr[X = 5] = \binom{10}{5} (0.5)^5 (1-0.5)^5 \approx 0.246094.$$ Whereas, if you were to calculate the probability that the sample proportion is between 0.4 and 0.6 inclusive, that would be $$\Pr[4 \le X \le 6] = \sum_{x=4}^6 \binom{10}{x} (0.5)^x (1-0.5)^{10-x} = \frac{672}{1024} \approx 0.65625.$$ Therefore, you need only modify your code to use 0.4 <= # <= 0.6 instead. But of course, we could also write Length[Select[RandomVariate[BinomialDistribution[10,1/2],{10^6}], 4 <= # <= 6 &]] This command is approximately 9.6 times faster than your original code. I imagine someone even more proficient than I am at Mathematica could speed it up even further.
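For comparison, here is a hedged sketch of the same million-trial Monte Carlo check in R (the exact figures will vary with the seed, but the proportions should land near the exact values above):

set.seed(1)
heads <- rbinom(1e6, size = 10, prob = 0.5)   # number of heads in each of 10^6 experiments
mean(heads > 4 & heads < 6)                   # strict bounds: only 5 heads, ~0.246
mean(heads >= 4 & heads <= 6)                 # inclusive bounds: 4, 5 or 6 heads, ~0.656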
Mathematica's random number generator deviating from binomial probability?
Some comments about the code you wrote: You defined experiment[n_] but never used it, instead repeating its definition in trialheadcount[n_]. experiment[n_] could be much more efficiently programmed
Mathematica's random number generator deviating from binomial probability? Some comments about the code you wrote: You defined experiment[n_] but never used it, instead repeating its definition in trialheadcount[n_]. experiment[n_] could be much more efficiently programmed (without using the built-in command BinomialDistribution) as Total[RandomInteger[{0,1},n]/n and this would also make X unnecessary. Counting the number of cases where experiment[n_] is strictly between 0.4 and 0.6 is more efficiently accomplished by writing Length[Select[Table[experiment[10],{10^6}], 0.4 < # < 0.6 &]]. But, for the actual question itself, as Glen_b points out, the binomial distribution is discrete. Out of 10 coin tosses with $x$ observed heads, the probability that the sample proportion of heads $\hat p = x/10$ is strictly between 0.4 and 0.6 is actually just the case $x = 5$; i.e., $$\Pr[X = 5] = \binom{10}{5} (0.5)^5 (1-0.5)^5 \approx 0.246094.$$ Whereas, if you were to calculate the probability that the sample proportion is between 0.4 and 0.6 inclusive, that would be $$\Pr[4 \le X \le 6] = \sum_{x=4}^6 \binom{10}{x} (0.5)^x (1-0.5)^{10-x} = \frac{672}{1024} \approx 0.65625.$$ Therefore, you need only modify your code to use 0.4 <= # <= 0.6 instead. But of course, we could also write Length[Select[RandomVariate[BinomialDistribution[10,1/2],{10^6}], 4 <= # <= 6 &]] This command is approximately 9.6 times faster than your original code. I imagine someone even more proficient than I am at Mathematica could speed it up even further.
Mathematica's random number generator deviating from binomial probability? Some comments about the code you wrote: You defined experiment[n_] but never used it, instead repeating its definition in trialheadcount[n_]. experiment[n_] could be much more efficiently programmed
30,030
Mathematica's random number generator deviating from binomial probability?
Doing Probability Experiments in Mathematica Mathematica offers a very comfortable framework to work with probabilities and distributions and -- while the main issue of appropriate limits has been addressed -- I would like to use this question to make this clearer and maybe useful as a reference. Let's simply make the experiments repeatable and define some plot options to fit our taste: SeedRandom["Repeatable_151115"]; $PlotTheme = "Detailed"; SetOptions[Plot, Filling -> Axis]; SetOptions[DiscretePlot, ExtentSize -> Scaled[0.5], PlotMarkers -> "Point"]; Working with parametric distributions We can now define the asymptotical distribution for one event which is the proportion $\pi$ of heads in $n$ throws of a (fair) coin: distProportionTenCoinThrows = With[ { n = 10, (* number of coin throws *) p = 1/2 (* fair coin probability of head*) }, (* derive the distribution for the proportion of heads *) TransformedDistribution[ x/n, x \[Distributed] BinomialDistribution[ n, p ] ] ]; With[ { pr = PlotRange -> {{0, 1}, {0, 0.25}} }, theoreticalPlot = DiscretePlot[ Evaluate @ PDF[ distProportionTenCoinThrows, p ], {p, 0, 1, 0.1}, pr ]; (* show plot with colored range *) Show @ { theoreticalPlot, DiscretePlot[ Evaluate @ PDF[ distProportionTenCoinThrows, p ], {p, 0.4, 0.6, 0.1}, pr, FillingStyle -> Red, PlotLegends -> None ] } ] Which gives us the plot of the discrete distribution of proportions: We can use the distribution immediately to calculate probabilities for $Pr[\,0.4 \leq \pi \leq 0.6\, |\,\pi \sim B(10,\frac{1}{2})]$ and $Pr[\,0.4 < \pi < 0.6\, |\,\pi \sim B(10,\frac{1}{2})]$: { Probability[ 0.4 <= p <= 0.6, p \[Distributed] distProportionTenCoinThrows ], Probability[ 0.4 < p < 0.6, p \[Distributed] distProportionTenCoinThrows ] } // N {0.65625, 0.246094} Doing Monte Carlo Experiments We can use the distribution for one event to repeatedly sample from it (Monte Carlo). distProportionsOneMillionCoinThrows = With[ { sampleSize = 1000000 }, EmpiricalDistribution[ RandomVariate[ distProportionTenCoinThrows, sampleSize ] ] ]; empiricalPlot = DiscretePlot[ Evaluate@PDF[ distProportionsOneMillionCoinThrows, p ], {p, 0, 1, 0.1}, PlotRange -> {{0, 1}, {0, 0.25}}, ExtentSize -> None, PlotLegends -> None, PlotStyle -> Red ] Comparing this with the theoretical/asymptotical distribution shows that everything pretty much fits in: Show @ { theoreticalPlot, empiricalPlot }
Mathematica's random number generator deviating from binomial probability?
Doing Probability Experiments in Mathematica Mathematica offers a very comfortable framework to work with probabilities and distributions and -- while the main issue of appropriate limits has been add
Mathematica's random number generator deviating from binomial probability? Doing Probability Experiments in Mathematica Mathematica offers a very comfortable framework to work with probabilities and distributions and -- while the main issue of appropriate limits has been addressed -- I would like to use this question to make this clearer and maybe useful as a reference. Let's simply make the experiments repeatable and define some plot options to fit our taste: SeedRandom["Repeatable_151115"]; $PlotTheme = "Detailed"; SetOptions[Plot, Filling -> Axis]; SetOptions[DiscretePlot, ExtentSize -> Scaled[0.5], PlotMarkers -> "Point"]; Working with parametric distributions We can now define the asymptotical distribution for one event which is the proportion $\pi$ of heads in $n$ throws of a (fair) coin: distProportionTenCoinThrows = With[ { n = 10, (* number of coin throws *) p = 1/2 (* fair coin probability of head*) }, (* derive the distribution for the proportion of heads *) TransformedDistribution[ x/n, x \[Distributed] BinomialDistribution[ n, p ] ]; With[ { pr = PlotRange -> {{0, 1}, {0, 0.25}} }, theoreticalPlot = DiscretePlot[ Evaluate @ PDF[ distProportionTenCoinThrows, p ], {p, 0, 1, 0.1}, pr ]; (* show plot with colored range *) Show @ { theoreticalPlot, DiscretePlot[ Evaluate @ PDF[ distProportionTenCoinThrows, p ], {p, 0.4, 0.6, 0.1}, pr, FillingStyle -> Red, PlotLegends -> None ] } ] Which gives us the plot of the discrete distribution of proportions: We can use the distribution immediately to calculate probabilities for $Pr[\,0.4 \leq \pi \leq 0.6\, |\,\pi \sim B(10,\frac{1}{2})]$ and $Pr[\,0.4 < \pi < 0.6\, |\,\pi \sim B(10,\frac{1}{2})]$: { Probability[ 0.4 <= p <= 0.6, p \[Distributed] distProportionTenCoinThrows ], Probability[ 0.4 < p < 0.6, p \[Distributed] distProportionTenCoinThrows ] } // N {0.65625, 0.246094} Doing Monte Carlo Experiments We can use the distribution for one event to repeatedly sample from it (Monte Carlo). distProportionsOneMillionCoinThrows = With[ { sampleSize = 1000000 }, EmpiricalDistribution[ RandomVariate[ distProportionTenCoinThrows, sampleSize ] ] ]; empiricalPlot = DiscretePlot[ Evaluate@PDF[ distProportionsOneMillionCoinThrows, p ], {p, 0, 1, 0.1}, PlotRange -> {{0, 1}, {0, 0.25}} , ExtentSize -> None, PlotLegends -> None, PlotStyle -> Red ] ] Comparing this with the theoretical/asymptotical distribution shows that everthing pretty much fits in: Show @ { theoreticalPlot, empiricalPlot }
Mathematica's random number generator deviating from binomial probability? Doing Probability Experiments in Mathematica Mathematica offers a very comfortable framework to work with probabilities and distributions and -- while the main issue of appropriate limits has been add
30,031
How to interpret standardized regression coefficients and p-values in multiple regression?
For the standard linear regression model the absolute value of the coefficient estimates and the p-value are not related in the way you describe. It is very possible to have absolutely large coefficients which are insignificant and absolutely small coefficients which are very significant. What you're missing in your interpretation is the effect of the coefficient estimates' standard errors. The coefficients R reports (let's call them $b_1,b_2,b_3,...,b_k$) are the best linear unbiased estimators of the true parameters $\beta_1,\beta_2,\beta_3,...,\beta_k$ in that they minimize the sum of squared errors; formally: $$ \{b_1,b_2,...,b_k\} = \underset{\alpha}{\operatorname{argmin}}\left\{ \sum_{i=1}^{n}(y_i-\alpha_1x_{i,1}- \dots -\alpha_kx_{i,k})^2\right\} $$ The p-value for the $i^{th}$ coefficient which R is reporting is the result of the following hypothesis test: $H_0: \beta_i = 0$ $H_A: \beta_i \neq 0$ Assuming the regression is properly specified, it can be shown, with the central limit theorem, that each $b_i$ is an (approximately) normally distributed random variable with mean $\beta_i$ and some standard deviation (also called standard error) $\sigma_i$. This is because the $b$'s are estimated with a random sample so they too are random variables (roughly speaking). What determines the $i^{th}$ p-value is where 0 "lands" in the normal distribution $N(b_i,\sigma_i^2)$ centered at the estimate (technically the test is done using a t-distribution...but the difference is not so important for addressing your question). If zero is in the tails of $N(b_i,\sigma_i^2)$ the p-value is low; if it's more in the middle the p-value is high. So given two estimates $b_i$ and $b_j$ where $b_i$ is "super far away" from zero and $b_j$ is "super close to" zero, the p-value of $b_i$ would be lower than that of $b_j$ assuming $\sigma_i$=$\sigma_j$. The part you are missing in your interpretation is that $\sigma_i$ and $\sigma_j$ can be very different. Essentially, if $b_i$ is "huge" but $\sigma_i$ is also "huge" you see that you can get a high p-value. Conversely, for "small" $b_i$ and "super small" $\sigma_i$, you see you can get a small p-value. I hope that helps!
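A minimal simulation sketch of this point (all numbers are invented): near-collinearity between x2 and x3 inflates their standard errors, so their coefficients can be larger in magnitude than x1's and still be far from significant, while the small but precisely estimated x1 coefficient is significant.

set.seed(1)
n  <- 300
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- x2 + rnorm(n, sd = 0.05)   # x3 is almost a copy of x2 -> huge variance inflation
y  <- 0.2 * x1 + 0.5 * x2 + rnorm(n)
round(coef(summary(lm(y ~ x1 + x2 + x3))), 3)
# Typically the x2 and x3 standard errors come out roughly 20 times larger than x1's,
# so even larger point estimates for them carry much bigger p-values.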
How to interpret standardized regression coefficients and p-values in multiple regression?
For the standard linear regression model the absolute value of the coefficient estimates and the p-value are not related in the way you describe. It is very possible to have absolutely large coeffici
How to interpret standardized regression coefficients and p-values in multiple regression? For the standard linear regression model the absolute value of the coefficient estimates and the p-value are not related in the way you describe. It is very possible to have absolutely large coefficients which are insignificant and absolutely small coefficients which are very significant. What your missing in your interpretation is the effect of the coefficient estimate standard errors. The coefficients R reports (lets call them $b_1,b_2,b_3,...,b_k$) are the best linear unbiased estimators of the true parameters $\beta_1,\beta_2,\beta_3,...,\beta_k$ in that they minimize the sum of squared error or formally: $$ \{b_1,b_2,...,b_k\} = {\textrm{argmin} \atop \alpha}\left\{ \sum_{i=1}^{n}(y_i-\alpha_1x_{i,1}-. . .-\alpha_kx_{i,k})^2]\right\} $$ The p-value for the $i^{th}$ coefficient which R is reporting is the result of the following hypothesis test: $H_0: \beta_i = 0$ $H_A: \beta_i \neq 0$ Assuming the regression is properly specified, it can be shown, with the central limit theorem, that each $b_i$ is a normally distributed random variable with mean $\beta_i$ and some standard deviation (also called standard error) $\sigma_i$. This is because the $b$'s are estimated with a random sample so they too are random variables (roughly speaking). What determines the $i^{th}$ p-value is where 0 "lands" in the normal distribution $N(\beta_i,\sigma_i^2)$ (technically the test is done using a t-distribution...but the difference is not so important for addressing your question). If zero is in the tails of $N(\beta_i,\sigma_i^2)$ the p-value is low, if it's more in the middle the p-value is high. So given two estimates $b_i$ and $b_j$ where $b_i$ is "super far away" from zero and $b_j$ is "super close to" zero, the p-value of $b_i$ would be lower than $b_j$ assuming $\sigma_i$=$\sigma_j$. The part you are missing in your interpretation is that $\sigma_i$ and $\sigma_j$ can be very different. Essentially if $b_i$ is "huge" but $\sigma_i$ is also "huge" you see that you can get a high p-value. Conversely for "small" $b_i$ and "super small" $\sigma_i$, you see you can get a small p-value. I hope that helps!
How to interpret standardized regression coefficients and p-values in multiple regression? For the standard linear regression model the absolute value of the coefficient estimates and the p-value are not related in the way you describe. It is very possible to have absolutely large coeffici
30,032
How to interpret standardized regression coefficients and p-values in multiple regression?
Standardized regression coefficients do not work for categorical variables or for nonlinear effects. You are assuming everything has a linear effect, which is unlikely. Standardization also assumes that the SD is the right scaling constant. To me standardized coefficients are harder to interpret than the original coefficients, and the standardization is arbitrary. It can also be misleading. It is not always true that a variable should have more importance just because its standard deviation is different from another's.
How to interpret standardized regression coefficients and p-values in multiple regression?
Standardized regression coefficients do not work for categorical variables or for nonlinear effects. You are assuming everything has a linear effect, which is unlikely. Standardization also assumes
How to interpret standardized regression coefficients and p-values in multiple regression? Standardized regression coefficients do not work for categorical variables or for nonlinear effects. You are assuming everything has a linear effect, which is unlikely. Standardization also assumes that the SD is the right scaling constant. To me standardized coefficients are harder to interpret than the original coefficients, and the standardization is arbitrary. It can also be misleading. It is not always true that a variable should have more importance just because its standard deviation is different from another's.
How to interpret standardized regression coefficients and p-values in multiple regression? Standardized regression coefficients do not work for categorical variables or for nonlinear effects. You are assuming everything has a linear effect, which is unlikely. Standardization also assumes
30,033
How to interpret standardized regression coefficients and p-values in multiple regression?
Your question seems to reflect the mistaken understanding that the statistical "significance" of the p-value somehow means "meaningful", "important", or "relevant to real life". This is a false but very widely held misunderstanding. P-values are a standardized representation of how reliable the effect size measures are. I say that the p-value is "standardized" in the sense that no matter what the test statistic, whether a t score, an F, a $\chi^2$, or whatever (in the case of linear regression, it is the t score), the p-value represents it on the same scale, as a number from 0 to 1, such that values closer to 0 give greater confidence that the effect size measures are reliable, that is, are not mere haphazard statistical flukes. (Of course, by convention, 0.05 is the most common cutoff measure for an "acceptable" p-value, though that should not be taken to be a magic number. The "right" cutoff for a p-value actually depends on both the sample size and the strength of the expected effect size. So, in some cases, 0.05 might not be strict enough, and in other cases, it might be too stringent.) Effect size measures are the most important results that actually reflect what is "meaningful", "important", or "relevant to real life". In the case of linear regression, these are the coefficients and standardized coefficients (or the adjusted $R^2$ for the overall regression model). So, with that understanding, to directly answer your question: The p-values reflect how much confidence you should have in the reliability of the standardized coefficients. If you take the 0.05 traditional cutoff, the only thing that the p-values tell you in your results is that all your standardized coefficients are reliable. They absolutely do not tell you how important any specific variable might be, and they should never be misinterpreted in that way. So, it is irrelevant which p-values are larger than others, as long as they are all well below 0.05. The standardized coefficients are what you should focus on in trying to determine which variables are more important. In regression, what they mean is that a one standard deviation increase in the given variable will give the specified number of standard deviations of change in the target variable. So, in your case, a one standard deviation increase in x2 (measured in x2's units) will increase y by 0.24 standard deviations (measured in y's units); a one standard deviation increase in x3 (measured in x3's units) will increase y by 0.27 standard deviations (measured in y's units). The relative p-values of x2 and x3 are irrelevant; they do not reflect importance; they only indicate that the estimates of x2 and x3 both have high confidence. That said, considering that 0.24 and 0.27 as $\beta$ values are very close, you should certainly not trust that your results mean that x3 is conclusively more important than x2. The numbers are too close; they should be considered approximately equal; any apparent differences here might indeed be a statistical fluke. If you really want to test which one is more important, then there are specific tests for that.
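For the last point, one possible way to carry out such a test in R (a sketch only, with hypothetical variable and data-frame names: mydata, y, x1, x2, x3):

library(car)                                 # provides linearHypothesis()
d_std <- as.data.frame(scale(mydata))        # z-score everything
fit   <- lm(y ~ x1 + x2 + x3, data = d_std)  # estimates are now standardized betas
summary(fit)
linearHypothesis(fit, "x2 = x3")             # Wald test that the x2 and x3 betas are equal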
How to interpret standardized regression coefficients and p-values in multiple regression?
Your question seems to reflect the mistaken understanding that the statistical "significance" of the p-value somehow means "meaningful", "important", or "relevant to real life". This is a false but ve
How to interpret standardized regression coefficients and p-values in multiple regression? Your question seems to reflect the mistaken understanding that the statistical "significance" of the p-value somehow means "meaningful", "important", or "relevant to real life". This is a false but very widely held misunderstanding. P-values are a standardized representation of how reliable the effect size measures are. I say that the p-value is "standardized" in the sense that no matter what the test statistic, whether a t score, an F, a $\chi^2$, or whatever (in the case of linear regression, it is the t score), the p-value represents it on the same scale of a number from 0 to 1 such that values closer to 0 give greater confidence that the effect size measures are reliable, that is, are not mere haphazard statistical flukes. (Of course, by convention, 0.05 is the most common cutoff measure for an "acceptable" p-value, though that should not be taken to be a magic number. The "right" cutoff for a p-value actually depends on both the sample size and the strength of the expected effect size. So, in some cases, 0.05 might not be strict enough, and in other cases, it might be too stringent.) Effect size measures are the most important results that actually reflect what is "meaningful", "important", or "relevant to real life". In the case of linear regression, these are the coefficients and standardized coefficients (or the adjusted $R^2$ for the overall regression model). So, with that understanding, to directly answer your question: The p-values reflect how much confidence you should have in the reliability of the standardized coefficients. If you take the 0.05 traditional cutoff, the only thing that the p-values tell you in your results is that all your standardized coefficients are reliable. They absolutely do not tell you how important any specific variable might be, and they should never be misinterpreted in that way. So, it is irrelevant which p-values are larger than others, as long as they are all well below 0.05. The standardized coefficients are what you should focus on in trying to determine which variables are more important. In regression, what they mean is that one standard deviation increase in the given variable will give the specified number of standard deviations of change in the target variable. So, in your case, one standard deviation increase in x2 (measured in x2's units) will increase y by 0.24 standardard deviations (measured in y's units); one standard deviation increase in x3 (measured in x3's units) will increase y by 0.27 standardard deviations (measured in y's units). The relative p-values of x2 and x3 are irrelevant; they do not reflect importance; they only indicate that the estimates of x2 and x3 both have high confidence. That said, considering that 0.24 and 0.27 as $\beta$ values are very close, you should certainly not trust that your results mean that x3 is conclusively more important than x2. The numbers are too close; they should be considered approximately equal; any apparent differences here might indeed be a statistical fluke. If you really want to test which one is more important, then there are specific tests for that.
How to interpret standardized regression coefficients and p-values in multiple regression? Your question seems to reflect the mistaken understanding that the statistical "significance" of the p-value somehow means "meaningful", "important", or "relevant to real life". This is a false but ve
30,034
How to interpret standardized regression coefficients and p-values in multiple regression?
The p-value and $|\beta_i|$ have different meanings. The p-value of $\beta_i$ is not literally "the probability that $\beta_i = 0$". The fuller explanation is: assuming $\beta_i = 0$, the p-value is the probability that a sample would produce an estimate at least as extreme as $\hat{\beta}_i$, where $\hat{\beta}_i$ is the coefficient from the current regression. $|\beta_i|$ measures the size of the effect of $x_i$ on y, and its value depends on the variances of the variables. In your case, I think the $x_i$ may have different variances. This can cause some $x_i$ to have both a large coefficient and a large p-value. A second possibility is a non-Gaussian distribution of the variables. For example, with uniformly distributed variables and few records: > b = runif(50) > a = runif(50) > mo = lm(b ~ a + 1) > summary(mo) Call: lm(formula = b ~ a + 1) Residuals: Min 1Q Median 3Q Max -0.42894 -0.16301 0.01589 0.17515 0.49369 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.41346 0.07508 5.507 1.41e-06 *** a 0.19266 0.12700 1.517 0.136 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.253 on 48 degrees of freedom Multiple R-squared: 0.04575, Adjusted R-squared: 0.02587 F-statistic: 2.301 on 1 and 48 DF, p-value: 0.1358
How to interpret standardized regression coefficients and p-values in multiple regression?
p-value and $|\beta_i|$ have different meanings. p-value of $\beta_i$ relates to "the probability that $\beta_i = 0$". Full explanation is: if $\beta_i = 0$, p-value means the probability that "sampl
How to interpret standardized regression coefficients and p-values in multiple regression? p-value and $|\beta_i|$ have different meanings. p-value of $\beta_i$ relates to "the probability that $\beta_i = 0$". Full explanation is: if $\beta_i = 0$, p-value means the probability that "sample will be regressed to $\beta_i = \hat{\beta}_i$", where, $\hat{\beta}_i$ means the current coefficient of regression. $|\beta_i|$ means: the level of effect from $\beta_i$ to y, which value relies on variances. In your case, I think $x_i$ may have different variances. It cause that several $x_i$ has high coefficient and p-value. The second possibility is non-Gaussian distribution of variables. In case of unify distribution with little records: > b = runif(50) > a = runif(50) > mo = lm(b ~ a + 1) > summary(mo) Call: lm(formula = b ~ a + 1) Residuals: Min 1Q Median 3Q Max -0.42894 -0.16301 0.01589 0.17515 0.49369 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.41346 0.07508 5.507 1.41e-06 *** a 0.19266 0.12700 1.517 0.136 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.253 on 48 degrees of freedom Multiple R-squared: 0.04575, Adjusted R-squared: 0.02587 F-statistic: 2.301 on 1 and 48 DF, p-value: 0.1358
How to interpret standardized regression coefficients and p-values in multiple regression? p-value and $|\beta_i|$ have different meanings. p-value of $\beta_i$ relates to "the probability that $\beta_i = 0$". Full explanation is: if $\beta_i = 0$, p-value means the probability that "sampl
30,035
Logistic Regression on Big Data
It is not appropriate to do feature screening and then to feed surviving features into a method that does not understand how much data torture was done previously. It is better to use a method that can handle all potential features (e.g., elastic net). Others' suggestions about using data reduction are also excellent ideas.
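A hedged sketch of what an elastic-net logistic regression could look like in R with glmnet (X is assumed to be a numeric feature matrix with all ~5000 columns, y the binary outcome):

library(glmnet)
fit <- cv.glmnet(X, y, family = "binomial", alpha = 0.5)  # alpha between 0 and 1 = elastic net
coef(fit, s = "lambda.1se")   # sparse coefficient vector at the cross-validated penalty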
Logistic Regression on Big Data
It is not appropriate to do feature screening and then to feed surviving features into a method that does not understand how much data torture was done previously. It is better to use a method that c
Logistic Regression on Big Data It is not appropriate to do feature screening and then to feed surviving features into a method that does not understand how much data torture was done previously. It is better to use a method that can handle all potential features (e.g., elastic net). Others' suggestions about using data reduction are also excellent ideas.
Logistic Regression on Big Data It is not appropriate to do feature screening and then to feed surviving features into a method that does not understand how much data torture was done previously. It is better to use a method that c
30,036
Logistic Regression on Big Data
A first approach is to use PCA in order to reduce the dimensionality of the dataset. Try to retain ~97% of the total variance; this may help out quite a bit. Another option is to use something like stochastic gradient descent, which can be a much faster algorithm and is able to fit into R's memory. EDIT: One problem with R is that you can only use your RAM, so if you only have 8 GB of memory then that is what you are limited to. I have run into many problems with this and have since moved on to using Python's scikit-learn, which seems to handle bigger datasets much better. A very nice chart which gives some idea of places to start based on your dataset size can be found here: http://3.bp.blogspot.com/-dofu6J0sZ8o/UrctKb69QdI/AAAAAAAADfg/79ewPecn5XU/s1600/scikit-learn-flow-chart.jpg
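A rough sketch of the PCA-then-logistic idea in R (X and y are placeholders; assumes numeric, non-constant columns):

pc   <- prcomp(X, center = TRUE, scale. = TRUE)
keep <- which(cumsum(pc$sdev^2) / sum(pc$sdev^2) >= 0.97)[1]   # components covering ~97% of variance
fit  <- glm(y ~ ., data = data.frame(y = y, pc$x[, 1:keep]), family = binomial)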
Logistic Regression on Big Data
A first approach is to use PCA in order to reduce the dimensionality of the dataset. Try to retain ~97% of the total variance, this may help out quite a bit. Another option is to use something like s
Logistic Regression on Big Data A first approach is to use PCA in order to reduce the dimensionality of the dataset. Try to retain ~97% of the total variance, this may help out quite a bit. Another option is to use something like stochastic gradient descent, this can be a much faster algorithm and able to fit into R's memory. EDIT: One problem with R is that you can only use your RAM so if you only have 8 GB of memory then that is what you are limited to. I have run into many problems with this and have since moved onto using python's scikit-learn which seems to handle bigger datasets much better. A very nice chart which gives some idea of places to start based on your dataset size can be found here: http://3.bp.blogspot.com/-dofu6J0sZ8o/UrctKb69QdI/AAAAAAAADfg/79ewPecn5XU/s1600/scikit-learn-flow-chart.jpg
Logistic Regression on Big Data A first approach is to use PCA in order to reduce the dimensionality of the dataset. Try to retain ~97% of the total variance, this may help out quite a bit. Another option is to use something like s
30,037
Logistic Regression on Big Data
As @Frank Harrell already mentioned, using elastic net or LASSO to perform penalized regression with all 5000 features (p) would be a good start for feature selection (one can't simply remove 3500 variables because they are not "statistically significant" with the dependent variable of interest). Either of these methods can be performed using the R package, glmnet. In order to take into account the relationships shared between the potential predictor variables of interest (p = 5000), I would recommend running a random forest using the randomForest package and/or gradient boosting using the gbm package to assess the relative importance of the potential predictor variables in regards to the binary outcome. With this information, you will be much more prepared to build a more parsimonious logistic regression model.
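One possible importance-screening sketch with randomForest (df is a hypothetical data frame holding the predictors plus the binary outcome y coded as a factor):

library(randomForest)
rf  <- randomForest(y ~ ., data = df, ntree = 500, importance = TRUE)
imp <- importance(rf)
head(imp[order(-imp[, "MeanDecreaseGini"]), ], 20)   # top 20 candidate predictors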
Logistic Regression on Big Data
As @Frank Harrell already mentioned, using elastic net or LASSO to perform penalized regression with all 5000 features (p) would be a good start for feature selection (one can't simply remove 3500 var
Logistic Regression on Big Data As @Frank Harrell already mentioned, using elastic net or LASSO to perform penalized regression with all 5000 features (p) would be a good start for feature selection (one can't simply remove 3500 variables because they are not "statistically significant" with the dependent variable of interest). Either of these methods can be performed using the R package, glmnet. In order to take into account the relationships shared between the potential predictor variables of interest (p = 5000), I would recommend running a random forest using the randomForest package and/or gradient boosting using the gbm package to assess the relative importance of the potential predictor variables in regards to the binary outcome. With this information, you will be much more prepared to build a more parsimonious logistic regression model.
Logistic Regression on Big Data As @Frank Harrell already mentioned, using elastic net or LASSO to perform penalized regression with all 5000 features (p) would be a good start for feature selection (one can't simply remove 3500 var
30,038
Logistic Regression on Big Data
I assume you are not limited to R, since this is a big data problem you probably shouldn't be. You can try MLlib, which is Apache Spark's scalable machine learning library. Apache Spark, in turn, is a fast and general engine for in-memory large-scale data processing. These operate on a Hadoop framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Note that 'thousands of machines' is optional(!), you can set it up on your local work/home desktop as well. Going back to MLlib, it comes with the below algorithms out of the box: K-means clustering with K-means|| initialization. L1- and L2-regularized linear regression. L1- and L2-regularized logistic regression. Alternating least squares collaborative filtering, with explicit ratings or implicit feedback. Naive Bayes multinomial classification. Stochastic gradient descent. If you are regularly working with big data, you may need to adopt a Hadoop solution.
Logistic Regression on Big Data
I assume you are not limited to R, since this is a big data problem you probably shouldn't be. You can try MLlib, which is Apache Spark's scalable machine learning library. Apache Spark, in turn, is a
Logistic Regression on Big Data I assume you are not limited to R, since this is a big data problem you probably shouldn't be. You can try MLlib, which is Apache Spark's scalable machine learning library. Apache Spark, in turn, is a fast and general engine for in-memory large-scale data processing. These operate on a Hadoop framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Note that 'thousands of machines' is optional(!), you can set it up on your local work/home desktop as well. Going back to MLlib, it comes with the below algorithms out of the box: K-means clustering with K-means|| initialization. L1- and L2-regularized linear regression. L1- and L2-regularized logistic regression. Alternating least squares collaborative filtering, with explicit ratings or implicit feedback. Naive Bayes multinomial classification. Stochastic gradient descent. If you are regularly working with big data, you may need to adopt a Hadoop solution.
Logistic Regression on Big Data I assume you are not limited to R, since this is a big data problem you probably shouldn't be. You can try MLlib, which is Apache Spark's scalable machine learning library. Apache Spark, in turn, is a
30,039
Logistic Regression on Big Data
You can try Vowpal Wabbit. It works well with very large datasets and very large number of features. according to the website: This is a project started at Yahoo! Research and continuing at Microsoft Research to design a fast, scalable, useful learning algorithm. VW is the essence of speed in machine learning, able to learn from terafeature datasets with ease. Via parallel learning, it can exceed the throughput of any single machine network interface when doing linear learning, a first amongst learning algorithms.
Logistic Regression on Big Data
You can try Vowpal Wabbit. It works well with very large datasets and very large number of features. according to the website: This is a project started at Yahoo! Research and continuing at Micros
Logistic Regression on Big Data You can try Vowpal Wabbit. It works well with very large datasets and very large number of features. according to the website: This is a project started at Yahoo! Research and continuing at Microsoft Research to design a fast, scalable, useful learning algorithm. VW is the essence of speed in machine learning, able to learn from terafeature datasets with ease. Via parallel learning, it can exceed the throughput of any single machine network interface when doing linear learning, a first amongst learning algorithms.
Logistic Regression on Big Data You can try Vowpal Wabbit. It works well with very large datasets and very large number of features. according to the website: This is a project started at Yahoo! Re
30,040
Can the interaction term of two insignificant coefficients be significant?
$A*B$ can be significant in all of these scenarios. Consider $A \in \{-1, 0, 1\}$ and $B \in \{-1, 1\}$ where the underlying model is $E[Y|A,B] = A*B$. In a roughly balanced situation, with (roughly) equal sample sizes for each combination of $A \times B$, neither $A$ nor $B$ will be significant (except for the $\alpha$ fraction of the time when a true null hypothesis is rejected), but the interaction term certainly will be! Here's a numeric example: A <- rep(c(-1,0,1), 100) B <- rep(c(-1,1), 150) X <- A*B Y <- X + rnorm(300) > summary(lm(Y~A+B+A*B)) Call: lm(formula = Y ~ A + B + A * B) Residuals: Min 1Q Median 3Q Max -3.03520 -0.59349 -0.03184 0.62857 2.49359 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.02083 0.05668 -0.367 0.714 A -0.03797 0.06942 -0.547 0.585 B 0.05867 0.05668 1.035 0.301 A:B 0.90789 0.06942 13.078 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9818 on 296 degrees of freedom Multiple R-squared: 0.3681, Adjusted R-squared: 0.3617 F-statistic: 57.47 on 3 and 296 DF, p-value: < 2.2e-16 Or, more simply: > cor(A,Y) [1] -0.02527534 > cor(B,Y) [1] 0.04782935 > cor(A*B,Y) [1] 0.6042723 It should be intuitively clear that if we can construct an example where $A$ and $B$ are both insignificant, yet the interaction is significant, we can do so for either of your other two cases. As for likely... One could argue that in real life, apart from physics and a few other disciplines, pretty much all interaction terms are very likely to be nonzero (albeit perhaps very small), and "significance" in its statistical sense is merely a function of sample size.
Can the interaction term of two insignificant coefficients be significant?
$A*B$ can be significant in all of these scenarios. Consider $A \in \{-1, 0, 1\}$ and $B \in \{-1, 1\}$ where the underlying model is $E[Y|A,B] = A*B$. In a roughly balanced situation, with (roughly
Can the interaction term of two insignificant coefficients be significant? $A*B$ can be significant in all of these scenarios. Consider $A \in \{-1, 0, 1\}$ and $B \in \{-1, 1\}$ where the underlying model is $E[Y|A,B] = A*B$. In a roughly balanced situation, with (roughly) equal sample sizes for each combination of $A \times B$, neither $A$ nor $B$ will be significant (except for the $\alpha$ fraction of the time when a true null hypothesis is rejected), but the interaction term certainly will be! Here's a numeric example: A <- rep(c(-1,0,1), 100) B <- rep(c(-1,1), 150) X <- A*B Y <- X + rnorm(300) > summary(lm(Y~A+B+A*B)) Call: lm(formula = Y ~ A + B + A * B) Residuals: Min 1Q Median 3Q Max -3.03520 -0.59349 -0.03184 0.62857 2.49359 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.02083 0.05668 -0.367 0.714 A -0.03797 0.06942 -0.547 0.585 B 0.05867 0.05668 1.035 0.301 A:B 0.90789 0.06942 13.078 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.9818 on 296 degrees of freedom Multiple R-squared: 0.3681, Adjusted R-squared: 0.3617 F-statistic: 57.47 on 3 and 296 DF, p-value: < 2.2e-16 Or, more simply: > cor(A,Y) [1] -0.02527534 > cor(B,Y) [1] 0.04782935 > cor(A*B,Y) [1] 0.6042723 It should be intuitively clear that if we can construct an example where $A$ and $B$ are both insignificant, yet the interaction is significant, we can do so for either of your other two cases. As for likely... One could argue that in real life, apart from physics and a few other disciplines, pretty much all interaction terms are very likely to be nonzero (albeit perhaps very small), and "significance" in its statistical sense is merely a function of sample size.
Can the interaction term of two insignificant coefficients be significant? $A*B$ can be significant in all of these scenarios. Consider $A \in \{-1, 0, 1\}$ and $B \in \{-1, 1\}$ where the underlying model is $E[Y|A,B] = A*B$. In a roughly balanced situation, with (roughly
30,041
Can the interaction term of two insignificant coefficients be significant?
Jbowman's answer is correct but to add to the "real life" dimension he or she adverts to: You really should think about "real life" here because the basic answer to your question is: "Impossible to say; it depends on what you are modeling." The answer to the main question -- can there be a "significant" interaction between two "nonsignificant" predictors -- is "of course." Imagine, e.g., a disease that is equally likely to be terminal for members of two subpopulations & that can be effectively treated with an intervention in only 1. Membership in the groups will not predict death from the disease; and the main effect of the treatment -- which will be a (sample-size weighted) average of the effect on the two groups -- might well be nonsignificant too if the sample size of the treatment-responsive population or the effect size of the intervention is small. But add a cross-product interaction term -- & voila, you see that the effect of treatment is "significant" for the treatment-responsive group. Maybe you can see from this example that your questions about the relative "likelihood" & "theoretical possibility" etc. of significant interactions conditional on the predictor & moderator being significant can't be answered in a meaningful way. Everything depends on how the predictor & moderator are related to the outcome being modeled. For a phenomenon in which it is not meaningful or plausible to envision the two variables interacting, there's no point asking how "likely" or "theoretically possible" an interaction is, whether or not the predictor and moderator are significant or nonsignificant (likely the interaction will be nonsignificant in that case, but if it turns out otherwise, it's likely a coincidence or a reflection of "significant" but meaningless relations between variables when you have a large sample, etc.). If such a relationship is plausible, then by definition a "significant" interaction is "theoretically possible" & whether one would expect the predictor and moderator to be significant or nonsignificant on their own in that situation necessarily depends on what you are modeling. (Because the universe of things you might investigate is infinite, there's no way to say what's more likely -- both variables, one, or neither "significant".) Statistics won't help anyone who doesn't know what & why he or she is using them to understand a particular phenomenon.
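A tiny simulated version of the disease example, as a sketch (every number here is invented; the point is only the pattern of p-values one typically sees):

set.seed(7)
n     <- 400
group <- rbinom(n, 1, 0.5)   # 1 = treatment-responsive subpopulation
treat <- rbinom(n, 1, 0.5)
p_die <- ifelse(group == 1 & treat == 1, 0.2, 0.5)   # treatment helps only group 1
death <- rbinom(n, 1, p_die)
summary(glm(death ~ group * treat, family = binomial))
# Typically: neither main effect is clearly significant, but the group:treat
# interaction is, mirroring the verbal example above.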
Can the interaction term of two insignificant coefficients be significant?
Jbowman's answer is correct but to add to the "real life" dimension he or she adverts to: You really should think about "real life" here because the basic answer to your question is: "Impossible to s
Can the interaction term of two insignificant coefficients be significant? Jbowman's answer is correct but to add to the "real life" dimension he or she adverts to: You really should think about "real life" here because the basic answer to your question is: "Impossible to say; it depends on what you are modeling." The answer to the main question -- can there be a "significant" interaction between two "nonsignificant" predictors -- is "of course." Imagine, e.g., a disease that is equally likely to be terminal for members of two subpopulations & that can be effectively treated with an intervention in only 1. Membership in the groups will not predict death from the disease; and the main effect of the treatment -- which will be a (sample-size weighted) average of the effect on the two groups might well be nonsignificant too if the sample size of the treatment-responsive population or the effect size of the intervention is small. But add a cross-product interaction term -- & voila, you see that the effect of treatment is "significant" for the treatment-responsive group. Maybe you can see from this example that your questions about the relative "likelihood" & "theoretical possibility" etc. of signficant interactions conditional on the predictor & moderator being significant can't be answered in a meaningful way. Everything depends on how the predictor & moderator are related to the outcome being modeled. For a phenomenon in which it is not meaningful or plausible to envision the two variables interacting, there's no point asking about how "likely" or "theoretically possible," whether or not the predictor and moderator are significant or nonsignificant (likely the interaction will be nonsignificant in that case, but if it turns out otherwise, it's likely a coincidence or a reflection of "significant" but meaningless relations between variables when you have large sample, etc.) If such a relationship is plausible, then by definition a "significant" interaction is "theoretically possible" & whether one would expect the predictor and moderator to be significant or nonsignificant on their own in that situation necessarily depends on what you are modeling. (Because the universe of things you might investigate is infinite, there's no way to say what's more likely -- both variables, one, or neither "significant" ) Statistics won't help anyone who doesn't known what & why he or she is using them to understand a particular phenomenon.
Can the interaction term of two insignificant coefficients be significant? Jbowman's answer is correct but to add to the "real life" dimension he or she adverts to: You really should think about "real life" here because the basic answer to your question is: "Impossible to s
30,042
Can the interaction term of two insignificant coefficients be significant?
Alternatively, you can test whether the interaction is spurious by FWL orthogonalization (residualizing the interaction term on its components); if the interaction stands, you can then drop those independent variables that are no longer significant. The objective is to remove as much of the overlap between the interaction term and its components as possible, because that overlap confuses the analysis of the parameters. See: Empirical Economics 2012, Hatice Ozer Balli and Bent E. Sørensen, Interaction effects in econometrics. [DOI]
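A minimal sketch of the orthogonalization idea in R (df is a hypothetical data frame containing y, A and B):

df$AB      <- df$A * df$B
df$AB_orth <- resid(lm(AB ~ A + B, data = df))   # part of A*B not explained by A and B
summary(lm(y ~ A + B + AB_orth, data = df))
# The test on AB_orth is the same as the test on the raw interaction in the full
# model, while the A and B coefficients now match those from the model without it.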
Can the interaction term of two insignificant coefficients be significant?
Alternatively, you can test if the interaction is spurious or not by FWL orthogonalization, and when the interaction stands, then you can remove those independent variables that now are not significan
Can the interaction term of two insignificant coefficients be significant? Alternatively, you can test if the interaction is spurious or not by FWL orthogonalization, and when the interaction stands, then you can remove those independent variables that now are not significant anymore. The objective is to remove as much interaction as possible because it confuses the analysis of the parameters. See: Empirical Economics 2012, Hatice Ozer Balli and Bent E. Sørensen, Interaction effects in econometrics. [DOI]
Can the interaction term of two insignificant coefficients be significant? Alternatively, you can test if the interaction is spurious or not by FWL orthogonalization, and when the interaction stands, then you can remove those independent variables that now are not significan
30,043
Can the interaction term of two insignificant coefficients be significant?
Of course. Let me try and explain in a theoretical way instead of with difficult numerical examples. Let's imagine Psychology research in which you investigate the effect of identification with your ethnic group, and of the group's attitudinal norm towards an outgroup, on identification with that specific outgroup: $\text{Ingroup ID} + \text{norm} = \text{Outgroup ID}$ Now imagine that only those who highly identify with their ingroup identify strongly with the outgroup IF the attitudinal norm towards that outgroup is high (positive), and very little if the norm towards that outgroup is low (negative). This is the part of your sample that can produce a significant effect, but it may be too small to show up when you only test the two main effects, where all participants (regardless of their specific scores on both variables) are put into the equation together to look for any effect on the dependent variable. In other words, taking all the scores together, without the specific interacting combination, nullifies an effect that is really there.
Can the interaction term of two insignificant coefficients be significant?
Of course. Let me try and explain in a theoretical way instead of difficult numerical ways. Let's imagine Psychology research in which you investigate the effect of identification with your ethnic gro
Can the interaction term of two insignificant coefficients be significant? Of course. Let me try and explain in a theoretical way instead of difficult numerical ways. Let's imagine Psychology research in which you investigate the effect of identification with your ethnic group and the group's attitudinal norm towards an outgroup on the identification with that specific outgroup: $\text{Ingroup ID} + \text{norm} = \text{Outgroup ID}$ Now imagine that only those who highly identify with their ingroup identify strongly with the outgroup IF the attitudinal norm towards that outgroup is high (positive) and very little if the norm towards that outgroup is low (negative). This is a part of your sample that can result in a significant effect but might be too small if you only check both main effects in which all participants (regardless of specific score on both variables) are put into the equation to see for any effect on the dependent variable. In other words, all the scores taken together without any specific interacting combination nullifies a significant effect.
Can the interaction term of two insignificant coefficients be significant? Of course. Let me try and explain in a theoretical way instead of difficult numerical ways. Let's imagine Psychology research in which you investigate the effect of identification with your ethnic gro
30,044
Can one leave out data from research because it is not significant?
In the report cited in whuber's comment, it says on page 104 [pg 114 in the pdf]: The survey succeeded in activating the participation of approximately 8,900 doctoral candidates from more than 30 countries... Then, spanning pages 104-105, it says: While conducting data cleaning procedures, the Eurodoc survey experts' team decided to run a power test analysis. Based on the assumption of fully completed questionnaires which will result in a multi normal distribution, a power test for estimation of the confidence interval was used. This was done to test the accuracy of the data. It was decided to accept maximum a 6% error-level at a 95% confidence interval. A loss of 16% of the sampling size resulted in a sample of 12 participating countries with 7,600 participants. So it's not really clear exactly why 16% of the sample was lost, but the assumption of incomplete responses is likely correct. (And you can see why the reporter was confused.)
Can one leave out data from research because it is not significant?
In the report cited in whuber's comment, it says on page 104 [pg 114 in the pdf]: The survey succeeded in activating the participation of approximately 8,900 doctoral candidates from more than 30 cou
Can one leave out data from research because it is not significant? In the report cited in whuber's comment, it says on page 104 [pg 114 in the pdf]: The survey succeeded in activating the participation of approximately 8,900 doctoral candidates from more than 30 countries... Then, spanning pages 104-105, it says: While conducting data cleaning procedures, the Eurodoc survey experts' team decided to run a power test analysis. Based on the assumption of fully completed questionnaires which will result in a multi normal distribution, a power test for estimation of the confidence interval was used. This was done to test the accuracy of the data. It was decided to accept maximum a 6% error-level at a 95% confidence interval. A loss of 16% of the sampling size resulted in a sample of 12 participating countries with 7,600 participants. So it's not really clear exactly why the 16% loss in the sample, but the assumption of incomplete responses is likely correct. (And you can see why the reporter was confused.)
Can one leave out data from research because it is not significant? In the report cited in whuber's comment, it says on page 104 [pg 114 in the pdf]: The survey succeeded in activating the participation of approximately 8,900 doctoral candidates from more than 30 cou
30,045
Can one leave out data from research because it is not significant?
That sentence does not actually make sense and is clearly in error. Data cannot be statistically significant or insignificant. Only relationships between data, the product of statistical tests, can be spoken about in these terms. If the question is: Can we drop data from our analyses because the inclusion of that data means we cannot reject the null hypothesis? The answer is — obviously, I hope! — no. The message you've cited is a news report, not a scientific paper. Had it been a paper that was reviewed, it never would have gotten in. Probably, data was not included because there are substantive reasons to not include those data. Probably, as others have suggested, the excluded data was incomplete or collected using different or incomparable methods.
Can one leave out data from research because it is not significant?
That sentence does not actually make sense and is clearly in error. Data cannot be statistically significant or insignificant. Only relationships between data, the product of statistical tests, can be
Can one leave out data from research because it is not significant? That sentence does not actually make sense and is clearly in error. Data cannot be statistically significant or insignificant. Only relationships between data, the product of statistical tests, can be spoken about in these terms. If the question is: Can we drop data from our analyses because the inclusion of that data means we cannot reject the null hypothesis? The answer is — obviously, I hope! — no. The message you've cited is a news report, not a scientific paper. Had it been a paper that was reviewed, it never would have gotten in. Probably, data was not included because there are substantive reasons to not include those data. Probably, as others have suggested, the excluded data was incomplete or collected using different or incomparable methods.
Can one leave out data from research because it is not significant? That sentence does not actually make sense and is clearly in error. Data cannot be statistically significant or insignificant. Only relationships between data, the product of statistical tests, can be
30,046
Can one leave out data from research because it is not significant?
No. I suspect the reporter meant to say that the other individuals were left out because the surveys were incomplete or internally inconsistent.
Can one leave out data from research because it is not significant?
No. I suspect the reporter meant to say that the other individuals were left out because the surveys were incomplete or internally inconsistent.
Can one leave out data from research because it is not significant? No. I suspect the reporter meant to say that the other individuals were left out because the surveys were incomplete or internally inconsistent.
Can one leave out data from research because it is not significant? No. I suspect the reporter meant to say that the other individuals were left out because the surveys were incomplete or internally inconsistent.
30,047
Can one leave out data from research because it is not significant?
No, but reporters can use technical jargon completely nonsensically.
Can one leave out data from research because it is not significant?
No, but reporters can use technical jargon completely nonsensically.
Can one leave out data from research because it is not significant? No, but reporters can use technical jargon completely nonsensically.
Can one leave out data from research because it is not significant? No, but reporters can use technical jargon completely nonsensically.
30,048
How to interpret Residuals vs. Fitted Plot
Both the cutoff in the residual plot and the bump in the QQ plot are consequences of model misspecification. You are modeling the conditional mean of the visitor count; let’s call it $Y_{it}$. When you estimate the conditional mean with OLS, it fits $E(Y_{it}\mid X_{it})=\alpha+\beta X_{it}$. Notice that this specification assumes that if $\beta>0$, you can find a low enough $X_{it}$ that pushes the conditional mean of the visitor count into the negative region. This however cannot be the case in our everyday experience. Visitor count is a count variable and therefore a count regression would be more appropriate. For example, a Poisson regression fits $E(Y_{it}\mid X_{it})=e^{\alpha+\beta X_{it}}$. Under this specification, you can take $X_{it}$ arbitrarily far towards negative infinity, but the conditional mean of the visitor count will still be positive. All of this implies that your residuals can't by their nature be normally distributed. You seem to not have enough statistical power to reject the null that they are normal. But that null is guaranteed to be false by knowing what your data are. The cutoff in the residual plot is a consequence of this. You observe the cutoff because for low predicted (fitted) visitor counts the prediction error (residual) can only get so low. The bump at the end of your QQ plot also follows from this. OLS underpredicts in the right tail because it assumes that the relationship between $X_{it}$ and the outcome is linear. Poisson would assume it is multiplicative. In turn, the right tail of the residuals in the misspecified model is fatter than that of the normal distribution. I think @BruceET is making a good point that a “wobble” is natural for any estimator, and the question is whether the wobble is outside of a valid confidence bound. But in this case it also signals model misspecification.
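To make the cutoff and the fat right tail concrete, here is a minimal R sketch on simulated count data (not the original visitor panel; the coefficients 0.5 and 0.8 are made up for illustration). It fits the same simulated outcome once with OLS and once with a Poisson GLM; the OLS residual plot shows the diagonal lower bound described above, while the Poisson deviance residuals do not.

set.seed(1)
n <- 500
x <- rnorm(n)
y <- rpois(n, lambda = exp(0.5 + 0.8 * x))   # count outcome with a multiplicative mean

ols  <- lm(y ~ x)                      # assumes E(Y|X) = a + b*x, which can go negative
pois <- glm(y ~ x, family = poisson)   # assumes E(Y|X) = exp(a + b*x), always positive

par(mfrow = c(1, 2))
plot(fitted(ols), residuals(ols), xlab = "fitted", ylab = "residual",
     main = "OLS: hard lower cutoff")
plot(fitted(pois), residuals(pois, type = "deviance"), xlab = "fitted",
     ylab = "deviance residual", main = "Poisson GLM")
par(mfrow = c(1, 1))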
How to interpret Residuals vs. Fitted Plot
Both the cutoff in the residual plot and the bump in the QQ plot are consequences of model misspecification. You are modeling the conditional mean of the visitor count; let’s call it $Y_{it}$. When yo
How to interpret Residuals vs. Fitted Plot Both the cutoff in the residual plot and the bump in the QQ plot are consequences of model misspecification. You are modeling the conditional mean of the visitor count; let’s call it $Y_{it}$. When you estimate the conditional mean with OLS, it fits $E(Y_{it}\mid X_{it})=\alpha+\beta X_{it}$. Notice that this specification assumes that if $\beta>0$, you can find a low enough $X_{it}$ that pushes the conditional mean of the visitor count into the negative region. This however cannot be the case in our everyday experience. Visitor count is a count variable and therefore a count regression would be more appropriate. For example, a Poisson regression fits $E(Y_{it}\mid X_{it})=e^{\alpha+\beta X_{it}}$. Under this specification, you can take $X_{it}$ arbitrarily far towards negative infinity, but the conditional mean of the visitor count will still be positive. All of this implies that your residuals can't by their nature be normally distributed. You seem to not have enough statistical power to reject the null that they are normal. But that null is guaranteed to be false by knowing what your data are. The cutoff in the residual plot is a consequence of this. You observe the cutoff because for low predicted (fitted) visitor counts the prediction error (residual) can only get so low. The bump at the end of your QQ plot also follows from this. OLS underpredicts in the right tail because it assumes that the relationship between $X_{it}$ and the outcome is linear. Poisson would assume it is multiplicative. In turn, the right tail of the residuals in the misspecified model is fatter than that of the normal distribution. I think @BruceET is making a good point that a “wobble” is natural for any estimator, and the question is whether the wobble is outside of a valid confidence bound. But in this case it also signals model misspecification.
How to interpret Residuals vs. Fitted Plot Both the cutoff in the residual plot and the bump in the QQ plot are consequences of model misspecification. You are modeling the conditional mean of the visitor count; let’s call it $Y_{it}$. When yo
30,049
How to interpret Residuals vs. Fitted Plot
Here are a dozen normal probability plots in R, each for a sample of size 100 from a known standard normal population. Each plot is roughly linear, but most have a 'wobble' or two, especially toward the extremes. set.seed(116) par(mfrow=c(3,4)) for(i in 1:12) { z = rnorm(100); qqnorm(z, pch=20) } par(mfrow=c(1,1)) Repeat the code (without the set.seed statement) for more examples. Examples of normal probability plots in textbooks seem, on average, to be better behaved than the plots one typically sees in practice -- even when normality assumptions are very nearly true. Addendum: Six additional plots with reference lines as suggested in Comment by @Henry. set.seed(117) par(mfrow=c(2,3)) for(i in 1:6) { z = rnorm(100) qqnorm(z); qqline(z, col=2) } par(mfrow=c(1,1))
How to interpret Residuals vs. Fitted Plot
Here are a dozen normal probability plots in R, each for a sample of size 100 from a known standard normal population. Each plot is roughly linear, but most have a 'wobble' or two, especially toward t
How to interpret Residuals vs. Fitted Plot Here are a dozen normal probability plots in R, each for a sample of size 100 from a known standard normal population. Each plot is roughly linear, but most have a 'wobble' or two, especially toward the extremes. set.seed(116) par(mfrow=c(3,4)) for(i in 1:12) { z = rnorm(100); qqnorm(z, pch=20) } par(mfrow=c(1,1)) Repeat the code (without the set.seed statement) for more examples. Examples of normal probability plots in textbooks seem, on average, to be better behaved than the plots one typically sees in practice -- even when normality assumptions are very nearly true. Addendum: Six additional plots with reference lines as suggested in Comment by @Henry. set.seed(117) par(mfrow=c(2,3)) for(i in 1:6) { z = rnorm(100) qqnorm(z); qqline(z, col=2) } par(mfrow=c(1,1))
How to interpret Residuals vs. Fitted Plot Here are a dozen normal probability plots in R, each for a sample of size 100 from a known standard normal population. Each plot is roughly linear, but most have a 'wobble' or two, especially toward t
30,050
How to interpret Residuals vs. Fitted Plot
Let's assume "visitors" is the total number of visitors and thus whole positive numbers. Let us assume, the model predicts zero visitors and there are zero visitors,then the residual is zero. If there are more then zero visitors, the the residuals must be positive. If the modell predicts a negative number of visitors, then the residual must be at least of absolute value as the prediction. In general: as the visitors are bound to a positive or zero value, there is a lower limit to the residuals. The bump in the QQ plot is minimal and probably not worth worrying about with regards to regression assumptions.
How to interpret Residuals vs. Fitted Plot
Let's assume "visitors" is the total number of visitors and thus whole positive numbers. Let us assume, the model predicts zero visitors and there are zero visitors,then the residual is zero. If there
How to interpret Residuals vs. Fitted Plot Let's assume "visitors" is the total number of visitors and thus whole positive numbers. Let us assume, the model predicts zero visitors and there are zero visitors,then the residual is zero. If there are more then zero visitors, the the residuals must be positive. If the modell predicts a negative number of visitors, then the residual must be at least of absolute value as the prediction. In general: as the visitors are bound to a positive or zero value, there is a lower limit to the residuals. The bump in the QQ plot is minimal and probably not worth worrying about with regards to regression assumptions.
How to interpret Residuals vs. Fitted Plot Let's assume "visitors" is the total number of visitors and thus whole positive numbers. Let us assume, the model predicts zero visitors and there are zero visitors,then the residual is zero. If there
30,051
Missing quartile in boxplot
The median is probably identical to the first quartile, which is why they overlap. This tends to happen when you have a large proportion of identical, low values in the dataset. Here's an example that reproduces this pattern: dat <- c(1,2,2,2,3,5,6) median(dat) ## 2 quantile(dat, 0.25) ## 25% ## 2 boxplot(dat) You can read a basic introduction about how to interpret boxplots here. Though as Nick Cox points out below, its discussion of what are called 'outliers' is flawed and should be ignored. Outliers should not be deleted unless there is very strong reason to, such as a clear data recording error. Note also that a boxplot is not a great way to display many datasets. I agree with Stephan Kolassa's recommendation of a beeswarm plot for small datasets and a violin plot/kernel density plot for larger ones.
Missing quartile in boxplot
The median is probably identical to the first quartile, which is why they overlap. This tends to happen when you have a large proportion of identical, low values in the dataset. Here's an example that
Missing quartile in boxplot The median is probably identical to the first quartile, which is why they overlap. This tends to happen when you have a large proportion of identical, low values in the dataset. Here's an example that reproduces this pattern: dat <- c(1,2,2,2,3,5,6) median(dat) ## 2 quantile(dat, 0.25) ## 25% ## 2 boxplot(dat) You can read a basic introduction about how to interpret boxplots here. Though as Nick Cox points out below, its discussion of what are called 'outliers' is flawed and should be ignored. Outliers should not be deleted unless there is very strong reason to, such as a clear data recording error. Note also that a boxplot is not a great way to display many datasets. I agree with Stephan Kolassa's recommendation of a beeswarm plot for small datasets and a violin plot/kernel density plot for larger ones.
Missing quartile in boxplot The median is probably identical to the first quartile, which is why they overlap. This tends to happen when you have a large proportion of identical, low values in the dataset. Here's an example that
30,052
Missing quartile in boxplot
The "box" in a boxplot extends from the first to the third quartile, i.e., from the 25th to the 75th percentile. Visually, this means that your 25th percentile is around 6 messages, and your 75th percentile around 8. In addition, boxplots indicate the median (i.e., the second quartile, or 50th percentile) using a horizontal line. Of course, the median can coincide with a quartile. Good implementations therefore use a different color or line type for the median line. In the present case, we see that the bottom horizontal line is green. It is obviously plotted over the first quartile line. Thus, this is not only the first quartile, but simultaneously the median. Therefore, your median is also about 6. You should be able to verify this from your data, by calculating the quartiles and the median.
Missing quartile in boxplot
The "box" in a boxplot extends from the first to the third quartile, i.e., from the 25th to the 75th percentile. Visually, this means that your 25th percentile is around 6 messages, and your 75th perc
Missing quartile in boxplot The "box" in a boxplot extends from the first to the third quartile, i.e., from the 25th to the 75th percentile. Visually, this means that your 25th percentile is around 6 messages, and your 75th percentile around 8. In addition, boxplots indicate the median (i.e., the second quartile, or 50th percentile) using a horizontal line. Of course, the median can coincide with a quartile. Good implementations therefore use a different color or line type for the median line. In the present case, we see that the bottom horizontal line is green. It is obviously plotted over the first quartile line. Thus, this is not only the first quartile, but simultaneously the median. Therefore, your median is also about 6. You should be able to verify this from your data, by calculating the quartiles and the median.
Missing quartile in boxplot The "box" in a boxplot extends from the first to the third quartile, i.e., from the 25th to the 75th percentile. Visually, this means that your 25th percentile is around 6 messages, and your 75th perc
30,053
How to measure the shift between two cumulative distribution functions (CDFs)?
Think about what a CDF represents in terms of probability. Let the variables on the x-axis be referred to as $x$ and y-axis values be referred to as $y$. By definition the cumulative distribution function is showing the probability that a variable is less than or equal to $x$. More specifically, if you look at $x=0$ for each curve the CDF is telling you: $P_{\text{red}}(X \leq 0) \approx 0.5$ and $P_{\text{green}}(X \leq 0) \approx 0.7$. Your question is a little vague so I will answer it in two parts. How meaningful is the difference at a particular point?: Let's assume $X$ represents the difference in points from an average test score (with a negative value representing below average and a positive value representing above average). Let the green curve represent boys and the red curve represent girls. Now, $P_{\text{red}}(X \leq 0) \approx 0.5$ and $P_{\text{green}}(X \leq 0) \approx 0.7$ tells us that the probability of a boy scoring below average is higher than the probability of a girl scoring below average. If we look at the CDF as a whole (green always above red), this suggests that, in your sample population, girls score higher than boys. Whether or not this result is statistically significant is yet to be determined. How meaningful is the difference overall? (edited as a response to @whuber): This depends on how you use it. For instance, if the green CDF represented the CDF of some reference distribution and the red CDF was an empirical sample distribution, then the point-by-point vertical differences can be used in a Kolmogorov–Smirnov test for equality between the two distributions. The fact that the green "leads" the red and the two curves are similarly shaped contributes to the fact that the green is always above the red, but this does not necessarily have to be the case. Consider the case where your populations do not come from the same underlying distribution. In that case the shapes of the CDFs would differ, and the fact that the green "leads" the red would not necessarily result in the green always being above the red. For example, here are various CDFs of a logistic distribution (from Wikipedia). Notice that the red curve in the plot above "leads" (starts from a nonzero value) before the rest of the curves do, but ultimately ends up below most of the curves as the x-values approach x=20.
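If it helps to see both readings side by side, here is a small R sketch with simulated scores (the group labels, means, and sample sizes are invented for illustration). ecdf() reproduces the pointwise reading of each curve at $x=0$, and ks.test() is one standard way to ask whether the overall difference between the two CDFs is statistically significant.

set.seed(1)
girls <- rnorm(200, mean = 0)      # plays the role of the red curve: P(X <= 0) around 0.5
boys  <- rnorm(200, mean = -0.5)   # plays the role of the green curve: P(X <= 0) around 0.7

ecdf(girls)(0)                     # empirical CDF of the girls' scores, evaluated at 0
ecdf(boys)(0)                      # empirical CDF of the boys' scores, evaluated at 0

ks.test(girls, boys)               # two-sample Kolmogorov-Smirnov test on the whole curves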
How to measure the shift between two cumulative distribution functions (CDFs)?
Think about what a CDF represents in terms of probability. Let the variables on the x-axis be referred to as $x$ and y-axis values be referred to as $y$. By definition the cumulative distribution func
How to measure the shift between two cumulative distribution functions (CDFs)? Think about what a CDF represents in terms of probability. Let the variables on the x-axis be referred to as $x$ and y-axis values be referred to as $y$. By definition the cumulative distribution function is showing the probability that a variable is less than or equal to $x$. More specifically, if you look at $x=0$ for each curve the CDF is telling you: $P_{\text{red}}(X \leq 0) \approx 0.5$ and $P_{\text{green}}(X \leq 0) \approx 0.7$. Your question is a little vague so I will answer it in two parts. How meaningful is the difference at a particular point?: Let's assume $X$ represents the difference in points from an average test score (with a negative value representing below average and positive value representing above average). Let the green curve represent boys and red curve represent girls. Now, $P_{\text{red}}(X \leq 0) \approx 0.5$ and $P_{\text{green}}(X \leq 0) \approx 0.7$ tells us that the probability of a boy scoring below average is higher than the probability of a girl scoring below average. If we look at the CDF as whole (green always above red) this suggests in your sample population, girls score higher than boys. Whether or not this result is statistically significant is yet to be determined. How meaningful is the difference overall? (edited as a response to @whuber) : This depends on how you use it. For instance, if the green CDF represented the CDF of some reference distribution and the red CDF was an empirical sample distribution, then the point by point vertical differences can be used in a Kolmogorov–Smirnov test for equality between the two distributions. The fact that the green "leads" the red and the two curves are similarly shaped contribute to the fact that the green is always above the red, but this does not necessarily have to be the case. Consider that your populations do not come from the same underlying distribution. In this case the shape of the CDF would differ and the fact that the green "leads" the red would not necessarily result in the green always being above the red. For example, here are various CDFs of a logistic distribution (from Wikipedia) Notice that the red curve in the plot above "leads" (starts from a nonzero value) before the rest of the curves do, but ultimately end up below most of the curves as the x-values approach x=20.
How to measure the shift between two cumulative distribution functions (CDFs)? Think about what a CDF represents in terms of probability. Let the variables on the x-axis be referred to as $x$ and y-axis values be referred to as $y$. By definition the cumulative distribution func
30,054
How to measure the shift between two cumulative distribution functions (CDFs)?
The absolute value of this area is $$\int_{x=-\infty}^\infty \lvert F(x) - G(x)\rvert \,\mathrm{d}x,$$ which note – at least for continuous distributions – is exactly equal to $$\int_{p=0}^1 \lvert F^{-1}(p) - G^{-1}(p)\rvert \,\mathrm{d}p.$$ In one dimension, the latter is the 1-Wasserstein distance, the 1-Kantorovich distance, or the "earth-mover's distance." It is quite a reasonable distance between probability distributions, which is easy to compute between one-dimensional distributions based on their cdfs. For multivariate distributions, there is a natural extension (not based on CDFs, which become hard to work with), most commonly defined based on optimal transport. You can think of it this way: think of each density function as a pile of dirt. The amount of dirt that you need to move to transform one density into another is exactly this distance. This leads to the name of "earth-mover's distance." It's not entirely obvious at first that this distance corresponds to the difference in area between CDFs. But imagine doing this for two point mass distributions, one at $x$ and one at $x'$; the area between their CDFs is a rectangle, with area $1 \times \lvert x - x' \rvert$, exactly the amount that you need to move the "dirt." You can then envision doing the same thing for a collection of point masses, getting a series of rectangles that you add up. When you go to continuous distributions in the limit, you get the integral written above, and it should hopefully make sense that these are the same thing. The traditional way to estimate this distance from samples is by directly computing this transportation problem with a linear program, though there are more recent fast approximations. The beautiful Kantorovich-Rubinstein duality also applies to this distance. This relation has this year led to an explosion of interest in the Wasserstein distance among the deep learning community, via this paper which uses it for generative modeling. The distance has also been popular in computer vision applications for decades.
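Here is a minimal R sketch of that computation for two samples (simulated normals with a 0.3 shift, chosen purely for illustration). With equal sample sizes, the area between the two empirical CDFs reduces to the mean absolute difference between the sorted values, i.e. the empirical version of the quantile integral above; packages such as transport offer more general implementations.

set.seed(1)
x <- rnorm(1000)               # sample from F
y <- rnorm(1000, mean = 0.3)   # sample from G, shifted by 0.3

# With equal sample sizes, the integral of |F_n - G_n| equals the mean absolute
# difference of the order statistics (matched empirical quantiles).
w1 <- mean(abs(sort(x) - sort(y)))
w1                             # close to 0.3, the true amount of "dirt" that must be moved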
How to measure the shift between two cumulative distribution functions (CDFs)?
The absolute value of this area is $$\int_{x=-\infty}^\infty \lvert F(x) - G(x)\rvert \,\mathrm{d}x,$$ which note – at least for continuous distributions – is exactly equal to $$\int_{x=-\infty}^\inft
How to measure the shift between two cumulative distribution functions (CDFs)? The absolute value of this area is $$\int_{x=-\infty}^\infty \lvert F(x) - G(x)\rvert \,\mathrm{d}x,$$ which note – at least for continuous distributions – is exactly equal to $$\int_{x=-\infty}^\infty \lvert F^{-1}(x) - G^{-1}(x)\rvert \,\mathrm{d}x.$$ In one dimension, the latter is the 1-Wasserstein distance, the 1-Kantorovich distance, or the "earth-mover's distance." It is quite a reasonable distance between probability distributions, which is easy to compute between one-dimensional distributions based on their cdfs. For multivariate distributions, there is a natural extension (not based on CDFs, which become hard to work with), most commonly defined based on optimal transport. You can think of it this way: think of each density function as a pile of dirt. The amount of dirt that you need to move to transform one density into another is exactly this distance. This leads to the name of "earth-mover's distance." It's not entirely obvious at first that this distance corresponds to the difference in area between CDFs. But imagine doing this for two point mass distributions, one at $x$ and one at $x'$; the area between their CDFs is a rectangle, with area $1 \times \lvert x - x' \rvert$, exactly the amount that you need to move the "dirt." You can then envision doing the same thing for a collection of point masses, getting a series of rectangles that you add up. When you go to continuous distributions in the limit, you get the integral written above, and it should hopefully make sense that these are the same thing. The traditional way to estimate this distance from samples is by directly computing this transportation problem with a linear program, though there are more recent fast approximations. The beautiful Kantorovich-Rubinstein duality also applies to this distance. This relation has this year has led to an explosion of interest in the Wasserstein distance among the deep learning community, via this paper which uses it for generative modeling. The distance has also been popular in computer vision applications for decades.
How to measure the shift between two cumulative distribution functions (CDFs)? The absolute value of this area is $$\int_{x=-\infty}^\infty \lvert F(x) - G(x)\rvert \,\mathrm{d}x,$$ which note – at least for continuous distributions – is exactly equal to $$\int_{x=-\infty}^\inft
30,055
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$?
The formulas you're using have long been known to be numerically unstable. If the squared means are large compared to the variances and/or products-of-means are large compared to the covariances, then the difference in the numerator and in the bracketed terms in the denominator can have problems with catastrophic cancellation. This can sometimes lead to calculated variances or covariances that don't even retain a single digit of precision (i.e. that are worse than useless). Don't use these formulas. They made some sense when people calculated by hand, where you could see, and deal with, such loss of precision when it happened – e.g., use of these formulas was normally preceded by eliminating the common digits, so numbers like this: 8901234.567... 8901234.575... 8901234.412... would first have 8901234 subtracted (at least) – which would save a lot of time in the working as well as avoid the cancellation issue. Means (and similar quantities) would then be adjusted back at the end, while variances and covariances could be used as-is. Similar ideas (and other ideas) can be used with computers, but really you need to use them all the time, rather than trying to guess when you might need them. Efficient ways to deal with this issue have been known for over half a century – e.g., see Welford's 1962 paper [1] (where he gives one-pass variance and covariance algorithms – stable two-pass algorithms were well known already). Chan et al. [2] (1983) compare a number of variance algorithms and offer a way to decide when to use which (though in most implementations generally people use only one algorithm). See Wikipedia's discussion on this issue in relation to variance and its discussion on variance algorithms. Similar comments apply to covariance. [1] B. P. Welford (1962), "Note on a Method for Calculating Corrected Sums of Squares and Products", Technometrics Vol. 4, Iss. 3, 419-420 (link) Also see https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm [2] T.F. Chan, G.H. Golub and R.J. LeVeque (1983), "Algorithms for Computing the Sample Variance: Analysis and Recommendations", The American Statistician, Vol. 37, No. 3 (Aug. 1983), pp. 242-247. Tech report version
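To make the contrast concrete, here is a small R sketch (the function names are mine, not from any library) that computes the sample variance three ways: the unstable one-pass textbook formula, Welford's one-pass update, and R's built-in var(). With a large common offset, of the kind shown in the digits example above, the textbook formula returns a value with essentially no correct digits while the other two agree.

naive_var <- function(x) {          # the unstable "sum of squares minus n * mean^2" formula
  n <- length(x)
  (sum(x^2) - n * mean(x)^2) / (n - 1)
}

welford_var <- function(x) {        # Welford (1962): one pass, numerically stable
  m <- 0; s <- 0; n <- 0
  for (xi in x) {
    n <- n + 1
    d <- xi - m
    m <- m + d / n                  # update the running mean
    s <- s + d * (xi - m)           # update the running sum of squared deviations
  }
  s / (n - 1)
}

x <- 1e9 + c(0.567, 0.575, 0.412)   # same fractional parts as above, with an even larger offset
c(naive = naive_var(x), welford = welford_var(x), builtin = var(x))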
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$?
The formulas you're using have long been known to be numerically unstable. If the squared means are large compared to the variances and/or products-of-means are large compared to the covariances, then
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$? The formulas you're using have long been known to be numerically unstable. If the squared means are large compared to the variances and/or products-of-means are large compared to the covariances, then the difference in the numerator and in the bracketed terms in the denominator can have problems with catastrophic cancellation. This can sometimes lead to calculated variances or covariances that don't even retain a single digit of precision (i.e. that are worse than useless). Don't use these formulas. They made some sense when people calculated by hand, where you could see, and deal with such loss of precision when it happened – e.g., use of these formulas was normally preceded by eliminating the common digits, so numbers like this: 8901234.567... 8901234.575... 8901234.412... would first have 8901234 subtracted (at least) – which would save a lot of time in the working as well as avoid the cancellation issue. Means (and similar quantities) would then be adjusted back at the end, while variances and covariances could be used as-is. Similar ideas (and other ideas) can be used with computers, but really you need to use them all the time, rather than trying to guess when you might need them. Efficient ways to deal with this issue have been known for over half a century – e.g., see Welford's 1962 paper [1] (where he gives one-pass variance and covariance algorithms – stable two-pass algorithms were well know already). Chan et al [2] (1983) compare a number of variance algorithms and offer a way to decide when to use which (though in most implementations generally people use only one algorithm). See Wikipedia's discussion on this issue in relation to variance and its discussion on variance algorithms. Similar comments apply to covariance. [1] B. P. Welford (1962), "Note on a Method for Calculating Corrected Sums of Squares and Products", Technometrics Vol. 4 , Iss. 3, 419-420 (link) Also see https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm [2] T.F. Chan, G.H. Golub and R.J. LeVeque (1983) "Algorithms for Computing the Sample Variance: Analysis and Recommendations", The American Statistician, Vol. 37, No. 3 (Aug.1983), pp. 242-247 Tech report version
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$? The formulas you're using have long been known to be numerically unstable. If the squared means are large compared to the variances and/or products-of-means are large compared to the covariances, then
30,056
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$?
The Pearson correlation coefficient is indeed between $-1$ and $+1$ (inclusive). This follows from the Cauchy-Schwarz inequality. Getting a correlation coefficient of $1.0000000002$ is possibly (but unlikely) due to numerical error, while -3 almost certainly indicates an error in implementation (or a platform unsuitable for numerics! :).
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$?
The Pearson correlation coefficient is indeed between $-1$ and $+1$ (inclusive). This follows from the Cauchy-Schwarz inequality. Getting a correlation coefficient of $1.0000000002$ is possibly (but u
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$? The Pearson correlation coefficient is indeed between $-1$ and $+1$ (inclusive). This follows from the Cauchy-Schwarz inequality. Getting a correlation coefficient of $1.0000000002$ is possibly (but unlikely) due to numerical error, while -3 almost certainly indicates an error in implementation (or a platform unsuitable for numerics! :).
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$? The Pearson correlation coefficient is indeed between $-1$ and $+1$ (inclusive). This follows from the Cauchy-Schwarz inequality. Getting a correlation coefficient of $1.0000000002$ is possibly (but u
30,057
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$?
The likely reason is the round-off error, that is, the error due to the fact that a computer performs calculations to a finite precision - keeping only a certain number of significant digits of a number (e.g., only 8 digits or only 16 digits - see, e.g., here.) It can be swept under the carpet as a numerical error, since one encounters it when performing numerical calculations. However, it is different in nature: "numerical errors" arise from the approximate nature of numerical calculations, whereas round-off error is due to the finite number of digits stored in the computer memory. Numerical errors are usually defined by the number of digits after the decimal point, whereas the round-off error concerns the total number of digits. Thus, if using, e.g., the decimal32 format, the computer would be able to distinguish between $0.001$ and $0.002$, but not between $7000000.001$ and $7000000.002$. In practice the round-off error often manifests itself as non-zero last digit(s) in a number after performing a sequence of algebraic operations, e.g., getting $1.00000002$ instead of $1.00000000$. One thus should treat the results as correct only up to the precision of the number format used. Some software provides comparison operators that are robust to small errors. E.g., numpy.allclose(a,b) allows comparing arrays up to a specified tolerance, even though they might not be equal under the usual comparison, like (a == b).all()
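The same point can be demonstrated in R (used here only because the rest of this thread is in R; base R's all.equal() plays roughly the role of numpy.allclose).

x <- 0.1 + 0.2
x == 0.3                        # FALSE: neither 0.1 nor 0.2 is exactly representable in binary
print(x, digits = 17)           # 0.30000000000000004
isTRUE(all.equal(x, 0.3))       # TRUE: equal up to a numerical tolerance

# A double-precision analogue of the decimal32 example above:
1e16 + 0.001 == 1e16 + 0.002    # TRUE: at this magnitude the trailing digits are lost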
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$?
The likely reason is the round-off error, that is the error due to the fact that a computer performs calculations to a finite precision - keeping only certain number of significant digits of a number
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$? The likely reason is the round-off error, that is the error due to the fact that a computer performs calculations to a finite precision - keeping only certain number of significant digits of a number (e.g., only 8 digits or only 16 digits - see , e.g., here.) It can be swept under the carpet as a numerical error, since one encounters it when performing numerical calculations. However, it is different in nature: "numerical errors" arise from an approximate nature of numerical calculations, whereas round-off error is due to the finite number of digits stored in the computer memory. Numerical errors are usually defined by the number of digits after the floating point, whereas the round-off error concerns the total number of digits. Thus, if using, e.g., decimal32 format, the computer would be able to distinguish between $0.001$ and $0.002$, but not between $7000000.001$ and $7000000.002$. In practice the rounding-off error often manifests itself as non-zero last digit(s) in a number after performing a sequence of algebraic operations, e.g., like getting $1.00000002$ instead of $1.00000000$. One thus should treat the results as correct only up to the precision of the number format used. Some software provide common comparison operators robust to small error. E.g., numpy.allclose(a,b) allows comparing arrays up to specified tolerance, although they might not be equal when using usual comparison, like (a == b).all()
Is it possible to have Pearson correlation coefficient values $< -1$ or values $> 1$? The likely reason is the round-off error, that is the error due to the fact that a computer performs calculations to a finite precision - keeping only certain number of significant digits of a number
30,058
Basic Gini impurity derivation
I don't know about the algebra, but you can prove the identity with a probabilistic argument. If I roll two dice with $m$ sides and the probability of side $i$ is $f_i$, then the probability of a double is $\sum f_i^2$. Thus $1-\sum f_i^2$ is the probability that I roll distinct values. But arguing differently, the probability, say, that I get $i$ followed by $j$ is $f_if_j$. Summing over all possibilities, with $i \neq j$, I get the probability of rolling distinct outcomes: $\sum f_if_j$, and the identity is proven. As for the first point, if you roll the $m$-sided die, there is a probability $f_i$ that side $i$ comes up. Suppose I have to guess the value, and I do this by rolling a die of my own with the same weights. The probability that I guess wrong, conditional on value $i$ being true, is $1-f_i$. The probability that I get it wrong, summing over the possible values, is $\sum f_i(1-f_i)$.
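A short R simulation (the three class frequencies are arbitrary, chosen only for illustration) makes the dice argument tangible: the simulated probability that the guess fails to match the roll sits right next to both algebraic forms of the Gini impurity.

set.seed(42)
f <- c(0.5, 0.3, 0.2)                                     # hypothetical class frequencies
n <- 1e5
roll  <- sample(length(f), n, replace = TRUE, prob = f)   # the true value
guess <- sample(length(f), n, replace = TRUE, prob = f)   # an independent guess with the same weights

mean(roll != guess)                                       # simulated misclassification probability
1 - sum(f^2)                                              # Gini impurity, first form (0.62 here)
sum(f * (1 - f))                                          # Gini impurity, second form (also 0.62)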
Basic Gini impurity derivation
I don't know about the algebra, but you can prove the identity with a probabilistic argument. If I roll two dice with $m$ sides and the probability of side $i$ is $f_i$, then the probability of a doub
Basic Gini impurity derivation I don't know about the algebra, but you can prove the identity with a probabilistic argument. If I roll two dice with $m$ sides and the probability of side $i$ is $f_i$, then the probability of a double is $\sum f_i^2$. Thus $1-\sum f_i^2$ is the probability that I roll distinct values. But arguing differently, the probability, say, that I get $i$ followed by $j$ is $f_if_j$. Summing over all possibilities, with $i \neq j$, I get the probability of rolling distinct outcomes: $\sum f_if_j$, and the identity is proven. As for the first point, If you role the $m$ sided die, there is a probability $f_i$ that side $i$ comes up. Suppose I have to guess the value, and I do this by rolling a die of my own with the same weights. The probability that I guess wrong, conditional on value $i$ being true, is $1-f_i$. The probability that I get it wrong, summing over the possible values, is $\sum f_i(1-f_i)$.
Basic Gini impurity derivation I don't know about the algebra, but you can prove the identity with a probabilistic argument. If I roll two dice with $m$ sides and the probability of side $i$ is $f_i$, then the probability of a doub
30,059
Basic Gini impurity derivation
I think it's best to answer your question in reverse order, as we'll back into your first question by answering your second. Question 2 Imagine you have a probability distribution function ($f_i$) that distributes its probabilities as such: I can then square the probabilities ($f_i^2$) and get: Another way of looking at it is putting each probability distribution along the axis of a grid. Each cell now represents the product of the function along the respective axes. The grid itself sums to 1, just like you'd see in a two-dice-roll probability table. It should be clear that 1 minus the sum of the diagonal probabilities is the same as the non-highlighted squares below. If we call one of the axes $k$ to differentiate it, but still have it render the same function, we can then make the statement $1-\sum f_i^2 = \sum_{i \neq k} f_if_k$. Question 1 We can now use some of the intuition from answering question 2 to drive the intuition for question 1. Let's take our same table from question 2, but change what the two axes mean. Across one axis we'll have labels for objects, while on the other we'll have the actual objects. For a concrete example, let's assume we have a bowl of fruit: apples, oranges and pears. In another bowl we'll have labels corresponding to apples, oranges and pears in the same proportion as the actual objects. If we then look at the probability of choosing each at random, we get the following distribution. Now we want to look at the joint distribution. The Gini impurity tells us the probability that we select an object at random and a label at random and they are an incorrect match. The Gini impurity is the sum of the probabilities in the black shaded areas. These are where the label does not match the object, hence the impurity. This should look very similar to the answer to question 2. If the explanation for question 2 convinced you that $1-\sum f_i^2 = \sum_{i \neq k} f_if_k$, you should be able to work backwards through the algebra you provided to see that this also equals $\sum f_i(1-f_i)$.
Basic Gini impurity derivation
I think it's best to answer your question in reverse order as we'll back into your first question by answering your second. Question 2 Imagine you have a probability distribution function ($f_i$) that
Basic Gini impurity derivation I think it's best to answer your question in reverse order as we'll back into your first question by answering your second. Question 2 Imagine you have a probability distribution function ($f_i$) that distributes its probabilities as such: I can then square the probabilities ($f_i^2$) and get: Another way of looking at it is putting each probability distribution along the axis of a grid. Each cell now represents the product of the function along the respective axes. The grid itself sums to 1, just like you'd see in a two dice roll probability table. It should be clear that 1 minus the sum of the diagonal probabilities is the same as the non-highlighted squares below. If we call one of the axis k to differentiate it, but still have it render the same function, we can then make the statement. $1-\sum f_i^2$ = $\sum_{i \neq k} f_if_k$ Question 1 We can now use some of the intuition from answering question 2 to drive the intuition for question 1. Let's take our same table from question 2, but change what the two axes mean. Across one axis we'll have labels for objects, while on the other we'll have the actual object. For a concrete example, let's assume we have a bowl of fruit: apples, oranges and pears. In another bowl we'll have labels corresponding to apples, oranges and pears in the same proportion as the actual objects. If we then look at the probability of choosing each at random we get the following distribution. Now we want to look at the joint distribution. The Geni impurity tells us the probability that we select an object at random and a label at random and it is an incorrect match. The Geni impurity is the sum of the probabilities in the black shaded areas. These are where the label does not match the object, thus the impurity. This should look very familiar to the answer to question 2. If the explanation for question 2 convinced you that $1-\sum f_i^2$, you should be able to work backwards through the algebra you provided to see that also equals $\sum f_i(1-f_i)$
Basic Gini impurity derivation I think it's best to answer your question in reverse order as we'll back into your first question by answering your second. Question 2 Imagine you have a probability distribution function ($f_i$) that
30,060
Basic Gini impurity derivation
1) Remember that the classification is done randomly proportional to the frequency of the value. $i$ is then miscategorized with probability $(1-f_i)$. 2) The $f_i$ sum to 1. So if I sum all $f_if_j$ this equals 1*1. So if I sum just the ones where $i\neq j$ this equals $1- \sum f_if_i$. Sorry for the brevity, answering from my phone. Comment with questions.
Basic Gini impurity derivation
1) Remember that the classification is done randomly proportional to the frequency of the value. $i$ is then miscategorized with probability $(1-f_i)$. 2) The $f_i$ sum to 1. So if I sum all $f_if_j
Basic Gini impurity derivation 1) Remember that the classification is done randomly proportional to the frequency of the value. $i$ is then miscategorized with probability $(1-f_i)$. 2) The $f_i$ sum to 1. So if I sum all $f_if_j$ this equals 1*1. So if I sum just the ones where $i\neq j$ this equals $1- \sum f_if_i$. Sorry for the brevity, answering from my phone. Comment with questions.
Basic Gini impurity derivation 1) Remember that the classification is done randomly proportional to the frequency of the value. $i$ is then miscategorized with probability $(1-f_i)$. 2) The $f_i$ sum to 1. So if I sum all $f_if_j
30,061
Basic Gini impurity derivation
Gini index Main idea is here. We can make an example based on this statquest. Let's say there is a machine that can detect Heart Disease (HD). The machine predicts HD 30% of the time. Following is the sample we have: HD !HD Machine 30% 70% This means the following cases are possible: The machine classifies as HD and it is HD (P = 0.3*0.3) The machine classifies as !HD and it is !HD (P = 0.7*0.7) The machine classifies as HD but it is !HD The machine classifies as !HD but it is HD We pray that cases 3&4 happen less often. The sum of all probabilities is 1. P(3&4) is therefore given by 1-(0.3^2)-(0.7^2)=0.42. P(3&4) is the impurity, or how bad the machine's prediction is, AKA GINI=0.42. The alternative is to check if someone is clutching his chest or not due to chest pain (CP) and then guess, based on probability data, whether he has HD or not. The following is the sample we have. For each case we calculate the GINI. Then we take the average of it (assuming similar sample sizes) and this estimates the GINI impurity of using CP to predict HD. HD !HD GINI !CP 25% 75% 0.375 CP 80% 20% 0.32 avg NA NA ~0.35 The smaller the impurity, the better. So we decide that instead of buying the machine (GINI=0.42) we can just use CP as an indicator (GINI≈0.35). P.S. This is also an explanation of what happens at each node of a decision tree, which is where I came across the GINI index.
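The arithmetic in this example is quick to reproduce in R; with equally sized nodes, the simple average of the two node impurities comes out near 0.35.

gini <- function(p) 1 - sum(p^2)   # impurity of a node with class proportions p

gini(c(0.30, 0.70))                # the machine: 0.42
g_nocp <- gini(c(0.25, 0.75))      # the "no chest pain" node: 0.375
g_cp   <- gini(c(0.80, 0.20))      # the "chest pain" node: 0.32
mean(c(g_nocp, g_cp))              # simple average over equally sized nodes: 0.3475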
Basic Gini impurity derivation
Gini index Main idea is here. We can make an example based on this statquest. Let's say there is a machine that can detect Heart Disease (HD). The machine can predict HD 30% of the time. Following is
Basic Gini impurity derivation Gini index Main idea is here. We can make an example based on this statquest. Let's say there is a machine that can detect Heart Disease (HD). The machine can predict HD 30% of the time. Following is the sample we have: HD !HD Machine 30% 70% This means the following cases are possible: Machine classify as HD and it is HD (P = 0.3*0.3) Machine classify as !HD and it is !HD (P = 0.7*0.7) Machine classify as HD but it is !HD Machine classify as !HD but it is HD We pray that cases 3&4 happen less often. The sum of all probabilities is 1. P(3&4) is therefore given by 1-(0.3^2)-(0.7^2)=0.42. P(3&4) is the impurity or how bad the machines' prediction is, AKA GINI=0.42. The alternative is to check if someone is clutching his chest or not due to chest pain (CP) and then guess based on probability data if he has HD or not. The following is the sample we have. For each case we calculate the GINI. Then we take the average of it (assuming similar sample size) and this estimates the GINI impurity using CP to predict HD. HD !HD GINI !CP 25% 75% 0.375 CP 80% 20% 0.32 avg NA NA 0.38 Smaller the impurity the better. So we decide instead of buying the machine (GINI=0.42) we can just use CP as an indicator (GINI=0.38). P.S. This is also an explanation of what happens at each node of a decision tree, which is where I came across GINI index.
Basic Gini impurity derivation Gini index Main idea is here. We can make an example based on this statquest. Let's say there is a machine that can detect Heart Disease (HD). The machine can predict HD 30% of the time. Following is
30,062
What does a p-value of exactly 1.0000 mean?
I'm not exactly familiar with the specific R functions, but if there's a Bonferroni correction, I believe that is likely to be the explanation. For example, suppose you tested two hypotheses and got unadjusted p = 0.6, 0.6. The simplistic Bonferroni adjustment would be 1.2, 1.2, but since these are not valid probabilities, it would truncate these to 1.0 and 1.0.
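In R this truncation is easy to see directly; p.adjust() is one common place where it happens (the two p-values below are the hypothetical 0.6 and 0.6 from the example).

p <- c(0.6, 0.6)                      # unadjusted p-values for two tests
p.adjust(p, method = "bonferroni")    # 1, 1: the doubled values are capped at 1
pmin(p * length(p), 1)                # the same adjustment written out by hand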
What does a p-value of exactly 1.0000 mean?
I'm not exactly familiar with the specific R functions, but if there's a Bonferroni correction, I believe that is likely to be the explanation. For example, suppose you tested two hypotheses and got u
What does a p-value of exactly 1.0000 mean? I'm not exactly familiar with the specific R functions, but if there's a Bonferroni correction, I believe that is likely to be the explanation. For example, suppose you tested two hypotheses and got unadjusted p = 0.6, 0.6. The simplistic Bonferroni adjustment would be 1.2, 1.2, but since these are not valid probabilities, it would truncate these to 1.0 and 1.0.
What does a p-value of exactly 1.0000 mean? I'm not exactly familiar with the specific R functions, but if there's a Bonferroni correction, I believe that is likely to be the explanation. For example, suppose you tested two hypotheses and got u
30,063
What does a p-value of exactly 1.0000 mean?
If the data are discrete it's possible to get an exact p-value of 1 on a paired t-test, when the mean difference is exactly 0. Otherwise, yes, a value just less than 1 may be shown as 1 at some given number of significant figures.
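A toy example in R (the numbers are invented so that the paired differences cancel exactly):

x <- c(12, 15, 20, 11)
y <- c(13, 14, 21, 10)        # paired differences -1, 1, -1, 1, so the mean difference is exactly 0
t.test(x, y, paired = TRUE)   # t = 0 and the p-value is exactly 1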
What does a p-value of exactly 1.0000 mean?
If the data are discrete it's possible to get an exact p-value of 1 on a paired t-test, when the mean difference is exactly 0. Otherwise, yes, a value just less than 1 may be shown as 1 at some given
What does a p-value of exactly 1.0000 mean? If the data are discrete it's possible to get an exact p-value of 1 on a paired t-test, when the mean difference is exactly 0. Otherwise, yes, a value just less than 1 may be shown as 1 at some given number of significant figures.
What does a p-value of exactly 1.0000 mean? If the data are discrete it's possible to get an exact p-value of 1 on a paired t-test, when the mean difference is exactly 0. Otherwise, yes, a value just less than 1 may be shown as 1 at some given
30,064
What does a p-value of exactly 1.0000 mean?
@Cliff-ab already provided a right answer. In case you want to get further insight on the meaning of the obtained p-values, making a histogram out of them (before correcting for multiple testing) may be of help. In particular, as nicely described by @david-robinson in http://varianceexplained.org/statistics/interpreting-pvalue-histogram/, p-values close to 1.0 may indicate that you have been applying a one-sided test when maybe you wanted a two-sided test or may be caused by missing values in your data, distorting the results of the test. Another option (as @Cliff-AB mentions) is the Bonferroni correction that you are applying, which seems the most plausible cause.
What does a p-value of exactly 1.0000 mean?
@Cliff-ab already provided a right answer. In case you want to get further insight on the meaning of the obtained p-values, making a histogram out of them (before correcting for multiple testing) may
What does a p-value of exactly 1.0000 mean? @Cliff-ab already provided a right answer. In case you want to get further insight on the meaning of the obtained p-values, making a histogram out of them (before correcting for multiple testing) may be of help. In particular, as nicely described by @david-robinson in http://varianceexplained.org/statistics/interpreting-pvalue-histogram/, p-values close to 1.0 may indicate that you have been applying a one-sided test when maybe you wanted a two-sided test or may be caused by missing values in your data, distorting the results of the test. Another option (as @Cliff-AB mentions) is the Bonferroni correction that you are applying, which seems the most plausible cause.
What does a p-value of exactly 1.0000 mean? @Cliff-ab already provided a right answer. In case you want to get further insight on the meaning of the obtained p-values, making a histogram out of them (before correcting for multiple testing) may
30,065
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two One-Way ANOVAs?
Yes, for several reasons! 1) Simpsons paradox. Unless the design is balanced, if one of the variables affects the outcome, you can't properly assess even the direction of the effect of the other one without adjusting for the first (see the first diagram at the link, in particular - reproduced below**). This illustrates the problem - the within-group effect is increasing (the two colored lines), but if you ignore the red-blue grouping you get a decreasing effect (the dashed, gray line) -- completely the wrong sign! While that's showing a situation with one continuous and one grouping variable, similar things can happen when unbalanced two-way main effects ANOVA is treated as two one-way models. 2) Let's assume there's a completely balanced design. Then you still want to do it, because if you ignore the second variable while looking at the first (assuming both have some impact) then the effect of the second goes into the noise term, inflating it... and so biasing all your standard errors upward. In which case, significant - and important - effects might look like noise. Consider the following data, a continuous response and two nominal categorical factors: y x1 x2 1 2.33 A 1 2 1.90 B 1 3 4.77 C 1 4 3.48 A 2 5 1.34 B 2 6 4.16 C 2 7 5.88 A 3 8 2.56 B 3 9 5.97 C 3 10 5.10 A 4 11 2.62 B 4 12 6.21 C 4 13 6.54 A 5 14 6.01 B 5 15 9.62 C 5 The two way main effects anova is highly significant (because it's balanced, order doesn't matter): Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x1 2 26.644 13.3220 24.284 0.0004000 x2 4 38.889 9.7222 17.722 0.0004859 Residuals 8 4.389 0.5486 But the individual one way anovas are not significant at the 5% level: (1) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x1 2 26.687 13.3436 3.6967 0.05613 Residuals 12 43.315 3.6096 (2) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x2 4 38.889 9.7222 3.1329 0.06511 Residuals 10 31.033 3.1033 Notice in each case that the mean square for the factor was unchanged ... but the residual mean squares increased dramatically (from 0.55 to over 3 in each case). That's the effect of leaving out an important variable. **(the above diagram was made by Wikipedia user Schutz, but placed in the public domain; while attribution isn't required for items in the public domain, I feel it's worthy of recognition)
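For readers who want to reproduce the tables above, here is one way to enter the same 15 observations and fit the models in R (the object and variable names are mine; the numbers are exactly those listed in the answer).

dat <- data.frame(
  y  = c(2.33, 1.90, 4.77, 3.48, 1.34, 4.16, 5.88, 2.56,
         5.97, 5.10, 2.62, 6.21, 6.54, 6.01, 9.62),
  x1 = rep(c("A", "B", "C"), times = 5),
  x2 = factor(rep(1:5, each = 3))
)

anova(lm(y ~ x1 + x2, data = dat))   # two-way main-effects ANOVA: both factors highly significant
anova(lm(y ~ x1, data = dat))        # one-way ANOVA in x1 only: not significant at the 5% level
anova(lm(y ~ x2, data = dat))        # one-way ANOVA in x2 only: not significant at the 5% level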
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two
Yes, for several reasons! 1) Simpsons paradox. Unless the design is balanced, if one of the variables affects the outcome, you can't properly assess even the direction of the effect of the other one w
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two One-Way ANOVAs? Yes, for several reasons! 1) Simpsons paradox. Unless the design is balanced, if one of the variables affects the outcome, you can't properly assess even the direction of the effect of the other one without adjusting for the first (see the first diagram at the link, in particular - reproduced below**). This illustrates the problem - the within-group effect is increasing (the two colored lines), but if you ignore the red-blue grouping you get a decreasing effect (the dashed, gray line) -- completely the wrong sign! While that's showing a situation with one continuous and one grouping variable, similar things can happen when unbalanced two-way main effects ANOVA is treated as two one-way models. 2) Let's assume there's a completely balanced design. Then you still want to do it, because if you ignore the second variable while looking at the first (assuming both have some impact) then the effect of the second goes into the noise term, inflating it... and so biasing all your standard errors upward. In which case, significant - and important - effects might look like noise. Consider the following data, a continuous response and two nominal categorical factors: y x1 x2 1 2.33 A 1 2 1.90 B 1 3 4.77 C 1 4 3.48 A 2 5 1.34 B 2 6 4.16 C 2 7 5.88 A 3 8 2.56 B 3 9 5.97 C 3 10 5.10 A 4 11 2.62 B 4 12 6.21 C 4 13 6.54 A 5 14 6.01 B 5 15 9.62 C 5 The two way main effects anova is highly significant (because it's balanced, order doesn't matter): Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x1 2 26.644 13.3220 24.284 0.0004000 x2 4 38.889 9.7222 17.722 0.0004859 Residuals 8 4.389 0.5486 But the individual one way anovas are not significant at the 5% level: (1) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x1 2 26.687 13.3436 3.6967 0.05613 Residuals 12 43.315 3.6096 (2) Analysis of Variance Table Response: y Df Sum Sq Mean Sq F value Pr(>F) x2 4 38.889 9.7222 3.1329 0.06511 Residuals 10 31.033 3.1033 Notice in each case that the mean square for the factor was unchanged ... but the residual mean squares increased dramatically (from 0.55 to over 3 in each case). That's the effect of leaving out an important variable. **(the above diagram was made by Wikipedia user Schutz, but placed in the public domain; while attribution isn't required for items in the public domain, I feel it's worthy of recognition)
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two Yes, for several reasons! 1) Simpsons paradox. Unless the design is balanced, if one of the variables affects the outcome, you can't properly assess even the direction of the effect of the other one w
30,066
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two One-Way ANOVAs?
Yes. If the two independent variables are related and/or the ANOVA is not balanced, then a two way ANOVA shows you the effect of each variable controlling for the other one.
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two
Yes. If the two independent variables are related and/or the ANOVA is not balanced, then a two way ANOVA shows you the effect of each variable controlling for the other one.
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two One-Way ANOVAs? Yes. If the two independent variables are related and/or the ANOVA is not balanced, then a two way ANOVA shows you the effect of each variable controlling for the other one.
If I'm not interested in the interaction, is there any reason to run a Two-Way ANOVA instead of two Yes. If the two independent variables are related and/or the ANOVA is not balanced, then a two way ANOVA shows you the effect of each variable controlling for the other one.
30,067
Why do a density plot and a rug plot seem to disagree?
From the R package MASS, of the $506$ total observations in Boston, $369$ have a value for tax below 470 and $137$ have a value for tax above 665. In fact 666 is by far the most common value in the data set, appearing $132$ times. So if the area of the density plot to the left is about twice the area to the right, then that could reasonably be taken as representing the distribution. Visual inspection suggests this might be what is happening. A more accurate representation would have the right peak much higher and narrower, and this could be achieved by adjusting the parameters. Added for comments: For example with a much narrower bandwidth for the density function and some manual jitter: library(MASS) plot(density(Boston$tax, bw=5)) rug(Boston$tax + rnorm(length(Boston$tax), sd=5), col=2, lwd=3.5) you would get something like this
Why do a density plot and a rug plot seem to disagree?
From the R package MASS, of the $506$ total observations in Boston, $369$ have a value for tax below 470 and $137$ have a value for tax above 665. In fact 666 is by far the most common value in the d
Why do a density plot and a rug plot seem to disagree? From the R package MASS, of the $506$ total observations in Boston, $369$ have a value for tax below 470 and $137$ have a value for tax above 665. In fact 666 is by far the most common value in the data set, appearing $132$ times. So if the area of the density plot to the left is about twice the area to the right, then that could reasonably be taken as representing the distribution. Visual inspection suggests this might be what is happening. A more accurate representation would have the right peak much higher and narrower, and this could be achieved by adjusting the parameters. Added for comments: For example with a much narrower bandwidth for the density function and some manual jitter: library(MASS) plot(density(Boston$tax, bw=5)) rug(Boston$tax + rnorm(length(Boston$tax), sd=5), col=2, lwd=3.5) you would get something like this
Why do a density plot and a rug plot seem to disagree? From the R package MASS, of the $506$ total observations in Boston, $369$ have a value for tax below 470 and $137$ have a value for tax above 665. In fact 666 is by far the most common value in the d
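A runnable version of the snippet in the answer above, assuming the MASS package is installed; the first plot uses the default bandwidth for comparison, the second uses the narrower bandwidth and jitter suggested in the answer:
library(MASS)

# default bandwidth smears the spike at tax = 666 into a broad bump
plot(density(Boston$tax))
rug(Boston$tax, col = 2, lwd = 3.5)

# much narrower bandwidth plus manual jitter makes density and rug agree better
plot(density(Boston$tax, bw = 5))
rug(Boston$tax + rnorm(length(Boston$tax), sd = 5), col = 2, lwd = 3.5)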
30,068
Why do a density plot and a rug plot seem to disagree?
The way to make a rug plot less misleading is often to use something different instead. Rug plots necessarily are relatively good at showing distinct values and very poor at indicating their relative frequency. Here is a spike representation of the frequency distribution of the data used in the original post. The principle is that often used to show discrete distributions, a spike proportional in height to the frequency of each distinct value. The Boston data as available and documented here were read into Stata and the spikeplot command used. Something similar should be trivial in all good statistical software. If you like, this is a hybrid of a histogram and a rug plot, although historically graphs of this kind may long predate rug plots. Stata documentation for spikeplot, including, more to the point of this question, further examples of such graphs, is accessible here EDIT 3 August 2022 See also this paper for examples of marginal spike histograms as inspired by Harrell, F.E. 2001/2015. Regression Modeling Strategies. Springer. The Boston housing data are often used as an example. Many researchers follow in the path of the original authors, who did not show any plots of the raw data in their paper. Here are four of the more unusual distributions for predictors in that dataset, as shown by quantile plots, plots of the ordered values versus an estimate of cumulative probability. The vertical axis labels are minimum, lower quartile, median, upper quartile and maximum, which in some instances coincide. A quantile plot is good for showing marked spikes and gaps in the data. Naturally an unusual (e.g. spiky) distribution doesn't stop a variable being useful as a predictor, as exemplified also by indicator or dummy variables, but it is a good idea to look at your data before modelling. In some problems, there may be real doubts about data quality, or at least puzzles about definitions and measurements that need to inform the interpretation.
Why do a density plot and a rug plot seem to disagree?
The way to make a rug plot less misleading is often to use something different instead. Rug plots necessarily are relatively good at showing distinct values and very poor at indicating their relative
Why do a density plot and a rug plot seem to disagree? The way to make a rug plot less misleading is often to use something different instead. Rug plots necessarily are relatively good at showing distinct values and very poor at indicating their relative frequency. Here is a spike representation of the frequency distribution of the data used in the original post. The principle is that often used to show discrete distributions, a spike proportional in height to the frequency of each distinct value. The Boston data as available and documented here were read into Stata and the spikeplot command used. Something similar should be trivial in all good statistical software. If you like, this is a hybrid of a histogram and a rug plot, although historically graphs of this kind may long predate rug plots. Stata documentation for spikeplot, including, more to the point of this question, further examples of such graphs, is accessible here EDIT 3 August 2022 See also this paper for examples of marginal spike histograms as inspired by Harrell, F.E. 2001/2015. Regression Modeling Strategies. Springer. The Boston housing data are often used as an example. Many researchers follow in the path of the original authors, who did not show any plots of the raw data in their paper. Here are four of the more unusual distributions for predictors in that dataset, as shown by quantile plots, plots of the ordered values versus an estimate of cumulative probability. The vertical axis labels are minimum, lower quartile, median, upper quartile and maximum, which in some instances coincide. A quantile plot is good for showing marked spikes and gaps in the data. Naturally an unusual (e.g. spiky) distribution doesn't stop a variable being useful as a predictor, as exemplified also by indicator or dummy variables, but it is a good idea to look at your data before modelling. In some problems, there may be real doubts about data quality, or at least puzzles about definitions and measurements that need to inform the interpretation.
Why do a density plot and a rug plot seem to disagree? The way to make a rug plot less misleading is often to use something different instead. Rug plots necessarily are relatively good at showing distinct values and very poor at indicating their relative
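The answer above uses Stata's spikeplot; a rough base-R analogue (again assuming MASS for the Boston data) is to plot the frequency table of distinct values as vertical spikes:
library(MASS)

tab <- table(Boston$tax)
plot(as.numeric(names(tab)), as.numeric(tab), type = "h",
     xlab = "tax", ylab = "frequency")   # one spike per distinct value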
30,069
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
That's a treemap, I guess (http://en.wikipedia.org/wiki/Treemapping). There are several packages, e.g. in R, that create treemaps. One of the packages is called treemap, and another one is portfolio. For example, Nathan Yau offers a tutorial on how to create a treemap using R (http://flowingdata.com/2010/02/11/an-easy-way-to-make-a-treemap/).
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
That's a treemap, I guess (http://en.wikipedia.org/wiki/Treemapping). There are several packages, e.g. in R, that create treemaps. One of the packages is called treemap, and another one is portfolio.
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot That's a treemap, I guess (http://en.wikipedia.org/wiki/Treemapping). There are several packages, e.g. in R, that create treemaps. One of the packages is called treemap, and another one is portfolio. For example, Nathan Yau offers a tutorial on how to create a treemap using R (http://flowingdata.com/2010/02/11/an-easy-way-to-make-a-treemap/).
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot That's a treemap, I guess (http://en.wikipedia.org/wiki/Treemapping). There are several packages, e.g. in R, that create treemaps. One of the packages is called treemap, and another one is portfolio.
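A minimal sketch with the treemap package mentioned above; the data frame, region names and numbers here are invented for illustration:
library(treemap)

dat <- data.frame(region   = c("Auckland", "Waikato", "Wellington", "Canterbury", "Otago"),
                  island   = c("North", "North", "North", "South", "South"),
                  visitors = c(35, 12, 18, 25, 10))

treemap(dat, index = c("island", "region"), vSize = "visitors",
        title = "Share of visitors by region")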
30,070
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
The question is the name, but how well it works is also open to discussion. Here's something much more prosaic as an alternative, a horizontal bar chart. What we might want to do with such a graph varies from some grasp of the overall pattern to some scrutiny of individual cases (what about Hawke's Bay, and so forth). I'd assert that both are easier with a bar chart. Small details are that I use lower case in titles and names where easy and don't repeat the % sign. I've roughly imitated the colour coding without finding out what it means, so that is just as clear, or obscure, as what you copied. I suggest that some of the appeal of treemaps lies in their relative novelty. They might work as well as or better than bar charts if there are dozens of names, which can be spread over a two-dimensional area rather than listed in a long column. But for 15 or so names, a bar chart remains a strong competitor in my view. I am happy with anyone who prefers a (Cleveland) dot chart here. A vertical bar chart would face the difficulty of placing the region names comfortably. (Just imagine rotating this graph to see that.) I like the idea of giving the numbers too, although conservatives don't like mixing graph and table ideas. The graph was drawn in Stata.
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
The question is the name, but how well it works is also open to discussion. Here's something much more prosaic as an alternative, a horizontal bar chart. What we might want to do with such a graph
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot The question is the name, but how well it works is also open to discussion. Here's something much more prosaic as an alternative, a horizontal bar chart. What we might want to do with such a graph varies from some grasp of the overall pattern to some scrutiny of individual cases (what about Hawke's Bay, and so forth). I'd assert that both are easier with a bar chart. Small details are that I use lower case in titles and names where easy and don't repeat the % sign. I've roughly imitated the colour coding without finding out what it means, so that is just as clear, or obscure, as what you copied. I suggest that some of the appeal of treemaps lies in their relative novelty. They might work as well as or better than bar charts if there are dozens of names, which can be spread over a two-dimensional area rather than listed in a long column. But for 15 or so names, a bar chart remains a strong competitor in my view. I am happy with anyone who prefers a (Cleveland) dot chart here. A vertical bar chart would face the difficulty of placing the region names comfortably. (Just imagine rotating this graph to see that.) I like the idea of giving the numbers too, although conservatives don't like mixing graph and table ideas. The graph was drawn in Stata.
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot The question is the name, but how well it works is also open to discussion. Here's something much more prosaic as an alternative, a horizontal bar chart. What we might want to do with such a graph
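The chart in the answer above was drawn in Stata; for reference, a base-R horizontal bar chart or Cleveland dot chart takes only a couple of lines (the values and labels below are placeholders):
share <- c(Auckland = 35, Wellington = 18, Canterbury = 25, Otago = 10)   # made-up percentages
share <- sort(share)

barplot(share, horiz = TRUE, las = 1, xlab = "% share")    # horizontal bars
dotchart(share, xlab = "% share")                          # Cleveland dot chart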
30,071
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
Edit / addition I have since discovered that the treemap package gives a much better result than the map.market() function mentioned (and adapted) below; but I'll leave my answer in for historical reasons. Original Answer Thanks for the answers. Building on the flowing data link provided by @JTT but disliking the need to tweak by hand in Illustrator or Inkscape just to get a reasonable graphic, I tweaked the map.market() function in Jeff Enos and David Kane's portfolio package to make it more user-controlled, the labels vary by rectangle size, and avoid red-green contrasts. Example usage: library(portfolio) library(extrafont) data(dow.jan.2005) with(dow.jan.2005, treemap(id = symbol, area = price, group = sector, color = 100 * month.ret, labsc = .12, # user-chosen scaling of labels fontfamily="Comic Sans MS") ) For what it's worth, I also agree with @NickCox that in the example in my original question a dot plot is superior. Code of my adapted treemap() function follows. treemap <- function (id, area, group, color, scale = NULL, lab = c(group = TRUE, id = FALSE), low="red", middle="grey60", high="blue", main = "Map of the Market", labsc = c(.5, 1), print = TRUE, ...) { # Adapted by Peter Ellis from map.market() by Jeff Enos and David Kane in the portfolio package on CRAN # See map.market for the original helpfile. The changes are: # 1. low, middle and high are user-set color ramp choices # 2. The font size now varies with the area of the rectangle being labelled; labsc is a scaling parameter to make it look ok. # First element of labsc is scaling parameter for size of group labels. Second element is scaling for id labels. # 3. ... extra arguments to be passed to gpar() when drawing labels; expected use is for fontfamily="whatever" require(portfolio) if (any(length(id) != length(area), length(id) != length(group), length(id) != length(color))) { stop("id, area, group, and color must be the same length.") } if (length(lab) == 1) { lab[2] <- lab[1] } if (missing(id)) { id <- seq_along(area) lab["id"] <- FALSE } stopifnot(all(!is.na(id))) data <- data.frame(label = id, group, area, color) data <- data[order(data$area, decreasing = TRUE), ] na.idx <- which(is.na(data$area) | is.na(data$group) | is.na(data$color)) if (length(na.idx)) { warning("Stocks with NAs for area, group, or color will not be shown") data <- data[-na.idx, ] } zero.area.idx <- which(data$area == 0) if (length(zero.area.idx)) { data <- data[-zero.area.idx, ] } if (nrow(data) == 0) { stop("No records to display") } data$color.orig <- data$color if (is.null(scale)) { data$color <- data$color * 1/max(abs(data$color)) } else { data$color <- sapply(data$color, function(x) { if (x/scale > 1) 1 else if (-1 > x/scale) -1 else x/scale }) } data.by.group <- split(data, data$group, drop = TRUE) group.data <- lapply(data.by.group, function(x) { sum(x[, 3]) }) group.data <- data.frame(area = as.numeric(group.data), label = names(group.data)) group.data <- group.data[order(group.data$area, decreasing = TRUE), ] group.data$color <- rep(NULL, nrow(group.data)) color.ramp.pos <- colorRamp(c(middle, high)) color.ramp.neg <- colorRamp(c(middle, low)) color.ramp.rgb <- function(x) { col.mat <- mapply(function(x) { if (x < 0) { color.ramp.neg(abs(x)) } else { color.ramp.pos(abs(x)) } }, x) mapply(rgb, col.mat[1, ], col.mat[2, ], col.mat[3, ], max = 255) } add.viewport <- function(z, label, color, x.0, y.0, x.1, y.1) { for (i in 1:length(label)) { if (is.null(color[i])) { filler <- gpar(col = "blue", fill = "transparent", cex = 1) } else { 
filler.col <- color.ramp.rgb(color[i]) filler <- gpar(col = filler.col, fill = filler.col, cex = 0.6) } new.viewport <- viewport(x = x.0[i], y = y.0[i], width = (x.1[i] - x.0[i]), height = (y.1[i] - y.0[i]), default.units = "npc", just = c("left", "bottom"), name = as.character(label[i]), clip = "on", gp = filler) z <- append(z, list(new.viewport)) } z } squarified.treemap <- function(z, x = 0, y = 0, w = 1, h = 1, func = add.viewport, viewport.list) { cz <- cumsum(z$area)/sum(z$area) n <- which.min(abs(log(max(w/h, h/w) * sum(z$area) * ((cz^2)/z$area)))) more <- n < length(z$area) a <- c(0, cz[1:n])/cz[n] if (h > w) { viewport.list <- func(viewport.list, z$label[1:n], z$color[1:n], x + w * a[1:(length(a) - 1)], rep(y, n), x + w * a[-1], rep(y + h * cz[n], n)) if (more) { viewport.list <- Recall(z[-(1:n), ], x, y + h * cz[n], w, h * (1 - cz[n]), func, viewport.list) } } else { viewport.list <- func(viewport.list, z$label[1:n], z$color[1:n], rep(x, n), y + h * a[1:(length(a) - 1)], rep(x + w * cz[n], n), y + h * a[-1]) if (more) { viewport.list <- Recall(z[-(1:n), ], x + w * cz[n], y, w * (1 - cz[n]), h, func, viewport.list) } } viewport.list } map.viewport <- viewport(x = 0.05, y = 0.05, width = 0.9, height = 0.75, default.units = "npc", name = "MAP", just = c("left", "bottom")) map.tree <- gTree(vp = map.viewport, name = "MAP", children = gList(rectGrob(gp = gpar(col = "dark grey"), name = "background"))) group.viewports <- squarified.treemap(z = group.data, viewport.list = list()) for (i in 1:length(group.viewports)) { this.group <- data.by.group[[group.data$label[i]]] this.data <- data.frame(this.group$area, this.group$label, this.group$color) names(this.data) <- c("area", "label", "color") stock.viewports <- squarified.treemap(z = this.data, viewport.list = list()) group.tree <- gTree(vp = group.viewports[[i]], name = group.data$label[i]) for (s in 1:length(stock.viewports)) { stock.tree <- gTree(vp = stock.viewports[[s]], name = this.data$label[s], children = gList(rectGrob(name = "color"))) if (lab[2]) { stock.tree <- addGrob(stock.tree, textGrob(x = unit(1, "lines"), y = unit(1, "npc") - unit(1, "lines"), label = this.data$label[s], gp = gpar(col = "white", fontsize=this.data$area[s] * labsc[2], ...), name = "label", just = c("left", "top"))) } group.tree <- addGrob(group.tree, stock.tree) } group.tree <- addGrob(group.tree, rectGrob(gp = gpar(col = "grey"), name = "border")) if (lab[1]) { group.tree <- addGrob(group.tree, textGrob(label = group.data$label[i], name = "label", gp = gpar(col = "white", fontsize=group.data$area[i] * labsc[1], ...))) } map.tree <- addGrob(map.tree, group.tree) } op <- options(digits = 1) top.viewport <- viewport(x = 0.05, y = 1, width = 0.9, height = 0.2, default.units = "npc", name = "TOP", , just = c("left", "top")) legend.ncols <- 51 l.x <- (0:(legend.ncols - 1))/(legend.ncols) l.y <- unit(0.25, "npc") l.cols <- color.ramp.rgb(seq(-1, 1, by = 2/(legend.ncols - 1))) if (is.null(scale)) { l.end <- max(abs(data$color.orig)) } else { l.end <- scale } top.list <- gList(textGrob(label = main, y = unit(0.7, "npc"), just = c("center", "center"), gp = gpar(cex = 2, ...)), segmentsGrob(x0 = seq(0, 1, by = 0.25), y0 = unit(0.25, "npc"), x1 = seq(0, 1, by = 0.25), y1 = unit(0.2, "npc")), rectGrob(x = l.x, y = l.y, width = 1/legend.ncols, height = unit(1, "lines"), just = c("left", "bottom"), gp = gpar(col = NA, fill = l.cols), default.units = "npc"), textGrob(label = format(l.end * seq(-1, 1, by = 0.5), trim = TRUE), x = seq(0, 1, by = 0.25), y = 0.1, 
default.units = "npc", just = c("center", "center"), gp = gpar(col = "black", cex = 0.8, fontface = "bold"))) options(op) top.tree <- gTree(vp = top.viewport, name = "TOP", children = top.list) mapmarket <- gTree(name = "MAPMARKET", children = gList(rectGrob(gp = gpar(col = "dark grey", fill = "dark grey"), name = "background"), top.tree, map.tree)) if (print) { grid.newpage() grid.draw(mapmarket) } invisible(mapmarket) }
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
Edit / addition I have since discovered that the treemap package gives a much better result than the map.market() function mentioned (and adapted) below; but I'll leave my answer in for historical rea
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot Edit / addition I have since discovered that the treemap package gives a much better result than the map.market() function mentioned (and adapted) below; but I'll leave my answer in for historical reasons. Original Answer Thanks for the answers. Building on the flowing data link provided by @JTT but disliking the need to tweak by hand in Illustrator or Inkscape just to get a reasonable graphic, I tweaked the map.market() function in Jeff Enos and David Kane's portfolio package to make it more user-controlled, the labels vary by rectangle size, and avoid red-green contrasts. Example usage: library(portfolio) library(extrafont) data(dow.jan.2005) with(dow.jan.2005, treemap(id = symbol, area = price, group = sector, color = 100 * month.ret, labsc = .12, # user-chosen scaling of labels fontfamily="Comic Sans MS") ) For what it's worth, I also agree with @NickCox that in the example in my original question a dot plot is superior. Code of my adapted treemap() function follows. treemap <- function (id, area, group, color, scale = NULL, lab = c(group = TRUE, id = FALSE), low="red", middle="grey60", high="blue", main = "Map of the Market", labsc = c(.5, 1), print = TRUE, ...) { # Adapted by Peter Ellis from map.market() by Jeff Enos and David Kane in the portfolio package on CRAN # See map.market for the original helpfile. The changes are: # 1. low, middle and high are user-set color ramp choices # 2. The font size now varies with the area of the rectangle being labelled; labsc is a scaling parameter to make it look ok. # First element of labsc is scaling parameter for size of group labels. Second element is scaling for id labels. # 3. ... extra arguments to be passed to gpar() when drawing labels; expected use is for fontfamily="whatever" require(portfolio) if (any(length(id) != length(area), length(id) != length(group), length(id) != length(color))) { stop("id, area, group, and color must be the same length.") } if (length(lab) == 1) { lab[2] <- lab[1] } if (missing(id)) { id <- seq_along(area) lab["id"] <- FALSE } stopifnot(all(!is.na(id))) data <- data.frame(label = id, group, area, color) data <- data[order(data$area, decreasing = TRUE), ] na.idx <- which(is.na(data$area) | is.na(data$group) | is.na(data$color)) if (length(na.idx)) { warning("Stocks with NAs for area, group, or color will not be shown") data <- data[-na.idx, ] } zero.area.idx <- which(data$area == 0) if (length(zero.area.idx)) { data <- data[-zero.area.idx, ] } if (nrow(data) == 0) { stop("No records to display") } data$color.orig <- data$color if (is.null(scale)) { data$color <- data$color * 1/max(abs(data$color)) } else { data$color <- sapply(data$color, function(x) { if (x/scale > 1) 1 else if (-1 > x/scale) -1 else x/scale }) } data.by.group <- split(data, data$group, drop = TRUE) group.data <- lapply(data.by.group, function(x) { sum(x[, 3]) }) group.data <- data.frame(area = as.numeric(group.data), label = names(group.data)) group.data <- group.data[order(group.data$area, decreasing = TRUE), ] group.data$color <- rep(NULL, nrow(group.data)) color.ramp.pos <- colorRamp(c(middle, high)) color.ramp.neg <- colorRamp(c(middle, low)) color.ramp.rgb <- function(x) { col.mat <- mapply(function(x) { if (x < 0) { color.ramp.neg(abs(x)) } else { color.ramp.pos(abs(x)) } }, x) mapply(rgb, col.mat[1, ], col.mat[2, ], col.mat[3, ], max = 255) } add.viewport <- function(z, label, color, x.0, y.0, x.1, y.1) { for (i in 1:length(label)) { if 
(is.null(color[i])) { filler <- gpar(col = "blue", fill = "transparent", cex = 1) } else { filler.col <- color.ramp.rgb(color[i]) filler <- gpar(col = filler.col, fill = filler.col, cex = 0.6) } new.viewport <- viewport(x = x.0[i], y = y.0[i], width = (x.1[i] - x.0[i]), height = (y.1[i] - y.0[i]), default.units = "npc", just = c("left", "bottom"), name = as.character(label[i]), clip = "on", gp = filler) z <- append(z, list(new.viewport)) } z } squarified.treemap <- function(z, x = 0, y = 0, w = 1, h = 1, func = add.viewport, viewport.list) { cz <- cumsum(z$area)/sum(z$area) n <- which.min(abs(log(max(w/h, h/w) * sum(z$area) * ((cz^2)/z$area)))) more <- n < length(z$area) a <- c(0, cz[1:n])/cz[n] if (h > w) { viewport.list <- func(viewport.list, z$label[1:n], z$color[1:n], x + w * a[1:(length(a) - 1)], rep(y, n), x + w * a[-1], rep(y + h * cz[n], n)) if (more) { viewport.list <- Recall(z[-(1:n), ], x, y + h * cz[n], w, h * (1 - cz[n]), func, viewport.list) } } else { viewport.list <- func(viewport.list, z$label[1:n], z$color[1:n], rep(x, n), y + h * a[1:(length(a) - 1)], rep(x + w * cz[n], n), y + h * a[-1]) if (more) { viewport.list <- Recall(z[-(1:n), ], x + w * cz[n], y, w * (1 - cz[n]), h, func, viewport.list) } } viewport.list } map.viewport <- viewport(x = 0.05, y = 0.05, width = 0.9, height = 0.75, default.units = "npc", name = "MAP", just = c("left", "bottom")) map.tree <- gTree(vp = map.viewport, name = "MAP", children = gList(rectGrob(gp = gpar(col = "dark grey"), name = "background"))) group.viewports <- squarified.treemap(z = group.data, viewport.list = list()) for (i in 1:length(group.viewports)) { this.group <- data.by.group[[group.data$label[i]]] this.data <- data.frame(this.group$area, this.group$label, this.group$color) names(this.data) <- c("area", "label", "color") stock.viewports <- squarified.treemap(z = this.data, viewport.list = list()) group.tree <- gTree(vp = group.viewports[[i]], name = group.data$label[i]) for (s in 1:length(stock.viewports)) { stock.tree <- gTree(vp = stock.viewports[[s]], name = this.data$label[s], children = gList(rectGrob(name = "color"))) if (lab[2]) { stock.tree <- addGrob(stock.tree, textGrob(x = unit(1, "lines"), y = unit(1, "npc") - unit(1, "lines"), label = this.data$label[s], gp = gpar(col = "white", fontsize=this.data$area[s] * labsc[2], ...), name = "label", just = c("left", "top"))) } group.tree <- addGrob(group.tree, stock.tree) } group.tree <- addGrob(group.tree, rectGrob(gp = gpar(col = "grey"), name = "border")) if (lab[1]) { group.tree <- addGrob(group.tree, textGrob(label = group.data$label[i], name = "label", gp = gpar(col = "white", fontsize=group.data$area[i] * labsc[1], ...))) } map.tree <- addGrob(map.tree, group.tree) } op <- options(digits = 1) top.viewport <- viewport(x = 0.05, y = 1, width = 0.9, height = 0.2, default.units = "npc", name = "TOP", , just = c("left", "top")) legend.ncols <- 51 l.x <- (0:(legend.ncols - 1))/(legend.ncols) l.y <- unit(0.25, "npc") l.cols <- color.ramp.rgb(seq(-1, 1, by = 2/(legend.ncols - 1))) if (is.null(scale)) { l.end <- max(abs(data$color.orig)) } else { l.end <- scale } top.list <- gList(textGrob(label = main, y = unit(0.7, "npc"), just = c("center", "center"), gp = gpar(cex = 2, ...)), segmentsGrob(x0 = seq(0, 1, by = 0.25), y0 = unit(0.25, "npc"), x1 = seq(0, 1, by = 0.25), y1 = unit(0.2, "npc")), rectGrob(x = l.x, y = l.y, width = 1/legend.ncols, height = unit(1, "lines"), just = c("left", "bottom"), gp = gpar(col = NA, fill = l.cols), default.units = "npc"), textGrob(label = 
format(l.end * seq(-1, 1, by = 0.5), trim = TRUE), x = seq(0, 1, by = 0.25), y = 0.1, default.units = "npc", just = c("center", "center"), gp = gpar(col = "black", cex = 0.8, fontface = "bold"))) options(op) top.tree <- gTree(vp = top.viewport, name = "TOP", children = top.list) mapmarket <- gTree(name = "MAPMARKET", children = gList(rectGrob(gp = gpar(col = "dark grey", fill = "dark grey"), name = "background"), top.tree, map.tree)) if (print) { grid.newpage() grid.draw(mapmarket) } invisible(mapmarket) }
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot Edit / addition I have since discovered that the treemap package gives a much better result than the map.market() function mentioned (and adapted) below; but I'll leave my answer in for historical rea
30,072
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
It is a treemap; you can do it easily with Tableau 8 and the free Tableau Public, see the sample here: http://www.tableausoftware.com/new-features/new-view-types . Another linked example shows that a treemap can be combined with a bar chart.
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot
It is a treemap, you can do it easily with Tableau 8 and free Tableau Public, see sample here: http://www.tableausoftware.com/new-features/new-view-types . You can also see @this URL that Treemap can
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot It is a treemap, you can do it easily with Tableau 8 and free Tableau Public, see sample here: http://www.tableausoftware.com/new-features/new-view-types . You can also see @this URL that Treemap can be combined with Bar Chart
Is there a name for this chart - sort of a cross between a pie chart and a mekko plot It is a treemap, you can do it easily with Tableau 8 and free Tableau Public, see sample here: http://www.tableausoftware.com/new-features/new-view-types . You can also see @this URL that Treemap can
30,073
How to use R gbm with distribution = "adaboost"?
The adaboost method gives the predictions on half the logit scale. You can convert them to the 0-1 probability scale with: gbm_predicted <- plogis(2*gbm_predicted) Note the 2* inside the plogis call.
How to use R gbm with distribution = "adaboost"?
The adaboost method gives the predictions on logit scale. You can convert it to the 0-1 output: gbm_predicted<-plogis(2*gbm_predicted) note the 2* inside the logis
How to use R gbm with distribution = "adaboost"? The adaboost method gives the predictions on logit scale. You can convert it to the 0-1 output: gbm_predicted<-plogis(2*gbm_predicted) note the 2* inside the logis
How to use R gbm with distribution = "adaboost"? The adaboost method gives the predictions on logit scale. You can convert it to the 0-1 output: gbm_predicted<-plogis(2*gbm_predicted) note the 2* inside the logis
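A small sketch of the conversion above, checking that plogis(2 * link) matches predict()'s type = "response" output for an adaboost gbm fit; the data are simulated here just for the check:
library(gbm)
set.seed(42)

d <- data.frame(x = rnorm(200))
d$y <- rbinom(200, 1, plogis(d$x))

fit <- gbm(y ~ x, data = d, distribution = "adaboost",
           n.trees = 100, interaction.depth = 1, shrinkage = 0.1)

link <- predict(fit, d, n.trees = 100)                     # half-logit scale
prob <- predict(fit, d, n.trees = 100, type = "response")  # probabilities

all.equal(plogis(2 * link), prob)                          # should be TRUE (up to floating point)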
30,074
How to use R gbm with distribution = "adaboost"?
You can also directly obtain the probabilities from the predict.gbm function; predict(gbm_algorithm, test_dataset, n.trees = 5000, type = 'response')
How to use R gbm with distribution = "adaboost"?
You can also directly obtain the probabilities from the predict.gbm function; predict(gbm_algorithm, test_dataset, n.trees = 5000, type = 'response')
How to use R gbm with distribution = "adaboost"? You can also directly obtain the probabilities from the predict.gbm function; predict(gbm_algorithm, test_dataset, n.trees = 5000, type = 'response')
How to use R gbm with distribution = "adaboost"? You can also directly obtain the probabilities from the predict.gbm function; predict(gbm_algorithm, test_dataset, n.trees = 5000, type = 'response')
30,075
How to use R gbm with distribution = "adaboost"?
The adaboost link function is described here. This example provides a detailed description of the computation:
library(gbm)
set.seed(123)

n <- 1000
sim.df <- data.frame(x.1 = sample(0:1, n, replace = TRUE),
                     x.2 = sample(0:1, n, replace = TRUE))
prob.array <- c(0.9, 0.7, 0.2, 0.8)
sim.df$y <- rbinom(n, size = 1, prob = prob.array[1 + sim.df$x.1 + 2*sim.df$x.2])

n.trees <- 10
shrinkage <- 0.01

gbmFit <- gbm(
  formula = y ~ x.1 + x.2,
  distribution = "bernoulli",
  data = sim.df,
  n.trees = n.trees,
  interaction.depth = 2,
  n.minobsinnode = 2,
  shrinkage = shrinkage,
  bag.fraction = 0.5,
  cv.folds = 0,
  # verbose = FALSE
  n.cores = 1
)

sim.df$logods  <- predict(gbmFit, sim.df, n.trees = n.trees)
sim.df$prob    <- predict(gbmFit, sim.df, n.trees = n.trees, type = 'response')
sim.df$prob.2  <- plogis(predict(gbmFit, sim.df, n.trees = n.trees))
sim.df$logloss <- sim.df$y*log(sim.df$prob) + (1 - sim.df$y)*log(1 - sim.df$prob)

gbmFit <- gbm(
  formula = y ~ x.1 + x.2,
  distribution = "adaboost",
  data = sim.df,
  n.trees = n.trees,
  interaction.depth = 2,
  n.minobsinnode = 2,
  shrinkage = shrinkage,
  bag.fraction = 0.5,
  cv.folds = 0,
  # verbose = FALSE
  n.cores = 1
)

sim.df$exp.scale  <- predict(gbmFit, sim.df, n.trees = n.trees)
sim.df$ada.resp   <- predict(gbmFit, sim.df, n.trees = n.trees, type = 'response')
sim.df$ada.resp.2 <- plogis(2*predict(gbmFit, sim.df, n.trees = n.trees))
sim.df$ada.error  <- -exp(-sim.df$y * sim.df$exp.scale)

sim.df[1:20, ]
How to use R gbm with distribution = "adaboost"?
The adaboost link function is described here. This example provides a detailed description of the computation: library(gbm); set.seed(123); n <- 1000; sim.df <- data.frame(x.1 = sample(0:
How to use R gbm with distribution = "adaboost"? The adaboost link function is described here. This example provides a detailed description of the computation: library(gbm); set.seed(123); n <- 1000; sim.df <- data.frame(x.1 = sample(0:1, n, replace=TRUE), x.2 = sample(0:1, n, replace=TRUE)); prob.array <- c(0.9, 0.7, 0.2, 0.8); df$y <- rbinom(n, size = 1, prob=prob.array[1+sim.df$x.1+2*sim.df$x.2]) n.trees <- 10; shrinkage <- 0.01; gbmFit <- gbm( formula = y~., distribution = "bernoulli", data = sim.df, n.trees = n.trees, interaction.depth = 2, n.minobsinnode = 2, shrinkage = shrinkage, bag.fraction = 0.5, cv.folds = 0, # verbose = FALSE n.cores = 1 ); sim.df$logods <- predict(gbmFit, sim.df, n.trees = n.trees); #$ sim.df$prob <- predict(gbmFit, sim.df, n.trees = n.trees, type = 'response'); #$ sim.df$prob.2 <- plogis(predict(gbmFit, sim.df, n.trees = n.trees)); #$ sim.df$logloss <- sim.df$y*log(sim.df$prob) + (1-sim.df$y)*log(1-sim.df$prob); #$ gbmFit <- gbm( formula = y~., distribution = "adaboost", data = sim.df, n.trees = n.trees, interaction.depth = 2, n.minobsinnode = 2, shrinkage = shrinkage, bag.fraction = 0.5, cv.folds = 0, # verbose = FALSE n.cores = 1 ); sim.df$exp.scale <- predict(gbmFit, sim.df, n.trees = n.trees); #$ sim.df$ada.resp <- predict(gbmFit, sim.df, n.trees = n.trees, type = 'response'); #$ sim.df$ada.resp.2 <- plogis(2*predict(gbmFit, sim.df, n.trees = n.trees)); #$ sim.df$ada.error <- -exp(-sim.df$y * sim.df$exp.scale); #$ sim.df[1:20,]
How to use R gbm with distribution = "adaboost"? The adaboost link function is described here. This example provides a detailed description of the computation: library(gbm); set.seed(123); n <- 1000; sim.df <- data.frame(x.1 = sample(0:
30,076
How do you calculate standard errors for a transformation of the MLE?
The Delta method is used for this purpose. Under some standard regularity assumptions, we know the MLE $\hat{\theta}$ for $\theta$ is approximately (i.e. asymptotically) distributed as $$ \hat{\theta} \sim N(\theta, \mathcal{I}^{-1}(\theta)) $$ where $\mathcal{I}^{-1}(\theta)$ is the inverse of the Fisher information for the entire sample, evaluated at $\theta$, and $N(\mu,\sigma^{2})$ denotes the normal distribution with mean $\mu$ and variance $\sigma^{2}$. The functional invariance of the MLE says that the MLE of $g(\theta)$, where $g$ is some known function, is $g(\hat{\theta})$ (as you pointed out) and has approximate distribution $$ g(\hat{\theta}) \sim N( g(\theta), \mathcal{I}^{-1}(\theta) [g'(\theta)]^{2} ) $$ where you can plug in consistent estimators for the unknown quantities (i.e. plug in $\hat{\theta}$ where $\theta$ appears in the variance). I would assume the standard errors you have are based on the Fisher information (since you have MLEs). Denote that standard error by $s$. Then the standard error of $e^{\hat{\theta}}$, as in your example, is $$ \sqrt{s^{2}e^{2 \hat{\theta}}} $$ I may be interpreting you backwards and in reality you have the variance of the MLE of $\theta$ and want the variance of the MLE of $\log(\theta)$, in which case the standard error would be $$ \sqrt{ s^{2}/\hat{\theta}^{2} } $$
How do you calculate standard errors for a transformation of the MLE?
The Delta method is used for this purpose. Under some standard regularity assumptions, we know the MLE, $\hat{\theta}$ for $\theta$ is approximately (i.e. asymptotically) distributed as $$ \hat{\thet
How do you calculate standard errors for a transformation of the MLE? The Delta method is used for this purpose. Under some standard regularity assumptions, we know the MLE, $\hat{\theta}$ for $\theta$ is approximately (i.e. asymptotically) distributed as $$ \hat{\theta} \sim N(\theta, \mathcal{I}^{-1}(\theta)) $$ where $\mathcal{I}^{-1}(\theta)$ is the inverse of the Fisher information for the entire sample, evaluated at $\theta$ and $N(\mu,\sigma^{2})$ denotes the normal distribution with mean $\mu$ and variance $\sigma^{2}$. The functional invariance of the MLE says that the MLE of $g(\theta)$, where $g$ is some known function, is $g(\hat{\theta})$ (as you pointed out) and has approximate distribution $$ g(\hat{\theta}) \sim N( g(\theta), \mathcal{I}^{-1}(\theta) [g'(\theta)]^{2} ) $$ where you can plug in consistent estimators for the unknown quantities (i.e. plug in $\hat{\theta}$ where $\theta$ appears in the variance). I would assume the standard errors you have are based on the Fisher information (since you have MLEs). Denote that standard error by $s$. Then the standard error of $e^{\hat{\theta} }$, as in your example, is $$ \sqrt{s^{2}e^{2 \hat{\theta}}} $$ I may be interpreting you backwards and in reality you have the variance of the MLE of $\theta$ and want the variance of the MLE of $\log(\theta)$ in which case the standard would be $$ \sqrt{ s^{2}/\hat{\theta}^{2} } $$
How do you calculate standard errors for a transformation of the MLE? The Delta method is used for this purpose. Under some standard regularity assumptions, we know the MLE, $\hat{\theta}$ for $\theta$ is approximately (i.e. asymptotically) distributed as $$ \hat{\thet
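A small numerical sketch of the formula above: given an estimate q_hat with standard error s on the log scale, the delta-method standard error of exp(q_hat) is s * exp(q_hat), which can be checked by simulation (the values of q_hat and s are arbitrary):
q_hat <- -1.2    # estimate on the log scale (arbitrary)
s     <- 0.15    # its standard error (arbitrary)

se_delta <- sqrt(s^2 * exp(2 * q_hat))   # = s * exp(q_hat)

# simulation check: sd of exp(q) when q ~ N(q_hat, s^2)
set.seed(1)
q <- rnorm(1e6, q_hat, s)
sd(exp(q))       # close to se_delta when s is small
se_delta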
30,077
How do you calculate standard errors for a transformation of the MLE?
Macro gave the correct answer on how to transform standard errors via the delta method. Though the OP specifically asked for the standard errors, I suspect that the objective is to produce confidence intervals for $p$. Besides computing estimated standard errors of $\hat{p}$ you can directly transform a confidence interval, $[q_1, q_2]$, in the $q$-parametrization to a confidence interval $[\exp(q_1), \exp(q_2)]$ in the $p$-parametrization. This is perfectly valid, and it may even be a better idea depending on how well the normal approximation used to justify a confidence interval based on standard errors works in the $q$-parametrization versus the $p$-parametrization. Moreover, the directly transformed confidence interval will fulfill the positivity constraint.
How do you calculate standard errors for a transformation of the MLE?
Macro gave the correct answer on how to transform standard errors via the delta method. Though the OP specifically asked for the standard errors, I suspect that the objective is to produce confidence
How do you calculate standard errors for a transformation of the MLE? Macro gave the correct answer on how to transform standard errors via the delta method. Though the OP specifically asked for the standard errors, I suspect that the objective is to produce confidence intervals for $p$. Besides computing estimated standard errors of $\hat{p}$ you can directly transform a confidence interval, $[q_1, q_2]$, in the $q$-parametrization to a confidence interval $[\exp(q_1), \exp(q_2)]$ in the $p$-parametrization. This is perfectly valid, and it may even be a better idea depending on how well the normal approximation used to justify a confidence interval based on standard errors works in the $q$-parametrization versus the $p$-parametrization. Moreover, the directly transformed confidence interval will fulfill the positivity constraint.
How do you calculate standard errors for a transformation of the MLE? Macro gave the correct answer on how to transform standard errors via the delta method. Though the OP specifically asked for the standard errors, I suspect that the objective is to produce confidence
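A short sketch of the endpoint-transformation idea above, using arbitrary toy numbers for the estimate and its standard error on the q scale; exponentiating the endpoints automatically keeps the interval for p positive:
q_hat <- -1.2; s <- 0.15                 # estimate and SE on the q (log) scale, arbitrary values

ci_q <- q_hat + c(-1, 1) * 1.96 * s      # Wald interval for q
exp(ci_q)                                # interval for p = exp(q); endpoints stay positive

# compare with the delta-method interval built directly on the p scale
exp(q_hat) + c(-1, 1) * 1.96 * s * exp(q_hat)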
30,078
Running an R script line-by-line
You can use R's built-in debugger; it must be triggered on a function, so a little wrapper is needed:
sourceDebugging <- function(f){
  # Function to inject the code into
  theCode <- function(){}
  # Injection
  parse(text = c('{', readLines(f), '}')) -> body(theCode)
  # Triggering debug
  debug(theCode)
  # Lift-off
  theCode()
}

sourceDebugging(<file with code>)
This is quite handy for debugging (it gives you a chance to inspect the state after each line); however, it will only evaluate in a fresh environment of theCode instead of source's default .GlobalEnv... this means for instance that the variables made inside will disappear unless explicitly globalised. Option two is just to emulate typing from the keyboard and pressing ENTER... but as caracal pointed out, this can be achieved just by source(<file with code>, echo=TRUE).
Running an R script line-by-line
You can use R's built-in debugger; it must be triggered on a function, so a little wrapper is needed: sourceDebugging<-function(f){ #Function to inject the code to theCode<-function(){} #Injection
Running an R script line-by-line You can use R's built-in debugger; it must be triggered on a function, so a little wrapper is needed: sourceDebugging<-function(f){ #Function to inject the code to theCode<-function(){} #Injection parse(text=c('{',readLines(f),'}'))->body(theCode) #Triggering debug debug(theCode) #Lift-off theCode() } sourceDebugging(<file with code>) This is quite handy for debug (gives you a chance to inspect the state after each line), however, will only evaluate in a fresh environment of theCode instead of source's default .GlobalEnv... this means for instance that the variables made inside will disappear unless explicitly globalised. Option two is just to emulate writing from keyboard and pressing ENTER... but as caracal pointed out this can be achieved just by source(<file with code>,echo=TRUE).
Running an R script line-by-line You can use R's built-in debugger; it must be triggered on a function, so a little wrapper is needed: sourceDebugging<-function(f){ #Function to inject the code to theCode<-function(){} #Injection
30,079
Running an R script line-by-line
Open the script file inside your RGui and press Ctrl+R to run line by line (you need to press many times though;)). However I would recommend to use RStudio for the convenient work with R. In this case you run line by Ctrl+Enter. Or you may modify your script to print() (or cat()) the objects.
Running an R script line-by-line
Open the script file inside your RGui and press Ctrl+R to run line by line (you need to press many times though;)). However I would recommend to use RStudio for the convenient work with R. In this cas
Running an R script line-by-line Open the script file inside your RGui and press Ctrl+R to run line by line (you need to press many times though;)). However I would recommend to use RStudio for the convenient work with R. In this case you run line by Ctrl+Enter. Or you may modify your script to print() (or cat()) the objects.
Running an R script line-by-line Open the script file inside your RGui and press Ctrl+R to run line by line (you need to press many times though;)). However I would recommend to use RStudio for the convenient work with R. In this cas
30,080
How to create a barplot diagram where bars are side-by-side in R
I shall assume that you are able to import your data in R with read.table() or the short-hand read.csv() functions. Then you can apply any summary functions you want, for instance table or mean, as below: x <- replicate(4, rnorm(100)) apply(x, 2, mean) or x <- replicate(2, sample(letters[1:2], 100, rep=T)) apply(x, 2, table) The idea is to end up with a matrix or table for the summary values you want to display. For the graphical output, look at the barplot() function with the option beside=TRUE, e.g. barplot(matrix(c(5,3,8,9),nr=2), beside=T, col=c("aquamarine3","coral"), names.arg=LETTERS[1:2]) legend("topleft", c("A","B"), pch=15, col=c("aquamarine3","coral"), bty="n") The space argument can be used to add an extra space between juxtaposed bars.
How to create a barplot diagram where bars are side-by-side in R
I shall assume that you are able to import your data in R with read.table() or the short-hand read.csv() functions. Then you can apply any summary functions you want, for instance table or mean, as be
How to create a barplot diagram where bars are side-by-side in R I shall assume that you are able to import your data in R with read.table() or the short-hand read.csv() functions. Then you can apply any summary functions you want, for instance table or mean, as below: x <- replicate(4, rnorm(100)) apply(x, 2, mean) or x <- replicate(2, sample(letters[1:2], 100, rep=T)) apply(x, 2, table) The idea is to end up with a matrix or table for the summary values you want to display. For the graphical output, look at the barplot() function with the option beside=TRUE, e.g. barplot(matrix(c(5,3,8,9),nr=2), beside=T, col=c("aquamarine3","coral"), names.arg=LETTERS[1:2]) legend("topleft", c("A","B"), pch=15, col=c("aquamarine3","coral"), bty="n") The space argument can be used to add an extra space between juxtaposed bars.
How to create a barplot diagram where bars are side-by-side in R I shall assume that you are able to import your data in R with read.table() or the short-hand read.csv() functions. Then you can apply any summary functions you want, for instance table or mean, as be
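A self-contained version of the barplot sketch in the answer above, with row and column names so the legend and axis labels come straight from the matrix (the numbers are the same illustrative counts):
counts <- matrix(c(5, 3, 8, 9), nrow = 2,
                 dimnames = list(metric = c("A", "B"), experiment = c("X", "X & Y")))

barplot(counts, beside = TRUE, col = c("aquamarine3", "coral"),
        legend.text = rownames(counts), args.legend = list(x = "topleft", bty = "n"),
        ylab = "value")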
30,081
How to create a barplot diagram where bars are side-by-side in R
Here is a ggplot version (melt below comes from the reshape package):
library(ggplot2)
library(reshape)
df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable_name="metric")
ggplot(df, aes(experiment, value, fill=metric)) + geom_bar(position="dodge")
How to create a barplot diagram where bars are side-by-side in R
Here ggplot version: library(ggplot2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable_name="metric") ggplot(df, aes(experiment, value, fill=me
How to create a barplot diagram where bars are side-by-side in R Here ggplot version: library(ggplot2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable_name="metric") ggplot(df, aes(experiment, value, fill=metric)) + geom_bar(position="dodge")
How to create a barplot diagram where bars are side-by-side in R Here ggplot version: library(ggplot2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable_name="metric") ggplot(df, aes(experiment, value, fill=me
30,082
How to create a barplot diagram where bars are side-by-side in R
I wanted to update teucer's answer to reflect reshape2. library(ggplot2) library(reshape2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable.name="metric") ggplot(df, aes(experiment, value, fill=metric)) + geom_bar(position="dodge",stat="identity") Note that teucer's answer produces the error "Error in eval(expr, envir, enclos) : object 'metric' not found" with reshape2 because reshape2 uses variable.name instead of variable_name. I also found that I needed to add stat="identity" to the geom_bar function because otherwise it gave "Error : Mapping a variable to y and also using stat="bin"."
How to create a barplot diagram where bars are side-by-side in R
I wanted to update teucer's answer to reflect reshape2. library(ggplot2) library(reshape2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable.n
How to create a barplot diagram where bars are side-by-side in R I wanted to update teucer's answer to reflect reshape2. library(ggplot2) library(reshape2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable.name="metric") ggplot(df, aes(experiment, value, fill=metric)) + geom_bar(position="dodge",stat="identity") Note that teucer's answer produces the error "Error in eval(expr, envir, enclos) : object 'metric' not found" with reshape2 because reshape2 uses variable.name instead of variable_name. I also found that I needed to add stat="identity" to the geom_bar function because otherwise it gave "Error : Mapping a variable to y and also using stat="bin"."
How to create a barplot diagram where bars are side-by-side in R I wanted to update teucer's answer to reflect reshape2. library(ggplot2) library(reshape2) df = melt(data.frame(A=c(2, 10), B=c(3, 20), experiment=c("X", "X & Y")), variable.n
30,083
Is the sum of two singular covariance matrices also singular?
Sum of singular covariance matrices
No, a sum of singular matrices does not need to be singular. See Is the sum of two singular matrices also singular? A counterexample in the answers sums two matrices, corresponding to full positive and full negative correlation, which are each singular but whose sum is not.
$$\begin{bmatrix} 1&1\\ 1&1 \end{bmatrix} + \begin{bmatrix} 1&-1\\ -1&1 \end{bmatrix} = \begin{bmatrix} 2&0\\ 0&2 \end{bmatrix} $$
Sum of non-singular covariance matrices
In general it is possible for two non-singular matrices to add up to a singular matrix. E.g. for any non-singular matrix $A$ the matrices $A$ and $B=-A$ are non-singular but the sum $A+B=0$ is singular. However, as mentioned in the comments, this cannot happen for covariance matrices: a non-singular covariance matrix is positive definite, and the sum of positive definite matrices is positive definite and therefore remains non-singular.
Intuitive approach
If some matrix is a covariance matrix then it has a square root and can be written as $X^tX$. From the definition of the covariance matrix, it is the cross product of the vectors after their mean is subtracted. Then the sum of two covariance matrices can be seen as a single covariance matrix where the vectors are concatenated. The property of singularity can be linked to the linear independence of the columns of $X$. As you say, if $X$ has fewer samples $n$ than variables $p$ then the matrix will be singular. But by concatenating the samples behind two covariance matrices, the variables can become linearly independent and the matrix non-singular. The other way around cannot happen: once the variables are independent, you cannot make them dependent by adding more samples. With the example above
$$\overbrace{ \begin{bmatrix} 1\\1 \end{bmatrix}\cdot \begin{bmatrix} 1&1 \end{bmatrix} + \begin{bmatrix} 1\\-1 \end{bmatrix}\cdot \begin{bmatrix} 1&-1 \end{bmatrix}}^{\begin{bmatrix} 1&1\\ 1&1 \end{bmatrix} + \begin{bmatrix} 1&-1\\ -1&1 \end{bmatrix} } = \overbrace{ \begin{bmatrix} 1&-1\\ 1&1 \end{bmatrix}\cdot \begin{bmatrix} 1&1\\ -1&1 \end{bmatrix}}^{ \begin{bmatrix} 2&0\\ 0&2 \end{bmatrix} }$$
Is the sum of two singular covariance matrices also singular?
Sum of singular covariance matrices No, a sum of singular matrices does not need to be singular. See Is the sum of two singular matrices also singular? A counter example in the answers sums two matric
Is the sum of two singular covariance matrices also singular? Sum of singular covariance matrices No, a sum of singular matrices does not need to be singular. See Is the sum of two singular matrices also singular? A counter example in the answers sums two matrices that correspond to matrices for fully positive and fully negative correlation which are each singular but their sum is not. $$\begin{bmatrix} 1&1\\ 1&1 \end{bmatrix} + \begin{bmatrix} 1&-1\\ -1&1 \end{bmatrix} = \begin{bmatrix} 2&0\\ 0&2 \end{bmatrix} $$ Sum of non-singular covariance matrices In general it is possible for two non-singular matrices to add up to a singular matrix. E.g. for any non-singular matrix $A$ the matrices $A$ and $B=-A$ are non-singular but the sum $A+B=0$ is singular. However, as mentioned in the comments this is not true for covariance matrices which are positive definite of the variables are independent. The sum of positive definite matrices, which are non-singular, are positive definite and remain non-singular. Intuitive approach If some matrix is a covariance matrix then it has a square root and can be written as $X^tX$. From the definition of the covariance matrix, it is the cross product of vectors after their mean is subtracted. Then the sum of two covariance matrix can be seen as a single matric where the vectors are concatenated. The property of singularity can be linked to the independence of the vectors in $X$. As you say, if $X$ has less samples $n$ than variables $p$ then the matrix will be singular. But by adding the samples of two variables together you can get that the variables become independent and the matrix will be non-singular. The other way around can not happen. Once the variables are independent, then you can not get then dependent by adding more samples. With the example above $$\overbrace{ \begin{bmatrix} 1\\1 \end{bmatrix}\cdot \begin{bmatrix} 1&1\\ \end{bmatrix} + \begin{bmatrix} 1\\-1 \end{bmatrix}\cdot \begin{bmatrix} 1&-1 \end{bmatrix}}^{\begin{bmatrix} 1&1\\ 1&1 \end{bmatrix} + \begin{bmatrix} 1&-1\\ -1&1 \end{bmatrix} } = \overbrace{ \begin{bmatrix} 1&-1\\ 1&1 \end{bmatrix}\cdot \begin{bmatrix} 1&1\\ -1&1 \end{bmatrix}}^{ \begin{bmatrix} 2&0\\ 0&2 \end{bmatrix} }$$
Is the sum of two singular covariance matrices also singular? Sum of singular covariance matrices No, a sum of singular matrices does not need to be singular. See Is the sum of two singular matrices also singular? A counter example in the answers sums two matric
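A quick R check of the counterexample above: each rank-one matrix is singular, but their sum is the non-singular matrix 2*I:
A <- matrix(c(1, 1, 1, 1), 2, 2)      # full positive correlation, rank 1
B <- matrix(c(1, -1, -1, 1), 2, 2)    # full negative correlation, rank 1

det(A); det(B)       # both 0: singular
det(A + B)           # 4: the sum is non-singular
qr(A + B)$rank       # 2: full rank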
30,084
Is the sum of two singular covariance matrices also singular?
The Spectral Theorem informs us that any $n\times n$ covariance matrix $\Sigma$ can be diagonalized. That is, there exists an orthogonal matrix $Q$ for which $$Q\Sigma Q^\prime = \pmatrix{\sigma^2_1 & 0 & \cdots & 0 \\ 0 & \sigma^2_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & 0\\ 0 & \cdots & 0 & \sigma^2_{n}}.$$ As a mere matter of notation, write $D_{i;n}(\sigma^2)$ for the $n\times n$ matrix that has all zeros except for $\sigma^2$ in its $(i,i)$ position. The sum of these $D_{i;n}(\sigma_i^2)$ over $i=1,2,\ldots, n$ equals this diagonal matrix on the right. When $X_i$ is a random variable with variance $\sigma_i^2,$ $D_{i;n}(\sigma_i^2)$ is the covariance matrix for the multivariate random variable $\mathbf{X}_i = (0,0,\ldots, 0, X_i,0,\ldots, 0)$ with $X_i$ in the $i^\text{th}$ position. These matrices are obviously singular when $n\gt 1$ (because they have entire rows and columns of zeros); indeed, they manifestly each have rank at most $1$ (the rank is $0$ when $\sigma^2_i=0$). Consequently, every matrix of the form $Q^\prime D_{i;n}(\sigma_i^2) Q$ is singular when $n\gt 1.$ Nevertheless, when the $X_i$ are independent $$\operatorname{Cov}(Q^\prime(\mathbf{X}_1 + \mathbf{X}_2 + \cdots + \mathbf{X}_n)Q) = Q^\prime D_{1;n}(\sigma^2_1)Q + \cdots + Q^\prime D_{n;n}(\sigma^2_b)Q = \Sigma.$$ You are now well equipped to answer your questions (and any related questions). We have seen how any covariance matrix is the sum of singular covariance matrices, so clearly the sum of singular covariance matrices need not be singular. Nevertheless, because any partial sum of $k\lt n$ matrices in this decomposition has rank at most $k,$ it is singular, showing that the sum of singular covariance matrices can still be singular. Finally, the sum of nonsingular covariance matrices must be nonsingular due to the positive-definiteness property: when the sum tells you the variance of any vector is zero, then each component of the sum must return zero, too, and positive-definiteness means that this must be the zero vector, proving the sum is positive-definite (and therefore nonsingular). As a demonstration of practicality, here is an R function to express any covariance matrix $\Sigma$ as a sum of such rank-1 (or rank-0) matrices. It returns the components as a list of $n$ matrices. decompose <- function(Sigma) { zero <- function(d, i) {d[-i] <- 0; d} # Zero out all but element `i` of `d` with(svd(Sigma), lapply(seq_along(d), function(i) v %*% diag(zero(d, i)) %*% t(v)) ) } For instance, let's generate a random covariance matrix, in this case a $4\times 4$ matrix: set.seed(17) n <- 4 Sigma <- cov(matrix(rnorm(2*n^2), ncol=n)) After calling decompose, let's re-assemble the components (that is, sum them up) and compare them to the original $\Sigma$: D <- decompose(Sigma) Sigma.check <- matrix(0, n, n) for (d in D) Sigma.check <- Sigma.check + d round(Sigma.check - Sigma, 8) [,1] [,2] [,3] [,4] [1,] 0 0 0 0 [2,] 0 0 0 0 [3,] 0 0 0 0 [4,] 0 0 0 0 The difference is zero: the decomposition is accurate. Nevertheless, we can verify that each matrix in the decomposition is singular. Since $n$ is small, it suffices to show that all their determinants are zero: round(sapply(D, det), 16) [1] 0 0 0 0 And now the acid test: let's generate a lot of the $X_i$ and verify that the covariance matrices of this sample behave as claimed. 
library(MASS) N <- 5e4 X <- lapply(D, function(d) mvrnorm(N, mu=rep(0,n), Sigma=d)) X is an array of $n=4$ samples corresponding to $X_1, \ldots, X_n.$ Each sample should have a nearly singular covariance matrix, so let's look at the singular values of those matrices: lapply(X, function(x) zapsmall(svd(x)$d)) [[1]] [1] 230.3927 0.0000 0.0000 0.0000 [[2]] [1] 193.1368 0.0000 0.0000 0.0000 [[3]] [1] 156.2236 0.0000 0.0000 0.0000 [[4]] [1] 103.298 0.000 0.000 0.000 Sure enough, each one of these covariance matrices has only one nonzero singular value: they are all of rank $1$ and all are singular. Finally, let's assemble them into the sum and check that its covariance matrix is close to $\Sigma:$ X.sum <- matrix(0, N, n) for (x in X) X.sum <- X.sum + x signif(cov(X.sum) - Sigma, 2) [,1] [,2] [,3] [,4] [1,] -0.0012 -0.00140 -0.0045 -0.0040 [2,] -0.0014 -0.00008 0.0013 0.0035 [3,] -0.0045 0.00130 -0.0042 -0.0045 [4,] -0.0040 0.00350 -0.0045 0.0044 These small differences are due to sampling variation and help confirm everything works as intended.
30,085
Is the sum of two singular covariance matrices also singular?
No, consider the singular covariance matrix $\pmatrix{1 & 0\\ 0 & 0}$ summed with the singular covariance matrix $\pmatrix{0&0\\0&1}$. Regarding your second question, the sum of two nonsingular covariance matrices is also nonsingular. This is because the set of nonsingular covariance matrices is exactly the set of positive definite matrices (which is closed under positive combination, see here for the sum case).
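To make the counter-example concrete, here is a minimal R check using the two matrices given above; it also shows that a sum of singular covariance matrices can, in other cases, remain singular:

    A <- matrix(c(1, 0, 0, 0), nrow = 2)   # singular covariance matrix
    B <- matrix(c(0, 0, 0, 1), nrow = 2)   # singular covariance matrix
    det(A); det(B)   # both 0: each matrix is singular
    det(A + B)       # 1: A + B is the 2x2 identity, which is nonsingular
    det(A + A)       # 0: a sum of singular covariance matrices can also stay singular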
30,086
Comparing model evaluations of machine learning and statistics
Is it true to some extent that statisticians are usually more concerned about the model's goodness-of-fit and the corresponding metrics of significance, and not that much about model's generalization capability, and vice versa for the ML scientists? Scientists and analysts using "pure" statistics have recently gotten into some trouble precisely for focusing excessively on metrics of significance. In fact, in 2017, the American Statistical Association held a Symposium on Statistical Inference which to a large extent discussed the over-reliance that many statisticians have on statistical significance and p-values. It's been a fairly serious problem in the sciences recently. But your question is difficult to answer, mostly because there are still no agreed upon definitions that mark the difference between statistics/machine learning/data science. A carpenter 2000 years ago used a hammer and nails. Now, a carpenter uses a power drill. When the power drill was invented, did the carpenters who started using it rename themselves? Did they also rename "nail" "multi-object concatenation device (MOCD)" and rename "hammer" "manual MOCD implementer?" Obviously not. That would be absurd. Yet machine learning practitioners and data scientists have more or less done this, and it makes statistics and machine learning seem to be more different than they really are. High computing power, parallel processing, and new methods became available. Now suddenly a "variable" is a "feature!" You no longer recode your variables, you engage in feature engineering. "Pearson's phi" that was used 100 years ago, you say? No, no, no, it is now the MCC! The list goes on. Differences in jargon often do not reflect any real differences in the underlying mathematics or theory. Having said that, to the extent that the focus of jobs called "statistician" versus "machine learning practitioner" are starting to diverge, I think you'll find that they diverge in largely the ways you expect. "Statistician" jobs tend to be more scientifically conservative, focusing on understanding relationships and testing assumptions. "Machine learning" jobs tend to be in industry where your goal is to deal quickly and efficiently with a lot of data in order to help make decisions that yield a desired result, which is rarely scientific understanding or evidence for/against hypotheses.
30,087
Comparing model evaluations of machine learning and statistics
The answer is no: there are too many claims in your post, not only the main question. The difference between ML and stats is arbitrary, superficial, and not important. There are plenty of statisticians building predictive models where the main goal is, of course, prediction. Common machine learning methods have been developed by statisticians and published in statistical journals; the most popular ML book is The Elements of Statistical Learning, after all. Also, there is a huge subfield of ML concerned with the interpretation of fitted models and the effects of individual variables. I can tell you that (at least contemporary) papers discussing differences between ML and stats are cynical citation grabs and not really worth the time. Both ML and stats are concerned with learning the underlying pattern from the data. In stats, this might be called estimating effects, but it is the same thing. "[In ML] We never want to have a perfect fit of the model to the training data." Sometimes you do (search for double descent). Statisticians do care about generalization; I would even say more than ML people. All these statistical concepts, like controlling the type I error, confidence intervals, credible intervals etc., are there to tell you how well your findings generalize to the population. Now, if a machine learner tests 2 models on a test set and one is better than the other, this does not tell you much about how the finding will generalize to the population unless you calculate some standard errors, p-values or something similar on that difference. This is usually not a problem in computer vision with hundreds of thousands of examples, where pretty much any difference will be significant, but it matters in smaller problems. If a model has a large goodness of fit because it is overfitted, the p/t/F values will not look good, and statisticians will not find significant results in that scenario: the variables will be collinear, the standard errors will be huge, and nothing will be significant. In stats, we often do not use test sets because these statistical procedures were developed to be valid without the need for a test set. If I use an F-test to compare which model is better, the result generalizes to the population, because that is what the test is for; it will not automatically select the model with the higher R2. In summary, statisticians are concerned about generalization, and the common statistical measures and concepts are there for this purpose. On the other hand, ML people are often less interested in this kind of generalization, because simple CV or test-set performance is often not treated as an estimate of the performance in the population.
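As a small illustration of the F-test point above, here is a hedged R sketch with made-up variables: the larger nested model always has the higher R-squared, but the F-test does not automatically prefer it.

    set.seed(3)
    n  <- 100
    x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
    y  <- 1 + 2 * x1 + rnorm(n)          # x2 and x3 are pure noise

    small <- lm(y ~ x1)
    big   <- lm(y ~ x1 + x2 + x3)

    summary(big)$r.squared > summary(small)$r.squared  # TRUE: R^2 always favours the bigger model
    anova(small, big)   # F-test for the extra terms; typically not significant here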
30,088
Comparing model evaluations of machine learning and statistics
Is it true to some extent that statisticians are usually more concerned about the model's goodness-of-fit and the corresponding metrics of significance, and not that much about model's generalization capability, and vice versa for the ML scientists? No. Measuring generalisation capability is a large portion of statistics practice. Cross-validation and bootstrapping techniques address the question of how well statistical models generalise and how to select a model in Occam's sense: A survey of cross-validation procedures for model selection. One thing does seem to diverge significantly in the modern Machine Learning (ML) community, unfortunately: the ML community has largely stopped practicing Occam's razor. This is partially because deep learning defies the core paradigms, for example via the phenomenon of deep double descent. The ML community tries to establish generalisation via train-test curves aimed at detecting overfitting, i.e., a single holdout only. But to diagnose overfitting we really need two models to compare, not only a holdout evaluation of a single model. This is best described by Andrew Gelman, see What is overfitting: Overfitting is when you have a complicated model that gives worse predictions, on average, than a simpler model. However, there is now significant research activity in deep learning to introduce Occam's razor indirectly via Neural Architecture Search (NAS). It does not aim directly at preventing overfitting but rather at model compression, yet it is in effect a form of overfitting prevention in Gelman's sense.
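To illustrate Gelman's definition, here is a rough R sketch (simulated data and arbitrary model choices, not part of the original answer): the over-flexible model fits the training data better but typically predicts worse out of sample than the simpler one, and the comparison requires both models.

    set.seed(42)
    n <- 30
    x <- runif(n, -1, 1)
    y <- sin(2 * x) + rnorm(n, sd = 0.4)
    x_new <- runif(1000, -1, 1)                  # fresh data for out-of-sample checking
    y_new <- sin(2 * x_new) + rnorm(1000, sd = 0.4)

    simple  <- lm(y ~ x)             # simple model
    complex <- lm(y ~ poly(x, 15))   # deliberately over-flexible model

    mse <- function(fit, xs, ys) mean((ys - predict(fit, data.frame(x = xs)))^2)
    c(simple = mse(simple, x_new, y_new), complex = mse(complex, x_new, y_new))
    # the complex model usually has the larger out-of-sample MSE: it is overfit
    # only relative to the simpler model, which is Gelman's point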
30,089
Comparing model evaluations of machine learning and statistics
The question is very long, and covering it in full would require a very lengthy answer, so here I will try to provide my view with very brief bullet points.

- The question over-emphasises the difference between machine learning and statistics. For a good reflection on this I recommend reading the answer by Michael I. Jordan given during his Q&A reddit session: link
- You use "pure" statisticians in an unusual way. In my experience most "pure" statisticians don't care about concrete data at all; they are more interested in creating estimators and proving properties of these estimators. Applied statisticians (of which machine learning practitioners are one type) then use these estimators on concrete datasets in order to answer practical questions.
- "Pure" statisticians come up with methods and then test how well those methods behave in various contexts. Applied statisticians then use well-behaved estimators that align with the properties of their data. Machine learning practitioners do the same: they use well-tested concepts like cross-validation (itself a product of statistics) to estimate the accuracy of their models.
- A p-value is a measure of uncertainty, not accuracy. The concept itself is best understood as a form of enhanced induction, where you test a theory about the real world having observed only a few facts. For example, a p-value can be used to check how certain we are about our measure of accuracy on the test set.
- Accuracy cannot be a substitute for a p-value. Consider a scenario where we ask if there is a difference between two classes, and the overlap of those classes is 99%. Whatever you do with pure prediction, you will only be able to get an accuracy of about 51%. But with a big enough sample size you will reach an arbitrarily small p-value, stating that those two classes are indeed different (a small simulation illustrating this follows below).
- Contrary to the statement that statisticians don't care about model generalisation, it's the opposite. Statisticians care about the generalisation of everything, not just accuracy. That's what things like confidence intervals and p-values try to achieve: to give a hint of how well some estimate on a sample generalises to a broader population.
- In my personal opinion one of the bigger differences between the communities of statistics and machine learning (with exceptions, of course) is the overall context of the effort. Statisticians place more emphasis on assumptions and on selecting the best tool/model for the job at hand: you understand your data, check some assumptions, have a question, and devise the best strategy for answering that single question and nothing else. Machine learning people, in contrast, place emphasis on the best overall possible method. For example, I would bet that a lot of ML practitioners hope to achieve the perfect model that can mimic the human brain, learn everything, and solve multiple problems without being pre-trained, and it seems that is what a big chunk of the community is working towards.
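Here is that simulation, a minimal R sketch with arbitrary numbers: two heavily overlapping classes give near-chance accuracy for any classifier, while a large sample still yields a tiny p-value for the difference in means.

    set.seed(1)
    n <- 1e5
    a <- rnorm(n, mean = 0)       # class 1
    b <- rnorm(n, mean = 0.05)    # class 2: almost complete overlap with class 1

    t.test(a, b)$p.value          # tiny: the difference in means is "significant"

    # the best threshold classifier here cuts at the midpoint of the two means
    cutoff <- 0.025
    mean(c(a < cutoff, b >= cutoff))   # overall accuracy: barely above 0.5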
30,090
Why doesn't the fact that 1 median is lower than another median, mean that most in group 1 are less than most in group 2?
I think that the reason you were marked as incorrect is not so much that the answer you gave to the multichoice question was wrong, rather that option 3 "Male and females have similar right skewed distributions with the former, 20 minutes shifted to the left" would have been a better choice as it is more informative based on the information provided.
30,091
Why doesn't the fact that 1 median is lower than another median, mean that most in group 1 are less than most in group 2?
Here's the smallest counter-example I could find:

- A ([1, 4, 10]) and B ([0, 6, 9]) have the same average (5).
- B has a larger median (6) than A (4).
- Yet there's a 5/9 probability that a random A element is larger than a random B element.

Here's another example with 4 elements (the figure is omitted here); a quick numerical check of the 3-element example is sketched below.
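Here is that check, a minimal R sketch using the two groups as given:

    A <- c(1, 4, 10)
    B <- c(0, 6, 9)
    mean(A) == mean(B)        # TRUE: same average (5)
    median(A); median(B)      # 4 and 6: B has the larger median
    mean(outer(A, B, ">"))    # 5/9: a random A element beats a random B element most of the time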
30,092
Why doesn't the fact that 1 median is lower than another median, mean that most in group 1 are less than most in group 2?
"Most men are faster than most women" is potentially a little ambiguous, but I would normally interpret the intent of it to be that if we look at random parirings, most of the time the man would be faster -- i.e. $P(M_i<F_j)>\frac12$ for random $i,j$ (where $M_i$ is 'time for the $i$-th male' etc). Of course other interpretations of the phrase are possible (that's what ambiguity is, after all) and some of those other possibilities might be consistent with your reasoning. [We also have the issue of whether we're talking about samples or populations... "most men [...] most women" seems to be a population statement (about a population of potential times) but we only have observed times that we seem to be treating as a sample, so we must be careful with how broad we make the claim.] Note that $P(M_i<F_j)>\frac12$ is not implied by $\widetilde{M}<\widetilde{F}$. They can go in opposite directions. [I am not saying you're wrong in thinking that the proportion of random M-F pairs where the man was faster than the woman is more than 1/2 -- you're almost certainly correct. I am just saying you can't tell it by comparing medians. Nor can you tell it by looking at the proportion in each sample above or below the median of the other sample. You'd have to make a different comparison.] That is, while the median man may be faster than the median woman, it is possible to have a sample of times (or a continuous distribution of times, for that matter) where the chance that a random man is faster than a random woman is less than $\frac12$. In large samples the two opposite indications can each be significant. Example: Data set A: 1.58 2.10 16.64 17.34 18.74 19.90 1.53 2.78 16.48 17.53 18.57 19.05 1.64 2.01 16.79 17.10 18.14 19.70 1.25 2.73 16.19 17.76 18.82 19.08 1.42 2.56 16.73 17.01 18.86 19.98 Data set B: 3.35 4.62 5.03 20.97 21.25 22.92 3.12 4.83 5.29 20.82 21.64 22.06 3.39 4.67 5.34 20.52 21.10 22.29 3.38 4.96 5.70 20.45 21.67 22.89 3.44 4.13 6.00 20.85 21.82 22.05 Data set C: 6.63 7.92 8.15 9.97 23.34 24.70 6.40 7.54 8.24 9.37 23.33 24.26 6.18 7.74 8.63 9.62 23.07 24.80 6.54 7.37 8.37 9.09 23.22 24.16 6.57 7.58 8.81 9.08 23.43 24.45 (The data are here, but being used for a different purpose there -- to my recollection I generated this one myself) Note that the proportion of A's < B's is 2/3, the proportion of A < C is 5/9 and the proportion of B < C is 2/3. Both A vs B and B vs C are significant at the 5% level but we can achieve any level of significance simply by adding sufficient copies of the samples. We can even avoid ties, by duplicating the samples but adding sufficiently tiny jitter (sufficiently smaller than the smallest gap between points) The sample medians go the other direction: median(A) > median (B) > median (C) Again we could achieve significance for some comparison of medians - to any significance level - by repeating the samples. To relate it to the present problem, imagine that A is "women's times" and B is "men's times". Then the median men's time is faster, but a randomly chosen man will 2/3 of the time be slower than a randomly chosen woman. Taking our cue from samples A and C we can generate a larger set of data (in R) as follows: n <- 300 F <- c(runif(n/3,0,5),runif(n-n/3,15,20)) M <- c(runif(n-n/3,7.5,12.5),runif(n/3,22.5,27.5)) The median of F will be around 16.25 while the median of M will be around 11.25 but the proportion of cases where F < M will be 5/9. 
[If we replaced the n/3 with a binomial variate with parameters $n$ and $\frac13$ we'd be sampling from a population where the median of the distribution of F is at 16.25 while the median of the distribution of M is at 11.25. Meanwhile in that population the probability that F < M will again be 5/9.] Note also that $P(F<\text{med}(M))=\frac23$ and $P(M>\text{med}(F))=\frac23$ while $\text{med}(M)<\text{med}(F)$ (by a substantial distance).
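To check those claims numerically, here is a small R sketch of the same construction (the variables are renamed to avoid masking R's built-in F):

    set.seed(1)
    n <- 300
    f_times <- c(runif(n/3, 0, 5),          runif(n - n/3, 15, 20))    # "women's times"
    m_times <- c(runif(n - n/3, 7.5, 12.5), runif(n/3, 22.5, 27.5))    # "men's times"

    median(m_times)                     # about 11.25: the median man is faster ...
    median(f_times)                     # ... than the median woman (about 16.25)
    mean(outer(f_times, m_times, "<"))  # yet about 5/9 of random woman-man pairs have the woman faster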
30,093
Why doesn't the fact that 1 median is lower than another median, mean that most in group 1 are less than most in group 2?
The following figures are taken from this blog post, which illustrates an important practical application of these ideas. Standardization provides a powerful device for comparing 2 distributions. The following 3 figures compare heights of 130-month-old boys and girls from England's National Child Measurement Programme (NCMP). (This was the modal age in this data set; I selected it simply to get the most data, and therefore the smoothest plots, within a single age cohort.) Figure 1: Heights of boys and girls aged 130 months, from England’s National Child Measurement Programme (NCMP) Figure 2: Percentiles of height for boys and girls aged 130 months. Source: English NCMP Figure 3: Distribution of heights of 130-month-old girls relative to boys of the same age. In the last of these figures, the height comparison has been standardized according to boys' heights. Thus, reading along the dotted gray lines in Figure 3, you can make statements such as: The median (i.e., 50th-percentile) height for boys is just about 45th percentile for girls. Thus, 100% – 45%=55% of girls were taller than the median boy. The top-quartile height (75th percentile) for girls hits the top quintile (80th percentile) for boys. Thus, among children aged 130 mos, a girl who is taller than 3 out of 4 girls is also taller than 4 out of 5 boys. One point of possible confusion in this plot does deserve mention. Although the boys' 45° line is 'higher' on the plot than the girls' magenta curve, this observation nevertheless corresponds to the well-known fact that at this age (these are 6th graders), the girls are typically taller than the boys. Note that this tallerness is properly reflected in the fact that the magenta curve is shifted to the right relative to the blue line. This approach is quite generic. Under such a comparison, one of the groups — the one to which you standardize — becomes the 45° line. The other group may in general be any monotone increasing curve drawn from lower left to top right. Provided that the underlying distributions are continuous (the densities lack point masses), the compared curve will be continuous. If the underlying densities share the same support, the curve must run from $(0,0)$ to $(1,1)$. Your original question can now be recast in geometrical terms, as a question about whether you could draw the magenta curve of Figure 3 so as to achieve simultaneously (a) the postulated relation between the medians and (b) the slightly elusive relation that @Glen_b elucidated (correctly, I believe) in his answer. I wonder if distributional discontinuities (point masses in the densities) might enable a 'pathological' case to be provided. I conjecture that any such pathological case will be the 'exception that proves the rule'. If one makes the most straightforward, logical translation of your quiz question into more formal language amenable to analysis, then (using the setting of childrens' heights from above) we might like to say an individual $x$ has the property TMB if $x$ is taller than most boys. Then your quiz question asked simply whether most girls have the TMB property. If one defines 'most' to mean more than half, then having the TMB property means being taller than the median-height boy. Asking whether most girls have the TMB property then amounts to asking whether the median girl has this property. On this account, the answer to the quiz question would be yes. 
On the other hand, if the actual intent of 'most' was ">50%", one might expect the more precise phrase "a majority of" to have been employed. If somebody tells me something "probably" will happen, I would think a subjective probability of 60% or more is being alluded to. Likewise, "most" to me means something a bit more like 70–80%. Clearly, from the plot above, if 'most' is taken as a criterion any more stringent than 52.5%, then you can't say "most girls [have the property that they] are taller than most boys." I wonder if part of the rationale for the quiz question was to stimulate an examination of words as they relate to numerical notions. (If you think this is all a bit silly, consider these graphs, showing how people tend to interpret different probabilistic words and phrases.) Perhaps the intent also was to underscore the point that a lot of variation is present in real-world distributions, and that a single statistic (median, mean, what-have-you) will rarely support broad, sweeping statements.
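The curve in Figure 3 can be reproduced from raw data with a few lines of R. This is only a sketch with simulated heights (the NCMP values are not reproduced here), but the construction is the same: evaluate the girls' empirical CDF at the boys' quantiles.

    set.seed(2)
    boys  <- rnorm(5000, mean = 140.0, sd = 6.5)   # hypothetical heights in cm
    girls <- rnorm(5000, mean = 141.5, sd = 6.5)   # hypothetical: girls slightly taller

    p <- seq(0.01, 0.99, by = 0.01)
    girls_pct <- ecdf(girls)(quantile(boys, p))    # girls' percentile at each boys' percentile

    plot(p, girls_pct, type = "l", col = "magenta",
         xlab = "Percentile among boys", ylab = "Percentile among girls")
    abline(0, 1, lty = 2)                          # the boys' 45-degree reference line

    1 - ecdf(girls)(median(boys))                  # share of girls taller than the median boy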
30,094
Can I test for correlation between variables before standardize them?
Can I test for correlation between variables before standardize them? I am not quite sure what should I do first. Correlation will be the same regardless of whether you calculate it before or after standardization. To see this, it is enough to know that correlation is invariant to location shifts and positive rescaling. Take $b \in \mathbb{R}$ and $a>0$; then $$ \begin{aligned} \text{Corr}(aX-b,Y) &= \frac{\text{Cov}(aX-b,Y)}{\sqrt{\text{Var}(aX-b)}\sqrt{\text{Var}(Y)}} \\ &= \frac{\text{Cov}(aX,Y)}{\sqrt{\text{Var}(aX)}\sqrt{\text{Var}(Y)}} \\ &= \frac{a \text{Cov}(X,Y)}{\sqrt{a^2 \text{Var}(X)}\sqrt{\text{Var}(Y)}} \\ &= \frac{a \text{Cov}(X,Y)}{a \sqrt{\text{Var}(X)}\sqrt{\text{Var}(Y)}} \\ &= \frac{\text{Cov}(X,Y)}{\sqrt{\text{Var}(X)}\sqrt{\text{Var}(Y)}} \\ &= \text{Corr}(X,Y) \end{aligned} $$ The first equality is a definition. The second uses the property that covariance as well as variance are invariant to location shifts. The third uses the properties of covariance and variance with respect to multiplication by a constant. The fourth uses the fact that $a>0$. The fifth just cancels out the multipliers. The sixth is again a definition. This covers standardization, which is subtracting the mean and dividing by the standard deviation (a positive number).
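A quick numerical confirmation in R, using arbitrary simulated variables:

    set.seed(123)
    x <- rnorm(100, mean = 50, sd = 10)
    y <- 0.3 * x + rnorm(100, sd = 5)
    cor(x, y)                    # raw variables
    cor(scale(x), scale(y))      # standardized variables: the same value up to rounding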
30,095
Can I test for correlation between variables before standardize them?
Yes, verifying the correlations between your explanatory variables is part of the data exploration suggested in Zuur et al. (2010), A protocol for data exploration to avoid common statistical problems. This should be done before you standardize them and construct your GLMMs. However, I'm not sure how it would affect the correlations if you standardized your explanatory variables first, but I would guess that the correlation results would be essentially the same.
30,096
Can I test for correlation between variables before standardize them?
+1 to both answers, but just to state the obvious: linear correlation is defined as a scaled version of the covariance between two variables, where the scaling factor is simply the product of the standard deviations of the two variables. Therefore, standardising (or, for that matter, any shift and positive rescaling of the variables examined) will not change the correlation, as any rescaling effect on the covariance will be nullified by the scale normalisation that gives the final correlation estimate.
30,097
How do you interpret results from unit root tests?
This tests the null hypothesis that Demand follows a unit root process. You usually reject the null when the p-value is less than or equal to a specified significance level, often 0.05 (5%), 0.01 (1%), or even 0.1 (10%). Your approximate p-value is 0.2924, so you would fail to reject the null in all these cases, but that does not imply that the null hypothesis is true; the data are merely consistent with it. The other way to see this is that your test statistic is smaller (in absolute value) than the 10% critical value. If you observed a test statistic like -4, then you could reject the null and claim that your variable is stationary. This might be a more familiar way to think about it if you remember that you reject when the test statistic is "extreme". I find the absolute-value comparison a bit confusing, so I prefer to look at the p-value. But you aren't done yet. Some things to worry about and try:

- You don't have any lags here. There are three schools of thought on how to choose the right number. One is to use the frequency of the data to decide (4 lags for quarterly data, 12 for monthly). Two, choose a number of lags that you are confident is larger than needed, and trim away the longest lag, one by one, as long as it is insignificant; this is a stepwise approach and can lead you astray. Three, use the modified DF test (dfgls in Stata), which includes estimates of the optimal number of lags to use. This test is also more powerful in the statistical sense of the word.
- You also don't have drift or trend terms. If a graph of the data shows an upward trend over time, add the trend option. If there's no trend but you have a nonzero mean, the default option you have is fine.

It might help if you post a graph of the data.
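If you want to reproduce this kind of check outside Stata, here is a rough R sketch using the tseries package on simulated series (the data below are made up, not the Demand series from the question):

    library(tseries)
    set.seed(7)
    rw  <- cumsum(rnorm(200))                   # random walk: has a unit root
    ar1 <- arima.sim(list(ar = 0.5), n = 200)   # stationary AR(1)

    adf.test(rw)    # large p-value: fail to reject the unit-root null
    adf.test(ar1)   # typically a small p-value: evidence of stationarity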
How do you interpret results from unit root tests?
This tests the null hypothesis that Demand follows a unit root process. You usually reject the null when the p-value is less than or equal to a specified significance level, often 0.05 (5%), or 0.01 (
How do you interpret results from unit root tests? This tests the null hypothesis that Demand follows a unit root process. You usually reject the null when the p-value is less than or equal to a specified significance level, often 0.05 (5%), or 0.01 (1%) and even 0.1 (10%). Your approximate p-value is 0.2924, so you would fail to reject the null in all these cases, but that does not imply that the null hypothesis is true. The data are merely consistent with it. The other way to see this is that your test statistic is smaller (in absolute value) than the 10% critical value. If you observed a test statistic like -4, then you could reject the null and claim that your variable is stationary. This might be a more familiar way of looking at it if you remember that you reject when the test statistic is "extreme". I find the absolute-value comparison a bit confusing, so I prefer to look at the p-value. But you aren't done yet. Some things to worry about and try: You don't have any lags here. There are three schools of thought on how to choose the right number. One is to use the frequency of the data to decide (4 lags for quarterly, 12 for monthly). Two, choose some number of lags that you are confident is larger than needed, and trim away the longest lag as long as it is insignificant, one by one. This is a stepwise approach and can lead you astray. Three, use the modified DF test (dfgls in Stata), which includes estimates of the optimal number of lags to use. This test is also more powerful in the statistical sense of that word. You also don't have drift or trend terms. If a graph of the data shows an upward trend over time, add the trend option. If there's no trend but you have a nonzero mean, the default option you have is fine. It might help if you post a graph of the data.
How do you interpret results from unit root tests? This tests the null hypothesis that Demand follows a unit root process. You usually reject the null when the p-value is less than or equal to a specified significance level, often 0.05 (5%), or 0.01 (
30,098
How do you interpret results from unit root tests?
Addition to @Dimitriy: Stata runs the OLS regression for the ADF test in first-difference form. So the null is that the coefficient on the lagged level of the dependent variable (Demand here) on the right-hand side is zero (use the regress option to confirm that it is running the regression in first-difference form). The alternative is that it is less than zero (a one-tailed test). So, when you compare the computed test statistic with the critical value, you reject the null if the computed value is smaller than the critical value (note that this is a one-tailed, left-tailed test). In your case, -1.987 is not smaller than -3.580 (the 1% critical value). [Try not to use the absolute value, because that is usually applied to two-tailed tests.] So we do not reject the null at 1%. If you go on like that, you will see that the null is also not rejected at 5% or 10%. This is also confirmed by the MacKinnon approximate p-value for Z(t) = 0.2924, which says that the null would be rejected only at a significance level of around 30%, which is quite high compared with the traditional levels (1, 5, and 10%). More theoretically: under the null, Demand follows a unit root process, so we can't apply the usual central limit theorem. We instead need to use the functional central limit theorem. In other words, the test statistic does not follow the t distribution but the tau (Dickey-Fuller) distribution, so we can't use critical values from the t distribution.
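To make the first-difference form concrete, here is a rough sketch (Python with statsmodels, on a made-up random-walk series) of the plain Dickey-Fuller regression whose lagged-level coefficient is being tested:

```python
import numpy as np
import statsmodels.api as sm

# Made-up series standing in for Demand
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=200))

# Dickey-Fuller regression in first-difference form:
#   dy_t = alpha + rho * y_{t-1} + e_t
# The unit-root null is H0: rho = 0 against H1: rho < 0 (one-sided, left tail).
dy = np.diff(y)              # first differences dy_t
y_lag = y[:-1]               # lagged level y_{t-1}
X = sm.add_constant(y_lag)   # adds the drift term alpha

fit = sm.OLS(dy, X).fit()
print(fit.params)      # the slope on the lagged level is rho
print(fit.tvalues[1])  # this t-ratio is the DF statistic; compare it with
                       # Dickey-Fuller (tau) critical values, not the usual t-distribution
```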
How do you interpret results from unit root tests?
Addition to @Dimitriy: Stata runs the OLS regression for the ADF test in first-difference form. So the null is that the coefficient on the lagged level of the dependent variable (Demand here) on the right
How do you interpret results from unit root tests? Addition to @Dimitriy: Stata runs the OLS regression for the ADF test in first-difference form. So the null is that the coefficient on the lagged level of the dependent variable (Demand here) on the right-hand side is zero (use the regress option to confirm that it is running the regression in first-difference form). The alternative is that it is less than zero (a one-tailed test). So, when you compare the computed test statistic with the critical value, you reject the null if the computed value is smaller than the critical value (note that this is a one-tailed, left-tailed test). In your case, -1.987 is not smaller than -3.580 (the 1% critical value). [Try not to use the absolute value, because that is usually applied to two-tailed tests.] So we do not reject the null at 1%. If you go on like that, you will see that the null is also not rejected at 5% or 10%. This is also confirmed by the MacKinnon approximate p-value for Z(t) = 0.2924, which says that the null would be rejected only at a significance level of around 30%, which is quite high compared with the traditional levels (1, 5, and 10%). More theoretically: under the null, Demand follows a unit root process, so we can't apply the usual central limit theorem. We instead need to use the functional central limit theorem. In other words, the test statistic does not follow the t distribution but the tau (Dickey-Fuller) distribution, so we can't use critical values from the t distribution.
How do you interpret results from unit root tests? Addition to @Dimitriy: Stata runs the OLS regression for the ADF test in first-difference form. So the null is that the coefficient on the lagged level of the dependent variable (Demand here) on the right
30,099
How do you interpret results from unit root tests?
STATA: If $z > z_{0.05}$, where $z_{0.05}$ is the 5% critical value of the test, we "accept" (fail to reject) $H_0$ that the series has a unit root; if there are unit roots, the series is not stationary. Accordingly, if the $p$-value of $z(t)$ is not significant, the series is not stationary. If $z \leq z_{0.05}$, we reject the null hypothesis $H_0$ that the series has a unit root; if there are no unit roots, we conclude the series is stationary. Accordingly, a significant $p$-value of $z(t)$ leads us to conclude that the series is stationary.
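As a minimal sketch of this decision rule (Python with statsmodels; the simulated series is only a placeholder for your own data):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=300))  # placeholder series; replace with your own data

stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y)

if stat <= crit["5%"]:   # equivalently: pvalue <= 0.05
    print("reject H0 (no unit root): treat the series as stationary")
else:
    print("fail to reject H0 (unit root): treat the series as non-stationary")
```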
How do you interpret results from unit root tests?
STATA: If $z > z_{0.05}$, where $z_{0.05}$ is the 5% critical value of the test, we "accept" (fail to reject) $H_0$ that the series has a unit root; if there are unit roots, the series is not stationary.
How do you interpret results from unit root tests? STATA: If $z > z_{0.05}$, where $z_{0.05}$ is the 5% critical value of the test, we "accept" (fail to reject) $H_0$ that the series has a unit root; if there are unit roots, the series is not stationary. Accordingly, if the $p$-value of $z(t)$ is not significant, the series is not stationary. If $z \leq z_{0.05}$, we reject the null hypothesis $H_0$ that the series has a unit root; if there are no unit roots, we conclude the series is stationary. Accordingly, a significant $p$-value of $z(t)$ leads us to conclude that the series is stationary.
How do you interpret results from unit root tests? STATA: If $z > z_{0.05}$, where $z_{0.05}$ is the 5% critical value of the test, we "accept" (fail to reject) $H_0$ that the series has a unit root; if there are unit roots, the series is not stationary.
30,100
What is Item Response Theory (IRT) called for continuous response?
If you have a continuous indicator, then you would use factor analysis. Think of FA as linear regression and IRT as its logistic regression brother.
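As a rough illustration of that analogy (a sketch in Python with scikit-learn; the simulated items and the single-factor choice are assumptions made only for the example):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Made-up continuous item responses: 500 respondents, 5 items driven by one latent trait
rng = np.random.default_rng(3)
theta = rng.normal(size=(500, 1))                 # latent trait ("ability")
loadings = np.array([[0.9, 0.8, 0.7, 0.6, 0.5]])  # item loadings, analogous to IRT discriminations
items = theta @ loadings + rng.normal(scale=0.5, size=(500, 5))

# Linear latent-variable model for continuous indicators (the FA analogue of an IRT model)
fa = FactorAnalysis(n_components=1).fit(items)
print(fa.components_)          # estimated loadings
scores = fa.transform(items)   # factor scores, analogous to IRT ability estimates
```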
What is Item Response Theory (IRT) called for continuous response?
If you have a continuous indicator, then you would use factor analysis. Think of FA as linear regression and IRT as its logistic regression brother.
What is Item Response Theory (IRT) called for continuous response? If you have a continuous indicator, then you would use factor analysis. Think of FA as linear regression and IRT as its logistic regression brother.
What is Item Response Theory (IRT) called for continuous response? If you have a continuous indicator, then you would use factor analysis. Think of FA as linear regression and IRT as its logistic regression brother.