19,601
What does it mean for a statistical test to be "robust"?
Roughly speaking, a test or estimator is called 'robust' if it still works reasonably well even when some of the assumptions required for its theoretical development are not met in practice.

Comments: If you need to do a one-factor ("one-way") ANOVA for data with different variances at each level of the factor, then it is best to use a variant of one-way ANOVA, such as oneway.test in R, that does not require equal variances. As you say, a 'pooled' t test or a simple one-way ANOVA where the numbers of replications per factor level differ greatly may be problematic if variances also differ among levels of the factor.

Some texts seem to say the two-sample t test and one-way ANOVA are OK for non-normal data whenever there are more than 30 replications per group. But this may not be true if data within groups are highly skewed. If the groups in a two-sample t test or one-factor ANOVA are far from normal, but differences between groups are mainly a 'shift' of location (with little change in shape or variance), then it may be best to use the Wilcoxon rank-sum test or the Kruskal-Wallis nonparametric test instead of the t test or ANOVA, respectively.

Note: I could show an example to illustrate, if you could say what test is of particular interest and which assumption you feel unsure of.
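The pooled-versus-unequal-variances point can be checked directly by simulation. Below is a minimal sketch (assuming Python with scipy; the group sizes and standard deviations are arbitrary choices for illustration) estimating the type I error of the pooled t test and of the unequal-variance (Welch) t test when both the variances and the sample sizes differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 2000
n1, n2 = 10, 40    # unequal group sizes (arbitrary choice)
s1, s2 = 4.0, 1.0  # unequal standard deviations (arbitrary choice)

reject_pooled = reject_welch = 0
for _ in range(n_sims):
    x = rng.normal(0, s1, n1)  # both groups share the same mean,
    y = rng.normal(0, s2, n2)  # so every rejection is a type I error
    if stats.ttest_ind(x, y, equal_var=True).pvalue < 0.05:
        reject_pooled += 1
    if stats.ttest_ind(x, y, equal_var=False).pvalue < 0.05:
        reject_welch += 1

pooled_rate = reject_pooled / n_sims  # well above the nominal 0.05
welch_rate = reject_welch / n_sims    # close to the nominal 0.05
```

With settings like these (the small group having the large variance), the pooled test's rejection rate under a true null lands far above the nominal 5%, while the Welch version stays close to it.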
19,602
What does it mean for a statistical test to be "robust"?
When we say that a procedure is "robust" or "robust to [a particular failure of assumption]" we mean that the procedure still works well when the underlying assumption is not met. So, in the present case, the quoted statement is telling you that, under the stipulated conditions, ANOVA still works well even when the normality or homoskedasticity conditions in a model are not a realistic reflection of the data.
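As a rough illustration of "still works well", here is a minimal simulation sketch (assuming Python with scipy; the exponential distribution and the group size of 30 are arbitrary choices): with equal, moderate group sizes, the realized type I error of one-way ANOVA stays near the nominal 5% even though the data are clearly non-normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 2000, 30  # arbitrary simulation count and group size

rejections = 0
for _ in range(n_sims):
    # Three groups drawn from the SAME skewed (exponential) distribution,
    # so the normality assumption fails but the null hypothesis is true.
    groups = [rng.exponential(scale=1.0, size=n) for _ in range(3)]
    if stats.f_oneway(*groups).pvalue < 0.05:
        rejections += 1

empirical_level = rejections / n_sims  # stays near the nominal 0.05
```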
19,603
What does it mean for a statistical test to be "robust"?
We must be specific about what the claim is. It's not sufficient to wave our hands and say something vague like the test "works well" in those circumstances -- that is not what was examined in order to make the statement. Both statements are specifically about accuracy of the significance level (a.k.a. "level-robustness"). That is to say, the type I error rate is claimed not to be too far from what you would calculate/choose under the (violated) assumption in those circumstances.

Even in that restricted sense, these sorts of general claims are too vague to be useful in practice. For example, you don't really know how large is sufficiently large for your purposes in the first case, because you don't know the population distribution (if you did, you wouldn't need to consider this issue at all!).

Of course, significance level is not the only consideration with tests. Certainly I'd hope that people care about power. Sadly, there is little direct evidence that the people who repeat these statements care much about it in practice, since such statements are so rarely accompanied by even the merest mention of what happens to power. In the first case, large samples don't save you when you're looking at relative efficiency (the relative sample sizes needed to achieve a given level of power) -- relative efficiency can be arbitrarily poor in large samples -- so if your sample sizes were large because your anticipated effect size was small, you might have some potentially serious issues.
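The power point can be made concrete with a deliberately extreme simulation sketch (assuming Python with scipy; the Cauchy distribution, sample sizes, and shift are all arbitrary illustrative choices): under a pure location shift in very heavy-tailed data, the t test's level may be fine while its power lags far behind a rank test's, even at moderate sample sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n, shift = 1000, 50, 1.0  # arbitrary choices; Cauchy is an
                                  # extreme heavy-tailed example

t_rej = mw_rej = 0
for _ in range(n_sims):
    x = rng.standard_cauchy(n)          # group 1
    y = rng.standard_cauchy(n) + shift  # group 2: pure location shift
    if stats.ttest_ind(x, y).pvalue < 0.05:
        t_rej += 1
    if stats.mannwhitneyu(x, y, alternative='two-sided').pvalue < 0.05:
        mw_rej += 1

t_power = t_rej / n_sims    # t test loses power to the heavy tails
mw_power = mw_rej / n_sims  # the rank test retains far more power
```

The point is not the particular numbers but that "the level is fine" and "the power is fine" are separate questions; here the rank test detects the shift much more often than the t test does.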
19,604
Why are density functions sometimes written with conditional notation?
In a Bayesian context, the parameters are random variables, so in that context the density is actually the conditional density of $X \mid (\mu, \sigma)$. In that setting, the notation is very natural.

Outside of a Bayesian context, it is just a way to make it clear that the density depends (here I am using this word colloquially, not probabilistically) on the parameters. Some people use $f_{\mu, \sigma}(x)$ or $f(x; \mu, \sigma)$ to the same effect.

This latter point can be important in the context of likelihood functions. A likelihood function is a function of the parameters $\theta$, given some data $x$. The likelihood is sometimes written as $L(\theta \mid x)$ or $L(\theta ; x)$, or sometimes as $L(\theta)$ when the data $x$ is understood to be given. What is confusing is that in the case of a continuous distribution, the likelihood function is defined as the value of the density corresponding to the parameter $\theta$, evaluated at the data $x$, i.e. $L(\theta; x) := f_\theta(x)$. Writing $L(\theta; x) = f(x)$ would be confusing, since the left-hand side is a function of $\theta$, while the right-hand side ostensibly does not appear to depend on $\theta$. While I prefer writing $L(\theta; x) := f_\theta(x)$, some might write $L(\theta; x) := f(x \mid \theta)$. I have not really seen much consistency in notation across different authors, although someone more well-read than I can correct me if I am wrong.
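To make the "function of $\theta$ with the data fixed" reading concrete, here is a small Python sketch (assuming scipy; the data values are made up for illustration) treating a normal density as a likelihood:

```python
import numpy as np
from scipy import stats

# Fixed data x; the likelihood L(mu, sigma; x) is the joint density
# f_{mu,sigma}(x) read as a function of the parameters, not of x.
x = np.array([4.8, 5.1, 5.3, 4.9, 5.6])  # illustrative data

def log_likelihood(mu, sigma, x=x):
    # Same density values as f_theta(x), but varied over theta = (mu, sigma)
    return np.sum(stats.norm.logpdf(x, loc=mu, scale=sigma))

ll_near = log_likelihood(5.1, 0.3)  # parameters close to the data
ll_far = log_likelihood(0.0, 0.3)   # parameters far from the data
```

Evaluating the same density at the same data but at different parameter values gives different likelihoods; parameters near the data yield the larger value.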
19,605
Why are density functions sometimes written with conditional notation?
This notation is often used in the MLE context, to distinguish the roles of the data and the parameters: the likelihood is evaluated conditional on candidate parameters, while the estimates are conditional on the data. In MLE you do something like this: $$\hat\mu,\hat\sigma \mid X= \underset{\mu,\sigma}{\operatorname{argmax}}\ \mathcal L(X\mid\mu,\sigma)$$ $$\mathcal L(X\mid\mu,\sigma)=\prod_{x_i\in X} f(x_i\mid\mu,\sigma) $$ So this notation emphasizes that you use the PDF $f(\cdot)$ of the data set, conditional on a candidate set of parameters, to obtain the likelihood function $\mathcal L$. Then you pick the set that maximizes the likelihood as your solution $\hat\mu,\hat\sigma$. Thus, the solution is truly conditional on the data set $X$, while the likelihood is conditional on the candidate parameter set $\mu,\sigma$. That's why this notation is good for didactic purposes: it shows how the conditioning sort of "flips" between the left- and right-hand sides.
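A minimal numerical version of this recipe (a sketch assuming Python with scipy, using a general-purpose optimizer rather than the closed-form normal MLE; the data are simulated for illustration):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.5, size=500)  # the fixed data set X

def neg_log_lik(theta, x=x):
    # L(X | mu, sigma): product of densities, conditional on candidate
    # parameters. Optimizing log(sigma) keeps sigma positive.
    mu, log_sigma = theta
    return -np.sum(stats.norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))

# argmax over (mu, sigma), conditional on the data X
res = optimize.minimize(neg_log_lik, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

For the normal model the numerical optimum should match the closed-form MLEs, i.e. the sample mean and the (biased, ddof = 0) sample standard deviation.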
19,606
Is there a colloquial way of saying "small but significant"?
If the audience knows what "statistically significant" and $p \le 0.05$ mean, there is not much that can go wrong. But otherwise, I really like the fantastic suggestion by Jordan Ellenberg as an alternative to "statistically significant" in general:

[...] "statistically noticeable" or "statistically detectable" instead of "statistically significant"! That would be truer to the meaning of the method [...] - Jordan Ellenberg in his book "How Not to Be Wrong: The Power of Mathematical Thinking"

Edit based on the short discussion in the comments: Note this answer does not specifically address the "small effect" situation, but rather proposes an apt wording for statistical significance in general, so that you don't have to fear that people take "statistically significant" to mean "relevant". In this way, you can cleanly and understandably distinguish the two topics of "hypothesis testing" and "effect size".
19,607
Is there a colloquial way of saying "small but significant"?
I have found the distinction between statistically significant and physically significant is often useful when communicating results to non-statisticians. The phrases clinically significant or practically significant may be preferred in some fields (thanks @Jelsema). You are describing a situation where the effect may be statistically significant, but physically insignificant.

As an aside, there is actually a push right now to stop (or limit) using the phrase "statistically significant" altogether. Some relevant reading:

Moving to a World Beyond "p < 0.05"
The Difference Between "Significant" and "Not Significant" is Not Itself Statistically Significant
Scientists rise up against statistical significance

References

Wasserstein, Ronald L., Allen L. Schirm, and Nicole A. Lazar. "Moving to a world beyond 'p < 0.05'." The American Statistician 73.sup1 (2019): 1-19.
Gelman, Andrew, and Hal Stern. "The difference between 'significant' and 'not significant' is not itself statistically significant." The American Statistician 60.4 (2006): 328-331.
Amrhein, Valentin, Sander Greenland, and Blake McShane. "Scientists rise up against statistical significance." Nature 567 (2019): 305-307.
19,608
Is there a colloquial way of saying "small but significant"?
I would use something like, "We notice a difference that is extremely likely to be there and not a fluke. However, the difference has no practical meaning. They are effectively the same, like the distance to the sun versus the distance to the sun plus an inch." This acknowledges the statistical significance ("not a fluke") while conveying the lack of practical significance ("no practical meaning").
19,609
Is there a colloquial way of saying "small but significant"?
Referring to the tale of "The Princess and the Pea", you could say: "There seems to be a pea under all these mattresses." See also: https://en.wikipedia.org/wiki/The_Princess_and_the_Pea
19,610
What is the demonstration of the variance of the difference of two dependent variables?
When $X$ and $Y$ are dependent variables with covariance $\mathrm{Cov}[X,Y] = \mathrm{E}[(X-\mathrm{E}[X])(Y-\mathrm{E}[Y])]$, then the variance of their difference is given by $$ \mathrm{Var}[X-Y] = \mathrm{Var}[X] + \mathrm{Var}[Y] - 2 \mathrm{Cov}[X,Y] $$ This is mentioned among the basic properties of variance on http://en.wikipedia.org/wiki/Variance. If $X$ and $Y$ happen to be uncorrelated (which is a fortiori the case when they are independent), then their covariance is zero and we have $$ \mathrm{Var}[X-Y] = \mathrm{Var}[X] + \mathrm{Var}[Y] $$
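The identity is easy to check numerically; in fact it also holds exactly for sample moments, provided the variances and the covariance use the same normalization. A small sketch (assuming Python with numpy; the covariance matrix is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
# A correlated pair (X, Y): Var[X] = 2, Var[Y] = 3, Cov[X, Y] = 1.2
cov = np.array([[2.0, 1.2],
                [1.2, 3.0]])
x, y = rng.multivariate_normal([0, 0], cov, size=10_000).T

# Var[X - Y] versus Var[X] + Var[Y] - 2 Cov[X, Y], both with ddof=0
# so the sample-moment identity holds exactly (not just approximately).
lhs = np.var(x - y)
rhs = np.var(x) + np.var(y) - 2 * np.cov(x, y, ddof=0)[0, 1]
```

With matching `ddof`, `lhs` and `rhs` agree to floating-point precision, not merely up to sampling noise.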
19,611
What is the demonstration of the variance of the difference of two dependent variables?
Let $X$ and $Y$ be two random variables. We want to show that $Var[X-Y]=Var[X]+Var[Y]-2\times Cov[X,Y]$.

Let's define $Z:=-Y$, so we have $Var[X-Y]=Var[X+Z]=Var[X]+Var[Z]+2\times Cov[X,Z]$.

Now, $Var[Z] = Var[-Y] = Var[Y]$, since $Var[\alpha Y] = \alpha^2 Var[Y]\enspace \forall \alpha \in \mathbb{R}$. We also have $Cov[X,Z] = Cov[X,-Y] = -Cov[X,Y]$, because $Cov(X,\beta Y) = \beta\, Cov(X,Y)\enspace \forall \beta \in \mathbb{R}$.

Putting all the pieces together, we have $Var[X-Y]=Var[X]+Var[Y]-2\times Cov[X,Y]$.
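The two lemmas used here, $Var[-Y] = Var[Y]$ and $Cov[X,-Y] = -Cov[X,Y]$, also hold exactly for sample moments, which makes them easy to sanity-check numerically. A plain-Python sketch (the simulated data are an arbitrary illustration):

```python
import random
from statistics import variance

random.seed(0)
x = [random.gauss(0, 1) for _ in range(100)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]  # y is dependent on x

def cov(a, b):
    # Sample covariance with the usual n - 1 denominator
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

neg_y = [-yi for yi in y]
# Var[-Y] = Var[Y] and Cov[X, -Y] = -Cov[X, Y], exactly
v_ok = abs(variance(neg_y) - variance(y)) < 1e-9
c_ok = abs(cov(x, neg_y) + cov(x, y)) < 1e-9
```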
19,612
Does "correlation" also mean the slope in regression analysis?
First, he said he would run a regression analysis, then he showed us the analysis of variance. Why?

Analysis of variance (ANOVA) is just a technique comparing the variance explained by the model versus the variance not explained by the model. Since regression models have both explained and unexplained components, it's natural that ANOVA can be applied to them. In many software packages, ANOVA results are routinely reported with linear regression. Regression is also a very versatile technique; in fact, both the t-test and ANOVA can be expressed in regression form -- they are just special cases of regression.

For example, here is a sample regression output. The outcome is miles per gallon of some cars and the independent variable is whether the car was domestic or foreign:

      Source |       SS       df       MS              Number of obs =      74
-------------+------------------------------           F(  1,    72) =   13.18
       Model |  378.153515     1  378.153515           Prob > F      =  0.0005
    Residual |  2065.30594    72  28.6848048           R-squared     =  0.1548
-------------+------------------------------           Adj R-squared =  0.1430
       Total |  2443.45946    73  33.4720474           Root MSE      =  5.3558

------------------------------------------------------------------------------
         mpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   1.foreign |   4.945804   1.362162     3.63   0.001     2.230384    7.661225
       _cons |   19.82692   .7427186    26.70   0.000     18.34634    21.30751
------------------------------------------------------------------------------

You can see the ANOVA reported at the top left. The overall F-statistic is 13.18, with a p-value of 0.0005, indicating the model is predictive.

And here is the ANOVA output:

                  Number of obs =      74     R-squared     =  0.1548
                  Root MSE      = 5.35582     Adj R-squared =  0.1430

      Source |  Partial SS    df       MS           F     Prob > F
  -----------+----------------------------------------------------
       Model |  378.153515     1   378.153515      13.18     0.0005
             |
     foreign |  378.153515     1   378.153515      13.18     0.0005
             |
    Residual |  2065.30594    72   28.6848048
  -----------+----------------------------------------------------
       Total |  2443.45946    73   33.4720474

Notice that you can recover the same F-statistic and p-value there.

And then he wrote about the correlation coefficient, is that not from correlation analysis? Or could this word also be used to describe the regression slope?

Assuming the analysis involved only B and Y, technically I would not agree with the word choice. In most cases, the slope and the correlation coefficient cannot be used interchangeably. In one special case the two are the same: when both the independent and dependent variables are standardized (aka in the unit of z-scores). For example, let's correlate miles per gallon and the price of the car:

             |    price      mpg
-------------+------------------
       price |   1.0000
         mpg |  -0.4686   1.0000

And here is the same test using the standardized variables; you can see the correlation coefficient remains unchanged:

             |  sdprice    sdmpg
-------------+------------------
     sdprice |   1.0000
       sdmpg |  -0.4686   1.0000

Now, here are the two regression models using the original variables:

. reg mpg price

      Source |       SS       df       MS              Number of obs =      74
-------------+------------------------------           F(  1,    72) =   20.26
       Model |  536.541807     1  536.541807           Prob > F      =  0.0000
    Residual |  1906.91765    72  26.4849674           R-squared     =  0.2196
-------------+------------------------------           Adj R-squared =  0.2087
       Total |  2443.45946    73  33.4720474           Root MSE      =  5.1464

------------------------------------------------------------------------------
         mpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       price |  -.0009192   .0002042    -4.50   0.000    -.0013263   -.0005121
       _cons |   26.96417   1.393952    19.34   0.000     24.18538    29.74297
------------------------------------------------------------------------------

... and here is the one with standardized variables:

. reg sdmpg sdprice

      Source |       SS       df       MS              Number of obs =      74
-------------+------------------------------           F(  1,    72) =   20.26
       Model |  16.0295482     1  16.0295482           Prob > F      =  0.0000
    Residual |  56.9704514    72  .791256269           R-squared     =  0.2196
-------------+------------------------------           Adj R-squared =  0.2087
       Total |  72.9999996    73  .999999994           Root MSE      =  .88953

------------------------------------------------------------------------------
       sdmpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     sdprice |  -.4685967   .1041111    -4.50   0.000    -.6761384   -.2610549
       _cons |  -7.22e-09   .1034053    -0.00   1.000    -.2061347    .2061347
------------------------------------------------------------------------------

As you can see, the slope on the original variables is -0.0009192, while the slope on the standardized variables is -0.4686, which is also the correlation coefficient. So, unless A, B, C, and Y are standardized, I would not agree with the article's "correlating." Instead, I'd just say that a one-unit increase in B is associated with the average of Y being 0.27 higher. In more complicated situations, where more than one independent variable is involved, the phenomenon described above will no longer be true.
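The special case can be verified in a few lines (a sketch assuming Python with numpy; the synthetic data merely stand in for the auto dataset, and the OLS slope is computed as $Cov(x,y)/Var(x)$): the slope of the standardized regression coincides with the correlation coefficient, while the raw slope generally does not.

```python
import numpy as np

rng = np.random.default_rng(5)
price = rng.normal(6000, 2500, 74)               # illustrative data, not
mpg = 30 - 0.001 * price + rng.normal(0, 4, 74)  # the actual auto dataset

def slope(x, y):
    # OLS slope of y on x: Cov(x, y) / Var(x), same ddof throughout
    return np.cov(x, y, ddof=0)[0, 1] / np.var(x)

def standardize(v):
    return (v - v.mean()) / v.std()

raw_slope = slope(price, mpg)
std_slope = slope(standardize(price), standardize(mpg))
r = np.corrcoef(price, mpg)[0, 1]
# std_slope equals r to floating-point precision; raw_slope does not
```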
Does "correlation" also mean the slope in regression analysis?
First, he said he would run a regression analysis, then he showed us the analysis of variance. Why? The analysis of variance table is a summary of part of the information you can get from regression. (What you may think of as an analysis of variance is a special case of regression. In either case you can partition the sums of squares into components that can be used to test various hypotheses, and this is called an analysis of variance table.) And then he wrote about the correlation coefficient, is that not from correlation analysis? Or this word could also be used to describe regression slope? The correlation is not the same thing as regression slope, but the two are related. However, unless they left a word (or perhaps several words) out, the pairwise correlation of B with Y doesn't tell you directly about the significance of the slope in the multiple regression. In a simple regression, the two are directly related, and such a relationship does hold. In multiple regression partial correlations are related to slopes in the corresponding way.
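For simple regression, the direct relationship just mentioned is exact: the fitted slope equals the correlation times the ratio of standard deviations, slope = r * sd(y)/sd(x). A quick Python check on made-up numbers:

```python
from statistics import mean, stdev

x = [1.0, 2.0, 4.0, 5.0, 7.0]
y = [2.0, 3.0, 3.5, 6.0, 8.0]

mx, my = mean(x), mean(y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

b1 = sxy / sxx                 # OLS slope of y on x
r = sxy / (sxx * syy) ** 0.5   # Pearson correlation
# Identity for simple regression: b1 == r * sd(y) / sd(x)
```

In multiple regression this no longer holds for pairwise correlations; the analogous identities involve partial correlations instead.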
Does "correlation" also mean the slope in regression analysis?
I am providing code in R just as an example; you can just read the answers if you have no experience with R. I just want to make some cases with examples.

correlation vs regression

Simple linear correlation and regression with one Y and one X. The model: y = a + beta*X + error (residual). Let's say we have only two variables:

X = c(4,5,8,6,12,15)
Y = c(3,6,9,8,6, 18)
plot(X,Y, pch = 19)

On a scatter diagram, the closer the points lie to a straight line, the stronger the linear relationship between the two variables. Let's see the linear correlation.

cor(X,Y)
0.7828747

Now linear regression, pulling out the R-squared value.

reg1 <- lm(Y~X)
summary(reg1)$r.squared
0.6128929

Thus the coefficients of the model are:

reg1$coefficients
(Intercept)           X
  0.5357143   0.9357143

The beta for X is 0.9357143. Thus our model will be: Y = 0.5357143 + 0.9357143 * X

The square root of the R-squared value from the regression is the same as r from the linear correlation.

sqrt(summary(reg1)$r.squared)
[1] 0.7828747

Let's see the effect of scale on the regression slope and the correlation, using the same example and multiplying X by a constant, say 12.

X = c(4,5,8,6,12,15)
Y = c(3,6,9,8,6, 18)
X12 <- X*12
cor(X12,Y)
[1] 0.7828747

The correlation remains unchanged, as does the R-squared.

reg12 <- lm(Y~X12)
summary(reg12)$r.squared
[1] 0.6128929
reg12$coefficients
(Intercept)         X12
 0.53571429  0.07797619

You can see that the regression coefficients changed, but not the R-squared. Now another experiment: let's add a constant to X and see what effect this has.

X = c(4,5,8,6,12,15)
Y = c(3,6,9,8,6, 18)
X5 <- X+5
cor(X5,Y)
[1] 0.7828747

The correlation is still unchanged after adding 5. Let's see what effect this has on the regression coefficients.

reg5 <- lm(Y~X5)
summary(reg5)$r.squared
[1] 0.6128929
reg5$coefficients
(Intercept)          X5
 -4.1428571   0.9357143

The R-squared and the correlation are not affected by scale, but the intercept and slope are. So the slope is not the same as the correlation coefficient (unless the variables are standardized with mean 0 and variance 1).
what is ANOVA and why do we do ANOVA?

ANOVA is a technique where we compare variances to make decisions. The response variable (called Y) is a quantitative variable, while X can be quantitative or qualitative (a factor with different levels). Both X and Y can be one or more in number. Usually we speak of ANOVA for qualitative variables; ANOVA in the regression context is less often discussed. Maybe this is the cause of your confusion. The null hypothesis with a qualitative variable (a factor, e.g. groups) is that the group means are equal, while in regression analysis we test whether the slope of the line is significantly different from 0.

Let's see an example where we can do both regression analysis and qualitative-factor ANOVA, as both X and Y are quantitative, but we can treat X as a factor.

X1 <- rep(1:5, each = 5)
Y1 <- c(12,14,18,12,14, 21,22,23,24,18, 25,23,20,25,26, 29,29,28,30,25, 29,30,32,28,27)
myd <- data.frame (X1,Y1)

The data look as follows.

   X1 Y1
1   1 12
2   1 14
3   1 18
4   1 12
5   1 14
6   2 21
7   2 22
8   2 23
9   2 24
10  2 18
11  3 25
12  3 23
13  3 20
14  3 25
15  3 26
16  4 29
17  4 29
18  4 28
19  4 30
20  4 25
21  5 29
22  5 30
23  5 32
24  5 28
25  5 27

Now we do both regression and ANOVA. First regression:

reg <- lm(Y1~X1, data=myd)
anova(reg)

Analysis of Variance Table

Response: Y1
          Df Sum Sq Mean Sq F value    Pr(>F)
X1         1 684.50  684.50   101.4 6.703e-10 ***
Residuals 23 155.26    6.75
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

reg$coefficients
(Intercept)          X1
      12.26        3.70

Now conventional ANOVA (mean ANOVA for a factor/qualitative variable), by converting X1 to a factor.

myd$X1f <- as.factor (myd$X1)
regf <- lm(Y1~X1f, data=myd)
anova(regf)

Analysis of Variance Table

Response: Y1
          Df Sum Sq Mean Sq F value    Pr(>F)
X1f        4 742.16  185.54   38.02 4.424e-09 ***
Residuals 20  97.60    4.88
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

You can see that the Df for X1f has changed: it is 4 instead of 1 as in the case above.
In contrast to ANOVA for qualitative variables, in the context of quantitative variables where we do regression analysis, Analysis of Variance (ANOVA) consists of calculations that provide information about levels of variability within a regression model and form a basis for tests of significance. Basically, ANOVA tests the null hypothesis beta = 0 (with the alternative hypothesis that beta is not equal to 0). Here we do an F test, which is the ratio of the variability explained by the model vs the error (residual) variance. The model variance comes from the amount explained by the line you fit, while the residual comes from the values not explained by the model. A significant F means that the beta value is not equal to zero, i.e., that there is a significant relationship between the two variables.

> anova(reg1)
Analysis of Variance Table

Response: Y
          Df Sum Sq Mean Sq F value Pr(>F)
X          1 81.719  81.719  6.3331 0.0656 .
Residuals  4 51.614  12.904
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Here we see a high correlation (high R-squared) but still a non-significant result. Sometimes you may get a result where a low correlation is still significant. The reason for the non-significant relation in this case is that we do not have enough data (n = 6, residual df = 4), so the F should be compared to an F distribution with 1 numerator df and 4 denominator df. So in this case we could not rule out that the slope is equal to 0.

Let's see another example:

X = c(4,5,8,6,2, 5,6,4,2,3, 8,2,5,6,3, 8,9,3,5,10)
Y = c(3,6,9,8,6, 8,6,8,10,5, 3,3,2,4,3, 11,12,4,2,14)
reg3 <- lm(Y~X)
anova(reg3)

Analysis of Variance Table

Response: Y
          Df  Sum Sq Mean Sq F value  Pr(>F)
X          1  69.009  69.009   7.414 0.01396 *
Residuals 18 167.541   9.308
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The R-squared value for this new data:

summary(reg3)$r.squared
[1] 0.2917296
cor(X,Y)
[1] 0.54012

Although the correlation is lower than in the previous case, we got a significant slope.
More data increase the df and provide enough information so that we can reject the null hypothesis that the slope is equal to zero. Let's take another example where there is a negative correlation:

X1 = c(4,5,8,6,12,15)
Y1 = c(18,16,2,4,2, 8)
# correlation
cor(X1,Y1)
-0.5266847
# r-square using regression
reg2 <- lm(Y1~X1)
summary(reg2)$r.squared
0.2773967
sqrt(summary(reg2)$r.squared)
[1] 0.5266847

As the values were squared, the square root will not tell us whether the relationship is positive or negative here. But the magnitude is the same.

Multiple regression case: Multiple linear regression attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data. The above discussion can be extended to the multiple regression case. In this case we have multiple betas in the model:

y = a + beta1*X1 + beta2*X2 + beta3*X3 + ... + betap*Xp + error

Example:

X1 = c(4,5,8,6,2, 5,6,4,2,3, 8,2,5,6,3, 8,9,3,5,10)
X2 = c(14,15,8,16,2, 15,3,2,4,7, 9,12,5,6,3, 12,19,13,15,20)
Y = c(3,6,9,8,6, 8,6,8,10,5, 3,3,2,4,3, 11,12,4,2,14)
reg4 <- lm(Y~X1+X2)

Let's see the coefficients of the model:

reg4$coefficients
(Intercept)          X1          X2
 2.04055116  0.72169350  0.05566427

Thus your multiple linear regression model would be: Y = 2.04055116 + 0.72169350 * X1 + 0.05566427 * X2

Now let's test whether the betas for X1 and X2 are different from 0.

anova(reg4)
Analysis of Variance Table

Response: Y
          Df  Sum Sq Mean Sq F value  Pr(>F)
X1         1  69.009  69.009  7.0655 0.01656 *
X2         1   1.504   1.504  0.1540 0.69965
Residuals 17 166.038   9.767
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Here we conclude that the slope of X1 is different from 0, while we could not rule out that the slope of X2 is 0. Please note that the slope is not the correlation between X1 and Y or between X2 and Y.

> cor(Y, X1)
[1] 0.54012
> cor(Y,X2)
[1] 0.3361571

In the multivariate situation (where there are more than two variables), partial correlation comes into play.
Partial correlation is the correlation of two variables while controlling for a third or more other variables.

source("http://www.yilab.gatech.edu/pcor.R")
pcor.test(X1, Y, X2)
   estimate    p.value statistic  n gn  Method             Use
1 0.4567979 0.03424027  2.117231 20  1 Pearson  Var-Cov matrix

pcor.test(X2, Y, X1)
    estimate   p.value statistic  n gn  Method             Use
1 0.09473812 0.6947774 0.3923801 20  1 Pearson  Var-Cov matrix
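The overall F test discussed above is a function of R-squared and the degrees of freedom alone. As a hedged Python sketch (just the formula, not a re-run of R's anova), for k predictors F = (R²/k) / ((1 - R²)/(n - k - 1)); it reproduces the F values reported for reg1 and reg3 above:

```python
def f_from_r2(r2, n, k=1):
    """Overall ANOVA F statistic from R-squared, n observations, k predictors."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

f_reg1 = f_from_r2(0.6128929, n=6)    # close to the 6.3331 printed by anova(reg1)
f_reg3 = f_from_r2(0.2917296, n=20)   # close to the 7.414 printed by anova(reg3)
```

This also makes the small-sample point concrete: the same R-squared yields a much smaller F when n (and hence the denominator df) is small.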
Does "correlation" also mean the slope in regression analysis?
Analysis of variance (ANOVA) and regression are actually very similar (some would say they are the same thing). In Analysis of variance, typically you have some categories (groups) and a quantitative response variable. You calculate the amount of overall error, the amount of error within a group and the amount of error between groups. In regression, you don't necessarily have groups anymore, but you can still partition the amount of error into an overall error, the amount of error explained by your regression model and error unexplained by your regression model. Regression models are often displayed using ANOVA tables and it's an easy way of seeing how much variation is explained by your model.
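The sum-of-squares partition described above can be sketched in a few lines of Python (toy data and a hand-rolled OLS fit, not any particular package):

```python
from statistics import mean

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

# Fit the least-squares line y = a0 + b*x
mx, my = mean(x), mean(y)
b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
a0 = my - b * mx
fitted = [a0 + b * xi for xi in x]

sst = sum((yi - my) ** 2 for yi in y)                    # total variation
ssr = sum((fi - my) ** 2 for fi in fitted)               # explained by the model
sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))   # unexplained (residual)
r_squared = ssr / sst   # share of variation explained; sst == ssr + sse
```

An ANOVA table for the regression is essentially these three sums of squares with their degrees of freedom and the resulting F ratio.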
What's the order of correlation?
Here is a nice resource for understanding these issues. It's excellent; you should read it thoroughly. However, I will give a quick introduction. Imagine you have 3 variables, $x$, $y$ and $z$. You are primarily interested in the relationship between $x$ and $y$, but you know that $y$ is also related to $z$, and that unfortunately, $z$ is confounded with $x$. If you simply wanted to know the strength of the relationship, Pearson's product-moment correlation coefficient $r$ is a useful effect size measure. In this situation, you could simply ignore $z$ and compute the correlation between $x$ and $y$ (this is not really a good idea, as the value would be a biased estimate of the direct correlation). Since you have controlled for nothing, this is a 'zero-order' correlation. You might opt instead for a more conscientious approach and control for the confounding with $z$, by partialling out $z$. (One conceptually clear way to do this, albeit not computationally optimal, is to regress $y$ onto $z$, and $x$ onto $z$, and then compute the correlation between the residuals of the two models.) Because you have controlled for one variable, this would be a 'first-order' partial correlation. Another possibility is to partial $z$ out of only one variable, say $y$. For example, you could regress $y$ onto $z$ and correlate those residuals with $x$. This would be a 'first-order' semi-partial (or part) correlation*. I have never seen such a thing in practice, but if you partialled out 17 other variables, you would have a 'seventeenth-order' partial correlation. The linked website is very informative, with examples, multiple formulas and diagrams; go read it. To be technical, there isn't really any such thing as a 'first-order' correlation, nor is there such a thing as a 'zero-order' partial or semi-partial correlation. There are only 'zero-order' correlations, and only 'first-', 'second-', etc., 'order' partial and semi-partial correlations. 
* Regarding why you might use a partial vs. semi-partial correlation, it depends on the question you want to answer. Often, it may have to do with the pattern of causal connections that people believe creates the pattern of correlations that are seen. For example, a 'first-order' partial correlation between $x$ and $y$ controlling for $z$ of $0$ (i.e., $r_{xy|z}=0$) is consistent with the idea that both $x$ and $y$ are effects of $z$ with no direct connection between them. Likewise, someone might want to show that $y$ is correlated with $x$ even after controlling for $z$. Part of what is going on in a Structural Equations Model can be understood as partial and semi-partial correlations.
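The residual recipe sketched above (regress each variable on z, then correlate the residuals) is easy to write out; a Python illustration with made-up data:

```python
from statistics import mean

def resid(v, z):
    """Residuals from a simple regression of v on z."""
    mz, mv = mean(z), mean(v)
    b = sum((a - mz) * (c - mv) for a, c in zip(z, v)) / sum((a - mz) ** 2 for a in z)
    a0 = mv - b * mz
    return [c - (a0 + b * a) for c, a in zip(v, z)]

def corr(u, v):
    """Pearson correlation."""
    mu, mv = mean(u), mean(v)
    suv = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    svv = sum((c - mv) ** 2 for c in v)
    return suv / (suu * svv) ** 0.5

x = [2.0, 4.0, 5.0, 7.0, 9.0, 10.0]
y = [1.0, 3.0, 6.0, 6.0, 8.0, 11.0]
z = [1.0, 2.0, 2.0, 4.0, 5.0, 5.0]

r_partial = corr(resid(x, z), resid(y, z))  # first-order partial: z out of both
r_semipartial = corr(x, resid(y, z))        # first-order semi-partial: z out of y only
```

Partialling out further variables (for higher-order coefficients) works the same way, with multiple regressions in place of the simple ones.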
How to use CDF and PDF statistics for analysis
It's partly a matter of taste and convention, but theory, attention to your objectives, and a smidgen of cognitive neuroscience [see the references] can provide some guidance.

Because a pdf and a cdf convey the same information, the distinction between them arises from how they do it: a pdf represents probability with areas while a cdf represents probability with (vertical) distances. Studies show that people compare distances faster and more accurately than they compare areas, and that they systematically mis-estimate areas. Thus, if your purpose is to provide a graphical tool for reading off probabilities, you should favor using a cdf.

Pdfs and cdfs also represent probability density: the former does so by means of height while the latter represents density by slope. Now the tables are turned, because people are poor estimators of slope (which is the tangent of an angle; we tend to see the angle itself). Densities are good at conveying information about modes, heaviness of tails, and gaps. Favor using pdfs in such situations and anywhere else where local details of the probability distribution need to be emphasized.

Sometimes a pdf or cdf provides useful theoretical information. Its value (or rather the inverse thereof) is involved in formulas for standard errors for quantiles, extremes, and rank statistics. Display a pdf rather than a cdf in such situations.

When studying multivariate correlations in a nonparametric setting, such as with copulas, the cdf turns out to be more useful (perhaps because it is the function that transforms a continuous probability law into a uniform one).

A pdf or cdf can be intimately associated with a particular statistical test. The Kolmogorov-Smirnov test (and the KS statistic) has a simple graphical representation in terms of a vertical buffer around the cdf; it has no simple graphical representation in terms of the pdf (that I know of).
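The point about reading probabilities as vertical distances is easiest to see with an empirical cdf, where the height of the step function at x is exactly the fraction of observations at or below x. A small Python sketch with made-up data:

```python
def ecdf(sample):
    """Return the empirical cdf F of a sample: F(x) = fraction of values <= x."""
    s = sorted(sample)
    n = len(s)
    def F(x):
        return sum(1 for v in s if v <= x) / n
    return F

data = [1.2, 0.7, 3.4, 2.2, 1.9, 0.3, 2.8, 1.1]
F = ecdf(data)
p = F(2.0)   # P(X <= 2.0), read straight off the cdf's height at 2.0
```

To recover the same probability from a density you would have to integrate (i.e., estimate an area), which is exactly the comparison the perceptual results above bear on.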
The ccdf (complementary cdf) is used in special applications that focus on survivorship and rare events. Its use tends to be established by convention. References W.S. Cleveland (1994). The Elements of Graphing Data. Summit, NJ, USA: Hobart Press. ISBN 0-9634884-1-4 B.D. Dent (1999). Cartography: Thematic Map Design 5th Ed. Boston, MA, USA: WCB McGraw-Hill. A.M. MacEachren (2004). How Maps Work. New York, NY, USA: The Guilford Press. ISBN 1-57230-040-X
19,618
How to use CDF and PDF statistics for analysis
I agree with whuber's answer, but have one additional minor point: The CDF has a simple non-parametric estimator that needs no choices to be made: the empirical distribution function. It's not quite so simple to estimate a PDF. If you use a histogram you need to choose the bin width and the starting point for the first bin. If you use kernel density estimation you need to choose the kernel shape and bandwidth. A suspicious or cynical reader may wonder if you really chose these entirely a priori or if you tried a few different values and chose the ones that gave the result you most liked. This is only a minor point though. The ones whuber made are more important, so I'd probably only use this to choose when I was still undecided after considering those.
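A minimal sketch of the point about the ECDF needing no choices (the helper is illustrative, Python standard library only):

```python
def ecdf(sample):
    """Empirical distribution function: no bandwidth, no bins, no choices.
    Returns a step function F_n(x) = (# observations <= x) / n."""
    xs = sorted(sample)
    n = len(xs)
    def F(x):
        # count of observations <= x (linear scan for clarity)
        return sum(1 for v in xs if v <= x) / n
    return F

F = ecdf([3.1, 1.2, 5.4, 2.2, 4.0])
assert F(0.0) == 0.0
assert F(2.2) == 0.4   # two of five observations are <= 2.2
assert F(10.0) == 1.0
```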
19,619
How to use CDF and PDF statistics for analysis
I guess it depends on what statistics or findings you are going to find out, research, study, or report. I'm assuming you will probably be using these graphs to represent findings for your university topic, right? Like for example, if you want to present your finding about, say, 'How long users stay on a certain website', it may be good to show it in CDF form as it shows the accumulated time spent on that website, through the pages etc. On the other hand, if you want to simply show the probability of users clicking on an advert link (e.g. a Google AdWords link) then you may want to present it in PDF form as it will probably be a normal distribution bell curve and you can show the probability of that happening. Hope this helps, Jeff
19,620
How to detect when a regression model is over-fit?
Cross validation is a fairly common way to detect overfitting, while regularization is a technique to prevent it. For a quick take, I'd recommend Andrew Moore's tutorial slides on the use of cross-validation (mirror) -- pay particular attention to the caveats. For more detail, definitely read chapters 3 and 7 of ESL (The Elements of Statistical Learning), which cover the topic and associated matter in good depth.
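To make the cross-validation idea concrete, here is a rough self-contained sketch (no external libraries; the data and degrees are purely illustrative): training error keeps falling as the polynomial degree grows, while leave-one-out CV error exposes the overfit.

```python
import random

def polyfit(xs, ys, d):
    """Least-squares polynomial fit of degree d via the normal equations
    (fine for the tiny degrees used here)."""
    n = d + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for c in range(n):                      # Gaussian elimination, partial pivoting
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    coef = [0.0] * n
    for c in reversed(range(n)):            # back substitution
        coef[c] = (b[c] - sum(A[c][k] * coef[k] for k in range(c + 1, n))) / A[c][c]
    return coef

def mse(xs, ys, coef):
    preds = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

def loo_cv_mse(xs, ys, d):
    """Leave-one-out cross-validation: fit on n-1 points, score the held-out one."""
    errs = []
    for i in range(len(xs)):
        coef = polyfit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], d)
        pred = sum(c * xs[i] ** k for k, c in enumerate(coef))
        errs.append((pred - ys[i]) ** 2)
    return sum(errs) / len(errs)

random.seed(0)
xs = [i / 10 for i in range(12)]
ys = [2 * x + random.gauss(0, 0.3) for x in xs]     # the truth is linear

# Training error always improves with a bigger model, but CV exposes the overfit:
assert mse(xs, ys, polyfit(xs, ys, 6)) < mse(xs, ys, polyfit(xs, ys, 1))
assert loo_cv_mse(xs, ys, 1) < loo_cv_mse(xs, ys, 6)
```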
19,621
How to detect when a regression model is over-fit?
When I'm fitting a model myself I generally use information criteria during the fitting process, such as AIC or BIC, or alternatively likelihood-ratio tests for models fit based on maximum likelihood or an F-test for models fit based on least squares. All are conceptually similar in that they penalise additional parameters. They set a threshold of "additional explanatory power" for each new parameter added to a model. They are all a form of regularisation. For others' models I look at the methods section to see if such techniques are used and also use rules of thumb, such as the number of observations per parameter - if there are around 5 (or fewer) observations per parameter I start to wonder. Always remember that a variable need not be "significant" in a model to be important. It may be a confounder and should be included on that basis if your goal is to estimate the effect of other variables.
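A toy illustration of how the criteria penalise parameters (the log-likelihood numbers are made up for the example):

```python
import math

def aic(loglik, k):
    # Akaike information criterion: -2 log L + 2k
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    # Bayesian information criterion: -2 log L + k log n
    return -2 * loglik + k * math.log(n)

# Hypothetical fits: the bigger model gains only 1.5 in log-likelihood
# for 3 extra parameters, on n = 100 observations.
n = 100
small = {"loglik": -250.0, "k": 3}
big = {"loglik": -248.5, "k": 6}

# Raw likelihood always prefers the big model...
assert big["loglik"] > small["loglik"]
# ...but both criteria charge for the extra parameters and keep the small one.
assert aic(small["loglik"], small["k"]) < aic(big["loglik"], big["k"])
assert bic(small["loglik"], small["k"], n) < bic(big["loglik"], big["k"], n)
```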
19,622
How to detect when a regression model is over-fit?
I would suggest that this is a problem with how the results are reported. Not to "beat the Bayesian drum" but approaching model uncertainty from a Bayesian perspective as an inference problem would greatly help here. And it doesn't have to be a big change either. If the report simply contained the probability that the model is true this would be very helpful. This is an easy quantity to approximate using BIC. Call the BIC for the mth model $BIC_{m}$. Then the probability that the mth model is the "true" model, given that $M$ models were fit (and that one of the models is true) is given by: $$P(\text{model m is true}|\text{one of the M models is true})\approx\frac{w_{m}\exp\left(-\frac{1}{2}BIC_{m}\right)}{\sum_{j=1}^{M}w_{j}\exp\left(-\frac{1}{2}BIC_{j}\right)}$$ $$=\frac{1}{1+\sum_{j\neq m}^{M}\frac{w_{j}}{w_{m}}\exp\left(-\frac{1}{2}(BIC_{j}-BIC_{m})\right)}$$ Where $w_{j}$ is proportional to the prior probability for the jth model. Note that this includes a "penalty" for trying too many models - and the penalty depends on how well the other models fit the data. Usually you will set $w_{j}=1$, however, you may have some "theoretical" models within your class that you would expect to be better prior to seeing any data. Now if somebody else doesn't report all the BICs from all the models, then I would attempt to infer the above quantity from what you have been given. Suppose you are given the BIC from the model - note that BIC is calculable from the mean square error of the regression model, so you can always get BIC for the reported model. Now if we take the basic premise that the final model was chosen as the one with the smallest BIC then we have $BIC_{final}<BIC_{j}$. Now, suppose you were told that "forward" or "forward stepwise" model selection was used, starting from the intercept using $p$ potential variables. 
If the final model is of dimension $d$, then the procedure must have tried at least $$M\geq 1+p+(p-1)+\dots+(p-d+1)=1+\frac{p(p-1)-(p-d)(p-d-1)}{2}$$ different models (exact for forward selection). If backwards selection was used, then we know that at least $$M\geq 1+p+(p-1)+\dots+(d+1)=1+\frac{p(p-1)-d(d-1)}{2}$$ models were tried (the +1 comes from the null model or the full model). Now we could try and be more specific, but these are "minimal" parameters which a standard model selection must satisfy. We could specify a probability model for the number of models tried $M$ and the sizes of the $BIC_{j}$ - but simply plugging in some values may be useful here anyway. For example, suppose that all the BICs were $\lambda$ bigger than the one of the model chosen, so that $BIC_{m}=BIC_{j}-\lambda$; then the probability becomes: $$\frac{1}{1+(M-1)\exp\left(-\frac{\lambda}{2}\right)}$$ So what this means is that unless $\lambda$ is large or $M$ is small, the probability will be small also. From an "over-fitting" perspective, this would occur when the BIC for the bigger model is not much bigger than the BIC for the smaller model - a non-negligible term appears in the denominator. Plugging in the backward selection formula for $M$ we get: $$\frac{1}{1+\frac{p(p-1)-d(d-1)}{2}\exp\left(-\frac{\lambda}{2}\right)}$$ Now suppose we invert the problem. Say $p=50$ and the backward selection gave $d=20$ variables; what would $\lambda$ have to be to make the probability of the model greater than some value $P_{0}$? We have $$\lambda > -2 \log\left(\frac{2(1-P_{0})}{P_{0}[p(p-1)-d(d-1)]}\right)$$ Setting $P_{0}=0.9$ we get $\lambda > 18.28$ - so the BIC of the winning model has to win by a lot for the model to be certain.
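A quick numerical sketch of the formulas above (the BIC values are illustrative; `model_probs` is a hypothetical helper):

```python
import math

def model_probs(bics, weights=None):
    """Approximate posterior model probabilities from BIC values
    (equal prior weights w_j = 1 unless given)."""
    if weights is None:
        weights = [1.0] * len(bics)
    b0 = min(bics)                      # subtract the minimum for numerical stability
    terms = [w * math.exp(-0.5 * (b - b0)) for w, b in zip(weights, bics)]
    s = sum(terms)
    return [t / s for t in terms]

# A winner plus three rivals that are each lambda = 18.28 worse in BIC:
probs = model_probs([100.0, 118.28, 118.28, 118.28])
assert probs[0] > 0.99      # at that gap, with few rivals, the winner is near-certain

# The lambda threshold from the inversion at the end of the answer,
# with p = 50 candidate variables and d = 20 kept by backward selection:
p, d, P0 = 50, 20, 0.9
lam = -2 * math.log(2 * (1 - P0) / (P0 * (p * (p - 1) - d * (d - 1))))
assert abs(lam - 18.28) < 0.01
```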
19,623
How to interpret KL divergence quantitatively?
Suppose you are given n IID samples generated by either p or by q. You want to identify which distribution generated them. Take as null hypothesis that they were generated by q. Let a indicate the probability of Type I error, mistakenly rejecting the null hypothesis, and b indicate the probability of Type II error. Then for large n, the probability of Type I error is at least $\exp(-n \text{KL}(p,q))$ In other words, for an "optimal" decision procedure, the probability of Type I error falls at most by a factor of $\exp(\text{KL}(p,q))$ with each datapoint. Type II error falls by a factor of $\exp(\text{KL}(q,p))$ at most. For arbitrary n, a and b are related as follows $b \log \frac{b}{1-a}+(1-b)\log \frac{1-b}{a} \le n \text{KL}(p,q)$ and $a \log \frac{a}{1-b}+(1-a)\log \frac{1-a}{b} \le n \text{KL}(q,p)$ If we express the bound above as the lower bound on a in terms of b and KL and decrease b to 0, the result seems to approach the "$\exp(-n \text{KL}(q,p))$" bound even for small n. More details on page 10 here, and pages 74-77 of Kullback's "Information Theory and Statistics" (1978). As a side note, this interpretation can be used to motivate the Fisher Information metric, since for any pair of distributions p, q at Fisher's distance k from each other (small k) you need the same number of observations to tell them apart.
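As a small numerical check of the inequalities relating the two error probabilities to $n\,\text{KL}$, here is a sketch for Bernoulli distributions and a simple count-threshold test (the values are illustrative; only the standard library is used):

```python
import math

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def binom_tail(n, theta, k):
    """P(X >= k) for X ~ Binomial(n, theta), computed exactly."""
    return sum(math.comb(n, j) * theta ** j * (1 - theta) ** (n - j)
               for j in range(k, n + 1))

p, q, n, k = 0.7, 0.5, 40, 25    # reject H0: Bern(q) when we see >= k successes
a = binom_tail(n, q, k)          # Type I error: data from q, yet we reject
b = 1 - binom_tail(n, p, k)      # Type II error: data from p, yet we accept

# The answer's two inequalities hold for this (or any) test:
lhs1 = b * math.log(b / (1 - a)) + (1 - b) * math.log((1 - b) / a)
assert lhs1 <= n * kl_bern(p, q)
lhs2 = a * math.log(a / (1 - b)) + (1 - a) * math.log((1 - a) / b)
assert lhs2 <= n * kl_bern(q, p)
```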
19,624
How to interpret KL divergence quantitatively?
KL has a deep meaning when you visualize a set of densities as a manifold with the Fisher metric tensor; it gives the geodesic distance between two "close" distributions. Formally: $ds^2=2KL(p(x, \theta ),p(x,\theta + d \theta))$ The following lines are here to explain in detail what is meant by this last mathematical formula. Definition of the Fisher metric. Consider a parametrized family of probability distributions $D=(f(x, \theta ))$ (given by densities in $R^n$), where $x$ is a random variable and $\theta$ is a parameter in $R^p$. You may all know that the Fisher information matrix $F=(F_{ij})$ is $F_{ij}=E[d(\log f(x,\theta))/d \theta_i d(\log f(x,\theta))/d \theta_j]$ With this notation $D$ is a Riemannian manifold and $F(\theta)$ is a Riemannian metric tensor. (The interest of this metric is given by the Cramer-Rao lower bound theorem.) You may say ... OK, mathematical abstraction, but where is KL? It is not mathematical abstraction: if $p=1$ you can really imagine your parametrized density as a curve (instead of a subset of a space of infinite dimension) and $F_{11}$ is connected to the curvature of that curve... 
(see the seminal paper of Bradley Efron: Defining the Curvature of a Statistical Problem (with Applications to Second Order Efficiency)) The geometric answer to part of point a/ in your question: the squared distance $ds^2$ between two (close) distributions $p(x,\theta)$ and $p(x,\theta+d \theta)$ on the manifold (think of the geodesic distance on earth between two points that are close; it is related to the curvature of the earth) is given by the quadratic form: $ds^2= \sum F_{ij} d \theta^i d \theta^j$ and it is known to be twice the Kullback-Leibler divergence: $ds^2=2KL(p(x, \theta ),p(x,\theta + d \theta))$ If you want to learn more about that I suggest reading the paper by Amari: Differential Geometry of Curved Exponential Families - Curvatures and Information Loss (I think there is also a book by Amari about Riemannian geometry in statistics but I don't remember the name)
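A tiny sanity check of $ds^2 = 2\,KL$ for the Gaussian location family, where the Fisher information for the mean is $1/\sigma^2$ (for this particular family the relation is exact, not just to second order):

```python
def kl_gauss_mean(mu1, mu2, sigma):
    """KL between N(mu1, sigma^2) and N(mu2, sigma^2), in nats."""
    return (mu1 - mu2) ** 2 / (2 * sigma ** 2)

# Fisher information for the mean of N(mu, sigma^2) is F = 1 / sigma^2,
# so the squared line element is ds^2 = F * dtheta^2 = dtheta^2 / sigma^2 ...
sigma, dtheta = 2.0, 1e-3
ds2 = dtheta ** 2 / sigma ** 2
# ... and it matches twice the KL divergence between the two nearby densities:
assert abs(2 * kl_gauss_mean(0.0, dtheta, sigma) - ds2) < 1e-15
```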
19,625
How to interpret KL divergence quantitatively?
The KL(p,q) divergence between distributions p(.) and q(.) has an intuitive information theoretic interpretation which you may find useful. Suppose we observe data x generated by some probability distribution p(.). A lower bound on the average codelength in bits required to state the data generated by p(.) is given by the entropy of p(.). Now, since we don't know p(.) we choose another distribution, say, q(.) to encode (or describe, state) the data. The average codelength of data generated by p(.) and encoded using q(.) will necessarily be longer than if the true distribution p(.) was used for the coding. The KL divergence tells us about the inefficiencies of this alternative code. In other words, the KL divergence between p(.) and q(.) is the average number of extra bits required to encode data generated by p(.) using coding distribution q(.). The KL divergence is non-negative and equal to zero iff the actual data generating distribution is used to encode the data.
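This coding interpretation is easy to verify numerically: KL equals cross-entropy minus entropy, i.e. the extra bits per symbol paid for using the wrong code (the distributions below are illustrative):

```python
import math

def entropy_bits(p):
    """Optimal average code length for data from p, in bits per symbol."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy_bits(p, q):
    """Average code length when data from p are encoded with a code built for q."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_bits(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.25, 0.5]

# KL(p, q) is exactly the coding overhead of the q-based code:
overhead = cross_entropy_bits(p, q) - entropy_bits(p)
assert abs(kl_bits(p, q) - overhead) < 1e-12
# ...and it vanishes iff the true generating distribution is used:
assert kl_bits(p, p) == 0.0
```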
19,626
How to interpret KL divergence quantitatively?
For part (b) of your question, you might be running into the problem that one of your distributions has density in a region where the other does not. $$ D( P \Vert Q ) = \sum p_i \ln \frac{p_i}{q_i} $$ This diverges if there exists an $i$ where $p_i>0$ and $q_i=0$. The numerical epsilon in the R implementation "saves you" from this problem; but it means that the resulting value is dependent on this parameter (technically $q_i=0$ is not required, just that $q_i$ is less than the numerical epsilon).
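A short sketch of how the epsilon floor changes the answer (the `kl` helper is hypothetical, mimicking an implementation that floors zero cells at epsilon):

```python
import math

def kl(p, q, eps=None):
    """Discrete KL; if eps is given, zero q-cells are floored at eps,
    mimicking what a numerical implementation does."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue
        if qi <= 0:
            if eps is None:
                return math.inf   # p puts mass where q has none: KL diverges
            qi = eps
        total += pi * math.log(pi / qi)
    return total

p = [0.5, 0.5, 0.0]
q = [0.5, 0.0, 0.5]   # q has no mass on the second cell

assert kl(p, q) == math.inf
# With flooring, the finite value you get is an artifact of the epsilon chosen:
assert kl(p, q, eps=1e-4) > kl(p, q, eps=1e-2)
```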
19,627
Inverse transform sampling - CDF is not invertible
In low dimensions a good alternative is to use rejection sampling from the pdf $f_X$ (in high dimensions this becomes very inefficient). Say $f_X$ is your pdf for some random variable $X$, which you want to sample from in the interval $I=[x_\mathrm{min}, x_\mathrm{max}]$. Then you can draw samples $x_i$ uniformly from $I$ and accept/reject them with acceptance probability proportional to $f_X(x_i)$, i.e. you draw another uniformly distributed random number $u_i\in[0, \max_{x \in I} f_X(x)]$ and if $u_i \lt f_X(x_i)$ you accept that sample point, otherwise you reject it.
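A minimal sketch of this procedure in Python (the triangular density is just an example; `fmax` stands for the bound $\max_{x\in I} f_X(x)$):

```python
import random

def rejection_sample(pdf, lo, hi, fmax, n, rng=random):
    """Draw n samples from pdf on [lo, hi] by accept/reject:
    propose x uniformly on [lo, hi], accept it when a uniform height
    under the bounding box falls below pdf(x)."""
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi)
        u = rng.uniform(0.0, fmax)   # uniform height in [0, fmax]
        if u < pdf(x):
            out.append(x)
    return out

# Example: a triangular density f(x) = 2x on [0, 1]; its maximum is f(1) = 2.
random.seed(1)
xs = rejection_sample(lambda x: 2 * x, 0.0, 1.0, 2.0, 20000)

# The triangular law has mean 2/3; the sample mean should be close.
mean = sum(xs) / len(xs)
assert abs(mean - 2 / 3) < 0.01
```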
19,628
Inverse transform sampling - CDF is not invertible
The inverse cdf method operates even when the cdf is not invertible, using the generalised inverse$$F^-(u)=\sup\{x;\ F(x)\le u\}\tag{1}$$ which is always defined for $u\in(0,1)$. When there is no solution in $x$ to the equation $$F(x)=u$$ it means that $F$ has a jump between a value less than $u$, $u-\epsilon$, and a value more than $u$, $u+\eta$. Hence the distribution has a point mass with mass $\eta+\epsilon$ at a point $x_0$, with $$F(x_0^{-})=u-\epsilon\quad\text{and}\quad F(x_0)=u+\eta$$ In that case (1) leads to$$F^-(u)=x_0$$ Similarly, if the equation $$F(x)=u$$ has an infinite number of solutions, say $x\in [a,b)$, it means that the cdf is constant over this interval and hence that all values in $(a,b)$ have zero probability to occur. In that case, $$F^-(u)=b$$ [which is of course a convention, since the event $U=F(a)$ has probability zero of occurring].
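For a discrete distribution (a cdf made entirely of jumps) the generalised inverse reduces to picking the support point where the cdf first reaches $u$. A Python sketch with made-up point masses:

```python
import bisect
import random

values = [0, 1, 2]          # support points
probs = [0.25, 0.5, 0.25]   # point masses, so F has a jump at each value
cdf = []
acc = 0.0
for pr in probs:
    acc += pr
    cdf.append(acc)

def gen_inverse(u):
    # At the jump points of a discrete cdf this agrees with the
    # generalised inverse (1): return the value where F first reaches u.
    return values[bisect.bisect_left(cdf, u)]

random.seed(1)
draws = [gen_inverse(random.random()) for _ in range(100_000)]
print(draws.count(1) / len(draws))  # close to 0.5, the mass at 1
```

Even though $F(x)=u$ has no solution for most $u$ here, the generalised inverse still maps uniform draws to the correct masses.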
19,629
Inverse transform sampling - CDF is not invertible
Methods can be quite different depending on the distribution you want to simulate. There are many good books on simulation methods and lots of info about simulation on the Internet. Methods often exploit relationships among distributions (such as Poisson, exponential, gamma, chi-squared, F, beta---see discussion in comments). Ultimately, almost all current computer simulation uses standard uniform output from a pseudo-random generator. Sometimes, methods depend on the current state of technology. For example, the normal CDF is one of the many CDFs that cannot be expressed in closed form. The first method of simulating normal variates in the 1950's, when computation beyond basic arithmetic was expensive, was to use $Z = \sum_{i=1}^{12} U_i - 6 \stackrel{aprx}{\sim}\mathsf{Norm}(0,1),$ by the CLT, where $U_i \stackrel{iid}{\sim}\mathsf{Unif}(0,1).$ Subsequently, the Box-Muller method was used to obtain two independent standard normal random variables from two independent standard uniform ones. Currently, it is common to use a very accurate rational approximation to the standard normal quantile function to get one standard normal deviate from one standard uniform deviate. I believe the function qnorm in R uses Michael Wichura's rational approximation, which is accurate up to double-precision computer representation.
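The 1950's sum-of-twelve-uniforms trick is a one-liner; a quick Python check that it really does behave like a standard normal in mean and variance:

```python
import random

def clt_normal():
    # Each U_i has mean 1/2 and variance 1/12, so the sum of 12 of them
    # has mean 6 and variance 1; subtracting 6 gives an approximate N(0,1).
    return sum(random.random() for _ in range(12)) - 6

random.seed(42)
zs = [clt_normal() for _ in range(100_000)]
mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / len(zs)
print(round(mean, 3), round(var, 3))  # close to 0 and 1, respectively
```

The approximation is poor in the tails -- the construction cannot produce $|Z| > 6$ at all -- which is one reason it was abandoned for Box-Muller and, later, quantile approximations.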
19,630
What does Fisher mean by this quote?
Here is my paraphrase of what Fisher says in your bolded quote. It should not be forgotten that quite a lot goes into choosing what hypothesis to test, so much so that even for a single person's decision, you could not specify it all. It also should not be forgotten that, for reasons stated above, you cannot decide on a particular trial's significance level always the same way, as a lifelong habit. A scientific hypothesis is selected as worth testing against many other competing hypotheses because of the biases of the researcher and their current state of knowledge. The hypotheses are "highly selected", not the samples; the hypotheses are the cases where we apply tests. The selection process of the hypotheses affects our significance level. If we are very sure of a hypothesis, that should make the significance level less stringent to satisfy ourselves. If we are unsure, there is a higher burden of proof. Other factors come into play as well, such as Type I error being worse than Type II in drug trials. I think when he says "indicated by" he simply means "chosen for". Yes, it is a preset value where we reject the hypothesis if the p-value is more extreme.
19,631
What does Fisher mean by this quote?
The cases to which Fisher is referring are not observations but tests. That is, we select hypotheses to test. We don't just test random hypotheses - we base them on observation, the literature, scientific theories and so on. If you did test random hypotheses, then the number of times you are mistaken (in the first sentence of your quote) would be 1% (or whatever value is chosen). E.g. if we tested hypotheses like The parity of a person's social security number is related to his IQ Blond haired people throw Frisbees better than dark haired people The time to get an answer on Cross Validated is related to the number of syllables in your first name. And tested a whole bunch of them at 1%, we would reject the null about 1% of the time, and do so incorrectly. (Unless, of course, I am on to something with the above nonsense). I did once see an article about hair color and Frisbee throwing - and it found a difference! So, I call this sort of thing "Frisbee research". But the part I like the best from the quote is this: for in fact no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas. He must be spinning in his grave.
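The "reject the null about 1% of the time" claim is easy to check by simulation. A hedged Python sketch testing many true nulls (two groups drawn from the same distribution) with an approximate two-sample z-test:

```python
import math
import random
import statistics

random.seed(7)
trials, n, rejections = 2_000, 50, 0
for _ in range(trials):
    # Both groups come from the same N(0, 1): every null here is true.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    if abs(z) > 2.576:  # two-sided 1% critical value
        rejections += 1
print(rejections / trials)  # hovers around 0.01, as the quote predicts
```

Every one of those roughly 1% of rejections is an incorrect one, since all the nulls were true by construction -- pure "Frisbee research".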
19,632
What does Fisher mean by this quote?
Trying to see the background of the quote I came to a version of the book (I am not sure which version) that has a slightly different quote https://archive.org/details/in.ernet.dli.2015.134555/page/n47 The attempts that have been made to explain the cogency of tests of significance in scientific research, by reference to hypothetical frequencies of possible statements, based on them, being right or wrong, thus seem to miss the essential nature of such tests. A man who "rejects" a hypothesis provisionally, as a matter of habitual practice, when the significance is at the 1% level or higher, will certainly be mistaken in not more than 1% of such decisions. For when the hypothesis is correct he will be mistaken in just 1% of these cases, and when it is incorrect he will never be mistaken in rejection. This inequality statement can therefore be made. However, the calculation is absurdly academic, for in fact no scientific worker has a fixed level of significance at which from year to year, and in all circumstances, he rejects hypotheses; he rather gives his mind to each particular case in the light of his evidence and his ideas. Further, the calculation is based solely on a hypothesis, which, in the light of the evidence, is often not believed to be true at all, so that the actual probability of erroneous decision, supposing such a phrase to have any meaning, may be much less than the frequency specifying the level of significance. To a practical man, also, who rejects a hypothesis, it is, of course, a matter of indifference with what probability he might be led to accept the hypothesis falsely, for in his case he is not accepting it. This seems to me a criticism of using the mathematical expression of rejection possibilities, type I errors, as some rigorous argument. Those expressions are often not a good expression for what is relevant, and neither are they rigorous.
Why are the cases chosen for applying a test "highly selected"? This seems to relate to the sentence Further, the calculation is based solely on a hypothesis, which, in the light of the evidence, is often not believed to be true at all We are not indifferent towards the hypothesis that is being tested, and often a hypothesis that is being tested is not believed to be true. How is this related to the choice of the significance level? This relates to so that the actual probability of erroneous decision, supposing such a phrase to have any meaning, may be much less than the frequency specifying the level of significance The p-value is just the frequency of making a mistake when the null-hypothesis is true. But the actual frequency of making a mistake will be different (lower). What is "the actual level of significance indicated by a particular trial" referring to? I believe that this part refers to some sort of p-value hacking: changing the significance level, alpha, after the observations have occurred in order to match the observed p-value, and pretending that this was the cut-off value all along from the beginning.
19,633
Why is an estimator considered a random variable?
Somewhat loosely -- I have a coin in front of me. The value of the next toss of the coin (let's take {Head=1, Tail=0} say) is a random variable. It has some probability of taking the value $1$ ($\frac12$ if the experiment is "fair"). But once I have tossed it and observed the outcome, it's an observation, and that observation doesn't vary, I know what it is. Consider now I will toss the coin twice ($X_1, X_2$). Both of these are random variables and so is their sum (the total number of heads in two tosses). So is their average (the proportion of heads in two tosses) and their difference, and so forth. That is, functions of random variables are in turn random variables. So an estimator -- which is a function of random variables -- is itself a random variable. But once you observe that random variable -- like when you observe a coin toss or any other random variable -- the observed value is just a number. It doesn't vary -- you know what it is. So an estimate -- the value you have calculated based on a sample -- is an observation on a random variable (the estimator) rather than a random variable itself.
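The estimator-vs-estimate distinction can be made concrete in a couple of lines of Python (same {Head=1, Tail=0} coding as above):

```python
import random

random.seed(0)
# The estimator "proportion of heads in two tosses" is a random variable:
# repeating the experiment gives different realized values.
estimates = [(random.randint(0, 1) + random.randint(0, 1)) / 2 for _ in range(10)]
print(estimates)
# Each entry is an estimate: once observed, it is just a fixed number
# drawn from the estimator's possible values {0.0, 0.5, 1.0}.
```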
19,634
Why is an estimator considered a random variable?
My understandings: An estimator is not only a function, whose input is some random variable and whose output is another random variable, but also a random variable, which is just the output of the function. Something like $y=y(x)$: when we talk about $y$, we mean both the function $y()$ and the result $y$. Example: an estimator $\overline X=\mu(X_1,X_2,X_3)=\frac{X_1+X_2+X_3}{3}$; we mean both $\mu()$, which is a function, and its result $\overline X$, which is a random variable. The difference between estimator and estimate is about before observing or after observing. Actually, similar to an estimator, an estimate is both a function and a value (the function's output) too. But the estimate is in the context of after observing, and by contrast, the estimator is in the context of before observing. A picture illustrates the idea above: I have researched this question during my weekend; after reading lots of material from the internet, I am still confused. Although I am not completely sure that my answer is right, it seems like, to me, it's the only way to let everything make sense.
19,635
Does a confidence interval actually provide a measure of the uncertainty of a parameter estimate?
He's referring, rather clumsily, to the well known fact that frequentist analysis doesn't model the state of our knowledge about an unknown parameter with a probability distribution, so having calculated a (say 95%) confidence interval (say 1.2 to 3.4) for a population parameter (say the mean of a Gaussian distribution) from some data you can't then go ahead & claim that there's a 95% probability of the mean falling between 1.2 and 3.4. The probability's one or zero—you don't know which. But what you can say, in general, is that your procedure for calculating 95% confidence intervals is one that ensures they contain the true parameter value 95% of the time. This seems reason enough for saying that CIs reflect uncertainty. As Sir David Cox put it† We define procedures for assessing evidence that are calibrated by how they would perform were they used repeatedly. In that sense they do not differ from other measuring instruments. See here & here for further explanation. Other things you can say vary according to the particular method you used to calculate the confidence interval; if you ensure the values inside have greater likelihood, given the data, than the points outside, then you can say that (& it's often approximately true for commonly used methods). See here for more. † Cox (2006), Principles of Statistical Inference, §1.5.2
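Cox's calibration point -- the procedure contains the true value 95% of the time -- can be illustrated with a short simulation (a sketch using a z-interval for a normal mean with known $\sigma = 1$):

```python
import random
import statistics

random.seed(3)
true_mu, n, trials = 5.0, 30, 5_000
half_width = 1.96 / n ** 0.5  # 95% z-interval half-width with sigma = 1
covered = 0
for _ in range(trials):
    xbar = statistics.mean(random.gauss(true_mu, 1) for _ in range(n))
    if xbar - half_width <= true_mu <= xbar + half_width:
        covered += 1
print(covered / trials)  # close to 0.95
```

Each individual interval either contains $\mu$ or it doesn't -- probability one or zero, you don't know which -- but the 95% is a property of the procedure over repetitions, exactly as described above.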
19,636
Does a confidence interval actually provide a measure of the uncertainty of a parameter estimate?
It can be hard to mathematically characterize uncertainty, but I know it when I see it; it usually has wide 95% confidence intervals.
19,637
Choosing number of principal components to retain
The following article: Component retention in principal component analysis with application to cDNA microarray data by Cangelosi and Goriely gives a rather nice overview of the standard rules of thumb to detect the number of components in a study. (Scree plot, Proportion of total variance explained, Average eigenvalue rule, Log-eigenvalue diagram, etc.) Most of them are quite straightforward to implement in R. In general, if your scree plot is very inconclusive then you just need to "pick your poison". There is no absolute right or wrong for any data, as in reality the number of PCs to use actually depends on your understanding of the problem. The only data-set you can "really" know the dimensionality of is the one you constructed yourself. :-) Principal Components at the end of the day provide the optimal decomposition of the data under an RSS metric (where as a by-product you get each component to represent a principal mode of variation), and including or excluding a given number of components dictates your perception about the dimensionality of your problem. As a matter of personal preference, I like Minka's approach on this, Automatic choice of dimensionality for PCA, which is based on a probabilistic interpretation of PCA; but then again, you get into the game of trying to model the likelihood of your data for a given dimensionality. (Link provides Matlab code if you wish to follow this rationale.) Try to understand your data more. E.g., do you really believe that 99.99% of your data-set's variation is due to your model's covariates? If not, you probably don't need to include dimensions that exhibit such a small proportion of total variance. Do you think that in reality a component reflects variation below a threshold of just noticeable differences? That again probably means that there is little relevance in including that component in your analysis. In any case, good luck and check your data carefully. (Plotting them works wonders too.)
Choosing number of principal components to retain
Choosing number of principal components to retain
The following article: Component retention in principal component analysis with application to cDNA microarray data by Cangelosi and Goriely gives a rather nice overview of the standard rules of thumb for detecting the number of components in a study (scree plot, proportion of total variance explained, average eigenvalue rule, log-eigenvalue diagram, etc.). Most of them are quite straightforward to implement in R. In general, if your scree plot is very inconclusive then you just need to "pick your poison". There is no absolute right or wrong for any data, as in reality the number of PCs to use actually depends on your understanding of the problem. The only data-set whose dimensionality you can "really" know is the one you constructed yourself. :-) Principal components at the end of the day provide the optimal decomposition of the data under an RSS metric (where as a by-product you get each component to represent a principal mode of variation), and including or excluding a given number of components dictates your perception of the dimensionality of your problem. As a matter of personal preference, I like Minka's approach in Automatic choice of dimensionality for PCA, which is based on a probabilistic interpretation of PCA, but then again you get into the game of trying to model the likelihood of your data for a given dimensionality. (The link provides Matlab code if you wish to follow this rationale.) Try to understand your data more. E.g., do you really believe that 99.99% of your data-set's variation is due to your model's covariates? If not, you probably don't need to include dimensions that exhibit such a small proportion of total variance. Do you think that in reality a component reflects variation below a threshold of just noticeable differences? That again probably means there is little relevance in including that component in your analysis. In any case, good luck and check your data carefully. (Plotting them works wonders too.)
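Two of those rules of thumb (cumulative proportion of total variance, and the average-eigenvalue rule) are easy to sketch in code. Here is a small Python illustration rather than the R mentioned above; the function name, the 90% threshold, and the example eigenvalues are arbitrary choices for demonstration, not anything from the paper:

```python
def components_to_retain(eigenvalues, threshold=0.90):
    """Two rules of thumb for choosing the number of PCs, given the
    eigenvalues of the covariance (or correlation) matrix sorted in
    decreasing order:
      (a) the smallest k whose cumulative proportion of total
          variance reaches `threshold`;
      (b) the average-eigenvalue (Kaiser-type) rule, keeping every
          component whose eigenvalue exceeds the mean eigenvalue."""
    total = sum(eigenvalues)
    cumulative, by_proportion = 0.0, len(eigenvalues)
    for k, ev in enumerate(eigenvalues, start=1):
        cumulative += ev
        if cumulative / total >= threshold:
            by_proportion = k
            break
    mean_ev = total / len(eigenvalues)
    by_average = sum(1 for ev in eigenvalues if ev > mean_ev)
    return by_proportion, by_average
```

On a made-up spectrum the two rules can easily disagree, which is exactly the "pick your poison" point: `components_to_retain([4.0, 2.0, 1.0, 0.5, 0.3, 0.2])` keeps 4 components by cumulative variance but only 2 by the average-eigenvalue rule.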
19,638
Choosing number of principal components to retain
There has been very nice subsequent work on this problem in the past few years since this question was originally asked and answered. I highly recommend the following paper by Gavish and Donoho: The Optimal Hard Threshold for Singular Values is 4/sqrt(3). Their result is based on asymptotic analysis (i.e. there is a well-defined optimal solution as your data matrix becomes infinitely large), but they show impressive numerical results indicating that the asymptotically optimal procedure works for small and realistically sized datasets, even under different noise models. Essentially, the optimal procedure boils down to estimating the noise, $\sigma$, added to each element of the matrix. Based on this you calculate a threshold and remove principal components whose singular value falls below the threshold. For a square $n \times n$ matrix, the proportionality constant 4/sqrt(3) shows up as suggested in the title: $$\lambda = \frac{4\sigma\sqrt{n}}{\sqrt{3}}$$ They also explain the non-square case in the paper. They have a nice code supplement (in MATLAB) here, but the algorithms would be easy to implement in R or anywhere else: https://purl.stanford.edu/vg705qn9070 Caveats:
- If you have missing data, I'm not sure this will work.
- If each feature in your dataset has different noise magnitudes, I'm not sure this will work (though whitening could probably get around this under certain assumptions).
- It would be interesting to see if similar results hold for other low-rank matrix factorizations (e.g. non-negative matrix factorization).
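For intuition, here is a rough Python/NumPy sketch of the square-matrix case with known $\sigma$ (the authors' supplement is in MATLAB and also covers the non-square and unknown-noise cases; the function name and the demo numbers below are just illustration, not their code):

```python
import numpy as np

def hard_threshold_denoise(Y, sigma):
    """Square-case Gavish-Donoho rule: zero out singular values below
    lambda = 4 * sigma * sqrt(n) / sqrt(3) and reconstruct."""
    n, m = Y.shape
    assert n == m, "this sketch only covers the square case"
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    lam = 4.0 * sigma * np.sqrt(n) / np.sqrt(3.0)
    s_kept = np.where(s > lam, s, 0.0)
    rank = int(np.count_nonzero(s_kept))
    return (U * s_kept) @ Vt, rank

# demo: rank-1 signal plus entrywise Gaussian noise of known sigma
rng = np.random.default_rng(0)
n, sigma = 50, 0.1
u = np.ones((n, 1))
Y = 10.0 * (u @ u.T) / n + sigma * rng.standard_normal((n, n))
denoised, rank = hard_threshold_denoise(Y, sigma)
```

With these numbers the threshold is $\lambda \approx 1.63$, comfortably above the largest singular value of pure $50 \times 50$ noise (roughly $2\sigma\sqrt{n} \approx 1.41$), so essentially only the planted component survives.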
19,639
Choosing number of principal components to retain
The problem with Kaiser's criterion (all eigenvalues greater than one) is that the number of factors extracted is usually about one third the number of items or scales in the battery, regardless of whether many of the additional factors are noise. Parallel analysis and the scree criterion are generally more accurate procedures for determining the number of factors to extract (according to classic texts by Harman and Ledyard Tucker, as well as more recent work by Wayne Velicer).
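Horn's parallel analysis is short enough to sketch: compare the observed correlation-matrix eigenvalues against the average eigenvalues obtained from uncorrelated random data of the same shape, and keep the components that beat the random baseline. A hedged NumPy illustration (the simulation count, seed, and demo data are arbitrary):

```python
import numpy as np

def parallel_analysis(X, n_sims=100, seed=0):
    """Retain components whose observed correlation-matrix eigenvalues
    exceed the corresponding mean eigenvalues from uncorrelated
    Gaussian data of the same (n, p) shape (Horn's procedure)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    simulated = np.zeros(p)
    for _ in range(n_sims):
        R = rng.standard_normal((n, p))
        simulated += np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    simulated /= n_sims
    return int(np.sum(observed > simulated))

# demo: six noisy indicators of a single latent factor
rng = np.random.default_rng(42)
f = rng.standard_normal((200, 1))
X = f @ np.ones((1, 6)) + 0.1 * rng.standard_normal((200, 6))
n_keep = parallel_analysis(X)
```

On this one-factor demo the procedure keeps a single component, whereas Kaiser's fixed cutoff of 1 can easily keep noise eigenvalues that sit just above 1 by chance.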
19,640
OLS with clustered standard errors vs. multilevel modeling when the main interest is at the individual level [duplicate]
You have two major options:
- multilevel analysis, which you must have been reading about;
- OLS with clustered standard errors (Peter Flom made a comment that OLS assumes that the errors are independent, but that assumption is easy to circumvent with the right choice of the covariance matrix estimator).
Multilevel analysis surely is fancy and hot. That's also the reason it is misused a lot, because everybody seems to want to do something multilevel, no matter whether their data are suitable for it or not. My reaction to about 2/3 of the questions with this tag on this site is that the goals of the study (except for being published in a highly ranked journal, which is often THE goal of many studies) are better addressed by other methods. In multilevel analysis, you have to make strong assumptions: (i) that your random effects are normal (or, if you have random slopes as well as random intercepts, that their joint distribution is multivariate normal), (ii) that your model contains all relevant variables, so that you are safe assuming that errors and regressors are uncorrelated at all levels, (iii) that you have enough observations at each level to really utilize the asymptotic theory results concerning the likelihood ratio test statistics and the inverse of the information matrix as the estimator of the variances of the parameter estimates. These assumptions are swept under the carpet most of the time, and rarely if ever checked. The methods that deal with them do exist, but they would require a Ph.D. in statistics to read. There are also alternative Bayesian solutions, which likewise require a solid stats sequence in Bayesian computing before you even dare to open these papers. 
OLS with clustered errors makes fewer assumptions: something like (ii) above, i.e., to be able to convince yourself that the regressors and errors are uncorrelated, and something like (iii), that you have enough clusters so that the variance-covariance estimate is obtained as a sum over sufficiently many independent terms. Note that you don't need to have asymptotics in terms of the number of observations per cluster, unlike multilevel models. An unpleasant side effect concerning OLS with clustered standard errors is that you may run out of degrees of freedom if you have a model with 40 variables and only 30 clusters. (Well if you have 30 clusters, you're screwed anyway.) An interesting feature of multilevel models is that they can address interactions between levels (e.g., how does the education and experience of a teacher affect the student gains?) It is messier, but possible, to address in OLS as well by explicitly constructing the interactions and using them as explanatory variables in your regression. With enough data, you can run both analyses and construct a Hausman specification test on the difference between the efficient estimator (multilevel model) and a less efficient and more robust estimator (OLS with clustered standard errors) for the parameters that both models estimate. Most of the time, I would trust the OLS with clustered standard errors more than I would multilevel analysis, frankly.
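To make the "clustered standard errors" option concrete: it is just OLS plus a sandwich covariance estimator in which the scores are summed within each cluster. A NumPy sketch written out from the formula (my illustration, not code from any particular package; the small-sample correction shown is the common CR1/Stata-style one, and the demo data are made up):

```python
import numpy as np

def ols_clustered(X, y, cluster):
    """OLS point estimates with cluster-robust (CR1) standard errors.
    X: (n, k) design matrix including an intercept column,
    y: (n,) response, cluster: (n,) array of cluster labels."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    meat = np.zeros((k, k))
    labels = np.unique(cluster)
    G = len(labels)
    for g in labels:
        Xg, ug = X[cluster == g], resid[cluster == g]
        score = Xg.T @ ug                  # cluster-level score (k-vector)
        meat += np.outer(score, score)
    correction = (G / (G - 1)) * ((n - 1) / (n - k))  # CR1 adjustment
    V = correction * XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(V))

# demo: 20 clusters of 10 with a cluster-level random intercept
rng = np.random.default_rng(1)
G, per = 20, 10
cluster = np.repeat(np.arange(G), per)
x = rng.standard_normal(G * per)
u = np.repeat(rng.standard_normal(G), per)   # shared within cluster
y = 1.0 + 2.0 * x + 0.5 * u + 0.5 * rng.standard_normal(G * per)
X = np.column_stack([np.ones(G * per), x])
beta, se = ols_clustered(X, y, cluster)
```

Note how the degrees-of-freedom worry above shows up directly: the meat matrix is a sum of only G cluster-level terms, so with few clusters it is a poor estimate no matter how many observations you have.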
19,641
Assessing forecastability of time series
Here's a second idea based on stl. You could fit an stl decomposition to each series, and then compare the standard error of the remainder component to the mean of the original data ignoring any partial years. Series that are easy to forecast should have a small ratio of se(remainder) to mean(data). The reason I suggest ignoring partial years is that seasonality will affect the mean of the data otherwise. In the example in the question, all series have seven complete years, so it is not an issue. But if the series extended part way into 2012, I suggest the mean is computed only up to the end of 2011 to avoid seasonal contamination of the mean. This idea assumes that mean(data) makes sense -- that is that the data are mean stationary (apart from seasonality). It probably wouldn't work well for data with strong trends or unit roots. It also assumes that a good stl fit translates into good forecasts, but I can't think of an example where that wouldn't be true so it is probably an ok assumption.
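To make the ratio concrete, here is a crude plain-Python stand-in for the idea (R's stl is loess-based and also extracts a trend; this sketch only removes a seasonal component via per-season means, which is enough to illustrate the se(remainder)/mean(data) ratio itself; the function name is mine):

```python
def forecastability_ratio(x, period=12):
    """Drop partial years, remove a seasonal component estimated by
    per-season means, and return sd(remainder) / mean(data), using the
    standard deviation of the remainder as a rough stand-in for its
    standard error.  Small values suggest an easier-to-forecast series.
    No trend handling, so this only suits roughly mean-stationary data."""
    n = (len(x) // period) * period        # complete years only
    x = x[:n]
    seasonal = [sum(x[i::period]) / (n // period) for i in range(period)]
    remainder = [x[t] - seasonal[t % period] for t in range(n)]
    mean_r = sum(remainder) / n
    sd_r = (sum((r - mean_r) ** 2 for r in remainder) / n) ** 0.5
    return sd_r / (sum(x) / n)
```

A perfectly repeating monthly pattern gives a ratio of exactly 0 (trivially forecastable), and any irregular deviation from the pattern pushes the ratio up.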
19,642
Assessing forecastability of time series
You might be interested in ForeCA: Forecastable Component Analysis (disclaimer: I am the author). As the name suggests, it is a dimension reduction / blind source separation (BSS) technique to find the most forecastable signals from many multivariate - more or less stationary - time series. For your particular case of 20,000 time series it might not be the fastest thing to do (the solution involves multivariate power spectra and iterative, analytic updating of the best weight vector; furthermore, I guess it might run into the $p \gg n$ problem). There is also an R package ForeCA available on CRAN (again: I am the author) which implements basic functionality; right now it supports estimating the forecastability measure $\Omega(x_t)$ for univariate time series, and it has some good wrapper functions for multivariate spectra (again, 20,000 time series is probably too much to handle at once). But maybe you can try to use the MASE measure proposed by Rob to make a coarse separation of the 20,000 into several sub-groups and then apply ForeCA to each separately.
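The measure $\Omega$ is built on the entropy of the spectral density: white noise has a flat spectrum, hence maximal entropy and a score near 0, while a strongly periodic series concentrates its spectrum and scores near 1. A rough periodogram-based proxy in Python, assuming a simple discrete normalization; this is not the package's actual estimator, just the idea:

```python
import numpy as np

def spectral_forecastability(x):
    """1 - (normalized entropy of the periodogram): a crude discrete
    proxy for a spectral-entropy forecastability score.  Near 0 for
    white noise, near 1 for strongly periodic series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))[1:] ** 2     # periodogram, DC dropped
    m = spec.size
    p = spec / spec.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return 1.0 - entropy / np.log(m)           # log(m) = max entropy

# demo: a pure sine vs. Gaussian white noise
t = np.arange(256)
score_sine = spectral_forecastability(np.sin(2 * np.pi * 8 * t / 256))
score_noise = spectral_forecastability(
    np.random.default_rng(0).standard_normal(256))
```

The raw periodogram is a noisy spectral estimator, which is one reason the package uses better-behaved multivariate spectral estimates; this sketch is only meant to convey why "peaky spectrum = forecastable".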
19,643
Assessing forecastability of time series
This is a fairly common problem in forecasting. The traditional solution is to compute mean absolute percentage errors (MAPEs) on each item. The lower the MAPE, the more easily forecasted is the item. One problem with that is many series contain zero values and then MAPE is undefined. I proposed a solution in Hyndman and Koehler (IJF 2006) [Preprint version] using mean absolute scaled errors (MASEs). For monthly time series, the scaling would be based on in-sample seasonal naive forecasts. That is if $y_t$ is an observation at time $t$, data are available from times 1 to $T$ and $$ Q = \frac{1}{T-12}\sum_{t=13}^T |y_t-y_{t-12}|, $$ then a scaled error is $q_t = (y_t-\hat{y}_t)/Q$, where $\hat{y}_t$ is a forecast of $y_t$ using whatever forecasting method you are implementing for that item. Take the mean absolute value of the scaled errors to get the MASE. For example, you might use a rolling origin (aka time series cross-validation) and take the mean absolute value of the resulting one-step (or $h$-step) errors. Series that are easy to forecast should have low values of MASE. Here "easy to forecast" is interpreted relative to the seasonal naive forecast. In some circumstances, it may make more sense to use an alternative base measure to scale the results.
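The scaled error is only a few lines of code. A plain-Python sketch of the formula above (m = 12 gives the seasonal naive scaling for monthly data; m = 1 recovers the ordinary naive scaling):

```python
def mase(actual, forecast, train, m=12):
    """Mean absolute scaled error: out-of-sample absolute errors
    divided by Q, the in-sample mean absolute error of the seasonal
    naive forecast y_hat[t] = y[t - m] on the training data."""
    T = len(train)
    Q = sum(abs(train[t] - train[t - m]) for t in range(m, T)) / (T - m)
    errors = [abs(a - f) for a, f in zip(actual, forecast)]
    return sum(errors) / (len(errors) * Q)
```

For example, with train = [1, 3, 5, 7] and m = 1 the naive forecast is always off by 2, so Q = 2; forecast errors of 1 then give a MASE of 0.5, i.e. twice as accurate as the naive benchmark.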
19,644
Assessing forecastability of time series
This answer is very late, but for those who are still looking for an appropriate measure of forecastability for product demand time series, I highly suggest looking at approximate entropy. The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations.[7] A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn. Product demand tends to have a very strong seasonal component, making the coefficient of variation (CV) inappropriate. ApEn(m, r) is able to handle this correctly. In my case, since my data tends to have a strong weekly seasonality, I set the parameters m=7 and r=0.2*std as recommended here.
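For reference, ApEn itself is short enough to write out. A plain-Python sketch of Pincus's definition (r is treated as an absolute tolerance here, so pass in 0.2 * std of your series yourself; the implementation is O(n^2), which is fine for demand-length series):

```python
import math

def approximate_entropy(x, m=2, r=0.5):
    """ApEn(m, r) = phi(m) - phi(m + 1), where phi(m) is the average
    log frequency with which length-m patterns repeat within tolerance
    r (Chebyshev distance; self-matches are counted, as in the
    original definition).  Lower values = more repetitive patterns =
    more predictable."""
    n = len(x)

    def phi(mm):
        count = n - mm + 1
        logs = 0.0
        for i in range(count):
            matches = sum(
                1 for j in range(count)
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r
            )
            logs += math.log(matches / count)
        return logs / count

    return phi(m) - phi(m + 1)
```

A constant series gives exactly 0, and a perfectly alternating series gives a value very close to 0 (every matching pattern extends to a longer match), while irregular series score higher.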
19,645
Can someone please explain the back-propagation algorithm? [duplicate]
The back propagation algorithm is a gradient descent algorithm for fitting a neural network model (as mentioned by @Dikran). Let me explain how. Formally: plugging the gradient calculated at the end of this post into the gradient descent iteration below gives the back propagation algorithm as a particular case of the use of a gradient descent. A neural network model Formally, we fix ideas with a simple single layer model: $$ f(x)=g(A^1(s(A^2(x)))) $$ where $g:\mathbb{R} \rightarrow \mathbb{R}$ and $s:\mathbb{R}^M\rightarrow \mathbb{R}^M$ are known with, for all $m=1,\dots,M$, $s(x)[m]=\sigma(x[m])$, and $A^1:\mathbb{R}^M\rightarrow \mathbb{R}$, $A^2:\mathbb{R}^p\rightarrow \mathbb{R}^M$ are unknown affine functions. The function $\sigma:\mathbb{R}\rightarrow \mathbb{R}$ is called the activation function in the framework of classification. A quadratic loss function is taken to fix ideas. Hence the input vectors $(x_1,\dots,x_n)$ of $\mathbb{R}^p$ can be fitted to the real outputs $(y_1,\dots,y_n)$ of $\mathbb{R}$ (they could be vectors) by minimizing the empirical loss: $$\mathcal{R}_n(A^1,A^2)=\sum_{i=1}^n (y_i-f(x_i))^2\;\;\;\;\;\;\; [1]$$ with respect to the choice of $A^1$ and $A^2$. Gradient descent A gradient descent for minimizing $\mathcal{R}$ is an algorithm that iterates: $$\mathbf{a}_{l+1}=\mathbf{a}_l-\gamma_l \nabla \mathcal{R}(\mathbf{a}_l),\ l \ge 0$$ for well-chosen step sizes $(\gamma_l)_l$ (also called the learning rate in the framework of back propagation). It requires the calculation of the gradient of $\mathcal{R}$. In the considered case $\mathbf{a}_l=(A^1_{l},A^2_{l})$. Gradient of $\mathcal{R}$ (for the simple considered neural net model) Let us denote by $\nabla_1 \mathcal{R}$ the gradient of $\mathcal{R}$ as a function of $A^1$, and by $\nabla_2\mathcal{R}$ the gradient of $\mathcal{R}$ as a function of $A^2$. 
Standard calculation (using the chain rule for the derivative of a composition of functions) and the notations $z_i=A^1(s(A^2(x_i)))$ (so that $f(x_i)=g(z_i)$), $h_i=s(A^2(x_i))$ and $w^1\in\mathbb{R}^M$ for the weight vector of $A^1$ give $$\nabla_1 \mathcal{R}[1:M] =-2 \sum_{i=1}^n h_i\, g'(z_i) (y_i-f(x_i))$$ and, for all $m=1,\dots,M$, $$\nabla_2 \mathcal{R}[1:p,m] =-2 \sum_{i=1}^n x_i\, g'(z_i)\, w^1[m]\, \sigma'(A^2(x_i)[m]) (y_i-f(x_i)).$$ Here I used the R notation: $x[a:b]$ is the vector composed of the coordinates of $x$ from index $a$ to index $b$.
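To see the algorithm run, here is a small NumPy sketch of gradient descent for this single-hidden-layer model with $g$ the identity and $\sigma$ the logistic sigmoid. The loss used here is half the mean squared error rather than the sum above, which only absorbs the factor $-2$ into the step size; all names and hyperparameters are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, M=8, lr=0.1, steps=2000, seed=0):
    """Fit f(x) = A1(sigmoid(A2 x)) by plain gradient descent on
    0.5 * mean squared error; the gradients are the chain-rule
    expressions computed layer by layer, i.e. back-propagation.
    X: (n, p) inputs, y: (n,) targets."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W2 = 0.5 * rng.standard_normal((p, M)); b2 = np.zeros(M)  # A^2
    w1 = 0.5 * rng.standard_normal(M); b1 = 0.0               # A^1
    losses = []
    for _ in range(steps):
        h = sigmoid(X @ W2 + b2)          # s(A^2(x)), shape (n, M)
        f = h @ w1 + b1                   # g is the identity
        err = f - y
        losses.append(0.5 * np.mean(err ** 2))
        # output layer: gradient with respect to A^1
        grad_w1 = h.T @ err / n
        grad_b1 = err.mean()
        # propagate back through sigma: gradient with respect to A^2
        delta = (err[:, None] * w1) * h * (1.0 - h)   # (n, M)
        grad_W2 = X.T @ delta / n
        grad_b2 = delta.mean(axis=0)
        w1 -= lr * grad_w1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
    predict = lambda Xn: sigmoid(Xn @ W2 + b2) @ w1 + b1
    return predict, losses

# demo: fit a simple nonlinear target
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
predict, losses = train(X, X[:, 0] ** 2)
```

The `delta` line is the "back-propagation" step: the output error is multiplied by the outgoing weight $w^1[m]$ and the local derivative $\sigma'$, exactly the chain-rule factors in the gradient expressions.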
19,646
Can someone please explain the back-propagation algorithm? [duplicate]
Back-propagation is a way of working out the derivative of the error function with respect to the weights, so that the model can be trained by gradient descent optimisation methods - it is basically just the application of the "chain rule". There isn't really much more to it than that, so if you are comfortable with calculus that is basically the best way to look at it. If you are not comfortable with calculus, a better way would be to say that we know how badly the output units are doing because we have a desired output with which to compare the actual output. However we don't have a desired output for the hidden units, so what do we do? The back-propagation rule is basically a way of spreading out the blame for the error of the output units onto the hidden units. The more influence a hidden unit has on a particular output unit, the more blame it gets for the error. The total blame associated with a hidden unit then gives an indication of how much the input-to-hidden layer weights need changing. The two things that govern how much blame is passed back are the weight connecting the hidden unit to the output unit (obviously) and the output of the hidden unit (if it is shouting rather than whispering it is likely to have a larger influence). The rest is just the mathematical niceties that turn that intuition into the derivative of the training criterion. I'd also recommend Bishop's book for a proper answer! ;o)
19,647
Can someone please explain the back-propagation algorithm? [duplicate]
It's an algorithm for training feedforward multilayer neural networks (multilayer perceptrons). There are several nice java applets around the web that illustrate what's happening, like this one: http://neuron.eng.wayne.edu/bpFunctionApprox/bpFunctionApprox.html. Also, Bishop's book on NNs is the standard desk reference for anything to do with NNs.
19,648
Why do you take the sqrt of 1/n for RMSE?
While Demetri's answer gives a very good derivation of RMSE, it doesn't really explain why not the other method you suggest. I think you can get a little more insight by observing that MRSE is not a valid name for your suggested measure. Look closely and the steps are:

1. Square the residuals
2. Add them up
3. Take the square root
4. Divide by the number of samples

A "mean" needs to have the sum and the divide consecutive. So the MRSE would actually be: $$ MRSE = \frac{1}{n} \sum \sqrt{(\hat{y}_i - y_i)^2} = \frac{1}{n}\sum |\hat{y}_i - y_i| = MAE$$ So, RMSE is the square root of a mean: the mean is computed first and then just transformed (by the square root) for convenience. The MAE is itself a mean. What you have created isn't a mean: you are not adding things up and dividing by the number there are; you are adding things up, then square rooting, then dividing by the number there are. In fact the construct before the $1/n$ is a Euclidean distance - the total distance of the sample from the predicted y-vector. As pointed out by Amin's answer, this error naturally grows as the square root of the size of the y-vector, so by dividing by $n$ your error will systematically get smaller the larger the sample.
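To make the algebra concrete, here is a small numpy sketch (function names are mine): the mean of root-squares is exactly the MAE, while the "divide after the square root" measure is a scaled Euclidean distance that shrinks as the sample grows even when the error distribution is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
e_small = rng.normal(0, 1, 100)      # 100 residuals
e_big = rng.normal(0, 1, 10_000)     # same error distribution, more data

mae = lambda e: np.mean(np.abs(e))
mrse = lambda e: np.mean(np.sqrt(e**2))                 # mean of root-squares
norm_over_n = lambda e: np.sqrt(np.sum(e**2)) / len(e)  # the suggested measure

# the "MRSE" is exactly the MAE
assert np.isclose(mrse(e_small), mae(e_small))

# the suggested measure equals RMSE / sqrt(n), so it shrinks with n
print(norm_over_n(e_small))  # roughly 0.1
print(norm_over_n(e_big))    # roughly 0.01
```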
19,649
Why do you take the sqrt of 1/n for RMSE?
Interesting question. Let's break this down into: why squared error, why mean squared error, and then why root mean squared error. I think that should answer your question.

Why squared error (SE)

Squared error happens to be a proper scoring rule, which is a really desirable property for your loss function to have (feel free to read up on proper scoring rules by searching this site). However, the squared error can grow simply by adding more data. So if I have two data sets (maybe one from yesterday and one from today), and they are of different sizes, I could be fooled into thinking my model is doing poorly simply because I had more data today than yesterday. Which leads me to...

Why mean squared error (MSE)

Taking the mean of the squared errors eliminates this problem of different data sizes. By taking the average loss, we retain the nice properties of the proper scoring rule, but can now compare the loss of a model on different data sets of possibly different sizes. But the interpretation of MSE is kind of hard: if $y$ is measured in dollars, what is a dollar squared? Which leads me to...

Why root mean squared error (RMSE)

MSE has weird units, but if we take the square root of MSE the result is on the scale of $y$. This makes interpretation a little easier. In summation:

1. SE is a proper scoring rule. We like that.
2. To prevent misleading inflation of the error due to sample sizes, we take the average of SE, or MSE.
3. MSE is hard to interpret, so instead we take the square root of MSE to get RMSE and have the error units on the same scale as the outcome.
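A quick numpy illustration of the first two points (the numbers and the "dollars" framing are just for illustration): the summed squared error inflates with sample size while the mean does not, and the square root returns the error to the original units.

```python
import numpy as np

rng = np.random.default_rng(42)
# residuals from the "same model" on two datasets of different size,
# both in dollars, with true error scale 2
e_today = rng.normal(0, 2, 1000)
e_yesterday = rng.normal(0, 2, 100)

sse = lambda e: np.sum(e**2)       # grows with sample size
mse = lambda e: np.mean(e**2)      # comparable across sizes, in dollars squared
rmse = lambda e: np.sqrt(mse(e))   # back on the dollar scale

print(sse(e_today) / sse(e_yesterday))  # on the order of 10: inflated by extra data
print(mse(e_today), mse(e_yesterday))   # both roughly 4 "dollars squared"
print(rmse(e_today))                    # roughly 2 dollars
```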
19,650
Why do you take the sqrt of 1/n for RMSE?
The goal is to have an unbiased estimator for the error your model makes on average. Let's call that $\bar \epsilon$. Now let's see the relationship of the two estimators you asked about with $\bar \epsilon$:

$\hat y_{i} - y_{i} = \epsilon_{i}$

$\frac{1}{n}\sum_{i=1}^{n}(\hat y_{i} - y_{i})^2 = \frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}^2 \approx \bar \epsilon^2$

thus

$\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}} \approx \bar \epsilon$

which is what we aimed for. Now let's see what the other estimator will give you:

$\frac{1}{n} \sqrt{\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}} = \frac{1}{n} \sqrt{n \times \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}} \approx \frac{\sqrt{n}}{n} \times \bar \epsilon$

As you can see, the second estimator is off by a factor of $\frac{\sqrt{n}}{n} = \frac{1}{\sqrt{n}}$ in estimating the average error you aimed for. For example, if for a data-generating process of $f(x) = 0$ you always predict 2, then you would want the estimator to give you $\bar \epsilon = 2$, which is what the first estimator gives, while the second estimator (assuming $n = 10$) will give you $2 \times \frac{\sqrt{10}}{10} \approx 0.63$.
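The constant-prediction example can be checked numerically; this short sketch just reproduces the arithmetic above.

```python
import numpy as np

n = 10
y = np.zeros(n)           # data-generating process f(x) = 0
y_hat = np.full(n, 2.0)   # always predict 2
resid = y_hat - y

rmse = np.sqrt(np.mean(resid**2))        # first estimator
other = np.sqrt(np.sum(resid**2)) / n    # second estimator

print(rmse)   # 2.0, the average error we wanted
print(other)  # 2 * sqrt(10) / 10, about 0.632
```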
19,651
Why do you take the sqrt of 1/n for RMSE?
I think both RMSE and MRSE could potentially be used for the purpose of creating a metric related to residuals per data point. The difference, and the reason the RMSE is commonly used and MRSE is not, probably lies in the interpretation of the terms and their related metrics. If we roll back RMSE to SE at every step of the way, we get terms that are commonly used and interpretable: we square RMSE and get MSE (a variance-like quantity), then multiply by the sample size and get SE. However, rolling back MRSE does not give such nice terms: we multiply MRSE by the sample size and get RSE (the root of the summed squared errors), which is not commonly used to evaluate anything. This may be the reason why one is used and the other isn't.
19,652
Difference between autocorrelation and partial autocorrelation
For a while, forget about time stamps. Consider three variables: $X, Y, Z$. Let's say $Z$ has a direct influence on the variable $X$. You can think of $Z$ as some economic parameter in the US which is influencing some other economic parameter $X$ of China. Now it may be that a parameter $Y$ (some parameter in England) is also directly influenced by $Z$. But there is an independent relationship between $X$ and $Y$ as well; by independence here I mean that this relationship is independent of $Z$. So you see, when $Z$ changes, $X$ changes because of the direct relationship between $X$ and $Z$, and also because $Z$ changes $Y$, which in turn changes $X$. So $X$ changes for two reasons. Now read this with $Z=y_{t-h}, \ \ Y=y_{t-h+\tau}$ and $X=y_t$ (where $h>\tau$). The autocorrelation between $X$ and $Z$ will take into account all changes in $X$, whether coming from $Z$ directly or through $Y$. The partial autocorrelation removes the indirect impact of $Z$ on $X$ coming through $Y$. How is that done? That is explained in the other answer to your question.
19,653
Difference between autocorrelation and partial autocorrelation
The difference between (sample) ACF and PACF is easy to see from the linear regression perspective. To get the sample ACF $\hat{\gamma}_h$ at lag $h$, you fit the linear regression model $$ y_t = \alpha + \beta y_{t-h} + u_t $$ and the resulting $\hat{\beta}$ is $\hat{\gamma}_h$. Because of (weak) stationarity, the estimate $\hat{\beta}$ is the sample correlation between $y_t$ and $y_{t-h}$. (There are some trivial differences between how sample moments are computed in time series and linear regression contexts, but they are negligible when the sample size is large.)

To get the sample PACF $\hat{\rho}_h$ at lag $h$, you fit the linear regression model $$ y_t = \alpha + \, ? y_{t-1} + \cdots + \, ? y_{t-h + 1} + \beta y_{t-h} + u_t $$ and the resulting $\hat{\beta}$ is $\hat{\rho}_h$. So $\hat{\rho}_h$ is the "correlation between $y_t$ and $y_{t-h}$ after controlling for the intermediate elements." The same discussion applies verbatim to the difference between population ACF and PACF. Just replace sample regressions by population regressions.

For a stationary AR(p) process, you'll find the PACF to be zero for lags $h > p$. This is not surprising. The process is specified by a linear regression. $$ y_t = \phi_0 + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + \epsilon_t $$ If you add a regressor (say $y_{t-p-1}$) on the right-hand side that is uncorrelated with the error term $\epsilon_t$, the resulting coefficient (the PACF at lag $p+1$ in this case) would be zero.
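The regression view can be sketched directly with numpy on a simulated AR(1) process (the helper name and simulation settings are my own). For AR(1) with $\phi = 0.7$, the lag-2 ACF regression recovers roughly $\phi^2 = 0.49$, while the lag-2 PACF coefficient, obtained after controlling for $y_{t-1}$, is near zero.

```python
import numpy as np

def lag_regression(y, h):
    """Regress y_t on an intercept and (y_{t-1}, ..., y_{t-h}); return coefficients."""
    T = len(y)
    X = np.column_stack([np.ones(T - h)] + [y[h - j:T - j] for j in range(1, h + 1)])
    beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return beta  # [intercept, coef on y_{t-1}, ..., coef on y_{t-h}]

# simulate a long AR(1) series: y_t = 0.7 y_{t-1} + eps_t
rng = np.random.default_rng(0)
phi, T = 0.7, 20_000
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

# sample ACF at lag 2: regress y_t on y_{t-2} alone
X2 = np.column_stack([np.ones(T - 2), y[:-2]])
acf2 = np.linalg.lstsq(X2, y[2:], rcond=None)[0][1]

# sample PACF at lag 2: coefficient on y_{t-2} after controlling for y_{t-1}
pacf2 = lag_regression(y, 2)[2]

print(acf2)   # near phi^2 = 0.49
print(pacf2)  # near 0
```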
19,654
How different are restricted cubic splines and penalized splines?
From my reading, the two concepts you ask us to compare are quite different beasts and would require an apples-and-oranges comparison. This makes many of your questions somewhat moot — ideally (assuming one can write a wiggliness penalty down for the RCS basis in the required form) you'd use a penalised restricted cubic regression spline model.

Restricted Cubic Splines

A restricted cubic spline (or natural spline) is a spline basis built from piecewise cubic polynomial functions that join smoothly at some pre-specified locations, or knots. What distinguishes a restricted cubic spline from a cubic spline is that additional constraints are imposed on the restricted version such that the spline is linear before the first knot and after the last knot. This is done to improve performance of the spline in the tails of $X$. Model selection with an RCS typically involves choosing the number of knots and their locations, with the former governing how wiggly or complex the resulting spline is. Unless some further steps are in place to regularize the estimated coefficients when model fitting, the number of knots directly controls spline complexity. This means that the user has some problems to overcome when estimating a model containing one or more RCS terms:

1. How many knots to use?
2. Where to place those knots in the span of $X$?
3. How to compare models with different numbers of knots?

On their own, RCS terms require user intervention to solve these problems.

Penalized splines

Penalized regression splines (sensu Hodges) on their own tackle issue 3. only, but they allow issue 1. to be circumvented. The idea here is that, as well as the basis expansion of $X$ (for now let's just assume this is a cubic spline basis), you also create a wiggliness penalty matrix.
Wiggliness is measured using some derivative of the estimated spline, with the typical choice being the second derivative, and the penalty itself is the squared second derivative integrated over the range of $X$. This penalty can be written in quadratic form as $$\boldsymbol{\beta}^{\mathsf{T}} \mathbf{S} \boldsymbol{\beta}$$ where $\mathbf{S}$ is a penalty matrix and $\boldsymbol{\beta}$ are the model coefficients. Then coefficient values are found to maximise the penalised log-likelihood criterion $$\mathcal{L}_p = \mathcal{L} - \lambda \boldsymbol{\beta}^{\mathsf{T}} \mathbf{S} \boldsymbol{\beta}$$ where $\mathcal{L}$ is the log-likelihood of the model and $\lambda$ is the smoothness parameter, which controls how strongly to penalize the wiggliness of the spline. As the penalised log-likelihood can be evaluated in terms of the model coefficients, fitting this model effectively becomes a problem of finding an optimal value for $\lambda$ whilst updating the coefficients during the search for that optimal $\lambda$. $\lambda$ can be chosen using cross-validation, generalised cross-validation (GCV), or marginal likelihood or restricted marginal likelihood criteria. The latter two effectively recast the spline model as a mixed effects model (the perfectly smooth parts of the basis become fixed effects and the wiggly parts of the basis become random effects, and the smoothness parameter is inversely related to the variance term for the random effects), which is what Hodges is considering in his book. Why does this solve the problem of how many knots to use? Well, it only kind of does: it removes the need for a knot at every unique data point (as in a smoothing spline), but you still need to choose how many knots or basis functions to use.
However, because the penalty shrinks the coefficients, you can get away with choosing as large a basis dimension as you think is needed to contain either the true function or a close approximation to it, and then let the penalty control how wiggly the estimated spline ultimately is, with the extra potential wiggliness available in the basis being removed or controlled by the penalty.

Comparison

Penalized (regression) splines and RCS are quite different concepts. There is nothing stopping you creating an RCS basis and an associated penalty in quadratic form and then estimating the spline coefficients using the ideas from the penalized regression spline model. RCS is just one kind of basis you can use to create a spline basis, and penalized regression splines are one way to estimate a model containing one or more splines with associated wiggliness penalties. Can we avoid issues 1., 2., and 3.? Yes, to some extent, with a thin plate spline (TPS) basis. A TPS basis has as many basis functions as unique data values in $X$. What Wood (2003) showed was that you can create a Thin Plate Regression Spline (TPRS) basis using an eigendecomposition of the TPS basis functions, retaining only the first $k$ largest, say. You still have to specify $k$, the number of basis functions you want to use, but the choice is generally based on how wiggly you expect the fitted function to be and how much of a computational hit you are willing to take. There is no need to specify the knot locations either, and the penalty shrinks the coefficients, so one avoids the model selection problem because you only have one penalized model, not many unpenalized ones with differing numbers of knots.

P-splines

Just to make things more complicated, there is a type of spline basis known as a P-spline (Eilers & Marx, 1996), where the "P" often gets interpreted as "penalized". P-splines are a B-spline basis with a difference penalty applied directly to the model coefficients.
In typical use the P-spline penalty penalises the squared differences between adjacent model coefficients, which in turn penalises wiggliness. P-splines are very easy to set up and result in a sparse penalty matrix, which makes them very amenable to estimation of spline terms in MCMC-based Bayesian models (Wood, 2017).

References

Eilers, P. H. C., and B. D. Marx. 1996. Flexible Smoothing with B-splines and Penalties. Stat. Sci.

Wood, S. N. 2003. Thin plate regression splines. J. R. Stat. Soc. Series B Stat. Methodol. 65: 95–114. doi:10.1111/1467-9868.00374

Wood, S. N. 2017. Generalized Additive Models: An Introduction with R, Second Edition. CRC Press.
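As a rough numerical sketch of the P-spline recipe (a B-spline basis on evenly spaced knots plus a difference penalty on the coefficients), here is a numpy/scipy version; the basis size, $\lambda$ values, and helper names are my own choices, not from the cited references.

```python
import numpy as np
from scipy.interpolate import splev

def bspline_basis(x, xl, xr, nseg=20, deg=3):
    """B-spline design matrix on evenly spaced knots extended beyond [xl, xr]."""
    dx = (xr - xl) / nseg
    knots = xl + dx * (np.arange(nseg + 2 * deg + 1) - deg)
    n_basis = nseg + deg
    B = np.empty((len(x), n_basis))
    for i in range(n_basis):
        coefs = np.zeros(n_basis)
        coefs[i] = 1.0  # evaluate the i-th basis function via splev
        B[:, i] = splev(x, (knots, coefs, deg))
    return B

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

B = bspline_basis(x, 0.0, 1.0)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # second-order difference matrix

def pspline_coefs(lam):
    # penalized least squares: (B'B + lam * D'D) beta = B'y
    return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

beta = pspline_coefs(1.0)
fit = B @ beta
wiggliness = lambda b: np.sum(np.diff(b, n=2) ** 2)
# a larger smoothing parameter shrinks the coefficient differences
assert wiggliness(pspline_coefs(100.0)) < wiggliness(beta)
```

Increasing `lam` pulls the fit towards the null space of the difference penalty (a straight line for a second-order penalty), mirroring the role of $\lambda$ in the penalised log-likelihood described above.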
How different are restricted cubic splines and penalized splines?
From my reading, the two concepts you ask us to compare are quite different beasts and would require an apples and oranges-like comparison. This makes many of your questions somewhat moot — ideally (a
How different are restricted cubic splines and penalized splines? From my reading, the two concepts you ask us to compare are quite different beasts and would require an apples and oranges-like comparison. This makes many of your questions somewhat moot — ideally (assuming one can write a wiggliness penalty down for the RCS basis in the required form) you'd use a penalised restricted cubic regression spline model. Restricted Cubic Splines A restricted cubic spline (or a natural spline) is a spline basis built from piecewise cubic polynomial functions that join smoothly at some pre-specified locations, or knots. What distinguishes a restricted cubic spline from a cubic spline is that additional constraints are imposed on the restricted version such that the spline is linear before the first knot and after the last knot. This is done to improve performance of the spline in the tails of $X$. Model selection with an RCS typically involves choosing the number of knots and their location, with the former governing how wiggly or complex the resulting spline is. Unless some further steps are in place to regularize the estimated coefficients when model fitting, then the number of knots directly controls spline complexity. This means that the user has some problems to overcome when estimating a model containing one or more RCS terms: How many knots to use?, Where to place those knots in the span of $X$?, How to compare models with different numbers of knots? On their own, RCS terms require user intervention to solve these problems. Penalized splines Penalized regression splines (sensu Hodges) on their own tackle issue 3. only, but they allow for issue 1. to be circumvented. The idea here is that as well as the basis expansion of $X$, and for now let's just assume this is a cubic spline basis, you also create a wiggliness penalty matrix. 
Wiggliness is measured using some derivative of the estimated spline, with the typical derivative used being the second derivative, and the penalty itself represents the squared second derivative integrated over the range of $X$. This penalty can be written in quadratic form as $$\boldsymbol{\beta}^{\mathsf{T}} \mathbf{S} \boldsymbol{\beta}$$ where $\mathbf{S}$ is a penalty matrix and $\boldsymbol{\beta}$ are the model coefficients. Then coefficient values are found to maximise the penalised log-likelihood $\mathcal{L}_p$ ceriterion $$\mathcal{L}_p = \mathcal{L} - \lambda \boldsymbol{\beta}^{\mathsf{T}} \mathbf{S} \boldsymbol{\beta}$$ where $\mathcal{L}$ is the log-likelihood of the model and $\lambda$ is the smoothness parameter, which controls how strongly to penalize the wiggliness of the spline. As the penalised log-likelihood can be evaluated in terms of the model coefficients, fitting this model effectively becomes a problem in finding an optimal value for $\lambda$ whilst updating the coefficients during the search for that optimal $\lambda$. $\lambda$ can be chosen using cross-validation, generalised cross-validation(GCV), or marginal likelihood or restricted marginal likelihood criteria. The latter two effectively recast the spline model as a mixed effects model (the perfectly smooth parts of the basis become fixed effects and the wiggly parts of the basis are random effects, and the smoothness parameter is inversely related to the variance term for the random effects), which is what Hodges is considering in his book. Why does this solve the problem of how many knots to use? Well, it only kind of does that. This solves the problem of not requiring a knot at every unique data point (a smoothing spline), but you still need to choose how many knots or basis functions to use. 
However, because the penalty shrinks the coefficients you can get away with choosing as large a basis dimension as you think is needed to contain either the true function or a close approximation to it, and then you let the penalty control how wiggly the estimated spline ultimately is, with the extra potential wiggliness available in the basis being removed or controlled by the penalty. Comparison Penalized (regression) splines and RCS are quite different concepts. There is nothing stopping you creating a RCS basis and an associated penalty in quadratic form and then estimating the spline coefficients using the ideas from the penalized regression spline model. RCS is just one kind of basis you can use to create a spline basis, and penalized regression splines are one way to estimate a model containing one or more splines with associated wiggliness penalties. Can we avoid issues 1., 2., and 3.? Yes, to some extent, with a thin plate spline (TPS) basis. A TPS basis has as many basis functions as unique data values in $X$. What Wood (2003) showed was that you can create a Thin Plate Regression Spline (TPRS) basis uses an eigendecomposition of the the TPS basis functions, and retaining only the first $k$ largest say. You still have to specify $k$, the number of basis functions you want to use, but the choice is generally based on how wiggly you expect the fitted function to be and how much computational hit you are willing to take. There is no need to specify the knot locations either, and the penalty shrinks the coefficients so one avoids the model selection problem as you only have one penalized model not many unpenalized ones with differing numbers of knots. P-splines Just to make things more complicated, there is a type of spline basis known as a P-spline (Eilers & Marx, 1996)), where the $P$ often gets interpreted as "penalized". P-splines are a B-spline basis with a difference penalty applied directly to the model coefficients. 
In typical use the P-spline penalty penalizes the squared differences between adjacent model coefficients, which in turn penalises wiggliness. P-splines are very easy to set-up and result in a sparse penalty matrix which makes them very amenable to estimation of spline terms in MCMC based Bayesian models (Wood, 2017). References Eilers, P. H. C., and B. D. Marx. 1996. Flexible Smoothing with -splines and Penalties. Stat. Sci. Wood, S. N. 2003. Thin plate regression splines. J. R. Stat. Soc. Series B Stat. Methodol. 65: 95–114. doi:10.1111/1467-9868.00374 Wood, S. N. 2017. Generalized Additive Models: An Introduction with R, Second Edition, CRC Press.
19,655
Why are The Simpsons (TV series) so apparently successful in "predicting" the future? [closed]
Quick thoughts: Let's pretend—instead of scenario-building or making jokes—that each of their plot lines is an actual prediction. They make a lot of predictions, so their Type I error rate is very high, but their Type II error rate is very low. If every creative choice is making a prediction AND they have been around for decades, then their show is similar to a medical test that almost always says you have whatever disease you are testing for: You will almost never miss a positive case, but you will be telling a lot of people that they have a disease which they do not have. People probably only consider the subset of scenarios ("predictions") that are feasible. If the Simpsons were visited by aliens, nobody would consider this a prediction—because we know the odds of it happening are very low. So the universe of predictions we are considering is highly correlated with the prior probability that they will come true—this is stacking the deck in favor of the Simpsons. The writers of the Simpsons are smart people who also live in the same society in which they are making their "predictions." They are trying to be funny, so what they do, in a Bayesian sense, is construct funny situations that are not assuredly going to happen (these are boring predictions) and are not never going to happen (these are absurd predictions). So again we see the prior probability of these things happening stacking the deck toward the Simpsons being correct: If they write about things with a solid-ish probability of happening (like Canada legalizing weed), then we shouldn't be too surprised when their predictions come true. There is no time limit on their predictions. This gives us an unlimited number of "trials" (let's say the unit of analysis is days or elections or news cycles or celebrity careers, etc.), and all we ever need to do is hit truth once and the Simpsons are "correct."
When you consider all of these together, we can see how the Simpsons can "predict the future": they make a ton of predictions over an unlimited number of trials where you only have to hit once to be "correct," people define "success" by ignoring Type I errors and by shaping the universe of possible predictions to include only things with some probability of occurring, and the creators themselves generally make predictions about things that have some prior probability of occurring.
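The "unlimited trials" point can be made concrete with a toy calculation (the numbers here are entirely made up; this is just the complement rule, not an estimate of the show's actual hit rate):

```python
# If each of n independent "predictions" comes true with small probability p,
# the chance of at least one hit is 1 - (1 - p)^n, which approaches 1 as n grows.
p = 0.001   # hypothetical per-prediction probability (made up for illustration)
for n in (100, 1_000, 10_000):
    print(n, round(1 - (1 - p) ** n, 3))
```

Even with a tiny per-prediction probability, the chance of at least one "correct prediction" becomes near-certain once enough prediction-trials accumulate.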
19,656
Why are The Simpsons (TV series) so apparently successful in "predicting" the future? [closed]
Perhaps the most remarkable "prediction" ... is Trump as US president (made in 2000!). That is indeed a pretty impressive prediction, but it is less outlandish than you may realize. In regard to this prediction, it is worth bearing in mind that even by 2000, Trump had established himself as a popular business figure in the USA, and had made well-known forays into entertainment and politics, including an aborted presidential run. During the 1980s-90s Trump made regular forays into political issues and published various newspaper advertisements setting out his views on foreign policy and crime control. In 1999 he sought candidacy as the Presidential nominee for the Reform Party of the USA, but he withdrew from his attempted candidacy in February 2000, citing problems with the party. He indicated that he might run for President in a future election. Trump was one of the most admired business figures of the 1980s-90s. He had made a successful career in New York real estate and also commonly featured in popular culture (e.g., in a bit part as himself in the 1992 movie Home Alone 2). He was regularly interviewed in the media in relation to political and social issues in New York. In September 1987 he published a major advertisement in multiple newspapers advertising his foreign policy views (see e.g., this NYT article). His spokesman told the media, "There is absolutely no plan to run for mayor, governor or United States senator. He will not comment about the Presidency." In 1989, during a period of high crime in New York, he published another advertisement calling for reinstatement of the death penalty and increases in police. In a 1989 Gallup poll he was listed as the tenth most admired man in America. Aside from The Simpsons there have been various other items of popular entertainment that featured Trump in his aspirations for the presidency at around the same time.
The video clip for the 1999 Rage Against the Machine song "Sleep Now in the Fire" shows the band holding a concert on Wall Street, with a mixture of head-bangers and bankers, and they show one of the bankers holding up a sign promoting Trump's 2000 presidential run (at 1:04 min). The episode of The Simpsons that you are referring to would probably have been written during the run-up to the 2000 presidential election, and so the writers would have been aware that Trump was a candidate for a minor party. It is likely that the episode was making fun of the fact that he was (at that stage) an outside candidate for a minor party, who had some populist support, but was unlikely to win. Trump's aborted candidacy for President in 2000 foreshadowed his later runs, and even when he withdrew from that attempt, he indicated that he might run again. In January 2000, prior to withdrawing from his presidential run, he released his political book, "The America We Deserve". I suspect that this episode of The Simpsons was just poking a bit of fun at a possible future with a populist candidate who was running for a small party. However, even back then, it was not a stretch to imagine that Trump would run again for the presidency. It is certainly an impressive prediction, but less so if you know the history of Trump in politics.
19,657
How to calculate optimal zero padding for convolutional neural networks?
The possible values for the padding size, $P$, depend on the input size $W$ (following the notation of the blog), the filter size $F$, and the stride $S$. We assume width and height are the same. What you need to ensure is that the output size, $(W-F+2P)/S+1$, is an integer. When $S = 1$ you get your first equation, $P = (F-1)/2$, as a necessary condition. But, in general, you need to consider all three parameters, namely $W$, $F$ and $S$, in order to determine valid values of $P$.
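A small helper (my own sketch, using the answer's notation, not code from the original post) makes the integrality check explicit:

```python
# Check that a given padding P yields an integer output size (W - F + 2P)/S + 1.
def output_size(W, F, S, P):
    if (W - F + 2 * P) % S != 0:
        raise ValueError("invalid (W, F, S, P): output size is not an integer")
    return (W - F + 2 * P) // S + 1

print(output_size(W=7, F=3, S=1, P=1))   # 7: 'same' convolution when S = 1
print(output_size(W=7, F=3, S=2, P=1))   # 4: (7 - 3 + 2)/2 + 1
```

Trying, say, `output_size(W=7, F=3, S=2, P=2)` raises the error, showing that not every $P$ is valid once $S > 1$.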
19,658
How to calculate optimal zero padding for convolutional neural networks?
The general formula for the required padding P to achieve SAME padding is as follows: P = ((S-1)*W - S + F)/2, with F = filter size, S = stride, W = input size. Of course, the padding P cannot be a fraction, hence you should round it up to the next higher integer value.
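As a quick numeric sanity check (my own code, not part of the original answer): with the round-up, this formula keeps the output size equal to the input size, even when the stride is greater than 1.

```python
import math

# Verify P = ceil(((S-1)*W - S + F) / 2) gives output size == input size W,
# using the usual floor-division output-size formula (W - F + 2P)//S + 1.
def same_padding(W, F, S):
    return math.ceil(((S - 1) * W - S + F) / 2)

for W, F, S in [(32, 3, 1), (32, 5, 1), (32, 3, 2)]:
    P = same_padding(W, F, S)
    out = (W - F + 2 * P) // S + 1
    print(W, F, S, P, out)   # out == 32 in every case
```

Note that for S = 1 this reduces to P = (F - 1)/2, matching the other answers.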
19,659
How to calculate optimal zero padding for convolutional neural networks?
There are situations where (input_dim + 2*padding_side - filter) % stride == 0 has no solutions for padding_side. The formula (filter - 1) // 2 is good enough for the formula where the output shape is (input_dim + 2*padding_side - filter) // stride + 1. The output image will not retain all the information from the padded image but it's ok since we truncate only from the padding.
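A concrete instance of the no-solution case (my own example, not from the original answer): with filter 3 and stride 2, the numerator `input_dim + 2*p - 3` is odd for every even `input_dim`, so the remainder is never zero.

```python
# With input_dim even, filter=3, stride=2: input_dim + 2p - 3 is always odd,
# so (input_dim + 2*p - filter) % stride == 0 has no solution for p.
input_dim, kernel, stride = 10, 3, 2
solutions = [p for p in range(100)
             if (input_dim + 2 * p - kernel) % stride == 0]
print(solutions)   # []
```

In these cases the floor-division output formula quietly drops the leftover columns, which is harmless when (as the answer says) only padding is truncated.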
19,660
How to calculate optimal zero padding for convolutional neural networks?
Given an expected output dimension, return the padding. The formula for the output dimension at layer $l$ is given below, and from there we can extract a general formula for the padding. \begin{equation} n_{H}^{[l]} =\lfloor\frac{n_{H}^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}}\rfloor + 1 \end{equation} Solving for $p$ (ignoring the floor): \begin{equation} p = \frac{s^{[l]}(n_{H}^{[l]} - 1) - n_{H}^{[l-1]} + f^{[l]}}{2} \end{equation} For $s^{[l]} = 1$ this reduces to $p = (n_{H}^{[l]} - n_{H}^{[l-1]} - 1 + f^{[l]})/2$. Validation of the formula above If $p = 0, s = 1, f = 7$ and $n_{H}^{[l-1]} = 63$, the output dimension is $n_{H}^{[l]} = 57$: \begin{equation} p = \frac{57 - 63 - 1 + 7}{2} = 0 \end{equation} Expecting a same convolution For this case the formula is simpler, $p = (f^{[l]} - 1)/2$, because the input and output dimensions are the same: \begin{equation} p = \frac{63 - 63 - 1 + 7}{2} = 3 \end{equation} When we expect any dimension Given $n_{H}^{[l]} \geq n_{H}^{[l-1]}$: \begin{equation} p = \frac{73 - 63 - 1 + 7}{2} = 8 \end{equation}
19,661
How to calculate optimal zero padding for convolutional neural networks?
To find the padding size for any kernel size: Horizontal padding horizontal_total_padding = #rows_in_image * (horizontal_stride - 1) - horizontal_stride + horizontal_dilation * (rows_in_kernel - 1) + 1 left_pad = horizontal_total_padding // 2 right_pad = horizontal_total_padding - left_pad Similarly, you can find the vertical padding.
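Written out as a small function (my own sketch of the recipe above; variable names are mine, and the effective dilated kernel size is dilation*(kernel-1)+1):

```python
# One-axis version of the recipe above: total padding
#   size*(stride-1) - stride + dilation*(kernel-1) + 1,
# clipped at zero, then split with the extra unit going to the right side.
def axis_padding(size, kernel, stride=1, dilation=1):
    total = max(size * (stride - 1) - stride + dilation * (kernel - 1) + 1, 0)
    left = total // 2
    right = total - left
    return left, right

print(axis_padding(7, 3))                # (1, 1)
print(axis_padding(7, 4))                # (1, 2): odd totals pad asymmetrically
print(axis_padding(7, 3, dilation=2))    # (2, 2): dilation widens the kernel
```

Apply the same function to the other axis for vertical padding.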
19,662
How to calculate optimal zero padding for convolutional neural networks?
The formula given for calculating the output size (one dimension) of a convolution is $(W - F + 2P) / S + 1$. You can reason it in this way: when you add padding to the input and subtract the filter size, you get the number of neurons before the last location where the filter is applied. If you divide this by the stride, you get the number of times the filter is applied before the last location. For example, with input size 7, filter size 3, and stride 1, you can apply the filter four times before the last location: [x][x][x][x][ ][ ][ ] With stride 2, you can apply the filter two times before the last location where the filter fits. A couple of things to note about this formula: $P$ is the amount of zeros added on each side of the input. That's why there's $2P$ in the formula. The formula is valid when $F \geq S$. For example, with input size $W = 6$, filter size $F = 1$, and stride $S = 2$, you can obviously apply the filter three times, but the formula gives $5 / 2 + 1$. I think generally $(W - \max(F, S) + 2P)/S + 1$ is correct. If you want to keep the output size the same as the input size, you can equate $(W - \max(F, S) + 2P) / S + 1 = W$. Solving this when $S = 1$ gives: $W - F + 2P + 1 = W$ $2P = F - 1$ In total, $F - 1$ neurons need to be added to the input. If $F - 1$ is an odd number, then you would have to add more padding on one side than the other. Typically the convolution operation supports only adding an equal amount of padding on all sides, so if $(F - 1) / 2$ is not a whole number, you need to add the padding with a separate "pad" operation. When $S > 1$, you could solve for the amount of padding needed to keep the output size the same as the input size: $(W - \max(F, S) + 2P) / S + 1 = W$ $W - \max(F, S) + 2P = (W - 1)S$ $2P = (W - 1)S - W + \max(F, S)$ However, it would be strange to use stride > 1 and add so much padding that the output size isn't reduced. 
A more likely scenario is that you want to exactly halve the input size when using stride 2, and so on. In this case you get: $(W - \max(F, S) + 2P) / S + 1 = W / S$ $W - \max(F, S) + 2P + S = W$ $2P = \max(F, S) - S$
19,663
How to calculate optimal zero padding for convolutional neural networks?
If the height and width of the image are different, then you have to calculate the output size for each dimension separately. The formula remains the same, i.e., ((W - F + 2P)/S) + 1 and ((H - F + 2P)/S) + 1. The dimension difference does not affect the expected output.
19,664
Using iloc to set values [closed]
If you reverse the selectors, and select by column first, it will work fine: Code: df.feature_a.iloc[[1, 3, 15]] = 88 Why? When you did it the first (non-working) way, you were selecting a non-contiguous section of the data frame. You should have received the warning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is because there are two independent operations taking place. combined.iloc[[1,3,15]] creates a new dataframe of only three rows, and the frame is necessarily copied. Then... selecting one column via ["feature_a"] selects against that copy, so the assignment goes to the copy. There are various ways to fix this, but in this case it is easier (and cheaper) to select the column first, then select parts of the column for assignment. Test Code: df = pd.DataFrame(np.zeros((20, 3)), columns=['feature_a', 'b', 'c']) df.feature_a.iloc[[1, 3, 15]] = 88 print(df) Results: feature_a b c 0 0.0 0.0 0.0 1 88.0 0.0 0.0 2 0.0 0.0 0.0 3 88.0 0.0 0.0 4 0.0 0.0 0.0 5 0.0 0.0 0.0 6 0.0 0.0 0.0 7 0.0 0.0 0.0 8 0.0 0.0 0.0 9 0.0 0.0 0.0 10 0.0 0.0 0.0 11 0.0 0.0 0.0 12 0.0 0.0 0.0 13 0.0 0.0 0.0 14 0.0 0.0 0.0 15 88.0 0.0 0.0 16 0.0 0.0 0.0 17 0.0 0.0 0.0 18 0.0 0.0 0.0 19 0.0 0.0 0.0
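Among the "various ways to fix this," another common idiom (my addition, not from the original answer) is to do the row and column selection in a single indexing call, so the assignment is applied to the original frame rather than an intermediate copy:

```python
import numpy as np
import pandas as pd

# Single-step alternative: one indexing operation, no intermediate copy.
# columns.get_loc converts the column label to a position for use with iloc.
df = pd.DataFrame(np.zeros((20, 3)), columns=['feature_a', 'b', 'c'])
df.iloc[[1, 3, 15], df.columns.get_loc('feature_a')] = 88
print(df.loc[[1, 3, 15], 'feature_a'].tolist())   # [88.0, 88.0, 88.0]
```

This is also the pattern the warning message itself suggests (a single `.loc[row_indexer, col_indexer]`-style assignment), expressed positionally.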
19,665
Palantir's Asian discrimination case: how were the probabilities computed?
I am going to reverse-engineer this from experience with discrimination cases. I can definitely establish where the values of "one in 741," etc., came from. However, so much information was lost in translation that the rest of my reconstruction relies on having seen how people do statistics in courtroom settings. I can only guess at some of the details. Since the time anti-discrimination laws were passed in the 1960s (Title VII of the Civil Rights Act), the courts in the United States have learned to look at p-values and compare them to thresholds of $0.05$ and $0.01$. They have also learned to look at standardized effects, typically referred to as "standard deviations," and compare them to a threshold of "two to three standard deviations." In order to establish a prima facie case for a discrimination suit, plaintiffs typically attempt a statistical calculation showing a "disparate impact" that exceeds these thresholds. If such a calculation cannot be supported, the case usually cannot advance. Statistical experts for plaintiffs often attempt to phrase their results in these familiar terms. Some of the experts conduct a statistical test in which the null hypothesis expresses "no adverse impact," assuming employment decisions were purely random and ungoverned by any other characteristics of the employees. (Whether it is a one-tailed or two-tailed alternative may depend on the expert and the circumstances.) They then convert the p-value of this test into a number of "standard deviations" by referring it to the standard Normal distribution--even when the standard Normal is irrelevant to the original test. In this roundabout way they hope to communicate their conclusions clearly to the judge. The favored test for data that can be summarized in contingency tables is Fisher's Exact Test. The occurrence of "Exact" in its name is particularly pleasing to the plaintiffs, because it connotes a statistical determination that has been made without error (whatever that might be!).
Here, then, is my (speculative) reconstruction of the Department of Labor's calculations.

1. They ran Fisher's Exact Test, or something like it (such as a $\chi^2$ test with a p-value determined via randomization). This test assumes a hypergeometric distribution as described in Matthew Gunn's answer. (For the small numbers of people involved in this complaint, the hypergeometric distribution is not well approximated by a Normal distribution.)

2. They converted its p-value to a normal Z score ("number of standard deviations").

3. They rounded the Z score to the nearest integer: "exceeds three standard deviations," "exceeds five standard deviations," and "exceeds six standard deviations." (Because some of these Z scores rounded up to more standard deviations, I cannot justify the "exceeds"; all I can do is quote it.)

4. In the complaint these integral Z scores were converted back to p-values! Again the standard Normal distribution was used. These p-values are described (arguably in a misleading way) as "the likelihood that this result occurred according to chance."

To support this speculation, note that the p-values for Fisher's Exact Test in the three instances are approximately $1/1280$, $1/565000$, and $1/58000000$. These are based on assuming pools of $730$, $1160$, and $130$, corresponding to "more than" $730$, $1160$, and $130$, respectively. These numbers have normal Z scores of $-3.16$, $-4.64$, and $-5.52$, respectively, which when rounded are three, five, and six standard deviations, exactly the numbers appearing in the complaint. They correspond to (one-tailed) normal p-values of $1/741$, $1/3500000$, and $1/1000000000$: precisely the values cited in the complaint.

Here is some R code used to perform these calculations.
f <- function(total, percent.asian, hired.asian, hired.non.asian) {
  asian <- round(percent.asian/100 * total)
  non.asian <- total - asian
  x <- matrix(c(asian - hired.asian, non.asian - hired.non.asian,
                hired.asian, hired.non.asian),
              nrow = 2,
              dimnames = list(Race = c("Asian", "non-Asian"),
                              Status = c("Not hired", "Hired")))
  s <- fisher.test(x)
  s$p.value
}
1/pnorm(round(qnorm(f(730, 77, 1, 6))))
1/pnorm(round(qnorm(f(1160, 85, 11, 14))))
1/pnorm(round(qnorm(f(130, 73, 4, 17))))
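The same pipeline can be sketched in Python with SciPy. This is a speculative re-implementation mirroring the R function above; the helper name complaint_number is mine, and exact two-sided Fisher p-values can differ slightly between implementations.

```python
from scipy.stats import fisher_exact, norm

def complaint_number(total, percent_asian, hired_asian, hired_non_asian):
    """Replicate the speculated pipeline: Fisher p-value -> rounded Z -> '1 in N'."""
    asian = round(percent_asian / 100 * total)
    non_asian = total - asian
    table = [[asian - hired_asian, non_asian - hired_non_asian],
             [hired_asian, hired_non_asian]]
    _, p = fisher_exact(table)   # two-sided, like R's fisher.test default
    z = round(norm.ppf(p))       # p-value -> integer "standard deviations"
    return 1 / norm.cdf(z)       # back to a "1 in N" figure

print(round(complaint_number(730, 77, 1, 6)))   # the "1 in 741" figure
```

Note that the rounding step is what turns a p-value near 1/1280 into the reported "1 in 741": the Z score of $-3.16$ rounds to $-3$, whose normal tail probability is about $1/741$.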
19,666
Palantir's Asian discrimination case: how were the probabilities computed?
How to calculate p-values properly using the hypergeometric distribution: drawing $k$ successes in $n$ trials without replacement from a set with $K$ successes amid $N$ total items will follow the hypergeometric distribution. For a one-sided test, in MATLAB, you can call

pval = hygecdf(k, N, K, n);

or in this case

pval = hygecdf(1, 730, 562, 7)

which is about .0007839. Mean and standard deviation are given by: $$ \mu = n \frac{K}{N} \quad \quad \quad s = \sqrt{n \frac{K}{N} \frac{N - K}{N} \frac{N - n}{N-1}}$$ Thus we're $-3.957$ standard deviations outside the mean.

I've tried various things to replicate the p-values (e.g. hypergeometric cdf, $\chi^2$ test, z-test), but I can't get an exact match. (Update: WHuber's answer has an algorithm that produces an exact match... it's scary stuff!) Looking for formulas the OFCCP might use, this site may perhaps be helpful: http://www.hr-software.net/EmploymentStatistics/DisparateImpact.htm

Summary of some calculations: $$ \begin{array}{rrrr} \text{Number and method} & \text{Part A} & \text{Part B} & \text{Part C} \\ \text{PVal from hypergeometric CDF} & \text{7.839e-04} & \text{1.77e-06} & \text{1.72e-08}\\ \chi^2 \text{ stat} & 15.68 & 33.68 & 37.16\\ \chi^2 \text{ pval} & \text{7.49e-05} & \text{6.47e-09} & \text{1.09e-09} \\ \text{Pval from above document} & .00135 & \text{2.94e-07} & \text{1.00e-09} \end{array} $$ For the $\chi^2$ stat I used the standard $\sum \frac{(\text{expected} - \text{actual})^2}{\text{expected}}$ over the four cells.
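For readers without MATLAB, the same one-sided hypergeometric calculation can be reproduced with SciPy. Note that scipy.stats.hypergeom orders its parameters as population size M, number of successes in the population n, and number of draws N.

```python
from math import sqrt
from scipy.stats import hypergeom

# Part A: 7 hires from a pool of 730, of whom 562 (77%) are Asian; 1 Asian hired.
M, n_success, N_draws, k = 730, 562, 7, 1

pval = hypergeom.cdf(k, M, n_success, N_draws)   # P(X <= 1), one-sided

# Mean and standard deviation of the hypergeometric, as in the formulas above.
mu = N_draws * n_success / M
s = sqrt(N_draws * (n_success / M) * ((M - n_success) / M)
         * ((M - N_draws) / (M - 1)))
z = (k - mu) / s

print(pval, z)   # roughly 0.0007839 and -3.957
```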
19,667
How to interpret Arima(0,0,0)
An ARIMA(0,0,0) model with zero mean is white noise, so it means that the errors are uncorrelated across time. This doesn't imply anything about the size of the errors, so, no, in general it is not an indication of good or bad fit. In your case, you'll note that your $\sigma^2$ is 0.007612 and that ME is -6.321953e-17. These are very, very small numbers, so yes, the model "fits" well. However, the reason why they are very small is that you are fitting 15 parameters (14 coefficients + 1 error variance) to only 18 points. You are likely overfitting the data to an extreme degree, and you will likely not be able to forecast out of sample very well.
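As a concrete illustration (a numpy-only sketch with simulated data, not the output from the question): an ARIMA(0,0,0) model with a non-zero mean just estimates a constant, so its residuals are deviations from the sample mean, and the mean error is zero by construction regardless of fit quality.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(loc=5.0, scale=2.0, size=18)   # 18 points, as in the question

# ARIMA(0,0,0) with non-zero mean is y_t = c + e_t:
c_hat = y.mean()        # the fitted constant is the sample mean
resid = y - c_hat       # "errors" are just deviations from that mean
sigma2 = resid.var()    # estimated error variance

# The mean error (ME) is zero up to floating-point noise, no matter the data:
print(resid.mean())
```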
19,668
How to interpret Arima(0,0,0)
Your model fit well. ARIMA(0,0,0) can often appear in time series. Let us have a look at how an ARMA(p,q) (autoregressive moving-average) model is structured:

$x_t = c + \epsilon_t + \sum\limits_{i=1}^p \phi_i x_{t-i} + \sum\limits_{i=1}^q \theta_i \epsilon_{t-i}$

An ARMA(p,0) model is the same as an AR(p) model (autoregressive model of order p). It can be represented as follows:

$x_t = c + \sum\limits_{i=1}^p \phi_i x_{t-i} + \epsilon_t$

An ARMA(0,q) model is the same as an MA(q) model (moving-average model of order q). It can be represented as follows:

$x_t = \mu + \epsilon_t + \sum\limits_{i=1}^q \theta_i \epsilon_{t-i}$

Hence an ARMA(0,0) model is the same as an AR(0) model (autoregressive model of order 0) or an MA(0) model (moving-average model of order 0). An ARMA(0,0) model is shown in the next equation:

$x_t = c + \epsilon_t$

So the ARMA(0,0) model is made up of two parts: a constant and an error term.

That covers ARMA(0,0); now have a closer look at what ARIMA(0,0,0) means. The I in ARIMA stands for integration: you difference the time series d times before applying the ARMA model. So in our case you difference it 0 times. An example of an ARIMA(0,0,0) model is a time series containing only a constant and white noise; for example, a time series in which all values are the same is ARIMA(0,0,0).

Here is some explanatory code in R. Generate two processes: FirstARIMA is a time series which consists only of a constant; SecondARIMA consists of a constant and a normally distributed error term (Gaussian noise).

library(forecast)
ARIMA000 <- rep(10, 10)
FirstARIMA <- ts(ARIMA000)
noise <- rnorm(10, mean = 0, sd = 1)
SecondARIMA <- ts(ARIMA000 + noise)
auto.arima(FirstARIMA)

Shows you that the first process is an ARIMA(0,0,0) process.
Series: FirstARIMA
ARIMA(0,0,0) with non-zero mean

Coefficients:
      intercept
             10

sigma^2 estimated as 0:  log likelihood=Inf
AIC=-Inf   AICc=-Inf   BIC=-Inf

auto.arima(SecondARIMA)

Shows you that the second process is also an ARIMA(0,0,0) process.

Series: SecondARIMA
ARIMA(0,0,0) with non-zero mean

Coefficients:
      intercept
        10.1683
s.e.     0.2434

sigma^2 estimated as 0.6581:  log likelihood=-11.57
AIC=27.14   AICc=28.86   BIC=27.75

I am plotting the two time series.

plot.ts(FirstARIMA)
plot.ts(SecondARIMA)
19,669
Identical coefficients estimated in Poisson vs Quasi-Poisson model
This is almost a duplicate; the linked question explains that you shouldn't expect the coefficient estimates, residual deviance, nor degrees of freedom to change. The only thing that changes when moving from Poisson to quasi-Poisson is that a scale parameter that was previously fixed to 1 is computed from some estimate of residual variability/badness-of-fit (usually estimated via the sum of squares of the Pearson residuals ($\chi^2$) divided by the residual df, although asymptotically using the residual deviance gives the same result). The result is that the standard errors are scaled by the square root of this scale parameter, with concomitant changes in the confidence intervals and $p$-values. The benefit of quasi-likelihood is that it fixes the basic fallacy of assuming that the data are Poisson (= homogeneous, independent counts); however, fixing the problem in this way potentially masks other issues with the data. (See below.) Quasi-likelihood is one way of handling overdispersion; if you don't address overdispersion in some way, your coefficients will be reasonable but your inference (CIs, $p$-values, etc.) will be garbage. As you comment above, there are lots of different approaches to overdispersion (Tweedie, different negative binomial parameterizations, quasi-likelihood, zero-inflation/alteration). With an overdispersion factor of >5 (8.4), I would worry a bit about whether it is being driven by some kind of model mis-fit (outliers, zero-inflation [which I see you've already tried], nonlinearity) rather than representing across-the-board heterogeneity. My general approach to this is graphical exploration of the raw data and regression diagnostics ...
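The scale-parameter mechanics described above can be sketched numerically. This is a numpy-only illustration for an intercept-only Poisson model with simulated overdispersed counts (not a full GLM fit): the Pearson-based dispersion estimate is the sum of squared Pearson residuals divided by the residual degrees of freedom, and the quasi-Poisson standard error is the Poisson one inflated by its square root.

```python
import numpy as np

rng = np.random.default_rng(1)
# Overdispersed counts: negative binomial with mean 8 and variance 40.
y = rng.negative_binomial(n=2, p=0.2, size=500)

# Intercept-only Poisson GLM: the MLE of the rate is the sample mean.
mu = y.mean()
n = y.size

# Pearson-based dispersion: sum of squared Pearson residuals / residual df.
pearson = (y - mu) / np.sqrt(mu)
phi = (pearson**2).sum() / (n - 1)

# Poisson standard error of the log-rate, and its quasi-Poisson inflation:
se_poisson = 1 / np.sqrt(n * mu)
se_quasi = np.sqrt(phi) * se_poisson

print(phi, se_quasi / se_poisson)   # dispersion well above 1; SEs scaled by sqrt(phi)
```

The point estimate (the sample mean) is untouched; only phi and hence the standard errors change, which is exactly why the coefficients in the question were identical.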
19,670
Is PCA still done via the eigendecomposition of the covariance matrix when dimensionality is larger than the number of observations?
The covariance matrix is of $D\times D$ size and is given by $$\mathbf C = \frac{1}{N-1}\mathbf X_0^\top \mathbf X^\phantom\top_0.$$ The matrix you are talking about is of course not a covariance matrix; it is called Gram matrix and is of $N\times N$ size: $$\mathbf G = \frac{1}{N-1}\mathbf X^\phantom\top_0 \mathbf X_0^\top.$$ Principal component analysis (PCA) can be implemented via eigendecomposition of either of these matrices. These are just two different ways to compute the same thing. The easiest and the most useful way to see this is to use the singular value decomposition of the data matrix $\mathbf X = \mathbf {USV}^\top$. Plugging this into the expressions for $\mathbf C$ and $\mathbf G$, we get: \begin{align}\mathbf C&=\mathbf V\frac{\mathbf S^2}{N-1}\mathbf V^\top\\\mathbf G&=\mathbf U\frac{\mathbf S^2}{N-1}\mathbf U^\top.\end{align} Eigenvectors $\mathbf V$ of the covariance matrix are principal directions. Projections of the data on these eigenvectors are principal components; these projections are given by $\mathbf {US}$. Principal components scaled to unit length are given by $\mathbf U$. As you see, eigenvectors of the Gram matrix are exactly these scaled principal components. And the eigenvalues of $\mathbf C$ and $\mathbf G$ coincide. The reason why you might see it recommended to use Gram matrix if $N<D$ is because it will be of smaller size, as compared to the covariance matrix, and hence be faster to compute and faster to eigendecompose. In fact, if your dimensionality $D$ is too high, there is no way you can even store the covariance matrix in memory, so operating on a Gram matrix is the only way to do PCA. But for manageable $D$ you can still use eigendecomposition of the covariance matrix if you prefer even if $N<D$. See also: Relationship between eigenvectors of $\frac{1}{N}XX^\top$ and $\frac{1}{N}X^\top X$ in the context of PCA
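The equivalence of the two eigendecompositions can be checked numerically. This is a small numpy sketch with random data (names like X0, C, G follow the notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 20                      # fewer observations than dimensions
X = rng.normal(size=(N, D))
X0 = X - X.mean(axis=0)           # center the data

C = X0.T @ X0 / (N - 1)           # D x D covariance matrix
G = X0 @ X0.T / (N - 1)           # N x N Gram matrix

evC = np.sort(np.linalg.eigvalsh(C))[::-1]
evG = np.sort(np.linalg.eigvalsh(G))[::-1]

# The nonzero eigenvalues coincide; centering leaves at most N-1 of them.
print(np.allclose(evC[:N-1], evG[:N-1]))   # True
```

Via the SVD of X0, both sets of eigenvalues equal $S^2/(N-1)$, which is why only the smaller $N \times N$ problem ever needs to be solved when $N < D$.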
19,671
How do I test if two (non-normal) distributions differ?
There are several senses in which "it depends". (One potential concern is that it looks like the original data might perhaps be discrete; that should be clarified.)

1. Depending on sample size, the non-normality may not be as big an issue as all that for the t-test. For large samples at least there's generally good level-robustness - Type I error rates should not be too badly affected if it's not really far from normal. Power may be more of an issue with heavy tails.

2. If you're looking for any kind of differences in distribution, a two-sample goodness of fit test, such as the two-sample Kolmogorov-Smirnov test, might be suitable (though other tests might be done instead).

3. If you're looking for location-type differences in a location-family, or scale differences in a scale family, or even just a P(X>Y)>P(Y>X) type relation, a Wilcoxon-Mann-Whitney two sample test might be suitable.

4. You might consider resampling tests such as permutation or bootstrap tests, if you can find a suitable statistic for the kind(s) of differences you want to have sensitivity to.

Also, if I have 13 distributions, do I need to do 13^2 tests?

Well, no. Firstly, you don't need to test $A$ vs $B$ and $B$ vs $A$ (the second comparison is redundant). Secondly, you don't need to test $A$ vs $A$. Those two things cut the pairwise comparisons down from 169 to 78. Thirdly, it would be much more usual (but not compulsory) to test collectively for any differences, and then, perhaps, to look at pairwise differences in post-hoc pairwise tests if the first null was rejected.

For example, in place of a Wilcoxon-Mann-Whitney as in item 3. above, one might do a Kruskal-Wallis test, which is sensitive to any differences in location between groups. There are also k-sample versions of the Kolmogorov-Smirnov test, and similar tests of some of the other two-sample goodness of fit tests might exist, or be constructed. There are also k-sample versions of resampling tests, and of the t-test (i.e. ANOVA, which might be okay if the sample sizes are reasonably large).

It would be really nice to get more information about what we're dealing with and what kinds of differences you're most interested in; or failing that, to see Q-Q plots of some of the samples.
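The two-sample tests mentioned above can be run with SciPy; here is a sketch on simulated skewed data (the samples here are illustrative, not the asker's):

```python
from math import comb
import numpy as np
from scipy.stats import ks_2samp, mannwhitneyu

rng = np.random.default_rng(0)
a = rng.exponential(scale=1.0, size=200)        # skewed, non-normal sample
b = rng.exponential(scale=1.0, size=200) + 1.0  # same shape, shifted in location

print(ks_2samp(a, b).pvalue)      # any difference in distribution
print(mannwhitneyu(a, b).pvalue)  # location-type shift

print(comb(13, 2))                # 78 pairwise comparisons, not 13^2 = 169
```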
19,672
How do I test if two (non-normal) distributions differ?
Yes, I think you cannot do better than testing each distribution against the others... I think that your question is related to this one: Comparison of 2 distributions. I advise you to use a Kolmogorov-Smirnov test or a Cramér-von Mises test. They are both very classical goodness-of-fit tests. In R, the function ks.test in the stats package implements the first one. The second one can be found in packages such as cramer. To learn about these two tests:

http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
http://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93von_Mises_criterion
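For readers working in Python rather than R, the two-sample Cramér-von Mises test is also available in SciPy (as scipy.stats.cramervonmises_2samp, added in SciPy 1.7); a minimal sketch on simulated data:

```python
import numpy as np
from scipy.stats import cramervonmises_2samp

rng = np.random.default_rng(1)
x = rng.normal(size=150)
y = rng.normal(loc=0.8, size=150)   # second sample shifted by 0.8

res = cramervonmises_2samp(x, y)
print(res.statistic, res.pvalue)    # small p-value: distributions differ
```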
19,673
How do I test if two (non-normal) distributions differ?
You can try the Kruskal–Wallis one-way analysis of variance: "It is used for comparing more than two samples that are independent, or not related." Normality violations in ANOVA are discussed in Rutherford, Introducing ANOVA and ANCOVA: A GLM Approach, section 9.1.2 ("Normality violations"). The first line there is "Although most sources report ANOVA ... as being robust with respect to violations of the normality assumption..."
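As a minimal sketch of the suggested Kruskal-Wallis test (Python/scipy here; in R this would be kruskal.test), on simulated skewed placeholder groups:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Three skewed (lognormal) groups; the third has a shifted location.
g1 = rng.lognormal(0.0, 0.5, 100)
g2 = rng.lognormal(0.0, 0.5, 100)
g3 = rng.lognormal(0.4, 0.5, 100)

# Kruskal-Wallis: rank-based, no normality assumption needed.
stat, p = stats.kruskal(g1, g2, g3)
print(stat, p)
```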
19,674
Do components of PCA really represent percentage of variance? Can they sum to more than 100%?
Use summary.princomp to see the "Proportion of Variance" and "Cumulative Proportion". pca <- princomp(date.stock.matrix[,2:ncol(date.stock.matrix)]) summary(pca)
19,675
Do components of PCA really represent percentage of variance? Can they sum to more than 100%?
They should sum to $100~\%.$ The total variance of a $p$-variate random variable $X$ with covariance matrix $\Sigma$ is defined as $${\rm tr}(\Sigma)=\sigma_{11}+\sigma_{22}+\cdots+\sigma_{pp}.$$ Now, the trace of a symmetric matrix is the sum of its eigenvalues $\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_p.$ Thus the total variance is $${\rm tr}(\Sigma)=\lambda_1+\cdots+\lambda_p$$ if we use $\lambda_i$ to denote the eigenvalues of $\Sigma$. Note that $\lambda_p\geq 0$ since covariance matrices are positive semidefinite, so the total variance is non-negative. But the principal components are given by $e_i'X$, where $e_i$ is the $i$th eigenvector (standardized to have length $1$), corresponding to the eigenvalue $\lambda_i$. Its variance is $${\rm Var}(e_i'X)=e_i'\Sigma e_i=\lambda_ie_i'e_i=\lambda_i$$ and therefore the first $k$ principal components make up $$\Big(\frac{\lambda_1+\cdots+\lambda_k}{\lambda_1+\cdots+\lambda_p}\cdot 100\Big)~\%$$ of the total variance. In particular, they make up $100~\%$ of the total variance when $k=p$.
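The identity can be checked numerically; this is an illustration (Python/NumPy, simulated correlated data) of the eigenvalues summing to the trace, so that the variance fractions sum to one:

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated 4-variate data via a random mixing matrix.
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
S = np.cov(X, rowvar=False)

# Eigenvalues of the (symmetric) covariance matrix sum to its trace.
eigvals = np.linalg.eigvalsh(S)
assert np.isclose(eigvals.sum(), np.trace(S))

# Hence the per-component variance fractions sum to 1, i.e. 100%.
fractions = eigvals / eigvals.sum()
print(fractions.sum())
```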
19,676
Do components of PCA really represent percentage of variance? Can they sum to more than 100%?
Here is some R code to complement previous answers (pca[["sdev"]] is usually written pca$sdev, but it causes misformatting in the snippet below). # Generate a dummy dataset. set.seed(123) x <- matrix(rnorm(400, sd=3), ncol=4) # Note that princomp performs an unscaled PCA. pca1 <- princomp(x) # Show the fraction of variance of each PC. pca1[["sdev"]]^2 cumsum(pca1[["sdev"]]^2)/sum(pca1[["sdev"]]^2) # Perform a scaled PCA. pca2 <- princomp(x, cor=TRUE) pca2[["sdev"]]^2 cumsum(pca2[["sdev"]]^2)/sum(pca2[["sdev"]]^2) So, as @Max points out, working with the variance instead of the standard deviation and not forgetting to divide by the total variance solves the issue.
19,677
R/Stata package for zero-truncated negative binomial GEE?
For R two options spring to mind, both of which I am only vaguely familiar with at best. The first is the pscl package, which can fit zero-truncated, zero-inflated, and hurdle models in a very nice, flexible manner. The pscl package suggests the use of the sandwich package, which provides "Model-robust standard error estimators for cross-sectional, time series and longitudinal data". So you could fit your count model and then use the sandwich package to estimate an appropriate covariance matrix for the residuals, taking into account the longitudinal nature of the data. The second option might be to look at the geepack package, which looks like it can do what you want, but only for a negative binomial model with known theta, as it will fit any type of GLM that R's glm() function can (so use the family function from MASS). A third option has raised its head: gamlss and its add-on package gamlss.tr. The latter includes a function gen.trun() that can turn any of the distributions supported by gamlss() into a truncated distribution in a flexible way - you can specify a negative binomial distribution left-truncated at 0, for example. gamlss() itself includes support for random effects, which should take care of the longitudinal nature of the data. It isn't immediately clear, however, whether you have to use at least one smooth function of a covariate in the model or can just model everything as linear functions as in a GLM.
19,678
R/Stata package for zero-truncated negative binomial GEE?
Hmm, good first question! I don't know of a package that meets your precise requirements. I think Stata's xtgee is a good choice if you also specify the vce(robust) option to give Huber-White standard errors, or vce(bootstrap) if that's practical. Either of these options will ensure the standard errors are consistently estimated despite the model misspecification that you'll have by ignoring the zero truncation. That leaves the question of what effect ignoring the zero truncation will have on the point estimate(s) of interest to you. It's worth a quick search to see if there is relevant literature on this in general, i.e. not necessarily in a GEE context -- I would have thought you can pretty safely assume any such results will be relevant in the GEE case too. If you can't find anything, you could always simulate data with zero truncation and known effect estimates and assess the bias by simulation.
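The simulation idea in the last sentence can be sketched very simply; this toy Python example (Poisson rather than negative binomial, a simplifying assumption for illustration) shows the direction and size of the bias when zeros are truncated but the naive mean is used:

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 1.5
y = rng.poisson(lam, size=200_000)
y_trunc = y[y > 0]                        # zero-truncated sample

naive_mean = y_trunc.mean()               # estimate ignoring truncation
theoretical = lam / (1 - np.exp(-lam))    # E[Y | Y > 0] for Poisson(lam)
print(naive_mean, theoretical)            # both sit well above lam = 1.5
```

The same recipe extends to regression effects: simulate truncated data with known coefficients, fit the misspecified model, and compare the fitted coefficients to the truth.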
19,679
R/Stata package for zero-truncated negative binomial GEE?
I had the same issue in my dissertation. In Stata, I just built myself a custom .ado program with two calls to xtgee. For this, I found the "Modeling Health Care Costs and Counts" slides/programs by Partha Deb, Willard Manning, and Edward Norton to be useful. They don't talk about longitudinal data, but it's a useful starting point.
19,680
R/Stata package for zero-truncated negative binomial GEE?
I was looking for answers on glmmADMB interpretation and I saw your post. I know it was a long time ago but I might have the answer. Look into the package glmmADMB when using hurdle models. You have to split the analysis of your data in two: one part treats just the non-zero data. You may add mixed effects and choose the distribution. The condition is that the data have to be zero-inflated, and I do not know if this fits your requirements! Anyway, I hope you found out long ago!
19,681
Filtering a dataframe [closed]
If you want to combine several filters in the subset function use logical operators: subset(data, D1 == "E" | D2 == "E") will select those rows for which either column D1 or column D2 has value "E". Look at the help pages for the available logical operators: > ?"|" For your second question what you need is to filter the rows. This can be achieved in the following way: collist <- c("D1","D2","D3","D4") sel <- apply(data[,collist],1,function(row) "E" %in% row) data[sel,] The first argument to apply supplies the columns on which we need to filter. The second argument is 1, meaning that we are looping through the rows of the data. The third argument is an anonymous one-line function which returns TRUE if "E" is present in the row and FALSE if it is not. The result of the apply call will be a logical vector sel, whose length is the same as the number of rows in data. We then use this vector to select the necessary rows. Update: The same can be achieved with grep: sel <- apply(data[,collist],1,function(row) length(grep("E",row))>0) In R, grep with default arguments returns the indices of the elements in the supplied vector that match the pattern.
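For comparison, here are the same two filters in pandas; the column names and the "E" value mirror the R example, and the small frame below is made-up illustration data:

```python
import pandas as pd

data = pd.DataFrame({
    "D1": ["E", "A", "B", "E"],
    "D2": ["A", "E", "B", "C"],
    "D3": ["A", "B", "B", "C"],
    "D4": ["A", "B", "E", "C"],
})

# Equivalent of subset(data, D1 == "E" | D2 == "E"):
sub1 = data[(data["D1"] == "E") | (data["D2"] == "E")]

# Equivalent of the apply-over-rows filter: keep rows where any
# of the listed columns equals "E".
collist = ["D1", "D2", "D3", "D4"]
sub2 = data[data[collist].eq("E").any(axis=1)]
print(len(sub1), len(sub2))   # 3 and 4 rows respectively
```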
19,682
Follow up: In a mixed within-between ANOVA plot estimated SEs or actual SEs?
As a consequence of the inspiring answers and discussion to my question I constructed the following plots that do not rely on any model-based parameters, but present the underlying data. The reason is that whatever kind of standard error I may choose, the standard error is a model-based parameter. So, why not present the underlying data and thereby transmit more information? Furthermore, if choosing the SE from the ANOVA, two problems arise for my specific case. First (at least for me), it is somehow unclear what the SEs from the SPSS ANOVA output actually are (see also this discussion, in the comments). They are somehow related to the MSE, but how exactly I don't know. Second, they are only reasonable when the underlying assumptions are met. However, as the following plots show, the assumption of homogeneity of variance is clearly violated. The plots with boxplots: The plots with all data points: Note that the two groups are dislocated a little to the left or the right: deductive to the left, inductive to the right. The means are still plotted in black and the data or boxplots in the background in grey. The difference between the plots on the left and on the right is whether the means are dislocated the same as the points or boxplots, or are presented centrally. Sorry for the suboptimal quality of the graphs and the missing x-axis labels. The question that remains is which one of the above plots to choose now. I have to think about it and ask the other author of our paper. But right now, I prefer the "points with means dislocated". And I still would be very interested in comments. Update: After some programming I finally managed to write an R function to automatically create a plot like "points with means dislocated". Check it out (and send me comments)!
19,683
Follow up: In a mixed within-between ANOVA plot estimated SEs or actual SEs?
You will not find a single reasonable error bar for inferential purposes with this type of experimental design. This is an old problem with no clear solution. It seems impossible to have the estimated SEs you want here. There are two main kinds of error in such a design, the between and within S error. They are usually very different from one another and not comparable. There just really is no good single error bar to represent your data. One might argue that the raw SEs or SDs from the data are most important in a descriptive rather than inferential sense. They either tell about the quality of the central tendency estimate (SE) or the variability of the data (SD). However, even then it's somewhat disingenuous because the thing you're testing and measuring within S is not that raw value but rather the effect of the within S variable. Therefore, reporting variability of the raw values is either meaningless or misleading with respect to within S effects. I have typically endorsed no error bars on such graphs and adjacent effects graphs indicating the variability of the effects. One might have CIs on that graph that are perfectly reasonable. See Masson & Loftus (2003) for examples of the effects graphs. Simply eliminate their (pretty much completely useless) error bars around the mean values they show and just use the effect error bars. For your study I'd first replot the data as the 2 x 2 x 2 design it is (2-panel 2x2) and then plot immediately adjacent a graph with confidence intervals of the validity, plausibility, instruction, and interaction effects. Put SDs and SEs for the instruction groups in a table or in the text. (waiting for expected mixed effects analysis response ;) )

UPDATE: OK, after editing it's clear the only thing you want is an SE to be used to show the quality of the estimate of the value. In that case use your model values. Both values are based on a model and there is no 'true' value in your sample. Use the ones from the model you applied to your data. BUT, make sure you warn readers in the figure caption that these SEs have no inferential value whatsoever for your within S effects or interactions.

UPDATE2: Looking back at the data you did present... that looks suspiciously like percentages, which shouldn't have been analyzed with ANOVA in the first place. Whether it is or isn't, it's a variable that maxes out at 100 and has reduced variances at the extremes, so it still shouldn't be analyzed with ANOVA. I do very much like your rm.plot plots. I'd still be tempted to do separate plots of the between conditions, showing the raw data, and within conditions showing the data with between S variability removed.
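One common way to show "data with between S variability removed" is Cousineau-style normalization (subtract each subject's mean, add back the grand mean); this is an assumption on my part, not something the answer prescribes, sketched here in Python on simulated placeholder data:

```python
import numpy as np

rng = np.random.default_rng(5)
n_subj, n_cond = 20, 4
subj_offset = rng.normal(0, 5, size=(n_subj, 1))        # large between-S noise
cond_effect = np.array([0.0, 1.0, 2.0, 3.0])            # within-S effect
data = subj_offset + cond_effect + rng.normal(0, 1, size=(n_subj, n_cond))

# Remove between-subject variability: subtract each subject's mean,
# add back the grand mean (condition means are left unchanged).
normalized = data - data.mean(axis=1, keepdims=True) + data.mean()

raw_se = data.std(axis=0, ddof=1) / np.sqrt(n_subj)
within_se = normalized.std(axis=0, ddof=1) / np.sqrt(n_subj)
print(raw_se.round(2), within_se.round(2))   # within-S bars are much tighter
```

The resulting within-S error bars are descriptive of the within-subject effects; as the answer stresses, any such bars still need a caption warning about their limited inferential value.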
19,684
Follow up: In a mixed within-between ANOVA plot estimated SEs or actual SEs?
This looks like a very nice experiment, so congratulations! I agree with John Christie, it is a mixed model, but provided it can be properly specified in an ANOVA design (& is balanced) I don't see why it can't be so formulated. Two factors within and one factor between subjects, but the between-subjects factor (inductive/deductive) clearly interacts with (modifies) the within-subjects effects. I assume the plotted means are from the ANOVA model (LHS) and so the model is correctly specified. Well done - this is non-trivial! Some points: 1) The "estimated" vs "actual" "error" is a false dichotomy. Both assume an underlying model and make estimates on that basis. If the model is reasonable, I would argue it is better to use the model-based estimates (they are based on the pooling of larger samples). But as James mentions, the errors differ depending on the comparison you are making, so no simple representation is possible. 2) I would prefer to see box-plots or individual data points plotted (if there are not too many), perhaps with some sideways jitter, so points with the same value can be distinguished. http://en.wikipedia.org/wiki/Box_plot 3) If you must plot an estimate of the error of the mean, never plot SDs - they are an estimate of the standard deviation of the sample and relate to population variability, not a statistical comparison of means. It is generally preferable to plot 95% confidence intervals rather than SEs, but not in this case (see 1 and John's point). 4) The one issue with this data that concerns me is that the assumption of uniform variance is probably violated, as the "MP Valid and Plausible" data are clearly constrained by the 100% limit, especially for the deductive people. I'm tossing up in my own mind how important this issue is. Moving to a mixed-effects logit (binomial probability) is probably the ideal solution, but it's a hard ask. It might be best to let others answer.
19,685
Follow up: In a mixed within-between ANOVA plot estimated SEs or actual SEs?
Lately I've been using mixed effects analysis, and in attempting to develop an accompanying visual data analysis approach I've been using bootstrapping (see my description here), which yields confidence intervals that are not susceptible to the within-versus-between troubles of conventional CIs. Also, I would avoid mapping multiple variables to the same visual aesthetic, as you have done in the graph above; you have 3 variables (MP/AC, valid/invalid, plausible/implausible) mapped to the x-axis, which makes it rather difficult to parse the design and patterns. I would suggest instead mapping, say, MP/AC to the x-axis, valid/invalid to facet columns, and plausible/implausible to facet rows. Check out ggplot2 in R to easily achieve this, eg: library(ggplot2) ggplot( data = my_data , mapping = aes( y = mean_endorsement , x = mp_ac , linetype = deductive_inductive , shape = deductive_inductive ) )+ geom_point()+ geom_line()+ facet_grid( plausible_implausible ~ valid_invalid )
19,686
Visualization software for clustering
GGobi (http://www.ggobi.org/), along with the R package rggobi, is perfectly suited to this task. See the related presentation for examples: http://www.ggobi.org/book/2007-infovis/05-clustering.pdf
19,687
Visualization software for clustering
Exploring clustering results in high dimensions can be done in R using the packages clusterfly and gcExplorer. Look for more here.
19,688
Visualization software for clustering
(Months later,) a nice way to picture k-clusters and to see the effect of various k is to build a Minimum Spanning Tree and look at the longest edges. For example, Here there are 10 clusters, with 9 longest edges 855 899 942 954 1003 1005 1069 1134 1267. For 9 clusters, collapse the cyan 855 edge; for 8, the purple 899; and so on. The single-link k-clustering algorithm ... is precisely Kruskal's algorithm ... equivalent to finding an MST and deleting the k-1 most expensive edges. — Wayne, Greedy Algorithms. 22000 points, 242M pairwise distances, take ~ 1 gigabyte (float32): might fit. To view a high-dimensional tree or graph in 2d, see Multidimensional Scaling (also from Kruskal), and the huge literature on dimension reduction. However, in dim > 20 say, most distances will be near the median, so I believe dimension reduction cannot work there.
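The MST-cut idea above can be sketched in a few lines. This is a hypothetical pure-Python illustration (not taken from any of the packages mentioned): build the MST with Kruskal's algorithm, then delete the k-1 longest MST edges, which yields exactly the single-link k-clustering.

```python
# Single-link k-clustering via Kruskal's MST, cutting the k-1 longest edges.
def mst_clusters(points, k):
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    mst = []
    for w, i, j in edges:             # Kruskal: add cheapest non-cycle edge
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))

    # Keep only the n-k cheapest MST edges (i.e. drop the k-1 longest).
    parent = list(range(n))
    for w, i, j in sorted(mst)[:n - k]:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]   # cluster label per point

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
labels = mst_clusters(pts, 2)   # first three points form one cluster,
                                # the last two the other
```

For k=3 the next-longest surviving edge would be cut instead, and so on, which is the "collapse the cyan 855 edge" step described above.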
19,689
Visualization software for clustering
I've had good experience with KNIME during one of my projects. It's an excellent solution for quick exploratory mining and graphing. On top of that, it provides seamless integration with R and Weka modules.
19,690
Visualization software for clustering
Also have a look at ELKI, an open-source data mining package. Wikimedia Commons has a gallery with images produced with ELKI, many of which are related to cluster analysis.
19,691
Visualization software for clustering
Take a look at Cluster 3.0. I'm not sure if it will do all you want, but it's pretty well documented and lets you choose from a few distance metrics. The visualization piece is through a separate program called Java TreeView (screenshot).
19,692
Visualization software for clustering
GGobi does look interesting for this. Another approach could be to treat your similarity/inverse distance matrices as network adjacency matrices and feed that into a network analysis routine (e.g., either igraph in R or perhaps Pajek). With this approach I would experiment with cutting the node distances into a binary tie at various cutpoints.
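The cutpoint idea can be sketched quite simply. This is a hypothetical pure-Python illustration (the igraph/Pajek tools mentioned would do the heavy lifting in practice), assuming a symmetric distance matrix: threshold the distances into a binary adjacency matrix, then read off connected components, which act as the clusters at that cutpoint.

```python
# Threshold a distance matrix into a binary adjacency matrix, then find
# connected components (the "clusters" at that cutpoint).
def components_at_cutpoint(dist, cutoff):
    n = len(dist)
    adj = [[1 if i != j and dist[i][j] < cutoff else 0 for j in range(n)]
           for i in range(n)]
    label, comp = [-1] * n, 0
    for s in range(n):
        if label[s] != -1:
            continue
        stack = [s]                     # depth-first search from s
        label[s] = comp
        while stack:
            i = stack.pop()
            for j in range(n):
                if adj[i][j] and label[j] == -1:
                    label[j] = comp
                    stack.append(j)
        comp += 1
    return label

d = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 2],
     [9, 9, 2, 0]]
labels = components_at_cutpoint(d, 3)   # two components at this cutoff
```

Sweeping `cutoff` over a range of values and watching how the components merge gives a quick feel for the cluster structure, much like cutting a single-link dendrogram at different heights.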
19,693
Visualization software for clustering
Weka is an open source program for data mining (written and extensible in Java); Orange is an open source program and library for data mining and machine learning (written in Python). They both allow convenient and efficient visual exploration of multidimensional data.
19,694
Visualization software for clustering
The free numeric software DataMelt includes a Java library called JMinHep. Please look at the manual under the section "Data clustering". It provides a GUI to visualize multidimensional data points in the X-Y plane, and can run a number of data clustering algorithms.
19,695
Variable selection in logistic regression model
As mentioned by @DemetriPananos, theoretical justification would be the best approach, especially if your goal is inference. That is, with expert knowledge of the actual data generation process, you can look at the causal paths between the variables and from there you can select the variables which are important, which are confounders and which are mediators. A DAG (directed acyclic graph), sometimes known as a causal diagram, can be a great aid in this process. I have personally encountered DAGs with as many as 500 variables, which were able to be brought down to fewer than 20. Of course, this might not be practical or feasible in your particular situation. Other methods you could use are: Principal Components Analysis. PCA is a mathematical technique which is used for dimension reduction, by generating new uncorrelated variables (components) that are linear combinations of the original (correlated) variables, such that each component accounts for a decreasing portion of total variance. That is, PCA computes a new set of variables and expresses the data in terms of these new variables. Considered together, the new variables represent the same amount of information as the original variables, in the sense that we can restore the original data set from the transformed one. Total variance remains the same, but is redistributed so that the first component accounts for the maximum possible variance while being orthogonal to the remaining components. The second component accounts for the maximum possible amount of the remaining variance while also staying orthogonal to the remaining components, and so on. This explanation is deliberately non-technical. PCA is therefore very useful where variables are highly correlated, and by retaining only the first few components, it is possible to reduce the dimension of the dataset considerably. In R, PCA is available using the base prcomp function. Partial Least Squares. 
PLS is similar to PCA except that it also takes account of the correlation of each variable with the outcome. When used with a binary (or other categorical) outcome, it is known as PLS-DA (Partial Least Squares Discriminant Analysis). In R you could use the caret package. It is worth noting that variables should be standardised prior to PCA or PLS, in order to avoid domination by variables that are measured on larger scales. A great example of this is analysing the results of an athletics competition - if the variables are analysed on their original scale then events such as the marathon and 10,000m will dominate the results. Ridge Regression. Also known as Tikhonov regularization, this is used to deal with variables that are highly correlated, which is very common in high dimension datasets. Where ordinary least squares minimises the sum of squared residuals, ridge regression adds a penalty, so that we minimize a quantity which is the sum of the squared residuals plus a term usually proportional to the sum (or often a weighted sum) of the squared parameters. Essentially, we "penalize" large values of the parameters in the quantity we are seeking to minimize. What this means in practice is that the regression estimates are shrunk towards zero. Thus they are no longer unbiased estimates, but they suffer from considerably less variance than would be the case with OLS. The regularization parameter (the penalty) is often chosen by cross validation; however, the cross validation estimator has a finite variance, which can be very large and can lead to overfitting. LASSO regression. The Least Absolute Shrinkage and Selection Operator (LASSO) is very similar to ridge regression - it is also a regularization method. The main difference is that, where ridge regression adds a penalty that is proportional to the squared parameters (also called the L2-norm), the LASSO uses the absolute value (L1-norm). 
This means that the LASSO shrinks the less important variables' coefficients to zero, thus removing some variables altogether, so this works particularly well for variable selection where we have a large number of variables. As previously mentioned, regularization introduces bias with the benefit of lower variance. There are also methods to reduce or eliminate this bias, called debiasing. Elastic net. This is a compromise between ridge regression and LASSO and produces a model that is penalized with both the L1-norm and the L2-norm. This means that some coefficients are shrunk (as in ridge regression) and some are set to zero (as in LASSO). In R I would suggest the glmnet package, which can do ridge regression, LASSO and elastic net.
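The shrinkage effect of ridge regression can be seen directly in the one-predictor, no-intercept case, where minimising the penalized sum of squares has the closed form b = Σxy / (Σx² + λ). This is a hypothetical pure-Python sketch (in practice you would use glmnet or similar), just to show the coefficient being pulled toward zero as the penalty grows:

```python
# One-predictor ridge regression: minimize sum((y - b*x)^2) + lam * b^2.
# Setting the derivative to zero gives b = sum(x*y) / (sum(x^2) + lam).
def ridge_1d(x, y, lam):
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]           # roughly y = 2x

b_ols = ridge_1d(x, y, 0.0)        # ordinary least squares (no penalty)
b_r1 = ridge_1d(x, y, 10.0)
b_r2 = ridge_1d(x, y, 100.0)
# b_ols > b_r1 > b_r2 > 0: the estimate shrinks toward zero as lam grows,
# trading a little bias for (in higher dimensions) much lower variance.
```

The LASSO replaces the squared-coefficient penalty with an absolute-value one, which has no closed form but pushes small coefficients exactly to zero rather than merely toward it.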
19,696
Variable selection in logistic regression model
Theoretical justification beyond all else. Aside from that, LASSO or similar penalized methods would be my next suggestion.
19,697
Variable selection in logistic regression model
Sounds like regularization models could be of use to you (LASSO especially, since your problem seems to be centered around picking variables, as opposed to ridge and elastic net). However, these can be difficult to set up. They are included in Stata 15 if you have access to that. In R, it's often a bit more handywork, but hopefully someone can chime in with whether and how R supports users with regularization techniques. There are other techniques to manually pick and choose variables based on their behaviors, but with over 400 variables (assuming you have no preconceived hypothesis about any of these), I'd say doing the work to understand regularization models is probably easier than manual selection.
19,698
Are MCMC based methods appropriate when Maximum a-posteriori estimation is available?
No need to use MCMC in this case: Markov Chain Monte-Carlo (MCMC) is a method used to generate values from a distribution. It produces a Markov chain of auto-correlated values with stationary distribution equal to the target distribution. This method will still work to get you what you want, even in cases where the target distribution has an analytic form. However, there are simpler and less computationally intensive methods that work in cases like this, where you are dealing with a posterior that has a nice analytic form. In the case where the posterior distribution has an available analytic form, it is possible to obtain parameter estimates (e.g., MAP) by optimisation from that distribution using standard calculus techniques. If the target distribution is sufficiently simple you might get a closed form solution for the parameter estimator, but even if it is not, you can usually use simple iterative techniques (e.g., Newton-Raphson, gradient-descent, etc.) to find the optimising parameter estimate for any given input data. If you have an analytic form for the quantile function of the target distribution, and you need to generate values from the distribution, you can do this via inverse transform sampling, which is less computationally intensive than MCMC, and allows you to generate IID values rather than values with complex auto-correlation patterns. In view of this, if you were programming from scratch, then there does not seem to be any reason you would use MCMC in the case where the target distribution has an available analytic form. The only reason you might do so is if you have a generic algorithm for MCMC already written, that can be implemented with minimal effort, and you decide that the efficiency of using the analytic form is outweighed by the effort to do the required math. 
In certain practical contexts you will be dealing with problems that are generally intractable, where MCMC algorithms are already set up and can be implemented with minimal effort (e.g., if you do data analysis in RStan). In these cases it may be easiest to run your existing MCMC methods rather than deriving analytic solutions to problems, though the latter can of course be used as a check on your working.
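For instance, inverse transform sampling needs only the quantile function: draw U ~ Uniform(0,1) and return Q(U). A pure-Python sketch for an Exponential(rate) target, whose quantile function is Q(u) = -ln(1-u)/rate (a standard textbook case, chosen here purely for illustration):

```python
import math
import random

# Inverse transform sampling: if U ~ Uniform(0,1) and Q is the quantile
# function of the target distribution, then Q(U) has the target distribution.
def sample_exponential(rate, n, rng):
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

rng = random.Random(42)
draws = sample_exponential(rate=2.0, n=100_000, rng=rng)
mean = sum(draws) / len(draws)     # should be close to 1/rate = 0.5
```

Unlike MCMC output, these draws are IID, so no burn-in or thinning is needed and standard error estimates are straightforward.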
Are MCMC based methods appropriate when Maximum a-posteriori estimation is available?
No need to use MCMC in this case: Markov Chain Monte-Carlo (MCMC) is a method used to generate values from a distribution. It produces a Markov chain of auto-correlated values with stationary distrib
Are MCMC based methods appropriate when Maximum a-posteriori estimation is available? No need to use MCMC in this case: Markov Chain Monte-Carlo (MCMC) is a method used to generate values from a distribution. It produces a Markov chain of auto-correlated values with stationary distribution equal to the target distribution. This method will still work to get you what you want, even in cases where the target distribution has an analytic form. However, there are simpler and less computationally intensive methods that work in cases like this, where you are dealing with a posterior that has a nice analytic form. In the case where the posterior distribution has an available analytic form, it is possible to obtain parameter estimates (e.g., MAP) by optimisation from that distribution using standard calculus techniques. If the target distribution is sufficiently simple you might get a closed form solution for the parameter estimator, but even if it is not, you can usually use simple iterative techniques (e.g., Newton-Raphson, gradient-descent, etc.) to find the optimising parameter estimate for any given input data. If you have an analytic form for the quantile function of the target distribution, and you need to generate values from the distribution, you can do this via inverse transform sampling, which is less computationally intensive than MCMC, and allows you to generate IID values rather than values with complex auto-correlation patterns. In view of this, if you were programming from scratch, then there does not seem to be any reason you would use MCMC in the case where the target distribution has an available analytic form. The only reason you might do so is if you have a generic algorithm for MCMC already written, that can be implemented with minimal effort, and you decide that the efficiency of using the analytic form is outweighed by the effort to do the required math. 
In certain practical contexts you will be dealing with problems that are generally intractable, where MCMC algorithms are already set up and can be implemented with minimal effort (e.g., if you do data analysis in RStan). In these cases it may be easiest to run your existing MCMC methods rather than deriving analytic solutions to problems, though the latter can of course be used as a check on your working.
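A minimal sketch of the contrast drawn above, using a hypothetical conjugate model chosen purely for illustration (a Uniform(0, θ) likelihood with a Pareto prior, which yields a Pareto posterior whose mode and quantile function are both available in closed form): the MAP drops out of calculus directly, and IID posterior draws come from inverse transform sampling with no MCMC needed.

```python
import numpy as np

# Hypothetical example: x_i ~ Uniform(0, theta), prior theta ~ Pareto(alpha0, m0).
# The posterior is Pareto(alpha0 + n, max(m0, max(x))) -- fully analytic.
rng = np.random.default_rng(0)
alpha0, m0 = 2.0, 1.0
x = rng.uniform(0, 3.0, size=50)      # simulated data with true theta = 3
n = len(x)

alpha_post = alpha0 + n               # posterior shape
m_post = max(m0, x.max())             # posterior scale (lower bound of support)

# MAP by calculus: the Pareto density is decreasing on [m_post, inf),
# so the posterior mode sits at the lower bound of the support.
theta_map = m_post

# Inverse transform sampling: the Pareto quantile function is analytic,
# F^{-1}(u) = m / (1 - u)^(1/alpha), so we get IID draws without MCMC.
u = rng.uniform(size=100_000)
draws = m_post / (1.0 - u) ** (1.0 / alpha_post)

# Cross-check the draws against the closed-form posterior mean, alpha*m/(alpha-1).
exact_mean = alpha_post * m_post / (alpha_post - 1)
print(theta_map, draws.mean(), exact_mean)
```

The same posterior summaries could be obtained from a Metropolis-Hastings chain, but the draws would be auto-correlated and the code considerably heavier, which is the point of the answer above.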
19,699
Are MCMC based methods appropriate when Maximum a-posteriori estimation is available?
It is unclear to me what you call an analytical posterior $\pi(\theta)$ and hence why this analyticity should preclude one from using MCMC. Even for a posterior distribution that is available in closed form, including its normalising constant, which is how I understand analytical in this setting, there is no reason for Bayes estimates to be available in closed form, as solving the minimisation problem $$\min_\delta\int_\Theta \text{L}(\theta,\delta)\,\tilde\pi(\theta)\,f(x|\theta)\,\text{d}\theta$$ when $\tilde\pi(\cdot)\propto\pi(\cdot)$ strongly depends on the loss function. When the normalising constant $$\int \tilde\pi(\theta)\,\text{d}\theta$$ is not available, finding a posterior mean or median or even mode [which does not require knowing the constant] most often proceeds through an MCMC algorithm. For instance, consider the joint density, for $x,y\in(0,1)$, $$f_\theta(x,y)=\dfrac{1+\theta[(1+x)(1+y)-3]+\theta^2(1-x)(1-y)}{[1-\theta(1-x)(1-y)]^3}\qquad\theta\in(-1,1)$$ inspired by the Ali-Mikhail-Haq copula: it is properly normalised, but the conditional expectation of $\Phi^{-1}(X)$ given $Y=y$ under this density, where $\Phi(\cdot)$ is the Normal cdf, is not available in closed form, even though it is a question of primary interest. Note also that the maximum a posteriori estimator is not the most natural estimator in a Bayesian setting, since it does not correspond to a loss function, and that a closed-form representation of the density, even up to a constant, does not necessarily make finding the MAP easy, or using the MAP relevant.
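To make the loss-function point concrete, here is a small numerical sketch using an illustrative Gamma(3, 1) posterior (not the copula example above): even with a fully normalised analytic density, squared-error loss, absolute-error loss, and the 0-1 loss limit yield three different Bayes estimates, namely the posterior mean, median, and mode.

```python
import numpy as np

# An analytic, fully normalised posterior chosen for illustration: Gamma(shape=3, rate=1),
# density pi(t) = t^2 * exp(-t) / 2 on (0, inf). Each loss gives a different Bayes estimate.
t = np.linspace(0.0, 30.0, 300_001)        # fine grid; the tail beyond 30 is negligible
dt = t[1] - t[0]
pdf = t**2 * np.exp(-t) / 2.0

post_mean = np.sum(t * pdf) * dt           # squared-error loss -> posterior mean (= 3)
cdf = np.cumsum(pdf) * dt
post_median = t[np.searchsorted(cdf, 0.5)] # absolute-error loss -> posterior median (~ 2.67)
post_mode = t[np.argmax(pdf)]              # 0-1 loss limit -> posterior mode (= 2)

print(post_mode, post_median, post_mean)
```

All three summaries come from the same closed-form density, yet no single number is "the" Bayes estimate until a loss function is fixed.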
19,700
Are MCMC based methods appropriate when Maximum a-posteriori estimation is available?
As I read it, this question is asking two somewhat orthogonal questions. One is whether one should use MAP estimators over posterior means, and the other is whether one should use MCMC if the posterior has an analytical form. In regards to MAP estimators versus posterior means, from a theoretical perspective, posterior means are generally preferred, as @Xian notes in his answer. The real advantage of MAP estimators is that, especially in the more typical case where the posterior is not in closed form, they can be calculated much faster (i.e. by several orders of magnitude) than an estimate of the posterior mean. If the posterior is approximately symmetric (which is often the case in many problems with large sample sizes), then the MAP estimate should be very close to the posterior mean. So the attractiveness of the MAP is actually that it can be a very cheap approximation of the posterior mean. Note that knowing the normalizing constant doesn't help us find the posterior mode, so having a closed-form solution for the posterior technically doesn't help us find the MAP estimate, outside the case where we recognize the posterior as a specific distribution whose mode we know. In regards to the second question, if one has a closed form for the posterior distribution, generally speaking there's no reason to use MCMC algorithms. Theoretically, if you had a closed-form solution for the posterior distribution, but didn't have a closed form for the mean of some function and couldn't take draws directly from this closed-form distribution, then one might turn to MCMC algorithms. But I'm not aware of any cases of this situation.
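A quick sketch of the symmetry point, using a hypothetical Bernoulli model with a uniform prior for illustration: the posterior is Beta(k + 1, n - k + 1), so both the mode k/n and the mean (k + 1)/(n + 2) are closed form, and the gap between them shrinks at rate roughly 1/n as the posterior becomes more symmetric.

```python
# Hypothetical Bernoulli model with a uniform prior: after k successes in n trials
# the posterior is Beta(k + 1, n - k + 1), with mode k/n and mean (k + 1)/(n + 2).
def map_and_mean(k, n):
    return k / n, (k + 1) / (n + 2)

for n in (10, 100, 1000, 10000):
    k = int(0.3 * n)                  # keep the observed frequency fixed at 0.3
    mode, mean = map_and_mean(k, n)
    print(n, mode, mean, abs(mean - mode))
```

In small samples, or with strongly skewed posteriors, the two summaries can still differ materially, which is the caveat implicit in the answer above.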