Pitfalls to avoid when transforming data?
There are two elements to @Peter's example, which it might be useful to disentangle:
(1) Model mis-specification. The models
$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i \qquad\text{(1)}$$
&
$$w_i=\gamma_0 + \gamma_1 z_i + \zeta_i \qquad\text{(2)}$$
where $w_i=\sqrt{\frac{y_i}{x_i}}$ & $z_i=\sqrt{x_i}$, can't both be true. If you re-express each in terms of the other's response, they become non-linear in the parameters, with heteroskedastic errors:
$$w_i = \sqrt{\frac{\beta_0}{z_i^2} + \beta_1 + \frac{\varepsilon_i}{z_i^2}} \qquad\text{(1)}$$
$$y_i = (\gamma_0 \sqrt{x_i} + \gamma_1 x_i + \zeta_i \sqrt{x_i})^2 \qquad\text{(2)}$$
If $Y$ is assumed to be a Gaussian random variable independent of $X$, then that's a special case of Model 1 in which $\beta_1=0$, & you shouldn't be using Model 2. But equally if $W$ is assumed to be a Gaussian random variable independent of $Z$, you shouldn't be using Model 1. Any preference for one model rather than the other has to come from substantive theory or their fit to data.
(2) Transformation of the response. If you knew $Y$ & $X$ to be independent Gaussian random variables, why should the relation between $W$ & $Z$ still surprise you, or would you call it spurious? The conditional expectation of $W$ can be approximated with the delta method:
$$ \operatorname{E} \sqrt\frac{Y}{x} = \frac{\operatorname{E}\sqrt{Y}}{z} \\
\approx \frac{\sqrt{\beta_0} - \frac{\operatorname{Var}Y}{8\beta_0^{3/2}}}{z}$$
It is indeed a function of $z$.
Following through the example ...
set.seed(123)
x <- rnorm(100, 20, 2)
y <- rnorm(100, 20, 2)
w <- (y/x)^.5
z <- x^.5
wrong.model <- lm(w~z)
right.model <- lm(y~x)
x.vals <- as.data.frame(seq(15,25,by=.1))
names(x.vals) <- "x"
z.vals <- as.data.frame(x.vals^.5)
names(z.vals) <- "z"
plot(x,y)
lines(x.vals$x, predict(right.model, newdata=x.vals), lty=3)
lines(x.vals$x, (predict(wrong.model, newdata=z.vals)*z.vals)^2, lty=2)
abline(h=20)
legend("topright",legend=c("data","y on x fits","w on z fits", "truth"), lty=c(NA,3,2,1), pch=c(1,NA,NA,NA))
plot(z,w)
lines(z.vals$z,sqrt(predict(right.model, newdata=x.vals))/as.matrix(z.vals), lty=3)
lines(z.vals$z,predict(wrong.model, newdata=z.vals), lty=2)
lines(z.vals$z,(sqrt(20) - 4/(8*20^(3/2)))/z.vals$z)
legend("topright",legend=c("data","y on x fits","w on z fits","truth"),lty=c(NA,3,2,1), pch=c(1,NA,NA,NA))
Neither Model 1 nor Model 2 is much use for predicting $y$ from $x$, but both are all right for predicting $w$ from $z$: mis-specification hasn't done much harm here (which isn't to say it never will—when it does, it ought to be apparent from the model diagnostics). Model-2-ers will run into trouble sooner as they extrapolate further away from the data—par for the course, if your model's wrong. Some will gain pleasure from contemplation of the little stars they get to put next to their p-values, while some Model-1-ers will bitterly grudge them this—the sum total of human happiness stays about the same. And of course, Model-2-ers, looking at the plot of $w$ against $z$, might be tempted to think that intervening to increase $z$ will reduce $w$—we can only hope & pray they don't succumb to a temptation we've all been incessantly warned against; that of confusing correlation with causation.
Aldrich (1995), "Correlations Genuine and Spurious in Pearson and Yule", Statistical Science, 10 (4), provides an interesting historical perspective on these issues.
Pitfalls to avoid when transforming data?
@Glen_b's earlier answer is all-important. Playing with transformations distorts every part of statistical inference and results in an $R^2$ that is biased high. In short, not having a parameter in the model for everything you don't know will give a false sense of precision. That's why regression splines are now so popular.
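To make the last point concrete, here is a minimal sketch (the variable names and the simulated curve are mine, purely for illustration) of letting a regression spline spend parameters on the unknown shape instead of guessing a transformation, using base R's splines package:

```r
library(splines)   # base R package providing ns()

set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)     # a nonlinearity we pretend not to know

fit.spline  <- lm(y ~ ns(x, df = 5))   # natural cubic spline: 5 df for the shape
fit.guessed <- lm(y ~ sqrt(x))         # a hand-picked transformation

# The spline, which keeps parameters for what we don't know, fits far better:
c(spline = AIC(fit.spline), guessed = AIC(fit.guessed))
```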
Concepts behind fixed/random effects models
This seems a great question, as it touches a nomenclature issue in econometrics that disturbs students when switching to the statistics literature (books, teachers, etc.). I suggest Wooldridge's Econometric Analysis of Cross Section and Panel Data (http://www.amazon.com/Econometric-Analysis-Cross-Section-Panel/dp/0262232197), chapter 10.
Assume that your variable of interest $y_{it}$ is observed in two dimensions (e.g. individuals and time) and depends on observed characteristics $x_{it}$ and unobserved ones $u_{it}$. If $y_{it}$ is an observed wage, then we may argue that it is determined by observed skills (education) and unobserved skills (talent, etc.). But it is clear that unobserved skills may be correlated with educational levels. This leads to the error decomposition:
$u_{it} = e_{it}+v_i$
where $v_i$ is the error (random) component that we may assume to be correlated with the $x$'s. i.e. $v_i$ models the individual's unobserved skills as a random individual component.
Thus the model becomes:
$y_{it} = \sum_j\theta_j x_{j,it} + e_{it} + v_i$
This model is usually labeled a FE model, but as Wooldridge argues it would be wiser to call it a RE model with a correlated error component, whereas if $v_i$ is not correlated with the $x$'s it becomes a RE model. So this answers your second question: the FE setup is more general, as it allows for correlation between $v_i$ and the $x$'s.
Older books in econometrics tend to use "FE" for a model with individual-specific constants; unfortunately this usage is still present in today's literature. (I guess that in statistics they never had this confusion. I definitely suggest the Wooldridge lectures, which develop the potential misunderstanding.)
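A small base-R simulation (the setup and numbers are mine, for illustration only) shows why the distinction matters: when $v_i$ is correlated with $x$, pooled OLS is biased, while the FE ("within") estimator, which demeans the data within each individual and so sweeps out $v_i$, recovers the true coefficient:

```r
set.seed(42)
n <- 200; t <- 5                       # 200 individuals, 5 periods each
id <- factor(rep(1:n, each = t))
v  <- rep(rnorm(n), each = t)          # individual component v_i
x  <- 0.7 * v + rnorm(n * t)           # regressor correlated with v_i
y  <- 1 + 2 * x + v + rnorm(n * t)     # true slope is 2

coef(lm(y ~ x))["x"]                   # pooled OLS: biased away from 2

# FE / "within" estimator: demean y and x within each individual,
# which eliminates v_i, then run OLS on the demeaned data.
y.dm <- y - ave(y, id)
x.dm <- x - ave(x, id)
coef(lm(y.dm ~ x.dm))["x.dm"]          # close to the true slope of 2
```

Dedicated packages (e.g. plm, lme4) do this and more, but the demeaning above is the whole idea of the "within" transformation.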
Concepts behind fixed/random effects models
My best example of a random effect in a model comes from clinical trial studies. In a clinical trial we enroll patients from various hospitals (called sites). The sites are selected from a large set of potential sites. There can be site-related factors that affect the response to treatment. So in a linear model you would often want to include site as a main effect.
But is it appropriate to have site as a fixed effect? We generally don't do that. We can often think of the sites that we selected for the trial as a random sample from the potential sites we could have selected. This may not be quite the case but it may be a more reasonable assumption than assuming the site effect is fixed. So treating site as a random effect allows us to incorporate the variability in the site effect that is due to picking a set of k sites out of a population containing N sites.
The general idea is that the group is not fixed but was selected from a larger population and other choices for the group were possible and would have led to different results. So treating it as a random effect incorporates that type of variability into the model that you would not get from a fixed effect.
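Here is a hedged base-R sketch of that idea (the numbers are invented; in practice one would fit this with a mixed-model package such as lme4, via something like lmer(y ~ treatment + (1 | site)); classical one-way ANOVA method-of-moments estimates are used instead so the example stays self-contained):

```r
set.seed(7)
k <- 20; n <- 30                       # k sites sampled, n patients per site
site <- factor(rep(1:k, each = n))
site.eff <- rnorm(k, sd = 2)           # random site effects: between-site sd = 2
y <- 10 + rep(site.eff, each = n) + rnorm(k * n, sd = 5)  # within-site sd = 5

# One-way random-effects decomposition via ANOVA mean squares:
ms <- anova(lm(y ~ site))[["Mean Sq"]]
sigma2.within  <- ms[2]                # estimates the within-site variance, 5^2
sigma2.between <- (ms[1] - ms[2]) / n  # estimates the between-site variance, 2^2
c(within = sigma2.within, between = sigma2.between)
```

Treating site as random means reporting `sigma2.between` (the variability from having sampled these k sites out of many) instead of k separate site coefficients.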
Concepts behind fixed/random effects models
Not sure about a book, but here is an example. Suppose we have a sample of birth weights from a large cohort of babies over a long period of time. The weights of babies born to the same mother would be more similar than the weights of babies born to different mothers. Boys are also heavier than girls.
So, a fixed effects model ignoring correlation in weights among babies born to the same mother is:
Model 1. mean birth weight = intercept + sex
Another fixed effects model adjusting for such correlation is:
Model 2. mean birth weight = intercept + sex + mother_id
However, firstly, we might not be interested in the effect of each particular mother. Also, we consider the mother to be a random mother from the population of all mothers. So we construct a mixed model with a fixed effect for sex and a random effect (i.e. a random intercept) for the mother:
Model 3: mean birth weight = intercept + sex + u
This u will be different for each mother, just as in Model 2 but it is not actually estimated. Rather, only its variance is estimated. This variance estimate gives us an idea as to the level of clustering of weights by mother.
Hope that makes some sense.
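The three models can be sketched in R (the data are simulated here purely for illustration; Model 3 needs a mixed-model package such as lme4, so it appears only as a comment):

```r
set.seed(11)
m <- 100                                    # number of mothers
babies <- rpois(m, 2) + 1                   # at least one baby per mother
mother <- factor(rep(1:m, babies))
n <- length(mother)
sex <- rbinom(n, 1, 0.5)                    # 1 = boy
u   <- rep(rnorm(m, sd = 300), babies)      # mother-level effect, in grams
weight <- 3300 + 150 * sex + u + rnorm(n, sd = 400)

model1 <- lm(weight ~ sex)                  # Model 1: ignores the clustering
model2 <- lm(weight ~ sex + mother)         # Model 2: one constant per mother
# Model 3 (random intercept): lme4::lmer(weight ~ sex + (1 | mother))
length(coef(model2))                        # Model 2 spends ~m extra parameters
```

Model 3 replaces those ~m mother coefficients with a single estimated variance, which is exactly the "only its variance is estimated" point above.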
Can the standard deviation of non-negative data exceed the mean?
There is nothing that states that the standard deviation has to be less than or more than the mean. Given a set of data you can keep the mean the same but change the standard deviation to an arbitrary degree by adding/subtracting a positive number appropriately.
Using @whuber's example dataset from his comment to the question: {2, 2, 2, 202}. As stated by @whuber: the mean is 52 and the standard deviation is 100.
Now, perturb each element of the data as follows: {22, 22, 22, 142}. The mean is still 52 but the standard deviation is 60.
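Both claims are easy to verify in R (sample standard deviation, with the usual $n-1$ denominator):

```r
a <- c(2, 2, 2, 202)
b <- c(22, 22, 22, 142)
c(mean(a), sd(a))   # 52 100
c(mean(b), sd(b))   # 52  60
```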
Can the standard deviation of non-negative data exceed the mean?
Of course, these are independent parameters. You can set up simple explorations in R (or another tool you may prefer).
R> set.seed(42) # fix RNG
R> x <- rnorm(1000) # one thousand N(0,1)
R> mean(x) # and mean is near zero
[1] -0.0258244
R> sd(x) # sd is near one
[1] 1.00252
R> sd(x * 100) # scale to std.dev of 100
[1] 100.252
R>
Similarly, you standardize the data you are looking at by subtracting the mean and dividing by the standard deviation.
Edit: And following @whuber's idea, here is one of an infinity of data sets which come close to your four measurements:
R> data <- c(0, 2341.141, rep(52, 545))
R> data.frame(min=min(data), max=max(data), sd=sd(data), mean=mean(data))
min max sd mean
1 0 2341.14 97.9059 56.0898
R>
Can the standard deviation of non-negative data exceed the mean?
I am not sure why @Andy is surprised at this result, but I know he is not alone. Nor am I sure what the normality of the data has to do with the fact that the sd is higher than the mean. It is quite simple to generate a data set that is normally distributed where this is the case; indeed, the standard normal has a mean of 0 and sd of 1. It would be hard to get a normally distributed data set of all positive values with sd > mean; indeed, it ought not to be possible (although it depends on the sample size and what test of normality you use... with a very small sample, odd things happen).
However, once you remove the stipulation of normality, as @Andy did, there's no reason why sd should be larger or smaller than the mean, even for all positive values. A single outlier will do this, e.g.
x <- runif(100, 1, 200)
x <- c(x, 2000)
gives mean of 113 and sd of 198 (depending on seed, of course).
But a bigger question is why this surprises people.
I don't teach statistics, but I wonder what about the way statistics is taught makes this notion common.
Can the standard deviation of non-negative data exceed the mean?
Just adding a generic point that, from a calculus perspective,
$$
\int x f(x) \text{d}x
$$
and
$$
\int x^2 f(x) \text{d}x
$$
are related by Jensen's inequality, assuming both integrals exist,
$$
\int x^2 f(x) \text{d}x \ge \left\{ \int x f(x) \text{d}x \right\}^2\,.
$$
Given this general inequality, nothing prevents the variance from getting arbitrarily large. Witness the Student's $t$ distribution with $\nu$ degrees of freedom,
$$
X \sim \mathfrak{T}(\nu,\mu,\sigma)
$$
and take $Y=|X|$ whose second moment is the same as the second moment of $X$,
$$
\mathbb{E}[|X|^2] = \frac{\nu}{\nu-2}\sigma^2 + \mu^2,
$$
when $\nu>2$. So it goes to infinity when $\nu$ goes down to $2$, while the mean of $Y$ remains finite as long as $\nu>1$.
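A quick simulation (my own illustration, not part of the original answer) shows this in R: for $Y=|X|$ with $X$ a standard Student's $t$, the sample sd slips past the sample mean as $\nu$ approaches 2, even though $Y$ is non-negative:

```r
set.seed(1)
y.tame  <- abs(rt(1e6, df = 10))   # nu = 10: second moment is modest
y.heavy <- abs(rt(1e6, df = 2.1))  # nu near 2: second moment blows up
c(mean(y.tame),  sd(y.tame))       # sd below the mean
c(mean(y.heavy), sd(y.heavy))      # sd well above the mean
```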
Can the standard deviation of non-negative data exceed the mean?
Perhaps the OP is surprised that the mean - 1 S.D. is a negative number (especially where the minimum is 0).
Here are two examples that may clarify.
Suppose you have a class of 20 first graders, where 18 are 6 years old, 1 is 5, and 1 is 7. Now add in the 49-year-old teacher. The average age is 8.0, while the standard deviation is 9.402.
You might be thinking: the one-standard-deviation range for this class runs from -1.402 to 17.402 years. You might be surprised that it includes a negative age, which seems unreasonable.
You don't have to worry about the negative age (or about 3D plots extending below the minimum of 0.0). But note that with skewed data like this, the usual normal-theory rules of thumb need not hold: here 20 of the 21 ages (about 95%) fall within 1 S.D. of the mean, not the two-thirds you would expect from a normal distribution.
When the data takes on a non-normal distribution, you will see surprising results like this.
Second example. In his book Fooled by Randomness, Nassim Taleb sets up the thought experiment of a blindfolded archer shooting at a wall of infinite length. The archer can shoot between +90 degrees and -90 degrees.
Every once in a while, the archer will shoot the arrow almost parallel to the wall, and it will land arbitrarily far from the target. Consider the distances by which the arrows miss the target as the distribution of interest. The standard deviation for this scenario would be infinite.
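Taleb's archer can be simulated directly (my sketch, under the usual reading of the experiment): the miss distance is $\tan\theta$ for $\theta$ uniform on $(-90°, 90°)$, which is a standard Cauchy variable, so the sample standard deviation never settles down as the sample grows:

```r
set.seed(99)
theta <- runif(1e5, -pi/2, pi/2)   # blindfolded archer's angle
miss  <- abs(tan(theta))           # distance by which the arrow misses
# The median miss is stable, but the sd keeps jumping with sample size:
median(miss)                       # near 1, the median of a half-Cauchy
sapply(c(1e2, 1e3, 1e4, 1e5), function(n) sd(miss[1:n]))
```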
Can the standard deviation of non-negative data exceed the mean?
A gamma random variable $X$ with density
$$
f_X(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x} I_{(0,\infty)}(x) \, ,
$$
with $\alpha,\beta>0$, is almost surely positive. Choose any mean $m>0$ and any standard deviation $s>0$. As long as they are positive, it does not matter if $m>s$ or $m<s$. Putting $\alpha=m^2/s^2$ and $\beta=m/s^2$, the mean and standard deviation of $X$ are $\mathbb{E}[X]=\alpha/\beta=m$ and $\sqrt{\operatorname{Var}[X]}=\sqrt{\alpha/\beta^2}=s$. With a big enough sample from the distribution of $X$, by the SLLN, the sample mean and sample standard deviation will be close to $m$ and $s$. You can play with R to get a feeling about this. Here are examples with $m>s$ and $m<s$.
> m <- 10
> s <- 1
> x <- rgamma(10000, shape = m^2/s^2, rate = m/s^2)
> mean(x)
[1] 10.01113
> sd(x)
[1] 1.002632
> m <- 1
> s <- 10
> x <- rgamma(10000, shape = m^2/s^2, rate = m/s^2)
> mean(x)
[1] 1.050675
> sd(x)
[1] 10.1139
Can the standard deviation of non-negative data exceed the mean?
|
As pointed out in the other answers, the mean $\bar{x}$ and standard deviation
$\sigma_x$ are essentially unrelated in that it is not necessary for the standard deviation to be smaller than the mean. However, if the data are nonnegative, taking on values in $[0,c]$, say, then, for large data sets (where the distinction between dividing by $n$ or by $n-1$ does not matter very much), the following inequality
holds:
$$\sigma_x \leq \sqrt{\bar{x}(c-\bar{x})} \leq \frac{c}{2}$$
and so if $\bar{x} > c/2$, we can be sure that $\sigma_x$ will be smaller.
Indeed, since $\sigma_x = c/2$ only for an extremal distribution (half the
data have value $0$ and the other half value $c$), $\sigma_x < \bar{x}$ can
hold in some cases when $\bar{x} < c/2$ as well.
If the data are measurements of some physical quantity that is nonnegative
(e.g. area) and have an empirical distribution that is a good fit to a
normal distribution, then $\sigma_x$ will be considerably smaller
than $\min\{\bar{x}, c - \bar{x}\}$ since the fitted normal distribution
should assign negligibly small probability to the events $\{X < 0\}$
and $\{X > c\}$.
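A quick numerical check of the inequality (stdlib Python; the data on $[0,c]$ with $c=1$ are illustrative, not from the question):

```python
import random
import statistics

# Hypothetical nonnegative data bounded by c = 1, skewed toward zero.
random.seed(0)
c = 1.0
x = [random.random() ** 2 for _ in range(10_000)]

xbar = statistics.fmean(x)
sigma = statistics.pstdev(x)     # divide by n, matching the large-sample statement

# sigma <= sqrt(xbar * (c - xbar)) <= c / 2 for data confined to [0, c]:
print(sigma <= (xbar * (c - xbar)) ** 0.5 <= c / 2)

# And when xbar > c / 2, the sd must fall below the mean:
y = [v ** 0.25 for v in x]       # fourth root pushes the values toward c
ybar = statistics.fmean(y)
print(ybar > c / 2 and statistics.pstdev(y) < ybar)
```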
|
16,913
|
Can the standard deviation of non-negative data exceed the mean?
|
What you seem to have in mind implicitly is a prediction interval that would bound the occurrence of new observations. The catch is: you must postulate a statistical distribution compliant with the fact that your observations (triangle areas) must remain non-negative. Normal won't help, but log-normal might be just fine. In practical terms, take the log of observed areas, calculate the mean and standard deviation, form a prediction interval using the normal distribution, and finally evaluate the exponential for the lower and upper limits -- the transformed prediction interval won't be symmetric around the mean, and is guaranteed to not go below zero. This is what I think the OP actually had in mind.
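The recipe reads directly as code. A stdlib-Python sketch follows; the "areas", the log-scale parameters 2.0 and 0.5, and the 95% multiplier are illustrative assumptions, not from the question:

```python
import math
import random
import statistics

# Hypothetical positive "areas", generated log-normal for illustration.
random.seed(1)
areas = [math.exp(random.gauss(2.0, 0.5)) for _ in range(500)]

# 1) take logs, 2) mean and sd on the log scale, 3) normal prediction interval,
# 4) exponentiate the limits back to the original scale.
logs = [math.log(a) for a in areas]
m, s = statistics.fmean(logs), statistics.stdev(logs)
z = 1.96                                    # ~95%, ignoring parameter-estimation error
lo, hi = math.exp(m - z * s), math.exp(m + z * s)

print(lo > 0)                               # the lower limit can never go below zero
print(hi - math.exp(m) > math.exp(m) - lo)  # and the interval is asymmetric
```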
|
16,914
|
Can the standard deviation of non-negative data exceed the mean?
|
Felipe Nievinski points to a real issue here. It makes no sense to talk in normal distribution terms when the distribution is clearly not a normal distribution.
All-positive values with a relatively small mean and relatively large standard deviation cannot have a normal distribution. So, the task is to figure out what sort of distribution fits the situation.
The original post suggests that a normal distribution (or some such) was clearly in mind. Otherwise negative numbers would not come up. Log normal, Rayleigh, Weibull come to mind ... I don't know but wonder what might be best in a case like this?
|
16,915
|
Are the random variables $X$ and $f(X)$ dependent?
|
Here is a proof of @cardinal's comment with a small twist. If $X$ and $f(X)$ are independent then
$$
\begin{array}{lcl}
P(X \in A \cap f^{-1}(B)) & = & P(X \in A, f(X) \in B) \\
& = & P(X \in A) P(f(X) \in B) \\
& = & P(X \in A) P(X \in f^{-1}(B))
\end{array}
$$
Taking $A = f^{-1}(B)$ yields the equation
$$P(f(X) \in B) = P(f(X) \in B)^2,$$
which has the two solutions 0 and 1. Thus $P(f(X) \in B) \in \{0, 1\}$ for all $B$. In complete generality, it's not possible to say more. If $X$ and $f(X)$ are independent, then $f(X)$ is a variable such that for any $B$ it is either in $B$ or in $B^c$ with probability 1. To say more, one needs more assumptions, e.g. that singleton sets $\{b\}$ are measurable.
However, details at the measure theoretic level do not seem to be the main concern of the OP. If $X$ is real and $f$ is a real function (and we use the Borel $\sigma$-algebra, say), then taking $B = (-\infty, b]$ it follows that the distribution function for the distribution of $f(X)$ only takes the values 0 and 1, hence there is a $b$ at which it jumps from $0$ to $1$ and $P(f(X) = b) = 1$.
At the end of the day, the answer to the OP's question is that $X$ and $f(X)$ are generally dependent and only independent under very special circumstances. Moreover, the Dirac measure $\delta_{f(x)}$ always qualifies as a conditional distribution of $f(X)$ given $X = x$, which is a formal way of saying that knowing $X = x$ means you also know exactly what $f(X)$ is. This special form of dependence with a degenerate conditional distribution is characteristic of functions of random variables.
|
16,916
|
Are the random variables $X$ and $f(X)$ dependent?
|
Lemma: Let $X$ be a random variable and let $f$ be a (Borel measurable) function such that $X$ and $f(X)$ are independent. Then $f(X)$ is constant almost surely. That is, there is some $a \in \mathbb R$ such that $\mathbb P(f(X) = a) = 1$.
The proof is below; but, first, some remarks. The Borel measurability is just a technical condition to ensure that we can assign probabilities in a reasonable and consistent way. The "almost surely" statement is also just a technicality.
The essence of the lemma is that if we want $X$ and $f(X)$ to be independent, then our only candidates are functions of the form $f(x) = a$.
Contrast this with the case of functions $f$ such that $X$ and $f(X)$ are uncorrelated. This is a much, much weaker condition. Indeed, consider any random variable $X$ with mean zero, finite absolute third moment and that is symmetric about zero. Take $f(x) = x^2$, as in the example in the question. Then $\mathrm{Cov}(X,f(X)) = \mathbb E Xf(X) = \mathbb E X^3 = 0$, so $X$ and $f(X) = X^2$ are uncorrelated.
Below, I give the simplest proof I could come up with for the lemma. I've made it exceedingly verbose so that all the details are as obvious as possible. If anyone sees ways to improve it or simplify it, I'd enjoy knowing.
Idea of proof: Intuitively, if we know $X$, then we know $f(X)$. So, we need to find some event in $\sigma(X)$, the sigma algebra generated by $X$, that relates our knowledge of $X$ to that of $f(X)$. Then, we use that information in conjunction with the assumed independence of $X$ and $f(X)$ to show that our available choices for $f$ have been severely constrained.
Proof of lemma: Recall that $X$ and $Y$ are independent if and only if for all $A \in \sigma(X)$ and $B \in \sigma(Y)$, $\renewcommand{\Pr}{\mathbb P}\Pr(X \in A, Y \in B) = \Pr(X \in A) \Pr(Y \in B)$. Let $Y = f(X)$ for some Borel measurable function $f$ such that $X$ and $Y$ are independent. Define $\newcommand{\o}{\omega}A(y) = \{\o: f(X(\o)) \leq y\}$. Then,
$$
A(y) = \{\o: X(\o) \in f^{-1}((-\infty,y])\}
$$
and since $(-\infty,y]$ is a Borel set and $f$ is Borel-measurable, then $f^{-1}((-\infty,y])$ is also a Borel set. This implies that $A(y) \in \sigma(X)$ (by definition(!) of $\sigma(X)$).
Since $X$ and $Y$ are assumed independent and $A(y) \in \sigma(X)$, then
$$
\Pr(X \in A(y), Y \leq y) = \Pr(X \in A(y)) \Pr(Y \leq y) = \Pr(f(X) \leq y) \Pr(f(X) \leq y) \>,
$$
and this holds for all $y \in \mathbb R$. But, by definition of $A(y)$
$$
\Pr(X \in A(y), Y \leq y) = \Pr(f(X) \leq y, Y \leq y) = \Pr(f(X) \leq y) \> .
$$
Combining these last two, we get that for every $y \in \mathbb R$,
$$
\Pr(f(X) \leq y) = \Pr(f(X) \leq y) \Pr(f(X) \leq y) \>,
$$
so $\Pr(f(X) \leq y) = 0$ or $\Pr(f(X) \leq y) = 1$. This means there must be some constant $a \in \mathbb R$ such that the distribution function of $f(X)$ jumps from zero to one at $a$. In other words, $f(X) = a$ almost surely.
NB: Note that the converse is also true by an even simpler argument. That is, if $f(X) = a$ almost surely, then $X$ and $f(X)$ are independent.
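The lemma's contrast between "independent" and "uncorrelated" is easy to see numerically. A small stdlib-Python sketch, with simulated standard-normal $X$ and $f(x)=x^2$ as in the question:

```python
import random

# Illustrative: X ~ N(0, 1) and f(X) = X^2.
random.seed(2)
xs = [random.gauss(0, 1) for _ in range(100_000)]
ys = [x * x for x in xs]

# Sample covariance is near zero: X and f(X) are (essentially) uncorrelated.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
print(abs(cov) < 0.05)

# Yet f(X) is completely determined by X: whenever |X| > 1, f(X) > 1 without exception.
cond = [y for x, y in zip(xs, ys) if abs(x) > 1]
print(all(y > 1 for y in cond))
```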
|
16,917
|
Why is this prediction of time series "pretty poor"?
|
It's sort of an optical illusion: the eye looks at the graph and sees that the red and blue curves are right next to each other. The problem is that they are right next to each other horizontally, but what matters is the vertical distance. The eye most easily sees the distance between the curves in the two-dimensional space of the Cartesian graph, but what matters is the one-dimensional distance at a particular t value.
For example, suppose we had points A1 = (10, 100), A2 = (10.1, 90), A3 = (9.8, 85), P1 = (10.1, 100.1), and P2 = (9.8, 88). The eye is naturally going to compare P1 to A1, because that is the closest point, while P2 is going to be compared to A3. Since P1 is closer to A1 than P2 is to A3, P1 is going to look like the better prediction. But when you compare P1 to A1, you're just looking at how well P1 repeats what was seen earlier; with respect to A1, P1 isn't a prediction. The proper comparison is between P1 v. A2 and P2 v. A3, and in this comparison P2 is better than P1. It would have been clearer if, in addition to plotting y_actual and y_pred against t, there had been a graph of (y_pred - y_actual) against t.
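A tiny Python sketch, using the hypothetical points from the example, makes the two kinds of comparison explicit: Euclidean closeness on the plot versus the vertical error at a shared t.

```python
import math

# Hypothetical (t, y) points from the example: A* are actuals, P* are predictions.
A1, A2, A3 = (10.0, 100.0), (10.1, 90.0), (9.8, 85.0)
P1, P2 = (10.1, 100.1), (9.8, 88.0)

def plot_distance(p, q):
    """What the eye judges: straight-line distance on the graph."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def vertical_error(pred, actual):
    """What matters: difference in y at the shared t value."""
    return abs(pred[1] - actual[1])

# On the plot, P1 hugs A1 far more closely than P2 hugs A3...
print(plot_distance(P1, A1) < plot_distance(P2, A3))
# ...but paired by t (P1 with A2, P2 with A3), P2 is the better prediction.
print(vertical_error(P2, A3) < vertical_error(P1, A2))
```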
|
16,918
|
Why is this prediction of time series "pretty poor"?
|
Why is the first "poor"? it looks almost perfect to me, it predicts
every single change perfectly!
It is a so-called "shifted" forecast. If you look more closely at chart 1, you see that the prediction's power lies only in copying the last seen value almost exactly. That means the model has learned nothing better and treats the time series as a random walk. I guess the problem may lie in the fact that you feed raw data to the neural network. These data are non-stationary, which causes all the trouble.
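A quick stdlib-Python illustration (simulated data, not the OP's) of why such a forecast looks impressive: on a random walk, simply copying the previous value gives a per-step error that is tiny compared with the overall spread of the series.

```python
import random
import statistics

# Simulate a random walk y[t] = y[t-1] + noise (a stand-in for non-stationary data).
random.seed(0)
y = [0.0]
for _ in range(2_000):
    y.append(y[-1] + random.gauss(0, 1))

# The "shifted" forecast: predict each value by copying the one before it.
pred, actual = y[:-1], y[1:]
rmse = (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)) ** 0.5

# Per-step error is about the noise sd (1), while the series itself wanders far
# more widely, so the two curves look nearly identical on a plot despite the
# forecast having zero real skill.
print(rmse < statistics.pstdev(y))
```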
|
16,919
|
What follows if we fail to reject the null hypothesis? [duplicate]
|
Statistical hypothesis testing is in some way similar to the technique 'proof by contradiction' in mathematics, i.e. if you want to prove something then assume the opposite and derive a contradiction, i.e. something that is impossible.
In statistics 'impossible' does not exist, but some events are very 'improbable'. So in statistics, if you want to 'prove' something (i.e. $H_1$) then you assume the opposite (i.e. $H_0$) and, if $H_0$ is true, you try to derive something improbable. 'Improbable' is defined by the significance level that you choose.
If, assuming $H_0$ is true, you can find something very improbable, then $H_0$ can not be true because it leads to a 'statistical contradiction'. Therefore $H_1$ must be true.
This implies that in statistical hypothesis testing you can only find evidence for $H_1$. If one can not reject $H_0$ then the only conclusion you can draw is 'We can not prove $H_1$' or 'we do not find evidence that $H_0$ is false and so we accept $H_0$ (as long as we do not find evidence against it)'.
But there is more ... it is also about power.
Obviously, as nothing is impossible, one can draw wrong conclusions; we might find 'false evidence' for $H_1$, meaning that we conclude that $H_0$ is false while in reality it is true. This is a type I error, and the probability of making a type I error is equal to the significance level that you have chosen.
One may also accept $H_0$ while in reality it is false, this is a type II error and the probability of making one is denoted by $\beta$.
The power of the test is defined as $1-\beta$ so 1 minus the probability of making a type II error. This is the same as the probability of not making a type II error.
So $\beta$ is the probability of accepting $H_0$ when $H_0$ is false, therefore $1-\beta$ is the probability of rejecting $H_0$ when $H_0$ is false which is the same as the probability of rejecting $H_0$ when $H_1$ is true.
By the above, rejecting $H_0$ is finding evidence for $H_1$, so the power is $1-\beta$ is the probability of finding evidence for $H_1$ when $H_1$ is true.
If you have a test with very high power (close to 1), then this means that if $H_1$ is true, the test would (almost surely) have found evidence for $H_1$; so if we do not find evidence for $H_1$ (i.e. we do not reject $H_0$) and the test has very high power, then probably $H_1$ is not true (and thus probably $H_0$ is true).
So what we can say is that if your test has very high power, then not rejecting $H_0$ is ''almost as good as'' finding evidence for $H_0$.
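These error rates are easy to see by simulation. A stdlib-Python sketch follows; the one-sided z-test, sample size $n=25$, effect size $0.7$, and 5% level are illustrative choices, not from the question:

```python
import random
import statistics

# Monte Carlo estimate of the type I error rate and the power 1 - beta of a
# one-sided z-test of H0: mu = 0 vs H1: mu > 0, with known sigma = 1.
random.seed(4)
n, z_crit, reps = 25, 1.645, 4_000   # 5% significance level

def reject(mu):
    # Draw a sample of size n and check whether the z statistic exceeds z_crit.
    xbar = statistics.fmean(random.gauss(mu, 1) for _ in range(n))
    return xbar * n ** 0.5 > z_crit

alpha_hat = sum(reject(0.0) for _ in range(reps)) / reps  # estimated type I error rate
power_hat = sum(reject(0.7) for _ in range(reps)) / reps  # estimated power at mu = 0.7

print(alpha_hat, power_hat)
```

With power this high, failing to reject is informative: under this design the test detects $\mu = 0.7$ almost every time it is true, so not rejecting makes $H_0$ the more plausible state.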
|
16,920
|
What follows if we fail to reject the null hypothesis? [duplicate]
|
It depends.
For instance, I'm testing my series for a unit root, maybe with an ADF test. The null in this case means the presence of a unit root. Failing to reject the null suggests that there might be a unit root in the series. The consequence is that I might have to model the series with a random-walk-like process instead of an autoregressive one.
So, although it doesn't mean that I proved the unit root's presence, the test outcome is not inconsequential. It steers me towards a different kind of modeling than rejecting the null would.
Hence, in practice, failing to reject often means implicitly accepting the null. If you're a purist, then you'd also state the alternative hypothesis of an autoregressive process explicitly, and fall back on the null when failing to reject it.
|
16,921
|
What follows if we fail to reject the null hypothesis? [duplicate]
|
If we fail to reject the null hypothesis, it does not mean that the null hypothesis is true. That's because a hypothesis test does not determine which hypothesis is true, or even which one is very much more likely. What it does assess is whether the evidence available is statistically significant enough to reject the null hypothesis.
So:
1. The data don't provide statistically significant evidence of a difference in the means, but that doesn't mean the mean actually is the one we define in $H_0$.
2. We don't have strong evidence against the mean being different; the same caveat as in point 1 applies. Therefore we can't draw definite conclusions about the mean.
Check out this link for more info on P values and Significance tests
Click here
|
How would you explain generalized linear models to people with no statistical background?
|
If the audience really has no statistical background, I think I would try to simplify the explanation quite a bit more. First, I would draw a coordinate plane on the board with a line on it, like so:
Everyone at your talk will be familiar with the equation for a simple line, $\ y = mx + b $, because that's something that is learned in grade school. So I would display that alongside the drawing. However, I would write it backwards, like so:
$\ mx + b = y $
I would say that this equation is an example of a simple linear regression. I would then explain how you (or a computer) could fit such an equation to a scatter plot of data points, like the one shown in this image:
I would say that here, we are using the age of the organism that we are studying to predict how big it is, and that the resultant linear regression equation that we get (shown on the image) can be used to predict how big an organism is if we know its age.
Returning to our general equation $\ mx + b = y $, I would say that x's are variables that can predict the y's, so we call them predictors. The y's are commonly called responses.
Then I would explain again that this was an example of a simple linear regression equation, and that there are actually more complicated varieties. For example, in a variety called logistic regression, the y's are only allowed to be 1's or 0's. One might want to use this type of model if you are trying to predict a "yes" or "no" answer, like whether or not someone has a disease. Another special variety is something called Poisson regression, which is used to analyse "count" or "event" data (I wouldn't delve further into this unless really necessary).
I would then explain that linear regression, logistic regression, and Poisson regression are really all special examples of a more general method, something called a "generalized linear model". The great thing about "generalized linear models" is that they allow us to use "response" data that can take any value (like how big an organism is in linear regression), take only 1's or 0's (like whether or not someone has a disease in logistic regression), or take discrete counts (like number of events in Poisson regression).
I would then say that in these types of equations, the x's (predictors) are connected to the y's (responses) via something that statisticians call a "link function". We use these "link functions" in the instances in which the x's are not related to the y's in a linear manner.
Anyway, those are my two cents on the issue! Maybe my proposed explanation sounds a bit hokey and dumb, but if the purpose of this exercise is just to get the "gist" across to the audience, perhaps an explanation like this isn't too bad. I think it's important that the concept be explained in an intuitive way and that you avoid throwing around words like "random component", "systematic component", "link function", "deterministic", "logit function", etc. If you're talking to people who truly have no statistical background, like a typical biologist or physician, their eyes are just going to glaze over at hearing those words. They don't know what a probability distribution is, they've never heard of a link function, and they don't know what a "logit" function is, etc.
In your explanation to a non-statistical audience, I would also focus on when to use what variety of model. I might talk about how many predictors you are allowed to include on the left hand side of the equation (I've heard rules of thumb like no more than your sample size divided by ten). It would also be nice to include an example spread sheet with data and explain to the audience how to use a statistical software package to generate a model. I would then go through the output of that model step by step and try to explain what all the different letters and numbers mean. Biologists are clueless about this stuff and are more interested in learning what test to use when rather than actually gaining an understanding of the math behind the GUI of SPSS!
I would appreciate any comments or suggestions regarding my proposed explanation, particularly if anyone notes errors or thinks of a better way to explain it!
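As a small supplementary sketch of the idea (Python used purely for illustration; the names here are my own, not standard statistical notation), the three varieties above differ only in how the same linear part $mx + b$ is mapped onto the response scale:

```python
import math

def linear_part(m, x, b):
    """The familiar grade-school line: m*x + b."""
    return m * x + b

# Inverse link functions: each maps the linear part onto an allowed response range.
inverse_links = {
    "linear": lambda eta: eta,                         # any real number
    "logistic": lambda eta: 1 / (1 + math.exp(-eta)),  # squeezed into (0, 1)
    "poisson": lambda eta: math.exp(eta),              # always positive (a mean count)
}

eta = linear_part(m=0.5, x=4.0, b=-1.0)  # eta = 1.0
prob = inverse_links["logistic"](eta)    # a valid probability, roughly 0.73
```

The point of the sketch is that the "generalized" part of a generalized linear model is only this last mapping step; the $mx + b$ part everyone learned in grade school is the same throughout.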
|
How would you explain generalized linear models to people with no statistical background?
|
I wouldn't call the response a random component. It is a combination of a deterministic and a random component.
I think I would describe generalized linear models this way. We have a response variable and a set of related variables that can aid in predicting the response. However the response and the predictors are not linearly related. The link function provides a transformation of the response so that the transformed response is linearly related to the predictors. For example in logistic regression the predictors could be continuous variables that can take on values over the entire real line. But the response is a probability (the probability of a successful outcome in a clinical trial for example). So the response is constrained to fall between 0 and 1. The link function in logistic regression is called the logit function. It equals $\log(p/(1-p))$. You can see that the logit function transforms a variable constrained to $[0,1]$ to a variable that can take values over the entire real line. In this case the link function makes the response compatible with the predictor variables and hence it is possible to make it a linear function of the predictors plus a random component.
|
Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others?
|
My main use for Google spreadsheets has been with Google Forms, for collecting data and then easily importing it into R. Here is a post I wrote about it half a year ago:
Google spreadsheets + google forms + R = Easily collecting and importing data for analysis
Also, If you are into collaboration, my tool of choice is DropBox. I wrote a post regarding it a few months ago:
Syncing files across computers using DropBox
I have now been using it for about half a year on a project with 5 co-authors, and it has been invaluable (syncing data files from 3 contributors, everyone can see the latest version of the output I am producing, and everyone is looking at the same .docx file for the article).
Both posts offer video tutorials and verbal instructions.
|
Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others?
|
As an enthusiastic user of R, bash, Python, asciidoc, (La)TeX, open source software or any un*x tools, I cannot provide an objective answer. Moreover, as I often argue against the use of MS Excel or spreadsheets of any kind (well, you see your data, or part of it, but what else?), I would not contribute positively to the debate. I'm not the only one, e.g.
Spreadsheet Addiction, from P. Burns.
MS Excel’s precision and accuracy, a post on the 2004 R mailing-list
L. Knusel, On the accuracy of statistical distributions in Microsoft Excel 97, Computational Statistics & Data Analysis, 26: 375–377, 1998. (pdf)
B.D. McCullough & B. Wilson, On the accuracy of statistical procedures in Microsoft Excel
2000 and Excel XP, Computational Statistics & Data Analysis, 40: 713–721, 2002.
M. Altman, J. Gill & M.P. McDonald, Numerical Issues in Statistical Computing for the Social Scientist, Wiley, 2004. [e.g., pp. 12–14]
A colleague of mine lost all his macros because of the lack of backward compatibility, etc. Another colleague tried to import genetics data (around 700 subjects genotyped on 800,000 markers, 120 MB), just to "look at them". Excel failed, Notepad gave up too... I am able to "look at them" with vi, and quickly reformat the data with some sed/awk or perl script. So I think there are different levels to consider when discussing the usefulness of spreadsheets. Either you work on small data sets, and only want to apply elementary statistical stuff, and maybe it's fine. Then, it's up to you to trust the results, or you can always ask for the source code, but maybe it would be simpler to do a quick test of all inline procedures with the NIST benchmark. I don't think it corresponds to a good way of doing statistics simply because this is not true statistical software (IMHO), although as an update of the aforementioned list, newer versions of MS Excel seem to have demonstrated improvements in their accuracy for statistical analyses, see Keeling and Pavur, A comparative study of the reliability of nine statistical software packages (CSDA 2007 51: 3811).
Still, about one paper out of 10 or 20 (in biomedicine, psychology, psychiatry) includes graphics made with Excel, sometimes without removing the gray background, the horizontal black line or the automatic legend (Andrew Gelman and Hadley Wickham are certainly as happy as me when seeing it). But more generally, it tends to be the most used "software" according to a recent poll on FlowingData, which reminds me of an old talk by Brian Ripley (who co-authored the MASS R package, and wrote an excellent book on pattern recognition, among others):
Let's not kid ourselves: the most widely used piece of software for statistics is Excel (B. Ripley via Jan De Leeuw), http://www.stats.ox.ac.uk/~ripley/RSS2002.pdf
Now, if you feel it provides you with a quick and easier way to get your statistics done, why not? The problem is that there are still things that cannot be done (or at least, it's rather tricky) in such an environment. I think of bootstrap, permutation, multivariate exploratory data analysis, to name a few. Unless you are very proficient in VBA (which is neither a scripting nor a programming language), I am inclined to think that even minor operations on data are better handled under R (or Matlab, or Python, providing you get the right tool for dealing with e.g. so-called data.frame). Above all, I think Excel does not promote very good practices for the data analyst (but it also applies to any "cliquodrome", see the discussion on Medstats about the need to maintain a record of data processing, Documenting analyses and data edits), and I found this post on Practical Stats relatively illustrative of some of Excel pitfalls. Still, it applies to Excel, I don't know how it translates to GDocs.
About sharing your work, I tend to think that GitHub (or Gist for source code) or Dropbox (although the EULA might discourage some people) are very good options (revision history, grant management if needed, etc.). I cannot encourage the use of software which basically stores your data in a binary format. I know it can be imported into R, Matlab, Stata, SPSS, but in my opinion:
data should definitely be in a text format that can be read by another statistical software;
analysis should be reproducible, meaning you should provide a complete script for your analysis and it should run (we approach the ideal case near here...) on another operating system at any time;
your own statistical software should implement acknowledged algorithms and there should be an easy way to update it to reflect current best practices in statistical modeling;
the sharing system you choose should include versioning and collaborative facilities.
That's it.
|
Do some of you use Google Docs spreadsheet to conduct and share your statistical work with others?
|
"I am also interested to hear about the bugs or flaws you have encountered with Google Docs."
I will respond to that part of the original question only. My explorations with Google Docs Spreadsheets (GSheets) have been concerned with the mathematical and statistical functions. In the end my assessment is that Google Spreadsheets is in that respect much inferior in 2012 to the maligned Excel of 1997.
Witness: Google Sheets apparently evaluates erfc(x) using erfc(x)=1-erf(x) for arguments for which erf(x) is close to 1. They evaluate a standard deviation or a variance via average of the squares minus square of the average; it is bad numerical practice. Combinatorial functions and discrete probabilities such as poisson(n,x) = pow(x,n)*exp(-x)/n! are evaluated factor-by-factor, causing needless overflow. The factorial is evaluated using Stirling's approximation factor-by-factor, causing further needless overflow. The cumulative Poisson distribution is evaluated by simply doing the finite sum, so the normalization property is lost in the round-off; the same is true for the cumulative binomial distribution. The cumulative normal distribution is completely messed up; it goes outside the [0,1] range. There is a general loss of accuracy relative to the implementations of the same functions in other packages. The descriptions of elementary functions such as rounding are often garbled and unintelligible; the interpretation is a guessing game.
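The "average of the squares minus square of the average" complaint is easy to reproduce; here is a short Python demonstration (any double-precision implementation behaves the same way) of why that one-pass formula is bad numerical practice:

```python
# Data with a large mean and small spread: the population variance of
# [4, 7, 13, 16] is 22.5, and adding a constant offset should not change it.
data = [1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16]
n = len(data)
mean = sum(data) / n

# One-pass formula E[X^2] - (E[X])^2: catastrophic cancellation in doubles.
naive_var = sum(x * x for x in data) / n - mean * mean

# Two-pass formula: numerically stable for this data.
stable_var = sum((x - mean) ** 2 for x in data) / n

# stable_var is 22.5; naive_var is wildly wrong (it can even come out negative).
```

The squares are around $10^{18}$, where doubles can no longer represent the small contribution of the spread, so subtracting two nearly equal huge numbers destroys all the significant digits of the variance.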
I have documented these issues in two sets of postings on the Google Docs product forums:
(2011-11-13 and later) normdist throws negative value still
https://productforums.google.com/d/topic/docs/XfBPtoKJ1Ws/
(2012-05-06 and later) Errors and other issues with statistical and mathematical functions in GSheets
https://productforums.google.com/d/topic/docs/rxFCHYeMhrU/
|
Output of Logistic Regression Prediction
|
First, it looks like you built a regular linear regression model, not a logistic regression model. To build a logistic regression model, you need to use glm() with family="binomial", not lm().
Suppose you build the following logistic regression model using independent variables $x_1, x_2$, and $x_3$ to predict the probability of event $y$:
logit <- glm(y~x1+x2+x3,family="binomial")
This model has regression coefficients $\beta_0, \beta_1, \beta_2$ and $\beta_3$.
If you then do predict(logit), R will calculate and return b0 + b1*x1 + b2*x2 + b3*x3.
Recall that your logistic regression equation is $y = \log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_3x_3$.
So, to get the probabilities that you want, you need to solve this equation for $p$.
In R, you can do something like this:
pred <- predict(logit,newdata=data) #gives you b0 + b1x1 + b2x2 + b3x3
probs <- exp(pred)/(1+exp(pred)) #gives you probability that y=1 for each observation
|
Output of Logistic Regression Prediction
|
Looking at the documentation of predict.glm, it seems that it is as easy as using an extra parameter in the predict call:
type = "response"
See documentation:
type - the type of prediction required. The default is on the scale of
the linear predictors; the alternative "response" is on the scale of
the response variable. Thus for a default binomial model the default
predictions are of log-odds (probabilities on logit scale) and type =
"response" gives the predicted probabilities. The "terms" option
returns a matrix giving the fitted values of each term in the model
formula on the linear predictor scale. The value of this argument can
be abbreviated
|
16,929
|
Bayesian criticism of frequentist p-value
|
The point that the authors are trying to make is a subtle one: they see it as a failure of NHST that, as $n$ gets arbitrarily large, the $p$-value doesn't tend to 1. It's a bit surprising that the piece doesn't contain any discussion of equivalence testing. To me it's obvious and reasonable that the p-value keeps its uniform distribution when the null is true, no matter how large $n$ is. Large $n$ means having sensitivity to detect smaller and smaller effects, while the false positive error rate remains fixed. So in the somewhat constrained setting of the null being exactly true, the behavior of the $p$-value distribution doesn't depend on $n$ at all.
NHST is, in my mind, desirable precisely because there's no way of declaring a null hypothesis to be true: my experimental design is set up specifically to disprove it. A non-significant result may mean that my experiment was underpowered or that its assumptions were wrong, so there are risks associated with accepting the null that I'd rather not incur.
We never actually believe that the null hypothesis is exactly true. Typically, failed designs arise because the truth is too close to the null to be detectable. Having too much data can even be a bad thing in this case; rather, there's a subtle art in designing a study to obtain just enough sample size to reject the null when a meaningful difference is present.
One can design a frequentist procedure that first tests for a difference (one- or two-tailed) and, on a negative result, performs an equivalence test (declaring the null true as a significant result). In the latter case one can show that the power of the equivalence test goes to 1 when the null is in fact true.
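That the null distribution of the p-value doesn't change with $n$ is easy to check by simulation. A quick sketch (Python, using a two-sided z-test with known variance purely for convenience; names are made up):

```python
import math
import random

def z_test_p(sample):
    # two-sided p-value for H0: mean = 0 with known sd = 1
    z = sum(sample) / math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

random.seed(1)
for n in (10, 400):  # small and large samples, null exactly true
    pvals = [z_test_p([random.gauss(0, 1) for _ in range(n)])
             for _ in range(2000)]
    frac = sum(p <= 0.05 for p in pvals) / len(pvals)
    print(n, round(frac, 3))  # about 0.05 either way; p does not drift toward 1
```

The rejection fraction sits near the nominal 5% regardless of $n$, which is exactly the behavior the paragraph above describes.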
|
16,930
|
Bayesian criticism of frequentist p-value
|
I think you’re conflating two different arguments against p-values. Let me spell them out.
By definition, p is distributed uniformly under the null hypothesis (or as uniformly as possible in discrete settings). So p isn’t going to be a useful measure of evidence in favour of the null when it’s true. Evidence in favour of the null isn’t going to accumulate as we collect data, as p just makes a random walk bounded by 0 and 1. I don’t see where your source mentions bias in this context.
There exist mathematical theorems showing that p is much less than the posterior of the null for any choices of priors for parameters in the alternative hypothesis. This is what’s meant by overstating the evidence. For example, p = 0.05 under some assumptions corresponds to a null hypothesis that is at least about 30% plausible.
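The "at least about 30%" figure can be reproduced with the Sellke–Berger calibration (assuming that is the family of results meant here): for p < 1/e, the Bayes factor in favour of the null is at least $-e\,p\ln p$, which under equal prior odds bounds the posterior probability of the null from below.

```python
import math

def null_posterior_lower_bound(p):
    # Sellke-Berger calibration: Bayes factor for H0 >= -e * p * ln(p), p < 1/e
    bf0 = -math.e * p * math.log(p)
    # with prior odds of 1, posterior P(H0 | data) >= bf0 / (1 + bf0)
    return bf0 / (1.0 + bf0)

print(round(null_posterior_lower_bound(0.05), 3))  # ~0.289
```

So even at p = 0.05 the null retains posterior probability of roughly 29% under this bound, which is what "overstating the evidence" refers to.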
Let me note that p isn’t necessary at all in frequentist inference, and its use isn’t supported by any coherent framework. Indeed, there are really two kinds of frequentism here.
Fisher, in which p is computed and used as a measure of evidence. It is this one that the above arguments attack.
Neyman-Pearson, in which, if p is used at all, it’s used only to define a rejection region. The connection - or lack thereof - between p and evidence is neither here nor there.
Lastly, let me clarify confusion about the connection between NHST and proof by contradiction. In NHST, when faced with a small p-value, we face the dichotomy
we observed a small p and the null was true or
we observed a small p and the null was false
which evidently reduces to nothing more than the fact that we observed a small p. It gets us nowhere deductively. Proof by contradiction, on the other hand, obviously does allow us to deductively prove non-trivial things. What’s required here isn’t deductive logic - like proof by contradiction - but inductive logic.
|
16,931
|
Bayesian criticism of frequentist p-value
|
For me a core issue here is that the Bayesian criticism of the p-value is based on Bayesian reasoning that a frequentist would not normally accept. For the Bayesian, the "true parameter" is a random variable (as probability distributions are about formalising uncertainty, and there is uncertainty about the value of the true parameter), whereas for the frequentist the "true parameter" is fixed and the basis of probability calculations (as probability distributions are about how data will distribute under idealised infinite replication).
The Bayesians start from a prior distribution over the parameter, which according to frequentist logic does not normally exist (unless we're in a situation where various "true" parameters are indeed generated in some kind of repeatable experiment as in "empirical Bayes" situations).
Updating the prior by the data, the Bayesian will produce a posterior and can then make statements about what the probability is that the true parameter lies in a certain set or takes a certain value. Such statements can not be made in frequentist logic, and surely the p-value doesn't do such a thing.
What's behind the "p-values overstate the evidence" issue is that some Bayesians actually interpret the p-value as (some kind of approximation of) the probability that the null hypothesis is true, in which case they can compare it with the same probability computed by Bayesian logic. Depending on the prior, one could then come to the conclusion that the p-value is too low or too high.
(This paragraph added after comments:) The connection between this and the statement about "evidence" is that some Bayesians tend to think that only probabilities that hypotheses are true (and certain quantities derived from them) qualify as valid measurements of evidence. This means that in order to accept the p-value as a measure of evidence, they need to interpret the p-value in this way. A frequentist can still think of a p-value as a measurement of evidence, but this measurement would then be something essentially different, as probabilities of hypotheses being true don't normally make sense in frequentist logic.
The problem here is that this (a probability of the null hypothesis being true) is not what the p-value is; according to frequentist logic there is no such thing as a "true prior" that could be used as a basis for this, and the p-value is a probability computed assuming the null hypothesis to be true, rather than a probability that the null hypothesis is true. Therefore a frequentist shouldn't accept the Bayesian computation as "what the p-value should be". The Bayesian argument (not shared by all Bayesians!) here is that a Bayesian interpretation of the p-value isn't as good as proper Bayesian analysis, but the frequentists can say that the p-value shouldn't be interpreted in this way in the first place.
The Bayesians have a point though in the sense that the p-value is often misinterpreted as a probability of the null hypothesis being true, so their criticism, although not applicable to a correct understanding of the p-value, applies correctly to what some people make of it. (Furthermore Bayesians can claim that parameters should be treated as random variables rather than as fixed, which is a more philosophical discussion and doesn't concern p-values in particular but the whole of the frequentist logic.)
|
16,932
|
Bayesian criticism of frequentist p-value
|
The discussion here is excellent, but at the heart of the matter is that, in an attempt to "let the data speak for themselves", i.e., to be objective, the frequentist approach jettisons the desire to obtain measures of evidence in favor of assertions. In Bernoulli's Fallacy, Clayton eloquently takes us step by step through statistical history to explain how this happened and what harm it has done. One of the harms is that outside information was prevented from being brought into the analysis. One of his excellent examples is ESP research, where he shows that obtaining a low p-value is meaningless without factoring in the low prior likelihood that the laws of physics can be suspended. So one can analyze all day the amount of evidence in a p-value, but I don't think that is quite worth the trouble.
|
16,933
|
Bayesian criticism of frequentist p-value
|
The idea that p-values overstate the evidence against the null hypothesis is partly due to a misunderstanding about the nature of p-values (as others have mentioned). A p-value can be regarded as a measure to address the hypothesis testing question: Is the data consistent with the null hypothesis? Bayesian posterior probabilities are Bayesian answers to a different question: Given the data, what is the relative plausibility of each hypothesis? The key word here is relative. It should hardly be surprising then that answers to such different questions can differ, even within the one school. In short, P-values are about consistency with one hypothesis, and not about the relative plausibility of two hypotheses.
|
16,934
|
Bayesian criticism of frequentist p-value
|
The key to appreciating frequentist inference is to frame it similarly to a proof by contradiction. It is by design that the p-value follows a uniform distribution under the null, i.e. $20\%$ of the time you get a p-value less than or equal to $0.2$. It answers the question, "If this other hypothesis is true, how often would I get a result like the one I've just witnessed?" It's like playing devil's advocate. A reality check. If the p-value were not uniformly distributed under the null you would not get this reality check. There is no bias, nor is there an "independent property", because the population parameters do not randomly change from one repeated experiment to the next according to anyone's belief.
Bayesians have a very different way of thinking, so it is not surprising that these criticisms would be levied. Bayesian "probability" measures the belief of the experimenter, and beliefs are not facts. Any claim that the p-value overstates the evidence compared to a posterior probability is mistaking beliefs for facts.
There are one-to-one analogs on everything between the two paradigms.
Bayesians often criticize frequentism for not incorporating outside information into an analysis, but Bayesian updating of a prior into a posterior maps to a frequentist meta-analysis with p-values and confidence intervals [1] [2]. Bayesians often criticise frequentism for not accounting for all uncertainty in a model when making predictions, but Bayesian predictive distributions map to frequentist prediction intervals [3]. However, this one-to-one mapping is not a reason to use an unfalsifiable subjective definition of probability.
Even in non-normal models, central limit theorem aside, it is possible to construct confidence intervals by inverting a hypothesis test or a cumulative distribution function to find a range of plausible values of a parameter.
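That last point can be made concrete. Here is a sketch (Python; a z-test for a normal mean with known sd, with a grid search purely for illustration) of obtaining a confidence interval by inverting a hypothesis test:

```python
import math

def ci_by_test_inversion(xbar, sd, n, alpha=0.05):
    # the CI is the set of mu0 that a level-alpha two-sided z-test fails to reject
    def p_value(mu0):
        z = abs(xbar - mu0) * math.sqrt(n) / sd
        return math.erfc(z / math.sqrt(2))
    grid = [xbar + sd * k / 1000.0 for k in range(-4000, 4001)]
    kept = [mu0 for mu0 in grid if p_value(mu0) > alpha]
    return min(kept), max(kept)

lo, hi = ci_by_test_inversion(xbar=0.0, sd=1.0, n=25)
print(round(lo, 3), round(hi, 3))  # ~(-0.392, 0.392), i.e. xbar +/- 1.96*sd/sqrt(n)
```

The grid of "plausible values" recovers the familiar $\bar{x} \pm 1.96\,\sigma/\sqrt{n}$ interval; for non-normal models one inverts the appropriate exact or asymptotic test instead.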
|
16,935
|
Doing MCMC: use jags/stan or implement it myself
|
In general, I would strongly suggest not coding your own MCMC for a real applied Bayesian analysis. This is both a good deal of work and time and very likely to introduce bugs in the code. Blackbox samplers, such as Stan, already use very sophisticated samplers. Trust me, you will not code a sampler of this caliber just for one analysis!
There are special cases in which this will not be sufficient. For example, if you needed to do an analysis in real time (i.e. computer decisions based on incoming data), these programs would not be a good idea. This is because Stan requires compiling C++ code, which may take considerably more time than just running an already-prepared sampler for relatively simple models. In that case, you may want to write your own code. In addition, I believe there are special cases where packages like Stan do very poorly, such as non-Gaussian state-space models (full disclosure: I believe Stan does poorly in this case, but do not know for sure). In such cases it may be worth implementing a custom MCMC. But this is the exception, not the rule!
To be quite honest, I think most researchers who write samplers for a single analysis (and this does happen, I have seen it) do so because they like to write their own samplers. At the very least, I can say that I fall under that category (i.e. I'm disappointed that writing my own sampler is not the best way to do things).
Also, while it does not make sense to write your own sampler for a single analysis, it can make a lot of sense to write your own code for a class of analyses. Since JAGS, Stan, etc. are black-box samplers, you can always make things faster by specializing for a given model, although the amount of improvement is model dependent. But writing an extremely efficient sampler from the ground up is maybe 10-1,000 hours of work, depending on experience, model complexity, etc. If you're doing research in Bayesian methods or writing statistical software, that's fine; it's your job. But if your boss says "Hey, can you analyze this repeated measures data set?" and you spend 250 hours writing an efficient sampler, your boss is likely to be upset. In contrast, you could have written this model in Stan in, say, 2 hours, and had 2 minutes of run time instead of the 1 minute achieved by the efficient custom sampler.
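For scale, here is roughly what the do-it-yourself baseline looks like: a bare-bones random-walk Metropolis sampler (sketched in Python rather than R, with a standard normal target purely for illustration). Everything beyond this, such as adaptation, gradient-based proposals, and diagnostics, is where the 10-1,000 hours go.

```python
import math
import random

def metropolis(log_density, x0, n_draws, step=1.0):
    # random-walk Metropolis: propose x' ~ N(x, step^2),
    # accept with probability min(1, pi(x') / pi(x))
    x, draws = x0, []
    for _ in range(n_draws):
        proposal = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_density(proposal) - log_density(x):
            x = proposal
        draws.append(x)
    return draws

random.seed(0)
draws = metropolis(lambda t: -0.5 * t * t, x0=0.0, n_draws=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # close to the target's mean 0 and variance 1
```

This runs, and for a one-dimensional toy target it mixes fine; the point of the answer above is that getting from here to something competitive on a real hierarchical model is the expensive part.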
|
Doing MCMC: use jags/stan or implement it myself
|
In general, I would strongly suggest not coding your own MCMC for a real applied Bayesian analysis. This is both a good deal of work and time and very likely to introduce bugs in the code. Blackbox sa
|
Doing MCMC: use jags/stan or implement it myself
In general, I would strongly suggest not coding your own MCMC for a real applied Bayesian analysis. This is both a good deal of work and time and very likely to introduce bugs in the code. Blackbox samplers, such as Stan, already use very sophisticated samplers. Trust me, you will not code a sampler of this caliber just for one analysis!
There are special cases in which in this will not be sufficient. For example, if you needed to do an analysis in real time (i.e. computer decision based on incoming data), these programs would not be a good idea. This is because Stan requires compiling C++ code, which may take considerably more time than just running an already prepared sampler for relatively simple models. In that case, you may want to write your own code. In addition, I believe there are special cases where packages like Stan do very poorly, such as Non-Gaussian state-space models (full disclosure: I believe Stan does poorly in this case, but do not know). In that case, it may be worth it to implement a custom MCMC. But this is the exception, not the rule!
To be quite honest, I think most researchers who write samplers for a single analysis (and this does happen, I have seen it) do so because they like to write their own samplers. At the very least, I can say that I fall under that category (i.e. I'm disappointed that writing my own sampler is not the best way to do things).
Also, while it does not make sense to write your own sampler for a single analysis, it can make a lot of sense to write your own code for a class of analyses. Because JAGS, Stan, etc. are black-box samplers, you can always make things faster by specializing for a given model, although the amount of improvement is model dependent. But writing an extremely efficient sampler from the ground up is maybe 10-1,000 hours of work, depending on experience, model complexity, etc. If you're doing research in Bayesian methods or writing statistical software, that's fine; it's your job. But if your boss says "Hey, can you analyze this repeated measures data set?" and you spend 250 hours writing an efficient sampler, your boss is likely to be upset. In contrast, you could have written this model in Stan in, say, 2 hours, and had 2 minutes of run time instead of the 1 minute run time achieved by the efficient sampler.
Doing MCMC: use jags/stan or implement it myself
This question is primarily opinion based, but I think there is enough here to write an answer down. There could be many reasons to code one's own sampler for a research problem. Here are some of them:
Proposal: As fcop suggested in their comment, if the sampler is M-H, then coding your own sampler lets you play around with proposal distributions to get the best-mixing sampler.
Flexibility: In-built programs might not give you the flexibility you want. You might want to start at a specific random value, or use a specific seed structure.
Understanding: Coding your own sampler helps you understand the behavior of the sampler, giving insights to the Markov chain process. This is useful for a researcher working on the problem.
Onus: If the inference I am making comes from a program that I didn't code up, then the onus for that inference is no longer on me. As a researcher, I would like to take full responsibility for the methods/results I present. Using in-built methods does not allow you to do that.
There are probably more reasons, but these are the four that make me code my own samplers.
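To make the "Proposal" point concrete, here is a minimal random-walk Metropolis sampler in R. This is my own illustrative sketch, not code from the answer: the standard-normal target and the `prop_sd` value are toy choices, and `prop_sd` is exactly the knob you get to tune when you code the sampler yourself.

```r
# Random-walk Metropolis sketch: target is a standard normal log-density
# (a toy choice); prop_sd is the proposal scale you would tune for mixing.
metropolis <- function(n_iter, init = 0, prop_sd = 1,
                       log_target = function(x) dnorm(x, log = TRUE)) {
  draws <- numeric(n_iter)
  current <- init
  accepted <- 0
  for (i in seq_len(n_iter)) {
    proposal <- rnorm(1, mean = current, sd = prop_sd)  # symmetric proposal
    log_ratio <- log_target(proposal) - log_target(current)
    if (log(runif(1)) < log_ratio) {   # M-H accept/reject step
      current <- proposal
      accepted <- accepted + 1
    }
    draws[i] <- current
  }
  list(draws = draws, accept_rate = accepted / n_iter)
}

set.seed(1)
fit <- metropolis(5000, prop_sd = 2.4)  # try other prop_sd values, compare mixing
fit$accept_rate
```

Rerunning with, say, `prop_sd = 0.1` or `prop_sd = 25` shows the poor mixing (very high or very low acceptance rates) that tuning is meant to avoid.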
Doing MCMC: use jags/stan or implement it myself
I gave a +1 to Cliff AB's answer. To add one little tidbit, if you want to work at a lower level but not down to the code-everything-yourself level, you should poke around for the LaplacesDemon package. The original author was brilliant, but seems to have dropped off the grid, and the package has been taken over by someone else. (It's on Github, I believe.)
It implements an impressive number of algorithms used in MCMC, and the included vignettes are worth the read even if you don't use the package. Pretty much any kind of sampler you read about, it has. You code in a different way than BUGS/JAGS or Stan, and it's all in R, but oftentimes it's so efficient that it's competitive.
Why is F-test so sensitive for the assumption of normality?
I presume you mean the F-test for the ratio of variances when testing a pair of sample variances for equality (because that's the simplest one that's quite sensitive to normality; the F-test for ANOVA is less sensitive).
If your samples are drawn from normal distributions, the sample variance has a scaled chi-square distribution.
Imagine that instead of data drawn from normal distributions, you had a distribution that was heavier-tailed than normal. Then you'd get too many large variances relative to that scaled chi-square distribution, and the probability of the sample variance getting out into the far right tail is very responsive to the tails of the distribution from which the data were drawn. (There will also be too many small variances, but the effect is a bit less pronounced.)
Now if both samples are drawn from that heavier-tailed distribution, the larger tail on the numerator will produce an excess of large F values and the larger tail on the denominator will produce an excess of small F values (and vice versa for the left tail).
Both of these effects will tend to lead to rejection in a two-tailed test, even though both samples have the same variance. This means that when the true distribution is heavier tailed than normal, actual significance levels tend to be higher than we want.
Conversely, drawing a sample from a lighter tailed distribution produces a distribution of sample variances that's got too short a tail -- variance values tend to be more "middling" than you get with data from normal distributions. Again, the impact is stronger in the far upper tail than the lower tail.
Now if both samples are drawn from that lighter-tailed distribution, this results in an excess of F values near the median and too few in either tail (actual significance levels will be lower than desired).
These effects don't seem to necessarily reduce much with larger sample size; in some cases it seems to get worse.
By way of partial illustration, here are 10000 sample variances (for $n=10$) for normal, $t_5$ and uniform distributions, scaled to have the same mean as a $\chi^2_9$:
It's a bit hard to see the far tail since it's relatively small compared to the peak (and for the $t_5$ the observations in the tail extend out a fair way past where we have plotted to), but we can see something of the effect on the distribution of the variance. It's perhaps even more instructive to transform these by the inverse of the chi-square cdf,
which in the normal case looks uniform (as it should), in the t-case has a big peak in the upper tail (and a smaller peak in the lower tail) and in the uniform case is more hill-like but with a broad peak around 0.6 to 0.8 and the extremes have much lower probability than they should if we were sampling from normal distributions.
These in turn produce the effects on the distribution of the ratio of variances I described before. Again, to improve our ability to see the effect on the tails (which can be hard to see), I've transformed by the inverse of the cdf (in this case for the $F_{9,9}$ distribution):
In a two-tailed test, we look at both tails of the F distribution; both tails are over-represented when drawing from the $t_5$ and both are under-represented when drawing from a uniform.
There would be many other cases to investigate for a full study, but this at least gives a sense of the kind and direction of effect, as well as how it arises.
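As one such check, here is a small R simulation (my own sketch, not from the answer) estimating the actual size of the two-sided variance-ratio F-test at nominal $\alpha = 0.05$ when both samples of size 10 come from the same distribution with equal variance:

```r
# Estimate the actual rejection rate of the two-sided F-test for equal
# variances when H0 is true, under three parent distributions.
reject_rate <- function(rdist, n = 10, reps = 10000, alpha = 0.05) {
  crit_lo <- qf(alpha / 2, n - 1, n - 1)
  crit_hi <- qf(1 - alpha / 2, n - 1, n - 1)
  f <- replicate(reps, var(rdist(n)) / var(rdist(n)))  # two independent samples
  mean(f < crit_lo | f > crit_hi)
}

set.seed(123)
r_norm <- reject_rate(function(n) rnorm(n))       # near the nominal 0.05
r_t    <- reject_rate(function(n) rt(n, df = 5))  # heavier tails: inflated
r_unif <- reject_rate(function(n) runif(n))       # lighter tails: deflated
c(normal = r_norm, t5 = r_t, uniform = r_unif)
```

The heavy-tailed $t_5$ gives a rejection rate well above 0.05 and the uniform well below it, matching the direction of the effects described above.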
Why is F-test so sensitive for the assumption of normality?
As Glen_b has illustrated brilliantly in his simulations, the F-test for a ratio of variances is sensitive to the tails of the distribution. The reason for this is that the variance of a sample variance depends on the kurtosis parameter, and so the kurtosis of the underlying distribution has a strong effect on the distribution of the ratio of sample variances.
To deal with this issue, O'Neill (2014) derives a more general distributional approximation for ratios of variances that accounts for the kurtosis of the underlying distribution. In particular, if you have a population variance $S_N^2$ and sample variance $S_n^2$ with $n<N$ then Result 15 of that paper gives the distributional approximation$^\dagger$:
$$\frac{S_N^2}{S_n^2} \overset{\text{Approx}}{\sim} \frac{n-1}{N-1} + \frac{N-n}{N-1} \cdot F(DF_C, DF_n),$$
where the degrees-of-freedom (which depend on the underlying kurtosis $\kappa$) are:
$$DF_n = \frac{2n}{\kappa - (n-3)/(n-1)} \quad \quad \quad DF_C = \frac{2(N-n)}{2+(\kappa-3)(1-2/N+1/Nn)}.$$
In the special case of a mesokurtic distribution (e.g., the normal distribution) you have $\kappa=3$, which gives the standard degrees-of-freedom $DF_n = n-1$ and $DF_C = N-n$.
Although the distribution of the variance-ratio is sensitive to the underlying kurtosis, it is not actually very sensitive to normality per se. If you use a mesokurtic distribution with a different shape to the normal, you will find that the standard F-distribution approximation performs quite well. In practice the underlying kurtosis is unknown, so implementation of the above formula requires substitution of an estimator $\hat{\kappa}$. With such a substitution the approximation should perform reasonably well.
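For concreteness, here is a small R sketch (mine, not from the paper) of the degrees-of-freedom formulas above, with a check of the mesokurtic special case:

```r
# Kurtosis-adjusted degrees of freedom from the formulas above.
# For kappa = 3 these reduce to the standard values n - 1 and N - n.
adj_df <- function(n, N, kappa) {
  DF_n <- 2 * n / (kappa - (n - 3) / (n - 1))
  DF_C <- 2 * (N - n) / (2 + (kappa - 3) * (1 - 2 / N + 1 / (N * n)))
  c(DF_n = DF_n, DF_C = DF_C)
}

df_std <- adj_df(n = 10, N = 100, kappa = 3)   # mesokurtic case: c(9, 90)
df_std
adj_df(n = 10, N = 100, kappa = 9)             # heavy tails shrink the df
```

In practice one would plug an estimate $\hat{\kappa}$ into the `kappa` argument, as noted above.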
$^\dagger$ Note that this paper defines the population variance using Bessel's correction (for reasons stated in the paper, pp. 282-283). So the denominator of the population variance is $N-1$ in this analysis, not $N$. (This is actually a more helpful way to do things, since the population variance is then an unbiased estimator of the superpopulation variance parameter.)
How to interpret a ROC curve?
When you do logistic regression, you are given two classes coded as $1$ and $0$. Now, you compute probabilities that, given some explanatory variables, an individual belongs to the class coded as $1$. If you now choose a probability threshold and classify all individuals with a probability greater than this threshold as class $1$ and below as $0$, you will in most cases make some errors, because usually two groups cannot be discriminated perfectly. For this threshold you can now compute your errors and the so-called sensitivity and specificity. If you do this for many thresholds, you can construct a ROC curve by plotting sensitivity against 1-specificity for many possible thresholds. The area under the curve comes into play if you want to compare different methods that try to discriminate between two classes, e.g. discriminant analysis or a probit model. You can construct the ROC curve for all these models, and the one with the highest area under the curve can be seen as the best model.
If you need to get a deeper understanding, you can also read the answer of a different question regarding ROC curves by clicking here.
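The threshold-sweeping construction described above can be sketched in a few lines of R. The data here are simulated toy values (my own assumption, not from the answer): `p_hat` plays the role of fitted probabilities and `y` the true classes.

```r
# Build a ROC curve by sweeping the probability threshold and computing
# sensitivity (tpr) and 1 - specificity (fpr) at each value.
set.seed(2)
p_hat <- c(rbeta(50, 4, 2), rbeta(50, 2, 4))   # toy fitted probabilities
y     <- rep(c(1, 0), each = 50)               # true classes

thresholds <- sort(unique(c(0, p_hat, 1)), decreasing = TRUE)
roc <- t(sapply(thresholds, function(t) {
  pred <- as.numeric(p_hat >= t)
  c(tpr = sum(pred == 1 & y == 1) / sum(y == 1),   # sensitivity
    fpr = sum(pred == 1 & y == 0) / sum(y == 0))   # 1 - specificity
}))

# Area under the curve via the trapezoidal rule over the (fpr, tpr) points
auc <- sum(diff(roc[, "fpr"]) *
           (head(roc[, "tpr"], -1) + tail(roc[, "tpr"], -1)) / 2)
auc
```

Plotting `roc[, "fpr"]` against `roc[, "tpr"]` gives the curve itself; `auc` is the quantity used to compare models.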
How to interpret a ROC curve?
The AUC is basically just telling you how frequently a random draw from your predicted response probabilities on your 1-labeled data will be greater than a random draw from your predicted response probabilities on your 0-labeled data.
How to interpret a ROC curve?
The logistic regression model is a direct probability estimation method. Classification should play no role in its use. Any classification not based on assessing utilities (loss/cost function) on individual subjects is inappropriate except in very special emergencies. The ROC curve is not helpful here; neither are sensitivity or specificity which, like overall classification accuracy, are improper accuracy scoring rules that are optimized by a bogus model not fitted by maximum likelihood estimation.
Note that you achieve high predictive discrimination (high $c$-index (ROC area)) by overfitting the data. You need perhaps at least $15p$ observations in the least frequent category of $Y$, where $p$ is the number of candidate predictors being considered, in order to obtain a model that is not significantly overfitted [i.e., a model that is likely to work on new data about as well as it worked on the training data]. You need at least 96 observations just to estimate the intercept such that the predicted risk has a margin of error $\leq 0.1$ with 0.95 confidence.
How to interpret a ROC curve?
I'm not the author of this blog, but I found it helpful: http://fouryears.eu/2011/10/12/roc-area-under-the-curve-explained
Applying this explanation to your data, the average positive example has about 10% of negative examples scored higher than it.
How to interpret a ROC curve?
ROC-curves can be computed for several different types of discriminative classifiers.
History
Originally developed for analyzing radar signals during the Second World War (D. Green & J. Swets (1966). Signal Detection Theory and Psychophysics. Wiley), ROC-curves soon became applied in medicine.
Medical applications
First, an individual medical test result was characterized by its ROC-curve. Take for example the measurement of hemoglobin in a patient, done by a lab in a hospital. Such tests are widely applied to diagnose anemia. In a peripheral hospital, the probability of a too-low hemoglobin reading will differ from that of a specialized university hospital. The prior probability of anemia is different between the two types of hospitals because only a small fraction of the anemia patients cannot be diagnosed in the peripheral hospital. Only these difficult cases become referred to the specialized university hospital. A ROC-curve lets laboratory staff characterize the discriminative ability of the hemoglobin test for different prior probabilities of anemia.
ROC-curves in machine learning
Machine learning adapted ROC-curves to characterize the discriminative performance of classifiers. Besides logistic and probit models, several other types of two-class classifiers can be evaluated using a ROC-curve. As long as the classifier outputs posterior probability estimates you can compute a ROC-curve by varying the discriminative threshold that discerns the two classes. Eligible classifiers are random forests, multilayer perceptrons with sigmoid activation units, the multinomial classifier, the probabilistic k-nearest neighbor classifier, the probability outcomes of insight classifiers, a probabilistic support vector machine, and even more types. Some machine learning suites like Weka offer ROC-analysis out-of-the-box.
The area under the ROC-curve is a measure of the total discriminative performance of a two-class classifier, for any given prior probability distribution. Note that a specific classifier can perform really well in one part of the ROC-curve but show a poor discriminative ability in a different part of the ROC-curve.
Does standardising independent variables reduce collinearity?
It doesn't change the collinearity between the main effects at all. Scaling doesn't either. Any linear transform won't do that. What it changes is the correlation between main effects and their interactions. Even if A and B are independent with a correlation of 0, the correlation between A and A:B will be dependent upon scale factors.
Try the following in an R console. Note that rnorm just generates random samples from a normal distribution with population values you set, in this case 50 samples. The scale function standardizes the sample to a mean of 0 and SD of 1.
set.seed(1) # the samples will be controlled by setting the seed - you can try others
a <- rnorm(50, mean = 0, sd = 1)
b <- rnorm(50, mean = 0, sd = 1)
mean(a); mean(b)
# [1] 0.1004483 # not the population mean, just a sample
# [1] 0.1173265
cor(a ,b)
# [1] -0.03908718
The incidental correlation is near 0 for these independent samples. Now normalize to mean of 0 and SD of 1.
a <- scale( a )
b <- scale( b )
cor(a, b)
# [1,] -0.03908718
Again, this is the exact same value even though the mean is 0 and SD = 1 for both a and b.
cor(a, a*b)
# [1,] -0.01038144
This is also very near 0. (a*b can be considered the interaction term)
However, usually the SD and mean of predictors differ quite a bit so let's change b. Instead of taking a new sample I'll rescale the original b to have a mean of 5 and SD of 2.
b <- b * 2 + 5
cor(a, b)
# [1] -0.03908718
Again, that familiar correlation we've seen all along. The scaling is having no impact on the correlation between a and b. But!!
cor(a, a*b)
# [1,] 0.9290406
Now that will have a substantial correlation which you can make go away by centring and/or standardizing. I generally go with just the centring.
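To close the loop on that last point, here is the centring step itself, as a follow-on sketch of my own (the original answer stops before showing it). It repeats the answer's setup so it runs on its own:

```r
# Re-create the example's data, then centre b to remove the induced
# correlation between a and the interaction a*b.
set.seed(1)
a <- rnorm(50, mean = 0, sd = 1)
b <- rnorm(50, mean = 0, sd = 1) * 2 + 5   # mean 5, SD 2, as above

c_raw <- cor(a, a * b)         # substantial, as shown above
b_centred <- b - mean(b)       # mean 0, SD still 2
c_ctr <- cor(a, a * b_centred) # back near 0
c(raw = c_raw, centred = c_ctr)
```

Centring alone is enough here; standardizing (dividing by the SD as well) would give the same correlation, since correlation is unaffected by rescaling.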
EDIT: @Tim has an answer here that's a bit more directly on topic. I didn't have Kruschke at the time. The correlation between intercept and slope is similar to the issue of correlation with interactions, though. They're both about conditional relationships. The intercept is conditional on the slope; but unlike an interaction, it's one-way, because the slope is not conditional on the intercept. Regardless, if the slope varies, so will the intercept, unless the mean of the predictor is 0. Standardizing or centring the predictor variables will minimize the effect of the intercept changing with the slope, because the mean will be at 0 and therefore the regression line will pivot at the y-axis and its slope will have no effect on the intercept.
|
16,946
|
Does standardising independent variables reduce collinearity?
|
As others have already mentioned, standardization has really nothing to do with collinearity.
Perfect collinearity
Let's start with what standardization (a.k.a. normalization) is: by it we mean subtracting the mean and dividing by the standard deviation, so that the resulting mean is equal to zero and the standard deviation to unity. So if a random variable $X$ has mean $\mu_X$ and standard deviation $\sigma_X$, then
$$
\newcommand{Var}{\mathrm{Var}}
Z_X = \frac{X - \mu_X}{\sigma_X}
$$
has mean $\mu_Z = 0$ and standard deviation $\sigma_Z = 1$ given the properties of expected value and variance that $E(X + a) = E(X) + a$, $E(bX) = b\,E(X)$ and $\Var(X + a) = \Var(X)$, $\Var(bX) = b^2 \Var(X)$, where $X$ is r.v. and $a,b$ are constants.
We say that two variables $X$ and $Y$ are perfectly collinear if there exist values $\lambda_0$ and $\lambda_1$ such that
$$
Y = \lambda_0 + \lambda_1 X
$$
It follows that if $X$ has mean $\mu_X$ and standard deviation $\sigma_X$, then $Y$ has mean $\mu_Y = \lambda_0 + \lambda_1 \mu_X$ and standard deviation $\sigma_Y = \lambda_1 \sigma_X$ (taking $\lambda_1 > 0$). Now, when we standardize both variables (remove their means and divide by standard deviations), we get $Z_X = Z_Y$...
Correlation
Of course perfect collinearity is not something that we would see that often, but strongly correlated variables may also be a problem (and they are closely related to collinearity). So does standardization affect correlation? Please compare the following plots showing two correlated variables before and after scaling:
Can you spot the difference? As you can see, I purposefully removed the axis labels, so to convince you that I'm not cheating, see the plots with added labels:
Mathematically speaking, if correlation is
$$
\newcommand{Corr}{\mathrm{Corr}}
\newcommand{Cov}{\mathrm{Cov}}
\Corr(X, Y) = \frac{\Cov(X,Y)}{\sqrt{\Var(X)\,\Var(Y)}}
$$
then with collinear variables we have
$$
\require{cancel}
\begin{align}
\Corr(X, Y) &= \frac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X\sigma_Y} \\
&=\frac{E[(X - \mu_X)(\cancel{\lambda_0} + \lambda_1 X - \cancel{\lambda_0} - \lambda_1\mu_X )]}{\sigma_X\;\lambda_1\sigma_X} \\
&= \frac{E[(X - \mu_X)(\lambda_1 X - \lambda_1\mu_X )]}{\sigma_X\;\lambda_1\sigma_X} \\
&= \frac{E[(X - \mu_X)\lambda_1( X - \mu_X )]}{\sigma_X\;\lambda_1\sigma_X} \\
&= \frac{\cancel{\lambda_1} E[(X - \mu_X)(X - \mu_X )]}{\sigma_X\;\cancel{\lambda_1}\sigma_X} \\
&= \frac{E[(X - \mu_X)(X - \mu_X )]}{\sigma_X\sigma_X}
\end{align}
$$
now since $\Cov(X,X) = \Var(X)$,
$$
\begin{align}
&= \frac{\Cov(X, X)}{\sigma_X^2} = \frac{\Var(X)}{\Var(X)} = 1
\end{align}
$$
While with standardized variables
$$
\begin{align}
\Corr(Z_X, Z_Y) &= \frac{E[(Z_X - 0)(Z_Y - 0)]}{1 \times 1} \\
&= \Cov(Z_X, Z_Y) = \Var(Z_X) = 1
\end{align}
$$
since $Z_X = Z_Y$...
Finally, notice that what Kruschke is talking about is that standardizing the variables makes life easier for the Gibbs sampler and reduces the correlation between the intercept and slope in the regression model he presents. He doesn't say that standardizing variables reduces collinearity between the variables.
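A quick numerical check of the invariance (a Python/numpy sketch with made-up data; any linear rescaling of either variable would give the same result):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 3, 200)
y = 2 * x + rng.normal(0, 4, 200)   # strongly correlated with x

def standardize(v):
    """Subtract the mean and divide by the standard deviation."""
    return (v - v.mean()) / v.std()

r_before = np.corrcoef(x, y)[0, 1]
r_after = np.corrcoef(standardize(x), standardize(y))[0, 1]
print(r_before, r_after)   # identical up to floating-point error
```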
|
16,947
|
Does standardising independent variables reduce collinearity?
|
Standardization does not affect the correlation between variables. They remain exactly the same. The correlation captures the synchronization of the direction of the variables, and there is nothing in standardization that changes the direction of the variables.
If you want to eliminate multicollinearity between your variables, I suggest using Principal Component Analysis (PCA). As you know PCA is very effective in eliminating the multicollinearity problem. On the other hand PCA renders the combined variables (principal components P1, P2, etc...) rather opaque. A PCA model is always a lot more challenging to explain than a more traditional multivariate one.
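To illustrate the PCA point, here is a minimal numpy-only sketch with two artificially collinear predictors (eigendecomposition of the covariance matrix is one of several equivalent ways to compute PCA):

```python
import numpy as np

rng = np.random.default_rng(42)
x1 = rng.normal(0, 1, 300)
x2 = x1 + rng.normal(0, 0.1, 300)   # nearly a copy of x1: severe collinearity
X = np.column_stack([x1, x2])

# PCA: eigendecomposition of the covariance matrix of the centred data
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ eigvecs               # principal-component scores

r_original = np.corrcoef(x1, x2)[0, 1]                        # close to 1
r_components = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]  # essentially 0
print(r_original, r_components)
```

The components are uncorrelated by construction, which is exactly why PCA eliminates multicollinearity while making the resulting predictors harder to interpret.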
|
16,948
|
Does standardising independent variables reduce collinearity?
|
It doesn't reduce the collinearity itself, but it can reduce the VIF. Commonly we use the VIF as an indicator of collinearity concerns.
Source: http://blog.minitab.com/blog/adventures-in-statistics-2/what-are-the-effects-of-multicollinearity-and-when-can-i-ignore-them
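A sketch of the VIF effect on interaction terms, with the VIF computed from scratch (via the R² of regressing each column on the others, so it doesn't depend on statsmodels; the data are made up):

```python
import numpy as np

def vif(X):
    """VIF of each column: 1/(1 - R^2) from regressing it on the others (plus intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1 / (1 - r2))
    return out

rng = np.random.default_rng(7)
a = rng.normal(10, 2, 500)
b = rng.normal(20, 3, 500)

vif_raw = vif(np.column_stack([a, b, a * b]))           # interaction VIF is huge
ac, bc = a - a.mean(), b - b.mean()
vif_centred = vif(np.column_stack([ac, bc, ac * bc]))   # all near 1
print(vif_raw)
print(vif_centred)
```

With means far from zero, `a*b` is almost a linear combination of `a` and `b`, so its VIF explodes; after centring, the interaction is nearly orthogonal to the main effects.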
|
16,949
|
Does standardising independent variables reduce collinearity?
|
This is actually a very important post because it refers to two crucial milestones in preprocessing, and I would like to point out that the semantics of the word standardization are very heterogeneous - while some may have a linear transformation in mind, others would refer to the workhorse scalers used in sklearn. This post made me reflect upon the implications of using the scalers.
This short exercise with the iris data from sklearn demonstrates some of the above mentioned statements and adds another aspect:
linear correlations do not change after a simple linear transformation by multiplication, or after scaling using linear scalers.
VIF does not change after a simple multiplication by a constant, but it changes substantially if we apply scalers.
linear correlations may change if we apply nonlinear scalers, like power transform.
The latter is very important because nonlinear scalers change both i) the VIF and ii) linear correlations. Therefore I would like to add that standardization may have implications for correlations if nonlinear scalers are applied. Sorry if the Python code below is clumsy, but I think that it is worth taking a closer look at the implications of scaling.
import math
from sklearn import preprocessing
import pandas as pd
from sklearn.datasets import load_iris
import copy as cp
from statsmodels.stats.outliers_influence import variance_inflation_factor
### VIF for original data
iris_data = load_iris()
iris_source = pd.DataFrame(data = iris_data['data'], columns = iris_data['feature_names'])
iris_cols = iris_source.columns.str.strip("(cm)")
iris_df = cp.deepcopy(iris_source)
iris_df = iris_df.set_axis(iris_cols, axis=1)  # 'inplace' was removed from set_axis in recent pandas
vif_data = pd.DataFrame()
vif_data['Features'] = iris_df.columns
vif_data['VIF'] = [variance_inflation_factor(iris_df.values, i) for i in range(len(iris_df.columns))]
corrmat_orig = pd.DataFrame(round(iris_df.corr(), 4))
### Applying linear transformation by constant multiplication
iris_df_lintrans=cp.deepcopy(iris_df)
iris_df_lintrans['sepal length ']=iris_df['sepal length ']*10
iris_df_lintrans['sepal width ']=iris_df['sepal width ']*50
vif_data_lintrans = pd.DataFrame()
vif_data_lintrans['Features'] = iris_df_lintrans.columns
vif_data_lintrans['VIF'] = [variance_inflation_factor(iris_df_lintrans.values, i) for i in range(len(iris_df_lintrans.columns))]
corrmat_lintrans = pd.DataFrame(round(iris_df_lintrans.corr(), 4))
### Applying workhorse sklearn scalers, except nonlinear ones
from sklearn.preprocessing import StandardScaler, MinMaxScaler
scaler = MinMaxScaler() ### can try MinMaxScaler or any other linear scaler
iris_scaler=scaler.fit(iris_df_lintrans)
iris_transformed=pd.DataFrame(iris_scaler.transform(iris_df_lintrans), columns=iris_df_lintrans.columns)
vif_data_scaled = pd.DataFrame()
vif_data_scaled['Features'] = iris_transformed.columns
vif_data_scaled['VIF'] = [variance_inflation_factor(iris_transformed.values, i) for i in range(len(iris_transformed.columns))]
vif_data_compare = pd.DataFrame()
vif_data_compare['Features'] = iris_transformed.columns
vif_data_compare['VIF_orig'] = vif_data['VIF']
vif_data_compare['VIF_lintrans'] = vif_data_lintrans['VIF']
vif_data_compare['VIF_scaled']= vif_data_scaled['VIF']
corrmat_scaled = pd.DataFrame(round(iris_transformed.corr(), 4))
### Print VIF for each case - VIF unchanged after multiplication but decreases after scaling using "workhorse" sklearn scalers
print('─' * 100)
print(vif_data_compare)
print('─' * 100)
### Print correlation matrix for each case - no changes
print(corrmat_orig)
print('─' * 100)
print(corrmat_lintrans)
print('─' * 100)
print(corrmat_scaled)
print('─' * 100)
### Nonlinear scalers may distort linear correlations
### Applying nonlinear PowerTransformer sklearn scaler
from sklearn.preprocessing import StandardScaler, MinMaxScaler, PowerTransformer
scaler = PowerTransformer() ### can try MinMaxScaler or any other linear scaler
iris_scaler=scaler.fit(iris_df_lintrans)
iris_transformed=pd.DataFrame(iris_scaler.transform(iris_df_lintrans), columns=iris_df_lintrans.columns)
vif_data_scaled = pd.DataFrame()
vif_data_scaled['Features'] = iris_transformed.columns
vif_data_scaled['VIF'] = [variance_inflation_factor(iris_transformed.values, i) for i in range(len(iris_transformed.columns))]
vif_data_compare = pd.DataFrame()
vif_data_compare['Features'] = iris_transformed.columns
vif_data_compare['VIF_orig'] = vif_data['VIF']
vif_data_compare['VIF_lintrans'] = vif_data_lintrans['VIF']
vif_data_compare['VIF_scaled']= vif_data_scaled['VIF']
corrmat_scaled = pd.DataFrame(round(iris_transformed.corr(), 4))
print('─' * 100)
print(corrmat_scaled)
print('─' * 100)
|
16,950
|
Does standardising independent variables reduce collinearity?
|
Standardization is a common way to reduce collinearity. (You should be able to verify very quickly that it works by trying it out on a couple of pairs of variables.) Whether you do it routinely depends on how much of a problem collinearity is in your analyses.
Edit: I see I was in error. What standardizing does do, though, is reduce collinearity with product terms (interaction terms).
|
16,951
|
Is it possible that the AIC and BIC give totally different model selections?
|
It is possible indeed. As explained at https://methodology.psu.edu/AIC-vs-BIC, "BIC penalizes model complexity more heavily. The only way they should disagree is when AIC chooses a larger model than BIC."
If your goal is to identify a good predictive model, you should use the AIC. If your goal is to identify a good explanatory model, you should use the BIC. Rob Hyndman nicely summarizes this recommendation at
https://robjhyndman.com/hyndsight/to-explain-or-predict/:
"The AIC is better suited to model selection for prediction as it is asymptotically equivalent to leave-one-out cross-validation in regression, or one-step-cross-validation in time series. On the other hand, it might be argued that the BIC is better suited to model selection for explanation, as it is consistent."
The recommendation comes from Galit Shmueli’s paper “To explain or to predict?”, Statistical Science, 25(3), 289-310 (https://projecteuclid.org/euclid.ss/1294167961).
Addendum:
There is a third type of modeling - descriptive modeling - but I don't know of any references on which of AIC or BIC is best suited for identifying an optimal descriptive model. I hope others here can chime in with their insights.
|
16,952
|
Is it possible that the AIC and BIC give totally different model selections?
|
Short answer: yes, it is very possible. The two apply different penalties based on the number of estimated parameters ($2k$ for AIC vs $k \ln(n)$ for BIC, where $k$ is the number of estimated parameters and $n$ is the sample size). Thus, if the likelihood gain from adding a parameter is small, BIC may select a different model from AIC. This effect is dependent on sample size, however.
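A toy illustration of the two penalties with hypothetical log-likelihoods (the numbers are invented purely to show the mechanics):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln L."""
    return k * math.log(n) - 2 * loglik

n = 1000
# Suppose the larger model buys 3 units of log-likelihood with 2 extra parameters
ll_small, k_small = -500.0, 3
ll_large, k_large = -497.0, 5

print(aic(ll_small, k_small), aic(ll_large, k_large))       # 1006.0 1004.0 -> AIC picks the larger model
print(bic(ll_small, k_small, n), bic(ll_large, k_large, n)) # BIC picks the smaller model
```

Because $\ln(1000) \approx 6.9 > 2$, the same likelihood gain that pays for two extra parameters under AIC fails to pay for them under BIC.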
|
16,953
|
Relative size of p values at different sample sizes
|
Consider tossing a coin which you suspect may come up heads too often.
You perform an experiment, followed by a one tailed hypothesis test. In ten tosses you get 7 heads. Something at least as far from 50% could easily happen with a fair coin. Nothing unusual there.
If instead, you got 700 heads in 1000 tosses, a result at least as far from fair as that would be astonishing for a fair coin.
So 70% heads is not at all strange for a fair coin in the first case and very strange for a fair coin in the second case. The difference is sample size.
As the sample size increases, our uncertainty about where the population mean could be (the proportion of heads in our example) decreases. So larger samples are consistent with smaller ranges of possible population values - more values tend to become "ruled out" as samples get larger.
The more data we have, the more precisely we can pin down where the population mean could be... so a fixed value of the mean that is wrong will look less plausible as our sample sizes become large. That is, p-values tend to become smaller as sample size increases, unless $H_0$ is true.
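The two coin cases can be checked exactly with the binomial distribution (standard-library Python; `p_upper` is a helper name introduced here):

```python
from math import comb

def p_upper(heads, n, p=0.5):
    """One-tailed p-value: P(X >= heads) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(heads, n + 1))

print(p_upper(7, 10))      # about 0.17: 7/10 heads is unremarkable for a fair coin
print(p_upper(700, 1000))  # vanishingly small: the same 70% is astonishing at n = 1000
```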
|
16,954
|
Relative size of p values at different sample sizes
|
I agree with @Glen_b, just want to explain it from another point of view.
Take the example of the difference of means between two populations. Rejecting $H_{0}$ is equivalent to saying that 0 is not in the confidence interval for the difference of means. This interval gets smaller with n (its width shrinks roughly as $1/\sqrt{n}$), so it becomes harder and harder for any fixed point (in this case, zero) to remain in the interval as n grows. As rejection by confidence interval is mathematically equivalent to rejection by p-value, the p-value will get smaller with n.
There will come a moment when you get an interval like $[0.0001, 0.0010]$, indicating that the first population does indeed have a bigger mean than the second, but the difference is so small that you would not care about it. You will reject $H_0$, but this rejection won't mean anything in real life. That is the reason why p-values are not enough to describe a result: one must always give some measure of the SIZE of the observed difference.
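Here is a rough numerical sketch of that shrinking interval, using a plain normal approximation rather than the exact t machinery (the 0.05 difference and unit standard deviation are made-up numbers for illustration):

```python
from math import sqrt
from statistics import NormalDist

def z_ci_and_p(diff, sd, n):
    """95% CI and two-sided p-value for a difference of means between two
    groups of size n each, via a normal approximation (illustration only)."""
    se = sd * sqrt(2 / n)                      # standard error of the difference
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    z = diff / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return ci, p

# The same small observed difference (0.05 units, sd = 1), two sample sizes:
ci_small, p_small = z_ci_and_p(0.05, 1.0, 100)        # CI contains 0, p ~ 0.72
ci_big, p_big = z_ci_and_p(0.05, 1.0, 1_000_000)      # CI ~ [0.047, 0.053], p ~ 0
```

With n = 100 the interval easily contains zero; with n = 1,000,000 it excludes zero, yet the "significant" difference it pins down is practically negligible.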
|
16,955
|
Relative size of p values at different sample sizes
|
The $p$ value for a significance test of a null hypothesis that a given, nonzero effect size is actually zero in the population will decrease with increasing sample size. This is because a larger sample that provides consistent evidence of that nonzero effect provides more evidence against the null than a smaller sample does. A smaller sample offers more opportunity for random sampling error to distort effect size estimates, as @Glen_b's answer illustrates. Sampling error shrinks as sample size increases; by the central limit theorem, an effect size estimate based on a sample's central tendency improves with the sample's size. Therefore $p$ – i.e., the probability of drawing further random samples of the same size from the same population with effect sizes at least as strong as your sample's, assuming the effect size in that population is actually zero – decreases as sample size increases while the sample's effect size remains unchanged. (If the effect size decreases or the error variation increases as sample size increases, significance can remain the same.)
Here's another simple example: the correlation between $x=\{1,2,3,4,5\}$ and $y=\{2,1,2,1,3\}$. Here, Pearson's $r=.378,t_{(3)}=.71,p=.53$. If I duplicate the data and test the correlation of $x=\{1,2,3,4,5,1,2,3,4,5\}$ and $y=\{2,1,2,1,3,2,1,2,1,3\}$, $r=.378$ still, but $t_{(8)}=1.15,p=.28$. It doesn't take many copies ($n$) to approach $\lim_{n\to\infty} p(n)=0$, shown here:
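The arithmetic above can be reproduced with a few lines (an illustrative Python sketch; duplicating the data leaves $r$ untouched and only increases the degrees of freedom behind the t statistic):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation, computed from sums of deviation products."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def t_stat(r, n):
    """t statistic for H0: rho = 0, on n - 2 degrees of freedom."""
    return r * sqrt(n - 2) / sqrt(1 - r * r)

x, y = [1, 2, 3, 4, 5], [2, 1, 2, 1, 3]
r1 = pearson_r(x, y)           # ~0.378
t1 = t_stat(r1, len(x))        # ~0.71 on 3 df

r2 = pearson_r(x * 2, y * 2)   # duplicating the data: r is unchanged
t2 = t_stat(r2, 2 * len(x))    # ~1.15 on 8 df -- same r, smaller p
```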
|
16,956
|
How to compute varimax-rotated principal components in R?
|
"Rotations" is an approach developed in factor analysis; there rotations (such as e.g. varimax) are applied to loadings, not to eigenvectors of the covariance matrix. Loadings are eigenvectors scaled by the square roots of the respective eigenvalues. After the varimax rotation, the loading vectors are not orthogonal anymore (even though the rotation is called "orthogonal"), so one cannot simply compute orthogonal projections of the data onto the rotated loading directions.
@FTusell's answer assumes that varimax rotation is applied to the eigenvectors (not to loadings). This would be pretty unconventional. Please see my detailed account of PCA+varimax for details: Is PCA followed by a rotation (such as varimax) still PCA? Briefly, if we look at the SVD of the data matrix $X=USV^\top$, then to rotate the loadings means inserting $RR^\top$ for some rotation matrix $R$ as follows: $X=(UR)(R^\top SV^\top).$
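The key point — inserting $RR^\top$ leaves the reconstructed data matrix unchanged for any orthogonal $R$ — is easy to verify numerically. A minimal pure-Python sketch with made-up matrices (a 2-D rotation by an arbitrary angle stands in for the varimax-chosen $R$; $U$ and $L$ stand in for scores and loadings):

```python
from math import cos, sin, pi

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Stand-ins for scores U (n x 2) and loadings L (p x 2), arbitrary numbers:
U = [[1.0, 0.5], [-0.3, 2.0], [0.7, -1.1]]
L = [[0.8, 0.2], [0.1, 0.9], [0.5, 0.4]]

theta = pi / 7                   # any angle; varimax would choose its own R
R = [[cos(theta), -sin(theta)], [sin(theta), cos(theta)]]

X  = matmul(U, transpose(L))                        # U L^T
Xr = matmul(matmul(U, R), transpose(matmul(L, R)))  # (U R)(R^T L^T)
# X and Xr agree to machine precision: rotating scores and loadings
# together reconstructs exactly the same data matrix.
```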
If rotation is applied to loadings (as it usually is), then there are at least three easy ways to compute varimax-rotated PCs in R :
They are readily available via function psych::principal (demonstrating that this is indeed the standard approach). Note that it returns standardized scores, i.e. all PCs have unit variance.
One can manually use the varimax function to rotate the loadings, and then use the new rotated loadings to obtain the scores; one needs to multiply the data with the transposed pseudo-inverse of the rotated loadings (see formulas in this answer by @ttnphns). This will also yield standardized scores.
One can use varimax function to rotate the loadings, and then use the $rotmat rotation matrix to rotate the standardized scores obtained with prcomp.
All three methods yield the same result:
irisX <- iris[,1:4] # Iris data
ncomp <- 2
pca_iris_rotated <- psych::principal(irisX, rotate="varimax", nfactors=ncomp, scores=TRUE)
print(pca_iris_rotated$scores[1:5,]) # Scores returned by principal()
pca_iris <- prcomp(irisX, center=T, scale=T)
rawLoadings <- pca_iris$rotation[,1:ncomp] %*% diag(pca_iris$sdev, ncomp, ncomp)
rotatedLoadings <- varimax(rawLoadings)$loadings
invLoadings <- t(pracma::pinv(rotatedLoadings))
scores <- scale(irisX) %*% invLoadings
print(scores[1:5,]) # Scores computed via rotated loadings
scores <- scale(pca_iris$x[,1:2]) %*% varimax(rawLoadings)$rotmat
print(scores[1:5,]) # Scores computed via rotating the scores
This yields three identical outputs:
1 -1.083475 0.9067262
2 -1.377536 -0.2648876
3 -1.419832 0.1165198
4 -1.471607 -0.1474634
5 -1.095296 1.0949536
Note: The varimax function in R uses normalize = TRUE, eps = 1e-5 parameters by default (see documentation). One might want to change these parameters (decrease the eps tolerance and take care of Kaiser normalization) when comparing the results to other software such as SPSS. I thank @GottfriedHelms for bringing this to my attention. [Note: these parameters work when passed to the varimax function, but do not work when passed to the psych::principal function. This appears to be a bug that will be fixed.]
|
16,957
|
How to compute varimax-rotated principal components in R?
|
You need to use the matrix $loadings, not $rotmat:
x <- matrix(rnorm(600),60,10)
prc <- prcomp(x, center=TRUE, scale=TRUE)
varimax7 <- varimax(prc$rotation[,1:7])
newData <- scale(x) %*% varimax7$loadings
The matrix $rotmat is the orthogonal matrix that produces the new loadings from the unrotated ones.
EDIT as of Feb 12, 2015:
As rightly pointed out below by @amoeba (see also his/her previous post as well as another post from @ttnphns) this answer is not correct. Consider an $n\times m$ data matrix $X$. The singular value decomposition is
$$X = USV^T$$ where $V$ has as its columns the (normalized) eigenvectors of $X'X$. Now, a rotation is a change of coordinates and amounts to writing the above equality as:
$$X = (UST)(T^TV^T) = U^*V^*$$ with $T$ being an orthogonal matrix chosen to achieve a $V^*$ close to sparse (maximum contrast between entries, loosely speaking). Now, if that were all, which it is not, one could post-multiply the
equality above by $(V^*)^T$ to obtain the scores $U^*$ as $X(V^*)^T$. But of course we never rotate all PCs. Rather, we consider a subset of $k<m$ which still provides a decent rank-$k$ approximation of $X$,
$$X \approx (U_kS_k)(V_k^T)$$
so the rotated solution is now
$$X \approx (U_kS_kT_k)(T_k^TV_k^T) = U_k^*V_k^*$$
where now $V_k^*$ is a $k\times m$ matrix. We can no longer simply multiply $X$ by the transpose of $V_k^*$; rather, we need to resort to one of the solutions described by @amoeba.
In other words, the solution I proposed is only correct in the particular case where it would be useless and nonsensical.
Heartfelt thanks go to @amoeba for making clear this matter to me; I have been living with this misconception for years.
One point where the note above departs from @amoeba's post is that she/he seems to absorb $S$ into $V$ when forming the loadings $L$. I think in PCA it is more common to have $V$'s columns of norm 1 and absorb $S$ into the principal components' values. In fact, those are usually presented as linear combinations $v_i^TX$ $(i=1,\ldots,m)$ of the original (centered, perhaps scaled) variables subject to $\|v_i\|=1$. Either way is acceptable I think, and so is everything in between (as in biplot analysis).
FURTHER EDIT Feb. 12, 2015
As pointed out by @amoeba, even though $V_k^*$ is rectangular, the solution I proposed might still be acceptable: $V_k^*(V_k^*)^T$ would give a unit matrix and $X(V_k^*)^T \approx U_k^*$. So it all seems to hinge on the definition of scores that one prefers.
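The asymmetry behind that last remark is worth making concrete: a rectangular matrix with orthonormal rows satisfies $V_k^*(V_k^*)^T=I_k$ even though $(V_k^*)^TV_k^*$ is not the $m\times m$ identity. A toy check (pure Python, made-up numbers, $k=1$, $m=2$):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

V = [[0.6, 0.8]]                   # a single orthonormal row (0.36 + 0.64 = 1)

left = matmul(V, transpose(V))     # [[1.0]] -- the k x k identity (up to rounding)
right = matmul(transpose(V), V)    # [[0.36, 0.48], [0.48, 0.64]] -- NOT the m x m identity
```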
|
16,958
|
How to compute varimax-rotated principal components in R?
|
I was looking for a solution that works for PCA performed using ade4.
Please find the function below:
library(ade4)
irisX <- iris[,1:4] # Iris data
ncomp <- 2
# With ade4
dudi_iris <- dudi.pca(irisX, scannf = FALSE, nf = ncomp)
rotate_dudi.pca <- function(pca, ncomp = 2) {
rawLoadings <- as.matrix(pca$c1[,1:ncomp]) %*% diag(sqrt(pca$eig), ncomp, ncomp)
pca$c1 <- rawLoadings
pca$li <- scale(pca$li[,1:ncomp]) %*% varimax(rawLoadings)$rotmat
return(pca)
}
rot_iris <- rotate_dudi.pca(pca = dudi_iris, ncomp = ncomp)
print(rot_iris$li[1:5,]) # Scores computed via rotating the scores
#> [,1] [,2]
#> 1 -1.083475 -0.9067262
#> 2 -1.377536 0.2648876
#> 3 -1.419832 -0.1165198
#> 4 -1.471607 0.1474634
#> 5 -1.095296 -1.0949536
Created on 2020-01-14 by the reprex package (v0.3.0)
Hope this helps!
|
16,959
|
Can somebody illustrate how there can be dependence and zero covariance?
|
The basic idea here is that covariance only measures one particular type of dependence, therefore the two are not equivalent. Specifically,
Covariance is a measure of how linearly related two variables are. If two variables are non-linearly related, this will not be reflected in the covariance. A more detailed description can be found here.
Dependence between random variables refers to any type of relationship between the two that causes them to act differently "together" than they do "by themselves". Specifically, dependence between random variables subsumes any relationship between the two that causes their joint distribution to not be the product of their marginal distributions. This includes linear relationships as well as many many others.
If two variables are non-linearly related, then they can potentially have 0 covariance but are still dependent - many examples are given here and this plot below from wikipedia gives some graphical examples in the bottom row:
One example where zero covariance and independence between random variables are equivalent conditions is when the variables are jointly normally distributed (that is, the two variables follow a bivariate normal distribution, which is not equivalent to the two variables being individually normally distributed). Another special case is that pairs of Bernoulli variables are uncorrelated if and only if they are independent (thanks @cardinal). But, in general the two cannot be taken to be equivalent.
Therefore, one cannot, in general, conclude that two variables are independent just because they appear uncorrelated (e.g. because one failed to reject the null hypothesis of zero correlation). One is well advised to plot the data to infer whether the two are related, not just stop at a test of correlation. For example (thanks @gung), if one were to run a linear regression (i.e. test for non-zero correlation) and found a non-significant result, one might be tempted to conclude that the variables are not related, but one would only have investigated a linear relationship.
I don't know much about psychology, but it makes sense that there could be non-linear relationships between variables there. As a toy example, it seems possible that cognitive ability is non-linearly related to age - very young and very old people are not as sharp as 30 year olds. If one were to plot some measure of cognitive ability vs. age, one might expect to see that cognitive ability peaks at a moderate age and declines on either side, which would be a non-linear pattern.
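The classic $Y=X^2$ case from the plots above can be checked in a few lines (illustrative Python sketch with a small sample symmetric about zero):

```python
def cov(x, y):
    """Population covariance: mean of the products of deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

x = [-2, -1, 0, 1, 2]      # symmetric about zero
y = [v * v for v in x]     # y is completely determined by x

c = cov(x, y)              # 0.0 -- zero covariance despite perfect dependence
```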
|
16,960
|
Can somebody illustrate how there can be dependence and zero covariance?
|
A standard way of teaching/visualizing a correlation or covariance is to plot the data, draw lines at the mean of 'x' and 'y', then draw rectangles from the point of the 2 means to the individual datapoints, like this:
The rectangles (points) in the top right and bottom left quadrants (red in the example) contribute positive values to the correlation/covariance, while the rectangles (points) in the top left and bottom right quadrants (blue in the example) contribute negative values to the correlation/covariance. If the total area of the red rectangles equals the total area of the blue rectangles then the positives and negatives cancel out and you get a zero covariance. If there is more area in the red then the covariance will be positive and if there is more area in the blue then the covariance will be negative.
Now let's look at an example from the previous discussion:
The individual points follow a parabola, so they are dependent: if you know 'x' then you know 'y' exactly. But you can also see that for every red rectangle there is a matching blue rectangle, so the final covariance will be 0.
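The cancellation can be made concrete: the signed rectangle areas are just the products of deviations from the two means, and for the parabola they sum to zero (illustrative Python sketch):

```python
x = [-2, -1, 0, 1, 2]
y = [v * v for v in x]                 # the parabola from the example
mx, my = sum(x) / len(x), sum(y) / len(y)

# Signed rectangle areas, one per data point:
areas = [(a - mx) * (b - my) for a, b in zip(x, y)]
pos = sum(v for v in areas if v > 0)   # "red" rectangles:  +1 and +4
neg = sum(v for v in areas if v < 0)   # "blue" rectangles: -4 and -1
# pos + neg == 0: the areas cancel exactly, so the covariance is zero.
```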
|
16,961
|
Can somebody illustrate how there can be dependence and zero covariance?
|
One simple test is that if the data basically follow a pattern that is symmetrical around a vertical or horizontal axis through the means, the covariance will be pretty close to zero. For example, if the symmetry is around the y-axis, then for each point with a given y there is one point with a positive x difference from the mean x and a matching point with a negative difference. The sum of the products of deviations for those pairs will be zero. You can see this illustrated nicely in the collection of example plots in the other answers. There are other patterns that would yield a zero covariance without independence, but many examples are easily evaluated by looking for symmetry or not.
|
16,962
|
Can somebody illustrate how there can be dependence and zero covariance?
|
An example from Wikipedia:
"If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. For example, suppose the random variable X is symmetrically distributed about zero, and Y = X^2. Then Y is completely determined by X, so that X and Y are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when X and Y are jointly normal, uncorrelatedness is equivalent to independence."
|
16,963
|
Is there more to probability than Bayesianism?
|
The Bayesian interpretation of probability suffices for practical purposes. But even given a Bayesian interpretation of probability, there is more to statistics than probability, because the foundation of statistics is decision theory, and decision theory requires not only a class of probability models but also the specification of an optimality criterion for a decision rule. Under the Bayes criterion, optimal decision rules can be obtained through Bayes' rule; but many frequentist methods are justified under minimax and other decision criteria.
|
16,964
|
Is there more to probability than Bayesianism?
|
"Bayesian" and "frequentist" aren't "probabilistic philosophies". They're schools of statistical thought and practice concerned mainly with quantifying uncertainty and making decisions, although they're often associated with particular interpretations of probability. Probably the most common perception, although it is incomplete, is that of probability as subjective quantification of belief versus probabilities as long-run frequencies. But even these aren't really mutually exclusive. And you may not be aware of this but there are avowed Bayesians who don't agree on particular philosophical issues about probability.
Bayesian statistics and frequentist statistics aren't orthogonal either. It seems like "frequentist" has come to mean "not Bayesian" but that's incorrect. For example, it's perfectly reasonable to ask questions about the properties of Bayesian estimators and confidence intervals under repeated sampling. It's a false dichotomy perpetuated at least in part by a lack of a common definition of the terms Bayesian and frequentist (we statisticians have no one to blame but ourselves for that).
For an amusing, pointed and thoughtful discussion I would suggest Gelman's "Objections to Bayesian Statistics", the comments, and the rejoinder, available here:
http://ba.stat.cmu.edu/vol03is03.php
There is even some discussion about confidence intervals in physics IIRC. For more in-depth discussions you could walk back through the references therein. If you want to understand the principles behind Bayesian inference, I would suggest Bernardo & Smith's book, but there are many, many other good references.
|
16,965
|
Is there more to probability than Bayesianism?
|
Take a look at this paper by Cosma Shalizi and Andrew Gelman about philosophy and Bayesianism. Gelman is a prominent Bayesian and Shalizi a frequentist!
Take a look also at this short criticism by Shalizi, where he points out the necessity of model checking and mocks the Dutch book argument used by some Bayesians.
And last, but not least, I think that, since you are a physicist, you may like this text, where the author points to “computational learning theory” (about which I frankly know nothing at all), which could be an alternative to Bayesianism, as far as I can understand it (not much).
ps.: If you follow the links, especially the last one, and have an opinion about the text (and the discussions that followed the text at the blog of the author), please share it.
ps.2: My own take on this: Forget about the issue of objective vs subjective probability, the likelihood principle and the argument about the necessity of being coherent. Bayesian methods are good when they allow you to model your problem well (for instance, using a prior to induce a unimodal posterior when there is a bimodal likelihood, etc.) and the same is true for frequentist methods. Also, forget about the stuff about the problems with p-values. I mean, p-values suck, but in the end they are a measure of uncertainty, in the spirit of how Fisher thought of it.
|
16,966
|
Is there more to probability than Bayesianism?
|
There are non-Bayesian systems or philosophies of probability -- Baconian & Pascalian, e.g. If you are into epistemology & philosophy of science you might enjoy the debates--otherwise, you'll shake your head & conclude that in fact the Bayesian interpretation is all there is.
For good discussions,
Cohen, L.J., An Introduction to the Philosophy of Induction and Probability (Clarendon Press; Oxford University Press, Oxford & New York, 1989).
Schum, D.A., The Evidential Foundations of Probabilistic Reasoning (Wiley, New York, 1994).
|
16,967
|
Is there more to probability than Bayesianism?
|
For me, the important thing about Bayesianism is that it regards probability as having the same meaning we apply intuitively in everyday life, namely the degree of plausibility of the truth of a proposition. Very few of us really use probability to mean strictly a long run frequency in everyday use, if only because we are often interested in particular events that have no long run frequency, for example what is the probability that fossil fuel emissions are causing significant climate change? For this reason, Bayesian statistics are much less prone to misinterpretation than frequentist statistics.
Bayesianism also has marginalisation, priors, maxent, transformation groups etc. that all have their uses, but for me the key benefit is that the definition of probability is more appropriate for the kinds of problems I want to address.
That doesn't make Bayesian statistics better than frequentist statistics. It seems to me that frequentist statistics are well suited to problems in quality control (where you do have repeated sampling from populations) or where you have designed experiments, rather than to analysis of pre-collected data (although that lies rather beyond my expertise, so it is just intuition).
As an engineer, it is a matter of "horses for courses" and I have both sets of tools in my toolbox and I use both on a regular basis.
|
16,968
|
What is the best programmatic way for determining whether two variables are linearly or non-linearly or not even related
|
It is very difficult to achieve what you want programmatically because there are so many different forms of nonlinear association. Even looking at correlation or regression coefficients will not really help. It is always good to refer back to Anscombe's quartet when thinking about problems like this: the association between the two variables is completely different in each of the four plots, but each has exactly the same correlation coefficient.
If you know a priori what the possible non-linear relations could be, then you could fit a series of nonlinear models and compare the goodness of fit. But if you don't know what the possible non-linear relations could be, then I can't see how it can be done robustly without visually inspecting the data. Cubic splines could be one possibility, but they may not cope well with logarithmic, exponential and sinusoidal associations, and could be prone to overfitting. EDIT: After some further thought, another approach would be to fit a generalised additive model (GAM), which would provide good insight for many nonlinear associations, but probably not sinusoidal ones.
Truly, the best way to do what you want is visually. We can see instantly what the relations are in the plots above, but any programmatic approach such as regression is bound to have situations where it fails miserably.
So, my suggestion, if you really need to do this is to use a classifier based on the image of the bivariate plot.
1) Create a dataset using randomly generated data for one variable, from a randomly chosen distribution.
2) Generate the other variable with a linear association (with random slope) and add some random noise. Then choose at random a nonlinear association and create a new set of values for the other variable. You may want to include purely random associations in this group.
3) Create two bivariate plots, one linear and the other nonlinear, from the data simulated in 1) and 2). Normalise the data first.
4) Repeat the above steps millions of times, or as many times as your time scale will allow.
5) Create a classifier; train, test and validate it to classify linear vs nonlinear images.
6) For your actual use case, if you have a different sample size to your simulated data then sample or re-sample to obtain the same size. Normalise the data, create the image and apply the classifier to it.
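The data-simulation part of this recipe might be sketched as follows. Every concrete choice (normal x, the slope range, the noise level, the menu of nonlinear relations) is an arbitrary illustrative assumption; the image rendering and the classifier itself are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pair(n=200):
    """Return one normalised (x, y) dataset per class: linear and nonlinear."""
    x = rng.normal(size=n)                            # random x values
    slope = rng.uniform(-2, 2)                        # linear class + noise
    y_lin = slope * x + rng.normal(scale=0.5, size=n)
    nonlin = [np.square, np.exp, np.sin][rng.integers(3)]  # random nonlinear relation
    y_non = nonlin(x) + rng.normal(scale=0.5, size=n)

    def norm(v):                                      # normalise before plotting
        return (v - v.mean()) / v.std()

    return (norm(x), norm(y_lin)), (norm(x), norm(y_non))

linear_xy, nonlinear_xy = simulate_pair()
```

Repeating this many times and rendering each pair as an image would produce the training set for the classifier.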
I realise that this is probably not the kind of answer you want, but I cannot think of a robust way to do this with regression or other model-based approach.
EDIT: I hope no one is taking this too seriously. My point here is that, in a situation with bivariate data, we should always plot the data. Trying to do anything programmatically, whether it is a GAM, cubic splines or a vast machine learning approach, is basically allowing the analyst not to think, which is a very dangerous thing.
Please always plot your data.
|
16,969
|
What is the best programmatic way for determining whether two variables are linearly or non-linearly or not even related
|
Linear/nonlinear should not be a binary decision. No magic threshold exists for informing the analyst things like "definitely linear". It's all a matter of degree. Instead, consider quantifying the degree of linearity. This can be measured relative to explained variation in Y by two competing models: one that forces linearity and one that doesn't. For the one that doesn't, a good general-purpose approach is to fit a restricted cubic spline function (aka natural spline) with, say, 4 knots; the number of knots (the join points, here the points at which the 3rd derivative is allowed to be discontinuous) needs to be a function of the sample size and expectations about the possible complexity of the relationship.
Once you have both linear and flexible fits you can use either the log-likelihood or $R^2$ to quantify explained variation in Y. As discussed in RMS, you can compute an "adequacy index" by taking the ratio of the models' likelihood ratio $\chi^2$ statistics (smaller model divided by larger model). The closer this is to 1.0, the more adequate the linear fit. Or you can take the corresponding ratio of $R^2$ to compute relative explained variation. This is identical to computing the ratio of the variances of predicted values. More about relative explained variation is here.
When you do not know beforehand that something is linear, we use such quantifications to inform us about the nature of the relationship but not to change the model. If using standard frequentist models, to get accurate p-values and confidence bands one must account for all the opportunities the model was given to fit the data. That means using the spline model for estimates, tests, and confidence bands. So you could say "allow the model to be nonlinear if you do not know beforehand it is linear". And most relationships are nonlinear.
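A rough sketch of the $R^2$-ratio version of the adequacy index follows. Note it uses a cubic polynomial as a dependency-free stand-in for the restricted cubic spline basis, which only approximates the approach described; the simulated data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 300)
y = np.sin(1.5 * x) + rng.normal(scale=0.3, size=300)  # nonlinear truth

def r2(y, fitted):
    # R^2 from residual variance vs total variance
    return 1 - np.var(y - fitted) / np.var(y)

lin = np.polyval(np.polyfit(x, y, 1), x)    # model forcing linearity
flex = np.polyval(np.polyfit(x, y, 3), x)   # flexible stand-in model

adequacy = r2(y, lin) / r2(y, flex)         # near 1 => linear fit adequate
print(0 <= adequacy <= 1)  # True
```

With a proper spline basis (e.g. a restricted cubic spline design matrix), the flexible fit would simply replace the cubic polynomial.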
|
16,970
|
What is the best programmatic way for determining whether two variables are linearly or non-linearly or not even related
|
The biggest problem you have here is that "non-linear relation" is not well defined. If you allow for any non-linear relation, there's basically no way to tell if something is "completely random" or just follows a non-linear relation that looks exactly like something that might come out of a "completely random" set up.
However, that doesn't mean you have no way to approach this problem; you just need to pose your question more precisely. For example, you can use the standard Pearson correlation to look for linear relations. If you want to look for monotonic relations, you can try Spearman's rho. If you want to look for potentially non-monotonic relations that still provide some ability to predict y given x, you can look at distance correlation. But note that as you get more flexible in what you call "correlated", you will have less power to detect such trends!
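A small demonstration of why the choice of measure matters: for $y = x^2$ on a symmetric grid, Pearson's correlation is essentially zero while distance correlation clearly detects the dependence. The distance-correlation implementation below follows the standard double-centring definition (Székely et al.) and is a sketch, not a vetted library:

```python
import numpy as np

def dist_corr(x, y):
    """Sample distance correlation via double-centred distance matrices."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

x = np.linspace(-1, 1, 201)
y = x ** 2                         # non-monotonic but perfectly dependent

pearson = np.corrcoef(x, y)[0, 1]  # ~0: the linear measure misses it
dcor = dist_corr(x, y)             # clearly positive: dependence detected
print(abs(pearson) < 1e-6, dcor > 0.1)  # True True
```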
|
16,971
|
What is the best programmatic way for determining whether two variables are linearly or non-linearly or not even related
|
It's relatively simple to measure linearity. To distinguish between non-linear relationship and no relationship at all, you're basically asking for a chi-squared test with a number of boxes equal to the number of possible values. For continuous variables, that means if you do a full resolution test, you'll have only one data point per box, which obviously (or I hope it's obvious) doesn't yield meaningful results. If you have a finite number of values, and the number of data points is sufficiently large compared to the number of values, you can do a chi-squared test. This will, however, ignore the order of the boxes. If you want to privilege possible relationships that take into account order, you'll need a more sophisticated method. One method would be to take several different partitions of the boxes and run the chi-squared test on all of them.
Getting back to the continuous case, you again have the option of taking a chi-squared of a bunch of different partitions. You can also look at candidate relationships such as polynomial and exponential. One method would be to do a nonlinear transformation and then test for linearity. Keep in mind that this can cause results that you may find non-intuitive, such as that x versus log(y) can give a p-value for linearity that's different from exp(x) versus y.
Another thing to keep in mind when doing multiple hypothesis tests is that the $\alpha$ you choose is how much probability mass you have to distribute among all false positives. To be rigorous, you should decide beforehand how you're going to distribute it among all the hypotheses. For instance, if your $\alpha$ is $0.05$ and you have five alternative hypotheses you are testing, you can decide beforehand that you'll reject the null only if one of the alternatives has $p < 0.01$.
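The binning idea can be sketched as follows. The bin count and simulated data are arbitrary illustrative choices, and the statistic would still need to be compared against a chi-squared distribution with the stated degrees of freedom (e.g. via scipy.stats.chi2) to obtain a p-value:

```python
import numpy as np

rng = np.random.default_rng(2)

def chi2_stat(x, y, bins=4):
    """Chi-squared independence statistic on a binned contingency table."""
    obs, _, _ = np.histogram2d(x, y, bins=bins)
    expected = obs.sum(1)[:, None] * obs.sum(0)[None, :] / obs.sum()
    mask = expected > 0                      # skip empty expected cells
    stat = ((obs - expected) ** 2 / np.where(mask, expected, 1))[mask].sum()
    return stat, (bins - 1) ** 2             # df = (rows-1)(cols-1)

x = rng.normal(size=2000)
stat_dep, df = chi2_stat(x, x ** 2)                # strong nonlinear dependence
stat_ind, _ = chi2_stat(x, rng.normal(size=2000))  # independent noise

print(stat_dep > stat_ind)  # True: dependence inflates the statistic
```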
|
16,972
|
Is it possible that 3 vectors have all negative pairwise correlations?
|
It is possible if the size of the vector is 3 or larger. For example
\begin{align}
a &= (-1, 1, 1)\\
b &= (1, -9, -3)\\
c &= (2, 3, -1)\\
\end{align}
The correlations are
\begin{equation}
\text{cor}(a,b) = -0.80...\\
\text{cor}(a,c) = -0.27...\\
\text{cor}(b,c) = -0.34...
\end{equation}
We can prove that for vectors of size 2 this is not possible:
\begin{align}
\text{cor}(a,b) &< 0\\[5pt]
2\Big(\sum_i a_i b_i\Big) - \Big(\sum_i a_i\Big)\Big(\sum_i b_i\Big) &< 0\\[5pt]
2(a_1 b_1 + a_2 b_2) - (a_1 + a_2)(b_1 + b_2) &< 0\\[5pt]
2(a_1 b_1 + a_2 b_2) - (a_1 b_1 + a_1 b_2 + a_2 b_1 + a_2 b_2) &< 0\\[5pt]
a_1 b_1 + a_2 b_2 - a_1 b_2 - a_2 b_1 &< 0\\[5pt]
a_1 (b_1-b_2) - a_2 (b_1-b_2) &< 0\\[5pt]
(a_1-a_2)(b_1-b_2) &< 0
\end{align}
The formula makes sense: if $a_1$ is larger than $a_2$, $b_2$ has to be larger than $b_1$ to make the correlation negative.
Similarly for correlations between (a,c) and (b,c) we get
\begin{equation}
(a_1-a_2)(c_1-c_2) < 0\\
(b_1-b_2)(c_1-c_2) < 0\\
\end{equation}
These three inequalities cannot all hold at the same time: multiplying the three left-hand sides gives $\big[(a_1-a_2)(b_1-b_2)(c_1-c_2)\big]^2$, which is non-negative, while a product of three negative numbers must be negative.
|
16,973
|
Is it possible that 3 vectors have all negative pairwise correlations?
|
Yes, they can.
Suppose you have a multivariate normal distribution $X\in R^3, X\sim N(0,\Sigma)$.
The only restriction on $\Sigma$ is that it has to be positive semi-definite.
So take the following example $\Sigma = \begin{pmatrix}
1 & -0.2 & -0.2 \\
-0.2 & 1 & -0.2 \\
-0.2 & -0.2 & 1
\end{pmatrix} $
Its eigenvalues are all positive ($1.2$, $1.2$, $0.6$), so $\Sigma$ is a valid correlation matrix and you can draw vectors whose pairwise correlations are all negative.
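A quick way to confirm positive definiteness without an eigendecomposition is Sylvester's criterion: all leading principal minors must be positive. For a $3\times 3$ equicorrelation matrix they have closed forms; here is a sketch in Python for illustration (the function name is mine):

```python
# Leading principal minors of the 3x3 equicorrelation matrix
# [[1, rho, rho], [rho, 1, rho], [rho, rho, 1]].
def leading_minors(rho):
    m1 = 1.0
    m2 = 1.0 - rho ** 2                         # det of the top-left 2x2 block
    m3 = 1.0 - 3.0 * rho ** 2 + 2.0 * rho ** 3  # det of the full matrix
    return (m1, m2, m3)

assert all(m > 0 for m in leading_minors(-0.2))  # rho = -0.2 is feasible
assert leading_minors(-0.6)[2] < 0               # rho = -0.6 is not
```

The eigenvalues quoted above follow from the same structure: $1-\rho$ (twice) and $1+2\rho$, i.e. $1.2, 1.2, 0.6$ for $\rho=-0.2$.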
|
16,974
|
Is it possible that 3 vectors have all negative pairwise correlations?
|
Let's start with a correlation matrix for 3 variables:
$\Sigma = \begin{pmatrix}
1 & p & q \\
p & 1 & r \\
q & r & 1
\end{pmatrix} $
Non-negative definiteness creates constraints on the pairwise correlations $p,q,r$, which can be written as
$$
pqr \ge \frac{p^2+q^2+r^2-1}2
$$
For example, if $p=q=-1$, the value of $r$ is restricted by $2r \ge r^2+1$, which forces $r=1$. On the other hand, if $p=q=-\frac12$, $r$ can lie anywhere in $[-\frac12, 1]$.
Answering the interesting follow-up question by @amoeba: "what is the lowest possible correlation that all three pairs can simultaneously have?"
Let $p=q=r=x < 0$ and find the smallest root of $2x^3-3x^2+1=(x-1)^2(2x+1)$, which gives $x=-\frac12$. Perhaps not surprising for some.
A stronger argument can be made if one of the correlations is, say, $r=-1$. From the same inequality, $-2pq \ge p^2+q^2$, i.e. $(p+q)^2 \le 0$, so $p=-q$. Therefore if two of the correlations are $-1$, the third must be $1$.
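The constraint is easy to check numerically: rearranged, it says the determinant $1 - p^2 - q^2 - r^2 + 2pqr$ of $\Sigma$ must be non-negative. A sketch in Python, for illustration (the `feasible` helper is mine):

```python
# Feasibility of (p, q, r) as the pairwise correlations of three variables:
# the determinant of the 3x3 correlation matrix must be non-negative.
def feasible(p, q, r):
    return 1 - p * p - q * q - r * r + 2 * p * q * r >= 0

assert feasible(-1, -1, 1)                # p = q = -1 forces r = 1 ...
assert not feasible(-1, -1, 0.9)          # ... anything less fails
assert feasible(-0.5, -0.5, -0.5)         # the extreme all-negative case
assert not feasible(-0.51, -0.51, -0.51)  # just past -1/2 is infeasible
```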
|
16,975
|
Is it possible that 3 vectors have all negative pairwise correlations?
|
A simple R function to explore this:
f <- function(n,trials = 10000){
count <- 0
for(i in 1:trials){
a <- runif(n)
b <- runif(n)
c <- runif(n)
if(cor(a,b) < 0 & cor(a,c) < 0 & cor(b,c) < 0){
count <- count + 1
}
}
count/trials
}
As a function of n, f(n) starts at 0, becomes nonzero at n = 3 (with typical values around 0.06), then increases to around 0.11 by n = 15, after which it seems to stabilize.
So, not only is it possible to have all three correlations negative, it doesn't seem to be terribly uncommon (at least for uniform distributions).
|
16,976
|
Does covariance equal to zero implies independence for binary random variables?
|
For binary variables their expected value equals the probability that they are equal to one. Therefore,
$$ E(XY) = P(XY = 1) = P(X=1 \cap Y=1) \\
E(X) = P(X=1) \\
E(Y) = P(Y=1) \\
$$
If the two have zero covariance this means $E(XY) = E(X)E(Y)$, which means
$$ P(X=1 \cap Y=1) = P(X=1) \cdot P(Y=1) $$
From this, all the other joint probabilities multiply as well, e.g. $P(X=1 \cap Y=0) = P(X=1) - P(X=1 \cap Y=1) = P(X=1)\big(1-P(Y=1)\big) = P(X=1)P(Y=0)$; this is the basic rule that if $A$ and $B$ are independent events, so are their complements. The joint mass function therefore factorizes, which is the definition of two random variables being independent.
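The bookkeeping can be spelled out as a small sketch (Python with exact arithmetic via `fractions`; the particular $p, q$ values are just for illustration):

```python
from fractions import Fraction as F

p, q = F(3, 10), F(2, 5)   # arbitrary marginals P(X=1), P(Y=1)
r = p * q                  # zero covariance forces P(X=1, Y=1) = pq

# Full joint pmf reconstructed from p, q, r.
joint = {(1, 1): r,
         (1, 0): p - r,
         (0, 1): q - r,
         (0, 0): 1 - p - q + r}

marg_x = {1: p, 0: 1 - p}
marg_y = {1: q, 0: 1 - q}
for (x, y), pr in joint.items():
    assert pr == marg_x[x] * marg_y[y]   # every cell factorizes
```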
|
16,977
|
Does covariance equal to zero implies independence for binary random variables?
|
Both correlation and covariance measure linear association between two given variables, and they are under no obligation to detect any other form of association.
So two variables might be associated in several other non-linear ways, and covariance (and, therefore, correlation) could fail to distinguish them from the independent case.
As a very didactic, artificial and non realistic example, one can consider $X$ such that $P(X=x)=1/3$ for $x=−1,0,1$ and also consider $Y=X^2$. Notice that they are not only associated, but one is a function of the other. Nonetheless, their covariance is 0, for their association is orthogonal to the association that covariance can detect.
EDIT
Indeed, as indicated by @whuber, the above original answer was actually a comment on how the assertion is not universally true if both variables were not necessarily dichotomous. My bad!
So let's math up. (The local equivalent of Barney Stinson's "Suit up!")
Particular Case
If both $X$ and $Y$ were dichotomous, then you can assume, without loss of generality, that both assume only the values $0$ and $1$ with arbitrary probabilities $p$, $q$ and $r$ given by
$$
\begin{align*}
P(X=1) = p \in [0,1] \\
P(Y=1) = q \in [0,1] \\
P(X=1,Y=1) = r \in [0,1],
\end{align*}
$$
which characterize completely the joint distribution of $X$ and $Y$.
Taking on @DilipSarwate's hint, notice that those three values are enough to determine the joint distribution of $(X,Y)$, since
$$
\begin{align*}
P(X=0,Y=1)
&= P(Y=1) - P(X=1,Y=1)
= q - r\\
P(X=1,Y=0)
&= P(X=1) - P(X=1,Y=1)
= p - r\\
P(X=0,Y=0)
&= 1 - P(X=0,Y=1) - P(X=1,Y=0) - P(X=1,Y=1) \\
&= 1 - (q - r) - (p - r) - r
= 1 - p - q + r.
\end{align*}
$$
(On a side note, of course $r$ is bound to respect $p-r\in[0,1]$, $q-r\in[0,1]$ and $1-p-q+r\in[0,1]$ beyond $r\in[0,1]$, which is to say $r\in[\max(0,\,p+q-1),\min(p,q)]$.)
Notice that $r = P(X=1,Y=1)$ might be equal to the product $p\cdot q = P(X=1) P(Y=1)$, which would render $X$ and $Y$ independent, since
$$
\begin{align*}
P(X=0,Y=0)
&= 1 - p - q + pq
= (1-p)(1-q)
= P(X=0)P(Y=0)\\
P(X=1,Y=0)
&= p - pq
= p(1-q)
= P(X=1)P(Y=0)\\
P(X=0,Y=1)
&= q - pq
= (1-p)q
= P(X=0)P(Y=1).
\end{align*}
$$
Yes, $r$ might be equal to $pq$, BUT it can be different, as long as it respects the boundaries above.
Well, from the above joint distribution, we would have
$$
\begin{align*}
E(X)
&= 0\cdot P(X=0) + 1\cdot P(X=1)
= P(X=1)
= p
\\
E(Y)
&= 0\cdot P(Y=0) + 1\cdot P(Y=1)
= P(Y=1)
= q
\\
E(XY)
&= 0\cdot P(XY=0) + 1\cdot P(XY=1) \\
&= P(XY=1)
= P(X=1,Y=1)
= r\\
Cov(X,Y)
&= E(XY) - E(X)E(Y)
= r - pq
\end{align*}
$$
Now, notice then that $X$ and $Y$ are independent if and only if $Cov(X,Y)=0$. Indeed, if $X$ and $Y$ are independent, then $P(X=1,Y=1)=P(X=1)P(Y=1)$, which is to say $r=pq$. Therefore, $Cov(X,Y)=r-pq=0$; and, on the other hand, if $Cov(X,Y)=0$, then $r-pq=0$, which is to say $r=pq$. Therefore, $X$ and $Y$ are independent.
General Case
About the without loss of generality clause above, if $X$ and $Y$ were distributed otherwise, let's say, for $a<b$ and $c<d$,
$$
\begin{align*}
P(X=b)=p \\
P(Y=d)=q \\
P(X=b, Y=d)=r
\end{align*}
$$
then $X'$ and $Y'$ given by
$$
X'=\frac{X-a}{b-a}
\qquad
\text{and}
\qquad
Y'=\frac{Y-c}{d-c}
$$
would be distributed just as characterized above, since
$$
X=a \Leftrightarrow X'=0, \quad
X=b \Leftrightarrow X'=1, \quad
Y=c \Leftrightarrow Y'=0 \quad
\text{and} \quad
Y=d \Leftrightarrow Y'=1.
$$
So $X$ and $Y$ are independent if and only if $X'$ and $Y'$ are independent.
Also, we would have
$$
\begin{align*}
E(X')
&= E\left(\frac{X-a}{b-a}\right)
= \frac{E(X)-a}{b-a} \\
E(Y')
&= E\left(\frac{Y-c}{d-c}\right)
= \frac{E(Y)-c}{d-c} \\
E(X'Y')
&= E\left(\frac{X-a}{b-a} \frac{Y-c}{d-c}\right)
= \frac{E[(X-a)(Y-c)]}{(b-a)(d-c)} \\
&= \frac{E(XY-Xc-aY+ac)}{(b-a)(d-c)}
= \frac{E(XY)-cE(X)-aE(Y)+ac}{(b-a)(d-c)} \\
Cov(X',Y')
&= E(X'Y')-E(X')E(Y') \\
&= \frac{E(XY)-cE(X)-aE(Y)+ac}{(b-a)(d-c)}
- \frac{E(X)-a}{b-a}
\frac{E(Y)-c}{d-c} \\
&= \frac{[E(XY)-cE(X)-aE(Y)+ac] - [E(X)-a] [E(Y)-c]}{(b-a)(d-c)}\\
&= \frac{[E(XY)-cE(X)-aE(Y)+ac] - [E(X)E(Y)-cE(X)-aE(Y)+ac]}{(b-a)(d-c)}\\
&= \frac{E(XY)-E(X)E(Y)}{(b-a)(d-c)}
= \frac{1}{(b-a)(d-c)} Cov(X,Y).
\end{align*}
$$
So $Cov(X,Y)=0$ if and only $Cov(X',Y')=0$.
=D
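The affine-rescaling identity at the end can be checked numerically with an arbitrary joint distribution on $\{a,b\}\times\{c,d\}$. A sketch in Python with exact fractions; the support points and probabilities below are made up purely for illustration:

```python
from fractions import Fraction as F

a, b, c, d = 2, 5, -1, 3   # arbitrary support points with a < b, c < d
pmf = {(a, c): F(1, 6), (a, d): F(1, 3),
       (b, c): F(1, 4), (b, d): F(1, 4)}   # sums to 1

def E(g):
    """Expectation of g(X, Y) under the joint pmf."""
    return sum(pr * g(x, y) for (x, y), pr in pmf.items())

cov_xy = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)

# Rescale to 0/1 variables: X' = (X - a)/(b - a), Y' = (Y - c)/(d - c).
def xp(x):
    return F(x - a, b - a)

def yp(y):
    return F(y - c, d - c)

cov_xpyp = (E(lambda x, y: xp(x) * yp(y))
            - E(lambda x, y: xp(x)) * E(lambda x, y: yp(y)))

assert cov_xpyp == cov_xy / ((b - a) * (d - c))
```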
|
16,978
|
Does covariance equal to zero implies independence for binary random variables?
|
IN GENERAL:
The criterion for independence is $F(x,y) = F_X(x)F_Y(y)$. Or $$f_{X,Y}(x,y)=f_X(x)\,f_Y(y)\tag 1$$
"If two variables are independent, their covariance is $0.$ But, having
a covariance of $0$ does not imply the variables are independent."
This is nicely explained by Macro here, and in the Wikipedia entry for independence.
$\text {independence} \Rightarrow \text{zero cov}$, yet
$\text{zero cov}\nRightarrow \text{independence}.$
Great example: $X \sim N(0,1)$, and $Y= X^2.$ Covariance is zero (and $\mathbb E(XY)=0$, which is the criterion for orthogonality), yet they are dependent. Credit goes to this post.
IN PARTICULAR (OP problem):
These are Bernoulli rv's, $X$ and $Y$ with probability of success $\Pr(X=1)$, and $\Pr(Y=1)$.
$\begin{align}\mathrm{cov}(X,Y)&=\mathrm E[XY] - \mathrm E[X]\,\mathrm E[Y]\\[2ex]
&\underset{*}{=} \Pr(X=1 \cap Y=1) - \Pr(X=1)\, \Pr(Y=1)\\[2ex]
&\overset{\text{cov}\,=\,0}{\implies} \Pr(X=1 , Y=1) = \Pr (X=1)\,\Pr(Y=1).
\end{align}$
This is equivalent to the condition for independence in Eq. $(1).$
$(*)$:
$$\mathrm E[XY]\quad \underset{**}{=} \quad \displaystyle \sum_{\text{domain X, Y}} \Pr(X=x\cap Y=y)\, x\,y \underset{\neq\,0\text{ iff } x \times y\neq 0}= \Pr(X=1 \cap Y=1).$$
$(**)$: by LOTUS.
As pointed out below, the argument is incomplete without what Dilip Sarwate had pointed out in his comments shortly after the OP appeared. After searching around, I found this proof of the missing part here:
If events $A$ and $B$ are independent, then events $A^c$ and $B$ are independent, and events $ A^c$ and $B^c$ are also independent.
Proof By definition,
$A$ and $B$ are independent $\iff P(A\cap B) = P(A)P(B).$
But $B=(A\cap B) \cup ( A^c \cap B)$, a disjoint union, so $P(B)= P(A\cap B) + P(A^c \cap B)$, which yields:
$\small P(A^c \cap B) = P(B) - P(A\cap B) = P(B) - P(A)\,P(B) = P(B) \left[1 - P(A)\right] = P(B)\,P( A^c).$
Repeat the argument for the events $A^c$ and $B^c,$ this time starting from the statement that $A^c$ and $B$ are independent and taking the complement of $B.$
Similarly, $A$ and $B^c$ are independent events.
So, we have shown already that $$\Pr(X=1 , Y=1) = \Pr (X=1)\,\Pr(Y=1)$$
and the above shows that this implies that
$$\Pr(X=i , Y=j) = \Pr (X=i)\,\Pr(Y=j), ~~i, j \in \{0,1\}$$
that is, the joint pmf factors into the product of marginal pmfs everywhere, not just at $(1,1)$. Hence, uncorrelated Bernoulli
random variables $X$ and $Y$ are also independent random variables.
|
16,979
|
Is it possible to create "parallel sets" plot using R?
|
Here's a version using only base graphics, thanks to Hadley's comment.
(For previous version, see edit history).
parallelset <- function(..., freq, col="gray", border=0, layer,
alpha=0.5, gap.width=0.05) {
p <- data.frame(..., freq, col, border, alpha, stringsAsFactors=FALSE)
n <- nrow(p)
if(missing(layer)) { layer <- 1:n }
p$layer <- layer
np <- ncol(p) - 5
d <- p[ , 1:np, drop=FALSE]
p <- p[ , -c(1:np), drop=FALSE]
p$freq <- with(p, freq/sum(freq))
col <- col2rgb(p$col, alpha=TRUE)
if(!identical(alpha, FALSE)) { col["alpha", ] <- p$alpha*256 }
p$col <- apply(col, 2, function(x) do.call(rgb, c(as.list(x), maxColorValue = 256)))
getp <- function(i, d, f, w=gap.width) {
a <- c(i, (1:ncol(d))[-i])
o <- do.call(order, d[a])
x <- c(0, cumsum(f[o])) * (1-w)
x <- cbind(x[-length(x)], x[-1])
gap <- cumsum( c(0L, diff(as.numeric(d[o,i])) != 0) )
gap <- gap / max(gap) * w
(x + gap)[order(o),]
}
dd <- lapply(seq_along(d), getp, d=d, f=p$freq)
par(mar = c(0, 0, 2, 0) + 0.1, xpd=TRUE )
plot(NULL, type="n",xlim=c(0, 1), ylim=c(np, 1),
xaxt="n", yaxt="n", xaxs="i", yaxs="i", xlab='', ylab='', frame=FALSE)
for(i in rev(order(p$layer)) ) {
for(j in 1:(np-1) )
polygon(c(dd[[j]][i,], rev(dd[[j+1]][i,])), c(j, j, j+1, j+1),
col=p$col[i], border=p$border[i])
}
text(0, seq_along(dd), labels=names(d), adj=c(0,-2), font=2)
for(j in seq_along(dd)) {
ax <- lapply(split(dd[[j]], d[,j]), range)
for(k in seq_along(ax)) {
lines(ax[[k]], c(j, j))
text(ax[[k]][1], j, labels=names(ax)[k], adj=c(0, -0.25))
}
}
}
data(Titanic)
myt <- subset(as.data.frame(Titanic), Age=="Adult",
select=c("Survived","Sex","Class","Freq"))
myt <- within(myt, {
Survived <- factor(Survived, levels=c("Yes","No"))
levels(Class) <- c(paste(c("First", "Second", "Third"), "Class"), "Crew")
color <- ifelse(Survived=="Yes","#008888","#330066")
})
with(myt, parallelset(Survived, Sex, Class, freq=Freq, col=color, alpha=0.2))
|
16,980
|
Is it possible to create "parallel sets" plot using R?
|
Based on @Aaron's code I developed something called an "alluvial diagram". See http://bc.bojanorama.pl/2014/03/alluvial-diagrams/ for examples.
|
16,981
|
Priors that do not become irrelevant with large sample sizes
|
I dispute your main premise.
A prior distribution is your guess (hopefully a good guess, but still a guess).
Then you observe data and see what really happens.
When you have enough observations that contradict your original guess, it is reasonable to change your mind.
What you’re observing strikes me as a feature, not a bug, of Bayesian inference.
|
16,982
|
Priors that do not become irrelevant with large sample sizes
|
The answer to this question centers on its false premise. If I can sum up your question, you are saying the posterior is really far from your prior, but rather than acknowledging that either your prior is wrong or that your likelihood is misspecified, you instead want to know how you can just use a stronger prior to enforce that the posterior is not "too far" from prior... at which point why even use data? Just start with your prior, flip a coin and roll some dice, move your prior by that amount in that direction, and then call it your posterior. From your question it sounds like if you had 2x or 10x the data you would just be asking how to make your prior 2x or 10x stronger to cancel out the data and get the posterior you want. Therefore please fix your model (or acknowledge that currently it is not possible to model this data well enough), but please do not just change your prior to get a predetermined outcome.
|
16,983
|
Priors that do not become irrelevant with large sample sizes
|
I do agree with the previous answers, but if you really want to "fix" the influence of the prior, here are some ideas.
If your prior is based on historical data, you can use a power prior [1] to control the relative influence of your prior on the posterior obtained with new data.
Alternatively, you can also consider weighting the likelihood (power scaling) so that the relative influence of your prior is increased. However, if, for example, you have a Gaussian model, this would be equivalent to increasing the standard deviation of the Gaussian. So in the end, maybe you do need to change your model.
You may also be interested in reading [2], which uses power scaling as a way to diagnose prior sensitivity.
[1] Ibrahim, J. G., Chen, M. H., Gwon, Y., & Chen, F. (2015). The power prior: Theory and applications. Statistics in Medicine, 34(28), 3724–3749. https://doi.org/10.1002/sim.6728
[2] Kallioinen, N., Paananen, T., Bürkner, P.-C., & Vehtari, A. (2021). Detecting and diagnosing prior and likelihood sensitivity with power-scaling. https://arxiv.org/abs/2107.14054v1
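For reference, the power prior of [1] discounts the historical-data likelihood with a parameter $a_0 \in [0, 1]$:
$$\pi(\theta \mid D_0, a_0) \propto L(\theta \mid D_0)^{a_0}\, \pi_0(\theta),$$
where $\pi_0(\theta)$ is the initial prior. Setting $a_0 = 0$ discards the historical data entirely, while $a_0 = 1$ gives it the same weight as new data.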
|
16,984
|
Priors that do not become irrelevant with large sample sizes
|
Have you considered that your expectation is simply wrong, perhaps because of publication bias?
Alternatively, if you are so confident in your beliefs that you're willing to discount the posterior after an analysis of tens of thousands of data points, it seems to me that your specified prior does not truly reflect the strength of your belief. You should probably specify a much stronger prior - if it's strong enough, the posterior wouldn't be dominated by the data.
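To make "strong enough" concrete, consider the conjugate Normal–Normal sketch with known data variance $\sigma^2$, prior $\theta \sim N(\mu_0, \tau^2)$, and $n$ observations with mean $\bar{x}$. The posterior mean is
$$\operatorname{E}[\theta \mid x] = \frac{\mu_0/\tau^2 + n\bar{x}/\sigma^2}{1/\tau^2 + n/\sigma^2},$$
so the data term grows with $n$ and will eventually dominate any fixed $\tau^2$; only by shrinking $\tau^2$ (a genuinely stronger prior) does the prior keep appreciable weight.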
|
16,985
|
Does Wolfram Mathworld make a mistake describing a discrete probability distribution with a probability density function?
|
It is not a mistake
In the formal treatment of probability, via measure theory, a probability density function is a derivative of the probability measure of interest, taken with respect to a "dominating measure" (also called a "reference measure"). For discrete distributions over the integers, the probability mass function is a density function with respect to counting measure. Since a probability mass function is a particular type of probability density function, you will sometimes find references like this that refer to it as a density function, and they are not wrong to refer to it this way.
In ordinary discourse on probability and statistics, one often avoids this terminology, and draws a distinction between "mass functions" (for discrete random variables) and "density functions" (for continuous random variables), in order to distinguish discrete and continuous distributions. In other contexts, where one is stating holistic aspects of probability, it is often better to ignore the distinction and refer to both as "density functions".
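In symbols: if $\mu$ is counting measure on the integers and $p$ is the probability mass function, then
$$P(X \in A) = \int_A p \, d\mu = \sum_{x \in A} p(x),$$
so $p$ is exactly the density (Radon–Nikodym derivative) $dP/d\mu$ with respect to $\mu$.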
|
16,986
|
Does Wolfram Mathworld make a mistake describing a discrete probability distribution with a probability density function?
|
In addition to the more theoretical answer in terms of measure theory, it is also convenient to not distinguish between pmfs and pdfs in statistical programming. For example, R has a wealth of built-in distributions. For each distribution, it has 4 functions. For example, for the normal distribution (from the help file):
dnorm gives the density, pnorm gives the distribution function, qnorm gives the quantile function, and rnorm generates random deviates.
R users rapidly become used to the d,p,q,r prefixes. It would be annoying if you had to do something like drop d and use m for e.g. the binomial distribution. Instead, everything is as an R user would expect:
dbinom gives the density, pbinom gives the distribution function, qbinom gives the quantile function and rbinom generates random deviates.
|
16,987
|
Confused about Autoregressive AR(1) process
|
You have two problems--and one of them is interesting.
Without a noise term, the series is no longer stationary. Its value is increasing asymptotically, but definitely, toward $1:$
ARIMA applies only to stationary models--and these data are obviously not from a stationary model. That's not terribly interesting. What is interesting is that the problem persists even with noise!
What, then, happens when we add just a tiny bit of noise?
It's still obviously not stationary--but the reason is that the initial values are inconsistent with everything that follows.
You need to remove a "burn-in period" during which the simulated values are starting to behave like the rest of the series will. Here's what this one looks like when we strip out the first $n_0=30$ values:
What does arima return?
Coefficients:
ar1 intercept
0.9074 0.9872
s.e. 0.0309 0.0088
$0.9074 \pm 0.0309$ is a great estimate of $\phi=0.9.$
I repeated this process 99 more times, producing 100 estimates of $\phi$ along with their standard errors. Here is a plot of those estimates and crude $90\%$ confidence limits (set at $1.645$ standard errors above and below the estimates):
The horizontal gray line is located at $\phi=0.9$ for reference. The red confidence intervals are those that do not overlap the reference: there are $12$ of them, indicating the confidence level is around $88\%,$ agreeing (within sampling error) with the intended value of $90\%.$ The horizontal black line is the average estimate. It's a little lower than $\phi,$ perhaps because after even $200$ time steps the series still isn't quite stationary. (One also doesn't expect the distribution of the estimate to be symmetric: $1$ is an important boundary and will cause the distribution to be skewed toward the smaller values.)
Here is the same study but with $2000$ time steps in each iteration:
The bias in the estimate has nearly disappeared.
Another solution is to start generating the series at its asymptotic mean value (equal to cnst/(1-phi) in the R code below). But that requires knowing the asymptote, which might be harder to come by in more complex models, so it's good to know about the technique of discarding the initial segment of a simulated series.
BTW, here's a reasonably efficient and compact way to generate these datasets:
phi <- 0.9 # AR(1) coefficient
n <- 200 # Total number of time steps after the initial value
cnst <- 0.1 # Intercept
sigma <- 0.01 # Error standard deviation (rnorm takes an sd, not a variance)
Y <- Reduce(function(y, e) y * phi + e, rnorm(n, cnst, sigma), 0, accumulate=TRUE)
n0 <- which.max(abs(Y) >= quantile(abs(Y), 0.5)) # Estimate where Y levels off
Y <- Y[-seq_len(n0)] # Strip the initial values
plot(Y) # LOOK at Y before doing anything else...
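As a check on the simulation, the stationary mean and variance implied by these parameters are
$$\operatorname{E} Y = \frac{c}{1-\phi} = \frac{0.1}{0.1} = 1, \qquad \operatorname{Var} Y = \frac{\sigma^2}{1-\phi^2} = \frac{0.01^2}{0.19} \approx 5.3 \times 10^{-4},$$
which is why the series levels off near $1$ after the burn-in period.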
|
16,988
|
Confused about Autoregressive AR(1) process
|
The code you have created is not even generating any (pseudo) random outputs, let alone an AR(1) process. If you would like to generate the output of a stationary Gaussian AR(1) process, you can use the function below. This function generates exact output from the process, using the stationary marginal distribution of the process as the starting distribution; this means that the computation does not require any removal of "burn in" iterations. As such, this code should be computationally faster ---and statistically more exact--- than methods that anchor the process to a fixed starting value and then discard burn-in iterations.
GENERATE_NAR1 <- function(n, phi = 0, mu = 0, sigma = 1) {
if (abs(phi) >= 1) { stop('Error: This is not a stationary process --- |phi| >= 1') }
EE <- rnorm(n, mean = 0, sd = sigma);
YY <- rep(0, n);
YY[1] <- mu + EE[1]/sqrt(1-phi^2);
for (t in 2:n) {
YY[t] <- mu + phi*(YY[t-1]-mu) + EE[t]; }
YY; }
Below I will generate a series of $n=10^5$ observable values and then we can use the arima function to fit it to an ARIMA model with specified orders.
set.seed(1);
phi <- 0.9;
mu <- 5;
sigma <- 4;
Y <- GENERATE_NAR1(n = 10^5, phi, mu, sigma);
MODEL <- arima(Y, order = c(1,0,0), include.mean = TRUE);
MODEL;
Call:
arima(x = Y, order = c(1, 0, 0), include.mean = TRUE)
Coefficients:
ar1 intercept
0.8978 4.9099
s.e. 0.0014 0.1242
sigma^2 estimated as 16.11: log likelihood = -280874.1, aic = 561754.2
As you can see, with this many data points, the arima function estimates the true parameters of the AR(1) model to within a reasonable level of accuracy --- the true values are within two standard errors of the estimates. A plot of the first $n = 1,000$ values in the time-series shows that it does not require you to discard any "burn-in" iterations.
plot(Y[1:1000], type = 'l',
main = paste0('Gaussian AR(1) time-series \n (phi = ',
phi, ', mu = ' , mu, ', sigma = ' , sigma, ')'),
xlab = 'Time', ylab = 'Value');
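The initialization in GENERATE_NAR1 works because the stationary marginal distribution of a Gaussian AR(1) process is
$$Y_1 \sim N\!\left(\mu, \frac{\sigma^2}{1-\phi^2}\right),$$
which is exactly what drawing $E_1 \sim N(0, \sigma^2)$ and setting $Y_1 = \mu + E_1/\sqrt{1-\phi^2}$ produces.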
|
16,989
|
Confused about Autoregressive AR(1) process
|
As also pointed out in the comments, it's not an AR(p) process any more. The arima function assumes the following model and fits the coefficients accordingly:
$$y_t=c+\phi y_{t-1}+\epsilon_t$$
It's quite normal that you don't get near the correct $\phi$. Also, after adding a noise term, try increasing the sample size to get more precise estimates.
|
16,990
|
Cost function turning into nan after a certain number of iterations
|
Well, if you get NaN values in your cost function, it means that the input is outside of the function domain. E.g. the logarithm of 0. Or it could be in the domain analytically, but due to numerical errors we get the same problem (e.g. a small value gets rounded to 0).
It has nothing to do with an inability to "settle".
So, you have to determine what the non-allowed input values for your given cost function are. Then, you have to determine why you are getting that input to your cost function. You may have to change the scaling of the input data and the weight initialization. Or you may just need an adaptive learning rate as suggested by Avis, as the cost function landscape may be quite chaotic. Or it could be because of something else, like numerical issues with some layer in your architecture.
It is very difficult to say with deep networks, but I suggest you start looking at the progression of the input values to your cost function (the output of your activation layer), and try to determine a cause.
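To make the domain point concrete, here is a minimal NumPy sketch (the eps floor is an illustrative choice, not a universal constant): a cross-entropy term with a predicted probability that has underflowed to exactly zero blows up, and clipping the input back into the function's domain fixes it.

```python
import numpy as np

p = np.array([1.0, 0.0])          # predicted probabilities; one has underflowed to 0
y = np.array([0.0, 1.0])          # one-hot target

with np.errstate(divide="ignore"):
    raw = -np.sum(y * np.log(p))  # log(0) -> -inf, so the loss is inf

eps = 1e-12                       # illustrative floor
safe = -np.sum(y * np.log(np.clip(p, eps, 1.0)))

print(raw)   # inf
print(safe)  # ~27.6 -- large but finite
```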
|
16,991
|
Cost function turning into nan after a certain number of iterations
|
Here are some of the things you could do:
When using SoftMax cross entropy function:
the SoftMax numerator should never have zero-values due to the exponential. However, due to floating point precision, the numerator could be a very small value, say, exp(-50000), which essentially evaluates to zero.(ref.)
Quick fixes could be to either increase the precision of your model (using 64-bit floats instead of, presumably, 32-bit floats), or just introduce a function that caps your values, so anything at or below zero is made close enough to zero that the computer doesn't freak out. For example, use X = np.log(np.maximum(x, 1e-9)) before going into the softmax.(ref.)
You can use methods like "FastNorm" which improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence.(ref.)
Check weights initialization: If unsure, use Xavier or He initialization. Also, your initialization might be leading you to a bad local minimum, so try a different initialization and see if it helps.
Decrease the learning rate, especially if you are getting NaNs in the first 100 iterations.
NaNs can arise from division by zero or natural log of zero or negative number.
Try evaluating your network layer by layer and see where the NaNs appear.
Some of the suggestions were taken from the references in two great posts on StackOverflow and KDnuggets.
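The max-subtraction trick behind many of these fixes can be sketched in a few lines (illustrative, not tied to any particular framework): subtracting the largest logit before exponentiating keeps every exponent at or below zero, so nothing overflows and the numerator cannot be all zeros.

```python
import numpy as np

def stable_softmax(z):
    """Softmax via the max-subtraction trick: shift logits so max(z) maps to exp(0)=1."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)

def naive_softmax(z):
    e = np.exp(z)                 # exp of large logits overflows to inf
    return e / np.sum(e)          # inf / inf -> nan

z = np.array([1000.0, 1001.0, 1002.0])
with np.errstate(over="ignore", invalid="ignore"):
    print(naive_softmax(z))      # [nan nan nan]
print(stable_softmax(z))         # ~[0.090 0.245 0.665], sums to 1
```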
|
16,992
|
Cost function turning into nan after a certain number of iterations
|
Possible reasons:
Gradient blow up
Your input contains nan (or unexpected values)
Loss function not implemented properly
Numerical instability in the Deep learning framework
You can check whether it always becomes nan when fed with a particular input or is it completely random.
Usual practice is to reduce the learning rate in step manner after every few iterations.
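A minimal sketch of such a step schedule (the drop factor and interval here are illustrative choices):

```python
def step_decay(initial_lr, iteration, drop=0.5, every=1000):
    """Multiply the learning rate by `drop` once every `every` iterations."""
    return initial_lr * drop ** (iteration // every)

print(step_decay(0.1, 0))     # 0.1
print(step_decay(0.1, 1000))  # 0.05
print(step_decay(0.1, 2500))  # 0.025
```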
|
16,993
|
How to estimate baseline hazard function in Cox Model with R
|
A Cox model was explicitly designed to be able to estimate the hazard ratios without having to estimate the baseline hazard function. This is a strength and a weakness. The strength is that you cannot make errors in functions you don't estimate. This is a real strength and is the reason why people refer to it as "semi-parametric" and is to a large extent responsible for its popularity. However, it is also a real weakness, in that once you want to know something other than the hazard ratio, you will often require the baseline hazard function and that defeats the very purpose of a Cox model.
So I tend to use Cox models only when I am interested in hazard ratios and nothing else. If I want to know other things, I typically move on to other models like the ones discussed here:
http://www.stata.com/bookstore/flexible-parametric-survival-analysis-stata/
|
16,994
|
How to estimate baseline hazard function in Cox Model with R
|
The baseline hazard function can be estimated in R using the "basehaz" function. The "help" file states that it is the "predicted survival" function which it's clearly not. If one inspects the code, it's clearly the cumulative hazard function from a survfit object. For further silliness, the default setting is centered=TRUE which a) is not a baseline hazard function (as the name would suggest), and b) employs prediction-at-the-means which is wildly discredited as valid in any practical sense.
And to your earlier point: yes this function makes use of the step function. You can transform that output to a hazard function using smoothing. The worst part of it all, what's the uncertainty interval for that prediction? You may get a Fields medal if you can derive it. I don't think we even know whether bootstrapping works or not.
As an example:
library(survival)

set.seed(1234)
x <- rweibull(1000, 2, 3)                    # shape 2, scale 3: true hazard h(t) = 2t/9
coxfit <- coxph(Surv(x) ~ 1)
bhest <- basehaz(coxfit)                     # cumulative hazard, despite the name
haz <- diff(bhest[, 1])/diff(bhest[, 2])     # discrete hazard: dH/dt
time <- (bhest[-1, 2] + bhest[-1000, 2])/2   # interval midpoints
b <- 3^-2                                    # scale^(-shape)
curve(2*b*x, from=0, to=max(x), xlab='Survival time', ylab='Weibull hazard')
points(t <- bhest[-1, 2], h <- diff(bhest[, 1])/diff(bhest[, 2]), col='grey')
smooth <- loess.smooth(t, h)
lines(smooth$x, smooth$y, col='red')
legend('topright', lty=c(1,1,0), col=c('black', 'red', 'grey'), pch=c(NA,NA,1), c('Actual hazard fun', 'Smoothed hazard fun', 'Stepped discrete-time hazards'), bg='white')
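The diff-then-smooth idea can also be checked numerically outside R. For a Weibull with shape 2 and scale 3 the cumulative hazard is H(t) = (t/3)^2 and the hazard is h(t) = 2t/9, so finite differences of H should recover h. A NumPy sketch (not the survival-package code):

```python
import numpy as np

t = np.linspace(0.01, 6, 200)
H = (t / 3) ** 2                  # Weibull(shape=2, scale=3) cumulative hazard
h_est = np.diff(H) / np.diff(t)   # discrete hazard, like diff(bhest)/diff(time) above
t_mid = (t[:-1] + t[1:]) / 2      # midpoints of each interval
h_true = 2 * t_mid / 9            # the true hazard at those midpoints
```

For a quadratic H the finite-difference estimate at the interval midpoints is exact, which is why the grey points in the R plot scatter around the true line only because of sampling noise, not because of the differencing itself.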
|
16,995
|
Linear Mixed Effects Models
|
One major benefit of mixed-effects models is that they don't assume independence amongst observations; there can be correlated observations within a unit or cluster.
This is covered concisely in "Modern Applied Statistics with S" (MASS) in the first section of chapter 10 on "Random and Mixed Effects". V&R walk through an example with gasoline data comparing ANOVA and lme in that section, so it's a good overview. The R function to use is lme in the nlme package.
The model formulation is based on Laird and Ware (1982), so you can refer to that as a primary source although it's certainly not good for an introduction.
Laird, N.M. and Ware, J.H. (1982) "Random-Effects Models for Longitudinal Data", Biometrics, 38, 963–974.
Venables, W.N. and Ripley, B.D. (2002) "Modern Applied Statistics with S", 4th Edition, Springer-Verlag.
You can also have a look at the "Linear Mixed Models" (PDF) appendix to John Fox's "An R and S-PLUS Companion to Applied Regression". And this lecture by Roger Levy (PDF) discusses mixed effects models w.r.t. a multivariate normal distribution.
|
16,996
|
Linear Mixed Effects Models
|
A very good article explaining the general approach of LMMs and their advantage over ANOVA is:
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390-412.
Linear mixed-effects models (LMMs) generalize regression models to have residual-like components, random effects, at the level of, e.g., people or items and not only at the level of individual observations. The models are very flexible, for instance allowing the modeling of varying slopes and intercepts.
LMMs work by using a likelihood function of some kind, the probability of your data given some parameter, and a method for maximizing this (Maximum Likelihood Estimation; MLE) by fiddling around with the parameters. MLE is a very general technique allowing lots of different models, e.g., those for binary and count data, to be fitted to data, and is explained in a number of places, e.g.,
Agresti, A. (2007). An Introduction to Categorical Data Analysis (2nd Edition). John Wiley & Sons.
LMMs, however, can't deal with non-Gaussian data like binary data or counts; for that you need Generalized Linear Mixed-effects Models (GLMMs). One way to understand these is first to look into GLMs; also see Agresti (2007).
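The "fiddling around with the parameters" in MLE can be made concrete with a toy example: find the Gaussian mean that maximizes the likelihood of a small sample by brute-force search. This is only a sketch of the MLE idea; the data and grid are made up, and a real LMM likelihood is far more complex:

```python
import numpy as np

data = np.array([1.2, 0.8, 1.5, 1.1, 0.9])

def neg_log_lik(mu, sigma=1.0):
    # Negative Gaussian log-likelihood of the sample for a candidate mean mu.
    return np.sum(0.5 * ((data - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi)))

# "Fiddle" with mu over a grid and keep the value that minimizes the negative log-likelihood.
grid = np.linspace(0, 2, 2001)
mu_hat = grid[np.argmin([neg_log_lik(m) for m in grid])]
# For a Gaussian, the MLE of the mean coincides with the sample mean.
```

Real fitting software replaces the grid with a numerical optimizer, but the principle (pick the parameters under which the observed data are most probable) is the same.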
|
16,997
|
Linear Mixed Effects Models
|
The main advantage of LME for analysing accuracy data is the ability to account for a series of random effects. In psychology experiments, researchers usually aggregate over items and/or participants. Not only are people different from each other, but items also differ (some words might be more distinctive or memorable, for instance). Ignoring these sources of variability usually leads to underestimations of accuracy (for instance, lower d' values). Although the participant aggregation issue can somehow be dealt with by individual estimation, the item effects are still there, and are commonly larger than participant effects. LME not only allows you to tackle both random effects simultaneously, but also to add specific additional predictor variables (age, education level, word length, etc.) to them.
A really good reference for LMEs, especially focused in the fields of linguistics and experimental psychology, is
Analyzing Linguistic Data: A Practical Introduction to Statistics using R
cheers
|
16,998
|
Do all observations arise from probability distributions?
|
Statistics is concerned with phenomena that can be considered random. Even if you are studying a deterministic process, measurement noise can make the observations random. We can simplify many problems by using simple models that treat all the unobserved factors as “random noise”. For example, the linear regression model
$$
\mathsf{height}_i = \alpha + \beta \,\mathsf{age}_i + \varepsilon_i
$$
does say that we model height as a function of age and consider whatever else could influence it as “random noise”. It doesn't say that we consider height completely “random”, meaning “chaotic”, “unpredictable”, etc. For another example, if you toss a coin, the outcome is deterministic and depends only on the laws of physics, but it is influenced by so many factors contributing to its chaotic nature that we may as well treat it as a random process.
If you have a deterministic process and noiseless measurements of all the relevant data, you wouldn't need statistics for it. You would need other mathematics, for example, calculus, but not statistics. If you need to consider the noise and need to assume randomness, you do so. Nothing “arises” from probability distributions, they are only mathematical tools we use to model real-world phenomena.
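The height-vs-age model above can be simulated to see how the “random noise” term behaves and how the coefficients are recovered from noisy data; the parameter values (alpha = 80, beta = 6, noise sd = 5) are made-up illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
age = rng.uniform(5, 15, 500)                    # 500 children aged 5-15
height = 80 + 6 * age + rng.normal(0, 5, 500)    # height_i = alpha + beta*age_i + eps_i

# Ordinary least squares recovers the coefficients despite the noise.
beta_hat, alpha_hat = np.polyfit(age, height, 1)
```

Despite never observing the individual noise terms, the fit lands close to the true alpha and beta, which is exactly why lumping everything else into “random noise” is such a useful simplification.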
|
16,999
|
Do all observations arise from probability distributions?
|
Yes, would be the shortest answer. You referred to physics. Physics always discloses measurement errors or precision one way or another. The errors were always a part of the practice of this science. What guys like Pearson did was to treat the errors as random variables, and it's nowadays common practice to follow this approach. Hence, you could say even measurements of deterministic processes are in fact samples from distributions.
Take a look at the gravitational constant G here: notice how its uncertainty is given along with the value. Note, this is not an inherently random quantity, it is a constant! Also read the definition of uncertainty in the NIST handbook; it is described in terms of probability distributions.
Here is a snapshot from a recent physics paper:
Notice $\pm 0.03$, the convention for reporting measurement uncertainty. Physicists sometimes omit it, and when they do it means that all reported digits are significant. For instance, if you see a value “127.010” it means that the uncertainty is around 0.0005, i.e. the last 0 cannot be skipped, because the authors are convinced that it is in fact zero. This is quite different from how quantities are reported in non-scientific contexts, where uncertainty usually goes undisclosed.
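The significant-digits convention described above can be written as a tiny helper. This is an illustrative function, not anything from the NIST handbook; it simply takes the implied uncertainty to be half of the last reported decimal place:

```python
def implied_uncertainty(reported: str) -> float:
    """Half of the last reported decimal place, e.g. '127.010' -> 0.0005."""
    if '.' in reported:
        decimals = len(reported.split('.')[1])
    else:
        decimals = 0
    return 0.5 * 10 ** (-decimals)
```

So under this convention “127.010” and “127.01” are different claims: the trailing zero tightens the implied uncertainty by a factor of ten.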
|
17,000
|
Do all observations arise from probability distributions?
|
A distribution can be thought of as a data generating function.
When we do inferential statistics, we collect a sample of observations, and then we try to use that sample to figure out the unknown distribution that generated that data.
The reason we want to know the distribution is because we might want to use a model to predict future observations. If we can figure out a good approximation to the true distribution then we can be sure that the future predictions will approximately follow the distribution and we will have a good idea of how accurate our predictions will be.
The distribution doesn't have to be probabilistic. It will still generate data that you can observe, even if the distribution is completely deterministic.
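The data-generating-function idea, including the degenerate deterministic case, can be illustrated in a few lines of NumPy; the parameters (mean 10, sd 2) are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
noisy = rng.normal(loc=10.0, scale=2.0, size=10_000)   # a stochastic generator
deterministic = np.full(10_000, 10.0)                  # a point mass also "generates" data

# Inference: recover the generator's parameter from the sample it produced.
mu_hat = noisy.mean()
```

With enough observations the sample mean lands close to the generator's true mean, which is the sense in which a good approximation to the distribution lets you quantify how accurate future predictions will be.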
|