44,001
|
What does it mean, when, three standard deviations away from the mean, I land outside of the minimum or maximum value?
|
“Three st.dev.s ($3\sqrt{\sigma^2}$) include 99.7% of the data” refers to Gaussian distributions. For distributions in general, Chebyshev's inequality puts a lower bound on the amount of probability mass within $k$ standard deviations of the mean. But is there an upper bound?
With a Bernoulli distribution with $p = .5$, $\sigma$ is $.5$. The mean $\mu$ is also $.5$, which means that 100% of the distribution is within $1\sigma$ of $\mu$. What about smaller numbers of standard deviations?
Note: the following, for simplicity, is an argument regarding distributions with $\mu = 0$. Its extension to distributions with arbitrary $\mu$ is reasonably trivial.
Given any positive $\varepsilon$ and $M$, there is a distribution such that you have $\varepsilon/2$ probability mass below $-M$ and $\varepsilon/2$ probability mass above $M$. That is,
$p(\lvert{x}\rvert \gt M) = \varepsilon$
All else being equal, as $M \to \infty$, then $\sigma \to \infty$. However, for any fixed positive $N$, once $M$ exceeds $N$, the probability mass within $N$ of zero is always $1-\varepsilon$, regardless of $M$. Thus, if we look at the relative distance from zero (that is, the number of standard deviations, $\frac{\lvert{x}\rvert}{\sigma}$), then as $M \to \infty$ we have $n \to \infty$, where $n$ is the largest integer such that "$1-\varepsilon$ of the probability is within $n\sigma$ of $\mu$" holds.
This shows that for any positive numbers $\varepsilon$ and $n$, there is some distribution such that the probability of being more than $n\sigma$ from zero is less than $\varepsilon$. So, for instance, if you want a probability of 99.999% of being within $.000001\sigma$ of zero, there is a distribution that satisfies that.
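To make the construction concrete, here is a quick Python sketch. The specific three-point distribution is my illustration, not part of the original argument: put mass $1-\varepsilon$ at zero and $\varepsilon/2$ at each of $\pm M$, so $\sigma = M\sqrt{\varepsilon}$, and almost all of the mass sits a vanishing number of standard deviations from the mean.

```python
import math

def three_point_sigma(M, eps):
    """SD of the distribution P(X=0)=1-eps, P(X=M)=P(X=-M)=eps/2 (mean 0)."""
    return math.sqrt(eps * M**2)

def mass_within(n, M, eps):
    """Probability mass strictly within n standard deviations of the mean."""
    sigma = three_point_sigma(M, eps)
    return 1 - eps if n * sigma < M else 1.0

eps, M = 1e-5, 1000.0
print(three_point_sigma(M, eps))   # sigma = M * sqrt(eps), about 3.16 here
print(mass_within(1e-6, M, eps))   # 1 - eps of the mass is within .000001 sigma of the mean
```

Making $M$ larger only inflates $\sigma$; the $1-\varepsilon$ lump at zero then sits at an ever smaller multiple of $\sigma$.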
|
44,002
|
Can I use lasso when it is not a high dimensional setting?
|
There's nothing that suggests you need a number of predictors ($p$) as large as 200 or a sample size ($n$) as large as 500, let alone larger. (You might find it surprising to read some of the early papers on both methods.)
You can very successfully use regularization methods like ridge regression and lasso on problems with only a few predictors -- the benefits of regularization are still present (indeed the illustration here shows ridge regression can be useful with two predictors, and one can make an argument for considering it even with a single predictor.)
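As a small numpy sketch of that point (the simulated data and the penalty value are my choices, not from the original answer): closed-form ridge regression works perfectly well with just two predictors, and the penalized fit is guaranteed to have a no-larger coefficient norm than OLS.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimate (X assumed centered; no intercept)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] + rng.normal(size=n)   # only the first predictor matters

b_ols = ridge(X, y, 0.0)    # lam = 0 recovers ordinary least squares
b_ridge = ridge(X, y, 5.0)  # penalized fit: coefficients shrunk toward zero
print(b_ols, b_ridge)
```

The shrinkage benefit (lower variance of the estimates) does not depend on $p$ being large.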
|
44,003
|
Can I use lasso when it is not a high dimensional setting?
|
Whether a given setting is high-dimensional or not depends on both the number of samples you have and the number of dimensions. Increasing the number of dimensions requires exponentially more data to "fill up" the feature space - look up the curse of dimensionality.
200 predictors for 500 observations is a huge number of predictors.
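The "exponentially more data" point can be made numerically (a toy Python illustration; the 10 bins per axis is an arbitrary choice of mine): a grid over the feature space has $k^d$ cells, so 500 observations cannot begin to cover 200 dimensions.

```python
# Cells in a grid with 10 bins per axis: 10**d
for d in (1, 2, 5, 10, 200):
    print(d, 10**d)

# With n = 500 observations, the fraction of cells that can even be touched:
n = 500
print(n / 10**200)   # astronomically small
```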
|
44,004
|
Can I use lasso when it is not a high dimensional setting?
|
I suppose you are talking about the setting where $p \approx n$ or $p > n$ (as high-dimensional). There, lasso has the additional advantage of solving the singularity problem that occurs in that setting, which was a prior motivation for developing regularisation (that's why it is much used in high dimensions). More on this here. As for your case, apart from the above advantage, lasso retains its other advantages mentioned in other answers, such as reduction of model variance, subset selection, etc.
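The singularity point is easy to verify (a small numpy sketch of my own, not from the original answer): with $p > n$, $X^\top X$ has rank at most $n$ and cannot be inverted, while adding a ridge-style penalty term restores full rank and a unique solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 8                       # p > n
X = rng.normal(size=(n, p))
XtX = X.T @ X

print(np.linalg.matrix_rank(XtX))                     # at most n = 5 < p: singular
print(np.linalg.matrix_rank(XtX + 0.1 * np.eye(p)))   # penalty makes it full rank (p = 8)
```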
|
44,005
|
Magnitude of standardized coefficients (beta) in multiple linear regression
|
It's never easy telling your professor that they are wrong.
Standardized coefficients can be greater than 1.00, as that article explains and as is easy to demonstrate. Whether they should be excluded depends on why they happened - but probably not.
They are a sign that you have some pretty serious collinearity. One case where they often occur is when you have non-linear effects, such as when $x$ and $x^2$ are included as predictors in a model.
Here's a quick demonstration:
library(lm.beta)  # lm.beta() lives in the 'lm.beta' package (QuantPsyc has a similar function)
data(cars)
cars$speed2 <- cars$speed^2
cars$speed3 <- cars$speed^3
fit1 <- lm(dist ~ speed, data=cars)
fit2 <- lm(dist ~ speed + speed2, data=cars)
fit3 <- lm(dist ~ speed + speed2 + speed3, data=cars)
summary(fit1)
summary(fit2)
summary(fit3)
lm.beta(fit1)
lm.beta(fit2)
lm.beta(fit3)
Final bit of output:
> lm.beta(fit3)
speed speed2 speed3
1.395526 -2.212406 1.681041
Or if you prefer you can standardize the variables first:
zcars <- as.data.frame(rapply(cars, scale, how="list"))
fit3 <- lm(dist ~ speed + speed2 + speed3, data=zcars)
summary(fit3)
Call:
lm(formula = dist ~ speed + speed2 + speed3, data = zcars)
Residuals:
Min 1Q Median 3Q Max
-1.03496 -0.37258 -0.08659 0.27456 1.73426
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 7.440e-16 8.344e-02 0.000 1.000
speed 1.396e+00 1.396e+00 1.000 0.323
speed2 -2.212e+00 3.163e+00 -0.699 0.488
speed3 1.681e+00 1.853e+00 0.907 0.369
Residual standard error: 0.59 on 46 degrees of freedom
Multiple R-squared: 0.6732, Adjusted R-squared: 0.6519
F-statistic: 31.58 on 3 and 46 DF, p-value: 3.074e-11
You don't need to do it with lm(), you can do it with matrix algebra if you prefer:
library(MASS)  # for ginv()
Rxx <- cor(cars)[c(1, 3, 4), c(1, 3, 4)]
Rxy <- cor(cars)[2, c(1, 3, 4)]
B <- ginv(Rxx) %*% Rxy
B
[,1]
[1,] 1.395526
[2,] -2.212406
[3,] 1.681041
|
44,006
|
Magnitude of standardized coefficients (beta) in multiple linear regression
|
This is probably a matter of definitions. Does a "standardized coefficient" refer to standardizing only the predictor variables, or the response variable as well? I have seen both used to compute "standardized coefficients". Even then, there is more than one way to standardize.
If you divide both the predictor and response variable by their standard deviations (common way to standardize) and fit the regression (with only a single predictor/a single slope coefficient) then it is mathematically impossible to see a coefficient outside of the -1 to 1 range (since the slope will be the same as the correlation). But if you don't standardize the response variable then it would be easy to see an estimated coefficient outside of -1 to 1 depending on the scale of the response variable.
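The single-predictor fact is easy to check numerically (a numpy sketch with simulated data, my illustration): after dividing both variables by their standard deviations, the least-squares slope equals the correlation, so it cannot leave $[-1, 1]$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)

zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
slope = np.polyfit(zx, zy, 1)[0]   # least-squares slope on standardized data
r = np.corrcoef(x, y)[0, 1]
print(slope, r)                    # identical up to floating-point error
```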
With multiple predictors, an unusually large standardized coefficient could be a sign of multicollinearity, which is probably why some sources suggest dropping those variables.
I expect that the differences between what some sources say are possible and what you observe from others is due to the difference in definitions.
|
44,007
|
Magnitude of standardized coefficients (beta) in multiple linear regression
|
A standardized beta weight greater than one is a sign of suppression, especially cooperative suppression. Such suppression increases the predictive value of the predictors and thus is of potentially great value. See http://core.ecu.edu/psyc/wuenschk/MV/multReg/Suppress.docx
|
44,008
|
Given the advancements in statistical testing, can estimating correlations be an end in itself?
|
It is true, as @lejohn said, that if all you have is a hammer, everything looks like a nail.
It is also true, though, that if all you have is a nail, then you might only need a hammer!
The thing to do is to define your substantive question, whether it be from market research, psychology, physics or whatever. Then investigate methods, probably not on your own. The method to solve your problem MIGHT be correlation. It might be something else that is very simple. But it might not.
|
44,009
|
Given the advancements in statistical testing, can estimating correlations be an end in itself?
|
For me the answer to your question is no.
I do not think that a given method or technique can be an end in itself. If you have some data at hand or even before you start collecting data, you should ask yourself what are the problems you want to tackle, or what are the questions you would like to answer. When you have the data and the question you can start thinking about techniques.
If you are focussing on one particular technique you are likely to hide a lot from yourself. In that case, the questions you will answer or would like to answer are very much conditioned by the technique you intend to use. Think a lot about the following proverb:
If all you have is a hammer, everything looks like a nail
In addition I would like to note the following. There are indeed a lot of fancy techniques out there. However, the mere fact of using a fancy and sophisticated technique does not by itself validate a statistical analysis. Or to phrase it differently, a fancy technique is no substitute for a convincing empirical strategy.
|
44,010
|
Given the advancements in statistical testing, can estimating correlations be an end in itself?
|
Plotting your data should never be overlooked. In this example, the correlation between X and Y is 0, but surely the two variables are related.
| x x
| x x
| x x
| x x
Y| x x
| x x
| x x
| x x x
|_______________________________
X
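The pattern sketched above is essentially $y = x^2$ over a symmetric range; a quick numpy check (my illustration) confirms the correlation is zero despite the perfect deterministic relationship.

```python
import numpy as np

x = np.linspace(-3, 3, 101)   # symmetric around 0
y = x**2                      # y is perfectly determined by x
r = np.corrcoef(x, y)[0, 1]
print(r)                      # ~0: cov(x, x^2) = E[x^3] = 0 by symmetry
```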
|
44,011
|
Given the advancements in statistical testing, can estimating correlations be an end in itself?
|
I agree with the others who have posted in this thread but have one point to add: Simple methods -- like correlation -- are more justifiable when you can be more sure that there aren't more complicated things going on.
In experimental work where you have used randomization to take into account potentially complicating observed and unobserved variables, you'll get much more mileage out of a simple correlation than you will out of observational data where you've got all kinds of complicated interplay between the variables you've measured and the variables you haven't. I'm not sure what kind of market research data you have. If it's observational, I'd be even more worried about relying only on correlations.
That said, simple methods like correlation play a very important role. What I try to do in most of my work is start out with simple relationships like correlations, t-tests, and chi-squared tests that establish the relationship in terms that people are more likely to understand intuitively. Only after I've built a substantive understanding, I'll present more complicated models. At that point, it's about addressing threats and creating better estimates than it is about making the strong substantive point.
|
44,012
|
Why is the Breusch-Pagan test significant on simulated data designed not to be heteroscedastic?
|
No, the data are not heteroscedastic (by way of how you simulated them). Did you notice the 0 degrees of freedom of the test? That is a hint that something is going wrong here. The B-P test takes the squared residuals from the model and tests whether the predictors in the model (or any other predictors you specify) can account for substantial amounts of variability in these values. Since you only have the intercept in the model, it cannot account for any variability by definition.
Take a look at: http://en.wikipedia.org/wiki/Breusch-Pagan_test
Also, make sure you read help(bptest). That should help to clarify things.
One thing that is going wrong here is that the bptest() function apparently does not check for this degenerate case and ends up reporting a tiny p-value. In fact, if you look carefully at the code underlying the bptest() function, essentially this is happening:
format.pval(pchisq(0,0), digits=4)
which gives "< 2.2e-16". So, pchisq(0,0) returns 0 and that is turned into "< 2.2e-16" by format.pval(). In a way, that is all correct, but it would probably help to test for zero dfs in bptest() to avoid this sort of confusion.
EDIT
There is still lots of confusion concerning this question. Maybe it helps to really show what the B-P test actually does. Here is an example. First, let's simulate some data that are homoscedastic. Then we fit a regression model with two predictors. And then we carry out the B-P test with the bptest() function.
library(lmtest)
n <- 100
x1i <- rnorm(n)
x2i <- rnorm(n)
yi <- rnorm(n)
mod <- lm(yi ~ x1i + x2i)
bptest(mod)
So, what is really happening? First, take the squared residuals based on the regression model. Then take $n \times R^2$ when regressing these squared residuals on the predictors that were included in the original model (note that the bptest() function uses the same predictors as in the original model, but one can also use other predictors here if one suspects that the heteroscedasticity is a function of other variables). That is the test statistic for the B-P test. Under the null hypothesis of homoscedasticity, this test statistic follows a chi-square distribution with degrees of freedom equal to the number of predictors used in the test (not counting the intercept). So, let's see if we can get the same results:
e2 <- resid(mod)^2
bp <- summary(lm(e2 ~ x1i + x2i))$r.squared * n
bp
pchisq(bp, df=2, lower.tail=FALSE)
Yep, that works. By chance, the test above may turn out to be significant (which is a Type I error since the data simulated are homoscedastic), but in most cases it will be non-significant.
|
Why is the Breusch-Pagan test significant on simulated data designed not to be heteroscedastic?
|
No, the data are not heteroscedastic (by way of how you simulated them). Did you notice the 0 degrees of freedom of the test? That is a hint that something is going wrong here. The B-P test takes the
|
Why is the Breusch-Pagan test significant on simulated data designed not to be heteroscedastic?
No, the data are not heteroscedastic (by way of how you simulated them). Did you notice the 0 degrees of freedom of the test? That is a hint that something is going wrong here. The B-P test takes the squared residuals from the model and tests whether the predictors in the model (or any other predictors you specify) can account for substantial amounts of variability in these values. Since you only have the intercept in the model, it cannot account for any variability by definition.
Take a look at: http://en.wikipedia.org/wiki/Breusch-Pagan_test
Also, make sure you read help(bptest). That should help to clarify things.
One thing that is going wrong here is that the bptest() function apparently does not test for this errant case and happens to throw out a tiny p-value. In fact, if you look carefully at the code underlying the bptest() function, essentially this is happening:
format.pval(pchisq(0,0), digits=4)
which gives "< 2.2e-16". So, pchisq(0,0) returns 0 and that is turned into "< 2.2e-16" by format.pval(). In a way, that is all correct, but it would probably help to test for zero dfs in bptest() to avoid this sort of confusion.
EDIT
There is still lots of confusion concerning this question. Maybe it helps to really show what the B-P test actually does. Here is an example. First, let's simulate some data that are homoscedastic. Then we fit a regression model with two predictors. And then we carry out the B-P test with the bptest() function.
library(lmtest)
n <- 100
x1i <- rnorm(n)
x2i <- rnorm(n)
yi <- rnorm(n)
mod <- lm(yi ~ x1i + x2i)
bptest(mod)
So, what is really happening? First, take the squared residuals based on the regression model. Then take $n \times R^2$ when regressing these squared residuals on the predictors that were included in the original model (note that the bptest() function uses the same predictors as in the original model, but one can also use other predictors here if one suspects that the heteroscedasticity is a function of other variables). That is the test statistic for the B-P test. Under the null hypothesis of homoscedasticity, this test statistic follows a chi-square distribution with degrees of freedom equal to the number of predictors used in the test (not counting the intercept). So, let's see if we can get the same results:
e2 <- resid(mod)^2
bp <- summary(lm(e2 ~ x1i + x2i))$r.squared * n
bp
pchisq(bp, df=2, lower.tail=FALSE)
Yep, that works. By chance, the test above may turn out to be significant (which is a Type I error since the data simulated are homoscedastic), but in most cases it will be non-significant.
|
44,013
|
Why is the Breusch-Pagan test significant on simulated data designed not to be heteroscedastic?
|
The results are not meaningful without some predictor (note df=0). Heteroscedastic means that the variance is not constant, but not constant with respect to what? Perhaps you have in mind the index (order of measurement)? Then you should do
y <- rnorm(1000)
x <- 1:1000
mod <- lm(y~x)
bptest(mod) # I get p=0.59
If you just have a vector of numbers, there's not a whole lot of meaning to the question "Is the variance constant?" For example, consider a mixture of two normal distributions with different variances:
v <- sample(c(1,10), 100, replace=TRUE)
y <- rnorm(100, 0, v)
$\text{var}(y|v)$ is not constant, but depends on $v$. But unconditionally, $\text{var}(y)$ is just a number.
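To make the last point concrete, here is a small sketch (not part of the original example) showing that the B-P test does flag the non-constant variance once `v` is supplied as something to condition on:

```r
# Continuing the mixture example above: with v as a predictor, the B-P
# test has something to condition on and detects the heteroscedasticity.
library(lmtest)
set.seed(1)
v <- sample(c(1, 10), 100, replace = TRUE)
y <- rnorm(100, 0, v)
mod <- lm(y ~ v)
bptest(mod)  # typically a very small p-value here
```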
|
44,014
|
Why is the Breusch-Pagan test significant on simulated data designed not to be heteroscedastic?
|
:Dail To test for non-constant variance one must understand the hypotheses behind the popular statistical tests. You need to follow the recipe, i.e., the tests that I outlined in How to check if the volatility is stationary?
to fully verify that a series can't be proven to have non-constant variance. All six of the tests that I outlined must fail to reject the null hypothesis of constant variance. Rejection by any one of the 6 tests suggests that the error variance is indeed non-constant.
|
44,015
|
Free Dataset Resources? [duplicate]
|
Amazon has free Public Data sets for use with EC2.
http://aws.amazon.com/publicdatasets/
Here's a list: http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=243
|
44,016
|
Free Dataset Resources? [duplicate]
|
I really like the FRED, from the St. Louis Fed (economics data). You can chart the series or more than one series, you can do some transformations to your data and chart it, and the NBER recessions are shaded.
|
44,017
|
Free Dataset Resources? [duplicate]
|
For time series data, try the Time Series Data Library.
|
44,018
|
Free Dataset Resources? [duplicate]
|
http://infochimps.org/ - is a good resource for free data sets.
|
44,019
|
Free Dataset Resources? [duplicate]
|
For governmental data:
US: http://www.data.gov/
World: http://www.guardian.co.uk/world-government-data
|
44,020
|
Are there any statistics to see if a categorical variable produces good segments within a scatter plot? [closed]
|
In cluster analysis, the Silhouette coefficient (SC; or Average Silhouette Width) is a distance-based statistic that measures the quality of a clustering, i.e., to what extent the objects are closer to other objects in the same class than to the closest class to which they don't belong.
This can also be computed for situations as yours in which there is a given grouping; for these data probably the Euclidean distance makes sense.
One qualification is that clusterings found by a cluster analysis method (for which the Silhouette was originally meant) tend to be better separated than data from underlying groupings that have a fairly large variation. Therefore I'd recommend contrasting the SC obtained for your categories (which may look disappointingly low to people who know typical values in cluster analysis) with a permutation test approach, i.e., simulate 1000 (say) data sets in which you randomly reshuffle the group labels, compute the SC for all of these, and look at the extent (measured in standard deviations of the permutation results, say) to which the SC in your data is "significantly" larger.
The webpage also mentions a Simplified Silhouette that comes with less computational effort.
Sleeping over this, I realised that I should also mention another classical cluster validity index, the Calinski-Harabasz index (CH), available in R here. It can once more be calibrated (or a statistical test be run) using the permutation principle. More than the SC, this is based on the standard statistics characterising the Gaussian distribution, namely mean vector and sums of squares, so it will be appropriate for within-group distributions that are not too far from the Gaussian. It is based on (multivariate) Analysis of Variance logic. In fact, as @Stephan Kolassa correctly noted, both the SC and the CH will reward classes with large within-class homogeneity, whereas (potentially nonlinear) classes with larger within-class variation may not be assessed as good.
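A sketch of the permutation calibration described above, using silhouette() from the cluster package (the data below are simulated stand-ins for your two plotted variables and category labels):

```r
library(cluster)

set.seed(1)
X <- rbind(matrix(rnorm(100), ncol = 2),            # group 1
           matrix(rnorm(100, mean = 2), ncol = 2))  # group 2
grp <- rep(1:2, each = 50)

d <- dist(X)  # Euclidean distances, as suggested above
sc.obs <- mean(silhouette(grp, d)[, "sil_width"])

# Reference distribution: SC under randomly reshuffled labels
sc.perm <- replicate(1000, mean(silhouette(sample(grp), d)[, "sil_width"]))

# Observed SC expressed in permutation standard deviations
(sc.obs - mean(sc.perm)) / sd(sc.perm)
```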
|
44,021
|
Are there any statistics to see if a categorical variable produces good segments within a scatter plot? [closed]
|
This is an interesting and large question and no answer is likely to seem complete.
You can take the question further graphically and you can take it further numerically. Existing methods do help and so I see little or no call to invent methods ad hoc.
Graphics
Your first plot already includes ellipses fitted somehow and indeed the extent to which those ellipses do or do not overlap gives a graphical handle on the question.
A once fashionable and in my view unduly neglected method plots convex hulls for each group or category, or convex hulls of points not on the convex hull, and so on -- offering compromises between inclusiveness and robustness or resistance of summary. See e.g. https://www.statalist.org/forums/forum/general-stata-discussion/general/1517556-convex-hulls-on-scatter-plots for some simple examples.
A plot like your second is likely to seem confusing to all. Different methods include plotting groups separately in a series of small multiples or (sometimes best of all) plotting each group separately but with a backdrop of all the other points. This method has been dubbed that of front-and-back plots. See e.g. https://journals.sagepub.com/doi/pdf/10.1177/1536867X211025838
Numerics
The importance of the categorical variable as an extra predictor in regression or similar models is usually best assessed by declaring it as a factor variable to your software and fitting more complicated models in which each group may have a different intercept, or a different slope, or both. The measure of whether groups differ is how far they make different predictions of the outcome variable.
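For instance, a minimal sketch of those nested comparisons in R (variable names are illustrative):

```r
set.seed(1)
dat <- data.frame(x = rnorm(90),
                  g = factor(rep(c("a", "b", "c"), each = 30)))
dat$y <- with(dat, x + ifelse(g == "c", 1, 0) + rnorm(90))

m0 <- lm(y ~ x, data = dat)      # ignore the grouping
m1 <- lm(y ~ x + g, data = dat)  # different intercepts per group
m2 <- lm(y ~ x * g, data = dat)  # different intercepts and slopes
anova(m0, m1, m2)                # do the groups change the predictions?
```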
|
44,022
|
Are there any statistics to see if a categorical variable produces good segments within a scatter plot? [closed]
|
You need two steps:
Some way of modelling the distribution of the different categories
Comparing the distributions of the different categories.
There are many different ways to model distributions and to compare the difference between distributions.
A classical example would be MANOVA which models the mean and covariance matrix of the different distributions (and assumes equal covariance of the different distributions) and compares the variance within the groups and between the groups as a measure of the difference between the groups.
If the covariance for the different groups differs then you could use a quadratic classification model and use some performance measure of the model in predicting the right classes as a measure for the difference between categories.
For more fancy distributions you can use more fancy classification schemes. With a nearest neighbors algorithm you could approximate some sort of divergence measure (if I search on google with keywords 'nearest neighbours compute divergence' then I get several suggestions).
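As a concrete illustration of the MANOVA route (simulated data, so the separation is built in; names are illustrative):

```r
set.seed(1)
grp <- factor(rep(c("a", "b"), each = 50))
x1  <- rnorm(100, mean = ifelse(grp == "a", 0, 2))  # scatter-plot axis 1
x2  <- rnorm(100, mean = ifelse(grp == "a", 0, 1))  # scatter-plot axis 2

fit <- manova(cbind(x1, x2) ~ grp)
summary(fit)  # Pillai's trace: between- vs within-group variability
```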
|
44,023
|
Are there any statistics to see if a categorical variable produces good segments within a scatter plot? [closed]
|
I interpreted
visually well-defined segments
as "separable in some (natural) space parametrization". I assume that for you, this is not a case of visually well-defined segments:
Further, your first image seems to suggest a GMM-type geometry, which is a natural choice. Since you already know the categorical part of your data $D$ (acting here as class assignment $k_i\in\{1...K\}$), you also have the (MLE) Gaussian Mixture Model fit $\hat{M}_D$ (no EM algorithm needed). Now you could compute a goodness of fit $T_{\hat{M}_D}$ of your model $\hat{M}_D$ by summing all pairwise Kullback-Leibler divergences $\text{KL}(\mathcal{N}(\mu_i,\Sigma_i)\,||\,\mathcal{N}(\mu_j,\Sigma_j)),\,\,(1\leq i, j \leq K)$ of Gaussians (which is never $\infty$ due to infinite support).
Two datasets $D_1$, $D_2$ with the same number of "classes" $K$ should be comparable by $T_{\hat{M}_{D_1}}$ and $T_{\hat{M}_{D_2}}$, where higher values of this "test statistic" (I have reasons not to call it that) indicate better visual separation.
EDIT: Normalization w.r.t. $K$ could be achieved by taking the average or maximum over the KL divergences (instead of summing). If you have a lot of datasets, you could also compare these aggregations against your (empirical) distribution of $T_{\hat{M}_{D_i}}|D_i$ to arrive at an absolute threshold/measure, similar to a p-value.
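The pairwise KL terms have a closed form for Gaussians; a sketch in R (the function name is mine):

```r
# KL( N(mu0, S0) || N(mu1, S1) ) for k-dimensional Gaussians:
# 0.5 * [ tr(S1^-1 S0) + (mu1-mu0)' S1^-1 (mu1-mu0) - k + log(det S1 / det S0) ]
kl.gauss <- function(mu0, S0, mu1, S1) {
  k <- length(mu0)
  S1inv <- solve(S1)
  d <- mu1 - mu0
  drop(0.5 * (sum(diag(S1inv %*% S0)) +
              t(d) %*% S1inv %*% d - k +
              log(det(S1) / det(S0))))
}

kl.gauss(c(0, 0), diag(2), c(1, 1), 2 * diag(2))
```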
|
44,024
|
Intuition behind a 0% central/equal-tailed confidence interval
|
You are close
For a continuous distribution, the 0% equal-tail CI occurs at the point corresponding to the median of the true distribution of the pivotal quantity that is used in constructing the CI. It is not always possible to invert the pivotal quantity in a way that yields an unbiased estimator of a corresponding median (of what exactly?).
|
44,025
|
Intuition behind a 0% central/equal-tailed confidence interval
|
A frequentist 0% confidence interval can be any point within the parameter space. One might prefer to choose a point that is near to the maximum likelihood estimate, but any other point will be just as validly a 0% confidence interval.
Typical 95% confidence intervals are usually at least roughly centred around the maximum likelihood estimate because of a preference for a shorter interval over a longer one, not because of any definitional requirement. With a 0% interval all potential intervals have the same length!
Consider that a 95% confidence interval is an interval derived from a method that will in the long run yield an interval that covers the true value of the parameter on 95% of occasions (when the model is appropriate to the data generating system). Then it is clear that any point within the parameter space will cover the true value on 0% of occasions in the long run and will thus be a valid 0% confidence interval. (For continuous parameter space.)
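A quick simulation of that last point (a sketch; here the chosen point is the sample mean):

```r
set.seed(1)
covered <- replicate(10000, {
  xbar <- mean(rnorm(20, mean = 0))  # the point "interval"
  xbar == 0                          # does it cover the true mean?
})
mean(covered)  # long-run coverage: 0 for a continuous parameter
```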
|
44,026
|
Intuition behind a 0% central/equal-tailed confidence interval
|
A zero-level confidence interval can be seen as an estimator. Indeed, it has been advocated by Skovgaard (1989) "A review of higher-order likelihood inference". Bull. Int. Statist. Inst., 53, 331–351, in a class of two-sided equal-tailed confidence intervals, and is defined as the intersection of all confidence intervals at all levels.
An implementation of this idea needs estimating functions based on pivotal quantities for the parameter of interest. However, unless the pivot is exact and has a symmetric distribution around the true parameter (as in the case of the Student's $t$ statistic), the median unbiasedness property for this kind of estimator does not necessarily hold.
One way to get such an estimator with low median bias is to use a higher-order pivotal quantity such as the $r^*$ of Barndorff-Nielsen (1986) "Inference on full or partial parameters based on the standardized signed log-likelihood ratio". Biometrika, 73, 307-322.
|
44,027
|
Categorical or Categorial? Is there a difference between the two terms from a statistician's point of view?
|
I have literally never heard 'categorial' (without the second C) and assumed that this was a typo. But some googling does indicate that this word is used - in linguistics.
In statistics, as far as I know, we only use categorical.
As mild support for this claim, if one googles 'categorial statistics', Google assumes you've made a typo and returns only results for 'categorical statistics'.
Also, searching for 'categorial' on wikipedia returns no links, but the closest suggestion is 'categorial grammar' (again, about language/syntax). In contrast, searching for 'categorical' returns a bunch of suggestions including several articles about statistics (specifically categorical data), maths and logic.
EDIT: This excellent comment by Scortchi may have tracked down the origin of the confusion to German and French distinctions that are mostly absent in English.
|
44,028
|
Categorical or Categorial? Is there a difference between the two terms from a statistician's point of view?
|
I second mkt's answer; this is a long comment rather than an answer. In math I've never seen "categorial".
Also, I think that word is rarely used in English. Its frequency is about 20 times lower than that of the standard word "categorical".
I have asked a question here, and English-language experts will surely help us.
|
44,029
|
Model Selection: AIC/BIC and Cross-Validation gives different conclusion
|
Nothing strange here.
If all model selection methods always gave the same results, we wouldn't have multiple criteria; we would just pick an arbitrary one.
AIC and BIC explicitly penalize the number of parameters, while cross-validation does not, so again, it's not surprising that they suggest a model with fewer parameters (though nothing prohibits cross-validation from picking the model with fewer parameters).
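To make the parameter penalty concrete, here is a minimal sketch (simulated data; `aic`/`bic` are the standard Gaussian-likelihood forms for least squares, dropping additive constants, and all variable names here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
X_full = rng.normal(size=(n, 5))
# Only the first two predictors actually matter
y = 1.5 * X_full[:, 0] - 2.0 * X_full[:, 1] + rng.normal(size=n)

def ols_rss(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def aic(rss, n, k):
    # Gaussian-likelihood AIC up to a constant: fit term plus 2k penalty
    return n * np.log(rss / n) + 2 * k

def bic(rss, n, k):
    # Same fit term, but the penalty grows with log(n)
    return n * np.log(rss / n) + k * np.log(n)

for k in (2, 5):  # the true 2-predictor model vs. the full model
    rss = ols_rss(X_full[:, :k], y)
    print(f"k={k}: AIC={aic(rss, n, k):.2f}, BIC={bic(rss, n, k):.2f}")
```

The RSS can only shrink as predictors are added, so it is the penalty terms that can pull both criteria back toward the smaller model, while cross-validation has to detect the overfitting from held-out error alone.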
|
44,030
|
Model Selection: AIC/BIC and Cross-Validation gives different conclusion
|
AIC is asymptotically equivalent to leave-one-out cross-validation (LOOCV).
It's not equivalent to 10-fold cross-validation, which is what you're comparing it to.
It's only asymptotically equivalent, so the two methods don't always give the same answer; they're only approximately the same.
It's not really clear how you're doing the train/test/validation split when cross-validating, so I can't really address your final question. Note, though, that the "best" model depends on the amount of data you're using. For example, a model with 5 features may perform well when trained on the full dataset, but could overfit when trained on only a subset during cross-validation.
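As an illustration of what LOOCV actually computes, here is a sketch for ordinary least squares, done both by an explicit loop and via the exact hat-matrix shortcut (the data below are simulated, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

def loocv_mse(X, y):
    """Leave-one-out CV mean squared error for OLS: one refit per observation."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append((y[i] - X[i] @ beta) ** 2)
    return float(np.mean(errs))

# For OLS the LOOCV residuals have a closed form: e_i / (1 - h_ii),
# where h_ii are the diagonal entries of the hat matrix H = X (X'X)^-1 X'.
H = X @ np.linalg.inv(X.T @ X) @ X.T
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
shortcut = float(np.mean((e / (1 - np.diag(H))) ** 2))

print(loocv_mse(X, y), shortcut)  # the two agree
```

The closed form is exact for OLS, which is part of why the LOOCV/AIC connection can be analyzed theoretically for linear models.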
|
44,031
|
Model Selection: AIC/BIC and Cross-Validation gives different conclusion
|
Maybe you should concentrate more on the methods that are intended precisely for feature selection, rather than model selection. Model selection methods like cross-validation or AIC try to compare models independently of how they differ (this is only approximately true, but should suffice here). Feature selection methods concentrate on comparing models that differ only by feature selection.
I have had good experience with e.g. random forest based feature selection, but there are many others, some of them more specialized like spike and slab.
Having said that, the results of those methods often contradict one another, especially in less simple cases. Use the features that rank highly across multiple methods as suggestions, and then select the model that works best for you w.r.t. other cost criteria such as complexity, runtime, etc.
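Random-forest importance would normally come from a library; as a library-free stand-in for the same idea, here is a sketch of permutation importance using a plain least-squares fit as the scorer (made-up data; the setup and names are mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 4))
# Features 0 and 2 matter; features 1 and 3 are pure noise
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)

def r2_ols(X, y):
    """R-squared of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = float((y - y.mean()) @ (y - y.mean()))
    return 1.0 - float(resid @ resid) / tss

base = r2_ols(X, y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
    importance.append(base - r2_ols(Xp, y))  # drop in fit = importance

print(np.round(importance, 3))  # large drop for feature 0, near zero for noise
```

Rankings like these from several such methods can then be cross-checked against each other, per the advice above.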
|
44,032
|
What does it mean when a Data Matrix has full rank?
|
If the matrix has full rank, i.e. $rank(M) = p$ and $n > p$, the p variables are linearly independent and therefore there is no redundancy in the data. If instead the $rank(M) < p$ some columns can be recreated by linearly combining the others. In this latter case, you couldn't use all the columns of M as explanatory variables in a linear model (you could of course but it wouldn't make sense).
For example, in R:
library(Matrix)
set.seed(1234)
options(digits= 3)
M <- matrix(data= rnorm(n= 15), ncol= 3)
M
[,1] [,2] [,3]
[1,] -1.207 0.506 -0.4772
[2,] 0.277 -0.575 -0.9984
[3,] 1.084 -0.547 -0.7763
[4,] -2.346 -0.564 0.0645
[5,] 0.429 -0.890 0.9595
rankMatrix(M) # -> rank 3
# 4th column is a linear combination of columns 1 and 2 - there is redundancy
M2 <- cbind(M, M[,1] + M[,2])
M2
[,1] [,2] [,3] [,4]
[1,] -1.207 0.506 -0.4772 -0.701
[2,] 0.277 -0.575 -0.9984 -0.297
[3,] 1.084 -0.547 -0.7763 0.538
[4,] -2.346 -0.564 0.0645 -2.910
[5,] 0.429 -0.890 0.9595 -0.461
rankMatrix(M2) # still rank 3 even if you have four columns
Try to fit a linear model on y using M or M2 as explanatory variables.
# A dummy response variable
y <- M[,1] + rnorm(n= nrow(M))
# Ok, the coefficients of all the variables in M can be estimated
lm(y ~ M)
# Coefficients:
# (Intercept) M1 M2 M3
# -0.6786 1.0682 -0.0178 -0.7026
# Here the coefficient of 4th variable cannot be estimated no
# matter how many observations (rows) you have
lm(y ~ M2)
# Coefficients:
# (Intercept) M21 M22 M23 M24
# -0.6786 1.0682 -0.0178 -0.7026 NA
I guess that a geometric interpretation of the linear model example is that the design matrix M defines the space in which the vector of coefficients can be fitted. More variables in M mean more dimensions available. However, if a variable in M is a linear combination of other variables, as in M2, there is no real increase in dimensions (or space available).
As noted by @SingleMalt, a matrix can be full rank and still be redundant for practical purposes. For example, if you add some minimal jitter to column 4 of M2 you obtain full rank (4). You could use M2 as a predictor in a linear model and estimate all 4 coefficients, but the results would be very unstable and unreliable. As shown here, the estimated coefficients have "gone crazy":
# Add some minimal jitter to column 4
M2[,4] <- M2[,4] + rnorm(n= nrow(M2), sd= 0.0001)
rankMatrix(M2) # <- 4 Full rank
lm(y ~ M2)
# Coefficients:
# (Intercept) M21 M22 M23 M24
# -0.267 36603.840 36604.443 1.318 -36603.504
Returning to the geometric interpretation, this means that column 4 does add a fourth dimension available to the vector of coefficients. But the increase in space is so small that some dimensions almost perfectly overlap, and the corresponding coefficients cannot be estimated with any reliable accuracy.
That's my interpretation - hope it helps...
|
44,033
|
What does it mean when a Data Matrix has full rank?
|
I want to connect the concept of identifiability with the rank of the design matrix in linear regression, as well as take a more linear algebraic look at the problem, since you mention you have a math background.
Some parameter in our regression model is called identifiable if it's possible for us to even guess what it is. To contrast this with the typical case, we'll never know the exact value of any parameter if there's any randomness in the process, but the least squares estimates can give a good guess if there's enough good data. But when a parameter is not identified, it's absolutely impossible to have any idea what it is: no matter what value the parameter is set to, the data that's coming out will look the exact same. And since we use the data to guess what the parameters are, if there's no influence on the data from a certain parameter, the data are not providing us any information about it.
A good example is the situation where we have categorical factors, for instance:
we want to compare medical treatment Treatment A to Treatment B to Treatment C:
We've also recorded the patients' weights. The treatment outcomes (the "y"s) are not shown; they don't play into identifiability at all.
Here's the problem: Treatments A and C were always given to the same people. We can tell because their columns contain exactly the same data. Intuitively, we already know we're in trouble: if those patients recover more than usual, it could be that Treatment A is doing all the work and Treatment C is inert. Or maybe Treatment A is actually harmful, but Treatment C is so beneficial that it makes up for it. There are infinitely many possibilities, and we can't be sure what's going on with Treatments A and C individually.
A more subtle version of this is the general unidentifiable case. To see the general case, let's now bring rank into the picture. A key consequence of rank in linear algebra is that a matrix with less than full rank maps some set of nonzero vectors to the zero vector: $\mathbf{X}\mathbf{b} = \mathbf{0}$ (such vectors are said to belong to the kernel, or nullspace, of $\mathbf{X}$).
In the case of the matrix above, one such vector is $(1, 0, -1, 0)$:
This is the mathematical interpretation of our discussion above: we can't know how much better Treatment A is than Treatment C, since a 1 in the first entry and a -1 in the third entry computes the difference between the first and third columns upon matrix multiplication.
But we haven't talked about any of our regression coefficients: we don't have a coefficient for the difference between Treatments A and C; we have one for each of Treatments A and C individually. So why do we care that the difference is not identified? The problem is that we haven't actually cleared anything of unidentifiability yet.
This is because of the projection theorem from linear algebra, which states that any vector $\mathbf{v}$ in a linear space $\mathcal{L}$ can be decomposed along some subspace $\mathcal{S}$ as $\mathbf{v} = \mathbf{v}_{\mathcal{S}} + \mathbf{v}_{\mathcal{S}^\perp}$, where $\mathbf{v}_{\mathcal{S}} \in \mathcal{S}$ and $\mathbf{v}_{\mathcal{S}^\perp}$ is orthogonal to that space. So thinking of input parameters as unit vectors, they can be decomposed as $\mathbf{e}_i = \mathbf{v}_{\mathcal{N}} + \mathbf{v}_{\mathcal{N}^\perp} $, where $\mathbf{v}_{\mathcal{N}}$ is in the nullspace of $\mathbf{X}$ and $\mathbf{v}_{\mathcal{N}^\perp}$ is orthogonal to that.
With this decomposition, we see that it's not just $(1, 0, -1, 0)$ that we can't estimate: whenever the component $\mathbf{v}_{\mathcal{N}}$ is not the zero vector, it can't be estimated. Conversely, any vector which is orthogonal to $(1, 0, -1, 0)$, or in general orthogonal to the nullspace, will be identifiable. Recall from Gil Strang's fundamental theorem of linear algebra that the subspace orthogonal to a matrix's kernel is that same matrix's rowspace. Thus the rowspace of our design matrix can still be estimated even if the matrix is less than full rank. Intuitively, we should be able to estimate the combined effect of Treatments A and C, and we can verify that their sum is indeed in the rowspace and hence identifiable.
TLDR: If your matrix is less than full rank, it sends some nonzero vector to zero. This means that changing the regression coefficients along that direction produces no change in the data, so we can't learn anything about that linear combination of our parameters, and thus nothing about the individual parameters that contribute to it.
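A small NumPy sketch of the Treatment A/C situation (the data are made up for illustration: indicator columns for the treatments plus a continuous covariate):

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1., 0., 1., 1., 0., 0.])   # indicator: got Treatment A
c = a.copy()                              # Treatment C went to the same people
b = 1 - a                                 # Treatment B: everyone else
w = rng.normal(size=6)                    # a continuous "weight" covariate
X = np.column_stack([a, b, c, w])

print(np.linalg.matrix_rank(X))           # 3, not 4: rank deficient

y = X @ np.array([1.0, 2.0, 3.0, 0.5])    # some "true" coefficients

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
null_dir = np.array([1.0, 0.0, -1.0, 0.0])  # lies in the nullspace of X

# Moving the coefficients along the nullspace changes nothing observable:
assert np.allclose(X @ null_dir, 0)
assert np.allclose(X @ beta, X @ (beta + 7 * null_dir))
```

The fitted values are identical for infinitely many coefficient vectors differing along the nullspace, which is exactly why the A-vs-C difference is unidentifiable.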
|
44,034
|
What does it mean when a Data Matrix has full rank?
|
Suppose you have a $10×10$ matrix $X$, and $rank(X) = 1$, which is stored somewhere on your memory disk. For some reason, your memory disk was damaged, and so was the matrix. Some rows remained intact, whereas in other rows several digits were lost. The question is, how many full rows do you need to know in order to restore the entire matrix?
The minimum number of rows you need is equal to the rank of the matrix. This is because the rank is the number of linearly independent rows. Here, all other rows can be obtained by multiplying that one intact row by an appropriate number.
E.g., if our damaged matrix looks like this:
$\begin{bmatrix}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
& 6 & & 12 & & & & & & \\
& & & & 20 & 24 & & & & \\
7 & & & & & & & & & 70 \\
& & & & \dots\ & & & & \\
\end{bmatrix}$
Then, we already know that our matrix is:
$\begin{bmatrix}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
3 & 6 & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\
4 & 8 & 12 & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\
7 & 14 & 21 & 28 & 35 & 42 & 49 & 56 & 63 & 70 \\
& & & & \dots\ & & & & \\
\end{bmatrix}$
Were $rank(X) = 2$, that wouldn't work, because we would need at least two full rows. Finally, if matrix $X$ had full rank (i.e. $rank(X) = 10$), we would be in a world of trouble: since all rows are linearly independent, not a single lost digit could be restored.
This is one way to look at the concept of matrix rank.
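Here is a small NumPy sketch of the restoration idea: a rank-1 matrix is an outer product, so one intact row plus a single surviving digit per damaged row determines everything (the last multiplier below is arbitrary, standing in for the elided "…" rows):

```python
import numpy as np

row = np.arange(1, 11, dtype=float)            # the one intact row: 1..10
multipliers = np.array([1., 3., 4., 7., 2.])   # recovered from one surviving digit per row
X = np.outer(multipliers, row)                 # the full matrix, restored

assert np.linalg.matrix_rank(X) == 1
assert X[1, 1] == 6 and X[1, 3] == 12          # matches the damaged entries above
assert X[2, 4] == 20 and X[2, 5] == 24
assert X[3, 0] == 7 and X[3, 9] == 70
```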
|
44,035
|
Expectation Maximization and Deep Learning
|
(The original version of this post, the text of which I kept below the line for reference purposes, generated a lot of dispute and some back and forth which seems mostly to be around questions of interpretation and ambiguity, so I updated with a more direct answer)
The OP seems to be asking:
Are Deep Learning models a special case of the EM algorithm?
No they aren't. Deep Learning models are general purpose function approximators, which can use different types of objective functions and training algorithms, whereas the EM algorithm is a very specific algorithm in terms of training approach and objective function.
From this perspective, it is possible (although not very common) to use Deep Learning to emulate the EM algorithm. See this paper.
Most of the (deep learning) models can be treated as probability functions, but when is this not the case?
Probability distribution functions have to satisfy certain conditions such as summing up to one (the conditions are slightly different if you consider probability density functions). Deep Learning models can approximate functions in general - i.e. a larger class of functions than those that correspond to probability distributions and densities.
When do they not correspond to probability densities and distributions? Any time the function they approximate doesn't satisfy the axioms of probability theory. For example, a network whose output layer has $\tanh$ activations can take negative values, and therefore doesn't satisfy the conditions for being a probability distribution or density.
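A tiny illustration of that point (arbitrary made-up pre-activations): a tanh output layer can go negative, while a softmax layer produces a valid probability distribution by construction:

```python
import numpy as np

z = np.array([2.0, -1.0, 0.5])  # arbitrary pre-activation values ("logits")

tanh_out = np.tanh(z)
softmax_out = np.exp(z) / np.exp(z).sum()

print(tanh_out)     # contains a negative value: not a probability distribution
print(softmax_out)  # non-negative and sums to 1: a valid distribution

assert (tanh_out < 0).any()
assert np.all(softmax_out >= 0) and np.isclose(softmax_out.sum(), 1.0)
```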
There are three ways that a deep learning model can correspond to a probability distribution $P(x)$:
Use Deep Learning to learn a probability distribution directly. That is, have your neural network learn the shape of $y=P(x)$. Here $P(x)$ satisfies the conditions for being a probability density or distribution.
Have a Deep Learning model learn a general function $y=f(x)$ (that doesn't satisfy the conditions for being a probability distribution). After training the model, we then make assumptions about the probability distribution $P(y|x)$, e.g. the errors are normally distributed, and then use simulations to sample from that distribution. See here for an example of how that can be done.
Have a Deep Learning model learn a general function $y=f(x)$ (that doesn't satisfy the conditions for being a probability distribution), and then interpret the output of the model as representing $P(y|x) = \delta[y-f(x)]$, with $\delta$ being the Dirac delta function. There are two issues with this last approach. There is some debate as to whether the Dirac delta constitutes a valid distribution function; it is popular in the signal processing and physics communities, but not so much among the probability and statistics crowd. It also doesn't provide any useful information from a probability and statistics point of view, since it doesn't provide any way of quantifying the uncertainty of the output, which defeats the purpose of using a probabilistic model in practice.
Is the expectation maximization applicable to most deep learning models in literature?
Not really. There are several key differences:
Deep Learning models work by minimizing a loss function. Different loss functions are used for different problems, and the training algorithm then focuses on the best way to minimize the particular loss function suitable for the problem at hand. The EM algorithm, on the other hand, is about maximizing a likelihood function. The issue here isn't simply that we are maximizing instead of minimizing (both are optimization problems, after all), but that EM dictates a specific function to be optimized, whereas Deep Learning can use any loss function as long as it is compatible with the training method (which is usually some variant of Gradient Descent).
EM estimates the parameters of a statistical model by maximizing the likelihood of those parameters. So we choose the model beforehand (e.g. a Gaussian with mean $\mu$ and variance $\sigma^2$), and then use EM to find the best values of those parameters (e.g. which values of $\mu$ and $\sigma^2$ best fit our data). Deep Learning models are non-parametric; they don't make any assumptions about the shape or distribution of the data. Instead they are universal approximators which, given enough neurons and layers, should be able to fit any function.
Closely related to the previous point is the fact that Deep Learning models are just function approximators, that can approximate arbitrary functions without having to respect any of the constraints that are imposed on a probability distribution function. An MLE model, or even a non-parametric distribution estimator for that matter, will be bound by the laws of probability and the constraints imposed on probability distributions and densities.
Now certain types of deep learning models can be considered equivalent to an MLE model, but what is really happening under the hood is that we specifically asking the neural network to learn a probability distribution as opposed to a more general arbitrary function by choosing certain activation functions and adding some constraints on the outputs of the network. All that means is that they are acting as MLE estimators, but not that they are special cases of the EM algorithm.
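The universal-approximation point above can be made concrete with a toy network fit to an arbitrary target function by plain gradient descent. A from-scratch numpy sketch; the target function, layer sizes, and learning rate are made up for illustration, not taken from any model discussed here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: an arbitrary function, no distributional assumptions made
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with tanh activation, trained by full-batch gradient descent
H = 20
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.01
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                              # gradient of MSE w.r.t. pred (up to 2/n)
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 3))  # well below np.var(y), i.e. better than predicting the mean
```

Nothing here constrains the outputs to be a probability distribution; the network simply fits the shape of an arbitrary function.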
Is the learning considered to be part of the EM algorithm?
I would say that it is the other way around. It is possible that someone, somewhere, has come up with a Deep Learning model that is equivalent to the EM algorithm, but that would make the EM algorithm a special case of Deep Learning, not the other way around, since for this to work, you would have to use Deep Learning + additional constraints to make the model mimic EM.
In response to the comments:
"Minimizing and maximizing can be the same thing.": Agreed, they can be (almost) equivalent - which what I specified in my response - it is NOT about maximizing vs. minimizing, it is about having to use a specific objective function dictated by MLE, vs. being able to use just about any loss function compatible with backpropagation. "The loss function in this case is the expectation of E p_theta(x|z) where p_theta is the deep neural network model." - again this is possible, but as I point out later, this would make MLE a special case of Deep Learning, not the other way around.
"Parameters in the case of the deep neural networks are the model weights. I don't think your explanation is correct" - the explanation is correct, but you are also correct that the word parametric is ambiguous, and is used in different ways in different sources. Model weights are parameters in the general sense, but in the strict sense of parametric vs. non-parametric models, they aren't the same as the parameters in a parametric model. In parametric model, the parameters have a meaning, they correspond to the mean, the variance, the seasonality in a time series, etc...whereas the parameters in a Deep Learning model don't having any meaning, they are jus the most convenient way for the network to store information. That is why neural networks are criticized for being black box - there is no established way of interpreting the meaning of the weights. Another way you can think of it is in terms of total parameters vs. number of effective parameters: In a truly parametric model that is estimated using EM, the number of fixed parameters is the same as the number of effective parameters. In a neural network, the number of effective parameters may change during training (by reducing weights to zero, or by using drop out, etc....), even if the total number of parameters is defined before hand and is fixed. Which brings us to the real difference between the two approaches: A fixed number of effective parameters means that the shape of the distribution or function is decided before hand, whereas changing effective parameters allows for models to approximate more general, and eventually arbitrary functions, per the universal approximation theorem.
"DNN also try to learn the probability distribution of the data in order to make predictions." only if we configure and constrain them to learn probability distributions. But they can also learn other things besides probability distributions. To this how this is possible, you can simply specify a multi-class neural network, with 4 outputs, with sigmoid outputs instead of softmax outputs, and train it to learn cases where the output is [1, 1, 1, 1]. Since the sum of the outputs is > 1, this is not a probability distribution, but just an arbitrary mapping of the inputs to classes. More generally Neural Networks/Deep Learning models are just general purpose function approximators, which can be configured to the specific case of estimation probability distribution functions, but they are not limited to that case. In computer vision for example, the are often used as filters and segmentation devises, instead of as classifiers or distribution estimators.
As Cagdas Ozgenc points out, just about any supervised learning problem or function approximation problem can be recast as an MLE.
|
44,036
|
Expectation Maximization and Deep Learning
|
In short, no.
Expectation maximization is a technique for solving statistical problems that consist of an "easy" maximization (if some latent variables were known) and an "easy" expectation calculation on the log-likelihood (if the parameters were known). However, the "how" and "why" of the expectation and maximization steps require ingenuity and are model-specific. So while it's possible that some models from deep learning could be posed in a fashion that leverages EM, EM is not a generic optimization technique, not even for classical statistical models.
EM as minorization/maximization
However, EM can be considered a member of a class of algorithms known as minorization-maximization (MM) algorithms. These algorithms find a surrogate that is a lower bound for the objective function everywhere, but tight at at least one point. The surrogate is maximized (it should be constructed so that it is easier to maximize than the original function), and the process is repeated. Finding such a surrogate also requires ingenuity or structure, but it can be thought of as a generic technique in optimization. So in that sense, the theory behind EM is broadly applicable.
A quick search of Google Scholar reveals some relevant literature, though this approach seems to be much less commonly used than stochastic gradient methods, which do not attempt to construct a surrogate.
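As a concrete instance of the minorization-maximization view, here is EM for a two-component 1-D Gaussian mixture with known unit variances: the E-step builds the surrogate (the expected complete-data log-likelihood) and the M-step maximizes it in closed form. A toy numpy sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two 1-D Gaussian components with unit variance
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])

tau, mu1, mu2 = 0.5, -1.0, 1.0  # crude initial guesses
for _ in range(50):
    # E-step: responsibilities under the current parameters
    # (normalizing constants cancel since both variances are equal)
    d1 = tau * np.exp(-0.5 * (x - mu1) ** 2)
    d2 = (1 - tau) * np.exp(-0.5 * (x - mu2) ** 2)
    r = d1 / (d1 + d2)
    # M-step: closed-form maximizers of the surrogate
    tau = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()

print(round(float(mu1), 1), round(float(mu2), 1))  # near the true means -2 and 2
```

Each iteration is guaranteed not to decrease the marginal likelihood, which is exactly the MM property described above.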
|
44,037
|
Expectation Maximization and Deep Learning
|
Short overview of expectation maximization:
Marginal likelihood
Expectation maximization contrasts with 'regular' likelihood maximization by referring to the maximization of a marginal likelihood.
$$\underbrace{p(X\vert \theta)}_{\substack{\text{marginal likelihood}\\\text{ $\mathcal{L}(\theta \vert X)$}}} =
\int_z \underbrace{p(X, z \vert \theta)}_{\substack{\text{likelihood}\\\text{ $\mathcal{L}(\theta \vert X,\underset{\uparrow \\ \substack{\llap{\text{This $z$ is }\rlap{\text{missing data}}}}}{z})$}}} \text{d}z =
\int_z p(X \vert \theta,z) p(z\vert X,\theta) \text{d}z $$
So this relates to an integral over some likelihood with an additional parameter $z$ (e.g. missing data).
EM algorithm
In the EM algorithm this integral is not maximized directly:
$$\hat\theta = \underset{\theta}{\text{arg max}} \left( \int_z p(X \vert \theta,z) p(z\vert X,\theta) \text{d}z \right)$$
but instead the expected complete-data log-likelihood is maximized in an iterative way:
$$\hat\theta_{k+1} = \underset{\theta}{\text{arg max}} \left( \int_z \log\left[ p(X, z \vert \theta)\right] \, p(z\vert X,\hat\theta_k) \text{d}z \right)$$
This is done by picking an initial $\theta_1$ and updating repeatedly. Note that the optimization now keeps the weighting term $p(z\vert X,\hat\theta_k)$ fixed (which makes the maximization easier, e.g. when solving via derivatives), and the logarithm turns products into sums.
Not all problems are like that.
So this marginal likelihood only arises for problems with unobserved data. For instance, in fitting a Gaussian mixture (example on Wikipedia) one may consider an unobserved variable $z$ that refers to the class (which component in the mixture) a measurement belongs to.
There are many problems that do not involve a marginal likelihood and instead estimate parameters directly by optimizing some likelihood function (or some other cost function, but not a marginalized/expected one).
Code example
The use of a marginal likelihood is not always about explicitly missing data.
In the example below, the Gaussian mixture example from Wikipedia is worked out numerically in R.
In this case you do not explicitly have missing data (the likelihood can be defined directly and does not need to be a marginal likelihood integrating over missing data), but you have a mixture of two multivariate Gaussian distributions. The problem is that one cannot, as usual, compute the sum of the logarithms of the terms (which has computational advantages), because the terms are not a product but involve a sum as well. It would be possible to compute those logarithms of sums using an approximation (this is not done in the code below; instead the parameters are chosen to be well-behaved and not generate infinite values).
library(MASS)
# data example
set.seed(1)
x1 <- mvrnorm(100, mu=c(1,1), Sigma=diag(1,2))
x2 <- mvrnorm(30, mu=c(3,3), Sigma=diag(sqrt(0.5),2))
x <- rbind(x1,x2)
col <- c(rep(1,100),rep(2,30))
plot(x,col=col)
# Likelihood without integrating over z
Lsimple <- function(par, X = x) {
  tau <- par[1]
  mu1 <- c(par[2], par[3])
  mu2 <- c(par[4], par[5])
  sigma_1 <- par[6]
  sigma_2 <- par[7]
  likterms <- tau * dnorm(X[,1], mean = mu1[1], sd = sigma_1) *
                dnorm(X[,2], mean = mu1[2], sd = sigma_1) +
              (1 - tau) * dnorm(X[,1], mean = mu2[1], sd = sigma_2) *
                dnorm(X[,2], mean = mu2[2], sd = sigma_2)
  logLik <- sum(log(likterms))
  -logLik
}
# Marginal likelihood integrating over z
LEM <- function(par, X = x, oldp = oldpar) {
  tau <- par[1]
  mu1 <- c(par[2], par[3])
  mu2 <- c(par[4], par[5])
  sigma_1 <- par[6]
  sigma_2 <- par[7]
  oldtau <- oldp[1]
  oldmu1 <- c(oldp[2], oldp[3])
  oldmu2 <- c(oldp[4], oldp[5])
  oldsigma_1 <- oldp[6]
  oldsigma_2 <- oldp[7]
  # E-step: class membership probabilities under the old parameters
  f1 <- oldtau * dnorm(X[,1], mean = oldmu1[1], sd = oldsigma_1) *
          dnorm(X[,2], mean = oldmu1[2], sd = oldsigma_1)
  f2 <- (1 - oldtau) * dnorm(X[,1], mean = oldmu2[1], sd = oldsigma_2) *
          dnorm(X[,2], mean = oldmu2[2], sd = oldsigma_2)
  pclass <- f1 / (f1 + f2)
  ### note that the terms are now a product and can be replaced by a sum of the logs
  # likterms <- tau*dnorm(X[,1],mean = mu1[1], sd = sigma_1)*
  #   dnorm(X[,2],mean = mu1[2], sd = sigma_1)*(pclass)*
  #   (1-tau)*dnorm(X[,1],mean = mu2[1], sd = sigma_2)*
  #   dnorm(X[,2],mean = mu2[2], sd = sigma_2)*(1-pclass)
  loglikterms <- (log(tau) + dnorm(X[,1], mean = mu1[1], sd = sigma_1, log = TRUE) +
                    dnorm(X[,2], mean = mu1[2], sd = sigma_1, log = TRUE)) * pclass +
                 (log(1 - tau) + dnorm(X[,1], mean = mu2[1], sd = sigma_2, log = TRUE) +
                    dnorm(X[,2], mean = mu2[2], sd = sigma_2, log = TRUE)) * (1 - pclass)
  logLik <- sum(loglikterms)
  -logLik
}
# solving with direct likelihood
par <- c(0.5,1,1,3,3,1,0.5)
p1 <- optim(par, Lsimple,
            method = "L-BFGS-B",
            lower = c(0.1, 0, 0, 0, 0, 0.1, 0.1),
            upper = c(0.9, 5, 5, 5, 5, 3, 3),
            control = list(trace = 3, maxit = 10^3))
p1
# solving with LEM
# (this is done here indirectly/computationally with optim,
# but could be done analytically by expressing the derivative and solving)
oldpar <- c(0.5, 1, 1, 3, 3, 1, 0.5)
for (i in 1:100) {
  p2 <- optim(oldpar, LEM,
              method = "L-BFGS-B",
              lower = c(0.1, 0, 0, 0, 0, 0.1, 0.1),
              upper = c(0.9, 5, 5, 5, 5, 3, 3),
              control = list(trace = 1, maxit = 10^3))
  oldpar <- p2$par
  print(i)
}
p2
# the result is the same:
p1$par
p2$par
|
44,038
|
Expectation Maximization and Deep Learning
|
Given the previous technical answers, one philosophical point may help to clear up the ambiguity between EM and Deep Learning: the concept of learning.
In deep learning, there are multiple layers, and each layer is a learning step. In the first step, the data input is 'converted' (or learned) into a synthetic intermediate output (a bit higher abstraction, loosely speaking). Then in each step, the input is progressively learned (or 'transformed') into higher-abstraction features, which may or may not be comprehensible to humans. Loosely speaking, the combination of these layers will approximate the 'formulas' for you; we don't need to specify any model or hypothesis beforehand. This is the same 'learning' concept as in cognitive science: construct higher abstraction from inputs.
In EM methodology, there is simply no increase in abstraction.
One example is word embedding in NLP, where words are transformed progressively into vectors of numbers with increasing abstraction, such that in the end, the vectors of numbers can actually represent syntactic (grammar) and semantic (logic) meaning. This capacity to deal with ambiguity (in languages, or other use cases) by building abstracted features is one stark distinction of deep learning vs. many other statistical methods.
|
44,039
|
In R, how do I test $H_0: \beta_1+\beta_2=0$
|
If $\beta_1 + \beta_2 = 0$, then $\beta_1 = -\beta_2$, so $\beta_1 x_t + \beta_2 z_t = \beta_1 x_t - \beta_1 z_t = \beta_1 (x_t - z_t)$. So, in R, you can run
f1 <- lm(y ~ I(x - z), data = data)
f2 <- lm(y ~ x + z, data = data)
anova(f1, f2)
which will give you a test of whether the model in which $\beta_1 + \beta_2 = 0$ (i.e., f1) fits worse than a model in which $\beta_1$ and $\beta_2$ can vary freely (i.e., f2). If the comparison is significant, then f2 is the better model and you can reject the null hypothesis that $\beta_1 + \beta_2 = 0$.
More generally, you can use the multcomp package to test general linear hypotheses:
summary(multcomp::glht(f2, linfct = "x + z = 0"))
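The linear hypothesis can also be checked by hand via a Wald statistic, $z = c^\top\hat\beta \big/ \sqrt{c^\top \widehat{V} c}$ with contrast $c=(0,1,1)^\top$, which is essentially what glht computes. A language-agnostic sketch (shown in Python/numpy for brevity, on made-up data generated so that $H_0$ holds):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up data where H0 is true: beta1 = 1, beta2 = -1, so beta1 + beta2 = 0
n = 200
xv = rng.normal(size=n)
zv = rng.normal(size=n)
y = 0.5 + 1.0 * xv - 1.0 * zv + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), xv, zv])  # design matrix: intercept, x, z
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])      # residual variance estimate
V = s2 * np.linalg.inv(X.T @ X)            # estimated covariance of beta-hat

c = np.array([0.0, 1.0, 1.0])              # contrast testing beta1 + beta2 = 0
z_stat = (c @ beta) / np.sqrt(c @ V @ c)
print(round(float(z_stat), 2))  # small in absolute value when H0 holds
```

Comparing `z_stat` against a standard normal (or $t_{n-3}$) reference gives the p-value.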
|
44,040
|
In R, how do I test $H_0: \beta_1+\beta_2=0$
|
Great thread which has generated some great answers - though I have a feeling it will be moved to Stack Overflow because it is software-related (the software being R).
To supplement Noah's answer, I will show an alternative way one can test the hypotheses of interest using the multcomp package. [Note that we can't test just a null hypothesis - we need to specify an alternative hypothesis as well.]
Recall that the linear regression model is:
$y_t = \alpha + \beta_1x_t + \beta_2 z_t$,
while the two competing hypotheses being tested are:
$H_0: \beta_1 + \beta_2 = 0$ and
$H_a: \beta_1 + \beta_2 \neq 0$.
Here, $H_0$ refers to the null hypothesis and $H_a$ refers to the alternative hypothesis.
Step 1:
The first thing to note is that the left-hand side of the two stated hypotheses is nothing but a linear combination of the regression coefficients $\alpha$, $\beta_1$ and $\beta_2$ in the linear regression model. Specifically:
$\beta_1 + \beta_2 = 0*\alpha + 1*\beta_1 + 1*\beta_2$,
where the weights given to the regression coefficients $\alpha$, $\beta_1$ and $\beta_2$ in this linear combination are 0, 1 and 1, respectively.
Step 2:
We will define a matrix of weights W with a single row, which lists the weights of the regression coefficients:
W <- rbind(c(0, 1, 1))
This is what W looks like currently:
> W
[,1] [,2] [,3]
[1,] 0 1 1
Step 3:
We assign proper names to the row and column of weights W. The row name will be beta1 + beta2. The column names will be alpha, beta1 and beta2. This is just so that we can keep track of what linear combination of the coefficients $\alpha$, $\beta_1$ and $\beta_2$ we are interested in testing.
rownames(W) <- c("beta1 + beta2")
colnames(W) <- c("alpha","beta1", "beta2")
W
This is what the beautified version of W looks like:
> W
alpha beta1 beta2
beta1 + beta2 0 1 1
Step 4:
We fit the model and perform the test of the null hypothesis against the alternative hypothesis:
library(multcomp)
model <- lm(y ~ x + z, data = data)
model.test <- glht(model, linfct = W)
summary(model.test)
If we generate the data with the commands below:
set.seed(1)
x = rnorm(100)
z = rnorm(100)
X = model.matrix(~x+z)
y = X%*%c(1,-2,3) + rnorm(100)
data <- data.frame(y = y, x = x, z = z)
here is what the R output would look like:
> summary(model.test)
Simultaneous Tests for General Linear Hypotheses
Fit: lm(formula = y ~ x + z, data = data)
Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
beta1 + beta2 == 0 0.9676 0.1601 6.043 2.81e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Adjusted p values reported -- single-step method)
From this output, we can see that the p-value for the t-test used to test the null hypothesis against the alternative hypothesis is 2.81e-08 (which is scientific notation for 0.0000000281).
Step 5:
To estimate the value of $\beta_1 + \beta_2$ and compute an associated confidence interval, we can use the command:
confint(model.test)
whose output will look like this:
> confint(model.test)
Simultaneous Confidence Intervals
Fit: lm(formula = y ~ x + z, data = data)
Quantile = 1.9847
95% family-wise confidence level
Linear Hypotheses:
Estimate lwr upr
beta1 + beta2 == 0 0.9676 0.6498 1.2855
From this output, we can see that the estimated value of $\beta_1 + \beta_2$ is 0.9676 and the corresponding 95% confidence interval is (0.6498, 1.2855).
We can plot the confidence interval we computed via these commands:
par(mar=c(4,8,4,4))
plot(model.test)
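Under the hood, glht computes a Wald statistic from the coefficient vector and its covariance matrix. A minimal Python sketch of that computation (simulated data for illustration only, mirroring the generation above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100
x, z = rng.normal(size=n), rng.normal(size=n)
y = 1 - 2 * x + 3 * z + rng.normal(size=n)     # true beta1 + beta2 = 1

X = np.column_stack([np.ones(n), x, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - X.shape[1]
vcov = (resid @ resid / df) * np.linalg.inv(X.T @ X)  # estimated Cov(beta-hat)

w = np.array([0.0, 1.0, 1.0])                  # the row of W: 0*alpha + 1*beta1 + 1*beta2
est = w @ beta                                 # point estimate of beta1 + beta2
se = np.sqrt(w @ vcov @ w)                     # its standard error
t = est / se
p = 2 * stats.t.sf(abs(t), df)                 # two-sided p-value
half = stats.t.ppf(0.975, df) * se             # half-width of the 95% CI
print(est, se, p, (est - half, est + half))
```

The estimate, standard error, t value and confidence interval correspond to the quantities in the glht summary and confint output.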
|
44,041
|
In R, how do I test $H_0: \beta_1+\beta_2=0$
|
The variance of $\beta_1 + \beta_2$ is $\operatorname{Var}(\beta_1) + \operatorname{Var}(\beta_2) + 2\operatorname{Cov}(\beta_1,\beta_2)$. Obtain the variance and covariance from the covariance matrix and construct an appropriate confidence interval.
Here is some R code. I'm sure there is a package to do this, but until someone posts that, this should do fine.
set.seed(1)                      # for reproducibility
x = rnorm(100)
z = rnorm(100)
X = model.matrix(~ x + z)
y = X %*% c(1, -2, 3) + rnorm(100)
model = lm(y ~ x + z)
sigma = vcov(model)              # covariance matrix of the coefficient estimates
var_est = as.numeric(c(0, 1, 1) %*% sigma %*% c(0, 1, 1))  # Var(beta1 + beta2)
betas = coef(model)
(betas[2] + betas[3]) + c(-1, 1) * 1.96 * sqrt(var_est)    # 95% CI for beta1 + beta2
|
44,042
|
In R, how do I test $H_0: \beta_1+\beta_2=0$
|
Maybe you can try a chi-square goodness-of-fit test (observed vs. expected) for your data points, for two models: one with $\beta_1 = -\beta_2$, and another where you use $\beta_2 \in (\beta_1 - \epsilon, \beta_1 + \epsilon)$ for your choice of real number $\epsilon > 0$, and compare the two chi-squared statistics using the appropriate degrees of freedom.
|
44,043
|
How do I generate distribution of positive numbers only with min, max and mean?
|
While the problem is very much ill-posed, since there is an infinite range of distributions satisfying these constraints, a possible solution is the maximum entropy distribution under the constraints of a support of $(80,12000)$ [thus using the uniform measure on that interval as the reference measure] and a mean of $\mathbb E[X]=500$. This distribution is of the form
$$p(x)=\exp\{\alpha+\beta x\}\,\mathbb I_{(80,12000)}(x)$$
with
$$\int_{80}^{12000} \exp\{\alpha+\beta x\}\,\text dx=1\qquad\text{and}\qquad
\int_{80}^{12000} x\exp\{\alpha+\beta x\}\,\text dx=500$$
which leads to
$$\exp\{-\alpha\}=\beta^{-1}[\exp\{12000\beta\}-\exp\{80\beta\}]$$
and$$\beta^{-1}\exp\{\alpha\}[12000\exp\{12000\beta\}-80\exp\{80\beta\}]-\beta^{-1}=500$$which can be solved numerically in $\beta$. Leading to
$$\beta^*=-.00238\quad\text{and}\quad\alpha^*=-5.850$$which can be easily simulated as a truncated exponential distribution, by inversion of the cdf, e.g., using qexp() in R. For instance,
rtrunc <- function(n = 1)
  qexp(pexp(80, .00238) + runif(n) *
         (pexp(12000, .00238) - pexp(80, .00238)), .00238)
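This construction can be cross-checked in Python (a sketch using only the quantities derived above): re-solve for the exponential rate, which is $-\beta^*$ in the notation above, then sample by CDF inversion exactly as qexp(pexp(...)) does.

```python
import numpy as np
from scipy import optimize

a, b, m = 80.0, 12000.0, 500.0

def trunc_mean(lam):
    # mean of an Exponential(rate = lam) truncated to (a, b)
    ea, eb = np.exp(-lam * a), np.exp(-lam * b)
    return 1 / lam + (a * ea - b * eb) / (ea - eb)

# solve E[X | a < X < b] = m for the rate (should recover about 0.00238)
lam = optimize.brentq(lambda l: trunc_mean(l) - m, 1e-6, 1.0)

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
Fa, Fb = 1 - np.exp(-lam * a), 1 - np.exp(-lam * b)
x = -np.log(1 - (Fa + u * (Fb - Fa))) / lam    # inverse-CDF sampling
print(lam, x.mean())
```

The recovered rate matches $-\beta^*\approx.00238$ and the sample mean lands at 500, confirming the numerical solution.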
If the question is instead about simulating a sample $X_{1:2000}$ such that $$\min(X_{1:2000})=80,\quad\max(X_{1:2000})=12000,\quad\bar X_{1:2000}=500$$
there is again an infinite range of solutions, the simplest being a uniform Multinomial distribution constrained by its minimum $X_{(1)}$ being 80 and its maximum $X_{(2000)}$ being 12000 since
$$\underbrace{X_{(1)}}_{80}+\cdots+\underbrace{X_{(2000)}}_{12000} = 80 + 987920 + 12000= \underbrace{2000}_p\times 500=\underbrace{10^6}_n$$
namely proportional to
$${n\choose 80\,n_2\,\cdots\,n_{p-1}\,12000}\mathbb I_{80\le n_2\le\ldots\le n_{p-1}\le 12000}$$
This is equivalent to simulating a Multinomial
$$\mathcal M_{1998}(987920,1/1998,\ldots,1/1998)$$
constrained to $(80,12000)^{1998}$, i.e.
x=rmultinom(1,987920,rep(1,1998))
while (min(x)<80||max(x)>12000){
x=rmultinom(1,987920,rep(1,1998))}
As an additional remark, let me add that observing a range of (80,12000) for a Multinomial $\mathcal M(10^6;2000)$ is extremely unlikely (in the above simulation, the first attempt is always successful) and a more satisfactory approach would be to infer first about the probability vector of a Multinomial $\mathcal M(10^6;2000;p)$ before predicting the remaining 1998 categories.
|
44,044
|
How do I generate distribution of positive numbers only with min, max and mean?
|
If you don't care about the distribution aside from min, max, and mean, then there is a simple answer.
Take 96.476510067114100 percent of draws as 80 and 3.523489932885910 percent of draws as 12000. On average, you get 500, and you have your min and max. I calculated the percentages by solving a system of equations
$$a + b =1$$ $$80a + 12000b = 500$$
The first equation establishes that the values must sum to one, making sure that we are dealing with probabilities. The second equation gets us our average of 500.
D <- rep(NA,2000) # define a vector of NAs to hold your sampled values
for (i in 1:2000){
X <- rbinom(1,1,0.96476510067114100) # determine which value you'll take, 80 or 12000
if (X==0){D[i] <- 12000} # declare observation i as 12000
if (X==1){D[i] <- 80} # declare observation i as 80
}
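A vectorized version of the same idea (a sketch; numpy's choice stands in for the rbinom loop, and the seed is arbitrary):

```python
import numpy as np

lo, hi, m = 80, 12000, 500
p_lo = (hi - m) / (hi - lo)   # solves a + b = 1 and 80a + 12000b = 500
rng = np.random.default_rng(0)
D = rng.choice([lo, hi], size=2000, p=[p_lo, 1 - p_lo])
print(p_lo, D.mean())
```

Note p_lo reproduces the 96.4765...% figure above, and the expected value of each draw is exactly 500; the sample mean of 2000 draws fluctuates around it.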
|
44,045
|
How do I generate distribution of positive numbers only with min, max and mean?
|
Use for example a beta distribution, shifted and rescaled to your min and max.
The beta is easy to use here since it is bounded to the interval [0;1], but the mean can be placed by parameterization.
You have mean=alpha/(alpha+beta) and hence beta=alpha/mean - alpha, or in the rescaled version beta=alpha*(max-min)/(mean-min) - alpha. With the parameter alpha you can control the shape, whether you want more values in the extremes or not.
You can also consider a truncated normal distribution. This works quite similarly. Again you have to decide on a shape by choosing the standard deviation. This is straightforward to use: fix min, max, mean, and sigma, compute the resulting mu, and you have your data distribution. But the shape of this distribution will look truncated, and not as elegant as a beta distribution.
Beta distributions are smooth. If you want something simpler consider simply using two uniform distributions. Without loss of generality, assume min=0 and max=1 by rescaling and shifting.
Split the interval at the (rescaled) mean. Sampling uniformly from [0;mean] with probability p has E[X]=mean/2, and from [mean;1] with probability 1-p has E[X]=(mean+1)/2. Combining these two with the desired outcome yields p*mean/2+(1-p)(mean+1)/2 = mean, and solving for p yields p = 1-mean.
Hence a simple strategy is to uniformly sample from [min;mean] with probability 1-(mean-min)/(max-min) and from [mean;max] otherwise. The drawback is the non-smooth (stepwise) CDF.
Ultimately, you could also design the CDF directly. This would be easy if you had fixed the median, but with the mean you'll need to take the values into account. The idea is that you might want to enforce a stepwise linear or polynomial CDF, and choose the function parameters such that the resulting mean is as desired. Please do the math for this yourself.
Last but not least: you are probably asking for a skewed distribution. I would rather fix the median, not the mean. This makes the above constructions a lot easier and more meaningful. The mean of a skewed distribution is not too reliable.
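A sketch of the first suggestion (the rescaled beta), with alpha chosen arbitrarily and beta derived from the target mean as described; the numbers min=80, max=12000, mean=500 are borrowed from the earlier answers for illustration:

```python
import numpy as np

lo, hi, m = 80.0, 12000.0, 500.0
alpha = 2.0                                  # free shape parameter: pick to taste
beta = alpha * (hi - lo) / (m - lo) - alpha  # from rescaled mean = alpha / (alpha + beta)

rng = np.random.default_rng(0)
x = lo + (hi - lo) * rng.beta(alpha, beta, size=100_000)
print(beta, x.mean())
```

The sample mean lands at 500 and every draw stays inside [80, 12000]; larger alpha concentrates the mass, smaller alpha pushes it toward the extremes.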
|
44,046
|
Sum of normal independent random variables with coefficients
|
First, let me note there is nothing special in having the coefficients
of the linear combination to be less or more than one.
The moment generating function is defined as
$$M_X(s)=\mathbb{E}[\exp\{sX\}]$$
when this expectation exists. Considering a linear combination of independent random variables, like $2X+3Y$, leads to the moment generating function
\begin{align}
M_{2X+3Y}(s)&=\mathbb{E}[\exp\{s(2X+3Y)\}]\tag{definition}\\&=\mathbb{E}[\exp\{s2X\}\exp\{s3Y\}]\\&=\mathbb{E}[\exp\{2sX\}]\mathbb{E}[\exp\{3sY\}]\tag{independence}\\&=M_X(2s)M_Y(3s)\tag{identification}\\&=\exp\{4s^2/2\}\exp\{-3s+9s^2\}\tag{normality}\\&=\exp\{22s^2/2-3s\}\end{align}
which uniquely and perfectly identifies a ${\cal N}(-3,22)$ distribution. The very same steps can be used to establish that any linear combination of two independent Normal variates is again a Normal variate.
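A quick simulation cross-check of this derivation (the MGFs used above imply $X\sim\mathcal N(0,1)$ and $Y\sim\mathcal N(-1,2)$; seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(0.0, 1.0, size=n)             # X ~ N(0, 1)
Y = rng.normal(-1.0, np.sqrt(2.0), size=n)   # Y ~ N(-1, 2): std dev is sqrt(2)
Z = 2 * X + 3 * Y
print(Z.mean(), Z.var())                     # should be near -3 and 22
```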
|
44,047
|
Sum of normal independent random variables with coefficients
|
Use the MGF to determine that a linear combination of normal random variables is normal (the MGF uniquely defines the distribution).
Since the sum of normals is normal, take the expectation and variance of
$2X + 3Y$ to find the parameters governing the normal distribution. Use the property of variance: $V(\sum X_i) = \sum V(X_i)$ if $X_i$ are independent.
(I'm assuming you made a typo by saying $Z = 2X + 3Z$ and meant $Z = 2X + 3Y$).
Also: Your simulation is correct.
|
44,048
|
Sum of normal independent random variables with coefficients
|
You don't need to use moment generating functions. The sum of two independent normal random variables is normal, with mean equal to the sum of the means and variance equal to the sum of the variances. Also, a constant $c$ times a normal random variable is normal with mean $c\mu$, where $\mu$ is the mean of the original normal, and variance $c^2\sigma^2$, where $\sigma^2$ is the variance of the original normal.
Given this, $2X$ is normal with mean 0 and variance 4, and $3Y$ is normal with mean $-3$ and variance $9(2)=18$. Therefore $2X+3Y$ is normal with mean $-3$ and variance $4+18=22$.
|
44,049
|
What to conclude when most results are statistically significant to fail to reject null hypothesis but not all?
|
If all of your null hypotheses are, in reality, true, then your probability of rejecting in at least one of your experiments is
$$ 1 - 0.95^4 \approx 0.19 $$
So there is about a 20% chance you would find at least one rejection in your experiments, even if all of the bags had an equal distribution of colors. Not too unlikely; how you decide to act now depends on the costs of being wrong.
I suggest you eat 20% of the candy.
Wouldn't it be (1-.95)^4?
I think I got it right:
Probability of one experiment falsely rejecting: $0.05$
Probability of one experiment not falsely rejecting: $0.95$
Probability of all experiments not falsely rejecting: $0.95^4$
Probability of at least one experiment falsely rejecting: $1 - 0.95^4$
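The same arithmetic, with a small simulation alongside it (a sketch; under each true null the p-value is uniform on (0,1)):

```python
import numpy as np

alpha, k = 0.05, 4
analytic = 1 - (1 - alpha) ** k              # P(at least one false rejection)

rng = np.random.default_rng(0)
p = rng.uniform(size=(100_000, k))           # 4 p-values per replication, all nulls true
simulated = (p < alpha).any(axis=1).mean()
print(analytic, simulated)
```

Both numbers land near 0.185, i.e. the roughly 20% familywise error rate discussed above.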
|
44,050
|
What to conclude when most results are statistically significant to fail to reject null hypothesis but not all?
|
If you are trying to test whether the distribution depends on the bag (or, equivalently, whether all bags are random samples from the same population), performing tests on pairs of bags is not going to work, because it can yield contradictory results, as you found, and because the probability of type I errors builds up due to the multiple comparisons problem, as Matthew Drury's answer explains and as the XKCD comic demonstrates in a different context.
You can avoid this problem by performing a single test using all bags: a chi-square test for homogeneity, which will tell you whether there are significant differences between bags.
Please notice that most online examples of this test use just a pair of samples, but it works equally well for more samples. Furthermore, the test is the same as the chi-square test for independence (just the interpretation is a bit different), so you can find information under both names.
If the homogeneity test shows that there are significant differences between bags, you might be interested in knowing between which bags the differences lie. Then paired tests can be useful, but to prevent the multiple comparisons problem from happening again, you need to make corrections. I would suggest the Bonferroni correction because of its simplicity.
Anyway, if your bags are just random bags taken from a shop shelf, knowing which one is significantly different is uninteresting and the homogeneity test should be enough for your purposes.
|
44,051
|
What to conclude when most results are statistically significant to fail to reject null hypothesis but not all?
|
After you explain the results in the Results chapter, you can state in the Discussion that one result was found significant. You can provide your interpretation of the results based on the literature and suggest a number of plausible explanations to the reader.
|
44,052
|
How to calculate mean and standard deviation from median and quartiles
|
You can check Wan et al. (2014)*. They build on Bland (2014) to estimate these parameters according to the data summaries available. See scenario C3 in their paper:
$$ \bar{X} ≈ \frac {q_{1} + m + q_{3}}{3}$$
$$ S ≈ \frac {q_{3} - q_{1}}{1.35}$$
or, if you have the sample size:
$$ S ≈ \frac {q_{3} - q_{1}}{2 \Phi^{-1}\left(\frac{0.75n-0.125}{n+0.25}\right) }$$
where $q_{1}$ is the first quartile, $m$ the median, $q_{3}$ the third quartile, and $\Phi^{-1}(z)$ the $z$th quantile (inverse cumulative distribution function) of the standard normal distribution.
So, in R:
q1 <- 0.02
q3 <- 0.04
n <- 100
(s <- (q3 - q1) / (2 * (qnorm((0.75 * n - 0.125) / (n + 0.25)))))
#[1] 0.0150441
* Wan, Xiang, Wenqian Wang, Jiming Liu, and Tiejun Tong. 2014. “Estimating the Sample Mean and Standard Deviation from the Sample Size, Median, Range And/or Interquartile Range.” BMC Medical Research Methodology 14 (135). doi:10.1186/1471-2288-14-135.
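A quick sanity check of the constants above (sketched in Python rather than R; the 1.35 in the simpler formula is just $2\,\Phi^{-1}(0.75)$, the width of the interquartile range in standard-normal units):

```python
from statistics import NormalDist  # stdlib standard-normal quantile function

# The divisor 1.35 approximates 2 * Phi^{-1}(0.75).
print(round(2 * NormalDist().inv_cdf(0.75), 3))  # 1.349

# The finite-sample version, with the q1, q3, n from the R example above:
q1, q3, n = 0.02, 0.04, 100
s = (q3 - q1) / (2 * NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25)))
print(round(s, 5))  # 0.01504
```

This reproduces the value computed by the R snippet.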
|
44,053
|
How to calculate mean and standard deviation from median and quartiles
|
Adding to Michael Chernick's comment, here's an example.
x <- runif(1000, 0, 1)
summary(x)  # 1st Qu. = 0.27, 3rd Qu. = 0.77, mean = 0.51
x1 <- c(x, 100)
summary(x1) # 1st Qu. = 0.27, 3rd Qu. = 0.77, mean = 0.61
x2 <- c(rnorm(100, 0, 1), rnorm(10, 10, 0.1))
summary(x2) # 1st Qu. = -0.85, 3rd Qu. = 0.69, mean = 0.71
With the first pair, note that a single outlier affects the mean but not the quartiles. The last example is one where the mean is larger than the 3rd quartile.
One real world case where the mean could be greater than the third quartile is income.
|
44,054
|
How to calculate mean and standard deviation from median and quartiles
|
There is a detailed publication on this topic from Greco et al, How to impute study-specific standard deviations in meta-analyses of skewed continuous endpoints? World Journal of Meta-Analysis 2015;3(5):215-224.
The main findings of this work are that it is acceptable to approximate "missing values of mean and SD with the correspondent values for median and interquartile range".
|
44,055
|
How to calculate mean and standard deviation from median and quartiles
|
If you know that the data is normally distributed, you can infer the mean and standard deviation from a lower and an upper quantile.
norm_from_quantiles = function(lower, upper, p = 0.25) {
  mu = mean(c(lower, upper))
  sigma = (lower - mu) / qnorm(p)
  list(mu = mu, sigma = sigma)
}
Here, p and 1 - p are the probability levels of lower and upper, so p = 0.25 means that lower and upper are quartiles, while p = 0.1 would mean that they are the 10% and 90% quantiles respectively.
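A hedged Python translation of the same idea (the function name follows the R snippet; it is not from any library), with a round trip to confirm the recovered parameters reproduce the input quartiles:

```python
from statistics import NormalDist

def norm_from_quantiles(lower, upper, p=0.25):
    # Assumes normality, so the mean sits midway between symmetric quantiles.
    mu = (lower + upper) / 2
    sigma = (lower - mu) / NormalDist().inv_cdf(p)  # inv_cdf(0.25) < 0
    return mu, sigma

mu, sigma = norm_from_quantiles(0.02, 0.04)
print(round(mu, 3), round(sigma, 4))  # 0.03 0.0148

# Round trip: the fitted normal reproduces the input quartiles.
d = NormalDist(mu, sigma)
print(round(d.inv_cdf(0.25), 3), round(d.inv_cdf(0.75), 3))  # 0.02 0.04
```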
|
44,056
|
How to calculate mean and standard deviation from median and quartiles
|
I faced a similar problem: I had calculated percentiles (0 to 100%) and was then asked to report the mean as well. After playing around in my notebook, I noticed that the empirical mean of the list of quantiles is in fact (approximately) the mean of the distribution. I thought I had discovered a new theorem, but then found this:
https://en.wikipedia.org/wiki/Inverse_transform_sampling
The theorem establishes that if you sample $U$ uniformly in $[0,1]$ and take the corresponding $X = F_X^{-1}(U)$, you generate samples from the original distribution; that is why I was getting the mean back when averaging the quantiles, which evaluate $F_X^{-1}$ on an even grid of probabilities.
It's not mentioned there directly, but if you can generate samples from the original distribution, then their mean estimates the mean of the original distribution.
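A small Python sketch of the observation; the slight underestimate comes from the even grid of probabilities missing the extreme tails:

```python
import random
from statistics import mean, quantiles

random.seed(0)
# 100,000 draws from an Exponential(1) distribution; its true mean is 1.
x = [random.expovariate(1.0) for _ in range(100_000)]

# Evaluate the empirical quantile function on an even grid of probabilities
# (1%, 2%, ..., 99%); by inverse transform sampling these behave like draws
# from the distribution itself, so their average estimates the mean.
q = quantiles(x, n=100)
print(abs(mean(q) - mean(x)) < 0.05)  # True
```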
|
44,057
|
Relationship between RMSE and RSS
|
Having the mathematical derivations in hand, you might ask yourself why you would use one measure over the other to assess the performance of a given model. You could use either, but the advantage of RMSE is that it comes out in more interpretable units. For example, if you were building a model that used house features to predict house prices, RSS would come out in dollars squared and would be a really huge number. RMSE would come out in dollars, and its magnitude would make more sense given the range of your house price predictions.
|
44,058
|
Relationship between RMSE and RSS
|
The RSS is the sum of the squared errors (the differences between the estimated and the observed values):
$ RSS = \sum{(\hat Y_i-Y_i)^2} $
The MSE is the mean of those squared errors:
$ MSE = \frac{1}{n}\sum{(\hat Y_i-Y_i)^2}$
The RMSE is the square root of the MSE:
$ RMSE = \sqrt{MSE} $
A bit of math shows:
$ RMSE = \sqrt{MSE} = \sqrt{\frac{1}{n} \cdot RSS} $
You can check it in the example that you posted:
$ RMSE = \sqrt{\frac{1}{32} \cdot 447.6743} = 3.740297 $
Note that for the mtcars dataset $n=32$.
Also see this question
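The relationship is easy to verify with the numbers quoted above (Python used here for illustration):

```python
import math

rss, n = 447.6743, 32          # RSS and sample size from the mtcars example
mse = rss / n                  # mean squared error
rmse = math.sqrt(mse)          # root mean squared error
print(round(rmse, 6))          # 3.740297
```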
|
44,059
|
What does "irregularly spaced spatial data" mean?
|
A lot of techniques assume that data is sampled at regularly-spaced intervals. You might count how much litter is near each mile marker on the highway, or sample points in a forest on a regularly spaced grid (100, 200, 300, ... meters north and 100, 200, 300 meters east of some landmark). This also occurs in time--my EEG machine records a data point every millisecond. We call the interval between adjacent samples the sampling period.
However, a lot of data is not or cannot be sampled with a fixed sampling period. Perhaps the terrain doesn't allow us to place weather stations exactly 50 miles apart. We often study peoples' heights and weights, but these are only opportunistically measured at doctors' appointments (which are often not exactly 1 year apart). These data are irregularly sampled.
The paper you linked describes methods for dealing with the latter kind of data, where the sampling period is not constant. One possible approach is to interpolate your data onto a grid and then use techniques intended for gridded data. The paper argues that while this works in 1 dimension, it is less satisfactory in multiple dimensions, and their lifting-based approach works better.
|
44,060
|
What does "irregularly spaced spatial data" mean?
|
Good answers by Matt (+1) and others. Just to have a picture to drive the message (visually) home: in the following figure, assuming that the squares represent sampling points, the grey boxes follow an obvious regularly spaced design; the red boxes are just random samples that are irregularly spaced.
Both designs have their pros and cons. Do not dismiss the irregular design as "worse". For example, certain adaptive sampling designs can be extremely helpful for density estimation while being, strictly speaking, highly irregular. That is because you mostly care about regions of high volatility. Numerical integration schemes are a standard example. On the one hand, the trapezoid rule (and, in general, all the Newton–Cotes formulas) is based on an equally spaced sampling scheme. On the other hand, Monte Carlo integration methods might use a strongly irregular sample that can sometimes deviate a lot from being uniform and equally spaced (e.g. importance sampling).
|
44,061
|
What does "irregularly spaced spatial data" mean?
|
This usually means that there is no clear underlying structure of the position of the points. I.e. it is not a rectangular grid or anything that can be represented compactly which has a clear structure.
Imagine that you have weather stations around a country and you are monitoring temperature. These weather stations are most likely not on any specifically defined grid. They are irregularly spaced, and thus if one wants to do any spatial inference, one needs to create some spatial graph/mesh, most often made of triangles. Then one can do inference and interpolation based on the values at the known weather stations.
This is highly dependent on which mesh/graph you select, so there are different techniques to generate them.
|
44,062
|
What does "irregularly spaced spatial data" mean?
|
It's a British way of saying that your data does not come evenly spaced. Say you measure the temperature along a road and obtain an observation every 1 mile. This would be regularly spaced data, as opposed to taking measurements at every gas station, which would not be equally spaced, of course.
|
44,063
|
How can I explain proportional odds models to a layman?
|
I think that the first and biggest hurdle is making sure that people indeed understand logistic regression and what an odds ratio actually is. If they get that far, you simply need to explain that proportional odds models take logistic regression one step further to account for ordered categorical responses.
A naive approach would be to run a logistic regression model for a single cut point. You could dichotomize the outcome so that a positive response is a 3 or higher and a negative response is a 2 or lower. This is a valid data analysis approach, except that the cutpoint is arbitrary: you might get a slightly different outcome running the same model with a cutpoint at 2 instead of 3.
Proportional odds models, in a sense, "average up" over all possible cutpoint models to maximize the amount of information you can get out of the data. This is very good for modeling the association between one or more continuous or categorical predictors and an ordinal outcome, and it can even be used to predict outcomes somewhat.
An example of proportional odds "in the field" comes from the following paper, where authors examined the relationship between ambient air pollution and asthma severity (on a Likert type scale)
Our results indicate that a 10-μg/m3 increase in particulate matter less than or equal to 2.5 μm (PM2.5) lagged 1 day was associated with a 1.20 times increased odds of having a more serious asthma attack [95% confidence interval (CI), 1.05 to 1.37]
"A more serious asthma attack" here is taken to be the lay interpretation of what the proportional odds model is estimating. It conveys, in essence, a very nice counterfactual interpretation of findings which is why I, as a statistician, like these models so much.
|
44,064
|
How can I explain proportional odds models to a layman?
|
A key step is to make sure people understand why log-odds-ratios are useful. To help motivate log-odds-ratios, try the tale of two principals:
High School A reduced the dropout rate from 10% to 5%, a dramatic 50% decrease!
High School B increased the graduation rate from 90% to 95%, a modest 5.6% increase.
The first principal was lauded by the NYTimes for slashing the dropout rate by half. The other principal got only a short mention in the local newspaper, even though they did the same thing.
Log-odds ratios put these on even terms (base-10 logs are used here, which is what gives 0.32):
$$\log \left( \frac{0.95/0.05}{0.90/0.10}\right)=0.32$$ and
$$\log \left( \frac{0.05/0.95}{0.10/0.90}\right)=-0.32$$
In fewer words, you can say that log-odds-ratios consider a change from 10% to 5% equivalent to a change from 90% to 95%.
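A minimal check of the two log-odds-ratios above (base-10 logs, to match the 0.32 figures; Python used purely for illustration):

```python
import math

def log10_odds_ratio(p1, p0):
    # log (base 10) of the ratio of the odds p1/(1-p1) to the odds p0/(1-p0)
    return math.log10((p1 / (1 - p1)) / (p0 / (1 - p0)))

print(round(log10_odds_ratio(0.95, 0.90), 2))  # 0.32
print(round(log10_odds_ratio(0.05, 0.10), 2))  # -0.32
```

The two principals' changes come out equal in magnitude and opposite in sign, as the tale requires.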
|
How can I explain proportional odds models to a layman?
|
A key step is to make sure people understand why log-odds-ratios are useful. To help motivate log-odds-ratios, try the tale of two principals:
High School A reduced the dropout rate from 10% to 5%,
|
How can I explain proportional odds models to a layman?
A key step is to make sure people understand why log-odds-ratios are useful. To help motivate log-odds-ratios, try the tale of two principals:
High School A reduced the dropout rate from 10% to 5%, a dramatic 50% decrease!
High School B increased the graduation rate from 90% to 95%, a modest 5.5% increase.
The first principal was lauded by the NYTimes for slashing the dropout rate by half. The other principal got a short mention in the local newspaper. Even though they did the same thing.
Log-odds ratios put these on even terms:
$$\log \left( \frac{0.95/0.05}{0.90/0.10}\right)=0.32$$ and
$$\log \left( \frac{0.05/0.95}{0.10/0.90}\right)=-0.32$$
In fewer words, you can say that log-odds-ratios consider a change from 10% to 5% equivalent to a change from 90% to 95%.
|
How can I explain proportional odds models to a layman?
A key step is to make sure people understand why log-odds-ratios are useful. To help motivate log-odds-ratios, try the tale of two principals:
High School A reduced the dropout rate from 10% to 5%,
|
44,065
|
Statistical significance of birth month of professional boxers
|
The results, as reported, are not statistically significant.
We can arrive at this conclusion (and better understand how it is meant to be interpreted) in steps. The first step is to take to heart Scortchi's comment,
Beware of data dredging.
This is the process of looking for "patterns" in data, finding one, and then applying a formal statistical test to determine its "significance." This would be an abuse of statistical testing, as has been amply explained and demonstrated in many places.
The second step is to ask whether the pattern found in these data is nevertheless so striking that it would be reasonable to take it as evidence of a meaningful variation in birth month. Some patterns are perfectly obvious, no matter what! Let's screen the results, using crude approximations and statistical models, to see how strong the results might be. Suppose that
The data could be conceived of as a random, representative sample of a well-defined population, such as "all champion professional boxers." Although this is obviously not a random sample, it is plausible to treat it as if it were, at least for these screening purposes.
Birth months are divided into four contiguous non-overlapping seasons (without reference to the data values).
As a null hypothesis (tentatively held, to be evaluated in light of the data), all variation observed in these seasonal totals is random.
With these assumptions, the count for any individual season has a Binomial$(67, 1/4)$ distribution. A Normal approximation to this distribution, which has a mean of $67/4\approx 17$ and standard deviation of $\sqrt{67(1/4)(1-1/4)}\approx 3.5$, suggests that values within a couple SDs of the mean should be expected as a result of sampling variation. This is the interval $[10, 24]$, having a width of $14$ (equal to $21\%$ of the total).
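Those screening numbers can be reproduced with plain arithmetic. A quick sketch (Python), using only the binomial formulas just quoted:

```python
import math

n, p = 67, 1 / 4
mean = n * p                        # expected count per season
sd = math.sqrt(n * p * (1 - p))     # binomial standard deviation
# "A couple SDs of the mean" as an interval of whole counts:
lo, hi = round(mean - 2 * sd), round(mean + 2 * sd)

print(round(mean), round(sd, 1), (lo, hi))   # 17 3.5 (10, 24)
```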
Although the quoted statistic of $40\% - 12\%$ = $28\%$ for the range, equal to $19$, is larger than $14$ (and therefore on the high side), it isn't that high. Variations in natural birth rates, variations in the lengths of quarters (which range from $90$ to $92$ days), and the fact that there are $12$ (not just $4$) possible three-month sequences to look at, all suggest that $19$ might be on the margin of being statistically significant.
This takes us to the third step: let's try to reproduce the data evaluation that actually occurred. If one were exploring birth date data to look for patterns, the most powerful methods would look at individual dates. I will suppose, though, that this was not performed and that initially dates were summarized by month. One might then plot frequencies by month and look for patterns of highs and lows, much as described in the question. Plausibly, such a "pattern" would consist of some contiguous series of months with high average counts and some other contiguous series of months with low average counts.
We could generously characterize this search for patterns as a systematic statistical procedure. One way would be to look for statistically significant differences (at some desired level $\alpha$, such as $\alpha = 0.05 = 5\%$) among the individual months. If such differences did not appear, one would look for significant differences among windowed monthly sums using a two-month window, then a three-month window, and so on. (It is intuitively obvious that no more information is gained beyond a six-month window.)
The statistic for this procedure will be a vector $\mathbf t = (t_1, t_2, \ldots, t_6)$ giving the observed ranges of windowed means for windows of lengths $1, 2, \ldots, 6$ months. For example, consider these simulated monthly counts (which occurred in the second of $1,000,000$ iterations of this experiment):
Mar Apr May Jun Jul Aug Sep Oct Nov Dec Jan Feb
0 8 8 10 9 5 3 6 5 7 3 3
Their range is $t_1 = 10 - 0=10$. Their two-month sums (given by Mar+Apr, Apr+May, ..., Jan+Feb, Feb+Mar) are
8 16 18 19 14 8 9 11 12 10 6 3
The range of those is $t_2 = 19-3=16$. Continuing like this through the six-month sums gives the vector of ranges
$$\mathbf t = (10,16,21,22,22,19).$$
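The windowed-range statistic $\mathbf t$ for those simulated counts can be reproduced in a few lines. A sketch (Python; the sums wrap around the year, matching the circular windows used above):

```python
def window_ranges(counts, max_width=6):
    """Ranges of circular windowed sums for window widths 1..max_width."""
    n = len(counts)
    t = []
    for k in range(1, max_width + 1):
        sums = [sum(counts[(i + j) % n] for j in range(k)) for i in range(n)]
        t.append(max(sums) - min(sums))
    return t

counts = [0, 8, 8, 10, 9, 5, 3, 6, 5, 7, 3, 3]  # Mar..Feb, as above
print(window_ranges(counts))   # [10, 16, 21, 22, 22, 19]
```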
Such a statistic will be considered "significant" if, as one scans through it, any of its components $t_k$ is in the critical region for a size-$\alpha$ test for windowed sums of width $k$. Because we are looking at ranges, the critical region of unusually high ranges for each $k$ can be described by a single number $c_k$. If any of the $t_k$ exceed $c_k$, one would have noticed a "pattern."
Marginal distributions of the ranges for windowed sums with $67$ total observations were computed by simulating $1,000,000$ samples. The observed value of $19$ is somewhat rare, as seen by its position in the tail of the "Window width 3" plot, but in the context of the overall search for patterns it does not appear unusual, as explained below.
Because multiple, interdependent tests are performed on the same data, the actual test size will not be the same as the nominal size of $\alpha$. The error rate will be inflated due to the repeated "dredging" that occurs during this six-step process. Simulation helps us estimate that error rate. For instance, when running all six steps at a nominal level of $\alpha=0.05$, simulations show that fully ten percent of perfectly random results appear to be "significant." To compensate for this inflation, I performed a search to find a smaller nominal $\alpha$ that leads to a five percent error rate. Based on a simulation of $1,000,000$ samples, the nominal $\alpha$ must be very close to $0.0254$. Using it, the critical vector is
$$\mathbf c = (c_1, c_2, \ldots, c_6) = (12, 16, 19, 20, 22, 23)$$
and the actual (Type I) error rate is $0.048\approx 5\%$. (It is not possible to hit $5\%$ exactly due to the discrete nature of the distribution.)
The one thing we know about the actual data is that $t_3 = 19$. Because this does not exceed $c_3$, we do not reject the null hypothesis. In other words, none of the information disclosed in the question is strong enough to convince us of the need for any explanation of the data behavior beyond natural, random chance variation.
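The decision rule itself is a one-liner: scan the observed ranges against $\mathbf c$ and declare a "pattern" only if some component strictly exceeds its critical value. A sketch (Python), applied to the only component known for the actual data, $t_3 = 19$:

```python
c = [12, 16, 19, 20, 22, 23]   # critical vector at nominal alpha = 0.0254

def flags_pattern(t, crit):
    """True if any observed windowed range strictly exceeds its critical value."""
    return any(tk > ck for tk, ck in zip(t, crit))

# For the actual boxer data only the width-3 range is known: t3 = 19.
print(flags_pattern([19], c[2:3]))   # False -> do not reject the null
```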
The fourth step is to consider whether the previous conclusion should be modified due to departures between reality and our models of the data and the data-exploration process. The binomial model is fairly good: it accounts adequately for major behaviors in birth rates (but ignores small fluctuations in overall birth rates in the population and temporal correlation in those rates). The sequential pattern-seeking model is likely inadequate: it cannot reflect all the different ways these data might have been looked at to seek patterns. Both limitations of the models suggest they are not sufficiently conservative. We should therefore require strongly significant results before we are comfortable concluding that there is any temporal pattern to professional boxing birth rates at all.
One could conduct more powerful exploration of these data, but given that they have already been worked over so well, it seems unlikely that any new results would be strong enough to change our negative conclusion. The best use of these data might be to provide corroborative evidence to support conclusions from another related dataset that is carefully and formally evaluated.
R code to reproduce the simulation.
It requires about one second per $100,000$ iterations. Set n.iter accordingly.
#
# Precalculate coefficients for a width-k circular neighborhood sum.
#
focal.coeff <- function(n, k) {
outer(1:n, 1:n, function(i,j) {
m <- (j - i + floor((k-1)/2)) %% n
0 <= m & m < k
})
}
#
# Return days per month.
#
month.days <- function() {
months.per.year <- 12
days.per.year <- 365.25
days.per.month <- ceiling(days.per.year / months.per.year)
# This is the pattern:
d <- round(days.per.month - (((1:months.per.year-1) * 3) %% 5) / 5, 0)
# Adjust the last month to correct the total:
d[months.per.year] <- days.per.year - sum(d) + d[months.per.year]
names(d) <- c("Mar", "Apr", "May", "Jun", "Jul", "Aug",
"Sep", "Oct", "Nov", "Dec", "Jan", "Feb")
return(d)
}
#
# Multinomial simulation.
#
set.seed(17)
size <- 67
n.iter <- 1e6
p <- month.days()
x <- matrix(rmultinom(n.iter, size, p), nrow=length(p), dimnames=list(names(p)))
#
# Find the ranges of windowed sums.
#
m <- floor(length(p)/2)
ranges <- matrix(NA, m, n.iter)
for (k in 1:m) {
stats <- apply(focal.coeff(dim(x)[1], k) %*% x, 2, range)
ranges[k, ] <- stats[2, ] - stats[1, ]
}
#
# Study them.
#
# par(mfrow=c(2,3))
# range.max <- max(ranges)
# colors <- hsv(0:(m-1)/m, 0.7, 0.8)
# invisible(sapply(1:m, function(k)
# hist(ranges[k, ], breaks=(0:range.max)+1/2, xlim=c(0, 32),
# border="#e0e0e0", col=colors[k],
# xlab="Range", freq=FALSE,
# main=paste("Window width", k))))
#
# Critical values.
#
alpha <- 0.0254
(critical.values <- apply(ranges, 1, quantile, probs=1-alpha))
#
# Sequential error rates.
# The Type I error rate is the maximum of these six rates.
#
(rowMeans(apply(ranges > critical.values, 2, cumsum) > 0))
|
Statistical significance of birth month of professional boxers
|
The results, as reported, are not statistically significant.
We can arrive at this conclusion (and better understand how it is meant to be interpreted) in steps. The first step is to take to heart Sc
|
Statistical significance of birth month of professional boxers
The results, as reported, are not statistically significant.
We can arrive at this conclusion (and better understand how it is meant to be interpreted) in steps. The first step is to take to heart Scortchi's comment,
Beware of data dredging.
This is the process of looking for "patterns" in data, finding one, and then applying a formal statistical test to determine its "significance." This would be an abuse of statistical testing, as has been amply explained and demonstrated in many places.
The second step is to ask whether the pattern found in these data is nevertheless so striking that it would be reasonable to take it as evidence of a meaningful variation in birth month. Some patterns are perfectly obvious, no matter what! Let's screen the results, using crude approximations and statistical models, to see how strong the results might be. Suppose that
The data could be conceived of as a random, representative sample of a well-defined population, such as "all champion professional boxers." Although this is obviously not a random sample, it is plausible to treat it as if it were, at least for these screening purposes.
Birth months are divided into four contiguous non-overlapping seasons (without reference to the data values).
As a null hypothesis (tentatively held, to be evaluated in light of the data), all variation observed in these seasonal totals is random.
With these assumptions, the count for any individual season has a Binomial$(67, 1/4)$ distribution. A Normal approximation to this distribution, which has a mean of $67/4\approx 17$ and standard deviation of $\sqrt{67(1/4)(1-1/4)}\approx 3.5$, suggests that values within a couple SDs of the mean should be expected as a result of sampling variation. This is the interval $[10, 24]$, having a width of $14$ (equal to $21\%$ of the total).
Although the quoted statistic of $40\% - 12\%$ = $28\%$ for the range, equal to $19$, is larger than $14$ (and therefore on the high side), it isn't that high. Variations in natural birth rates, variations in the lengths of quarters (which range from $90$ to $92$ days), and the fact that there are $12$ (not just $4$) possible three-month sequences to look at, all suggest that $19$ might be on the margin of being statistically significant.
This takes us to the third step: let's try to reproduce the data evaluation that actually occurred. If one were exploring birth date data to look for patterns, the most powerful methods would look at individual dates. I will suppose, though, that this was not performed and that initially dates were summarized by month. One might then plot frequencies by month and look for patterns of highs and lows, much as described in the question. Plausibly, such a "pattern" would consist of some contiguous series of months with high average counts and some other contiguous series of months with low average counts.
We could generously characterize this search for patterns as a systematic statistical procedure. One way would be to look for statistically significant differences (at some desired level $\alpha$, such as $\alpha = 0.05 = 5\%$) among the individual months. If such differences did not appear, one would look for significant differences among windowed monthly sums using a two-month window, then a three-month window, and so on. (It is intuitively obvious that no more information is gained beyond a six-month window.)
The statistic for this procedure will be a vector $\mathbf t = (t_1, t_2, \ldots, t_6)$ giving the observed ranges of windowed means for windows of lengths $1, 2, \ldots, 6$ months. For example, consider these simulated monthly counts (which occurred in the second of $1,000,000$ iterations of this experiment):
Mar Apr May Jun Jul Aug Sep Oct Nov Dec Jan Feb
0 8 8 10 9 5 3 6 5 7 3 3
Their range is $t_1 = 10 - 0=10$. Their two-month sums (given by Mar+Apr, Apr+May, ..., Jan+Feb, Feb+Mar) are
8 16 18 19 14 8 9 11 12 10 6 3
The range of those is $t_2 = 19-3=16$. Continuing like this through the six-month sums gives the vector of ranges
$$\mathbf t = (10,16,21,22,22,19).$$
Such a statistic will be considered "significant" if, as one scans through it, any of its components $t_k$ is in the critical region for a size-$\alpha$ test for windowed sums of width $k$. Because we are looking at ranges, the critical region of unusually high ranges for each $k$ can be described by a single number $c_k$. If any of the $t_k$ exceed $c_k$, one would have noticed a "pattern."
Marginal distributions of the ranges for windowed sums with $67$ total observations were computed by simulating $1,000,000$ samples. The observed value of $19$ is somewhat rare, as seen by its position in the tail of the "Window width 3" plot, but in the context of the overall search for patterns it does not appear unusual, as explained below.
Because multiple, interdependent tests are performed on the same data, the actual test size will not be the same as the nominal size of $\alpha$. The error rate will be inflated due to the repeated "dredging" that occurs during this six-step process. Simulation helps us estimate that error rate. For instance, when running all six steps at a nominal level of $\alpha=0.05$, simulations show that fully ten percent of perfectly random results appear to be "significant." To compensate for this inflation, I performed a search to find a smaller nominal $\alpha$ that leads to a five percent error rate. Based on a simulation of $1,000,000$ samples, the nominal $\alpha$ must be very close to $0.0254$. Using it, the critical vector is
$$\mathbf c = (c_1, c_2, \ldots, c_6) = (12, 16, 19, 20, 22, 23)$$
and the actual (Type I) error rate is $0.048\approx 5\%$. (It is not possible to hit $5\%$ exactly due to the discrete nature of the distribution.)
The one thing we know about the actual data is that $t_3 = 19$. Because this does not exceed $c_3$, we do not reject the null hypothesis. In other words, none of the information disclosed in the question is strong enough to convince us of the need for any explanation of the data behavior beyond natural, random chance variation.
The fourth step is to consider whether the previous conclusion should be modified due to departures between reality and our models of the data and the data-exploration process. The binomial model is fairly good: it accounts adequately for major behaviors in birth rates (but ignores small fluctuations in overall birth rates in the population and temporal correlation in those rates). The sequential pattern-seeking model is likely inadequate: it cannot reflect all the different ways these data might have been looked at to seek patterns. Both limitations of the models suggest they are not sufficiently conservative. We should therefore require strongly significant results before we are comfortable concluding that there is any temporal pattern to professional boxing birth rates at all.
One could conduct more powerful exploration of these data, but given that they have already been worked over so well, it seems unlikely that any new results would be strong enough to change our negative conclusion. The best use of these data might be to provide corroborative evidence to support conclusions from another related dataset that is carefully and formally evaluated.
R code to reproduce the simulation.
It requires about one second per $100,000$ iterations. Set n.iter accordingly.
#
# Precalculate coefficients for a width-k circular neighborhood sum.
#
focal.coeff <- function(n, k) {
outer(1:n, 1:n, function(i,j) {
m <- (j - i + floor((k-1)/2)) %% n
0 <= m & m < k
})
}
#
# Return days per month.
#
month.days <- function() {
months.per.year <- 12
days.per.year <- 365.25
days.per.month <- ceiling(days.per.year / months.per.year)
# This is the pattern:
d <- round(days.per.month - (((1:months.per.year-1) * 3) %% 5) / 5, 0)
# Adjust the last month to correct the total:
d[months.per.year] <- days.per.year - sum(d) + d[months.per.year]
names(d) <- c("Mar", "Apr", "May", "Jun", "Jul", "Aug",
"Sep", "Oct", "Nov", "Dec", "Jan", "Feb")
return(d)
}
#
# Multinomial simulation.
#
set.seed(17)
size <- 67
n.iter <- 1e6
p <- month.days()
x <- matrix(rmultinom(n.iter, size, p), nrow=length(p), dimnames=list(names(p)))
#
# Find the ranges of windowed sums.
#
m <- floor(length(p)/2)
ranges <- matrix(NA, m, n.iter)
for (k in 1:m) {
stats <- apply(focal.coeff(dim(x)[1], k) %*% x, 2, range)
ranges[k, ] <- stats[2, ] - stats[1, ]
}
#
# Study them.
#
# par(mfrow=c(2,3))
# range.max <- max(ranges)
# colors <- hsv(0:(m-1)/m, 0.7, 0.8)
# invisible(sapply(1:m, function(k)
# hist(ranges[k, ], breaks=(0:range.max)+1/2, xlim=c(0, 32),
# border="#e0e0e0", col=colors[k],
# xlab="Range", freq=FALSE,
# main=paste("Window width", k))))
#
# Critical values.
#
alpha <- 0.0254
(critical.values <- apply(ranges, 1, quantile, probs=1-alpha))
#
# Sequential error rates.
# The Type I error rate is the maximum of these six rates.
#
(rowMeans(apply(ranges > critical.values, 2, cumsum) > 0))
|
Statistical significance of birth month of professional boxers
The results, as reported, are not statistically significant.
We can arrive at this conclusion (and better understand how it is meant to be interpreted) in steps. The first step is to take to heart Sc
|
44,066
|
Statistical significance of birth month of professional boxers
|
A basic approach
You should be able to find data on births by time of year for the population as a whole.
To see if there is evidence that boxers have a different distribution of birth dates, given your sample size I suggest you work at a granularity of "months". Your null hypothesis is that boxers' birth months follow the same distribution as the wider population.
For each month you can calculate the "expected frequency" of boxer birthdays by multiplying your sample size by the proportion of people in the wider population who were born in that month.
You can then compare that to the "observed frequency" - the number of boxers who actually did have a birthday in that month. To determine if there is significant evidence of a difference between them, you can use a chi-squared goodness of fit test.
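As a sketch of that computation (Python, with made-up observed counts and, for illustration only, uniform population birth-month proportions; with real data you would plug in the actual figures, and `scipy.stats.chisquare` automates the same arithmetic):

```python
# Hypothetical observed boxer birth counts by month (n = 67) and hypothetical
# population birth-month proportions -- placeholders, not real data.
observed = [7, 5, 6, 4, 6, 5, 6, 5, 6, 6, 5, 6]
proportions = [1 / 12] * 12          # illustration only; use real census data

n = sum(observed)
expected = [n * p for p in proportions]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1

print(round(chi2, 2), df)   # 1.24 11
```

With these placeholder counts the statistic is far below the 5% critical value of a chi-squared distribution with 11 degrees of freedom (about 19.7), so they would give no evidence of a difference.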
Issues with the basic approach
For a student working at an introductory level I'm hoping the above is an appropriately pitched answer. I don't think it is the "best" way to do things - for instance it throws away data on the actual date of birth because it only looks at month, and it groups 1 Feb with 28 Feb despite 1 Feb being closer to 31 Jan. Statisticians generally hate throwing away information from their data. This is really just a special case of binning or discretizing continuous data, and that's well known to be a bad idea.
More sophisticated approaches are certainly possible that would take account of the actual day of birth. Moreover, they should recognise that 1 January is not at the opposite end of the year to 31 December, but rather those days are adjacent - this is the domain of circular statistics (also called directional statistics). Note that the chi-squared goodness of fit test treats month as nominal data, so lacks any concept of ordering of months at all - not only is the subtle point that January is next to December missed out, so is the more obvious fact that January is next to February.
There is another issue with binning by month. If you find a significant result because, say, March and November are overrepresented while May and January are underrepresented, it is difficult to interpret that meaningfully. I suspect this relates to the underlying purpose of your investigation: it's probably not month-to-month variation you're interested in.
Relative age effect and the problem with three month windows
I thought I should say something about why time of year might matter for birthdays of professional sportspeople - at youth level it can be advantageous to be one of the older people in your age category. So what you are investigating isn't a completely silly idea - it is a well-studied phenomenon in academia and sports science called the relative age effect - though your sample size may be too low to detect such an effect even if it exists (this is the problem of statistical power).
I suggested months as you should have enough of a sample size to make a chi-squared test feasible (I imagine your expected frequencies will be at least 5 in each month) and months are a pretty objective thing to classify by.
An issue with sorting into three month windows is that it introduces some subjectivity - do you take January to be part of the window from January to March, or from December to February, or from November to January? It would be tempting to choose in such a way as to maximise the discrepancy between observed and expected births.
Suppose that in youth competition someone born in September will be the youngest in their age category while someone born in August will be the oldest, and you wonder whether this confers an advantage that might affect whether they transition to professional status. Then you might just want to compare two six-month windows - in my example, September to February versus March to August. You can then see whether being one of the older competitors in your age band as a youth competitor is associated with becoming a professional boxer - though this is subject to various caveats and can't prove causation. What's important is that there was an objective justification for the selection of the six-month windows, rather than selecting them based on the data. This could be done as a basic chi-squared goodness of fit test with two cells in your table and hence one degree of freedom.
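The two-window version reduces to the same goodness-of-fit arithmetic with a single degree of freedom. A sketch (Python; the 40/27 split is purely hypothetical, and 3.84 is the familiar 5% critical value of $\chi^2_1$):

```python
# Hypothetical split of 67 boxers into two six-month windows (Sep-Feb vs Mar-Aug).
observed = [40, 27]
expected = [67 / 2, 67 / 2]          # null: either window is equally likely

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2), chi2 > 3.84)   # 2.52 False -> not significant at 5%
```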
|
Statistical significance of birth month of professional boxers
|
A basic approach
You should be able to find data on births by time of year for the population as a whole.
To see if there is evidence that boxers have a different distribution of birth dates, given yo
|
Statistical significance of birth month of professional boxers
A basic approach
You should be able to find data on births by time of year for the population as a whole.
To see if there is evidence that boxers have a different distribution of birth dates, given your sample size I suggest you work at a granularity of "months". Your null hypothesis is that boxers' birth months follow the same distribution as the wider population.
For each month you can calculate the "expected frequency" of boxer birthdays by multiplying your sample size by the proportion of people in the wider population who were born in that month.
You can then compare that to the "observed frequency" - the number of boxers who actually did have a birthday in that month. To determine if there is significant evidence of a difference between them, you can use a chi-squared goodness of fit test.
Issues with the basic approach
For a student working at an introductory level I'm hoping the above is an appropriately pitched answer. I don't think it is the "best" way to do things - for instance it throws away data on the actual date of birth because it only looks at month, and it groups 1 Feb with 28 Feb despite 1 Feb being closer to 31 Jan. Statisticians generally hate throwing away information from their data. This is really just a special case of binning or discretizing continuous data, and that's well known to be a bad idea.
More sophisticated approaches are certainly possible that would take account of the actual day of birth. Moreover, they should recognise that 1 January is not at the opposite end of the year to 31 December, but rather those days are adjacent - this is the domain of circular statistics (also called directional statistics). Note that the chi-squared goodness of fit test treats month as nominal data, so lacks any concept of ordering of months at all - not only is the subtle point that January is next to December missed out, so is the more obvious fact that January is next to February.
There is another issue with binning by month. If you find a significant result because, say, March and November are overrepresented while May and January are underrepresented, it is difficult to interpret that meaningfully. I suspect this relates to the underlying purpose of your investigation: it's probably not month-to-month variation you're interested in.
Relative age effect and the problem with three month windows
I thought I should say something about why time of year might matter for birthdays of professional sportspeople - at youth level it can be advantageous to be one of the older people in your age category. So what you are investigating isn't a completely silly idea - it is a well-studied phenomenon in academia and sports science called the relative age effect - though your sample size may be too low to detect such an effect even if it exists (this is the problem of statistical power).
I suggested months as you should have enough of a sample size to make a chi-squared test feasible (I imagine your expected frequencies will be at least 5 in each month) and months are a pretty objective thing to classify by.
An issue with sorting into three month windows is that it introduces some subjectivity - do you take January to be part of the window from January to March, or from December to February, or from November to January? It would be tempting to choose in such a way as to maximise the discrepancy between observed and expected births.
Suppose that in youth competition someone born in September will be the youngest in their age category while someone born in August will be the oldest, and you wonder whether this confers an advantage that might affect whether they transition to professional status. Then you might just want to compare two six-month windows - in my example, September to February versus March to August. You can then see whether being one of the older competitors in your age band as a youth competitor is associated with becoming a professional boxer - though this is subject to various caveats and can't prove causation. What's important is that there was an objective justification for the selection of the six-month windows, rather than selecting them based on the data. This could be done as a basic chi-squared goodness of fit test with two cells in your table and hence one degree of freedom.
|
Statistical significance of birth month of professional boxers
A basic approach
You should be able to find data on births by time of year for the population as a whole.
To see if there is evidence that boxers have a different distribution of birth dates, given yo
|
44,067
|
What is the probability distribution of $1-\text{mean}(|A-B|)$ where $A$ and $B$ are independent U(0,1)?
|
$A-B$ has a symmetric triangular distribution on $(-1,1)$. It has mean 0 and variance $\frac{1}{6}$.
$|A-B|$ has a $\text{beta}(1,2)$ distribution. It has mean $\frac{1}{3}$ and variance $\frac{1}{18}$.
The distribution of a sum of beta random variables is known for $n=2$ (see 1).
Edit:
Actually, this particular beta is so simple we can just do the integral; I don't know why I didn't just try it before. Let $Y_i=|A_i-B_i|$. We can compute the density of $Z=Y_1+Y_2$, and you could find the distribution of $1-\frac{1}{2}\sum_{i=1}^2 |A_i-B_i|$ from that.
By simple, direct integration (of the convolution integral),
$f_Z(z) = \begin{cases} \frac{2}{3}z\,(z^2-6z+6) &\mbox{for } 0<z\leq 1 \\
\frac{2}{3}(2-z)^3 & \mbox{for } 1<z\leq 2\\
0 &\mbox{elsewhere }. \end{cases}$
Then the density of $H_2=1-Z/2$ is just a linear rescaling of that density.
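A quick numerical sanity check of that piecewise density (Python, by midpoint-rule integration): it should integrate to $1$ over $(0,2)$ and give $E[Z] = 2\,E[Y_i] = 2/3$.

```python
def f_Z(z):
    """Density of Z = Y1 + Y2 from the piecewise formula above."""
    if 0 < z <= 1:
        return (2 / 3) * z * (z * z - 6 * z + 6)
    if 1 < z <= 2:
        return (2 / 3) * (2 - z) ** 3
    return 0.0

# Midpoint-rule integration over (0, 2).
N = 20_000
dz = 2 / N
zs = [(i + 0.5) * dz for i in range(N)]
total = sum(f_Z(z) for z in zs) * dz
mean = sum(z * f_Z(z) for z in zs) * dz

print(round(total, 4), round(mean, 4))   # ~1.0 and ~0.6667
```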
[This very rapidly becomes unwieldy. If I've done this right, a sum of 4 $Y_i$'s will have four pieces each consisting of 7th-order polynomials, and 8 would have eight pieces, each 15th order polynomials. If you've got a nice computer algebra system handy, you could certainly calculate them, but I don't think it's really going to be informative.]
--
For moderate $n$ I don't think even this relatively simple case is known algebraically - but you could do the convolution numerically, fairly simply.
For large $n$ you could make use of the central limit theorem. The mean and variance of $1-\frac{1}{n}\sum_{i=1}^n |A_i-B_i|$ are straightforward - if I haven't made an error, they're $\frac{2}{3}$ and $\frac{1}{18n}$.
While it's not so accurate for $n=8$, by $n=20$ the normal approximation is pretty good. (The plot comparing the simulated distribution with the normal approximation is not reproduced here.)
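The claimed moments can be checked by simulation. A sketch (Python) for $n=20$, where the mean and variance of $H = 1-\frac{1}{n}\sum_i |A_i-B_i|$ should be near $2/3$ and $1/(18\cdot 20)$:

```python
import random

random.seed(7)
n, reps = 20, 50_000
h = []
for _ in range(reps):
    s = sum(abs(random.random() - random.random()) for _ in range(n))
    h.append(1 - s / n)

mean = sum(h) / reps
var = sum((v - mean) ** 2 for v in h) / reps

# mean should be close to 2/3; var * 18 * n should be close to 1
print(round(mean, 3), round(var * 18 * n, 2))
```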
===
Edit:
What exactly do you mean when you say you could do the convolution numerically? What exactly would I be convolving?
Let $Y_i=|A_i-B_i|$, so $H=1-\frac{1}{n}\sum_{i=1}^n Y_i$.
Aside from a linear rescaling -- which is trivial -- you need the distribution of the sum of $Y_i$.
That's where the convolution comes in; the density of $W+Z$ is the convolution of their pdfs.
Of course in practice, you don't do the convolution integral. The usual statistical approach is to use MGFs or more generally, characteristic functions, but apart from a matter of sign (again, trivial), a CF is just a Fourier transform (in basically the same sense that MGFs are essentially Laplace transforms).
So numerically, you could use FFTs to take care of the convolution. In fact, it proceeds as follows (I'll use the symbol for a characteristic function, but you can freely think Fourier transform):
$\phi(Y_1+Y_2+...+Y_n)= \phi(Y_1)\times \phi(Y_2)\times ...\times \phi(Y_n)$
$\hspace{2cm} = \phi(Y_1)^n$
Then we just convert back by the inverse transform.
In a program where you operate numerically, the usual approach is to discretize the pdf suitably (into some reasonably large power-of-2 number of pieces; in some cases $2^{8}$ to $2^{10}$ might be enough, in other cases you might perhaps want $2^{16}$ or more), take the FFT, raise it to the $n$th power, and transform back. If you handle the constants correctly (which should be automatically handled by the inverse FFT anyway), you have at the end a discrete approximation to the pdf of the sum of the $n$ iid random variables.
In practice such operations may be a little tedious to get right the first time, but they tend to be quite fast to run.
===
1: Pham-Gia, T. and Turkkan, N. (1994). Reliability of a standby system with beta component lifelength. IEEE Transactions on Reliability, 71–75.
|
What is the probability distribution of $1-\text{mean}(|A-B|)$ where $A$ and $B$ are independent U(0
|
$A-B$ has a symmetric triangular distribution on $(-1,1)$. It has mean 0 and variance $\frac{1}{6}$.
$|A-B|$ has a $\text{beta}(1,2)$ distribution. It has mean $\frac{1}{3}$ and variance $\frac{1}{18}
|
What is the probability distribution of $1-\text{mean}(|A-B|)$ where $A$ and $B$ are independent U(0,1)?
$A-B$ has a symmetric triangular distribution on $(-1,1)$. It has mean 0 and variance $\frac{1}{6}$.
$|A-B|$ has a $\text{beta}(1,2)$ distribution. It has mean $\frac{1}{3}$ and variance $\frac{1}{18}$.
The distribution of a sum of beta random variables is known for $n=2$ (see 1).
Edit:
Actually, this particular beta is so simple we can just do the integral; I don't know why I didn't just try it before. Let $Y_i=|A_i-B_i|$. We can compute the density of $Z=Y_1+Y_2$, and you could find the distribution of $1-\frac{1}{2}\sum_{i=1}^2 |A_i-B_i|$ from that.
By simple, direct integration (of the convolution integral),
$f_Z(z) = \begin{cases} \frac{2}{3}z\,(z^2-6z+6) &\mbox{for } 0<z\leq 1 \\
\frac{2}{3}(2-z)^3 & \mbox{for } 1<z\leq 2\\
0 &\mbox{elsewhere }. \end{cases}$
Then the density of $H_2=1-Z/2$ is just a linear rescaling of that density.
[This very rapidly becomes unwieldy. If I've done this right, a sum of 4 $Y_i$'s will have four pieces each consisting of 7th-order polynomials, and 8 would have eight pieces, each 15th order polynomials. If you've got a nice computer algebra system handy, you could certainly calculate them, but I don't think it's really going to be informative.]
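The two-piece density for $n=2$ is easy to check against simulation. Here is a quick sketch (Python/NumPy rather than the R used elsewhere in this thread; grid and sample sizes are arbitrary choices) comparing the CDF implied by $f_Z$ with the empirical CDF of simulated draws of $Y_1+Y_2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_Z(z):
    """Piecewise density of Z = Y1 + Y2 from the convolution above."""
    z = np.asarray(z, dtype=float)
    return np.where(
        (z > 0) & (z <= 1), (2.0 / 3.0) * z * (z**2 - 6.0 * z + 6.0),
        np.where((z > 1) & (z <= 2), (2.0 / 3.0) * (2.0 - z) ** 3, 0.0),
    )

# Monte Carlo draws of Z = |A1 - B1| + |A2 - B2| from independent uniforms
A = rng.random((2, 200_000))
B = rng.random((2, 200_000))
Z = np.abs(A - B).sum(axis=0)

# Compare the CDF implied by f_Z (rectangle rule) with the empirical CDF
grid = np.linspace(0.0, 2.0, 401)
dz = grid[1] - grid[0]
cdf_exact = np.cumsum(f_Z(grid)) * dz
cdf_mc = np.searchsorted(np.sort(Z), grid) / Z.size
print(np.abs(cdf_exact - cdf_mc).max())   # small if the algebra is right
```

The sample mean of $Z$ should also land near $2 \times \frac13 = \frac23$.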
--
For moderate $n$ I don't think even this relatively simple case is known algebraically - but you could do the convolution numerically, fairly simply.
For large $n$ you could make use of the central limit theorem. The mean and variance of $1-\frac{1}{n}\sum_{i=1}^n |A_i-B_i|$ are straightforward - if I haven't made an error, they're $\frac{2}{3}$ and $\frac{1}{18n}$.
While it's not so accurate for $n=8$, by $n=20$ the normal approximation is pretty good.
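A quick simulation (a Python sketch, with arbitrary replication counts; the mean and variance are the ones derived above) shows just how close the normal approximation is at $n=20$, via the worst-case CDF discrepancy:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, reps = 20, 100_000

# Simulate H = 1 - (1/n) * sum |A_i - B_i| directly from uniforms
A = rng.random((reps, n))
B = rng.random((reps, n))
H = 1.0 - np.abs(A - B).mean(axis=1)

# Normal approximation N(2/3, 1/(18n)) from the CLT
mu, sd = 2.0 / 3.0, math.sqrt(1.0 / (18.0 * n))
grid = np.linspace(mu - 4 * sd, mu + 4 * sd, 201)
ecdf = np.searchsorted(np.sort(H), grid) / reps
ncdf = np.array([0.5 * (1 + math.erf(g)) for g in (grid - mu) / (sd * math.sqrt(2))])
print(np.abs(ecdf - ncdf).max())   # worst-case CDF discrepancy at n = 20
```

The discrepancy is on the order of Monte Carlo noise, consistent with the approximation being quite usable at this $n$.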
===
Edit:
What exactly do you mean when you say you could do the convolution numerically? What exactly would I be convolving?
Let $Y_i=|A_i-B_i|$, so $H=1-\frac{1}{n}\sum_{i=1}^n Y_i$.
Aside from a linear rescaling -- which is trivial -- you need the distribution of the sum of $Y_i$.
That's where the convolution comes in; the density of $W+Z$ is the convolution of their pdfs.
Of course in practice, you don't do the convolution integral. The usual statistical approach is to use MGFs or more generally, characteristic functions, but apart from a matter of sign (again, trivial), a CF is just a Fourier transform (in basically the same sense that MGFs are essentially Laplace transforms).
So numerically, you could use FFTs to take care of the convolution. In fact, it proceeds as follows (I'll use the symbol for a characteristic function, but you can freely think Fourier transform):
$\phi(Y_1+Y_2+...+Y_n)= \phi(Y_1)\times \phi(Y_2)\times ...\times \phi(Y_n)$
$\hspace{2cm} = \phi(Y_1)^n$
Then we just convert back by the inverse transform.
In a program where you operate numerically, the usual approach is to suitably discretize the pdf (into some reasonably large power-of-2 number of pieces; in some cases $2^{8}$ to $2^{10}$ might be enough, in other cases you might perhaps want $2^{16}$ or more), take the FFT, raise it to the $n$th power, and transform back. If you handle the constants correctly (which should be automatically handled by the inverse FFT anyway), you have at the end a discrete approximation to the pdf of the sum of the $n$ iid random variables.
In practice such operations may be a little tedious to get right the first time, but they tend to be quite fast to run.
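As an illustration of that recipe (a minimal NumPy sketch; the grid sizes are arbitrary choices), applied to the beta(1,2)-distributed $Y_i$ above with $n=8$:

```python
import numpy as np

n, m = 8, 2**12                    # 8 summands; 2^12 grid cells per unit interval
h = 1.0 / m
y = (np.arange(m) + 0.5) * h       # cell midpoints on (0, 1)
p = 2.0 * (1.0 - y) * h            # discretized f_Y(y) = 2(1 - y) -> cell masses

# Zero-pad so the n-fold (circular) convolution, support [0, n], fits w/o wrap-around
size = 1
while size < n * m:
    size *= 2
q = np.fft.irfft(np.fft.rfft(p, size) ** n, size)  # masses of Z = Y_1 + ... + Y_n

# Each summand's midpoint offset shifts the grid by h/2, hence the 0.5 * n
z = (np.arange(size) + 0.5 * n) * h
mass, mean = q.sum(), (q * z).sum()
var = (q * z**2).sum() - mean**2
print(mass, mean, var)             # ~ 1, n/3 and n/18, matching the moments above
```

Rescaling to $H=1-Z/n$ then gives mean $\frac23$ and variance $\frac{1}{18n}$, as claimed.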
===
1: Pham-Gia, T. and Turkkan, N. (1994). Reliability of a standby system with beta component lifelength. IEEE Transactions on Reliability, 71–75.
|
44,068
|
What is the probability distribution of $1-\text{mean}(|A-B|)$ where $A$ and $B$ are independent U(0,1)?
|
If $A$ and $B$ are standard Uniform and independent, then $(A-B) \sim Triangular(-1,0,1)$.
Then $Z = |A-B|$ will have pdf $f(z)$:
f = 2 (1 - z); domain[f] = {z, 0, 1};
Then, the characteristic function (cf) of the sample mean of $Z$ is $\big(E\big[e^{\large i \frac{t}{n} z}\big] \big)^n$:
(source: tri.org.au)
where I am using the Expect function from the mathStatica package for Mathematica to automate the wurly-curlies.
(i) The EXACT pdf of the sample mean of Z
Although the cf does not appear to have a nice tractable form for symbolic inversion, we can invert it numerically, given any arbitrary value of $n$, to yield the pdf of the sample mean of $Z$. This is done below in the plot (see Blue curve).
(ii) CLT APPROXIMATION of the sample mean of Z
We derived above the pdf of $Z$, namely $f(z)$, where $Z$ has mean $E[Z]= \frac13$ and variance $Var(Z) = \frac{1}{18}$.
Then, by the Central Limit Theorem, the sample mean of $Z$ is asymptotically Normal:
$$\bar Z_n \overset{a}{\sim } N\big(E[Z], \frac{Var(Z)}{n}\big) = N\big(\frac13, \frac{1}{18 n}\big)$$
Compare actual to approximate
We can now compare the easy asymptotic Normal approximation to the exact pdf, for any given value of $n$. The following diagram illustrates the:
exact solution (derived by inverting the cf), when $n = 10$: BLUE CURVE
Normal approximation (CLT), when $n=10$: RED CURVE
(source: tri.org.au)
Even for reasonably small $n=10$, the Normal approximation works rather nicely.
All that remains is the easy transformation from $\bar Z_n$ to $1-\bar Z_n$ which just involves changing your Normal mean from $\frac13$ to $\frac23$, and all done.
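As an aside on the numerical inversion step: the cf of $Z$ here actually has a simple closed form, $\phi_Z(s) = 2(is - e^{is} + 1)/s^2$ (a short integration-by-parts exercise, with $\phi_Z(0)=1$), so the exact pdf of $\bar Z_n$ can be recovered by plain quadrature. The following Python sketch stands in for the mathStatica computation (truncation point and grid sizes are arbitrary choices):

```python
import numpy as np

n = 10

def phi_Z(s):
    """Closed-form cf of Z = |A - B| ~ beta(1,2): 2*(i*s - exp(i*s) + 1)/s^2."""
    s = np.asarray(s, dtype=float)
    return 2.0 * (1j * s - np.exp(1j * s) + 1.0) / s**2

# cf of the sample mean: phi(t) = phi_Z(t/n)^n; it decays like (2n/t)^n,
# so truncating the inversion integral at t = 200 loses essentially nothing
dt = 0.05
t = (np.arange(4000) + 0.5) * dt             # midpoint grid on (0, 200]
phi = phi_Z(t / n) ** n

# Inversion: f(x) = (1/pi) * Integral_0^inf Re[exp(-i*t*x) * phi(t)] dt
x = np.linspace(0.0, 1.0, 201)
pdf = np.real(np.exp(-1j * np.outer(x, t)) * phi).sum(axis=1) * dt / np.pi

dx = x[1] - x[0]
print(pdf.sum() * dx, (x * pdf).sum() * dx)  # ~ 1 and ~ 1/3
```

The pdf of $1-\bar Z_n$ is then just this curve reflected about $x=\tfrac12$.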
|
44,069
|
How to refer to AIC model-averaged parameters and confidence intervals
|
If you have read Burnham & Anderson's monograph, you know just why they discourage AIC(c)-based model selection: because they subscribe to the theory of tapering effect sizes. In a nutshell, they posit that everything has an effect - it's just that most effects are pretty small (sort of a "long tail"). Thus, an AIC(c)-selected model may be more parsimonious, but it will be systematically too small (the bias-variance trade-off). Therefore they recommend averaging models.
This is also the reason why statistical significance and p values are not en vogue in the Burnham & Anderson worldview. Tapering effect sizes are another way of saying that the true coefficients are almost always nonzero, just perhaps very small. Thus, the null hypothesis is already false a priori. P values pose a question that we already know the answer to.
Thus, if you follow B&A's philosophy far enough that you do AICc-based model averaging, it seems a bit incongruous to also discuss p values and/or "marginal significance".
Now, one possibility would be to simply discuss "averaged coefficients" and their CIs, without even discussing whether CIs contain zero. Conversely, if you are in a field that deifies p values (like psychology), it may make more sense to disregard these implications of B&A in the interest of talking in a way your readers will understand, rather than follow strict AICc purity.
(Anyway, my impression is that AICc and B&A have more of a following among non-statisticians, especially ecologists. So the nuances we are discussing here may already be far away from your readership's main interests.)
|
44,070
|
How to refer to AIC model-averaged parameters and confidence intervals
|
If you have access, I've found several papers that are very helpful when deciding what to report, which values to use, and what common mistakes people make when using AIC. One mistake discussed is using 95% CIs when you've used AIC procedures, as discussed in Arnold 2010.
Arnold, T.W. 2010. Uninformative parameters and model selection using Akaike’s information criterion. Journal of Wildlife Management.
Zuur et al. 2010. A protocol for data exploration to avoid common statistical problems. Methods in Ecology and Evolution.
Symonds and Moussalli 2011. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behavioral Ecology and Sociobiology.
|
44,071
|
How to refer to AIC model-averaged parameters and confidence intervals
|
Use the AICcmodavg package in R, developed by Marc J. Mazerolle. This package will allow you to compute model-averaged estimates and their 95% confidence intervals based on your entire list of candidate models. The estimates are weighted by the relative importance of your models (the AIC values/ranking of your models) and include only the models in which the variable of interest appears. When computing the model-averaged estimates, be sure to exclude any interactions in which the variable of interest is included.
In your results section, include a table with each of your models and their AIC, delta AIC and AIC weight values. Also include a table of the model-averaged estimates with the variable, estimate and upper/lower 95% CI. In your write-up you can then refer to the models as "the most probable model(s)" and the variables as "the most important variables". If the confidence intervals of your estimates do not contain zero, then you have an effect of your variable compared to your reference. So, if you were comparing 3 treatment types to a control and found that only treatment 2 was different (i.e., its averaged-estimate CI did not contain 0 while those for treatments 1 and 3 did), instead of saying "significantly different" you could conclude that treatments 1 and 3 were similar to the control but treatment 2 had a positive (or negative, depending on the sign of the estimate) effect on your response variable.
You can then use the same package to compute the predicted values for your variables using the entire list of candidate models and look at the trend. So if you were looking at the treatment effect over time, you could make predictions for treatments 1, 2, 3 and control and then say something like "Treatment 2 had a positive effect on the response variable, which increased with time throughout the study period. There was no difference in the response variable between treatment 1 and the control, nor between treatment 3 and the control".
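The weighting scheme itself is simple arithmetic. Here is a generic sketch (in Python, with made-up AICc scores and coefficients; AICcmodavg automates this and also handles the averaged standard errors, which this toy version omits):

```python
import math

# Hypothetical AICc scores for three candidate models
aicc = {"m1": 100.0, "m2": 101.2, "m3": 104.7}

# Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
# where delta_i is the AICc difference from the best model
best = min(aicc.values())
rel = {m: math.exp(-(a - best) / 2.0) for m, a in aicc.items()}
total = sum(rel.values())
w = {m: r / total for m, r in rel.items()}

# Model-averaged coefficient over only the models containing the variable,
# renormalizing the weights over those contributing models
beta = {"m1": 0.42, "m2": 0.55}          # hypothetical; m3 omits the variable
wsum = sum(w[m] for m in beta)
beta_avg = sum(w[m] * beta[m] for m in beta) / wsum
print(w, beta_avg)
```

The averaged estimate necessarily lands between the per-model estimates, pulled toward the better-supported model.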
|
44,072
|
How to refer to AIC model-averaged parameters and confidence intervals
|
While @Stephan Kolassa's answer was probably best, I'd like to just add that writing "the parameter was x.x and its CI does not cross zero" is not just laborious but treats the reader like an imbecile. When dealing with CIs, simply use them as parameter estimates; if they don't cross zero, that will be completely self-evident when reported. A side effect is that you avoid writing that you're using your CIs solely for the purpose of inverting the t-test, and therefore making absolutely no progress over such a test.
|
44,073
|
What to call this graph showing icons for artists on a horizontal axis indicating number of unique words used?
|
It is the beeswarm version of a stripchart, with photos of the artists in place of dots.
|
44,074
|
What to call this graph showing icons for artists on a horizontal axis indicating number of unique words used?
|
There is a single numeric axis against which values are plotted and there is some mix of stacking and jittering to separate points that might occlude or overlap each other. Short of the photos, which make the graph distinctive, I have come across the following names for broadly similar plots:
barcode charts
beeswarm plots
circle plots
column scatter plot
data distribution graph
dispersal graphs
dispersion diagrams
dit plots
dot array charts
dot charts
dot diagrams
dot histograms
dot patterns
dot plots
instance chart
line charts
line plots
linear plots
needle plots
number-line plots
one-axis data distribution graph
one-dimensional scatter plots
oneway graphs
oneway plots
point graphs
raster plots
strip charts
strip plots
stripe graph
stripes plot
unidimensional scatter plots
univariate scatter plots
Wilkinson dot plots
plus several variations of those running words together or using different hyphenation, which I am not quite crazy enough to collect.
Stata users might care to note that these are documented in the help for my stripplot command available from SSC.
|
44,075
|
What to call this graph showing icons for artists on a horizontal axis indicating number of unique words used?
|
This is a dot chart with some non-random jittering for legibility.
It is not a dot plot, though there's some superficial similarity.
|
44,076
|
How can I improve the predictive power of this logistic regression model?
|
Summary
You appear to be looking at the associations between symptoms (a, b, c, d, and e, coded as linear, numeric variables) and cancer status (yes versus no, coded in binary).
Associations versus predictions
I think you are looking at associations between the symptoms and cancer status rather than the ability of the symptoms to predict cancer status. If you really wanted to investigate predictive ability, you would need to divide your data set in half, fit models to one half of the data, and then use them to predict the cancer status of the patients in the other half. Note that this describes the simplest case of validating a model with a single data set; you shouldn't actually do it this way. What you should really do is employ n-fold cross-validation (for example, using the rms package in R) to make the most efficient use of your data.
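To make the cross-validation idea concrete, here is a bare-bones sketch (Python rather than R, synthetic data in place of the real symptom scores, and a from-scratch logistic fit rather than rms; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for five symptom scores and a binary cancer flag
N = 400
X = rng.normal(size=(N, 5))
true_beta = np.array([1.0, -0.5, 0.8, 0.0, 0.3])   # arbitrary "true" effects
yv = (rng.random(N) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)

def fit_logistic(X, y, iters=300, lr=1.0):
    """Plain gradient ascent on the logistic log-likelihood (no intercept)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ b)))
        b += lr * X.T @ (y - mu) / len(y)
    return b

# 5-fold cross-validation: fit on four folds, score accuracy on the held-out fold
folds = np.array_split(rng.permutation(N), 5)
acc = []
for k in range(5):
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    b = fit_logistic(X[train], yv[train])
    pred = X[folds[k]] @ b > 0
    acc.append(float((pred == (yv[folds[k]] > 0.5)).mean()))
print(np.mean(acc))
```

The point is that every accuracy figure comes from patients the model never saw during fitting, which is what "predictive power" actually means.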
Starting off
You may have already done this, but prior to playing around with logistic regression modeling I think you should take a step back and just look at your data. Using the program R to compute a few basic summary statistics...
# Load libraries
library(Rmisc)
library(metafor)
# Load data
data <- read.csv("example_data.csv", header = TRUE, na.strings = "")
attach(data)
# Summarize data
summary(data)
a b c d e cancer
Min. :11.0 Min. :13.00 Min. :13.00 Min. :12.00 Min. :17.00 Min. :0.0000
1st Qu.:19.0 1st Qu.:27.00 1st Qu.:28.00 1st Qu.:36.00 1st Qu.:33.00 1st Qu.:1.0000
Median :24.0 Median :31.00 Median :32.00 Median :40.00 Median :38.00 Median :1.0000
Mean :24.8 Mean :31.39 Mean :32.44 Mean :39.39 Mean :37.71 Mean :0.9169
3rd Qu.:30.0 3rd Qu.:36.00 3rd Qu.:37.00 3rd Qu.:43.50 3rd Qu.:42.00 3rd Qu.:1.0000
Max. :49.0 Max. :50.00 Max. :50.00 Max. :50.00 Max. :50.00 Max. :1.0000
NA's :20 NA's :18 NA's :21 NA's :20 NA's :20 NA's :6
And now to plot some exploratory scatter plots... Pay attention to any linear relationships between variables that pop out to your eye. Also pay attention (as Benjamin mentioned below) to the plots of the symptom variables versus cancer status.
plot(data)
And look at some histograms to get a sense of the distribution of your data... Always good to do this before plugging them into a regression model
hist(data)
Going a bit further...
I would compute the mean and 95%CI for each symptom variable and stratify them by cancer status and plot those... Just by looking at this you will know visually which variables are going to be significant in your logistic regression model. Here I just plot the data...
forest(
x = c(24.44636,28.94667,31.63066,28.62963,32.59910,30.65852,39.79738,35.04111,37.99030,34.41185),
ci.lb = c(23.57979,25.72939,30.84611,26.15883,31.88579,28.52778,39.16493,32.27390,37.26171,32.10734),
ci.ub = c(25.31292,32.16395,32.41520,31.10043,33.31242,32.78926,40.42983,37.80832,38.71888,36.71637),
xlab = "Mean and 95% CI", slab = c("a cancer","a healthy","b cancer","b healthy","c cancer","c healthy","d cancer","d healthy","e cancer","e healthy"))
Looking at the plot above, you get a visual sense of the fact that you have way more cancer patients contributing to the data set than non-cancer patients.
Last...
I would just compute univariate effect estimates for each symptom variable for their associations with the cancer outcome. Then I would multiply all of the resultant p values by five, since you are doing that many exploratory tests. You can do that in SPSS easily. For the results of the models, I would focus more on the direction, magnitude, and confidence intervals of the resultant effect estimates. Below I have plotted the effect estimates and their confidence intervals from univariate models of each separate symptom variable... Now you should go build models that are adjusted for age, gender, smoking, etc. and make another plot like this... I do agree with Benjamin that there is probably not a whole lot you can learn from these data given the paucity of healthy controls.
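The "multiply by five" step is just a Bonferroni correction for the five exploratory tests; as a trivial sketch (with hypothetical p values):

```python
# Bonferroni adjustment: scale each of the five exploratory p values by 5, cap at 1
p_raw = [0.004, 0.03, 0.20, 0.01, 0.65]   # hypothetical p values for symptoms a..e
p_adj = [min(1.0, 5 * p) for p in p_raw]
print(p_adj)
```

This keeps the family-wise error rate at the nominal level across all five comparisons.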
|
How can I improve the predictive power of this logistic regression model?
|
Summary
You appear to be looking at the associations between symptoms (a, b, c, d, and e, coded as linear, numeric variables) and cancer status (yes versus no, coded in binary).
Associations versus pr
|
How can I improve the predictive power of this logistic regression model?
Summary
You appear to be looking at the associations between symptoms (a, b, c, d, and e, coded as linear, numeric variables) and cancer status (yes versus no, coded in binary).
Associations versus predictions
I think you are looking at associations between the symptoms and cancer status rather than the ability of the symptoms to predict cancer status. If you wanted to really investigate predictive ability, you would need to divide your data set in half, fit models to one half of the data, and then use them to predict the cancer status of the patients in the other half of the data set. Note that this describes the simplest case of validation of a model using a single data set. You shouldn't actually do this. What you could really do is employ n-fold cross validation (for example, using the rms package in R) to make the most efficient use of your data.
Starting off
You may have already done this, but prior to playing around with logistic regression modeling I think you should take a step back and just look at your data. Using the program R to compute a few basic summary statistics...
# Load libraries
library(Rmisc)
library(metafor)
# Load data
data <- read.csv("example_data.csv", header = TRUE, na.strings = "")
attach(data)
# Summarize data
summary(data)
a b c d e cancer
Min. :11.0 Min. :13.00 Min. :13.00 Min. :12.00 Min. :17.00 Min. :0.0000
1st Qu.:19.0 1st Qu.:27.00 1st Qu.:28.00 1st Qu.:36.00 1st Qu.:33.00 1st Qu.:1.0000
Median :24.0 Median :31.00 Median :32.00 Median :40.00 Median :38.00 Median :1.0000
Mean :24.8 Mean :31.39 Mean :32.44 Mean :39.39 Mean :37.71 Mean :0.9169
3rd Qu.:30.0 3rd Qu.:36.00 3rd Qu.:37.00 3rd Qu.:43.50 3rd Qu.:42.00 3rd Qu.:1.0000
Max. :49.0 Max. :50.00 Max. :50.00 Max. :50.00 Max. :50.00 Max. :1.0000
NA's :20 NA's :18 NA's :21 NA's :20 NA's :20 NA's :6
And now to plot some exploratory scatter plots... Pay attention to any linear relationships between variables that pop out to your eye. Also pay attention (as Benjamin mentioned below) to the plots of the symptom variables versus cancer status.
plot(data)
And look at some histograms to get a sense of the distribution of your data... Always good to do this before plugging them into a regression model
par(mfrow = c(2, 3))
for (v in names(data)) hist(data[[v]], main = v)
Going a bit further...
I would compute the mean and 95% CI for each symptom variable, stratify them by cancer status, and plot those... Just by looking at this you will know visually which variables are going to be significant in your logistic regression model. Here I just plot the data...
forest(
x = c(24.44636,28.94667,31.63066,28.62963,32.59910,30.65852,39.79738,35.04111,37.99030,34.41185),
ci.lb = c(23.57979,25.72939,30.84611,26.15883,31.88579,28.52778,39.16493,32.27390,37.26171,32.10734),
ci.ub = c(25.31292,32.16395,32.41520,31.10043,33.31242,32.78926,40.42983,37.80832,38.71888,36.71637),
xlab = "Mean and 95% CI", slab = c("a cancer","a healthy","b cancer","b healthy","c cancer","c healthy","d cancer","d healthy","e cancer","e healthy"))
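One way to obtain the stratified means and 95% CIs fed to forest() above is with group.CI() from the already-loaded Rmisc package (a sketch using the same data frame):

```r
# Mean and 95% CI of each symptom stratified by cancer status
group.CI(a ~ cancer, data = data)  # repeat for b, c, d, and e
```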
Looking at the plot above, you get a visual sense of the fact that you have way more cancer patients contributing to the data set than non-cancer patients.
Last...
I would just compute univariate effect estimates for each symptom variable for their associations with the cancer outcome. Then I would multiply all of the resultant p values by five (a Bonferroni correction), since you are doing that many exploratory tests. You can do that in SPSS easily. For the results of the models, I would focus more on the direction, magnitude, and confidence intervals of the resultant effect estimates. Below I have plotted the effect estimates and their confidence intervals from univariate models of each separate symptom variable... Now you should go build models that are adjusted for age, gender, smoking, etc. and make another plot like this... I do agree with Benjamin that there is probably not a whole lot you can learn from these data given the paucity of healthy controls.
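If you wanted to do the same in R instead of SPSS, a rough sketch (assuming the same data frame and variable names) could be:

```r
# Univariate logistic models per symptom, with Bonferroni-corrected p values
symptoms <- c("a", "b", "c", "d", "e")
for (v in symptoms) {
  fit <- glm(reformulate(v, "cancer"), family = binomial, data = data)
  est <- summary(fit)$coefficients[2, ]
  cat(v, "OR =", round(exp(est["Estimate"]), 2),
      "adjusted p =", round(min(1, est["Pr(>|z|)"] * length(symptoms)), 4), "\n")
}
```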
|
44,077
|
How can I improve the predictive power of this logistic regression model?
|
One thing to check is whether there is a linear relationship between the log odds of cancer and each of your 5 predictor variables. This is an assumption in logistic regression. If this does not hold you might want to consider adding higher order terms to the model, or even a nonlinear relationship between log odds of cancer and some of the variables (by fitting a generalized additive model).
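One way to eyeball that assumption is to fit a generalized additive model with a smooth term for each predictor and inspect the estimated smooths (a sketch assuming variables named a through e in a data frame called data, and the mgcv package):

```r
# Roughly linear smooths support the linearity-in-the-log-odds assumption
library(mgcv)
gfit <- gam(cancer ~ s(a) + s(b) + s(c) + s(d) + s(e),
            family = binomial, data = data)
plot(gfit, pages = 1)  # markedly curved smooths suggest higher-order terms
```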
From your output, it looks like these 5 predictors do not do a good job of classifying cancer vs non-cancer.
I'll take a look at the data and add more to this answer later.
After taking a look at the data I have confirmed that indeed these variables are terrible at predicting cancer. If you plot the variables against cancer status you will see that, although for some of them the non-cancer patients have a little less variability, there is very little difference between the cancer and non-cancer patients. For example:
So if you told me that you had a patient who had a C variable of 30...I would have no idea if that is a cancer patient or a non-cancer patient.
A bit more about your output: when you don't add any variables in, it says you correctly predict 91.8% of the patients. The next table, which lists significance values for adding in more variables, refers to adding them in one at a time.
|
44,078
|
How can I improve the predictive power of this logistic regression model?
|
Ignore the classification tables completely. They are not based on sound statistical methods, and are completely arbitrary.
|
44,079
|
Do you use a chi-squared test or a t-test for equality of variances?
|
You use neither a t-test nor a $\chi^{2}$ test when testing $H_0: \sigma^{2}_X = \sigma^2_Y$ against $H_a: \sigma^{2}_X \neq \sigma^2_Y$. For testing the equality of variances between two normally distributed populations you use the F-test of equality of variances, which reformulates your test as $H_0: \frac{\sigma^{2}_X}{\sigma^2_Y} = 1$ against $H_a: \frac{\sigma^{2}_X}{\sigma^2_Y} \neq 1$. In R, you would run
> X=c( 11.4, 9.7, 11.4, 13.3, 7.4, 8.5, 13.4, 17.4, 12.7)
> Y=c(3.2, 2.7, 5.5, -0.9, -1.8)
> var.test(X, Y)
F test to compare two variances
data: X and Y
F = 0.979, num df = 8, denom df = 4, p-value = 0.9033
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
0.109 4.947
sample estimates:
ratio of variances
0.979
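Since the F statistic here is just the ratio of the two sample variances, you can sanity-check the output by hand with the X and Y defined above:

```r
# Hand computation of the F test
Fstat <- var(X) / var(Y)                    # matches F = 0.979 above
df1 <- length(X) - 1                        # 8
df2 <- length(Y) - 1                        # 4
2 * min(pf(Fstat, df1, df2), 1 - pf(Fstat, df1, df2))  # two-sided p value
```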
|
44,080
|
Do you use a chi-squared test or a t-test for equality of variances?
|
The test you get with chisq.test is for counts - used to compare proportions or test for independence with categorical data, that kind of thing.
On the other hand, t-tests are usually for comparing means.
There is a test involving variances (a one-sample variance test) with normal data that is a chi-square test, but you don't get that test with that command.
With two samples and normal data there's a corresponding ratio-of-variances F test for testing equality of variances, but it's generally not recommended (it's not robust to violations of normality). The Levene or Brown-Forsythe tests -- or a few others -- are more often used, typically corresponding to a form of ANOVA on deviations from some measure of location.
When those deviations are bigger on average, it corresponds (under some reasonable assumptions) to the variances being bigger.
An equivalent to the Levene or Brown-Forsythe test can be performed with two samples (on deviations from the mean or median, respectively) and can even be done as a t-test rather than an ANOVA.
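That last point can be illustrated with a small sketch on made-up data (the Brown-Forsythe flavor uses absolute deviations from each group's median):

```r
# Two-sample Brown-Forsythe-style test done as a t-test on absolute
# deviations from the group medians
set.seed(1)
x <- rnorm(30, sd = 1)
y <- rnorm(30, sd = 2)
t.test(abs(x - median(x)), abs(y - median(y)))  # larger mean deviation ~ larger spread
```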
|
44,081
|
Do you use a chi-squared test or a t-test for equality of variances?
|
@Mona Jalal, there are various tests used for equality of variances, each suited to different situations and each having its advantages and limitations. The most common ones are
Bartlett's test
Levene's test
F test
Since the post keeps going back and forth here, maybe you want to discuss them in a chat to elaborate on the problem you are facing, or you can read about all three of them on Wikipedia.
After that, if you face difficulty implementing those tests or interpreting the results in R or Python, you can ask here by rewording your question.
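For reference, here is a sketch of how each of the three could be run in R on toy data (leveneTest is from the car package):

```r
# Toy illustration of the three tests for equality of variances
set.seed(1)
g <- factor(rep(1:2, each = 20))
v <- rnorm(40)
bartlett.test(v ~ g)             # Bartlett's test
car::leveneTest(v ~ g)           # Levene's test (median-centered by default)
var.test(v[g == 1], v[g == 2])   # F test
```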
|
44,082
|
Do you use a chi-squared test or a t-test for equality of variances?
|
Note that t.test is for a difference of means, whereas you actually want to test for a difference of variances based on the null and alternative hypotheses you set up. See:
?var.test
var.test(x, y)
|
44,083
|
Spline fitting in R - how to force passing two data points?
|
Rather than using smooth.spline() in the stats package, there is a function cobs() in the cobs package that allows you to do exactly the sort of thing you want. COBS stands for Constrained B-splines. Possible constraints include going through specific points, setting derivatives to specified values, monotonicity (increasing or decreasing), concavity, convexity, periodicity, etc.
In your case, use
cobs(x, y, pointwise=rbind(c(0,-100,-1),c(0,100,1)))
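As a runnable sketch on toy data (the rows of pointwise request equality constraints at (-100, -1) and (100, 1)):

```r
# Constrained B-spline forced through the two endpoints
library(cobs)
x <- seq(-100, 100, length = 101)
y <- sin(x / 200 * pi) + rnorm(101, 0, 0.15)
fit <- cobs(x, y, pointwise = rbind(c(0, -100, -1), c(0, 100, 1)))
predict(fit, z = c(-100, 100))  # fitted values at the endpoints are -1 and 1
```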
|
44,084
|
Spline fitting in R - how to force passing two data points?
|
I cannot think of any way to do it using smooth.spline. If you were to use a spline basis such as bs from the splines package, then you could possibly do this using quadratic programming to constrain the endpoints, but it could be complicated figuring out the constraints.
Here is an approach that uses xsplines (different but similar to other types of splines) and the optim function to find the values to use (nls could be used as well). I chose 3 internal equally spaced control points and a shape of 1, but you could play with these to compare the fit:
x <- seq( -100, 100, length=101 )
y <- sin( x/200*pi ) + rnorm(101, 0, 0.15)
myfun <- function(par) {
yh <- c(-1, par, 1)
xh <- c(-100, -50, 0, 50, 100)
sp <- xspline( xh,yh, shape=1, draw=FALSE)
yhat <- approx( sp, xout=x )$y
sum( (y-yhat)^2 )
}
out <- optim( c(-.5, 0, .5), myfun )
plot(x,y)
xspline( c(-100, -50, 0, 50, 100), c(-1, out$par, 1), shape=1,
border='blue' )
|
44,085
|
What is the advantage of having balanced panel data rather than unbalanced?
|
I believe these are largely historical reasons. In the 1940s, one had to conduct analysis of variance with paper and pencil, so having balanced designs led to simple sums for both means and variances. Any imbalance would require inverting matrices 4x4 or larger (I've done it a couple of times on regression exams, and nearly always screwed up). It is likely that in the 1960s, when panel/longitudinal data first came to researchers' attention (probably with the PSID), one could already run a regression with no structure on the errors reasonably easily, but running GLS required heroic efforts, let alone unbalanced GLS. These days, there aren't any issues, as Dimitriy said, as all estimators are computed in the general form with the most general matrix inversion operations in the background anyway.
Also, with balanced data sets, you can easily run models with panel autoregressions. With unbalanced panels, these will likely get trickier. I don't think that these models are actually that popular.
|
44,086
|
What is the advantage of having balanced panel data rather than unbalanced?
|
I think whenever you have unbalanced panels, you need to come up with a formal description of why that is the case. You need to worry about self-selection, nonresponse, and attrition, especially if you're interested in population parameters and consistency. For most estimators, the mechanics are largely the same.
|
44,087
|
What is the advantage of having balanced panel data rather than unbalanced?
|
Balanced data are preferred over unbalanced panels because they allow observation of the same unit (e.g., individual, company, person) in every time period (e.g., year, month), which reduces the noise introduced by unit heterogeneity.
|
44,088
|
How can 8 dimensions be reduced to 3?
|
Ion, PCA is just a specific case of orthogonal rotation. Let X be your n x p data matrix of n points in p dimensions (axes). To obtain this same cloud of points in a new set of axes somehow rotated in space relative to the old ones, you multiply X by a p x p matrix Q of cosines between the old axes (rows) and new axes (columns): $\bf{XQ=C}$ [1], where C holds your new (rotated) coordinates. This formula says that each new dimension is a linear combination of the p old dimensions. It also follows that $\bf {X=CQ^{-1}}$ or, since the rotation was orthogonal ($\bf Q$ is an orthonormal matrix), $\bf {X=CQ'}$ [2], which says that each old dimension is a linear combination of the p new dimensions.
Now, PCA is virtually this rotation; what makes PCA special is that Q is not an arbitrary rotation matrix: it is the matrix of a rotation such that the sum of squares (or variance, if your data have been centered) in the 1st column of C becomes the maximum possible; that is, variability along the 1st principal component is maximized. Then the sum of squares in the 2nd column of C (the 2nd principal component) is the second largest, etc. Each next component is a new axis which takes off less and less of the multidimensional variability in the cloud. Hence, the lion's share of the variability is accounted for by only a few m (m < p) new axes (principal components).
In PCA, Q is called the matrix of eigenvectors (these being its columns). If you retain just the first m components, by retaining just the first m columns of Q, you can still use formula [1] to obtain component scores for the m components -- the points' coordinates on these m dimensions. So, whatever m is, each component remains a linear combination of the original variables. However, then using formula [2] to obtain the p original variables from the m components won't give you the original variables exactly: each original variable will be a linear combination of the m components plus some error term. If you perform linear regression (without a constant term) of each original variable on the m components as predictors, you will see that the regression coefficients you get are the elements of Q.
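The two formulas can be checked numerically with a small simulated example (centered data):

```r
# PCA as an orthogonal rotation: C = X Q (formula [1]) and X = C Q' (formula [2])
set.seed(1)
X <- scale(matrix(rnorm(100 * 8), 100, 8), scale = FALSE)  # centered n x p data
Q <- eigen(cov(X))$vectors        # p x p matrix of eigenvectors (rotation)
C <- X %*% Q                      # component scores
max(abs(X - C %*% t(Q)))          # ~ 0: exact recovery with all p components
Xhat <- C[, 1:3] %*% t(Q[, 1:3])  # m = 3 components: approximation with error
```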
|
44,089
|
How can 8 dimensions be reduced to 3?
|
+1 for ttnphns, but I'll try to give tl;dr, math free version.
Your doubts are fully justified -- one cannot stuff 8 dims into <8 linear combinations in the general case. What PCA really does is convert the 8 dims into 8 linear combinations in such a way that it packs as much of the data's variability as possible into the first dim, then packs most of what remains into the second one, and so on -- thus one may expect that the last dims contain only noise coming from errors in the original data and may be omitted, which leads to a reduction of dimensionality.
This way one can imagine it as a lossy compression algorithm like MP3 or JPEG -- it discards some of the original information, but hopefully only the part that doesn't matter.
|
44,090
|
How can 8 dimensions be reduced to 3?
|
Here's my stab at this; completely without math, just some basic principles and a picture. You asked for it. ;)
Consider the scenario in the picture below. You have 2D data points along the X and Y axis. You could use the PCA to find the principal axis P.
The point of this analysis is that if your data are distributed this way, you don't really need both X and Y to work with them. You might as well only use one dimension, along P.
If you have an N-dimensional input space, you can use PCA to reduce it to anywhere from 1 to N dimensions. So yes, you can reduce from 8 to 3; whether that makes any sense to do is up to you (based on the concrete data in question).
|
44,091
|
How to convert a vector of enumerable strings into a vector of numbers? [closed]
|
Here is a possibility, very similar to that of @Roman Lustrik, but just a little bit more automatic.
Say that
x <- c("a", "b", "b", "c")
Then
> x <- as.factor(x)
> levels(x) <- 1:length(levels(x))
> x <- as.numeric(x)
does the job:
> print(x)
[1] 1 2 2 3
|
44,092
|
How to convert a vector of enumerable strings into a vector of numbers? [closed]
|
Another programming question has sneaked in...
Anyway, a faster way is
unclass(factor(x))
additionally one can add levels(...)<-NULL to remove the redundant attribute too (not really needed inside a script).
|
44,093
|
How to convert a vector of enumerable strings into a vector of numbers? [closed]
|
There are a few ways of doing this. Here's one.
> (a <- as.factor(sample(letters[1:5], 30, replace = TRUE)))
[1] d a e e e c b e b b c a d d d d c b c c b b e b e b c d c b
Levels: a b c d e
> (levels(a) <- 1:5)
[1] 1 2 3 4 5
> a <- as.numeric(a) # convert these factors into numbers
|
44,094
|
How to convert a vector of enumerable strings into a vector of numbers? [closed]
|
as.numeric(factor(c("d", "a", "b", "b", "c")))
[1] 4 1 2 2 3
|
44,095
|
Comparing reproducible research strategies: brew or Sweave vs. R2HTML
|
New answer based on comment below:
As I understand, Method 1 is to mix R code and HTML or LaTeX in the same document, using Sweave or brew for example, to create a final document, while Method 2 is to use R code to generate HTML or LaTeX, using the R2HTML or Hmisc packages for example, and then to just run the R code to create the final document. I've mostly just used Method 1 but will weigh in anyway.
As I see it, it's really just a matter of preference; I don't see any technical or statistical reason to prefer one over the other; they're both ways to make your research reproducible.
I think Method 1 is easier because you don't have to know which R functions create the LaTeX or the HTML code; you just write R code, and you write HTML or LaTeX code, and the software takes care of putting them together. This is especially true when the R output is only a small part of the final document; it would be a pain to write the R code necessary to output a lot of text, for example. In smart text editors, you also get the right syntax formatting for each kind of code, which you don't get when using R2HTML or Hmisc. This method also separates the results from the commentary more cleanly, in my opinion.
However, for short snippets or just outputting the results from a command with no commentary, using R2HTML or Hmisc might be easier, though (speaking from my experience), once you're in the habit of Sweaving, you'll never go back.
|
44,096
|
Comparing reproducible research strategies: brew or Sweave vs. R2HTML
|
These are just a few points.
If you want to just write simple reports, then the set of LaTeX commands that you need to learn is a lot smaller than if you want to do complex things.
An appealing aspect of LaTeX over some simple markup systems is that if you want features like referencing, automatic numbering, multi-page tables, or attractive typesetting, these features are available. In particular, there have been features that initially I hadn't even thought of, but when I have needed them, they have been available in LaTeX as a package.
If you want your final report in a different format, such as HTML or RTF, then you can use various conversion programs like pandoc to convert the LaTeX into that format.
|
44,097
|
Comparing reproducible research strategies: brew or Sweave vs. R2HTML
|
Another potentially nice thing about LaTeX or another markup in the Sweave/odfWeave/asciiWeave paradigm is that for repeated reports you can template it once and then just reuse the template. See Harrell's rreport package as an example.
|
44,098
|
Comparing reproducible research strategies: brew or Sweave vs. R2HTML
|
You're pretty safe in using either - though I confess I don't use either at all. I suspect the primary reason for the popularity of the LaTeX/Sweave method is the number of fields that use LaTeX as their primary paper/presentation/manuscript format, which incentivizes using a LaTeX-based system. I don't know of a single field where a .html end product is all that directly useful.
|
44,099
|
Comparing reproducible research strategies: brew or Sweave vs. R2HTML
|
The reason option 1 is so common is because...it is so common. Sweave has been around for the better part of 10 years and, for many R users, is synonymous with reproducible research. Furthermore, the sorts of people who would hear the phrase 'reproducible research' and think 'that sounds great' are probably already familiar with LaTeX. Thus, it is not as if they are picking between two options; many won't even know that option 2 exists.
|
44,100
|
What methods to use for statistical prediction/forecast of trading data?
|
Convert your series to day-to-day returns, and use the package PerformanceAnalytics in R.
I saved your attached data as a .csv file on my desktop. Here's some R code demonstrating how you could evaluate this trading strategy. Keep in mind that if you have any "look-ahead" or "data-snooping" bias in your trading model, all of these stats are worse than useless. Make sure that any simulated optimization/trading decisions in the backtest made at time t(n) are based solely on information available at t(1)-t(n-1).
#Load Data
setwd('~/Desktop')
PL <- read.csv('PL.csv')
head(PL)
#Setup as a time series
library(quantmod)
Date <- as.Date(as.character(PL$DBDate),format='%Y%m%d')
PL <- xts(PL[,c('PeriodToPeriodPL','Accumulation')],order.by=Date)
PL <- na.omit(PL)
#Pretend we start with $100,000
PL$Accumulation <- PL$Accumulation+100000
#Calculate Day-to-Day percent returns
Returns <- dailyReturn(PL$Accumulation, type = "arithmetic")
#Get SPY data to compare to
getSymbols('SPY',from=min(index(Returns)),to=max(index(Returns)))
BenchmarkReturns <- dailyReturn(Cl(SPY), type = "arithmetic")
#Setup for Performance Metrics
library(PerformanceAnalytics)
MeVSspy <- cbind(Returns,BenchmarkReturns)
names(MeVSspy) <- c('Me','SPY')
#Basic Performance Metrics
charts.PerformanceSummary(MeVSspy)
table.AnnualizedReturns(MeVSspy, geometric=FALSE)
#Information ratio vs SPY
InformationRatio(Returns,BenchmarkReturns)
#Look at worst drawdowns
table.Drawdowns(Returns)
table.Drawdowns(BenchmarkReturns)
#Rolling 30-day performance
charts.RollingPerformance(MeVSspy, width = 30)
#Read more:
?PerformanceAnalytics
Here's some of the output:
Me SPY
Annualized Return 0.1911 0.0469
Annualized Std Dev 0.1765 0.2505
Annualized Sharpe (Rf=0%) 1.0825 0.1873
Performance vs. SPY (SPY in red)
Rolling performance over the past 30 days, which gives you some idea of volatility:
Keep in mind that all these stats are just stats, past results are in no way indicative of future results, and none of the above information should be taken as investment advice.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.