Is median fairer than mean?
The great thing about using the median for star ratings is that smart users (aware of the use of the median) won't "game" the system:
If a rational user thinks the proper rating should be 4 stars but the product currently shows 4.5 stars, then in a mean-based rating system the best way to pull it down to 4 stars (assuming there have been more than six votes) is to vote 1 star.
In a median-based system, by contrast, the user's rational choice is simply to vote exactly the number of stars the user thinks the product deserves.
It's something like the second-price auction equivalent for star rating systems.
Is median fairer than mean?
Several good answers still leave room for more comments.
First, no one has objected to the idea that the median is intended to eliminate outliers, but I will qualify it. The intended meaning is evident, but it is easy for real data to be more complicated. At most, the median is intended to discount or ignore outliers, but even that is not guaranteed. For example, with ratings of 1 1 1 5 5 5 the median and mean agree at 3, so all may seem good. But an extra 5 will tip the median to 5 and an extra 1 will tip the median to 1. The mean would move by about 0.286 in each case. Hence the mean is here more resistant than the median. The example can be dismissed as unusual, but it's not outrageous. The point is not original, naturally. One place it is made is in Mosteller, F. and Tukey, J.W. 1977. Data Analysis and Regression. Reading, MA: Addison-Wesley, pp. 34-35.
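The arithmetic in the 1 1 1 5 5 5 example is easy to verify:

```python
from statistics import mean, median

# Ratings from the example above: median and mean both equal 3
ratings = [1, 1, 1, 5, 5, 5]

# One extra 5 tips the median to 5; the mean moves by only ~0.286
plus_five = ratings + [5]
print(median(plus_five))                # 5
print(round(mean(plus_five) - 3, 3))    # 0.286

# One extra 1 tips the median to 1, again moving the mean by ~0.286
plus_one = ratings + [1]
print(median(plus_one))                 # 1
print(round(3 - mean(plus_one), 3))     # 0.286
```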
Second, trimmed means have been mentioned and the idea deserves a bigger push. Mean and median need not be stark alternatives so that the analyst must choose (vote for) one or the other. You can consider all possible trimmed means based on trimming a certain number of values in each tail. The table shows as # the number of values included in the calculation of the mean:
+--------+----+--------------+
| number |  # | trimmed mean |
|--------+----+--------------|
|   0    | 16 |   4.0625     |
|   1    | 14 |   4.214286   |
|   2    | 12 |   4.416667   |
|   3    | 10 |   4.6        |
|   4    |  8 |   4.75       |
|   5    |  6 |   4.833333   |
|   6    |  4 |   5          |
|   7    |  2 |   5          |
+--------+----+--------------+
The main picture here is that you can choose your discount rate (ignore so many values in each tail as suspect) as a kind of insurance against the risk of being off because of extreme values. What I see is a fairly smooth gradient between mean and median, which is expected here because the possible values 1, 2, 3, 4, 5 are all present in the data. A big jump in the sequence is expected with an isolated outlier.
There is no obligation with trimmed means to trim equal numbers in each tail, but I will not expand on that.
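The original ratings behind the table are not shown in the text, but a 16-value sample consistent with every row can be reconstructed; a quick sketch:

```python
from statistics import mean

# A 16-value sample consistent with the table above (the underlying data
# are not given in the text; this reconstructed set reproduces every row)
ratings = sorted([1, 1, 2, 3, 4, 4] + [5] * 10)

# Trim k values in each tail and report the mean of what remains
for k in range(8):
    kept = ratings[k:len(ratings) - k]
    print(k, len(kept), round(mean(kept), 6))
```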
Third, the example is of Amazon reviews. Context is always pertinent in guiding how you want data summarized. In the case of Amazon reviews the best answer is to read the reviews! As high and low grades alike can be on spurious grounds (implicitly: the author of this book is my friend) and/or irrelevant to your decision (explicitly: the re-seller treated me badly), there isn't to me an obvious implication for how to summarize such data, and indeed by showing you the distribution Amazon is being maximally informative.
Fourth, and most elementary but also most fundamental: who is making you choose? Sometimes mean and median should both be reported (and, as said, a distribution graph too).
What is the point of reporting descriptive statistics?
In my field, the descriptive part of the report is extremely important because it sets the context for the generalisability of the results. For example, a researcher wishes to identify the predictors of traumatic brain injury following motorcycle accidents in a sample from a hospital. Her dependent variable is binary and she had a series of independent variables. Multivariable logistic regression allowed her to produce the following findings:
no helmet use adjusted OR = 4.5 (95% CI 3.6, 5.5) compared to helmet use.
all other variables were not included in the final model.
To be clear, there were no issues with the modelling. We focus on the value that the descriptive statistics can add.
Without the descriptive statistics, a reader cannot put these findings in perspective. Why? Let me show you the descriptive statistics:
age, years, mean (SD) 54 (2)
males, freq (%) 490 (98)
blood alcohol level, %, mean (SD) 0.10 (0.01)
...
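Such a summary is trivial to produce; a sketch with a small hypothetical sample (the numbers below are invented, not the study's data):

```python
from statistics import mean, stdev

# Hypothetical rider records, purely to illustrate how the summary
# table above would be produced
ages = [52, 54, 53, 56, 55]          # age in years
male = [1, 1, 1, 1, 0]               # male indicator
bac  = [0.10, 0.11, 0.09, 0.10, 0.10]  # blood alcohol level, %

print(f"age, years, mean (SD): {mean(ages):.0f} ({stdev(ages):.0f})")
print(f"males, freq (%): {sum(male)} ({100 * sum(male) / len(male):.0f})")
print(f"blood alcohol level, %, mean (SD): {mean(bac):.2f} ({stdev(bac):.2f})")
```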
You can see from the above that her sample consisted of older, intoxicated males. With this information the reader is able to say what, if anything, these results can tell us about injuries in young males, in non-intoxicated riders, or in female riders.
Please don't ignore descriptive statistics.
What is the point of reporting descriptive statistics?
The point of providing descriptive statistics is to characterise your sample so that people in other centres or countries can assess whether your results generalise to their situation. So in your case, tabulating the sex, grades and so on would be a beneficial addition to the logistic regression. The point is not to enable people to check your assumptions, although they may try to do that too.
==============
Edit to give links to some guidelines used in health
In the field with which I am familiar, health, there are specific guidelines for reporting. These have been collected together in the EQUATOR network which should be consulted for up to date details.
As an example we may take clinical trials where the relevant guideline is CONSORT. In the document outlining the guideline available here and elsewhere we read in Table 1 recommendation 15 "A table showing baseline demographic and clinical characteristics for each group".
There are similar recommendations for other study types.
What is the point of reporting descriptive statistics?
Another thing is to show how well behaved your variables are. If, for example, one of your variables is salary and you have interviewed exactly one billionaire, then when you input his salary into the logistic regression it is going to dominate everything else, so you will likely learn to ignore salary, regardless of how much actual information it may hold.
Some methods are more sensitive than others to skewness and extreme values, and logistic regression is rather on the sensitive side. Of course, the proof of the pudding is in the eating: you can compare the results obtained with the raw data against those obtained with each feature transformed towards normality.
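A toy illustration with hypothetical salaries:

```python
import math
from statistics import mean, median

# Hypothetical salaries with one billionaire respondent
salaries = [40_000, 55_000, 62_000, 48_000, 1_000_000_000]

# The extreme value dominates the mean but barely moves the median
print(f"mean:   {mean(salaries):,.0f}")
print(f"median: {median(salaries):,.0f}")

# A log transform pulls the outlier back toward the rest of the data
logs = [round(math.log10(s), 2) for s in salaries]
print(logs)
```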
What is the point of reporting descriptive statistics?
A descriptive part helps the reader understand your dataset. In applied economics it is usually highly recommended, as it may reveal the first potential flaws in your analysis.
You may use data from different sources to enrich your descriptives.
One table should be enough. The one you attached is not very intuitive.
Why is the AUC 1 even though the classifier has misclassified half of the samples?
The AUC is a measure of the ability to rank examples according to the probability of class membership. Thus if all of the probabilities are above 0.5, you can still have an AUC of one, provided all of the positive patterns have higher probabilities than all of the negative patterns. In this case there will be a decision threshold higher than 0.5 that gives an error rate of zero. Note that because the AUC only measures the ranking of the probabilities, it doesn't tell you whether the probabilities are well calibrated (e.g. whether there is a systematic bias); if calibration of the probabilities is important, then look at the cross-entropy metric.
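A small sketch (with made-up scores) of both halves of the argument: the AUC stays 1 while thresholding at 0.5 misclassifies half the examples:

```python
def auc(labels, probs):
    """AUROC as the fraction of (positive, negative) pairs in which the
    positive example receives the higher score (ties count half)."""
    pos = [p for y, p in zip(labels, probs) if y == 1]
    neg = [p for y, p in zip(labels, probs) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: every probability is above 0.5, yet all positives
# still outrank all negatives
labels = [0, 0, 0, 1, 1, 1]
probs  = [0.6, 0.65, 0.7, 0.8, 0.85, 0.9]

print(auc(labels, probs))               # 1.0

# Thresholding at 0.5 predicts every example positive: a 50% error rate
# here, even though a threshold of, say, 0.75 separates the classes
errors_at_half = sum((p > 0.5) != y for y, p in zip(labels, probs))
print(errors_at_half / len(labels))     # 0.5
```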
Why is the AUC 1 even though the classifier has misclassified half of the samples?
The other answers explain what is happening but I thought a picture might be nice.
You can see that the classes are perfectly separated, so the AUC is 1, but thresholding at 1/2 will produce a misclassification rate of 50%.
Why is the AUC 1 even though the classifier has misclassified half of the samples?
The samples weren't "misclassified" at all. The 0 examples are ranked strictly lower than the 1 examples. AUROC is doing exactly what it's defined to do, which is measure the probability that a randomly-selected 1 is ranked higher than a randomly-selected 0. In this sample, this is always true, so it's a probability 1 event.
Tom Fawcett has a great expository article about ROC curves. I'd suggest starting there.
Tom Fawcett. "An Introduction to ROC Analysis." Pattern Recognition Letters 27 (2006).
Why is dimensionality reduction used if it almost always reduces the explained variation?
Your question implicitly assumes that reducing explained variation is necessarily a bad thing. Recall that $R^2$ is defined as:
$$
R^2 = 1 - \frac{SS_{res}}{SS_{tot}}
$$
where $SS_{res} = \sum_{i}{(y_i - \hat{y}_i)^2}$ is the residual sum of squares and $SS_{tot} = \sum_{i}{(y_i - \bar{y})^2}$ is the total sum of squares. You can easily get $R^2 = 1$ (i.e. $SS_{res} = 0$) by fitting a curve that passes through all of the (training) points (though this, in general, requires a more flexible model than simple linear regression, as noted by Eric), which is a perfect example of overfitting. So reducing explained variation isn't necessarily bad, as it could result in better performance on unseen (test) data. PCA can be a good preprocessing technique if there are reasons to believe that the dataset has an intrinsic lower-dimensional structure.
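A minimal numpy sketch of this overfitting point, using made-up data from a true line:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five noisy training points from a true line y = 2x
x_train = np.arange(5.0)
y_train = 2 * x_train + rng.normal(0, 1, size=5)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# A degree-4 polynomial interpolates all 5 points: SS_res = 0, R^2 = 1
interp = np.polyfit(x_train, y_train, deg=4)
print(round(r2(y_train, np.polyval(interp, x_train)), 6))

# A plain line has lower training R^2 but matches the true structure
line = np.polyfit(x_train, y_train, deg=1)
print(r2(y_train, np.polyval(line, x_train)) < 1)
```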
Why is dimensionality reduction used if it almost always reduces the explained variation?
Your question contains an implicit assumption that the regressor is linear.
If it is linear, your assertion is correct.
But for a non-linear regressor, you may think of the dimensionality reduction step as feature extraction.
In that case it plays a very important role in getting good results.
It might reduce the noise, it might assist the learning, etc.
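A minimal sketch of the noise-reduction idea, with synthetic data that genuinely lives near one direction (all numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 observations of a 5-D signal that really lives on one direction,
# plus isotropic noise
t = rng.normal(size=(200, 1))
direction = np.array([[1.0, 2.0, -1.0, 0.5, 3.0]])
X = t @ direction + 0.1 * rng.normal(size=(200, 5))

# PCA via SVD of the centered data
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first component captures almost all the variation, so keeping
# only it discards mostly noise
print(round(explained[0], 3))
```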
Why is dimensionality reduction used if it almost always reduces the explained variation?
"If the principal components explain, say, 80% of the variation (as opposed to 95%), then I have incurred some loss in the accuracy of my model."
Performing PCA does not reduce the accuracy of the model. The principal components, when you use all of them, still explain the full 95%. It is the reduction of dimensionality that reduces the explained variation.
So this is a matter of model selection and finding models with fewer parameters. The role of PCA is to do this model selection by redefining the parameter space in order to find a small number of components that explain a large amount of variation.
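This is easy to check numerically; a sketch with synthetic correlated data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))  # correlated features

# Explained-variance ratios from the singular values of the centered data
Xc = X - X.mean(axis=0)
eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2
ratios = eigvals / eigvals.sum()

# Using all components always accounts for 100% of the variance...
print(round(ratios.sum(), 10))
# ...it is truncation that gives up some of it
print(ratios.cumsum())
```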
Why is dimensionality reduction used if it almost always reduces the explained variation?
Data reduction (unsupervised learning) is not always used because of any hope of wonderful performance, but rather out of necessity. When one has the "too many variables, too few observations" problem, the primary alternatives are penalized maximum likelihood estimation (ridge regression, lasso, elastic net, etc.) or data reduction. Data reduction, which as a side benefit deals well with collinearity, can be more interpretable, and works in any predictive context. Data reduction is IMHO much preferred over variable selection, because in the majority of problems variable selection yields a result that is too random/unstable. The spirit of data reduction is this: estimate the model complexity that can be supported by your available sample size; reduce the dimensionality (in a way that is completely masked to Y) and fit a single model whose number of parameters (that are estimated against Y) is supported by the effective sample size.
When using variable clustering or sparse principal components, one represents groups of variables with scores. Sometimes an entire group can be dropped. This procedure is not distorted by collinearities.
Why is dimensionality reduction used if it almost always reduces the explained variation?
Take a simple example of computing seasonal adjustment factor for months across a set of years for a company's sales. Assume there is no linear trend except if years are associated with an inflationary period. Note: In reality, one would work a log transform of the data which assumes a constant percent change relationship across time.
Collapsing the month data across years produces good results by month if inflationary periods are rare. If you happen to guess that the year seasonality is in a non-inflationary period, you have the best estimates with the best error estimates. So, the dimensionality reduction (ignoring years) is clearly best.
However, if it turns out that you are in an inflationary periods, not so good monthly seasonal adjustment. However, a year model may capture the inflation trend and produce better results.
So which model to use, collapsed or full?
One approach is to estimate the probability that you could be an inflationary period based on history,
Next, what is the operational cost associated with having an average error of X in a months' seasonality.
Knowing the difference in cost by month due to modeling error for collapsed vs full for inflationary versus non-inflationary, and the associated probability of each case, one can make a decision that produces the lowest expected cost.
This assumes that this exercise repeats over time and that the starting parameter estimates are good.
So, the specific answer relates to the nature of the data, model specification/estimation precision and associated knowledge relating to error cost estimates.
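The decision rule sketched above can be written out directly; all probabilities and costs below are made-up illustrative numbers, not estimates from any real data:

```python
# Hypothetical inputs: probability of currently being in an inflationary
# period (estimated from history) and the operational cost of each model's
# average seasonal-adjustment error under each scenario.
p_inflation = 0.15

cost = {                       # cost of modeling error, by model and scenario
    "collapsed": {"normal": 1.0, "inflation": 9.0},
    "full":      {"normal": 3.0, "inflation": 4.0},
}

def expected_cost(model):
    c = cost[model]
    return (1 - p_inflation) * c["normal"] + p_inflation * c["inflation"]

best = min(cost, key=expected_cost)
for m in cost:
    print(f"{m:9s} expected cost = {expected_cost(m):.2f}")
print("choose:", best)         # with these numbers: the collapsed model
```

With these illustrative numbers the collapsed (dimension-reduced) model wins because inflationary periods are rare; raise `p_inflation` enough and the full model becomes the lower-expected-cost choice.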
|
12,915
|
Why is dimensionality reduction used if it almost always reduces the explained variation?
|
Let me quickly chime in. I'm a data analyst for DNA methylation data. I have an awesome dataset of about 1,000 people × methylation measurements at over 3 million locations × 3 points in time. That's nearly 10 billion data points.
If I want to analyze this data set... well, let's say some processes can take several days to weeks to run.
Alternatively, I can do a data reduction with PCA/UMAP and work with a much much smaller dataset that will give me fairly accurate and generalizable results within minutes. So even if my explained variation is lower, it can really make sense.
Something else that is worth considering is what are your results being used for? If my output is being used to make an important life-or-death decision then I really want to minimize any and all errors. If my output is going to inform further future research then I have a larger margin of error to work with.
Just a quick example, let's say my analysis yields the top 20 drugs that could be effective to treat a certain cancer. If this is going to a patient, I want drugs 1-3 to represent the best, 2nd best, and 3rd best treatment options respectively. Here the result has to be super precise, with very little error.
If, however, the goal is to try these out in cell cultures, then I don't really care if the top result happens to be the 3rd best or the 1st one. As long as my top 25% contains drugs that are more likely to work I don't really care about the order. (A lot of assumptions in this statement but just to exemplify).
To summarize, dimensionality reduction can also help with computation time for big data analysis. This is highly dependent on what the output will be used for.
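A minimal sketch of the compression step with plain numpy; the simulated matrix is far smaller than the real methylation data, and keeping 10 components is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated wide data: 100 samples x 5,000 features (stand-in for millions).
X = rng.normal(size=(100, 5000))
X += rng.normal(size=(100, 1)) * rng.normal(size=(1, 5000))  # shared signal

# Keep the top 10 principal-component scores instead of 5,000 raw features.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :10] * s[:10]

print(X.shape, "->", scores.shape)   # (100, 5000) -> (100, 10)
```

Downstream analyses then run on the 100 × 10 score matrix instead of the full feature matrix, trading a little explained variation for orders of magnitude in compute.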
|
12,916
|
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
|
The confusion comes from the fact that there are multiple ways to interpret "Given that 8 employees are female":
If it's 8 specific employees - say, the employees in positions 1 thru 8 - then the remaining four have $2^4$ possible gender configurations, only $1$ of which is all-female, giving $\frac{1}{2^4}$
If it's any 8 of the 12 employees, then what's being asked is to look at all configurations of 12 employees, throw out the ones with 5 or more men, and count the proportion that are all female.
Notice that under this interpretation, each employee in the valid configurations does not have a 50% chance of being male/female, since we are assuming that there are at least 8 females in each valid configuration. What does have an equal chance is each valid configuration.
The reason this is confusing is that our intuition assumes the first interpretation, but the way the question is worded implies the second.
There is a famous statistical "paradox" that stems from this same line of reasoning:
In a family with two children, one of whom is a girl, what's the probability both are girls?
Most people assume the answer is $\frac{1}{2}$, but it's actually $\frac{1}{3}$, for the same reason as the original question. If you're still confused, see this answer which gives a more thorough explanation of the paradox and its resolution.
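Both interpretations are easy to check by simulation; the sketch below (plain Python, assuming a fair 50/50 sex split) verifies the $\frac{1}{3}$ answer to the paradox and estimates the analogous conditional probability for the employee question:

```python
import random

random.seed(42)
N = 200_000

# Two-children paradox: condition on "at least one girl".
both = at_least_one = 0
for _ in range(N):
    kids = [random.random() < 0.5 for _ in range(2)]   # True = girl
    if any(kids):
        at_least_one += 1
        both += all(kids)
print(f"P(both girls | at least one girl) ~ {both / at_least_one:.3f}")  # ~0.333

# Employee question, second interpretation: condition on >= 8 females.
all_female = valid = 0
for _ in range(N):
    females = sum(random.random() < 0.5 for _ in range(12))
    if females >= 8:
        valid += 1
        all_female += (females == 12)
print(f"P(all 12 female | >= 8 female) ~ {all_female / valid:.5f}")  # ~1/794
```

Note how the conditioning is done: configurations are thrown out, not individual coin flips, which is exactly why each remaining person no longer has a 50% marginal chance of being female.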
|
12,917
|
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
|
Perhaps it would be helpful to give this some clearer structure, via explicit assumptions. Suppose we are willing to assume a priori that each person is equally likely to be male or female, and we assume that the sexes are mutually independent. Then the "female-indicator" variables for the people in the group are:
$$X_1,...,X_{12} \sim \text{IID Bern}(\tfrac{1}{2}).$$
Consequently, the number of females in the group has a binomial distribution:
$$\dot{X} \equiv \sum_{i} X_i \sim \text{Bin}(12, \tfrac{1}{2}),$$
and the conditional probability of interest is:
$$\mathbb{P}(\dot{X} = 12 | \dot{X} \geqslant 8)
= \frac{\mathbb{P}(\dot{X} = 12)}{\mathbb{P}(\dot{X} \geqslant 8)} = \cdots$$
Can you take it from here?
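For readers who want to check their final answer, the dots can be filled in numerically (the $2^{-12}$ factors cancel, so only binomial coefficients are needed):

```python
from math import comb

n = 12
numer = comb(n, 12)                              # ways to get 12 females
denom = sum(comb(n, k) for k in range(8, n + 1)) # ways to get at least 8

prob = numer / denom
print(prob)   # 1/794, about 0.00126
```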
|
12,918
|
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
|
You need to be very, very, very precise with the statements you make, otherwise any results will be utter nonsense - because they might be the correct answer to a totally different question.
My reading of your question as asked leads to the answer "the probability is zero". Eight out of twelve employees are female, so four are male, so not all employees are female.
Let's interpret it as "Someone picked 12 employees at random, and counted how many were female. The answer was a number from eight to twelve". Or "Someone picked 12 employees at random, then picked eight of those and checked their gender, and all eight were female". Much different situation, and much different answer.
In the first case, if it was nine females, why did I say "the answer was a number from eight to twelve" and not "the answer was a number from nine to twelve"? If it was eight females, why didn't I say "the answer was a number from zero to eight"? I might have an agenda to make the impression that either lots of employees or that few employees are female, so if you don't know about the agenda, you might get different answers.
Let's say I ask you "how many children do you have" and you answer "two". Then I say "I very much prefer boys to girls. So if you tell me that you have at least one boy, I'll give you 100 dollars. If you tell me that you have two boys, I'll give you 10,000 dollars. If you lie, I'll shoot you". If you tell me "I have at least one boy" then the probability that you have two boys is zero.
But still, you can't answer the question at all. We don't know how many employees there are - because your question wasn't clear. I know there were 12 employees in a meeting, but I don't know how many were outside the meeting. Obviously the more outside the meeting, the less likely it is that they are all female. And we don't know the probability that a random employee is female. You guessed that the probability was 0.5. I would assume that the probability is an unknown number, that the chance that eight employees are female depends on that number, and that, vice versa, you can draw conclusions from the number of females in a group about the probability that some employee is female.
So let's restate the question. You picked 12 employees at random. I tell you "I will ask you about some attribute that each employee might or might not have, and I want you to tell me if the number of employees in the group with that attribute is from eight to twelve or not".
|
12,919
|
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
|
What you're noticing here is that the entire field of statistics is plagued by serious interpretational issues.
Chief among these (and at fault here) is the reference class problem. In a frequentist framework, this corresponds to assigning your statement of "probability" to a well-populated space of outcomes (recall that probability may be axiomatized in terms of event spaces - in a Bayesian framework, the formalization is slightly different, but the practical consequences are the same). For a statement such as the one in this problem, there are multiple ways we could conceivably do this - what's worse, there's no obvious "correct" statistical interpretation purely from the wording of the problem. We need more information.
This is one of the main problems with statistics as it is currently taught. Mathematical pedagogy tends to favor short, snappy exercises - they're easy to read and easy to grade. But statistics abhors this; any sufficiently short statistical statement is almost bound to be uninterpretable.
|
12,920
|
A meeting has 12 employees. Given that 8 of the employees are female, what is the probability that all employees are female? [closed]
|
Isn't there also a potential skewing of the probabilities due to "cultural" (for want of a better word) factors? If 8 of the employees are female, perhaps this is a women's gym that does not hire men, or perhaps it is a small company run by an entrepreneur who (probably unlawfully, but that is another subject) only or primarily hires women? Perhaps this meeting is of members of an occupation that skews female, like nursing or elementary school teaching.
|
12,921
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
|
Take two positive iid Cauchy variates $Y_1,Y_2$ with common density
$$f(x)=\frac{2}{\pi}\frac{\mathbb I_{x>0}}{1+x^2}$$
and infinite expectation.
The minimum variate $\min(Y_1,Y_2)$ then has density
$$g(x)=\frac{8}{\pi^2}\frac{\pi/2-\arctan(x)}{1+x^2}\mathbb I_{x>0}$$
Since (by L'Hospital's rule)
$$\frac{\pi/2-\arctan(x)}{1+x^2} \sim \frac{1}{x^3}$$
at infinity, the function $x\mapsto xg(x)$ is integrable. Hence, $\min(Y_1,Y_2)$ has a finite expectation actually equal to $\log(16)/\pi$.
More generally, in a regular Cauchy sample $X_1,\ldots,X_n$, with $n\ge 3$, every order statistic but the extremes $X_{(1)}$ and $X_{(n)}$ enjoys a (finite) expectation. (Furthermore, $X_{(1)}$ and $X_{(n)}$ both have infinite expectations, $-\infty$ and $+\infty$ resp., rather than no expectation.)
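A quick numerical check of the density and the claimed expectation, using `scipy.integrate.quad` (any numerical quadrature would do):

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    # Density of min(Y1, Y2) for two iid positive Cauchy variates.
    return (8 / np.pi**2) * (np.pi / 2 - np.arctan(x)) / (1 + x**2)

total, _ = quad(g, 0, np.inf)                 # should integrate to 1
mean, _ = quad(lambda x: x * g(x), 0, np.inf)
target = np.log(16) / np.pi

print(total)          # ~1.0
print(mean, target)   # both ~0.88254
```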
|
12,922
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
|
Let's find a general solution for independent variables $X$ and $Y$ having CDFs $F_X$ and $F_Y,$ respectively. This will give us useful clues into what's going on, without the distraction of computing specific integrals.
Let $Z=\min(X,Y).$ Then, from basic axioms and definitions, we can work out that for any number $z,$
$$\eqalign{
F_Z(z) &= \Pr(Z\le z) = 1 - \Pr(Z > z) = 1 - \Pr(X \gt z, Y\gt z) \\&= 1 - (1-F_X(z))(1-F_Y(z)).}$$
For any CDF $F$, the expectation is
$$E_F = -\int_{-\infty}^0 F(z)\,\mathrm{d}z + \int_{0}^\infty (1-F(z))\,\mathrm{d}z,$$
the sum of a negative part and a positive part.
Consequently, the question asks whether it's possible for $E_{F_X}$ and $E_{F_Y}$ to be infinite but for $E_{F_Z}$ to be finite. This requires both the negative and positive parts of $E_{F_Z}$ to be finite. Rather than analyzing this fully, it will suffice to study what happens to the positive parts: you can work out the analog for the negative parts.
In the worst case, then, the integrals $\int_0^\infty (1-F_X(z))\mathrm{d}z$ and $\int_0^\infty (1-F_Y(z))\mathrm{d}z$ will diverge but we wonder whether the integral of the product
$$\int_0^\infty (1-F_X(z))(1-F_Y(z))\mathrm{d}z$$
diverges. Clearly it cannot be any worse than the original two integrals, because $0\le F(z)\le 1$ for all $z,$
$$\int_0^\infty (1-F_X(z))(1-F_Y(z))\mathrm{d}z \le \int_0^\infty (1-F_X(z))\mathrm{d}z \, \sup_{z\ge 0} (1-F_Y(z)) \le \int_0^\infty (1-F_X(z)).$$
This is sufficient insight to survey the landscape. Suppose that as $z\to \infty,$ $1-F_X(z)$ is approximated by $z^{-p}$ for some positive power $p,$ and similarly $1-F_Y(z)$ is approximated by $z^{-q}$ for $q \gt 0.$ We write $1-F_X = O(z^{-p})$ and $1-F_Y = O(z^{-q}).$ Then, when both $p$ and $q$ are less than $1,$ $E_{F_X}$ and $E_{F_Y}$ are infinite.
When $p+q \le 1,$ because $(1-F_X)(1-F_Y)$ is of order $z^{-(p+q)},$ $E_{F_Z}=\infty.$
But when $p+q \gt 1,$ $E_{F_Z}$ is finite because $\int_0^t (1-F_Z(z))\mathrm{d}z$ is bounded above by $\int_0^1 (1-F_Z(z))\mathrm{d}z$ plus some multiple of $$\int_1^t z^{-(p+q)}\mathrm{d}z = \frac{1}{p+q-1}\left(1 - t^{-(p+q-1)}\right) \to \frac{1}{p+q-1} \lt \infty.$$
In other words, the infinite expectations of the positive parts of $X$ and $Y$ imply their survival functions $1-F_X$ and $1-F_Y$ approach their lower limit of $0$ only very slowly; but the product of those survival functions, which is the survival function of $Z,$ can approach $0$ sufficiently quickly to give $Z$ a finite expectation.
In short,
For $Z$ to have finite expectation, $(1-F_X)(1-F_Y)$ must converge to $0$ sufficiently rapidly at $+\infty.$ This can happen even when neither $1-F_X$ nor $1-F_Y$ converges sufficiently rapidly on its own.
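A concrete instance of the $p+q \gt 1$ case (an illustration with made-up Pareto-type tails, $1-F_X(z)=1-F_Y(z)=z^{-0.6}$ for $z\ge 1$): each variable has infinite expectation, yet the minimum's expectation is finite and equals $1+1/(p+q-1)=6$:

```python
from scipy.integrate import quad

p = q = 0.6   # each tail exponent < 1  =>  E[X] = E[Y] = infinity

# Truncated integrals of one survival function grow without bound...
for t in (1e2, 1e4, 1e6):
    partial = 1 + (t**(1 - p) - 1) / (1 - p)   # closed form of int_0^t (1-F_X)
    print(f"t = {t:>9.0f}:  integral up to t = {partial:10.1f}")

# ...but the product tail has exponent p + q = 1.2 > 1, so E[min] is finite:
tail, _ = quad(lambda z: z**(-(p + q)), 1, float("inf"))
e_min = 1 + tail
print("E[min(X, Y)] =", e_min)   # 1 + 1/(p + q - 1) = 6
```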
|
12,923
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
|
Well, if you don't impose independence, yes.
Consider $Z \sim \text{Cauchy}$ and $B \sim \text{Bernoulli}(\frac{1}{2})$. Define $X$ and $Y$ by:
$$X = \left\{ \begin{array}{ccc} 0 & \text{if} & B = 0 \\ |Z| & \text{if} & B = 1\end{array}\right. $$
$$Y = \left\{ \begin{array}{ccc} |Z| & \text{if} & B = 0 \\ 0 & \text{if} & B = 1\end{array}\right. $$
Where $|.|$ denotes absolute value. Then $X$ and $Y$ have infinite expectation, but $\min(X, Y) = 0$, so $E(\min(X, Y)) = 0$.
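The construction can be simulated directly (a quick sketch; the seed and sample size are arbitrary):

```python
import numpy as np

# The construction above: a half-Cauchy magnitude |Z| routed to X or Y by a
# Bernoulli(1/2) switch B, so each variable has infinite expectation while
# min(X, Y) is identically zero.
rng = np.random.default_rng(0)
n = 100_000
z = np.abs(rng.standard_cauchy(n))
b = rng.integers(0, 2, n)            # B ~ Bernoulli(1/2)
x = np.where(b == 1, z, 0.0)         # X = 0 if B = 0, |Z| if B = 1
y = np.where(b == 0, z, 0.0)         # Y = |Z| if B = 0, 0 if B = 1
m = np.minimum(x, y)                 # one of x, y is zero at every index
print(m.max())                       # 0.0
```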
For independent random variables, I don't know, and I would be interested in a result!
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite exp
|
Well, if you don't impose independance, yes.
Consider $Z \sim Cauchy$ and $B \sim Bernouilli(\frac{1}{2})$. Define $X$ and $Y$ by:
$$X = \left\{ \begin{array}[ccc] 0 0 & \text{if} & B = 0\\|Z| & \t
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
Well, if you don't impose independence, yes.
Consider $Z \sim \text{Cauchy}$ and $B \sim \text{Bernoulli}(\frac{1}{2})$. Define $X$ and $Y$ by:
$$X = \left\{ \begin{array}{ccc} 0 & \text{if} & B = 0 \\ |Z| & \text{if} & B = 1\end{array}\right. $$
$$Y = \left\{ \begin{array}{ccc} |Z| & \text{if} & B = 0 \\ 0 & \text{if} & B = 1\end{array}\right. $$
Where $|.|$ denotes absolute value. Then $X$ and $Y$ have infinite expectation, but $\min(X, Y) = 0$, so $E(\min(X, Y)) = 0$.
For independent random variables, I don't know, and I would be interested in a result!
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite exp
Well, if you don't impose independance, yes.
Consider $Z \sim Cauchy$ and $B \sim Bernouilli(\frac{1}{2})$. Define $X$ and $Y$ by:
$$X = \left\{ \begin{array}[ccc] 0 0 & \text{if} & B = 0\\|Z| & \t
|
12,924
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
|
This answer is not as general as Whuber's answer, and relates to identically distributed X and Y, but I believe that it is a good addition because it gives some different intuition. The advantage of this approach is that it easily generalizes to different order statistics and to different moments or other functions $T(X)$. Also, when the quantile function is known, the possibility or impossibility of 'making a statistic finite' by using an order statistic is easily seen from the type of singularity at 0 and 1.
A quick intuitive view of the possibility that an order statistic might have finite expectation even when the underlying variable does not can be gained via the quantile function.
We can view the moments of a distribution as the moments of the quantile function:
https://stats.stackexchange.com/a/365385/164061
$$E(T(x)) = \int_{0}^1 T(Q(q)) dq \\$$
Say we wish to compute the first moment then $T(x) = x$. In the image below this corresponds to the area between F and the vertical line at $x=0$ (where the area on the left side may count as negative when $T(x)<0$).
The curves in the image show how much each quantile contributes to the computation. If the curve $T(Q(F))$ goes to infinity sufficiently fast as $F$ approaches zero or one, then the area can be infinite.
Now, for an order statistic the integral over the quantiles $dq$ changes somewhat. For the original variable each quantile has equal probability; for an order statistic the quantile is beta distributed. So for a sample of size $n$, using the minimum, the integral becomes:
$$E(T(x_{(1)})) = n \int_{0}^1 (1-q)^{n-1} T(Q(q)) dq \\$$
This term $(1-q)^{n-1}$ can make a function that initially integrated to infinity, because it had a pole of order 1 or higher (its behaviour near $q=1$ was like $T(Q(q)) \sim (1-q)^{-a}$ with $a\geq 1$), integrate to a finite value.
Example: the expectation of the median of a sample taken from a Cauchy distributed variable is now finite because the poles of 1st order are removed. That is, $q^a(1-q)^b \tan(\pi (q-0.5))$ remains finite for $a\geq 1$ and $b\geq 1$. (This relates to the more general statement of Xi'an about order statistics of a Cauchy variable.)
Further: When the quantile function has an essential singularity, for example $Q(p) = e^{1/(1-p)} - e$ then the sample minimum remains with infinite or undefined moments no matter the size of the sample (I just made up that quantile function as example, it relates to $f(x) = \frac{1}{(x+a)\log(x+a)^2}$, I am not sure whether there are more well known distributions that have an essential singularity in the quantile function).
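As a numerical sketch of the pole removal (my own example, with made-up numbers): take the minimum of $n=3$ draws from a half-Cauchy distribution, whose quantile function is $Q(q)=\tan(\pi q/2)$. The underlying variable has infinite mean because of a first-order pole at $q=1$, but the beta weight $n(1-q)^{n-1}$ from the formula above tames it:

```python
import numpy as np

# E(T(x_(1))) = n * int_0^1 (1-q)^(n-1) T(Q(q)) dq with T(x) = x and the
# half-Cauchy quantile function Q(q) = tan(pi*q/2).  Near q = 1 the factor
# (1-q)^2 cancels the first-order pole of Q, so the integral is finite.
n, N = 3, 1_000_000
q = (np.arange(N) + 0.5) / N                 # midpoint grid on (0, 1)
Qf = np.tan(np.pi * q / 2)                   # half-Cauchy quantile function
e_min = np.sum(n * (1 - q) ** (n - 1) * Qf) / N
print(round(e_min, 3))                       # finite, roughly 0.5
```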
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite exp
|
This answer is not as general as Whuber's answer, and relates to identical distributed X and Y, but I believe that it is a good addition because it gives some different intuition. The advantage of thi
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
This answer is not as general as Whuber's answer, and relates to identically distributed X and Y, but I believe that it is a good addition because it gives some different intuition. The advantage of this approach is that it easily generalizes to different order statistics and to different moments or other functions $T(X)$. Also, when the quantile function is known, the possibility or impossibility of 'making a statistic finite' by using an order statistic is easily seen from the type of singularity at 0 and 1.
A quick intuitive view of the possibility that an order statistic might have finite expectation even when the underlying variable does not can be gained via the quantile function.
We can view the moments of a distribution as the moments of the quantile function:
https://stats.stackexchange.com/a/365385/164061
$$E(T(x)) = \int_{0}^1 T(Q(q)) dq \\$$
Say we wish to compute the first moment then $T(x) = x$. In the image below this corresponds to the area between F and the vertical line at $x=0$ (where the area on the left side may count as negative when $T(x)<0$).
The curves in the image show how much each quantile contributes to the computation. If the curve $T(Q(F))$ goes to infinity sufficiently fast as $F$ approaches zero or one, then the area can be infinite.
Now, for an order statistic the integral over the quantiles $dq$ changes somewhat. For the original variable each quantile has equal probability; for an order statistic the quantile is beta distributed. So for a sample of size $n$, using the minimum, the integral becomes:
$$E(T(x_{(1)})) = n \int_{0}^1 (1-q)^{n-1} T(Q(q)) dq \\$$
This term $(1-q)^{n-1}$ can make a function that initially integrated to infinity, because it had a pole of order 1 or higher (its behaviour near $q=1$ was like $T(Q(q)) \sim (1-q)^{-a}$ with $a\geq 1$), integrate to a finite value.
Example: the expectation of the median of a sample taken from a Cauchy distributed variable is now finite because the poles of 1st order are removed. That is, $q^a(1-q)^b \tan(\pi (q-0.5))$ remains finite for $a\geq 1$ and $b\geq 1$. (This relates to the more general statement of Xi'an about order statistics of a Cauchy variable.)
Further: When the quantile function has an essential singularity, for example $Q(p) = e^{1/(1-p)} - e$ then the sample minimum remains with infinite or undefined moments no matter the size of the sample (I just made up that quantile function as example, it relates to $f(x) = \frac{1}{(x+a)\log(x+a)^2}$, I am not sure whether there are more well known distributions that have an essential singularity in the quantile function).
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite exp
This answer is not as general as Whuber's answer, and relates to identical distributed X and Y, but I believe that it is a good addition because it gives some different intuition. The advantage of thi
|
12,925
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
|
It's the case with almost any distribution because the expectation on a subset grows usually much slower than the subset. Let's look at the expectation on a subset for a variable $z$ with PDF $f(z)$:
$$E_x[z]=\int_{-\infty}^xzf(z)dz$$
Let's look at the rate of growth of this expectation:
$$\frac d {dx}E_x[z]=xf(x)$$
So the expectation on a subset grows much slower than $x$, the boundary of the subset. The implication is that although for a distribution with no moments, such as the modulus of Cauchy $|z|$, the expectation is infinite, $E_\infty[|z|]=\infty$, its growth with the upper boundary of the subset slows down a lot at large $z$. In fact, for this case the growth rate is $\frac d {dx}E_x[|z|]\approx \frac 2 {\pi x}$, so $E_x[|z|]$ grows only logarithmically in $x$.
Why is this relevant? Here's why. Look at $E[x|x<y]$ (more precisely $E[x\,\mathbf{1}(x<y)]$, i.e. without normalizing by $P(x<y)$), where both $x,y$ are from the same distribution with density $f(.)$ that has infinite mean.
Let's look at the expectation of the minimum:
$$E[x|x<y]=\int_{-\infty}^\infty dyf(y)\int_{-\infty}^{y}dxf(x)\times x\\
=\int_{-\infty}^\infty dy f(y)E_y[x]
$$
Since $E_y[x]$ grows much slower than $y$, this integral most likely will be finite. It is certainly finite for modulus of Cauchy $|x|$ and is equal to $\ln 4/\pi$:
$E_x[|z|]=\int_0^x\frac 2 \pi\frac {z}{1+z^2}dz=\int_0^x\frac 1 \pi\frac {1}{1+z^2}dz^2=\frac 1 \pi \ln(1+z^2)|_0^x=\frac{\ln(1+x^2)}{\pi}$ - here already we see how the expectation on subset slowed down from $x$ to $\ln x$.
$E[x|x<y]=\int_{0}^\infty \frac 2 \pi\frac 1 {1+x^2}\frac{\ln(1+x^2)}{\pi}dx
=\frac 1 \pi \ln 4
$
You can apply this analysis to the minimum function trivially.
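A Monte-Carlo check of the $\ln 4/\pi$ value above (an illustrative sketch; the seed and sample size are arbitrary):

```python
import numpy as np

# Estimate E[x 1(x < y)] for x, y independent half-Cauchy variables and
# compare with the closed form ln(4)/pi derived above.
rng = np.random.default_rng(0)
n = 2_000_000
x = np.abs(rng.standard_cauchy(n))
y = np.abs(rng.standard_cauchy(n))
mc = np.mean(x * (x < y))
print(round(mc, 3), round(np.log(4) / np.pi, 3))
```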
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite exp
|
It's the case with almost any distribution because the expectation on a subset grows usually much slower than the subset. Let's look at the expectation on a subset for a variable $z$ with PDF $f(z)$:
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite expectation?
It's the case with almost any distribution because the expectation on a subset grows usually much slower than the subset. Let's look at the expectation on a subset for a variable $z$ with PDF $f(z)$:
$$E_x[z]=\int_{-\infty}^xzf(z)dz$$
Let's look at the rate of growth of this expectation:
$$\frac d {dx}E_x[z]=xf(x)$$
So the expectation on a subset grows much slower than $x$, the boundary of the subset. The implication is that although for a distribution with no moments, such as the modulus of Cauchy $|z|$, the expectation is infinite, $E_\infty[|z|]=\infty$, its growth with the upper boundary of the subset slows down a lot at large $z$. In fact, for this case the growth rate is $\frac d {dx}E_x[|z|]\approx \frac 2 {\pi x}$, so $E_x[|z|]$ grows only logarithmically in $x$.
Why is this relevant? Here's why. Look at $E[x|x<y]$ (more precisely $E[x\,\mathbf{1}(x<y)]$, i.e. without normalizing by $P(x<y)$), where both $x,y$ are from the same distribution with density $f(.)$ that has infinite mean.
Let's look at the expectation of the minimum:
$$E[x|x<y]=\int_{-\infty}^\infty dyf(y)\int_{-\infty}^{y}dxf(x)\times x\\
=\int_{-\infty}^\infty dy f(y)E_y[x]
$$
Since $E_y[x]$ grows much slower than $y$, this integral most likely will be finite. It is certainly finite for modulus of Cauchy $|x|$ and is equal to $\ln 4/\pi$:
$E_x[|z|]=\int_0^x\frac 2 \pi\frac {z}{1+z^2}dz=\int_0^x\frac 1 \pi\frac {1}{1+z^2}dz^2=\frac 1 \pi \ln(1+z^2)|_0^x=\frac{\ln(1+x^2)}{\pi}$ - here already we see how the expectation on subset slowed down from $x$ to $\ln x$.
$E[x|x<y]=\int_{0}^\infty \frac 2 \pi\frac 1 {1+x^2}\frac{\ln(1+x^2)}{\pi}dx
=\frac 1 \pi \ln 4
$
You can apply this analysis to the minimum function trivially.
|
Let X,Y be 2 r.v. with infinite expectations, are there possibilities where min(X,Y) have finite exp
It's the case with almost any distribution because the expectation on a subset grows usually much slower than the subset. Let's look at the expectation on a subset for a variable $z$ with PDF $f(z)$:
|
12,926
|
Simple linear model with autocorrelated errors in R [closed]
|
Have a look at gls (generalized least squares) from the package nlme
You can set a correlation profile for the errors in the regression, e.g. ARMA, etc:
gls(Y ~ X, correlation=corARMA(p=1,q=1))
for ARMA(1,1) errors.
|
Simple linear model with autocorrelated errors in R [closed]
|
Have a look at gls (generalized least squares) from the package nlme
You can set a correlation profile for the errors in the regression, e.g. ARMA, etc:
gls(Y ~ X, correlation=corARMA(p=1,q=1))
for
|
Simple linear model with autocorrelated errors in R [closed]
Have a look at gls (generalized least squares) from the package nlme
You can set a correlation profile for the errors in the regression, e.g. ARMA, etc:
gls(Y ~ X, correlation=corARMA(p=1,q=1))
for ARMA(1,1) errors.
|
Simple linear model with autocorrelated errors in R [closed]
Have a look at gls (generalized least squares) from the package nlme
You can set a correlation profile for the errors in the regression, e.g. ARMA, etc:
gls(Y ~ X, correlation=corARMA(p=1,q=1))
for
|
12,927
|
Simple linear model with autocorrelated errors in R [closed]
|
In addition to the gls() function from nlme, you can also use the arima() function in the stats package using MLE. Here is an example with both functions.
x <- 1:100
e <- 25*arima.sim(model=list(ar=0.3),n=100)
y <- 1 + 2*x + e
###Fit the model using gls()
require(nlme)
(fit1 <- gls(y~x, corr=corAR1(0.5,form=~1)))
Generalized least squares fit by REML
Model: y ~ x
Data: NULL
Log-restricted-likelihood: -443.6371
Coefficients:
(Intercept) x
4.379304 1.957357
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.3637263
Degrees of freedom: 100 total; 98 residual
Residual standard error: 22.32908
###Fit the model using arima()
(fit2 <- arima(y, xreg=x, order=c(1,0,0)))
Call:
arima(x = y, order = c(1, 0, 0), xreg = x)
Coefficients:
ar1 intercept x
0.3352 4.5052 1.9548
s.e. 0.0960 6.1743 0.1060
sigma^2 estimated as 423.7: log likelihood = -444.4, aic = 896.81
The advantage of the arima() function is that you can fit a much larger variety of ARMA error processes. If you use the auto.arima() function from the forecast package, you can automatically identify the ARMA error:
require(forecast)
fit3 <- auto.arima(y, xreg=x)
|
Simple linear model with autocorrelated errors in R [closed]
|
In addition to the gls() function from nlme, you can also use the arima() function in the stats package using MLE. Here is an example with both functions.
x <- 1:100
e <- 25*arima.sim(model=list(ar=0.
|
Simple linear model with autocorrelated errors in R [closed]
In addition to the gls() function from nlme, you can also use the arima() function in the stats package using MLE. Here is an example with both functions.
x <- 1:100
e <- 25*arima.sim(model=list(ar=0.3),n=100)
y <- 1 + 2*x + e
###Fit the model using gls()
require(nlme)
(fit1 <- gls(y~x, corr=corAR1(0.5,form=~1)))
Generalized least squares fit by REML
Model: y ~ x
Data: NULL
Log-restricted-likelihood: -443.6371
Coefficients:
(Intercept) x
4.379304 1.957357
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.3637263
Degrees of freedom: 100 total; 98 residual
Residual standard error: 22.32908
###Fit the model using arima()
(fit2 <- arima(y, xreg=x, order=c(1,0,0)))
Call:
arima(x = y, order = c(1, 0, 0), xreg = x)
Coefficients:
ar1 intercept x
0.3352 4.5052 1.9548
s.e. 0.0960 6.1743 0.1060
sigma^2 estimated as 423.7: log likelihood = -444.4, aic = 896.81
The advantage of the arima() function is that you can fit a much larger variety of ARMA error processes. If you use the auto.arima() function from the forecast package, you can automatically identify the ARMA error:
require(forecast)
fit3 <- auto.arima(y, xreg=x)
|
Simple linear model with autocorrelated errors in R [closed]
In addition to the gls() function from nlme, you can also use the arima() function in the stats package using MLE. Here is an example with both functions.
x <- 1:100
e <- 25*arima.sim(model=list(ar=0.
|
12,928
|
Simple linear model with autocorrelated errors in R [closed]
|
Use function gls from package nlme. Here is the example.
##Generate data frame with regressor and AR(1) error. The error term is
## \eps_t=0.3*\eps_{t-1}+v_t
df <- data.frame(x=rnorm(100), err=filter(rnorm(100)/5,filter=0.3,method="recursive"))
##Create the response
df$y <- 1 + 2*df$x + df$err
###Fit the model
gls(y~x, data=df, corr=corAR1(0.5,form=~1))
Generalized least squares fit by REML
Model: y ~ x
Data: df
Log-restricted-likelihood: 9.986475
Coefficients:
(Intercept) x
1.040129 2.001884
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.2686271
Degrees of freedom: 100 total; 98 residual
Residual standard error: 0.2172698
Since the model is fitted using maximum likelihood, you need to supply starting values. The default starting value is 0, but as always it is good to try several values to ensure convergence.
As Dr. G pointed out you can also use other correlation structures, namely ARMA.
Note that in general least squares estimates are consistent even if the covariance matrix of the regression errors is not a multiple of the identity matrix, so if you fit a model with a specific covariance structure, you first need to test whether that structure is appropriate.
|
Simple linear model with autocorrelated errors in R [closed]
|
Use function gls from package nlme. Here is the example.
##Generate data frame with regressor and AR(1) error. The error term is
## \eps_t=0.3*\eps_{t-1}+v_t
df <- data.frame(x1=rnorm(100), err=filte
|
Simple linear model with autocorrelated errors in R [closed]
Use function gls from package nlme. Here is the example.
##Generate data frame with regressor and AR(1) error. The error term is
## \eps_t=0.3*\eps_{t-1}+v_t
df <- data.frame(x=rnorm(100), err=filter(rnorm(100)/5,filter=0.3,method="recursive"))
##Create the response
df$y <- 1 + 2*df$x + df$err
###Fit the model
gls(y~x, data=df, corr=corAR1(0.5,form=~1))
Generalized least squares fit by REML
Model: y ~ x
Data: df
Log-restricted-likelihood: 9.986475
Coefficients:
(Intercept) x
1.040129 2.001884
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.2686271
Degrees of freedom: 100 total; 98 residual
Residual standard error: 0.2172698
Since the model is fitted using maximum likelihood, you need to supply starting values. The default starting value is 0, but as always it is good to try several values to ensure convergence.
As Dr. G pointed out you can also use other correlation structures, namely ARMA.
Note that in general least squares estimates are consistent even if the covariance matrix of the regression errors is not a multiple of the identity matrix, so if you fit a model with a specific covariance structure, you first need to test whether that structure is appropriate.
|
Simple linear model with autocorrelated errors in R [closed]
Use function gls from package nlme. Here is the example.
##Generate data frame with regressor and AR(1) error. The error term is
## \eps_t=0.3*\eps_{t-1}+v_t
df <- data.frame(x1=rnorm(100), err=filte
|
12,929
|
Simple linear model with autocorrelated errors in R [closed]
|
You can use predict on gls output. See ?predict.gls. Also you can specify the order of the observation by the "form" term in the correlation structure.
For example:
corr=corAR1(form=~1) indicates that the order of the data is the order in which they appear in the table.
corr=corAR1(form=~Year) indicates that the order is that of the factor Year.
Finally, the "0.5" value in corr=corAR1(0.5,form=~1) is generally set to a starting value for the parameter representing the correlation structure (phi in the case of AR, theta in the case of MA, ...). Setting it is optional, and it is used for the optimization, as Rob Hyndman mentioned.
|
Simple linear model with autocorrelated errors in R [closed]
|
You can use predict on gls output. See ?predict.gls. Also you can specify the order of the observation by the "form" term in the correlation structure.
For example:
corr=corAR1(form=~1) indicates tha
|
Simple linear model with autocorrelated errors in R [closed]
You can use predict on gls output. See ?predict.gls. Also you can specify the order of the observation by the "form" term in the correlation structure.
For example:
corr=corAR1(form=~1) indicates that the order of the data is the order in which they appear in the table.
corr=corAR1(form=~Year) indicates that the order is that of the factor Year.
Finally, the "0.5" value in corr=corAR1(0.5,form=~1) is generally set to a starting value for the parameter representing the correlation structure (phi in the case of AR, theta in the case of MA, ...). Setting it is optional, and it is used for the optimization, as Rob Hyndman mentioned.
|
Simple linear model with autocorrelated errors in R [closed]
You can use predict on gls output. See ?predict.gls. Also you can specify the order of the observation by the "form" term in the correlation structure.
For example:
corr=corAR1(form=~1) indicates tha
|
12,930
|
Flaws in Frequentist Inference
|
I am a Bayesian, but I find these kinds of criticisms against "frequentists" to be overstated and unfair. Both Bayesians and classical statisticians accept all the same mathematical results to be true, so there is really no dispute here about the properties of the various estimators. Even if you are a Bayesian, it is clearly true that the sample mean is no longer an unbiased estimator (the very concept of "bias" being one that conditions on the unknown parameter). So first of all, the frequentist is correct that the sample mean is not an unbiased estimator (and any sensible Bayesian would have to agree with this given the assumed distributions). Secondly, if a frequentist actually encountered this situation, they would almost certainly update their estimator to reflect the censoring mechanism in the data.
It is entirely possible for the frequentist to use an estimator that is unbiased, and which reduces down to the sample mean in the special case where there is no censored data. Indeed, most standard frequentist estimators would have this property. So, although the sample mean is indeed a biased estimator in this case, the frequentist could use an alternative estimator that is unbiased, and which happens to give the same estimate as the sample mean for this particular data. Therefore, as a practical matter, the frequentist can happily accept that the estimate from the sample mean is the correct estimate from this data. In other words, there is absolutely no reason that the Bayesian needs to "come to the rescue" --- the frequentist will be able to accommodate the changed information perfectly adequately.
More detail: Suppose you have $m$ non-censored data points $x_1,...,x_m$ and $n-m$ censored data points, which are known to be somewhere above the cut-off $\mu_* = 100$. Given the underlying normal distribution for the pre-censored data values, the log-likelihood function for the data is:
$$\ell_\mathbb{x}(\mu) = \sum_{i=1}^m \ln \phi (x_i-\mu) + (n-m) \ln (1 - \Phi(\mu_*-\mu)).$$
Since $\ln \phi (x_i-\mu) = - \tfrac{1}{2}(x_i-\mu)^2+\text{const}$, differentiating gives the score function:
$$\frac{d \ell_\mathbb{x}}{d \mu}(\mu) = m (\bar{x}_m - \mu)
+ (n-m) \cdot \frac{\phi(\mu_*-\mu)}{1 - \Phi(\mu_*-\mu)}.$$
so the MLE is the value $\hat{\mu}$ that solves:
$$\bar{x}_m = \hat{\mu} - \frac{n-m}{m} \cdot \frac{\phi(\mu_*-\hat{\mu})}{1 - \Phi(\mu_*-\hat{\mu})}.$$
The MLE will generally be a biased estimator, but it should have other reasonable frequentist properties, and so it would probably be considered a reasonable estimator in this case. (Even if the frequentist is looking for an improvement, like a "bias corrected" scaled version of the MLE, it is likely to be another estimator that is asymptotically equivalent to the MLE.) In the case where there is no censored data we have $m=n$, so the MLE reduces to $\hat{\mu} = \bar{x}_m$. So in this case, if the frequentist used the MLE, they will come to the same estimate for non-censored data as if they were using the sample mean. (Note here that there is a difference between an estimator, which is a function, and an estimate, which is just one or a few output values from that function.)
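To make this concrete, here is a small sketch solving the score equation numerically. The data values are hypothetical, and the normal pdf/cdf are built from the error function; none of this is from the original answer.

```python
from math import erf, exp, pi, sqrt

def phi(z):   # standard normal pdf
    return exp(-z * z / 2) / sqrt(2 * pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical sample: m = 8 observed values plus n - m = 2 values censored
# at the cut-off mu_* = 100 (unit variance, as in the answer).
obs = [98.2, 99.1, 97.5, 99.8, 98.9, 99.5, 98.0, 99.3]
m, n, cut = len(obs), 10, 100.0
xbar = sum(obs) / m

def score(mu):  # d(log-likelihood)/d(mu), as derived above
    return m * (xbar - mu) + (n - m) * phi(cut - mu) / (1 - Phi(cut - mu))

# The score is decreasing in mu and positive at mu = xbar: bisect for the root.
lo, hi = xbar, cut + 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
mu_hat = (lo + hi) / 2
print(round(xbar, 3), round(mu_hat, 3))  # the MLE sits above the observed mean
```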
|
Flaws in Frequentist Inference
|
I am a Bayesian, but I find these kinds of criticisms against "frequentists" to be overstated and unfair. Both Bayesians and classical statisticians accept all the same mathematical results to be tru
|
Flaws in Frequentist Inference
I am a Bayesian, but I find these kinds of criticisms against "frequentists" to be overstated and unfair. Both Bayesians and classical statisticians accept all the same mathematical results to be true, so there is really no dispute here about the properties of the various estimators. Even if you are a Bayesian, it is clearly true that the sample mean is no longer an unbiased estimator (the very concept of "bias" being one that conditions on the unknown parameter). So first of all, the frequentist is correct that the sample mean is not an unbiased estimator (and any sensible Bayesian would have to agree with this given the assumed distributions). Secondly, if a frequentist actually encountered this situation, they would almost certainly update their estimator to reflect the censoring mechanism in the data.
It is entirely possible for the frequentist to use an estimator that is unbiased, and which reduces down to the sample mean in the special case where there is no censored data. Indeed, most standard frequentist estimators would have this property. So, although the sample mean is indeed a biased estimator in this case, the frequentist could use an alternative estimator that is unbiased, and which happens to give the same estimate as the sample mean for this particular data. Therefore, as a practical matter, the frequentist can happily accept that the estimate from the sample mean is the correct estimate from this data. In other words, there is absolutely no reason that the Bayesian needs to "come to the rescue" --- the frequentist will be able to accommodate the changed information perfectly adequately.
More detail: Suppose you have $m$ non-censored data points $x_1,...,x_m$ and $n-m$ censored data points, which are known to be somewhere above the cut-off $\mu_* = 100$. Given the underlying normal distribution for the pre-censored data values, the log-likelihood function for the data is:
$$\ell_\mathbb{x}(\mu) = \sum_{i=1}^m \ln \phi (x_i-\mu) + (n-m) \ln (1 - \Phi(\mu_*-\mu)).$$
Since $\ln \phi (x_i-\mu) = - \tfrac{1}{2}(x_i-\mu)^2+\text{const}$, differentiating gives the score function:
$$\frac{d \ell_\mathbb{x}}{d \mu}(\mu) = m (\bar{x}_m - \mu)
+ (n-m) \cdot \frac{\phi(\mu_*-\mu)}{1 - \Phi(\mu_*-\mu)}.$$
so the MLE is the value $\hat{\mu}$ that solves:
$$\bar{x}_m = \hat{\mu} - \frac{n-m}{m} \cdot \frac{\phi(\mu_*-\hat{\mu})}{1 - \Phi(\mu_*-\hat{\mu})}.$$
The MLE will generally be a biased estimator, but it should have other reasonable frequentist properties, and so it would probably be considered a reasonable estimator in this case. (Even if the frequentist is looking for an improvement, like a "bias corrected" scaled version of the MLE, it is likely to be another estimator that is asymptotically equivalent to the MLE.) In the case where there is no censored data we have $m=n$, so the MLE reduces to $\hat{\mu} = \bar{x}_m$. So in this case, if the frequentist used the MLE, they will come to the same estimate for non-censored data as if they were using the sample mean. (Note here that there is a difference between an estimator, which is a function, and an estimate, which is just one or a few output values from that function.)
|
Flaws in Frequentist Inference
I am a Bayesian, but I find these kinds of criticisms against "frequentists" to be overstated and unfair. Both Bayesians and classical statisticians accept all the same mathematical results to be tru
|
12,931
|
Flaws in Frequentist Inference
|
It's worth noting that there is nothing that prevents Frequentist analysis from saying
"Conditional on none of your data being censored, $\hat \mu$ is equal to $\bar x$ and will be unbiased. Conditional on some of your data being censored, the MLE estimator $\hat \mu$ is no longer equal to $\bar x$ and has some bias".
Of course, marginalizing over whether there is censored data means that this whole framework has bias, but there's nothing that prevents Frequentists from making conditional statements.
|
Flaws in Frequentist Inference
|
It's worth noting that there is nothing that prevents Frequentist analysis from saying
"Conditional on none of your data being censored, $\hat \mu$ is equal to $\bar x$ and will be unbiased. Conditio
|
Flaws in Frequentist Inference
It's worth noting that there is nothing that prevents Frequentist analysis from saying
"Conditional on none of your data being censored, $\hat \mu$ is equal to $\bar x$ and will be unbiased. Conditional on some of your data being censored, the MLE estimator $\hat \mu$ is no longer equal to $\bar x$ and has some bias".
Of course, marginalizing over whether there is censored data means that this whole framework has bias, but there's nothing that prevents Frequentists from making conditional statements.
|
Flaws in Frequentist Inference
It's worth noting that there is nothing that prevents Frequentist analysis from saying
"Conditional on none of your data being censored, $\hat \mu$ is equal to $\bar x$ and will be unbiased. Conditio
|
12,932
|
Flaws in Frequentist Inference
|
I think this is exaggerated language. Both frequentist and Bayesian have their merits, and statisticians routinely rely on both types in their work. To answer your questions:
We can still consider $X \sim N(\mu, 1)$. However, we are not observing $X$, but $X' = \min(100, X)$, which is another Random Variable.
2,3. Now, the authors are saying $\mathbb{E}(X') \neq \mu$, even though $\mathbb{E}(X) = \mu$. Is this a limitation of frequentist inference? Perhaps. Many statistical inference techniques (e.g. confidence intervals, p-values, hypothesis testing) require our estimator ($\hat{\mu}$) to be a consistent (roughly speaking: asymptotically unbiased) estimate of $\mu$. Now, the thing about Bayesian inference is that it does not care (much) about bias. It doesn't solve the frequentist problem. It just provides another perspective on it.
The particular example given, however, is specially constructed to be paradoxical. Since we know that no values of $x_i > 100$ are observed, the observed mean $\bar{x}'=\sum_i x'_i/n$ is an unbiased estimate of $\mu$. But this is only due to an idiosyncrasy of this setup, since in this particular case, $\bar{x}'=\sum_i x'_i/n = \sum_i x_i/n = \bar{x}$. However, in general, $\mathbb{E}(\bar{x}') \neq \mu$, even though $\mathbb{E}(\bar{x}) = \mu$.
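The general claim $\mathbb{E}(\bar{x}') \neq \mu$ is easy to see in simulation (my own illustration; the value of $\mu$ and the seed are arbitrary):

```python
import numpy as np

# With X ~ N(mu, 1), mu = 99 and X' = min(100, X), the sample mean of the
# censored variable X' is biased below mu, while the sample mean of X is not.
rng = np.random.default_rng(0)
mu = 99.0
x = rng.normal(mu, 1.0, 1_000_000)
x_cens = np.minimum(x, 100.0)   # X' = min(100, X)
print(round(x.mean(), 3), round(x_cens.mean(), 3))
```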
A Bayesian, however, can be more flexible. First, she can define $S$ to be the event that "a randomly drawn sample contains no $x'_i=100$". Then she can say, $\mathbb{E}(\bar{x}' | S) = \mathbb{E}(\bar{x}|S) = \mu$. Note that the precise statement is quite delicate. If the sample did in fact contain one $x'_i = 100$, but we are excluding that observation, or if we are repeating the sampling procedure, and only selecting one which doesn't contain any $x'_i=100$, this is not the event specified by $S$.
This is, in some way, the limitation of frequentism. Conditioning on an event like $S$ simply does not fit in with frequentist philosophy, since it is concerned with the long term behaviour of a procedure or estimator.
On the other hand, the fact that in this particular case, the Bayesian has an unbiased estimate is something of a fluke. Bayesianism does not in the end solve the questions frequentist inference tries to address.
#### Edit ####
I just realized I actually got tricked by the paradox. Using an event like $S$ does not solve the problem, as it still implies $x_i < 100$ for all observed $i$. Ultimately, I think the answer is that the Bayesian simply is not interested in $\mathbb{E}(X')$, and the frequentist cannot consider $\mathbb{E}(\mu|X'=x')$. Thus the fact that the estimate is biased or not is irrelevant to the Bayesian. A Bayesian may of course still be interested in analysing the properties of an estimator, but that will then make him a frequentist.
|
Flaws in Frequentist Inference
|
I think this is exaggerated language. Both frequentist and Bayesian have their merits, and statisticians routinely rely on both types in their work. To answer your questions:
We can still consider $
|
Flaws in Frequentist Inference
I think this is exaggerated language. Both frequentist and Bayesian have their merits, and statisticians routinely rely on both types in their work. To answer your questions:
We can still consider $X \sim N(\mu, 1)$. However, we are not observing $X$, but $X' = \min(100, X)$, which is another Random Variable.
2,3. Now, the authors are saying $\mathbb{E}(X') \neq \mu$, even though $\mathbb{E}(X) = \mu$. Is this a limitation in frequentist inference? Perhaps. Many statistical inference techniques (e.g. confidence intervals, p-values, hypothesis testing) require our estimator ($\hat{\mu}$) to be a consistent (roughly speaking: asymptotically unbiased) estimate of $\mu$. Now, the thing about Bayesian inference is that it does not care (much) about bias. It doesn't solve the frequentist problem. It just provides another perspective on it.
The particular example given, however, is specially constructed to be paradoxical. Since we know that no values of $x_i > 100$ are observed, the observed mean $\bar{x}'=\sum_i x'_i/n$ is an unbiased estimate of $\mu$. But this is only due to an idiosyncrasy of this setup, since in this particular case, $\bar{x}'=\sum_i x'_i/n = \sum_i x_i/n = \bar{x}$. However, in general, $\mathbb{E}(\bar{x}') \neq \mu$, even though $\mathbb{E}(\bar{x}) = \mu$.
A Bayesian, however, can be more flexible. First, she can define $S$ to be the event that "a randomly drawn sample contains no $x'_i=100$". Then she can say, $\mathbb{E}(\bar{x}' | S) = \mathbb{E}(\bar{x}|S) = \mu$. Note that the precise statement is quite delicate. If the sample did in fact contain one $x'_i = 100$, but we are excluding that observation, or if we are repeating the sampling procedure, and only selecting one which doesn't contain any $x'_i=100$, this is not the event specified by $S$.
This is, in some way, the limitation of frequentism. Conditioning on an event like $S$ simply does not fit in with frequentist philosophy, since it is concerned with the long term behaviour of a procedure or estimator.
On the other hand, the fact that in this particular case, the Bayesian has an unbiased estimate is something of a fluke. Bayesianism does not in the end solve the questions frequentist inference tries to address.
#### Edit ####
I just realized I actually got tricked by the paradox. Using an event like $S$ does not solve the problem, as it still implies $x_i < 100$ for all observed $i$. Ultimately, I think the answer is that the Bayesian simply is not interested in $\mathbb{E}(X')$, and the frequentist cannot consider $\mathbb{E}(\mu|X'=x')$. Thus the fact that the estimate is biased or not is irrelevant to the Bayesian. A Bayesian may of course still be interested in analysing the properties of an estimator, but that will then make her a frequentist.
|
Flaws in Frequentist Inference
I think this is exaggerated language. Both frequentist and Bayesian have their merits, and statisticians routinely rely on both types in their work. To answer your questions:
We can still consider $
|
12,933
|
Flaws in Frequentist Inference
|
It is a bit sad to see printed such carelessly written prose.
Consider the phrase
"For any prior density $g(\mu)$, the posterior density $g(\mu\mid x)=
g(\mu)f_{\mu}(x)/f(x)$ ....depends only on the data actually
observed..."
while the mathematical formula in the same sentence shows that the posterior density depends on the data and on the prior density (and let's not discuss how we determine $f_{\mu}(x)$ and $f(x)$).
Second, we have here, a posteriori, a piece of out-of-sample information: we have discovered that the sample is censored. Bayesians have championed the formal and transparent inclusion of out-of-sample information in our estimation procedures, so this example should have been used to show how we could incorporate the discovered sample censoring into the estimation.
On the contrary, the passage almost advises us to ignore this information, since it chooses to end with the example of a flat prior that, we learn, would yield posterior expectation $92$... But it would be a serious mistake to use it because, Bayesian, frequentist or whatever, it is always a serious mistake to ignore the facts. Yet the passage ends almost marveling at, and celebrating, the fact that we would get $92$,
irrespective of whether or not the glitch would have affected readings
above 100.
The correct answer is $42$, by the way.
|
Flaws in Frequentist Inference
|
It is a bit sad to see printed such carelessly written prose.
Consider the phrase
"For any prior density $g(\mu)$, the posterior density $g(\mu\mid x)=
g(\mu)f_{\mu}(x)/f(x)$ ....depends only on the
|
Flaws in Frequentist Inference
It is a bit sad to see printed such carelessly written prose.
Consider the phrase
"For any prior density $g(\mu)$, the posterior density $g(\mu\mid x)=
g(\mu)f_{\mu}(x)/f(x)$ ....depends only on the data actually
observed..."
while the mathematical formula in the same sentence shows that the posterior density depends on the data and on the prior density (and let's not discuss how we determine $f_{\mu}(x)$ and $f(x)$).
Second, we have here, a posteriori, a piece of out-of-sample information: we have discovered that the sample is censored. Bayesians have championed the formal and transparent inclusion of out-of-sample information in our estimation procedures, so this example should have been used to show how we could incorporate the discovered sample censoring into the estimation.
On the contrary, the passage almost advises us to ignore this information, since it chooses to end with the example of a flat prior that, we learn, would yield posterior expectation $92$... But it would be a serious mistake to use it because, Bayesian, frequentist or whatever, it is always a serious mistake to ignore the facts. Yet the passage ends almost marveling at, and celebrating, the fact that we would get $92$,
irrespective of whether or not the glitch would have affected readings
above 100.
The correct answer is $42$, by the way.
|
Flaws in Frequentist Inference
It is a bit sad to see printed such carelessly written prose.
Consider the phrase
"For any prior density $g(\mu)$, the posterior density $g(\mu\mid x)=
g(\mu)f_{\mu}(x)/f(x)$ ....depends only on the
|
12,934
|
Analysis of Kullback-Leibler divergence
|
The Kullback-Leibler Divergence is not a metric proper, since it is not symmetric and also, it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different, and it is important to assign these roles according to the real-world phenomenon under study.
When we write (the OP has calculated the expression using base-2 logarithms)
$$\mathbb K\left(P||Q\right) = \sum_{i}\log_2 (p_i/q_i)p_i $$
we consider the $P$ distribution to be the "target distribution" (usually considered to be the true distribution), which we approximate by using the $Q$ distribution.
Now,
$$\sum_{i}\log_2 (p_i/q_i)p_i = \sum_{i}\log_2 (p_i)p_i-\sum_{i}\log_2 (q_i)p_i = -H(P) - E_P(\log_2(Q))$$
where $H(P)$ is the Shannon entropy of distribution $P$ and $-E_P(\log_2(Q))$ is called the "cross-entropy of $P$ and $Q$" (also non-symmetric).
Writing
$$\mathbb K\left(P||Q\right) = H(P,Q) - H(P)$$
(here too, the order in which we write the distributions in the expression of the cross-entropy matters, since it too is not symmetric), permits us to see that KL-Divergence reflects an increase in entropy over the unavoidable entropy of distribution $P$.
So, no, KL-divergence is better not interpreted as a "distance measure" between distributions, but rather as a measure of entropy increase due to the use of an approximation to the true distribution rather than the true distribution itself.
So we are in Information Theory land. To hear it from the masters (Cover & Thomas):
...if we knew the true distribution $P$ of the random variable, we
could construct a code with average description length $H(P)$. If,
instead, we used the code for a distribution $Q$, we would need $H(P)
+ \mathbb K (P||Q)$ bits on the average to describe the random variable.
The same wise people say
...it is not a true distance between distributions since it is not
symmetric and does not satisfy the triangle inequality. Nonetheless,
it is often useful to think of relative entropy as a “distance”
between distributions.
But this latter approach is useful mainly when one attempts to minimize KL-divergence in order to optimize some estimation procedure. For the interpretation of its numerical value per se, it is not useful, and one should prefer the "entropy increase" approach.
For the specific distributions of the question (always using base-2 logarithms)
$$ \mathbb K\left(P||Q\right) = 0.49282,\;\;\;\; H(P) = 1.9486$$
In other words, you need 25% more bits to describe the situation if you are going to use $Q$ while the true distribution is $P$. This means longer code lines, more time to write them, more memory, more time to read them, higher probability of mistakes etc... it is no accident that Cover & Thomas say that KL-Divergence (or "relative entropy") "measures the inefficiency caused by the approximation."
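The decomposition $\mathbb K(P||Q) = H(P,Q) - H(P)$ is easy to verify numerically. A small sketch (the distributions below are illustrative choices, since the OP's exact $P$ and $Q$ are not reproduced here):

```python
import numpy as np

# Illustrative discrete distributions (not the OP's actual P and Q)
p = np.array([0.40, 0.30, 0.20, 0.10])
q = np.array([0.25, 0.25, 0.25, 0.25])

H_p  = -np.sum(p * np.log2(p))      # Shannon entropy of P
H_pq = -np.sum(p * np.log2(q))      # cross-entropy of P and Q
kl   =  np.sum(p * np.log2(p / q))  # KL divergence K(P||Q)

# K(P||Q) = H(P,Q) - H(P): extra bits incurred by coding P with Q's code
assert np.isclose(kl, H_pq - H_p)
print(H_p, H_pq, kl)
```

Swapping `p` and `q` in the `kl` line gives a different number, which is exactly the non-symmetry discussed above.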
|
Analysis of Kullback-Leibler divergence
|
The Kullback-Leibler Divergence is not a metric proper, since it is not symmetric and also, it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different,
|
Analysis of Kullback-Leibler divergence
The Kullback-Leibler Divergence is not a metric proper, since it is not symmetric and also, it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different, and it is important to assign these roles according to the real-world phenomenon under study.
When we write (the OP has calculated the expression using base-2 logarithms)
$$\mathbb K\left(P||Q\right) = \sum_{i}\log_2 (p_i/q_i)p_i $$
we consider the $P$ distribution to be the "target distribution" (usually considered to be the true distribution), which we approximate by using the $Q$ distribution.
Now,
$$\sum_{i}\log_2 (p_i/q_i)p_i = \sum_{i}\log_2 (p_i)p_i-\sum_{i}\log_2 (q_i)p_i = -H(P) - E_P(\log_2(Q))$$
where $H(P)$ is the Shannon entropy of distribution $P$ and $-E_P(\log_2(Q))$ is called the "cross-entropy of $P$ and $Q$" (also non-symmetric).
Writing
$$\mathbb K\left(P||Q\right) = H(P,Q) - H(P)$$
(here too, the order in which we write the distributions in the expression of the cross-entropy matters, since it too is not symmetric), permits us to see that KL-Divergence reflects an increase in entropy over the unavoidable entropy of distribution $P$.
So, no, KL-divergence is better not interpreted as a "distance measure" between distributions, but rather as a measure of entropy increase due to the use of an approximation to the true distribution rather than the true distribution itself.
So we are in Information Theory land. To hear it from the masters (Cover & Thomas):
...if we knew the true distribution $P$ of the random variable, we
could construct a code with average description length $H(P)$. If,
instead, we used the code for a distribution $Q$, we would need $H(P)
+ \mathbb K (P||Q)$ bits on the average to describe the random variable.
The same wise people say
...it is not a true distance between distributions since it is not
symmetric and does not satisfy the triangle inequality. Nonetheless,
it is often useful to think of relative entropy as a “distance”
between distributions.
But this latter approach is useful mainly when one attempts to minimize KL-divergence in order to optimize some estimation procedure. For the interpretation of its numerical value per se, it is not useful, and one should prefer the "entropy increase" approach.
For the specific distributions of the question (always using base-2 logarithms)
$$ \mathbb K\left(P||Q\right) = 0.49282,\;\;\;\; H(P) = 1.9486$$
In other words, you need 25% more bits to describe the situation if you are going to use $Q$ while the true distribution is $P$. This means longer code lines, more time to write them, more memory, more time to read them, higher probability of mistakes etc... it is no accident that Cover & Thomas say that KL-Divergence (or "relative entropy") "measures the inefficiency caused by the approximation."
|
Analysis of Kullback-Leibler divergence
The Kullback-Leibler Divergence is not a metric proper, since it is not symmetric and also, it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different,
|
12,935
|
Analysis of Kullback-Leibler divergence
|
Consider an information source with distribution $P$ that is encoded using the ideal code for an information source with distribution $Q$.
The extra encoding cost above the minimum encoding cost that would have been attained by using the ideal code for $P$ is the KL divergence.
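This can be checked with a tiny computation: assign Shannon code lengths $-\log_2 q_i$ (the ideal code for $Q$) to a source distributed as $P$, and the average length exceeds $H(P)$ by exactly the KL divergence. The distributions here are illustrative choices only:

```python
import math

# Illustrative source distribution P and coding distribution Q
p = [0.5, 0.25, 0.25]
q = [0.25, 0.5, 0.25]

H_p    = sum(-pi * math.log2(pi) for pi in p)              # minimum average cost
cost_q = sum(-pi * math.log2(qi) for pi, qi in zip(p, q))  # cost using Q's code
kl     = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))

assert math.isclose(cost_q - H_p, kl)  # extra encoding cost = KL divergence
print(H_p, cost_q, kl)  # 1.5, 1.75, 0.25
```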
|
Analysis of Kullback-Leibler divergence
|
Consider an information source with distribution $P$ that is encoded using the ideal code for an information source with distribution $Q$.
The extra encoding cost above the minimum encoding cost that
|
Analysis of Kullback-Leibler divergence
Consider an information source with distribution $P$ that is encoded using the ideal code for an information source with distribution $Q$.
The extra encoding cost above the minimum encoding cost that would have been attained by using the ideal code for $P$ is the KL divergence.
|
Analysis of Kullback-Leibler divergence
Consider an information source with distribution $P$ that is encoded using the ideal code for an information source with distribution $Q$.
The extra encoding cost above the minimum encoding cost that
|
12,936
|
Analysis of Kullback-Leibler divergence
|
KL Divergence measures the extra information required to represent a symbol from P using the code optimized for Q. If you got a value of 0.49 that means that on average you can encode two symbols from P with the corresponding code for Q at a cost of about one extra bit of information.
|
Analysis of Kullback-Leibler divergence
|
KL Divergence measures the information loss required to represent a symbol from P using symbols from Q. If you got a value of 0.49 that means that on average you can encode two symbols from P with the
|
Analysis of Kullback-Leibler divergence
KL Divergence measures the extra information required to represent a symbol from P using the code optimized for Q. If you got a value of 0.49 that means that on average you can encode two symbols from P with the corresponding code for Q at a cost of about one extra bit of information.
|
Analysis of Kullback-Leibler divergence
KL Divergence measures the information loss required to represent a symbol from P using symbols from Q. If you got a value of 0.49 that means that on average you can encode two symbols from P with the
|
12,937
|
What is the difference between data mining and statistical analysis?
|
Jerome Friedman wrote a paper a while back: Data Mining and Statistics: What's the Connection?, which I think you'll find interesting.
Data mining was a largely commercial concern and driven by business needs (coupled with the "need" for vendors to sell software and hardware systems to businesses). One thing Friedman noted was that all the "features" being hyped originated outside of statistics -- from algorithms and methods like neural nets to GUI driven data analysis -- and none of the traditional statistical offerings seemed to be a part of any of these systems (regression, hypothesis testing, etc). "Our core methodology has largely been ignored." It was also sold as user driven along the lines of what you noted: here's my data, here's my "business question", give me an answer.
I think Friedman was trying to provoke. He didn't think data mining had serious intellectual underpinnings where methodology was concerned, but that this would change and statisticians ought to play a part rather than ignoring it.
My own impression is that this has more or less happened. The lines have been blurred. Statisticians now publish in data mining journals. Data miners these days seem to have some sort of statistical training. While data mining packages still don't hype generalized linear models, logistic regression is well known among the analysts -- in addition to clustering and neural nets. Optimal experimental design may not be part of the data mining core, but the software can be coaxed to spit out p-values. Progress!
|
What is the difference between data mining and statistical analysis?
|
Jerome Friedman wrote a paper a while back: Data Mining and Statistics: What's the Connection?, which I think you'll find interesting.
Data mining was a largely commercial concern and driven by busine
|
What is the difference between data mining and statistical analysis?
Jerome Friedman wrote a paper a while back: Data Mining and Statistics: What's the Connection?, which I think you'll find interesting.
Data mining was a largely commercial concern and driven by business needs (coupled with the "need" for vendors to sell software and hardware systems to businesses). One thing Friedman noted was that all the "features" being hyped originated outside of statistics -- from algorithms and methods like neural nets to GUI driven data analysis -- and none of the traditional statistical offerings seemed to be a part of any of these systems (regression, hypothesis testing, etc). "Our core methodology has largely been ignored." It was also sold as user driven along the lines of what you noted: here's my data, here's my "business question", give me an answer.
I think Friedman was trying to provoke. He didn't think data mining had serious intellectual underpinnings where methodology was concerned, but that this would change and statisticians ought to play a part rather than ignoring it.
My own impression is that this has more or less happened. The lines have been blurred. Statisticians now publish in data mining journals. Data miners these days seem to have some sort of statistical training. While data mining packages still don't hype generalized linear models, logistic regression is well known among the analysts -- in addition to clustering and neural nets. Optimal experimental design may not be part of the data mining core, but the software can be coaxed to spit out p-values. Progress!
|
What is the difference between data mining and statistical analysis?
Jerome Friedman wrote a paper a while back: Data Mining and Statistics: What's the Connection?, which I think you'll find interesting.
Data mining was a largely commercial concern and driven by busine
|
12,938
|
What is the difference between data mining and statistical analysis?
|
The difference between statistics and data mining is largely a historical one, since they came from different traditions: statistics and computer science. Data mining grew in parallel out of work in the area of artificial intelligence and statistics.
Section 1.4 from Witten & Frank summarizes my viewpoint so I'm going to quote it at length:
What's the difference between machine learning and statistics? Cynics, looking wryly at the explosion of commercial interest (and hype) in this area, equate data mining to statistics plus marketing. In truth, you should not look for a dividing line between machine learning and statistics because there is a continuum--and a multidimensional one at that--of data analysis techniques. Some derive from the skills taught in standard statistics courses, and others are more closely associated with the kind of machine learning that has arisen out of computer science.
Historically, the two sides have had rather different traditions. If forced to point to a single difference of emphasis, it might be that statistics has been more concerned with testing hypotheses, whereas machine learning has been more concerned with formulating the process of generalization as a search through possible hypotheses...
In the past, very similar methods have developed in parallel in machine learning and statistics...
But now the two perspectives have converged.
N.B.1 IMO, data mining and machine learning are very closely related terms. In one sense, machine learning techniques are used in data mining. I regularly see these terms as interchangeable, and in so far as they are different, they usually go together. I would suggest looking through "The Two Cultures" paper as well as the other threads from my original question.
N.B.2 The term "data mining" can have a negative connotation when used colloquially to mean letting some algorithm loose on the data without any conceptual understanding. The sense is that data mining will lead to spurious results and over-fitting. I typically avoid using the term when talking to non-experts as a result, and instead use machine learning or statistical learning as a synonym.
|
What is the difference between data mining and statistical analysis?
|
The difference between statistics and data mining is largely a historical one, since they came from different traditions: statistics and computer science. Data mining grew in parallel out of work in
|
What is the difference between data mining and statistical analysis?
The difference between statistics and data mining is largely a historical one, since they came from different traditions: statistics and computer science. Data mining grew in parallel out of work in the area of artificial intelligence and statistics.
Section 1.4 from Witten & Frank summarizes my viewpoint so I'm going to quote it at length:
What's the difference between machine learning and statistics? Cynics, looking wryly at the explosion of commercial interest (and hype) in this area, equate data mining to statistics plus marketing. In truth, you should not look for a dividing line between machine learning and statistics because there is a continuum--and a multidimensional one at that--of data analysis techniques. Some derive from the skills taught in standard statistics courses, and others are more closely associated with the kind of machine learning that has arisen out of computer science.
Historically, the two sides have had rather different traditions. If forced to point to a single difference of emphasis, it might be that statistics has been more concerned with testing hypotheses, whereas machine learning has been more concerned with formulating the process of generalization as a search through possible hypotheses...
In the past, very similar methods have developed in parallel in machine learning and statistics...
But now the two perspectives have converged.
N.B.1 IMO, data mining and machine learning are very closely related terms. In one sense, machine learning techniques are used in data mining. I regularly see these terms as interchangeable, and in so far as they are different, they usually go together. I would suggest looking through "The Two Cultures" paper as well as the other threads from my original question.
N.B.2 The term "data mining" can have a negative connotation when used colloquially to mean letting some algorithm loose on the data without any conceptual understanding. The sense is that data mining will lead to spurious results and over-fitting. I typically avoid using the term when talking to non-experts as a result, and instead use machine learning or statistical learning as a synonym.
|
What is the difference between data mining and statistical analysis?
The difference between statistics and data mining is largely a historical one, since they came from different traditions: statistics and computer science. Data mining grew in parallel out of work in
|
12,939
|
What is the difference between data mining and statistical analysis?
|
Data mining is categorized as either Descriptive or Predictive. Descriptive data mining is to search massive data sets and discover the locations of unexpected structures or relationships, patterns, trends, clusters, and outliers in the data. Predictive data mining, on the other hand, is to build models and procedures for regression, classification, pattern recognition, or machine learning tasks, and to assess the predictive accuracy of those models and procedures when applied to fresh data.
The mechanism used to search for patterns or structure in high-dimensional data might be manual or automated; searching might require interactively querying a database management system, or it might entail using visualization software to spot anomalies in the data. In machine-learning terms, descriptive data mining is known as unsupervised learning, whereas predictive data mining is known as supervised learning.
Most of the methods used in data mining are related to methods developed in statistics and machine learning. Foremost among those methods are the general topics of regression, classification, clustering, and visualization. Because of the enormous sizes of the data sets, many applications of data mining focus on dimensionality-reduction techniques (e.g., variable selection) and situations in which high-dimensional data are suspected of lying
on lower-dimensional hyperplanes. Recent attention has been directed to methods of identifying high-dimensional data lying on nonlinear surfaces or manifolds.
There are also situations in data mining when statistical inference — in its classical sense — either has no meaning or is of dubious validity: the former occurs when we have the entire population to search for answers, and the latter occurs when a data set is a “convenience” sample rather than being a random sample drawn from some large population. When data are collected through time (e.g., retail transactions, stock-market transactions, patient records, weather records), sampling also may not make sense; the time-ordering of the observations is crucial to understanding the phenomenon generating the data, and to treat the observations as independent when they may be highly correlated will provide biased results.
The central components of data mining are — in addition to statistical theory and methods
— computing and computational efficiency, automatic data processing, dynamic and interactive data visualization techniques, and algorithm development.
One of the most important issues in data mining is the computational problem of scalability. Algorithms developed for computing standard exploratory and confirmatory statistical methods were designed to be fast and computationally efficient when applied to small and medium-sized data sets; yet, it has been shown that most of these algorithms are not up to the challenge of handling huge data sets. As data sets grow, many existing
algorithms demonstrate a tendency to slow down dramatically (or even grind to a halt).
|
What is the difference between data mining and statistical analysis?
|
Data mining is categorized as either Descriptive or Predictive. Descriptive data mining is to search massive data sets and discover the locations of unexpected structures or relationships, patterns, t
|
What is the difference between data mining and statistical analysis?
Data mining is categorized as either Descriptive or Predictive. Descriptive data mining is to search massive data sets and discover the locations of unexpected structures or relationships, patterns, trends, clusters, and outliers in the data. Predictive data mining, on the other hand, is to build models and procedures for regression, classification, pattern recognition, or machine learning tasks, and to assess the predictive accuracy of those models and procedures when applied to fresh data.
The mechanism used to search for patterns or structure in high-dimensional data might be manual or automated; searching might require interactively querying a database management system, or it might entail using visualization software to spot anomalies in the data. In machine-learning terms, descriptive data mining is known as unsupervised learning, whereas predictive data mining is known as supervised learning.
Most of the methods used in data mining are related to methods developed in statistics and machine learning. Foremost among those methods are the general topics of regression, classification, clustering, and visualization. Because of the enormous sizes of the data sets, many applications of data mining focus on dimensionality-reduction techniques (e.g., variable selection) and situations in which high-dimensional data are suspected of lying
on lower-dimensional hyperplanes. Recent attention has been directed to methods of identifying high-dimensional data lying on nonlinear surfaces or manifolds.
There are also situations in data mining when statistical inference — in its classical sense — either has no meaning or is of dubious validity: the former occurs when we have the entire population to search for answers, and the latter occurs when a data set is a “convenience” sample rather than being a random sample drawn from some large population. When data are collected through time (e.g., retail transactions, stock-market transactions, patient records, weather records), sampling also may not make sense; the time-ordering of the observations is crucial to understanding the phenomenon generating the data, and to treat the observations as independent when they may be highly correlated will provide biased results.
The central components of data mining are — in addition to statistical theory and methods
— computing and computational efficiency, automatic data processing, dynamic and interactive data visualization techniques, and algorithm development.
One of the most important issues in data mining is the computational problem of scalability. Algorithms developed for computing standard exploratory and confirmatory statistical methods were designed to be fast and computationally efficient when applied to small and medium-sized data sets; yet, it has been shown that most of these algorithms are not up to the challenge of handling huge data sets. As data sets grow, many existing
algorithms demonstrate a tendency to slow down dramatically (or even grind to a halt).
|
What is the difference between data mining and statistical analysis?
Data mining is categorized as either Descriptive or Predictive. Descriptive data mining is to search massive data sets and discover the locations of unexpected structures or relationships, patterns, t
|
12,940
|
What is the difference between data mining and statistical analysis?
|
Data mining is statistics, with some minor differences. You can think of it as re-branding statistics, because statisticians are kinda weird.
It is often associated with computational statistics, i.e. only stuff you can do with a computer.
Data miners stole a significant proportion of multivariate statistics and called it their own. Check the table of contents of any 1990s multivariate book and compare it to a new data mining book. Very similar.
Statistics is associated with testing hypotheses and with model building, whereas data mining is more associated with prediction and classification, regardless of whether there is an understandable model.
|
12,941
|
What is the difference between data mining and statistical analysis?
|
I previously wrote a post where I made a few observations comparing data mining to psychology. I think these observations may capture some of the differences you are identifying:
"Data mining seems more concerned with prediction using observed variables than with understanding the causal system of latent variables; psychology is typically more concerned with the causal system of latent variables.
Data mining typically involves massive datasets (e.g. 10,000 + rows) collected for a purpose other than the purpose of the data mining. Psychological datasets are typically small (e.g., less than 1,000 or 100 rows) and collected explicitly to explore a research question.
Psychological analysis typically involves testing specific models. Automated model development approaches tend not to be theoretically interesting." - Data Mining and R
|
12,942
|
What is the difference between data mining and statistical analysis?
|
I don't think the distinction you make is really related to the difference between data mining and statistical analysis. You are talking about the difference between exploratory analysis and modelling-prediction approach.
I think the tradition of statistics is built with all the steps:
exploratory analysis, then modeling, then estimation, then testing, then forecasting/inference. Statisticians do exploratory analysis to figure out what the data look like (the summary function in R!).
I guess data mining is less structured and could be identified with exploratory analysis. However, it uses techniques from statistics such as estimation, forecasting and classification.
|
12,943
|
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
|
1. Normal distribution of residuals:
The normality condition comes into play when you're trying to get confidence intervals and/or p-values.
$\varepsilon\vert X\sim N (0,\sigma^2 I_n)$ is not a Gauss Markov condition.
This plot tries to illustrate the distribution of points in the population in blue (with the population regression line as a solid cyan line), superimposed on a sample dataset in big yellow dots (with its estimated regression line plotted as a dashed yellow line). Evidently this is only for conceptual consumption, since there would be infinitely many points for each value of $X = x$ - so it is a graphical, iconographic discretization of the concept of regression as a continuous distribution of values around a mean (corresponding to the predicted value of the dependent variable) at each given value of the regressor, or explanatory variable.
If we run diagnostic R plots on the simulated "population" data we'd get...
The variance of the residuals is constant along all values of $X.$
The typical plot would be:
Conceptually, introducing multiple regressors or explanatory variables doesn't alter the idea. I find the hands-on tutorial of the package swirl() extremely helpful in understanding how multiple regression is really a process of regressing dependent variables against each other carrying forward the residual, unexplained variation in the model; or more simply, a vectorial form of simple linear regression:
The general technique is to pick one regressor and to replace all other variables by the residuals of their regressions against that one.
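That quoted technique is the Frisch-Waugh-Lovell result. A minimal NumPy sketch (hypothetical, not from the original post) checks that a simple regression on residualized variables recovers exactly the multiple-regression coefficient:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)        # correlated regressors
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

# full multiple regression via least squares
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Frisch-Waugh-Lovell: residualize y and x1 against the remaining columns,
# then run a simple regression on those residuals
Z = np.column_stack([np.ones(n), x2])
ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
rx1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]
b1_fwl = (rx1 @ ry) / (rx1 @ rx1)         # equals beta[1] exactly
```

The equality is algebraic, not approximate, which is why the residual-carrying view of multiple regression works.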
2. The variability of the residuals is nearly constant (Homoskedasticity):
$E[ \varepsilon_i^2 \vert X ] = \sigma^2$
The problem with violating this condition is:
Heteroskedasticity has serious consequences for the OLS estimator. Although the OLS estimator remains unbiased, the estimated SE is wrong. Because of this, confidence intervals and hypotheses tests cannot be relied on. In addition, the OLS estimator is no longer BLUE.
In this plot the variance increases with the values of the regressor (explanatory variable), as opposed to staying constant. In this case the residuals are normally distributed, but the variance of this normal distribution changes (increases) with the explanatory variable.
Notice that the "true" (population) regression line does not change with respect to the population regression line under homoskedasticity in the first plot (solid dark blue), but it is intuitively clear that estimates are going to be more uncertain.
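This behaviour is easy to reproduce with a short simulation (a hypothetical NumPy sketch, not part of the original post): OLS still recovers the slope and intercept, but the residual spread grows with the regressor.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.uniform(1, 10, n)
# heteroskedastic errors: normal at every x, but the sd grows with x
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3 * x)

# OLS fit: the coefficients remain unbiased under heteroskedasticity
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)

# residual spread at the low end vs the high end of x
lo = resid[x < 2].std()
hi = resid[x > 9].std()
```

The fitted line sits close to the population line, while `hi` is several times larger than `lo` - exactly the fan shape in the plot.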
The diagnostic plots on the dataset are...
which correspond to a "heavy-tailed" distribution, which makes sense if we were to telescope all the "side-by-side" vertical Gaussian plots into a single one: it would retain its bell shape, but have very long tails.
@Glen_b "... a complete coverage of the distinction between the two would also consider homoskedastic-but-not-normal."
The residuals are highly skewed and the variance increases with the values of the explanatory variable.
These would be the diagnostic plots...
corresponding to marked right-skewness.
To close the loop, we'd also see skewness in a homoskedastic model with a non-Gaussian distribution of errors:
with diagnostic plots as...
|
12,944
|
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
|
It is not the OP's fault, but I am starting to get tired reading misinformation like this.
I read that these are the conditions for using the multiple regression
model:
the residuals of the model are nearly normal,
the variability of the residuals is nearly constant
the residuals are independent, and
each variable is linearly related to the outcome.
The "multiple regression model" is just a label declaring that one variable can be expressed as a function of other variables.
Neither the true error term nor the residuals of the model need be nearly anything in particular - if the residuals look normal, this is good for subsequent statistical inference.
The variability (variance) of the error term need not be nearly constant - if it is not, we have a model with heteroskedasticity which nowadays is rather easily handled.
The residuals are not independent in any case, since each is a function of the whole sample. The true error terms need not be independent - if they are not, we have a model with autocorrelation, which, although more difficult than heteroskedasticity, can be dealt with to a degree.
Each variable need not be linearly related to the outcome. In fact, the distinction between "linear" and "non-linear" regression has nothing to do with the relation between the variables - but of how the unknown coefficients enter the relationship.
What one could say is that if the first three hold and the fourth is properly stated, then we obtain the "Classical Normal Linear Regression Model", which is just one (although historically the first) variant of multiple regression models.
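To illustrate the "rather easily handled" remark (a hypothetical NumPy sketch, not part of the original answer): White's heteroskedasticity-robust (HC0) covariance comes from the sandwich formula $(X'X)^{-1}\, X'\operatorname{diag}(\hat e^2)\, X\, (X'X)^{-1}$ and takes only a few lines.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.uniform(1, 10, n)
X = np.column_stack([np.ones(n), x])             # design matrix with intercept
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 * x**2)    # strongly heteroskedastic errors

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                         # OLS coefficients (still unbiased)
e = y - X @ beta                                 # residuals

# classical covariance, which wrongly assumes homoskedasticity: s^2 (X'X)^{-1}
s2 = e @ e / (n - 2)
cov_classic = s2 * XtX_inv

# White/HC0 sandwich: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}
meat = X.T @ (e[:, None] ** 2 * X)
cov_hc0 = XtX_inv @ meat @ XtX_inv

se_classic = np.sqrt(np.diag(cov_classic))
se_hc0 = np.sqrt(np.diag(cov_hc0))
```

With variance growing in $x$, the robust slope standard error comes out larger than the classical one, so classical confidence intervals would be too narrow.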
|
12,945
|
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
|
Antoni Parellada had a perfect answer with nice graphical illustration.
I just want to add one comment to summarize difference between two statements
the residuals of the model are nearly normal
the variability of the residuals is nearly constant
Statement 1 says the "shape" of the residual distribution is a bell-shaped curve.
Statement 2 refines that by requiring the spread of the "shape" to be constant: in Antoni Parellada's third plot there are 3 bell-shaped curves, but they have different spreads.
|
12,946
|
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
|
There is not a single unique set of regression assumptions, but there are several variations out there. Some of these sets of assumptions are stricter, i.e. narrower, than others. Also, in most cases you don't need and, in many cases, cannot really assume that the distribution is normal.
The assumptions that you quoted are stricter than most, yet they are formulated in unnecessarily loose language. For instance, what exactly is nearly? Also, it is not the residuals on which we impose the assumptions, it's the errors. The residuals are estimates of the errors, which are not observable. This tells me that you're citing from a poor source. Throw it out.
The brief answer to your question is that if you consider any distribution, e.g. the Student t distribution, for your errors (I'm going to use the correct term in my answer), then you can see how the errors can have "nearly constant" variation without being from a Normal distribution, and how having "nearly constant" variance doesn't require a normal distribution. In other words, no, you can't derive one assumption from the other without an additional requirement.
One such requirement may come from a popular formulation of the regression model as follows:
$$y_i=X_i\beta+\varepsilon_i\\
\varepsilon_i\sim\mathcal N(0,\sigma^2)$$
Here, in the second formula, we state almost all the regression assumptions at once:
"the residuals of the model are nearly normal" - this is the fact that we used $\mathcal N(.)$ in the formula, which stands for normal (Gaussian) distribution
"the variability of the residuals is nearly constant" - this is using one constant $\sigma$ for all errors $\varepsilon_i$
"the residuals are independent" - this comes from using $\mathcal N$ that doesn't depend on anything that is correlated with errors or regressors $X$
"each variable is linearly related to the outcome" - this is $y=X\beta$ form
So when we bundle all the assumptions together this way in one or two equations, it may seem as if they're all dependent on each other, which is not true. I'm going to demonstrate this next.
Example 1
Imagine that instead of the above model I state the following:
$$y_i=X_i\beta+\varepsilon_i\\
\varepsilon_i\sim t_\nu$$
Here, I'm stating that the errors are from Student t distribution with $\nu$ degrees of freedom. The errors will have a constant variance, of course, and they're not Gaussian.
Example 2
$$y_i=X_i\beta+\varepsilon_i\\
\varepsilon_i\sim\mathcal N(0,\sigma^2 i)$$
Here, the distribution of errors is normal, but the variance is not constant, it's increasing with $i$.
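Both examples are easy to check by simulation (a hypothetical NumPy sketch, not from the original answer): the $t_\nu$ errors of Example 1 are non-Gaussian yet have constant variance $\nu/(\nu-2)$, while the Gaussian errors of Example 2 have variance $\sigma^2 i$ growing with $i$.

```python
import numpy as np

rng = np.random.default_rng(7)
nu, n = 5, 200_000

# Example 1: Student-t errors -- non-Gaussian, but constant variance nu/(nu - 2)
eps_t = rng.standard_t(nu, size=n)
var_t = eps_t.var()                      # should be close to 5/3

# Example 2: Gaussian errors with Var(eps_i) = sigma^2 * i -- normal, not constant
sigma2 = 1.0
i = np.arange(1, n + 1)
eps_n = rng.normal(0.0, np.sqrt(sigma2 * i))
var_lo = eps_n[: n // 2].var()           # spread over the early indices
var_hi = eps_n[n // 2 :].var()           # spread over the late indices
```

The first draw is homoskedastic-but-not-normal; the second is normal-but-heteroskedastic, which is exactly the independence of the two assumptions.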
|
12,947
|
Assumptions of multiple regression: how is normality assumption different from constant variance assumption?
|
I tried to add a new dimension to the discussion and make it more general. Please excuse me if it is too rudimentary.
A regression model is a formal means of expressing the two essential ingredients of a statistical relation:
A tendency of the response variable $Y$ to vary with the predictor variable $X$ in a systematic fashion.
A scattering of points around the curve of statistical relationship.
How do we get a handle on the response variable $Y$?
By postulating that:
There is a probability distribution of $Y$ for each level of $X$.
The means of these probability distributions vary in some systematic fashion with $X$.
Regression models may differ in the form of the regression function (linear, curvilinear), in the shape of the probability distributions of $Y$ (symmetrical, skewed), and in other ways.
Whatever the variation, the concept of a probability distribution of $Y$ for any given $X$ is the formal counterpart to the empirical scatter in a statistical relation.
Similarly, the regression curve, which describes the relation between the means of the probability distributions of $Y$ and the level of $X$, is the counterpart to the general tendency of $Y$ to vary with $X$ systematically in a statistical relation.
Source : Applied Linear Statistical Models, KNNL
In Normal Error Regression model we try to estimate the conditional distribution of mean of $Y$ given $X$ that is written like below:
$$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$$
where:
$Y_i$ is the observed response
$X_i$ is a known constant, the level of the predictor variable
$\beta_0$ and $\beta_1$ are parameters
$\epsilon_i$ are independent $N(0,\sigma^2)$
$i = 1, \ldots, n$
So, to estimate $E(Y|X)$ we need to estimate the three parameters: $\beta_0$, $\beta_1$ and $\sigma^2$. We can find them by taking the partial derivatives of the likelihood function w.r.t. $\beta_0$, $\beta_1$ and $\sigma^2$ and equating them to zero. This becomes relatively easy under the assumption of normality.
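Setting those partial derivatives to zero yields closed-form solutions: $\hat\beta_1 = S_{xy}/S_{xx}$, $\hat\beta_0 = \bar{Y} - \hat\beta_1\bar{X}$, and $\hat\sigma^2 = \mathrm{SSE}/n$ (note the MLE divides by $n$, not $n-2$). A hypothetical NumPy sketch, cross-checked against a library fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
x = rng.uniform(0, 10, n)
y = 3.0 + 1.5 * x + rng.normal(0, 2.0, n)    # true beta0=3, beta1=1.5, sigma=2

xbar, ybar = x.mean(), y.mean()
b1 = ((x - xbar) @ (y - ybar)) / ((x - xbar) @ (x - xbar))   # Sxy / Sxx
b0 = ybar - b1 * xbar
sse = ((y - b0 - b1 * x) ** 2).sum()
sigma2_mle = sse / n        # MLE; the unbiased estimator divides by n - 2
```

Under normality these coincide with the least squares estimates, which is what makes the optimization "relatively easy".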
the residuals of the model are nearly normal,
the variability of the residuals is nearly constant
the residuals are independent, and
each variable is linearly related to the outcome.
How are 1 and 2 different?
Coming to the question
The first and second assumptions as stated by you are two parts of the same assumption of normality with zero mean and constant variance. I think the question should be posed as what the implications of the two assumptions are for a normal error regression model, rather than the difference between the two assumptions. I say that because it seems like comparing apples to oranges: you are trying to find a difference between an assumption about the distribution of a scatter of points and an assumption about its variability, and variability is a property of a distribution. So I will try to answer the more relevant question of the implications of the two assumptions.
Under the assumption of normality the maximum likelihood estimators (MLEs) are the same as the least squares estimators, and the MLEs enjoy the property of being UMVUE, which means they have minimum variance among all unbiased estimators.
The assumption of homoskedasticity lets one set up interval estimates for the parameters $\beta_0$ and $\beta_1$ and perform significance tests. The $t$-test is used to check for statistical significance, and it is robust to minor deviations from normality.
|
12,948
|
Pairwise Mahalanobis distances
|
Starting from ahfoss's "succinct" solution, I have used the Cholesky decomposition in place of the SVD.
cholMaha <- function(X) {
dec <- chol( cov(X) )
tmp <- forwardsolve(t(dec), t(X) )
dist(t(tmp))
}
It should be faster, because forward-solving a triangular system is faster than dense matrix multiplication with the inverse covariance (see here). Here are the benchmarks with ahfoss's and whuber's solutions in several settings:
require(microbenchmark)
set.seed(26565)
N <- 100
d <- 10
X <- matrix(rnorm(N*d), N, d)
A <- cholMaha( X = X )
A1 <- fastPwMahal(x1 = X, invCovMat = solve(cov(X)))
sum(abs(A - A1))
# [1] 5.973666e-12 Reassuring!
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X))
Unit: microseconds
expr min lq median uq max neval
cholMaha 502.368 508.3750 512.3210 516.8960 542.806 100
fastPwMahal 634.439 640.7235 645.8575 651.3745 1469.112 100
mahal 839.772 850.4580 857.4405 871.0260 1856.032 100
N <- 10
d <- 5
X <- matrix(rnorm(N*d), N, d)
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X)
)
Unit: microseconds
expr min lq median uq max neval
cholMaha 112.235 116.9845 119.114 122.3970 169.924 100
fastPwMahal 195.415 201.5620 205.124 208.3365 1273.486 100
mahal 163.149 169.3650 172.927 175.9650 311.422 100
N <- 500
d <- 15
X <- matrix(rnorm(N*d), N, d)
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X)
)
Unit: milliseconds
expr min lq median uq max neval
cholMaha 14.58551 14.62484 14.74804 14.92414 41.70873 100
fastPwMahal 14.79692 14.91129 14.96545 15.19139 15.84825 100
mahal 12.65825 14.11171 39.43599 40.26598 41.77186 100
N <- 500
d <- 5
X <- matrix(rnorm(N*d), N, d)
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X)
)
Unit: milliseconds
expr min lq median uq max neval
cholMaha 5.007198 5.030110 5.115941 5.257862 6.031427 100
fastPwMahal 5.082696 5.143914 5.245919 5.457050 6.232565 100
mahal 10.312487 12.215657 37.094138 37.986501 40.153222 100
So Cholesky seems to be uniformly faster.
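To see why the forward-solve yields Mahalanobis distances, here is the same idea in plain Python (not the R code above) on a hypothetical hand-picked 2x2 covariance: if $S = LL^T$ and $Ly = x$, then $y^Ty = x^T S^{-1} x$.

```python
import math

# Hand-picked 2x2 covariance S and its Cholesky factor L (lower), S = L L^T.
# S = [[4, 2], [2, 3]]  ->  L = [[2, 0], [1, sqrt(2)]]
L = [[2.0, 0.0], [1.0, math.sqrt(2.0)]]

def forward_solve(L, x):
    """Solve L y = x for a lower-triangular 2x2 L."""
    y0 = x[0] / L[0][0]
    y1 = (x[1] - L[1][0] * y0) / L[1][1]
    return [y0, y1]

# Difference between two hypothetical observations
diff = [2.0, 2.0]
y = forward_solve(L, diff)
maha_sq_via_chol = y[0] ** 2 + y[1] ** 2

# Direct computation with S^{-1} = [[3/8, -1/4], [-1/4, 1/2]]
Sinv = [[3 / 8, -1 / 4], [-1 / 4, 1 / 2]]
maha_sq_direct = sum(
    diff[i] * Sinv[i][j] * diff[j] for i in range(2) for j in range(2)
)

assert abs(maha_sq_via_chol - maha_sq_direct) < 1e-12  # both 1.5 here
```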
|
12,949
|
Pairwise Mahalanobis distances
|
The standard formula for squared Mahalanobis distance between two data points is
$$ D_{12} = (x_1-x_2)^T \Sigma^{-1} (x_1-x_2) $$
where $x_i$ is a $p \times 1$ vector corresponding to observation $i$. Typically, the covariance matrix is estimated from the observed data. Not counting matrix inversion, this operation requires $p^2+p$ multiplications and $p^2+2p$ additions, each repeated $n(n-1)/2$ times.
Consider the following derivation:
\begin{eqnarray*}
D_{12} &=& (x_1-x_2)^T \Sigma^{-1} (x_1-x_2) \\
&=& (x_1-x_2)^T \Sigma^{-\frac{1}{2}} \Sigma^{-\frac{1}{2}} (x_1-x_2) \\
&=& (x_1^T \Sigma^{-\frac{1}{2}} - x_2^T \Sigma^{-\frac{1}{2}}) (\Sigma^{-\frac{1}{2}}x_1 - \Sigma^{-\frac{1}{2}}x_2) \\
&=& (q_1^T - q_2^T)(q_1 - q_2)
\end{eqnarray*}
where $q_i = \Sigma^{-\frac{1}{2}}x_i$. Note that $x_i^T \Sigma^{-\frac{1}{2}} = (\Sigma^{-\frac{1}{2}} x_i)^T = q_i^T$. This relies on the fact that $\Sigma^{-\frac{1}{2}}$ is symmetric, which holds due to the fact that for any symmetric diagonalizable matrix $A = PEP^T$,
\begin{eqnarray*}
(A^{\frac{1}{2}})^T &=& (PE^{\frac{1}{2}}P^T)^T \\
&=& (P^T)^T (E^{\frac{1}{2}})^T P^T \\
&=& PE^{\frac{1}{2}}P^T \\
&=& A^{\frac{1}{2}}
\end{eqnarray*}
If we let $A=\Sigma^{-1}$, and note that $\Sigma^{-1}$ is symmetric, we see that $\Sigma^{-\frac{1}{2}}$ must also be symmetric. If $X$ is the $n \times p$ matrix of observations and $Q$ is the $n \times p$ matrix such that the $i^{th}$ row of $Q$ is $q_i$, then $Q$ can be succinctly expressed as $X\Sigma^{-\frac{1}{2}}$. This and the previous results imply that
$$D_{k\ell} = \sum_{i=1}^p (Q_{ki}-Q_{\ell i})^2.$$
Hence the only operations that are computed $n(n-1)/2$ times are $p$ multiplications and $2p$ additions (as opposed to the $p^2+p$ multiplications and $p^2+2p$ additions in the above method), resulting in an algorithm of computational complexity order $O(pn^2 + p^2n)$ instead of the original $O(p^2n^2)$.
require(ICSNP) # for pair.diff(), C implementation
fastPwMahal = function(data) {
# Calculate inverse square root matrix
invCov = solve(cov(data))
svds = svd(invCov)
invCovSqr = svds$u %*% diag(sqrt(svds$d)) %*% t(svds$u)
Q = data %*% invCovSqr
# Calculate distances
# pair.diff() calculates the n(n-1)/2 element-by-element
# pairwise differences between each row of the input matrix
sqrDiffs = pair.diff(Q)^2
distVec = rowSums(sqrDiffs)
# Create dist object without creating a n x n matrix
attr(distVec, "Size") = nrow(data)
attr(distVec, "Diag") = F
attr(distVec, "Upper") = F
class(distVec) = "dist"
return(distVec)
}
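The heart of the derivation is that with $q_i = \Sigma^{-\frac{1}{2}} x_i$ the Mahalanobis distance becomes a plain Euclidean distance. A minimal plain-Python check, using a hypothetical diagonal $\Sigma$ so that $\Sigma^{-\frac{1}{2}}$ can be written down directly:

```python
# Hypothetical diagonal covariance: Sigma = diag(4, 9),
# so Sigma^{-1/2} = diag(1/2, 1/3).
inv_sqrt = [0.5, 1.0 / 3.0]

x1 = [2.0, 3.0]
x2 = [0.0, 0.0]

# q_i = Sigma^{-1/2} x_i
q1 = [s * v for s, v in zip(inv_sqrt, x1)]
q2 = [s * v for s, v in zip(inv_sqrt, x2)]

# Squared Euclidean distance between the transformed points ...
d_euclid_sq = sum((a - b) ** 2 for a, b in zip(q1, q2))

# ... equals the squared Mahalanobis distance
# (x1-x2)^T Sigma^{-1} (x1-x2) with Sigma^{-1} = diag(1/4, 1/9).
d_maha_sq = (x1[0] - x2[0]) ** 2 / 4 + (x1[1] - x2[1]) ** 2 / 9
assert abs(d_euclid_sq - d_maha_sq) < 1e-12  # both equal 2 here
```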
|
12,950
|
Pairwise Mahalanobis distances
|
Let's try the obvious. From
$$D_{ij} = (x_i-x_j)^\prime \Sigma^{-1} (x_i-x_j)=x_i^\prime \Sigma^{-1}x_i + x_j^\prime \Sigma^{-1}x_j -2 x_i^\prime \Sigma^{-1}x_j $$
it follows we can compute the vector
$$u_i = x_i^\prime \Sigma^{-1}x_i$$
in $O(p^2)$ time and the matrix
$$V = X \Sigma^{-1} X^\prime$$
in $O(p n^2 + p^2 n)$ time, most likely using built-in fast (parallelizable) array operations, and then form the solution as
$$D = u \oplus u - 2 V$$
where $\oplus$ is the outer product with respect to $+$: $(a \oplus b)_{ij} = a_i + b_j.$
An R implementation succinctly parallels the mathematical formulation (and assumes, with it, that $\Sigma=\text{Var}(X)$ actually is invertible with inverse written $h$ here):
mahal <- function(x, h=solve(var(x))) {
u <- apply(x, 1, function(y) y %*% h %*% y)
d <- outer(u, u, `+`) - 2 * x %*% h %*% t(x)
d[lower.tri(d)]
}
Note, for compatibility with the other solutions, that only the unique off-diagonal elements are returned, rather than the entire (symmetric, zero-on-the-diagonal) squared distance matrix. Scatterplots show its results agree with those of fastPwMahal.
In C or C++, RAM can be re-used and $u\oplus u$ computed on the fly, obviating any need for intermediate storage of $u\oplus u$.
Timing studies with $n$ ranging from $33$ through $5000$ and $p$ ranging from $10$ to $100$ indicate this implementation is $1.5$ to $5$ times faster than fastPwMahal within that range. The improvement gets better as $p$ and $n$ increase. Consequently, we can expect fastPwMahal to be superior for smaller $p$. The break-even occurs around $p=7$ for $n\ge 100$. Whether the same computational advantages of this straightforward solution pertain in other implementations may be a matter of how well they take advantage of vectorized array operations.
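The expansion $D = u \oplus u - 2V$ can be checked by hand; here is a plain-Python spot check with two hypothetical observations and a diagonal $\Sigma^{-1}$, written $h$ to match the notation above:

```python
# h = Sigma^{-1}, taken diagonal here for a hand-checkable example.
h = [1.0, 0.25]
x = [[1.0, 2.0], [3.0, 0.0]]  # two hypothetical observations

def quad(v, w):
    """v^T h w for diagonal h."""
    return sum(hi * vi * wi for hi, vi, wi in zip(h, v, w))

# u_i = x_i^T h x_i and V_ij = x_i^T h x_j
u = [quad(xi, xi) for xi in x]
V12 = quad(x[0], x[1])

# D_12 = u_1 + u_2 - 2 V_12 ...
d12_expanded = u[0] + u[1] - 2 * V12

# ... equals (x_1 - x_2)^T h (x_1 - x_2) computed directly.
diff = [a - b for a, b in zip(x[0], x[1])]
d12_direct = quad(diff, diff)
assert abs(d12_expanded - d12_direct) < 1e-12  # both equal 5 here
```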
|
12,951
|
Pairwise Mahalanobis distances
|
If you wish to compute the sample Mahalanobis distance, then there are some algebraic tricks that you can exploit. They all lead to computing pairwise Euclidean distances, so let's assume we can use dist() for that. Let $X$ denote the $n\times p$ data matrix, which we assume to be centered so that its columns have mean 0 and to have rank $p$ so that the sample covariance matrix is nonsingular. (Centering requires $O(np)$ operations.) Then the sample covariance matrix is $$S = X^T X / n.$$
The pairwise sample Mahalanobis distances of $X$ are the same as the pairwise Euclidean distances of $$X L$$ for any matrix $L$ satisfying $LL^T = S^{-1}$, e.g. the square root or Cholesky factor. This follows from some linear algebra and leads to an algorithm requiring the computation of $S$, $S^{-1}$, and a Cholesky decomposition. The worst case complexity is $O(np^2 + p^3)$.
More deeply, these distances relate to distances between the sample principal components of $X$. Let $X=UDV^T$ denote the SVD of $X$. Then $$S=VD^2V^T/n$$ and $$S^{-1/2}=VD^{-1}V^T n^{1/2}.$$ So $$X S^{-1/2} = UV^T n^{1/2}$$ and the sample Mahalanobis distances are just the pairwise Euclidean distances of $U$ scaled by a factor of $\sqrt{n}$, because Euclidean distance is rotation invariant. This leads to an algorithm requiring the computation of the SVD of $X$ which has worst case complexity $O(n p^2)$ when $n>p$.
Here is an R implementation of the second method which I cannot test on the iPad I am using to write this answer.
u = svd(scale(x, center = TRUE, scale = FALSE), nv = 0)$u
dist(u)
# these distances need to be scaled by a factor of sqrt(n)
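The argument leans on Euclidean distance being rotation invariant, so multiplying by the orthogonal $V^T$ changes nothing. A plain-Python spot check with two hypothetical points and an arbitrary rotation angle:

```python
import math

def rotate(p, theta):
    """Rotate a 2-D point by angle theta (an orthogonal transformation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * p[0] - s * p[1], s * p[0] + c * p[1]]

def dist(a, b):
    """Euclidean distance between 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

p, q = [1.0, 2.0], [4.0, -1.0]
theta = 0.7  # arbitrary angle
d_before = dist(p, q)
d_after = dist(rotate(p, theta), rotate(q, theta))
assert abs(d_before - d_after) < 1e-12  # rotation leaves distances unchanged
```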
|
12,952
|
Pairwise Mahalanobis distances
|
This is a much more succinct solution. It is still based on the derivation involving the inverse square root covariance matrix (see my other answer to this question), but only uses base R and the stats package. It seems to be slightly faster (about 10% faster in some benchmarks I have run). Note that it returns Mahalanobis distance, as opposed to squared Maha distance.
fastPwMahal = function(x1,invCovMat) {
SQRT = with(svd(invCovMat), u %*% diag(d^0.5) %*% t(v))
dist(x1 %*% SQRT)
}
This function requires an inverse covariance matrix, and doesn't return a distance object -- but I suspect that this stripped-down version of the function will be more generally useful to stack exchange users.
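The u %*% diag(d^0.5) %*% t(v) line builds a symmetric matrix square root from the SVD. The defining property is easy to verify by hand; here is a plain-Python check on a hypothetical symmetric 2x2 matrix whose eigendecomposition can be written down exactly:

```python
# Hypothetical symmetric matrix M = [[5, 4], [4, 5]] has eigenvalues 9 and 1
# with orthonormal eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2), so its
# symmetric square root P diag(3, 1) P^T works out to [[2, 1], [1, 2]].
M = [[5.0, 4.0], [4.0, 5.0]]
R = [[2.0, 1.0], [1.0, 2.0]]  # candidate square root

def matmul2(A, B):
    """2x2 matrix product."""
    return [
        [sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)
    ]

# R R = M confirms R is a (symmetric) square root of M.
RR = matmul2(R, R)
for i in range(2):
    for j in range(2):
        assert abs(RR[i][j] - M[i][j]) < 1e-12
```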
|
12,953
|
Pairwise Mahalanobis distances
|
I solved a similar problem by writing a Fortran 95 subroutine. Like you, I didn't want to calculate the duplicates among the $n^2$ distances. Compiled Fortran 95 is nearly as convenient as R or Matlab for basic matrix calculations, but much faster with loops. The routines for Cholesky decompositions and triangular substitutions can be used from LAPACK.
If you only use Fortran 77 features in the interface, your subroutine is still portable enough for others.
|
12,954
|
Pairwise Mahalanobis distances
|
The formula you have posted is not computing what you think you are computing (a U-statistic).
In the code I posted, I use cov(x1) as the scaling matrix (this is the variance of the pairwise differences of the data). You are using cov(x0) (the covariance matrix of your original data). I think this is a mistake on your part. The whole point of using the pairwise differences is that it relieves you from the assumption that the multivariate distribution of your data is symmetric around a centre of symmetry (or from having to estimate that centre of symmetry, for that matter, since crossprod(x1) is proportional to cov(x1)). Obviously, by using cov(x0) you lose that.
This is well explained in the paper I linked to in my original answer.
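As background for why cross-products of pairwise differences behave like covariances, the scalar case has the classical identity $\sum_{i<j}(x_i-x_j)^2 = n\sum_i(x_i-\bar x)^2$. A plain-Python check on hypothetical data:

```python
xs = [1.0, 2.0, 4.0]  # hypothetical sample
n = len(xs)
xbar = sum(xs) / n

# Sum of squared differences over unordered pairs i < j
pair_ss = sum(
    (xs[i] - xs[j]) ** 2 for i in range(n) for j in range(i + 1, n)
)

# n times the centered sum of squares
centered_ss = n * sum((x - xbar) ** 2 for x in xs)

assert abs(pair_ss - centered_ss) < 1e-12  # both equal 14 here
```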
|
12,955
|
Pairwise Mahalanobis distances
|
There is a very easy way to do it using the R package "biotools". In this case you will get a squared Mahalanobis distance matrix.
#Manly (2004, p.65-66)
x1 <- c(131.37, 132.37, 134.47, 135.50, 136.17)
x2 <- c(133.60, 132.70, 133.80, 132.30, 130.33)
x3 <- c(99.17, 99.07, 96.03, 94.53, 93.50)
x4 <- c(50.53, 50.23, 50.57, 51.97, 51.37)
#size (n x p) #Means
x <- cbind(x1, x2, x3, x4)
#size (p x p) #Variances and Covariances
Cov <- matrix(c(21.112,0.038,0.078,2.01, 0.038,23.486,5.2,2.844,
0.078,5.2,24.18,1.134, 2.01,2.844,1.134,10.154), 4, 4)
library(biotools)
Mahalanobis_Distance<-D2.dist(x, Cov)
print(Mahalanobis_Distance)
|
12,956
|
Pairwise Mahalanobis distances
|
This is my old answer, moved here from another thread and expanded with code.
For a long time I have computed the square symmetric matrix of pairwise Mahalanobis distances in SPSS via a hat matrix approach, using the solution of a system of linear equations (which is faster than inverting the covariance matrix).
I'm not an R user, so I simply tried to reproduce @ahfoss's recipe in SPSS alongside "my" recipe, on data of 1000 cases by 400 variables, and found my way considerably faster.
A faster way to calculate the full matrix of pairwise Mahalanobis distances is through the hat matrix $\bf H$: if you are using a high-level language (such as R) with fast built-in matrix multiplication and inversion functions, you will need no loops at all, and it will be faster than doing casewise loops.
Definition. The double-centered matrix of squared pairwise Mahalanobis distances is equal to $\mathbf{H}(n-1)$, where the hat matrix is $\bf X(X'X)^{-1}X'$, computed from column-centered data $\bf X$.
So, center the columns of the data matrix, compute the hat matrix, multiply it by $n-1$, and perform the operation opposite to double-centering. You get the matrix of squared Mahalanobis distances.
"Double centering" is the geometrically correct conversion of squared distances (such as Euclidean and Mahalanobis) into scalar products defined from the geometric centroid of the data cloud. This operation is implicitly based on the cosine theorem. Imagine you have a matrix of squared euclidean distances between your multivariate data poits. You find the centroid (multivariate mean) of the cloud and replace each pairwise distance by the corresponding scalar product (dot product), it is based on the distances $h$s to centroid and the angle between those vectors, as shown in the link. The $h^2$s stand on the diagonal of that matrix of scalar products and $h_1h_2\cos$ are the off-diagonal entries. Then, using directly the cosine theorem formula you easily convert the "double-centrate" matrix back into the squared distance matrix.
In our settings, the "double-centrate" matrix is specifically the hat matrix (multiplied by n-1), not euclidean scalar products, and the resultant squared distance matrix is thus the squared Mahalanobis distance matrix, not squared euclidean distance matrix.
In matrix notation: let $H$ be the diagonal of $\mathbf{H}(n-1)$, taken as a column vector. Propagate the column into a square matrix: H = {H,H,...}; then $\mathbf {D_{mahal}^2} = H+H'-2 \mathbf{H}(n-1)$.
The code in SPSS and speed probe is below.
This first code corresponds to @ahfoss's function fastPwMahal from the cited answer. It is mathematically equivalent, but I compute the complete symmetric matrix of distances (via matrix operations) while @ahfoss computed a triangle of the symmetric matrix (element by element).
matrix. /*Matrix session in SPSS;
/*note: * operator means matrix multiplication, &* means usual, elementwise multiplication.
get data. /*Dataset 1000 cases x 400 variables
!cov(data%cov). /*compute usual covariances between variables [this is my own matrix function].
comp icov= inv(cov). /*invert it
call svd(icov,u,s,v). /*svd
comp isqrcov= u*sqrt(s)*t(v). /*COV^(-1/2)
comp Q= data*isqrcov. /*Matrix Q (see ahfoss answer)
!seuclid(Q%m). /*Compute 1000x1000 matrix of squared euclidean distances;
/*computed here from Q "data" they are the squared Mahalanobis distances.
/*print m. /*Done, print
end matrix.
Time elapsed: 3.25 sec
The following is my modification of it to make it faster:
matrix.
get data.
!cov(data%cov).
/*comp icov= inv(cov). /*Don't invert.
call eigen(cov,v,s2). /*Do svd or eigen decomposition (eigen is faster),
/*comp isqrcov= v * mdiag(1/sqrt(s2)) * t(v). /*compute 1/sqrt of the eigenvalues, and compose the matrix back, so we have COV^(-1/2).
comp isqrcov= v &* (make(nrow(cov),1,1) * t(1/sqrt(s2))) * t(v). /*Or this way not doing matrix multiplication on a diagonal matrix: a bit faster .
comp Q= data*isqrcov.
!seuclid(Q%m).
/*print m.
end matrix.
Time elapsed: 2.40 sec
Finally, the "hat matrix approach". For speed, I'm computing the hat matrix (the data must be centered first) $\bf X(X'X)^{-1}X'$ via generalized inverse $\bf (X'X)^{-1}X'$ obtained in linear system solver solve(X'X,X').
matrix.
get data.
!center(data%data). /*Center variables (columns).
comp hat= data*solve(sscp(data),t(data))*(nrow(data)-1). /*hat matrix, and multiply it by n-1 (i.e. by df of covariances).
comp ss= diag(hat)*make(1,ncol(hat),1). /*Now using its diagonal, the leverages (as column propagated into matrix).
comp m= ss+t(ss)-2*hat. /*compute matrix of squared Mahalanobis distances via "cosine rule".
/*print m.
end matrix.
[Notice that if in "comp ss" and "comp m" lines you use "sscp(t(data))",
that is, DATA*t(DATA), in place of "hat", you get usual sq.
euclidean distances]
Time elapsed: 0.95 sec
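The hat-matrix identity used above, $D^2_{ij} = (h_{ii} + h_{jj} - 2h_{ij})(n-1)$ with $\mathbf H = \bf X(X'X)^{-1}X'$ on column-centered data, can be verified outside SPSS. A plain-Python check on a hypothetical $3\times 2$ centered data matrix with orthogonal columns, so that $\bf X'X$ is diagonal and trivial to invert:

```python
# Column-centered data with orthogonal columns: X'X = diag(2, 6).
X = [[-1.0, 1.0], [0.0, -2.0], [1.0, 1.0]]
n = 3
xtx_inv = [1.0 / 2.0, 1.0 / 6.0]  # inverse of the diagonal X'X

def hat(i, j):
    """Entry (i, j) of the hat matrix X (X'X)^{-1} X' for diagonal X'X."""
    return sum(xtx_inv[k] * X[i][k] * X[j][k] for k in range(2))

# Squared Mahalanobis distance between rows 0 and 1 via the hat matrix:
d2_hat = (hat(0, 0) + hat(1, 1) - 2 * hat(0, 1)) * (n - 1)

# Direct computation with S = X'X/(n-1) = diag(1, 3):
diff = [X[0][k] - X[1][k] for k in range(2)]
d2_direct = diff[0] ** 2 / 1.0 + diff[1] ** 2 / 3.0

assert abs(d2_hat - d2_direct) < 1e-12  # both equal 4 here
```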
|
Pairwise Mahalanobis distances
|
This is the expanded with code my old answer moved here from another thread.
I've been doing for a long time computation of a square symmetric matrix of pairwise Mahalanobis distances in SPSS via a ha
|
Pairwise Mahalanobis distances
This is the expanded with code my old answer moved here from another thread.
I've been doing for a long time computation of a square symmetric matrix of pairwise Mahalanobis distances in SPSS via a hat matrix approach using solving of a system of linear equations (for it is faster than inverting of covariance matrix).
I'm not R user so I've just tried to reproduce @ahfoss' this recipe here in SPSS along with "my" recipe, on a data of 1000 cases by 400 variables, and I've found my way considerably faster.
A faster way to calculate the full matrix of pairwise Mahalanobis distances is through hat matrix $\bf H$. I mean, if you are using a high-level language (such as R) with quite fast matrix multiplication and inversion functions built in you will need no loops at all, and it will be faster than doing casewise loops.
Definition. The double-centered matrix of squared pairwise Mahalanobis distances is equal to $\mathbf{H}(n-1)$, where the hat matrix is $\bf X(X'X)^{-1}X'$, computed from column-centered data $\bf X$.
So, center columns of the data matrix, compute the hat matrix, multiply by (n-1), and perform operation opposite to double-centering. You get the matrix of squared Mahalanobis distances.
"Double centering" is the geometrically correct conversion of squared distances (such as Euclidean and Mahalanobis) into scalar products defined from the geometric centroid of the data cloud. This operation is implicitly based on the cosine theorem. Imagine you have a matrix of squared euclidean distances between your multivariate data poits. You find the centroid (multivariate mean) of the cloud and replace each pairwise distance by the corresponding scalar product (dot product), it is based on the distances $h$s to centroid and the angle between those vectors, as shown in the link. The $h^2$s stand on the diagonal of that matrix of scalar products and $h_1h_2\cos$ are the off-diagonal entries. Then, using directly the cosine theorem formula you easily convert the "double-centrate" matrix back into the squared distance matrix.
In our settings, the "double-centrate" matrix is specifically the hat matrix (multiplied by n-1), not euclidean scalar products, and the resultant squared distance matrix is thus the squared Mahalanobis distance matrix, not squared euclidean distance matrix.
In matrix notation: Let $H$ be the diagonal of $\mathbf{H}(n-1)$, a column vector. Propagate the column into the square matrix: H= {H,H,...}; then $\mathbf {D_{mahal}^2} = H+H'-2 \mathbf{H}(n-1)$.
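Outside SPSS, the same identity is easy to verify. Here is a minimal NumPy sketch (my own illustration, not the author's code): it builds $(n-1)\mathbf H$ from column-centered data via a linear solve, reverses the double-centering, and cross-checks the result against the direct definition of squared Mahalanobis distance with the inverse covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))              # 50 cases, 4 variables

Xc = X - X.mean(axis=0)                   # center the columns
n = Xc.shape[0]

# (n-1) * hat matrix; solve() avoids explicitly inverting X'X
H = Xc @ np.linalg.solve(Xc.T @ Xc, Xc.T) * (n - 1)

# Reverse the double-centering: D2[i,j] = H[i,i] + H[j,j] - 2*H[i,j]
h = np.diag(H)
D2 = h[:, None] + h[None, :] - 2 * H      # squared Mahalanobis distances

# Cross-check against the direct definition with the inverse covariance
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X[:, None, :] - X[None, :, :]
D2_direct = np.einsum('ijk,kl,ijl->ij', diff, S_inv, diff)
print(np.allclose(D2, D2_direct))         # True
```

The check works because differences of raw and of centered rows coincide, so the centering cancels inside each pairwise difference.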
The code in SPSS and speed probe is below.
This first code corresponds to the function fastPwMahal from @ahfoss's cited answer. It is mathematically equivalent, but I compute the complete symmetric matrix of distances (via matrix operations) while @ahfoss computed a triangle of the symmetric matrix (element by element).
matrix. /*Matrix session in SPSS;
/*note: * operator means matrix multiplication, &* means usual, elementwise multiplication.
get data. /*Dataset 1000 cases x 400 variables
!cov(data%cov). /*compute usual covariances between variables [this is my own matrix function].
comp icov= inv(cov). /*invert it
call svd(icov,u,s,v). /*svd
comp isqrcov= u*sqrt(s)*t(v). /*COV^(-1/2)
comp Q= data*isqrcov. /*Matrix Q (see ahfoss answer)
!seuclid(Q%m). /*Compute 1000x1000 matrix of squared euclidean distances;
/*computed here from Q "data" they are the squared Mahalanobis distances.
/*print m. /*Done, print
end matrix.
Time elapsed: 3.25 sec
The following is my modification of it to make it faster:
matrix.
get data.
!cov(data%cov).
/*comp icov= inv(cov). /*Don't invert.
call eigen(cov,v,s2). /*Do sdv or eigen decomposition (eigen is faster),
/*comp isqrcov= v * mdiag(1/sqrt(s2)) * t(v). /*compute 1/sqrt of the eigenvalues, and compose the matrix back, so we have COV^(-1/2).
comp isqrcov= v &* (make(nrow(cov),1,1) * t(1/sqrt(s2))) * t(v). /*Or this way not doing matrix multiplication on a diagonal matrix: a bit faster .
comp Q= data*isqrcov.
!seuclid(Q%m).
/*print m.
end matrix.
Time elapsed: 2.40 sec
Finally, the "hat matrix approach". For speed, I'm computing the hat matrix (the data must be centered first) $\bf X(X'X)^{-1}X'$ via generalized inverse $\bf (X'X)^{-1}X'$ obtained in linear system solver solve(X'X,X').
matrix.
get data.
!center(data%data). /*Center variables (columns).
comp hat= data*solve(sscp(data),t(data))*(nrow(data)-1). /*hat matrix, and multiply it by n-1 (i.e. by df of covariances).
comp ss= diag(hat)*make(1,ncol(hat),1). /*Now using its diagonal, the leverages (as column propagated into matrix).
comp m= ss+t(ss)-2*hat. /*compute matrix of squared Mahalanobis distances via "cosine rule".
/*print m.
end matrix.
[Notice that if in the "comp ss" and "comp m" lines you use "sscp(t(data))", that is, DATA*t(DATA), in place of "hat", you get the usual squared Euclidean distances.]
Time elapsed: 0.95 sec
Pairwise Mahalanobis distances
What is the relationship between sample size and the influence of prior on posterior?
Yes. The posterior distribution for a parameter $\theta$, given a data set ${\bf X}$ can be written as
$$ p(\theta | {\bf X}) \propto \underbrace{p({\bf X} | \theta)}_{{\rm likelihood}} \cdot \underbrace{p(\theta)}_{{\rm prior}} $$
or, as is more commonly displayed on the log scale,
$$ \log( p(\theta | {\bf X}) ) = c + L(\theta;{\bf X}) + \log(p(\theta)) $$
The log-likelihood, $L(\theta;{\bf X}) = \log \left( p({\bf X}|\theta) \right)$, scales with the sample size, since it is a function of the data, while the prior density does not. Therefore, as the sample size increases, the absolute value of $L(\theta;{\bf X})$ is getting larger while $\log(p(\theta))$ stays fixed (for a fixed value of $\theta$), thus the sum $L(\theta;{\bf X}) + \log(p(\theta))$ becomes more heavily influenced by $L(\theta;{\bf X})$ as the sample size increases.
Therefore, to directly answer your question: the prior distribution becomes less and less relevant as it is outweighed by the likelihood. So for a small sample size, the prior distribution plays a much larger role. This agrees with intuition, since you'd expect prior specifications to play a larger role when there isn't much data available to disprove them, whereas if the sample size is very large, the signal present in the data will outweigh whatever a priori beliefs were put into the model.
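A quick numeric sketch of this (my own addition, using the conjugate Beta-Binomial model as an assumed example): with the data fixed at a 30% success rate, the posterior means under two different priors converge to each other, and to the data proportion, as $n$ grows.

```python
# Conjugate Beta-Binomial: prior Beta(a, b) plus x successes in n trials
# gives posterior Beta(a + x, b + n - x), with mean (a + x) / (a + b + n).
def post_mean(a, b, n, x):
    return (a + x) / (a + b + n)

for n in (10, 100, 1000):
    x = round(0.3 * n)                  # data: 30% successes
    m1 = post_mean(0.5, 0.5, n, x)      # Jeffreys prior, Beta(1/2, 1/2)
    m2 = post_mean(2.0, 2.0, n, x)      # Beta(2, 2) prior
    print(n, round(m1, 4), round(m2, 4), round(abs(m1 - m2), 4))
```

The gap between the two posterior means shrinks roughly like $1/n$, exactly as the likelihood term comes to dominate the fixed prior term.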
What is the relationship between sample size and the influence of prior on posterior?
Here is an attempt to illustrate the last paragraph in Macro's excellent (+1) answer. It shows two priors for the parameter $p$ in the ${\rm Binomial}(n,p)$ distribution. For a few different $n$, the posterior distributions are shown when $x=n/2$ has been observed. As $n$ grows, both posteriors become more and more concentrated around $1/2$.
For $n=2$ the difference is quite big, but for $n=50$ there is virtually no difference.
The two priors below are ${\rm Beta(1/2,1/2)}$ (black) and ${\rm Beta(2,2)}$ (red). The posteriors have the same colours as the priors that they are derived from.
(Note that for many other models and other priors, $n=50$ won't be enough for the prior not to matter!)
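Without the figure itself, a small stdlib-only sketch (my addition, not part of the original answer) makes the same point numerically: the log posterior densities at $p=1/2$ under the two priors differ noticeably for small $n$ but become nearly identical as $n$ grows.

```python
import math

def log_beta_pdf(p, a, b):
    # log density of a Beta(a, b) distribution at p
    return ((a - 1) * math.log(p) + (b - 1) * math.log(1 - p)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

# After observing x = n/2 successes, a Beta(a, b) prior gives a
# Beta(a + n/2, b + n/2) posterior.
for n in (2, 10, 50, 500):
    d1 = log_beta_pdf(0.5, 0.5 + n / 2, 0.5 + n / 2)  # Beta(1/2, 1/2) prior
    d2 = log_beta_pdf(0.5, 2.0 + n / 2, 2.0 + n / 2)  # Beta(2, 2) prior
    print(n, round(abs(d1 - d2), 4))
```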
Simple linear regression output interpretation
The estimated value of the slope does not, by itself, tell you the strength of the relationship. The strength of the relationship depends on the size of the error variance and the range of the predictor. Also, a significant $p$-value doesn't necessarily tell you that there is a strong relationship; the $p$-value is simply testing whether the slope is exactly 0. For a sufficiently large sample size, even small departures from that hypothesis (e.g. ones not of practical importance) will yield a significant $p$-value.
Of the three quantities you presented, $R^2$, the coefficient of determination, gives the greatest indication of the strength of the relationship. In your case, $R^{2} = .089$ means that $8.9\%$ of the variation in your response variable can be explained by a linear relationship with the predictor. What constitutes a "large" $R^2$ is discipline dependent. For example, in the social sciences $R^2 = .2$ might be "large", but in controlled environments like a factory setting, $R^2 > .9$ may be required to say there is a "strong" relationship. In most situations $.089$ is a very small $R^2$, so your conclusion that there is a weak linear relationship is probably reasonable.
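To illustrate the first point, here is a small simulated sketch (my own, not from the original answer): the same true slope of 2 yields a high $R^2$ when the error variance is small and a low $R^2$ when it is large.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)

for sigma in (1.0, 10.0):
    y = 2 * x + rng.normal(scale=sigma, size=x.size)   # true slope 2 in both cases
    slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"sigma={sigma}: slope={slope:.2f}, R^2={r2:.3f}")
```

Both fitted slopes land near 2; only the error variance separates a strong fit from a weak one.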
Simple linear regression output interpretation
The $R^{2}$ tells you how much of the variation in the dependent variable is explained by the model. Moreover, one can interpret the $R^{2}$ as the squared correlation between the original values of the dependent variable and the fitted values. The exact interpretation and derivation of the coefficient of determination $R^{2}$ can be found here.
The proof that the coefficient of determination is equivalent to the squared Pearson correlation coefficient between the observed values $y_{i}$ and the fitted values $\hat{y}_{i}$ can be found here.
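That equivalence is easy to verify numerically. The following NumPy sketch (my addition) fits a simple OLS line and checks that $R^2$ equals the squared correlation between $y_{i}$ and $\hat{y}_{i}$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)

# Ordinary least squares fit with an intercept
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
corr_sq = np.corrcoef(y, y_hat)[0, 1] ** 2
print(abs(r2 - corr_sq) < 1e-12)   # True: the two definitions agree
```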
The $R^{2}$, or coefficient of determination, indicates the strength of your model in explaining the dependent variable. In your case, $R^{2}=0.089$: your model is able to explain 8.9% of the variation in your dependent variable. Equivalently, the squared correlation coefficient between your $y_{i}$ and your fitted values $\hat{y}_{i}$ is 0.089. What constitutes a good $R^{2}$ is discipline dependent.
Finally, to the last part of your question: you cannot use the Durbin-Watson test to say something about the correlation between your dependent and independent variables. The Durbin-Watson test tests for serial correlation; it is conducted to examine whether your error terms are mutually correlated.
Simple linear regression output interpretation
The $R^2$ value tells you how much variation in the data is explained by the fitted model.
The low $R^2$ value in your study suggests that your data are probably spread widely around the regression line, meaning that the regression model can explain only a small fraction (8.9%) of the variation in the data.
Have you checked to see whether a linear model is appropriate? Have a look at the distribution of your residuals, as you can use this to assess the fit of the model to your data. Ideally, your residuals should not show a relation with your $x$ values, and if it does, you may want to think of rescaling your variables in a suitable way, or fitting a more appropriate model.
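As a concrete sketch of such a residual check (my own illustration, with simulated data where the true relation is quadratic):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-3, 3, 200)
y = x ** 2 + rng.normal(scale=0.5, size=x.size)   # the true relation is quadratic

b1, b0 = np.polyfit(x, y, 1)          # fit a straight line anyway
resid = y - (b0 + b1 * x)

# OLS makes the residuals uncorrelated with x itself, but a residual plot
# would show a clear U-shape: the residuals correlate strongly with x^2.
print(round(np.corrcoef(resid, x)[0, 1], 3))
print(round(np.corrcoef(resid, x ** 2)[0, 1], 3))
```

The first correlation is essentially zero by construction; the second is close to one, flagging the missing curvature that a residual plot would reveal.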
Simple linear regression output interpretation
For a linear regression, the fitted slope is going to be the correlation (which, when squared, gives the coefficient of determination, the $R^2$) times the empirical standard deviation of the regressand (the $y$) divided by the empirical standard deviation of the regressor (the $x$). Depending on the scaling of the $x$ and $y$, you can have a fit slope equal to one but an arbitrarily small $R^2$ value.
In short, the slope is not a good indicator of model 'fit' unless you are certain that the scales of the dependent and independent variables must be equal to each other.
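A short simulation (mine, not from the original answer) makes this concrete: the noise level pins $R^2$ near $0.1$, while rescaling the regressor moves the fitted slope anywhere you like without changing $R^2$ at all.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = x + rng.normal(scale=3.0, size=500)   # true slope 1, but lots of noise

for s in (1.0, 100.0):
    xs = x * s                            # rescale the regressor
    slope = np.cov(xs, y, ddof=1)[0, 1] / np.var(xs, ddof=1)
    r2 = np.corrcoef(xs, y)[0, 1] ** 2
    print(f"scale={s}: slope={slope:.4f}, R^2={r2:.4f}")
```

Multiplying $x$ by 100 divides the fitted slope by 100, while the (scale-invariant) correlation, and hence $R^2$, is untouched.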
Simple linear regression output interpretation
I like the answers already given, but let me complement them with a different (and more tongue-in-cheek) approach.
Suppose we collect observations from 1000 random people, trying to find out whether punches in the face are associated with headaches:
$$Headaches = \beta_0 + \beta_1 Punch\_in\_the\_face + \varepsilon $$
$\varepsilon$ contains all the omitted variables that produce headaches in the general population: stress, how contaminated your city is, lack of sleep, coffee consumption, etc.
For this regression, the $\beta_1$ might be very significant and very big, but the $R^2$ will be low. Why? For the vast majority of the population, headaches won't be explained much by punches in the face. In other words, most of the variation in the data (i.e. whether people have few or a lot of headaches) will be left unexplained if you only include punches in the face, but punches in the face are VERY important for headaches.
Graphically, this probably looks like a steep slope but with a very big variation around this slope.
Simple linear regression output interpretation
@Macro had a great answer.
The estimated value of the slope does not, by itself, tell you the strength of the relationship. The strength of the relationship depends on the size of the error variance, and the range of the predictor. Also, a significant $p$-value doesn't tell you necessarily that there is a strong relationship; the $p$-value is simply testing whether the slope is exactly 0.
I just want to add a numerical example to show what it looks like to have the case the OP described:
Low $R^2$
Significant on p-value
Slope close to $1.0$
set.seed(6)
y=c(runif(100)*50,runif(100)*50+10)
x=c(rep(1,100),rep(10,100))
plot(x,y)
fit=lm(y~x)
summary(fit)
abline(fit)
> summary(lm(y~x))
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-24.68 -13.46 -0.87 14.21 25.14
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 25.6575 1.7107 14.998 < 2e-16 ***
x 0.9164 0.2407 3.807 0.000188 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 15.32 on 198 degrees of freedom
Multiple R-squared: 0.0682, Adjusted R-squared: 0.06349
F-statistic: 14.49 on 1 and 198 DF, p-value: 0.0001877
If I want an interpretable model, are there methods other than Linear Regression?
It is hard for me to believe that you heard people saying this, because it would be a dumb thing to say. It's like saying that you use only a hammer for every job (including drilling holes and changing lightbulbs), because it's straightforward to use and gives predictable results.
Second, linear regression is not always "interpretable". If you have a linear regression model with many polynomial terms, or just a lot of features, it will be hard to interpret. For example, say that you used the raw values of each of the 784 pixels from MNIST† as features. Would knowing that pixel 237 has a weight equal to -2311.67 tell you anything about the model? For image data, looking at the activation maps of a convolutional neural network would be much easier to understand.
Finally, there are models that are equally interpretable, e.g. logistic regression, decision trees, naive Bayes algorithm, and many more.
† - As noted by @Ingolifs in the comments, and as discussed in this thread, MNIST may not be the best example, since it is a very simple dataset. For most realistic image datasets, logistic regression would not work, and looking at the weights would not give any straightforward answers. However, if you look closer at the weights in the linked thread, their interpretation is also not straightforward; for example, the weights for predicting "5" or "9" do not show any obvious pattern (see the image below, copied from the other thread).
If I want an interpretable model, are there methods other than Linear Regression?
A decision tree would be another choice, or lasso regression to create a sparse system.
Check this figure from An Introduction to Statistical Learning book.
http://www.sr-sv.com/wp-content/uploads/2015/09/STAT01.png
If I want an interpretable model, are there methods other than Linear Regression?
I would agree with Tim's and mkt's answers - ML models are not necessarily uninterpretable. I would direct you to the Descriptive mAchine Learning EXplanations (DALEX) R package, which is devoted to making ML models interpretable.
If I want an interpretable model, are there methods other than Linear Regression?
No, that is needlessly restrictive. There is a large range of interpretable models, including not just (as Frans Rodenburg says) linear models, generalized linear models, and generalized additive models, but also machine learning methods used for regression: among these I include random forests, gradient boosted machines, neural networks, and more. Just because you don't get coefficients out of machine learning models that are similar to those from linear regressions does not mean that their workings cannot be understood. It just takes a bit more work.
To understand why, I'd recommend reading this question: Obtaining knowledge from a random forest . What it shows is how you can approach making almost any machine learning model interpretable.
What is the distribution of the rounded down average of Poisson random variables?
|
A generalization of the question asks for the distribution of $Y = \lfloor X/m \rfloor$ when the distribution of $X$ is known and supported on the natural numbers. (In the question, $X$ has a Poisson distribution of parameter $\lambda = \lambda_1 + \lambda_2 + \cdots + \lambda_n$ and $m=n$.)
The distribution of $Y$ is easily determined by the distribution of $mY$, whose probability generating function (pgf) can be determined in terms of the pgf of $X$. Here's an outline of the derivation.
Write $p(x) = p_0 + p_1 x + \cdots + p_n x^n + \cdots$ for the pgf of $X$, where (by definition) $p_n = \Pr(X=n)$. $mY$ is constructed from $X$ in such a way that its pgf, $q$, is
$$\eqalign{q(x) &=& \left(p_0 + p_1 + \cdots + p_{m-1}\right) + \left(p_m + p_{m+1} + \cdots + p_{2m-1}\right)x^m + \cdots + \\&&\left(p_{nm} + p_{nm+1} + \cdots + p_{(n+1)m-1}\right)x^{nm} + \cdots.}$$
Because this converges absolutely for $|x| \le 1$, we can rearrange the terms into a sum of pieces of the form
$$D_{m,t}p(x) = p_t + p_{t+m}x^m + \cdots + p_{t + nm}x^{nm} + \cdots$$
for $t=0, 1, \ldots, m-1$. The power series of the functions $x^t D_{m,t}p$ consist of every $m^\text{th}$ term of the series of $p$ starting with the $t^\text{th}$: this is sometimes called a decimation of $p$. Google searches presently don't turn up much useful information on decimations, so for completeness, here's a derivation of a formula.
Let $\omega$ be any primitive $m^\text{th}$ root of unity; for instance, take $\omega = \exp(2 i \pi / m)$. Then it follows from $\omega^m=1$ and $\sum_{j=0}^{m-1}\omega^j = 0$ that
$$x^t D_{m,t}p(x) = \frac{1}{m}\sum_{j=0}^{m-1} \omega^{t j} p(x/\omega^j).$$
To see this, note that the operator $x^t D_{m,t}$ is linear, so it suffices to check the formula on the basis $\{1, x, x^2, \ldots, x^n, \ldots \}$. Applying the right hand side to $x^n$ gives
$$x^t D_{m,t}[x^n] = \frac{1}{m}\sum_{j=0}^{m-1} \omega^{t j} x^n \omega^{-nj}= \frac{x^n}{m}\sum_{j=0}^{m-1} \omega^{(t-n) j}.$$
When $t$ and $n$ differ by a multiple of $m$, each term in the sum equals $1$ and we obtain $x^n$. Otherwise, the terms cycle through powers of $\omega^{t-n}$ and these sum to zero. Whence this operator preserves all powers of $x$ congruent to $t$ modulo $m$ and kills all the others: it is precisely the desired projection.
A formula for $q$ follows readily by changing the order of summation and recognizing one of the sums as geometric, thereby writing it in closed form:
$$\eqalign{
q(x) &= \sum_{t=0}^{m-1} (D_{m,t}[p])(x) \\
&= \sum_{t=0}^{m-1} x^{-t} \frac{1}{m} \sum_{j=0}^{m-1} \omega^{t j} p(\omega^{-j}x ) \\
&= \frac{1}{m} \sum_{j=0}^{m-1} p(\omega^{-j}x) \sum_{t=0}^{m-1} \left(\omega^j/x\right)^t \\
&= \frac{x(1-x^{-m})}{m} \sum_{j=0}^{m-1} \frac{p(\omega^{-j}x)}{x-\omega^j}.
}$$
For example, the pgf of a Poisson distribution of parameter $\lambda$ is $p(x) = \exp(\lambda(x-1))$. With $m=2$, $\omega=-1$ and the pgf of $2Y$ will be
$$\eqalign{
q(x) &= \frac{x(1-x^{-2})}{2} \sum_{j=0}^{2-1} \frac{p((-1)^{-j}x)}{x-(-1)^j} \\
&= \frac{x-1/x}{2} \left(\frac{\exp(\lambda(x-1))}{x-1} + \frac{\exp(\lambda(-x-1))}{x+1}\right) \\
&= \exp(-\lambda) \left(\frac{\sinh (\lambda x)}{x}+\cosh (\lambda x)\right).
}$$
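As a quick numerical sanity check (my addition, not part of the original derivation), the closed form for $q$ can be compared in Python against the pgf of $2\lfloor X/2\rfloor$ built directly by grouping the Poisson probabilities; the value of $\lambda$ and the evaluation point are arbitrary choices:

```python
import math

lam, x = 2.5, 0.7
# closed form from the derivation: q(x) = exp(-lam) * (sinh(lam*x)/x + cosh(lam*x))
q_closed = math.exp(-lam) * (math.sinh(lam * x) / x + math.cosh(lam * x))
# direct pgf of 2*floor(X/2): sum over k of [P(X=2k) + P(X=2k+1)] * x^(2k)
q_direct = sum((math.exp(-lam) * lam**(2 * k) / math.factorial(2 * k)
                + math.exp(-lam) * lam**(2 * k + 1) / math.factorial(2 * k + 1))
               * x**(2 * k)
               for k in range(40))
assert abs(q_closed - q_direct) < 1e-12
print(round(q_closed, 6))
```

The two values agree to floating-point precision, which is a reassuring check on the roots-of-unity algebra above.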
One use of this approach is to compute moments of $X$ and $mY$. The value of the $k^\text{th}$ derivative of the pgf evaluated at $x=1$ is the $k^\text{th}$ factorial moment. The $k^\text{th}$ moment is a linear combination of the first $k$ factorial moments. Using these observations we find, for instance, that for a Poisson distributed $X$, its mean (which is the first factorial moment) equals $\lambda$, the mean of $2\lfloor(X/2)\rfloor$ equals $\lambda- \frac{1}{2} + \frac{1}{2} e^{-2\lambda}$, and the mean of $3\lfloor(X/3)\rfloor$ equals $\lambda -1+e^{-3 \lambda /2} \left(\frac{\sin \left(\frac{\sqrt{3} \lambda }{2}\right)}{\sqrt{3}}+\cos \left(\frac{\sqrt{3} \lambda}{2}\right)\right)$:
The means for $m=1,2,3$ are shown in blue, red, and yellow, respectively, as functions of $\lambda$: asymptotically, the mean drops by $(m-1)/2$ compared to the original Poisson mean.
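The closed-form means quoted above can be verified by direct summation over the Poisson pmf; here is a small Python sketch (the truncation point `kmax` and the value of $\lambda$ are my own illustrative choices):

```python
import math

def mean_floor_scaled(lam, m, kmax=200):
    # E[m * floor(X/m)] for X ~ Poisson(lam), by truncated summation over the pmf
    p = math.exp(-lam)          # P(X = 0)
    total = 0.0
    for k in range(kmax):
        total += m * (k // m) * p
        p *= lam / (k + 1)      # advance to P(X = k + 1)
    return total

lam = 3.0
closed_m2 = lam - 0.5 + 0.5 * math.exp(-2 * lam)
closed_m3 = (lam - 1 + math.exp(-1.5 * lam)
             * (math.sin(math.sqrt(3) * lam / 2) / math.sqrt(3)
                + math.cos(math.sqrt(3) * lam / 2)))
assert abs(mean_floor_scaled(lam, 2) - closed_m2) < 1e-9
assert abs(mean_floor_scaled(lam, 3) - closed_m3) < 1e-9
print("closed-form means for m = 2, 3 agree with direct summation")
```

Updating the pmf recursively with `p *= lam / (k + 1)` avoids the enormous factorials that a naive `lam**k / factorial(k)` would hit at large `k`.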
Similar formulas for the variances can be obtained. (They get messy as $m$ rises and so are omitted. One thing they definitively establish is that when $m \gt 1$ no multiple of $Y$ is Poisson: it does not have the characteristic equality of mean and variance.) Here is a plot of the variances as a function of $\lambda$ for $m=1,2,3$:
It is interesting that for larger values of $\lambda$ the variances increase. Intuitively, this is due to two competing phenomena: the floor function is effectively binning groups of values that originally were distinct; this must cause the variance to decrease. At the same time, as we have seen, the means are changing, too (because each bin is represented by its smallest value); this must cause a term equal to the square of the difference of means to be added back. The increase in variance for large $\lambda$ becomes larger with larger values of $m$.
The behavior of the variance of $mY$ with $m$ is surprisingly complex. Let's end with a quick simulation (in R) showing what it can do. The plots show the difference between the variance of $m\lfloor X/m \rfloor$ and the variance of $X$ for Poisson distributed $X$ with various values of $\lambda$ ranging from $1$ through $5000$. In all cases the plots appear to have reached their asymptotic values at the right.
set.seed(17)
par(mfrow=c(3,4))
temp <- sapply(c(1,2,5,10,20,50,100,200,500,1000,2000,5000), function(lambda) {
  x <- rpois(20000, lambda)
  v <- sapply(1:floor(lambda + 4*sqrt(lambda)),
              function(m) var(floor(x/m)*m) - var(x))
  plot(v, type="l", xlab="", ylab="Increased variance",
       main=toString(lambda), cex.main=.85, col="Blue", lwd=2)
})
|
What is the distribution of the rounded down average of Poisson random variables?
|
A generalization of the question asks for the distribution of $Y = \lfloor X/m \rfloor$ when the distribution of $X$ is known and supported on the natural numbers. (In the question, $X$ has a Poisson
|
What is the distribution of the rounded down average of Poisson random variables?
A generalization of the question asks for the distribution of $Y = \lfloor X/m \rfloor$ when the distribution of $X$ is known and supported on the natural numbers. (In the question, $X$ has a Poisson distribution of parameter $\lambda = \lambda_1 + \lambda_2 + \cdots + \lambda_n$ and $m=n$.)
The distribution of $Y$ is easily determined by the distribution of $mY$, whose probability generating function (pgf) can be determined in terms of the pgf of $X$. Here's an outline of the derivation.
Write $p(x) = p_0 + p_1 x + \cdots + p_n x^n + \cdots$ for the pgf of $X$, where (by definition) $p_n = \Pr(X=n)$. $mY$ is constructed from $X$ in such a way that its pgf, $q$, is
$$\eqalign{q(x) &=& \left(p_0 + p_1 + \cdots + p_{m-1}\right) + \left(p_m + p_{m+1} + \cdots + p_{2m-1}\right)x^m + \cdots + \\&&\left(p_{nm} + p_{nm+1} + \cdots + p_{(n+1)m-1}\right)x^{nm} + \cdots.}$$
Because this converges absolutely for $|x| \le 1$, we can rearrange the terms into a sum of pieces of the form
$$D_{m,t}p(x) = p_t + p_{t+m}x^m + \cdots + p_{t + nm}x^{nm} + \cdots$$
for $t=0, 1, \ldots, m-1$. The power series of the functions $x^t D_{m,t}p$ consist of every $m^\text{th}$ term of the series of $p$ starting with the $t^\text{th}$: this is sometimes called a decimation of $p$. Google searches presently don't turn up much useful information on decimations, so for completeness, here's a derivation of a formula.
Let $\omega$ be any primitive $m^\text{th}$ root of unity; for instance, take $\omega = \exp(2 i \pi / m)$. Then it follows from $\omega^m=1$ and $\sum_{j=0}^{m-1}\omega^j = 0$ that
$$x^t D_{m,t}p(x) = \frac{1}{m}\sum_{j=0}^{m-1} \omega^{t j} p(x/\omega^j).$$
To see this, note that the operator $x^t D_{m,t}$ is linear, so it suffices to check the formula on the basis $\{1, x, x^2, \ldots, x^n, \ldots \}$. Applying the right hand side to $x^n$ gives
$$x^t D_{m,t}[x^n] = \frac{1}{m}\sum_{j=0}^{m-1} \omega^{t j} x^n \omega^{-nj}= \frac{x^n}{m}\sum_{j=0}^{m-1} \omega^{(t-n) j}.$$
When $t$ and $n$ differ by a multiple of $m$, each term in the sum equals $1$ and we obtain $x^n$. Otherwise, the terms cycle through powers of $\omega^{t-n}$ and these sum to zero. Whence this operator preserves all powers of $x$ congruent to $t$ modulo $m$ and kills all the others: it is precisely the desired projection.
A formula for $q$ follows readily by changing the order of summation and recognizing one of the sums as geometric, thereby writing it in closed form:
$$\eqalign{
q(x) &= \sum_{t=0}^{m-1} (D_{m,t}[p])(x) \\
&= \sum_{t=0}^{m-1} x^{-t} \frac{1}{m} \sum_{j=0}^{m-1} \omega^{t j} p(\omega^{-j}x ) \\
&= \frac{1}{m} \sum_{j=0}^{m-1} p(\omega^{-j}x) \sum_{t=0}^{m-1} \left(\omega^j/x\right)^t \\
&= \frac{x(1-x^{-m})}{m} \sum_{j=0}^{m-1} \frac{p(\omega^{-j}x)}{x-\omega^j}.
}$$
For example, the pgf of a Poisson distribution of parameter $\lambda$ is $p(x) = \exp(\lambda(x-1))$. With $m=2$, $\omega=-1$ and the pgf of $2Y$ will be
$$\eqalign{
q(x) &= \frac{x(1-x^{-2})}{2} \sum_{j=0}^{2-1} \frac{p((-1)^{-j}x)}{x-(-1)^j} \\
&= \frac{x-1/x}{2} \left(\frac{\exp(\lambda(x-1))}{x-1} + \frac{\exp(\lambda(-x-1))}{x+1}\right) \\
&= \exp(-\lambda) \left(\frac{\sinh (\lambda x)}{x}+\cosh (\lambda x)\right).
}$$
One use of this approach is to compute moments of $X$ and $mY$. The value of the $k^\text{th}$ derivative of the pgf evaluated at $x=1$ is the $k^\text{th}$ factorial moment. The $k^\text{th}$ moment is a linear combination of the first $k$ factorial moments. Using these observations we find, for instance, that for a Poisson distributed $X$, its mean (which is the first factorial moment) equals $\lambda$, the mean of $2\lfloor(X/2)\rfloor$ equals $\lambda- \frac{1}{2} + \frac{1}{2} e^{-2\lambda}$, and the mean of $3\lfloor(X/3)\rfloor$ equals $\lambda -1+e^{-3 \lambda /2} \left(\frac{\sin \left(\frac{\sqrt{3} \lambda }{2}\right)}{\sqrt{3}}+\cos \left(\frac{\sqrt{3} \lambda}{2}\right)\right)$:
The means for $m=1,2,3$ are shown in blue, red, and yellow, respectively, as functions of $\lambda$: asymptotically, the mean drops by $(m-1)/2$ compared to the original Poisson mean.
Similar formulas for the variances can be obtained. (They get messy as $m$ rises and so are omitted. One thing they definitively establish is that when $m \gt 1$ no multiple of $Y$ is Poisson: it does not have the characteristic equality of mean and variance.) Here is a plot of the variances as a function of $\lambda$ for $m=1,2,3$:
It is interesting that for larger values of $\lambda$ the variances increase. Intuitively, this is due to two competing phenomena: the floor function is effectively binning groups of values that originally were distinct; this must cause the variance to decrease. At the same time, as we have seen, the means are changing, too (because each bin is represented by its smallest value); this must cause a term equal to the square of the difference of means to be added back. The increase in variance for large $\lambda$ becomes larger with larger values of $m$.
The behavior of the variance of $mY$ with $m$ is surprisingly complex. Let's end with a quick simulation (in R) showing what it can do. The plots show the difference between the variance of $m\lfloor X/m \rfloor$ and the variance of $X$ for Poisson distributed $X$ with various values of $\lambda$ ranging from $1$ through $5000$. In all cases the plots appear to have reached their asymptotic values at the right.
set.seed(17)
par(mfrow=c(3,4))
temp <- sapply(c(1,2,5,10,20,50,100,200,500,1000,2000,5000), function(lambda) {
  x <- rpois(20000, lambda)
  v <- sapply(1:floor(lambda + 4*sqrt(lambda)),
              function(m) var(floor(x/m)*m) - var(x))
  plot(v, type="l", xlab="", ylab="Increased variance",
       main=toString(lambda), cex.main=.85, col="Blue", lwd=2)
})
|
What is the distribution of the rounded down average of Poisson random variables?
A generalization of the question asks for the distribution of $Y = \lfloor X/m \rfloor$ when the distribution of $X$ is known and supported on the natural numbers. (In the question, $X$ has a Poisson
|
12,970
|
What is the distribution of the rounded down average of Poisson random variables?
|
As Michael Chernick says, if the individual random variables are independent then the sum is Poisson with parameter (mean and variance) $\sum_{i=1}^{n} \lambda_i$, which you might call $\lambda$.
Dividing by $n$ reduces the mean to $\lambda / n$ and variance $\lambda / n^2$ so the variance will be less than the equivalent Poisson distribution. As Michael says, not all values will be integers.
Using the floor function reduces the mean slightly, by about $\frac12 -\frac{1}{2n}$, and affects the variance slightly too, though in a more complicated manner. Although you have integer values, the variance will still be substantially less than the mean and so you will have a narrower distribution than the Poisson.
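A rough Monte Carlo check of that mean reduction, written as a Python sketch (the sampler `rpois` is a small helper of mine using Knuth's multiplicative method, and $n$, $\lambda_i$ are illustrative values I chose):

```python
import math, random

def rpois(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(17)
n, lam_i, trials = 5, 4.0, 200_000     # illustrative values, not from the question
drop = 0.0
for _ in range(trials):
    s = sum(rpois(lam_i, rng) for _ in range(n))
    drop += s / n - s // n             # fractional part lost to the floor
drop /= trials
print(round(drop, 2))                  # close to 1/2 - 1/(2n) = 0.40 for n = 5
```

For moderately large $\lambda$ the fractional part of the average is nearly uniform on $\{0, 1/n, \ldots, (n-1)/n\}$, which is where the $\frac12 - \frac{1}{2n}$ figure comes from.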
|
What is the distribution of the rounded down average of Poisson random variables?
|
As Michael Chernick says, if the individual random variables are independent then the sum is Poisson with parameter (mean and variance) $\sum_{i=1}^{n} \lambda_i$, which you might call $\lambda$.
D
|
What is the distribution of the rounded down average of Poisson random variables?
As Michael Chernick says, if the individual random variables are independent then the sum is Poisson with parameter (mean and variance) $\sum_{i=1}^{n} \lambda_i$, which you might call $\lambda$.
Dividing by $n$ reduces the mean to $\lambda / n$ and variance $\lambda / n^2$ so the variance will be less than the equivalent Poisson distribution. As Michael says, not all values will be integers.
Using the floor function reduces the mean slightly, by about $\frac12 -\frac{1}{2n}$, and affects the variance slightly too, though in a more complicated manner. Although you have integer values, the variance will still be substantially less than the mean and so you will have a narrower distribution than the Poisson.
|
What is the distribution of the rounded down average of Poisson random variables?
As Michael Chernick says, if the individual random variables are independent then the sum is Poisson with parameter (mean and variance) $\sum_{i=1}^{n} \lambda_i$, which you might call $\lambda$.
D
|
12,971
|
What is the distribution of the rounded down average of Poisson random variables?
|
The probability mass function of the average of $n$ independent Poisson random variables can be written down explicitly, though the answer might
not help you very much. As Michael Chernick noted in comments
on his own answer,
the sum $\sum_i X_i$ of independent Poisson random variables $X_i$ with
respective parameters $\lambda_i$ is a Poisson random variable with parameter $\lambda = \sum_i \lambda_i$. Hence,
$$P\left\{ \sum_{i=1}^n X_i= k\right\} = \exp(-\lambda)\frac{\lambda^k}{k!},
~~ k = 0, 1, 2, \ldots,$$
Thus, $\hat{Y} = n^{-1} \sum_{i=1}^n X_i$ is a random variable taking on value $k/n$ with probability $\exp(-\lambda)\frac{\lambda^k}{k!}$. Note that
$\hat{Y}$ is not an
integer-valued random variable (though it does take on uniformly-spaced
rational values). It follows easily that
$Y = \lfloor \hat{Y} \rfloor$ is an integer-valued random variable
taking on value $m$ with probability
$$P\{Y = m\} = P\left\{\left\lfloor \frac{1}{n}\sum_{i=1}^n X_i
\right\rfloor = m\right\}
= \exp(-\lambda)\sum_{i=0}^{n-1}\frac{\lambda^{mn+i}}{(mn+i)!},
~~ m = 0, 1, 2, \ldots,$$
This is not the probability mass function of a Poisson random
variable. Formulas for the mean and variance can be written down using this
probability mass function, but they don't obviously
lead to nice simple answers in terms of $\lambda$ and $n$.
Approximate values can be obtained as pointed out by Henry.
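The pmf above can be checked numerically; this Python sketch (with $n$ and $\lambda$ chosen arbitrarily by me) confirms that it sums to 1 and agrees with grouping the Poisson pmf directly:

```python
import math

def pmf_floor_avg(m, n, lam):
    # P{ floor( (X_1 + ... + X_n) / n ) = m } with lam = lam_1 + ... + lam_n
    return sum(math.exp(-lam) * lam**(m * n + i) / math.factorial(m * n + i)
               for i in range(n))

n, lam = 3, 4.0
# the probabilities should sum to 1 (truncating at m = 49 loses a negligible tail)
total = sum(pmf_floor_avg(m, n, lam) for m in range(50))
assert abs(total - 1.0) < 1e-12
# ...and agree with grouping the Poisson pmf by hand, e.g. for m = 2
direct = sum(math.exp(-lam) * lam**k / math.factorial(k)
             for k in range(2 * n, 3 * n))
assert abs(pmf_floor_avg(2, n, lam) - direct) < 1e-12
print("pmf formula checks out")
```

Each value $m$ simply collects the Poisson mass on $\{mn, mn+1, \ldots, mn+n-1\}$, which is exactly what the formula says.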
|
What is the distribution of the rounded down average of Poisson random variables?
|
The probability mass function of the average of $n$ independent Poisson random variables can be written down explicitly, though the answer might
not help you very much. As Michael Chernick noted in c
|
What is the distribution of the rounded down average of Poisson random variables?
The probability mass function of the average of $n$ independent Poisson random variables can be written down explicitly, though the answer might
not help you very much. As Michael Chernick noted in comments
on his own answer,
the sum $\sum_i X_i$ of independent Poisson random variables $X_i$ with
respective parameters $\lambda_i$ is a Poisson random variable with parameter $\lambda = \sum_i \lambda_i$. Hence,
$$P\left\{ \sum_{i=1}^n X_i= k\right\} = \exp(-\lambda)\frac{\lambda^k}{k!},
~~ k = 0, 1, 2, \ldots,$$
Thus, $\hat{Y} = n^{-1} \sum_{i=1}^n X_i$ is a random variable taking on value $k/n$ with probability $\exp(-\lambda)\frac{\lambda^k}{k!}$. Note that
$\hat{Y}$ is not an
integer-valued random variable (though it does take on uniformly-spaced
rational values). It follows easily that
$Y = \lfloor \hat{Y} \rfloor$ is an integer-valued random variable
taking on value $m$ with probability
$$P\{Y = m\} = P\left\{\left\lfloor \frac{1}{n}\sum_{i=1}^n X_i
\right\rfloor = m\right\}
= \exp(-\lambda)\sum_{i=0}^{n-1}\frac{\lambda^{mn+i}}{(mn+i)!},
~~ m = 0, 1, 2, \ldots,$$
This is not the probability mass function of a Poisson random
variable. Formulas for the mean and variance can be written down using this
probability mass function, but they don't obviously
lead to nice simple answers in terms of $\lambda$ and $n$.
Approximate values can be obtained as pointed out by Henry.
|
What is the distribution of the rounded down average of Poisson random variables?
The probability mass function of the average of $n$ independent Poisson random variables can be written down explicitly, though the answer might
not help you very much. As Michael Chernick noted in c
|
12,972
|
What is the distribution of the rounded down average of Poisson random variables?
|
Y will not be Poisson. Note that Poisson random variables take on non-negative integer values. Once you divide by a constant, you create a random variable that can have non-integer values. It will still have the shape of the Poisson; it is just that the discrete probabilities may occur at non-integer points.
|
What is the distribution of the rounded down average of Poisson random variables?
|
Y will not be Poisson. Note that Poisson random variables take on non-negative integer values. Once you divide by a constant, you create a random variable that can have non-integer values. It will st
|
What is the distribution of the rounded down average of Poisson random variables?
Y will not be Poisson. Note that Poisson random variables take on non-negative integer values. Once you divide by a constant, you create a random variable that can have non-integer values. It will still have the shape of the Poisson; it is just that the discrete probabilities may occur at non-integer points.
|
What is the distribution of the rounded down average of Poisson random variables?
Y will not be Poisson. Note that Poisson random variables take on non-negative integer values. Once you divide by a constant, you create a random variable that can have non-integer values. It will st
|
12,973
|
Why is my R-squared so low when my t-statistics are so large?
|
The $t$-values and $R^2$ are used to judge very different things. The $t$-values are used to judge the accuracy of your estimate of the $\beta_i$'s, but $R^2$ measures the amount of variation in your response variable explained by your covariates. Suppose you are estimating a regression model with $n$ observations,
$$
Y_i = \beta_0 + \beta_1X_{1i} + ...+ \beta_kX_{ki}+\epsilon_i
$$
where $\epsilon_i\overset{i.i.d}{\sim}N(0,\sigma^2)$, $i=1,...,n$.
Large $t$-values (in absolute value) lead you to reject the null hypothesis that $\beta_i=0$. This means you can be confident that you have correctly estimated the sign of the coefficient. Also, if $|t| > 4$ and you have $n>5$, then 0 is not in a 99% confidence interval for the coefficient. The $t$-value for a coefficient $\beta_i$ is the difference between the estimate $\hat{\beta_i}$ and 0, normalized by the standard error $se\{\hat{\beta_i}\}$.
$$
t=\frac{\hat{\beta_i}}{se\{\hat{\beta_i}\}}
$$
which is simply the estimate divided by a measure of its variability. If you have a large enough dataset, you will always have statistically significant (large) $t$-values. This does not necessarily mean your covariates explain much of the variation in the response variable.
As @Stat mentioned, $R^2$ measures the amount of variation in your response variable explained by your covariates. For more about $R^2$, see Wikipedia. In your case, it appears you have a large enough data set to accurately estimate the $\beta_i$'s, but your covariates do a poor job of explaining and/or predicting the response values.
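To illustrate the point, here is a short Python simulation (all numbers are my own illustrative choices): a small but genuinely nonzero slope in a large, noisy sample yields a large $t$-statistic alongside a tiny $R^2$:

```python
import math, random

random.seed(1)
n = 10_000
# a small but real slope (0.1) buried in unit-variance noise
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.1 * xi + random.gauss(0, 1) for xi in x]

# ordinary least squares for y = a + b*x
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b = sxy / sxx
a = my - b * mx
sse = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
sst = sum((yi - my) ** 2 for yi in y)
r2 = 1 - sse / sst
t = b / math.sqrt(sse / (n - 2) / sxx)   # t = estimate / standard error
print(f"t = {t:.1f}, R^2 = {r2:.3f}")    # t far above 2, R^2 near 0.01
```

The slope is estimated very precisely (hence the large $t$), yet $x$ explains only about 1% of the variation in $y$.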
|
Why is my R-squared so low when my t-statistics are so large?
|
The $t$-values and $R^2$ are used to judge very different things. The $t$-values are used to judge the accuracy of your estimate of the $\beta_i$'s, but $R^2$ measures the amount of variation in your
|
Why is my R-squared so low when my t-statistics are so large?
The $t$-values and $R^2$ are used to judge very different things. The $t$-values are used to judge the accuracy of your estimate of the $\beta_i$'s, but $R^2$ measures the amount of variation in your response variable explained by your covariates. Suppose you are estimating a regression model with $n$ observations,
$$
Y_i = \beta_0 + \beta_1X_{1i} + ...+ \beta_kX_{ki}+\epsilon_i
$$
where $\epsilon_i\overset{i.i.d}{\sim}N(0,\sigma^2)$, $i=1,...,n$.
Large $t$-values (in absolute value) lead you to reject the null hypothesis that $\beta_i=0$. This means you can be confident that you have correctly estimated the sign of the coefficient. Also, if $|t| > 4$ and you have $n>5$, then 0 is not in a 99% confidence interval for the coefficient. The $t$-value for a coefficient $\beta_i$ is the difference between the estimate $\hat{\beta_i}$ and 0, normalized by the standard error $se\{\hat{\beta_i}\}$.
$$
t=\frac{\hat{\beta_i}}{se\{\hat{\beta_i}\}}
$$
which is simply the estimate divided by a measure of its variability. If you have a large enough dataset, you will always have statistically significant (large) $t$-values. This does not necessarily mean your covariates explain much of the variation in the response variable.
As @Stat mentioned, $R^2$ measures the amount of variation in your response variable explained by your covariates. For more about $R^2$, see Wikipedia. In your case, it appears you have a large enough data set to accurately estimate the $\beta_i$'s, but your covariates do a poor job of explaining and/or predicting the response values.
|
Why is my R-squared so low when my t-statistics are so large?
The $t$-values and $R^2$ are used to judge very different things. The $t$-values are used to judge the accuracy of your estimate of the $\beta_i$'s, but $R^2$ measures the amount of variation in your
|
12,974
|
Why is my R-squared so low when my t-statistics are so large?
|
To say the same thing as caburke but more simply: you are very confident that the average response caused by your variables is not zero. But there are lots of other things that you don't have in the regression that cause the response to jump around.
|
Why is my R-squared so low when my t-statistics are so large?
|
To say the same thing as caburke but more simply: you are very confident that the average response caused by your variables is not zero. But there are lots of other things that you don't have in the
|
Why is my R-squared so low when my t-statistics are so large?
To say the same thing as caburke but more simply: you are very confident that the average response caused by your variables is not zero. But there are lots of other things that you don't have in the regression that cause the response to jump around.
|
Why is my R-squared so low when my t-statistics are so large?
To say the same thing as caburke but more simply: you are very confident that the average response caused by your variables is not zero. But there are lots of other things that you don't have in the
|
12,975
|
Why is my R-squared so low when my t-statistics are so large?
|
Could it be that your predictors trend linearly with your response variable (the slope is significantly different from zero), which makes the t-values significant, but the R-squared is low because the errors are large? That would mean the variability in your data is large, and thus your regression model is not a good fit (predictions aren't as accurate).
Just my 2 cents.
Perhaps this post can help: http://blog.minitab.com/blog/adventures-in-statistics/how-to-interpret-a-regression-model-with-low-r-squared-and-low-p-values
|
Why is my R-squared so low when my t-statistics are so large?
|
Could it be that although your predictors are trending linearly in terms of your response variable (slope is significantly different from zero), which makes the t values significant, but the R squared
|
Why is my R-squared so low when my t-statistics are so large?
Could it be that your predictors trend linearly with your response variable (the slope is significantly different from zero), which makes the t-values significant, but the R-squared is low because the errors are large? That would mean the variability in your data is large, and thus your regression model is not a good fit (predictions aren't as accurate).
Just my 2 cents.
Perhaps this post can help: http://blog.minitab.com/blog/adventures-in-statistics/how-to-interpret-a-regression-model-with-low-r-squared-and-low-p-values
|
Why is my R-squared so low when my t-statistics are so large?
Could it be that although your predictors are trending linearly in terms of your response variable (slope is significantly different from zero), which makes the t values significant, but the R squared
|
12,976
|
Why is my R-squared so low when my t-statistics are so large?
|
Several answers given are close but still wrong.
"The t-values are used to judge the accuracy of your estimate of the βi's"
is the one that concerns me the most.
The t-value is merely an indication of the likelihood of random occurrence. Large means unlikely; small means very likely. Positive and negative signs don't matter to the likelihood interpretation.
"R2 measures the amount of variation in your response variable explained by your covariates" is correct.
(I would have commented but am not allowed by this platform yet.)
|
Why is my R-squared so low when my t-statistics are so large?
|
Several answers given are close but still wrong.
"The t-values are used to judge the accurary of your estimate of the βi's"
is the one that concerns me the most.
The T-value is merely an indication
|
Why is my R-squared so low when my t-statistics are so large?
Several answers given are close but still wrong.
"The t-values are used to judge the accuracy of your estimate of the βi's"
is the one that concerns me the most.
The t-value is merely an indication of the likelihood of random occurrence. Large means unlikely; small means very likely. Positive and negative signs don't matter to the likelihood interpretation.
"R2 measures the amount of variation in your response variable explained by your covariates" is correct.
(I would have commented but am not allowed by this platform yet.)
|
Why is my R-squared so low when my t-statistics are so large?
Several answers given are close but still wrong.
"The t-values are used to judge the accurary of your estimate of the βi's"
is the one that concerns me the most.
The T-value is merely an indication
|
12,977
|
Why is my R-squared so low when my t-statistics are so large?
|
To deal with a small R squared, check the following:
1. Is your sample size large enough? If yes, go to step 2; if not, increase your sample size.
2. How many covariates did you use in your model estimation? If more than one, as in your case, deal with the problem of multicollinearity among the covariates, or simply run the regression again, this time without the constant (known as beta zero).
If the problem still persists, you could do a stepwise regression and select the model with a high R squared, but I cannot recommend this because it introduces bias into the covariate estimates.
|
Why is my R-squared so low when my t-statistics are so large?
|
To deal with a small R squared, check the following:
Is your sample size large enough? If yes, do step 2. but if no, increase your sample size.
How many covariates did you use for your m
|
Why is my R-squared so low when my t-statistics are so large?
To deal with a small R squared, check the following:
1. Is your sample size large enough? If yes, go to step 2; if not, increase your sample size.
2. How many covariates did you use in your model estimation? If more than one, as in your case, deal with the problem of multicollinearity among the covariates, or simply run the regression again, this time without the constant (known as beta zero).
If the problem still persists, you could do a stepwise regression and select the model with a high R squared, but I cannot recommend this because it introduces bias into the covariate estimates.
|
Why is my R-squared so low when my t-statistics are so large?
To deal with a small R squared, check the following:
Is your sample size large enough? If yes, do step 2. but if no, increase your sample size.
How many covariates did you use for your m
|
12,978
|
Why is the Cauchy Distribution so useful?
|
In addition to its usefulness in physics, the Cauchy distribution is commonly used in models in finance to represent deviations in returns from the predictive model. The reason for this is that practitioners in finance are wary of using models that have light-tailed distributions (e.g., the normal distribution) on their returns, and they generally prefer to go the other way and use a distribution with very heavy tails (e.g., the Cauchy). The history of finance is littered with catastrophic predictions based on models that did not have heavy enough tails in their distributions. The Cauchy distribution has sufficiently heavy tails that its moments do not exist, and so it is an ideal candidate to give an error term with extremely heavy tails.
Note that this issue of the fatness of tails in error terms in finance models was one of the main themes of the popular critique by Taleb (2007). In that book, Taleb points out instances where financial models have used the normal distribution for error terms, and he notes that this underestimates the true probability of extreme events, which are particularly important in finance. (In my view this book gives an exaggerated critique, since models using heavy-tailed deviations are in fact quite common in finance. In any case, the popularity of this book shows the importance of the issue.)
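To see how heavy the Cauchy tails are in practice, here is a short Python sketch (sampler, seed, and constants are my own choices) comparing the simulated tail probability $P(|C| > 10)$ with the exact value; for a standard normal the corresponding tail probability is around $1.5 \times 10^{-23}$:

```python
import math, random

rng = random.Random(7)
N = 200_000
# inverse-CDF sampling: tan(pi*(U - 1/2)) is standard Cauchy for U ~ Uniform(0,1)
draws = (math.tan(math.pi * (rng.random() - 0.5)) for _ in range(N))
tail = sum(abs(c) > 10 for c in draws) / N
# exact tail: P(|C| > c) = (2/pi) * arctan(1/c); for c = 10 this is about 6.3%
exact = (2 / math.pi) * math.atan(1 / 10)
print(round(tail, 3), round(exact, 3))
```

So roughly one draw in sixteen lies more than ten scale units from the center, which is why a Cauchy error term is a natural stand-in for "extreme events happen often."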
|
12,979
|
Why is the Cauchy Distribution so useful?
|
The standard Cauchy distribution arises as the ratio of two independent standard normal random variables: if $X \sim N(0,1)$ and $Y \sim N(0,1)$ are independent, then $\tfrac{X}{Y} \sim \operatorname{Cauchy}(0,1)$.
The Cauchy distribution is important in physics (where it’s known as the Lorentz distribution) because it’s the solution to the differential equation describing forced resonance. In spectroscopy, it is the description of the shape of spectral lines which are subject to homogeneous broadening in which all atoms interact in the same way with the frequency range contained in the line shape.
Applications:
Used in mechanical and electrical theory, physical anthropology, and measurement and calibration problems.
In physics it is called a Lorentzian distribution, where it is the distribution of the energy of an unstable state in quantum mechanics.
Also used to model the points of impact, on a fixed straight line, of particles emitted from a point source.
Source.
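The ratio-of-normals construction is easy to see in a quick simulation; here is a minimal sketch in base R (sample size and seed are arbitrary). The sample median of a standard Cauchy stays near its location 0 and the interquartile range near 2 (the quartiles are at $\pm 1$), even though the running mean never settles down, since the mean does not exist.

```r
# Simulate the ratio of two independent standard normals,
# which follows a standard Cauchy distribution.
set.seed(42)
n <- 1e5
z <- rnorm(n) / rnorm(n)

# Quantile-based summaries are stable...
median(z)   # close to 0
IQR(z)      # close to 2 (Cauchy(0,1) quartiles are -1 and 1)

# ...but the running mean never converges: the mean does not exist.
running_mean <- cumsum(z) / seq_len(n)
plot(running_mean, type = "l", xlab = "n", ylab = "running mean")
```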
|
12,980
|
Choosing between uninformative beta priors
|
First of all, there is no such thing as an uninformative prior. Comparing posterior distributions resulting from five different "uninformative" priors (described below) given different data shows that the choice of "uninformative" prior affects the posterior distribution, especially in cases where the data itself does not provide much information.
"Uninformative" priors for the beta distribution share the property that $\alpha = \beta$, which leads to a symmetric distribution, and $\alpha \le 1, \beta \le 1$. The common choices are: the uniform (Bayes-Laplace) prior ($\alpha = \beta = 1$), the Jeffreys prior ($\alpha = \beta = 1/2$), the "neutral" prior ($\alpha = \beta = 1/3$) proposed by Kerman (2011), and the Haldane prior ($\alpha = \beta = 0$) or its approximation ($\alpha = \beta = \varepsilon$ with small $\varepsilon > 0$) (see also the great Wikipedia article).
The parameters of a beta prior are commonly considered as "pseudocounts" of successes ($\alpha$) and failures ($\beta$), since the posterior distribution of the beta-binomial model after observing $y$ successes in $n$ trials is
$$
\theta \mid y \sim \mathcal{B}(\alpha + y, \beta + n - y)
$$
so the higher $\alpha,\beta$ are, the more influential they are on the posterior. So when choosing $\alpha=\beta=1$ you assume that you "saw" in advance one success and one failure (this may or may not be much, depending on $n$).
At first sight, the Haldane prior seems to be the most "uninformative", since it leads to a posterior mean that is exactly equal to the maximum likelihood estimate
$$
\frac{\alpha + y}{\alpha + y + \beta + n - y} = y / n
$$
However, it leads to improper posterior distributions when $y=0$ or $y=n$, which led Kerman (2011) to suggest his own prior, yielding a posterior median that is as close as possible to the maximum likelihood estimate while still being a proper distribution.
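The pseudocount arithmetic above can be checked directly. The following sketch in base R (the data values $y=3$, $n=10$ are arbitrary) computes the posterior mean and median under each of the priors listed above:

```r
# Posterior is Beta(alpha + y, beta + n - y); with alpha = beta = a,
# the posterior mean is (a + y) / (2a + n).
y <- 3; n <- 10
priors <- c(uniform = 1, jeffreys = 1/2, neutral = 1/3, haldane = 0)

post_mean   <- (priors + y) / (2 * priors + n)
post_median <- qbeta(0.5, priors + y, priors + n - y)

round(rbind(mean = post_mean, median = post_median), 4)
# The Haldane prior reproduces the MLE y/n = 0.3 as the posterior mean;
# the "neutral" prior is constructed so that its posterior median
# lands close to y/n while remaining a proper distribution.
```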
There are a number of arguments for and against each of the "uninformative" priors (see Kerman, 2011; Tuyl et al, 2008). For example, as discussed by Tuyl et al,
. . . care needs to be taken with parameter values below $1$, both for
noninformative and informative priors, as such priors concentrate
their mass close to $0$ and/or $1$ and can suppress the importance of the
observed data.
On the other hand, using uniform priors for small datasets may be very influential (think of it in terms of pseudocounts). You can find much more information and discussion of this topic in multiple papers and handbooks.
So sorry, but there is no single "best", "most uninformative", or "one-size-fits-all" prior. Each of them brings some information into the model.
Kerman, J. (2011). Neutral noninformative and informative conjugate
beta and gamma prior distributions. Electronic Journal of
Statistics, 5, 1450-1470.
Tuyl, F., Gerlach, R. and Mengersen, K. (2008). A Comparison of
Bayes-Laplace, Jeffreys, and Other Priors. The American Statistician,
62(1): 40-44.
|
12,981
|
Why does gap statistic for k-means suggest one cluster, even though there are obviously two of them?
|
Clustering depends on scale, among other things. For discussions of this issue see (inter alia) When should you center and standardize data? and PCA on covariance or correlation?.
Here are your data drawn with a 1:1 aspect ratio, revealing how much the scales of the two variables differ:
To its right, the plot of the gap stats shows the statistics by number of clusters ($k$) with standard errors drawn with vertical segments and the optimal value of $k$ marked with a vertical dashed blue line. According to the clusGap help,
The default method "firstSEmax" looks for the smallest $k$ such that its value $f(k)$ is not more than 1 standard error away from the first local maximum.
Other methods behave similarly. This criterion does not cause any of the gap statistics to stand out, resulting in an estimate of $k=1$.
Choice of scale depends on the application, but a reasonable default starting point is a measure of dispersion of the data, such as the MAD or standard deviation. This plot repeats the analysis after recentering to zero and rescaling to make a unit standard deviation for each component $a$ and $b$:
The $k=2$ K-means solution is indicated by varying symbol type and color in the scatterplot of the data at left. Among the set $k\in\{1,2,3,4,5\}$, $k=2$ is clearly favored in the gap statistics plot at right: it is the first local maximum and the stats for smaller $k$ (that is, $k=1$) are significantly lower. Larger values of $k$ are likely overfit for such a small dataset, and none are significantly better than $k=2$. They are shown here only to illustrate the general method.
Here is R code to produce these figures. The data approximately match those shown in the question.
library(cluster)
xy <- matrix(c(29,391, 31,402, 31,380, 32.5,391, 32.5,360, 33,382, 33,371,
34,405, 34,400, 34.5,404, 36,343, 36,320, 36,303, 37,344,
38,358, 38,356, 38,351, 39,318, 40,322, 40, 341), ncol=2, byrow=TRUE)
colnames(xy) <- c("a", "b")
title <- "Raw data"
par(mfrow=c(1,2))
for (i in 1:2) {
#
# Estimate optimal cluster count and perform K-means with it.
#
gap <- clusGap(xy, kmeans, K.max=10, B=500)
k <- maxSE(gap$Tab[, "gap"], gap$Tab[, "SE.sim"], method="Tibs2001SEmax")
fit <- kmeans(xy, k)
#
# Plot the results.
#
pch <- ifelse(fit$cluster==1,24,16); col <- ifelse(fit$cluster==1,"Red", "Black")
plot(xy, asp=1, main=title, pch=pch, col=col)
plot(gap, main=paste("Gap stats,", title))
abline(v=k, lty=3, lwd=2, col="Blue")
#
# Prepare for the next step.
#
xy <- apply(xy, 2, scale)
title <- "Standardized data"
}
|
12,982
|
Why does gap statistic for k-means suggest one cluster, even though there are obviously two of them?
|
I do not think you are doing anything wrong in your use of the GAP statistic. I believe, though, that you are partially misled by the scale of the data in the visualization. You see two clusters, but the spread in the x direction is actually rather small compared to the y direction; based on that you would expect two elongated clusters, and in effect one mode of variance dominates the other. Since the GAP statistic assumes a null model with a single component ($K=1$) and then tries to reject this model in favor of an alternative with $K>1$, what you observe is the inability to reject the null. Please note that the inability to reject the null hypothesis does not make it true. The methodological paper describing the GAP statistic is available online if you want to check the technical particulars further.
I ran your model using a Gaussian Mixture Model (GMM - a generalization of $k$-means, see this thread for more on that matter). Sure enough, in that case too the GAP statistic suggested a single cluster. The BIC also suggested a single cluster. The AIC suggested 4 clusters (!), which is a clear sign we start to overfit. The sample used is not extremely big; you have 21 points where one mode of variance dominates the other. It is a bit of a stretch to fit two 2-D clusters (i.e., two 2-D means and two $2 \times 2$ covariance matrices) with just 21 2-D points. :)
(In the case of $k$-means your covariance matrix is more structured (you do not look at covariances), but I would not focus on that matter here.)
EDIT: Just for completeness: @whuber showed that two clusters would appear as optimal in $k$-means if one standardised his data; the GAP criterion applied on the GMM fit will also give $K=2$ as the optimal number of clusters if one standardises the data.
|
12,983
|
Why does gap statistic for k-means suggest one cluster, even though there are obviously two of them?
|
I had the same problem as the original poster. The R documentation currently says that the original default setting of d.power = 1 was incorrect and should be replaced by d.power = 2: "The default, d.power = 1, corresponds to the “historical” R implementation, whereas d.power = 2 corresponds to what Tibshirani et al had proposed. This was found by Juan Gonzalez, in 2016-02."
Consequently, changing to d.power = 2 solved the problem for me.
https://www.rdocumentation.org/packages/cluster/versions/2.0.6/topics/clusGap
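For concreteness, here is how the argument is passed; this is a sketch, with random standardized data standing in for the poster's dataset (any small numeric matrix will do, and B is kept small for speed):

```r
library(cluster)  # clusGap() lives in the recommended "cluster" package

set.seed(1)
xy <- scale(matrix(rnorm(100), ncol = 2))

# d.power = 2 uses squared distances, as Tibshirani et al. (2001) proposed;
# the historical default d.power = 1 is what caused the K = 1 estimates.
gap <- clusGap(xy, FUNcluster = kmeans, K.max = 5, B = 50, d.power = 2)
print(gap, method = "firstSEmax")
```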
|
12,984
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
We already know $\gamma$ is bounded in $[-1,1]$.
The correlation matrix must be positive semidefinite, and hence its principal minors must be nonnegative.
Thus,
\begin{align*}
1(1-\gamma^2)-0.6(0.6-0.8\gamma)+0.8(0.6\gamma-0.8) &\geq 0\\
-\gamma^2+0.96\gamma &\geq 0\\
\implies \gamma(\gamma-0.96) &\leq 0 \text{ and } -1 \leq \gamma \leq 1 \\
\implies 0 \leq \gamma &\leq 0.96
\end{align*}
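The interval can be verified numerically; a short R check of the smallest eigenvalue of the completed matrix (nonnegative exactly when the matrix is positive semidefinite), evaluated inside, at, and just beyond the endpoints:

```r
# Smallest eigenvalue of the completed correlation matrix as a function of gamma.
min_eig <- function(g) {
  R <- matrix(c(1,   0.6, 0.8,
                0.6, 1,   g,
                0.8, g,   1), nrow = 3)
  min(eigen(R, symmetric = TRUE, only.values = TRUE)$values)
}

sapply(c(-0.05, 0, 0.5, 0.96, 1), min_eig)
# negative outside [0, 0.96]; (numerically) zero at the endpoints
```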
|
12,985
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Here's a simpler (and perhaps more intuitive) solution:
Think of the covariance as an inner product over an abstract vector space. Then, the entries in the correlation matrix are $\cos\langle\mathbf{v}_i,\mathbf{v}_j\rangle$ for the vectors $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$, where the angle bracket $\langle\mathbf{v}_i,\mathbf{v}_j\rangle$ denotes the angle between $\mathbf{v}_i$ and $\mathbf{v}_j$.
It is not hard to visualize that $\langle\mathbf{v}_2,\mathbf{v}_3\rangle$ is bounded by $|\langle\mathbf{v}_1,\mathbf{v}_2\rangle\pm\langle\mathbf{v}_1,\mathbf{v}_3\rangle|$.
The bound on its cosine ($\gamma$) is thus $\cos\left[\langle\mathbf{v}_1,\mathbf{v}_2\rangle\pm\langle\mathbf{v}_1,\mathbf{v}_3\rangle\right]$. Basic trigonometry then gives $\gamma\in[0.6\times 0.8 - 0.6\times 0.8, 0.6\times 0.8 + 0.6\times 0.8] = [0, 0.96]$.
Edit: Note that the $0.6\times 0.8 \mp 0.6\times 0.8$ in the last line is really $\cos\langle\mathbf{v}_1,\mathbf{v}_2\rangle\cos\langle\mathbf{v}_1,\mathbf{v}_3\rangle\mp \sin\langle\mathbf{v}_1,\mathbf{v}_3\rangle\sin\langle\mathbf{v}_1,\mathbf{v}_2\rangle$ -- the second appearance of 0.6 and 0.8 occurs by coincidence thanks to $0.6^2+0.8^2=1$.
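The two extreme cosines can be checked in a couple of lines of R (acos recovers the angles whose cosines are 0.6 and 0.8):

```r
# Angles between v1,v2 and v1,v3; the cosines of their sum and difference
# give the admissible range for the third correlation.
a <- acos(0.6)  # angle(v1, v2)
b <- acos(0.8)  # angle(v1, v3)
c(lower = cos(a + b), upper = cos(a - b))
# lower = 0, upper = 0.96 (up to floating point)
```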
|
12,986
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Let us consider the following convex set
$$\Bigg\{ (x,y,z) \in \mathbb R^3 : \begin{bmatrix} 1 & x & y\\ x & 1 & z\\ y & z & 1\end{bmatrix} \succeq \mathrm O_3 \Bigg\}$$
which is a spectrahedron named $3$-dimensional elliptope. Here's a depiction of this elliptope
Intersecting this elliptope with the planes defined by $x=0.6$ and by $y=0.8$, we obtain a line segment whose endpoints are colored in yellow
The boundary of the elliptope is a cubic surface defined by
$$\det \begin{bmatrix} 1 & x & y\\ x & 1 & z\\ y & z & 1\end{bmatrix} = 1 + 2 x y z - x^2 - y^2 - z^2 = 0$$
If $x=0.6$ and $y=0.8$, then the cubic equation above boils down to the quadratic equation
$$0.96 z - z^2 = z (0.96 - z) = 0$$
Thus, the intersection of the elliptope with the two planes is the line segment parametrized by
$$\{ (0.6, 0.8, t) \mid 0 \leq t \leq 0.96 \}$$
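The quadratic factorization can be confirmed by evaluating the determinant directly; a short R sketch (0.48 is just an arbitrary interior point):

```r
# det of the completed correlation matrix as a function of z;
# its roots 0 and 0.96 are the endpoints of the admissible segment.
det_R <- function(z) {
  det(matrix(c(1,   0.6, 0.8,
               0.6, 1,   z,
               0.8, z,   1), nrow = 3))
}

sapply(c(0, 0.48, 0.96), det_R)
# zero at the endpoints, positive in between
```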
|
12,987
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Playing around with principal minors may be fine on 3 by 3 or maybe 4 by 4 problems, but runs out of gas and numerical stability in higher dimensions.
For a single "free" parameter problem such as this, it's easy to see that the set of all values making the matrix psd will be a single interval. Therefore, it is sufficient to find the minimum and maximum such values. This can easily be accomplished by numerically solving a pair of linear SemiDefinite Programming (SDP) problems:
minimize γ subject to matrix is psd.
maximize γ subject to matrix is psd.
For example, these problems can be formulated and numerically solved using YALMIP under MATLAB.
gamma = sdpvar;
A = [1 .6 .8; .6 1 gamma; .8 gamma 1];
optimize(A >= 0, gamma)    % minimize gamma subject to A psd
optimize(A >= 0, -gamma)   % maximize gamma subject to A psd
Fast, easy, and reliable.
BTW, if the smarty pants interviewer asking the question doesn't know that SemiDefinite Programming, which is well-developed and has sophisticated and easy to use numerical optimizers for reliably solving practical problems, can be used to solve this problem, and many much more difficult variants, tell him/her that this is no longer 1870, and it's time to take advantage of modern computational developments.
|
12,988
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Here is what I meant in my initial comment to the answer and what I perceive @yangle may be speaking about (although I didn't follow/check their computation).
"Matrix should be positive semidefinite" implies the variable vectors are a bunch in Euclidean space. The case of correlation matrix is easier than covariance matrix because the three vector lengths are fixed to be 1. Imagine 3 unit vectors X Y Z and remember that $r$ is the cosine of the angle. So, $\cos \alpha=r_{xy}=0.6$, and $\cos \beta=r_{yz}=0.8$. What might be the boundaries for $\cos \gamma=r_{xz}$? That correlation can take on any value defined by Z circumscribing about Y (keeping angle $r_{yz}=0.8$ with it):
As it spins, two positions are remarkable as ultimate wrt X, both are when Z falls into the plane XY. One is between X and Y, and the other is on the opposite side of Y. These are shown by blue and red vectors. At both these positions exactly the configuration XYZ (correlation matrix) is singular. And these are the minimal and maximal angle (hence correlation) Z can attain wrt X.
Picking the trigonometric formula to compute sum or difference of angles on a plane, we have:
$\cos \gamma = r_{xy} r_{yz} \mp \sqrt{(1-r_{xy}^2)(1-r_{yz}^2)} = [0,0.96]$ as the bounds.
This geometric view is just another (and a specific and simpler in 3D case) look on what @rightskewed expressed in algebraic terms (minors etc.).
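The bound formula is easy to verify numerically (plain Python, plugging in the same $r_{xy}=0.6$ and $r_{yz}=0.8$ from the answer):

```python
import math

r_xy, r_yz = 0.6, 0.8
# cos(beta -+ alpha) = cos(alpha)cos(beta) -+ sin(alpha)sin(beta),
# with sin = sqrt(1 - cos^2) since both angles lie in [0, pi].
s = math.sqrt((1 - r_xy**2) * (1 - r_yz**2))
lo, hi = r_xy * r_yz - s, r_xy * r_yz + s
print(lo, hi)  # ≈ 0.0 and 0.96
```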
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Here is what I meant in my initial comment to the answer and what I perceive @yangle may be speaking about (although I didn't follow/check their computation).
"Matrix should be positive semidefinite"
|
Completing a 3x3 correlation matrix: two coefficients of the three given
Here is what I meant in my initial comment to the answer and what I perceive @yangle may be speaking about (although I didn't follow/check their computation).
"Matrix should be positive semidefinite" implies the variable vectors are a bunch in Euclidean space. The case of correlation matrix is easier than covariance matrix because the three vector lengths are fixed to be 1. Imagine 3 unit vectors X Y Z and remember that $r$ is the cosine of the angle. So, $\cos \alpha=r_{xy}=0.6$, and $\cos \beta=r_{yz}=0.8$. What might be the boundaries for $\cos \gamma=r_{xz}$? That correlation can take on any value defined by Z circumscribing about Y (keeping angle $r_{yz}=0.8$ with it):
As it spins, two positions are remarkable as ultimate wrt X, both are when Z falls into the plane XY. One is between X and Y, and the other is on the opposite side of Y. These are shown by blue and red vectors. At both these positions exactly the configuration XYZ (correlation matrix) is singular. And these are the minimal and maximal angle (hence correlation) Z can attain wrt X.
Picking the trigonometric formula to compute sum or difference of angles on a plane, we have:
$\cos \gamma = r_{xy} r_{yz} \mp \sqrt{(1-r_{xy}^2)(1-r_{yz}^2)} = [0,0.96]$ as the bounds.
This geometric view is just another (and a specific and simpler in 3D case) look on what @rightskewed expressed in algebraic terms (minors etc.).
|
Completing a 3x3 correlation matrix: two coefficients of the three given
Here is what I meant in my initial comment to the answer and what I perceive @yangle may be speaking about (although I didn't follow/check their computation).
"Matrix should be positive semidefinite"
|
12,989
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Every positive semi-definite matrix is a correlation/covariance matrix (and vice versa).
To see this, start with a positive semi-definite matrix $A$ and take its eigen-decomposition (which exists by the spectral theorem, since $A$ is symmetric) $A=UDU^T$ where $U$ is a matrix of orthonormal eigenvectors and $D$ is a diagonal matrix with eigenvalues on the diagonal. Then, let $B= U D^{1/2} U^T$ where $D^{1/2}$ is a diagonal matrix with the square roots of the eigenvalues on the diagonal.
Then, take a vector with i.i.d. mean zero and variance 1 entries, $\mathbf{x}$ and note that $B \mathbf{x}$ also has mean zero, and covariance (and correlation) matrix $A$.
Now, to see every correlation/covariance matrix is positive semi-definite is simple: Let $R=E[\mathbf{x}\mathbf{x}^T]$ be a correlation matrix. Then, $R = R^T$ is easy to see, and $\mathbf{a}^T R \mathbf{a} = E[(\mathbf{a}^T \mathbf{x})^2] \geq 0$ so the Rayleigh quotient is non-negative for any non-zero $\mathbf{a}$ so $R$ is positive semi-definite.
Now, noting that a symmetric matrix is positive semi-definite if and only if its eigenvalues are non-negative, we see that your original approach would work: calculate the characteristic polynomial, look at its roots to see if they are non-negative. Note that testing for positive definiteness is easy with Sylvester's Criterion (as mentioned in another answer's comment; a matrix is positive definite if and only if the leading principal minors all have positive determinant); there are extensions for semidefinite (all principal minors must have non-negative determinant), but you have to check $2^n$ minors in this case, versus just $n$ for positive definite.
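A quick numerical sketch of the $B = UD^{1/2}U^T$ construction (Python/NumPy; the example matrix $A$ is an arbitrary psd choice of mine, not from the original answer):

```python
import numpy as np

# An arbitrary symmetric psd matrix for illustration.
A = np.array([[1.0, 0.6, 0.8],
              [0.6, 1.0, 0.5],
              [0.8, 0.5, 1.0]])

eigvals, U = np.linalg.eigh(A)      # spectral decomposition A = U D U^T
assert eigvals.min() >= -1e-12      # confirm A is psd
B = U @ np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ U.T

# If x has iid mean-0, variance-1 entries, Cov(Bx) = B I B^T = B B^T = A:
print(np.allclose(B @ B.T, A))      # True
```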
|
Completing a 3x3 correlation matrix: two coefficients of the three given
|
Every positive semi-definite matrix is a correlation/covariance matrix (and vice versa).
To see this, start with a positive semi-definite matrix $A$ and take its eigen-decomposition (which exists by t
|
Completing a 3x3 correlation matrix: two coefficients of the three given
Every positive semi-definite matrix is a correlation/covariance matrix (and vice versa).
To see this, start with a positive semi-definite matrix $A$ and take its eigen-decomposition (which exists by the spectral theorem, since $A$ is symmetric) $A=UDU^T$ where $U$ is a matrix of orthonormal eigenvectors and $D$ is a diagonal matrix with eigenvalues on the diagonal. Then, let $B= U D^{1/2} U^T$ where $D^{1/2}$ is a diagonal matrix with the square roots of the eigenvalues on the diagonal.
Then, take a vector with i.i.d. mean zero and variance 1 entries, $\mathbf{x}$ and note that $B \mathbf{x}$ also has mean zero, and covariance (and correlation) matrix $A$.
Now, to see every correlation/covariance matrix is positive semi-definite is simple: Let $R=E[\mathbf{x}\mathbf{x}^T]$ be a correlation matrix. Then, $R = R^T$ is easy to see, and $\mathbf{a}^T R \mathbf{a} = E[(\mathbf{a}^T \mathbf{x})^2] \geq 0$ so the Rayleigh quotient is non-negative for any non-zero $\mathbf{a}$ so $R$ is positive semi-definite.
Now, noting that a symmetric matrix is positive semi-definite if and only if its eigenvalues are non-negative, we see that your original approach would work: calculate the characteristic polynomial, look at its roots to see if they are non-negative. Note that testing for positive definiteness is easy with Sylvester's Criterion (as mentioned in another answer's comment; a matrix is positive definite if and only if the leading principal minors all have positive determinant); there are extensions for semidefinite (all principal minors must have non-negative determinant), but you have to check $2^n$ minors in this case, versus just $n$ for positive definite.
|
Completing a 3x3 correlation matrix: two coefficients of the three given
Every positive semi-definite matrix is a correlation/covariance matrix (and vice versa).
To see this, start with a positive semi-definite matrix $A$ and take its eigen-decomposition (which exists by t
|
12,990
|
Minimum number of layers in a deep neural network
|
"Deep" is a marketing term: you can therefore use it whenever you need to market your multi-layered neural network.
|
Minimum number of layers in a deep neural network
|
"Deep" is a marketing term: you can therefore use it whenever you need to market your multi-layered neural network.
|
Minimum number of layers in a deep neural network
"Deep" is a marketing term: you can therefore use it whenever you need to market your multi-layered neural network.
|
Minimum number of layers in a deep neural network
"Deep" is a marketing term: you can therefore use it whenever you need to market your multi-layered neural network.
|
12,991
|
Minimum number of layers in a deep neural network
|
"Deep"
One of the earliest deep neural networks has three densely connected hidden layers (Hinton et al. (2006)).
"Very Deep"
In 2014 the "very deep" VGG networks Simonyan et al. (2014) consist of 16+ hidden layers.
"Extremely Deep"
In 2016 the "extremely deep" residual networks He et al. (2016) consist of 50 up to 1,000+ hidden layers.
|
Minimum number of layers in a deep neural network
|
"Deep"
One of the earliest deep neural networks has three densely connected hidden layers (Hinton et al. (2006)).
"Very Deep"
In 2014 the "very deep" VGG networks Simonyan et al. (2014) consist of 16
|
Minimum number of layers in a deep neural network
"Deep"
One of the earliest deep neural networks has three densely connected hidden layers (Hinton et al. (2006)).
"Very Deep"
In 2014 the "very deep" VGG networks Simonyan et al. (2014) consist of 16+ hidden layers.
"Extremely Deep"
In 2016 the "extremely deep" residual networks He et al. (2016) consist of 50 up to 1,000+ hidden layers.
|
Minimum number of layers in a deep neural network
"Deep"
One of the earliest deep neural networks has three densely connected hidden layers (Hinton et al. (2006)).
"Very Deep"
In 2014 the "very deep" VGG networks Simonyan et al. (2014) consist of 16
|
12,992
|
Minimum number of layers in a deep neural network
|
As per the literature,
Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003.
https://en.wikipedia.org/wiki/Deep_learning
It is said that
There is no universally agreed upon threshold of depth dividing
shallow learning from deep learning, but most researchers in the field
agree that deep learning has multiple nonlinear layers (CAP > 2) and
Schmidhuber considers CAP > 10 to be very deep learning
A chain of transformations from input to output is a Credit Assignment Path or CAP. For a feedforward neural network, the depth of the CAPs, and thus the depth of the network, is the number of hidden layers plus one.
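As a toy illustration of the CAP counting convention (a hypothetical Python helper of mine; the CAP > 2 and CAP > 10 thresholds come from the quoted passage):

```python
def cap_depth(n_hidden_layers):
    # Feedforward net: CAP depth = number of hidden layers + 1 (the output layer).
    return n_hidden_layers + 1

def classify(n_hidden_layers):
    cap = cap_depth(n_hidden_layers)
    if cap > 10:
        return "very deep"   # Schmidhuber's CAP > 10 threshold
    if cap > 2:
        return "deep"        # the common CAP > 2 convention
    return "shallow"

print(classify(1), classify(3), classify(152))  # shallow deep very deep
```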
|
Minimum number of layers in a deep neural network
|
As per the literature,
Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003.
https://en.w
|
Minimum number of layers in a deep neural network
As per the literature,
Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003.
https://en.wikipedia.org/wiki/Deep_learning
It is said that
There is no universally agreed upon threshold of depth dividing
shallow learning from deep learning, but most researchers in the field
agree that deep learning has multiple nonlinear layers (CAP > 2) and
Schmidhuber considers CAP > 10 to be very deep learning
A chain of transformations from input to output is a Credit Assignment Path or CAP. For a feedforward neural network, the depth of the CAPs, and thus the depth of the network, is the number of hidden layers plus one.
|
Minimum number of layers in a deep neural network
As per the literature,
Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003.
https://en.w
|
12,993
|
Explain model adjustment, in plain English
|
Easiest to explain by way of an example:
Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than those who didn't watch it. Should the government ban football from TV? But men are more likely to watch football than women, and men are also more likely to have a heart attack than women. So the association between football-watching and heart attacks might be explained by a third factor such as sex that affects both. (Sociologists would distinguish here between gender, a cultural construct that is associated with football-watching, and sex, a biological category that is associated with heart-attack incidence, but the two are clearly very strongly correlated so I'm going to ignore that distinction for simplicity.)
Statisticians, and especially epidemiologists, call such a third factor a confounder, and the phenomenon confounding. The most obvious way to remove the problem is to look at the association between football-watching and heart-attack incidence in men and women separately, or in the jargon, to stratify by sex. If we find that the association (if there still is one) is similar in both sexes, we may then choose to combine the two estimates of the association across the two sexes. The resulting estimate of the association between football-watching and heart-attack incidence is then said to be adjusted or controlled for sex.
We would probably also wish to control for other factors in the same way. Age is another obvious one (in fact epidemiologists either stratify or adjust/control almost every association by age and sex). Socio-economic class is probably another. Others can get trickier, e.g. should we adjust for beer consumption while watching the match? Maybe yes, if we're interested in the effect of the stress of watching the match alone; but maybe no, if we're considering banning broadcasting of World Cup football and that would also reduce beer consumption. Whether a given variable is a confounder or not depends on precisely what question we wish to address, and this can require very careful thought and get quite tricky and even contentious.
Clearly then, we may wish to adjust/control for several factors, some of which may be measured in several categories (e.g. social class) while others may be continuous (e.g. age). We could deal with the continuous ones by splitting into (age-)groups, thereby turning them into categorical ones. So say we have 2 sexes, 5 social class groups and 7 age groups. We can now look at the association between football-watching and heart-attack incidence in 2×5×7 = 70 strata. But if our study is fairly small, so some of those strata contain very few people, we're going to run into problems with this approach. And in practice we may wish to adjust for a dozen or more variables. An alternative way of adjusting/controlling for variables that is particularly useful when there are many of them is provided by regression analysis with multiple independent variables, sometimes known as multivariable regression analysis. (There are different types of regression models depending on the type of outcome variable: least squares regression, logistic regression, proportional hazards (Cox) regression...). In observational studies, as opposed to experiments, we nearly always want to adjust for many potential confounders, so in practice adjustment/control for confounders is often done by regression analysis, though there are other alternatives too, such as standardization, weighting, propensity score matching...
|
Explain model adjustment, in plain English
|
Easiest to explain by way of an example:
Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than t
|
Explain model adjustment, in plain English
Easiest to explain by way of an example:
Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than those who didn't watch it. Should the government ban football from TV? But men are more likely to watch football than women, and men are also more likely to have a heart attack than women. So the association between football-watching and heart attacks might be explained by a third factor such as sex that affects both. (Sociologists would distinguish here between gender, a cultural construct that is associated with football-watching, and sex, a biological category that is associated with heart-attack incidence, but the two are clearly very strongly correlated so I'm going to ignore that distinction for simplicity.)
Statisticians, and especially epidemiologists, call such a third factor a confounder, and the phenomenon confounding. The most obvious way to remove the problem is to look at the association between football-watching and heart-attack incidence in men and women separately, or in the jargon, to stratify by sex. If we find that the association (if there still is one) is similar in both sexes, we may then choose to combine the two estimates of the association across the two sexes. The resulting estimate of the association between football-watching and heart-attack incidence is then said to be adjusted or controlled for sex.
We would probably also wish to control for other factors in the same way. Age is another obvious one (in fact epidemiologists either stratify or adjust/control almost every association by age and sex). Socio-economic class is probably another. Others can get trickier, e.g. should we adjust for beer consumption while watching the match? Maybe yes, if we're interested in the effect of the stress of watching the match alone; but maybe no, if we're considering banning broadcasting of World Cup football and that would also reduce beer consumption. Whether a given variable is a confounder or not depends on precisely what question we wish to address, and this can require very careful thought and get quite tricky and even contentious.
Clearly then, we may wish to adjust/control for several factors, some of which may be measured in several categories (e.g. social class) while others may be continuous (e.g. age). We could deal with the continuous ones by splitting into (age-)groups, thereby turning them into categorical ones. So say we have 2 sexes, 5 social class groups and 7 age groups. We can now look at the association between football-watching and heart-attack incidence in 2×5×7 = 70 strata. But if our study is fairly small, so some of those strata contain very few people, we're going to run into problems with this approach. And in practice we may wish to adjust for a dozen or more variables. An alternative way of adjusting/controlling for variables that is particularly useful when there are many of them is provided by regression analysis with multiple independent variables, sometimes known as multivariable regression analysis. (There are different types of regression models depending on the type of outcome variable: least squares regression, logistic regression, proportional hazards (Cox) regression...). In observational studies, as opposed to experiments, we nearly always want to adjust for many potential confounders, so in practice adjustment/control for confounders is often done by regression analysis, though there are other alternatives too, such as standardization, weighting, propensity score matching...
|
Explain model adjustment, in plain English
Easiest to explain by way of an example:
Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than t
|
12,994
|
Explain model adjustment, in plain English
|
Onestop explained it pretty well, I'll just give a simple R example with made up data. Say x is weight and y is height, and we want to find out if there's a difference between males and females:
set.seed(69)
x <- rep(1:10,2)
y <- c(jitter(1:10, factor=4), (jitter(1:10, factor=4)+2))
sex <- rep(c("f", "m"), each=10)
df1 <- data.frame(x,y,sex)
with(df1, plot(y~x, col=c(1,2)[sex]))
lm1 <- lm(y~sex, data=df1)
lm2 <- lm(y~sex+x, data=df1)
anova(lm1); anova(lm2)
You can see that without controlling for weight (in anova(lm1)) there's very little difference between the sexes, but when weight is included as a covariate (controlled for in lm2) then the difference becomes more apparent.
#In case you want to add the fitted lines to the plot
coefs2 <- coef(lm2)
abline(coefs2[1], coefs2[3], col=1)
abline(coefs2[1]+coefs2[2], coefs2[3], col=2)
|
Explain model adjustment, in plain English
|
Onestop explained it pretty well, I'll just give a simple R example with made up data. Say x is weight and y is height, and we want to find out if there's a difference between males and females:
set.s
|
Explain model adjustment, in plain English
Onestop explained it pretty well, I'll just give a simple R example with made up data. Say x is weight and y is height, and we want to find out if there's a difference between males and females:
set.seed(69)
x <- rep(1:10,2)
y <- c(jitter(1:10, factor=4), (jitter(1:10, factor=4)+2))
sex <- rep(c("f", "m"), each=10)
df1 <- data.frame(x,y,sex)
with(df1, plot(y~x, col=c(1,2)[sex]))
lm1 <- lm(y~sex, data=df1)
lm2 <- lm(y~sex+x, data=df1)
anova(lm1); anova(lm2)
You can see that without controlling for weight (in anova(lm1)) there's very little difference between the sexes, but when weight is included as a covariate (controlled for in lm2) then the difference becomes more apparent.
#In case you want to add the fitted lines to the plot
coefs2 <- coef(lm2)
abline(coefs2[1], coefs2[3], col=1)
abline(coefs2[1]+coefs2[2], coefs2[3], col=2)
|
Explain model adjustment, in plain English
Onestop explained it pretty well, I'll just give a simple R example with made up data. Say x is weight and y is height, and we want to find out if there's a difference between males and females:
set.s
|
12,995
|
Find Probability of one event out of three when all of them can't happen together
|
This Venn diagram displays a situation where the chance of mutual intersection is zero:
From $\Pr(E\cap F) = 1/3$ we deduce all this probability lies in the overlap of the $E$ and $F$ disks, but not in the mutual overlap of all three disks. That permits us to update the diagram:
Applying the same reasoning to $\Pr(F\cap G) = \Pr(E\cap G) = 1/3,$ we obtain a Venn diagram displaying all the information in the question:
The Axiom of Total Probability asserts the sum of all the probabilities (including the probability for the complement of $E\cup F\cup G,$ shown at the bottom left) is $1.$
An even more basic probability axiom asserts all probabilities must be non-negative. But since $1/3+1/3+1/3+0=1,$ all the possible probability already appears. The remaining probabilities must be zero, meaning the picture can be completed only like this:
Finally, a third axiom (the same one used in the second step of filling in the Venn diagram) asserts the probability of $E$ equals the sum of the probabilities of its four parts, because they are disjoint. Thus, beginning with the central probability and moving counterclockwise around the disk that portrays $E,$
$$\Pr(E) = 0 + 1/3 + 0 + 1/3 = 2/3.$$
One moral worth remembering:
Draw Venn diagrams in full generality so they show all possible intersections of the sets, even when you know some of the probabilities are zero.
This helps you keep track of all the information systematically. (It's also conceptually more accurate, because sets of probability zero do not have to be nonempty!)
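One way to make this concrete is to build the smallest sample space consistent with the givens: three equally likely outcomes, one per pairwise overlap (a Python sketch; the outcome encoding with labels "E", "F", "G" is mine):

```python
from fractions import Fraction

# Smallest sample space consistent with the givens: one outcome per pairwise
# overlap, each carrying probability 1/3.
outcomes = {frozenset({"E", "F"}): Fraction(1, 3),
            frozenset({"E", "G"}): Fraction(1, 3),
            frozenset({"F", "G"}): Fraction(1, 3)}

def prob(event):
    # P(event) = total mass of outcomes in which the event occurs.
    return sum(p for occ, p in outcomes.items() if event in occ)

print(prob("E"), prob("F"), prob("G"))  # 2/3 2/3 2/3
```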
|
Find Probability of one event out of three when all of them can't happen together
|
This Venn diagram displays a situation where the chance of mutual intersection is zero:
From $\Pr(E\cap F) = 1/3$ we deduce all this probability lies in the overlap of the $E$ and $F$ disks, but not
|
Find Probability of one event out of three when all of them can't happen together
This Venn diagram displays a situation where the chance of mutual intersection is zero:
From $\Pr(E\cap F) = 1/3$ we deduce all this probability lies in the overlap of the $E$ and $F$ disks, but not in the mutual overlap of all three disks. That permits us to update the diagram:
Applying the same reasoning to $\Pr(F\cap G) = \Pr(E\cap G) = 1/3,$ we obtain a Venn diagram displaying all the information in the question:
The Axiom of Total Probability asserts the sum of all the probabilities (including the probability for the complement of $E\cup F\cup G,$ shown at the bottom left) is $1.$
An even more basic probability axiom asserts all probabilities must be non-negative. But since $1/3+1/3+1/3+0=1,$ all the possible probability already appears. The remaining probabilities must be zero, meaning the picture can be completed only like this:
Finally, a third axiom (the same one used in the second step of filling in the Venn diagram) asserts the probability of $E$ equals the sum of the probabilities of its four parts, because they are disjoint. Thus, beginning with the central probability and moving counterclockwise around the disk that portrays $E,$
$$\Pr(E) = 0 + 1/3 + 0 + 1/3 = 2/3.$$
One moral worth remembering:
Draw Venn diagrams in full generality so they show all possible intersections of the sets, even when you know some of the probabilities are zero.
This helps you keep track of all the information systematically. (It's also conceptually more accurate, because sets of probability zero do not have to be nonempty!)
|
Find Probability of one event out of three when all of them can't happen together
This Venn diagram displays a situation where the chance of mutual intersection is zero:
From $\Pr(E\cap F) = 1/3$ we deduce all this probability lies in the overlap of the $E$ and $F$ disks, but not
|
12,996
|
Find Probability of one event out of three when all of them can't happen together
|
If you try to fill in the Venn diagram, you can't put non-zero entries inside regions other than those represented by the pairwise intersections. Those three regions make up the sample space by themselves, which means $$\mathbb P(E)=\mathbb P(E\cap F)+\mathbb P(E\cap G)=2/3$$
|
Find Probability of one event out of three when all of them can't happen together
|
If you try to fill in the Venn diagram, you can't put non-zero entries inside regions other than those represented by the pairwise intersections. Those three regions make up the sample space by themselves, which means $$\m
|
Find Probability of one event out of three when all of them can't happen together
If you try to fill in the Venn diagram, you can't put non-zero entries inside regions other than those represented by the pairwise intersections. Those three regions make up the sample space by themselves, which means $$\mathbb P(E)=\mathbb P(E\cap F)+\mathbb P(E\cap G)=2/3$$
|
Find Probability of one event out of three when all of them can't happen together
If you try to fill in the Venn diagram, you can't put non-zero entries inside regions other than those represented by the pairwise intersections. Those three regions make up the sample space by themselves, which means $$\m
|
12,997
|
Find Probability of one event out of three when all of them can't happen together
|
The answer to the question "Can you determine $P(E)$?" is Yes.
Given events $E, F, G$ defined on a sample space $\Omega$, we know that
\begin{align}
&E\cap F\cap G\\
&E\cap F\cap G^c\\
&E\cap F^c\cap G\\
&E\cap F^c\cap G^c\\
&E^c\cap F\cap G\\
&E^c\cap F\cap G^c\\
&E^c\cap F^c\cap G\\
&E^c\cap F^c\cap G^c\\
\end{align}
are $8$ mutually exclusive events whose union is $\Omega$. Thus, the sum of the probabilities of these $8$ events is $1$. Now, we are told that $E, F, G$ cannot occur simultaneously, that is, $E\cap F\cap G = \emptyset$ and so
$P(E\cap F\cap G) = 0$. We are also told that
\begin{align}
P(E\cap F) &= P(E\cap F\cap G) + P(E\cap F\cap G^c) = \frac 13\\
P(E\cap G) &= P(E\cap F\cap G) + P(E\cap F^c \cap G) = \frac 13\\
P(F\cap G) &= P(E\cap F\cap G) + P(E^c\cap F \cap G) = \frac 13
\end{align}
where we can feel comfortable about the sum in the middle in each equation by musing on the fact that the probability of the union of two mutually exclusive events is the sum of the probabilities of the two events. Since $P(E\cap F\cap G)=0$, we conclude that
\begin{align}P(E\cap F) &= P(E\cap F\cap G^c) = \frac 13\\
P(E\cap G) &= P(E\cap F^c \cap G) = \frac 13\\
P(F\cap G) &= P(E^c\cap F \cap G) = \frac 13
\end{align}
But, of the $8$ mutually exclusive events listed above whose union is $\Omega$, we have identified three events whose probabilities add up to $1$ and so the other $5$ events (one of which is $E\cap F\cap G$) must have probability $0$. Consequently,
\begin{align} P(E) &= P(E\cap F\cap G) + P(E\cap F\cap G^c) + P(E\cap F^c\cap G) + P(E\cap F^c\cap G^c)\\
&= 0 + \frac 13 + \frac 13 + 0\\
&= \frac 23
\end{align}
By symmetry (or by a brute-force repetition of the above arguments mutatis mutandis), we can conclude that $E, F, G$ all have probability $\frac 23$.
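The atom-by-atom bookkeeping above can be mirrored in code: assign mass to the three surviving cells of the Venn diagram, check the total is one, and sum over the cells where $E$ occurs (an illustrative Python sketch; the boolean-triple encoding is mine):

```python
from fractions import Fraction
from itertools import product

# One entry per atom (cell of the Venn diagram): (in E, in F, in G).
mass = {atom: Fraction(0) for atom in product([True, False], repeat=3)}
mass[(True, True, False)] = Fraction(1, 3)   # E ∩ F ∩ G^c
mass[(True, False, True)] = Fraction(1, 3)   # E ∩ F^c ∩ G
mass[(False, True, True)] = Fraction(1, 3)   # E^c ∩ F ∩ G

assert sum(mass.values()) == 1               # so the other five atoms must be 0
P_E = sum(p for (e, f, g), p in mass.items() if e)
print(P_E)  # 2/3
```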
|
Find Probability of one event out of three when all of them can't happen together
|
The answer to the question "Can you determine $P(E)$?" is Yes.
Given events $E, F, G$ defined on a sample space $\Omega$, we know that
\begin{align}
&E\cap F\cap G\\
&E\cap F\cap G^c\\
&E\cap F^c\cap
|
Find Probability of one event out of three when all of them can't happen together
The answer to the question "Can you determine $P(E)$?" is Yes.
Given events $E, F, G$ defined on a sample space $\Omega$, we know that
\begin{align}
&E\cap F\cap G\\
&E\cap F\cap G^c\\
&E\cap F^c\cap G\\
&E\cap F^c\cap G^c\\
&E^c\cap F\cap G\\
&E^c\cap F\cap G^c\\
&E^c\cap F^c\cap G\\
&E^c\cap F^c\cap G^c\\
\end{align}
are $8$ mutually exclusive events whose union is $\Omega$. Thus, the sum of the probabilities of these $8$ events is $1$. Now, we are told that $E, F, G$ cannot occur simultaneously, that is, $E\cap F\cap G = \emptyset$ and so
$P(E\cap F\cap G) = 0$. We are also told that
\begin{align}
P(E\cap F) &= P(E\cap F\cap G) + P(E\cap F\cap G^c) = \frac 13\\
P(E\cap G) &= P(E\cap F\cap G) + P(E\cap F^c \cap G) = \frac 13\\
P(F\cap G) &= P(E\cap F\cap G) + P(E^c\cap F \cap G) = \frac 13
\end{align}
where we can feel comfortable about the sum in the middle in each equation by musing on the fact that the probability of the union of two mutually exclusive events is the sum of the probabilities of the two events. Since $P(E\cap F\cap G)=0$, we conclude that
\begin{align}P(E\cap F) &= P(E\cap F\cap G^c) = \frac 13\\
P(E\cap G) &= P(E\cap F^c \cap G) = \frac 13\\
P(F\cap G) &= P(E^c\cap F \cap G) = \frac 13
\end{align}
But, of the $8$ mutually exclusive events listed above whose union is $\Omega$, we have identified three events whose probabilities add up to $1$ and so the other $5$ events (one of which is $E\cap F\cap G$) must have probability $0$. Consequently,
\begin{align} P(E) &= P(E\cap F\cap G) + P(E\cap F\cap G^c) + P(E\cap F^c\cap G) + P(E\cap F^c\cap G^c)\\
&= 0 + \frac 13 + \frac 13 + 0\\
&= \frac 23
\end{align}
By symmetry (or by a brute-force repetition of the above arguments mutatis mutandis), we can conclude that $E, F, G$ all have probability $\frac 23$.
|
Find Probability of one event out of three when all of them can't happen together
The answer to the question "Can you determine $P(E)$?" is Yes.
Given events $E, F, G$ defined on a sample space $\Omega$, we know that
\begin{align}
&E\cap F\cap G\\
&E\cap F\cap G^c\\
&E\cap F^c\cap
|
12,998
|
Find Probability of one event out of three when all of them can't happen together
|
Can we think of it that way?
P(E ∩ F ) = P(F ∩ G) = P(E ∩ G) = 1/3
P(E ∩ F ) + P(F ∩ G) + P(E ∩ G) = 1
Meaning that the probability of event E happening by itself is zero: it can only happen together with either F or G, and it can't happen with both.
P(E) = P(E ∩ F ) + P(E ∩ G) = 1/3 + 1/3 = 2/3
|
Find Probability of one event out of three when all of them can't happen together
|
Can we think of it that way?
P(E ∩ F ) = P(F ∩ G) = P(E ∩ G) = 1/3
P(E ∩ F ) + P(F ∩ G) + P(E ∩ G) = 1
Meaning that the probability of event E happening by itself is zero, which means it can only hap
|
Find Probability of one event out of three when all of them can't happen together
Can we think of it that way?
P(E ∩ F ) = P(F ∩ G) = P(E ∩ G) = 1/3
P(E ∩ F ) + P(F ∩ G) + P(E ∩ G) = 1
Meaning that The probability of event E happening by itself is zero, which means it can only happen with either F or G and it can't happen with both.
P(E) = P(E ∩ F ) + P(E ∩ G) = 1/3 + 1/3 = 2/3
|
Find Probability of one event out of three when all of them can't happen together
Can we think of it that way?
P(E ∩ F ) = P(F ∩ G) = P(E ∩ G) = 1/3
P(E ∩ F ) + P(F ∩ G) + P(E ∩ G) = 1
Meaning that The probability of event E happening by itself is zero, which means it can only hap
|
12,999
|
Find Probability of one event out of three when all of them can't happen together
|
Since the events $(E,F)$, $(E,G)$, $(F,G)$ are mutually exclusive and their probabilities sum to one, we can use the law of total probability:
$$
P(E) = P(E, F) + P(E, G) = \tfrac{2}{3}
$$
This holds because $P(E \mid E, F)\,P(E, F) = P(E, F)$ (and likewise for $(E, G)$), while $P(E \mid F, G) = 0$.
|
13,000
|
How to smooth data and force monotonicity
|
You can do this using penalised splines with monotonicity constraints via the mono.con() and pcls() functions in the mgcv package. There's a little fiddling about to do because these functions aren't as user friendly as gam(), but the steps are shown below, based mostly on the example from ?pcls, modified to suit the sample data you gave:
library(mgcv)
df <- data.frame(x = 1:10, y = c(100, 41, 22, 10, 6, 7, 2, 1, 3, 1))
## Set up the size of the basis functions/number of knots
k <- 5
## This fits the unconstrained model but gets us the smoothness parameters
## that we will need later
unc <- gam(y ~ s(x, k = k, bs = "cr"), data = df)
## This creates the cubic spline basis functions of `x`
## It returns an object containing the penalty matrix for the spline
## among other things; see ?smooth.construct for description of each
## element in the returned object
sm <- smoothCon(s(x, k = k, bs = "cr"), df, knots = NULL)[[1]]
## This gets the constraint matrix and constraint vector that imposes
## linear constraints to enforce monotonicity on a cubic regression spline
## the key thing you need to change is `up`.
## `up = TRUE` == increasing function
## `up = FALSE` == decreasing function (as per your example)
## `xp` is a vector of knot locations that we get back from smoothCon
F <- mono.con(sm$xp, up = FALSE) # get constraints: up = FALSE == Decreasing constraint!
Now we need to fill in the object that gets passed to pcls() containing details of the penalised constrained model we want to fit
## Fill in G, the object pcls needs to fit; this is just what `pcls` says it needs:
## X is the model matrix (of the basis functions)
## C is the identifiability constraints - no constraints needed here
## for the single smooth
## sp are the smoothness parameters from the unconstrained GAM
## p/xp are the knot locations again, but negated for a decreasing function
## y is the response data
## w are weights and this is fancy code for a vector of 1s of length(y)
G <- list(X = sm$X, C = matrix(0,0,0), sp = unc$sp,
p = -sm$xp, # note the - here! This is for decreasing fits!
y = df$y,
w = df$y*0+1)
G$Ain <- F$A # the monotonicity constraint matrix
G$bin <- F$b # the monotonicity constraint vector, both from mono.con
G$S <- sm$S # the penalty matrix for the cubic spline
G$off <- 0 # location of offsets in the penalty matrix
Now we can finally do the fitting
## Do the constrained fit
p <- pcls(G) # fit spline (using s.p. from unconstrained fit)
p contains a vector of coefficients for the basis functions corresponding to the spline. To visualize the fitted spline, we can predict from the model at 100 locations over the range of x. We do 100 values so as to get a nice smooth line on the plot.
## predict at 100 locations over range of x - get a smooth line on the plot
newx <- with(df, data.frame(x = seq(min(x), max(x), length = 100)))
To generate predicted values we use Predict.matrix(), which generates a matrix that, when multiplied by the coefficients p, yields predicted values from the fitted model:
fv <- Predict.matrix(sm, newx) %*% p
newx <- transform(newx, yhat = fv[,1])
plot(y ~ x, data = df, pch = 16)
lines(yhat ~ x, data = newx, col = "red")
This produces:
I'll leave it up to you to get the data into a tidy form for plotting with ggplot...
You can force a closer fit (to partially answer your question about having the smoother fit the first data point) by increasing the dimension of the basis function of x. For example, setting k equal to 8 (k <- 8) and rerunning the code above we get
You can't push k much higher for these data, and you have to be careful about overfitting; all pcls() is doing is solving the penalised least squares problem given the constraints and the supplied basis functions; it's not performing smoothness selection for you (not that I know of, anyway).
If you want interpolation, then see the base R function ?splinefun, which has Hermite splines and cubic splines with monotonicity constraints. You can't use these here, however, as the data are not strictly monotonic.
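If all you need is a monotone (here decreasing) least-squares fit and you don't care about smoothness, isotonic regression via the pool-adjacent-violators algorithm is a much simpler alternative to the penalised-spline machinery above. This is a different technique, not the mgcv approach; here is a minimal pure-Python sketch applied to the same data:

```python
def pava_increasing(y, w=None):
    """Pool-adjacent-violators: least-squares monotone increasing fit."""
    w = [1.0] * len(y) if w is None else list(w)
    vals, wts, cnts = [], [], []          # merged blocks: value, weight, size
    for yi, wi in zip(y, w):
        vals.append(float(yi)); wts.append(float(wi)); cnts.append(1)
        # pool backwards while the last two blocks violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2:] = [wts[-2] + wts[-1]]
            cnts[-2:] = [cnts[-2] + cnts[-1]]
            vals[-2:] = [v]
    # expand blocks back to one fitted value per observation
    return [v for v, c in zip(vals, cnts) for _ in range(c)]

def pava_decreasing(y):
    # a decreasing fit is an increasing fit of the negated data, negated back
    return [-v for v in pava_increasing([-yi for yi in y])]

y = [100, 41, 22, 10, 6, 7, 2, 1, 3, 1]
fit = pava_decreasing(y)
print(fit)  # [100.0, 41.0, 22.0, 10.0, 6.5, 6.5, 2.0, 2.0, 2.0, 1.0]
```

The violating pairs (6, 7) and (1, 3) get pooled to their means, so the fit is a non-increasing step function rather than a smooth curve; that's the trade-off against the spline fit above.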
|