idx
int64 1
56k
| question
stringlengths 15
155
| answer
stringlengths 2
29.2k
β | question_cut
stringlengths 15
100
| answer_cut
stringlengths 2
200
β | conversation
stringlengths 47
29.3k
| conversation_cut
stringlengths 47
301
|
|---|---|---|---|---|---|---|
5,901
|
Compendium of cross-validation techniques
|
K-fold cross-validation (CV) randomly breaks your data up into K partitions, and you in turn hold out one of those K parts as a test case, and lump the other K-1 parts together as your training data. Leave One Out (LOO) is the special case where you take your N data items and do N-fold CV. In some sense, Hold Out is another special case, where you only choose one of your K folds as test and do not rotate through all K folds.
As far as I know, 10-fold CV is pretty much the de rigueur, since it uses your data efficiently and also helps to avoid unlucky partition choices. Hold Out does not make efficient use of your data, and LOO is not as robust (or something like that), but 10-ish-fold is just right.
If you know that your data contains more than one category, and one or more categories are much smaller than the rest, some of your K random partitions might not even contain any of the small categories at all, which would be bad. To make sure each partition is reasonably representative, you use stratification: break your data up into the categories and then create random partitions by choosing randomly and proportionally from each category.
All of these variations on K-fold CV choose from your data without replacement. The bootstrap chooses data with replacement, so the same datum can be included multiple times and some data might not be included at all. (Each "partition" will also have N items, unlike K-fold, in which each partition will have N/K items.)
(I'll have to admit that I don't know exactly how the bootstrap would be used in CV, though. The principle of testing and CV is to make sure you don't test on data that you trained on, so you get a more realistic idea of how your technique + coefficients might work in the real world.)
EDIT: Replaced "Hold Out is not efficient" with "Hold Out does not make efficient use of your data" to help clarify, per the comments.
|
Compendium of cross-validation techniques
|
K-fold cross-validation (CV) randomly breaks your data up into K partitions, and you in turn hold out one of those K parts as a test case, and lump the other K-1 parts together as your training data.
|
Compendium of cross-validation techniques
K-fold cross-validation (CV) randomly breaks your data up into K partitions, and you in turn hold out one of those K parts as a test case, and lump the other K-1 parts together as your training data. Leave One Out (LOO) is the special case where you take your N data items and do N-fold CV. In some sense, Hold Out is another special case, where you only choose one of your K folds as test and do not rotate through all K folds.
As far as I know, 10-fold CV is pretty much the de rigueur, since it uses your data efficiently and also helps to avoid unlucky partition choices. Hold Out does not make efficient use of your data, and LOO is not as robust (or something like that), but 10-ish-fold is just right.
If you know that your data contains more than one category, and one or more categories are much smaller than the rest, some of your K random partitions might not even contain any of the small categories at all, which would be bad. To make sure each partition is reasonably representative, you use stratification: break your data up into the categories and then create random partitions by choosing randomly and proportionally from each category.
All of these variations on K-fold CV choose from your data without replacement. The bootstrap chooses data with replacement, so the same datum can be included multiple times and some data might not be included at all. (Each "partition" will also have N items, unlike K-fold, in which each partition will have N/K items.)
(I'll have to admit that I don't know exactly how the bootstrap would be used in CV, though. The principle of testing and CV is to make sure you don't test on data that you trained on, so you get a more realistic idea of how your technique + coefficients might work in the real world.)
EDIT: Replaced "Hold Out is not efficient" with "Hold Out does not make efficient use of your data" to help clarify, per the comments.
|
Compendium of cross-validation techniques
K-fold cross-validation (CV) randomly breaks your data up into K partitions, and you in turn hold out one of those K parts as a test case, and lump the other K-1 parts together as your training data.
|
5,902
|
Compendium of cross-validation techniques
|
I found one of the references linked to in the Wikipedia article quite useful
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.529&rep=rep1&type=pdf
"A study of cross-validation and bootstrap for accuracy estimation and model selection", Ron Kohavi, IJCAI95
It contains an empirical comparison for a subset of CV techniques. The tl;dr version is basically "use 10-fold CV".
|
Compendium of cross-validation techniques
|
I found one of the references linked to in the Wikipedia article quite useful
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.529&rep=rep1&type=pdf
"A study of cross-validation and bootstr
|
Compendium of cross-validation techniques
I found one of the references linked to in the Wikipedia article quite useful
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.529&rep=rep1&type=pdf
"A study of cross-validation and bootstrap for accuracy estimation and model selection", Ron Kohavi, IJCAI95
It contains an empirical comparison for a subset of CV techniques. The tl;dr version is basically "use 10-fold CV".
|
Compendium of cross-validation techniques
I found one of the references linked to in the Wikipedia article quite useful
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.529&rep=rep1&type=pdf
"A study of cross-validation and bootstr
|
5,903
|
Compendium of cross-validation techniques
|
...and a guide on when to use each of them...
Unfortunately that problem is harder than it gets credit for. There are at least 2 main uses of cross-validation: selecting a model, and evaluating model performance.
Roughly speaking, if you use a CV variant which splits the data using a high train-to-test ratio, this can be better for evaluation. Using a larger training set will more accurately mimic the performance of the model fit on the full dataset.
But a high train-to-test ratio can be worse for selection. Imagine there really is a "best" model that you "ought" to choose, but your dataset is quite large. Then, too-large models which overfit slightly will have almost the same CV performance as the "best" model (because you'll successfully estimate their spurious parameters to be negligible). Randomness in the data and the CV/splitting procedure will often cause you to choose an overfitting model instead of the truly "best" model.
See Shao (1993), "Linear Model Selection by Cross-Validation" for older asymptotic theory in the linear regression case. Yang (2007), "Consistency of Cross Validation for Comparing Regression Procedures" and Yang (2006), "Comparing Learning Methods for Classification" give asymptotic theory for more general regression and classification problems. But rigorous finite-sample advice is hard to come by.
|
Compendium of cross-validation techniques
|
...and a guide on when to use each of them...
Unfortunately that problem is harder than it gets credit for. There are at least 2 main uses of cross-validation: selecting a model, and evaluating model
|
Compendium of cross-validation techniques
...and a guide on when to use each of them...
Unfortunately that problem is harder than it gets credit for. There are at least 2 main uses of cross-validation: selecting a model, and evaluating model performance.
Roughly speaking, if you use a CV variant which splits the data using a high train-to-test ratio, this can be better for evaluation. Using a larger training set will more accurately mimic the performance of the model fit on the full dataset.
But a high train-to-test ratio can be worse for selection. Imagine there really is a "best" model that you "ought" to choose, but your dataset is quite large. Then, too-large models which overfit slightly will have almost the same CV performance as the "best" model (because you'll successfully estimate their spurious parameters to be negligible). Randomness in the data and the CV/splitting procedure will often cause you to choose an overfitting model instead of the truly "best" model.
See Shao (1993), "Linear Model Selection by Cross-Validation" for older asymptotic theory in the linear regression case. Yang (2007), "Consistency of Cross Validation for Comparing Regression Procedures" and Yang (2006), "Comparing Learning Methods for Classification" give asymptotic theory for more general regression and classification problems. But rigorous finite-sample advice is hard to come by.
|
Compendium of cross-validation techniques
...and a guide on when to use each of them...
Unfortunately that problem is harder than it gets credit for. There are at least 2 main uses of cross-validation: selecting a model, and evaluating model
|
5,904
|
What are correct values for precision and recall when the denominators equal 0?
|
The answers to the linked earlier question apply here too.
If (true positives + false negatives) = 0 then no positive cases in the input data, so any analysis of this case has no information, and so no conclusion about how positive cases are handled. You want N/A or something similar as the ratio result, avoiding a division by zero error
If (true positives + false positives) = 0 then all cases have been predicted to be negative: this is one end of the ROC curve. Again, you want to recognise and report this possibility while avoiding a division by zero error.
|
What are correct values for precision and recall when the denominators equal 0?
|
The answers to the linked earlier question apply here too.
If (true positives + false negatives) = 0 then no positive cases in the input data, so any analysis of this case has no information, and so n
|
What are correct values for precision and recall when the denominators equal 0?
The answers to the linked earlier question apply here too.
If (true positives + false negatives) = 0 then no positive cases in the input data, so any analysis of this case has no information, and so no conclusion about how positive cases are handled. You want N/A or something similar as the ratio result, avoiding a division by zero error
If (true positives + false positives) = 0 then all cases have been predicted to be negative: this is one end of the ROC curve. Again, you want to recognise and report this possibility while avoiding a division by zero error.
|
What are correct values for precision and recall when the denominators equal 0?
The answers to the linked earlier question apply here too.
If (true positives + false negatives) = 0 then no positive cases in the input data, so any analysis of this case has no information, and so n
|
5,905
|
What are correct values for precision and recall when the denominators equal 0?
|
An interesting answer is offered here:
https://github.com/dice-group/gerbil/wiki/Precision,-Recall-and-F1-measure
The authors of the module output different scores for precision and recall depending on whether true positives, false positives and false negatives are all 0. If they are, the outcome is ostensibly a good one.
In some rare cases, the calculation of Precision or Recall can cause a
division by 0. Regarding the precision, this can happen if there are
no results inside the answer of an annotator and, thus, the true as
well as the false positives are 0. For these special cases, we have
defined that if the true positives, false positives and false
negatives are all 0, the precision, recall and F1-measure are 1. This
might occur in cases in which the gold standard contains a document
without any annotations and the annotator (correctly) returns no
annotations. If true positives are 0 and one of the two other counters
is larger than 0, the precision, recall and F1-measure are 0.
I'm not sure if this type of scoring would be useful in other situations outside of their special case, but it's worth some thought.
|
What are correct values for precision and recall when the denominators equal 0?
|
An interesting answer is offered here:
https://github.com/dice-group/gerbil/wiki/Precision,-Recall-and-F1-measure
The authors of the module output different scores for precision and recall depending o
|
What are correct values for precision and recall when the denominators equal 0?
An interesting answer is offered here:
https://github.com/dice-group/gerbil/wiki/Precision,-Recall-and-F1-measure
The authors of the module output different scores for precision and recall depending on whether true positives, false positives and false negatives are all 0. If they are, the outcome is ostensibly a good one.
In some rare cases, the calculation of Precision or Recall can cause a
division by 0. Regarding the precision, this can happen if there are
no results inside the answer of an annotator and, thus, the true as
well as the false positives are 0. For these special cases, we have
defined that if the true positives, false positives and false
negatives are all 0, the precision, recall and F1-measure are 1. This
might occur in cases in which the gold standard contains a document
without any annotations and the annotator (correctly) returns no
annotations. If true positives are 0 and one of the two other counters
is larger than 0, the precision, recall and F1-measure are 0.
I'm not sure if this type of scoring would be useful in other situations outside of their special case, but it's worth some thought.
|
What are correct values for precision and recall when the denominators equal 0?
An interesting answer is offered here:
https://github.com/dice-group/gerbil/wiki/Precision,-Recall-and-F1-measure
The authors of the module output different scores for precision and recall depending o
|
5,906
|
What are correct values for precision and recall when the denominators equal 0?
|
When evaluating a classifier at high thresholds, the precision might (often actually) not be 1 when recall is 0. It's usually N/A!
I think there is something wrong about how people plot the P/R curve. Avoiding N/A samples is a bias in the sense that you avoid singularity samples. I computed the average precision wrt to the average recall ignoring N/A samples and I never got a classifier starting at 1 for 0 recall for a shallow neural net in object detection. This was also true for curves computed with the tp,fp,fn numbers. It's quite easy to verify by paper and pencil with a single image. For example:
I have a classifier that outputs for a single image:
preds=[.7 .6 .5 .1 .05]
truth=[n y n n y ]
By computing the confusion matrices with the various thresholds we have:
tp=[2 1 1 1 0 0],fn=[0 1 1 1 2 2],fp=[3 3 2 1 1 0].
the recall rec=[1 .5 .5 .5 0 0], and the precision=[.4 .25 1/3 .5 0 NaN].
I don't see how it would make sense to replace a NaN or the precision(@recall==0) with 1. 1 should be an upper bound, not a value we replace precision(@recall==0) with.
|
What are correct values for precision and recall when the denominators equal 0?
|
When evaluating a classifier at high thresholds, the precision might (often actually) not be 1 when recall is 0. It's usually N/A!
I think there is something wrong about how people plot the P/R curve.
|
What are correct values for precision and recall when the denominators equal 0?
When evaluating a classifier at high thresholds, the precision might (often actually) not be 1 when recall is 0. It's usually N/A!
I think there is something wrong about how people plot the P/R curve. Avoiding N/A samples is a bias in the sense that you avoid singularity samples. I computed the average precision wrt to the average recall ignoring N/A samples and I never got a classifier starting at 1 for 0 recall for a shallow neural net in object detection. This was also true for curves computed with the tp,fp,fn numbers. It's quite easy to verify by paper and pencil with a single image. For example:
I have a classifier that outputs for a single image:
preds=[.7 .6 .5 .1 .05]
truth=[n y n n y ]
By computing the confusion matrices with the various thresholds we have:
tp=[2 1 1 1 0 0],fn=[0 1 1 1 2 2],fp=[3 3 2 1 1 0].
the recall rec=[1 .5 .5 .5 0 0], and the precision=[.4 .25 1/3 .5 0 NaN].
I don't see how it would make sense to replace a NaN or the precision(@recall==0) with 1. 1 should be an upper bound, not a value we replace precision(@recall==0) with.
|
What are correct values for precision and recall when the denominators equal 0?
When evaluating a classifier at high thresholds, the precision might (often actually) not be 1 when recall is 0. It's usually N/A!
I think there is something wrong about how people plot the P/R curve.
|
5,907
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekutieli (2001) false discovery rate procedures?
|
Benjamini and Hochberg (1995) introduced the false discovery rate. Benjamini and Yekutieli (2001) proved that the estimator is valid under some forms of dependence. Dependence can arise as follows. Consider the continuous variable used in a t-test and another variable correlated with it; for example, testing if BMI differs in two groups and if waist circumference differs in these two groups. Because these variables are correlated, the resulting p-values will also be correlated. Yekutieli and Benjamini (1999) developed another FDR controlling procedure, which can be used under general dependence by resampling the null distribution. Because the comparison is with respect to the null permutation distribution, as the total number of true positives increases, the method becomes more conservative. It turns out that BH 1995 is also conservative as the number of true positives increases. To improve this, Benjamini and Hochberg (2000) introduced the adaptive FDR procedure. This required estimation of a parameter, the null proportion, which is also used in Storey's pFDR estimator. Storey gives comparisons and argues that his method is more powerful and emphasizes the conservative nature of 1995 procedure. Storey also has results and simulations under dependence.
All of the above tests are valid under independence. The question is what kind of departure from independence can these estimates deal with.
My current thinking is that if you don't expect too many true positives the BY (1999) procedure is nice because it incorporates distributional features and dependence. However, I'm unaware of an implementation. Storey's method was designed for many true positives with some dependence. BH 1995 offers an alternative to the family-wise error rate and it is still conservative.
Benjamini, Y and Y Hochberg. On the Adaptive Control of the False Discovery Rate in Multiple Testing with Independent Statistics. Journal of Educational and Behavioral Statistics, 2000.
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekut
|
Benjamini and Hochberg (1995) introduced the false discovery rate. Benjamini and Yekutieli (2001) proved that the estimator is valid under some forms of dependence. Dependence can arise as follows.
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekutieli (2001) false discovery rate procedures?
Benjamini and Hochberg (1995) introduced the false discovery rate. Benjamini and Yekutieli (2001) proved that the estimator is valid under some forms of dependence. Dependence can arise as follows. Consider the continuous variable used in a t-test and another variable correlated with it; for example, testing if BMI differs in two groups and if waist circumference differs in these two groups. Because these variables are correlated, the resulting p-values will also be correlated. Yekutieli and Benjamini (1999) developed another FDR controlling procedure, which can be used under general dependence by resampling the null distribution. Because the comparison is with respect to the null permutation distribution, as the total number of true positives increases, the method becomes more conservative. It turns out that BH 1995 is also conservative as the number of true positives increases. To improve this, Benjamini and Hochberg (2000) introduced the adaptive FDR procedure. This required estimation of a parameter, the null proportion, which is also used in Storey's pFDR estimator. Storey gives comparisons and argues that his method is more powerful and emphasizes the conservative nature of 1995 procedure. Storey also has results and simulations under dependence.
All of the above tests are valid under independence. The question is what kind of departure from independence can these estimates deal with.
My current thinking is that if you don't expect too many true positives the BY (1999) procedure is nice because it incorporates distributional features and dependence. However, I'm unaware of an implementation. Storey's method was designed for many true positives with some dependence. BH 1995 offers an alternative to the family-wise error rate and it is still conservative.
Benjamini, Y and Y Hochberg. On the Adaptive Control of the False Discovery Rate in Multiple Testing with Independent Statistics. Journal of Educational and Behavioral Statistics, 2000.
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekut
Benjamini and Hochberg (1995) introduced the false discovery rate. Benjamini and Yekutieli (2001) proved that the estimator is valid under some forms of dependence. Dependence can arise as follows.
|
5,908
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekutieli (2001) false discovery rate procedures?
|
p.adjust is not misciting for BY. The reference is to Theorem 1.3 (proof in Section 5 on p.1182) in the paper:
Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics 29, 1165β1188.
As this paper discusses several different adjustments, the reference on the help page (at the time of writing) for p.adjust() is somewhat obscure. The method is guaranteed to control FDR, at the stated rate, under the most general dependence structure. There are informative comments in Christopher Genovese's slides at:
www.stat.cmu.edu/~genovese/talks/hannover1-04.pdf
Note the comment on slide 37, referring to the method of Theorem 1.3 in the BY 2001 paper [method='BY' with p.adjust()] that:
"Unfortunately, this is typically very conservative, sometimes even more so than Bonferroni."
Numerical example: method='BY' vs method='BH'
The following compares method='BY' with method='BH', using R's p.adjust() function, for the p-values from column 2 of Table 2 in the Benjamini and Hochberg (2000) paper:
> p <- c(0.85628,0.60282,0.44008,0.41998,0.3864,0.3689,0.31162,0.23522,0.20964,
0.19388,0.15872,0.14374,0.10026,0.08226,0.07912,0.0659,0.05802,0.05572,
0.0549,0.04678,0.0465,0.04104,0.02036,0.00964,0.00904,0.00748,0.00404,
0.00282,0.002,0.0018,2e-05,2e-05,2e-05,0)
> pmat <- rbind(p,p.adjust(p, method='BH'),p.adjust(p, method='BY'))
> rownames(pmat)<-c("pval","adj='BH","adj='BY'")
> round(pmat,4)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
pval 0.8563 0.6028 0.4401 0.4200 0.3864 0.3689 0.3116 0.2352 0.2096
adj='BH 0.8563 0.6211 0.4676 0.4606 0.4379 0.4325 0.3784 0.2962 0.2741
adj='BY' 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
[,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18]
pval 0.1939 0.1587 0.1437 0.1003 0.0823 0.0791 0.0659 0.0580 0.0557
adj='BH 0.2637 0.2249 0.2125 0.1549 0.1332 0.1332 0.1179 0.1096 0.1096
adj='BY' 1.0000 0.9260 0.8751 0.6381 0.5485 0.5485 0.4856 0.4513 0.4513
[,19] [,20] [,21] [,22] [,23] [,24] [,25] [,26] [,27]
pval 0.0549 0.0468 0.0465 0.0410 0.0204 0.0096 0.0090 0.0075 0.0040
adj='BH 0.1096 0.1060 0.1060 0.1060 0.0577 0.0298 0.0298 0.0283 0.0172
adj='BY' 0.4513 0.4367 0.4367 0.4367 0.2376 0.1227 0.1227 0.1164 0.0707
[,28] [,29] [,30] [,31] [,32] [,33] [,34]
pval 0.0028 0.0020 0.0018 0e+00 0e+00 0e+00 0
adj='BH 0.0137 0.0113 0.0113 2e-04 2e-04 2e-04 0
adj='BY' 0.0564 0.0467 0.0467 7e-04 7e-04 7e-04 0
Note: The multiplier that relates the BY values to the BH values is $\sum_{i=1}^m (1/i)$, where $m$ is the number of p-values. Multipliers are, for example values m = 30, 34, 226, 1674, 12365:
> mult <- sapply(c(11, 30, 34, 226, 1674, 12365), function(i)sum(1/(1:i)))
setNames(mult, paste(c('m =',rep('',5)), c(11, 30, 34, 226, 1674, 12365)))
m = 11 30 34 226 1674 12365
3.020 3.995 4.118 6.000 8.000 10.000
Check that for the example above, where $m$=34, the multiplier is 4.118
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekut
|
p.adjust is not misciting for BY. The reference is to Theorem 1.3 (proof in Section 5 on p.1182) in the paper:
Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in mult
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekutieli (2001) false discovery rate procedures?
p.adjust is not misciting for BY. The reference is to Theorem 1.3 (proof in Section 5 on p.1182) in the paper:
Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics 29, 1165β1188.
As this paper discusses several different adjustments, the reference on the help page (at the time of writing) for p.adjust() is somewhat obscure. The method is guaranteed to control FDR, at the stated rate, under the most general dependence structure. There are informative comments in Christopher Genovese's slides at:
www.stat.cmu.edu/~genovese/talks/hannover1-04.pdf
Note the comment on slide 37, referring to the method of Theorem 1.3 in the BY 2001 paper [method='BY' with p.adjust()] that:
"Unfortunately, this is typically very conservative, sometimes even more so than Bonferroni."
Numerical example: method='BY' vs method='BH'
The following compares method='BY' with method='BH', using R's p.adjust() function, for the p-values from column 2 of Table 2 in the Benjamini and Hochberg (2000) paper:
> p <- c(0.85628,0.60282,0.44008,0.41998,0.3864,0.3689,0.31162,0.23522,0.20964,
0.19388,0.15872,0.14374,0.10026,0.08226,0.07912,0.0659,0.05802,0.05572,
0.0549,0.04678,0.0465,0.04104,0.02036,0.00964,0.00904,0.00748,0.00404,
0.00282,0.002,0.0018,2e-05,2e-05,2e-05,0)
> pmat <- rbind(p,p.adjust(p, method='BH'),p.adjust(p, method='BY'))
> rownames(pmat)<-c("pval","adj='BH","adj='BY'")
> round(pmat,4)
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
pval 0.8563 0.6028 0.4401 0.4200 0.3864 0.3689 0.3116 0.2352 0.2096
adj='BH 0.8563 0.6211 0.4676 0.4606 0.4379 0.4325 0.3784 0.2962 0.2741
adj='BY' 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
[,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18]
pval 0.1939 0.1587 0.1437 0.1003 0.0823 0.0791 0.0659 0.0580 0.0557
adj='BH 0.2637 0.2249 0.2125 0.1549 0.1332 0.1332 0.1179 0.1096 0.1096
adj='BY' 1.0000 0.9260 0.8751 0.6381 0.5485 0.5485 0.4856 0.4513 0.4513
[,19] [,20] [,21] [,22] [,23] [,24] [,25] [,26] [,27]
pval 0.0549 0.0468 0.0465 0.0410 0.0204 0.0096 0.0090 0.0075 0.0040
adj='BH 0.1096 0.1060 0.1060 0.1060 0.0577 0.0298 0.0298 0.0283 0.0172
adj='BY' 0.4513 0.4367 0.4367 0.4367 0.2376 0.1227 0.1227 0.1164 0.0707
[,28] [,29] [,30] [,31] [,32] [,33] [,34]
pval 0.0028 0.0020 0.0018 0e+00 0e+00 0e+00 0
adj='BH 0.0137 0.0113 0.0113 2e-04 2e-04 2e-04 0
adj='BY' 0.0564 0.0467 0.0467 7e-04 7e-04 7e-04 0
Note: The multiplier that relates the BY values to the BH values is $\sum_{i=1}^m (1/i)$, where $m$ is the number of p-values. Multipliers are, for example values m = 30, 34, 226, 1674, 12365:
> mult <- sapply(c(11, 30, 34, 226, 1674, 12365), function(i)sum(1/(1:i)))
setNames(mult, paste(c('m =',rep('',5)), c(11, 30, 34, 226, 1674, 12365)))
m = 11 30 34 226 1674 12365
3.020 3.995 4.118 6.000 8.000 10.000
Check that for the example above, where $m$=34, the multiplier is 4.118
|
What are the practical differences between the Benjamini & Hochberg (1995) and the Benjamini & Yekut
p.adjust is not misciting for BY. The reference is to Theorem 1.3 (proof in Section 5 on p.1182) in the paper:
Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in mult
|
5,909
|
When and how to use standardized explanatory variables in linear regression
|
Although terminology is a contentious topic, I prefer to call "explanatory" variables, "predictor" variables.
When to standardise the predictors:
A lot of software for performing multiple linear regression will provide standardised coefficients which are equivalent to unstandardised coefficients where you manually standardise predictors and the response variable (of course, it sounds like you are talking about only standardising predictors).
My opinion is that standardisation is a useful tool for making regression equations more meaningful.
This is particularly true in cases where the metric of the variable lacks meaning to the person interpreting the regression equation (e.g., a psychological scale on an arbitrary metric).
It can also be used to facilitate comparability of the relative importance of predictor variables (although other more sophisticated approaches exist for assessing relative importance; see my post for a discussion).
In cases where the metric does have meaning to the person interpreting the regression equation, unstandardised coefficients are often more informative.
I also think that relying on standardised variables may take attention away from the fact that we have not thought about how to make the metric of a variable more meaningful to the reader.
Andrew Gelman has a fair bit to say on the topic.
See his page on standardisation for example and Gelman (2008, Stats Med, FREE PDF) in particular.
Prediction based on standarisation:
I would not use standardised regression coefficients for prediction.
You can always convert standardised coefficients to unstandardised coefficients if you know the mean and standard deviation of the predictor variable in the original sample.
|
When and how to use standardized explanatory variables in linear regression
|
Although terminology is a contentious topic, I prefer to call "explanatory" variables, "predictor" variables.
When to standardise the predictors:
A lot of software for performing multiple linear regr
|
When and how to use standardized explanatory variables in linear regression
Although terminology is a contentious topic, I prefer to call "explanatory" variables, "predictor" variables.
When to standardise the predictors:
A lot of software for performing multiple linear regression will provide standardised coefficients which are equivalent to unstandardised coefficients where you manually standardise predictors and the response variable (of course, it sounds like you are talking about only standardising predictors).
My opinion is that standardisation is a useful tool for making regression equations more meaningful.
This is particularly true in cases where the metric of the variable lacks meaning to the person interpreting the regression equation (e.g., a psychological scale on an arbitrary metric).
It can also be used to facilitate comparability of the relative importance of predictor variables (although other more sophisticated approaches exist for assessing relative importance; see my post for a discussion).
In cases where the metric does have meaning to the person interpreting the regression equation, unstandardised coefficients are often more informative.
I also think that relying on standardised variables may take attention away from the fact that we have not thought about how to make the metric of a variable more meaningful to the reader.
Andrew Gelman has a fair bit to say on the topic.
See his page on standardisation for example and Gelman (2008, Stats Med, FREE PDF) in particular.
Prediction based on standarisation:
I would not use standardised regression coefficients for prediction.
You can always convert standardised coefficients to unstandardised coefficients if you know the mean and standard deviation of the predictor variable in the original sample.
|
When and how to use standardized explanatory variables in linear regression
Although terminology is a contentious topic, I prefer to call "explanatory" variables, "predictor" variables.
When to standardise the predictors:
A lot of software for performing multiple linear regr
|
5,910
|
Intuitive explanation of convergence in distribution and convergence in probability
|
How can a random number converge to a constant?
Let's say you have $N$ balls in the box. You can pick them one by one. After you picked $k$ balls, I ask you: what's the mean weight of the balls in the box? Your best answer would be $\bar x_k=\frac{1}{k}\sum_{i=1}^kx_i$. You realize that $\bar x_k$ itself is the random value? It depends on which $k$ balls you picked first.
Now, if you keep pulling the balls, at some point there'll be no balls left in the box, and you'll get $\bar x_N\equiv\mu$.
So, what we've got is the random sequence $$\bar x_1,\dots,\bar x_k, \dots, \bar x_N ,\bar x_N, \bar x_N, \dots $$ which converges to the constant $\bar x_N = \mu$. So, the key to understanding your issue with convergence in probability is realizing that we're talking about a sequence of random variables, constructed in a certain way.
Next, let's get uniform random numbers $e_1,e_2,\dots$, where $e_i\in [0,1]$. Let's look at the random sequence $\xi_1,\xi_2,\dots$, where $\xi_k=\frac{1}{\sqrt{\frac{k}{12}}}\sum_{i=1}^k \left(e_i- \frac{1}{2} \right)$. The $\xi_k$ is a random value, because all its terms are random values. We can't predict what is $\xi_k$ going to be. However, it turns out that we can claim that the probability distributions of $\xi_k$ will look more and more like the standard normal $\mathcal{N}(0,1)$. That's how the distributions converge.
|
Intuitive explanation of convergence in distribution and convergence in probability
|
How can a random number converge to a constant?
Let's say you have $N$ balls in the box. You can pick them one by one. After you picked $k$ balls, I ask you: what's the mean weight of the balls in th
|
Intuitive explanation of convergence in distribution and convergence in probability
How can a random number converge to a constant?
Let's say you have $N$ balls in the box. You can pick them one by one. After you picked $k$ balls, I ask you: what's the mean weight of the balls in the box? Your best answer would be $\bar x_k=\frac{1}{k}\sum_{i=1}^kx_i$. You realize that $\bar x_k$ itself is the random value? It depends on which $k$ balls you picked first.
Now, if you keep pulling the balls, at some point there'll be no balls left in the box, and you'll get $\bar x_N\equiv\mu$.
So, what we've got is the random sequence $$\bar x_1,\dots,\bar x_k, \dots, \bar x_N ,\bar x_N, \bar x_N, \dots $$ which converges to the constant $\bar x_N = \mu$. So, the key to understanding your issue with convergence in probability is realizing that we're talking about a sequence of random variables, constructed in a certain way.
Next, let's get uniform random numbers $e_1,e_2,\dots$, where $e_i\in [0,1]$. Let's look at the random sequence $\xi_1,\xi_2,\dots$, where $\xi_k=\frac{1}{\sqrt{\frac{k}{12}}}\sum_{i=1}^k \left(e_i- \frac{1}{2} \right)$. The $\xi_k$ is a random value, because all its terms are random values. We can't predict what is $\xi_k$ going to be. However, it turns out that we can claim that the probability distributions of $\xi_k$ will look more and more like the standard normal $\mathcal{N}(0,1)$. That's how the distributions converge.
|
Intuitive explanation of convergence in distribution and convergence in probability
How can a random number converge to a constant?
Let's say you have $N$ balls in the box. You can pick them one by one. After you picked $k$ balls, I ask you: what's the mean weight of the balls in th
|
5,911
|
Intuitive explanation of convergence in distribution and convergence in probability
|
It's not clear how much intuition a reader of this question might have about convergence of anything, let alone of random variables, so I will write as if the answer is "very little". Something that might help: rather than thinking "how can a random variable converge", ask how a sequence of random variables can converge. In other words, it's not just a single variable, but an (infinitely long!) list of variables, and ones later in the list are getting closer and closer to ... something. Perhaps a single number, perhaps an entire distribution. To develop an intuition, we need to work out what "closer and closer" means. The reason there are so many modes of convergence for random variables is that there are several types of "closeness" I might measure.
First let's recap convergence of sequences of real numbers. In $\mathbb{R}$ we can use Euclidean distance $|x-y|$ to measure how close $x$ is to $y$. Consider $x_n = \frac{n+1}{n} = 1 + \frac{1}{n}$. Then the sequence $x_1, \, x_2, \, x_3, \dots$ starts $2, \frac{3}{2}, \frac{4}{3}, \frac{5}{4}, \frac{6}{5}, \dots$ and I claim that $x_n$ converges to $1$. Clearly $x_n$ is getting closer to $1$, but it's also true that $x_n$ is getting closer to $0.9$. For instance, from the third term onwards, the terms in the sequence are a distance of $0.5$ or less from $0.9$. What matters is that they are getting arbitrarily close to $1$, but not to $0.9$. No terms in the sequence ever come within $0.05$ of $0.9$, let alone stay that close for subsequent terms. In contrast $x_{20}=1.05$ so is $0.05$ from $1$, and all subsequent terms are within $0.05$ of $1$, as shown below.
I could be stricter and demand terms get and stay within $0.001$ of $1$, and in this example I find this is true for the terms $N=1000$ and onwards. Moreover I could choose any fixed threshold of closeness $\epsilon$, no matter how strict (except for $\epsilon = 0$, i.e. the term actually being $1$), and eventually the condition $|x_n - x| \lt \epsilon$ will be satisfied for all terms beyond a certain term (symbolically: for $n \gt N$, where the value of $N$ depends on how strict an $\epsilon$ I chose). For more sophisticated examples, note that I'm not necessarily interested in the first time that the condition is met - the next term might not obey the condition, and that's fine, so long as I can find a term further along the sequence for which the condition is met and stays met for all later terms. I illustrate this for $x_n = 1 + \frac{\sin(n)}{n}$, which also converges to $1$, with $\epsilon=0.05$ shaded again.
Now consider $X \sim U(0,1)$ and the sequence of random variables $X_n = \left(1 + \frac{1}{n}\right) X$. This is a sequence of RVs with $X_1 = 2X$, $X_2 = \frac{3}{2} X$, $X_3 = \frac{4}{3} X$ and so on. In what senses can we say this is getting closer to $X$ itself?
Since $X_n$ and $X$ are distributions, not just single numbers, the condition $|X_n - X| \lt \epsilon$ is now an event: even for a fixed $n$ and $\epsilon$ this might or might not occur. Considering the probability of it being met gives rise to convergence in probability. For $X_n \overset{p}{\to} X$ we want the complementary probability $P(|X_n - X| \ge \epsilon)$ - intuitively, the probability that $X_n$ is somewhat different (by at least $\epsilon$) to $X$ - to become arbitrarily small, for sufficiently large $n$. For a fixed $\epsilon$ this gives rise to a whole sequence of probabilities, $P(|X_1 - X| \ge \epsilon)$, $P(|X_2 - X| \ge \epsilon)$, $P(|X_3 - X| \ge \epsilon)$, $\dots$ and if this sequence of probabilities converges to zero (as happens in our example) then we say $X_n$ converges in probability to $X$. Note that probability limits are often constants: for instance in regressions in econometrics, we see $\text{plim}(\hat \beta) = \beta$ as we increase the sample size $n$. But here $\text{plim}(X_n) = X \sim U(0,1)$. Effectively, convergence in probability means that it's unlikely that $X_n$ and $X$ will differ by much on a particular realisation - and I can make the probability of $X_n$ and $X$ being further than $\epsilon$ apart as small as I like, so long as I pick a sufficiently large $n$.
A different sense in which $X_n$ becomes closer to $X$ is that their distributions look more and more alike. I can measure this by comparing their CDFs. In particular, pick some $x$ at which $F_X(x) = P(X \leq x)$ is continuous (in our example $X \sim U(0,1)$ so its CDF is continuous everywhere and any $x$ will do) and evaluate the CDFs of the sequence of $X_n$s there. This produces another sequence of probabilities, $P(X_1 \leq x)$, $P(X_2 \leq x)$, $P(X_3 \leq x)$, $\dots$ and this sequence converges to $P(X \leq x)$. The CDFs evaluated at $x$ for each of the $X_n$ become arbitrarily close to the CDF of $X$ evaluated at $x$. If this result holds true regardless of which $x$ we picked, then $X_n$ converges to $X$ in distribution. It turns out this happens here, and we should not be surprised since convergence in probability to $X$ implies convergence in distribution to $X$. Note that it can't be the case that $X_n$ converges in probability to a particular non-degenerate distribution, but converges in distribution to a constant. (Which was possibly the point of confusion in the original question? But note a clarification later.)
For a different example, let $Y_n \sim U(1, \frac{n+1}{n})$. We now have a sequence of RVs, $Y_1 \sim U(1,2)$, $Y_2 \sim U(1,\frac{3}{2})$, $Y_3 \sim U(1,\frac{4}{3})$, $\dots$ and it is clear that the probability distribution is degenerating to a spike at $y=1$. Now consider the degenerate distribution $Y=1$, by which I mean $P(Y=1)=1$. It is easy to see that for any $\epsilon \gt 0$, the sequence $P(|Y_n - Y| \ge \epsilon)$ converges to zero so that $Y_n$ converges to $Y$ in probability. As a consequence, $Y_n$ must also converge to $Y$ in distribution, which we can confirm by considering the CDFs. Since the CDF $F_Y(y)$ of $Y$ is discontinuous at $y=1$ we need not consider the CDFs evaluated at that value, but for the CDFs evaluated at any other $y$ we can see that the sequence $P(Y_1 \leq y)$, $P(Y_2 \leq y)$, $P(Y_3 \leq y)$, $\dots$ converges to $P(Y \leq y)$ which is zero for $y \lt 1$ and one for $y \gt 1$. This time, because the sequence of RVs converged in probability to a constant, it converged in distribution to a constant also.
Some final clarifications:
Although convergence in probability implies convergence in distribution, the converse is false in general. Just because two variables have the same distribution, doesn't mean they have to be likely to be to close to each other. For a trivial example, take $X\sim\text{Bernouilli}(0.5)$ and $Y=1-X$. Then $X$ and $Y$ both have exactly the same distribution (a 50% chance each of being zero or one) and the sequence $X_n=X$ i.e. the sequence going $X,X,X,X,\dots$ trivially converges in distribution to $Y$ (the CDF at any position in the sequence is the same as the CDF of $Y$). But $Y$ and $X$ are always one apart, so $P(|X_n - Y| \ge 0.5)=1$ so does not tend to zero, so $X_n$ does not converge to $Y$ in probability. However, if there is convergence in distribution to a constant, then that implies convergence in probability to that constant (intuitively, further in the sequence it will become unlikely to be far from that constant).
As my examples make clear, convergence in probability can be to a constant but doesn't have to be; convergence in distribution might also be to a constant. It isn't possible to converge in probability to a constant but converge in distribution to a particular non-degenerate distribution, or vice versa.
Is it possible you've seen an example where, for instance, you were told a sequence $X_n$ converged another sequence $Y_n$? You may not have realised it was a sequence, but the give-away would be if it was a distribution that also depended on $n$. It might be that both sequences converge to a constant (i.e. degenerate distribution). Your question suggests you're wondering how a particular sequence of RVs could converge both to a constant and to a distribution; I wonder if this is the scenario you're describing.
My current explanation is not very "intuitive" - I was intending to make the intuition graphical, but haven't had time to add the graphs for the RVs yet.
|
Intuitive explanation of convergence in distribution and convergence in probability
|
It's not clear how much intuition a reader of this question might have about convergence of anything, let alone of random variables, so I will write as if the answer is "very little". Something that m
|
Intuitive explanation of convergence in distribution and convergence in probability
It's not clear how much intuition a reader of this question might have about convergence of anything, let alone of random variables, so I will write as if the answer is "very little". Something that might help: rather than thinking "how can a random variable converge", ask how a sequence of random variables can converge. In other words, it's not just a single variable, but an (infinitely long!) list of variables, and ones later in the list are getting closer and closer to ... something. Perhaps a single number, perhaps an entire distribution. To develop an intuition, we need to work out what "closer and closer" means. The reason there are so many modes of convergence for random variables is that there are several types of "closeness" I might measure.
First let's recap convergence of sequences of real numbers. In $\mathbb{R}$ we can use Euclidean distance $|x-y|$ to measure how close $x$ is to $y$. Consider $x_n = \frac{n+1}{n} = 1 + \frac{1}{n}$. Then the sequence $x_1, \, x_2, \, x_3, \dots$ starts $2, \frac{3}{2}, \frac{4}{3}, \frac{5}{4}, \frac{6}{5}, \dots$ and I claim that $x_n$ converges to $1$. Clearly $x_n$ is getting closer to $1$, but it's also true that $x_n$ is getting closer to $0.9$. For instance, from the third term onwards, the terms in the sequence are a distance of $0.5$ or less from $0.9$. What matters is that they are getting arbitrarily close to $1$, but not to $0.9$. No terms in the sequence ever come within $0.05$ of $0.9$, let alone stay that close for subsequent terms. In contrast $x_{20}=1.05$ so is $0.05$ from $1$, and all subsequent terms are within $0.05$ of $1$, as shown below.
I could be stricter and demand terms get and stay within $0.001$ of $1$, and in this example I find this is true for the terms $N=1000$ and onwards. Moreover I could choose any fixed threshold of closeness $\epsilon$, no matter how strict (except for $\epsilon = 0$, i.e. the term actually being $1$), and eventually the condition $|x_n - x| \lt \epsilon$ will be satisfied for all terms beyond a certain term (symbolically: for $n \gt N$, where the value of $N$ depends on how strict an $\epsilon$ I chose). For more sophisticated examples, note that I'm not necessarily interested in the first time that the condition is met - the next term might not obey the condition, and that's fine, so long as I can find a term further along the sequence for which the condition is met and stays met for all later terms. I illustrate this for $x_n = 1 + \frac{\sin(n)}{n}$, which also converges to $1$, with $\epsilon=0.05$ shaded again.
Now consider $X \sim U(0,1)$ and the sequence of random variables $X_n = \left(1 + \frac{1}{n}\right) X$. This is a sequence of RVs with $X_1 = 2X$, $X_2 = \frac{3}{2} X$, $X_3 = \frac{4}{3} X$ and so on. In what senses can we say this is getting closer to $X$ itself?
Since $X_n$ and $X$ are distributions, not just single numbers, the condition $|X_n - X| \lt \epsilon$ is now an event: even for a fixed $n$ and $\epsilon$ this might or might not occur. Considering the probability of it being met gives rise to convergence in probability. For $X_n \overset{p}{\to} X$ we want the complementary probability $P(|X_n - X| \ge \epsilon)$ - intuitively, the probability that $X_n$ is somewhat different (by at least $\epsilon$) to $X$ - to become arbitrarily small, for sufficiently large $n$. For a fixed $\epsilon$ this gives rise to a whole sequence of probabilities, $P(|X_1 - X| \ge \epsilon)$, $P(|X_2 - X| \ge \epsilon)$, $P(|X_3 - X| \ge \epsilon)$, $\dots$ and if this sequence of probabilities converges to zero (as happens in our example) then we say $X_n$ converges in probability to $X$. Note that probability limits are often constants: for instance in regressions in econometrics, we see $\text{plim}(\hat \beta) = \beta$ as we increase the sample size $n$. But here $\text{plim}(X_n) = X \sim U(0,1)$. Effectively, convergence in probability means that it's unlikely that $X_n$ and $X$ will differ by much on a particular realisation - and I can make the probability of $X_n$ and $X$ being further than $\epsilon$ apart as small as I like, so long as I pick a sufficiently large $n$.
A different sense in which $X_n$ becomes closer to $X$ is that their distributions look more and more alike. I can measure this by comparing their CDFs. In particular, pick some $x$ at which $F_X(x) = P(X \leq x)$ is continuous (in our example $X \sim U(0,1)$ so its CDF is continuous everywhere and any $x$ will do) and evaluate the CDFs of the sequence of $X_n$s there. This produces another sequence of probabilities, $P(X_1 \leq x)$, $P(X_2 \leq x)$, $P(X_3 \leq x)$, $\dots$ and this sequence converges to $P(X \leq x)$. The CDFs evaluated at $x$ for each of the $X_n$ become arbitrarily close to the CDF of $X$ evaluated at $x$. If this result holds true regardless of which $x$ we picked, then $X_n$ converges to $X$ in distribution. It turns out this happens here, and we should not be surprised since convergence in probability to $X$ implies convergence in distribution to $X$. Note that it can't be the case that $X_n$ converges in probability to a particular non-degenerate distribution, but converges in distribution to a constant. (Which was possibly the point of confusion in the original question? But note a clarification later.)
For a different example, let $Y_n \sim U(1, \frac{n+1}{n})$. We now have a sequence of RVs, $Y_1 \sim U(1,2)$, $Y_2 \sim U(1,\frac{3}{2})$, $Y_3 \sim U(1,\frac{4}{3})$, $\dots$ and it is clear that the probability distribution is degenerating to a spike at $y=1$. Now consider the degenerate distribution $Y=1$, by which I mean $P(Y=1)=1$. It is easy to see that for any $\epsilon \gt 0$, the sequence $P(|Y_n - Y| \ge \epsilon)$ converges to zero so that $Y_n$ converges to $Y$ in probability. As a consequence, $Y_n$ must also converge to $Y$ in distribution, which we can confirm by considering the CDFs. Since the CDF $F_Y(y)$ of $Y$ is discontinuous at $y=1$ we need not consider the CDFs evaluated at that value, but for the CDFs evaluated at any other $y$ we can see that the sequence $P(Y_1 \leq y)$, $P(Y_2 \leq y)$, $P(Y_3 \leq y)$, $\dots$ converges to $P(Y \leq y)$ which is zero for $y \lt 1$ and one for $y \gt 1$. This time, because the sequence of RVs converged in probability to a constant, it converged in distribution to a constant also.
Some final clarifications:
Although convergence in probability implies convergence in distribution, the converse is false in general. Just because two variables have the same distribution, doesn't mean they have to be likely to be to close to each other. For a trivial example, take $X\sim\text{Bernouilli}(0.5)$ and $Y=1-X$. Then $X$ and $Y$ both have exactly the same distribution (a 50% chance each of being zero or one) and the sequence $X_n=X$ i.e. the sequence going $X,X,X,X,\dots$ trivially converges in distribution to $Y$ (the CDF at any position in the sequence is the same as the CDF of $Y$). But $Y$ and $X$ are always one apart, so $P(|X_n - Y| \ge 0.5)=1$ so does not tend to zero, so $X_n$ does not converge to $Y$ in probability. However, if there is convergence in distribution to a constant, then that implies convergence in probability to that constant (intuitively, further in the sequence it will become unlikely to be far from that constant).
As my examples make clear, convergence in probability can be to a constant but doesn't have to be; convergence in distribution might also be to a constant. It isn't possible to converge in probability to a constant but converge in distribution to a particular non-degenerate distribution, or vice versa.
Is it possible you've seen an example where, for instance, you were told a sequence $X_n$ converged another sequence $Y_n$? You may not have realised it was a sequence, but the give-away would be if it was a distribution that also depended on $n$. It might be that both sequences converge to a constant (i.e. degenerate distribution). Your question suggests you're wondering how a particular sequence of RVs could converge both to a constant and to a distribution; I wonder if this is the scenario you're describing.
My current explanation is not very "intuitive" - I was intending to make the intuition graphical, but haven't had time to add the graphs for the RVs yet.
|
Intuitive explanation of convergence in distribution and convergence in probability
It's not clear how much intuition a reader of this question might have about convergence of anything, let alone of random variables, so I will write as if the answer is "very little". Something that m
|
5,912
|
Intuitive explanation of convergence in distribution and convergence in probability
|
In my mind, the existing answers all convey useful points, but they do not make an important distinction clear between the two modes of convergence.
Let $X_n$, $n=1,2,\dots$, and $Y$ be random variables. For intuition, imagine $X_n$ are assigned their values by some random experiment that changes a little bit for each $n$, giving an infinite sequence of random variables, and suppose $Y$ gets its value assigned by some other random experiment.
If $X_n\overset{p}{\to}Y$, we have, by definition, that the probability of $Y$ and $X_n$ differing from each other by some arbitrarily small amount approaches zero as $n\to\infty$, for as small amount as you like. Loosely speaking, far out in the sequence of $X_n$, we are confident $X_n$ and $Y$ will take values very close to each other.
On the other hand, if we only have convergence in distribution and not convergence in probability, then we know that for large $n$, $P(X_n\leq x)$ is almost the same as $P(Y\leq x)$, for almost any $x$. Note that this does not say anything about how close the values of $X_n$ and $Y$ are to each other. For example, if $Y\sim N(0, 10^{10})$, and thus $X_n$ is also distributed pretty much like this for large $n$, then it seems intuitively likely that the values of $X_n$ and $Y$ will differ by quite a lot in any given observation. After all, if there is no restriction on them other than convergence in distribution, they may very well for all practical reasons be independent $N(0,10^{10})$ variables.
(In some cases it may not even make sense to compare $X_n$ and $Y$, maybe they're not even defined on the same probability space. This is a more technical note, though.)
|
Intuitive explanation of convergence in distribution and convergence in probability
|
In my mind, the existing answers all convey useful points, but they do not make an important distinction clear between the two modes of convergence.
Let $X_n$, $n=1,2,\dots$, and $Y$ be random variabl
|
Intuitive explanation of convergence in distribution and convergence in probability
In my mind, the existing answers all convey useful points, but they do not make an important distinction clear between the two modes of convergence.
Let $X_n$, $n=1,2,\dots$, and $Y$ be random variables. For intuition, imagine $X_n$ are assigned their values by some random experiment that changes a little bit for each $n$, giving an infinite sequence of random variables, and suppose $Y$ gets its value assigned by some other random experiment.
If $X_n\overset{p}{\to}Y$, we have, by definition, that the probability of $Y$ and $X_n$ differing from each other by some arbitrarily small amount approaches zero as $n\to\infty$, for as small amount as you like. Loosely speaking, far out in the sequence of $X_n$, we are confident $X_n$ and $Y$ will take values very close to each other.
On the other hand, if we only have convergence in distribution and not convergence in probability, then we know that for large $n$, $P(X_n\leq x)$ is almost the same as $P(Y\leq x)$, for almost any $x$. Note that this does not say anything about how close the values of $X_n$ and $Y$ are to each other. For example, if $Y\sim N(0, 10^{10})$, and thus $X_n$ is also distributed pretty much like this for large $n$, then it seems intuitively likely that the values of $X_n$ and $Y$ will differ by quite a lot in any given observation. After all, if there is no restriction on them other than convergence in distribution, they may very well for all practical reasons be independent $N(0,10^{10})$ variables.
(In some cases it may not even make sense to compare $X_n$ and $Y$, maybe they're not even defined on the same probability space. This is a more technical note, though.)
|
Intuitive explanation of convergence in distribution and convergence in probability
In my mind, the existing answers all convey useful points, but they do not make an important distinction clear between the two modes of convergence.
Let $X_n$, $n=1,2,\dots$, and $Y$ be random variabl
|
5,913
|
Intuitive explanation of convergence in distribution and convergence in probability
|
What I don't understand is how can a random variable converge to a
single number but also converge to a distribution?
If you're learning econometrics, you're probably wondering about this in the context of a regression model. It converges to a degenerate distribution, to a constant. But something else does have a non-degenerate limiting distribution.
$\hat{\beta}_n$ converges in probability to $\beta$ if the necessary assumptions are met. This means that by choosing a large enough sample size $N$, the estimator will be as close as we want to the true parameter, with the probability of it being farther away as small as we want. If you think of plotting the histogram of $\hat{\beta}_n$ for various $n$, it will eventually be just a spike centered on $\beta$.
In what sense does $\hat{\beta}_n$ converge in distribution? It also converges to a constant. Not to a normally distributed random variable. If you compute the variance of $\hat{\beta}_n$ you see that it shrinks with $n$. So eventually it will go to zero in large enough $n$, which is why the estimator goes to a constant. What does converge to a normally distributed random variable is
$\sqrt{n}(\hat{\beta}_n - \beta)$. If you take the variance of that you'll see that it does not shrink (nor grow) with $n$. In very large samples, this will be approximately $N(0, \sigma^2)$ under standard assumptions. We can then use this approximation to approximate the distribution of $\hat{\beta}_n$ in that large sample.
But you are right that the limiting distribution of $\hat{\beta}_n$ is also a constant.
|
Intuitive explanation of convergence in distribution and convergence in probability
|
What I don't understand is how can a random variable converge to a
single number but also converge to a distribution?
If you're learning econometrics, you're probably wondering about this in the co
|
Intuitive explanation of convergence in distribution and convergence in probability
What I don't understand is how can a random variable converge to a
single number but also converge to a distribution?
If you're learning econometrics, you're probably wondering about this in the context of a regression model. It converges to a degenerate distribution, to a constant. But something else does have a non-degenerate limiting distribution.
$\hat{\beta}_n$ converges in probability to $\beta$ if the necessary assumptions are met. This means that by choosing a large enough sample size $N$, the estimator will be as close as we want to the true parameter, with the probability of it being farther away as small as we want. If you think of plotting the histogram of $\hat{\beta}_n$ for various $n$, it will eventually be just a spike centered on $\beta$.
In what sense does $\hat{\beta}_n$ converge in distribution? It also converges to a constant. Not to a normally distributed random variable. If you compute the variance of $\hat{\beta}_n$ you see that it shrinks with $n$. So eventually it will go to zero in large enough $n$, which is why the estimator goes to a constant. What does converge to a normally distributed random variable is
$\sqrt{n}(\hat{\beta}_n - \beta)$. If you take the variance of that you'll see that it does not shrink (nor grow) with $n$. In very large samples, this will be approximately $N(0, \sigma^2)$ under standard assumptions. We can then use this approximation to approximate the distribution of $\hat{\beta}_n$ in that large sample.
But you are right that the limiting distribution of $\hat{\beta}_n$ is also a constant.
|
Intuitive explanation of convergence in distribution and convergence in probability
What I don't understand is how can a random variable converge to a
single number but also converge to a distribution?
If you're learning econometrics, you're probably wondering about this in the co
|
5,914
|
Intuitive explanation of convergence in distribution and convergence in probability
|
Let me try to give a very short answer, using some very simple examples.
Convergence in distribution
Let $X_n \sim N\left(\frac{1}{n}, 1 \right)$, for all n, then $X_n$ converges to $X \sim N(0, 1)$ in distribution. However, the randomness in the realization of $X_n$ does not change over time. If we have to predict the value of $X_n$, the expectation of our error does not change over time.
Convergence in probability
Now, consider the random variable $Y_n$ that takes value $0$ with probability $1-\frac{1}{n}$ and $1$ otherwise. As $n$ goes to infinity, we are more and more sure that $Y_n$ will equal $0$. Hence, we say $Y_n$ converges in probability to $0$. Note that this also implies $Y_n$ converges in distribution to $0$.
|
Intuitive explanation of convergence in distribution and convergence in probability
|
Let me try to give a very short answer, using some very simple examples.
Convergence in distribution
Let $X_n \sim N\left(\frac{1}{n}, 1 \right)$, for all n, then $X_n$ converges to $X \sim N(0, 1)$ i
|
Intuitive explanation of convergence in distribution and convergence in probability
Let me try to give a very short answer, using some very simple examples.
Convergence in distribution
Let $X_n \sim N\left(\frac{1}{n}, 1 \right)$, for all n, then $X_n$ converges to $X \sim N(0, 1)$ in distribution. However, the randomness in the realization of $X_n$ does not change over time. If we have to predict the value of $X_n$, the expectation of our error does not change over time.
Convergence in probability
Now, consider the random variable $Y_n$ that takes value $0$ with probability $1-\frac{1}{n}$ and $1$ otherwise. As $n$ goes to infinity, we are more and more sure that $Y_n$ will equal $0$. Hence, we say $Y_n$ converges in probability to $0$. Note that this also implies $Y_n$ converges in distribution to $0$.
|
Intuitive explanation of convergence in distribution and convergence in probability
Let me try to give a very short answer, using some very simple examples.
Convergence in distribution
Let $X_n \sim N\left(\frac{1}{n}, 1 \right)$, for all n, then $X_n$ converges to $X \sim N(0, 1)$ i
|
5,915
|
Intuitive explanation of convergence in distribution and convergence in probability
|
Convergence in probability to a constant: a larger and larger share of the PDF get restrained to a band around a fixed value as the sequence progress (note that nothing is said on the values whose probabilities are not yet in the band as the sequence progress)
Convergence in distribution: consider the PDF as a function whose outputs (the values of the PDF) change with the progression of the sequence, so that the function itself, let's say after 10 iteration of the sequence is quite a different beast than the function after 9 iterations, but that eventually stabilise around a given function as the sequence progress. Note that the output is still a function (PDF), not necessarily a single value. The convergence is about the function itself, not about one particular value.
If a sequence of random variable converges in probabilities to a constant, then it also converges in probabilities to a "degenerated" PDF.
|
Intuitive explanation of convergence in distribution and convergence in probability
|
Convergence in probability to a constant: a larger and larger share of the PDF get restrained to a band around a fixed value as the sequence progress (note that nothing is said on the values whose pro
|
Intuitive explanation of convergence in distribution and convergence in probability
Convergence in probability to a constant: a larger and larger share of the PDF get restrained to a band around a fixed value as the sequence progress (note that nothing is said on the values whose probabilities are not yet in the band as the sequence progress)
Convergence in distribution: consider the PDF as a function whose outputs (the values of the PDF) change with the progression of the sequence, so that the function itself, let's say after 10 iteration of the sequence is quite a different beast than the function after 9 iterations, but that eventually stabilise around a given function as the sequence progress. Note that the output is still a function (PDF), not necessarily a single value. The convergence is about the function itself, not about one particular value.
If a sequence of random variable converges in probabilities to a constant, then it also converges in probabilities to a "degenerated" PDF.
|
Intuitive explanation of convergence in distribution and convergence in probability
Convergence in probability to a constant: a larger and larger share of the PDF get restrained to a band around a fixed value as the sequence progress (note that nothing is said on the values whose pro
|
5,916
|
Why do I get a 100% accuracy decision tree?
|
Your test sample is a subset of your training sample:
x_train = x[0:2635]
x_test = x[0:658]
y_train = y[0:2635]
y_test = y[0:658]
This means that you evaluate your model on a part of your training data, i.e., you are doing in-sample evaluation. In-sample accuracy is a notoriously poor indicator to out-of-sample accuracy, and maximizing in-sample accuracy can lead to overfitting. Therefore, one should always evaluate a model on a true holdout sample that is completely independent of the training data.
Make sure your training and your testing data are disjoint, e.g.,
x_train = x[659:2635]
x_test = x[0:658]
y_train = y[659:2635]
y_test = y[0:658]
|
Why do I get a 100% accuracy decision tree?
|
Your test sample is a subset of your training sample:
x_train = x[0:2635]
x_test = x[0:658]
y_train = y[0:2635]
y_test = y[0:658]
This means that you evaluate your model on a part of your training da
|
Why do I get a 100% accuracy decision tree?
Your test sample is a subset of your training sample:
x_train = x[0:2635]
x_test = x[0:658]
y_train = y[0:2635]
y_test = y[0:658]
This means that you evaluate your model on a part of your training data, i.e., you are doing in-sample evaluation. In-sample accuracy is a notoriously poor indicator to out-of-sample accuracy, and maximizing in-sample accuracy can lead to overfitting. Therefore, one should always evaluate a model on a true holdout sample that is completely independent of the training data.
Make sure your training and your testing data are disjoint, e.g.,
x_train = x[659:2635]
x_test = x[0:658]
y_train = y[659:2635]
y_test = y[0:658]
|
Why do I get a 100% accuracy decision tree?
Your test sample is a subset of your training sample:
x_train = x[0:2635]
x_test = x[0:658]
y_train = y[0:2635]
y_test = y[0:658]
This means that you evaluate your model on a part of your training da
|
5,917
|
Why do I get a 100% accuracy decision tree?
|
You are getting 100% accuracy because you are using a part of training data for testing. At the time of training, decision tree gained the knowledge about that data, and now if you give same data to predict it will give exactly same value. That's why decision tree producing correct results every time.
For any machine learning problem, training and test dataset should be separated. Accuracy of the model can be determined only when we examine how it is predicting for unknown values.
|
Why do I get a 100% accuracy decision tree?
|
You are getting 100% accuracy because you are using a part of training data for testing. At the time of training, decision tree gained the knowledge about that data, and now if you give same data to p
|
Why do I get a 100% accuracy decision tree?
You are getting 100% accuracy because you are using a part of training data for testing. At the time of training, decision tree gained the knowledge about that data, and now if you give same data to predict it will give exactly same value. That's why decision tree producing correct results every time.
For any machine learning problem, training and test dataset should be separated. Accuracy of the model can be determined only when we examine how it is predicting for unknown values.
|
Why do I get a 100% accuracy decision tree?
You are getting 100% accuracy because you are using a part of training data for testing. At the time of training, decision tree gained the knowledge about that data, and now if you give same data to p
|
5,918
|
Why do I get a 100% accuracy decision tree?
|
As other users have told you, you are using as test set a subset of the train set, and a decision tree is very prone to overfitting.
You almost had it when you imported
from sklearn.cross_validation import train_test_split
But then you don't use the function. You should have done:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33)
to get random train and test sets
|
Why do I get a 100% accuracy decision tree?
|
As other users have told you, you are using as test set a subset of the train set, and a decision tree is very prone to overfitting.
You almost had it when you imported
from sklearn.cross_validation
|
Why do I get a 100% accuracy decision tree?
As other users have told you, you are using as test set a subset of the train set, and a decision tree is very prone to overfitting.
You almost had it when you imported
from sklearn.cross_validation import train_test_split
But then you don't use the function. You should have done:
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33)
to get random train and test sets
|
Why do I get a 100% accuracy decision tree?
As other users have told you, you are using as test set a subset of the train set, and a decision tree is very prone to overfitting.
You almost had it when you imported
from sklearn.cross_validation
|
5,919
|
Why do I get a 100% accuracy decision tree?
|
As pointed by @Stephan Kolassa and @Sanjay Chandlekar, this is due to the fact that your test sample is a subset of your training sample.
However, for the selection of those samples, random sampling would be more appropriate to ensure that both samples are representative. Depending on your data structure, you might also consider stratified random sampling.
I'm not fluent in Python but any statistical software should allow random sampling; some hints are also available on SO.
|
Why do I get a 100% accuracy decision tree?
|
As pointed by @Stephan Kolassa and @Sanjay Chandlekar, this is due to the fact that your test sample is a subset of your training sample.
However, for the selection of those samples, random sampling w
|
Why do I get a 100% accuracy decision tree?
As pointed by @Stephan Kolassa and @Sanjay Chandlekar, this is due to the fact that your test sample is a subset of your training sample.
However, for the selection of those samples, random sampling would be more appropriate to ensure that both samples are representative. Depending on your data structure, you might also consider stratified random sampling.
I'm not fluent in Python but any statistical software should allow random sampling; some hints are also available on SO.
|
Why do I get a 100% accuracy decision tree?
As pointed by @Stephan Kolassa and @Sanjay Chandlekar, this is due to the fact that your test sample is a subset of your training sample.
However, for the selection of those samples, random sampling w
|
5,920
|
Why do I get a 100% accuracy decision tree?
|
Just want to chime in on the intuition for why you need to split training and test samples explicitly.
If you have $n$ observations and make $n$ (actually, $n-1$, and possibly far less) splits on your data, you will perfectly classify every point (if this isn't immediately clear, write down some small-scale examples, e.g., $n = 2$, and convince yourself of this).
This is called overfitting because this splitting process is exceedingly unlikely to be predictive of data points that are relevant to your problem but which you haven't yet observed.
Of course the whole point of building these prediction platforms is to create tools which can be applied to never-before-seen data; splitting the data we have into training and test samples is an attempt to simulate this self-blinding and police our models from overfitting in the above fashion.
|
Why do I get a 100% accuracy decision tree?
|
Just want to chime in on the intuition for why you need to split training and test samples explicitly.
If you have $n$ observations and make $n$ (actually, $n-1$, and possibly far less) splits on your
|
Why do I get a 100% accuracy decision tree?
Just want to chime in on the intuition for why you need to split training and test samples explicitly.
If you have $n$ observations and make $n$ (actually, $n-1$, and possibly far less) splits on your data, you will perfectly classify every point (if this isn't immediately clear, write down some small-scale examples, e.g., $n = 2$, and convince yourself of this).
This is called overfitting because this splitting process is exceedingly unlikely to be predictive of data points that are relevant to your problem but which you haven't yet observed.
Of course the whole point of building these prediction platforms is to create tools which can be applied to never-before-seen data; splitting the data we have into training and test samples is an attempt to simulate this self-blinding and police our models from overfitting in the above fashion.
|
Why do I get a 100% accuracy decision tree?
Just want to chime in on the intuition for why you need to split training and test samples explicitly.
If you have $n$ observations and make $n$ (actually, $n-1$, and possibly far less) splits on your
|
5,921
|
Why do I get a 100% accuracy decision tree?
|
You don't need 100% accuracy to get overfitting. With enough buckets, you can get irreproducible results (something that would look terrible out-of-sample).
See this excerpted article from the Lancet, describing the method of chopping a sample into buckets which are far too fine.
Munchausen's Statistical Grid
It is also the basis for the XKCD cartoon Significant
Achieving 100% accuracy is just a short step away from finding a classifier which works deceptively well.
|
Why do I get a 100% accuracy decision tree?
|
You don't need 100% accuracy to get overfitting. With enough buckets, you can get irreproducible results (something that would look terrible out-of-sample).
See this excerpted article from the Lancet,
|
Why do I get a 100% accuracy decision tree?
You don't need 100% accuracy to get overfitting. With enough buckets, you can get irreproducible results (something that would look terrible out-of-sample).
See this excerpted article from the Lancet, describing the method of chopping a sample into buckets which are far too fine.
Munchausen's Statistical Grid
It is also the basis for the XKCD cartoon Significant
Achieving 100% accuracy is just a short step away from finding a classifier which works deceptively well.
|
Why do I get a 100% accuracy decision tree?
You don't need 100% accuracy to get overfitting. With enough buckets, you can get irreproducible results (something that would look terrible out-of-sample).
See this excerpted article from the Lancet,
|
5,922
|
Bound for the correlation of three random variables
|
The common correlation $\rho$ can have value $+1$ but not $-1$. If $\rho_{X,Y}= \rho_{X,Z}=-1$, then $\rho_{Y,Z}$ cannot equal $-1$ but is in fact $+1$.
The smallest value of the common correlation of three random variables
is $-\frac{1}{2}$. More generally,
the minimum common correlation of $n$ random variables is $-\frac{1}{n-1}$
when, regarded as vectors, they are at the vertices of a simplex (of dimension $n-1$)
in $n$-dimensional space.
Consider the variance of the sum of
$n$ unit variance random variables $X_i$. We have that
$$\begin{align*}
\operatorname{var}\left(\sum_{i=1}^n X_i\right)
&= \sum_{i=1}^n \operatorname{var}(X_i) + \sum_{i=1}^n\sum_{j\neq i}^n \operatorname{cov}(X_i,X_j)\\
&= n + \sum_{i=1}^n\sum_{j\neq i}^n \rho_{X_i,X_j}\\
&= n + n(n-1)\bar{\rho} \tag{1}
\end{align*}$$
where $\bar{\rho}$ is the average value of the $\binom{n}{2}$correlation coefficients.
But since $\operatorname{var}\left(\sum_i X_i\right) \geq 0$,
we readily get from
$(1)$ that
$$\bar{\rho} \geq -\frac{1}{n-1}.$$
So, the average value of a correlation coefficient is
at least $-\frac{1}{n-1}$. If all the correlation coefficients
have the same value $\rho$, then their average also
equals $\rho$ and so we have that
$$\rho \geq -\frac{1}{n-1}.$$
Is it possible to have random variables for which the common
correlation value $\rho$ equals
$-\frac{1}{n-1}$? Yes. Suppose that the $X_i$ are uncorrelated
unit-variance random variables and set
$Y_i = X_i - \frac{1}{n}\sum_{j=1}^n X_j = X_i -\bar{X}$. Then, $E[Y_i]=0$, while
$$\displaystyle \operatorname{var}(Y_i)
= \left(\frac{n-1}{n}\right)^2 + (n-1)\left(\frac{1}{n}\right)^2
= \frac{n-1}{n}$$
and
$$\operatorname{cov}(Y_i,Y_j) = -2\left(\frac{n-1}{n}\right)\left(\frac{1}{n}\right) +
(n-2)\left(\frac{1}{n}\right)^2 = -\frac{1}{n}$$
giving
$$\rho_{Y_i,Y_j}
= \frac{\operatorname{cov}(Y_i,Y_j)}{\sqrt{\operatorname{var}(Y_i)\operatorname{var}(Y_j)}}
=\frac{-1/n}{(n-1)/n}
= -\frac{1}{n-1}.$$
Thus the $Y_i$ are random variables achieving the minimum common
correlation value of $-\frac{1}{n-1}$. Note, incidentally, that
$\sum_i Y_i = 0$, and so, regarded as vectors, the random variables
lie in a $(n-1)$-dimensional hyperplane of $n$-dimensional space.
|
Bound for the correlation of three random variables
|
The common correlation $\rho$ can have value $+1$ but not $-1$. If $\rho_{X,Y}= \rho_{X,Z}=-1$, then $\rho_{Y,Z}$ cannot equal $-1$ but is in fact $+1$.
The smallest value of the common correlation of
|
Bound for the correlation of three random variables
The common correlation $\rho$ can have value $+1$ but not $-1$. If $\rho_{X,Y}= \rho_{X,Z}=-1$, then $\rho_{Y,Z}$ cannot equal $-1$ but is in fact $+1$.
The smallest value of the common correlation of three random variables
is $-\frac{1}{2}$. More generally,
the minimum common correlation of $n$ random variables is $-\frac{1}{n-1}$
when, regarded as vectors, they are at the vertices of a simplex (of dimension $n-1$)
in $n$-dimensional space.
Consider the variance of the sum of
$n$ unit variance random variables $X_i$. We have that
$$\begin{align*}
\operatorname{var}\left(\sum_{i=1}^n X_i\right)
&= \sum_{i=1}^n \operatorname{var}(X_i) + \sum_{i=1}^n\sum_{j\neq i}^n \operatorname{cov}(X_i,X_j)\\
&= n + \sum_{i=1}^n\sum_{j\neq i}^n \rho_{X_i,X_j}\\
&= n + n(n-1)\bar{\rho} \tag{1}
\end{align*}$$
where $\bar{\rho}$ is the average value of the $\binom{n}{2}$correlation coefficients.
But since $\operatorname{var}\left(\sum_i X_i\right) \geq 0$,
we readily get from
$(1)$ that
$$\bar{\rho} \geq -\frac{1}{n-1}.$$
So, the average value of a correlation coefficient is
at least $-\frac{1}{n-1}$. If all the correlation coefficients
have the same value $\rho$, then their average also
equals $\rho$ and so we have that
$$\rho \geq -\frac{1}{n-1}.$$
Is it possible to have random variables for which the common
correlation value $\rho$ equals
$-\frac{1}{n-1}$? Yes. Suppose that the $X_i$ are uncorrelated
unit-variance random variables and set
$Y_i = X_i - \frac{1}{n}\sum_{j=1}^n X_j = X_i -\bar{X}$. Then, $E[Y_i]=0$, while
$$\displaystyle \operatorname{var}(Y_i)
= \left(\frac{n-1}{n}\right)^2 + (n-1)\left(\frac{1}{n}\right)^2
= \frac{n-1}{n}$$
and
$$\operatorname{cov}(Y_i,Y_j) = -2\left(\frac{n-1}{n}\right)\left(\frac{1}{n}\right) +
(n-2)\left(\frac{1}{n}\right)^2 = -\frac{1}{n}$$
giving
$$\rho_{Y_i,Y_j}
= \frac{\operatorname{cov}(Y_i,Y_j)}{\sqrt{\operatorname{var}(Y_i)\operatorname{var}(Y_j)}}
=\frac{-1/n}{(n-1)/n}
= -\frac{1}{n-1}.$$
Thus the $Y_i$ are random variables achieving the minimum common
correlation value of $-\frac{1}{n-1}$. Note, incidentally, that
$\sum_i Y_i = 0$, and so, regarded as vectors, the random variables
lie in a $(n-1)$-dimensional hyperplane of $n$-dimensional space.
|
Bound for the correlation of three random variables
The common correlation $\rho$ can have value $+1$ but not $-1$. If $\rho_{X,Y}= \rho_{X,Z}=-1$, then $\rho_{Y,Z}$ cannot equal $-1$ but is in fact $+1$.
The smallest value of the common correlation of
|
5,923
|
Bound for the correlation of three random variables
|
The tightest possible bound is $-1/2 \le \rho \le 1$. All such values can actually appear--none are impossible.
To show there is nothing especially deep or mysterious about the result, this answer first presents a completely elementary solution, requiring only the obvious fact that variances--being the expected values of squares--must be non-negative. This is followed by a general solution (which uses slightly more sophisticated algebraic facts).
Elementary solution
The variance of any linear combination of $x,y,z$ must be non-negative. Let the variances of these variables be $\sigma^2, \tau^2,$ and $\upsilon^2$, respectively. All are nonzero (for otherwise some of the correlations would not be defined). Using the basic properties of variances we may compute
$$0 \le \text{Var}(\alpha x/\sigma + \beta y/\tau + \gamma z/\upsilon) = \alpha^2 +\beta^2+\gamma^2 + 2\rho(\alpha\beta+\beta\gamma+\gamma\alpha)$$
for all real numbers $(\alpha, \beta, \gamma)$.
Assuming $\alpha+\beta+\gamma\ne 0$, a little algebraic manipulation implies this is equivalent to
$$\frac{-\rho}{1-\rho} \le \frac{1}{3} \left(\frac{\sqrt{(\alpha^2+\beta^2+\gamma^2)/3}}{(\alpha+\beta+\gamma)/3}\right)^2.$$
The squared term on the right hand side is the ratio of two power means of $(\alpha, \beta, \gamma)$. The elementary power-mean inequality (with weights $(1/3, 1/3, 1/3)$) asserts that ratio cannot exceed $1$ (and will equal $1$ when $\alpha=\beta=\gamma\ne 0$). A little more algebra then implies
$$\rho \ge -1/2.$$
The explicit example of $n=3$ below (involving trivariate Normal variables $(x,y,z)$) shows that all such values, $-1/2 \le \rho \le 1$, actually do arise as correlations. This example uses only the definition of multivariate Normals, but otherwise invokes no results of Calculus or Linear Algebra.
General solution
Overview
Any correlation matrix is the covariance matrix of the standardized random variables, whence--like all correlation matrices--it must be positive semi-definite. Equivalently, its eigenvalues are non-negative. This imposes a simple condition on $\rho$: it must not be any less than $-1/2$ (and of course cannot exceed $1$). Conversely, any such $\rho$ actually corresponds to the correlation matrix of some trivariate distribution, proving these bounds are the tightest possible.
Derivation of the conditions on $\rho$
Consider the $n$ by $n$ correlation matrix with all off-diagonal values equal to $\rho.$ (The question concerns the case $n=3,$ but this generalization is no more difficult to analyze.) Let's call it $\mathbb{C}(\rho, n).$ By definition, $\lambda$ is an eigenvalue of provided there exists a nonzero vector $\mathbf{x}_\lambda$ such that
$$\mathbb{C}(\rho,n) \mathbf{x}_\lambda = \lambda \mathbf{x}_\lambda.$$
These eigenvalues are easy to find in the present case, because
Letting $\mathbf{1} = (1, 1, \ldots, 1)'$, compute that
$$\mathbb{C}(\rho,n)\mathbf{1} = (1+(n-1)\rho)\mathbf{1}.$$
Letting $\mathbf{y}_j = (-1, 0, \ldots, 0, 1, 0, \ldots, 0)$ with a $1$ only in the $j^\text{th}$ place (for $j = 2, 3, \ldots, n$), compute that
$$\mathbb{C}(\rho,n)\mathbf{y}_j = (1-\rho)\mathbf{y}_j.$$
Because the $n$ eigenvectors found so far span the full $n$ dimensional space (proof: an easy row reduction shows the absolute value of their determinant equals $n$, which is nonzero), they constitute a basis of all the eigenvectors. We have therefore found all the eigenvalues and determined they are either $1+(n-1)\rho$ or $1-\rho$ (the latter with multiplicity $n-1$). In addition to the well-known inequality $-1 \le \rho \le 1$ satisfied by all correlations, non-negativity of the first eigenvalue further implies
$$\rho \ge -\frac{1}{n-1}$$
while the non-negativity of the second eigenvalue imposes no new conditions.
Proof of sufficiency of the conditions
The implications work in both directions: provided $-1/(n-1)\le \rho \le 1,$ the matrix $\mathbb{C}(\rho, n)$ is nonnegative-definite and therefore is a valid correlation matrix. It is, for instance, the correlation matrix for a multinormal distribution. Specifically, write
$$\Sigma(\rho, n) = (1 + (n-1)\rho)\mathbb{I}_n - \frac{\rho}{(1-\rho)(1+(n-1)\rho)}\mathbf{1}\mathbf{1}'$$
for the inverse of $\mathbb{C}(\rho, n)$ when $-1/(n-1) \lt \rho \lt 1.$ For example, when $n=3$
$$\color{gray}{\Sigma(\rho, 3) = \frac{1}{(1-\rho)(1+2\rho)} \left(
\begin{array}{ccc}
\rho +1 & -\rho & -\rho \\
-\rho & \rho +1 & -\rho \\
-\rho & -\rho & \rho +1 \\
\end{array}
\right)}.$$
Let the vector of random variables $(X_1, X_2, \ldots, X_n)$ have distribution function
$$f_{\rho, n}(\mathbf{x}) = \frac{\exp\left(-\frac{1}{2}\mathbf{x}\Sigma(\rho, n)\mathbf{x}'\right)}{(2\pi)^{n/2}\left((1-\rho)^{n-1}(1+(n-1)\rho)\right)^{1/2}}$$
where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. For example, when $n=3$ this equals
$$\color{gray}{\frac{1}{\sqrt{(2\pi)^{3}(1-\rho)^2(1+2\rho)}}
\exp\left(-\frac{(1+\rho)(x^2+y^2+z^2) - 2\rho(xy+yz+zx)}{2(1-\rho)(1+2\rho)}\right)}.$$
The correlation matrix for these $n$ random variables is $\mathbb{C}(\rho, n).$
Contours of the density functions $f_{\rho,3}.$ From left to right, $\rho=-4/10, 0, 4/10, 8/10$. Note how the density shifts from being concentrated near the plane $x+y+z=0$ to being concentrated near the line $x=y=z$.
The special cases $\rho = -1/(n-1)$ and $\rho = 1$ can also be realized by degenerate distributions; I won't go into the details except to point out that in the former case the distribution can be considered supported on the hyperplane $\mathbf{x}.\mathbf{1}=0$, where it is a sum of identically distributed mean-$0$ Normal distribution, while in the latter case (perfect positive correlation) it is supported on the line generated by $\mathbf{1}'$, where it has a mean-$0$ Normal distribution.
More about non-degeneracy
A review of this analysis makes it clear that the correlation matrix $\mathbb{C}(-1/(n-1), n)$ has a rank of $n-1$ and $\mathbb{C}(1, n)$ has a rank of $1$ (because only one eigenvector has a nonzero eigenvalue). For $n\ge 2$, this makes the correlation matrix degenerate in either case. Otherwise, the existence of its inverse $\Sigma(\rho, n)$ proves it is nondegenerate.
|
Bound for the correlation of three random variables
|
The tightest possible bound is $-1/2 \le \rho \le 1$. All such values can actually appear--none are impossible.
To show there is nothing especially deep or mysterious about the result, this answer fi
|
Bound for the correlation of three random variables
The tightest possible bound is $-1/2 \le \rho \le 1$. All such values can actually appear--none are impossible.
To show there is nothing especially deep or mysterious about the result, this answer first presents a completely elementary solution, requiring only the obvious fact that variances--being the expected values of squares--must be non-negative. This is followed by a general solution (which uses slightly more sophisticated algebraic facts).
Elementary solution
The variance of any linear combination of $x,y,z$ must be non-negative. Let the variances of these variables be $\sigma^2, \tau^2,$ and $\upsilon^2$, respectively. All are nonzero (for otherwise some of the correlations would not be defined). Using the basic properties of variances we may compute
$$0 \le \text{Var}(\alpha x/\sigma + \beta y/\tau + \gamma z/\upsilon) = \alpha^2 +\beta^2+\gamma^2 + 2\rho(\alpha\beta+\beta\gamma+\gamma\alpha)$$
for all real numbers $(\alpha, \beta, \gamma)$.
Assuming $\alpha+\beta+\gamma\ne 0$, a little algebraic manipulation implies this is equivalent to
$$\frac{-\rho}{1-\rho} \le \frac{1}{3} \left(\frac{\sqrt{(\alpha^2+\beta^2+\gamma^2)/3}}{(\alpha+\beta+\gamma)/3}\right)^2.$$
The squared term on the right hand side is the ratio of two power means of $(\alpha, \beta, \gamma)$. The elementary power-mean inequality (with weights $(1/3, 1/3, 1/3)$) asserts that ratio cannot exceed $1$ (and will equal $1$ when $\alpha=\beta=\gamma\ne 0$). A little more algebra then implies
$$\rho \ge -1/2.$$
The explicit example of $n=3$ below (involving trivariate Normal variables $(x,y,z)$) shows that all such values, $-1/2 \le \rho \le 1$, actually do arise as correlations. This example uses only the definition of multivariate Normals, but otherwise invokes no results of Calculus or Linear Algebra.
General solution
Overview
Any correlation matrix is the covariance matrix of the standardized random variables, whence--like all correlation matrices--it must be positive semi-definite. Equivalently, its eigenvalues are non-negative. This imposes a simple condition on $\rho$: it must not be any less than $-1/2$ (and of course cannot exceed $1$). Conversely, any such $\rho$ actually corresponds to the correlation matrix of some trivariate distribution, proving these bounds are the tightest possible.
Derivation of the conditions on $\rho$
Consider the $n$ by $n$ correlation matrix with all off-diagonal values equal to $\rho.$ (The question concerns the case $n=3,$ but this generalization is no more difficult to analyze.) Let's call it $\mathbb{C}(\rho, n).$ By definition, $\lambda$ is an eigenvalue of provided there exists a nonzero vector $\mathbf{x}_\lambda$ such that
$$\mathbb{C}(\rho,n) \mathbf{x}_\lambda = \lambda \mathbf{x}_\lambda.$$
These eigenvalues are easy to find in the present case, because
Letting $\mathbf{1} = (1, 1, \ldots, 1)'$, compute that
$$\mathbb{C}(\rho,n)\mathbf{1} = (1+(n-1)\rho)\mathbf{1}.$$
Letting $\mathbf{y}_j = (-1, 0, \ldots, 0, 1, 0, \ldots, 0)$ with a $1$ only in the $j^\text{th}$ place (for $j = 2, 3, \ldots, n$), compute that
$$\mathbb{C}(\rho,n)\mathbf{y}_j = (1-\rho)\mathbf{y}_j.$$
Because the $n$ eigenvectors found so far span the full $n$ dimensional space (proof: an easy row reduction shows the absolute value of their determinant equals $n$, which is nonzero), they constitute a basis of all the eigenvectors. We have therefore found all the eigenvalues and determined they are either $1+(n-1)\rho$ or $1-\rho$ (the latter with multiplicity $n-1$). In addition to the well-known inequality $-1 \le \rho \le 1$ satisfied by all correlations, non-negativity of the first eigenvalue further implies
$$\rho \ge -\frac{1}{n-1}$$
while the non-negativity of the second eigenvalue imposes no new conditions.
Proof of sufficiency of the conditions
The implications work in both directions: provided $-1/(n-1)\le \rho \le 1,$ the matrix $\mathbb{C}(\rho, n)$ is nonnegative-definite and therefore is a valid correlation matrix. It is, for instance, the correlation matrix for a multinormal distribution. Specifically, write
$$\Sigma(\rho, n) = (1 + (n-1)\rho)\mathbb{I}_n - \frac{\rho}{(1-\rho)(1+(n-1)\rho)}\mathbf{1}\mathbf{1}'$$
for the inverse of $\mathbb{C}(\rho, n)$ when $-1/(n-1) \lt \rho \lt 1.$ For example, when $n=3$
$$\color{gray}{\Sigma(\rho, 3) = \frac{1}{(1-\rho)(1+2\rho)} \left(
\begin{array}{ccc}
\rho +1 & -\rho & -\rho \\
-\rho & \rho +1 & -\rho \\
-\rho & -\rho & \rho +1 \\
\end{array}
\right)}.$$
Let the vector of random variables $(X_1, X_2, \ldots, X_n)$ have distribution function
$$f_{\rho, n}(\mathbf{x}) = \frac{\exp\left(-\frac{1}{2}\mathbf{x}\Sigma(\rho, n)\mathbf{x}'\right)}{(2\pi)^{n/2}\left((1-\rho)^{n-1}(1+(n-1)\rho)\right)^{1/2}}$$
where $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. For example, when $n=3$ this equals
$$\color{gray}{\frac{1}{\sqrt{(2\pi)^{3}(1-\rho)^2(1+2\rho)}}
\exp\left(-\frac{(1+\rho)(x^2+y^2+z^2) - 2\rho(xy+yz+zx)}{2(1-\rho)(1+2\rho)}\right)}.$$
The correlation matrix for these $n$ random variables is $\mathbb{C}(\rho, n).$
Contours of the density functions $f_{\rho,3}.$ From left to right, $\rho=-4/10, 0, 4/10, 8/10$. Note how the density shifts from being concentrated near the plane $x+y+z=0$ to being concentrated near the line $x=y=z$.
The special cases $\rho = -1/(n-1)$ and $\rho = 1$ can also be realized by degenerate distributions; I won't go into the details except to point out that in the former case the distribution can be considered supported on the hyperplane $\mathbf{x}.\mathbf{1}=0$, where it is a sum of identically distributed mean-$0$ Normal distribution, while in the latter case (perfect positive correlation) it is supported on the line generated by $\mathbf{1}'$, where it has a mean-$0$ Normal distribution.
More about non-degeneracy
A review of this analysis makes it clear that the correlation matrix $\mathbb{C}(-1/(n-1), n)$ has a rank of $n-1$ and $\mathbb{C}(1, n)$ has a rank of $1$ (because only one eigenvector has a nonzero eigenvalue). For $n\ge 2$, this makes the correlation matrix degenerate in either case. Otherwise, the existence of its inverse $\Sigma(\rho, n)$ proves it is nondegenerate.
|
Bound for the correlation of three random variables
The tightest possible bound is $-1/2 \le \rho \le 1$. All such values can actually appear--none are impossible.
To show there is nothing especially deep or mysterious about the result, this answer fi
|
5,924
|
Bound for the correlation of three random variables
|
Your correlation matrix is
$$ \begin{pmatrix} 1&\rho&\rho\\ \rho&1&\rho\\ \rho&\rho&1 \end{pmatrix}$$
The matrix is positive semidefinite if the leading principal minors are all non-negative. The principal minors are the determinants of the "north-west" blocks of the matrix, i.e. 1, the determinant of
$$ \begin{pmatrix} 1&\rho\\ \rho&1\end{pmatrix}$$
and the determinant of the correlation matrix itself.
1 is obviously positive, the second principal minor is $1-\rho^2$, which is nonnegative for any admissible correlation $\rho\in[-1,1]$. The determinant of the entire correlation matrix is
$$ 2\rho^3-3\rho^2+1.$$
The plot shows the determinant of the function over the range of admissible correlations $[-1,1]$.
You see the function is nonnegative over the range given by @stochazesthai (which you could also check by finding the roots of the determinantal equation).
|
Bound for the correlation of three random variables
|
Your correlation matrix is
$$ \begin{pmatrix} 1&\rho&\rho\\ \rho&1&\rho\\ \rho&\rho&1 \end{pmatrix}$$
The matrix is positive semidefinite if the leading principal minors are all non-negative. The prin
|
Bound for the correlation of three random variables
Your correlation matrix is
$$ \begin{pmatrix} 1&\rho&\rho\\ \rho&1&\rho\\ \rho&\rho&1 \end{pmatrix}$$
The matrix is positive semidefinite if the leading principal minors are all non-negative. The principal minors are the determinants of the "north-west" blocks of the matrix, i.e. 1, the determinant of
$$ \begin{pmatrix} 1&\rho\\ \rho&1\end{pmatrix}$$
and the determinant of the correlation matrix itself.
1 is obviously positive, the second principal minor is $1-\rho^2$, which is nonnegative for any admissible correlation $\rho\in[-1,1]$. The determinant of the entire correlation matrix is
$$ 2\rho^3-3\rho^2+1.$$
The plot shows the determinant of the function over the range of admissible correlations $[-1,1]$.
You see the function is nonnegative over the range given by @stochazesthai (which you could also check by finding the roots of the determinantal equation).
|
Bound for the correlation of three random variables
Your correlation matrix is
$$ \begin{pmatrix} 1&\rho&\rho\\ \rho&1&\rho\\ \rho&\rho&1 \end{pmatrix}$$
The matrix is positive semidefinite if the leading principal minors are all non-negative. The prin
|
5,925
|
Bound for the correlation of three random variables
|
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
There exist random variables $X$, $Y$ and $Z$ with pairwise correlations $\rho_{XY} = \rho_{YZ} = \rho_{XZ} = \rho$ if and only if the correlation matrix is positive semidefinite. This happens only for $\rho \in [-\frac{1}{2},1]$.
|
Bound for the correlation of three random variables
|
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
Bound for the correlation of three random variables
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
There exist random variables $X$, $Y$ and $Z$ with pairwise correlations $\rho_{XY} = \rho_{YZ} = \rho_{XZ} = \rho$ if and only if the correlation matrix is positive semidefinite. This happens only for $\rho \in [-\frac{1}{2},1]$.
|
Bound for the correlation of three random variables
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
5,926
|
How do DAGs help to reduce bias in causal inference?
|
A DAG is a Directed Acyclic Graph.
A βGraphβ is a structure with nodes (which are usually variables in statistics) and arcs (lines) connecting nodes to other nodes. βDirectedβ means that all the arcs have a direction, where one end of the arc has an arrow head, and the other does not, which usually refers to causation. βAcyclicβ means that the graph is not cyclic β that means there can be no path from any node that leads back to the same node. In statistics a DAG is a very powerful tool to aid in causal inference β to estimate the causal effect of one variable (often called the main exposure) on another (often called the outcome) in the presence of other variables which may be competing exposures, confounders or mediators. The DAG can be used to identify a minimal sufficient set of variables to be used in a multivariable regression model for the estimation of said causal effect. For example it is usually a very bad idea to condition on a mediator (a variable that lies on the causal path between the main exposure and the outcome), while it is usually a very good idea to condition on a confounder (a variable that is a cause, or a proxy for a cause, of both the main exposure and the outcome). It is also a bad idea to condition on a collider (to be defined below).
But first, what is the problem we want to overcome? This is what a multivariable regression model looks like to your statistical software:
The software does not βknowβ which variables are our main exposure, competing exposures, confounders or mediators. It treats them all the same. In the real world it is far more common for the variables to be inter-related. For example, knowledge of the particular area of research may indicate a structure such as:
Note that it is the researchers job to specify the causal paths, using expert knowledge about the subject at hand. DAGs represent a set of (often abstracted) causal beliefs pertinent to specific causal relationships. One researcher's DAG may be different to another researcher's DAG, for the same relationship(s), and that is completely OK. In the same way, a researcher may have more than one DAG for the same causal relationships, and using DAGs in a principled way as described below is one way to gather knowledge about, or support for a particular hypothesis.
Letβs suppose that our interest is in the causal effect of $X7$ on $Y$. What are we to do? A very naive approach is simply to put all the variables into a regression model, and take the estimated coefficient for $X7$ as our βanswerβ. This would be a big mistake. It turns out that the only variable that should be adjusted for in this DAG is $X3$, because it is a confounder. But what if our interest was in the effect of $X3$, not $X7$ ? Do we simply use the same model (also containing $X7$) and just take the estimate of $X3$ as our βanswerβ? No! In this case, we do not adjust for $X7$ because it is a mediator. No adjustment is needed at all. In both cases, we may also adjust for $X1$ because this is a competing exposure and will improve the precision of our casual inferences in both models. In both models we should not adjust for $X2$, $X4$, $X5$ and $X6$ because all of them are mediators for the effect of $X7$ on $Y$.
So, getting back to the question, how do DAGs actually enable us to do this? First we need to establish a few ground truths.
A collider is a variable which has more than 1 cause β that is, at least 2 arrows are pointing at it (hence the incoming arrows βcollideβ). $X5$ in the above DAG is a collider
If there are no variables being conditioned on, a path is blocked if and only if it contains a collider. The path $X4 \rightarrow X5 \leftarrow X6$ is blocked by the collider $X5$.
Note: when we talk about "conditioning" on a variable this could refer to a few things, for example stratifying, but perhaps more commonly including the variable as a covariate in a multivariable regression model. Other synonymous terms are "controlling for" and "adjusting for".
Any path that contains a non-collider that has been conditioned on is blocked. The path $Y \leftarrow X3 \rightarrow X7$ will be blocked if we condition on $X3$.
A collider (or a descendant of a collider) that has been conditioned on does not block a path. If we condition on $X5$ we will open the path $X4 \rightarrow X5 \leftarrow X6$
A backdoor path is a non-causal path between an outcome and a cause. It is non-causal because it contains an arrow pointing at both the cause and the outcome. For example the path $Y \leftarrow X3 \rightarrow X7$ is a backdoor path from $Y$ to $X3$.
Confounding of a causal path occurs where a common cause for both variables is present. In other words confounding occurs where an unblocked backdoor path is present. Again, $Y \leftarrow X3 \rightarrow X7$ is such a path.
So, armed with this knowledge, letβs see how DAGs help us with removing bias:
Confounding
The definition of confounding is 6 above. If we apply 4 and condition on the confounder we will block the backdoor path from the outcome to the cause, thereby removing confounding bias. The example is the association of carrying a lighter and lung cancer:
Carrying a lighter has no causal effect on lung cancer, however, they share a common cause - smoking - so applying rule 5 above, a backdoor path from Lung cancer to carrying a lighter is present which induces an association between carrying a lighter and Lung cancer. Conditioning on Smoking will remove this association, which can be demonstrate with a simple simulation where I use continuous variables for simplicity:
> set.seed(15)
> N <- 100
> Smoking <- rnorm(N, 10, 2)
> Cancer <- Smoking + rnorm(N)
> Lighter <- Smoking + rnorm(N)
> summary(lm(Cancer ~ Lighter))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.66263 0.76079 0.871 0.386
Lighter 0.91076 0.07217 12.620 <2e-16 ***
which shows the spurious association between Lighter and Cancer, but now when we condition on Smoking:
> summary(lm(Cancer ~ Lighter + Smoking))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.42978 0.60363 -0.712 0.478
Lighter 0.07781 0.11627 0.669 0.505
Smoking 0.95215 0.11658 8.168 1.18e-12 ***
...the bias is removed.
Mediation
A mediator is a variable that lies on the causal path between the cause and the outcome. This means that the outcome is a collider. Therefore, applying rule 3 means that we should not condition on the mediator otherwise the indirect effect of the cause on the outcome (i.e., that mediated by the mediator) will be blocked. A good example example is the grades of a student and their happiness. A mediating variable is self-esteem:
Here, Grades has a direct effect on Happiness, but it also has an indirect effect mediated by self-esteem. We want to estimate the total causal effect of Grades on Happiness. Rule 3 says that a path that contains a non-collider that has been conditioned on is blocked. Since we want the total effect (i.e., including the indirect effect) we should not condition on self-esteem otherwise the mediated path will be blocked, as we can see in the following simulation:
> set.seed(15)
> N <- 100
> Grades <- rnorm(N, 10, 2)
> SelfEsteem <- Grades + rnorm(N)
> Happiness <- Grades + SelfEsteem + rnorm(N)
So the total effect should be 2:
> summary(m0 <- lm(Happiness ~ Grades)) # happy times
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.05650 0.79509 1.329 0.187
Grades 1.90003 0.07649 24.840 <2e-16 ***
which is what we do find. But if we now condition on self esteem:
> summary(m0 <- lm(Happiness ~ Grades + SelfEsteem
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.39804 0.50783 2.753 0.00705 **
Grades 0.81917 0.10244 7.997 2.73e-12 ***
SelfEsteem 1.05907 0.08826 11.999 < 2e-16 ***
only the direct effect for grades is estimated, due to blocking the indirect effect by conditioning on the mediator SelfEsteem.
Collider bias
This is probably the most difficult one to understand, but with the aid of a very simple DAG we can easily see the problem:
Here, there is no causal path between X and Y. However, both cause C, the collider. If we condition on C, then applying rule 4 above we will invoke collider bias by opening up the (non causal) path between X, and Y. This may be a little hard to grasp at first, but it should become apparent by thinking in terms of equations. We have X + Y = C. Let X and Y be binary variables taking the values 1 or zero. Hence, C can only take the values of 0, 1 or 2. Now, when we condition on C we fix its value. Say we fix it at 1. This immediately means that if X is zero then Y must be 1, and if Y is zero then X must be one. That is, X = -Y, so they are perfectly (negatively) correlated, conditional on C= 1. We can also see this in action with the following simulation:
> set.seed(16)
> N <- 100
> X <- rnorm(N, 10, 2)
> Y <- rnorm(N, 15, 3)
> C <- X + Y + rnorm(N)
So, X and Y are independent so we should find no association:
> summary(m0 <- lm(Y ~ X))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 14.18496 1.54838 9.161 8.01e-15 ***
X 0.08604 0.15009 0.573 0.568
and indeed no association is found. But now condition on C
> summary(m1 <- lm(Y ~ X + C))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.10461 0.61206 1.805 0.0742 .
X -0.92633 0.05435 -17.043 <2e-16 ***
C 0.92454 0.02881 32.092 <2e-16 ***
and now we have a spurious association between X and Y.
Now letβs consider a slightly more complex situation:
Here we are interested in the causal effect of Activity on Cervical Cancer. Hypochondria is an unmeasured variable which is a psychological condition that is characterized by fears of minor and sometimes non-existent medical symptoms being an indication of major illness. Lesion is also an unobserved variable that indicates the presence of a pre-cancerous lesion. Test is a diagnostic test for early stage cervical cancer. Here we hypothesise that both the unmeasured variables affect Test, obviously in the case of Lesion, and by making frequent visits to the doctor in the case of Hypochondria. Lesion also (obviously causes Cancer) and Hypochondria causes more physical activity (because persons with hypochondria are worried about a sedentary lifestyle leading to disease in later life.
First notice that if the collider, Test, was removed and replace with an arc either from Lesion to Hypochondria or vice versa, then our causal path of interest, Activity to Cancer, would be confounded, but due to rule 2 above, the collider blocks the backdoor path $\text{Cancer}\leftarrow \text{Lesion} \rightarrow \text{Test} \leftarrow \text{Hypochondria} \rightarrow \text{Activity}$, as we can see with a simple simulation:
> set.seed(16)
> N <- 100
> Lesion <- rnorm(N, 10, 2)
> Hypochondria <- rnorm(N, 10, 2)
> Test <- Lesion + Hypochondria + rnorm(N)
> Activity <- Hypochondria + rnorm(N)
> Cancer <- Lesion + 0.25 * Activity + rnorm(N)
where we hypothesize a much smaller effect of Activity on Cancer than Lesion on Cancer
> summary(lm(Cancer ~ Activity))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.47570 1.01150 10.357 <2e-16 ***
Activity 0.21103 0.09667 2.183 0.0314 *
And indeed we obtain a reasonable estimate.
Now, also observe the association of Activity and Cancer with Test (due to their common, but unmeasured causes:
> cor(Test, Activity); cor(Test, Cancer)
[1] 0.6245565
[1] 0.7200811
The traditional definition of confounding is that a confounder is variable that is associated with both the exposure and the outcome. So, we might mistakenly think that Test is a confounder and condition on it. However, we then open up the backdoor path $\text{Cancer}\leftarrow \text{Lesion} \rightarrow \text{Test} \leftarrow \text{Hypochondria} \rightarrow \text{Activity}$, and introduce confounding which would otherwise not be present, as we can see from:
> summary(lm(Cancer ~ Activity + Test))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.77204 0.98383 1.801 0.0748 .
Activity -0.37663 0.07971 -4.725 7.78e-06 ***
Test 0.72716 0.06160 11.804 < 2e-16 ***
Now not only is the estimate for Activity biased, but it is of larger magnitude and of the opposite sign!
Selection bias
The preceding example can also be used to demonstrate selection bias. A researcher may identify Test as a potential confounder, and then only conduct the analysis on those that have tested negative (or positive).
> dtPos <- data.frame(Lesion, Hypochondria, Test, Activity, Cancer)
> dtNeg <- dtPos[dtPos$Test < 22, ]
> dtPos <- dtPos[dtPos$Test >= 22, ]
> summary(lm(Cancer ~ Activity, data = dtPos))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 13.15915 3.07604 4.278 0.000242 ***
Activity 0.08662 0.25074 0.345 0.732637
So for those that test positive we obtain a very small positive effect, that is not statistically significant at the 5% level
> summary(lm(Cancer ~ Activity, data = dtNeg))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.18865 1.12071 10.876 <2e-16 ***
Activity -0.01553 0.11541 -0.135 0.893
And for those that test negative we obtain a very small negative association which is also not significant.
|
How do DAGs help to reduce bias in causal inference?
|
A DAG is a Directed Acyclic Graph.
A βGraphβ is a structure with nodes (which are usually variables in statistics) and arcs (lines) connecting nodes to other nodes. βDirectedβ means that all the arcs
|
How do DAGs help to reduce bias in causal inference?
A DAG is a Directed Acyclic Graph.
A βGraphβ is a structure with nodes (which are usually variables in statistics) and arcs (lines) connecting nodes to other nodes. βDirectedβ means that all the arcs have a direction, where one end of the arc has an arrow head, and the other does not, which usually refers to causation. βAcyclicβ means that the graph is not cyclic β that means there can be no path from any node that leads back to the same node. In statistics a DAG is a very powerful tool to aid in causal inference β to estimate the causal effect of one variable (often called the main exposure) on another (often called the outcome) in the presence of other variables which may be competing exposures, confounders or mediators. The DAG can be used to identify a minimal sufficient set of variables to be used in a multivariable regression model for the estimation of said causal effect. For example it is usually a very bad idea to condition on a mediator (a variable that lies on the causal path between the main exposure and the outcome), while it is usually a very good idea to condition on a confounder (a variable that is a cause, or a proxy for a cause, of both the main exposure and the outcome). It is also a bad idea to condition on a collider (to be defined below).
But first, what is the problem we want to overcome? This is what a multivariable regression model looks like to your statistical software:
The software does not βknowβ which variables are our main exposure, competing exposures, confounders or mediators. It treats them all the same. In the real world it is far more common for the variables to be inter-related. For example, knowledge of the particular area of research may indicate a structure such as:
Note that it is the researchers job to specify the causal paths, using expert knowledge about the subject at hand. DAGs represent a set of (often abstracted) causal beliefs pertinent to specific causal relationships. One researcher's DAG may be different to another researcher's DAG, for the same relationship(s), and that is completely OK. In the same way, a researcher may have more than one DAG for the same causal relationships, and using DAGs in a principled way as described below is one way to gather knowledge about, or support for a particular hypothesis.
Letβs suppose that our interest is in the causal effect of $X7$ on $Y$. What are we to do? A very naive approach is simply to put all the variables into a regression model, and take the estimated coefficient for $X7$ as our βanswerβ. This would be a big mistake. It turns out that the only variable that should be adjusted for in this DAG is $X3$, because it is a confounder. But what if our interest was in the effect of $X3$, not $X7$ ? Do we simply use the same model (also containing $X7$) and just take the estimate of $X3$ as our βanswerβ? No! In this case, we do not adjust for $X7$ because it is a mediator. No adjustment is needed at all. In both cases, we may also adjust for $X1$ because this is a competing exposure and will improve the precision of our casual inferences in both models. In both models we should not adjust for $X2$, $X4$, $X5$ and $X6$ because all of them are mediators for the effect of $X7$ on $Y$.
So, getting back to the question, how do DAGs actually enable us to do this? First we need to establish a few ground truths.
A collider is a variable which has more than 1 cause β that is, at least 2 arrows are pointing at it (hence the incoming arrows βcollideβ). $X5$ in the above DAG is a collider
If there are no variables being conditioned on, a path is blocked if and only if it contains a collider. The path $X4 \rightarrow X5 \leftarrow X6$ is blocked by the collider $X5$.
Note: when we talk about "conditioning" on a variable this could refer to a few things, for example stratifying, but perhaps more commonly including the variable as a covariate in a multivariable regression model. Other synonymous terms are "controlling for" and "adjusting for".
Any path that contains a non-collider that has been conditioned on is blocked. The path $Y \leftarrow X3 \rightarrow X7$ will be blocked if we condition on $X3$.
A collider (or a descendant of a collider) that has been conditioned on does not block a path. If we condition on $X5$ we will open the path $X4 \rightarrow X5 \leftarrow X6$
A backdoor path is a non-causal path between an outcome and a cause. It is non-causal because it contains an arrow pointing at both the cause and the outcome. For example the path $Y \leftarrow X3 \rightarrow X7$ is a backdoor path from $Y$ to $X3$.
Confounding of a causal path occurs where a common cause for both variables is present. In other words confounding occurs where an unblocked backdoor path is present. Again, $Y \leftarrow X3 \rightarrow X7$ is such a path.
So, armed with this knowledge, letβs see how DAGs help us with removing bias:
Confounding
The definition of confounding is 6 above. If we apply 4 and condition on the confounder we will block the backdoor path from the outcome to the cause, thereby removing confounding bias. The example is the association of carrying a lighter and lung cancer:
Carrying a lighter has no causal effect on lung cancer, however, they share a common cause - smoking - so applying rule 5 above, a backdoor path from Lung cancer to carrying a lighter is present which induces an association between carrying a lighter and Lung cancer. Conditioning on Smoking will remove this association, which can be demonstrate with a simple simulation where I use continuous variables for simplicity:
> set.seed(15)
> N <- 100
> Smoking <- rnorm(N, 10, 2)
> Cancer <- Smoking + rnorm(N)
> Lighter <- Smoking + rnorm(N)
> summary(lm(Cancer ~ Lighter))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.66263 0.76079 0.871 0.386
Lighter 0.91076 0.07217 12.620 <2e-16 ***
which shows the spurious association between Lighter and Cancer, but now when we condition on Smoking:
> summary(lm(Cancer ~ Lighter + Smoking))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.42978 0.60363 -0.712 0.478
Lighter 0.07781 0.11627 0.669 0.505
Smoking 0.95215 0.11658 8.168 1.18e-12 ***
...the bias is removed.
Mediation
A mediator is a variable that lies on the causal path between the cause and the outcome. This means that the outcome is a collider. Therefore, applying rule 3 means that we should not condition on the mediator otherwise the indirect effect of the cause on the outcome (i.e., that mediated by the mediator) will be blocked. A good example example is the grades of a student and their happiness. A mediating variable is self-esteem:
Here, Grades has a direct effect on Happiness, but it also has an indirect effect mediated by self-esteem. We want to estimate the total causal effect of Grades on Happiness. Rule 3 says that a path that contains a non-collider that has been conditioned on is blocked. Since we want the total effect (i.e., including the indirect effect) we should not condition on self-esteem otherwise the mediated path will be blocked, as we can see in the following simulation:
> set.seed(15)
> N <- 100
> Grades <- rnorm(N, 10, 2)
> SelfEsteem <- Grades + rnorm(N)
> Happiness <- Grades + SelfEsteem + rnorm(N)
So the total effect should be 2:
> summary(m0 <- lm(Happiness ~ Grades)) # happy times
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.05650 0.79509 1.329 0.187
Grades 1.90003 0.07649 24.840 <2e-16 ***
which is what we do find. But if we now condition on self esteem:
> summary(m0 <- lm(Happiness ~ Grades + SelfEsteem
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.39804 0.50783 2.753 0.00705 **
Grades 0.81917 0.10244 7.997 2.73e-12 ***
SelfEsteem 1.05907 0.08826 11.999 < 2e-16 ***
only the direct effect for grades is estimated, due to blocking the indirect effect by conditioning on the mediator SelfEsteem.
Collider bias
This is probably the most difficult one to understand, but with the aid of a very simple DAG we can easily see the problem:
Here, there is no causal path between X and Y. However, both cause C, the collider. If we condition on C, then applying rule 4 above we will invoke collider bias by opening up the (non causal) path between X, and Y. This may be a little hard to grasp at first, but it should become apparent by thinking in terms of equations. We have X + Y = C. Let X and Y be binary variables taking the values 1 or zero. Hence, C can only take the values of 0, 1 or 2. Now, when we condition on C we fix its value. Say we fix it at 1. This immediately means that if X is zero then Y must be 1, and if Y is zero then X must be one. That is, X = -Y, so they are perfectly (negatively) correlated, conditional on C= 1. We can also see this in action with the following simulation:
> set.seed(16)
> N <- 100
> X <- rnorm(N, 10, 2)
> Y <- rnorm(N, 15, 3)
> C <- X + Y + rnorm(N)
So, X and Y are independent so we should find no association:
> summary(m0 <- lm(Y ~ X))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 14.18496 1.54838 9.161 8.01e-15 ***
X 0.08604 0.15009 0.573 0.568
and indeed no association is found. But now condition on C
> summary(m1 <- lm(Y ~ X + C))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.10461 0.61206 1.805 0.0742 .
X -0.92633 0.05435 -17.043 <2e-16 ***
C 0.92454 0.02881 32.092 <2e-16 ***
and now we have a spurious association between X and Y.
Now letβs consider a slightly more complex situation:
Here we are interested in the causal effect of Activity on Cervical Cancer. Hypochondria is an unmeasured variable which is a psychological condition that is characterized by fears of minor and sometimes non-existent medical symptoms being an indication of major illness. Lesion is also an unobserved variable that indicates the presence of a pre-cancerous lesion. Test is a diagnostic test for early stage cervical cancer. Here we hypothesise that both the unmeasured variables affect Test, obviously in the case of Lesion, and by making frequent visits to the doctor in the case of Hypochondria. Lesion also (obviously causes Cancer) and Hypochondria causes more physical activity (because persons with hypochondria are worried about a sedentary lifestyle leading to disease in later life.
First notice that if the collider, Test, was removed and replace with an arc either from Lesion to Hypochondria or vice versa, then our causal path of interest, Activity to Cancer, would be confounded, but due to rule 2 above, the collider blocks the backdoor path $\text{Cancer}\leftarrow \text{Lesion} \rightarrow \text{Test} \leftarrow \text{Hypochondria} \rightarrow \text{Activity}$, as we can see with a simple simulation:
> set.seed(16)
> N <- 100
> Lesion <- rnorm(N, 10, 2)
> Hypochondria <- rnorm(N, 10, 2)
> Test <- Lesion + Hypochondria + rnorm(N)
> Activity <- Hypochondria + rnorm(N)
> Cancer <- Lesion + 0.25 * Activity + rnorm(N)
where we hypothesize a much smaller effect of Activity on Cancer than Lesion on Cancer
> summary(lm(Cancer ~ Activity))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.47570 1.01150 10.357 <2e-16 ***
Activity 0.21103 0.09667 2.183 0.0314 *
And indeed we obtain a reasonable estimate.
Now, also observe the association of Activity and Cancer with Test (due to their common, but unmeasured causes:
> cor(Test, Activity); cor(Test, Cancer)
[1] 0.6245565
[1] 0.7200811
The traditional definition of confounding is that a confounder is variable that is associated with both the exposure and the outcome. So, we might mistakenly think that Test is a confounder and condition on it. However, we then open up the backdoor path $\text{Cancer}\leftarrow \text{Lesion} \rightarrow \text{Test} \leftarrow \text{Hypochondria} \rightarrow \text{Activity}$, and introduce confounding which would otherwise not be present, as we can see from:
> summary(lm(Cancer ~ Activity + Test))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.77204 0.98383 1.801 0.0748 .
Activity -0.37663 0.07971 -4.725 7.78e-06 ***
Test 0.72716 0.06160 11.804 < 2e-16 ***
Now not only is the estimate for Activity biased, but it is of larger magnitude and of the opposite sign!
Selection bias
The preceding example can also be used to demonstrate selection bias. A researcher may identify Test as a potential confounder, and then only conduct the analysis on those that have tested negative (or positive).
> dtPos <- data.frame(Lesion, Hypochondria, Test, Activity, Cancer)
> dtNeg <- dtPos[dtPos$Test < 22, ]
> dtPos <- dtPos[dtPos$Test >= 22, ]
> summary(lm(Cancer ~ Activity, data = dtPos))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 13.15915 3.07604 4.278 0.000242 ***
Activity 0.08662 0.25074 0.345 0.732637
So for those that test positive we obtain a very small positive effect, that is not statistically significant at the 5% level
> summary(lm(Cancer ~ Activity, data = dtNeg))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.18865 1.12071 10.876 <2e-16 ***
Activity -0.01553 0.11541 -0.135 0.893
And for those that test negative we obtain a very small negative association which is also not significant.
|
How do DAGs help to reduce bias in causal inference?
A DAG is a Directed Acyclic Graph.
A βGraphβ is a structure with nodes (which are usually variables in statistics) and arcs (lines) connecting nodes to other nodes. βDirectedβ means that all the arcs
|
5,927
|
How do DAGs help to reduce bias in causal inference?
|
This is generally a fairly elaborate topic, and may require more reading on your part for better understanding, but I will try to answer a couple of your questions in isolation and leave references for further reading.
Confounding
Consider the example below:
Controlling for the confounding variable "Gender" gives us more information about the relationship between the two variables "Drug" and "Recovery". You can, for example, control for the confounder Z as a covariate (by conditioning) in regression analysis, and this will reduce your bias β as you know more about the effect of X on Y.
Colliding
As mentioned here, conditioning on a collider can actually increase bias. Consider the example below
If I know you have a fever and don't have the flu, but I control for the colliding effect between Influenza and Chicken Pox knowing you have a fever actually gives me more evidence that you might have Chicken Pox (I recommend you read more about this, the link above should be useful).
Mediation
Controlling for intermediate variables may also induce bias, because it decomposes the total effect of x on y into its parts. In the example below, if you condition on the intermediate variables "Unhealthy Lifestyle", "Weight", and "Cholesterol" in your analysis, you are only measuring the effect of "Smoking" on "Cardiac Arrest", and not through the intermediate path, which would induce bias. In general, it depends on your research question when you want to control for an intermediate path or not, but you should know it can induce bias, and not reduce it.
Backdoor Path
Backdoor paths generally indicate common causes of A and Y, the simplest of which is the confounding situation below. You may want to look at the backdoor criterion [Pearl, 2000] to see whether eliminating the confounding variable is reasonable for a particular case.
Regularization
I also wanted to mention that the algorithms for statistical learning on DAGs reduce bias through regularization, see (this) for an overview. When learning on DAGS you can end up with highly complex relationships between covariates which can result in bias. This can be reduced by regularizing the complexity of the graph, as in [Murphy, 2012, 26.7.1].
Hope this provides you with enough to chew on for now..
|
How do DAGs help to reduce bias in causal inference?
|
This is generally a fairly elaborate topic, and may require more reading on your part for better understanding, but I will try to answer a couple of your questions in isolation and leave references fo
|
How do DAGs help to reduce bias in causal inference?
This is generally a fairly elaborate topic, and may require more reading on your part for better understanding, but I will try to answer a couple of your questions in isolation and leave references for further reading.
Confounding
Consider the example below:
Controlling for the confounding variable "Gender" gives us more information about the relationship between the two variables "Drug" and "Recovery". You can, for example, control for the confounder Z as a covariate (by conditioning) in regression analysis, and this will reduce your bias β as you know more about the effect of X on Y.
Colliding
As mentioned here, conditioning on a collider can actually increase bias. Consider the example below
If I know you have a fever and don't have the flu, but I control for the colliding effect between Influenza and Chicken Pox knowing you have a fever actually gives me more evidence that you might have Chicken Pox (I recommend you read more about this, the link above should be useful).
Mediation
Controlling for intermediate variables may also induce bias, because it decomposes the total effect of x on y into its parts. In the example below, if you condition on the intermediate variables "Unhealthy Lifestyle", "Weight", and "Cholesterol" in your analysis, you are only measuring the effect of "Smoking" on "Cardiac Arrest", and not through the intermediate path, which would induce bias. In general, it depends on your research question when you want to control for an intermediate path or not, but you should know it can induce bias, and not reduce it.
Backdoor Path
Backdoor paths generally indicate common causes of A and Y, the simplest of which is the confounding situation below. You may want to look at the backdoor criterion [Pearl, 2000] to see whether eliminating the confounding variable is reasonable for a particular case.
Regularization
I also wanted to mention that the algorithms for statistical learning on DAGs reduce bias through regularization, see (this) for an overview. When learning on DAGS you can end up with highly complex relationships between covariates which can result in bias. This can be reduced by regularizing the complexity of the graph, as in [Murphy, 2012, 26.7.1].
Hope this provides you with enough to chew on for now..
|
How do DAGs help to reduce bias in causal inference?
This is generally a fairly elaborate topic, and may require more reading on your part for better understanding, but I will try to answer a couple of your questions in isolation and leave references fo
|
5,928
|
How do DAGs help to reduce bias in causal inference?
|
A different angle.
It constraints the hypothesis space to hypothesis that imply the causation graph. It is like injecting domain knowledge in your model, so that the learning phase is more informed.
This can be modelled (in a very abstract way) by the VC-dimension of the model, or the PAC framework in general
|
How do DAGs help to reduce bias in causal inference?
|
A different angle.
It constraints the hypothesis space to hypothesis that imply the causation graph. It is like injecting domain knowledge in your model, so that the learning phase is more informed.
T
|
How do DAGs help to reduce bias in causal inference?
A different angle.
It constraints the hypothesis space to hypothesis that imply the causation graph. It is like injecting domain knowledge in your model, so that the learning phase is more informed.
This can be modelled (in a very abstract way) by the VC-dimension of the model, or the PAC framework in general
|
How do DAGs help to reduce bias in causal inference?
A different angle.
It constraints the hypothesis space to hypothesis that imply the causation graph. It is like injecting domain knowledge in your model, so that the learning phase is more informed.
T
|
5,929
|
Tiny (real) datasets for giving examples in class?
|
The data and story library is an " online library of datafiles and stories that illustrate the use of basic statistics methods".
This site seems to have what you need, and you can search it for particular data sets.
|
Tiny (real) datasets for giving examples in class?
|
The data and story library is an " online library of datafiles and stories that illustrate the use of basic statistics methods".
This site seems to have what you need, and you can search it for partic
|
Tiny (real) datasets for giving examples in class?
The data and story library is an " online library of datafiles and stories that illustrate the use of basic statistics methods".
This site seems to have what you need, and you can search it for particular data sets.
|
Tiny (real) datasets for giving examples in class?
The data and story library is an " online library of datafiles and stories that illustrate the use of basic statistics methods".
This site seems to have what you need, and you can search it for partic
|
5,930
|
Tiny (real) datasets for giving examples in class?
|
There's a book called "A Handbook of Small Datasets" by D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway and E. Ostrowski. The Statistics department at NCSU have electronically posted the datasets from this book here.
The website above gives only the data; you would need to read the book to get the story behind the numbers, that is, any story beyond what you can glean from the data set's title. But, they are small, and they are real.
|
Tiny (real) datasets for giving examples in class?
|
There's a book called "A Handbook of Small Datasets" by D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway and E. Ostrowski. The Statistics department at NCSU have electronically posted the datasets from t
|
Tiny (real) datasets for giving examples in class?
There's a book called "A Handbook of Small Datasets" by D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway and E. Ostrowski. The Statistics department at NCSU have electronically posted the datasets from this book here.
The website above gives only the data; you would need to read the book to get the story behind the numbers, that is, any story beyond what you can glean from the data set's title. But, they are small, and they are real.
|
Tiny (real) datasets for giving examples in class?
There's a book called "A Handbook of Small Datasets" by D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway and E. Ostrowski. The Statistics department at NCSU have electronically posted the datasets from t
|
5,931
|
Tiny (real) datasets for giving examples in class?
|
For two-way tables, I like the data on gender and survival of the titanic passengers:
| Alive Dead | Total
-------+-------------+------
Female | 308 154 | 462
Male | 142 709 | 851
-------+-------------+------
Total | 450 863 | 1313
With this data, one can discuss things like the chi-square test for independence and measure of assocation, such as the relative rate and the odds ratio. For example, female passengers were ~4 times more likely to survive than male passengers. At the same time, male passengers were ~2.5 times more likely to die than female passengers. The odds ratio for survival/dying is always 10 though.
|
Tiny (real) datasets for giving examples in class?
|
For two-way tables, I like the data on gender and survival of the titanic passengers:
| Alive Dead | Total
-------+-------------+------
Female | 308 154 | 462
Male | 142 709 | 851
|
Tiny (real) datasets for giving examples in class?
For two-way tables, I like the data on gender and survival of the titanic passengers:
| Alive Dead | Total
-------+-------------+------
Female | 308 154 | 462
Male | 142 709 | 851
-------+-------------+------
Total | 450 863 | 1313
With this data, one can discuss things like the chi-square test for independence and measure of assocation, such as the relative rate and the odds ratio. For example, female passengers were ~4 times more likely to survive than male passengers. At the same time, male passengers were ~2.5 times more likely to die than female passengers. The odds ratio for survival/dying is always 10 though.
|
Tiny (real) datasets for giving examples in class?
For two-way tables, I like the data on gender and survival of the titanic passengers:
| Alive Dead | Total
-------+-------------+------
Female | 308 154 | 462
Male | 142 709 | 851
|
5,932
|
Tiny (real) datasets for giving examples in class?
|
The Journal of Statistical Education has an archive of educational data sets.
|
Tiny (real) datasets for giving examples in class?
|
The Journal of Statistical Education has an archive of educational data sets.
|
Tiny (real) datasets for giving examples in class?
The Journal of Statistical Education has an archive of educational data sets.
|
Tiny (real) datasets for giving examples in class?
The Journal of Statistical Education has an archive of educational data sets.
|
5,933
|
Tiny (real) datasets for giving examples in class?
|
CAUSEweb has data sets as well as lots of other teaching resources.
See http://www.causeweb.org/resources/datasets/ for the datasets.
CAUSE stands for Consortium for the Advancement of Undergraduate Statistics Education.
|
Tiny (real) datasets for giving examples in class?
|
CAUSEweb has data sets as well as lots of other teaching resources.
See http://www.causeweb.org/resources/datasets/ for the datasets.
CAUSE stands for Consortium for the Advancement of Undergraduate S
|
Tiny (real) datasets for giving examples in class?
CAUSEweb has data sets as well as lots of other teaching resources.
See http://www.causeweb.org/resources/datasets/ for the datasets.
CAUSE stands for Consortium for the Advancement of Undergraduate Statistics Education.
|
Tiny (real) datasets for giving examples in class?
CAUSEweb has data sets as well as lots of other teaching resources.
See http://www.causeweb.org/resources/datasets/ for the datasets.
CAUSE stands for Consortium for the Advancement of Undergraduate S
|
5,934
|
Tiny (real) datasets for giving examples in class?
|
Probably such an obvious answer that it does not really need to be mentioned, but for correlation or linear regression Anscombe's quartet is a logical choice. Although it is not a real story with real data I think it is such a simple example it would reasonably fit into your criteria.
|
Tiny (real) datasets for giving examples in class?
|
Probably such an obvious answer that it does not really need to be mentioned, but for correlation or linear regression Anscombe's quartet is a logical choice. Although it is not a real story with real
|
Tiny (real) datasets for giving examples in class?
Probably such an obvious answer that it does not really need to be mentioned, but for correlation or linear regression Anscombe's quartet is a logical choice. Although it is not a real story with real data I think it is such a simple example it would reasonably fit into your criteria.
|
Tiny (real) datasets for giving examples in class?
Probably such an obvious answer that it does not really need to be mentioned, but for correlation or linear regression Anscombe's quartet is a logical choice. Although it is not a real story with real
|
5,935
|
Tiny (real) datasets for giving examples in class?
|
StatSci.org is a nice source for datasets.
|
Tiny (real) datasets for giving examples in class?
|
StatSci.org is a nice source for datasets.
|
Tiny (real) datasets for giving examples in class?
StatSci.org is a nice source for datasets.
|
Tiny (real) datasets for giving examples in class?
StatSci.org is a nice source for datasets.
|
5,936
|
Tiny (real) datasets for giving examples in class?
|
A nice article entitled Resource Discovery for Teaching Statistics has shed light on this this topic.
|
Tiny (real) datasets for giving examples in class?
|
A nice article entitled Resource Discovery for Teaching Statistics has shed light on this this topic.
|
Tiny (real) datasets for giving examples in class?
A nice article entitled Resource Discovery for Teaching Statistics has shed light on this this topic.
|
Tiny (real) datasets for giving examples in class?
A nice article entitled Resource Discovery for Teaching Statistics has shed light on this this topic.
|
5,937
|
Tiny (real) datasets for giving examples in class?
|
https://tuvalabs.com
I am sure you have found what you were looking for long back, but for anyone else who come across thread - TuvaLabs is nice source for the datasets for Classrooms. It curates datasets, story, description, small exercise and visualization capability also you can requests datasets on it.
|
Tiny (real) datasets for giving examples in class?
|
https://tuvalabs.com
I am sure you have found what you were looking for long back, but for anyone else who come across thread - TuvaLabs is nice source for the datasets for Classrooms. It curates data
|
Tiny (real) datasets for giving examples in class?
https://tuvalabs.com
I am sure you have found what you were looking for long back, but for anyone else who come across thread - TuvaLabs is nice source for the datasets for Classrooms. It curates datasets, story, description, small exercise and visualization capability also you can requests datasets on it.
|
Tiny (real) datasets for giving examples in class?
https://tuvalabs.com
I am sure you have found what you were looking for long back, but for anyone else who come across thread - TuvaLabs is nice source for the datasets for Classrooms. It curates data
|
5,938
|
Functions of Independent Random Variables
|
The most general and abstract definition of independence makes this assertion trivial while supplying an important qualifying condition: that two random variables are independent means the sigma-algebras they generate are independent. Because the sigma-algebra generated by a measurable function of a sigma-algebra is a sub-algebra, a fortiori any measurable functions of those random variables have independent algebras, whence those functions are independent.
(When a function is not measurable, it usually does not create a new random variable, so the concept of independent wouldn't even apply.)
Let's unwrap the definitions to see how simple this is. Recall that a random variable $X$ is a real-valued function defined on the "sample space" $\Omega$ (the set of outcomes being studied via probability).
A random variable $X$ is studied by means of the probabilities that its value lies within various intervals of real numbers (or, more generally, sets constructed in simple ways out of intervals: these are the Borel measurable sets of real numbers).
Corresponding to any Borel measurable set $I$ is the event $X^{*}(I)$ consisting of all outcomes $\omega$ for which $X(\omega)$ lies in $I$.
The sigma-algebra generated by $X$ is determined by the collection of all such events.
The naive definition says two random variables $X$ and $Y$ are independent "when their probabilities multiply." That is, when $I$ is one Borel measurable set and $J$ is another, then
$\Pr(X(\omega)\in I\text{ and }Y(\omega)\in J) = \Pr(X(\omega)\in I)\Pr(Y(\omega)\in J).$
But in the language of events (and sigma algebras) that's the same as
$\Pr(\omega \in X^{*}(I)\text{ and }\omega \in Y^{*}(J)) = \Pr(\omega\in X^{*}(I))\Pr(\omega\in Y^{*}(J)).$
Consider now two functions $f, g:\mathbb{R}\to\mathbb{R}$ and suppose that $f \circ X$ and $g\circ Y$ are random variables. (The circle is functional composition: $(f\circ X)(\omega) = f(X(\omega))$. This is what it means for $f$ to be a "function of a random variable".) Notice--this is just elementary set theory--that
$$(f\circ X)^{*}(I) = X^{*}(f^{*}(I)).$$
In other words, every event generated by $f\circ X$ (which is on the left) is automatically an event generated by $X$ (as exhibited by the form of the right hand side). Therefore (5) automatically holds for $f\circ X$ and $g\circ Y$: there's nothing to check!
NB You may replace "real-valued" everywhere by "with values in $\mathbb{R}^d$" without needing to change anything else in any material way. This covers the case of vector-valued random variables.
|
Functions of Independent Random Variables
|
The most general and abstract definition of independence makes this assertion trivial while supplying an important qualifying condition: that two random variables are independent means the sigma-algeb
|
Functions of Independent Random Variables
The most general and abstract definition of independence makes this assertion trivial while supplying an important qualifying condition: that two random variables are independent means the sigma-algebras they generate are independent. Because the sigma-algebra generated by a measurable function of a sigma-algebra is a sub-algebra, a fortiori any measurable functions of those random variables have independent algebras, whence those functions are independent.
(When a function is not measurable, it usually does not create a new random variable, so the concept of independent wouldn't even apply.)
Let's unwrap the definitions to see how simple this is. Recall that a random variable $X$ is a real-valued function defined on the "sample space" $\Omega$ (the set of outcomes being studied via probability).
A random variable $X$ is studied by means of the probabilities that its value lies within various intervals of real numbers (or, more generally, sets constructed in simple ways out of intervals: these are the Borel measurable sets of real numbers).
Corresponding to any Borel measurable set $I$ is the event $X^{*}(I)$ consisting of all outcomes $\omega$ for which $X(\omega)$ lies in $I$.
The sigma-algebra generated by $X$ is determined by the collection of all such events.
The naive definition says two random variables $X$ and $Y$ are independent "when their probabilities multiply." That is, when $I$ is one Borel measurable set and $J$ is another, then
$\Pr(X(\omega)\in I\text{ and }Y(\omega)\in J) = \Pr(X(\omega)\in I)\Pr(Y(\omega)\in J).$
But in the language of events (and sigma algebras) that's the same as
$\Pr(\omega \in X^{*}(I)\text{ and }\omega \in Y^{*}(J)) = \Pr(\omega\in X^{*}(I))\Pr(\omega\in Y^{*}(J)).$
Consider now two functions $f, g:\mathbb{R}\to\mathbb{R}$ and suppose that $f \circ X$ and $g\circ Y$ are random variables. (The circle is functional composition: $(f\circ X)(\omega) = f(X(\omega))$. This is what it means for $f$ to be a "function of a random variable".) Notice--this is just elementary set theory--that
$$(f\circ X)^{*}(I) = X^{*}(f^{*}(I)).$$
In other words, every event generated by $f\circ X$ (which is on the left) is automatically an event generated by $X$ (as exhibited by the form of the right hand side). Therefore (5) automatically holds for $f\circ X$ and $g\circ Y$: there's nothing to check!
NB You may replace "real-valued" everywhere by "with values in $\mathbb{R}^d$" without needing to change anything else in any material way. This covers the case of vector-valued random variables.
|
Functions of Independent Random Variables
The most general and abstract definition of independence makes this assertion trivial while supplying an important qualifying condition: that two random variables are independent means the sigma-algeb
|
5,939
|
Functions of Independent Random Variables
|
Consider this "less advanced" proof:
Let $X:\Omega_X\to\mathbb{R}^n,Y:\Omega_Y\to\mathbb{R}^m,f:\mathbb{R}^n\to\mathbb{R}^k,g:\mathbb{R}^m\to\mathbb{R}^p$, where $X,Y$ are independent random variables and $f,g$ are measurable functions. Then:
$$
P\{f(X)\leq x \text{ and } g(Y)\leq y\}\\=P(\{f(X)\leq x\}\cap\{g(Y)\leq y\})\\=P(\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\}\cap\{Y\in\{w\in\mathbb{R}^m:g(w)\leq y\}\}).
$$
Using independence of $X$ and $Y$,
$$
P(\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\}\cap\{Y\in\{w\in\mathbb{R}^m:g(w)\leq y\}\})=\\=P\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\cdot P\{Y\in\{w\in\mathbb{R}^m:g(w)\leq y\}\}
\\=P\{f(X)\leq x\}\cdot P\{g(Y)\leq y\}.
$$
The idea is to notice that the set
$$
\{f(X)\leq x\}\equiv\{w\in\Omega_X:f(X(w))\leq x\}=\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\},
$$
so properties that are valid for $X$ are extended to $f(X)$ and the same happens for $Y$.
|
Functions of Independent Random Variables
|
Consider this "less advanced" proof:
Let $X:\Omega_X\to\mathbb{R}^n,Y:\Omega_Y\to\mathbb{R}^m,f:\mathbb{R}^n\to\mathbb{R}^k,g:\mathbb{R}^m\to\mathbb{R}^p$, where $X,Y$ are independent random variables
|
Functions of Independent Random Variables
Consider this "less advanced" proof:
Let $X:\Omega_X\to\mathbb{R}^n,Y:\Omega_Y\to\mathbb{R}^m,f:\mathbb{R}^n\to\mathbb{R}^k,g:\mathbb{R}^m\to\mathbb{R}^p$, where $X,Y$ are independent random variables and $f,g$ are measurable functions. Then:
$$
P\{f(X)\leq x \text{ and } g(Y)\leq y\}\\=P(\{f(X)\leq x\}\cap\{g(Y)\leq y\})\\=P(\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\}\cap\{Y\in\{w\in\mathbb{R}^m:g(w)\leq y\}\}).
$$
Using independence of $X$ and $Y$,
$$
P(\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\}\cap\{Y\in\{w\in\mathbb{R}^m:g(w)\leq y\}\})=\\=P\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\cdot P\{Y\in\{w\in\mathbb{R}^m:g(w)\leq y\}\}
\\=P\{f(X)\leq x\}\cdot P\{g(Y)\leq y\}.
$$
The idea is to notice that the set
$$
\{f(X)\leq x\}\equiv\{w\in\Omega_X:f(X(w))\leq x\}=\{X\in\{w\in\mathbb{R}^n:f(w)\leq x\}\},
$$
so properties that are valid for $X$ are extended to $f(X)$ and the same happens for $Y$.
|
Functions of Independent Random Variables
Consider this "less advanced" proof:
Let $X:\Omega_X\to\mathbb{R}^n,Y:\Omega_Y\to\mathbb{R}^m,f:\mathbb{R}^n\to\mathbb{R}^k,g:\mathbb{R}^m\to\mathbb{R}^p$, where $X,Y$ are independent random variables
|
5,940
|
Functions of Independent Random Variables
|
Yes, $g(X)$ and $h(Y)$ are independent for any functions $g$ and $h$ so long as $X$ and $Y$ are independent. It's a very well known results, which is studied in probability theory courses. I'm sure you can find it in any standard text like Billingsley's.
|
Functions of Independent Random Variables
|
Yes, $g(X)$ and $h(Y)$ are independent for any functions $g$ and $h$ so long as $X$ and $Y$ are independent. It's a very well known results, which is studied in probability theory courses. I'm sure yo
|
Functions of Independent Random Variables
Yes, $g(X)$ and $h(Y)$ are independent for any functions $g$ and $h$ so long as $X$ and $Y$ are independent. It's a very well known results, which is studied in probability theory courses. I'm sure you can find it in any standard text like Billingsley's.
|
Functions of Independent Random Variables
Yes, $g(X)$ and $h(Y)$ are independent for any functions $g$ and $h$ so long as $X$ and $Y$ are independent. It's a very well known results, which is studied in probability theory courses. I'm sure yo
|
5,941
|
Functions of Independent Random Variables
|
Not as an alternative, but as an addition, to the previous brilliant answers: the independence of functions of independent random variables is, in fact, quite intuitive.
Usually, we think that $X$ and $Y$ being independent means that knowing the value of $X$ gives no information about the value of $Y$ and vice versa. This interpretation obviously implies that you can't somehow "squeeze" any information out by applying a function (or by any other means actually).
|
Functions of Independent Random Variables
|
Not as an alternative, but as an addition, to the previous brilliant answers: the independence of functions of independent random variables is, in fact, quite intuitive.
Usually, we think that $X$ and
|
Functions of Independent Random Variables
Not as an alternative, but as an addition, to the previous brilliant answers: the independence of functions of independent random variables is, in fact, quite intuitive.
Usually, we think that $X$ and $Y$ being independent means that knowing the value of $X$ gives no information about the value of $Y$ and vice versa. This interpretation obviously implies that you can't somehow "squeeze" any information out by applying a function (or by any other means actually).
|
Functions of Independent Random Variables
Not as an alternative, but as an addition, to the previous brilliant answers: the independence of functions of independent random variables is, in fact, quite intuitive.
Usually, we think that $X$ and
|
5,942
|
Ridge, lasso and elastic net
|
In The Elements of Statistical Learning book, Hastie et al. provide a very insightful and thorough comparison of these shrinkage techniques. The book is available online (pdf). The comparison is done in section 3.4.3, page 69.
The main difference between Lasso and Ridge is the penalty term they use. Ridge uses $L_2$ penalty term which limits the size of the coefficient vector. Lasso uses $L_1$ penalty which imposes sparsity among the coefficients and thus, makes the fitted model more interpretable. Elasticnet is introduced as a compromise between these two techniques, and has a penalty which is a mix of $L_1$ and $L_2$ norms.
|
Ridge, lasso and elastic net
|
In The Elements of Statistical Learning book, Hastie et al. provide a very insightful and thorough comparison of these shrinkage techniques. The book is available online (pdf). The comparison is done
|
Ridge, lasso and elastic net
In The Elements of Statistical Learning book, Hastie et al. provide a very insightful and thorough comparison of these shrinkage techniques. The book is available online (pdf). The comparison is done in section 3.4.3, page 69.
The main difference between Lasso and Ridge is the penalty term they use. Ridge uses $L_2$ penalty term which limits the size of the coefficient vector. Lasso uses $L_1$ penalty which imposes sparsity among the coefficients and thus, makes the fitted model more interpretable. Elasticnet is introduced as a compromise between these two techniques, and has a penalty which is a mix of $L_1$ and $L_2$ norms.
|
Ridge, lasso and elastic net
In The Elements of Statistical Learning book, Hastie et al. provide a very insightful and thorough comparison of these shrinkage techniques. The book is available online (pdf). The comparison is done
|
5,943
|
Ridge, lasso and elastic net
|
To summarize, here are some salient differences between Lasso, Ridge and Elastic-net:
Lasso does a sparse selection, while Ridge does not.
When you have highly-correlated variables, Ridge regression shrinks the two coefficients towards one another. Lasso is somewhat indifferent and generally picks one over the other. Depending on the context, one does not know which variable gets picked. Elastic-net is a compromise between the two that attempts to shrink and do a sparse selection simultaneously.
Ridge estimators are indifferent to multiplicative scaling of the data. That is, if both X and Y variables are multiplied by constants, the coefficients of the fit do not change, for a given $\lambda$ parameter. However, for Lasso, the fit is not independent of the scaling. In fact, the $\lambda$ parameter must be scaled up by the multiplier to get the same result. It is more complex for elastic net.
Ridge penalizes the largest $\beta$'s more than it penalizes the smaller ones (as they are squared in the penalty term). Lasso penalizes them more uniformly. This may or may not be important. In a forecasting problem with a powerful predictor, the predictor's effectiveness is shrunk by the Ridge as compared to the Lasso.
|
Ridge, lasso and elastic net
|
To summarize, here are some salient differences between Lasso, Ridge and Elastic-net:
Lasso does a sparse selection, while Ridge does not.
When you have highly-correlated variables, Ridge regression
|
Ridge, lasso and elastic net
To summarize, here are some salient differences between Lasso, Ridge and Elastic-net:
Lasso does a sparse selection, while Ridge does not.
When you have highly-correlated variables, Ridge regression shrinks the two coefficients towards one another. Lasso is somewhat indifferent and generally picks one over the other. Depending on the context, one does not know which variable gets picked. Elastic-net is a compromise between the two that attempts to shrink and do a sparse selection simultaneously.
Ridge estimators are indifferent to multiplicative scaling of the data. That is, if both X and Y variables are multiplied by constants, the coefficients of the fit do not change, for a given $\lambda$ parameter. However, for Lasso, the fit is not independent of the scaling. In fact, the $\lambda$ parameter must be scaled up by the multiplier to get the same result. It is more complex for elastic net.
Ridge penalizes the largest $\beta$'s more than it penalizes the smaller ones (as they are squared in the penalty term). Lasso penalizes them more uniformly. This may or may not be important. In a forecasting problem with a powerful predictor, the predictor's effectiveness is shrunk by the Ridge as compared to the Lasso.
|
Ridge, lasso and elastic net
To summarize, here are some salient differences between Lasso, Ridge and Elastic-net:
Lasso does a sparse selection, while Ridge does not.
When you have highly-correlated variables, Ridge regression
|
5,944
|
Ridge, lasso and elastic net
|
I highly recommended you to have a look at An introduction to statistical learning book (Tibshirani et. al, 2013).
The reason for this is that Elements of statistical learning book is intended for individuals with advanced
training in the mathematical sciences. In the foreword to ISL, the authors write:
An Introduction to Statistical
Learning arose from the perceived need for a broader and less technical
treatment of these topics. [...]
An Introduction to Statistical Learning is appropriate for advanced undergraduates or masterβs students in statistics or related quantitative fields or for individuals in other disciplines who wish to use statistical learning tools to analyze their data.
|
Ridge, lasso and elastic net
|
I highly recommended you to have a look at An introduction to statistical learning book (Tibshirani et. al, 2013).
The reason for this is that Elements of statistical learning book is intended for in
|
Ridge, lasso and elastic net
I highly recommended you to have a look at An introduction to statistical learning book (Tibshirani et. al, 2013).
The reason for this is that Elements of statistical learning book is intended for individuals with advanced
training in the mathematical sciences. In the foreword to ISL, the authors write:
An Introduction to Statistical
Learning arose from the perceived need for a broader and less technical
treatment of these topics. [...]
An Introduction to Statistical Learning is appropriate for advanced undergraduates or masterβs students in statistics or related quantitative fields or for individuals in other disciplines who wish to use statistical learning tools to analyze their data.
|
Ridge, lasso and elastic net
I highly recommended you to have a look at An introduction to statistical learning book (Tibshirani et. al, 2013).
The reason for this is that Elements of statistical learning book is intended for in
|
5,945
|
Ridge, lasso and elastic net
|
The above answers are very clear and informative. I would like to add one minor point from the statistic perspective. Take the ridge regression as an example. It is an extension of the ordinal least square regression to solve the multicollinearity problems when there are many correlated features. If the linear regression is
Y=Xb+e
The normal equation solution for the multiple linear regression
b=inv(X.T*X)*X.T*Y
The normal equation solution for the ridge regression is
b=inv(X.T*X+k*I)*X.T*Y.
It is a biased estimator for b and we can always find a penalty term k which will make the mean square error of Ridge regression smaller than that of OLS regression.
For LASSO and Elastic-Net, we could not find such a analytic solution.
|
Ridge, lasso and elastic net
|
The above answers are very clear and informative. I would like to add one minor point from the statistic perspective. Take the ridge regression as an example. It is an extension of the ordinal least s
|
Ridge, lasso and elastic net
The above answers are very clear and informative. I would like to add one minor point from the statistic perspective. Take the ridge regression as an example. It is an extension of the ordinal least square regression to solve the multicollinearity problems when there are many correlated features. If the linear regression is
Y=Xb+e
The normal equation solution for the multiple linear regression
b=inv(X.T*X)*X.T*Y
The normal equation solution for the ridge regression is
b=inv(X.T*X+k*I)*X.T*Y.
It is a biased estimator for b and we can always find a penalty term k which will make the mean square error of Ridge regression smaller than that of OLS regression.
For LASSO and Elastic-Net, we could not find such a analytic solution.
|
Ridge, lasso and elastic net
The above answers are very clear and informative. I would like to add one minor point from the statistic perspective. Take the ridge regression as an example. It is an extension of the ordinal least s
|
5,946
|
What is an instrumental variable?
|
[The following perhaps seems a little technical because of the use of equations but it builds mainly on the arrow charts to provide the intuition which only requires very basic understanding of OLS - so don't be repulsed.]
Suppose you want to estimate the causal effect of $x_i$ on $y_i$ given by the estimated coefficient for $\beta$, but for some reason there is a correlation between your explanatory variable and the error term:
$$\begin{matrix}y_i &=& \alpha &+& \beta x_i &+& \epsilon_i & \\ & && & & \hspace{-1cm}\nwarrow & \hspace{-0.8cm} \nearrow \\ & & & & & corr & \end{matrix}$$
This might have happened because we forgot to include an important variable that also correlates with $x_i$. This problem is known as omitted variable bias and then your $\widehat{\beta}$ will not give you the causal effect (see here for the details). This is a case when you would want to use an instrument because only then can you find the true causal effect.
An instrument is a new variable $z_i$ which is uncorrelated with $\epsilon_i$, but that correlates well with $x_i$ and which only influences $y_i$ through $x_i$ - so our instrument is what is called "exogenous". It's like in this chart here:
$$\begin{matrix}
z_i & \rightarrow & x_i & \rightarrow & y_i \newline
& & \uparrow & \nearrow & \newline
& & \epsilon_i &
\end{matrix}$$
So how do we use this new variable?
Maybe you remember the ANOVA type idea behind regression where you split the total variation of a dependent variable into an explained and an unexplained component. For example, if you regress your $x_i$ on the instrument,
$$\underbrace{x_i}_{\text{total variation}} = \underbrace{a \quad + \quad \pi z_i}_{\text{explained variation}} \quad + \underbrace{\eta_i}_{\text{unexplained variation}}$$
then you know that the explained variation here is exogenous to our original equation because it depends on the exogenous variable $z_i$ only. So in this sense, we split our $x_i$ up into a part that we can claim is certainly exogenous (that's the part that depends on $z_i$) and some unexplained part $\eta_i$ that keeps all the bad variation which correlates with $\epsilon_i$. Now we take the exogenous part of this regression, call it $\widehat{x_i}$,
$$x_i \quad = \underbrace{a \quad + \quad \pi z_i}_{\text{good variation} \: = \: \widehat{x}_i } \quad + \underbrace{\eta_i}_{\text{bad variation}}$$
and put this into our original regression:
$$y_i = \alpha + \beta \widehat{x}_i + \epsilon_i$$
Now since $\widehat{x}_i$ is not correlated anymore with $\epsilon_i$ (remember, we "filtered out" this part from $x_i$ and left it in $\eta_i$), we can consistently estimate our $\beta$ because the instrument has helped us to break the correlation between the explanatory variably and the error. This was one way how you can apply instrumental variables. This method is actually called 2-stage least squares, where our regression of $x_i$ on $z_i$ is called the "first stage" and the last equation here is called the "second stage".
In terms of our original picture (I leave out the $\epsilon_i$ to not make a mess but remember that it is there!), instead of taking the direct but flawed route between $x_i$ to $y_i$ we took an intermediate step via $\widehat{x}_i$
$$\begin{matrix}
& & & & & \widehat{x}_i \newline
& & & & \nearrow & \downarrow \newline
& z_i & \rightarrow & x_i & \rightarrow & y_i
\end{matrix}$$
Thanks to this slight diversion of our road to the causal effect we were able to consistently estimate $\beta$ by using the instrument. The cost of this diversion is that instrumental variables models are generally less precise, meaning that they tend to have larger standard errors.
How do we find instruments?
That's not an easy question because you need to make a good case as to why your $z_i$ would not be correlated with $\epsilon_i$ - this cannot be tested formally because the true error is unobserved. The main challenge is therefore to come up with something that can be plausibly seen as exogenous such as natural disasters, policy changes, or sometimes you can even run a randomized experiment. The other answers had some very good examples for this so I won't repeat this part.
|
What is an instrumental variable?
|
[The following perhaps seems a little technical because of the use of equations but it builds mainly on the arrow charts to provide the intuition which only requires very basic understanding of OLS -
|
What is an instrumental variable?
[The following perhaps seems a little technical because of the use of equations but it builds mainly on the arrow charts to provide the intuition which only requires very basic understanding of OLS - so don't be repulsed.]
Suppose you want to estimate the causal effect of $x_i$ on $y_i$ given by the estimated coefficient for $\beta$, but for some reason there is a correlation between your explanatory variable and the error term:
$$\begin{matrix}y_i &=& \alpha &+& \beta x_i &+& \epsilon_i & \\ & && & & \hspace{-1cm}\nwarrow & \hspace{-0.8cm} \nearrow \\ & & & & & corr & \end{matrix}$$
This might have happened because we forgot to include an important variable that also correlates with $x_i$. This problem is known as omitted variable bias and then your $\widehat{\beta}$ will not give you the causal effect (see here for the details). This is a case when you would want to use an instrument because only then can you find the true causal effect.
An instrument is a new variable $z_i$ which is uncorrelated with $\epsilon_i$, but that correlates well with $x_i$ and which only influences $y_i$ through $x_i$ - so our instrument is what is called "exogenous". It's like in this chart here:
$$\begin{matrix}
z_i & \rightarrow & x_i & \rightarrow & y_i \newline
& & \uparrow & \nearrow & \newline
& & \epsilon_i &
\end{matrix}$$
So how do we use this new variable?
Maybe you remember the ANOVA type idea behind regression where you split the total variation of a dependent variable into an explained and an unexplained component. For example, if you regress your $x_i$ on the instrument,
$$\underbrace{x_i}_{\text{total variation}} = \underbrace{a \quad + \quad \pi z_i}_{\text{explained variation}} \quad + \underbrace{\eta_i}_{\text{unexplained variation}}$$
then you know that the explained variation here is exogenous to our original equation because it depends on the exogenous variable $z_i$ only. So in this sense, we split our $x_i$ up into a part that we can claim is certainly exogenous (that's the part that depends on $z_i$) and some unexplained part $\eta_i$ that keeps all the bad variation which correlates with $\epsilon_i$. Now we take the exogenous part of this regression, call it $\widehat{x_i}$,
$$x_i \quad = \underbrace{a \quad + \quad \pi z_i}_{\text{good variation} \: = \: \widehat{x}_i } \quad + \underbrace{\eta_i}_{\text{bad variation}}$$
and put this into our original regression:
$$y_i = \alpha + \beta \widehat{x}_i + \epsilon_i$$
Now since $\widehat{x}_i$ is not correlated anymore with $\epsilon_i$ (remember, we "filtered out" this part from $x_i$ and left it in $\eta_i$), we can consistently estimate our $\beta$ because the instrument has helped us to break the correlation between the explanatory variably and the error. This was one way how you can apply instrumental variables. This method is actually called 2-stage least squares, where our regression of $x_i$ on $z_i$ is called the "first stage" and the last equation here is called the "second stage".
In terms of our original picture (I leave out the $\epsilon_i$ to not make a mess but remember that it is there!), instead of taking the direct but flawed route between $x_i$ to $y_i$ we took an intermediate step via $\widehat{x}_i$
$$\begin{matrix}
& & & & & \widehat{x}_i \newline
& & & & \nearrow & \downarrow \newline
& z_i & \rightarrow & x_i & \rightarrow & y_i
\end{matrix}$$
Thanks to this slight diversion of our road to the causal effect we were able to consistently estimate $\beta$ by using the instrument. The cost of this diversion is that instrumental variables models are generally less precise, meaning that they tend to have larger standard errors.
How do we find instruments?
That's not an easy question because you need to make a good case as to why your $z_i$ would not be correlated with $\epsilon_i$ - this cannot be tested formally because the true error is unobserved. The main challenge is therefore to come up with something that can be plausibly seen as exogenous such as natural disasters, policy changes, or sometimes you can even run a randomized experiment. The other answers had some very good examples for this so I won't repeat this part.
|
What is an instrumental variable?
[The following perhaps seems a little technical because of the use of equations but it builds mainly on the arrow charts to provide the intuition which only requires very basic understanding of OLS -
|
5,947
|
What is an instrumental variable?
|
As a medical statistician with no previous knowledge of econom(etr)ics, I struggled to get to grips with instrumental variables as I often struggled to follow their examples and didn't understand their rather different terminology (e.g. 'endogeneity', 'reduced form', 'structural equation', 'omitted variables'). Here's a few references I found useful (the first should be freely available, but I'm afraid the others probably require a subscription):
Staiger D. Instrumental Variables. AcademyHealth Cyber Seminar in Health
Services Research Methods, March 2002.
http://www.dartmouth.edu/~dstaiger/wpapers-Econ.htm
Newhouse JP, McClellan M. Econometrics in Outcomes Research: The Use
of Instrumental Variables. Annual Review of Public Health 1998;19:17-34.
http://dx.doi.org/10.1146/annurev.publhealth.19.1.17
Greenland S. An introduction to instrumental variables for epidemiologists. International Journal of Epidemiology 2000;29:722-729. http://dx.doi.org/10.1093/ije/29.4.722
Zohoori N, Savitz DA. Econometric approaches to epidemiologic data: Relating endogeneity and unobserved heterogeneity to confounding. Annals of Epidemiology 1997;7:251-257. http://dx.doi.org/10.1016/S1047-2797(97)00023-9
I'd also recommend chapter 4 of:
Angrist JD, Pischke JS. Mostly harmless econometrics: an empiricist's companion. Princeton, N.J: Princeton University Press, 2009. http://www.mostlyharmlesseconometrics.com/
|
What is an instrumental variable?
|
As a medical statistician with no previous knowledge of econom(etr)ics, I struggled to get to grips with instrumental variables as I often struggled to follow their examples and didn't understand thei
|
What is an instrumental variable?
As a medical statistician with no previous knowledge of econom(etr)ics, I struggled to get to grips with instrumental variables as I often struggled to follow their examples and didn't understand their rather different terminology (e.g. 'endogeneity', 'reduced form', 'structural equation', 'omitted variables'). Here's a few references I found useful (the first should be freely available, but I'm afraid the others probably require a subscription):
Staiger D. Instrumental Variables. AcademyHealth Cyber Seminar in Health
Services Research Methods, March 2002.
http://www.dartmouth.edu/~dstaiger/wpapers-Econ.htm
Newhouse JP, McClellan M. Econometrics in Outcomes Research: The Use
of Instrumental Variables. Annual Review of Public Health 1998;19:17-34.
http://dx.doi.org/10.1146/annurev.publhealth.19.1.17
Greenland S. An introduction to instrumental variables for epidemiologists. International Journal of Epidemiology 2000;29:722-729. http://dx.doi.org/10.1093/ije/29.4.722
Zohoori N, Savitz DA. Econometric approaches to epidemiologic data: Relating endogeneity and unobserved heterogeneity to confounding. Annals of Epidemiology 1997;7:251-257. http://dx.doi.org/10.1016/S1047-2797(97)00023-9
I'd also recommend chapter 4 of:
Angrist JD, Pischke JS. Mostly harmless econometrics: an empiricist's companion. Princeton, N.J: Princeton University Press, 2009. http://www.mostlyharmlesseconometrics.com/
|
What is an instrumental variable?
As a medical statistician with no previous knowledge of econom(etr)ics, I struggled to get to grips with instrumental variables as I often struggled to follow their examples and didn't understand thei
|
5,948
|
What is an instrumental variable?
|
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Here are some slides that I prepared for an econometrics course at UC Berkeley. I hope that you find them useful---I believe that they answer your questions and provide some examples.
There are also more advanced treatments on the course pages for PS 236 and PS 239 (graduate-level political science methods courses) at my website: http://gibbons.bio/teaching.html.
Charlie
|
What is an instrumental variable?
|
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
What is an instrumental variable?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Here are some slides that I prepared for an econometrics course at UC Berkeley. I hope that you find them useful---I believe that they answer your questions and provide some examples.
There are also more advanced treatments on the course pages for PS 236 and PS 239 (graduate-level political science methods courses) at my website: http://gibbons.bio/teaching.html.
Charlie
|
What is an instrumental variable?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
5,949
|
What is an instrumental variable?
|
Non-technical (usually that's all I'm good for anyway): There are times when not only does X cause Y, but Y causes X as well. An instrumental variable is a device that can "clean up" this messy, inconvenient relationship so that the best estimates can be made of X's effect on Y.
The instrumental variable is chosen by virtue of its relationships: it is a cause of X, but, other than acting through X, it has no effect on Y. The instrument (or instruments) is used in Stage One to compute a new "version" of X, one that is in no way a function of Y. This new "predicted" X is then used in a second stage, in a more standard regression, to explain/predict Y. Hence the term Two-Stage Least Squares regression.
One typically finds the IV in processes that are overriding or beyond the control of X OR Y, such as variables that depend on laws, policies, acts of nature, etc.
|
What is an instrumental variable?
|
Non-technical (usually that's all I'm good for anyway): There are times when not only does X cause Y, but Y causes X as well. An instrumental variable is a device that can "clean up" this messy, inc
|
What is an instrumental variable?
Non-technical (usually that's all I'm good for anyway): There are times when not only does X cause Y, but Y causes X as well. An instrumental variable is a device that can "clean up" this messy, inconvenient relationship so that the best estimates can be made of X's effect on Y.
The instrumental variable is chosen by virtue of its relationships: it is a cause of X, but, other than acting through X, it has no effect on Y. The instrument (or instruments) is used in Stage One to compute a new "version" of X, one that is in no way a function of Y. This new "predicted" X is then used in a second stage, in a more standard regression, to explain/predict Y. Hence the term Two-Stage Least Squares regression.
One typically finds the IV in processes that are overriding or beyond the control of X OR Y, such as variables that depend on laws, policies, acts of nature, etc.
|
What is an instrumental variable?
Non-technical (usually that's all I'm good for anyway): There are times when not only does X cause Y, but Y causes X as well. An instrumental variable is a device that can "clean up" this messy, inc
|
5,950
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
|
I self-published the basic idea of a deterministic variety of generative adversarial networks (GANs) in a 2010 blog post (archive.org). I had searched for but could not find anything similar anywhere, and had no time to try implementing it. I was not and still am not a neural network researcher and have no connections in the field. I'll copy-paste the blog post here:
2010-02-24
A method for training artificial neural networks to generate missing
data within a variable context. As the idea is hard to put in a single
sentence, I will use an example:
An image may have missing pixels (let's say, under a smudge). How can
one restore the missing pixels, knowing only the surrounding pixels?
One approach would be a "generator" neural network that, given the
surrounding pixels as input, generates the missing pixels.
But how to train such a network? One can't expect the network to
exactly produce the missing pixels. Imagine, for example, that the
missing data is a patch of grass. One could teach the network with a
bunch of images of lawns, with portions removed. The teacher knows the
data that is missing, and could score the network according to the
root mean square difference (RMSD) between the generated patch of
grass and the original data. The problem is that if the generator
encounters an image that is not part of the training set, it would be
impossible for the neural network to put all the leaves, especially in
the middle of the patch, in exactly the right places. The lowest RMSD
error would probably be achieved by the network filling the middle
area of the patch with a solid color that is the average of the color
of pixels in typical images of grass. If the network tried to generate
grass that looks convincing to a human and as such fulfills its
purpose, there would be an unfortunate penalty by the RMSD metric.
My idea is this (see figure below): Train simultaneously with the
generator a classifier network that is given, in random or alternating
sequence, generated and original data. The classifier then has to
guess, in the context of the surrounding image context, whether the
input is original (1) or generated (0). The generator network is
simultaneously trying to get a high score (1) from the classifier. The
outcome, hopefully, is that both networks start out really simple, and
progress towards generating and recognizing more and more advanced
features, approaching and possibly defeating human's ability to
discern between the generated data and the original. If multiple
training samples are considered for each score, then RMSD is the
correct error metric to use, as this will encourage the classifier
network to output probabilities.
Artificial neural network training setup
When I mention RMSD at the end I mean the error metric for the "probability estimate", not the pixel values.
I originally started considering the use of neural networks in 2000 (comp.dsp post) to generate missing high frequencies for up-sampled (resampled to a higher sampling frequency) digital audio, in a way that would be convincing rather than accurate. In 2001 I collected an audio library for the training. Here are parts of an EFNet #musicdsp Internet Relay Chat (IRC) log from 20 January 2006 in which I (yehar) talk about the idea with another user (_Beta):
[22:18] <yehar> the problem with samples is that if you don't have something "up there" already then what can you do if you upsample...
[22:22] <yehar> i once collected a big library of sounds so that i could develop a "smart" algo to solve this exact problem
[22:22] <yehar> i would have used neural networks
[22:22] <yehar> but i didn't finish the job :-D
[22:23] <_Beta> problem with neural networks is that you have to have some way of measuring the goodness of results
[22:24] <yehar> beta: i have this idea that you can develop a "listener" at the same time as you develop the "smart up-there sound creator"
[22:26] <yehar> beta: and this listener will learn to detect when it's listening a created or a natural up-there spectrum. and the creator develops at the same time to try to circumvent this detection
Sometime between 2006 and 2010, a friend invited an expert to take a look at my idea and discuss it with me. They thought that it was interesting, but said that it wasn't cost-effective to train two networks when a single network can do the job. I was never sure if they did not get the core idea or if they immediately saw a way to formulate it as a single network, perhaps with a bottleneck somewhere in the topology to separate it into two parts. This was at a time when I didn't even know that backpropagation is still the de-facto training method (learned that making videos in the Deep Dream craze of 2015). Over the years I had talked about my idea with a couple of data scientists and others that I thought might be interested, but the response was mild.
In May 2017 I saw Ian Goodfellow's tutorial presentation on YouTube [Mirror], which totally made my day. It appeared to me as the same basic idea, with differences as I currently understand outlined below, and the hard work had been done to make it give good results. Also he gave a theory, or based everything on a theory, of why it should work, while I never did any sort of a formal analysis of my idea. Goodfellow's presentation answered questions that I had had and much more.
Goodfellow's GAN and his suggested extensions include a noise source in the generator. I never thought of including a noise source but have instead the training data context, better matching the idea to a conditional GAN (cGAN) without a noise vector input and with the model conditioned on a part of the data. My current understanding based on Mathieu et al. 2016 is that a noise source is not needed for useful results if there is enough input variability. The other difference is that Goodfellow's GAN minimizes log-likelihood. Later, a least squares GAN (LSGAN) has been introduced (Mao et al. 2017) which matches my RMSD suggestion. So, my idea would match that of a conditional least squares generative adversarial network (cLSGAN) without a noise vector input to the generator and with a part of the data as the conditioning input. A generative generator samples from an approximation of the data distribution. I do now know if and doubt that real-world noisy input would enable that with my idea, but that is not to say that the results would not be useful if it didn't.
The differences mentioned in the above are the primary reason why I believe Goodfellow did not know or hear about my idea. Another is that my blog has had no other machine learning content, so it would have enjoyed very limited exposure in machine learning circles.
It is a conflict of interests when a reviewer puts pressure on an author to cite the reviewer's own work.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
|
I self-published the basic idea of a deterministic variety of generative adversarial networks (GANs) in a 2010 blog post (archive.org). I had searched for but could not find anything similar anywhere,
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
I self-published the basic idea of a deterministic variety of generative adversarial networks (GANs) in a 2010 blog post (archive.org). I had searched for but could not find anything similar anywhere, and had no time to try implementing it. I was not and still am not a neural network researcher and have no connections in the field. I'll copy-paste the blog post here:
2010-02-24
A method for training artificial neural networks to generate missing
data within a variable context. As the idea is hard to put in a single
sentence, I will use an example:
An image may have missing pixels (let's say, under a smudge). How can
one restore the missing pixels, knowing only the surrounding pixels?
One approach would be a "generator" neural network that, given the
surrounding pixels as input, generates the missing pixels.
But how to train such a network? One can't expect the network to
exactly produce the missing pixels. Imagine, for example, that the
missing data is a patch of grass. One could teach the network with a
bunch of images of lawns, with portions removed. The teacher knows the
data that is missing, and could score the network according to the
root mean square difference (RMSD) between the generated patch of
grass and the original data. The problem is that if the generator
encounters an image that is not part of the training set, it would be
impossible for the neural network to put all the leaves, especially in
the middle of the patch, in exactly the right places. The lowest RMSD
error would probably be achieved by the network filling the middle
area of the patch with a solid color that is the average of the color
of pixels in typical images of grass. If the network tried to generate
grass that looks convincing to a human and as such fulfills its
purpose, there would be an unfortunate penalty by the RMSD metric.
My idea is this (see figure below): Train simultaneously with the
generator a classifier network that is given, in random or alternating
sequence, generated and original data. The classifier then has to
guess, in the context of the surrounding image context, whether the
input is original (1) or generated (0). The generator network is
simultaneously trying to get a high score (1) from the classifier. The
outcome, hopefully, is that both networks start out really simple, and
progress towards generating and recognizing more and more advanced
features, approaching and possibly defeating human's ability to
discern between the generated data and the original. If multiple
training samples are considered for each score, then RMSD is the
correct error metric to use, as this will encourage the classifier
network to output probabilities.
Artificial neural network training setup
When I mention RMSD at the end I mean the error metric for the "probability estimate", not the pixel values.
I originally started considering the use of neural networks in 2000 (comp.dsp post) to generate missing high frequencies for up-sampled (resampled to a higher sampling frequency) digital audio, in a way that would be convincing rather than accurate. In 2001 I collected an audio library for the training. Here are parts of an EFNet #musicdsp Internet Relay Chat (IRC) log from 20 January 2006 in which I (yehar) talk about the idea with another user (_Beta):
[22:18] <yehar> the problem with samples is that if you don't have something "up there" already then what can you do if you upsample...
[22:22] <yehar> i once collected a big library of sounds so that i could develop a "smart" algo to solve this exact problem
[22:22] <yehar> i would have used neural networks
[22:22] <yehar> but i didn't finish the job :-D
[22:23] <_Beta> problem with neural networks is that you have to have some way of measuring the goodness of results
[22:24] <yehar> beta: i have this idea that you can develop a "listener" at the same time as you develop the "smart up-there sound creator"
[22:26] <yehar> beta: and this listener will learn to detect when it's listening a created or a natural up-there spectrum. and the creator develops at the same time to try to circumvent this detection
Sometime between 2006 and 2010, a friend invited an expert to take a look at my idea and discuss it with me. They thought that it was interesting, but said that it wasn't cost-effective to train two networks when a single network can do the job. I was never sure if they did not get the core idea or if they immediately saw a way to formulate it as a single network, perhaps with a bottleneck somewhere in the topology to separate it into two parts. This was at a time when I didn't even know that backpropagation is still the de-facto training method (learned that making videos in the Deep Dream craze of 2015). Over the years I had talked about my idea with a couple of data scientists and others that I thought might be interested, but the response was mild.
In May 2017 I saw Ian Goodfellow's tutorial presentation on YouTube [Mirror], which totally made my day. It appeared to me as the same basic idea, with differences as I currently understand outlined below, and the hard work had been done to make it give good results. Also he gave a theory, or based everything on a theory, of why it should work, while I never did any sort of a formal analysis of my idea. Goodfellow's presentation answered questions that I had had and much more.
Goodfellow's GAN and his suggested extensions include a noise source in the generator. I never thought of including a noise source but have instead the training data context, better matching the idea to a conditional GAN (cGAN) without a noise vector input and with the model conditioned on a part of the data. My current understanding based on Mathieu et al. 2016 is that a noise source is not needed for useful results if there is enough input variability. The other difference is that Goodfellow's GAN minimizes log-likelihood. Later, a least squares GAN (LSGAN) has been introduced (Mao et al. 2017) which matches my RMSD suggestion. So, my idea would match that of a conditional least squares generative adversarial network (cLSGAN) without a noise vector input to the generator and with a part of the data as the conditioning input. A generative generator samples from an approximation of the data distribution. I do now know if and doubt that real-world noisy input would enable that with my idea, but that is not to say that the results would not be useful if it didn't.
The differences mentioned in the above are the primary reason why I believe Goodfellow did not know or hear about my idea. Another is that my blog has had no other machine learning content, so it would have enjoyed very limited exposure in machine learning circles.
It is a conflict of interests when a reviewer puts pressure on an author to cite the reviewer's own work.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
I self-published the basic idea of a deterministic variety of generative adversarial networks (GANs) in a 2010 blog post (archive.org). I had searched for but could not find anything similar anywhere,
|
5,951
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
|
An answer from Ian Goodfellow on Was JΓΌrgen Schmidhuber right when he claimed credit for GANs at NIPS 2016? posted on 2017-03-21:
He isnβt claiming credit for GANs, exactly. Itβs more complicated.
You can see what he wrote in his own words when he was a reviewer of
the NIPS 2014 submission on GANs: Export Reviews, Discussions, Author
Feedback and
Meta-Reviews (mirror)
Heβs the reviewer that asked us to change the name of GANs to βinverse
PM.β
Here's the paper he believes is not being sufficiently acknowledged:
http://ftp://ftp.idsia.ch/pub/juergen/factorial.pdf (mirror)
I donβt like that there is no good way to have issues like this
adjudicated. I contacted the NIPS organizers and asked if there is a
way for JΓΌrgen to file a complaint about me and have a committee of
NIPS representatives judge whether my publication treats his unfairly.
They said there is no such process available.
I personally donβt think that there is any significant connection
between predictability minimization and GANs. I have never had any
problem acknowledging connections between GANs and other algorithms
that actually are related, like noise-contrastive estimation and
self-supervised boosting.
JΓΌrgen and I intend to write a paper together soon describing the
similarities and differences between PM and GANs, assuming weβre able
to agree on what those are.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
|
An answer from Ian Goodfellow on Was JΓΌrgen Schmidhuber right when he claimed credit for GANs at NIPS 2016? posted on 2017-03-21:
He isnβt claiming credit for GANs, exactly. Itβs more complicated.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
An answer from Ian Goodfellow on Was JΓΌrgen Schmidhuber right when he claimed credit for GANs at NIPS 2016? posted on 2017-03-21:
He isnβt claiming credit for GANs, exactly. Itβs more complicated.
You can see what he wrote in his own words when he was a reviewer of
the NIPS 2014 submission on GANs: Export Reviews, Discussions, Author
Feedback and
Meta-Reviews (mirror)
Heβs the reviewer that asked us to change the name of GANs to βinverse
PM.β
Here's the paper he believes is not being sufficiently acknowledged:
http://ftp://ftp.idsia.ch/pub/juergen/factorial.pdf (mirror)
I donβt like that there is no good way to have issues like this
adjudicated. I contacted the NIPS organizers and asked if there is a
way for JΓΌrgen to file a complaint about me and have a committee of
NIPS representatives judge whether my publication treats his unfairly.
They said there is no such process available.
I personally donβt think that there is any significant connection
between predictability minimization and GANs. I have never had any
problem acknowledging connections between GANs and other algorithms
that actually are related, like noise-contrastive estimation and
self-supervised boosting.
JΓΌrgen and I intend to write a paper together soon describing the
similarities and differences between PM and GANs, assuming weβre able
to agree on what those are.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
An answer from Ian Goodfellow on Was JΓΌrgen Schmidhuber right when he claimed credit for GANs at NIPS 2016? posted on 2017-03-21:
He isnβt claiming credit for GANs, exactly. Itβs more complicated.
|
5,952
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
|
This is taken straight out from the 1991 original paper of Schmidhuber in 1991:
"I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus."
if you understand the basic principle behind GAN's and the min-max optimization game that the generator and the discriminator play, then in principle there is essentially no difference in GAN's and Schmidhuber's original paper(in principle) the principle is same and if Goodfellow didn't cite his paper then in academic terms it is called plagiarism and in real life, it is called stealing. There is essentially no debate here to have its either you accept that you took the idea or not. If you look at the original paper he proposes a similar min-max optimization game (This excerpt is directly taken from the original paper):
Maximization of the objective function(as defined in the original Schmidhuber's paper follows the following constraints) tends to force the representational units to take on binary values that maximize independence(referring to generators ability in GAN's to maximize it such that the outputs sampled from the vector of distribution encoded by the generator becomes impossible for the discriminator to discriminate) in addition to minimizing the reconstruction error(refers to minimizing the discriminator's loss function i.e maximizing its ability to differentiate between generated and real image).
Hence to anyone who understands this kind of stuff can realize who is at fault here and why Schmidhuber's annoyance is justified in the true sense. The statements in the bracket compare the principle similarity between GAN's and Schmidhuber's original 1991 paper.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
|
This is taken straight out from the 1991 original paper of Schmidhuber in 1991:
"I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
This is taken straight out from the 1991 original paper of Schmidhuber in 1991:
"I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus."
if you understand the basic principle behind GAN's and the min-max optimization game that the generator and the discriminator play, then in principle there is essentially no difference in GAN's and Schmidhuber's original paper(in principle) the principle is same and if Goodfellow didn't cite his paper then in academic terms it is called plagiarism and in real life, it is called stealing. There is essentially no debate here to have its either you accept that you took the idea or not. If you look at the original paper he proposes a similar min-max optimization game (This excerpt is directly taken from the original paper):
Maximization of the objective function(as defined in the original Schmidhuber's paper follows the following constraints) tends to force the representational units to take on binary values that maximize independence(referring to generators ability in GAN's to maximize it such that the outputs sampled from the vector of distribution encoded by the generator becomes impossible for the discriminator to discriminate) in addition to minimizing the reconstruction error(refers to minimizing the discriminator's loss function i.e maximizing its ability to differentiate between generated and real image).
Hence to anyone who understands this kind of stuff can realize who is at fault here and why Schmidhuber's annoyance is justified in the true sense. The statements in the bracket compare the principle similarity between GAN's and Schmidhuber's original 1991 paper.
|
Were generative adversarial networks introduced by JΓΌrgen Schmidhuber?
This is taken straight out from the 1991 original paper of Schmidhuber in 1991:
"I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of
|
5,953
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
|
1) My understanding is that users have a choice of functions to use for $\phi$, and that the Gaussian function is a very common choice.
2) The density at $x$ is the mean of the different values of $\phi_h(x_i - x)$ at $x$. For example, you might have $x_1=1$, $x_2 = 2$, and a Gaussian distribution with $\sigma=1$ for $\phi_h$. In this case, the density at $x$ would be $\frac{\mathcal{N}_{1, 1}(x) + \mathcal{N}_{2, 1}(x)}{2}$.
3) You can plug in any density function you like as your window function.
4) $h$ determines the width of your chosen window function.
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
|
1) My understanding is that users have a choice of functions to use for $\phi$, and that the Gaussian function is a very common choice.
2) The density at $x$ is the mean of the different values of $\p
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
1) My understanding is that users have a choice of functions to use for $\phi$, and that the Gaussian function is a very common choice.
2) The density at $x$ is the mean of the different values of $\phi_h(x_i - x)$ at $x$. For example, you might have $x_1=1$, $x_2 = 2$, and a Gaussian distribution with $\sigma=1$ for $\phi_h$. In this case, the density at $x$ would be $\frac{\mathcal{N}_{1, 1}(x) + \mathcal{N}_{2, 1}(x)}{2}$.
3) You can plug in any density function you like as your window function.
4) $h$ determines the width of your chosen window function.
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
1) My understanding is that users have a choice of functions to use for $\phi$, and that the Gaussian function is a very common choice.
2) The density at $x$ is the mean of the different values of $\p
|
5,954
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
|
Parzen window density estimation is another name for kernel density estimation. It is a nonparametric method for estimating continuous density function from the data.
Imagine that you have some datapoints $x_1,\dots,x_n$ that come from common unknown, presumably continuous, distribution $f$. You are interested in estimating the distribution given your data. One thing that you could do is simply to look at the empirical distribution and treat it as a sample equivalent of the true distribution. However if your data is continuous, then most probably you would see each $x_i$ point appear only once in the dataset, so based on this, you would conclude that your data comes from an uniform distribution since each of the values have equal probability. Hopefully, you can do better then this: you can pack your data in some number of equally-spaced intervals and count the values that fall into each interval. This method would be based on estimating the histogram. Unfortunately, with histogram you end up with some number of bins, rather then with continuous distribution, so it's only a rough approximation.
Kernel density estimation is the third alternative. The main idea is that you approximate $f$ by a mixture of continuous distributions $K$ (using your notation $\phi$), called kernels, that are centered at $x_i$ datapoints and have scale (bandwidth) equal to $h$:
$$
\hat{f_h}(x) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big)
$$
This is illustrated on the picture below, where normal distribution is used as kernel $K$ and different values for bandwidth $h$ are used to estimate distribution given the seven datapoints (marked by the colorful lines on the top of the plots). The colorful densities on the plots are kernels centered at $x_i$ points. Notice that $h$ is a relative parameter, it's value is always chosen depending on your data and the same value of $h$ may not give similar results for different datasets.
Kernel $K$ can be thought as a probability density function, and it needs to integrate to unity. It also needs to be symmetric so that $K(x) = K(-x)$ and, what follows, centered at zero. Wikipedia article on kernels lists many popular kernels, like Gaussian (normal distribution), Epanechnikov, rectangular (uniform distribution), etc. Basically any distribution meeting those requirements can be used as a kernel.
Obviously, the final estimate will depend on your choice of kernel (but not that much) and on the bandwidth parameter $h$. The following thread
How to interpret the bandwidth value in a kernel density estimation? describes the usage of bandwidth parameters in greater detail.
Saying this in plain English, what you assume in here is that the observed points $x_i$ are just a sample and follow some distribution $f$ to be estimated. Since the distribution is continuous, we assume that there is some unknown but nonzero density around the near neighborhood of $x_i$ points (the neighborhood is defined by parameter $h$) and we use kernels $K$ to account for it. The more points is in some neighborhood, the more density is accumulated around this region and so, the higher the overall density of $\hat{f_h}$. The resulting function $\hat{f_h}$ can be now evaluated for any point $x$ (without subscript) to obtain density estimate for it, this is how we obtained function $\hat{f_h}(x)$ that is an approximation of unknown density function $f(x)$.
The nice thing about kernel densities is that, not like histograms, they are continuous functions and that they are themselves valid probability densities since they are a mixture of valid probability densities. In many cases this is as close as you can get to approximating $f$.
The difference between kernel density and other densities, as normal distribution, is that "usual" densities are mathematical functions, while kernel density is an approximation of the true density estimated using your data, so they are not "standalone" distributions.
I would recommend you the two nice introductory books on this subject by Silverman (1986) and Wand and Jones (1995).
Silverman, B.W. (1986). Density estimation for statistics and data analysis. CRC/Chapman & Hall.
Wand, M.P and Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC.
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
|
Parzen window density estimation is another name for kernel density estimation. It is a nonparametric method for estimating continuous density function from the data.
Imagine that you have some datapo
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
Parzen window density estimation is another name for kernel density estimation. It is a nonparametric method for estimating continuous density function from the data.
Imagine that you have some datapoints $x_1,\dots,x_n$ that come from common unknown, presumably continuous, distribution $f$. You are interested in estimating the distribution given your data. One thing that you could do is simply to look at the empirical distribution and treat it as a sample equivalent of the true distribution. However if your data is continuous, then most probably you would see each $x_i$ point appear only once in the dataset, so based on this, you would conclude that your data comes from an uniform distribution since each of the values have equal probability. Hopefully, you can do better then this: you can pack your data in some number of equally-spaced intervals and count the values that fall into each interval. This method would be based on estimating the histogram. Unfortunately, with histogram you end up with some number of bins, rather then with continuous distribution, so it's only a rough approximation.
Kernel density estimation is the third alternative. The main idea is that you approximate $f$ by a mixture of continuous distributions $K$ (using your notation $\phi$), called kernels, that are centered at $x_i$ datapoints and have scale (bandwidth) equal to $h$:
$$
\hat{f_h}(x) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big)
$$
This is illustrated on the picture below, where normal distribution is used as kernel $K$ and different values for bandwidth $h$ are used to estimate distribution given the seven datapoints (marked by the colorful lines on the top of the plots). The colorful densities on the plots are kernels centered at $x_i$ points. Notice that $h$ is a relative parameter, it's value is always chosen depending on your data and the same value of $h$ may not give similar results for different datasets.
Kernel $K$ can be thought as a probability density function, and it needs to integrate to unity. It also needs to be symmetric so that $K(x) = K(-x)$ and, what follows, centered at zero. Wikipedia article on kernels lists many popular kernels, like Gaussian (normal distribution), Epanechnikov, rectangular (uniform distribution), etc. Basically any distribution meeting those requirements can be used as a kernel.
Obviously, the final estimate will depend on your choice of kernel (but not that much) and on the bandwidth parameter $h$. The following thread
How to interpret the bandwidth value in a kernel density estimation? describes the usage of bandwidth parameters in greater detail.
Saying this in plain English, what you assume in here is that the observed points $x_i$ are just a sample and follow some distribution $f$ to be estimated. Since the distribution is continuous, we assume that there is some unknown but nonzero density around the near neighborhood of $x_i$ points (the neighborhood is defined by parameter $h$) and we use kernels $K$ to account for it. The more points is in some neighborhood, the more density is accumulated around this region and so, the higher the overall density of $\hat{f_h}$. The resulting function $\hat{f_h}$ can be now evaluated for any point $x$ (without subscript) to obtain density estimate for it, this is how we obtained function $\hat{f_h}(x)$ that is an approximation of unknown density function $f(x)$.
The nice thing about kernel densities is that, not like histograms, they are continuous functions and that they are themselves valid probability densities since they are a mixture of valid probability densities. In many cases this is as close as you can get to approximating $f$.
The difference between kernel density and other densities, as normal distribution, is that "usual" densities are mathematical functions, while kernel density is an approximation of the true density estimated using your data, so they are not "standalone" distributions.
I would recommend you the two nice introductory books on this subject by Silverman (1986) and Wand and Jones (1995).
Silverman, B.W. (1986). Density estimation for statistics and data analysis. CRC/Chapman & Hall.
Wand, M.P and Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC.
|
Can you explain Parzen window (kernel) density estimation in layman's terms?
Parzen window density estimation is another name for kernel density estimation. It is a nonparametric method for estimating continuous density function from the data.
Imagine that you have some datapo
|
5,955
|
Calculating confidence intervals for a logistic regression
|
Your question may come from the fact that you are dealing with Odds Ratios and Probabilities which is confusing at first. Since the logistic model is a non linear transformation of $\beta^Tx$ computing the confidence intervals is not as straightforward.
Background
Recall that for the Logistic regression model
Probability of $(Y = 1)$: $p = \frac{e^{\alpha + \beta_1x_1 + \beta_2
x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_2 x_2}}$
Odds of $(Y = 1)$: $ \left( \frac{p}{1-p}\right) = e^{\alpha + \beta_1x_1 + \beta_2
x_2}$
Log Odds of $(Y = 1)$: $ \log \left( \frac{p}{1-p}\right) = \alpha + \beta_1x_1 + \beta_2
x_2$
Consider the case where you have a one unit increase in variable $x_1$, i.e. $x_1 + 1$, then the new odds are
$$ \text{Odds}(Y = 1) = e^{\alpha + \beta_1(x_1 + 1) + \beta_2x_2} = e^{\alpha + \beta_1 x_1 + \beta_1 + \beta_2x_2 } $$
Odds Ratio (OR) are therefore
$$ \frac{\text{Odds}(x_1 + 1)}{\text{Odds}(x_1)} = \frac{e^{\alpha + \beta_1(x_1 + 1) + \beta_2x_2} }{e^{\alpha + \beta_1 x_1 + \beta_2x_2}} = e^{\beta_1} $$
Log Odds Ratio = $\beta_1$
Relative risk or (probability ratio) = $\frac{ \frac{e^{\alpha + \beta_1x_1 + \beta_1 + \beta_2
x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_1 + \beta_2 x_2}}}{ \frac{e^{\alpha + \beta_1x_1 + \beta_2
x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_2 x_2}}}$
Interpreting coefficients
How would you interpret the coefficient value $\beta_j$ ? Assuming that everything else remains fixed:
For every unit increase in $x_j$ the log-odds ratio increases by $\beta_j$.
For every unit increase in $x_j$ the odds ratio increases by $e^{\beta_j}$.
For every increase of $x_j$ from $k$ to $k + \Delta$ the odds ratio increases by $e^{\beta_j \Delta}$
If the coefficient is negative, then an increase in $x_j$ leads to a decrease in the odds ratio.
Confidence intervals for a single parameter $\beta_j$
Do I just need to use $1.96βSE$? Or do I need to convert the SE using an approach described here?
Since the parameter $\beta_j$ is estimated using Maxiumum Likelihood Estimation, MLE theory tells us that it is asymptotically normal and hence we can use the large sample Wald confidence interval to get the usual
$$ \beta_j \pm z^* SE(\beta_j)$$
Which gives a confidence interval on the log-odds ratio. Using the invariance property of the MLE allows us to exponentiate to get
$$ e^{\beta_j \pm z^* SE(\beta_j)}$$
which is a confidence interval on the odds ratio. Note that these intervals are for a single parameter only.
If I want to understand the standard-error for both variables how would I consider that?
If you include several parameters you can use the Bonferroni procedure, otherwise for all parameters you can use the confidence interval for probability estimates
Bonferroni procedure for several parameters
If $g$ parameters are to be estimated with family confidence coefficient of approximately $1 - \alpha$, the joint Bonferroni confidence limits are
$$ \beta_g \pm z_{(1 - \frac{\alpha}{2g})}SE(\beta_g)$$
Confidence intervals for probability estimates
The logistic model outputs an estimation of the probability of observing a one and we aim to construct a frequentist interval around the true probability $p$ such that $Pr(p_{L} \leq p \leq p_{U}) = .95$
One approach called endpoint transformation does the following:
Compute the upper and lower bounds of the confidence interval for the linear combination $x^T\beta$ (using the Wald CI)
Apply a monotonic transformation to the endpoints $F(x^T\beta)$ to obtain the probabilities.
Since $Pr(x^T\beta) = F(x^T\beta)$ is a monotonic transformation of $x^T\beta$
$$ [Pr(x^T\beta)_L \leq Pr(x^T\beta) \leq Pr(x^T\beta)_U] = [F(x^T\beta)_L \leq F(x^T\beta) \leq F(x^T\beta)_U] $$
Concretely this means computing $\beta^Tx \pm z^* SE(\beta^Tx)$ and then applying the logit transform to the result to get the lower and upper bounds:
$$[\frac{e^{x^T\beta - z^* SE(x^T\beta)}}{1 + e^{x^T\beta - z^* SE(x^T\beta)}}, \frac{e^{x^T\beta + z^* SE(x^T\beta)}}{1 + e^{x^T\beta + z^* SE(x^T\beta)}},] $$
The estimated approximate variance of $x^T\beta$ can be calculated using the covariance matrix of the regression coefficients using
$$ Var(x^T\beta) = x^T \Sigma x$$
The advantage of this method is that the bounds cannot be outside the range $(0,1)$
There are several other approaches as well, using the delta method, bootstrapping etc.. which each have their own assumptions, advantages and limits.
Sources and info
My favorite book on this topic is "Applied Linear Statistical Models" by Kutner, Neter, Li, Chapter 14
Otherwise here are a few online sources:
Plotting confidence intervals for the predicted probabilities from a logistic regression
https://stackoverflow.com/questions/47414842/confidence-interval-of-probability-prediction-from-logistic-regression-statsmode
Edit October 2021 - New links
https://fdocuments.net/reader/full/5logreg-beamer-online
https://jslsoc.sitehost.iu.edu/stata/ci_computations/xulong-prvalue-23aug2005.pdf
|
Calculating confidence intervals for a logistic regression
|
Your question may come from the fact that you are dealing with Odds Ratios and Probabilities which is confusing at first. Since the logistic model is a non linear transformation of $\beta^Tx$ computin
|
Calculating confidence intervals for a logistic regression
Your question may come from the fact that you are dealing with Odds Ratios and Probabilities which is confusing at first. Since the logistic model is a non linear transformation of $\beta^Tx$ computing the confidence intervals is not as straightforward.
Background
Recall that for the Logistic regression model
Probability of $(Y = 1)$: $p = \frac{e^{\alpha + \beta_1x_1 + \beta_2
x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_2 x_2}}$
Odds of $(Y = 1)$: $ \left( \frac{p}{1-p}\right) = e^{\alpha + \beta_1x_1 + \beta_2
x_2}$
Log Odds of $(Y = 1)$: $ \log \left( \frac{p}{1-p}\right) = \alpha + \beta_1x_1 + \beta_2
x_2$
Consider the case where you have a one unit increase in variable $x_1$, i.e. $x_1 + 1$, then the new odds are
$$ \text{Odds}(Y = 1) = e^{\alpha + \beta_1(x_1 + 1) + \beta_2x_2} = e^{\alpha + \beta_1 x_1 + \beta_1 + \beta_2x_2 } $$
Odds Ratio (OR) are therefore
$$ \frac{\text{Odds}(x_1 + 1)}{\text{Odds}(x_1)} = \frac{e^{\alpha + \beta_1(x_1 + 1) + \beta_2x_2} }{e^{\alpha + \beta_1 x_1 + \beta_2x_2}} = e^{\beta_1} $$
Log Odds Ratio = $\beta_1$
Relative risk or (probability ratio) = $\frac{ \frac{e^{\alpha + \beta_1x_1 + \beta_1 + \beta_2
x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_1 + \beta_2 x_2}}}{ \frac{e^{\alpha + \beta_1x_1 + \beta_2
x_2}}{1 + e^{ \alpha + \beta_1x_1 + \beta_2 x_2}}}$
Interpreting coefficients
How would you interpret the coefficient value $\beta_j$ ? Assuming that everything else remains fixed:
For every unit increase in $x_j$ the log-odds ratio increases by $\beta_j$.
For every unit increase in $x_j$ the odds ratio increases by $e^{\beta_j}$.
For every increase of $x_j$ from $k$ to $k + \Delta$ the odds ratio increases by $e^{\beta_j \Delta}$
If the coefficient is negative, then an increase in $x_j$ leads to a decrease in the odds ratio.
Confidence intervals for a single parameter $\beta_j$
Do I just need to use $1.96βSE$? Or do I need to convert the SE using an approach described here?
Since the parameter $\beta_j$ is estimated using Maxiumum Likelihood Estimation, MLE theory tells us that it is asymptotically normal and hence we can use the large sample Wald confidence interval to get the usual
$$ \beta_j \pm z^* SE(\beta_j)$$
Which gives a confidence interval on the log-odds ratio. Using the invariance property of the MLE allows us to exponentiate to get
$$ e^{\beta_j \pm z^* SE(\beta_j)}$$
which is a confidence interval on the odds ratio. Note that these intervals are for a single parameter only.
If I want to understand the standard-error for both variables how would I consider that?
If you include several parameters you can use the Bonferroni procedure, otherwise for all parameters you can use the confidence interval for probability estimates
Bonferroni procedure for several parameters
If $g$ parameters are to be estimated with family confidence coefficient of approximately $1 - \alpha$, the joint Bonferroni confidence limits are
$$ \beta_g \pm z_{(1 - \frac{\alpha}{2g})}SE(\beta_g)$$
Confidence intervals for probability estimates
The logistic model outputs an estimation of the probability of observing a one and we aim to construct a frequentist interval around the true probability $p$ such that $Pr(p_{L} \leq p \leq p_{U}) = .95$
One approach called endpoint transformation does the following:
Compute the upper and lower bounds of the confidence interval for the linear combination $x^T\beta$ (using the Wald CI)
Apply a monotonic transformation to the endpoints $F(x^T\beta)$ to obtain the probabilities.
Since $Pr(x^T\beta) = F(x^T\beta)$ is a monotonic transformation of $x^T\beta$
$$ [Pr(x^T\beta)_L \leq Pr(x^T\beta) \leq Pr(x^T\beta)_U] = [F(x^T\beta)_L \leq F(x^T\beta) \leq F(x^T\beta)_U] $$
Concretely this means computing $\beta^Tx \pm z^* SE(\beta^Tx)$ and then applying the logit transform to the result to get the lower and upper bounds:
$$[\frac{e^{x^T\beta - z^* SE(x^T\beta)}}{1 + e^{x^T\beta - z^* SE(x^T\beta)}}, \frac{e^{x^T\beta + z^* SE(x^T\beta)}}{1 + e^{x^T\beta + z^* SE(x^T\beta)}},] $$
The estimated approximate variance of $x^T\beta$ can be calculated using the covariance matrix of the regression coefficients using
$$ Var(x^T\beta) = x^T \Sigma x$$
The advantage of this method is that the bounds cannot be outside the range $(0,1)$
There are several other approaches as well, using the delta method, bootstrapping etc.. which each have their own assumptions, advantages and limits.
Sources and info
My favorite book on this topic is "Applied Linear Statistical Models" by Kutner, Neter, Li, Chapter 14
Otherwise here are a few online sources:
Plotting confidence intervals for the predicted probabilities from a logistic regression
https://stackoverflow.com/questions/47414842/confidence-interval-of-probability-prediction-from-logistic-regression-statsmode
Edit October 2021 - New links
https://fdocuments.net/reader/full/5logreg-beamer-online
https://jslsoc.sitehost.iu.edu/stata/ci_computations/xulong-prvalue-23aug2005.pdf
|
Calculating confidence intervals for a logistic regression
Your question may come from the fact that you are dealing with Odds Ratios and Probabilities which is confusing at first. Since the logistic model is a non linear transformation of $\beta^Tx$ computin
|
5,956
|
Calculating confidence intervals for a logistic regression
|
To get the 95% confidence interval of the prediction you can calculate on the logit scale and then convert those back to the probability scale 0-1. Here is an example using the titanic dataset.
library(titanic)
data("titanic_train")
titanic_train$Pclass = factor(titanic_train$Pclass, levels = c(1,2,3), labels = c('First','Second','Third'))
fit = glm(Survived ~ Sex + Pclass, data=titanic_train, family = binomial())
inverse_logit = function(x){
exp(x)/(1+exp(x))
}
predicted = predict(fit, data.frame(Sex='male', Pclass='First'), type='link', se.fit=TRUE)
se_high = inverse_logit(predicted$fit + (predicted$se.fit*1.96))
se_low = inverse_logit(predicted$fit - (predicted$se.fit*1.96))
expected = inverse_logit(predicted$fit)
The mean and low/high 95% CI.
> expected
1
0.4146556
> se_high
1
0.4960988
> se_low
1
0.3376243
And the output from just using type='response', which only gives the mean
predict(fit, data.frame(Sex='male', Pclass='First'), type='response')
1
0.4146556
|
Calculating confidence intervals for a logistic regression
|
To get the 95% confidence interval of the prediction you can calculate on the logit scale and then convert those back to the probability scale 0-1. Here is an example using the titanic dataset.
libra
|
Calculating confidence intervals for a logistic regression
To get the 95% confidence interval of the prediction you can calculate on the logit scale and then convert those back to the probability scale 0-1. Here is an example using the titanic dataset.
library(titanic)
data("titanic_train")
titanic_train$Pclass = factor(titanic_train$Pclass, levels = c(1,2,3), labels = c('First','Second','Third'))
fit = glm(Survived ~ Sex + Pclass, data=titanic_train, family = binomial())
inverse_logit = function(x){
exp(x)/(1+exp(x))
}
predicted = predict(fit, data.frame(Sex='male', Pclass='First'), type='link', se.fit=TRUE)
se_high = inverse_logit(predicted$fit + (predicted$se.fit*1.96))
se_low = inverse_logit(predicted$fit - (predicted$se.fit*1.96))
expected = inverse_logit(predicted$fit)
The mean and low/high 95% CI.
> expected
1
0.4146556
> se_high
1
0.4960988
> se_low
1
0.3376243
And the output from just using type='response', which only gives the mean
predict(fit, data.frame(Sex='male', Pclass='First'), type='response')
1
0.4146556
|
Calculating confidence intervals for a logistic regression
To get the 95% confidence interval of the prediction you can calculate on the logit scale and then convert those back to the probability scale 0-1. Here is an example using the titanic dataset.
libra
|
5,957
|
Calculating confidence intervals for a logistic regression
|
The explanations above are very nice and detailed.
Here is the simple way to just get the result:
Random data frame:
dat <- data.frame(
trial = factor(c(rep("noise", 340), rep("signal", 340))),
rating = c(rep(1, 70), rep(2, 67), rep(3, 28),
rep(4, 45), rep(5, 53), rep(6, 77), rep(1, 219),
rep(2, 56), rep(3, 16), rep(4, 24), rep(5, 14), rep(6, 11))
)
-- >sadly the following edit is not being presented correct: here will be dat-"dollar"-rating and dat-dollar-trial to edit the data frame above
dat$rating <- as.factor(ifelse(dat$rating <= 3, "sure", "not"))
dat$trial <- ifelse(dat$trial == "noise", 0, 1)
IMPORTANT
my.glm <- glm(trial ~ rating, family = binomial(probit), data = dat)
summary(my.glm)
# 95-CI for all Betas
confint(my.glm)
# 90-CI for all Betas
confint(my.glm, level = 0.90)
# 95-CI of Beta 1
confint(my.glm)[c(2, 4)]
# 90-CI of Beta 1
confint(my.glm, level = 0.90)[c(2, 4)]
|
Calculating confidence intervals for a logistic regression
|
The explanations above are very nice and detailed.
Here is the simple way to just get the result:
Random data frame:
dat <- data.frame(
trial = factor(c(rep("noise", 340), rep("signal", 340)
|
Calculating confidence intervals for a logistic regression
The explanations above are very nice and detailed.
Here is the simple way to just get the result:
Random data frame:
dat <- data.frame(
trial = factor(c(rep("noise", 340), rep("signal", 340))),
rating = c(rep(1, 70), rep(2, 67), rep(3, 28),
rep(4, 45), rep(5, 53), rep(6, 77), rep(1, 219),
rep(2, 56), rep(3, 16), rep(4, 24), rep(5, 14), rep(6, 11))
)
-- >sadly the following edit is not being presented correct: here will be dat-"dollar"-rating and dat-dollar-trial to edit the data frame above
dat$rating <- as.factor(ifelse(dat$rating <= 3, "sure", "not"))
dat$trial <- ifelse(dat$trial == "noise", 0, 1)
IMPORTANT
my.glm <- glm(trial ~ rating, family = binomial(probit), data = dat)
summary(my.glm)
# 95-CI for all Betas
confint(my.glm)
# 90-CI for all Betas
confint(my.glm, level = 0.90)
# 95-CI of Beta 1
confint(my.glm)[c(2, 4)]
# 90-CI of Beta 1
confint(my.glm, level = 0.90)[c(2, 4)]
|
Calculating confidence intervals for a logistic regression
The explanations above are very nice and detailed.
Here is the simple way to just get the result:
Random data frame:
dat <- data.frame(
trial = factor(c(rep("noise", 340), rep("signal", 340)
|
5,958
|
Do Bayesian priors become irrelevant with large sample size?
|
It is not that easy. Information in your data overwhelms prior information not only when your sample size is large, but also when your data provides enough information to overwhelm the prior information. Uninformative priors get easily persuaded by data, while strongly informative ones may be more resistant. In extreme case, with ill-defined priors, your data may not be able at all to overcome it (e.g. zero density over some region).
Recall that by Bayes theorem we use two sources of information in our statistical model, out-of-data, prior information, and information conveyed by data in likelihood function:
$$ \color{violet}{\text{posterior}} \propto \color{red}{\text{prior}} \times \color{lightblue}{\text{likelihood}} $$
When using uninformative prior (or maximum likelihood), we try to bring minimal possible prior information into our model. With informative priors we bring substantial amount of information into the model. So both, the data and prior, inform us what values of estimated parameters are more plausible, or believable. They can bring different information and each of them can overpower the other one in some cases.
Let me illustrate this with very basic beta-binomial model (see here for detailed example). With "uninformative" prior, pretty small sample may be enough to overpower it. On the plots below you can see priors (red curve), likelihood (blue curve), and posteriors (violet curve) of the same model with different sample sizes.
On another hand, you can have informative prior that is close to the true value, that would also be easily, but not that easily as with weekly informative one, persuaded by data.
The case is very different with informative prior, when it is far from what the data says (using the same data as in first example). In such case you need larger sample to overcome the prior.
So it is not only about sample size, but also about what is your data and what is your prior. Notice that this is a desired behavior, because when using informative priors we want to potentially include out-of-data information in our model and this would be impossible if large samples would always discard the priors.
Because of complicated posterior-likelihood-prior relations, it is always good to look at the posterior distribution and do some posterior predictive checks (Gelman, Meng and Stern, 1996; Gelman and Hill, 2006; Gelman et al, 2004). Moreover, as described by Spiegelhalter (2004), you can use different priors, for example "pessimistic" that express doubts about large effects, or "enthusiastic" that are optimistic about estimated effects. Comparing how different priors behave with your data may help to informally assess the extent how posterior was influenced by prior.
Spiegelhalter, D. J. (2004). Incorporating Bayesian ideas into health-care evaluation. Statistical Science, 156-174.
Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian data analysis. Chapman & Hall/CRC.
Gelman, A. and Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.
Gelman, A., Meng, X. L., and Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies. Statistica sinica, 733-760.
|
Do Bayesian priors become irrelevant with large sample size?
|
It is not that easy. Information in your data overwhelms prior information not only when your sample size is large, but also when your data provides enough information to overwhelm the prior informati
|
Do Bayesian priors become irrelevant with large sample size?
It is not that easy. Information in your data overwhelms prior information not only when your sample size is large, but also when your data provides enough information to overwhelm the prior information. Uninformative priors get easily persuaded by data, while strongly informative ones may be more resistant. In extreme case, with ill-defined priors, your data may not be able at all to overcome it (e.g. zero density over some region).
Recall that by Bayes theorem we use two sources of information in our statistical model, out-of-data, prior information, and information conveyed by data in likelihood function:
$$ \color{violet}{\text{posterior}} \propto \color{red}{\text{prior}} \times \color{lightblue}{\text{likelihood}} $$
When using uninformative prior (or maximum likelihood), we try to bring minimal possible prior information into our model. With informative priors we bring substantial amount of information into the model. So both, the data and prior, inform us what values of estimated parameters are more plausible, or believable. They can bring different information and each of them can overpower the other one in some cases.
Let me illustrate this with very basic beta-binomial model (see here for detailed example). With "uninformative" prior, pretty small sample may be enough to overpower it. On the plots below you can see priors (red curve), likelihood (blue curve), and posteriors (violet curve) of the same model with different sample sizes.
On another hand, you can have informative prior that is close to the true value, that would also be easily, but not that easily as with weekly informative one, persuaded by data.
The case is very different with informative prior, when it is far from what the data says (using the same data as in first example). In such case you need larger sample to overcome the prior.
So it is not only about sample size, but also about what is your data and what is your prior. Notice that this is a desired behavior, because when using informative priors we want to potentially include out-of-data information in our model and this would be impossible if large samples would always discard the priors.
Because of complicated posterior-likelihood-prior relations, it is always good to look at the posterior distribution and do some posterior predictive checks (Gelman, Meng and Stern, 1996; Gelman and Hill, 2006; Gelman et al, 2004). Moreover, as described by Spiegelhalter (2004), you can use different priors, for example "pessimistic" that express doubts about large effects, or "enthusiastic" that are optimistic about estimated effects. Comparing how different priors behave with your data may help to informally assess the extent how posterior was influenced by prior.
Spiegelhalter, D. J. (2004). Incorporating Bayesian ideas into health-care evaluation. Statistical Science, 156-174.
Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian data analysis. Chapman & Hall/CRC.
Gelman, A. and Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.
Gelman, A., Meng, X. L., and Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies. Statistica sinica, 733-760.
|
Do Bayesian priors become irrelevant with large sample size?
It is not that easy. Information in your data overwhelms prior information not only when your sample size is large, but also when your data provides enough information to overwhelm the prior informati
|
5,959
|
Do Bayesian priors become irrelevant with large sample size?
|
When performing Bayesian inference, we operate by maximizing our likelihood function in combination with the priors we have about the parameters.
This is actually not what most practitioners consider to be Bayesian inference. It is possible to estimate parameters this way, but I would not call it Bayesian inference.
Bayesian inference uses posterior distributions to calculate posterior probabilities (or ratios of probabilities) for competing hypotheses.
Posterior distributions can be estimated empirically by Monte Carlo or Markov-Chain Monte Carlo (MCMC) techniques.
Putting these distinctions aside, the question
Do Bayesian priors become irrelevant with large sample size?
still depends on the context of the problem and what you care about.
If what you care about is prediction given an already very large sample, then the answer is generally yes, the priors are asymptotically irrelevant*. However, if what you care about is model selection and Bayesian Hypothesis testing, then the answer is no, the priors matter a lot, and their effect will not deteriorate with sample size.
*Here, I am assuming that the priors aren't truncated/censored beyond the parameter space implied by the likelihood, and that they aren't so ill-specified as to cause convergence issues with near zero-density in important regions. My argument is also asymptotic, which comes with all the regular caveats.
Predictive Densities
As an example, let $\mathbf{d}_N = (d_1, d_2,...,d_N)$ be your data, where each $d_i$ signifies an observation. Let the likelihood be denoted as $f(\mathbf{d}_N\mid \theta)$, where $\theta$ is the parameter vector.
Then suppose we also specify two separate priors $\pi_0 (\theta \mid \lambda_1)$ and $\pi_0 (\theta \mid \lambda_2)$, which differ by the hyper-parameter $\lambda_1 \neq \lambda_2$.
Each prior will lead to different posterior distributions in a finite sample,
$$
\pi_N (\theta \mid \mathbf{d}_N, \lambda_j) \propto f(\mathbf{d}_N\mid \theta)\pi_0 ( \theta \mid \lambda_j)\;\;\;\;\;\mathrm{for}\;\;j=1,2
$$
Letting $\theta^*$ be the suito true parameter value, $\theta^{j}_N \sim \pi_N(\theta\mid \mathbf{d}_N, \lambda_j)$, and $\hat \theta_N = \max_\theta\{ f(\mathbf{d}_N\mid \theta) \}$, it is true that $\theta^{1}_N$, $\theta^{2}_N$, and $\hat \theta_N$ will all converge in probability to $\theta^*$. Put more formally, for any $\varepsilon >0$;
$$
\begin{align}
\lim_{N \rightarrow \infty} Pr(|\theta^j_N - \theta^*| \ge \varepsilon) &= 0\;\;\;\forall j \in \{1,2\} \\
\lim_{N \rightarrow \infty} Pr(|\hat \theta_N - \theta^*| \ge \varepsilon) &= 0
\end{align}
$$
To be more consistent with your optimization procedure, we could alternatively define $\theta^j_N = \max_\theta \{\pi_N (\theta \mid \mathbf{d}_N, \lambda_j)\} $ and although this parameter is very different then the previously defined, the above asymptotics still hold.
It follows that the predictive densities, which are defined as either $f(\tilde d \mid \mathbf{d}_N, \lambda_j) = \int_{\Theta} f(\tilde d \mid \theta,\lambda_j,\mathbf{d}_N)\pi_N (\theta \mid \lambda_j,\mathbf{d}_N)d\theta$ in a proper Bayesian approach or $f(\tilde d \mid \mathbf{d}_N, \theta^j_N)$ using optimization, converge in distribution to $f(\tilde d\mid \mathbf{d}_N, \theta^*)$. So in terms of predicting new observations conditional on an already very large sample, the prior specification makes no difference asymptotically.
Model Selection and Hypothesis Testing
If one is interested in Bayesian model selection and hypothesis testing they should be aware that the effect of the prior does not vanish asymptotically.
In a Bayesian setting we would calculate posterior probabilities or Bayes factors with marginal likelihoods. A marginal likelihood is the likelihood of the data given a model i.e. $f(\mathbf{d}_N \mid \mathrm{model})$.
The Bayes factor between two alternative models is the ratio of their marginal likelihoods;
$$
K_N = \frac{f(\mathbf{d}_N \mid \mathrm{model}_1)}{f(\mathbf{d}_N \mid \mathrm{model}_2)}
$$
The posterior probability for each model in a set of models can also be calculated from their marginal likelihoods as well;
$$
Pr(\mathrm{model}_j \mid \mathbf{d}_N) = \frac{f(\mathbf{d}_N \mid \mathrm{model}_j)Pr(\mathrm{model}_j)}{\sum_{l=1}^L f(\mathbf{d}_N \mid \mathrm{model}_l)Pr(\mathrm{model}_l)}
$$
These are useful metrics used to compare models.
For the above models, the marginal likelihoods are calculated as;
$$
f(\mathbf{d}_N \mid \lambda_j) = \int_{\Theta} f(\mathbf{d}_N \mid \theta, \lambda_j)\pi_0(\theta\mid \lambda_j)d\theta
$$
However, we can also think about sequentially adding observations to our sample, and write the marginal likelihood as a chain of predictive likelihoods;
$$
f(\mathbf{d}_N \mid \lambda_j) = \prod_{n=0}^{N-1} f(d_{n+1} \mid \mathbf{d}_n , \lambda_j)
$$
From above we know that $f(d_{N+1} \mid \mathbf{d}_N , \lambda_j)$ converges to $f(d_{N+1} \mid \mathbf{d}_N , \theta^*)$, but it is generally not true that $f(\mathbf{d}_N \mid \lambda_1)$ converges to $f(\mathbf{d}_N \mid \theta^*)$, nor does it converge to $f(\mathbf{d}_N \mid \lambda_2)$. This should be apparent given the product notation above. While latter terms in the product will be increasingly similar, the initial terms will be different, because of this, the Bayes factor
$$
\frac{f(\mathbf{d}_N \mid \lambda_1)}{ f(\mathbf{d}_N \mid \lambda_2)} \not\stackrel{p}{\rightarrow} 1
$$
This is an issue if we wished to calculate a Bayes factor for an alternative model with different likelihood and prior. For example consider the marginal likelihood $h(\mathbf{d}_N\mid M) = \int_{\Theta} h(\mathbf{d}_N\mid \theta, M)\pi_0(\theta\mid M) d\theta$; then
$$
\frac{f(\mathbf{d}_N \mid \lambda_1)}{ h(\mathbf{d}_N\mid M)} \neq \frac{f(\mathbf{d}_N \mid \lambda_2)}{ h(\mathbf{d}_N\mid M)}
$$
asymptotically or otherwise. The same can be shown for posterior probabilities. In this setting the choice of the prior significantly effects the results of inference regardless of sample size.
|
Do Bayesian priors become irrelevant with large sample size?
|
When performing Bayesian inference, we operate by maximizing our likelihood function in combination with the priors we have about the parameters.
This is actually not what most practitioners consider
|
Do Bayesian priors become irrelevant with large sample size?
When performing Bayesian inference, we operate by maximizing our likelihood function in combination with the priors we have about the parameters.
This is actually not what most practitioners consider to be Bayesian inference. It is possible to estimate parameters this way, but I would not call it Bayesian inference.
Bayesian inference uses posterior distributions to calculate posterior probabilities (or ratios of probabilities) for competing hypotheses.
Posterior distributions can be estimated empirically by Monte Carlo or Markov-Chain Monte Carlo (MCMC) techniques.
Putting these distinctions aside, the question
Do Bayesian priors become irrelevant with large sample size?
still depends on the context of the problem and what you care about.
If what you care about is prediction given an already very large sample, then the answer is generally yes, the priors are asymptotically irrelevant*. However, if what you care about is model selection and Bayesian Hypothesis testing, then the answer is no, the priors matter a lot, and their effect will not deteriorate with sample size.
*Here, I am assuming that the priors aren't truncated/censored beyond the parameter space implied by the likelihood, and that they aren't so ill-specified as to cause convergence issues with near zero-density in important regions. My argument is also asymptotic, which comes with all the regular caveats.
Predictive Densities
As an example, let $\mathbf{d}_N = (d_1, d_2,...,d_N)$ be your data, where each $d_i$ signifies an observation. Let the likelihood be denoted as $f(\mathbf{d}_N\mid \theta)$, where $\theta$ is the parameter vector.
Then suppose we also specify two separate priors $\pi_0 (\theta \mid \lambda_1)$ and $\pi_0 (\theta \mid \lambda_2)$, which differ by the hyper-parameter $\lambda_1 \neq \lambda_2$.
Each prior will lead to different posterior distributions in a finite sample,
$$
\pi_N (\theta \mid \mathbf{d}_N, \lambda_j) \propto f(\mathbf{d}_N\mid \theta)\pi_0 ( \theta \mid \lambda_j)\;\;\;\;\;\mathrm{for}\;\;j=1,2
$$
Letting $\theta^*$ be the suito true parameter value, $\theta^{j}_N \sim \pi_N(\theta\mid \mathbf{d}_N, \lambda_j)$, and $\hat \theta_N = \max_\theta\{ f(\mathbf{d}_N\mid \theta) \}$, it is true that $\theta^{1}_N$, $\theta^{2}_N$, and $\hat \theta_N$ will all converge in probability to $\theta^*$. Put more formally, for any $\varepsilon >0$;
$$
\begin{align}
\lim_{N \rightarrow \infty} Pr(|\theta^j_N - \theta^*| \ge \varepsilon) &= 0\;\;\;\forall j \in \{1,2\} \\
\lim_{N \rightarrow \infty} Pr(|\hat \theta_N - \theta^*| \ge \varepsilon) &= 0
\end{align}
$$
To be more consistent with your optimization procedure, we could alternatively define $\theta^j_N = \max_\theta \{\pi_N (\theta \mid \mathbf{d}_N, \lambda_j)\} $ and although this parameter is very different then the previously defined, the above asymptotics still hold.
It follows that the predictive densities, which are defined as either $f(\tilde d \mid \mathbf{d}_N, \lambda_j) = \int_{\Theta} f(\tilde d \mid \theta,\lambda_j,\mathbf{d}_N)\pi_N (\theta \mid \lambda_j,\mathbf{d}_N)d\theta$ in a proper Bayesian approach or $f(\tilde d \mid \mathbf{d}_N, \theta^j_N)$ using optimization, converge in distribution to $f(\tilde d\mid \mathbf{d}_N, \theta^*)$. So in terms of predicting new observations conditional on an already very large sample, the prior specification makes no difference asymptotically.
Model Selection and Hypothesis Testing
If one is interested in Bayesian model selection and hypothesis testing they should be aware that the effect of the prior does not vanish asymptotically.
In a Bayesian setting we would calculate posterior probabilities or Bayes factors with marginal likelihoods. A marginal likelihood is the likelihood of the data given a model i.e. $f(\mathbf{d}_N \mid \mathrm{model})$.
The Bayes factor between two alternative models is the ratio of their marginal likelihoods;
$$
K_N = \frac{f(\mathbf{d}_N \mid \mathrm{model}_1)}{f(\mathbf{d}_N \mid \mathrm{model}_2)}
$$
The posterior probability for each model in a set of models can also be calculated from their marginal likelihoods as well;
$$
Pr(\mathrm{model}_j \mid \mathbf{d}_N) = \frac{f(\mathbf{d}_N \mid \mathrm{model}_j)Pr(\mathrm{model}_j)}{\sum_{l=1}^L f(\mathbf{d}_N \mid \mathrm{model}_l)Pr(\mathrm{model}_l)}
$$
These are useful metrics used to compare models.
For the above models, the marginal likelihoods are calculated as;
$$
f(\mathbf{d}_N \mid \lambda_j) = \int_{\Theta} f(\mathbf{d}_N \mid \theta, \lambda_j)\pi_0(\theta\mid \lambda_j)d\theta
$$
However, we can also think about sequentially adding observations to our sample, and write the marginal likelihood as a chain of predictive likelihoods;
$$
f(\mathbf{d}_N \mid \lambda_j) = \prod_{n=0}^{N-1} f(d_{n+1} \mid \mathbf{d}_n , \lambda_j)
$$
From above we know that $f(d_{N+1} \mid \mathbf{d}_N , \lambda_j)$ converges to $f(d_{N+1} \mid \mathbf{d}_N , \theta^*)$, but it is generally not true that $f(\mathbf{d}_N \mid \lambda_1)$ converges to $f(\mathbf{d}_N \mid \theta^*)$, nor does it converge to $f(\mathbf{d}_N \mid \lambda_2)$. This should be apparent given the product notation above. While latter terms in the product will be increasingly similar, the initial terms will be different, because of this, the Bayes factor
$$
\frac{f(\mathbf{d}_N \mid \lambda_1)}{ f(\mathbf{d}_N \mid \lambda_2)} \not\stackrel{p}{\rightarrow} 1
$$
This is an issue if we wished to calculate a Bayes factor for an alternative model with different likelihood and prior. For example consider the marginal likelihood $h(\mathbf{d}_N\mid M) = \int_{\Theta} h(\mathbf{d}_N\mid \theta, M)\pi_0(\theta\mid M) d\theta$; then
$$
\frac{f(\mathbf{d}_N \mid \lambda_1)}{ h(\mathbf{d}_N\mid M)} \neq \frac{f(\mathbf{d}_N \mid \lambda_2)}{ h(\mathbf{d}_N\mid M)}
$$
asymptotically or otherwise. The same can be shown for posterior probabilities. In this setting the choice of the prior significantly effects the results of inference regardless of sample size.
|
Do Bayesian priors become irrelevant with large sample size?
When performing Bayesian inference, we operate by maximizing our likelihood function in combination with the priors we have about the parameters.
This is actually not what most practitioners consider
|
5,960
|
Do Bayesian priors become irrelevant with large sample size?
|
Another issue to keep in mind is you can have a lot of data, but still have very little information about certain parameters in your model. In such cases, even a mildly informative prior can be extremely helpful when performing inference.
As a silly example, suppose you were comparing means of two groups and you had 1,000,000 samples of group 1 and 10 samples of group 2. Then clearly having an informative prior about group 2 can improve inference, even though you've collected over a million samples.
And while that example may be trivial, it starts to lead some very important implications. If we want to understand some complex phenomena, the smart thing to do is collect a lot of information regarding the parts we don't understand and less information about the parts we do understand. If we collect a lot of data in such a manner, throwing out the prior because we have a lot of data is a really bad choice; we've just set back our analysis because we didn't waste time collecting data on things we already know!
|
Do Bayesian priors become irrelevant with large sample size?
|
Another issue to keep in mind is you can have a lot of data, but still have very little information about certain parameters in your model. In such cases, even a mildly informative prior can be extrem
|
Do Bayesian priors become irrelevant with large sample size?
Another issue to keep in mind is you can have a lot of data, but still have very little information about certain parameters in your model. In such cases, even a mildly informative prior can be extremely helpful when performing inference.
As a silly example, suppose you were comparing means of two groups and you had 1,000,000 samples of group 1 and 10 samples of group 2. Then clearly having an informative prior about group 2 can improve inference, even though you've collected over a million samples.
And while that example may be trivial, it starts to lead some very important implications. If we want to understand some complex phenomena, the smart thing to do is collect a lot of information regarding the parts we don't understand and less information about the parts we do understand. If we collect a lot of data in such a manner, throwing out the prior because we have a lot of data is a really bad choice; we've just set back our analysis because we didn't waste time collecting data on things we already know!
|
Do Bayesian priors become irrelevant with large sample size?
Another issue to keep in mind is you can have a lot of data, but still have very little information about certain parameters in your model. In such cases, even a mildly informative prior can be extrem
|
5,961
|
What is the significance of logistic regression coefficients?
|
That the author has forced someone as thoughtful as you to have ask a question like this is compelling illustration of why the practice -- still way too common -- of confining reporting of regression model results to a table like this is so unacceptable.
You can, as pointed out, try to transform the logit coefficient into some meaningful indication of the effect being estimated for the predictor in question but that's cumbersome and doesn't convey information about the precision of the prediction, which is usually pretty important in a logistic regression model (on voting in particular).
Also, the use of multiple asterisks to report "levels" of significance reinforces the misconception that p-values are some meaningful index of effect size ("wow--that one has 3 asterisks!!"); for crying out loud, w/ N's of 10,000 to 20,000, completely trivial differences will be "significant" at p < .001 blah blah.
There is absolutely no need to mystify in this way. The logistic regression model is an equation that can be used (through determinate calculation or better still simulation) to predict probability of an outcome conditional on specified values for predictors, subject to measurement error. So the researcher should report what the impact of predictors of interest are on the probability of the outcome variable of interest, & associated CI, as measured in units the practical importance of which can readily be grasped. To assure ready grasping, the results should be graphically displayed. Here, for example, the researcher could report that being a rural as opposed to an urban voter increases the likelihood of voting Republican, all else equal, by X pct points (I'm guessing around 17 in 2000; "divide by 4" is a reasonable heuristic) +/- x% at 0.95 level of confidence-- if that's something that is useful to know.
The reporting of pseudo R^2 is also a sign that the modeler is engaged in statistical ritual rather than any attempt to illuminate. There are scores of ways to compute "pseudo R^2"; one might complain that the one used here is not specified, but why bother? All are next to meaningless. The only reason anyone uses pseudo R^2 is that they or the reviewer who is torturing them learned (likely 25 or more yrs ago) that OLS linear regression is the holy grail of statistics & thinks the only thing one is ever trying to figure out is "variance explained." There are plenty of defensible ways to assess the adequacy of overall model fit for logistic analysis, and likelihood ratio conveys meaningful information for comparing models that reflect alternative hypotheses. King, G. How Not to Lie with Statistics. Am. J. Pol. Sci. 30, 666-687 (1986).
If you read a paper in which reporting is more or less confined to a table like this don't be confused, don't be intimidated, & definitely don't be impressed; instead be angry & tell the researcher he or she is doing a lousy job (particularly if he or she is polluting your local intellectual environment w/ mysticism & awe--amazing how many completely mediocre thinkers trick smart people into thinking they know something just b/c they can produce a table that the latter can't understand). For smart, & temperate, expositions of these ideas, see King, G., Tomz, M. & Wittenberg., J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000); and Gelman, A., Pasarica, C. & Dodhia, R. Let's Practice What We Preach: Turning Tables into Graphs. Am. Stat. 56, 121-130 (2002).
|
What is the significance of logistic regression coefficients?
|
That the author has forced someone as thoughtful as you to have ask a question like this is compelling illustration of why the practice -- still way too common -- of confining reporting of regression
|
What is the significance of logistic regression coefficients?
That the author has forced someone as thoughtful as you to have ask a question like this is compelling illustration of why the practice -- still way too common -- of confining reporting of regression model results to a table like this is so unacceptable.
You can, as pointed out, try to transform the logit coefficient into some meaningful indication of the effect being estimated for the predictor in question but that's cumbersome and doesn't convey information about the precision of the prediction, which is usually pretty important in a logistic regression model (on voting in particular).
Also, the use of multiple asterisks to report "levels" of significance reinforces the misconception that p-values are some meaningful index of effect size ("wow--that one has 3 asterisks!!"); for crying out loud, w/ N's of 10,000 to 20,000, completely trivial differences will be "significant" at p < .001 blah blah.
There is absolutely no need to mystify in this way. The logistic regression model is an equation that can be used (through determinate calculation or better still simulation) to predict probability of an outcome conditional on specified values for predictors, subject to measurement error. So the researcher should report what the impact of predictors of interest are on the probability of the outcome variable of interest, & associated CI, as measured in units the practical importance of which can readily be grasped. To assure ready grasping, the results should be graphically displayed. Here, for example, the researcher could report that being a rural as opposed to an urban voter increases the likelihood of voting Republican, all else equal, by X pct points (I'm guessing around 17 in 2000; "divide by 4" is a reasonable heuristic) +/- x% at 0.95 level of confidence-- if that's something that is useful to know.
The reporting of pseudo R^2 is also a sign that the modeler is engaged in statistical ritual rather than any attempt to illuminate. There are scores of ways to compute "pseudo R^2"; one might complain that the one used here is not specified, but why bother? All are next to meaningless. The only reason anyone uses pseudo R^2 is that they or the reviewer who is torturing them learned (likely 25 or more yrs ago) that OLS linear regression is the holy grail of statistics & thinks the only thing one is ever trying to figure out is "variance explained." There are plenty of defensible ways to assess the adequacy of overall model fit for logistic analysis, and likelihood ratio conveys meaningful information for comparing models that reflect alternative hypotheses. King, G. How Not to Lie with Statistics. Am. J. Pol. Sci. 30, 666-687 (1986).
If you read a paper in which reporting is more or less confined to a table like this don't be confused, don't be intimidated, & definitely don't be impressed; instead be angry & tell the researcher he or she is doing a lousy job (particularly if he or she is polluting your local intellectual environment w/ mysticism & awe--amazing how many completely mediocre thinkers trick smart people into thinking they know something just b/c they can produce a table that the latter can't understand). For smart, & temperate, expositions of these ideas, see King, G., Tomz, M. & Wittenberg., J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000); and Gelman, A., Pasarica, C. & Dodhia, R. Let's Practice What We Preach: Turning Tables into Graphs. Am. Stat. 56, 121-130 (2002).
|
What is the significance of logistic regression coefficients?
That the author has forced someone as thoughtful as you to have ask a question like this is compelling illustration of why the practice -- still way too common -- of confining reporting of regression
|
5,962
|
What is the significance of logistic regression coefficients?
|
The idea here is that in logistic regression, we predict not the actual probability that, say, a southerner votes Republican, but a transformed version of it, the "log odds". Instead of the probability $p$, we deal with $\log p/(1-p)$ and find linear regression coefficients for the log odds.
So for example, let's assume that an urban Northeasterner has probability 0.3 of voting for a Republican. (This would of course be part of the regression; I don't see it reported in this table, although I assume it's in the original paper.) Now, $x = 1/(1+e^{-z})$ gives $z = \log {x \over 1-x}$; that is, $f^{-1}(x) = \log {x \over 1-x}$, the "log odds" corresponding to $x$. These "log odds" are what behaves linearly; the log odds corresponding to $0.3$ are $\log 0.3/0.7 \approx -0.85$. So the log odds for an urban Southerner voting Republican are this (what Wikipedia calls the intercept, $\beta_0$) plus the logistic regression coefficient for the South, $0.903$ -- that is, $-0.85 + 0.904 = 0.05$. But you want an actual probability, so we need to invert the function $p \to \log p/(1-p)$. That gives $f(0.05) \approx 1/(1+e^{-0.05}) \approx 0.51$. The actual odds have gone from $0.43$ to $1$, to $1.05$ to $1$; the ratio $1.05/0.43$ is $e^{0.903}$, the exponential of the logistic regression coefficient.
Furthermore, the effects for, say, region of the country and urban/suburban/rural don't interact. So the log odds of a rural Midwesterner voting Republican, say, are $-0.85 + 0.37 + 0.68 = +0.20$ according to this model; the probability is $f(0.20) = 1/(1+e^{-0.20}) = 0.55$.
|
What is the significance of logistic regression coefficients?
|
The idea here is that in logistic regression, we predict not the actual probability that, say, a southerner votes Republican, but a transformed version of it, the "log odds". Instead of the probabilit
|
What is the significance of logistic regression coefficients?
The idea here is that in logistic regression, we predict not the actual probability that, say, a southerner votes Republican, but a transformed version of it, the "log odds". Instead of the probability $p$, we deal with $\log p/(1-p)$ and find linear regression coefficients for the log odds.
So for example, let's assume that an urban Northeasterner has probability 0.3 of voting for a Republican. (This would of course be part of the regression; I don't see it reported in this table, although I assume it's in the original paper.) Now, $x = 1/(1+e^{-z})$ gives $z = \log {x \over 1-x}$; that is, $f^{-1}(x) = \log {x \over 1-x}$, the "log odds" corresponding to $x$. These "log odds" are what behaves linearly; the log odds corresponding to $0.3$ are $\log 0.3/0.7 \approx -0.85$. So the log odds for an urban Southerner voting Republican are this (what Wikipedia calls the intercept, $\beta_0$) plus the logistic regression coefficient for the South, $0.903$ -- that is, $-0.85 + 0.904 = 0.05$. But you want an actual probability, so we need to invert the function $p \to \log p/(1-p)$. That gives $f(0.05) \approx 1/(1+e^{-0.05}) \approx 0.51$. The actual odds have gone from $0.43$ to $1$, to $1.05$ to $1$; the ratio $1.05/0.43$ is $e^{0.903}$, the exponential of the logistic regression coefficient.
Furthermore, the effects for, say, region of the country and urban/suburban/rural don't interact. So the log odds of a rural Midwesterner voting Republican, say, are $-0.85 + 0.37 + 0.68 = +0.20$ according to this model; the probability is $f(0.20) = 1/(1+e^{-0.20}) = 0.55$.
|
What is the significance of logistic regression coefficients?
The idea here is that in logistic regression, we predict not the actual probability that, say, a southerner votes Republican, but a transformed version of it, the "log odds". Instead of the probabilit
|
5,963
|
What is the significance of logistic regression coefficients?
|
The coefficients in the logistic regression represent the tendency for a given region/demographic to vote Republican, compared to a reference category. A positive coefficent means that region is more likely to vote Republican, and vice-versa for a negative coefficient; a larger absolute value means a stronger tendency than a smaller value.
The reference categories are "Northeast" and "urban voter", so all the coefficients represent contrasts with this particular voter type.
In general, there's also no restriction on the coefficients in a logistic regression to be in [0, 1], even in absolute value. Notice that the Wikipedia article itself has an example of a logistic regression with coefficients of -5 and 2.
|
What is the significance of logistic regression coefficients?
|
The coefficients in the logistic regression represent the tendency for a given region/demographic to vote Republican, compared to a reference category. A positive coefficent means that region is more
|
What is the significance of logistic regression coefficients?
The coefficients in the logistic regression represent the tendency for a given region/demographic to vote Republican, compared to a reference category. A positive coefficent means that region is more likely to vote Republican, and vice-versa for a negative coefficient; a larger absolute value means a stronger tendency than a smaller value.
The reference categories are "Northeast" and "urban voter", so all the coefficients represent contrasts with this particular voter type.
In general, there's also no restriction on the coefficients in a logistic regression to be in [0, 1], even in absolute value. Notice that the Wikipedia article itself has an example of a logistic regression with coefficients of -5 and 2.
|
What is the significance of logistic regression coefficients?
The coefficients in the logistic regression represent the tendency for a given region/demographic to vote Republican, compared to a reference category. A positive coefficent means that region is more
|
5,964
|
What is the significance of logistic regression coefficients?
|
You also asked "how do I know what is significant and what is not." (I assume you mean statistically significant, since practical or substantive significance is another matter.) The asterisks in the table refer to the footnote: some effects are noted as having small p-values. These are obtained using a Wald test of the significance of each coefficient. Assuming random sampling, p<.05 means that, if there were no such effect in the larger population, the probability of seeing a connection as strong as the one observed, or stronger, in a sample of this size would be less than .05. You'll see many threads on this site discussing the subtle but important related point that p<.05 does not mean that there is a .05 probability of there being no connection in the larger population.
|
What is the significance of logistic regression coefficients?
|
You also asked "how do I know what is significant and what is not." (I assume you mean statistically significant, since practical or substantive significance is another matter.) The asterisks in th
|
What is the significance of logistic regression coefficients?
You also asked "how do I know what is significant and what is not." (I assume you mean statistically significant, since practical or substantive significance is another matter.) The asterisks in the table refer to the footnote: some effects are noted as having small p-values. These are obtained using a Wald test of the significance of each coefficient. Assuming random sampling, p<.05 means that, if there were no such effect in the larger population, the probability of seeing a connection as strong as the one observed, or stronger, in a sample of this size would be less than .05. You'll see many threads on this site discussing the subtle but important related point that p<.05 does not mean that there is a .05 probability of there being no connection in the larger population.
|
What is the significance of logistic regression coefficients?
You also asked "how do I know what is significant and what is not." (I assume you mean statistically significant, since practical or substantive significance is another matter.) The asterisks in th
|
5,965
|
What is the significance of logistic regression coefficients?
|
Let me just stress the importance of what rolando2 and dmk38 both noted: significance is commonly misread, and there is a high risk of that happening with that tabular presentation of results.
Paul Schrodt recently offered a nice description of the issue:
Researchers find it nearly impossible to adhere to the correct interpretation of the significance test. The p-value tells you only the likelihood that you would get a result under the [usually] completely unrealistic conditions of the null hypothesis. Which is not what you want to knowβyou usually want to know the magnitude of the effect of an independent variable, given the data. Thatβs a Bayesian question, not a frequentist question. Instead we seeβconstantlyβthe p-value interpreted as if it gave the strength of association: this is the ubiquitous Mystical Cult of the Stars and P-Values which permeates our journals.(fn) This is not what the p-value says, nor will it ever.
In my experience, this mistake is almost impossible to avoid: even very careful analysts who are fully aware of the problem will often switch modes when verbally discussing their results, even if theyβve avoided the problem in a written exposition. And letβs not even speculate on the thousands of hours and gallons of ink weβve expended correcting this in graduate papers.
(fn) The footnote also informs on another issue, mentioned by dmk38: β[the ubiquitous Mystical Cult of the Stars and P-Values] supplanted the earlierβand equally pervasiveβCult of the Highest R2, demolishedβ¦ by King (1986).β
|
What is the significance of logistic regression coefficients?
|
Let me just stress the importance of what rolando2 and dmk38 both noted: significance is commonly misread, and there is a high risk of that happening with that tabular presentation of results.
Paul Sc
|
What is the significance of logistic regression coefficients?
Let me just stress the importance of what rolando2 and dmk38 both noted: significance is commonly misread, and there is a high risk of that happening with that tabular presentation of results.
Paul Schrodt recently offered a nice description of the issue:
Researchers find it nearly impossible to adhere to the correct interpretation of the significance test. The p-value tells you only the likelihood that you would get a result under the [usually] completely unrealistic conditions of the null hypothesis. Which is not what you want to knowβyou usually want to know the magnitude of the effect of an independent variable, given the data. Thatβs a Bayesian question, not a frequentist question. Instead we seeβconstantlyβthe p-value interpreted as if it gave the strength of association: this is the ubiquitous Mystical Cult of the Stars and P-Values which permeates our journals.(fn) This is not what the p-value says, nor will it ever.
In my experience, this mistake is almost impossible to avoid: even very careful analysts who are fully aware of the problem will often switch modes when verbally discussing their results, even if theyβve avoided the problem in a written exposition. And letβs not even speculate on the thousands of hours and gallons of ink weβve expended correcting this in graduate papers.
(fn) The footnote also informs on another issue, mentioned by dmk38: β[the ubiquitous Mystical Cult of the Stars and P-Values] supplanted the earlierβand equally pervasiveβCult of the Highest R2, demolishedβ¦ by King (1986).β
|
What is the significance of logistic regression coefficients?
Let me just stress the importance of what rolando2 and dmk38 both noted: significance is commonly misread, and there is a high risk of that happening with that tabular presentation of results.
Paul Sc
|
5,966
|
What is the purpose of characteristic functions?
|
Back in the day, people used logarithm tables to multiply numbers faster. Why is this? Logarithms convert multiplication to addition, since $\log(ab) = \log(a) + \log(b)$. So in order to multiply two large numbers $a$ and $b$, you found their logarithms, added the logarithms, $z = \log(a) + \log(b)$, and then looked up $\exp(z)$ on another table.
Now, characteristic functions do a similar thing for probability distributions. Suppose $X$ has a distribution $f$ and $Y$ has a distribution $g$, and $X$ and $Y$ are independent. Then the distribution of $X+Y$ is the convolution of $f$ and $g$, $f * g$.
Now the characteristic function is an analogy of the "logarithm table trick" for convolution, since if $\phi_f$ is the characteristic function of $f$, then the following relation holds:
$$
\phi_f \phi_g = \phi_{f * g}
$$
Furthermore, also like in the case of logarithms,it is easy to find the inverse of the characteristic function: given $\phi_h$ where $h$ is an unknown density, we can obtain $h$ by the inverse Fourier transform of $\phi_h$.
The characteristic function converts convolution to multiplication for density functions the same way that logarithms convert multiplication into addition for numbers. Both transformations convert a relatively complicated operation into a relatively simple one.
|
What is the purpose of characteristic functions?
|
Back in the day, people used logarithm tables to multiply numbers faster. Why is this? Logarithms convert multiplication to addition, since $\log(ab) = \log(a) + \log(b)$. So in order to multiply t
|
What is the purpose of characteristic functions?
Back in the day, people used logarithm tables to multiply numbers faster. Why is this? Logarithms convert multiplication to addition, since $\log(ab) = \log(a) + \log(b)$. So in order to multiply two large numbers $a$ and $b$, you found their logarithms, added the logarithms, $z = \log(a) + \log(b)$, and then looked up $\exp(z)$ on another table.
Now, characteristic functions do a similar thing for probability distributions. Suppose $X$ has a distribution $f$ and $Y$ has a distribution $g$, and $X$ and $Y$ are independent. Then the distribution of $X+Y$ is the convolution of $f$ and $g$, $f * g$.
Now the characteristic function is an analogy of the "logarithm table trick" for convolution, since if $\phi_f$ is the characteristic function of $f$, then the following relation holds:
$$
\phi_f \phi_g = \phi_{f * g}
$$
Furthermore, also like in the case of logarithms,it is easy to find the inverse of the characteristic function: given $\phi_h$ where $h$ is an unknown density, we can obtain $h$ by the inverse Fourier transform of $\phi_h$.
The characteristic function converts convolution to multiplication for density functions the same way that logarithms convert multiplication into addition for numbers. Both transformations convert a relatively complicated operation into a relatively simple one.
|
What is the purpose of characteristic functions?
Back in the day, people used logarithm tables to multiply numbers faster. Why is this? Logarithms convert multiplication to addition, since $\log(ab) = \log(a) + \log(b)$. So in order to multiply t
|
5,967
|
What is the purpose of characteristic functions?
|
@charles.y.zheng and @cardinal gave very good answers, I will add my two cents. Yes the characteristic function might look like unnecessary complication, but it is a powerful tool which can get you results. If you are trying to prove something with cumulative distribution function it is always advisable to check whether it is not possible to get the result with characteristic function. This sometimes gives very short proofs.
Although at first the characteristic function looks unintuitive way of working with probability distributions, there are some powerful results directly related with it, which imply that you cannot discard this concept as a mere mathematical amusement. For example my favorite result in probability theory is that any infinitely divisible distribution has the unique LΓ©vyβKhintchine representation. Combined with the fact that the infinitely divisible distributions are the only possible distribution for limits of sums of independent random variables (excluding bizarre cases) this is a deep result using which central limit theorem is derived.
|
What is the purpose of characteristic functions?
|
@charles.y.zheng and @cardinal gave very good answers, I will add my two cents. Yes the characteristic function might look like unnecessary complication, but it is a powerful tool which can get you re
|
What is the purpose of characteristic functions?
@charles.y.zheng and @cardinal gave very good answers, I will add my two cents. Yes the characteristic function might look like unnecessary complication, but it is a powerful tool which can get you results. If you are trying to prove something with cumulative distribution function it is always advisable to check whether it is not possible to get the result with characteristic function. This sometimes gives very short proofs.
Although at first the characteristic function looks unintuitive way of working with probability distributions, there are some powerful results directly related with it, which imply that you cannot discard this concept as a mere mathematical amusement. For example my favorite result in probability theory is that any infinitely divisible distribution has the unique LΓ©vyβKhintchine representation. Combined with the fact that the infinitely divisible distributions are the only possible distribution for limits of sums of independent random variables (excluding bizarre cases) this is a deep result using which central limit theorem is derived.
|
What is the purpose of characteristic functions?
@charles.y.zheng and @cardinal gave very good answers, I will add my two cents. Yes the characteristic function might look like unnecessary complication, but it is a powerful tool which can get you re
|
5,968
|
What is the purpose of characteristic functions?
|
The purpose of characteristic functions is that they can be used to derive the properties of distributions in probability theory. If you're not interested in such derivations you do not need to learn about characteristic functions.
|
What is the purpose of characteristic functions?
|
The purpose of characteristic functions is that they can be used to derive the properties of distributions in probability theory. If you're not interested in such derivations you do not need to learn
|
What is the purpose of characteristic functions?
The purpose of characteristic functions is that they can be used to derive the properties of distributions in probability theory. If you're not interested in such derivations you do not need to learn about characteristic functions.
|
What is the purpose of characteristic functions?
The purpose of characteristic functions is that they can be used to derive the properties of distributions in probability theory. If you're not interested in such derivations you do not need to learn
|
5,969
|
What is the purpose of characteristic functions?
|
The characteristic function is the Fourier transform of the density function of the distribution. If you have any intuition regarding Fourier transforms, this fact may be enlightening. The common story about Fourier transforms is that they describe the function 'in frequency space.' Since a probability density is usually unimodal (at least in the real world, or in the models made about the real world), this doesn't seem terribly interesting.
|
What is the purpose of characteristic functions?
|
The characteristic function is the Fourier transform of the density function of the distribution. If you have any intuition regarding Fourier transforms, this fact may be enlightening. The common sto
|
What is the purpose of characteristic functions?
The characteristic function is the Fourier transform of the density function of the distribution. If you have any intuition regarding Fourier transforms, this fact may be enlightening. The common story about Fourier transforms is that they describe the function 'in frequency space.' Since a probability density is usually unimodal (at least in the real world, or in the models made about the real world), this doesn't seem terribly interesting.
|
What is the purpose of characteristic functions?
The characteristic function is the Fourier transform of the density function of the distribution. If you have any intuition regarding Fourier transforms, this fact may be enlightening. The common sto
|
5,970
|
What is the purpose of characteristic functions?
|
The Fourier transformation is a decomposition of the function (non-periodic) in its frequencies. Interpretation for densities?
Fourier transformation is the continuous version of a Fourier series since no density is periodic no expression like "characteristic series".
|
What is the purpose of characteristic functions?
|
The Fourier transformation is a decomposition of the function (non-periodic) in its frequencies. Interpretation for densities?
Fourier transformation is the continuous version of a Fourier series sinc
|
What is the purpose of characteristic functions?
The Fourier transformation is a decomposition of the function (non-periodic) in its frequencies. Interpretation for densities?
Fourier transformation is the continuous version of a Fourier series since no density is periodic no expression like "characteristic series".
|
What is the purpose of characteristic functions?
The Fourier transformation is a decomposition of the function (non-periodic) in its frequencies. Interpretation for densities?
Fourier transformation is the continuous version of a Fourier series sinc
|
5,971
|
What are best practices in identifying interaction effects?
|
Cox and Wermuth (1996) or Cox (1984) discussed some methods for detecting interactions. The problem is usually how general the interaction terms should be. Basically, we
(a) fit (and test) all second-order interaction terms, one at a time, and (b) plot their corresponding p-values (i.e., the No. terms as a function of $1-p$). The idea is then to look if a certain number of interaction terms should be retained: Under the assumption that all interaction terms are null the distribution of the p-values should be uniform (or equivalently, the points on the scatterplot should be roughly distributed along a line passing through the origin).
Now, as @Gavin said, fitting many (if not all) interactions might lead to overfitting, but it is also useless in a certain sense (some high-order interaction terms often have no sense at all). However, this has to do with interpretation, not detection of interactions, and a good review was already provided by Cox in Interpretation of interaction: A review (The Annals of Applied Statistics 2007, 1(2), 371β385)--it includes references cited above. Other lines of research worth to look at are study of epistatic effects in genetic studies, in particular methods based on graphical models (e.g., An efficient method for identifying statistical interactors in gene association networks).
References
Cox, DR and Wermuth, N (1996). Multivariate Dependencies: Models, Analysis and Interpretation. Chapman and Hall/CRC.
Cox, DR (1984). Interaction. International Statistical Review, 52, 1β31.
|
What are best practices in identifying interaction effects?
|
Cox and Wermuth (1996) or Cox (1984) discussed some methods for detecting interactions. The problem is usually how general the interaction terms should be. Basically, we
(a) fit (and test) all second
|
What are best practices in identifying interaction effects?
Cox and Wermuth (1996) or Cox (1984) discussed some methods for detecting interactions. The problem is usually how general the interaction terms should be. Basically, we
(a) fit (and test) all second-order interaction terms, one at a time, and (b) plot their corresponding p-values (i.e., the No. terms as a function of $1-p$). The idea is then to look if a certain number of interaction terms should be retained: Under the assumption that all interaction terms are null the distribution of the p-values should be uniform (or equivalently, the points on the scatterplot should be roughly distributed along a line passing through the origin).
Now, as @Gavin said, fitting many (if not all) interactions might lead to overfitting, but it is also useless in a certain sense (some high-order interaction terms often have no sense at all). However, this has to do with interpretation, not detection of interactions, and a good review was already provided by Cox in Interpretation of interaction: A review (The Annals of Applied Statistics 2007, 1(2), 371β385)--it includes references cited above. Other lines of research worth to look at are study of epistatic effects in genetic studies, in particular methods based on graphical models (e.g., An efficient method for identifying statistical interactors in gene association networks).
References
Cox, DR and Wermuth, N (1996). Multivariate Dependencies: Models, Analysis and Interpretation. Chapman and Hall/CRC.
Cox, DR (1984). Interaction. International Statistical Review, 52, 1β31.
|
What are best practices in identifying interaction effects?
Cox and Wermuth (1996) or Cox (1984) discussed some methods for detecting interactions. The problem is usually how general the interaction terms should be. Basically, we
(a) fit (and test) all second
|
5,972
|
What are best practices in identifying interaction effects?
|
My best practice would be to think about the problem to hand before fitting the model. What is a plausible model given the phenomenon you are studying? Fitting all possible combinations of variables and interactions sounds like data dredging to me.
|
What are best practices in identifying interaction effects?
|
My best practice would be to think about the problem to hand before fitting the model. What is a plausible model given the phenomenon you are studying? Fitting all possible combinations of variables a
|
What are best practices in identifying interaction effects?
My best practice would be to think about the problem to hand before fitting the model. What is a plausible model given the phenomenon you are studying? Fitting all possible combinations of variables and interactions sounds like data dredging to me.
|
What are best practices in identifying interaction effects?
My best practice would be to think about the problem to hand before fitting the model. What is a plausible model given the phenomenon you are studying? Fitting all possible combinations of variables a
|
5,973
|
What are best practices in identifying interaction effects?
|
Fitting a tree model (i.e. using R), will help you identify complex interactions between the explanatory variables. Read the example on page 30 here.
|
What are best practices in identifying interaction effects?
|
Fitting a tree model (i.e. using R), will help you identify complex interactions between the explanatory variables. Read the example on page 30 here.
|
What are best practices in identifying interaction effects?
Fitting a tree model (i.e. using R), will help you identify complex interactions between the explanatory variables. Read the example on page 30 here.
|
What are best practices in identifying interaction effects?
Fitting a tree model (i.e. using R), will help you identify complex interactions between the explanatory variables. Read the example on page 30 here.
|
5,974
|
What are best practices in identifying interaction effects?
|
I'll preface this response as I entirely agree with Gavin, and if you're interested in fitting any type of model it should be reflective of the phenomenon under study. What the problem is with the logic of identifying any and all effects (and what Gavin refers to when he says data dredging) is that you could fit an infinite number of interactions, or quadratic terms for variables, or transformations to your data, and you would inevitably find "significant" effects for some variation of your data.
As chl states, these higher order interaction effects don't really have any interpretation, and frequently even the lower order interactions don't make any sense. If your interested in developing a causal model you should only include terms you believe could be pertinent to your dependent variable A priori to fitting your model.
If you believe they can increase predictive power of your model, you should look up resources on model selection techniques to prevent over-fitting your model.
|
What are best practices in identifying interaction effects?
|
I'll preface this response as I entirely agree with Gavin, and if you're interested in fitting any type of model it should be reflective of the phenomenon under study. What the problem is with the log
|
What are best practices in identifying interaction effects?
I'll preface this response as I entirely agree with Gavin, and if you're interested in fitting any type of model it should be reflective of the phenomenon under study. What the problem is with the logic of identifying any and all effects (and what Gavin refers to when he says data dredging) is that you could fit an infinite number of interactions, or quadratic terms for variables, or transformations to your data, and you would inevitably find "significant" effects for some variation of your data.
As chl states, these higher order interaction effects don't really have any interpretation, and frequently even the lower order interactions don't make any sense. If your interested in developing a causal model you should only include terms you believe could be pertinent to your dependent variable A priori to fitting your model.
If you believe they can increase predictive power of your model, you should look up resources on model selection techniques to prevent over-fitting your model.
|
What are best practices in identifying interaction effects?
I'll preface this response as I entirely agree with Gavin, and if you're interested in fitting any type of model it should be reflective of the phenomenon under study. What the problem is with the log
|
5,975
|
What are best practices in identifying interaction effects?
|
How large is $n$ ? how many observations do you have ? this is crucial ...
Sobol indices will tell you the proportion of variance explained by interaction if you have a lot of observations and a few $n$, otherwise you will have to do modelling (linear to start with). You have a nice R package for that called sensitivity. Anyway the idea is quite often that of decomposing the variance (also called generalized ANOVA).
If you want to know if this proportion of variance is significant you will have to do modelling (roughly, you need to know the number of degrees of freedom of your model to compare it to the variance).
Are your variables discrete or continuous ? bounded or not really (i.e you don't know the maximum) ?
|
What are best practices in identifying interaction effects?
|
How large is $n$ ? how many observations do you have ? this is crucial ...
Sobol indices will tell you the proportion of variance explained by interaction if you have a lot of observations and a few
|
What are best practices in identifying interaction effects?
How large is $n$ ? how many observations do you have ? this is crucial ...
Sobol indices will tell you the proportion of variance explained by interaction if you have a lot of observations and a few $n$, otherwise you will have to do modelling (linear to start with). You have a nice R package for that called sensitivity. Anyway the idea is quite often that of decomposing the variance (also called generalized ANOVA).
If you want to know if this proportion of variance is significant you will have to do modelling (roughly, you need to know the number of degrees of freedom of your model to compare it to the variance).
Are your variables discrete or continuous ? bounded or not really (i.e you don't know the maximum) ?
|
What are best practices in identifying interaction effects?
How large is $n$ ? how many observations do you have ? this is crucial ...
Sobol indices will tell you the proportion of variance explained by interaction if you have a lot of observations and a few
|
5,976
|
What are best practices in identifying interaction effects?
|
I think this is a good use case of LASSO. You can throw all the interaction terms, and LASSO can find the ones that matter, by using cross-validation to select the best regularization parameter. This doesn't need to be confined to linear interaction, as we can add much richer class of interactions (e.g. $x_m^2 * x_n$ terms)
For LASSO, you can have p >> n, so you don't have to worry about identification by having more covariates than number of observations. For a smaller dimension problem(n >> p), LASSO is pretty much the same as running t-test on each interaction terms, so I think it'd work similar to chl's answer.
|
What are best practices in identifying interaction effects?
|
I think this is a good use case of LASSO. You can throw all the interaction terms, and LASSO can find the ones that matter, by using cross-validation to select the best regularization parameter. This
|
What are best practices in identifying interaction effects?
I think this is a good use case of LASSO. You can throw all the interaction terms, and LASSO can find the ones that matter, by using cross-validation to select the best regularization parameter. This doesn't need to be confined to linear interaction, as we can add much richer class of interactions (e.g. $x_m^2 * x_n$ terms)
For LASSO, you can have p >> n, so you don't have to worry about identification by having more covariates than number of observations. For a smaller dimension problem(n >> p), LASSO is pretty much the same as running t-test on each interaction terms, so I think it'd work similar to chl's answer.
|
What are best practices in identifying interaction effects?
I think this is a good use case of LASSO. You can throw all the interaction terms, and LASSO can find the ones that matter, by using cross-validation to select the best regularization parameter. This
|
5,977
|
Why is a likelihood-ratio test distributed chi-squared?
|
As mentioned by @Nick this is a consequence of Wilks' theorem. But note that the test statistic is asymptotically $\chi^2$-distributed, not $\chi^2$-distributed.
I am very impressed by this theorem because it holds in a very wide context. Consider a statistical model with likelihood $l(\theta \mid y)$ where $y$ is the vector observations of $n$ independent replicated observations from a distribution with parameter $\theta$ belonging to a submanifold $B_1$ of $\mathbb{R}^d$ with dimension $\dim(B_1)=s$. Let $B_0 \subset B_1$ be a submanifold with dimension $\dim(B_0)=m$. Imagine you are interested in testing $H_0\colon\{\theta \in B_0\}$.
The likelihood ratio is
$$lr(y) = \frac{\sup_{\theta \in B_1}l(\theta \mid y)}{\sup_{\theta \in B_0}l(\theta \mid y)}. $$
Define the deviance $d(y)=2 \log \big(lr(y)\big)$. Then Wilks' theorem says that, under usual regularity assumptions, $d(y)$ is asymptotically $\chi^2$-distributed with $s-m$ degrees of freedom when $H_0$ holds true.
It is proven in Wilk's original paper mentioned by @Nick. I think this paper is not easy to read. Wilks published a book later, perhaps with an easiest presentation of his theorem. A short heuristic proof is given in Williams' excellent book.
|
Why is a likelihood-ratio test distributed chi-squared?
|
As mentioned by @Nick this is a consequence of Wilks' theorem. But note that the test statistic is asymptotically $\chi^2$-distributed, not $\chi^2$-distributed.
I am very impressed by this theorem be
|
Why is a likelihood-ratio test distributed chi-squared?
As mentioned by @Nick this is a consequence of Wilks' theorem. But note that the test statistic is asymptotically $\chi^2$-distributed, not $\chi^2$-distributed.
I am very impressed by this theorem because it holds in a very wide context. Consider a statistical model with likelihood $l(\theta \mid y)$ where $y$ is the vector observations of $n$ independent replicated observations from a distribution with parameter $\theta$ belonging to a submanifold $B_1$ of $\mathbb{R}^d$ with dimension $\dim(B_1)=s$. Let $B_0 \subset B_1$ be a submanifold with dimension $\dim(B_0)=m$. Imagine you are interested in testing $H_0\colon\{\theta \in B_0\}$.
The likelihood ratio is
$$lr(y) = \frac{\sup_{\theta \in B_1}l(\theta \mid y)}{\sup_{\theta \in B_0}l(\theta \mid y)}. $$
Define the deviance $d(y)=2 \log \big(lr(y)\big)$. Then Wilks' theorem says that, under usual regularity assumptions, $d(y)$ is asymptotically $\chi^2$-distributed with $s-m$ degrees of freedom when $H_0$ holds true.
It is proven in Wilk's original paper mentioned by @Nick. I think this paper is not easy to read. Wilks published a book later, perhaps with an easiest presentation of his theorem. A short heuristic proof is given in Williams' excellent book.
|
Why is a likelihood-ratio test distributed chi-squared?
As mentioned by @Nick this is a consequence of Wilks' theorem. But note that the test statistic is asymptotically $\chi^2$-distributed, not $\chi^2$-distributed.
I am very impressed by this theorem be
|
5,978
|
Why is a likelihood-ratio test distributed chi-squared?
|
I second Nick Sabbe's harsh comment, and my short answer is, It is not. I mean, it only is in the normal linear model. For absolutely any other sort of circumstances, the exact distribution is not a $\chi^2$. In many situations, you can hope that Wilks' theorem conditions are satisfied, and then asymptotically the log-likelihood ratio test statistics converges in distribution to $\chi^2$. Limitations and violations of the conditions of Wilks' theorem are too numerous to disregard.
The theorem assumes i.i.d. data $\Rightarrow$ expect issues with dependent data, such as time series or unequal probability survey samples (for which the likelihoods are poorly defined, anyway; the "regular" $\chi^2$ tests, such as independence tests in contingency tables, start behaving as a sum $\sum_k a_k v_k, v_k \sim \mbox{i.i.d.} \chi^2_1$ (Rao & Scott). For i.i.d. data, $a_k=1$, and the sum becomes the $\chi^2$. But for non-independent data, this is no longer the case.
The theorem assumes the true parameter to be in the interior of the parameter space. If you have a Euclidean space to work with, that's not an issue. However, in some problems, the natural restrictions may arise, such as variance $\ge$ 0 or correlation between -1 and 1. If the true parameter is one the boundary, then the asymptotic distribution is a mixture of $\chi^2$ with different degrees of freedom, in the sense that the cdf of the test is the sum of such cdfs (Andrews 2001, plus two or three more of his papers from the same period, with history going back to Chernoff 1954).
The theorem assumes that all the relevant derivatives are non-zero. This can be challenged with some nonlinear problems and/or parameterizations, and/or situations when a parameter is not identified under the null. Suppose you have a Gaussian mixture model, and your null is one component $N(\mu_0,\sigma^2_0)$ vs. the alternative of two distinct components $f N(\mu_1,\sigma_1^2) + (1-f) N(\mu_2,\sigma_2^2)$ with a mixing fraction $f$. The null is apparently nested in the alternative, but this can be expressed in a variety of ways: as $f=0$ (in which case the parameters $\mu_1,\sigma_1^2$ are not identified), $f=1$ (in which case $\mu_2, \sigma_2^2$ are not identified), or $\mu_1=\mu_2, \sigma_1=\sigma_2$ (in which case $f$ is not identified). Here, you can't even say how many degrees of freedom your test should have, as you have different number of restrictions depending on how you parameterize the nesting. See the work of Jiahua Chen on this, e.g. CJS 2001.
The $\chi^2$ may work OK if the distribution has been correctly specified. But if it was not, the test will break down again. In the (largely neglected by statisticians) subarea of multivariate analysis known as structural equation covariance modeling, a multivariate normal distribution is often assumed, but even if the structure is correct, the test will misbehave if the distribution is different. Satorra and Bentler 1995 show that the distribution will become $\sum_k a_k v_k, v_k \sim \mbox{i.i.d.} \chi^2_1$, the same story as with non-independent data in my point 1, but they've also demonstrated how the $a_k$s depend on the structure of the model and the fourth moments of the distribution.
For finite samples, in a large class of situations likelihood ratio is Bartlett-correctible: while ${\rm Prob}[d(y) \le x]=F(x;\chi^2_d)[1+O(n^{-1})]$ for a sample of size $n$, and $F(x;\chi^2_d)$ being the distribution function of the $\chi^2_d$ distribution, for the regular likelihood problems you can find a constant $b$ such that ${\rm Prob}[d(y)/(1+b/n) \le x]=F(x;\chi^2_d)[1+O(n^{-2})]$, i.e., to a higher order of accuracy. So the $\chi^2$ approximation for finite samples can be improved (and arguably should be improved if you know how). The constant $b$ depends on the structure of the model, and sometimes on the auxiliary parameters, but if it can be consistently estimated, that works, too, in improving the order of coverage.
For a review of these and similar esoteric issues in likelihood inference, see Smith 1989.
|
Why is a likelihood-ratio test distributed chi-squared?
|
I second Nick Sabbe's harsh comment, and my short answer is, It is not. I mean, it only is in the normal linear model. For absolutely any other sort of circumstances, the exact distribution is not a $
|
Why is a likelihood-ratio test distributed chi-squared?
I second Nick Sabbe's harsh comment, and my short answer is, It is not. I mean, it only is in the normal linear model. For absolutely any other sort of circumstances, the exact distribution is not a $\chi^2$. In many situations, you can hope that Wilks' theorem conditions are satisfied, and then asymptotically the log-likelihood ratio test statistics converges in distribution to $\chi^2$. Limitations and violations of the conditions of Wilks' theorem are too numerous to disregard.
The theorem assumes i.i.d. data $\Rightarrow$ expect issues with dependent data, such as time series or unequal probability survey samples (for which the likelihoods are poorly defined, anyway; the "regular" $\chi^2$ tests, such as independence tests in contingency tables, start behaving as a sum $\sum_k a_k v_k, v_k \sim \mbox{i.i.d.} \chi^2_1$ (Rao & Scott). For i.i.d. data, $a_k=1$, and the sum becomes the $\chi^2$. But for non-independent data, this is no longer the case.
The theorem assumes the true parameter to be in the interior of the parameter space. If you have a Euclidean space to work with, that's not an issue. However, in some problems, the natural restrictions may arise, such as variance $\ge$ 0 or correlation between -1 and 1. If the true parameter is one the boundary, then the asymptotic distribution is a mixture of $\chi^2$ with different degrees of freedom, in the sense that the cdf of the test is the sum of such cdfs (Andrews 2001, plus two or three more of his papers from the same period, with history going back to Chernoff 1954).
The theorem assumes that all the relevant derivatives are non-zero. This can be challenged with some nonlinear problems and/or parameterizations, and/or situations when a parameter is not identified under the null. Suppose you have a Gaussian mixture model, and your null is one component $N(\mu_0,\sigma^2_0)$ vs. the alternative of two distinct components $f N(\mu_1,\sigma_1^2) + (1-f) N(\mu_2,\sigma_2^2)$ with a mixing fraction $f$. The null is apparently nested in the alternative, but this can be expressed in a variety of ways: as $f=0$ (in which case the parameters $\mu_1,\sigma_1^2$ are not identified), $f=1$ (in which case $\mu_2, \sigma_2^2$ are not identified), or $\mu_1=\mu_2, \sigma_1=\sigma_2$ (in which case $f$ is not identified). Here, you can't even say how many degrees of freedom your test should have, as you have different number of restrictions depending on how you parameterize the nesting. See the work of Jiahua Chen on this, e.g. CJS 2001.
The $\chi^2$ may work OK if the distribution has been correctly specified. But if it was not, the test will break down again. In the (largely neglected by statisticians) subarea of multivariate analysis known as structural equation covariance modeling, a multivariate normal distribution is often assumed, but even if the structure is correct, the test will misbehave if the distribution is different. Satorra and Bentler 1995 show that the distribution will become $\sum_k a_k v_k, v_k \sim \mbox{i.i.d.} \chi^2_1$, the same story as with non-independent data in my point 1, but they've also demonstrated how the $a_k$s depend on the structure of the model and the fourth moments of the distribution.
For finite samples, in a large class of situations likelihood ratio is Bartlett-correctible: while ${\rm Prob}[d(y) \le x]=F(x;\chi^2_d)[1+O(n^{-1})]$ for a sample of size $n$, and $F(x;\chi^2_d)$ being the distribution function of the $\chi^2_d$ distribution, for the regular likelihood problems you can find a constant $b$ such that ${\rm Prob}[d(y)/(1+b/n) \le x]=F(x;\chi^2_d)[1+O(n^{-2})]$, i.e., to a higher order of accuracy. So the $\chi^2$ approximation for finite samples can be improved (and arguably should be improved if you know how). The constant $b$ depends on the structure of the model, and sometimes on the auxiliary parameters, but if it can be consistently estimated, that works, too, in improving the order of coverage.
For a review of these and similar esoteric issues in likelihood inference, see Smith 1989.
|
Why is a likelihood-ratio test distributed chi-squared?
I second Nick Sabbe's harsh comment, and my short answer is, It is not. I mean, it only is in the normal linear model. For absolutely any other sort of circumstances, the exact distribution is not a $
|
5,979
|
Why is a likelihood-ratio test distributed chi-squared?
|
As other commentators have pointed out, Wilks' theorem (Wilks 1938) only shows that, under various regularity conditions, this statistic is asymptotically chi-squared distributed. The asymptotic result follows from taking a multivariate Taylor expansion of the log-likelihood function and looking at what happens when the MLE is a critical point of the function. Using various asymptotic results relating to the MLE it is possible to eliminate all terms in the expansion except for the second-order term, which turns asymptotically into the squared norm of a normal random vector.
Derivations of Wilks' theorem can be found in various textbooks on estimation theory, and there are also versions floating around in online statistics lecture notes (see e.g., here). The general derivation requires a knowledge of mutivariate Taylor series and results pertaining to the MLE of a vector parameter. A simpler version of the derivation can be shown in the scalar case where the alternative model has only one more (scalar) parameter than the null model. For illustrative purposes, I will show the heuristic derivation of the result in this case.
Heuristic demonstration of Wilks' theorem with one degree-of-freedom: Consider the simple case where we have an alternative hypothesis with only one scalar parameter $\theta$ that is fixed to the value $\theta_0$ under the null hypothesis. In this case we have ${df}_A - {df}_0 = 1$ so the asymptotic distribution is a chi-squared distribution with one degree-of-freedom. To derive this asymptotic distribution we will use the following Taylor expansion:
$$\ell_\mathbf{x}(\theta_0)
= \ell_\mathbf{x}(\hat{\theta}_n) + \ell_\mathbf{x}'(\hat{\theta}_n) (\theta_0 - \hat{\theta}_n) + \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{2} (\theta_0 - \hat{\theta}_n)^2 + \mathcal{O}((\theta_0 - \hat{\theta}_n)^3).$$
To facilitate our analysis, we define the standardised estimation error $E_n(\theta) \equiv (\theta - \hat{\theta}_n) \sqrt{n\mathcal{I}(\theta)}$ where $\mathcal{I}$ is the Fisher information function. Now, suppose that the MLE $\hat{\theta}_n$ occurs at a critical point of the log-likelihood function so that $\ell_\mathbf{x}'(\hat{\theta}_n) = 0$. This gives the following simplified form for the Taylor expansion:
$$\begin{aligned}
\ell_\mathbf{x}(\theta_0)
&= \ell_\mathbf{x}(\hat{\theta}_n) + \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{2} (\theta_0 - \hat{\theta}_n)^2 + \mathcal{O}((\theta_0 - \hat{\theta}_n)^3) \\[6pt]
&= \ell_\mathbf{x}(\hat{\theta}_n) + \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{2 n \mathcal{I}(\theta_0)} \cdot E_n(\theta_0)^2 + \mathcal{O} \bigg( \frac{E_n(\theta_0)^3}{n^{3/2}} \bigg). \\[6pt]
\end{aligned}$$
Substituting this expansion into the likelihood-ratio statistic we get:
$$\begin{aligned}
W_n
&\equiv 2(\ell_\mathbf{x}(\hat{\theta}_n) - \ell_\mathbf{x}(\theta_0)) \\[6pt]
&= - \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{n \mathcal{I}(\theta_0)} \cdot E_n(\theta_0)^2 + \mathcal{O} \bigg( \frac{E_n(\theta_0)^3}{n^{3/2}} \bigg). \\[6pt]
\end{aligned}$$
Now, suppose you are looking at the distribution of $W_n$ under the null hypothesis that $\theta = \theta_0$. Under some regularity conditions, it is known that we get the asymptotic distribution $E_n(\theta_0) \sim \text{N}(0, 1)$ and we also get the limiting result $\ell_\mathbf{x}''(\hat{\theta}_n)/n \rightarrow -\mathcal{I}(\theta_0)$. This means that the order term in the above expansion will vanish asymptotically, and so we have the asymptotic result:
$$\begin{aligned}
W_n \rightarrow E_n(\theta_0)^2 \sim \chi_{1}^2. \\[6pt]
\end{aligned}$$
This is the chi-squared asymptotic result that holds in the case where the alternative model has only one more degree-of-freedom than the null model. The more general derivation is essentially the same, but it involves use of a multivariate parameter vector, which means we use the multivariate Taylor series and the properties of the MLE for a vector parameter.
As others have noted, Wilks' theorem uses a number of regularity conditions, and these conditions do not always hold. The result assumes that the MLE occurs at an interior point of the parameter space which is a critical point of the log-likelihood function. Additionally, it assumes all the required conditions for the standard asymptotic normality results for the MLE. Even when these various regularity conditions hold (which actually happens in quite a broad range of cases), the result is only an asymptotic result, and so it might not be a particularly good approximation for small sample sizes.
|
Why is a likelihood-ratio test distributed chi-squared?
|
As other commentators have pointed out, Wilks' theorem (Wilks 1938) only shows that, under various regularity conditions, this statistic is asymptotically chi-squared distributed. The asymptotic resu
|
Why is a likelihood-ratio test distributed chi-squared?
As other commentators have pointed out, Wilks' theorem (Wilks 1938) only shows that, under various regularity conditions, this statistic is asymptotically chi-squared distributed. The asymptotic result follows from taking a multivariate Taylor expansion of the log-likelihood function and looking at what happens when the MLE is a critical point of the function. Using various asymptotic results relating to the MLE it is possible to eliminate all terms in the expansion except for the second-order term, which turns asymptotically into the squared norm of a normal random vector.
Derivations of Wilks' theorem can be found in various textbooks on estimation theory, and there are also versions floating around in online statistics lecture notes (see e.g., here). The general derivation requires a knowledge of mutivariate Taylor series and results pertaining to the MLE of a vector parameter. A simpler version of the derivation can be shown in the scalar case where the alternative model has only one more (scalar) parameter than the null model. For illustrative purposes, I will show the heuristic derivation of the result in this case.
Heuristic demonstration of Wilks' theorem with one degree-of-freedom: Consider the simple case where we have an alternative hypothesis with only one scalar parameter $\theta$ that is fixed to the value $\theta_0$ under the null hypothesis. In this case we have ${df}_A - {df}_0 = 1$ so the asymptotic distribution is a chi-squared distribution with one degree-of-freedom. To derive this asymptotic distribution we will use the following Taylor expansion:
$$\ell_\mathbf{x}(\theta_0)
= \ell_\mathbf{x}(\hat{\theta}_n) + \ell_\mathbf{x}'(\hat{\theta}_n) (\theta_0 - \hat{\theta}_n) + \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{2} (\theta_0 - \hat{\theta}_n)^2 + \mathcal{O}((\theta_0 - \hat{\theta}_n)^3).$$
To facilitate our analysis, we define the standardised estimation error $E_n(\theta) \equiv (\theta - \hat{\theta}_n) \sqrt{n\mathcal{I}(\theta)}$ where $\mathcal{I}$ is the Fisher information function. Now, suppose that the MLE $\hat{\theta}_n$ occurs at a critical point of the log-likelihood function so that $\ell_\mathbf{x}'(\hat{\theta}_n) = 0$. This gives the following simplified form for the Taylor expansion:
$$\begin{aligned}
\ell_\mathbf{x}(\theta_0)
&= \ell_\mathbf{x}(\hat{\theta}_n) + \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{2} (\theta_0 - \hat{\theta}_n)^2 + \mathcal{O}((\theta_0 - \hat{\theta}_n)^3) \\[6pt]
&= \ell_\mathbf{x}(\hat{\theta}_n) + \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{2 n \mathcal{I}(\theta_0)} \cdot E_n(\theta_0)^2 + \mathcal{O} \bigg( \frac{E_n(\theta_0)^3}{n^{3/2}} \bigg). \\[6pt]
\end{aligned}$$
Substituting this expansion into the likelihood-ratio statistic we get:
$$\begin{aligned}
W_n
&\equiv 2(\ell_\mathbf{x}(\hat{\theta}_n) - \ell_\mathbf{x}(\theta_0)) \\[6pt]
&= - \frac{\ell_\mathbf{x}''(\hat{\theta}_n)}{n \mathcal{I}(\theta_0)} \cdot E_n(\theta_0)^2 + \mathcal{O} \bigg( \frac{E_n(\theta_0)^3}{n^{3/2}} \bigg). \\[6pt]
\end{aligned}$$
Now, suppose you are looking at the distribution of $W_n$ under the null hypothesis that $\theta = \theta_0$. Under some regularity conditions, it is known that we get the asymptotic distribution $E_n(\theta_0) \sim \text{N}(0, 1)$ and we also get the limiting result $\ell_\mathbf{x}''(\hat{\theta}_n)/n \rightarrow -\mathcal{I}(\theta_0)$. This means that the order term in the above expansion will vanish asymptotically, and so we have the asymptotic result:
$$\begin{aligned}
W_n \rightarrow E_n(\theta_0)^2 \sim \chi_{1}^2. \\[6pt]
\end{aligned}$$
This is the chi-squared asymptotic result that holds in the case where the alternative model has only one more degree-of-freedom than the null model. The more general derivation is essentially the same, but it involves use of a multivariate parameter vector, which means we use the multivariate Taylor series and the properties of the MLE for a vector parameter.
As others have noted, Wilks' theorem uses a number of regularity conditions, and these conditions do not always hold. The result assumes that the MLE occurs at an interior point of the parameter space which is a critical point of the log-likelihood function. Additionally, it assumes all the required conditions for the standard asymptotic normality results for the MLE. Even when these various regularity conditions hold (which actually happens in quite a broad range of cases), the result is only an asymptotic result, and so it might not be a particularly good approximation for small sample sizes.
|
Why is a likelihood-ratio test distributed chi-squared?
As other commentators have pointed out, Wilks' theorem (Wilks 1938) only shows that, under various regularity conditions, this statistic is asymptotically chi-squared distributed. The asymptotic resu
|
5,980
|
10-fold Cross-validation vs leave-one-out cross-validation
|
Just to add slightly to the answer of @SubravetiSuraj (+1)
Cross-validation gives a pessimistically biased estimate of performance because most statistical models will improve if the training set is made larger. This means that k-fold cross-validation estimates the performance of a model trained on a dataset $100\times\frac{(k-1)}{k}\%$ of the available data, rather than on 100% of it. So if you perform cross-validation to estimate performance, and then use a model trained on all of the data for operational use, it will perform slightly better than the cross-validation estimate suggests.
Leave-one-out cross-validation is approximately unbiased, because the difference in size between the training set used in each fold and the entire dataset is only a single pattern. There is a paper on this by Luntz and Brailovsky (in Russian).
Luntz, Aleksandr, and Viktor Brailovsky. "On estimation of characters obtained in statistical procedure of recognition." Technicheskaya Kibernetica 3.6 (1969): 6β12.
see also
Lachenbruch,Peter A., and Mickey, M. Ray. "Estimation of Error Rates in Discriminant Analysis." Technometrics 10.1 (1968): 1β11.
However, while leave-one-out cross-validation is approximately unbiased, it tends to have a high variance (so you would get very different estimates if you repeated the estimate with different initial samples of data from the same distribution). As the error of the estimator is a combination of bias and variance, whether leave-one-out cross-validation is better than 10-fold cross-validation depends on both quantities.
Now the variance in fitting the model tends to be higher if it is fitted to a small dataset (as it is more sensitive to any noise/sampling artifacts in the particular training sample used). This means that 10-fold cross-validation is likely to have a high variance (as well as a higher bias) if you only have a limited amount of data, as the size of the training set will be smaller than for LOOCV. So k-fold cross-validation can have variance issues as well, but for a different reason. This is why LOOCV is often better when the size of the dataset is small.
However, the main reason for using LOOCV in my opinion is that it is computationally inexpensive for some models (such as linear regression, most kernel methods, nearest-neighbour classifiers, etc.), and unless the dataset were very small, I would use 10-fold cross-validation if it fitted in my computational budget, or better still, bootstrap estimation and bagging.
|
10-fold Cross-validation vs leave-one-out cross-validation
|
Just to add slightly to the answer of @SubravetiSuraj (+1)
Cross-validation gives a pessimistically biased estimate of performance because most statistical models will improve if the training set is m
|
10-fold Cross-validation vs leave-one-out cross-validation
Just to add slightly to the answer of @SubravetiSuraj (+1)
Cross-validation gives a pessimistically biased estimate of performance because most statistical models will improve if the training set is made larger. This means that k-fold cross-validation estimates the performance of a model trained on a dataset $100\times\frac{(k-1)}{k}\%$ of the available data, rather than on 100% of it. So if you perform cross-validation to estimate performance, and then use a model trained on all of the data for operational use, it will perform slightly better than the cross-validation estimate suggests.
Leave-one-out cross-validation is approximately unbiased, because the difference in size between the training set used in each fold and the entire dataset is only a single pattern. There is a paper on this by Luntz and Brailovsky (in Russian).
Luntz, Aleksandr, and Viktor Brailovsky. "On estimation of characters obtained in statistical procedure of recognition." Technicheskaya Kibernetica 3.6 (1969): 6β12.
see also
Lachenbruch,Peter A., and Mickey, M. Ray. "Estimation of Error Rates in Discriminant Analysis." Technometrics 10.1 (1968): 1β11.
However, while leave-one-out cross-validation is approximately unbiased, it tends to have a high variance (so you would get very different estimates if you repeated the estimate with different initial samples of data from the same distribution). As the error of the estimator is a combination of bias and variance, whether leave-one-out cross-validation is better than 10-fold cross-validation depends on both quantities.
Now the variance in fitting the model tends to be higher if it is fitted to a small dataset (as it is more sensitive to any noise/sampling artifacts in the particular training sample used). This means that 10-fold cross-validation is likely to have a high variance (as well as a higher bias) if you only have a limited amount of data, as the size of the training set will be smaller than for LOOCV. So k-fold cross-validation can have variance issues as well, but for a different reason. This is why LOOCV is often better when the size of the dataset is small.
However, the main reason for using LOOCV in my opinion is that it is computationally inexpensive for some models (such as linear regression, most kernel methods, nearest-neighbour classifiers, etc.), and unless the dataset were very small, I would use 10-fold cross-validation if it fitted in my computational budget, or better still, bootstrap estimation and bagging.
|
10-fold Cross-validation vs leave-one-out cross-validation
Just to add slightly to the answer of @SubravetiSuraj (+1)
Cross-validation gives a pessimistically biased estimate of performance because most statistical models will improve if the training set is m
|
5,981
|
10-fold Cross-validation vs leave-one-out cross-validation
|
In my opinion, leave one out cross validation is better when you have a small set of training data. In this case, you can't really make 10 folds to make predictions on using the rest of your data to train the model.
If you have a large amount of training data on the other hand, 10-fold cross validation would be a better bet, because there will be too many iterations for leave one out cross-validation, and considering these many results to tune your hyperparameters might not be such a good idea.
According to ISL, there is always a bias-variance trade-off between doing leave one out and k fold cross validation. In LOOCV(leave one out CV), you get estimates of test error with lower bias, and higher variance because each training set contains n-1 examples, which means that you are using almost the entire training set in each iteration. This leads to higher variance too, because there is a lot of overlap between training sets, and thus the test error estimates are highly correlated, which means that the mean value of the test error estimate will have higher variance.
The opposite is true with k-fold CV, because there is relatively less overlap between training sets, thus the test error estimates are less correlated, as a result of which the mean test error value won't have as much variance as LOOCV.
|
10-fold Cross-validation vs leave-one-out cross-validation
|
In my opinion, leave one out cross validation is better when you have a small set of training data. In this case, you can't really make 10 folds to make predictions on using the rest of your data to t
|
10-fold Cross-validation vs leave-one-out cross-validation
In my opinion, leave one out cross validation is better when you have a small set of training data. In this case, you can't really make 10 folds to make predictions on using the rest of your data to train the model.
If you have a large amount of training data on the other hand, 10-fold cross validation would be a better bet, because there will be too many iterations for leave one out cross-validation, and considering these many results to tune your hyperparameters might not be such a good idea.
According to ISL, there is always a bias-variance trade-off between doing leave one out and k fold cross validation. In LOOCV(leave one out CV), you get estimates of test error with lower bias, and higher variance because each training set contains n-1 examples, which means that you are using almost the entire training set in each iteration. This leads to higher variance too, because there is a lot of overlap between training sets, and thus the test error estimates are highly correlated, which means that the mean value of the test error estimate will have higher variance.
The opposite is true with k-fold CV, because there is relatively less overlap between training sets, thus the test error estimates are less correlated, as a result of which the mean test error value won't have as much variance as LOOCV.
|
10-fold Cross-validation vs leave-one-out cross-validation
In my opinion, leave one out cross validation is better when you have a small set of training data. In this case, you can't really make 10 folds to make predictions on using the rest of your data to t
|
5,982
|
10-fold Cross-validation vs leave-one-out cross-validation
|
The existing answers focus on getting good estimates of the out of sample prediction error. This is not the only perspective on the LOOCV versus K-fold CV decision. In particular, some readers may believe there is a true model and may wish to recover it.
In this case, Shao 1993 famously showed that for linear models, LOOCV is inconsistent for recovering the true model and something resembling K-fold CV is consistent. (Shao considered a train-test split where the $n_{test}/n_{train}$ goes to 1, i.e. the test set dominates.)
Model selection is a deep and confusing topic, so let me add a couple of references to related discussions.
A related discussion here on LASSO reinforces this dichotomy of prediction vs inference and has interesting, thought-provoking comments about LASSO and CV in specific.
This site has maybe hundreds of related threads that describe AIC and BIC. In some cases, these are asymptotically equivalent to LOOCV (AIC) and K-fold CV (BIC) (More info and orignal sources in this answer: https://stats.stackexchange.com/a/414610/86176). These threads reinforce what I wrote above, because like LOOCV, AIC emphasizes prediction. Like K-fold, BIC emphasizes selecting the true model. Yuhong Yang showed that no procedure can be optimal for both purposes.
Rob Hyndman has a useful cross-validation overview here mentioning AIC, BIC, LOO, and leave-more-out CV. His position is that consistency in model selection is irrelevant because the true model is rarely in the set under consideration.
|
10-fold Cross-validation vs leave-one-out cross-validation
|
The existing answers focus on getting good estimates of the out of sample prediction error. This is not the only perspective on the LOOCV versus K-fold CV decision. In particular, some readers may bel
|
10-fold Cross-validation vs leave-one-out cross-validation
The existing answers focus on getting good estimates of the out of sample prediction error. This is not the only perspective on the LOOCV versus K-fold CV decision. In particular, some readers may believe there is a true model and may wish to recover it.
In this case, Shao 1993 famously showed that for linear models, LOOCV is inconsistent for recovering the true model and something resembling K-fold CV is consistent. (Shao considered a train-test split where the $n_{test}/n_{train}$ goes to 1, i.e. the test set dominates.)
Model selection is a deep and confusing topic, so let me add a couple of references to related discussions.
A related discussion here on LASSO reinforces this dichotomy of prediction vs inference and has interesting, thought-provoking comments about LASSO and CV in specific.
This site has maybe hundreds of related threads that describe AIC and BIC. In some cases, these are asymptotically equivalent to LOOCV (AIC) and K-fold CV (BIC) (More info and orignal sources in this answer: https://stats.stackexchange.com/a/414610/86176). These threads reinforce what I wrote above, because like LOOCV, AIC emphasizes prediction. Like K-fold, BIC emphasizes selecting the true model. Yuhong Yang showed that no procedure can be optimal for both purposes.
Rob Hyndman has a useful cross-validation overview here mentioning AIC, BIC, LOO, and leave-more-out CV. His position is that consistency in model selection is irrelevant because the true model is rarely in the set under consideration.
|
10-fold Cross-validation vs leave-one-out cross-validation
The existing answers focus on getting good estimates of the out of sample prediction error. This is not the only perspective on the LOOCV versus K-fold CV decision. In particular, some readers may bel
|
5,983
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso ($L^1$)?
|
1. Which method is preferred?
Yes, elastic net is always preferred over lasso & ridge regression because it solves the limitations of both methods, while also including each as special cases. So if the ridge or lasso solution is, indeed, the best, then any good model selection routine will identify that as part of the modeling process.
Comments to my post have pointed out that the advantages of elastic net are not unqualified. I persist in my belief that the generality of the elastic net regression is still preferable to either $L^1$ or $L^2$ regularization on its own. Specifically, I think that the points of contention between myself and others are directly tied to what assumptions we are willing to make about the modeling process. In the presence of strong knowledge about the underlying data, some methods will be preferred to others. However, my preference for elastic net is rooted in my skepticism that one will confidently know that $L^1$ or $L^2$ is the true model.
Claim: Prior knowledge may obviate one of the need to use elastic net regression.
This is somewhat circular. Forgive me if this is somewhat glib, but if you know that LASSO (ridge) is the best solution, then you won't ask yourself how to appropriately model it; you'll just fit a LASSO (ridge) model. If you're absolutely sure that the correct answer is LASSO (ridge) regression, then you're clearly convinced that there would be no reason to waste time fitting an elastic net. But if you're slightly less certain whether LASSO (ridge) is the correct way to proceed, I believe it makes sense to estimate a more flexible model, and evaluate how strongly the data support the prior belief.
Claim: Modestly large data will not permit discovery of $L^1$ or $L^2$ solutions as preferred, even in cases when the $L^1$ or $L^2$ solution is the true model.
This is also true, but I think it's circular for a similar reason: if you've estimated an optimal solution and find that $\alpha\not\in \{0,1\},$ then that's the model that the data support. On the one hand, yes, your estimated model is not the true model, but I must wonder how one would know that the true model is $\alpha=1$ (or $\alpha=0$) prior to any model estimation. There might be domains where you have this kind of prior knowledge, but my professional work is not one of them.
Claim: Introducing additional hyperparameters increases the computational cost of estimating the model.
This is only relevant if you have tight time/computer limitations; otherwise it's just a nuisance. GLMNET is the gold-standard algorithm for estimating elastic net solutions. The user supplies some value of alpha, and it uses the path properties of the regularization solution to quickly estimate a family of models for a variety of values of the penalization magnitude $\lambda$, and it can often estimate this family of solutions more quickly than estimating just one solution for a specific value $\lambda$. So, yes, using GLMNET does consign you to the domain of using grid-style methods (iterate over some values of $\alpha$ and let GLMNET try a variety of $\lambda$s), but it's pretty fast.
Claim: Improved performance of elastic net over LASSO or ridge regression is not guaranteed.
This is true, but at the step where one is contemplating which method to use, one will not know which of elastic net, ridge or LASSO is the best. If one reasons that the best solution must be LASSO or ridge regression, then we're in the domain of claim (1). If we're still uncertain which is best, then we can test LASSO, ridge and elastic net solutions, and make a choice of a final model at that point (or, if you're an academic, just write your paper about all three). This situation of prior uncertainty will either place us in the domain of claim (2), where the true model is LASSO/ridge but we did not know so ahead of time, and we accidentally select the wrong model due to poorly identified hyperparameters, or elastic net is actually the best solution.
Claim: Hyperparameter selection without cross-validation is highly biased and error-prone.
Proper model validation is an integral part of any machine learning enterprise. Model validation is usually an expensive step, too, so one would seek to minimize inefficiencies here -- if one of those inefficiencies is needlessly trying $\alpha$ values that are known to be futile, then one suggestion might be to do so. Yes, by all means do that, if you're comfortable with the strong statement that you're making about how your data are arranged -- but we're back to the territory of claim (1) and claim (2).
2. What's the intuition and math behind elastic net?
I strongly suggest reading the literature on these methods, starting with the original paper on the elastic net. The paper develops the intuition and the math, and is highly readable. Reproducing it here would only be to the detriment of the authors' explanation. But the high-level summary is that the elastic net is a convex sum of ridge and lasso penalties, so the objective function for a Gaussian error model looks like
$$\text{Residual Mean Square Error}+\alpha \cdot \text{Ridge Penalty}+(1-\alpha)\cdot \text{LASSO Penalty}$$
for $\alpha\in[0,1].$
Hui Zou and Trevor Hastie. "Regularization and variable selection via the elastic net." J. R. Statistic. Soc., vol 67 (2005), Part 2., pp. 301-320.
Richard Hardy points out that this is developed in more detail in Hastie et al. "The Elements of Statistical Learning" chapters 3 and 18.
3. What if you add additional $L^q$ norms?
This is a question posed to me in the comments:
Let me suggest one further argument against your point of view that elastic net is uniformly better than lasso or ridge alone. Imagine that we add another penalty to the elastic net cost function, e.g. an $L^3$ cost, with a hyperparameter $\gamma$. I don't think there is much research on that, but I would bet you that if you do a cross-validation search on a 3d parameter grid, then you will get $\gamma\not =0$ as the optimal value. If so, would you then argue that it is always a good idea to include $L^3$ cost too.
I appreciate that the spirit of the question is "If it's as you claim and two penalties are good, why not add another?" But I think the answer lies in why we regularize in the first place.
$L^1$ regularization tends to produce sparse solutions, but also tends to select the feature most strongly correlated with the outcome and zero out the rest. Moreover, in a data set with $n$ observations, it can select at most $n$ features. $L_2$ regularization is suited to deal with ill-posed problems resulting from highly (or perfectly) correlated features. In a data set with $p$ features, $L_2$ regularization can be used to uniquely identify a model in the $p>n$ case.
Setting aside either of these problems, the regularized model can still out-perform the ML model because the shrinkage properties of the estimators are "pessimistic" and pull coefficients toward 0.
But I am not aware of the statistical properties for $L^3$ regularization. In the problems I've worked on, we generally face both problems: the inclusion of poorly correlated features (hypotheses that are not borne out by the data), and co-linear features.
Indeed, there are compelling reasons that $L^1$ and $L^2$ penalties on parameters are the only ones typically used.
In Why do we only see $L_1$ and $L_2$ regularization but not other norms?, @whuber offers this comment:
I haven't investigated this question specifically, but experience with similar situations suggests there may be a nice qualitative answer: all norms that are second differentiable at the origin will be locally equivalent to each other, of which the $L^2$ norm is the standard. All other norms will not be differentiable at the origin and $L^1$ qualitatively reproduces their behavior. That covers the gamut. In effect, a linear combination of an $L^1$ and $L^2$ norm approximates any norm to second order at the origin--and this is what matters most in regression without outlying residuals.
So we can effectively cover the range of options which could possibly be provided by $L^q$ norms as combinations of $L^1$ and $L^2$ norms -- all without requiring additional hyperparameter tuning.
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso (
|
1. Which method is preferred?
Yes, elastic net is always preferred over lasso & ridge regression because it solves the limitations of both methods, while also including each as special cases. So if th
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso ($L^1$)?
1. Which method is preferred?
Yes, elastic net is always preferred over lasso & ridge regression because it solves the limitations of both methods, while also including each as special cases. So if the ridge or lasso solution is, indeed, the best, then any good model selection routine will identify that as part of the modeling process.
Comments to my post have pointed out that the advantages of elastic net are not unqualified. I persist in my belief that the generality of the elastic net regression is still preferable to either $L^1$ or $L^2$ regularization on its own. Specifically, I think that the points of contention between myself and others are directly tied to what assumptions we are willing to make about the modeling process. In the presence of strong knowledge about the underlying data, some methods will be preferred to others. However, my preference for elastic net is rooted in my skepticism that one will confidently know that $L^1$ or $L^2$ is the true model.
Claim: Prior knowledge may obviate one of the need to use elastic net regression.
This is somewhat circular. Forgive me if this is somewhat glib, but if you know that LASSO (ridge) is the best solution, then you won't ask yourself how to appropriately model it; you'll just fit a LASSO (ridge) model. If you're absolutely sure that the correct answer is LASSO (ridge) regression, then you're clearly convinced that there would be no reason to waste time fitting an elastic net. But if you're slightly less certain whether LASSO (ridge) is the correct way to proceed, I believe it makes sense to estimate a more flexible model, and evaluate how strongly the data support the prior belief.
Claim: Modestly large data will not permit discovery of $L^1$ or $L^2$ solutions as preferred, even in cases when the $L^1$ or $L^2$ solution is the true model.
This is also true, but I think it's circular for a similar reason: if you've estimated an optimal solution and find that $\alpha\not\in \{0,1\},$ then that's the model that the data support. On the one hand, yes, your estimated model is not the true model, but I must wonder how one would know that the true model is $\alpha=1$ (or $\alpha=0$) prior to any model estimation. There might be domains where you have this kind of prior knowledge, but my professional work is not one of them.
Claim: Introducing additional hyperparameters increases the computational cost of estimating the model.
This is only relevant if you have tight time/computer limitations; otherwise it's just a nuisance. GLMNET is the gold-standard algorithm for estimating elastic net solutions. The user supplies some value of alpha, and it uses the path properties of the regularization solution to quickly estimate a family of models for a variety of values of the penalization magnitude $\lambda$, and it can often estimate this family of solutions more quickly than estimating just one solution for a specific value $\lambda$. So, yes, using GLMNET does consign you to the domain of using grid-style methods (iterate over some values of $\alpha$ and let GLMNET try a variety of $\lambda$s), but it's pretty fast.
Claim: Improved performance of elastic net over LASSO or ridge regression is not guaranteed.
This is true, but at the step where one is contemplating which method to use, one will not know which of elastic net, ridge or LASSO is the best. If one reasons that the best solution must be LASSO or ridge regression, then we're in the domain of claim (1). If we're still uncertain which is best, then we can test LASSO, ridge and elastic net solutions, and make a choice of a final model at that point (or, if you're an academic, just write your paper about all three). This situation of prior uncertainty will either place us in the domain of claim (2), where the true model is LASSO/ridge but we did not know so ahead of time, and we accidentally select the wrong model due to poorly identified hyperparameters, or elastic net is actually the best solution.
Claim: Hyperparameter selection without cross-validation is highly biased and error-prone.
Proper model validation is an integral part of any machine learning enterprise. Model validation is usually an expensive step, too, so one would seek to minimize inefficiencies here -- if one of those inefficiencies is needlessly trying $\alpha$ values that are known to be futile, then one suggestion might be to do so. Yes, by all means do that, if you're comfortable with the strong statement that you're making about how your data are arranged -- but we're back to the territory of claim (1) and claim (2).
2. What's the intuition and math behind elastic net?
I strongly suggest reading the literature on these methods, starting with the original paper on the elastic net. The paper develops the intuition and the math, and is highly readable. Reproducing it here would only be to the detriment of the authors' explanation. But the high-level summary is that the elastic net is a convex sum of ridge and lasso penalties, so the objective function for a Gaussian error model looks like
$$\text{Residual Mean Square Error}+\alpha \cdot \text{Ridge Penalty}+(1-\alpha)\cdot \text{LASSO Penalty}$$
for $\alpha\in[0,1].$
Hui Zou and Trevor Hastie. "Regularization and variable selection via the elastic net." J. R. Statistic. Soc., vol 67 (2005), Part 2., pp. 301-320.
Richard Hardy points out that this is developed in more detail in Hastie et al. "The Elements of Statistical Learning" chapters 3 and 18.
3. What if you add additional $L^q$ norms?
This is a question posed to me in the comments:
Let me suggest one further argument against your point of view that elastic net is uniformly better than lasso or ridge alone. Imagine that we add another penalty to the elastic net cost function, e.g. an $L^3$ cost, with a hyperparameter $\gamma$. I don't think there is much research on that, but I would bet you that if you do a cross-validation search on a 3d parameter grid, then you will get $\gamma\not =0$ as the optimal value. If so, would you then argue that it is always a good idea to include $L^3$ cost too.
I appreciate that the spirit of the question is "If it's as you claim and two penalties are good, why not add another?" But I think the answer lies in why we regularize in the first place.
$L^1$ regularization tends to produce sparse solutions, but also tends to select the feature most strongly correlated with the outcome and zero out the rest. Moreover, in a data set with $n$ observations, it can select at most $n$ features. $L_2$ regularization is suited to deal with ill-posed problems resulting from highly (or perfectly) correlated features. In a data set with $p$ features, $L_2$ regularization can be used to uniquely identify a model in the $p>n$ case.
Setting aside either of these problems, the regularized model can still out-perform the ML model because the shrinkage properties of the estimators are "pessimistic" and pull coefficients toward 0.
But I am not aware of the statistical properties for $L^3$ regularization. In the problems I've worked on, we generally face both problems: the inclusion of poorly correlated features (hypotheses that are not borne out by the data), and co-linear features.
Indeed, there are compelling reasons that $L^1$ and $L^2$ penalties on parameters are the only ones typically used.
In Why do we only see $L_1$ and $L_2$ regularization but not other norms?, @whuber offers this comment:
I haven't investigated this question specifically, but experience with similar situations suggests there may be a nice qualitative answer: all norms that are second differentiable at the origin will be locally equivalent to each other, of which the $L^2$ norm is the standard. All other norms will not be differentiable at the origin and $L^1$ qualitatively reproduces their behavior. That covers the gamut. In effect, a linear combination of an $L^1$ and $L^2$ norm approximates any norm to second order at the origin--and this is what matters most in regression without outlying residuals.
So we can effectively cover the range of options which could possibly be provided by $L^q$ norms as combinations of $L^1$ and $L^2$ norms -- all without requiring additional hyperparameter tuning.
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso (
1. Which method is preferred?
Yes, elastic net is always preferred over lasso & ridge regression because it solves the limitations of both methods, while also including each as special cases. So if th
|
5,984
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso ($L^1$)?
|
I generally agree with the answer by @Sycorax, but I would like to add some qualification.
Saying that "elastic net is always preferred over lasso & ridge regression" may be a little too strong. In small or medium samples elastic net may not select pure LASSO or pure ridge solution even if the former or the latter is actually the relevant one. Given strong prior knowledge it could make sense to choose LASSO or ridge in place of elastic net. However, in absence of prior knowledge, elastic net should be the preferred solution.
Also, elastic net is computationally more expensive than LASSO or ridge as the relative weight of LASSO versus ridge has to be selected using cross validation. If a reasonable grid of alpha values is [0,1] with a step size of 0.1, that would mean elastic net is roughly 11 times as computationally expensive as LASSO or ridge. (Since LASSO and ridge do not have quite the same computational complexity, the result is just a rough guess.)
What sort of prior knowledge would lead one to prefer Lasso and what sort of prior knowledge would lead one to prefer ridge?
If it is plausible that all regressors are relevant, but they are highly correlated, then no variable selection is needed and thus ridge could be preferred. Some economic phenomena are like that. If, on the other hand, some of the regressors are likely to be completely irrelevant (but we just do not know which ones) then variable selection is needed and LASSO could be preferred. Gene expression problems are probably like that (but I am not a biologist, so please correct me if I am wrong). Generally, this knowledge would be taken from the subject-matter domain.
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso (
|
I generally agree with the answer by @Sycorax, but I would like to add some qualification.
Saying that "elastic net is always preferred over lasso & ridge regression" may be a little too strong. In sm
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso ($L^1$)?
I generally agree with the answer by @Sycorax, but I would like to add some qualification.
Saying that "elastic net is always preferred over lasso & ridge regression" may be a little too strong. In small or medium samples elastic net may not select pure LASSO or pure ridge solution even if the former or the latter is actually the relevant one. Given strong prior knowledge it could make sense to choose LASSO or ridge in place of elastic net. However, in absence of prior knowledge, elastic net should be the preferred solution.
Also, elastic net is computationally more expensive than LASSO or ridge as the relative weight of LASSO versus ridge has to be selected using cross validation. If a reasonable grid of alpha values is [0,1] with a step size of 0.1, that would mean elastic net is roughly 11 times as computationally expensive as LASSO or ridge. (Since LASSO and ridge do not have quite the same computational complexity, the result is just a rough guess.)
What sort of prior knowledge would lead one to prefer Lasso and what sort of prior knowledge would lead one to prefer ridge?
If it is plausible that all regressors are relevant, but they are highly correlated, then no variable selection is needed and thus ridge could be preferred. Some economic phenomena are like that. If, on the other hand, some of the regressors are likely to be completely irrelevant (but we just do not know which ones) then variable selection is needed and LASSO could be preferred. Gene expression problems are probably like that (but I am not a biologist, so please correct me if I am wrong). Generally, this knowledge would be taken from the subject-matter domain.
|
What is elastic net regularization, and how does it solve the drawbacks of Ridge ($L^2$) and Lasso (
I generally agree with the answer by @Sycorax, but I would like to add some qualification.
Saying that "elastic net is always preferred over lasso & ridge regression" may be a little too strong. In sm
|
5,985
|
Manually calculated $R^2$ doesn't match up with randomForest() $R^2$ for testing new data
|
The reason that the $R^2$ values are not matching is because randomForest is reporting variation explained as opposed to variance explained. I think this is a common misunderstanding about $R^2$ that is perpetuated in textbooks. I even mentioned this on another thread the other day. If you want an example, see the (otherwise quite good) textbook Seber and Lee, Linear Regression Analysis, 2nd. ed.
A general definition for $R^2$ is
$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} .
$$
That is, we compute the mean-squared error, divide it by the variance of the original observations and then subtract this from one. (Note that if your predictions are really bad, this value can go negative.)
Now, what happens with linear regression (with an intercept term!) is that the average value of the $\hat{y}_i$'s matches $\bar{y}$. Furthermore, the residual vector $y - \hat{y}$ is orthogonal to the vector of fitted values $\hat{y}$. When you put these two things together, then the definition reduces to the one that is more commonly encountered, i.e.,
$$
R^2_{\mathrm{LR}} = \mathrm{Corr}(y,\hat{y})^2 .
$$
(I've used the subscripts $\mathrm{LR}$ in $R^2_{\mathrm{LR}}$ to indicate linear regression.)
The randomForest call is using the first definition, so if you do
> y <- testset[,1]
> 1 - sum((y-predicted)^2)/sum((y-mean(y))^2)
you'll see that the answers match.
|
Manually calculated $R^2$ doesn't match up with randomForest() $R^2$ for testing new data
|
The reason that the $R^2$ values are not matching is because randomForest is reporting variation explained as opposed to variance explained. I think this is a common misunderstanding about $R^2$ that
|
Manually calculated $R^2$ doesn't match up with randomForest() $R^2$ for testing new data
The reason that the $R^2$ values are not matching is because randomForest is reporting variation explained as opposed to variance explained. I think this is a common misunderstanding about $R^2$ that is perpetuated in textbooks. I even mentioned this on another thread the other day. If you want an example, see the (otherwise quite good) textbook Seber and Lee, Linear Regression Analysis, 2nd. ed.
A general definition for $R^2$ is
$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} .
$$
That is, we compute the mean-squared error, divide it by the variance of the original observations and then subtract this from one. (Note that if your predictions are really bad, this value can go negative.)
Now, what happens with linear regression (with an intercept term!) is that the average value of the $\hat{y}_i$'s matches $\bar{y}$. Furthermore, the residual vector $y - \hat{y}$ is orthogonal to the vector of fitted values $\hat{y}$. When you put these two things together, then the definition reduces to the one that is more commonly encountered, i.e.,
$$
R^2_{\mathrm{LR}} = \mathrm{Corr}(y,\hat{y})^2 .
$$
(I've used the subscripts $\mathrm{LR}$ in $R^2_{\mathrm{LR}}$ to indicate linear regression.)
The randomForest call is using the first definition, so if you do
> y <- testset[,1]
> 1 - sum((y-predicted)^2)/sum((y-mean(y))^2)
you'll see that the answers match.
|
Manually calculated $R^2$ doesn't match up with randomForest() $R^2$ for testing new data
The reason that the $R^2$ values are not matching is because randomForest is reporting variation explained as opposed to variance explained. I think this is a common misunderstanding about $R^2$ that
|
5,986
|
Why is logistic regression a linear model?
|
The logistic regression model is of the form
$$
\mathrm{logit}(p_i) = \mathrm{ln}\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i}.
$$
It is called a generalized linear model not because the estimated probability of the response event is linear, but because the logit of the estimated probability response is a linear function of the predictors parameters.
More generally, the Generalized Linear Model is of the form
$$
\mathrm{g}(\mu_i) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i},
$$
where $\mu$ is the expected value of the response given the covariates.
Edit: Thank you whuber for the correction.
|
Why is logistic regression a linear model?
|
The logistic regression model is of the form
$$
\mathrm{logit}(p_i) = \mathrm{ln}\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i}.
$$
It is calle
|
Why is logistic regression a linear model?
The logistic regression model is of the form
$$
\mathrm{logit}(p_i) = \mathrm{ln}\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i}.
$$
It is called a generalized linear model not because the estimated probability of the response event is linear, but because the logit of the estimated probability response is a linear function of the predictors parameters.
More generally, the Generalized Linear Model is of the form
$$
\mathrm{g}(\mu_i) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i},
$$
where $\mu$ is the expected value of the response given the covariates.
Edit: Thank you whuber for the correction.
|
Why is logistic regression a linear model?
The logistic regression model is of the form
$$
\mathrm{logit}(p_i) = \mathrm{ln}\left(\frac{p_i}{1-p_i}\right) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i} + \cdots + \beta_p x_{p,i}.
$$
It is calle
|
5,987
|
Why is logistic regression a linear model?
|
Logistic regression uses the general linear equation $Y=b_0+β(b_i X_i)+\epsilon$. In linear regression $Y$ is a continuous dependent variable, but in logistic regression it is regressing for the probability of a categorical outcome (for example 0 and 1).
The probability of $Y=1$ is:
$$
P(Y=1) = {1 \over 1+e^{-(b_0+\sum{(b_iX_i)})}}
$$
|
Why is logistic regression a linear model?
|
Logistic regression uses the general linear equation $Y=b_0+β(b_i X_i)+\epsilon$. In linear regression $Y$ is a continuous dependent variable, but in logistic regression it is regressing for the proba
|
Why is logistic regression a linear model?
Logistic regression uses the general linear equation $Y=b_0+β(b_i X_i)+\epsilon$. In linear regression $Y$ is a continuous dependent variable, but in logistic regression it is regressing for the probability of a categorical outcome (for example 0 and 1).
The probability of $Y=1$ is:
$$
P(Y=1) = {1 \over 1+e^{-(b_0+\sum{(b_iX_i)})}}
$$
|
Why is logistic regression a linear model?
Logistic regression uses the general linear equation $Y=b_0+β(b_i X_i)+\epsilon$. In linear regression $Y$ is a continuous dependent variable, but in logistic regression it is regressing for the proba
|
5,988
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
|
Below are those residual plots with the approximate mean and spread of points (limits that include most of the values) at each value of fitted (and hence of $x$) marked in - to a rough approximation indicating the conditional mean (red) and conditional mean $\pm$ (roughly!) twice the conditional standard deviation (purple):
The second plot shows the mean residual doesn't change with the fitted values (and so is doesn't change with $x$), but the spread of the residuals (and hence of the $y$'s about the fitted line) is increasing as the fitted values (or $x$) changes. That is, the spread is not constant. Heteroskedasticity.
the third plot shows that the residuals are mostly negative when the fitted value is small, positive when the fitted value is in the middle and negative when the fitted value is large. That is, the spread is approximately constant, but the conditional mean is not - the fitted line doesn't describe how $y$ behaves as $x$ changes, since the relationship is curved.
Isn't it possible that it is linear, but that the errors are either not normally distributed, or else that they are normally distributed, but do not center around zero?
Not really*, in those situations the plots look different to the third plot.
(i) If the errors were normal but not centered at zero, but at $\theta$, say, then the intercept would pick up the mean error, and so the estimated intercept would be an estimate of $\beta_0+\theta$ (that would be its expected value, but it is estimated with error). Consequently, your residuals would still have conditional mean zero, and so the plot would look like the first plot above.
(ii) If the errors are not normally distributed the pattern of dots might be densest somewhere other than the center line (if the data were skewed), say, but the local mean residual would still be near 0.
Here the purple lines still represent a (very) roughly 95% interval, but it's no longer symmetric. (I'm glossing over a couple of issues to avoid obscuring the basic point here.)
* It's not necessarily impossible -- if you have an "error" term that doesn't really behave like errors - say where $x$ and $y$ are related to them in just the right way - you might be able to produce patterns something like these. However, we make assumptions about the error term, such as that it's not related to $x$, for example, and has zero mean; we'd have to break at least some of those sorts of assumptions to do it. (In many cases you may have reason to conclude that such effects should be absent or at least relatively small.)
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
|
Below are those residual plots with the approximate mean and spread of points (limits that include most of the values) at each value of fitted (and hence of $x$) marked in - to a rough approximation i
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
Below are those residual plots with the approximate mean and spread of points (limits that include most of the values) at each value of fitted (and hence of $x$) marked in - to a rough approximation indicating the conditional mean (red) and conditional mean $\pm$ (roughly!) twice the conditional standard deviation (purple):
The second plot shows the mean residual doesn't change with the fitted values (and so is doesn't change with $x$), but the spread of the residuals (and hence of the $y$'s about the fitted line) is increasing as the fitted values (or $x$) changes. That is, the spread is not constant. Heteroskedasticity.
the third plot shows that the residuals are mostly negative when the fitted value is small, positive when the fitted value is in the middle and negative when the fitted value is large. That is, the spread is approximately constant, but the conditional mean is not - the fitted line doesn't describe how $y$ behaves as $x$ changes, since the relationship is curved.
Isn't it possible that it is linear, but that the errors are either not normally distributed, or else that they are normally distributed, but do not center around zero?
Not really*, in those situations the plots look different to the third plot.
(i) If the errors were normal but not centered at zero, but at $\theta$, say, then the intercept would pick up the mean error, and so the estimated intercept would be an estimate of $\beta_0+\theta$ (that would be its expected value, but it is estimated with error). Consequently, your residuals would still have conditional mean zero, and so the plot would look like the first plot above.
(ii) If the errors are not normally distributed the pattern of dots might be densest somewhere other than the center line (if the data were skewed), say, but the local mean residual would still be near 0.
Here the purple lines still represent a (very) roughly 95% interval, but it's no longer symmetric. (I'm glossing over a couple of issues to avoid obscuring the basic point here.)
* It's not necessarily impossible -- if you have an "error" term that doesn't really behave like errors - say where $x$ and $y$ are related to them in just the right way - you might be able to produce patterns something like these. However, we make assumptions about the error term, such as that it's not related to $x$, for example, and has zero mean; we'd have to break at least some of those sorts of assumptions to do it. (In many cases you may have reason to conclude that such effects should be absent or at least relatively small.)
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
Below are those residual plots with the approximate mean and spread of points (limits that include most of the values) at each value of fitted (and hence of $x$) marked in - to a rough approximation i
|
5,989
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
|
You wrote
The second plot seems to indicate that the absolute value of the
residuals is strongly positively correlated with the fitted values,
It doesn't "seem" to, it does. And that's what heteroskedastic means.
Then you give a matrix of all 1s, which is irrelevant; correlation can exist and be less than 1.
Then you write
Also, why does the third plot necessarily indicate non-linearity?
Isn't it possible that it is linear, but that the errors are either
not normally distributed, or else that they are normally distributed,
but do not center around zero?
They do center around 0. Half or so are below 0, half above. It's harder to tell if they are normally distributed from this plot, but another plot that is usually recommended is a quantile normal plot of the residuals, and that would show whether they are normal or not.
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
|
You wrote
The second plot seems to indicate that the absolute value of the
residuals is strongly positively correlated with the fitted values,
It doesn't "seem" to, it does. And that's what hetero
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
You wrote
The second plot seems to indicate that the absolute value of the
residuals is strongly positively correlated with the fitted values,
It doesn't "seem" to, it does. And that's what heteroskedastic means.
Then you give a matrix of all 1s, which is irrelevant; correlation can exist and be less than 1.
Then you write
Also, why does the third plot necessarily indicate non-linearity?
Isn't it possible that it is linear, but that the errors are either
not normally distributed, or else that they are normally distributed,
but do not center around zero?
They do center around 0. Half or so are below 0, half above. It's harder to tell if they are normally distributed from this plot, but another plot that is usually recommended is a quantile normal plot of the residuals, and that would show whether they are normal or not.
|
Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model
You wrote
The second plot seems to indicate that the absolute value of the
residuals is strongly positively correlated with the fitted values,
It doesn't "seem" to, it does. And that's what hetero
|
5,990
|
Measures of variable importance in random forests
|
The first one can be 'interpreted' as follows: if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this predictor's values over your dataset), should have a negative influence on prediction, i.e.: using the same model to predict from data that is the same except for the one variable, should give worse predictions.
So, you take a predictive measure (MSE) with the original dataset and then with the 'permuted' dataset, and you compare them somehow. One way, particularly since we expect the original MSE to always be smaller, the difference can be taken. Finally, for making the values comparable over variables, these are scaled.
For the second one: at each split, you can calculate how much this split reduces node impurity (for regression trees, indeed, the difference between RSS before and after the split). This is summed over all splits for that variable, over all trees.
Note: a good read is Elements of Statistical Learning by Hastie, Tibshirani and Friedman...
|
Measures of variable importance in random forests
|
The first one can be 'interpreted' as follows: if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this pred
|
Measures of variable importance in random forests
The first one can be 'interpreted' as follows: if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this predictor's values over your dataset), should have a negative influence on prediction, i.e.: using the same model to predict from data that is the same except for the one variable, should give worse predictions.
So, you take a predictive measure (MSE) with the original dataset and then with the 'permuted' dataset, and you compare them somehow. One way, particularly since we expect the original MSE to always be smaller, the difference can be taken. Finally, for making the values comparable over variables, these are scaled.
For the second one: at each split, you can calculate how much this split reduces node impurity (for regression trees, indeed, the difference between RSS before and after the split). This is summed over all splits for that variable, over all trees.
Note: a good read is Elements of Statistical Learning by Hastie, Tibshirani and Friedman...
|
Measures of variable importance in random forests
The first one can be 'interpreted' as follows: if a predictor is important in your current model, then assigning other values for that predictor randomly but 'realistically' (i.e.: permuting this pred
|
5,991
|
Measures of variable importance in random forests
|
Random Forest importance metrics as implemented in the randomForest package in R have quirks in that correlated predictors get low importance values.
http://bioinformatics.oxfordjournals.org/content/early/2010/04/12/bioinformatics.btq134.full.pdf
I have a modified implementation of random forests out on CRAN which implements their approach of estimating empirical p values and false discovery rates, here
http://cran.r-project.org/web/packages/pRF/index.html
|
Measures of variable importance in random forests
|
Random Forest importance metrics as implemented in the randomForest package in R have quirks in that correlated predictors get low importance values.
http://bioinformatics.oxfordjournals.org/content/
|
Measures of variable importance in random forests
Random Forest importance metrics as implemented in the randomForest package in R have quirks in that correlated predictors get low importance values.
http://bioinformatics.oxfordjournals.org/content/early/2010/04/12/bioinformatics.btq134.full.pdf
I have a modified implementation of random forests out on CRAN which implements their approach of estimating empirical p values and false discovery rates, here
http://cran.r-project.org/web/packages/pRF/index.html
|
Measures of variable importance in random forests
Random Forest importance metrics as implemented in the randomForest package in R have quirks in that correlated predictors get low importance values.
http://bioinformatics.oxfordjournals.org/content/
|
5,992
|
Standard error clustering in R (either manually or in plm)
|
Edit as of December 2021:
Probably the easiest way to get clustered standard errors in R now is via the the feols function in the fixest package or felm function in the lfe package:
feols in fixest: Clustering syntax and standard error computational procedure
felm in lfe: CRAN documentation
Original answers and some subsequent edits:
For White standard errors clustered by group with the plm framework try
coeftest(model.plm, vcov=vcovHC(model.plm,type="HC0",cluster="group"))
where model.plm is a plm model.
See also this link
http://www.inside-r.org/packages/cran/plm/docs/vcovHC or the plm package documentation
EDIT:
For two-way clustering (e.g. group and time) see the following link:
http://people.su.se/~ma/clustering.pdf
Here is another helpful guide for the plm package specifically that explains different options for clustered standard errors:
http://www.princeton.edu/~otorres/Panel101R.pdf
Clustering and other information, especially for Stata, can be found here:
http://www.kellogg.northwestern.edu/faculty/petersen/htm/papers/se/se_programming.htm
EDIT 2:
Here are examples that compare R and stata: http://www.richard-bluhm.com/clustered-ses-in-r-and-stata-2/
Also, the multiwayvcov may be helpful. This post provides a helpful overview: http://rforpublichealth.blogspot.dk/2014/10/easy-clustered-standard-errors-in-r.html
From the documentation:
library(multiwayvcov)
library(lmtest)
data(petersen)
m1 <- lm(y ~ x, data = petersen)
# Cluster by firm
vcov_firm <- cluster.vcov(m1, petersen$firmid)
coeftest(m1, vcov_firm)
# Cluster by year
vcov_year <- cluster.vcov(m1, petersen$year)
coeftest(m1, vcov_year)
# Cluster by year using a formula
vcov_year_formula <- cluster.vcov(m1, ~ year)
coeftest(m1, vcov_year_formula)
# Double cluster by firm and year
vcov_both <- cluster.vcov(m1, cbind(petersen$firmid, petersen$year))
coeftest(m1, vcov_both)
# Double cluster by firm and year using a formula
vcov_both_formula <- cluster.vcov(m1, ~ firmid + year)
coeftest(m1, vcov_both_formula)
|
Standard error clustering in R (either manually or in plm)
|
Edit as of December 2021:
Probably the easiest way to get clustered standard errors in R now is via the the feols function in the fixest package or felm function in the lfe package:
feols in fixest:
|
Standard error clustering in R (either manually or in plm)
Edit as of December 2021:
Probably the easiest way to get clustered standard errors in R now is via the the feols function in the fixest package or felm function in the lfe package:
feols in fixest: Clustering syntax and standard error computational procedure
felm in lfe: CRAN documentation
Original answers and some subsequent edits:
For White standard errors clustered by group with the plm framework try
coeftest(model.plm, vcov=vcovHC(model.plm,type="HC0",cluster="group"))
where model.plm is a plm model.
See also this link
http://www.inside-r.org/packages/cran/plm/docs/vcovHC or the plm package documentation
EDIT:
For two-way clustering (e.g. group and time) see the following link:
http://people.su.se/~ma/clustering.pdf
Here is another helpful guide for the plm package specifically that explains different options for clustered standard errors:
http://www.princeton.edu/~otorres/Panel101R.pdf
Clustering and other information, especially for Stata, can be found here:
http://www.kellogg.northwestern.edu/faculty/petersen/htm/papers/se/se_programming.htm
EDIT 2:
Here are examples that compare R and stata: http://www.richard-bluhm.com/clustered-ses-in-r-and-stata-2/
Also, the multiwayvcov may be helpful. This post provides a helpful overview: http://rforpublichealth.blogspot.dk/2014/10/easy-clustered-standard-errors-in-r.html
From the documentation:
library(multiwayvcov)
library(lmtest)
data(petersen)
m1 <- lm(y ~ x, data = petersen)
# Cluster by firm
vcov_firm <- cluster.vcov(m1, petersen$firmid)
coeftest(m1, vcov_firm)
# Cluster by year
vcov_year <- cluster.vcov(m1, petersen$year)
coeftest(m1, vcov_year)
# Cluster by year using a formula
vcov_year_formula <- cluster.vcov(m1, ~ year)
coeftest(m1, vcov_year_formula)
# Double cluster by firm and year
vcov_both <- cluster.vcov(m1, cbind(petersen$firmid, petersen$year))
coeftest(m1, vcov_both)
# Double cluster by firm and year using a formula
vcov_both_formula <- cluster.vcov(m1, ~ firmid + year)
coeftest(m1, vcov_both_formula)
|
Standard error clustering in R (either manually or in plm)
Edit as of December 2021:
Probably the easiest way to get clustered standard errors in R now is via the the feols function in the fixest package or felm function in the lfe package:
feols in fixest:
|
5,993
|
Standard error clustering in R (either manually or in plm)
|
After a lot of reading, I found the solution for doing clustering within the lm framework.
There's an excellent white paper by Mahmood Arai that provides a tutorial on clustering in the lm framework, which he does with degrees-of-freedom corrections instead of my messy attempts above. He provides his functions for both one- and two-way clustering covariance matrices here.
Finally, although the content isn't available free, Angrist and Pischke's Mostly Harmless Econometrics has a section on clustering that was very helpful.
Update on 4/27/2015 to add code from blog post.
api=read.csv("api.csv") #create the variable api from the corresponding csv
attach(api) # attach of data.frame objects
api1=api[c(1:6,8:310),] # one missing entry in row nr. 7
modell.api=lm(API00 ~ GROWTH + EMER + YR_RND, data=api1) # creation of a simple linear model for API00 using the regressors Growth, Emer and Yr_rnd.
##creation of the function according to Arai:
clx <- function(fm, dfcw, cluster) {
library(sandwich)
library(lmtest)
library(zoo)
M <- length(unique(cluster))
N <- length(cluster)
dfc <- (M/(M-1))*((N-1)/(N-fm$rank)) # anpassung der freiheitsgrade
u <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum))
vcovCL <-dfc * sandwich (fm, meat = crossprod(u)/N) * dfcw
coeftest(fm, vcovCL)
}
clx(modell.api, 1, api1$DNUM) #creation of results.
|
Standard error clustering in R (either manually or in plm)
|
After a lot of reading, I found the solution for doing clustering within the lm framework.
There's an excellent white paper by Mahmood Arai that provides a tutorial on clustering in the lm framework,
|
Standard error clustering in R (either manually or in plm)
After a lot of reading, I found the solution for doing clustering within the lm framework.
There's an excellent white paper by Mahmood Arai that provides a tutorial on clustering in the lm framework, which he does with degrees-of-freedom corrections instead of my messy attempts above. He provides his functions for both one- and two-way clustering covariance matrices here.
Finally, although the content isn't available free, Angrist and Pischke's Mostly Harmless Econometrics has a section on clustering that was very helpful.
Update on 4/27/2015 to add code from blog post.
api=read.csv("api.csv") #create the variable api from the corresponding csv
attach(api) # attach of data.frame objects
api1=api[c(1:6,8:310),] # one missing entry in row nr. 7
modell.api=lm(API00 ~ GROWTH + EMER + YR_RND, data=api1) # creation of a simple linear model for API00 using the regressors Growth, Emer and Yr_rnd.
##creation of the function according to Arai:
clx <- function(fm, dfcw, cluster) {
library(sandwich)
library(lmtest)
library(zoo)
M <- length(unique(cluster))
N <- length(cluster)
dfc <- (M/(M-1))*((N-1)/(N-fm$rank)) # anpassung der freiheitsgrade
u <- apply(estfun(fm),2, function(x) tapply(x, cluster, sum))
vcovCL <-dfc * sandwich (fm, meat = crossprod(u)/N) * dfcw
coeftest(fm, vcovCL)
}
clx(modell.api, 1, api1$DNUM) #creation of results.
|
Standard error clustering in R (either manually or in plm)
After a lot of reading, I found the solution for doing clustering within the lm framework.
There's an excellent white paper by Mahmood Arai that provides a tutorial on clustering in the lm framework,
|
5,994
|
Standard error clustering in R (either manually or in plm)
|
The easiest way to compute clustered standard errors in R is to use the modified summary function.
lm.object <- lm(y ~ x, data = data)
summary(lm.object, cluster=c("c"))
There's an excellent post on clustering within the lm framework. The site also provides the modified summary function for both one- and two-way clustering. You can find the function and the tutorial here.
|
Standard error clustering in R (either manually or in plm)
|
The easiest way to compute clustered standard errors in R is to use the modified summary function.
lm.object <- lm(y ~ x, data = data)
summary(lm.object, cluster=c("c"))
There's an excellent post on
|
Standard error clustering in R (either manually or in plm)
The easiest way to compute clustered standard errors in R is to use the modified summary function.
lm.object <- lm(y ~ x, data = data)
summary(lm.object, cluster=c("c"))
There's an excellent post on clustering within the lm framework. The site also provides the modified summary function for both one- and two-way clustering. You can find the function and the tutorial here.
|
Standard error clustering in R (either manually or in plm)
The easiest way to compute clustered standard errors in R is to use the modified summary function.
lm.object <- lm(y ~ x, data = data)
summary(lm.object, cluster=c("c"))
There's an excellent post on
|
5,995
|
Standard error clustering in R (either manually or in plm)
|
If you don't have a time index, you don't need one: plm will add a fictitious one by itself, and it won't be used unless you ask for it. So this call should work:
> x <- plm(price ~ carat, data = diamonds, index = "cut")
Error in pdim.default(index[[1]], index[[2]]) :
duplicate couples (time-id)
Except that it doesn't, which suggests you've hit a bug in plm. (This bug has now been fixed in SVN. You can install the development version from here.)
But since this would be a fictitious time index anyway, we can create it by ourselves:
diamonds$ftime <- 1:NROW(diamonds) ##fake time
Now this works:
x <- plm(price ~ carat, data = diamonds, index = c("cut", "ftime"))
coeftest(x, vcov.=vcovHC)
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## carat 7871.08 138.44 56.856 < 2.2e-16 ***
## ---
## Signif. codes: 0 β***β 0.001 β**β 0.01 β*β 0.05 β.β 0.1 β β 1
Important note: vcovHC.plm() in plm will by default estimate Arellano clustered by group SEs. Which is different from what vcovHC.lm() in sandwich will estimate (e.g. the vcovHC SEs in the original question), that is heteroscedasticity-consistent SEs with no clustering.
A separate approach to this is sticking to lm dummy variable regressions and the multiwayvcov package.
library("multiwayvcov")
fe.lsdv <- lm(price ~ carat + factor(cut) + 0, data = diamonds)
coeftest(fe.lsdv, vcov.= function(y) cluster.vcov(y, ~ cut, df_correction = FALSE))
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## carat 7871.08 138.44 56.856 < 2.2e-16 ***
## factor(cut)Fair -3875.47 144.83 -26.759 < 2.2e-16 ***
## factor(cut)Good -2755.14 117.56 -23.436 < 2.2e-16 ***
## factor(cut)Very Good -2365.33 111.63 -21.188 < 2.2e-16 ***
## factor(cut)Premium -2436.39 123.48 -19.731 < 2.2e-16 ***
## factor(cut)Ideal -2074.55 97.30 -21.321 < 2.2e-16 ***
## ---
## Signif. codes: 0 β***β 0.001 β**β 0.01 β*β 0.05 β.β 0.1 β β 1
In both cases you will get the Arellano (1987) SEs with clustering by group. The multiwayvcov package is a direct and significant evolution of Arai's original clustering functions.
You can also look at the resulting variance-covariance matrix from both approaches, yielding the same variance estimate for carat:
vcov.plm <- vcovHC(x)
vcov.lsdv <- cluster.vcov(fe.lsdv, ~ cut, df_correction = FALSE)
vcov.plm
## carat
## carat 19165.28
diag(vcov.lsdv)
## carat factor(cut)Fair factor(cut)Good factor(cut)Very Good factor(cut)Premium factor(cut)Ideal
## 19165.283 20974.522 13820.365 12462.243 15247.584 9467.263
|
Standard error clustering in R (either manually or in plm)
|
If you don't have a time index, you don't need one: plm will add a fictitious one by itself, and it won't be used unless you ask for it. So this call should work:
> x <- plm(price ~ carat, data = dia
|
Standard error clustering in R (either manually or in plm)
If you don't have a time index, you don't need one: plm will add a fictitious one by itself, and it won't be used unless you ask for it. So this call should work:
> x <- plm(price ~ carat, data = diamonds, index = "cut")
Error in pdim.default(index[[1]], index[[2]]) :
duplicate couples (time-id)
Except that it doesn't, which suggests you've hit a bug in plm. (This bug has now been fixed in SVN. You can install the development version from here.)
But since this would be a fictitious time index anyway, we can create it by ourselves:
diamonds$ftime <- 1:NROW(diamonds) ##fake time
Now this works:
x <- plm(price ~ carat, data = diamonds, index = c("cut", "ftime"))
coeftest(x, vcov.=vcovHC)
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## carat 7871.08 138.44 56.856 < 2.2e-16 ***
## ---
## Signif. codes: 0 β***β 0.001 β**β 0.01 β*β 0.05 β.β 0.1 β β 1
Important note: vcovHC.plm() in plm will by default estimate Arellano clustered by group SEs. Which is different from what vcovHC.lm() in sandwich will estimate (e.g. the vcovHC SEs in the original question), that is heteroscedasticity-consistent SEs with no clustering.
A separate approach to this is sticking to lm dummy variable regressions and the multiwayvcov package.
library("multiwayvcov")
fe.lsdv <- lm(price ~ carat + factor(cut) + 0, data = diamonds)
coeftest(fe.lsdv, vcov.= function(y) cluster.vcov(y, ~ cut, df_correction = FALSE))
##
## t test of coefficients:
##
## Estimate Std. Error t value Pr(>|t|)
## carat 7871.08 138.44 56.856 < 2.2e-16 ***
## factor(cut)Fair -3875.47 144.83 -26.759 < 2.2e-16 ***
## factor(cut)Good -2755.14 117.56 -23.436 < 2.2e-16 ***
## factor(cut)Very Good -2365.33 111.63 -21.188 < 2.2e-16 ***
## factor(cut)Premium -2436.39 123.48 -19.731 < 2.2e-16 ***
## factor(cut)Ideal -2074.55 97.30 -21.321 < 2.2e-16 ***
## ---
## Signif. codes: 0 β***β 0.001 β**β 0.01 β*β 0.05 β.β 0.1 β β 1
In both cases you will get the Arellano (1987) SEs with clustering by group. The multiwayvcov package is a direct and significant evolution of Arai's original clustering functions.
You can also look at the resulting variance-covariance matrix from both approaches, yielding the same variance estimate for carat:
vcov.plm <- vcovHC(x)
vcov.lsdv <- cluster.vcov(fe.lsdv, ~ cut, df_correction = FALSE)
vcov.plm
## carat
## carat 19165.28
diag(vcov.lsdv)
## carat factor(cut)Fair factor(cut)Good factor(cut)Very Good factor(cut)Premium factor(cut)Ideal
## 19165.283 20974.522 13820.365 12462.243 15247.584 9467.263
|
Standard error clustering in R (either manually or in plm)
If you don't have a time index, you don't need one: plm will add a fictitious one by itself, and it won't be used unless you ask for it. So this call should work:
> x <- plm(price ~ carat, data = dia
|
5,996
|
What exactly is a Bayesian model?
|
In essence, one where inference is based on using Bayes theorem to obtain a posterior distribution for a quantity or quantities of interest form some model (such as parameter values) based on some prior distribution for the relevant unknown parameters and the likelihood from the model.
i.e. from a distributional model of some form, $f(X_i|\mathbf{\theta})$, and a prior $p(\mathbf{\theta})$, someone might seek to obtain the posterior $p(\mathbf{\theta}|\mathbf{X})$.
A simple example of a Bayesian model is discussed in this question, and in the comments of this one - Bayesian linear regression, discussed in more detail in Wikipedia here. Searches turn up discussions of a number of Bayesian models here.
But there are other things one might try to do with a Bayesian analysis besides merely fit a model - see, for example, Bayesian decision theory.
|
What exactly is a Bayesian model?
|
In essence, one where inference is based on using Bayes theorem to obtain a posterior distribution for a quantity or quantities of interest form some model (such as parameter values) based on some pri
|
What exactly is a Bayesian model?
In essence, one where inference is based on using Bayes theorem to obtain a posterior distribution for a quantity or quantities of interest form some model (such as parameter values) based on some prior distribution for the relevant unknown parameters and the likelihood from the model.
i.e. from a distributional model of some form, $f(X_i|\mathbf{\theta})$, and a prior $p(\mathbf{\theta})$, someone might seek to obtain the posterior $p(\mathbf{\theta}|\mathbf{X})$.
A simple example of a Bayesian model is discussed in this question, and in the comments of this one - Bayesian linear regression, discussed in more detail in Wikipedia here. Searches turn up discussions of a number of Bayesian models here.
But there are other things one might try to do with a Bayesian analysis besides merely fit a model - see, for example, Bayesian decision theory.
|
What exactly is a Bayesian model?
In essence, one where inference is based on using Bayes theorem to obtain a posterior distribution for a quantity or quantities of interest form some model (such as parameter values) based on some pri
|
5,997
|
What exactly is a Bayesian model?
|
A Bayesian model is just a model that draws its inferences from the posterior distribution, i.e. utilizes a prior distribution and a likelihood which are related by Bayes' theorem.
|
What exactly is a Bayesian model?
|
A Bayesian model is just a model that draws its inferences from the posterior distribution, i.e. utilizes a prior distribution and a likelihood which are related by Bayes' theorem.
|
What exactly is a Bayesian model?
A Bayesian model is just a model that draws its inferences from the posterior distribution, i.e. utilizes a prior distribution and a likelihood which are related by Bayes' theorem.
|
What exactly is a Bayesian model?
A Bayesian model is just a model that draws its inferences from the posterior distribution, i.e. utilizes a prior distribution and a likelihood which are related by Bayes' theorem.
|
5,998
|
What exactly is a Bayesian model?
|
Can I call a model wherein Bayes' Theorem is used a "Bayesian model"?
No
I am afraid such a definition might be too broad.
You are right. Bayes' theorem is a legitimate relation between marginal event probabilities and conditional probabilities. It holds regardless of your interpretation of probability.
So what exactly is a Bayesian model?
If you're using prior and posterior concepts anywhere in your exposition or interpretation, then you're likely to be using model Bayesian, but this is not the absolute rule, because these concepts are also used in non-Bayesian approaches.
In a broader sense though you must be subscribing to Bayesian interpretation of probability as a subjective belief. This little theorem of Bayes was extended and stretched by some people into this entire world view and even, shall I say, philosophy. If you belong to this camp then you are Bayesian. Bayes had no idea this would happen to his theorem. He'd be horrified, me thinks.
|
What exactly is a Bayesian model?
|
Can I call a model wherein Bayes' Theorem is used a "Bayesian model"?
No
I am afraid such a definition might be too broad.
You are right. Bayes' theorem is a legitimate relation between marginal e
|
What exactly is a Bayesian model?
Can I call a model wherein Bayes' Theorem is used a "Bayesian model"?
No
I am afraid such a definition might be too broad.
You are right. Bayes' theorem is a legitimate relation between marginal event probabilities and conditional probabilities. It holds regardless of your interpretation of probability.
So what exactly is a Bayesian model?
If you're using prior and posterior concepts anywhere in your exposition or interpretation, then you're likely to be using model Bayesian, but this is not the absolute rule, because these concepts are also used in non-Bayesian approaches.
In a broader sense though you must be subscribing to Bayesian interpretation of probability as a subjective belief. This little theorem of Bayes was extended and stretched by some people into this entire world view and even, shall I say, philosophy. If you belong to this camp then you are Bayesian. Bayes had no idea this would happen to his theorem. He'd be horrified, me thinks.
|
What exactly is a Bayesian model?
Can I call a model wherein Bayes' Theorem is used a "Bayesian model"?
No
I am afraid such a definition might be too broad.
You are right. Bayes' theorem is a legitimate relation between marginal e
|
5,999
|
What exactly is a Bayesian model?
|
A statistical model can be seen as a procedure/story describing how some data came to be. A Bayesian model is a statistical model where you use probability to represent all uncertainty within the model, both the uncertainty regarding the output but also the uncertainty regarding the input (aka parameters) to the model. The whole prior/posterior/Bayes theorem thing follows on this, but in my opinion, using probability for everything is what makes it Bayesian (and indeed a better word would perhaps just be something like probabilistic model).
That means that most other statistical models can be "cast into" a Bayesian model by modifying them to be using probability everywhere. This is especially true for models that rely on maximum likelihood, as maximum likelihood model fitting is a strict subset to Bayesian model fitting.
|
What exactly is a Bayesian model?
|
A statistical model can be seen as a procedure/story describing how some data came to be. A Bayesian model is a statistical model where you use probability to represent all uncertainty within the mode
|
What exactly is a Bayesian model?
A statistical model can be seen as a procedure/story describing how some data came to be. A Bayesian model is a statistical model where you use probability to represent all uncertainty within the model, both the uncertainty regarding the output but also the uncertainty regarding the input (aka parameters) to the model. The whole prior/posterior/Bayes theorem thing follows on this, but in my opinion, using probability for everything is what makes it Bayesian (and indeed a better word would perhaps just be something like probabilistic model).
That means that most other statistical models can be "cast into" a Bayesian model by modifying them to be using probability everywhere. This is especially true for models that rely on maximum likelihood, as maximum likelihood model fitting is a strict subset to Bayesian model fitting.
|
What exactly is a Bayesian model?
A statistical model can be seen as a procedure/story describing how some data came to be. A Bayesian model is a statistical model where you use probability to represent all uncertainty within the mode
|
6,000
|
What exactly is a Bayesian model?
|
Your question is more on the semantic side: when can I call a model "Bayesian"?
Drawing conclusions from this excellent paper:
Fienberg, S. E. (2006). When did bayesian inference become "bayesian"? Bayesian Analysis,
1(1):1-40.
there are 2 answers:
Your model is first Bayesian if it uses Bayes' rule (that's the "algorithm").
More broadly, if you infer (hidden) causes from a generative model of your system, then you are Bayesian (that's the "function").
Surprisingly, the "Bayesian models" terminology that is used all over the field only settled down around the 60s. There are many things to learn about machine learning just by looking at its history!
|
What exactly is a Bayesian model?
|
Your question is more on the semantic side: when can I call a model "Bayesian"?
Drawing conclusions from this excellent paper:
Fienberg, S. E. (2006). When did bayesian inference become "bayesian"? B
|
What exactly is a Bayesian model?
Your question is more on the semantic side: when can I call a model "Bayesian"?
Drawing conclusions from this excellent paper:
Fienberg, S. E. (2006). When did bayesian inference become "bayesian"? Bayesian Analysis,
1(1):1-40.
there are 2 answers:
Your model is first Bayesian if it uses Bayes' rule (that's the "algorithm").
More broadly, if you infer (hidden) causes from a generative model of your system, then you are Bayesian (that's the "function").
Surprisingly, the "Bayesian models" terminology that is used all over the field only settled down around the 60s. There are many things to learn about machine learning just by looking at its history!
|
What exactly is a Bayesian model?
Your question is more on the semantic side: when can I call a model "Bayesian"?
Drawing conclusions from this excellent paper:
Fienberg, S. E. (2006). When did bayesian inference become "bayesian"? B
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.