Should we standardize the data while doing Gaussian process regression?
I agree with Alexey Zaytsev's answer (and the discussion of that answer) that standardising is good to do for various reasons also on the outputs. However, I just want to add an example of why it can be important to standardise the outputs: small and noisy output values. The figure produced by the MATLAB code below illustrates this on a very simple example. The blue dots represent noisy samples from a sin() function; the orange/red circles represent the predicted Gaussian process values. In the top subplot the amplitude is unity (with noise). In the second subplot the output values have been scaled by 1e-5, and the Gaussian process model (with default settings) predicts a constant: the default settings optimise over the noise parameter, which has a fairly high lower bound. In the third subplot the noise parameter is fixed near zero and not optimised over; the model over-fits in this case. The fourth subplot shows the standardised outputs and a model fitted on that data. In the last subplot the outputs are transformed back from the standardised scale (not back to the original unit-amplitude values), and the predicted values are transformed with them. Note that the predicted values are rescaled using the mean and standard deviation from the training-data standardisation.
```matlab
function importance_normgp()
% Importance of standardising small, noisy outputs in GP regression

% data
x = 0:0.01:1;
x = x(:);
xp = linspace(0.1, 0.9, length(x));
xp = xp(:);

% noisy samples from a sine
y = sin(2*pi*x) + 5e-1*randn(length(x), 1);

% train and predict gp model
mdl = fitrgp(x, y);
yp = predict(mdl, xp);

figure
subplot(5, 1, 1)
plot(x, y, '.')
hold on
plot(xp, yp, 'o')
title('original problem')

%% make outputs small (below noise lower bound)
ym = y/1e5;
mdlm = fitrgp(x, ym);
ypm = predict(mdlm, xp);
subplot(5, 1, 2)
plot(x, ym, '.')
hold on
plot(xp, ypm, 'o')
title('small outputs')

%% outputs small and sigma fixed near zero
mdlm1 = fitrgp(x, ym, 'Sigma', 1e-12, 'ConstantSigma', true, ...
    'SigmaLowerBound', eps);
ypm1 = predict(mdlm1, xp);
subplot(5, 1, 3)
plot(x, ym, '.')
hold on
plot(xp, ypm1, 'o')
title('small outputs and sigma = 0')

%% normalise/standardise
nu = mean(ym);
sigma = std(ym);
yms = (ym - nu)/sigma;
mdlms = fitrgp(x, yms);
ypms = predict(mdlms, xp);
subplot(5, 1, 4)
plot(x, yms, '.')
hold on
plot(xp, ypms, 'o')
title('standardised outputs')

% rescale predictions with the training mean and std
ypms2 = ypms*sigma + nu;
subplot(5, 1, 5)
plot(x, ym, '.')
hold on
plot(xp, ypms2, 'o')
title('scaled predictions')
legend('true model', 'prediction', 'Location', 'best')
end
```
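For readers without MATLAB, here is a rough Python analogue of the same demonstration. This is a sketch under my own assumptions: scikit-learn's GaussianProcessRegressor stands in for fitrgp, and the kernel choice (RBF plus a white-noise term, whose default lower bounds play the role of fitrgp's noise lower bound) is mine, not the answer's.

```python
# Sketch: GP regression on tiny outputs vs. standardised outputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 101).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + 0.5 * rng.normal(size=len(x))

def fit_predict(y_train):
    # Default hyperparameter bounds (1e-5, 1e5) on the noise term are the
    # analogue of fitrgp's fairly high noise lower bound.
    kernel = 1.0 * RBF() + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel)
    gp.fit(x, y_train)
    return gp.predict(x)

# Tiny outputs: the fit can degenerate toward a near-constant prediction,
# as in the second subplot of the MATLAB demo.
y_small = y / 1e5
pred_small = fit_predict(y_small)

# Standardise, fit, then undo the standardisation on the predictions,
# using the training mean and std (fourth/fifth subplots).
mu, sd = y_small.mean(), y_small.std()
pred_std = fit_predict((y_small - mu) / sd) * sd + mu
```

The key point survives the library swap: the back-transformation must use the mean and standard deviation computed from the training data.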
Should we standardize the data while doing Gaussian process regression?
I want to add to Alexey Zaytsev's answer. For the outputs $y$: we don't need to assume zero mean; it just simplifies the calculations when deriving the conditional distribution of the new (predicted/test) data given the training data. Also, we only need to de-mean; we don't have to scale. For the inputs $x$: the kernels invoke a measure of distance between sample points. E.g., the RBF kernel looks at $\Vert x-x^\prime\Vert$, and it doesn't make sense to compare distances when the coordinates are on different scales, so you need to normalize the input data first (unless they are on the same scale already).
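The scale-sensitivity of the RBF distance is easy to see numerically. This is a toy illustration of mine, with made-up units and a hypothetical per-coordinate standard deviation:

```python
# With unscaled features, the Euclidean distance inside an RBF kernel is
# dominated by whichever coordinate happens to have the largest units.
import numpy as np

x1 = np.array([1.0, 1000.0])   # e.g. [metres, grams]
x2 = np.array([2.0, 1500.0])

d_raw = np.linalg.norm(x1 - x2)            # ~500, driven almost entirely by grams

# Divide each coordinate difference by a (hypothetical) training-set std
stds = np.array([0.5, 250.0])
d_std = np.linalg.norm((x1 - x2) / stds)   # both coordinates now contribute equally
```

After standardising, both coordinates contribute a difference of 2 units, so the distance reflects both features rather than just the grams axis.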
Formula for 95% confidence interval for $R^2$
You can always bootstrap it:

```r
> library(boot)
> foo <- boot(mtcars, function(data, indices) summary(lm(mpg ~ wt, data[indices, ]))$r.squared, R = 10000)
> foo$t0
[1] 0.7528328
> quantile(foo$t, c(0.025, 0.975))
     2.5%     97.5%
0.6303133 0.8584067
```

Carpenter & Bithell (2000, Statistics in Medicine) provide a readable introduction to bootstrapping confidence intervals, though not specifically focused on $R^2$.
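For readers working in Python, the same percentile-bootstrap recipe can be sketched as follows. The data here are synthetic (my own); for a simple linear regression, $R^2$ equals the squared correlation, which keeps the resampling loop short:

```python
# Percentile bootstrap of R^2 for a simple linear regression.
import numpy as np

rng = np.random.default_rng(42)
n = 50
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)

def r_squared(x, y):
    # For one predictor, R^2 is the squared Pearson correlation
    return np.corrcoef(x, y)[0, 1] ** 2

boot = np.empty(10_000)
for b in range(boot.size):
    idx = rng.integers(0, n, size=n)       # resample rows with replacement
    boot[b] = r_squared(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile interval
```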
Formula for 95% confidence interval for $R^2$
In R, you can make use of the CI.Rsq() function provided by the psychometric package. As for the formula it applies, see Cohen et al. (2003), Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, p. 88: $SE_{R^{2}} = \sqrt{\frac{4R^{2}(1-R^{2})^{2}(n-k-1)^{2}}{(n^2 - 1)(n+3)}}$ Then, the 95% CI is your $R^{2} \pm 2 \cdot SE_{R^{2}}$.
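The formula is straightforward to transcribe. Here is a small Python version (the function name is my own; it is intended to mirror what CI.Rsq computes, as I understand it):

```python
# Approximate 95% CI for R^2, per Cohen et al. (2003), p. 88.
import math

def ci_rsq(r2, n, k):
    """95% CI for R^2 with n observations and k predictors."""
    se = math.sqrt(4 * r2 * (1 - r2) ** 2 * (n - k - 1) ** 2
                   / ((n ** 2 - 1) * (n + 3)))
    return r2 - 2 * se, r2 + 2 * se

# e.g. R^2 = 0.75 from a one-predictor model on 32 observations
lo, hi = ci_rsq(0.75, n=32, k=1)   # roughly (0.613, 0.887)
```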
Ink to data ratio and plot backgrounds
The data-ink ratio

This concept is due to the very influential Edward Tufte, of Yale University, who described it in The Visual Display of Quantitative Information. He distinguishes "data ink" (which includes points, bars etc. but also textual or graphical labels) from erasable ink (including gridlines, axes, borders, and redundant information). The data-ink ratio is simply the proportion of the ink used which can't be erased. There is a discussion of how these data-ink principles might apply to computer visualisations on the UX Stack Exchange site.

Why do some experts prefer a grey background?

Hadley Wickham justified his choice of default background in his book on ggplot2: the grey background gives the plot a similar colour (in a typographical sense) to the remainder of the text, ensuring that the graphics fit in with the flow of the text without jumping out with a bright white background, and it creates a continuous field of colour which ensures that the plot is perceived as a single visual entity. The principle seems to be to stop the plot "jumping out" at the viewer on a printed page and to provide visual unity. Personally I also like the reduced screen glare. He also justified the white gridlines on the basis that they can easily be "tuned out". I agree with Dianne Cook in the comments that this lets the data stand out above the gridlines, reducing visual clutter. The white gridlines are one advantage of a slightly darker background. Interestingly, Tufte generally avoids gridlines where they are not necessary (they do not count as "data ink"), but on some grey bar charts he overlays white gridlines. In some ways this gives a similar effect to ggplot2, but it actually puts the gridlines in the foreground, giving the bars a "striped" appearance. A particular disadvantage of this is that you can't see the next-highest gridline above a bar, making it hard to visually interpolate a bar's numerical height.
Why do some experts prefer a white background?

One of the most-viewed ggplot2 threads on Stack Overflow is "How do I change the background color?", which suggests the default is not universally popular. The colour of an element can appear quite different depending on the background colour against which it is displayed. Tufte discusses this in Chapter 5, "Color and information", of his book Envisioning Information, but not in the context of e.g. a scatter plot. Maureen Stone, a colour expert and adjunct professor at Simon Fraser University, strongly recommends a white background for various reasons, including that most colour palettes (in your examples, used to indicate the species or division) have been designed with a white background (for printing) in mind; their perceptual properties will differ against a darker background. She suggests that white has a perceptual advantage, because our colour perception is relative to "local" white, so having a white background visually available can stabilise our perception. She also gives a more practical reason that I am familiar with: a white background allows you to optimise a graph for both electronic display and printing, rather than having to prepare a separate printer-friendly version.
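The grey-panel, white-gridline style discussed above can be sketched in matplotlib if you want to try both looks yourself. The colour value is my assumption (ggplot2's default panel is, as far as I know, grey92, approximately #EBEBEB):

```python
# Minimal ggplot2-style theme: grey panel, white gridlines under the data.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_facecolor("#EBEBEB")            # light grey panel
ax.grid(color="white", linewidth=1.2)  # white gridlines
ax.set_axisbelow(True)                 # draw gridlines behind the data marks

x = np.linspace(0, 10, 50)
ax.plot(x, np.sin(x), "o")
fig.savefig("grey_theme.png")
```

Switching `set_facecolor` to "white" and the grid colour to a light grey gives the white-background alternative the second half of this answer argues for.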
Ink to data ratio and plot backgrounds
As long as the background is light enough to provide good contrast with the data marks, it's mostly a matter of aesthetics whether it's white or light gray. While the background color is "ink" in some sense, I don't think it counts as "ink" logically. There is no proportional distraction from a solid field of light gray. Conversely, the grid lines count as logical "ink" in both cases. Even though the white grid lines would consume no ink to print, they still break up the background and create extra visual processing work. I'd say the 538 grid lines take less logical ink because they have less contrast.
How to calculate precision and recall in a 3 x 3 confusion matrix
If you spell out the definitions of precision (aka positive predictive value, PPV) and recall (aka sensitivity), you see that they relate to one class independently of any other classes:

Recall or sensitivity is the proportion of cases correctly identified as belonging to class c among all cases that truly belong to class c. (Given a case truly belonging to "c", what is the probability of predicting this correctly?)

Precision or positive predictive value (PPV) is the proportion of cases correctly identified as belonging to class c among all cases of which the classifier claims that they belong to class c. In other words: of those cases predicted to belong to class c, which fraction truly belongs to class c? (Given the prediction "c", what is the probability of being correct?)

Negative predictive value (NPV): of those cases predicted not to belong to class c, which fraction truly doesn't belong to class c? (Given the prediction "not c", what is the probability of being correct?)

So you can calculate precision and recall for each of your classes. For a multi-class confusion table with true classes in rows and predicted classes in columns, recall for class c is the diagonal element divided by its row sum, and precision is the diagonal element divided by its column sum.

Source: Beleites, C.; Salzer, R. & Sergo, V.: Validation of soft classification models using partial class memberships: An extended concept of sensitivity & co. applied to grading of astrocytoma tissues, Chemom Intell Lab Syst, 122, 12-22 (2013). DOI: 10.1016/j.chemolab.2012.12.003
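The row/column-sum rule takes only a few lines. The example matrix is mine, and I assume the orientation above (rows hold the true classes, columns the predictions); some texts transpose it:

```python
# Per-class precision and recall from a multi-class confusion matrix.
import numpy as np

cm = np.array([[10, 3, 4],    # true class A
               [2, 12, 6],    # true class B
               [6, 3, 9]])    # true class C

recall = np.diag(cm) / cm.sum(axis=1)     # diagonal / row sums
precision = np.diag(cm) / cm.sum(axis=0)  # diagonal / column sums
```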
How to calculate precision and recall in a 3 x 3 confusion matrix
By reducing the data down to forced choices (classification) and not recording whether any were "close calls", you obtain minimum-information minimum-precision statistical estimates, in addition to secretly assuming a strange utility/loss/cost function and using arbitrary thresholds. It would be far better to use maximum information, which would include the probabilities of class membership and not forced choices.
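One concrete way to follow this advice (my example, not the answer's) is to evaluate the predicted class-membership probabilities with a proper scoring rule, such as log loss or the Brier score, instead of thresholded labels. Two classifiers can make identical forced choices yet differ greatly in how much information their probabilities carry:

```python
# Proper scoring rules distinguish confident from hesitant probabilities
# even when the thresholded (forced-choice) predictions are identical.
from sklearn.metrics import log_loss, brier_score_loss

y_true = [0, 0, 1, 1]
p_confident = [0.1, 0.2, 0.8, 0.9]      # close to the truth
p_hesitant = [0.45, 0.45, 0.55, 0.55]   # same forced choices at 0.5, less information

ll_c = log_loss(y_true, p_confident)
ll_h = log_loss(y_true, p_hesitant)
bs_c = brier_score_loss(y_true, p_confident)
bs_h = brier_score_loss(y_true, p_hesitant)
# Both scores reward the confident, well-calibrated probabilities,
# while accuracy at a 0.5 threshold is 100% in both cases.
```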
How to calculate precision and recall in a 3 x 3 confusion matrix
The easiest way is to not use confusion_matrix at all: use classification_report(), which gives you everything you need (per-class precision, recall, f1-score and support). Note that for a binary problem with labels 0 and 1 (1 being the positive class), scikit-learn's confusion_matrix() is laid out as

[[TN, FP]
 [FN, TP]]
How to calculate precision and recall in a 3 x 3 confusion matrix
Following is an example of a multi-class confusion matrix, assuming our class labels are A, B and C (rows: actual class, columns: predicted class):

A/P    A    B    C   Sum
A     10    3    4    17
B      2   12    6    20
C      6    3    9    18
Sum   18   18   19    55

Now we calculate three values each for precision and recall, and call them Pa, Pb, Pc and Ra, Rb, Rc. We know Precision = TP/(TP+FP). For Pa, the true positives are the actual A's predicted as A, i.e. 10; the rest of the two cells in that column, whether B or C, are false positives. So

Pa = 10/18 = 0.55, Ra = 10/17 = 0.59

For class B, the true positives are the actual B's predicted as B, i.e. the cell containing 12; the other two cells in that column are false positives. So

Pb = 12/18 = 0.67, Rb = 12/20 = 0.60

Similarly,

Pc = 9/19 = 0.47, Rc = 9/18 = 0.50

The overall performance of the classifier can be summarised by the average precision and average recall. For this we multiply the precision (or recall) of each class by the actual number of instances of that class, add them up, and divide by the total number of instances:

Avg Precision = (0.55*17 + 0.67*20 + 0.47*18)/55 = 31.21/55 = 0.57
Avg Recall = (0.59*17 + 0.60*20 + 0.50*18)/55 = 31.03/55 = 0.56

I hope it helps.
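The weighted averages above can be cross-checked with scikit-learn by rebuilding label vectors that reproduce the same confusion matrix (this class-frequency weighting is what sklearn calls average="weighted"):

```python
# Reconstruct labels from the worked confusion matrix, then compare
# sklearn's weighted-average precision/recall with the hand calculation.
import numpy as np
from sklearn.metrics import precision_score, recall_score

cm = np.array([[10, 3, 4],   # true A
               [2, 12, 6],   # true B
               [6, 3, 9]])   # true C

y_true, y_pred = [], []
for i in range(3):
    for j in range(3):
        y_true += [i] * int(cm[i, j])
        y_pred += [j] * int(cm[i, j])

avg_p = precision_score(y_true, y_pred, average="weighted")  # ~0.57
avg_r = recall_score(y_true, y_pred, average="weighted")     # ~0.56
```

Note that weighted-average recall equals overall accuracy, 31/55, since the per-class supports cancel against the row sums.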
How to calculate precision and recall in a 3 x 3 confusion matrix
If you simply want the result, my advice would be to not think too much about it and use the tools at your disposal. Here is how you can do it in Python:

```python
import pandas as pd
from sklearn.metrics import classification_report

results = pd.DataFrame(
    [[1, 1], [1, 2], [1, 3],
     [2, 1], [2, 2], [2, 3],
     [3, 1], [3, 2], [3, 3]],
    columns=['Expected', 'Predicted'])

print(results)
print()
print(classification_report(results['Expected'], results['Predicted']))
```

To get the following output:

```
   Expected  Predicted
0         1          1
1         1          2
2         1          3
3         2          1
4         2          2
5         2          3
6         3          1
7         3          2
8         3          3

             precision    recall  f1-score   support

          1       0.33      0.33      0.33         3
          2       0.33      0.33      0.33         3
          3       0.33      0.33      0.33         3

avg / total       0.33      0.33      0.33         9
```
Why are the geometric distribution and hypergeometric distribution called as such?
Yes, the terms refer to the probability mass functions (pmfs). 2,500 years ago, Euclid (in Books VIII and IX of his Elements) studied sequences of lengths having common proportions. At some point such sequences came to be known as "geometric progressions" (although the term "geometric" could, for a similar reason, just as easily have been applied to many other regular series, including those now called "arithmetic"). The probability mass function of a geometric distribution with parameter $p$ forms a geometric progression $$p, p(1-p), p(1-p)^2, \ldots, p(1-p)^n, \ldots.$$ Here the common proportion is $1-p$. Several hundred years ago a vast generalization of such progressions became important in the study of elliptic curves, differential equations, and many other deeply interconnected areas of mathematics. The generalization supposes that the relative proportions between successive terms at positions $k$ and $k+1$ may vary, but it limits the nature of that variation: the proportions must be a given rational function of $k$. Because these go "over" or "beyond" the geometric progression (for which the rational function is constant), they were termed hypergeometric, from the ancient Greek prefix ὑπέρ ("hyper"). The probability mass function of a hypergeometric distribution with parameters $N, K,$ and $n$ has the form $$p(k) = \frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$$ for suitable $k$. The ratio of successive probabilities therefore equals $$\frac{p(k+1)}{p(k)} = \frac{(K-k)(n-k)}{(k+1)(N-K-n+k+1)},$$ a rational function of $k$ of degree $(2,2)$. This places the probabilities into a (particular kind of) hypergeometric progression.
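The stated ratio of successive probabilities is easy to verify numerically. A quick check of mine, using exact binomial coefficients and arbitrarily chosen parameters:

```python
# Check that p(k+1)/p(k) for the hypergeometric pmf matches the
# rational function (K-k)(n-k) / ((k+1)(N-K-n+k+1)).
from math import comb

def hyper_pmf(k, N, K, n):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, K, n = 20, 7, 12
for k in range(2, 6):   # a few interior values of the support
    lhs = hyper_pmf(k + 1, N, K, n) / hyper_pmf(k, N, K, n)
    rhs = (K - k) * (n - k) / ((k + 1) * (N - K - n + k + 1))
    assert abs(lhs - rhs) < 1e-9
```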
16,313
Why are the geometric distribution and hypergeometric distribution called as such?
According to one source, it is because for the geometric distribution pmf(k) is the geometric mean of pmf(k-1) and pmf(k+1). The geometric mean of two numbers A and B is $\sqrt{AB}$. Classically this problem was interpreted as finding the length of the sides of a square with area equal to that of a rectangle with sides of length A and B, a geometric problem.
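This property is easy to verify numerically. The sketch below assumes the support convention $k = 0, 1, 2, \ldots$ with $\mathrm{pmf}(k) = p(1-p)^k$:

```python
from math import isclose, sqrt

# Geometric pmf on k = 0, 1, 2, ... (number of failures before the first success)
def geom_pmf(k, p):
    return p * (1 - p) ** k

p = 0.3
for k in range(1, 20):
    # pmf(k) equals the geometric mean of its two neighbours
    assert isclose(sqrt(geom_pmf(k - 1, p) * geom_pmf(k + 1, p)), geom_pmf(k, p))
```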
16,314
Estimating the break point in a broken stick / piecewise linear model with random effects in R [code and output included]
Another approach would be to wrap the call to lmer in a function that is passed the breakpoint as a parameter, then minimize the deviance of the fitted model conditional upon the breakpoint using optimize. This maximizes the profile log likelihood for the breakpoint, and, in general (i.e., not just for this problem), if the function interior to the wrapper (lmer in this case) finds maximum likelihood estimates conditional upon the parameter passed to it, the whole procedure finds the joint maximum likelihood estimates for all the parameters.

library(lme4)
str(sleepstudy)

# Basis functions
bp = 4
b1 <- function(x, bp) ifelse(x < bp, bp - x, 0)
b2 <- function(x, bp) ifelse(x < bp, 0, x - bp)

# Wrapper for mixed effects model with variable break point
foo <- function(bp) {
  mod <- lmer(Reaction ~ b1(Days, bp) + b2(Days, bp) +
              (b1(Days, bp) + b2(Days, bp) | Subject), data = sleepstudy)
  deviance(mod)
}

search.range <- c(min(sleepstudy$Days) + 0.5, max(sleepstudy$Days) - 0.5)
foo.opt <- optimize(foo, interval = search.range)
bp <- foo.opt$minimum
bp
[1] 6.071932

mod <- lmer(Reaction ~ b1(Days, bp) + b2(Days, bp) +
            (b1(Days, bp) + b2(Days, bp) | Subject), data = sleepstudy)

To get a confidence interval for the breakpoint, you could use the profile likelihood. Add, e.g., qchisq(0.95,1) to the minimum deviance (for a 95% confidence interval) then search for points where foo(x) is equal to the calculated value:

foo.root <- function(bp, tgt) { foo(bp) - tgt }
tgt <- foo.opt$objective + qchisq(0.95, 1)
lb95 <- uniroot(foo.root, lower = search.range[1], upper = bp, tgt = tgt)
ub95 <- uniroot(foo.root, lower = bp, upper = search.range[2], tgt = tgt)
lb95$root
[1] 5.754051
ub95$root
[1] 6.923529

Somewhat asymmetric, but not bad precision for this toy problem. An alternative would be to bootstrap the estimation procedure, if you have enough data to make the bootstrap reliable.
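The profiling idea above (fit the remaining coefficients conditional on a fixed breakpoint, then minimize the profile objective over the breakpoint) can be sketched in Python on synthetic data. Plain least squares stands in for lmer, and a fine grid stands in for optimize(); the data-generating values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic broken-stick data with a known breakpoint at day 6
x = np.tile(np.arange(10.0), 20)
true_bp = 6.0
y = (3.0 * np.where(x < true_bp, true_bp - x, 0.0)
     + 2.0 * np.where(x < true_bp, 0.0, x - true_bp)
     + rng.normal(0.0, 0.5, x.size))

def profile_sse(bp):
    """Residual sum of squares after fitting the two hinge bases for a fixed bp."""
    b1 = np.where(x < bp, bp - x, 0.0)
    b2 = np.where(x < bp, 0.0, x - bp)
    X = np.column_stack([np.ones_like(x), b1, b2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

# Minimize the profile objective over a fine grid of candidate breakpoints
grid = np.linspace(0.5, 8.5, 801)
bp_hat = grid[np.argmin([profile_sse(b) for b in grid])]
```

With Gaussian errors, minimizing this profile sum of squares is equivalent to maximizing the profile likelihood over the breakpoint.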
16,315
Estimating the break point in a broken stick / piecewise linear model with random effects in R [code and output included]
The solution proposed by jbowman is very good; just adding a few theoretical remarks:

Given the discontinuity of the indicator function used, the profile likelihood might be highly erratic, with multiple local minima, so usual optimizers might not work. The usual solution for such "threshold models" is to use instead the more cumbersome grid search, evaluating the deviance at each possible realized breakpoint/threshold day (and not at values in between, as done in the code). See code at bottom.

Within this non-standard model, where the breakpoint is estimated, the deviance usually does not have the standard distribution, and more complicated procedures are usually used; see the reference to Hansen (2000) below. Nor is the bootstrap always consistent in this regard; see Yu (forthcoming) below.

Finally, it is not clear to me why you are transforming the data by re-centering around the days (i.e., bp - x instead of just x). I see two issues:

With this procedure, you create artificial days such as 6.1, 4.1, etc. I am not sure how to interpret the result of 6.07, for example, since you only observed values for day 6 and day 7 (in a standard breakpoint model, any value of the threshold between 6 and 7 should give you the same coefficients/deviance).

b1 and b2 have opposite meanings, since days are decreasing for b1 while increasing for b2, so the informal test of no breakpoint is b1 != -b2.

Standard references for this are:

Standard OLS: Hansen (2000). Sample Splitting and Threshold Estimation. Econometrica, 68(3), 575-603.

More exotic models: Lee, Seo, Shin (2011). Testing for threshold effects in regression models. Journal of the American Statistical Association (Theory and Methods), 106, 220-231.

Ping Yu (forthcoming). "The Bootstrap in Threshold Regression". Econometric Theory.
Code:

# Using grid search over existing values:
search.grid <- sort(unique(subset(sleepstudy, Days > search.range[1] & Days < search.range[2],
                                  "Days", drop = TRUE)))
res <- unlist(lapply(as.list(search.grid), foo))
plot(search.grid, res, type = "l")
bp_grid <- search.grid[which.min(res)]
16,316
Estimating the break point in a broken stick / piecewise linear model with random effects in R [code and output included]
You could try a MARS model. However, I'm not sure how to specify random effects.

library(earth)
earth(Reaction ~ Days + Subject, sleepstudy)
16,317
Assessing reliability of a questionnaire: dimensionality, problematic items, and whether to use alpha, lambda6 or some other index?
I think @Jeromy already said the essential, so I shall concentrate on measures of reliability.

Cronbach's alpha is a sample-dependent index used to ascertain a lower bound of the reliability of an instrument. It is no more than an indicator of variance shared by all items considered in the computation of a scale score. Therefore, it should not be confused with an absolute measure of reliability, nor does it apply to a multidimensional instrument as a whole. In effect, the following assumptions are made: (a) no residual correlations, (b) items have identical loadings, and (c) the scale is unidimensional. This means that the sole case where alpha will be essentially the same as reliability is the case of uniformly high factor loadings, no error covariances, and a unidimensional instrument (1). Its precision depends on the standard error of the item intercorrelations, which means that alpha will reflect the spread of item correlations regardless of the source or sources of that spread (e.g., measurement error or multidimensionality). This point is largely discussed in (2). It is worth noting that when alpha is 0.70, a widely referenced reliability threshold for group comparison purposes (3,4), the standard error of measurement will be over half (0.55) a standard deviation. Moreover, Cronbach's alpha is a measure of internal consistency; it is not a measure of unidimensionality and can’t be used to infer unidimensionality (5). Finally, we can quote L.J. Cronbach himself:

Coefficients are a crude device that does not bring to the surface many subtleties implied by variance components. In particular, the interpretations being made in current assessments are best evaluated through use of a standard error of measurement. --- Cronbach & Shavelson (6)

There are many other pitfalls that were largely discussed in several papers in the last 10 years (e.g., 7-10).
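As a concrete illustration of the alpha computation (on simulated one-factor data, not data from the question), the standard formula $\alpha = \frac{k}{k-1}\bigl(1 - \frac{\sum_i \sigma_i^2}{\sigma_X^2}\bigr)$ can be applied directly to the item covariance matrix; all data-generating choices below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated one-factor scale: 6 items, each = common trait + independent noise
n_obs, k = 500, 6
trait = rng.normal(size=(n_obs, 1))
items = trait + rng.normal(size=(n_obs, k))

# Cronbach's alpha from the item covariance matrix:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
C = np.cov(items, rowvar=False)
alpha = (k / (k - 1)) * (1 - np.trace(C) / C.sum())

# At alpha = .70 the standard error of measurement is sqrt(1 - .70),
# i.e. over half (~0.55) a standard deviation, as noted above
sem_ratio = np.sqrt(1 - 0.70)
```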
Guttman (1945) proposed a series of 6 so-called lambda indices to assess a similar lower bound for reliability, and Guttman's $\lambda_3$ lower bound is strictly equivalent to Cronbach's alpha. If instead of estimating the true variance of each item as the average covariance between items we consider the amount of variance in each item that can be accounted for by the linear regression of all other items (aka, the squared multiple correlation), we get the $\lambda_6$ estimate, which might be computed for multi-scale instruments as well. More details can be found in William Revelle's forthcoming textbook, An introduction to psychometric theory with applications in R (chapter 7). (He is also the author of the psych R package.) You might be interested in reading sections 7.2.5 and 7.3 in particular, as they give an overview of alternative measures, like McDonald's $\omega_t$ or $\omega_h$ (instead of using the squared multiple correlation, we use item uniqueness as determined from an FA model) or Revelle's $\beta$ (replace FA with hierarchical cluster analysis; for a more general discussion see (12,13)), and provide a simulation-based comparison of all indices.

References

Raykov, T. (1997). Scale reliability, Cronbach's coefficient alpha, and violations of essential tau-equivalence for fixed congeneric components. Multivariate Behavioral Research, 32, 329-354.
Cortina, J.M. (1993). What Is Coefficient Alpha? An Examination of Theory and Applications. Journal of Applied Psychology, 78(1), 98-104.
Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory. McGraw-Hill Series in Psychology, Third edition.
De Vaus, D. (2002). Analyzing social science data. London: Sage Publications.
Danes, J.E. and Mann, O.K. (1984). Unidimensional measurement and structural equation models with latent variables. Journal of Business Research, 12, 337-352.
Cronbach, L.J. and Shavelson, R.J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64(3), 391-418.
Schmitt, N. (1996). Uses and Abuses of Coefficient Alpha. Psychological Assessment, 8(4), 350-353.
Iacobucci, D. and Duhachek, A. (2003). Advancing Alpha: Measuring Reliability With Confidence. Journal of Consumer Psychology, 13(4), 478-487.
Shevlin, M., Miles, J.N.V., Davies, M.N.O., and Walker, S. (2000). Coefficient alpha: a useful indicator of reliability? Personality and Individual Differences, 28, 229-237.
Fong, D.Y.T., Ho, S.Y., and Lam, T.H. (2010). Evaluation of internal reliability in the presence of inconsistent responses. Health and Quality of Life Outcomes, 8, 27.
Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255-282.
Zinbarg, R.E., Revelle, W., Yovel, I., and Li, W. (2005). Cronbach's $\alpha$, Revelle's $\beta$, and McDonald's $\omega_h$: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70(1), 123-133.
Revelle, W. and Zinbarg, R.E. (2009). Coefficients alpha, beta, omega and the glb: comments on Sijtsma. Psychometrika, 74(1), 145-154.
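A rough sketch of the $\lambda_6$ computation described above, on simulated one-factor data (this illustrates the formula itself, not the psych package's implementation; the data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated one-factor data (arbitrary illustration)
n_obs, k = 500, 6
trait = rng.normal(size=(n_obs, 1))
items = trait + rng.normal(size=(n_obs, k))

# Squared multiple correlation of each item on the others, from the inverse
# correlation matrix: SMC_j = 1 - 1/(R^{-1})_{jj}
R = np.corrcoef(items, rowvar=False)
smc = 1 - 1 / np.diag(np.linalg.inv(R))

# Guttman's lambda_6: each item's unexplained variance (1 - SMC) replaces the
# average-covariance estimate; for standardized items the total variance is
# the sum of all entries of R
lambda6 = 1 - np.sum(1 - smc) / R.sum()
```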
16,318
Assessing reliability of a questionnaire: dimensionality, problematic items, and whether to use alpha, lambda6 or some other index?
Here are some general comments:

PCA: The PCA analysis does not "reveal that there are three principal components". You chose to extract three dimensions, or you relied on some default rule of thumb (typically eigenvalues over 1) to decide how many dimensions to extract. In addition, the eigenvalues-over-one rule often extracts more dimensions than is useful.

Assessing item dimensionality: I agree that you can use PCA to assess the dimensionality of the items. However, I find that looking at the scree plot can provide better guidance on the number of dimensions. You may want to check out this page by William Revelle on assessing scale dimensionality.

How to proceed? If the scale is well established, then you may want to leave it as is (assuming its properties are at least reasonable; although in your case 0.6 is relatively poor by most standards). If the scale is not well established, then you should consider theoretically what the items are intended to measure and for what purpose you want to use the resulting scale. Given that you have only six items, you do not have much room to create multiple scales without dropping to worrying numbers of items per scale. Simultaneously, it is a smart idea to check whether there are any problematic items, based on floor, ceiling, or low-reliability issues. Also, you may want to check whether any items need to be reversed. I put together some links to general resources on scale development that you may find helpful.

The following addresses your specific questions:

Do I need to perform alpha computation on each of these dimensions? As you may gather from the above discussion, I don't think you should treat your data as if you have three dimensions. There are a range of arguments that you could make depending on your purposes and the details, so it's hard to say exactly what to do. In most cases, I'd be looking to create at least one good scale (perhaps deleting an item) rather than three unreliable scales.
Do I have to remove the items affecting reliability? It's up to you. If the scale is established, then you may choose not to. If your sample size is small, it might be an anomaly of random sampling. However, in general I'd be inclined to delete an item if it was really dropping your alpha from 0.72 to 0.60. I'd also check whether this problematic item isn't actually meant to be reversed. I'll leave the discussion of lambda 6 (discussed by William Revelle here) to others.
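The "alpha if item deleted" check, and the effect of reverse-scoring a mis-keyed item, can be sketched on simulated data (the scenario below, with one deliberately reversed item, is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Five consistent items plus one item scored in the wrong direction
n_obs = 400
trait = rng.normal(size=(n_obs, 1))
good = trait + rng.normal(size=(n_obs, 5))
bad = -trait + rng.normal(size=(n_obs, 1))  # reversed item
items = np.hstack([good, bad])

def cronbach_alpha(X):
    C = np.cov(X, rowvar=False)
    k = X.shape[1]
    return k / (k - 1) * (1 - np.trace(C) / C.sum())

alpha_full = cronbach_alpha(items)

# "Alpha if item deleted": recompute alpha leaving each item out in turn;
# the reversed item stands out as the one whose removal raises alpha
alpha_dropped = [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(6)]

# Reverse-scoring the item instead of deleting it recovers even more reliability
alpha_rescored = cronbach_alpha(np.hstack([good, -bad]))
```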
16,319
Is "every blue t-shirted person" a systematic sample?
The answer, in general, to your question is "no". Obtaining a random sample from a population (especially of humans) is notoriously difficult. By conditioning on a particular characteristic, you're by definition not obtaining a random sample. How much bias this introduces is another matter altogether. As a slightly absurd example, you wouldn't want to sample this way at, say, a football game between the Bears and the Packers, even if your population was "football fans". (Bears fans may have different characteristics than other football fans, even when the quantity you are interested in may not seem directly related to football.) There are many famous examples of hidden bias resulting from obtaining samples in this way. For example, in recent US elections in which phone polls have been conducted, it is believed that people owning only a cell phone and no landline are (perhaps dramatically) underrepresented in the sample. Since these people also tend to be, by and large, younger than those with landlines, a biased sample is obtained. Furthermore, younger people have very different political beliefs than older populations. So, this is a simple example of a case where, even when the sample was not intentionally conditioned on a particular characteristic, it still happened that way. And, even though the poll had nothing to do with the conditioning characteristic either (i.e., whether or not one uses a landline), the effect of the conditioning characteristic on the poll's conclusions was significant, both statistically and practically.
16,320
Is "every blue t-shirted person" a systematic sample?
As long as the characteristic you use to select units into the sample is orthogonal to the characteristic of the population you want to estimate, you can obtain an unbiased estimate of the population quantity by conditioning selection on it. The sample is not strictly a random sample. But people tend to overlook that random samples are good because the variable used to select units into the sample is orthogonal to the population characteristic, not because it is random. Just think about selecting each unit i into the sample with a Bernoulli draw whose success probability is invlogit(x_i), where x_i in (-inf, inf) is a feature of unit i such that Cov(x, y) != 0, and y is the population characteristic whose mean you want to estimate. The sample is "random" in the sense that you randomize before selecting into the sample, but it does not yield an unbiased estimate of the population mean of y. What you need is to condition selection into the sample on a variable that is as good as randomly assigned, i.e., one that is orthogonal to the variable on which the quantity of interest depends. Randomizing is good because it ensures orthogonality, not because of randomization itself.
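The Bernoulli-selection example above is easy to simulate. A sketch in Python rather than R; the feature x, the outcome y = 2x + noise, and the logistic selection rule are all invented for illustration:

```python
import math
import random

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
N = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
# the outcome depends on the feature, so Cov(x, y) != 0
ys = [2.0 * x + random.gauss(0.0, 1.0) for x in xs]

true_mean = sum(ys) / N  # close to 0 by construction

# selection is "random" (a Bernoulli draw per unit), but the success
# probability invlogit(x) is not orthogonal to y
sample = [y for x, y in zip(xs, ys) if random.random() < invlogit(x)]
biased_mean = sum(sample) / len(sample)  # lands well above the true mean
```

Units with large x are over-selected, and because y covaries with x the sample mean overestimates the population mean, even though every selection was a genuine random draw.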
16,321
Comparing mixed effect models with the same number of degrees of freedom
Still, you can compute confidence intervals for your fixed effects, and report AIC or BIC (see e.g. Cnaan et al., Stat Med 1997, 16: 2349). Now, you may be interested in taking a look at Assessing model mimicry using the parametric bootstrap, by Wagenmakers et al., which seems to more closely resemble your initial question about assessing the quality of two competing models. Otherwise, the two papers on measures of explained variance in LMMs that come to mind are: Lloyd J. Edwards, Keith E. Muller, Russell D. Wolfinger, Bahjat F. Qaqish and Oliver Schabenberger (2008). An R2 statistic for fixed effects in the linear mixed model, Statistics in Medicine, 27(29), 6137–6157. Ronghui Xu (2003). Measuring explained variation in linear mixed effects models, Statistics in Medicine, 22(22), 3527–3541. But maybe there are better options.
16,322
Comparing mixed effect models with the same number of degrees of freedom
Following ronaf's suggestion leads to a more recent paper by Vuong for a likelihood ratio test on non-nested models. It's based on the KLIC (Kullback-Leibler Information Criterion), which is similar to the AIC in that it minimizes the KL distance. But it sets up a probabilistic specification for the hypothesis, so the use of the LRT leads to a more principled comparison. A more accessible version of the Cox and Vuong tests is presented by Clarke et al; in particular, see Figure 3, which presents the algorithm for computing the Vuong LRT test. Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses (Vuong, 1989) Testing Nonnested Models of International Relations: Reevaluating Realism (Clarke et al, 2000) It seems there are R implementations of the Vuong test for other models, but not lmer. Still, the outline mentioned above should be sufficient to implement one. I don't think you can obtain the likelihood evaluated at each data point from lmer as required for the computation. In a note on R-sig-ME, Douglas Bates has some pointers that might be helpful (in particular, the vignette he mentions). Older Another option is to consider the fitted values from the models in a test of prediction accuracy. The Williams-Kloot statistic may be appropriate here. The basic approach is to regress the actual values against a linear combination of the fitted values from the two models and test the slope: A Test for Discriminating Between Models (Atkinson, 1969) Growth and the Welfare State in the EU: A Causality Analysis (Herce et al, 2001) The first paper describes the test (and others), while the second has an application of it in an econometric panel model. When using lmer and comparing AICs, the function's default is to use the REML method (Restricted Maximum Likelihood). This is fine for obtaining less biased estimates, but when comparing models, you should re-fit with REML=FALSE, which uses the Maximum Likelihood method for fitting. The Pinheiro/Bates book mentions some conditions under which it's OK to compare AIC/likelihood with either REML or ML, and these may very well apply in your case. However, the general recommendation is to simply re-fit. For example, see Douglas Bates' post here: How can I extract the AIC score from a mixed model object produced using lmer?
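For concreteness, here is a minimal sketch (in Python) of the Vuong z-statistic itself, assuming you have already extracted per-observation log-likelihoods from the two fitted models (which, as noted above, is the hard part with lmer); the function name and arguments are placeholders of mine:

```python
import math

def vuong_statistic(ll_a, ll_b):
    """Vuong z-statistic for two non-nested models, computed from
    per-observation log-likelihoods ll_a and ll_b.  Under the null
    that the models are equally close to the truth it is approximately
    standard normal; large positive values favour model A, large
    negative values favour model B."""
    n = len(ll_a)
    lr = [a - b for a, b in zip(ll_a, ll_b)]
    mean_lr = sum(lr) / n
    sd_lr = math.sqrt(sum((d - mean_lr) ** 2 for d in lr) / n)
    return math.sqrt(n) * mean_lr / sd_lr
```

Real applications add a BIC-style correction for differing numbers of parameters; in the setting of this question both models have the same degrees of freedom, so that correction cancels.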
16,323
Comparing mixed effect models with the same number of degrees of freedom
There is a paper by D. R. Cox that discusses testing separate [non-nested] models. It considers a few examples, which do not rise to the complexity of mixed models. [As my facility with R code is limited, I'm not quite sure what your models are.] Although Cox's paper may not solve your problem directly, it may be helpful in two possible ways. You can search Google Scholar for citations to his paper, to see if subsequent results come closer to what you want. If you are of an analytical bent, you could try applying Cox's method to your problem. [Perhaps not for the faint-hearted.] By the way, Cox does mention in passing the idea Srikant broached of combining the two models into a larger one. He doesn't pursue how one would then decide which model is better, but he remarks that even if neither model is very good, the combined model might give an adequate fit to the data. [It's not clear in your situation that a combined model would make sense.]
16,324
Comparing mixed effect models with the same number of degrees of freedom
I do not know R well enough to parse your code but here is one idea: Estimate a model where you have both center and near as covariates (call this mBoth). Then mCenter and mNear are nested in mBoth and you could use mBoth as a benchmark to compare the relative performance of mCenter and mNear.
16,325
Comparing the variance of paired observations
You could use the fact that the sample variance, scaled by the true variance, follows a chi-squared distribution: $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$. Under your null hypothesis of equal variances, your test statistic would be the difference of two chi-squared random variates with the same unknown scale $\sigma^2$. I do not know whether the difference of two (here, dependent) chi-squared variates has a tractable distribution, but the above may help you to some extent.
16,326
Comparing the variance of paired observations
The most naive approach I can think of is to regress $Y_i$ on $X_i$ as $Y_i = \hat{m}X_i + \hat{b}$, then perform a $t$-test of the hypothesis $m = 1$. See t-test for regression slope. A less naive approach is the Morgan-Pitman test. Let $U_i = X_i - Y_i$, $V_i = X_i + Y_i$, then perform a test of the Pearson correlation coefficient of $U_i$ vs $V_i$. (One can do this simply using the Fisher r-to-z transform, which gives confidence intervals around the sample Pearson coefficient, or via a bootstrap.) If you are using R and don't want to code everything yourself, I would use bootdpci from Wilcox's Robust Stats package, WRS (see Wilcox's page).
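For illustration, the Morgan-Pitman construction is only a few lines. A Python sketch using the Fisher r-to-z normal approximation for the p-value (the function name is mine; for serious use prefer a vetted implementation such as bootdpci):

```python
import math

def morgan_pitman(x, y):
    """Test Var(X) = Var(Y) for paired data: Cov(X-Y, X+Y) equals
    Var(X) - Var(Y), so test the Pearson correlation of U and V."""
    n = len(x)
    u = [a - b for a, b in zip(x, y)]
    v = [a + b for a, b in zip(x, y)]
    mu, mv = sum(u) / n, sum(v) / n
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    r = suv / (su * sv)
    z = math.atanh(r) * math.sqrt(n - 3)   # Fisher r-to-z, ~N(0,1) under H0
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return r, p
```

A significantly nonzero correlation between U and V is evidence that the two variances differ; the sign of r tells you which variance is larger.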
16,327
Comparing the variance of paired observations
If you want to go down the non-parametric route you could always try the squared ranks test. For the unpaired case, the assumptions for this test (taken from here) are: Both samples are random samples from their respective populations. In addition to independence within each sample there is mutual independence between the two samples. The measurement scale is at least interval. These lecture notes describe the unpaired case in detail. For the paired case you will have to change this procedure slightly. Midway down this page should give you an idea of where to start.
16,328
Comparing the variance of paired observations
If you can assume bivariate normality, then you can develop a likelihood-ratio test comparing the two possible covariance matrix structures. The unconstrained (H_a) maximum likelihood estimates are well known - just the sample covariance matrix, the constrained ones (H_0) can be derived by writing out the likelihood (and will probably be some sort of "pooled" estimate). If you don't want to derive the formulas, you can use SAS or R to fit a repeated measures model with unstructured and compound symmetry covariance structures and compare the likelihoods.
16,329
Comparing the variance of paired observations
The difficulty clearly comes from the fact that $X$ and $Y$ are correlated (I assume $(X,Y)$ is jointly Gaussian, as Aniko does), so you can't form a difference (as in @svadali's answer) or a ratio (as in the standard Fisher-Snedecor "F-test"): those would involve dependent $\chi^2$ variates, and since you don't know what this dependence is, the distribution under $H_0$ is difficult to derive. My answer relies on Equation [1] below. Because the difference in variance can be factorized into a difference in eigenvalues and a difference in rotation angle, the test of equality can be split into two tests. I show that it is possible to use the Fisher-Snedecor test together with a test on the slope, such as the one suggested by @shabbychef, thanks to a simple property of 2D Gaussian vectors. Fisher-Snedecor test: if for $i=1,2$, $(Z^i_{1},\dots,Z^i_{n_i})$ are iid Gaussian random variables with unbiased empirical variance $\hat{\lambda}^2_i$ and true variance $\lambda^2_i$, then one can test whether $\lambda_1^2=\lambda_2^2$ using the fact that, under the null, the ratio $$R=\frac{\hat{\lambda}_1^2}{\hat{\lambda}_2^2}$$ follows a Fisher-Snedecor distribution $F(n_1-1,n_2-1)$. A simple property of 2D Gaussian vectors: let us denote by $$R(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{bmatrix}$$ the rotation by angle $\theta$. There exist $\lambda_1,\lambda_2>0$ and $\epsilon_1$, $\epsilon_2$, two independent Gaussians $\mathcal{N}(0,\lambda_i^2)$, such that $$\begin{bmatrix} X \\ Y \end{bmatrix} = R(\theta)\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \end{bmatrix}$$ and we have $$Var(X)-Var(Y)=(\lambda_1^2-\lambda_2^2)(\cos^2 \theta -\sin^2 \theta). \;\; [1]$$ Testing $Var(X)=Var(Y)$ can hence be done by testing whether ($\lambda_1^2=\lambda_2^2$ or $\theta=\pi/4 \; \mathrm{mod} \; \pi/2$). Conclusion (answer to the question): Testing $\lambda_1^2=\lambda_2^2$ is easily done using PCA (to decorrelate) and the Fisher-Snedecor test. Testing $\theta=\pi/4 \; \mathrm{mod} \; \pi/2$ is done by testing whether $|\beta_1|=1$ in the linear regression $Y=\beta_1 X+\sigma\epsilon$ (I assume $Y$ and $X$ are centered). Testing whether ($\lambda_1^2=\lambda_2^2$ or $\theta=\pi/4 \; \mathrm{mod} \; \pi/2$) at level $\alpha$ is done by testing $\lambda_1^2=\lambda_2^2$ at level $\alpha/3$ or $|\beta_1|=1$ at level $\alpha/3$.
16,330
Ridge\Lasso -- Standardization of dummy indicators
You have identified an important but perhaps under-appreciated issue: there is no single one-size-fits-all approach to normalize categorical variables in penalized regression. Normalization tries to ensure that penalization is applied fairly across all predictors, regardless of the scale of measurement. You don't want penalization of a predictor based on length to depend on whether you measured the length in millimeters or miles. So centering by the mean and scaling by the standard deviation before penalization can make sense for a continuous predictor. But what does one mean by the "scale of measurement" of a categorical predictor? For a binary predictor having 50% prevalence, normalization turns original values of 0 and 1 into -1 and +1 respectively, for an overall difference of 2 units on the normalized scale. For a binary predictor having 1% prevalence, original values of 0 and 1 are transformed to approximately -0.1 and +9.9, for an overall difference of 10 units on the normalized scale. Between binary predictors having these properties, normalization thus introduces a factor of 5 into their relative transformed scales, and thus in their sensitivities to penalization, versus the case in the original 0/1 coding. Is that what you want? And are normalized categorical predictors more "scale-free" so that the binary and continuous predictors are in some sense penalized fairly with respect to each other? You have to make that decision yourself, based on knowledge of the subject matter and your goals for prediction. Harrell's Regression Modeling Strategies covers this in section 9.10 on Penalized Maximum Likelihood Estimation. As he notes, there is a further problem with multi-category predictors, as results of normalization can differ depending on the choice of reference value. In this case, try penalizing together the difference of all coefficients for the same categorical predictor from their mean, instead of penalizing each coefficient individually. 
You do have some flexibility in choosing how to penalize. Some standard software, like glmnet() in R, allows for differential penalization among predictors, which Harrell discusses as an alternative to pre-normalizing the predictor values themselves so that the net result is scale-free. But you still have to grapple with the issue of what you wish to consider as the "scale" of a categorical predictor. If you have no useful information from subject-matter knowledge about how best (if at all) to scale your categorical predictors, why not just compare different approaches to scaling them as you build the model? You should of course validate such an approach, for example by repeating the entire model-building process on multiple bootstrap resamples of the data and testing the model predictions on the original data. With your interest in making useful predictions, this provides a principled way to see what prediction method works best for you. I appreciate the issue of destroying the sparse structure provided by binary/dummy coding, and that can be an issue with the efficiency of handling very large data sets that are coded as sparse matrices. For the scale of your problem, with just a few thousand cases and a couple of hundred predictors, this isn't a practical problem and it will make no difference in how the regression is handled: however you might have normalized the categorical variables, each will still have the same number of categories as before, just with different numerical values (and thus different sensitivity to penalization). Note that normalization by rows does not solve the problems discussed here and may exacerbate them. Normalization by rows can be a useful step in situations like gene expression studies, where all measurements are essentially on the same scale but there might be systematic differences in overall expression among samples. 
With mixes of continuous predictors measured on different scales together with categorical predictors, however, row-normalization won't be helpful.
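To make the prevalence arithmetic above concrete, here is a small Python sketch (my own illustration, not part of the answer) computing the spread between the standardized codes of 0 and 1 for binary predictors at 50% and 1% prevalence:

```python
from math import sqrt

def standardized_spread(prevalence):
    # For a 0/1 predictor with prevalence p, standardization maps
    # 0 -> -p/sd and 1 -> (1-p)/sd with sd = sqrt(p*(1-p)),
    # so the two codes end up 1/sd units apart.
    sd = sqrt(prevalence * (1 - prevalence))
    return 1 / sd

print(standardized_spread(0.50))  # 2.0   (codes -1 and +1)
print(standardized_spread(0.01))  # ~10.05 (codes ~ -0.1 and ~ +9.95)
print(standardized_spread(0.01) / standardized_spread(0.50))  # ~5
```

The factor-of-5 ratio is exactly the differential sensitivity to penalization discussed above.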
Ridge\Lasso -- Standardization of dummy indicators
16,331
Is there a way to incorporate new data into an already trained neural network without retraining on all my data in Keras?
In keras, you can save your model using model.save and then load that model using keras.models.load_model. If you call .fit again on the model that you've loaded, it will continue training from the save point and will not restart from scratch. Each time you call .fit, keras will continue training on the model. .fit does not reset model weights. I would like to point out one issue that might arise from training your model this way though, and that issue is catastrophic forgetting. If you feed your model examples that differ significantly from previous training examples, it might be prone to catastrophic forgetting. This is basically when the neural network learns your new examples well and forgets all the previous examples because you are no longer feeding those examples to it. It arises because as optimizers get more efficient, the neural network will get more efficient at fitting new data quickly - and the best way to fit the new data quickly might be to forget old data. If your future data is very similar to your current data, then this won't be a problem. But imagine you trained a named entity recognition system to recognize organizations. If in the future you feed it a bunch of data that teaches it how to recognize people's names, it might catastrophically forget how to recognize organizations.
16,332
Is there a way to incorporate new data into an already trained neural network without retraining on all my data in Keras?
The model will continue to learn each time you call model.fit. You can save a model using model.save and load it with load_model. Here is a simple example: from keras.models import Sequential from keras.layers import SimpleRNN, TimeDistributed, Dense model=Sequential() model.add(SimpleRNN(input_shape=(None, 2), return_sequences=True, units=5)) model.add(TimeDistributed(Dense(activation='sigmoid', units=3))) model.compile(loss = 'mse', optimizer = 'rmsprop') model.fit(inputs, outputs, epochs = 500, batch_size = 32) model.save('my_model.h5') from keras.models import load_model model = load_model('my_model.h5') # continue fitting model.fit(inputs, outputs, epochs = 500, batch_size = 32)
16,333
What's the distribution of $(a-d)^2+4bc$, where $a,b,c,d$ are uniform distributions?
Often it helps to use cumulative distribution functions. First, $$F(x) = \Pr((a-d)^2 \le x) = \Pr(|a-d| \le \sqrt{x}) = 1 - (1-\sqrt{x})^2 = 2\sqrt{x} - x.$$ Next, $$G(y) = \Pr(4 b c \le y) = \Pr(b c \le \frac{y}{4}) = \int_0^{y/4} dt + \int_{y/4}^1\frac{y\,dt}{4t} = \frac{y}{4}\left(1 - \log\left(\frac{y}{4}\right)\right).$$ Let $\delta$ range between the smallest ($0$) and largest ($5$) possible values of $(a-d)^2 + 4 b c$. Writing $x=(a-d)^2$ with CDF $F$ and $y=4 b c$ with PDF $g = G^\prime$, we need to compute $$H(\delta) = \Pr((a-d)^2 + 4 b c \le \delta) = \Pr(x\le \delta-y) = \int_0^4 F(\delta-y)g(y)dy.$$ We can expect this to be nasty--the uniform distribution PDF is discontinuous and thus ought to produce breaks in the definition of $H$--so it is somewhat amazing that Mathematica obtains a closed form (which I will not reproduce here). Differentiating it with respect to $\delta$ gives the desired density. It is defined piecewise within three intervals. In $0 \lt \delta \lt 1$, $$H^\prime(\delta) = h(\delta) = \frac{1}{8} \left(8 \sqrt{\delta }+\delta (-(2+\log (16)))+2 \left(\delta -2 \sqrt{\delta }\right) \log (\delta )\right).$$ In $1 \lt \delta \lt 4$, $$h(\delta) = \frac{1}{4} \left(-(\delta +1) \log (\delta -1)+\delta \log (\delta )-4 \sqrt{\delta } \coth ^{-1}\left(\sqrt{\delta }\right)+3+\log (4)\right).$$ And in $4 \lt \delta \lt 5$, $$\eqalign{ &h(\delta) = \\ &\frac{1}{4}\left(\delta -4 \sqrt{\delta -4}+(\delta +1) \log \left(\frac{4}{\delta -1}\right)+4 \sqrt{\delta } \tanh ^{-1}\left(\frac{\sqrt{(\delta -4) \delta }-\sqrt{\delta }}{\delta -\sqrt{\delta -4}}\right)-1\right). }$$ This figure overlays a plot of $h$ on a histogram of $10^6$ iid realizations of $(a-d)^2 + 4bc$. The two are almost indistinguishable, suggesting the correctness of the formula for $h$. The following is a nearly mindless, brute-force Mathematica solution. It automates practically everything about the calculation. 
For instance, it will even compute the range of the resulting variable: ClearAll[ a, b, c, d, ff, gg, hh, g, h, x, y, z, zMin, zMax, assumptions]; assumptions = 0 <= a <= 1 && 0 <= b <= 1 && 0 <= c <= 1 && 0 <= d <= 1; zMax = First@Maximize[{(a - d)^2 + 4 b c, assumptions}, {a, b, c, d}]; zMin = First@Minimize[{(a - d)^2 + 4 b c, assumptions}, {a, b, c, d}]; Here is all the integration and differentiation. (Be patient; computing $H$ takes a couple of minutes.) ff[x_] := Evaluate@FullSimplify@Integrate[Boole[(a - d)^2 <= x], {a, 0, 1}, {d, 0, 1}]; gg[y_] := Evaluate@FullSimplify@Integrate[Boole[4 b c <= y], {b, 0, 1}, {c, 0, 1}]; g[y_] := Evaluate@FullSimplify@D[gg[y], y]; hh[z_] := Evaluate@FullSimplify@Integrate[ff[-y + z] g[y], {y, 0, 4}, Assumptions -> zMin <= z <= zMax]; h[z_] := Evaluate@FullSimplify@D[hh[z], z]; Finally, a simulation and comparison to the graph of $h$: x = RandomReal[{0, 1}, {4, 10^6}]; x = (x[[1, All]] - x[[4, All]])^2 + 4 x[[2, All]] x[[3, All]]; Show[Histogram[x, {.1}, "PDF"], Plot[h[z], {z, zMin, zMax}, Exclusions -> {1, 4}], AxesLabel -> {"\[Delta]", "Density"}, BaseStyle -> Medium, Ticks -> {{{0, "0"}, {1, "1"}, {4, "4"}, {5, "5"}}, Automatic}]
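As an additional sanity check (my own sketch, not part of the answer), the piecewise density above can be transcribed into Python and verified numerically: it should integrate to 1 over $(0,5)$ and reproduce the mean $E[(a-d)^2+4bc] = 1/6 + 1 = 7/6$:

```python
from math import sqrt, log, atanh

def acoth(x):
    # Inverse hyperbolic cotangent, valid for |x| > 1.
    return 0.5 * log((x + 1) / (x - 1))

def h(d):
    # Piecewise density of (a-d)^2 + 4bc, transcribed from the answer.
    if 0 < d < 1:
        return (8*sqrt(d) + d*(-(2 + log(16)))
                + 2*(d - 2*sqrt(d))*log(d)) / 8
    if 1 < d < 4:
        return (-(d + 1)*log(d - 1) + d*log(d)
                - 4*sqrt(d)*acoth(sqrt(d)) + 3 + log(4)) / 4
    if 4 < d < 5:
        return (d - 4*sqrt(d - 4) + (d + 1)*log(4/(d - 1))
                + 4*sqrt(d)*atanh((sqrt((d - 4)*d) - sqrt(d))
                                  / (d - sqrt(d - 4))) - 1) / 4
    return 0.0

# Midpoint rule over (0, 5); h is continuous, so this converges quickly.
n = 100_000
step = 5.0 / n
total = mean = 0.0
for k in range(n):
    d = (k + 0.5) * step
    v = h(d)
    total += v
    mean += d * v
total *= step
mean *= step
print(total, mean)  # should be close to 1 and 7/6
```

Both checks agree with the closed form, independently of the Monte Carlo comparison in the figure.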
16,334
What's the distribution of $(a-d)^2+4bc$, where $a,b,c,d$ are uniform distributions?
Like the OP and whuber, I would use independence to break this up into simpler problems: Let $X = (a-d)^2$. Then the pdf of $X$, say $f(x)$, is $f(x) = \frac{1}{\sqrt{x}} - 1$ for $0 < x < 1$. Let $Y = 4 b c$. Then the pdf of $Y$, say $g(y)$, is $g(y) = \frac{1}{4}\log\frac{4}{y}$ for $0 < y < 4$. The problem reduces to now finding the pdf of $X + Y$. There may be many ways of doing this, but the simplest for me is to use a function called TransformSum from the current developmental version of mathStatica. Unfortunately, this is not available in a public release at the present time, but here is the input: TransformSum[{f,g}, z] which returns the pdf of $Z = X + Y$ as a piecewise function. Here is a plot of the pdf just derived, say $h(z)$: Quick Monte Carlo check The following diagram compares an empirical Monte Carlo approximation of the pdf (squiggly blue) to the theoretical pdf derived above (red dashed). Looks fine.
16,335
Recursive (online) regularised least squares algorithm
$\hat\beta_n=(XX^T+\lambda I)^{-1} \sum\limits_{i=0}^{n-1} x_iy_i$ Let $M_n = XX^T+\lambda I$, so that $\hat\beta_{n+1}=M_{n+1}^{-1} (\sum\limits_{i=0}^{n-1} x_iy_i + x_ny_n)$. Since $M_{n+1} - M_n = x_nx_n^T$, we get $\hat\beta_{n+1}=\hat\beta_{n}+M_{n+1}^{-1} x_n(y_n - x_n^T\hat\beta_{n})$ By the Sherman–Morrison formula (the rank-one special case of the Woodbury identity), we have $M_{n+1}^{-1} = M_{n}^{-1} - \frac{M_{n}^{-1}x_nx_n^TM_{n}^{-1}}{1+x_n^TM_n^{-1}x_n}$ As a result, $\hat\beta_{n+1}=\hat\beta_{n}+\frac{M_{n}^{-1}}{1 + x_n^TM_n^{-1}x_n} x_n(y_n - x_n^T\hat\beta_{n})$ Polyak averaging suggests you can approximate $\frac{M_{n}^{-1}}{1 + x_n^TM_n^{-1}x_n}$ by a scalar step size $\eta_n = n^{-\alpha}$, with $\alpha$ ranging from $0.5$ to $1$. You may try in your case to select the best $\alpha$ for your recursion. I think it also works if you apply a batch gradient algorithm: $\hat\beta_{n+1}=\hat\beta_{n}+\frac{\eta_n}{n} \sum\limits_{i=0}^{n-1}x_i(y_i - x_i^T\hat\beta_{n})$
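Here is a runnable pure-Python sketch of the recursion above (my own illustration; the data, dimension, and value of $\lambda$ are arbitrary), checked against the batch ridge solution obtained by Gaussian elimination:

```python
import random

random.seed(0)
d, lam = 3, 0.5
xs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(50)]
ys = [x[0] - 2*x[1] + 0.1*random.gauss(0, 1) for x in xs]

# Recursive form: maintain P_n = M_n^{-1}; with no data M_0 = lam*I, so P_0 = I/lam.
P = [[(1.0/lam if i == j else 0.0) for j in range(d)] for i in range(d)]
beta = [0.0]*d
for x, y in zip(xs, ys):
    Px = [sum(P[i][j]*x[j] for j in range(d)) for i in range(d)]
    denom = 1.0 + sum(x[i]*Px[i] for i in range(d))
    err = y - sum(x[i]*beta[i] for i in range(d))
    beta = [beta[i] + Px[i]*err/denom for i in range(d)]   # gain step
    P = [[P[i][j] - Px[i]*Px[j]/denom for j in range(d)] for i in range(d)]

# Batch check: solve (X X^T + lam*I) beta = sum_i x_i y_i by Gaussian elimination.
A = [[lam*(i == j) + sum(x[i]*x[j] for x in xs) for j in range(d)] for i in range(d)]
b = [sum(x[i]*y for x, y in zip(xs, ys)) for i in range(d)]
for c in range(d):                  # forward elimination (A is SPD, no pivoting needed)
    for r in range(c + 1, d):
        f = A[r][c]/A[c][c]
        A[r] = [A[r][j] - f*A[c][j] for j in range(d)]
        b[r] -= f*b[c]
beta_batch = [0.0]*d
for r in range(d - 1, -1, -1):      # back substitution
    beta_batch[r] = (b[r] - sum(A[r][j]*beta_batch[j] for j in range(r + 1, d)))/A[r][r]

print(max(abs(u - v) for u, v in zip(beta, beta_batch)))  # tiny round-off difference
```

The rank-one update is algebraically exact, so the recursive and batch estimates agree to floating-point precision.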
16,336
Recursive (online) regularised least squares algorithm
A point that no one has addressed so far is that it generally doesn't make sense to keep the regularization parameter $\lambda$ constant as data points are added. The reason for this is that $\| X \beta -y \|^{2}$ will typically grow linearly with the number of data points, while the regularization term $\lambda\|\beta \|^{2}$ won't, so the relative influence of the penalty shrinks as observations accumulate unless $\lambda$ is rescaled (for example, in proportion to $n$).
16,337
Recursive (online) regularised least squares algorithm
Perhaps something like Stochastic gradient descent could work here. Compute $\hat{\beta}$ using your equation above on the initial dataset, that will be your starting estimate. For each new data point you can perform one step of gradient descent to update your parameter estimate.
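A minimal sketch of this idea (my own; the step-size schedule, data, and per-observation penalty are illustrative assumptions), doing one gradient step per incoming point on the objective $(y - x^T\beta)^2 + \frac{\lambda}{n}\|\beta\|^2$:

```python
import random

random.seed(1)
d, lam, n = 2, 1.0, 2000
beta = [0.0]*d                # in practice, warm-start from the batch estimate

def sgd_step(beta, x, y, eta, lam_per_obs):
    # One gradient step on (y - x^T beta)^2 + lam_per_obs*||beta||^2.
    err = y - sum(xi*bi for xi, bi in zip(x, beta))
    return [bi + eta*(err*xi - lam_per_obs*bi) for bi, xi in zip(beta, x)]

for t in range(1, n + 1):
    x = [random.gauss(0, 1) for _ in range(d)]
    y = 3*x[0] - x[1] + 0.1*random.gauss(0, 1)
    beta = sgd_step(beta, x, y, eta=1.0/(10 + t), lam_per_obs=lam/n)

print(beta)  # should approach [3, -1], up to shrinkage and noise
```

With a decaying (Robbins–Monro) step size the iterates settle near the regularized solution, though convergence is slower than the exact rank-one update.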
16,338
Recursive (online) regularised least squares algorithm
Here is an alternative (and less complex) approach compared to using the Woodbury formula. Note that $X^TX$ and $X^Ty$ can be written as sums. Since we are calculating things online and don't want the sum to blow up, we can alternatively use means ($X^TX/n$ and $X^Ty/n$). If you write $X$ and $y$ as : $$ X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix}, \quad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, $$ we can write the online updates to $X^TX/n$ and $X^Ty/n$ (calculated up to the $t$-th row) as: $$ A_t = \left(1 - \frac{1}{t}\right) A_{t-1} + \frac{1}{t}x_t x_t^T, $$ $$ b_t = \left(1 - \frac{1}{t}\right) b_{t-1} + \frac{1}{t}x_t y_t. $$ Your online estimate of $\beta$ then becomes $$\hat\beta_t = (A_t + \lambda I)^{-1}b_t.$$ Note that this also helps with the interpretation of $\lambda$ remaining constant as you add observations! This procedure is how https://github.com/joshday/OnlineStats.jl computes online estimates of linear/ridge regression.
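A small pure-Python sketch of these running-mean updates (my own illustration, not code from OnlineStats.jl), checking $A_t$ and $b_t$ against the directly computed means:

```python
import random

random.seed(2)
d, lam = 2, 0.1
A = [[0.0]*d for _ in range(d)]   # running mean of x x^T
b = [0.0]*d                       # running mean of x*y
rows, ys = [], []
for t in range(1, 201):
    x = [random.gauss(0, 1) for _ in range(d)]
    y = 2*x[0] + x[1] + 0.05*random.gauss(0, 1)
    rows.append(x)
    ys.append(y)
    w = 1.0/t
    A = [[(1 - w)*A[i][j] + w*x[i]*x[j] for j in range(d)] for i in range(d)]
    b = [(1 - w)*b[i] + w*x[i]*y for i in range(d)]

# beta_t = (A_t + lam*I)^{-1} b_t, solved in closed form for d = 2.
M = [[A[i][j] + lam*(i == j) for j in range(d)] for i in range(d)]
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
beta = [(M[1][1]*b[0] - M[0][1]*b[1])/det,
        (M[0][0]*b[1] - M[1][0]*b[0])/det]

# The running means should match X^T X / n and X^T y / n computed directly.
n = len(rows)
A_direct = [[sum(r[i]*r[j] for r in rows)/n for j in range(d)] for i in range(d)]
b_direct = [sum(r[i]*yy for r, yy in zip(rows, ys))/n for i in range(d)]
print(beta)
```

Note that solving $(A_t + \lambda I)\beta = b_t$ with the *means* is the same as solving $(X^TX + n\lambda I)\beta = X^Ty$ with the sums, which is exactly why a constant $\lambda$ keeps its interpretation here.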
16,339
Recursive (online) regularised least squares algorithm
In linear regression, one possibility is updating the QR decomposition of $X$ directly, as explained here. I guess that, unless you want to re-estimate $\lambda$ after each new datapoint has been added, something very similar can be done with ridge regression.
16,340
Do autocorrelated residual patterns remain even in models with appropriate correlation structures, & how to select the best models?
Q1 You are doing two things wrong here. The first is a generally bad thing; don't in general delve into model objects and rip out components. Learn to use the extractor functions, in this case resid(). In this case you are getting something useful, but if you had a different type of model object, such as a GLM from glm(), then mod$residuals would contain working residuals from the last IRLS iteration, which are something you generally don't want! The second thing you are doing wrong is something that has caught me out too. The residuals you extracted (and would also have extracted if you'd used resid()) are the raw or response residuals. Essentially this is the difference between the fitted values and the observed values of the response, taking into account the fixed effects terms only. These values will contain the same residual autocorrelation as that of m1 because the fixed effects (or if you prefer, the linear predictor) are the same in the two models (~ time + x). To get residuals that include the correlation term you specified, you need the normalized residuals. You get these by doing: resid(m1, type = "normalized") This (and other types of residuals available) is described in ?residuals.gls: type: an optional character string specifying the type of residuals to be used. If ‘"response"’, the "raw" residuals (observed - fitted) are used; else, if ‘"pearson"’, the standardized residuals (raw residuals divided by the corresponding standard errors) are used; else, if ‘"normalized"’, the normalized residuals (standardized residuals pre-multiplied by the inverse square-root factor of the estimated error correlation matrix) are used. Partial matching of arguments is used, so only the first character needs to be provided. Defaults to ‘"response"’.
By means of comparison, here are the ACFs of the raw (response) and the normalised residuals layout(matrix(1:2)) acf(resid(m2)) acf(resid(m2, type = "normalized")) layout(1) To see why this is happening, and why the raw residuals don't include the correlation term, consider the model you fitted $$y = \beta_0 + \beta_1 \mathrm{time} + \beta_2 \mathrm{x} + \varepsilon$$ where $$ \varepsilon \sim N(0, \sigma^2 \Lambda) $$ and $\Lambda$ is a correlation matrix defined by an AR(1) with parameter $\hat{\rho}$, where the off-diagonal elements of the matrix are filled with values $\hat{\rho}^{|d|}$, where $d$ is the positive integer separation in time units of pairs of residuals. The raw residuals, the default returned by resid(m2), are from the linear predictor part only, hence from this bit $$ \beta_0 + \beta_1 \mathrm{time} + \beta_2 \mathrm{x} $$ and hence they contain none of the information on the correlation term(s) included in $\Lambda$. Q2 It seems you are trying to fit a non-linear trend with a linear function of time and account for lack of fit to the "trend" with an AR(1) (or other structures). If your data are anything like the example data you give here, I would fit a GAM to allow for a smooth function of the covariates. This model would be $$y = \beta_0 + f_1(\mathrm{time}) + f_2(\mathrm{x}) + \varepsilon$$ and initially we'll assume the same distribution as for the GLS, except that, to begin with, $\Lambda = \mathbf{I}$ (an identity matrix, so independent residuals). This model can be fitted using library("mgcv") m3 <- gam(y ~ s(time) + s(x), select = TRUE, method = "REML") where select = TRUE applies some extra shrinkage to allow the model to remove either of the terms from the model. This model gives > summary(m3) Family: gaussian Link function: identity Formula: y ~ s(time) + s(x) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 23.1532 0.7104 32.59 <2e-16 *** --- Signif.
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(time) 8.041 9 26.364 < 2e-16 *** s(x) 1.922 9 9.749 1.09e-14 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 and has smooth terms that look like this: The residuals from this model are also better behaved (raw residuals) acf(resid(m3)) Now a word of caution; there is an issue with smoothing time series in that the methods that decide how smooth or wiggly the functions are assume that the data are independent. What this means in practical terms is that the smooth function of time (s(time)) could fit information that is really random autocorrelated error and not only the underlying trend. Hence you should be very careful when fitting smoothers to time series data. There are a number of ways round this, but one way is to switch to fitting the model via gamm() which calls lme() internally and which allows you to use the correlation argument you used for the gls() model. Here is an example mm1 <- gamm(y ~ s(time, k = 6, fx = TRUE) + s(x), select = TRUE, method = "REML") mm2 <- gamm(y ~ s(time, k = 6, fx = TRUE) + s(x), select = TRUE, method = "REML", correlation = corAR1(form = ~ time)) Note that I have to fix the degrees of freedom for s(time) as there is an identifiability issue with these data. The model could be a wiggly s(time) and no AR(1) ($\rho = 0$) or a linear s(time) (1 degree of freedom) and a strong AR(1) ($\rho \gg 0.5$). Hence I make an educated guess as to the complexity of the underlying trend. (Note I didn't spend much time on this dummy data, but you should look at the data and use your existing knowledge of the variability in time to determine an appropriate number of degrees of freedom for the spline.) 
The model with the AR(1) does not represent a significant improvement over the model without the AR(1): > anova(mm1$lme, mm2$lme) Model df AIC BIC logLik Test L.Ratio p-value mm1$lme 1 9 301.5986 317.4494 -141.7993 mm2$lme 2 10 303.4168 321.0288 -141.7084 1 vs 2 0.1817652 0.6699 If we look at the estimate for $\hat{\rho}$ we see > intervals(mm2$lme) .... Correlation structure: lower est. upper Phi -0.2696671 0.0756494 0.4037265 attr(,"label") [1] "Correlation structure:" where Phi is what I called $\rho$. Hence, 0 is a possible value for $\rho$. The estimate is slightly larger than zero so will have negligible effect on the model fit and hence you might wish to leave it in the model if there is a strong a priori reason to assume residual autocorrelation.
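As a sanity check on that anova() output: the p-value is just the upper tail of a $\chi^2_1$ distribution evaluated at the printed L.Ratio (one extra parameter, Phi). A quick Python verification using only the standard library:

```python
import math

lr = 0.1817652  # L.Ratio from anova(mm1$lme, mm2$lme); 1 df for the extra Phi

# chi-squared(1) survival function: P(X > x) = erfc(sqrt(x / 2))
p = math.erfc(math.sqrt(lr / 2))
print(round(p, 4))  # matches the 0.6699 in the anova table
```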
What are the differences between survival analysis and Poisson regression?
Brief and general answer: With Poisson regression, the response variable of interest is a count (or possibly a rate). With Cox regression (or alternative modelling strategies from survival analysis), the response variable is the time that has elapsed between some origin and an event of interest. In particular, survival analysis techniques are designed to handle censoring. Note that, under some assumptions, there is a link between the two.
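To make the "link between the two" concrete in the simplest case: with a constant hazard (exponential survival times) and right censoring, the survival likelihood is proportional to a Poisson likelihood for the event indicators with log follow-up time as an offset, so both estimate hazard = events / total person-time. A small Python sketch with made-up data (variable names are mine):

```python
import math

# made-up follow-up data: event indicator (1 = event, 0 = censored), time at risk
events = [1, 0, 1, 1, 0, 1, 0, 0]
times  = [2.0, 5.0, 1.5, 3.0, 4.0, 2.5, 6.0, 0.5]

# exponential (constant hazard) survival MLE: events / total person-time
rate_surv = sum(events) / sum(times)

# intercept-only Poisson model for the event indicator with offset log(t):
# mean_i = t_i * exp(beta); fit by Newton's method on the score equation
beta = 0.0
for _ in range(50):
    mu = math.exp(beta) * sum(times)  # expected total count
    beta += (sum(events) - mu) / mu   # Newton step: score / information
rate_pois = math.exp(beta)

print(rate_surv, rate_pois)  # the two estimates coincide
```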
What are the differences among latent semantic analysis (LSA), latent semantic indexing (LSI), and singular value decomposition (SVD)?
LSA and LSI are mostly used synonymously, with the information retrieval community usually referring to it as LSI. LSA/LSI uses SVD to decompose the term-document matrix A into a term-concept matrix U, a singular value matrix S, and a concept-document matrix V in the form: A = USV'. The wikipedia page has a detailed description of latent semantic indexing.
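As a sketch of the decomposition on a toy term-document matrix (rows = terms, columns = documents; the counts are made up), truncating to k "concepts" gives the usual low-rank LSA approximation:

```python
import numpy as np

# toy term-document counts: rows = terms, columns = documents
A = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 2., 1.],
              [0., 1., 2.]])

# A = U S V' : term-concept matrix U, singular values S, concept-document V'
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# keep k concepts for the latent semantic (low-rank) approximation
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(s, 3))  # singular values, largest first
```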
What are the differences among latent semantic analysis (LSA), latent semantic indexing (LSI), and singular value decomposition (SVD)?
Notably while LSA and LSI use SVD to do their magic, there is a computationally and conceptually simpler method called HAL (Hyperspace Analogue to Language) that sifts through text keeping track of preceding and subsequent contexts. Vectors are extracted from these (often weighted) co-occurrence matrices and specific words are selected to index the semantic space. In many ways I'm given to understand it performs as well as LSA without requiring the mathematically/conceptually complex step of SVD. See Lund & Burgess, 1996 for details.
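A stripped-down sketch of the HAL idea in Python: counting, for each word, which words precede it within a small window. (Real HAL weights co-occurrences by distance and tracks both preceding and following contexts; this unweighted version just shows the bookkeeping.)

```python
from collections import defaultdict

def hal_counts(tokens, window=2):
    """Count, for each word, how often each other word precedes it
    within `window` positions (a simplified, unweighted HAL matrix)."""
    m = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        for d in range(1, window + 1):
            if i - d >= 0:
                m[w][tokens[i - d]] += 1
    return m

m = hal_counts("the cat sat on the mat".split())
print(dict(m["sat"]))  # words seen within 2 positions before "sat"
```

Rows (or row/column concatenations) of this matrix serve as the word vectors from which the semantic space is built.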
What are the differences among latent semantic analysis (LSA), latent semantic indexing (LSI), and singular value decomposition (SVD)?
NMF and SVD are both matrix factorization algorithms. Wikipedia has some relevant information on NMF. SVD and PCA are intimately related. For starters, PCA is simply the eigendecomposition of the correlation matrix. SVD is a generalization of eigendecomposition to non-square matrices. The singular values are the square roots of the eigenvalues of the matrix multiplied by its transpose (making it square, and amenable to eigendecomposition). Furthermore, if the matrix is normal ($A^*A=A A^*$), the singular values are simply the absolute values of the eigenvalues. In any case, the singular values are non-negative, and losing the sign of the eigenvalues is the price you pay for being able to work with non-square matrices. The other responders have covered LSI/LSA...
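That singular-value/eigenvalue relationship is easy to verify numerically on a non-square matrix (the numbers here are arbitrary):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])  # non-square: no eigendecomposition of A itself

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
evals = np.linalg.eigvalsh(A.T @ A)     # eigenvalues of A'A, ascending

print(s, np.sqrt(evals[::-1]))  # the same (non-negative) numbers
```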
In CNN, are upsampling and transpose convolution the same?
Since there is no detailed and marked answer, I'll try my best. Let's first understand where the motivation for such layers comes from: e.g. a convolutional autoencoder. You can use a convolutional autoencoder to extract features of images while training the autoencoder to reconstruct the original image. (It is an unsupervised method.) Such an autoencoder has two parts: the encoder that extracts the features from the image and the decoder that reconstructs the original image from these features. The architectures of the encoder and decoder are usually mirrored. In a convolutional autoencoder, the encoder works with convolution and pooling layers. I assume that you know how these work. The decoder tries to mirror the encoder but instead of "making everything smaller" it has the goal of "making everything bigger" to match the original size of the image. The opposite of the convolutional layers are the transposed convolution layers (also known as deconvolution, although strictly mathematically speaking deconvolution is something different). They work with filters, kernels, and strides just as the convolution layers do, but instead of mapping from e.g. 3x3 input pixels to 1 output they map from 1 input pixel to 3x3 pixels. Of course, backpropagation also works a little differently. The opposite of the pooling layers are the upsampling layers, which in their purest form only resize the image (or copy the pixel as many times as needed). A more advanced technique is unpooling, which reverts max pooling by remembering the location of the maxima in the max pooling layers and, in the unpooling layers, copying the value to exactly this location. To quote from this (https://arxiv.org/pdf/1311.2901v3.pdf) paper: In the convnet, the max pooling operation is non-invertible, however we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables. 
In the deconvnet, the unpooling operation uses these switches to place the reconstructions from the layer above into appropriate locations, preserving the structure of the stimulus. For more technical input and context have a look at this really good, demonstrative and in depth explanation: http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html And have a look at https://www.quora.com/What-is-the-difference-between-Deconvolution-Upsampling-Unpooling-and-Convolutional-Sparse-Coding
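The "map from 1 input pixel to 3x3 pixels" picture can be written down directly: a transposed convolution scatters each input value, scaled by the kernel, into the output, summing where placements overlap. A numpy sketch (no padding; stride and kernel chosen only for illustration), including a check that with an all-ones kernel and stride equal to the kernel size it degenerates to plain nearest-neighbour upsampling:

```python
import numpy as np

def transposed_conv2d(x, k, stride):
    """Minimal transposed convolution: scatter x[i, j] * k into the output."""
    (H, W), (kh, kw) = x.shape, k.shape
    out = np.zeros(((H - 1) * stride + kh, (W - 1) * stride + kw))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * k
    return out

x = np.array([[1., 2.],
              [3., 4.]])
up = transposed_conv2d(x, np.ones((2, 2)), stride=2)

# with this kernel/stride the result is exactly 2x nearest-neighbour upsampling
nn = x.repeat(2, axis=0).repeat(2, axis=1)
print(np.array_equal(up, nn))  # True
```

This is one way to see that a transposed convolution is a learnable generalisation of the fixed upsampling layer.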
In CNN, are upsampling and transpose convolution the same?
It may depend on the package you are using. In keras they are different. Upsampling is defined here https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py Provided you use the tensorflow backend, what actually happens is that keras calls the tensorflow resize_images function, which is essentially an interpolation and not trainable. Transposed convolution is more involved. It's defined in the same python script listed above. It calls the tensorflow conv2d_transpose function and it has a kernel and is trainable. Hope this helps.
In CNN, are upsampling and transpose convolution the same?
Here is a pretty good illustration of the difference between 1) transpose convolution and 2) upsampling + convolution: https://distill.pub/2016/deconv-checkerboard/ While the transpose convolution is more efficient, the article advocates for upsampling + convolution since it does not suffer from the checkerboard artifact.
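The checkerboard artifact the article describes comes from uneven kernel overlap when the stride doesn't divide the kernel size. Counting how many input positions write to each output position of a 1-D transposed convolution makes this visible (pure-Python sketch):

```python
def overlap_counts(n_in, kernel, stride):
    """How many input positions contribute to each output position of a
    1-D transposed convolution (no padding)."""
    n_out = (n_in - 1) * stride + kernel
    counts = [0] * n_out
    for i in range(n_in):
        for j in range(kernel):
            counts[i * stride + j] += 1
    return counts

print(overlap_counts(5, kernel=3, stride=2))  # interior alternates 1, 2 -> checkerboard
print(overlap_counts(5, kernel=4, stride=2))  # stride divides kernel: even interior
```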
In CNN, are upsampling and transpose convolution the same?
Deconvolution in the context of convolutional neural networks is synonymous with transpose convolution. Deconvolution may have other meanings in other fields. Transpose convolution is one strategy amongst others to perform upsampling.
Does it make sense to perform a one-tailed Kolmogorov-Smirnov test?
Is it meaningful and possible to perform a one-tailed KS test? Yes, at least in the particular sense in which a one-tailed test is generally used with the Kolmogorov-Smirnov. Beware: because we're talking about many differences at once (the cdf is a function, not a value), the direction of a difference is not a single fixed thing - the cdf might be greater at one place and less at another (so both directional tests might be significant at the same time). Is the KS test inherently a two-tailed test? Not at all. What would the null hypothesis of such a test be? You don't make it clear whether you're talking about the one-sample or the two-sample test. My answer here covers both - if you regard $F_X$ as representing the cdf of the population from which an $X$ sample was drawn, it's two-sample, while you get the one-sample case by regarding $F_X$ as some hypothesized distribution ($F_0$, if you prefer). You could in some cases write the null as an equality (e.g. if it wasn't seen as possible for it to go the other way), but if you want to write a directional null for a one-tailed alternative, you could write something like this: $H_0: F_Y(t)\geq F_X(t)$ $H_1: F_Y(t)< F_X(t)\,$, for at least one $t$ (or its converse for the other tail, naturally) If we add an assumption when we use the test that they're either equal or that $F_Y$ will be smaller, then rejection of the null implies (first order) stochastic ordering / first order stochastic dominance. In large enough samples, it's possible for the F's to cross - even several times - and still reject the one-sided test, so the assumption is strictly needed for stochastic dominance to hold. Loosely, if $F_Y(t)\leq F_X(t)$ with strict inequality for at least some $t$, then $Y$ 'tends to be bigger' than $X$. Adding assumptions like this is not weird; it's standard. 
It's not particularly different from assuming (say in an ANOVA) that a difference in means is because of a shift of the whole distribution (rather than a change in skewness, where some of the distribution shifts down and some shifts up, but in such a way that the mean has changed). So let's consider, for example, a shift in mean for a normal: The fact that the distribution for $Y$ is shifted right by some amount from that for $X$ implies that $F_Y$ is lower than $F_X$. The one-sided Kolmogorov-Smirnov test will tend to reject in this situation. Similarly, consider a scale shift in a gamma: Again, the shift to a larger scale produces a lower F. Again, the one-sided Kolmogorov-Smirnov test will tend to reject in this situation. There are numerous situations where such a test may be useful. So what are $D^+$ and $D^-$? In the one-sample test, $D^+$ is the maximum positive deviation of the sample cdf from the hypothesized curve (that is, the biggest distance the ECDF is above $F_0$), while $D^-$ is the maximum negative deviation - the biggest distance the ECDF is below $F_0$. Both $D^+$ and $D^-$ are positive quantities: A one-tailed Kolmogorov-Smirnov test would look at either $D^+$ or $D^-$ depending on the direction of the alternative. Consider the one-tailed one-sample test: $H_0: F_Y(t)\geq F_0(t)$ $H_1: F_Y(t)< F_0(t)\,$, for at least one $t$ To test this one, we want sensitivity to $Y$ being stochastically larger than hypothesized (its true $F$ is lower than $F_0$). So unusually large values of $D^-$ will tend to occur when the alternative is true. As a result, to test against the alternative $F_Y(t)< F_0(t)$, we use $D^-$ in our one-tailed test. Follow-up question: how are p-values for $D^+$ and $D^−$ obtained? It's not a simple thing. There are a variety of approaches that have been used. If I recall correctly, one of the ways the distribution was obtained was via the use of Brownian bridge processes (this document seems to support that recollection). 
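For concreteness, the statistics themselves are simple to compute from the sorted sample: $D^+ = \max_i\,(i/n - F_0(x_{(i)}))$ and $D^- = \max_i\,(F_0(x_{(i)}) - (i-1)/n)$. A pure-Python sketch against a Uniform(0, 1) null on made-up data:

```python
def ks_statistics(sample, cdf):
    """One-sample D+, D- and the two-sided D for a hypothesized CDF."""
    x = sorted(sample)
    n = len(x)
    d_plus  = max((i + 1) / n - cdf(v) for i, v in enumerate(x))
    d_minus = max(cdf(v) - i / n for i, v in enumerate(x))
    return d_plus, d_minus, max(d_plus, d_minus)

# toy data against F0 = Uniform(0, 1), i.e. cdf(t) = t
dp, dm, d = ks_statistics([0.1, 0.2, 0.5, 0.9], lambda t: t)
print(dp, dm, d)
```

The two-sided statistic is just the larger of the two one-sided ones; a one-tailed test uses whichever of $D^+$ or $D^-$ matches the alternative.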
I believe this paper, and the paper by Marsaglia et al. here, both cover some of the background and give computational algorithms with lots of references. Between those, you'll get a lot of the history and various approaches that have been used. If they don't cover what you need, you'll probably need to ask this as a new question. So many of the publications I am encountering are presenting tabled values, rather than the CDF of $D_n$, $D^+$ and $D^−$ That's not particularly a surprise. If I remember right, even the asymptotic distribution is obtained as a series, and in finite samples it's discrete and not in any simple form. In either case, there's no convenient way to present the information except as either a graph or a table.
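The asymptotic series alluded to is Kolmogorov's limit $P(\sqrt{n}\,D_n > y) \to 2\sum_{k=1}^{\infty}(-1)^{k-1}e^{-2k^2y^2}$; evaluating it at the familiar two-sided point $y \approx 1.358$ recovers the tabled 5% level (Python sketch):

```python
import math

def kolmogorov_sf(y, terms=100):
    """Asymptotic survival function of sqrt(n) * D_n (Kolmogorov's series)."""
    return 2 * sum((-1) ** (k - 1) * math.exp(-2 * k * k * y * y)
                   for k in range(1, terms + 1))

print(round(kolmogorov_sf(1.358), 4))  # ~0.05, the classic 5% critical value
```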
Does it make sense to perform a one-tailed Kolmogorov-Smirnov test?
Is it meaningful and possible to perform a one-tailed KS test? Yes, at least in the particular sense in which a one-tailed test is generally used with the Kolmogorov-Smirnov. Beware: because we're ta
Does it make sense to perform a one-tailed Kolmogorov-Smirnov test? Is it meaningful and possible to perform a one-tailed KS test? Yes, at least in the particular sense in which a one-tailed test is generally used with the Kolmogorov-Smirnov. Beware: because we're talking about many differences at once (the cdf is a function, not a value), the direction of a difference is not a single fixed thing - the cdf might be greater at one place and less at another (so both directional tests might be significant at the same time). is the KS test inherently a two-tailed test? Not at all. What would the null hypothesis of such a test be? You don't make it clear whether you're talking about the one-sample or the two sample test. My answer here covers both - if you regard $F_X$ as representing the cdf of the population from which an $X$ sample was drawn, it's two-sample, while you get the one sample case by regarding $F_X$ as some hypothesized distribution ($F_0$, if you prefer). You could in some cases write the null as an equality (e.g.if it wasn't seen as possible for it to go the other way), but if you want to write a directional nulls for a one tailed alternative, you could write something like this: $H_0: F_Y(t)\geq F_X(t)$ $H_1: F_Y(t)< F_X(t)\,$, for at least one $t$ (or its converse for the other tail, naturally) If we add an assumption when we use the test that they're either equal or that $F_Y$ will be smaller, then rejection of the null implies (first order) stochastic ordering / first order stochastic dominance. In large enough samples, it's possible for the F's to cross - even several times, and still reject the one-sided test, so the assumption is strictly needed for stochastic dominance to hold. Loosely if $F_Y(t)\leq F_X(t)$ with strict inequality for at least some $t$ then $Y$ 'tends to be bigger' than $X$. Adding assumptions like this is not weird; it's standard. 
It's not particularly different from assuming (say in an ANOVA) that a difference in means is because of a shift of the whole distribution (rather than a change in skewness, where some of the distribution shifts down and some shifts up, but in such a way that the mean has changed).

So let's consider, for example, a shift in mean for a normal: the fact that the distribution for $Y$ is shifted right by some amount from that for $X$ implies that $F_Y$ is lower than $F_X$. The one-sided Kolmogorov-Smirnov test will tend to reject in this situation.

Similarly, consider a scale shift in a gamma: again, the shift to a larger scale produces a lower $F$, and again the one-sided Kolmogorov-Smirnov test will tend to reject in this situation. There are numerous situations where such a test may be useful.

So what are $D^+$ and $D^-$? In the one-sample test, $D^+$ is the maximum positive deviation of the sample cdf from the hypothesized curve (that is, the biggest distance the ECDF is above $F_0$), while $D^-$ is the maximum negative deviation (the biggest distance the ECDF is below $F_0$). Both $D^+$ and $D^-$ are positive quantities.

A one-tailed Kolmogorov-Smirnov test would look at either $D^+$ or $D^-$ depending on the direction of the alternative. Consider the one-tailed one-sample test:

$H_0: F_Y(t)\geq F_0(t)$
$H_1: F_Y(t)< F_0(t)\,$, for at least one $t$

To test this one, we want sensitivity to $Y$ being stochastically larger than hypothesized (its true $F$ is lower than $F_0$). So unusually large values of $D^-$ will tend to occur when the alternative is true. As a result, to test against the alternative $F_Y(t)< F_0(t)$, we use $D^-$ in our one-tailed test.

Follow-up question: how are p-values for $D^+$ and $D^-$ obtained?

It's not a simple thing. There are a variety of approaches that have been used. If I recall correctly, one of the ways the distribution was obtained was via the use of Brownian bridge processes (this document seems to support that recollection).
I believe this paper, and the paper by Marsaglia et al here, both cover some of the background and give computational algorithms with lots of references. Between those, you'll get a lot of the history and the various approaches that have been used. If they don't cover what you need, you'll probably need to ask this as a new question.

So many of the publications I am encountering are presenting tabled values, rather than the CDF of $D_n$, $D^+$ and $D^-$

That's not particularly a surprise. If I remember right, even the asymptotic distribution is obtained as a series, and in finite samples it's discrete and not in any simple form. In either case, there's no convenient way to present the information except as either a graph or a table.
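To make $D^+$ and $D^-$ concrete, here is a minimal Python sketch (not from the original answer) that computes both one-sided statistics for a one-sample test against a standard normal $F_0$, using the usual order-statistic formulas for the ECDF. For a sample shifted to the right of $F_0$, the true $F$ lies below $F_0$, so $D^-$ dominates - that is the statistic the one-tailed test above would use.

```python
import math

def norm_cdf(x):
    # standard normal CDF F0, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_one_sided(sample, cdf):
    """Return (D_plus, D_minus) for a one-sample KS comparison.

    D_plus  = sup_t (ECDF(t) - F0(t)): how far the ECDF gets above F0.
    D_minus = sup_t (F0(t) - ECDF(t)): how far the ECDF gets below F0.
    """
    xs = sorted(sample)
    n = len(xs)
    d_plus = max(i / n - cdf(x) for i, x in enumerate(xs, start=1))
    d_minus = max(cdf(x) - (i - 1) / n for i, x in enumerate(xs, start=1))
    return d_plus, d_minus

# A sample shifted to the right of N(0, 1): its true F lies below F0,
# so D_minus should dominate -- the statistic for H1: F_Y(t) < F0(t)
sample = [0.8, 1.1, 1.5, 2.0, 2.4]
dp, dm = ks_one_sided(sample, norm_cdf)
```

Here `dm` is much larger than `dp`, as expected for a right-shifted sample; the two-sided statistic would simply be `max(dp, dm)`.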
16,350
How to translate the output from an lm() fit with a cubic spline into a regression equation
require(rms)
f <- ols(y ~ rcs(x, 3))  # 2 d.f. for x
Function(f)  # represent fitted function in simplest R form
latex(f)     # typeset algebraic representation of fit

rcs, "restricted cubic spline", is another representation of a natural spline.
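For readers who want the algebra rather than the R output: the nonlinear terms of a restricted cubic spline can be written in the standard truncated-power form (this is essentially the representation that Function(f) and latex(f) print; the normalisation below follows rms's default of dividing by the squared distance between the outer knots, but treat the details as a sketch - the function name is illustrative):

```python
def rcs_basis(x, knots):
    """Nonlinear basis terms of a restricted (natural) cubic spline,
    truncated-power form. Returns k-2 values for k knots; the full
    design matrix adds an intercept and x itself."""
    k = len(knots)
    tk, tk1 = knots[-1], knots[-2]
    norm = (tk - knots[0]) ** 2       # scale terms back to x's units
    pos3 = lambda u: max(u, 0.0) ** 3  # truncated cube (x - t)_+^3
    terms = []
    for j in range(k - 2):
        tj = knots[j]
        v = (pos3(x - tj)
             - pos3(x - tk1) * (tk - tj) / (tk - tk1)
             + pos3(x - tk) * (tk1 - tj) / (tk - tk1))
        terms.append(v / norm)
    return terms
```

With 3 knots this yields a single nonlinear term, which together with x itself gives the 2 d.f. noted in the comment above; by construction each term is zero below the first knot and exactly linear beyond the last knot, which is what makes the spline "restricted" (natural).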
16,351
Explaining two-tailed tests
This is a great question and I'm looking forward to everyone's version of explaining the p-value and the two-tailed vs. one-tailed test. I've been teaching fellow orthopaedic surgeons statistics, and therefore I tried to keep it as basic as possible since most of them haven't done any advanced math for 10-30 years.

My way of explaining calculating p-values & the tails

I start by explaining that if we believe we have a fair coin, we know it should end up tails in 50 % of the flips on average ($=H_0$). Now if you wonder what the probability is of getting only 2 tails out of 10 flips with this fair coin, you can calculate that probability as I've done in the bar graph. From the graph you can see that the probability of getting exactly 2 tails out of 10 flips with a fair coin is about $4.4\%$. Since we would question the fairness of the coin just as much if we got only 1 tail or none at all, we have to include these possibilities - the tail of the test. By adding the values we get that the probability of getting 2 tails or fewer is about $5.5\%$. Now if we instead got only 2 heads, i.e. 8 tails (the other tail), we would probably be just as willing to question the fairness of the coin. This means that you end up with a probability of $5.4...\%+5.4...\% \approx 10.9\%$ for a two-tailed test. Since in medicine we usually are interested in studying failures, we need to include the opposite side of the probability even if our intent is to do good and to introduce a beneficial treatment.

Reflections slightly out of topic

This simple example also shows how dependent we are on the null hypothesis to calculate the p-value. I also like to point out the resemblance between the binomial curve and the bell curve. When changing to 200 flips you get a natural way of explaining why the probability of getting exactly 100 tails starts to lack relevance. Defining intervals of interest is a natural transition to probability density/mass functions and their cumulative counterparts.
In my class I recommend the Khan Academy statistics videos and I also use some of his explanations for certain concepts. They also get to flip coins, where we look into the randomness of coin flipping - the thing that I try to show is that randomness is more random than we usually believe, inspired by this Radiolab episode.

The code

I usually have one graph per slide; this is the R code that I used to create the graphs:

library(graphics)

binom_plot_function <- function(x_max, my_title = FALSE, my_prob = .5,
                                edges = 0, col = c("green", "gold", "red")) {
  barplot(dbinom(0:x_max, x_max, my_prob) * 100,
          col = c(rep(col[1], edges),
                  rep(col[2], x_max - 2 * edges + 1),
                  rep(col[3], edges)),
          ylab = "Probability %",
          xlab = "Number of tails",
          names.arg = 0:x_max)
  if (my_title != FALSE) {
    title(main = my_title)
  }
}

binom_plot_function(10, paste("Flipping coins", 10, "times"), edges = 0,
                    col = c("#449944", "gold", "#994444"))
binom_plot_function(10, edges = 3,
                    col = c(rgb(200/255, 0, 0), "gold", "gold"))
binom_plot_function(10, edges = 3,
                    col = c(rgb(200/255, 0, 0), "gold", rgb(200/255, 100/255, 100/255)))
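The tail probabilities behind the coin example can be reproduced in a few lines - a Python sketch equivalent to the dbinom sums underlying the graphs:

```python
from math import comb

n, p = 10, 0.5  # 10 flips of a fair coin

def pmf(k):
    # binomial probability of exactly k tails
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_exact2 = pmf(2)                                  # exactly 2 tails, ~4.4%
p_low = sum(pmf(k) for k in range(3))              # 2 tails or fewer, ~5.5%
p_two = p_low + sum(pmf(k) for k in range(8, 11))  # add the other tail, ~10.9%
```

These are the $4.4\%$, $5.5\%$ and $10.9\%$ quoted above (45/1024, 56/1024 and 112/1024 exactly).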
16,352
Explaining two-tailed tests
Suppose that you want to test the hypothesis that the average height of men is "5 ft 7 inches". You select a random sample of men, measure their heights and calculate the sample mean. Your hypothesis then is:

$H_0: \mu = 5\ \text{ft} \ 7 \ \text{inches}$
$H_A: \mu \ne 5\ \text{ft} \ 7 \ \text{inches}$

In the above situation you do a two-tailed test, as you would reject your null if the sample average is either too low or too high. In this case, the p-value represents the probability of realizing a sample mean that is at least as extreme as the one we actually obtained, assuming that the null is in fact true. Thus, if we observe the sample mean to be "5 ft 8 inches", then the p-value will represent the probability that we will observe heights greater than "5 ft 8 inches" or heights less than "5 ft 6 inches", provided the null is true.

If on the other hand your alternative was framed like so:

$H_A: \mu > 5\ \text{ft} \ 7 \ \text{inches}$

then you would do a one-tailed test on the right side. The reason is that you would prefer to reject the null in favor of the alternative only if the sample mean is extremely high. The interpretation of the p-value stays the same, with the slight nuance that we are now talking about the probability of realizing a sample mean that is greater than the one we actually obtained. Thus, if we observe the sample mean to be "5 ft 8 inches", then the p-value will represent the probability that we will observe heights greater than "5 ft 8 inches", provided the null is true.
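As a numerical illustration of the one- vs two-tailed p-value (a sketch with made-up numbers: heights in inches, known $\sigma$, hence a z-test rather than a t-test):

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test(xbar, mu0, sigma, n):
    """z statistic plus one-tailed (H1: mu > mu0) and two-tailed p-values."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p_right = 1.0 - norm_cdf(z)             # right-tailed alternative
    p_two = 2.0 * (1.0 - norm_cdf(abs(z)))  # two-tailed alternative
    return z, p_right, p_two

# H0: mu = 67 in ("5 ft 7"); sample mean 68 in ("5 ft 8"), sigma = 3, n = 36
z, p_right, p_two = z_test(68.0, 67.0, 3.0, 36)
```

With these numbers $z = 2$, so the right-tailed p-value is about 0.023 and the two-tailed p-value is exactly twice that - the same evidence leads to rejection at $\alpha = 0.05$ either way here, but the two-tailed test demands more.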
16,353
Formula for weighted simple linear regression
Think of ordinary least squares (OLS) as a "black box" to minimize $$\sum_{i=1}^n (y_i - (\alpha 1 + \beta x_i))^2$$ for a data table whose $i^\text{th}$ row is the tuple $(1, x_i, y_i)$. When there are weights, necessarily positive, we can write them as $w_i^2$. By definition, weighted least squares minimizes $$\sum_{i=1}^n w_i^2(y_i - (\alpha 1 + \beta x_i))^2$$ $$=\sum_{i=1}^n (w_i y_i - (\alpha w_i + \beta w_i x_i))^2 .$$ But that's exactly what the OLS black box is minimizing when given the data table consisting of the "weighted" tuples $(w_i, w_i x_i, w_i y_i)$. So, applying the OLS formulas to these weighted tuples gives the formulas you seek.
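A quick numerical check of the argument (a Python sketch; the helper name is made up): run the OLS "black box" on the transformed tuples $(w_i, w_i x_i, w_i y_i)$. Note there is no column of ones - the coefficient on the $w_i$ column plays the role of $\alpha$:

```python
def wls_via_transformed_ols(x, y, w):
    """Weighted least squares by applying plain OLS (normal equations)
    to the columns (w_i, w_i * x_i) with response w_i * y_i."""
    s11 = sum(wi * wi for wi in w)                     # <w, w>
    s12 = sum(wi * wi * xi for wi, xi in zip(w, x))    # <w, w*x>
    s22 = sum((wi * xi) ** 2 for wi, xi in zip(w, x))  # <w*x, w*x>
    t1 = sum(wi * wi * yi for wi, yi in zip(w, y))     # <w, w*y>
    t2 = sum(wi * wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = s11 * s22 - s12 * s12
    alpha = (t1 * s22 - t2 * s12) / det  # coefficient on the w column
    beta = (s11 * t2 - s12 * t1) / det   # coefficient on the w*x column
    return alpha, beta

# Exactly linear data y = 3x - 2 is recovered for any positive weights
x = [0.0, 1.0, 2.0, 3.0]
y = [3.0 * xi - 2.0 for xi in x]
w = [1.0, 2.0, 0.5, 3.0]
alpha, beta = wls_via_transformed_ols(x, y, w)
```

On exactly linear data the fit recovers $\alpha = -2$, $\beta = 3$ regardless of the weights, which is what the algebra above predicts.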
16,354
Formula for weighted simple linear regression
The answer from whuber gives the intuition behind the maths, which is nice to have, but I could still not figure out the formulas (i.e. where I should put the weights). After some search on the web, I found these slides, which give the following. You want to minimize the following error:

$$\sum_{i=1}^n w_i(y_i - (\alpha + \beta x_i))^2$$

Then the optimal pair $(\hat\alpha,\hat\beta)$ is:

$$\hat\alpha = \overline y_w - \hat\beta \overline x_w$$
$$\hat\beta = \frac{\sum_{i=1}^n w_i(x_i-\overline x_w)(y_i-\overline y_w)}{\sum_{i=1}^n w_i(x_i-\overline x_w)^2}$$

where $\overline x_w$ and $\overline y_w$ are the weighted means:

$$\overline x_w = \frac{\sum_{i=1}^n w_ix_i}{\sum_{i=1}^n w_i} \qquad \overline y_w = \frac{\sum_{i=1}^n w_iy_i}{\sum_{i=1}^n w_i}$$

(Here the weight $w_i$ multiplies the squared residual directly; in whuber's convention, where the weights are written $w_i^2$, the same formulas hold with $w_i^2$ in place of $w_i$.)
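The formulas above translate directly to code (a minimal Python sketch; on exactly linear data, any positive weights recover the true line, which makes a convenient sanity check):

```python
def weighted_fit(x, y, w):
    """alpha-hat and beta-hat from the closed-form weighted formulas
    (weights w_i multiply the squared residuals)."""
    sw = sum(w)
    xw = sum(wi * xi for wi, xi in zip(w, x)) / sw  # weighted mean of x
    yw = sum(wi * yi for wi, yi in zip(w, y)) / sw  # weighted mean of y
    beta = (sum(wi * (xi - xw) * (yi - yw) for wi, xi, yi in zip(w, x, y))
            / sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x)))
    alpha = yw - beta * xw
    return alpha, beta

# Exactly linear data y = 2x + 1: the fit is exact for any positive weights
x = [0.0, 1.0, 2.0, 4.0]
y = [2.0 * xi + 1.0 for xi in x]
w = [1.0, 3.0, 2.0, 0.5]
alpha, beta = weighted_fit(x, y, w)
```

With equal weights this reduces to the usual OLS slope and intercept formulas, since the weighted means become ordinary means.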
16,355
What are the differences between "Mixed Effects Modelling" and "Latent Growth Modelling"?
LGM can be translated to a MEM and vice versa, so these models are actually the same. I discuss the comparison in the chapter on LGM in my multilevel book, the draft of that chapter is on my homepage at http://www.joophox.net/papers/chap14.pdf
16,356
What are the differences between "Mixed Effects Modelling" and "Latent Growth Modelling"?
Here is what I found when looking into this topic. I'm not a stats person, so I tried to summarise how I understood it using relatively basic concepts :-)

These two frameworks treat "time" differently:

- MEM requires nested data structures (e.g. students nested within classrooms): time is treated as an independent variable at the lowest level, with the individual on the second level.
- LGM adopts a latent variable approach and incorporates time via factor loadings (this answer elaborates more on how such factor loadings, or "time scores", work).

This difference leads to different strengths of both frameworks in handling certain data. For example, in the MEM framework it is easy to add more levels (e.g. students nested in classrooms nested in schools), whilst in LGM it is possible to model measurement error, as well as to embed it in a larger path model by combining several growth curves, or by using growth factors as predictors for outcome variables. However, recent developments have blurred the differences between these frameworks, and they have been termed the "unequal twin" by some researchers.

Essentially, MEM is a univariate approach, with time points treated as observations of the same variable, whereas LGM is a multivariate approach, with each time point treated as a separate variable. The mean and covariance structure of the latent variables in LGM correspond to the fixed and random effects in MEM, making it possible to specify the same model using either framework with identical results. So rather than considering LGM as a special case of MEM, I see it as a special case of a factor analysis model with factor loadings fixed in such a way that the interpretation of the latent (growth) factors is possible.
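The correspondence can be written out explicitly for the standard linear-growth case (a sketch; the notation here is mine, not from the original answer). The MEM for individual $i$ at time $t$,

$$y_{it} = (\gamma_{00} + u_{0i}) + (\gamma_{10} + u_{1i})\,t + \varepsilon_{it},$$

matches the LGM

$$y_{it} = \eta_{0i} + \lambda_t\,\eta_{1i} + \varepsilon_{it}, \qquad \lambda_t = t \ \text{(fixed loadings)},$$

with the latent means $(\mathbb{E}[\eta_{0i}], \mathbb{E}[\eta_{1i}])$ playing the role of the fixed effects $(\gamma_{00}, \gamma_{10})$, and the covariance matrix of the growth factors $(\eta_{0i}, \eta_{1i})$ that of the random effects $(u_{0i}, u_{1i})$. Fixing the loadings $\lambda_t$ to the observation times is exactly the constraint that makes the factor model interpretable as growth.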
16,357
Why is Propensity Score Matching better than just Matching?
The procedure you described is not propensity score matching but rather propensity score subclassification. In propensity score matching, pairs of units are selected based on the difference between their propensity scores, and unpaired units are dropped. Both methods are popular ways of using propensity scores to reduce imbalance that causes confounding bias in observational studies. In propensity score matching, the distance between two units is the difference between their propensity scores, and propensity scores are computed from the covariates, so by propensity score matching, you are matching based on a distance measure and covariate values. There are other distance measures that don't involve the propensity score that are frequently used in matching, like the Mahalanobis distance. Some studies show the Mahalanobis distance works better than the propensity score difference as a distance measure and some studies show it doesn't. The relative performance of each depends on the unique characteristics of the dataset; there is no way to provide a single rule that is always true about which method is better. Both should be tried. You can also include the propensity score as a covariate in the Mahalanobis distance. If your question is more about why we would ever do propensity score subclassification when we could do propensity score matching, there are a few considerations. As before, you should always use whichever method yields the best balance in your sample. Propensity score subclassification may do a better job at achieving balance in some datasets and propensity score matching in others. There is no reason to unilaterally decide to use one method over another. Subclassification allows you to estimate the ATT or ATE, whereas most matching methods only allow the ATT. Subclassification is closely related to propensity score weighting when used in certain ways, whereas matching typically doesn't assign nonuniform weights to individuals. 
With matching, you can customize the specification more (e.g., by using a caliper, by changing the ratio of controls to treated, etc.), whereas with subclassification the opportunities for customization are more limited. The distinction between matching and subclassification is blurred in the face of full matching, which is a hybrid between the two that often performs better than each. Some papers have compared the performance of the two methods, but as I mentioned before, it is important not to rely on general results and instead try both methods in your sample. Check out the documentation for the MatchIt R package which goes into detail on several matching methods and discusses some of their relative merits and methods of customization.
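For a concrete picture of pair matching on the propensity score, here is a toy Python sketch of greedy 1:1 nearest-neighbour matching without replacement (real implementations such as MatchIt also offer optimal matching, calipers, variable ratios, etc. - this is only the simplest scheme):

```python
def greedy_match(treated_ps, control_ps):
    """Greedy 1:1 nearest-neighbour matching on propensity scores,
    without replacement. Returns a list of (treated_idx, control_idx);
    unmatched controls are simply dropped."""
    available = set(range(len(control_ps)))
    pairs = []
    for ti, tp in enumerate(treated_ps):
        if not available:
            break  # no controls left to pair with
        ci = min(available, key=lambda j: abs(control_ps[j] - tp))
        available.remove(ci)
        pairs.append((ti, ci))
    return pairs

treated_ps = [0.8, 0.3]
control_ps = [0.31, 0.79, 0.5]
pairs = greedy_match(treated_ps, control_ps)
```

In this toy example each treated unit is paired with the control whose propensity score is closest (0.8 with 0.79, 0.3 with 0.31), and the remaining control is discarded, which is the "unpaired units are dropped" behaviour described above.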
16,358
Why is Propensity Score Matching better than just Matching?
Let's step back and think more broadly about how you could match given some data X.

Exact or Cell Matching

This is hard to do with continuous Xs. You could try rounding/discretizing each variable, but that introduces some measurement error. If you choose to proceed anyway, you then interact these new variables to define cells. Here you run into the curse of dimensionality as X gets big: if you have five variables, each with three values, you have $3^5 = 243$ cells. So what to do?

Inexact Matching

Inexact matching procedures reduce the dimension of the problem by defining a distance metric on X and then matching using the distance rather than the X. Mahalanobis distance is one common distance metric. But you can have two observations that are quite far apart in Mahalanobis distance yet have the same probability of treatment. In many applications, if being bald and being chubby both increase the propensity to seek treatment, then it could be OK to compare a treated person who is likely to be treated because he is bald with a control person who has a similar probability because he is chubby. In the PSM framework, this creates a larger pool to match from.

Asymptotically, all inexact matching schemes are consistent, as they all tend towards exact matches (on X or on the propensity score) as the sample gets larger. However, they can yield very different answers in finite samples, and all are biased in finite samples. PSM is maybe less intuitive than finding similar people, but the goal is not to find similar people: it's to find people who have a similar probability of being treated.
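For concreteness, the Mahalanobis distance mentioned above, sketched in Python for two covariates (the 2x2 covariance matrix is inverted by hand here; in practice you would estimate it from the pooled sample and use a linear-algebra library):

```python
def mahalanobis_2d(u, v, cov):
    """Mahalanobis distance between 2-vectors u and v,
    given a 2x2 covariance matrix cov (as nested tuples)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # inverse covariance
    dx = (u[0] - v[0], u[1] - v[1])
    # quadratic form dx' * inv * dx
    q = ((dx[0] * inv[0][0] + dx[1] * inv[1][0]) * dx[0]
         + (dx[0] * inv[0][1] + dx[1] * inv[1][1]) * dx[1])
    return q ** 0.5

# With the identity covariance this reduces to the Euclidean distance
d = mahalanobis_2d((0.0, 0.0), (3.0, 4.0), ((1.0, 0.0), (0.0, 1.0)))
```

The covariance matrix rescales and decorrelates the covariates, so a one-unit difference "costs" the same in every direction - that is what makes it a sensible metric for matching on raw X.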
16,359
Expectation of the Maximum of iid Gumbel Variables
I appreciate the work exhibited in your answer: thank you for that contribution. The purpose of this post is to provide a simpler demonstration. The value of simplicity is revelation: we can easily obtain the entire distribution of the maximum, not just its expectation.

Ignore $\mu$ by absorbing it into the $\delta_i$ and assuming the $\epsilon_i$ all have a Gumbel$(0,1)$ distribution. (That is, replace each $\epsilon_i$ by $\epsilon_i-\mu$ and change $\delta_i$ to $\delta_i+\mu$.) This does not change the random variable $$X = \max_{i}(\delta_i + \epsilon_i) = \max_i((\delta_i+\mu) + (\epsilon_i-\mu)).$$

The independence of the $\epsilon_i$ implies for all real $x$ that $\Pr(X\le x)$ is the product of the individual chances $\Pr(\delta_i+\epsilon_i\le x)$. Taking logs and applying basic properties of exponentials yields $$\eqalign{ \log \Pr(X\le x) &= \log\prod_{i}\Pr(\delta_i + \epsilon_i \le x) = \sum_i \log\Pr(\epsilon_i \le x - \delta_i)\\ &= -\sum_ie^{\delta_i}\, e^{-x} = -\exp\left(-x + \log\sum_i e^{\delta_i}\right). }$$

This is the logarithm of the CDF of a Gumbel distribution with location parameter $\lambda=\log\sum_i e^{\delta_i}.$ That is, $X$ has a Gumbel$\left(\log\sum_i e^{\delta_i}, 1\right)$ distribution. This is much more information than requested. The mean of such a distribution is $\gamma+\lambda,$ entailing $$\mathbb{E}[X] = \gamma + \log\sum_i e^{\delta_i},$$ QED.
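The distributional claim is easy to check by simulation. The sketch below (numpy assumed; the $\delta_i$ values are arbitrary) compares the empirical CDF of the simulated maximum against the Gumbel$(\lambda, 1)$ CDF at a few points, and the simulated mean against $\gamma + \lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = np.array([0.5, -1.0, 2.0, 0.0])      # arbitrary illustrative values
lam = np.log(np.exp(delta).sum())            # location parameter of the claimed Gumbel law

n = 200_000
eps = rng.gumbel(loc=0.0, scale=1.0, size=(n, delta.size))
X = (delta + eps).max(axis=1)

# Empirical CDF of X versus the Gumbel(lam, 1) CDF exp(-exp(-(x - lam))).
for x in (lam - 1.0, lam, lam + 1.0):
    print(x, (X <= x).mean(), np.exp(-np.exp(-(x - lam))))

# The mean should be gamma + lam (gamma = Euler–Mascheroni constant).
gamma = 0.5772156649015329
print(X.mean(), gamma + lam)
```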
Expectation of the Maximum of iid Gumbel Variables
It turns out that an Econometrica article by Kenneth Small and Harvey Rosen showed this in 1981, but in a very specialized context, so the result requires a lot of digging, not to mention some training in economics. I decided to prove it in a way I find more accessible.

Proof: Let $J$ be the number of alternatives. Depending on the values of the vector $\boldsymbol{\epsilon} = \{\epsilon_1, ..., \epsilon_J\}$, the function $\max_i(\delta_i + \epsilon_i)$ takes on different values.

First, focus on the values of $\boldsymbol{\epsilon}$ such that $\max_i (\delta_i + \epsilon_i) = \delta_1 + \epsilon_1$. That is, we will integrate $\delta_1 + \epsilon_1$ over the set $M_1 \equiv \{\boldsymbol\epsilon : \delta_1 + \epsilon_1 > \delta_j + \epsilon_j, j \neq 1\}$: \begin{equation} \begin{split} E_{\boldsymbol \epsilon \in M_1} [\max_i(\delta_i + \epsilon_i)] = \hspace{3.25in}\\ \int^{\infty}_{-\infty} (\delta_1 + \epsilon_1)f(\epsilon_1) \left[\int_{-\infty}^{\delta_1 + \epsilon_1 - \delta_2} ... \int_{-\infty}^{\delta_1 + \epsilon_1 - \delta_J}f(\epsilon_2) ...f(\epsilon_J) d\epsilon_2 ...d\epsilon_J \right] d\epsilon_1 = \\ \int^{\infty}_{-\infty} (\delta_1 + \epsilon_1)f(\epsilon_1) \left(\int_{-\infty}^{\delta_1 + \epsilon_1 - \delta_2} f(\epsilon_2)d\epsilon_2 \right) ... \left( \int_{-\infty}^{\delta_1 + \epsilon_1 - \delta_J}f(\epsilon_J) d\epsilon_J \right) d\epsilon_1 = \\ \int^{\infty}_{-\infty} \left(\delta_1 + \epsilon_1\right) f(\epsilon_1) F(\delta_1 + \epsilon_1 - \delta_2) ...F(\delta_1 + \epsilon_1 - \delta_J) d\epsilon_1. \end{split} \end{equation} The term above is the first of $J$ such terms in $E[\max_i \left(\delta_i + \epsilon_i \right)]$. Specifically, \begin{equation} E\left[\max_i \left(\delta_i + \epsilon_i \right)\right] = \sum_i E_{\boldsymbol \epsilon \in M_i}\left[\max_i\left( \delta_i + \epsilon_i \right) \right]. \end{equation} Now we apply the functional form of the Gumbel distribution. 
This gives \begin{equation} \begin{split} &E_{\boldsymbol \epsilon \in M_i}\left[\max_i\left( \delta_i + \epsilon_i \right) \right] = \hspace{2in} \\ &\int^{\infty}_{-\infty} \left(\delta_i + \epsilon_i\right)e^{\mu - \epsilon_i} e^{- e^{\mu - \epsilon_i}} \prod_{j \neq i} e^{-e^{\mu - \epsilon_i + \delta_j - \delta_i}}d\epsilon_i \\ =&\int^{\infty}_{-\infty} \left(\delta_i + \epsilon_i\right)e^{\mu - \epsilon_i } \prod_{j } e^{-e^{\mu - \epsilon_i + \delta_j - \delta_i}}d\epsilon_i \\ =&\int^{\infty}_{-\infty} \left(\delta_i + \epsilon_i \right) e^{\mu - \epsilon_i} \exp \Bigl\{ \sum_{j} -e^{\mu - \epsilon_i + \delta_j - \delta_i} \Bigr\}d\epsilon_i \\ =&\int^{\infty}_{-\infty} \left(\delta_i + \epsilon_i \right) e^{\mu - \epsilon_i} \exp \Bigl\{ -e^{\mu - \epsilon_i } \sum_{j} e^{ \delta_j - \delta_i} \Bigr\}d\epsilon_i, \end{split} \end{equation} where the second step comes from collecting one of the exponentiated terms into the product, along with the fact that $\delta_j - \delta_i = 0$ if $i = j$. Now we define $D_i \equiv \sum_j e^{\delta_j - \delta_i}$, and make the substitution $x = D_i\hspace{0.5mm} e^{\mu - \epsilon_i}$, so that $ dx = -D_i e^{\mu - \epsilon_i}d\epsilon_i \Rightarrow -\frac{dx} {D_i} = e^{\mu - \epsilon_i}d\epsilon_i$ and $\epsilon_i = \mu - \log\left(\frac{x}{D_i}\right)$. 
Note that as $\epsilon_i$ approaches infinity, $x$ approaches 0, and as $\epsilon_i$ approaches negative infinity, $x$ approaches infinity: \begin{equation} \begin{split} &\hspace{3mm} E_{\boldsymbol \epsilon \in M_i}\left[\max_i\left( \delta_i + \epsilon_i \right) \right] = \\ &\hspace{3mm}\int^{0}_{\infty} \left(\delta_i + \mu - \log\left[\frac{x}{D_i} \right]\right)\left(-\frac{1}{D_i}\right)\exp\left\{ -x\right\}dx \\ =&\hspace{3mm}\frac{1}{D_i}\int^{\infty}_{0} \left(\delta_i + \mu - \log\left[\frac{x}{D_i} \right]\right)e^{ -x}dx \\ =&\hspace{3mm} \frac{\delta_i + \mu}{D_i}\int^{\infty}_{0} e^{-x}dx -\frac{1}{D_i}\int^{\infty}_{0} \log[x]e^{-x}dx + \frac{\log[D_i]} {D_i} \int^{\infty}_{0}e^{-x}dx.\\ \end{split} \end{equation} The Gamma Function is defined as $\Gamma(t) = \int^{\infty}_{0} x^{t - 1}e^{-x}dx$. For values of $t$ which are positive integers, this is equivalent to $\Gamma(t) = (t - 1)!$, so $\Gamma(1) = 0! = 1$. In addition, it is known that the Euler–Mascheroni constant, $\gamma \approx 0.57722$ satisfies $$\gamma = -\int^{\infty}_{0} \log[x] e^{-x}dx.$$ Applying these facts gives \begin{equation} \begin{split} &\hspace{3mm} E_{\boldsymbol \epsilon \in M_i}\left[\max_i\left( \delta_i + \epsilon_i \right) \right] = \frac{\delta_i + \mu + \gamma + \log[D_i]}{D_i}. \end{split} \end{equation} Then we sum over $i$ to get \begin{equation} \begin{split} &\hspace{3mm} E\left[\max_i\left( \delta_i + \epsilon_i \right) \right] = \sum_i \frac{\delta_i + \mu + \gamma + \log[D_i]}{D_i}. \end{split} \end{equation} Recall that $D_i = \sum_j e^{\delta_j - \delta_i} = \frac{\sum_j e^{\delta_j}} {e^{\delta_i}}$. Notice that the familiar logit choice probabilities $P_i = \frac{e^{\delta_i}}{\sum_j e^{\delta_j}}$ are inverses of the $D_i$'s, or in other words $P_i = 1/D_i$. Also note that $\sum_i P_i = 1$. 
Then we have \begin{equation} \begin{split} \hspace{3mm} E\left[\max_i\left( \delta_i + \epsilon_i \right) \right] =& \sum_i P_i\left(\delta_i + \mu + \gamma + \log[D_i]\right)\\ =&\hspace{2mm} (\mu + \gamma) \sum_i P_i + \sum_i P_i\delta_i + \sum_iP_i \log[D_i] \\ =& \hspace{2mm} \mu + \gamma + \sum_i P_i \delta_i + \sum_i P_i \log\left[\frac{\sum_j e^{\delta_j}} {e^{\delta_i}} \right]\\ =& \mu + \gamma + \sum_i P_i \delta_i + \sum_i P_i \log\left[\sum_j e^{\delta_j}\right] - \sum_i P_i \log[e^{\delta_i}]\\ =& \mu + \gamma + \sum_i P_i \delta_i + \log\left[ \sum_j e^{\delta_j}\right] \sum_i P_i - \sum_i P_i \delta_i \\ =& \mu + \gamma + \log\left[ \sum_j \exp\left\{ \delta_j \right\}\right] .\end{split} \end{equation} Q.E.D.
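The final formula $E[\max_i(\delta_i + \epsilon_i)] = \mu + \gamma + \log\sum_j e^{\delta_j}$, and the identity $P_i = 1/D_i$, can both be sanity-checked numerically. The sketch below is a simulation (numpy assumed; $\mu$ and the $\delta_i$ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 0.7                                   # Gumbel location parameter (arbitrary)
delta = np.array([1.0, 0.0, -0.5])         # arbitrary alternative-specific terms
gamma = 0.5772156649015329                 # Euler–Mascheroni constant

n = 500_000
eps = rng.gumbel(loc=mu, scale=1.0, size=(n, delta.size))
sim = (delta + eps).max(axis=1).mean()
theory = mu + gamma + np.log(np.exp(delta).sum())
print(sim, theory)

# The D_i from the proof; the logit choice probabilities are P_i = 1/D_i.
D = np.exp(delta).sum() / np.exp(delta)
P = 1.0 / D
print(P, P.sum())   # the P_i sum to one, as used in the last derivation
```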
Best suggested textbooks on Bootstrap resampling?
There are two "classic" ones: Efron, B. & Tibshirani, R. J. (1993). An introduction to the bootstrap. London: Chapman & Hall/CRC. Davison, A. C. & Hinkley, D. V. (2009). Bootstrap methods and their application. New York, NY: Cambridge University Press. The first one is very readable and gives you a good idea of what the bootstrap is and of the general reasoning behind the method. It also provides many examples and practical hints about using the bootstrap in real life. The second is a really extensive review of different uses of the bootstrap, with lots of examples, including code written in R. I would say that those two alone give you a pretty complete overview of the method and can lead you from the basics up to pretty advanced topics. If you don't know much about the bootstrap yet, I'd suggest starting with Efron & Tibshirani, since it is written in much simpler language and walks you through the topic step by step from the basics. Davison & Hinkley is a little bit tougher to read but provides much practical information and detail.
Best suggested textbooks on Bootstrap resampling?
It might be worth going back to the origins of bootstrapping and learning a bit about jackknifing from sources such as Quenouille and Tukey. Personally, the book "Data Analysis and Regression: A Second Course in Statistics" by Mosteller and Tukey really helped me when I was first learning about bootstrapping.
Calculate the confidence interval for the mean of a beta distribution
While there are specific methods for calculating confidence intervals for the parameters of a beta distribution, I'll describe a few general methods that can be used for (almost) all sorts of distributions, including the beta distribution, and are easily implemented in R.

Profile likelihood confidence intervals

Let's begin with maximum likelihood estimation and the corresponding profile likelihood confidence intervals. First we need some sample data:

    # Sample size
    n = 10
    # Parameters of the beta distribution
    alpha = 10
    beta = 1.4
    # Simulate some data
    set.seed(1)
    x = rbeta(n, alpha, beta)
    # Note that the distribution is not symmetrical
    curve(dbeta(x, alpha, beta))

The real/theoretical mean is

    > alpha/(alpha+beta)
    0.877193

Now we have to create a function for calculating the negative log likelihood of a sample from the beta distribution, with the mean as one of the parameters. We can use the dbeta() function, but since it doesn't use a parametrisation involving the mean, we have to express its parameters (α and β) as a function of the mean and some other parameter (like the standard deviation):

    # Negative log likelihood for the beta distribution
    nloglikbeta = function(mu, sig) {
      alpha = mu^2*(1-mu)/sig^2-mu
      beta = alpha*(1/mu-1)
      -sum(dbeta(x, alpha, beta, log=TRUE))
    }

To find the maximum likelihood estimate, we can use the mle() function in the stats4 library:

    library(stats4)
    est = mle(nloglikbeta, start=list(mu=mean(x), sig=sd(x)))

Just ignore the warnings for now. They're caused by the optimisation algorithm trying invalid values for the parameters, giving negative values for α and/or β. (To avoid the warnings, you can add a lower argument and change the optimisation method used.) Now we have both estimates and confidence intervals for our two parameters:

    > est
    Call:
    mle(minuslogl = nloglikbeta, start = list(mu = mean(x), sig = sd(x)))
    Coefficients:
            mu        sig
    0.87304148 0.07129112
    > confint(est)
    Profiling...
              2.5 %    97.5 %
    mu  0.81336555 0.9120350
    sig 0.04679421 0.1276783

Note that, as expected, the confidence intervals are not symmetrical:

    par(mfrow=c(1,2))
    plot(profile(est))  # Profile likelihood plot

(The second-outermost magenta lines show the 95% confidence interval.) Also note that even with just 10 observations, we get very good estimates (a narrow confidence interval).

As an alternative to mle(), you can use the fitdistr() function from the MASS package. This too calculates the maximum likelihood estimator, and has the advantage that you only need to supply the density, not the negative log likelihood, but it doesn't give you profile likelihood confidence intervals, only asymptotic (symmetrical) ones. A better option is mle2() (and related functions) from the bbmle package, which is somewhat more flexible and powerful than mle() and gives slightly nicer plots.

Bootstrap confidence intervals

Another option is to use the bootstrap. It's extremely easy to use in R, and you don't even have to supply a density function:

    > library(simpleboot)
    > x.boot = one.boot(x, mean, R=10^4)
    > hist(x.boot)                 # Looks good
    > boot.ci(x.boot, type="bca")  # Confidence interval
    BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
    Based on 10000 bootstrap replicates
    CALL :
    boot.ci(boot.out = x.boot, type = "bca")
    Intervals :
    Level       BCa
    95%   ( 0.8246,  0.9132 )
    Calculations and Intervals on Original Scale

The bootstrap has the added advantage that it works even if your data don't come from a beta distribution.

Asymptotic confidence intervals

For confidence intervals on the mean, let's not forget the good old asymptotic confidence intervals based on the central limit theorem (and the t-distribution). As long as we have either a large sample size (so that the CLT applies and the distribution of the sample mean is approximately normal) or large values of both α and β (so that the beta distribution itself is approximately normal), they work well. Here we have neither, but the confidence interval still isn't too bad:

    > t.test(x)$conf.int
    [1] 0.8190565 0.9268349

For just slightly larger values of n (and not too extreme values of the two parameters), the asymptotic confidence interval works exceedingly well.
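The same bootstrap idea translates directly to other languages. Below is a dependency-light Python sketch using plain percentile intervals (simpler than the BCa intervals used above, so the endpoints will differ somewhat; the simulated data also differ from the R run):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.beta(10, 1.4, size=10)   # small sample from Beta(10, 1.4); true mean ≈ 0.877

# Percentile bootstrap: resample with replacement, take the mean of each resample,
# and read the CI off the quantiles of the bootstrap distribution.
R = 10_000
boot_means = rng.choice(x, size=(R, x.size), replace=True).mean(axis=1)
lo, hi = np.quantile(boot_means, [0.025, 0.975])
print(x.mean(), (lo, hi))
```

As with the R version, no density function is needed, and nothing here assumes the data are beta-distributed.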
Calculate the confidence interval for the mean of a beta distribution
Check out Beta regression. A good introduction to doing it in R can be found here: http://cran.r-project.org/web/packages/betareg/vignettes/betareg.pdf Another (really easy) way of constructing a confidence interval would be to use a non-parametric bootstrap approach. Wikipedia has good info: http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29 There's also a nice video here: http://www.youtube.com/watch?v=ZCXg64l9R_4
Intuition / interpretation of a distribution of eigenvalues of a correlation matrix?
I tend to hear that usually 3 largest eigenvalues are the most important, while those close to zero are noise

You can test for that. See the paper linked in this post for more detail. Again, if you're dealing with financial time series, you might want to correct for leptokurticity first (i.e. consider the series of garch-adjusted returns, not the raw returns).

I've seen a few research papers investigating how naturally occurring eigenvalue distributions differ from those calculated from random correlation matrices (again, distinguishing noise from signal).

Edward: Usually, one would do it the other way around: look at the multivariate distribution of eigenvalues (of correlation matrices) coming from the application you want. Once you have identified a credible candidate for the distribution of eigenvalues, it should be fairly easy to generate from it. The best procedure for identifying the multivariate distribution of your eigenvalues depends on how many assets you want to consider simultaneously (i.e. what the dimensions of your correlation matrix are). There is a neat trick if $p\leq 10$ ($p$ being the number of assets).

Edit (comments by Shabbychef): four-step procedure:

1. Suppose you have $j=1,...,J$ sub-samples of multivariate data. You need an estimator of the variance-covariance matrix $\tilde{C}_j$ for each sub-sample $j$ (you could use the classical estimator or a robust alternative such as the fast MCD, which is well implemented in Matlab, SAS, S, R, ...). As usual, if you're dealing with financial time series you would want to consider the series of garch-adjusted returns, not raw returns.

2. For each sub-sample $j$, compute $\tilde{\Lambda}_j=(\log(\tilde{\lambda}_1^j),...,\log(\tilde{\lambda}_p^j))$, the eigenvalues of $\tilde{C}_j$.

3. Compute $CV(\tilde{\Lambda})$, the convex hull of the $J \times p$ matrix whose $j$-th row is $\tilde{\Lambda}_j$ (again, this is well implemented in Matlab, R, ...).
Draw points at random from inside $CV(\tilde{\Lambda})$ (this done by giving weight $w_i$ to each of the edges of $CV(\tilde{\Lambda})$ where $w_i=\frac{\gamma_i}{\sum_{i=1}^{p}\gamma_i}$, where $\gamma_i$ is a draw from an unit exponential distribution (more details here). A limitation is that fast computation of the convex hull of a series of points becomes extremely slow when the number of dimensions is larger than 10. $J\geq2$
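The generation step above can be sketched as follows. This is only an illustrative sketch on made-up Gaussian data (all variable names and sizes are hypothetical): instead of computing the convex hull explicitly, it draws a random convex combination of the $J$ log-eigenvalue vectors, using normalised unit-exponential weights, which are Dirichlet$(1,...,1)$ weights and therefore yield a point inside their convex hull.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: J sub-samples of n_obs observations on p assets.
J, n_obs, p = 8, 200, 4
subsamples = [rng.standard_normal((n_obs, p)) for _ in range(J)]

# Steps 1-2: log-eigenvalues of each sub-sample covariance matrix.
log_eigs = np.array([
    np.log(np.linalg.eigvalsh(np.cov(x, rowvar=False)))
    for x in subsamples
])                                # shape (J, p)

# Steps 3-4, simplified: a random convex combination of the J vectors.
# Normalised unit-exponential draws are Dirichlet(1,...,1) weights, so the
# sampled point lies inside the convex hull of the log-eigenvalue vectors.
gamma = rng.exponential(size=J)
w = gamma / gamma.sum()
sampled = w @ log_eigs            # a new p-vector of log-eigenvalues

# Sanity checks: weights sum to one; each coordinate stays between the
# observed extremes (a property of any convex combination).
assert abs(w.sum() - 1.0) < 1e-12
assert np.all(sampled >= log_eigs.min(axis=0))
assert np.all(sampled <= log_eigs.max(axis=0))
```

Exponentiating `sampled` gives a simulated eigenvalue vector on the original scale.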
Intuition / interpretation of a distribution of eigenvalues of a correlation matrix?
Eigenvalues give the magnitudes of the principal components of the data spread. (figure source: yaroslavvb.com) The first dataset was generated from a Gaussian with covariance matrix $\left(\begin{matrix}3&0\\0&1\end{matrix}\right)$; the second dataset is the first dataset rotated by $\pi/4$.
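The rotation invariance of the eigenvalues is easy to check numerically; a minimal sketch of the two covariance matrices described above:

```python
import numpy as np

C = np.diag([3.0, 1.0])                   # covariance of the first dataset
theta = np.pi / 4                         # rotation used for the second one
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
C_rot = R @ C @ R.T                       # covariance after rotating by pi/4

# The eigenvalues (variances along the principal axes) are unchanged by the
# rotation; only the eigenvectors (the axis directions) rotate.
assert np.allclose(np.linalg.eigvalsh(C), [1.0, 3.0])
assert np.allclose(np.linalg.eigvalsh(C_rot), [1.0, 3.0])
```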
Intuition / interpretation of a distribution of eigenvalues of a correlation matrix?
One way I have studied this problem in the past is to construct the 'eigenportfolios' of the correlation matrix. That is, take the eigenvector associated with the $k$th largest eigenvalue of the correlation matrix and scale it to a gross leverage of 1 (i.e. make the absolute sum of the vector equal to one). Then see if you can find any real physical or financial connection between the stocks which have large representation in the portfolio. Usually the first eigenportfolio is almost equally weighted in every name, which is to say it is the 'market' portfolio consisting of all assets with equal dollar weights. The second eigenportfolio may have some semantic meaning, depending on which time period you look over: e.g. mostly energy stocks, or bank stocks, etc. In my experience, you would be hard pressed to make any story out of the fifth eigenportfolio or beyond, and this depends in part on universe selection and the time period considered. This is just fine, because usually the fifth eigenvalue or so is not too far beyond the limits imposed by the Marchenko-Pastur distribution.
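A rough sketch of the eigenportfolio construction, using a made-up one-factor correlation matrix rather than real stock data (the loadings and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up one-factor correlation matrix for 5 "stocks": a strong common
# factor plays the role of the market mode.
loadings = 0.8 + 0.1 * rng.random(5)
corr = np.outer(loadings, loadings) + np.diag(1 - loadings**2)

eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
v1 = eigvecs[:, -1]                       # eigenvector of the largest one
if v1.sum() < 0:                          # eigh's sign choice is arbitrary
    v1 = -v1

# Scale to a gross leverage of 1: absolute weights sum to one.
w1 = v1 / np.abs(v1).sum()

assert abs(np.abs(w1).sum() - 1.0) < 1e-12
# The first eigenportfolio comes out close to equal-weighted (~0.2 a name).
assert np.all(w1 > 0.1) and np.all(w1 < 0.3)
```

Repeating this with `eigvecs[:, -2]`, `eigvecs[:, -3]`, etc. gives the second and later eigenportfolios, whose weights mix long and short positions.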
Intuition / interpretation of a distribution of eigenvalues of a correlation matrix?
Each joint observation of your $N$ variables defines a point in an $N$-dimensional space. This cloud of points is often ellipsoid-like (if it is not, then you should not treat the variables as linearly related, and the correlation does not mean much). The axes of the ellipsoid correspond to the eigenvectors of the correlation matrix, and their "strength" to the eigenvalues. The proof can be found in any time series analysis textbook that covers Principal Component Analysis. The loose intuition of why PCA or other eigenvalue-based methods matter is that you have some process that has some "main" causes, and the rest is "noise". If we make the ansatz that the noise is loosely equal in every dimension (because we might not know anything about it, we assume it is not particularly directed), then the few large eigenvalues pick out the directions of the "main" causes, while the remaining, roughly equal small eigenvalues correspond to the noise.
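A small numerical illustration of the axis/eigenvector correspondence, using an assumed 2-by-2 covariance for concreteness:

```python
import numpy as np

# Assumed covariance of two positively correlated variables: the point
# cloud is an ellipse elongated along the (1, 1) diagonal.
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(C)      # ascending eigenvalues

# The strongest axis (eigenvalue 3) points along (1, 1)/sqrt(2); the weak
# axis (eigenvalue 1) along (1, -1)/sqrt(2), up to sign.
assert np.allclose(eigvals, [1.0, 3.0])
assert np.allclose(np.abs(eigvecs[:, -1]), [1 / np.sqrt(2)] * 2)
```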
What is the PDF for the minimum difference between a random number and a set of random numbers
If you had been looking for the distance to the next value above, and if you inserted an extra value at $1$ so this always had an answer, then by rotational symmetry the distribution of these distances $D$ would be the same as the distribution of the minimum of $n+1$ independent uniform random variables on $[0,1]$. That would have $P(D \le d) = 1-(1-d)^{n+1}$ and so density $f(d)=(n+1)(1-d)^n$ when $0 \le d \le 1$. For large $n$ and small $d$ this density can be approximated by $f(d) \approx n e^{-nd}$, explaining the exponential shape you have spotted. But your question is slightly more complicated, as you are interested in the signed distance to the nearest value above or below. As your Wikipedia link shows, the minimum of two i.i.d. exponential random variables with rate $\lambda$ is an exponential random variable with rate $2\lambda$. So you need to change the approximation to the density to reflect both the doubled rate and the possibility of negative values of $d$. The approximation actually becomes a Laplace distribution with $$f(d) \approx n e^{-2n|d|},$$ remembering this is for large $n$ and small $d$ (in particular the true density is $0$ unless $-\frac12 \lt d \lt \frac12$). As $n$ increases, this concentrates almost all the density at $0$, as in Bayequentist's answer that the limit is a Dirac delta distribution. With $n=10^6$ the approximation to the density would look like this, matching the shape of your simulated data.
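The Laplace approximation $f(d) \approx n e^{-2n|d|}$ implies the mean absolute distance is about $1/(2n)$, which a quick simulation can check (the sample size and trial count below are arbitrary):

```python
import random

random.seed(0)

n, trials = 500, 2000
mean_abs = 0.0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    b = random.random()
    mean_abs += min(abs(x - b) for x in xs)   # distance to nearest point
mean_abs /= trials

# Under f(d) ~ n * exp(-2n|d|), the mean absolute distance is 1/(2n),
# so mean_abs * 2n should be close to 1.
assert abs(mean_abs * 2 * n - 1.0) < 0.15
```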
What is the PDF for the minimum difference between a random number and a set of random numbers
When $N \to \infty$, $L_N$ becomes dense in $(0,1)$. Thus, the distance from any number in $(0,1)$ to the closest number in $L_N$ will approach 0 as $N \to \infty$. The distribution of distances approaches the Dirac delta distribution as $N \to \infty$. Here are some simulations: Here's a code snippet:

n <- 100000
Ln <- runif(n)
nSim <- 10000
distances <- rep(0, nSim)
for (i in 1:nSim) {
  b <- runif(1)
  distances[i] <- min(abs(Ln - b))
}
hist(distances, main = "N=100000")
What is the PDF for the minimum difference between a random number and a set of random numbers
is there a way that I can figure out what this distribution is exactly (for large but finite N)?

The difference of two standard Uniform random variables is Triangular(-1,0,1) with pdf $1-|x|$ on $(-1,1)$. Distance is the absolute value of the difference, which has pdf $$f(x) = 2(1-x), \quad 0<x<1,$$ with cdf $F(x) = 1-(1-x)^2$. Repeating the exercise $n$ times and taking the minimum distance is equivalent to finding the minimum ($1^{\text{st}}$) order statistic w.r.t. the parent pdf $f(x)$, which is given by $$g(x) = n\,[1-F(x)]^{n-1}f(x) = 2n(1-x)^{2n-1}, \quad 0<x<1,$$ where I am using the OrderStat function from the mathStatica package for Mathematica to automate the nitty gritties. The solution is a Power Function-type distribution: $1-x$ has pdf of the form $a x^{a-1}$, here with $a = 2n$. The following diagram compares a plot of the exact pdf of the minimum distance just derived, $g(x)$ (red dashed curve), to a Monte Carlo simulation (squiggly blue curve), when the sample size is $n=10$: Simulation: As you are using Mathematica for simulation, here is the code I am using for the data simulation in Mathematica: data = Table[Min[Abs[RandomReal[{0, 1}, 10] - RandomReal[]]], 20000];
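The power-function form can also be checked by simulating the iid model the order-statistic argument assumes, i.e. drawing each of the $n$ distances from an independent Uniform pair (this sidesteps the slight dependence introduced when all distances share one endpoint). The implied cdf of the minimum is $G(x) = 1-(1-x)^{2n}$:

```python
import random

random.seed(0)

n, sims = 10, 20000
# The iid model behind the derivation: each of the n distances comes from
# an independent Uniform pair.
mins = [min(abs(random.random() - random.random()) for _ in range(n))
        for _ in range(sims)]

# Derived cdf of the minimum distance: G(x) = 1 - (1 - x)^(2n).
x = 0.05
G = 1 - (1 - x) ** (2 * n)                # about 0.64 for n = 10
empirical = sum(m <= x for m in mins) / sims
assert abs(empirical - G) < 0.02
```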
What is the PDF for the minimum difference between a random number and a set of random numbers
For your result to be larger than $d$, all numbers in your sample have to be at least $d$ away from $b$. The probability of that happening for any individual $x_i$ is just the probability mass outside the range $b \pm d$. Call that $p_{outside}$. The probability of that happening for all $x_i$ in your sample is $(p_{outside})^N$. If the $x_i$ are chosen uniformly from the unit interval, then $p_{outside}$ for $b$ more than $d$ from the boundary will be $1-2d$, and that gives $p_{outside}^N = (1-2d)^N$. For large $N$ and small $d$, that can be approximated by $e^{-2Nd}$.
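The quality of the $e^{-2Nd}$ approximation to $(1-2d)^N$ is easy to check numerically (the values of $N$ and $d$ below are arbitrary):

```python
import math

# Survival probability that all N points miss the interval b +/- d,
# versus its large-N / small-d approximation.
N, d = 1000, 0.001
exact = (1 - 2 * d) ** N                  # about 0.1351
approx = math.exp(-2 * N * d)             # about 0.1353
assert abs(exact - approx) < 1e-3
```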
What is the PDF for the minimum difference between a random number and a set of random numbers
Imagine you draw the last one first and denote it $X$; this does not change the problem formulation at all. For any $X_i \in L_N, i=1,...,N$, we know that $Y_i := |X-X_i|$ has some distribution (you may or may not want to compute this) and that the $Y_i$ are iid given $X$. From Wikipedia, we know that the CDF of their minimum is $$ F_{min}(y) = 1 - [1-F_Y(y)]^N. $$ For any fixed $y$, we know $F_Y(y) > 0$ for $y > 0$ and $F_Y(y) = 0$ otherwise. Take $N \to \infty$ and you get a CDF which is identically one for $y > 0$ and identically zero otherwise. This is a delta function centered at zero, as all the simulations above show. This holds for any $x \in (0,1)$, so the convergence always holds (albeit with varying convergence rates, perhaps).
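A quick numerical illustration of the limit, plugging in the distance-between-two-uniforms cdf $F_Y(y) = 1-(1-y)^2$ as a convenient stand-in for the (unspecified) conditional cdf:

```python
# F_min(y) = 1 - (1 - F_Y(y))^N converges to a step function at 0.
# F_Y here is the cdf of the distance between two independent uniforms,
# used as a convenient stand-in for the conditional cdf given X.
def F_min(y, N):
    F_Y = 1 - (1 - y) ** 2
    return 1 - (1 - F_Y) ** N

# For any fixed y > 0 the cdf tends to 1 as N grows; at y = 0 it stays 0.
assert F_min(0.05, 10) < 0.7
assert F_min(0.05, 1000) > 0.999
assert F_min(0.0, 1000) == 0.0
```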
Item-Item Collaborative Filtering vs Market Basket Analysis
@Antimony gave a perfect answer. I just wanted to add some theory that helped me to understand the difference between item-item collaborative filtering and market basket analysis, as well as the applications of these two methods. The family of algorithms used for performing market basket analysis is called association rules. Market basket analysis (or association rules) and collaborative filtering answer fundamentally different questions. Collaborative filtering answers the question “What are the items that users with interests similar to yours like?” (Fig. 1), whereas association rules answer the question “What are the items that frequently appear together?” The answer to the first question can be used to recommend products, videos, restaurants, hotels or any other content that you haven’t seen previously and that has been appreciated by a group of other users with interests similar to yours. The similarity of interests can be estimated from explicit indicators (for example, you and a group of other users gave the same ratings to the same products) or implicit indicators (for example, you and they purchased the same products). Collaborative filtering is widely used for building recommender systems. However, collaborative filtering is most effective when there is a rich history of user preferences or behavior. Association rules, in the meantime, can recommend products that you will very likely purchase based on the set of products currently in your basket (Fig. 2). For example, if you buy a burger and fries, you will probably want soda; or, in a very famous example, those who buy diapers tend also to buy beer. Association rules are independent of personal preference profiles, and to mine them you need a dataset of transactions from all users. Association rules and market basket analysis are generally used as an exploratory tool to mine a limited number of the most common rules, which can then be analysed by a human.
However, association rules can also be used for building recommender systems. Fig. 1: Illustration of collaborative filtering (source: Wikipedia). Fig. 2: A simple illustration of association rules.
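A minimal sketch of how the support and confidence of an association rule are computed from a transaction dataset; the baskets below are invented purely for illustration:

```python
# Invented baskets for illustration.
baskets = [
    {"burger", "fries", "soda"},
    {"burger", "fries"},
    {"diapers", "beer"},
    {"diapers", "beer", "fries"},
    {"burger", "soda"},
]

def support(items):
    """Fraction of baskets containing every item in `items`."""
    return sum(items <= basket for basket in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """Estimated P(consequent in basket | antecedent in basket)."""
    return support(antecedent | consequent) / support(antecedent)

assert support({"diapers", "beer"}) == 0.4
assert confidence({"diapers"}, {"beer"}) == 1.0            # diapers -> beer
assert confidence({"burger", "fries"}, {"soda"}) == 0.5    # burger+fries -> soda
```

Mining then amounts to enumerating itemsets whose support and confidence exceed chosen thresholds.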
Item-Item Collaborative Filtering vs Market Basket Analysis
An excellent question! One trivial difference that I can think of is that market basket (MB) analysis considers each basket separately. So if you buy the same stuff together once a month, each time it constitutes a different basket, and it likely also contains different items each time. Collaborative filtering (CF), however, considers baskets aggregated per user. So no matter how many times you buy beer and diapers together, it still counts as one vote for beer and one vote for diapers. The other differences are more technical, such as what it is that you measure for each. In MB analysis you care about support and confidence values, and in CF you care about a similarity measure such as cosine similarity. Cosine similarity is symmetric: the similarity between beer and diapers is the same as the similarity between diapers and beer, but that is not the case for support/confidence. On a conceptual level, CF can come up with more indirect similarities: if you buy item 1, and it finds that item 2 is bought along with it, and also that items 3 and 4 are similar to item 2, then it can recommend items 3 and 4 even though they are never bought along with item 1, only with item 2.
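The symmetry point can be made concrete with a toy example (the baskets are invented): cosine similarity between two items' incidence vectors is symmetric by construction, while confidence is directional.

```python
import math

# Invented baskets: beer shows up everywhere, diapers only sometimes.
baskets = [{"beer", "diapers"}, {"beer", "diapers"}, {"beer"}, {"beer"}]
vec = {item: [int(item in b) for b in baskets] for item in ("beer", "diapers")}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def confidence(a, b):
    with_a = [bk for bk in baskets if a in bk]
    return sum(b in bk for bk in with_a) / len(with_a)

# Cosine similarity is symmetric by construction...
assert cosine(vec["beer"], vec["diapers"]) == cosine(vec["diapers"], vec["beer"])
# ...confidence is not: beer -> diapers is 0.5, diapers -> beer is 1.0.
assert confidence("beer", "diapers") == 0.5
assert confidence("diapers", "beer") == 1.0
```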
High variance of the distribution of p-values (an argument in Taleb 2016)
A p-value is a random variable. Under $H_0$ (at least for a continuously-distributed statistic), the p-value should have a uniform distribution. For a consistent test, under $H_1$ the p-value should go to 0 in the limit as sample sizes increase toward infinity. Similarly, as effect sizes increase, the distribution of p-values should tend to shift toward 0, but it will always be "spread out". The notion of a "true" p-value sounds like nonsense to me. What would it mean, either under $H_0$ or $H_1$? You might for example say that you mean "the mean of the distribution of p-values at some given effect size and sample size", but then in what sense do you have convergence where the spread should shrink? It's not like you can increase sample size while you hold it constant. Here's an example with one-sample t-tests and a small effect size under $H_1$. The p-values are nearly uniform when the sample size is small, and the distribution slowly concentrates toward 0 as sample size increases. This is exactly how p-values are supposed to behave: for a false null, as the sample size increases, the p-values should become more concentrated at low values, but there's nothing to suggest that the distribution of the values it takes when you make a type II error - when the p-value is above whatever your significance level is - should somehow end up "close" to that significance level. What, then, would a p-value be an estimate of? It's not like it's converging to something (other than to 0). It's not at all clear why one would expect a p-value to have low variance anywhere but as it approaches 0, even when the power is quite good (e.g.
for $\alpha=0.05$, power in the n=1000 case is close to 57%, but it's still perfectly possible to get a p-value way up near 1). It's often helpful to consider what's happening both with the distribution of whatever test statistic you use under the alternative and what applying the cdf under the null as a transformation to that statistic will do to the distribution (that will give the distribution of the p-value under the specific alternative). When you think in these terms it's often not hard to see why the behavior is as it is. The issue as I see it is not so much that there's any inherent problem with p-values or hypothesis testing at all; it's more a case of whether the hypothesis test is a good tool for your particular problem or whether something else would be more appropriate in any particular case -- that's not a situation for broad-brush polemics but one of careful consideration of the kind of questions that hypothesis tests address and the particular needs of your circumstance. Unfortunately, careful consideration of these issues is rarely made -- all too often one sees a question of the form "what test do I use for these data?" without any consideration of what the question of interest might be, let alone whether some hypothesis test is a good way to address it. One difficulty is that hypothesis tests are both widely misunderstood and widely misused; people very often think they tell us things that they don't. The p-value is possibly the single most misunderstood thing about hypothesis tests.
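This behaviour is easy to reproduce in simulation. The sketch below uses a two-sided z-test with known variance rather than the one-sample t-test described above, purely to stay within Python's standard library; the effect size of 0.1, the seed, and the sample sizes are arbitrary illustrative choices.

```python
import math
import random
import statistics

def p_value(sample):
    """Two-sided z-test of 'mean = 0', treating the sd as known (= 1)."""
    z = statistics.mean(sample) * math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
reps = 1000

# Under H0 (true mean 0): p-values are roughly uniform on [0, 1].
p_h0 = [p_value([random.gauss(0.0, 1) for _ in range(30)]) for _ in range(reps)]

# Under H1 (small true mean 0.1): the distribution drifts toward 0 as n
# grows, but stays spread out; individual p-values can still be large.
p_h1_n30 = [p_value([random.gauss(0.1, 1) for _ in range(30)]) for _ in range(reps)]
p_h1_n1000 = [p_value([random.gauss(0.1, 1) for _ in range(1000)]) for _ in range(reps)]

print(statistics.mean(p_h0))        # near 0.5, as a uniform distribution gives
print(statistics.mean(p_h1_n30))    # a bit below 0.5: still nearly uniform
print(statistics.mean(p_h1_n1000))  # much closer to 0
print(max(p_h1_n1000))              # yet some individual p-values remain large
```

The last line is the point of the answer above: even with substantial power, a single p-value is a very noisy quantity.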
High variance of the distribution of p-values (an argument in Taleb 2016)
Glen_b's answer is spot on (+1; consider mine supplemental). The paper you reference by Taleb is topically very similar to a series of papers within the psychology and statistics literature about what kind of information you can glean from analyzing distributions of p-values (what the authors call the p-curve; see their site with a bunch of resources, including a p-curve analysis app). The authors propose two primary uses of p-curve:
1. You can appraise the evidential value of a literature by analyzing the literature's p-curve. This was their first advertised use of p-curve. Essentially, as Glen_b describes, when you're dealing with non-zero effect sizes, you should see p-curves that are positively skewed below the conventional threshold of p < .05, as smaller p-values should be more likely than p-values closer to p = .05 when an effect (or group of effects) is "real". You can therefore test a p-curve for significant positive skew as a test of evidential value. Conversely, the developers propose that you can perform a test of negative skew (i.e., more borderline-significant p-values than smaller ones) as a way to test whether a given set of effects has been subject to various questionable analytic practices.
2. You can calculate a publication-bias-free meta-analytic estimate of effect size using p-curve with published p-values. This one is a bit trickier to explain succinctly, and instead, I'd recommend that you check out their effect-size-estimation-focused papers (Simonsohn, Nelson, & Simmons, 2014a, 2014b) and read up on the methods yourself. But essentially, the authors suggest that p-curve can be used to skirt the file-drawer problem when conducting a meta-analysis.
So, as to your broader question of: how can this be reconciled with the traditional argument in favor of the p-value?
I would say that methods like Taleb's (and others) have found a way to repurpose p-values, so that we can get useful information about entire literatures by analyzing groups of p-values, whereas one p-value on its own might be much more limited in its usefulness.
References
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014a). P-curve: A key to the file drawer. Journal of Experimental Psychology: General, 143, 534–547.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014b). P-curve and effect size: Correcting for publication bias using only significant results. Perspectives on Psychological Science, 9, 666–681.
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2015). Better p-curves: Making p-curve analysis more robust to errors, fraud, and ambitious p-hacking, a reply to Ulrich and Miller (2015). Journal of Experimental Psychology: General, 144, 1146–1152.
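As a toy illustration of the skew idea (not the authors' actual p-curve test, which is more involved), one can simulate studies, keep only the significant p-values, and compare how many fall below versus above p = .025. The z-test, effect size, sample size, and seed here are all arbitrary choices of mine.

```python
import math
import random
import statistics

def z_p_value(sample):
    """Two-sided z-test p-value for 'mean = 0' with known sd = 1."""
    z = statistics.mean(sample) * math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def p_curve(effect, reps=2000, n=50, seed=1):
    """Count significant p-values below vs above p = .025."""
    rng = random.Random(seed)
    low = high = 0
    for _ in range(reps):
        p = z_p_value([rng.gauss(effect, 1) for _ in range(n)])
        if p < 0.025:
            low += 1
        elif p < 0.05:
            high += 1
    return low, high

print(p_curve(0.4))  # real effect: far more p < .025 (positive/right skew)
print(p_curve(0.0))  # no effect: roughly even split of the significant p-values
```

The contrast between the two printed pairs is the signature p-curve exploits: evidential literatures pile their significant p-values near 0, while a null (or p-hacked) literature does not.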
How to build a confusion matrix for a multiclass classifier?
Presumably, you are using these classifiers to help choose one particular class for a given set of feature values (as you said you are creating a multiclass classifier). So, let's say you have $N$ classes; then your confusion matrix would be an $N\times N$ matrix, with the left axis showing the true class (as known in the test set) and the top axis showing the class assigned to an item with that true class. Each element $i,j$ of the matrix would be the number of items with true class $i$ that were classified as being in class $j$. This is just a straightforward extension of the 2-class confusion matrix.
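As a sketch, building such an $N\times N$ matrix from labelled test data takes only a few lines (the class names and data below are made up for illustration):

```python
def confusion_matrix(y_true, y_pred, classes):
    """Rows = true class i, columns = assigned class j, as described above."""
    idx = {c: k for k, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

y_true = ["cat", "cat", "dog", "bird", "dog", "bird"]
y_pred = ["cat", "dog", "dog", "bird", "cat", "bird"]
for row in confusion_matrix(y_true, y_pred, ["cat", "dog", "bird"]):
    print(row)
# [1, 1, 0]
# [1, 1, 0]
# [0, 0, 2]
```

The diagonal holds correct classifications; every off-diagonal entry is one specific kind of confusion between a pair of classes.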
How to build a confusion matrix for a multiclass classifier?
While there are some answers already on this forum, I thought I'd give the explicit equations to make it more definite. Assuming you have a multi-class confusion matrix of the form
\begin{align} C = \begin{matrix} & \text{Classified} \\ \text{Actual} & \begin{pmatrix} c_{11} & \cdots & c_{1n}\\ \vdots & \ddots & \\ c_{n1} & & c_{nn} \end{pmatrix} \end{matrix} \end{align}
the confusion elements for each class $i$ are given by: $tp_i = c_{ii}$ $fp_i = \sum_{l=1}^n c_{li} - tp_i$ $fn_i = \sum_{l=1}^n c_{il} - tp_i$ $tn_i = \sum_{l=1}^n \sum_{k=1}^n c_{lk} - tp_i - fp_i - fn_i$
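A direct transcription of these four equations into Python might look like this (the example matrix is made up for illustration):

```python
def per_class_counts(C):
    """C[i][j] = items of true class i classified as class j."""
    n = len(C)
    total = sum(sum(row) for row in C)  # double sum over all c_lk
    out = []
    for i in range(n):
        tp = C[i][i]
        fp = sum(C[l][i] for l in range(n)) - tp  # column sum minus diagonal
        fn = sum(C[i][l] for l in range(n)) - tp  # row sum minus diagonal
        tn = total - tp - fp - fn
        out.append((tp, fp, fn, tn))
    return out

C = [[5, 2, 0],
     [1, 3, 1],
     [0, 2, 6]]
print(per_class_counts(C))
# [(5, 1, 2, 12), (3, 4, 2, 11), (6, 1, 2, 11)]
```

Note that for every class, tp + fp + fn + tn equals the total number of items, as the last equation guarantees.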
How to build a confusion matrix for a multiclass classifier?
Using the matrix attached in the question, and considering the values in the vertical axis as the actual class and the values in the horizontal axis as the prediction, then for Class 1: True Positive = 137 -> samples of class 1, classified as class 1 False Positive = 6 -> (1+2+3) samples of classes 2, 3 and 4, but classified as class 1 False Negative = 18 -> (13+3+1+1) samples of class 1, but classified as classes 2, 3, 6 and 7 True Negative = 581 -> (55+1+6...+2+26) The sum of all the values in the matrix except those in column 1 and row 1
Number of parameters in an artificial neural network for AIC
Every connection that is learned in a feedforward network is a parameter. Here is an image of a generic network from Wikipedia: This network is fully connected, although networks don't have to be (e.g., designing a network with receptive fields improves edge detection in images). With a fully connected ANN, the number of connections is simply the sum of the product of the numbers of nodes in connected layers. In the image above, that is $(3\times 4) + (4\times 2) = 20$. That image does not show any bias nodes, but many ANNs do have them; if so, include the bias node in the total for that layer. More generally (e.g., if your ANN isn't fully connected), you can simply count the connections.
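A small sketch of this counting rule, with an optional bias weight per node in each non-input layer (the function name is mine):

```python
def n_params(layers, bias=True):
    """Connections between consecutive layers, plus one bias weight
    per node in each non-input layer if bias=True."""
    return sum(a * b + (b if bias else 0)
               for a, b in zip(layers, layers[1:]))

print(n_params([3, 4, 2], bias=False))  # 20, as in the (3x4) + (4x2) example
print(n_params([3, 4, 2], bias=True))   # 26 = 20 + 4 + 2 bias weights
```

With bias nodes, each non-input layer contributes one extra weight per node, which is why the 3-4-2 network above jumps from 20 to 26 parameters.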
Number of parameters in an artificial neural network for AIC
I would argue that this is an ill-posed problem. As with many other machine learning algorithms, in neural networks it is hard to say what exactly we would count as a "parameter" when penalizing AIC. The point of AIC is to penalize the log-likelihood by the complexity of the model. In the case of simple models, like linear or logistic regression, this is simple, as the number of regression parameters determines the complexity of the model. For a simple feed-forward neural network this would also be the case, but consider that you can increase the complexity of a neural network without increasing the number of parameters: you can use skip-connections, max-pooling, masking, weight normalization, etc.; they all have no parameters. Moreover, what would you say about dropout? It "turns off" parameters that are available to the network, so maybe somehow we should discount the number of parameters when using it. In the case of complicated machine learning algorithms, the number of parameters is much less useful as a measure of model complexity. To complicate it even more, in neural networks it has been observed that the bias-variance trade-off seems not to apply. The rationale behind using AIC is that a "simpler" model is better, because it is more explainable and less prone to overfit. If, as it appears, neural networks do not have to be more prone to overfitting with an increasing number of parameters, then it is disputable whether penalizing for it makes sense.
Number of parameters in an artificial neural network for AIC
For a fully connected MLP you can use the following (Python) code:

def total_param(layers):
    s = 0
    for i in range(len(layers) - 1):
        s = s + layers[i] * layers[i + 1] + layers[i + 1]
    return s

Then if you have a network with the following layer configuration

input: 435
hidden: 166
hidden: 103
hidden: 64
output: 15

you just call the function with

total_param([435, 166, 103, 64, 15])
97208
Number of parameters in an artificial neural network for AIC
A neural network is just a function of functions of functions ... (as dictated by the architecture of the model). If the resulting function can't be simplified, then the total number of parameters (the sum of the number of parameters from each node) in the model is the number you want for the AIC calculation.
ETS() function, how to avoid forecast not in line with historical data?
As @forecaster has pointed out, this is caused by outliers at the end of the series. You can see the problem clearly if you plot the estimated level component over the top:

plot(forecast(fit2))
lines(fit2$states[,1], col='red')

Note the increase in the level at the end of the series. One way to make the model more robust to outliers is to reduce the parameter space so that the smoothing parameters must take smaller values:

fit2 <- ets(train_ts, upper=c(0.3, 0.2, 0.2, 0.98))
plot(forecast(fit2))
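To see why capping the smoothing parameters helps, here is a pure-Python sketch of just the level-update equation of simple exponential smoothing, $\ell_t = \alpha y_t + (1-\alpha)\ell_{t-1}$ (not the full ETS model fitted above); the series and the $\alpha$ values are made up:

```python
def ses_level(y, alpha):
    """Final level after simple exponential smoothing updates."""
    level = y[0]
    for v in y[1:]:
        level = alpha * v + (1 - alpha) * level
    return level

# A flat series with two large outliers at the end, echoing the question.
y = [20.0] * 30 + [60.0, 65.0]

print(ses_level(y, alpha=0.9))  # large alpha: the level chases the outliers
print(ses_level(y, alpha=0.2))  # capped alpha: much less affected by them
```

A large $\alpha$ lets the last couple of observations dominate the level (and hence the forecast), which is exactly the behaviour the tighter `upper` bounds suppress.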
ETS() function, how to avoid forecast not in line with historical data?
This is a textbook case of having outliers at the end of the series and their unintended consequences. The problem with your data is that the last two points are outliers; you might want to identify and treat outliers before you run the forecasting algorithms. I'll update my answer and analysis later today with some strategies to identify outliers. Below is the quick update. When I rerun ets with the last two data points removed, I get a reasonable forecast. Please see below:

values.clean <- c(27, 27, 7, 24, 39, 40, 24, 45, 36, 37, 31, 47, 16, 24, 6,
                  21, 35, 36, 21, 40, 32, 33, 27, 42, 14, 21, 5, 19, 31, 32,
                  19, 36, 29, 29, 24, 42, 15)  ## Last two points removed
train_ts.clean <- ts(values.clean, frequency=12)
fit2.clean <- ets(train_ts.clean)
ets.f.clean <- forecast(fit2.clean, h=24)
plot(ets.f.clean)
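One simple, generic way to flag candidate outliers before forecasting is Tukey's IQR fences. The sketch below applies them to the cleaned series from this answer with two made-up outliers appended at the end; this is illustrative only, and in practice a method that accounts for trend and seasonality would be preferable:

```python
import statistics

def iqr_outliers(y, k=1.5):
    """Indices of points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences)."""
    q1, _, q3 = statistics.quantiles(y, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(y) if v < lo or v > hi]

y = [27, 27, 7, 24, 39, 40, 24, 45, 36, 37, 31, 47, 16, 24, 6, 21, 35, 36,
     21, 40, 32, 33, 27, 42, 14, 21, 5, 19, 31, 32, 19, 36, 29, 29, 24, 42,
     15, 70, 75]  # 70 and 75 are hypothetical outliers appended for the demo
print(iqr_outliers(y))  # flags the two appended points at the end
```

Flagged points can then be removed or down-weighted before calling the forecasting routine, as done manually with values.clean above.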
ETS() function, how to avoid forecast not in line with historical data?
@forecaster, you are correct that the last value is an outlier, BUT period 38 (the penultimate value) is not an outlier when you take into account trends and seasonal activity. This is a defining/teaching moment for testing/evaluating alternative robust approaches. If you don't identify and adjust for anomalies, then the variance is inflated, causing other anomalies to not be found. Periods 32, 3 and 1 are also outliers. There is a statistically significant trend in the series for the first 17 values, but it abates thereafter, starting at period 18. So there are really two trends in the data. The lesson to be learned here is that simple approaches that assume no trend or a particular form of a trend, and/or tacitly assume a specific form of the auto-regressive process, need to be seriously questioned. Going forward, a good forecast would have to consider the possible continuation of the exceptional activity found at the ultimate point (period 39). It is impossible to extract this from the data. This is a possibly useful model; the final model's statistics are here. The Actual/Fit and Forecast graph is interesting as it highlights the exceptional activity.
tanh vs. sigmoid in neural net
In Simon Haykin's "Neural Networks: A Comprehensive Foundation" book there is the following explanation from which I quote: For the learning time to be minimized, the use of non-zero mean inputs should be avoided. Now, insofar as the signal vector $\bf x$ applied to a neuron in the first hidden layer of a multilayer perceptron is concerned, it is easy to remove the mean from each element of $\bf x$ before its application to the network. But what about the signals applied to the neurons in the remaining hidden and output layers of the network? The answer to this question lies in the type of activation function used in the network. If the activation function is non-symmetric, as in the case of the sigmoid function, the output of each neuron is restricted to the interval $[0,1]$. Such a choice introduces a source of systematic bias for those neurons located beyond the first layer of the network. To overcome this problem we need to use an antisymmetric activation function such as the hyperbolic tangent function. With this latter choice, the output of each neuron is permitted to assume both positive and negative values in the interval $[-1,1]$, in which case it is likely for its mean to be zero. If the network connectivity is large, back-propagation learning with antisymmetric activation functions can yield faster convergence than a similar process with non-symmetric activation functions, for which there is also empirical evidence (LeCun et al. 1991). The cited reference is: Y. LeCun, I. Kanter, and S. A. Solla: "Second-order properties of error surfaces: learning time and generalization", Advances in Neural Information Processing Systems, vol. 3, pp. 918-924, 1991. Another interesting reference is the following: Y. LeCun, L. Bottou, G. Orr and K. Muller: "Efficient BackProp", in Orr, G. and Muller K. (Eds), Neural Networks: Tricks of the trade, Springer, 1998
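Haykin's point about zero-mean signals can be checked numerically: for symmetric zero-mean inputs, sigmoid outputs cluster around 0.5 while tanh outputs cluster around 0. A minimal sketch (not from the book; all names are illustrative):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# zero-mean inputs, e.g. pre-activations feeding a hidden layer
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

mean_sigmoid = sum(sigmoid(x) for x in xs) / len(xs)
mean_tanh = sum(math.tanh(x) for x in xs) / len(xs)

print(f"mean of sigmoid outputs: {mean_sigmoid:.3f}")  # near 0.5
print(f"mean of tanh outputs:    {mean_tanh:.3f}")     # near 0.0
```

So downstream layers fed by sigmoid units see systematically non-zero-mean inputs, which is exactly the bias the quote describes.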
tanh vs. sigmoid in neural net
These two activation functions are very similar, but are offset. My original network did not have bias terms. Since adding biases, everything is much more stable. Based on my experience, I'd say one or the other of these may work better for a specific application, for complex, possibly unknowable reasons, but the correct approach is to include bias terms so the dependence on activation offset can be diminished or eliminated.
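For what it's worth, the "offset" relationship is exact: $\tanh(x) = 2\sigma(2x) - 1$, which is why bias terms (together with weight scaling) can absorb the difference between the two activations. A quick numeric check:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# tanh is a scaled and shifted sigmoid: tanh(x) = 2*sigmoid(2x) - 1,
# so biases plus weight rescaling can compensate for the offset
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert abs(math.tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12
print("identity holds")
```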
tanh vs. sigmoid in neural net
$\tanh$ activations at output nodes do not work with (binary) cross entropy loss: $$ {\cal L} = -\frac{1}{n} \sum_{i} \left(y_i \log(p_i) + (1 - y_i) \log(1-p_i)\right) $$ where $y_i$ is the target value for sample $i$ and $p_i$ is the output of the network for sample $i$. If $p_i$ is the output of a $\tanh$ function you end up taking logarithms of negative values. So sigmoid activation functions at the output are a better choice for these cases.
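A minimal sketch of the failure mode (the `bce` helper below is illustrative, not from any particular library):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(y, p):
    # binary cross-entropy for a single sample; requires p in (0, 1)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

z = -1.5               # some pre-activation value
p_sig = sigmoid(z)     # in (0, 1): both log terms are well defined
p_tanh = math.tanh(z)  # in (-1, 1): can be negative

print(f"BCE with sigmoid output: {bce(1.0, p_sig):.4f}")  # finite loss

try:
    bce(1.0, p_tanh)   # log of a negative number
except ValueError as e:
    print("tanh output breaks BCE:", e)
```

If you do want a tanh output unit, a common workaround is to rescale it into $(0,1)$ via $p = (\tanh(z)+1)/2$, which is just a sigmoid with a rescaled argument.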
What's the difference between time-series econometrics and panel data econometrics?
At least in the social sciences you often have panel data that has large N and small T asymptotics, meaning that you observe each entity for a relatively short period of time. This is why applied work with panel data is often somewhat less concerned with the time series component of the data. Nevertheless, time-series elements are still important in the treatment of panel data. For instance, the degree of auto-correlation determines whether fixed effects or first differences is more efficient. In difference-in-differences, proper treatment of the standard errors to account for autocorrelation is important for correct inference (see Bertrand et al., 2004). Dynamic panels using estimators for small N, large T asymptotics are also available; you often find such data in macroeconomics. There you may run into known time-series issues like panel non-stationarity. An excellent treatment of these topics is provided in Wooldridge (2010) "Econometric Analysis of Cross Section and Panel Data".
What's the difference between time-series econometrics and panel data econometrics?
The second dimension of panel data need not be time. We could have data on twins or siblings or data on N individuals answering T survey questions. Longitudinal data, where T is a second dimension, is arguably the most common type of panel data, and has become virtually synonymous with it. Micro or short panels (large N, small T) typically have asymptotics that send N to infinity, keeping T fixed. Macro or long panels have moderate N and large T, and the asymptotics tend to hold N fixed and grow T, or grow both N and T. With micro panels, cross-unit dependence is typically not an issue because units are randomly sampled, whereas with macro panels it may be a real concern (spatial dependence between countries or states, for example). With macro panels, you also have to worry about unit roots, structural breaks, and cointegration, all of which are familiar time series concerns. You also have to occasionally worry about selectivity problems (like attrition, self-selectivity, and non-response). When T is long enough, even countries can disappear. I would take a look at Baltagi's Econometric Analysis of Panel Data, particularly chapters 8, 12, and 13. It also covers the short panels in some detail. The previous edition also had a companion volume with exercise solutions that was very nice.
What's the difference between time-series econometrics and panel data econometrics?
As mentioned above, panel data is often used at the individual level rather than at an aggregated level, with large N and small T. Using panel data has many advantages: to mention two, we can remove individual heterogeneity and often get higher power in testing. The time dimension does introduce some new methods, assumptions and problems compared with cross-sectional data (I will refer you to Wooldridge's book to study these closer).

It is however very common within economics to also use country-level panel data with small N and large T. This introduces a whole range of difficulties not encountered when dealing with large N, small T panel data. We could for instance have unit roots in our panel, and there are specific panel unit root tests to deal with this issue. Notice that these have a significantly higher power than unit root tests on individual series. We could also have all sorts of other kinds of non-stationarity in these panels. Furthermore, when dealing with panel data with small N and large T we can also have cointegration. Another major issue when dealing with large T and small N panel data is that this data is often for country-level economic variables, and in this case the independence assumption is often violated and should be tested for. That being said, this is not a problem only for small N and large T but can also be present in large N and small T panels.

So panel data with large N and small T introduces a time series dimension compared to cross-sectional data and is similar in treatment to cross-sectional analysis, while panels with large T and small N introduce a cross-sectional dimension compared to the time series approach and are similar in treatment to time series analysis. An excellent book on panel data with large N and small T is "Econometric Analysis of Cross Section and Panel Data" by Wooldridge. This book is quite dense and packs a lot of information on every page, so you might want to start with an introductory book in econometrics and read the section on panel data there first. I do not know a specific book for panels with large T and small N, but there is a volume called "Nonstationary Panels, Panel Cointegration, and Dynamic Panels", Baltagi, ed.
What's the difference between time-series econometrics and panel data econometrics?
It's largely a question of emphasis, since both data consist of cross sectional and time series components. Panel data is more likely to have large N and smaller T. There is more attention to the individual components (e.g. stores over time, consumers over time) and more likelihood of segmenting those individual components (e.g. high income consumers, consumers who have moved from middle to high income). The individual components have survival/replacement issues (the components leave the study for some reason, and must be replaced). With econometric data you are more likely to be dealing at a more aggregated level and it's often somebody else's problem (e.g. those fine folks at the BLS) to deal with those issues. Autocorrelation issues do arise, but often are modeled as past history rather than as an autocorrelation per se, e.g. your past history of buying Chocolate Frosted Sugar Bombs http://www.gocomics.com/calvinandhobbes/1986/03/22 informs the prediction of future buying behavior.
What's the difference between time-series econometrics and panel data econometrics?
I would like to complement the above answers with a reference where you can read more about time dependence in panel data models, as you requested: Verbeek, Marno. A guide to modern econometrics, Wiley. There is a chapter in this book on panel data models that can serve as a good introduction. As an example of contemporary research regarding time-dependence in panel data, you could read: Fredrik N. G. Andersson: Exchange rates dynamics revisited: a panel data test of the fractional integration order. Empir Econ (2014) 47:389–409.
Why not use the T-distribution to estimate the mean when the sample is large?
Just to clarify, in relation to the title: we aren't using the t-distribution to estimate the mean (in the sense of a point estimate at least), but to construct an interval for it. But why use an estimate when you can get your confidence interval exactly? It's a good question (as long as we don't get too insistent on 'exactly', since the assumptions for it to be exactly t-distributed won't actually hold). "You must use the t-distribution table when working problems when the population standard deviation (σ) is not known and the sample size is small (n<30)" Why don't people use the t-distribution all the time when the population standard deviation is not known (even when n>30)? I regard the advice as - at best - potentially misleading. In some situations, the t-distribution should still be used when the degrees of freedom are a good deal larger than that. Where the normal is a reasonable approximation depends on a variety of things (and so depends on the situation). However, since (with computers) it's not at all difficult to just use the $t$, even if the d.f. are very large, you'd have to wonder why there's a need to worry about doing something different at n=30. If the sample sizes are really large, it won't make a noticeable difference to a confidence interval, but I don't think n=30 is always sufficiently close to 'really large'. There is one circumstance in which it might make sense to use the normal rather than the $t$ - that's when your data clearly don't satisfy the conditions to get a t-distribution, but you can still argue for approximate normality of the mean (if $n$ is quite large). However, in those circumstances, often the t is a good approximation in practice, and may be somewhat 'safer'. [In a situation like that, I might be inclined to investigate via simulation.]
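To put rough numbers on "won't make a noticeable difference": a sketch comparing 95% interval half-widths from the normal and t critical values. The critical values are hard-coded from standard tables ($z_{0.975} = 1.960$, $t_{0.975,29} \approx 2.045$, $t_{0.975,99} \approx 1.984$); the sample standard deviation is illustrative:

```python
# compare 95% CI half-widths z * s/sqrt(n) vs t * s/sqrt(n)
s = 10.0  # sample standard deviation (illustrative)

for n, t_crit in [(30, 2.045), (100, 1.984)]:
    se = s / n ** 0.5
    print(f"n={n}: normal half-width = {1.960 * se:.3f}, "
          f"t half-width = {t_crit * se:.3f}")
```

At n = 30 the t interval is about 4% wider than the normal one, which may or may not be negligible for your application; at n = 100 the gap has mostly closed.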
Why not use the T-distribution to estimate the mean when the sample is large?
It's a historical anachronism. There are many of them in statistics. If you didn't have a computer, it was hard to use the t-distribution, and much easier to use a normal distribution. Once the sample size gets large, the two distributions become similar (how large is 'large' is another question).
Why not use the T-distribution to estimate the mean when the sample is large?
Because in either case (using the normal distribution or the t-distribution), cumulative distribution values are derived numerically (there is no closed form for the integral of $e^{-x^2}$, or for the integral of the t-density). The cumulative distribution function of the t-distribution with n degrees of freedom tends to the CDF of a standard normal as $n \rightarrow \infty$. If n is large, the numerical error in approximating the integral is less than the error made by replacing the t-density with the normal density. In other words, the "exact" t-value is not "exact", and within the approximation error, the value is the same as the CDF value for the standard normal.
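This convergence is easy to see numerically. The sketch below integrates the t-density with Simpson's rule (a hand-rolled approximation, not a library routine) and compares it with the normal CDF computed from the error function:

```python
import math

def t_pdf(x, nu):
    # Student's t density with nu degrees of freedom
    # (lgamma avoids overflow of gamma() for large nu)
    c = math.exp(math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2))
    c /= math.sqrt(nu * math.pi)
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

def t_cdf(x, nu, steps=2000):
    # F(x) = 1/2 + integral_0^x f(t) dt, by symmetry of the density;
    # composite Simpson's rule over [0, x] (steps must be even)
    h = x / steps
    s = t_pdf(0.0, nu) + t_pdf(x, nu)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, nu)
    return 0.5 + s * h / 3

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for nu in [5, 30, 1000]:
    print(f"nu={nu:>4}: t CDF at 1.96 = {t_cdf(1.96, nu):.4f}, "
          f"normal CDF = {norm_cdf(1.96):.4f}")
```

With 5 degrees of freedom the two CDFs differ visibly at 1.96; by 1000 degrees of freedom the difference is well below any plausible numerical tolerance.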
Invariance property of MLE: what is the MLE of $\theta^2$ of normal, $\bar{X}^2$?
That's not exactly what Casella and Berger say. They recognize (page 319) that when the transformation is one-to-one the proof of the invariance property is very simple. But then they extend the invariance property to arbitrary transformations of the parameters introducing an induced likelihood function on page 320. Theorem 7.2.10 on the same page gives the proof of the extended property. Hence, no contradiction here.
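The induced-likelihood construction can be illustrated numerically. Below, for $X \sim N(\theta, 1)$, the induced log-likelihood $L^*(\eta) = \sup_{\{\theta:\,\theta^2 = \eta\}} L(\theta)$ is maximized by grid search, and the maximizer matches $\bar{X}^2$, as the invariance property predicts. A sketch with simulated data (all names are illustrative):

```python
import math
import random

random.seed(1)
xs = [random.gauss(2.0, 1.0) for _ in range(200)]
xbar = sum(xs) / len(xs)

def loglik(theta):
    # N(theta, 1) log-likelihood, up to an additive constant
    return -0.5 * sum((x - theta) ** 2 for x in xs)

def induced_loglik(eta):
    # L*(eta) = sup over {theta : theta^2 = eta} of L(theta)
    r = math.sqrt(eta)
    return max(loglik(r), loglik(-r))

# crude grid search over eta = theta^2
grid = [i * 0.001 for i in range(1, 10000)]
eta_hat = max(grid, key=induced_loglik)

print(f"xbar^2       = {xbar ** 2:.3f}")
print(f"grid maximum = {eta_hat:.3f}")
```

The two printed values agree to grid resolution: maximizing the induced likelihood over $\eta$ gives the same answer as plugging the MLE of $\theta$ into $\theta^2$.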
Invariance property of MLE: what is the MLE of $\theta^2$ of normal, $\bar{X}^2$?
From page 350 of "Probability and Statistical Inference", the invariance property of maximum likelihood estimators: if $\hat{\theta}$ is the MLE of $\theta$, then for any function $\tau(\theta)$, the MLE of $\tau(\theta)$ is $\tau(\hat{\theta})$. (Note: this theorem can be found on pg. 320, labeled as 7.2.10, in the second edition.)