Best books for an introduction to statistical data analysis?
My favourite book on statistics is David Williams's Weighing the Odds. Davison's Statistical Models is good too.
Best books for an introduction to statistical data analysis?
The best intro, in my eyes, is the following one: David Howell - Statistical Methods for Psychology. It is the best at making statistical concepts understandable for non-mathematicians, so that they get the math afterwards! Unfortunately it is updated every year and, hence, pricey.
Best books for an introduction to statistical data analysis?
Statistics as Principled Argument by Abelson is a good side book to learning statistics, particularly if your substantive field is in the social sciences. It won't teach you how to do analysis, but it will teach you about statistical thinking. I reviewed this book here
Best books for an introduction to statistical data analysis?
You might find this one useful: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. UPDATE #1: This book might be useful as well: O'Reilly: Statistics in a Nutshell.
Best books for an introduction to statistical data analysis?
Wilcox, Rand R. - Basic Statistics: Understanding Conventional Methods and Modern Insights, Oxford University Press, 2009
Hoff, Peter D. - A First Course in Bayesian Statistical Methods, Springer, 2009
Dalgaard, Peter - Introductory Statistics with R, Second Edition, Springer, 2008
Also take a glance at this link: though it's R-specific, there are plenty of books there that can guide you through basic statistical techniques.
Best books for an introduction to statistical data analysis?
As a biologist, I found the Sokal and Rohlf text to be quite readable, despite its voluminousness. It's not so great as a quick reference, but it does walk one through statistical theory. R. R. Sokal and F. J. Rohlf, Biometry: The Principles and Practice of Statistics in Biological Research, 3rd ed. (New York: W. H. Freeman and Company, 1995).
Best books for an introduction to statistical data analysis?
An old favourite of mine as an introduction to biostatistics is Armitage & Berry's (& now Matthews'): Statistical Methods in Medical Research.
Best books for an introduction to statistical data analysis?
Agresti & Finlay's Statistical Methods for the Social Sciences is quite good, though I'd like to believe there is a good open-source alternative. Is it wrong to use an Amazon affiliate link here?
What is the formula to calculate the area under the ROC curve from a contingency table?
In the general case: you can't.

The ROC curve shows how sensitivity and specificity vary at every possible threshold. A contingency table has been calculated at a single threshold, and information about other thresholds has been lost. Therefore you can't calculate the ROC curve from this summarized data.

But my classifier is binary, so I have one single threshold

Binary classifiers aren't really binary. Even though they may expose only a final binary decision, all the classifiers I know rely on some quantitative estimate under the hood. A binary decision tree? Try to build a regression tree. A classifier SVM? Do a support vector regression. Logistic regression? Get access to the raw probabilities. Neural network? Use the numeric output of the last layer instead. This will give you more freedom to choose the optimal threshold to get the best possible classification for your needs.

But I really want to

You really shouldn't. ROC curves with few thresholds significantly underestimate the true area under the curve (1). A ROC curve with a single point is a worst-case scenario, and any comparison with a continuous classifier will be inaccurate and misleading.

Just give me the answer!

Ok, ok, you win. With a single point we can consider the AUC as the sum of two triangles T and U. We can get their areas based on the contingency table (A, B, C and D as you defined): $$ \begin{align*} T = \frac{1 \times SE}{2} &= \frac{SE}{2} = \frac{A}{2(A + C)} \\ U = \frac{SP \times 1}{2} &= \frac{SP}{2} = \frac{D}{2(B + D)} \end{align*} $$ Getting the AUC: $$ \begin{align*} AUC &= T + U \\ &= \frac{A}{2(A + C)} + \frac{D}{2(B + D)} \\ &= \frac{SE + SP}{2} \end{align*} $$

To conclude

You can technically calculate a ROC AUC for a binary classifier from the confusion matrix. But just in case I wasn't clear, let me repeat one last time: DON'T DO IT!
References
(1) DeLong ER, DeLong DM, Clarke-Pearson DL: Comparing the Areas under Two or More Correlated Receiver Operating Characteristic Curves: A Nonparametric Approach. Biometrics 1988, 44:837-845. https://www.jstor.org/stable/2531595
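The two-triangle formula can be written as a short function. This is an illustrative sketch (the function name is made up here; A, B, C, D follow the answer's cell convention: A = true positives, B = false positives, C = false negatives, D = true negatives), and the same warning applies: don't compare this single-point number against a real, multi-threshold ROC AUC.

```python
def single_point_auc(A, B, C, D):
    """AUC of the one-threshold ROC "curve": T + U = (SE + SP) / 2.

    A = true positives, B = false positives,
    C = false negatives, D = true negatives.
    """
    sensitivity = A / (A + C)   # SE; triangle T has area SE / 2
    specificity = D / (B + D)   # SP; triangle U has area SP / 2
    return (sensitivity + specificity) / 2.0

print(round(single_point_auc(80, 10, 20, 90), 4))  # SE = 0.8, SP = 0.9 -> 0.85
```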
What is the formula to calculate the area under the ROC curve from a contingency table?
When I claim all of them are negative, then sensitivity (y) = 0 and 1 - specificity (x) = 0. If I assign positive/negative according to the test results, then y = A/(A+C), x = B/(B+D). When I claim all of them are positive, then y = 1 and x = 1. Based on the three points with coordinates (0,0), (A/(A+C), B/(B+D)), (1,1) (in (y,x) order), it is easy to calculate the area under the curve using the formulas for the areas of a triangle and a trapezoid. Final result: Area = $\frac{AB+2AD+CD}{2(A+C)(B+D)}$, which simplifies to $(SE+SP)/2$ and agrees with the answer above.
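As a check, the piecewise-linear ROC curve through $(0,0)$, $(x, y)$ and $(1,1)$, with $x = B/(B+D)$ and $y = A/(A+C)$, has area equal to a triangle plus a trapezoid:

$$
\begin{align*}
AUC &= \frac{x\,y}{2} + \frac{(1-x)(y+1)}{2} \\
&= \frac{1 + y - x}{2} \\
&= \frac{A(B+D) + D(A+C)}{2(A+C)(B+D)} \\
&= \frac{AB + 2AD + CD}{2(A+C)(B+D)}
\end{align*}
$$

which is the same $(SE+SP)/2$ obtained in the previous answer, since $1-x = SP$ and $y = SE$.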
Predictions from BSTS model (in R) are failing completely
Steve Scott here. I wrote the bsts package. I have a few suggestions for you.

First, your seasonal components aren't doing what you think they are. I think you have daily data, because you're trying to add a 7-season component, which should be working correctly. But you've told your annual seasonal component to repeat every 12 days. Getting a monthly seasonal component with daily data is kind of hard to do, but you can do a 52-week seasonal with AddSeasonal(..., nseasons = 52, season.duration = 7). The season.duration argument tells the model how many time points each season should last for. The nseasons argument tells it how many seasons are in a cycle. The total number of time points in a cycle is season.duration * nseasons.

The second suggestion is that you might want to think about a different model for trend. The LocalLinearTrend model is very flexible, but this flexibility can show up as undesired variance in long-term forecasts. There are some other trend models that contain a bit more structure. GeneralizedLocalLinearTrend (sorry about the nondescriptive name) assumes the "slope" component of the trend is an AR1 process instead of a random walk. It is my default option if I want to forecast far into the future. Most of your time series variation seems to come from seasonality, so you might try AddLocalLevel or even AddAr instead of AddLocalLinearTrend.

Finally, in general, if you're getting strange forecasts and want to figure out which part of the model is to blame, try plot(model, "components") to see the model decomposed into the individual pieces you've requested.
Predictions from BSTS model (in R) are failing completely
I think you can also change the default burn. When I have used bsts, I created a grid of burn and niter values with MAPE as my statistic on the holdout period. Also try AddStudentLocalLinearTrend instead if your data has huge variation, so that the model expects such variation.
Is it possible to append training data to existing SVM models?
It sounds like you're looking for an "incremental" or "online" learning algorithm. These algorithms let you update a classifier with new examples without retraining the entire thing from scratch. It's definitely possible with support vector machines, though I believe libSVM doesn't presently support it. It might be worth taking a look at several other packages that do offer it, including:

Gert Cauwenberghs' 2000 NIPS paper (with code): http://www.isn.ucsd.edu/svm/incremental/
Pegasos (which is available by itself or as part of dlib)
SVM Heavy: http://people.eng.unimelb.edu.au/shiltona/svm/

PS: @Bogdanovist: There's a pretty extensive literature on this. kNN is obviously and trivially incremental. One could turn (some) Bayesian classifiers into incremental classifiers by storing counts instead of probabilities. STAGGER, AQ* and some (but not all) of the ID* family of decision tree algorithms are also incremental, off the top of my head.
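To make the idea concrete, here is a minimal sketch of online learning with an SVM-style model. It does not use libSVM or any of the packages listed; instead it relies on scikit-learn's SGDClassifier with hinge loss (a linear-SVM objective), whose partial_fit() updates the model with each new batch of examples. The toy data generator is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n=200):
    # Two well-separated Gaussian blobs: a toy stand-in for a labelled stream.
    X0 = rng.normal(loc=-2.0, size=(n // 2, 2))
    X1 = rng.normal(loc=+2.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# hinge loss => linear SVM objective, trained by stochastic gradient descent
clf = SGDClassifier(loss="hinge", random_state=0)

# The first call to partial_fit must declare every class that will ever appear.
X, y = make_batch()
clf.partial_fit(X, y, classes=np.array([0, 1]))

# Later, append new training data without retraining from scratch.
for _ in range(5):
    X_new, y_new = make_batch()
    clf.partial_fit(X_new, y_new)

X_test, y_test = make_batch()
print(round(clf.score(X_test, y_test), 2))
```

Each partial_fit call takes one pass over the new batch only, so the cost of an update is proportional to the batch size, not the total data seen so far.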
Is it possible to append training data to existing SVM models?
Most of the online/incremental SVM utilities are for linear kernels; I suppose it's not as difficult there as it is for non-linear kernels. Some of the notable online/incremental SVM tools currently available:

+ Leon Bottou's LaSVM: supports both linear and non-linear kernels. C++ code.
+ Bordes's LaRank: supports both linear and non-linear kernels. C++ code. It seems the link is broken now :-(
+ Gert Cauwenberghs' incremental and decremental code: supports both linear and non-linear kernels. Matlab code.
+ Chris Diehl's Incremental SVM Learning: supports both linear and non-linear kernels. Matlab code.
+ Alistair Shilton's SVMHeavy: only binary classification and regression. C++ code.
+ Francesco Parrella's OnlineSVR: only regression. Matlab and C++.
+ Pegasos: both linear and non-linear. C and Matlab code. A Java interface.
+ Langford's Vowpal Wabbit: not sure :-(
+ Koby Crammer's MCSVM: both linear and non-linear. C code.

A more updated list can be found in my Quora answer.
Is it possible to append training data to existing SVM models?
Another possibility is alpha-seeding. I am not aware whether libSVM supports it. The idea is to divide a huge amount of training data into chunks. Then you train an SVM on the first chunk. As the resulting support vectors are nothing but some of the samples of your data, you take those and use them to train your SVM with the next chunk. You also use that SVM to compute an initial estimate of the alpha values for the next iteration (seeding). The benefits are therefore twofold: each of the problems is smaller, and through smart initialization they converge even faster. This way you simplify a huge problem into sequentially solving a series of simpler steps.
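The chunked scheme can be sketched as follows. Note the caveats: scikit-learn's SVC does not expose warm-started alpha values, so this sketch only illustrates the carry-the-support-vectors half of the idea (each retrain starts from scratch on a smaller problem), and the toy data is made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # linearly separable toy labels

chunks = np.array_split(np.arange(len(X)), 3)  # three sequential chunks

carry_X = np.empty((0, 2))
carry_y = np.empty((0,), dtype=int)
clf = None
for idx in chunks:
    # Train on the current chunk plus the support vectors carried over,
    # so each subproblem stays much smaller than the full dataset.
    X_train = np.vstack([carry_X, X[idx]])
    y_train = np.concatenate([carry_y, y[idx]])
    clf = SVC(kernel="linear").fit(X_train, y_train)
    # Keep only the support vectors for the next round.
    carry_X = X_train[clf.support_]
    carry_y = y_train[clf.support_]

print(round(clf.score(X, y), 2))
```

Since the decision boundary is determined by the support vectors alone, carrying them forward preserves most of what earlier chunks contributed while keeping each training set small.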
Is it possible to append training data to existing SVM models?
Another option, if you are seeking an "incremental" solution, can be found here: Liblinear Incremental, an extension of LIBLINEAR which allows for incremental learning.
Reference with distributions with various properties
The most comprehensive collection of distributions and their properties that I know of are:

Johnson, Kotz, Balakrishnan: Continuous Univariate Distributions, Volumes 1 and 2;
Kotz, Johnson, Balakrishnan: Continuous Multivariate Distributions;
Johnson, Kemp, Kotz: Univariate Discrete Distributions;
Johnson, Kotz, Balakrishnan: Multivariate Discrete Distributions.

The books have a broad subject index. All books are from Wiley.

Edit: Oh yes, and then there is also the nice poster displaying properties of and relationships between univariate distributions: http://www.math.wm.edu/~leemis/2008amstat.pdf This might be of further interest.
Reference with distributions with various properties
Honestly, there are way too many distributions that I have no idea about. I do believe, however, that knowing them all is not the asset; one must know how to use them. Anyway, back to your question: I always find this diagram quite informative and useful; it's like a probability-distributions cheat sheet. http://jonfwilkins.com/wp-content/uploads/2013/06/BaseImage.png
Reference with distributions with various properties
No book could cover all distributions, as it is always possible to invent new ones. But Statistical distributions by Catherine Forbes et al. is a concise book covering many of the more commonly used distributions, while A primer on statistical distributions by N. Balakrishnan and V.B. Nevzorov is also fairly concise, but rather more mathematically oriented. The nearest approach to a treatise is the series started by N.L. Johnson and S. Kotz, being continued by A.W. Kemp and N. Balakrishnan, and currently published by John Wiley. This isn't a complete list even of surveys of distributions, but Googling your local Amazon site easily gets you other ideas.
16,120
Reference with distributions with various properties
Merran Evans, Nicholas Hastings, Brian Peacock - Statistical distributions - John Wiley and Sons I have the second edition and the distributions are in simple alphabetical order (from Bernoulli to Wishart central distribution).
16,121
Reference with distributions with various properties
The Hand-book on Statistical Distributions for Experimentalists by Christian Walck at the University of Stockholm is pretty decent....and FREE!! It covers over 40 distributions from A to Z, with each distribution described with its formulas, moments, moment generating function, characteristic function, how to generate a random variate from this distribution, and much more. Very nice for a free pdf.
16,122
Reference with distributions with various properties
Ben Bolker's "Ecological Models and Data in R" has a section "bestiary of distributions" (pp 160-181) with descriptions of the properties and applications of many common and useful distributions. It is written at the level of a grad level course in ecology, so it is accessible to non-statisticians. Less dense than the Johnson, Kotz et al references in the answer by @Momo, but gives more practical details than a list or appendix might.
16,123
Reference with distributions with various properties
Loss Models by Panjer, Wilmot and Klugman contains a good appendix covering each distribution's pdf, its support, and parameter estimation.
16,124
Reference with distributions with various properties
A study of bivariate distributions cannot be complete without a sound background knowledge of the univariate distributions, which would naturally form the marginal or conditional distributions. The two encyclopedic volumes by Johnson et al. (1994, 1995) are the most comprehensive texts to date on continuous univariate distributions. Monographs by Ord (1972) and Hastings and Peacock (1975) are worth mentioning, with the latter being a convenient handbook presenting graphs of densities and various relationships between distributions. Another useful compendium is by Patel et al. (1976); Chapters 3 and 4 of Manoukian (1986) present many distributions and relations between them. Extensive collections of illustrations of probability density functions (denoted by p.d.f. hereafter) may be found in Hirano et al. (1983) (105 graphs, each with typically about five curves shown, grouped in 25 families of distributions) and in Patil et al. (1984). This is from Chapter 0 of a book on continuous bivariate distributions, which provides an elementary introduction and basic details on properties of various univariate distributions. I remember I enjoyed reading Ord (1972) very much, but I can't now remember why.
16,125
Reference with distributions with various properties
The series of books by Johnson, Kotz & Balakrishnan (edit: which Nick has also mentioned; the original books were by the first two authors) are probably the most comprehensive. You probably want to start with Continuous Univariate Distributions, Vols I and II. A couple more: Evans, Hastings & Peacock, Statistical Distributions Wimmer & Altmann, Thesaurus of univariate discrete probability distributions There's also many other books, sometimes for more specialized applications.
16,126
Acceptance of null hypothesis
I think it is at times appropriate to interpret non-statistically significant results in the spirit of "accept the null hypothesis". In fact, I have seen statistically significant studies interpreted in such a fashion; the study was too precise and results were consistent with a narrow range of non-null but clinically insignificant effects. Here's a somewhat blistering critique of a study (or rather its press) about the relation between chocolate/red wine consumption and its "salubrious" effect on diabetes. The probability curves for insulin resistance distributions by high/low intake are hysterical. Whether one can interpret findings as "confirming H_0" depends on a great number of factors: the validity of the study, the power, the uncertainty of the estimate, and the prior evidence. Reporting the confidence interval (CI) instead of the p-value is perhaps the most useful contribution you can make as a statistician. I remind researchers and fellow statisticians that statistics do not make decisions, people do; omitting p-values actually encourages a more thoughtful discussion of the findings. The width of the CI describes a range of effects which may or may not include the null, and may or may not include very clinically significant values like life-saving potential. However, a narrow CI confirms one type of effect; either the latter type which is "significant" in a true sense, or the former which may be the null or something very close to the null. Perhaps what is needed is a broader sense of what "null results" (and null effects) are. What I find disappointing in research collaboration is when investigators cannot a priori state what range of effects they are targeting: if an intervention is meant to lower blood pressure, how many mmHg? If a drug is meant to cure cancer, how many months of survival will the patient have?
Someone who is passionate about research and "plugged-in" to their field and science can rattle off the most amazing facts about prior research and what has been done. In your example, I can't help but notice that the estimate behind the p-value of 0.82 is likely very close to the null. From that, all I can tell is that the CI is centered on a null value. What I do not know is whether it encompasses clinically significant effects. If the CI is very narrow, the interpretation they give is, in my opinion, correct but the data do not support it: that would be a minor edit. In contrast, the second p-value of 0.22 is relatively closer to its significance threshold (whatever it may be). The authors correspondingly interpret it as "not giving any evidence of difference", which is consistent with a "do not reject H_0"-type interpretation. As far as the relevance of the article, I can say very little. I hope that you browse the literature finding more salient discussions of study findings! As far as analyses, just report the CI and be done with it!
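To make the "report the CI" advice concrete, here is a minimal stdlib sketch of a large-sample (z-based) confidence interval for a mean difference; the blood-pressure numbers are invented for illustration, and a real analysis with samples this small would use a t interval instead:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def diff_ci(x1, x2, level=0.95):
    """Large-sample (z-based) confidence interval for a difference in means."""
    d = mean(x1) - mean(x2)
    se = sqrt(stdev(x1) ** 2 / len(x1) + stdev(x2) ** 2 / len(x2))
    zc = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. about 1.96 for 95%
    return d - zc * se, d + zc * se

# hypothetical treatment/control changes in systolic blood pressure (mmHg)
treated = [-8.0, -5.5, -9.1, -6.3, -7.2, -8.8, -4.9, -7.5]
control = [-1.2, 0.4, -2.1, -0.8, 1.0, -1.5, 0.2, -0.9]
lo, hi = diff_ci(treated, control)  # whole interval well below zero
```

If the whole interval lies beyond the clinically meaningful threshold the investigators stated a priori, that, not a p-value, is what supports a substantive conclusion.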
16,127
Acceptance of null hypothesis
Speaking to the title of your question: we never accept the null hypothesis, because testing $H_{0}$ only provides evidence against $H_{0}$ (i.e. conclusions are always with respect to the alternative hypothesis: either you found evidence for $H_{A}$ or you failed to find evidence for $H_{A}$). However, we can recognize that there are different kinds of null hypothesis: You have probably learned about one-sided null hypotheses of the form $H_{0}: \theta \ge \theta_{0}$ and $H_{0}: \theta \le \theta_{0}$ You have probably learned about two-sided null hypotheses (aka two-tailed null hypotheses) of the form $H_{0}: \theta = \theta_{0}$, or synonymously $H_{0}: \theta - \theta_{0} = 0$ in the one-sample case, and $H_{0}: \theta_{1} = \theta_{2}$, or synonymously $H_{0}: \theta_{1} - \theta_{2} = 0$ in the two-sample case. I suspect this specific form of null hypothesis is what your question is about. Following Reagle and Vinod, I term null hypotheses of this form positivist null hypotheses, and make this explicit with the notation $H^{+}_{0}$. Positivist null hypotheses provide, or fail to provide, evidence of difference or evidence of an effect. Positivist null hypotheses have an omnibus form for $k$ groups: $H_{0}^{+}: \theta_{i} = \theta_{j};$ for all $i,j \in \{1, 2, \dots k\};$ $\text{ and }i\ne j$. You may just now be learning about joint one-sided null hypotheses, which are null hypotheses of the form $H_{0}: |\theta - \theta_{0}|\ge \Delta$ in the one-sample case, and $H_{0}: |\theta_{1} - \theta_{2}|\ge \Delta$ in the two-sample case, where $\Delta$ is the minimum relevant difference that you care about a priori (i.e. you say up front that differences smaller than this do not matter). Again, following Reagle and Vinod, I term null hypotheses of this form negativist null hypotheses, and make this explicit with the notation $H^{-}_{0}$.
Negativist null hypotheses provide evidence of equivalence (within $\pm\Delta$), or evidence of absence of an effect (larger than $|\Delta|$). Negativist null hypotheses have an omnibus form for $k$ groups: $H_{0}^{-}: |\theta_{i} - \theta_{j}|\ge \Delta;$ for all $i,j \in \{1, 2, \dots k\};$ $\text{ and }i\ne j$ (Wellek, chapter 7) The very cool thing to do is combine tests for difference with tests for equivalence. This is termed relevance testing, and places both statistical power and effect size explicitly within the conclusions drawn from a test, as detailed in the description of the [tost] tag. Consider: if you reject $H_{0}^{+}$, is that because there is a true effect of a size you find relevant? Or is it because your sample size was simply so large your test was over-powered? And if you fail to reject $H_{0}^{+}$, is that because there is no true effect, or because your sample size was too small, and your test under-powered? Relevance tests address these issues head-on. There are a few ways to perform tests for equivalence (whether or not one is combining with tests for difference): Two one-sided tests (TOST) translates the general negativist null hypothesis expressed above into two specific one-sided null hypotheses: $H^{-}_{01}: \theta - \theta_{0} \ge \Delta$ (one-sample) or $H^{-}_{01}: \theta_{1} - \theta_{2} \ge \Delta$ (two-sample) $H^{-}_{02}: \theta - \theta_{0} \le -\Delta$ (one-sample) or $H^{-}_{02}: \theta_{1} - \theta_{2} \le -\Delta$ (two-sample) Uniformly most powerful tests for equivalence, which tend to be much more arithmetically sophisticated than TOST. Wellek is the definitive reference for these. A confidence interval approach, I believe first motivated by Schuirmann, and refined by others, such as Tryon. References Reagle, D. P. and Vinod, H. D. (2003). Inference for negativist theory using numerically computed rejection regions. Computational Statistics & Data Analysis, 42(3):491–512. Schuirmann, D. A. (1987).
A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability. Journal of Pharmacokinetics and Biopharmaceutics, 15(6):657–680. Tryon, W. W. and Lewis, C. (2008). An inferential confidence interval method of establishing statistical equivalence that corrects Tryon’s (2001) reduction factor. Psychological Methods, 13(3):272–277. Tryon, W. W. and Lewis, C. (2009). Evaluating independent proportions for statistical difference, equivalence, indeterminacy, and trivial difference using inferential confidence intervals. Journal of Educational and Behavioral Statistics, 34(2):171–189. Wellek, S. (2010). Testing Statistical Hypotheses of Equivalence and Noninferiority. Chapman and Hall/CRC Press, second edition.
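A minimal numeric sketch of the TOST logic above, using only the Python standard library and a large-sample z approximation (the data and the choice $\Delta = 0.5$ are invented; a real analysis would use t-based procedures, such as those in Wellek's book or statsmodels' TOST routines):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def tost_z(x1, x2, delta):
    """Large-sample TOST for a difference in means (z approximation).

    Tests H-01: mu1 - mu2 >= delta and H-02: mu1 - mu2 <= -delta;
    rejecting both supports equivalence within +/- delta.
    """
    d = mean(x1) - mean(x2)
    se = sqrt(stdev(x1) ** 2 / len(x1) + stdev(x2) ** 2 / len(x2))
    z = NormalDist()
    p1 = z.cdf((d - delta) / se)       # one-sided p-value for H-01
    p2 = 1 - z.cdf((d + delta) / se)   # one-sided p-value for H-02
    return max(p1, p2)                 # equivalence holds iff both reject

# hypothetical data: two groups with essentially equal means
a = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
b = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 4.9]
p = tost_z(a, b, delta=0.5)  # tiny: equivalent within +/- 0.5
```

A small maximum p-value rejects both one-sided nulls and so supports equivalence within $\pm\Delta$; note that $\Delta$ must be fixed a priori, not chosen after seeing the data.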
16,128
Acceptance of null hypothesis
You are referring to the standard inference practice taught in statistics courses: form $H_0,H_a$ set the significance level $\alpha$ compare the p-value with $\alpha$ either "reject $H_0$, accept $H_a$" or "fail to reject $H_0$" This is fine, and it's used in practice. I would even venture to guess this procedure could be mandatory in some regulated industries such as pharmaceuticals. However, this is not the only way statistics and inference are applied in research and practice. For instance, take a look at this paper: "Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC". The paper was the first to present evidence of the existence of the Higgs boson, in the so-called ATLAS experiment. It was also one of those papers where the list of authors is as long as its actual content :) The paper mentions neither $H_0$ nor $H_a$. The term "hypothesis" is used, and you can guess what their $H_0$ was by reading the text. They use the term "significance", but not as an $\alpha$-significance threshold as in the "standard" inference. They simply express the distance in standard deviations, e.g. "the observed local significances for mH = 125 GeV are 2.7$\sigma$". They present "raw" p-values, and don't run them through "reject/fail to reject" comparisons with significance levels $\alpha$; as I wrote earlier, they don't even use the latter. They present confidence intervals at usual confidence levels such as 95%. Here's how the conclusion is formulated: "These results provide conclusive evidence for the discovery of a new particle with mass 126.0 ± 0.4 (stat) ± 0.4 (sys) GeV." The word "stat" refers to statistical and "sys" to systematic uncertainties. So, as you see, not everyone does the four-step procedure that I outlined at the beginning of this answer. Here, the researchers show the p-value without pre-establishing a threshold, contrary to what is taught in statistics classes.
Secondly, they don't do the "reject/fail to reject" dance, at least formally. They cut to the chase, and say "here's the p-value, and that's why we say we found a new particle with 126 GeV mass." Important note The authors of the Higgs paper did not declare the Higgs boson yet. They only asserted that a new particle was found and that some of its properties, such as its mass, are consistent with the Higgs boson. It took a couple of years to gather additional evidence before it was established that the particle is indeed the Higgs boson. See this blog post with an early discussion of the results. Physicists went on to check different properties such as zero spin. As the evidence accumulated, at some point CERN declared that the particle is the Higgs boson. Why is this important? Because it is impossible to trivialize the process of scientific discovery into some rigid statistical inference procedure. Statistical inference is just one tool used. When CERN was looking for this particle, the focus was first on finding it. That was the ultimate goal. Physicists had an idea of where to look. Once they found a candidate, they focused on proving it's the one. Eventually, the totality of evidence, not a single experiment with a p-value and significance, convinced everyone that we had found the particle. Include here all the prior knowledge and the standard model. This is not just statistical inference; the scientific method is broader than that.
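The "significance in standard deviations" convention maps directly to a one-sided tail probability of the standard normal; a quick stdlib sketch (the 2.7$\sigma$ figure is the one quoted above; the 5$\sigma$ line is the conventional particle-physics discovery threshold, not something stated in this answer):

```python
from statistics import NormalDist

def sigma_to_p(sigma):
    """One-sided upper-tail probability of a standard normal at `sigma` sd."""
    return 1 - NormalDist().cdf(sigma)

p_local = sigma_to_p(2.7)  # about 0.0035, the quoted 2.7-sigma local significance
p_disco = sigma_to_p(5.0)  # about 2.9e-7, the usual 5-sigma discovery convention
```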
16,129
Acceptance of null hypothesis
There are ways to approach this that don't rely on power calculations (see Wellek, 2010). In particular, you can also test whether you can reject the null that the effect is of an a priori meaningful magnitude. In this situation, Daniël Lakens advocates equivalence testing. Lakens in particular uses "TOST" (two one-sided tests) for mean comparisons, but there are other ways to get at the same idea. In TOST you test a compound null: the one-sided null hypothesis that your effect is more negative than the smallest negative difference of interest, and the null that your effect is more positive than the smallest positive difference of interest. If you reject both, then you can claim that there is no meaningful difference. Note that this can happen even if the effect is significantly different from zero, but in no case does it require endorsing the null. Lakens, D. (2017). Equivalence tests: a practical primer for t tests, correlations, and meta-analyses. Social Psychological and Personality Science, 8(4), 355-362. Wellek, S. (2010). Testing Statistical Hypotheses of Equivalence and Noninferiority. Chapman and Hall/CRC Press, second edition.
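As an illustration of the TOST logic, here is a minimal Python sketch for two independent means. The equivalence bounds of ±0.5 and the simulated data are assumptions made for the example; for real analyses, a dedicated implementation such as Lakens's TOSTER package for R is the better choice.

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high):
    """Two one-sided tests (TOST) for equivalence of two means.
    Rejecting both one-sided nulls -- (a) true difference <= low,
    (b) true difference >= high -- supports the claim that the
    true difference lies inside (low, high)."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled standard error of the mean difference (equal-variance t test)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    p_lower = stats.t.sf((diff - low) / se, df)    # H0a: difference <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0b: difference >= high
    return max(p_lower, p_upper)  # conclude equivalence iff this is below alpha

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 1000)
y = rng.normal(0.1, 1.0, 1000)
p_tost = tost_two_sample(x, y, low=-0.5, high=0.5)
print(p_tost)  # small p -> no meaningful difference within +/- 0.5
```

Note that the reported p-value is the larger of the two one-sided p-values, so a single comparison against the chosen alpha decides both tests at once.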
16,130
What is the difference between a "statistical experiment" and a "statistical model"?
Another way to think about this is that the statistical experiment is the protocol we follow to generate data and the statistical model is the protocol we use to analyze these data.
16,131
What is the difference between a "statistical experiment" and a "statistical model"?
A statistical experiment is a design that describes how the data will be collected in a statistically valid way. A statistical model is a description of relationships between variables that are measured in the experiment and/or the parametric form of the distribution of those variables.
16,132
What is the difference between a "statistical experiment" and a "statistical model"?
What @Michael Chernick said is correct from a statistician's viewpoint. I'm a physicist, and in science, "model" can also mean a predictive description. You would say something like "if you set up the following situation, for example, place a small drop of ink in a glass of milk, then at time t you would expect the statistical distribution of ink molecules in space to be P[x, t]." I suspect this meaning is becoming more ubiquitous. For instance, to be apropos, one might tell a potential client "if we target a certain population with such and such advertising, we would anticipate a voting increase of 4 ± 1 % for your candidate." This might be a purely empirical result, or it might be based on a predictive statistical model built on a theory combined with the results of statistical experiments.
16,133
What is the basis for the Box and Whisker Plot definition of an outlier?
Boxplots Here is a relevant section from Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley. Chapter 3, "Boxplots and Batch Comparison", written by John D. Emerson and Judith Strenio (from page 62): [...] Our definition of outliers as data values that are smaller than $F_{L}-\frac{3}{2}d_{F}$ or larger than $F_{U}+\frac{3}{2}d_{F}$ is somewhat arbitrary, but experience with many data sets indicates that this definition serves well in identifying values that may require special attention. [...] $F_{L}$ and $F_{U}$ denote the first and third quartile, whereas $d_{F}$ is the interquartile range (i.e. $F_{U}-F_{L}$). They go on to show the application to a Gaussian population (page 63): Consider the standard Gaussian distribution, with mean $0$ and variance $1$. We look for population values of this distribution that are analogous to the sample values used in the boxplot. For a symmetric distribution, the median equals the mean, so the population median of the standard Gaussian distribution is $0$. The population fourths are $-0.6745$ and $0.6745$, so the population fourth-spread is $1.349$, or about $\frac{4}{3}$. Thus $\frac{3}{2}$ times the fourth-spread is $2.0235$ (about $2$). The population outlier cutoffs are $\pm 2.698$ (about $2\frac{2}{3}$), and they contain $99.3\%$ of the distribution. [...] So [they] show that if the cutoffs are applied to a Gaussian distribution, then $0.7\%$ of the population is outside the outlier cutoffs; this figure provides a standard of comparison for judging the placement of the outlier cutoffs [...]. Further, they write: [...] Thus we can judge whether our data seem heavier-tailed than Gaussian by how many points fall beyond the outlier cutoffs. [...] They provide a table with the expected proportion of values that fall outside the outlier cutoffs (labelled "Total % Out"). So these cutoffs were never intended to be a strict rule about what data points are outliers or not.
As you noted, even a perfect Normal distribution is expected to exhibit "outliers" in a boxplot. Outliers As far as I know, there is no universally accepted definition of outlier. I like the definition by Hawkins (1980): An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism. Ideally, you should only treat data points as outliers once you understand why they don't belong to the rest of the data. A simple rule is not sufficient. A good treatment of outliers can be found in Aggarwal (2013). References Aggarwal CC (2013): Outlier Analysis. Springer. Hawkins D (1980): Identification of Outliers. Chapman and Hall. Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley.
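The $0.7\%$ figure is easy to verify numerically; for example, a short check in Python using scipy for the normal distribution:

```python
from scipy import stats

# population quartiles ("fourths") of the standard Gaussian
q1, q3 = stats.norm.ppf([0.25, 0.75])    # -0.6745, +0.6745
iqr = q3 - q1                            # fourth-spread, about 4/3
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # outlier cutoffs, about +/- 2.698

# proportion of the population beyond the cutoffs
p_out = stats.norm.cdf(lo) + stats.norm.sf(hi)
print(f"{p_out:.2%}")  # 0.70%
```

So for a sample of, say, 1000 Gaussian observations, a boxplot is expected to flag about 7 points beyond the fences even though nothing is wrong with the data.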
16,134
What is the basis for the Box and Whisker Plot definition of an outlier?
The word 'outlier' is often assumed to mean something like 'a data value that is erroneous, misleading, mistaken or broken and should therefore be omitted from analysis', but that is not what Tukey meant by his use of outlier. The outliers are simply points that are a long way from the median of the dataset. Your point about expecting outliers in many datasets is correct and important. And there are many good questions and answers on the topic. Removing outliers from asymmetric data Is it appropriate to identify and remove outliers because they cause problems?
16,135
What is the basis for the Box and Whisker Plot definition of an outlier?
As with all outlier detection methods, care and thought must be used to determine what values are truly outliers. I think the boxplot simply provides a good visualization of the spread of data and any true outliers will be easy to catch.
16,136
What is the basis for the Box and Whisker Plot definition of an outlier?
I think you should be concerned if you don't get some outliers as part of a normal distribution, otherwise perhaps you should be looking for reasons there aren't any. Clearly they should be reviewed to ensure they are not recording errors, but otherwise they are to be expected.
16,137
How can I model flips until N successes?
The distribution of the number of tails before achieving $10$ heads is Negative Binomial with parameters $10$ and $1/2$. Let $f$ be the probability function and $G$ the survival function: for each $n\ge 0$, $f(n)$ is the player's chance of $n$ tails before $10$ heads and $G(n)$ is the player's chance of $n$ or more tails before $10$ heads. Because the players roll independently, the chance the first player wins by rolling exactly $n$ tails is $f(n)G(n)$: the chance of rolling exactly $n$ tails times the chance the second player rolls $n$ or more tails. Summing over all possible $n$ gives the first player's winning chances as $$\sum_{n=0}^\infty f(n)G(n) \approx 53.290977425133892\ldots\%.$$ That is about $3\%$ more than half the time. In general, replacing $10$ by any positive integer $m$, the answer can be given in terms of a Hypergeometric function: it is equal to $$1/2 + 2^{-2m-1} {_2F_1}(m,m,1,1/4).$$ When using a biased coin with a chance $p$ of heads, this generalizes to $$\frac{1}{2} + \frac{1}{2}(p^{2m}) {_2F_1}(m, m, 1, (1 - p)^2).$$ Here is an R simulation of a million such games. It reports an estimate of $0.5325$. A binomial hypothesis test comparing it to the theoretical result has a Z-score of $-0.843$, an insignificant difference.
n.sim <- 1e6
set.seed(17)
xy <- matrix(rnbinom(2*n.sim, 10, 1/2), nrow=2)
p <- mean(xy[1,] <= xy[2,])
cat("Estimate:", signif(p, 4),
    "Z-score:", signif((p - 0.532909774) / sqrt(p*(1-p)) * sqrt(n.sim), 3))
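The infinite sum can also be evaluated directly with scipy's negative binomial functions; an illustrative Python check, where the truncation at $n = 2000$ is an arbitrary cutoff past which the tail is negligible:

```python
import numpy as np
from scipy import stats

m, prob = 10, 0.5
n = np.arange(2000)                  # truncate the infinite sum; tail is negligible
f = stats.nbinom.pmf(n, m, prob)     # P(exactly n tails before the 10th head)
G = stats.nbinom.sf(n - 1, m, prob)  # P(n or more tails before the 10th head)
total = float(np.sum(f * G))
print(total)  # 0.53290977...
```

Note that `sf(n - 1)` gives $P(X > n-1) = P(X \ge n)$, which is exactly $G(n)$; at $n = 0$ it correctly returns $1$.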
16,138
How can I model flips until N successes?
We can model the game like this: Player A flips a coin repeatedly, getting results $A_1, A_2, \dots$ until they get a total of 10 heads. Let the time index of the 10th heads be the random variable $X$. Player B does the same. Let the time index of the 10th heads be the random variable $Y$, which is an iid copy of $X$. If $X \le Y$, Player A wins; otherwise Player B wins. That is, \begin{align} \Pr(A\text{ wins})&= \Pr(X \ge Y) = \Pr(X > Y) + \Pr(X = Y)\\ \Pr(B\text{ wins})&= \Pr(Y > X) = \Pr(X > Y). \end{align} The gap in the win rates is thus $$ \Pr(X = Y) = \sum_k \Pr(X = k, Y = k) = \sum_k \Pr(X = k)^2 .$$ As you suspected, $X$ (and $Y$) are distributed essentially according to a negative binomial distribution. Notations for this vary, but in Wikipedia's parameterization, we have heads as a "failure" and tails as a "success"; we need $r = 10$ "failures" (heads) before the experiment is stopped, and success probability $p = \tfrac12$. Then the number of "successes," which is $X - 10$, has $$\Pr(X - 10 = k) = \binom{k + 9}{k} 2^{-10 - k},$$ and the collision probability is $$ \Pr(X = Y) = \sum_{k=0}^\infty \binom{k + 9}{k}^2 2^{-2k - 20} ,$$ which Mathematica helpfully tells us is $\frac{76\,499\,525}{1\,162\,261\,467} \approx 6.6\%$. Thus Player B's win rate is $\Pr(Y > X) \approx 46.7\%$, and Player A's is $\frac{619\,380\,496}{1\,162\,261\,467} \approx 53.3\%$.
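To double-check the arithmetic, the collision sum can be evaluated numerically. This is an illustrative Python sketch; truncating the infinite sum at $k = 300$ is an arbitrary choice past which the terms are negligible.

```python
from math import comb

# Pr(X = Y) = sum_k C(k+9, k)^2 * 2^(-2k-20), truncated at k = 300
p_tie = sum(comb(k + 9, k) ** 2 * 2.0 ** (-2 * k - 20) for k in range(300))
p_A = 0.5 + p_tie / 2  # Player A also wins ties, so Pr(A wins) = (1 + p_tie) / 2
print(p_tie, p_A)      # ~ 0.0658, ~ 0.5329
```

The floating-point result matches the exact fractions $\frac{76\,499\,525}{1\,162\,261\,467}$ and $\frac{619\,380\,496}{1\,162\,261\,467}$ quoted above.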
16,139
How can I model flips until N successes?
Let $E_{ij}$ be the event that the player on roll flips $i$ heads before the other player flips $j$ heads, let $X$ be the first two flips, having sample space $\{ hh,ht,th,tt\}$ where $h$ means heads and $t$ tails, and let $p_{ij} \equiv Pr(E_{ij})$. Then $p_{ij}=Pr(E_{i-1j-1}|X=hh)*Pr(X=hh)+Pr(E_{i-1j}|X=ht)*Pr(X=ht)+Pr(E_{ij-1}|X=th)*Pr(X=th)+Pr(E_{ij}|X=tt)*Pr(X=tt)$. Assuming a fair coin, $Pr(X=*)=1/4$, which means that $p_{ij}=1/4*[p_{i-1j-1}+p_{i-1j}+p_{ij-1}+p_{ij}]$; solving for $p_{ij}$ gives $p_{ij} = 1/3*[p_{i-1j-1}+p_{i-1j}+p_{ij-1}]$. But $p_{0j}=p_{00}=1$ and $p_{i0}=0$, implying that the recursion fully terminates. However, a direct naive recursive implementation will yield poor performance because the branches intersect. An efficient implementation has time complexity $O(i*j)$ and memory complexity $O(\min(i,j))$. Here's a simple fold implemented in Haskell:
Prelude> let p i j = last . head . drop j $ iterate ((1:).(f 1)) start where start = 1 : replicate i 0; f c v = case v of (a:[]) -> []; (a:b:rest) -> sum : f sum (b:rest) where sum = (a+b+c)/3
Prelude> p 0 0
1.0
Prelude> p 1 0
0.0
Prelude> p 10 10
0.5329097742513388
UPDATE: Someone in the comments above asked whether one was supposed to roll 10 heads in a row or not. So let $E_{kl}$ be the event that the player on roll flips $i$ heads in a row before the other player does, given that they have already flipped $k$ and $l$ consecutive heads respectively. Proceeding as before, but this time conditioning on the first flip only, $p_{k,l} = 1-1/2*[p_{l,k+1}+p_{l,0}]$, where $p_{i,l}=p_{i,i}=1$ and $p_{k,i}=0$. This is a linear system with $i^2$ unknowns and one unique solution. To convert it into an iterative scheme, simply add an iterate number $n$ and a sensitivity factor $\epsilon$: $p_{k,l,n+1} = 1/(1+\epsilon)*[\epsilon*p_{k,l,n} +1-1/2*(p_{l,k+1,n}+p_{l,0,n})]$. Choose $\epsilon$ and $p_{k,l,0}$ wisely, run the iteration for a few steps, and monitor the correction term.
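For readers who don't speak Haskell, here is an illustrative bottom-up Python translation of the same recursion (not the author's code):

```python
def p_win(i, j):
    """P(the player on roll gets i heads before the opponent gets j),
    via the recursion p[a][b] = (p[a-1][b-1] + p[a-1][b] + p[a][b-1]) / 3."""
    p = [[0.0] * (j + 1) for _ in range(i + 1)]
    for b in range(j + 1):
        p[0][b] = 1.0                      # needs 0 more heads: already won
    for a in range(1, i + 1):
        for b in range(1, j + 1):          # p[a][0] stays 0: opponent won
            p[a][b] = (p[a - 1][b - 1] + p[a - 1][b] + p[a][b - 1]) / 3
    return p[i][j]

print(p_win(10, 10))  # 0.53290977...
```

This table-filling version makes the $O(i \cdot j)$ time complexity explicit, and its output agrees with the Haskell fold and with the closed-form answers above.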
16,140
What is the distribution of sample means of a Cauchy distribution?
If $X_1, \ldots, X_n$ are i.i.d. Cauchy$(0, 1)$ then we can show that $\bar{X}$ is also Cauchy$(0, 1)$ using a characteristic function argument: \begin{align} \varphi_{\bar{X}}(t) &= \text{E} \left (e^{it \bar{X}} \right ) \\ &= \text{E} \left ( \prod_{j=1}^{n} e^{it X_j / n} \right ) \\ &= \prod_{j=1}^{n} \text{E} \left ( e^{it X_j / n} \right ) \\ &= \text{E} \left (e^{it X_1 / n} \right )^n \\ &= e^{- |t|} \end{align} which is the characteristic function of the standard Cauchy distribution. The proof for the more general Cauchy$(\mu, \sigma)$ case is basically identical.
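As a sanity check, a small simulation is consistent with this result (an illustrative Python sketch; the sample sizes are arbitrary). If $\bar{X} \sim$ Cauchy$(0, 1)$, its quartiles are $\tan(\pm\pi/4) = \pm 1$ regardless of $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100,000 sample means, each averaging n = 50 standard Cauchy draws
means = rng.standard_cauchy((100_000, 50)).mean(axis=1)

# quartiles of a Cauchy(0, 1) variable are +/- 1
q1, q3 = np.quantile(means, [0.25, 0.75])
print(q1, q3)  # close to -1 and +1 despite averaging 50 draws
```

Quartiles are used rather than the sample standard deviation because the latter does not converge for Cauchy data.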
16,141
What is the distribution of sample means of a Cauchy distribution?
Typically when one takes random sample averages of a distribution (with sample size greater than 30) one obtains a normal distribution centering around the mean value. Not exactly. You're thinking of the central limit theorem, which states that given a sequence $X_n$ of IID random variables with finite variance $\sigma^2$ (which itself implies a finite mean $\mu$), the expression $\sqrt{n}\left[(X_1 + X_2 + \cdots + X_n)/n - \mu\right]$ converges in distribution to a normal distribution $N(0, \sigma^2)$ as $n$ goes to infinity. There is no guarantee that the sample mean of any finite collection of the variables will be exactly normally distributed. However, I heard that the Cauchy distribution has no mean value. What distribution does one obtain then when obtaining sample means of the Cauchy distribution? Like GeoMatt22 said, the sample means will themselves be Cauchy distributed. In other words, the Cauchy distribution is a stable distribution. Notice that the central limit theorem doesn't apply to Cauchy distributed random variables because they don't have a finite mean or variance.
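A small simulation illustrates the contrast (a sketch with sample sizes of my choosing): averaging 100 normal draws shrinks the interquartile range of the sample mean by a factor of 10, while averaging 100 Cauchy draws leaves it unchanged at roughly 2, the IQR of a single standard Cauchy draw:

```python
import math, random

random.seed(1)

def cauchy():
    # standard Cauchy draw via the inverse-CDF transform
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(xs):
    # rough interquartile range from the sorted sample
    s = sorted(xs)
    return s[3 * len(s) // 4] - s[len(s) // 4]

n, m = 100, 4000                  # n draws per mean, m replicated means
iqr_cauchy = iqr([sum(cauchy() for _ in range(n)) / n for _ in range(m)])
iqr_normal = iqr([sum(random.gauss(0, 1) for _ in range(n)) / n for _ in range(m)])
# Averaging shrinks the normal spread ~10x (CLT scaling), but the spread of
# Cauchy sample means stays at the IQR of a single Cauchy draw, about 2.
```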
16,142
Calculate the Kullback-Leibler Divergence in practice?
You can't and you don't. Imagine that you have a random variable with probability distribution Q. But your friend Bob thinks that the outcome comes from the probability distribution P. He has constructed an optimal encoding that minimizes the number of expected bits he will need to use to tell you the outcome. But since he constructed the encoding from P and not from Q, his codes will be longer than necessary. KL-divergence measures how much longer the codes will be. Now let's say he has a coin and he wants to tell you the sequence of outcomes he gets. Because head and tail are equally likely he gives them both 1-bit codes: 0 for head, 1 for tail. If he gets tail tail head tail, he can send 1 1 0 1. Now, if his coin lands on the edge he cannot possibly tell you! No code he sends you would work. At this point KL-divergence breaks down. Since KL-divergence breaks down you will either have to use another measure or other probability distributions. What you should do really depends on what you want. Why are you comparing probability distributions? Where do your probability distributions come from; are they estimated from data? You say your probability distributions come from natural language documents somehow, and you want to compare pairs of categories. First, I'd recommend a symmetric relatedness measure. For this application it sounds like you want A to be as similar to B as B is similar to A. Have you tried the cosine similarity measure? It is quite common in NLP. If you want to stick with KL, one thing you could do is estimate a probability function from both documents and then see how many extra bits you'd need on average for either document. That is, $\big(KL(P \,\|\, (P+Q)/2) + KL(Q \,\|\, (P+Q)/2)\big)/2$.
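A minimal sketch of that last suggestion in Python (the function names are mine): the symmetrised quantity is finite even when one distribution has zeros where the other does not, because the mixture $M = (P+Q)/2$ is positive wherever either input is:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence in bits; terms with p_i = 0 contribute 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sym_divergence(p, q):
    # (KL(P||M) + KL(Q||M)) / 2 with M = (P+Q)/2: finite even when P and Q
    # put zero mass in different places, and symmetric in its arguments
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return (kl(p, m) + kl(q, m)) / 2

p = [0.5, 0.5, 0.0]               # "the coin never lands on its edge"
q = [0.0, 0.5, 0.5]               # "the coin never lands on heads"
d = sym_divergence(p, q)          # plain kl(p, q) would divide by zero here
```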
16,143
Calculate the Kullback-Leibler Divergence in practice?
In practice, I have run into this issue as well. In this case, I've found that substituting some very small number for the value of 0 can cause problems. Depending upon the value that you use, you will introduce a "bias" in the KL value. If you are using the KL value for hypothesis testing or some other use that involves a threshold, then this small value can bias your results. I have found that the most effective way to deal with this is to only compute the KL over a consistent hypothesis space $X_i$ where BOTH P and Q are non-zero. Essentially, this limits the domain of the KL to a domain where both are defined and keeps you out of trouble when using the KL to perform hypothesis tests.
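A sketch of this restriction in Python; note that I also renormalise each distribution over the common support so the restricted vectors remain proper distributions, a step the description above leaves implicit:

```python
import math

def kl_common_support(p, q):
    # Keep only outcomes where BOTH p and q are positive, renormalise each
    # distribution over that restricted domain, then take the usual KL (bits).
    pairs = [(pi, qi) for pi, qi in zip(p, q) if pi > 0 and qi > 0]
    ps = sum(pi for pi, _ in pairs)
    qs = sum(qi for _, qi in pairs)
    return sum((pi / ps) * math.log2((pi / ps) / (qi / qs)) for pi, qi in pairs)

q = [0.25, 0.25, 0.5]
d0 = kl_common_support([0.5, 0.5, 0.0], q)    # restricted Pistributions agree -> 0
d1 = kl_common_support([0.75, 0.25, 0.0], q)  # they differ -> positive
```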
16,144
Calculate the Kullback-Leibler Divergence in practice?
Having a probability distribution where $Q_i=0$ for any $i$ means that you are certain that outcome $i$ can not occur. Therefore if outcome $i$ were ever observed it would represent infinite surprise/information, which is what Shannon information represents. KL divergence represents the amount of additional surprise (i.e. information lost) per observation if the distribution $Q$ is used as an approximation for the distribution $P$. If the approximation predicts 0 probability for an event that has a positive probability in reality, then you will experience infinite surprise some percentage of the time and thus infinite surprise on average. The solution is to never allow 0 or 1 probabilities in estimated distributions. This is usually achieved by some form of smoothing like Good-Turing smoothing, Dirichlet smoothing or Laplace smoothing.
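For example, add-one (Laplace) smoothing of raw counts guarantees strictly positive probabilities, so the KL divergence is always finite; a minimal sketch:

```python
import math

def laplace_smooth(counts, alpha=1.0):
    # add-alpha smoothing: every outcome gets strictly positive probability
    total = sum(counts) + alpha * len(counts)
    return [(c + alpha) / total for c in counts]

def kl(p, q):
    # KL divergence in bits; terms with p_i = 0 contribute 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = laplace_smooth([7, 3, 0])     # the raw MLE would give the last outcome mass 0
q = laplace_smooth([2, 5, 3])
d = kl(p, q)                      # finite, since q has no zeros
```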
16,145
Probit two-stage least squares (2SLS)
What was proposed to you is sometimes referred to as a forbidden regression and in general you will not consistently estimate the relationship of interest. Forbidden regressions produce consistent estimates only under very restrictive assumptions which rarely hold in practice (see for instance Wooldridge (2010) "Econometric Analysis of Cross Section and Panel Data", pp. 265-268). The problem is that neither the conditional expectations operator nor the linear projection carries through nonlinear functions. For this reason only an OLS regression in the first stage is guaranteed to produce fitted values that are uncorrelated with the residuals. A proof for this can be found in Greene (2008) "Econometric Analysis" or, if you want a more detailed (but also more technical) proof, you can have a look at the notes by Jean-Louis Arcand on pp. 47 to 52. For the same reason as in the forbidden regression, this seemingly obvious two-step procedure of mimicking 2SLS with probit will not produce consistent estimates. This is again because expectations and linear projections do not carry over through nonlinear functions. Wooldridge (2010) in section 15.7.3 on page 594 provides a detailed explanation for this. He also explains the proper procedure of estimating probit models with a binary endogenous variable. The correct approach is to use maximum likelihood, but doing this by hand is not exactly trivial. Therefore it is preferable if you have access to some statistical software which has a ready-made package for this. For example, the Stata command would be ivprobit (see the Stata manual for this command, which also explains the maximum likelihood approach). If you require references for the theory behind probit with instrumental variables see for instance:
Newey, W. (1987) "Efficient estimation of limited dependent variable models with endogenous explanatory variables", Journal of Econometrics, Vol. 36, pp. 231-250
Rivers, D. and Vuong, Q.H.
(1988) "Limited information estimators and exogeneity tests for simultaneous probit models", Journal of Econometrics, Vol. 39, pp. 347-366
Finally, combining different estimation methods in the first and second stages is difficult unless there exists a theoretical foundation which justifies their use. This is not to say that it is not feasible though. For instance, Adams et al. (2009) use a three-step procedure where they have a probit "first stage" and an OLS second stage without falling for the forbidden regression problem. Their general approach is:
1. use probit to regress the endogenous variable on the instrument(s) and control variables
2. use the predicted values from the previous step in an OLS first stage together with the control (but without the instrumental) variables
3. do the second stage as usual
A similar procedure was employed by a user on the Statalist who wanted to use a Tobit first stage and a Poisson second stage (see here). The same fix should be feasible for your estimation problem.
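A pure-Python sketch of that three-step idea on simulated data (the simulation design, variable names, and the Fisher-scoring probit fit are my own illustration, not taken from Adams et al.): a probit of the endogenous binary regressor on the instrument, an OLS first stage on the fitted probabilities, then an ordinary second stage:

```python
import math, random

random.seed(1)

def phi(t):   # standard normal pdf
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi(t):   # standard normal cdf
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# --- simulate an endogenous binary regressor x with instrument z ---
n, beta_true = 20000, 2.0
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]                 # outcome error
v = [0.8 * ui + 0.6 * random.gauss(0, 1) for ui in u]      # corr(u, v) = 0.8
x = [1.0 if 0.3 + zi + vi > 0 else 0.0 for zi, vi in zip(z, v)]
y = [1.0 + beta_true * xi + ui for xi, ui in zip(x, u)]

beta_naive = ols_slope(x, y)      # plain OLS: biased upward by endogeneity

# Step 1: probit of x on z, fitted by Fisher scoring (intercept a, slope b).
a = b = 0.0
for _ in range(25):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for zi, xi in zip(z, x):
        eta = a + b * zi
        pr = min(max(Phi(eta), 1e-10), 1 - 1e-10)
        d = phi(eta)
        r = (xi - pr) * d / (pr * (1 - pr))     # score contribution
        w = d * d / (pr * (1 - pr))             # expected-information weight
        g0 += r; g1 += r * zi
        h00 += w; h01 += w * zi; h11 += w * zi * zi
    det = h00 * h11 - h01 * h01
    a += (h11 * g0 - h01 * g1) / det
    b += (h00 * g1 - h01 * g0) / det

phat = [Phi(a + b * zi) for zi in z]

# Step 2: OLS first stage of x on the fitted probabilities.
s = ols_slope(phat, x)
c = sum(x) / n - s * sum(phat) / n
xhat = [c + s * pi for pi in phat]

# Step 3: ordinary second stage of y on the first-stage fitted values.
beta_iv = ols_slope(xhat, y)
```

In a full application you would also carry the exogenous controls through every step and compute corrected standard errors; this stripped-down version only shows why the generated-instrument route avoids the forbidden-regression problem.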
16,146
Probit two-stage least squares (2SLS)
I wanted to add an answer here because there seems to be A LOT of confusion about the forbidden regression. In my opinion the OP falls into this regression only when he wishes for the second stage to be a probit/poisson model. I base my answer on comments from Wooldridge himself on Statalist, and Wooldridge (2010) Econometric Analysis of Cross Section and Panel Data. I went through some of Wooldridge's own comments on Statalist and, in contrast to what Andy comments (please don't shoot me), it seems that the forbidden regression is using fitted values of a first stage in a non-linear second stage. I base this on two threads: Here Wooldridge explains that another poster falls into the forbidden regression trap: https://www.statalist.org/forums/forum/general-stata-discussion/general/1308457-endogeneity-issue-negative-binomial. I quote him: "You cannot, in most cases, simply insert fitted values for the EEV into a nonlinear function." In this post Wooldridge even suggests using an ordinal probit in the first stage (and using the fitted probabilities in the second stage), which therefore apparently does not pose any issue: https://www.statalist.org/forums/forum/general-stata-discussion/general/1381281-iv-estimation-for-ordinal-variable?_=1617356656297. Please also note that in my opinion Wooldridge (2010) mentions that you can still use 2SLS in this case, but not mimic it by using fitted values! See chapter 9.5.2 titled "Estimation".
16,147
Probit two-stage least squares (2SLS)
"if you want a more detailed (but also more technical) proof, you can have a look at the notes by Jean-Louis Arcand on p. 47 to 52." This does not seem to be the case. The Arcand discussion is not about the functional form; instead, it is about the inclusion of different covariate sets in the first-stage versus the second-stage models: "In words, the correct 2SLS procedure entails including all of the exogenous covariates that appear in the structural equation in the first-stage reduced form. The forbidden regression involves leaving some or all of them out." Going back to the original question, I would recommend using an OLS for the first stage, and the probit for the second. While this may be technically biased, it is likely (assuming you have a good instrument) to be less biased than the non-IV approach.
16,148
Perform linear regression, but force solution to go through some particular data points
The model in question can be written $$y = p(x) + (x-x_1)\cdots(x-x_d)\left(\beta_0 + \beta_1 x + \cdots + \beta_p x^p \right) + \varepsilon$$ where $p(x_i) = y_i$ is a polynomial of degree $d-1$ passing through predetermined points $(x_1,y_1), \ldots, (x_d,y_d)$ and $\varepsilon$ is random. (Use the Lagrange interpolating polynomial.) Writing $(x-x_1)\cdots(x-x_d) = r(x)$ allows us to rewrite this model as $$y - p(x) = \beta_0 r(x) + \beta_1 r(x)x + \beta_2 r(x)x^2 + \cdots + \beta_p r(x)x^p + \varepsilon,$$ which is a standard OLS multiple regression problem with the same error structure as the original where the independent variables are the $p+1$ quantities $r(x)x^i,$ $i=0, 1, \ldots, p$. Simply compute these variables and run your familiar regression software, making sure to prevent it from including a constant term. The usual caveats about regressions without a constant term apply; in particular, the $R^2$ can be artificially high; the usual interpretations do not apply. (In fact, regression through the origin is a special case of this construction where $d=1$, $(x_1,y_1) = (0,0)$, and $p(x)=0$, so that the model is $y = \beta_0 x + \cdots + \beta_p x^{p+1} + \varepsilon.$) Here is a worked example (in R):

# Generate some data that *do* pass through three points (up to random error).
x <- 1:24
f <- function(x) ( (x-2)*(x-12) + (x-2)*(x-23) + (x-12)*(x-23) ) / 100
y0 <- (x-2) * (x-12) * (x-23) * (1 + x - (x/24)^2) / 10^4 + f(x)
set.seed(17)
eps <- rnorm(length(y0), mean=0, 1/2)
y <- y0 + eps
data <- data.frame(x,y)

# Plot the data and the three special points.
plot(data)
points(cbind(c(2,12,23), f(c(2,12,23))), pch=19, col="Red", cex=1.5)

# For comparison, conduct unconstrained polynomial regression.
data$x2 <- x^2
data$x3 <- x^3
data$x4 <- x^4
fit0 <- lm(y ~ x + x2 + x3 + x4, data=data)
lines(predict(fit0), lty=2, lwd=2)

# Conduct the constrained regression.
data$y1 <- y - f(x)
data$r <- (x-2)*(x-12)*(x-23)
data$z0 <- data$r
data$z1 <- data$r * x
data$z2 <- data$r * x^2
fit <- lm(y1 ~ z0 + z1 + z2 - 1, data=data)
lines(predict(fit) + f(x), col="Red", lwd=2)

The three fixed points are shown in solid red--they are not part of the data. The unconstrained fourth-order polynomial least squares fit is shown with a black dotted line (it has five parameters); the constrained fit (of order five, but with only three free parameters) is shown with the red line. Inspecting the least squares output (summary(fit0) and summary(fit)) can be instructive--I leave this to the interested reader.
16,149
Perform linear regression, but force solution to go through some particular data points
If you want to force a regression line to go through a single point, that can be done in a roundabout way. Let's say your point is $(x_i,y_i)$. You just re-center your data with that point as the origin. That is, you subtract $x_i$ from every $x$-value, and $y_i$ from every $y$-value. Now the point is at the origin of the coordinate plane. Then you simply fit a regression line while suppressing the intercept (forcing the line through $(0,0)$). Because this is a linear transformation, you can easily back-transform everything afterwards if you want to. If you want to force a line to go through two points in an X-Y plane, that's pretty easy to do also. Any two points can be fit with a line. You can use the two points to calculate your slope, and then use one of the points, the slope, and the equation of a line to find the intercept. Note that it may not be possible to fit a straight line through three points in a coordinate plane. However, we can guarantee that they can be fit perfectly with a parabola (that is, using both $X$ and $X^2$). There is algebra for this too, but as we move up, it might be easier to just fit a model with software by including only those three points in the dataset. Similarly, you could get the straight line that best approximates those three points by fitting a model that has access only to those three points. I feel compelled to mention at this point, however, that this may not be a great thing to do (unless your theory provides very solid reasons for doing so). You may also want to look into Bayesian regression, where you can allow your model to find the best combination of the information in your data and some prior information (which you could use to strongly bias your intercept towards zero, for example, without quite forcing it).
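The single-point trick can be sketched in a few lines of Python (the data here are hypothetical; the key check is that the fitted line passes exactly through the chosen point):

```python
def line_through_point(xs, ys, x0, y0):
    # Shift (x0, y0) to the origin, fit a no-intercept (through-origin) slope,
    # then shift back to the original coordinates.
    xc = [x - x0 for x in xs]
    yc = [y - y0 for y in ys]
    slope = sum(a * b for a, b in zip(xc, yc)) / sum(a * a for a in xc)
    intercept = y0 - slope * x0
    return slope, intercept

# hypothetical data scattered around y = x + 1
xs = [0, 1, 2, 3, 4, 5]
ys = [0.9, 2.1, 2.9, 4.2, 5.1, 5.8]
m, c = line_through_point(xs, ys, 2, 3)   # force the fit through (2, 3)
```

By construction the intercept is `y0 - slope * x0`, so the prediction at `x0` equals `y0` up to floating-point rounding regardless of the data.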
16,150
Perform linear regression, but force solution to go through some particular data points
To add a little extra information to @gung's excellent coverage of the linear case: in the higher-order polynomial case there are several ways you could do it, either exactly or approximately (but pretty much as accurately as you need). First, note that the degrees of freedom for the polynomial (or indeed of any fitted function) must be at least as large as the number of "known" points. If the degrees of freedom are equal, you don't need the data at all, since the curve is completely determined. If there are more 'known' points you can't solve it (unless they all lie on exactly the same polynomial of the specified degree, in which case any suitably-sized subset will suffice). From here on, I'll just talk about when the polynomial has more d.f. than the known points (such as a cubic - with 4 df - and three known points, so that the cubic is neither overdetermined by known points nor completely determined by them). 1) "The curve must pass through this point" is a linear constraint on the parameters, resulting in constrained estimation or constrained least squares (though both terms can include things other than linear constraints, such as positivity constraints). You can incorporate linear constraints by either   (a) recasting the parameterization to implicitly include each constraint, resulting in a lower-order model, or   (b) using standard tools that can incorporate linear constraints on the parameters of a least squares fit (usually via something like the formula given at the above link). 2) Another way is via weighted regression. If you give the known points sufficiently large weight, you can get essentially the same fit as in (1). This is often readily implemented, can be substantially quicker than reparameterizing, and can be done in packages that don't offer constrained fitting. All of @gung's caveats apply.
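The weighted-regression approach in (2) is easy to try with NumPy's weighted polynomial fit. In this hypothetical sketch, one "known" point gets an enormous weight, so the fitted quadratic passes through it to within numerical precision:

```python
import numpy as np

# Made-up data, plus one "known" point the quadratic should (almost) pass through
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 2.9, 7.1, 12.8, 21.2, 30.9])
x_known, y_known = 2.5, 9.0

# Append the known point with a huge weight; ordinary points get weight 1
xw = np.append(x, x_known)
yw = np.append(y, y_known)
w = np.append(np.ones_like(x), 1e6)

coefs = np.polyfit(xw, yw, deg=2, w=w)  # quadratic: 3 df, one 'known' point
fit = np.poly1d(coefs)

# The heavy weight forces the curve through the known point almost exactly
assert abs(fit(x_known) - y_known) < 1e-4
```

Cranking the weight higher approaches the exactly constrained fit, at the cost of worsening the numerical conditioning of the least-squares problem.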
16,151
Fastest SVM implementation
Google's sofia-ml library contains an extremely fast implementation of a linear SVM. It's one of the fastest SVMs out there, but I think it only supports classification, and only supports linear SVMs. There's even an R package!
16,152
Fastest SVM implementation
The easiest speedup you're going to get is running the cross-validation in parallel. Personally, I like the caret package in R, which uses foreach as a backend. It makes it very easy to farm the cross-validation and grid search out to multiple cores or multiple machines. Caret can handle many different models, including rbf SVMs: library(caret) library(doMC) registerDoMC() model <- train(Species ~ ., data = iris, method="svmRadial", trControl=trainControl(method='cv', number=10)) > confusionMatrix(model) Cross-Validated (10 fold) Confusion Matrix (entries are percentages of table totals) Reference Prediction setosa versicolor virginica setosa 32.4 0.0 0.0 versicolor 0.0 30.9 2.0 virginica 0.9 2.4 31.3 Note that the doMC package is only available on Mac and Linux, it should be run from the command line, not from a GUI, and it breaks any models from RWeka. It's also easy to use MPI or SNOW clusters as a parallel backend, which don't have these issues.
16,153
Fastest SVM implementation
I realize this is quite an old question, but it's also possible (depending on the size of your dataset, it can be more or less effective) to use low-dimensional approximations of the kernel feature map and then use that in a linear SVM. See http://scikit-learn.org/stable/modules/kernel_approximation.html
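As a hedged sketch of what that scikit-learn page describes: approximate the RBF feature map with `RBFSampler`, then hand the transformed features to a plain linear SVM (toy data; the hyperparameters here are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import LinearSVC

# Toy classification problem
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Low-dimensional Monte Carlo approximation of the RBF kernel feature map
rbf = RBFSampler(gamma=0.2, n_components=100, random_state=0)
X_mapped = rbf.fit_transform(X)  # shape (500, 100)

# A fast linear SVM trained on the approximate kernel features
clf = LinearSVC(C=1.0, max_iter=5000).fit(X_mapped, y)
acc = clf.score(X_mapped, y)  # training accuracy
print(acc)
```

This trades a little accuracy (the approximation improves with `n_components`) for training time that scales like a linear SVM.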
16,154
Fastest SVM implementation
Have a look at Python's multiprocessing module. It makes parallelizing things really easy and is perfect for cross validation.
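A minimal, self-contained sketch of that idea; the "model" here is just a mean predictor, so the example runs without any ML library:

```python
import multiprocessing as mp
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)
folds = np.array_split(np.arange(100), 5)

def score_fold(i):
    """Train on everything outside fold i, return test MSE on fold i."""
    test_idx = folds[i]
    train_idx = np.setdiff1d(np.arange(100), test_idx)
    prediction = X[train_idx].mean()  # "fit" the trivial mean model
    return float(np.mean((X[test_idx] - prediction) ** 2))

# On Windows/macOS, wrap the lines below in `if __name__ == "__main__":`
with mp.Pool(processes=2) as pool:
    scores = pool.map(score_fold, range(5))  # folds evaluated in parallel
print(scores)
```

In practice you would replace the body of `score_fold` with fitting and scoring your SVM on that fold.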
16,155
Fastest SVM implementation
R has a great GPU-accelerated SVM package, rpusvm. It takes ~20 seconds to train on 20K samples × 100 dimensions, and I found that the CPU is never overloaded by it, so it uses the GPU efficiently. However, it requires an NVIDIA GPU.
16,156
Fastest SVM implementation
Alert: this is a shameless plug. Consider DynaML, a Scala-based ML library I am working on. I have implemented kernel-based LS-SVM (Least Squares Support Vector Machines) along with automated kernel tuning, using grid search or Coupled Simulated Annealing. http://mandar2812.github.io/DynaML/
16,157
Are confidence intervals open or closed intervals?
The short answer is "Yes". The longer answer is that it does not really matter that much, because the ends of the intervals are random variables based on the sample (and assumptions, etc.), and if we are talking about a continuous variable then the probability of getting an exact value (a bound equaling the true parameter) is 0. Confidence intervals are the range of null values that would not be rejected, so what do you do if you compute a p-value that is exactly $\alpha$? (another probability-0 event for continuous cases). If you reject when $p = \alpha$ exactly then your CI is open; if you don't reject then the CI is closed. For practical purposes, it doesn't matter that much.
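The duality between the interval and the test is easy to check numerically: testing the null value that sits exactly at a t-interval endpoint returns a p-value of exactly $\alpha$ (simulated data; scipy's one-sample t-test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=30)

alpha = 0.05
n = len(x)
mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
upper = mean + stats.t.ppf(1 - alpha / 2, df=n - 1) * se  # 95% CI upper bound

# Testing H0: mu equals the CI endpoint gives p = alpha
p = stats.ttest_1samp(x, popmean=upper).pvalue
print(p)  # 0.05 up to floating-point rounding
```

So whether the endpoint itself is "in" the interval hinges precisely on whether you reject when $p = \alpha$ exactly.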
16,158
Are confidence intervals open or closed intervals?
It depends on the support of the distribution function for the sampling distribution of the value you're trying to estimate. I would say that confidence intervals for binomial proportions are, in fact, closed intervals, since there are only a finite number of values the statistic can achieve and the confidence interval would contain all its limit points (i.e. the endpoints are inclusive).
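To make the finiteness concrete: with $n$ trials the sample proportion can only take the values $k/n$, so there are just $n+1$ possible (say, Wald) intervals. A quick sketch:

```python
import numpy as np

n, z = 10, 1.96  # sample size and approximate 97.5% normal quantile

# Enumerate every achievable Wald interval phat -/+ z * se
intervals = []
for k in range(n + 1):
    phat = k / n
    se = np.sqrt(phat * (1 - phat) / n)
    intervals.append((phat - z * se, phat + z * se))

print(len(intervals))  # → 11: one interval per achievable value of phat
```

Whether each of those 11 intervals is then written open or closed is a matter of convention.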
16,159
Are confidence intervals open or closed intervals?
A confidence set with confidence $1-\alpha$ for parameter $\theta$ is a set $\mathcal{S}$ for which $P(\theta\in\mathcal{S}) = 1-\alpha$. This set could be an open interval, a closed interval, or it could not even be an interval at all. I think it makes sense to call any confidence set which takes the form of either an open, closed, or half open interval a "confidence interval".
16,160
Are confidence intervals open or closed intervals?
As other answers here correctly point out, it doesn't usually matter. If the parameter space is continuous, and the method of finding the CI is "continuous" (e.g. sample mean $\pm$ margin of error; what I have in mind are CI methods where the endpoints are not 2 of the sampled observations), then indeed it doesn't matter whether it's open or closed. However, I'd like to make a case for why the convention should be "closed." If the parameter space is discrete, it does matter. Consider an integer-valued parameter, such as the unknown size of a finite population in a capture-recapture problem[1]. In that case, it feels much more natural and less confusing to say "We are 95% confident that the population size is between 93 and 106, including the endpoints" rather than "...between 92 and 107, excluding the endpoints." In other words, the closed CI [93, 106] makes more sense than an open CI such as (92, 107) or (92.9, 106.1) or whatever. Even with a continuous parameter space, some CI calculation methods just choose two observations to be the CI endpoints. Consider a bootstrap percentile CI. Conventionally, we include those endpoints as part of the CI to ensure our estimated coverage is at least nominal: if we're trying to get a 95% CI, we include the endpoints so that at least 95% of the bootstrap statistics are in the CI. (I know bootstrap percentile CIs are not guaranteed to have the right coverage! But this is what we usually hope they will do, even if they don't necessarily succeed.) More generally, statisticians tend to conventionally prefer theoretical guarantees that are slightly conservative: most of us would rather have slight over-coverage than under-coverage in our CIs. In this spirit, closed intervals are slightly more appropriate. [1] Edited to replace initial example (Poisson median, which may be questionable as per @whuber's comment below) with finite population size. Another example could be the unknown count within a finite population. 
If there are $N$ students in my class and I want to know $\theta$ = the number who would say "Yes" to a sensitive question, I could use randomized response to ask it in a privacy-protecting way. Then I may want a confidence interval for $\theta$ which has the discrete parameter space $\{0,1,\ldots,N\}$.
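A quick sketch of the percentile-bootstrap point with made-up data: the CI endpoints are (interpolations between) the bootstrap statistics themselves, and the closed interval contains at least the nominal share of them:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=3.0, size=50)

# Percentile bootstrap CI for the median
boot = np.array([np.median(rng.choice(x, size=50, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

# Share of bootstrap statistics inside the CLOSED interval [lo, hi]
coverage = np.mean((boot >= lo) & (boot <= hi))
print(lo, hi, coverage)
```

With open endpoints, the same interval could cover slightly less than 95% of the bootstrap statistics whenever ties fall exactly on an endpoint, which is common for statistics like the median that take only finitely many resampled values.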
16,161
Are confidence intervals open or closed intervals?
My answer is that it is open. Since we have an interval from which we get a neighbourhood of the value of our unknown parameter, and since, as we all know, this interval gives us only an approximate value of the estimand (i.e., an estimate), how can it be possible to declare it a closed interval? One more point: if we had a closed interval, then our estimate would be fully bounded, and we want a value that lies between the limits of this interval only. By definition it must be closed, but in my opinion it should be open.
16,162
How can I use logistic regression betas + raw data to get probabilities
Here is the applied researcher's answer (using the statistics package R). First, let's create some data, i.e. I am simulating data for a simple bivariate logistic regression model $log(\frac{p}{1-p})=\beta_0 + \beta_1 \cdot x$: > set.seed(3124) > > ## Formula for converting logit to probabilities > ## Source: http://www.statgun.com/tutorials/logistic-regression.html > logit2prop <- function(l){exp(l)/(1+exp(l))} > > ## Make up some data > y <- rbinom(100, 1, 0.2) > x <- rbinom(100, 1, 0.5) The predictor x is a dichotomous variable: > x [1] 0 1 1 1 1 1 0 1 0 1 0 1 0 0 1 1 1 0 1 1 1 1 1 1 0 0 1 1 1 1 0 0 1 0 0 1 0 0 0 1 1 1 0 1 1 1 1 [48] 1 1 0 1 0 0 0 0 1 0 0 1 1 0 0 0 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 1 0 1 0 1 0 1 1 1 1 1 0 1 0 0 0 [95] 1 1 1 1 1 0 Second, estimate the intercept ($\beta_0$) and the slope ($\beta_1$). As you can see, the intercept is $\beta_0 = -0.8690$ and the slope is $\beta_1 = -1.0769$. > ## Run the model > summary(glm.mod <- glm(y ~ x, family = "binomial")) [...] Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -0.8690 0.3304 -2.630 0.00854 ** x -1.0769 0.5220 -2.063 0.03910 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) [...] Third, R, like most statistical packages, can compute the fitted values, i.e. the probabilities. I will use these values as reference. > ## Save the fitted values > glm.fitted <- fitted(glm.mod) Fourth, this step directly refers to your question: We have the raw data (here: $x$) and we have the coefficients ($\beta_0$ and $\beta_1$). Now, let's compute the logits and save these fitted values in glm.rdcm: > ## "Raw data + coefficients" method (RDCM) ## logit = -0.8690 + (-1.0769) * x glm.rdcm <- -0.8690 + (-1.0769)*x The final step is a comparison of the fitted values based on R's fitted-function (glm.fitted) and my "hand-made" approach (logit2prop.glm.rdcm). 
My own function logit2prop (see first step) converts logits to probabilities: > ## Compare fitted values and RDCM > df <- data.frame(glm.fitted, logit2prop(glm.rdcm)) > df[10:25,] > df[10:25,] glm.fitted logit2prop.glm.rdcm. 10 0.1250000 0.1250011 11 0.2954545 0.2954624 12 0.1250000 0.1250011 13 0.2954545 0.2954624 14 0.2954545 0.2954624 15 0.1250000 0.1250011 16 0.1250000 0.1250011 17 0.1250000 0.1250011 18 0.2954545 0.2954624 19 0.1250000 0.1250011 20 0.1250000 0.1250011 21 0.1250000 0.1250011 22 0.1250000 0.1250011 23 0.1250000 0.1250011 24 0.1250000 0.1250011 25 0.2954545 0.2954624
16,163
How can I use logistic regression betas + raw data to get probabilities
The link function of a logistic model is $f: x \mapsto \log \tfrac{x}{1 - x}$. Its inverse is $g: x \mapsto \tfrac{\exp x}{1 + \exp x}$. In a logistic model, the left-hand side is the logit of $\pi$, the probability of success: $f(\pi) = \beta_0 + x_1 \beta_1 + x_2 \beta_2 + \ldots$ Therefore, if you want $\pi$ you need to evaluate $g$ at the right-hand side: $\pi = g( \beta_0 + x_1 \beta_1 + x_2 \beta_2 + \ldots)$.
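In code, the inverse link $g$ is a one-liner. Using the coefficients from the R example above ($\beta_0 = -0.8690$, $\beta_1 = -1.0769$) with $x = 1$:

```python
import math

def inv_logit(eta):
    """g(eta) = exp(eta) / (1 + exp(eta)): convert a logit to a probability."""
    return math.exp(eta) / (1 + math.exp(eta))

beta0, beta1, x = -0.8690, -1.0769, 1.0
eta = beta0 + beta1 * x  # the linear predictor, a logit
pi = inv_logit(eta)      # the probability of success
print(round(pi, 4))      # → 0.125, matching the fitted value shown earlier
```
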
16,164
Checking ANOVA assumptions
In applied settings it is typically more important to know whether any violation of assumptions is problematic for inference. Assumption tests based on significance tests are rarely of interest in large samples, because most inferential tests are robust to mild violations of assumptions. One of the nice features of graphical assessments of assumptions is that they focus attention on the degree of violation and not the statistical significance of any violation. However, it's also possible to focus on numeric summaries of your data which quantify the degree of violation of assumptions and not the statistical significance (e.g., skewness values, kurtosis values, ratio of largest to smallest group variances, etc.). You can also get standard errors or confidence intervals on these values, which will get smaller with larger samples. This perspective is consistent with the general idea that statistical significance is not equivalent to practical importance.
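The numeric summaries mentioned above can be computed directly. This sketch (Python, with made-up example data) quantifies skewness and the ratio of group variances without any significance test:

```python
from statistics import mean, pstdev, pvariance

def skewness(xs):
    """Third standardized moment: 0 for symmetric data, positive if right-skewed."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

# two hypothetical groups: g1 is tight and symmetric, g2 has a long right tail
g1 = [4.1, 4.5, 4.8, 5.0, 5.2, 5.4, 5.9]
g2 = [3.0, 3.2, 3.9, 4.4, 6.1, 7.8, 9.5]

skew = skewness(g2)                        # positive: right-skewed
var_ratio = pvariance(g2) / pvariance(g1)  # degree of heteroscedasticity
```

Reporting these values (ideally with interval estimates) focuses attention on the degree of violation rather than on whether some test of the assumption happens to reach significance.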
Checking ANOVA assumptions
A couple of graphs will usually be much more enlightening than the p value from a test of normality or homoskedasticity. Plot observed dependent variables against independent variables. Plot observations against fits. Plot residuals against independent variables. Investigate anything that looks strange on these plots. If something does not look strange, I would not worry about a significant test of an assumption.
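The quantities that go on those plots are easy to compute by hand. This rough Python sketch (made-up numbers, not any particular dataset) produces the fits and residuals one would plot against the independent variable:

```python
from statistics import mean

# a simple least-squares line fit to made-up data
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]

mx, my = mean(x), mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

fits = [intercept + slope * xi for xi in x]
residuals = [yi - fi for yi, fi in zip(y, fits)]
# residuals from a least-squares fit always average to zero; what matters
# visually is whether they show curvature or fanning against x or the fits
```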
Checking ANOVA assumptions
There are some very good web guides to checking the assumptions of ANOVA & what to do if they fail. Here is one. This is another. Essentially your eye is the best judge, so do some exploratory data analysis. That means plot the data - histograms and box plots are a good way to assess normality and homoscedasticity. And remember ANOVA is robust to minor violations of these.
Checking ANOVA assumptions
QQ Plots are pretty good ways to detect non-normality. For homoscedasticity, try Levene's test or a Brown-Forsythe test. Both are similar, though BF is a little more robust. They are less sensitive to non-normality than Bartlett's test, but even still, I've found them not to be the most reliable with small sample sizes.

Q-Q plot
Brown-Forsythe test
Levene's test
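Both tests amount to a one-way ANOVA on absolute deviations from a group center (the median for Brown-Forsythe, the mean for Levene). A rough Python sketch of the Brown-Forsythe statistic, on hypothetical data (in practice you would use a statistics package):

```python
from statistics import median, mean

def brown_forsythe_F(*groups):
    """Brown-Forsythe statistic: a one-way ANOVA F computed on absolute
    deviations from each group's median (Levene's test uses the mean)."""
    z = [[abs(x - median(g)) for x in g] for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    grand = mean([x for g in z for x in g])
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in z)      # between groups
    ssw = sum((x - mean(g)) ** 2 for g in z for x in g)        # within groups
    return (ssb / (k - 1)) / (ssw / (n - k))

# hypothetical groups: a and b have similar spread, c is clearly wider
a = [5.0, 5.1, 4.9, 5.2, 4.8, 5.0]
b = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1]
c = [3.0, 7.0, 4.0, 6.5, 2.5, 7.5]

print(brown_forsythe_F(a, b))     # small F: spreads look alike
print(brown_forsythe_F(a, b, c))  # much larger F: c is heteroscedastic
```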
Checking ANOVA assumptions
I agree with others that significance testing for assumptions is problematic. I like to deal with this problem by making a single plot that exposes all the model assumptions needed to have accurate type I error and low type II error (high power). For the case of ANOVA with 2 groups (two-sample t-test) this plot is the normal inverse of the empirical cumulative distribution function (ECDF) stratified by group (see the QQ plot comment in an earlier post). For the t-test to perform well, the two curves need to be parallel straight lines. For the $k$-sample problem of ANOVA in general you would have $k$ parallel straight lines.

Semi-parametric (rank) methods such as the Wilcoxon and Kruskal-Wallis tests make far fewer assumptions. The logit of the ECDF should be parallel for Wilcoxon-Kruskal-Wallis tests to have maximum power (type I error is never a problem for them). Linearity is not required. Rank tests make assumptions about how distributions of different groups are related to each other, but do not make assumptions about the shape of any one distribution.
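One way to turn the "straight line" criterion into a number for a single group is to check how linear the probit-transformed ECDF is against the sorted values. A sketch in Python with simulated data (NormalDist().inv_cdf is the probit; the 0.5 offset keeps the ECDF away from 0 and 1):

```python
import random
from statistics import NormalDist, mean

def pearson_r(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# for a normal sample, probit(ECDF) vs. the sorted values is nearly a
# straight line, so their correlation should be very close to 1
random.seed(1)
sample = sorted(random.gauss(50, 10) for _ in range(200))

n = len(sample)
probit = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

r = pearson_r(sample, probit)  # close to 1 for normally distributed data
```

For the $k$-group version described above you would compute one such curve per group and look for parallelism, which this single-group check does not capture.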
Reference Request: Generalized Linear Models
Gelman, Andrew, and Jennifer Hill. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press, 2007, is not about GLMs per se, but also covers that and has a nice mix of theory, hands-on-advice, implementation in R, and exercises (and, when you websearch for it, you might find an ebook version of it!). Not a textbook, but freely available is this graduate statistics course from the Harvard Government Department, which also covers the most common GLMs. The section videos cover implementation in R. The textbook is King, Gary. Unifying political methodology: The likelihood theory of statistical inference. University of Michigan Press, 1989.
Reference Request: Generalized Linear Models
Disclaimer: Highly subjective personal opinion follows...

For theory and applications I can't recommend Generalized Linear Models and Extensions by Hardin and Hilbe too highly. It uses SPSS and Stata, both of which I never use and know nothing about, but it covers the theory and has a very rich set of examples. If I had to choose one book to start with, it would be this one.

A more theory-focused book is Generalized, Linear, and Mixed Models by McCulloch, Searle, and Neuhaus. This has fewer examples than Hardin and Hilbe but goes further into random effects for both the linear model and the GLM. This is my favorite GLM book, because it connects a lot of things together, but if you have no interest in random effects it may be overkill.

What I would call a canonical reference for GLMs is Generalized Linear Models by McCullagh and Nelder. It's a little older title but I enjoyed it very much.

Generalized Linear Models with Applications in Engineering and the Sciences by Myers, Montgomery, Vining, and Robinson spends a little more time on the binary/Poisson GLMs and also has interesting examples. The new edition has examples in a few languages, including R.

I picked up Faraway's Extending the Linear Model with R: Generalized Linear, Mixed Effects and Nonparametric Regression Models a while back, and it has been very useful for helping me do things in R, though it's not a good "teach yourself GLM" book. But it may be a good companion to some of the other books out there.
Reference Request: Generalized Linear Models
I really like Frank Harrell's Regression Modeling Strategies.
Reference Request: Generalized Linear Models
The text by Dobson and Barnett http://www.amazon.com/Introduction-Generalized-Edition-Chapman-Statistical/dp/1584889500 is I think aimed in exactly the direction you ask. It does a good job of balancing technical detail and friendly style.
Reference Request: Generalized Linear Models
This one helped me out a lot: Springer Linear mixed effect models using R by A. Galecki and T. Burzykowski. http://www.springer.com/statistics/statistical+theory+and+methods/book/978-1-4614-3899-1
Reference Request: Generalized Linear Models
Introduction to Statistical Learning with Applications in R was a really easy-to-follow introductory text that covers GLMs and, as the title suggests, comes with problem sets and example code in R. I learned a lot from going through that book. If you feel comfortable with linear algebra, Elements of Statistical Learning covers that same material in more detail, and many other topics as well, but it doesn't have the same kind of easy-to-follow, tutorial-style R examples in the chapters.
Reference Request: Generalized Linear Models
The lecture notes for German Rodriguez' Princeton course on GLMs are a thorough introduction, packed with examples of the more common types, & explaining the relationships between them. The more theoretical aspects are separated in two appendices.
Reference Request: Generalized Linear Models
Alain Zuur's book "A beginners guide to GLM and GLMM with R" gives some nice examples for GLMs and GLMMs in R.
Reference Request: Generalized Linear Models
Here is a good write-up on generalized linear regression. The code is done in R and it explains how they work. CRAN also has a package, glmnet, which does this for you but can be a bit unwieldy to use initially. But once you get the hang of it, it's quite flexible. Here is a good write-up on glmnet. Hope that helps.
Multiple linear regression for hypothesis testing
Here is a simple example. I don't know if you are familiar with R, but hopefully the code is sufficiently self-explanatory.

set.seed(9)  # this makes the example reproducible
N = 36
# the following generates 3 variables:
x1 = rep(seq(from=11, to=13), each=12)
x2 = rep(rep(seq(from=90, to=150, by=20), each=3), times=3)
x3 = rep(seq(from=6, to=18, by=6), times=12)

cbind(x1, x2, x3)[1:7,]  # 1st 7 cases, just to see the pattern
     x1  x2 x3
[1,] 11  90  6
[2,] 11  90 12
[3,] 11  90 18
[4,] 11 110  6
[5,] 11 110 12
[6,] 11 110 18
[7,] 11 130  6

# the following is the true data generating process, note that y is a function of
# x1 & x2, but not x3, note also that x1 is designed above w/ a restricted range,
# & that x2 tends to have less influence on the response variable than x1:
y = 15 + 2*x1 + .2*x2 + rnorm(N, mean=0, sd=10)

reg.Model = lm(y~x1+x2+x3)  # fits a regression model to these data

Now, let's see what this looks like:

. . .
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  -1.76232   27.18170  -0.065  0.94871
x1            3.11683    2.09795   1.486  0.14716
x2            0.21214    0.07661   2.769  0.00927 **
x3            0.17748    0.34966   0.508  0.61524
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
. . .
F-statistic: 3.378 on 3 and 32 DF,  p-value: 0.03016

We can focus on the "Coefficients" section of the output. Each parameter estimated by the model gets its own row. The actual estimate itself is listed in the first column. The second column lists the Standard Errors of the estimates, that is, an estimate of how much estimates would 'bounce around' from sample to sample, if we were to repeat this process over and over and over again. More specifically, it is an estimate of the standard deviation of the sampling distribution of the estimate. If we divide each parameter estimate by its SE, we get a t-score, which is listed in the third column; this is used for hypothesis testing, specifically to test whether the parameter estimate is 'significantly' different from 0. The last column is the p-value associated with that t-score. It is the probability of finding an estimated value that far or further from 0, if the null hypothesis were true. Note that if the null hypothesis is not true, it is not clear that this value is telling us anything meaningful at all.

If we look back and forth between the Coefficients table and the true data generating process above, we can see a few interesting things. The intercept is estimated to be -1.8 and its SE is 27, whereas the true value is 15. Because the associated p-value is .95, it would not be considered 'significantly different' from 0 (a type II error), but it is nonetheless within one SE of the true value. There is thus nothing terribly extreme about this estimate from the perspective of the true value and the amount it ought to fluctuate; we simply have insufficient power to differentiate it from 0. The same story holds, more or less, for x1. Data analysts would typically say that it is not even 'marginally significant' because its p-value is >.10; however, this is another type II error. The estimate for x2 is quite accurate ($.21214\approx.2$), and the p-value is 'highly significant', a correct decision. x3 also could not be differentiated from 0, p=.62, another correct decision (x3 does not show up in the true data generating process above). Interestingly, the p-value is greater than that for x1, but less than that for the intercept, both of which are type II errors.

Finally, if we look below the Coefficients table we see the F-value for the model, which is a simultaneous test. This test checks to see if the model as a whole predicts the response variable better than chance alone. Another way to say this is whether or not all the estimates should be considered unable to be differentiated from 0. The results of this test suggest that at least some of the parameter estimates are not equal to 0, another correct decision. Since there are 4 tests above, we would have no protection from the problem of multiple comparisons without this. (Bear in mind that because p-values are random variables--whether something is significant would vary from experiment to experiment, if the experiment were re-run--it is possible for these to be inconsistent with each other. This is discussed on CV here: Significance of coefficients in multiple regression: significant t-test vs. non-significant F-statistic, and the opposite situation here: How can a regression be significant yet all predictors be non-significant, & here: F and t statistics in a regression.) Perhaps curiously, there are no type I errors in this example. At any rate, all 5 of the tests discussed in this paragraph are hypothesis tests.

From your comment, I gather you may also wonder about how to determine if one explanatory variable is more important than another. This is a very common question, but it is quite tricky. Imagine wanting to predict the potential for success in a sport based on an athlete's height and weight, and wondering which is more important. A common strategy is to look to see which estimated coefficient is larger. However, these estimates are specific to the units that were used: for example, the coefficient for weight will change depending on whether pounds or kilograms are used. In addition, it is not remotely clear how to equate / compare pounds and inches, or kilograms and centimeters. One strategy people employ is to standardize (i.e., turn into z-scores) their data first. Then these dimensions are in common units (viz., standard deviations), and the coefficients are similar to r-scores. Moreover, it is possible to test if one r-score is larger than another. Unfortunately, this does not get you out of the woods; unless the true r is exactly 0, the estimated r is driven in large part by the range of covariate values that are used. (I don't know how easy it will be to recognize, but @whuber's excellent answer here: Is $R^2$ useful or dangerous, illustrates this point; to see it, just think about how $r=\sqrt{r^2}$.) Thus, the best that can ever be said is that variability in one explanatory variable within a specified range is more important to determining the level of the response than variability in another explanatory variable within another specified range.
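The estimate-over-SE arithmetic behind the t column can be spelled out directly. A small sketch (Python rather than R, simply copying the four estimates and standard errors from the Coefficients table above):

```python
# estimates and standard errors copied from the Coefficients table
estimates  = {"(Intercept)": -1.76232, "x1": 3.11683, "x2": 0.21214, "x3": 0.17748}
std_errors = {"(Intercept)": 27.18170, "x1": 2.09795, "x2": 0.07661, "x3": 0.34966}

# t-score = estimate / standard error, matching the third column of the output
t_scores = {name: est / std_errors[name] for name, est in estimates.items()}

for name, t in t_scores.items():
    print(f"{name:12s} t = {t:7.3f}")
```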
Multiple linear regression for hypothesis testing
Here is a simple example. I don't know if you are familiar with R, but hopefully the code is sufficiently self-explanatory. set.seed(9) # this makes the example reproducible N = 36 # the
Multiple linear regression for hypothesis testing Here is a simple example. I don't know if you are familiar with R, but hopefully the code is sufficiently self-explanatory. set.seed(9) # this makes the example reproducible N = 36 # the following generates 3 variables: x1 = rep(seq(from=11, to=13), each=12) x2 = rep(rep(seq(from=90, to=150, by=20), each=3 ), times=3) x3 = rep(seq(from=6, to=18, by=6 ), times=12) cbind(x1, x2, x3)[1:7,] # 1st 7 cases, just to see the pattern x1 x2 x3 [1,] 11 90 6 [2,] 11 90 12 [3,] 11 90 18 [4,] 11 110 6 [5,] 11 110 12 [6,] 11 110 18 [7,] 11 130 6 # the following is the true data generating process, note that y is a function of # x1 & x2, but not x3, note also that x1 is designed above w/ a restricted range, # & that x2 tends to have less influence on the response variable than x1: y = 15 + 2*x1 + .2*x2 + rnorm(N, mean=0, sd=10) reg.Model = lm(y~x1+x2+x3) # fits a regression model to these data Now, lets see what this looks like: . . . Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -1.76232 27.18170 -0.065 0.94871 x1 3.11683 2.09795 1.486 0.14716 x2 0.21214 0.07661 2.769 0.00927 ** x3 0.17748 0.34966 0.508 0.61524 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 . . . F-statistic: 3.378 on 3 and 32 DF, p-value: 0.03016 We can focus on the "Coefficients" section of the output. Each parameter estimated by the model gets its own row. The actual estimate itself is listed in the first column. The second column lists the Standard Errors of the estimates, that is, an estimate of how much estimates would 'bounce around' from sample to sample, if we were to repeat this process over and over and over again. More specifically, it is an estimate of the standard deviation of the sampling distribution of the estimate. 
If we divide each parameter estimate by its SE, we get a t-score, which is listed in the third column; this is used for hypothesis testing, specifically to test whether the parameter estimate is 'significantly' different from 0. The last column is the p-value associated with that t-score. It is the probability of finding an estimated value that far or further from 0, if the null hypothesis were true. Note that if the null hypothesis is not true, it is not clear that this value is telling us anything meaningful at all. If we look back and forth between the Coefficients table and the true data generating process above, we can see a few interesting things. The intercept is estimated to be -1.8 and its SE is 27, whereas the true value is 15. Because the associated p-value is .95, it would not be considered 'significantly different' from 0 (a type II error), but it is nonetheless within one SE of the true value. There is thus nothing terribly extreme about this estimate from the perspective of the true value and the amount it ought to fluctuate; we simply have insufficient power to differentiate it from 0. The same story holds, more or less, for x1. Data analysts would typically say that it is not even 'marginally significant' because its p-value is >.10, however, this is another type II error. The estimate for x2 is quite accurate $.21214\approx.2$, and the p-value is 'highly significant', a correct decision. x3 also could not be differentiated from 0, p=.62, another correct decision (x3 does not show up in the true data generating process above). Interestingly, the p-value is greater than that for x1, but less than that for the intercept, both of which are type II errors. Finally, if we look below the Coefficients table we see the F-value for the model, which is a simultaneous test. This test checks to see if the model as a whole predicts the response variable better than chance alone. 
Another way to say this is to ask whether or not all the estimates should be considered unable to be differentiated from 0. The results of this test suggest that at least some of the parameter estimates are not equal to 0, another correct decision. Since there are 4 tests above, we would have no protection from the problem of multiple comparisons without this. (Bear in mind that because p-values are random variables--whether something is significant would vary from experiment to experiment, if the experiment were re-run--it is possible for these to be inconsistent with each other. This is discussed on CV here: Significance of coefficients in multiple regression: significant t-test vs. non-significant F-statistic, and the opposite situation here: How can a regression be significant yet all predictors be non-significant, & here: F and t statistics in a regression.) Perhaps curiously, there are no type I errors in this example. At any rate, all 5 of the tests discussed in this paragraph are hypothesis tests.

From your comment, I gather you may also wonder about how to determine if one explanatory variable is more important than another. This is a very common question, but is quite tricky. Imagine wanting to predict the potential for success in a sport based on an athlete's height and weight, and wondering which is more important. A common strategy is to look to see which estimated coefficient is larger. However, these estimates are specific to the units that were used: for example, the coefficient for weight will change depending on whether pounds or kilograms are used. In addition, it is not remotely clear how to equate / compare pounds and inches, or kilograms and centimeters. One strategy people employ is to standardize (i.e., turn into z-scores) their data first. Then these dimensions are in common units (viz., standard deviations), and the coefficients are similar to r-scores. Moreover, it is possible to test if one r-score is larger than another.
Unfortunately, this does not get you out of the woods; unless the true r is exactly 0, the estimated r is driven in large part by the range of covariate values that are used. (I don't know how easy it will be to recognize, but @whuber's excellent answer here: Is $R^2$ useful or dangerous, illustrates this point; to see it, just think about how $r=\sqrt{r^2}$.) Thus, the best that can ever be said is that variability in one explanatory variable within a specified range is more important to determining the level of the response than variability in another explanatory variable within another specified range.
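The standardization strategy described above can be sketched in R. The data and variable names here are simulated purely for illustration; after scale(), each slope is in standard-deviation units and the magnitudes become comparable:

```r
set.seed(1)
height <- rnorm(100, mean = 70, sd = 4)    # inches
weight <- rnorm(100, mean = 170, sd = 25)  # pounds
success <- .5*height + .1*weight + rnorm(100, sd = 5)

coef(lm(success ~ height + weight))        # slopes depend on the units used
coef(lm(scale(success) ~ scale(height) + scale(weight)))  # comparable, r-like slopes
```

As the caveat above warns, though, even these standardized slopes still depend on the range of covariate values that happened to be sampled.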
16,179
Multiple linear regression for hypothesis testing
The essential test in regression models is the Full-Reduced test. This is where you are comparing 2 regression models: the Full model has all the terms in it and the Reduced model has a subset of those terms (the Reduced model needs to be nested in the Full model). The test then tests the null hypothesis that the reduced model fits just as well as the full model and any difference is due to chance.

Common printouts from statistical software include an overall F test; this is just the Full-Reduced test where the reduced model is an intercept-only model. They also often print a p-value for each individual predictor; this is just a series of Full-Reduced model tests, in each of which the reduced model does not include that specific term.

There are many ways to use these tests to answer questions of interest. In fact, pretty much every test taught in an introductory stats course can be computed using regression models and the Full-Reduced test, and the results will be identical in many cases and a very close approximation in the few others.
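As a sketch of how this looks in R (the data are simulated here just for illustration): anova() applied to two nested lm fits carries out exactly this Full-Reduced F test.

```r
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- 1 + 2*d$x1 + rnorm(50)        # x2 is truly irrelevant

full    <- lm(y ~ x1 + x2, data = d)
reduced <- lm(y ~ x1, data = d)      # nested: drops x2

anova(reduced, full)                  # F test of H0: the reduced model suffices
anova(lm(y ~ 1, data = d), full)      # the "overall" F test that summary(full) prints
```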
16,180
Good introductions to time series (with R)
This is a very large subject and there are many good books that cover it. These are both good, but Cryer is my favorite of the two:

Cryer. "Time Series Analysis: With Applications in R" is a classic on the subject, updated to include R code.
Shumway and Stoffer. "Time Series Analysis and Its Applications: With R Examples".

A good free resource is Zoonekynd's ebook, especially the time series section. My first suggestion for seeing the R packages would be the free ebook "A Discussion of Time Series Objects for R in Finance" from Rmetrics. It gives lots of examples comparing the different time series packages and discusses some of the considerations, but it doesn't provide any theory.

Eric Zivot's "Modeling Financial Time Series with S-PLUS" and Ruey Tsay's "Analysis of Financial Time Series" (companion data are available in the FinTS package on CRAN) are directed at financial time series, but both provide good general references. I strongly recommend looking at Ruey Tsay's homepage because it covers all these topics and provides the necessary R code. In particular, look at the "Analysis of Financial Time Series" and "Multivariate Time Series Analysis" courses.
16,181
Good introductions to time series (with R)
Time Series Analysis and Its Applications: With R Examples by Robert H. Shumway and David S. Stoffer would be a great resource for the subject, but you may also find a lot of useful blog entries (e.g. my favorite one: learnr) and tutorials (e.g. from the linked homepage) freely available on the Internet. On David Stoffer's homepage (linked above) you can find the example datasets used in the book's chapters, along with others from the first and second editions, and even some sample chapters.
16,182
Good introductions to time series (with R)
Very late answer from me, but I have found Introductory Time Series with R by Cowpertwait and Metcalfe to be really useful for transitioning from BSc-level analysis to MSc-level work and professional practice. Yes, it is a little basic, but there are good explanations and examples and some useful code. EDIT: I should add that I've also found Cryer and Chan hugely useful too, in line with the first answer.
16,183
What is the role of temperature in Softmax?
The temperature is a way to control the entropy of a distribution, while preserving the relative ranks of each event. If two events $i$ and $j$ have probabilities $p_i$ and $p_j$ in your softmax, then adjusting the temperature preserves this relationship, as long as the temperature is finite: $$p_i > p_j \Longleftrightarrow p'_i > p'_j$$ Heating a distribution increases the entropy, bringing it closer to a uniform distribution. (Try it for yourself: construct a simple distribution like $\mathbf{y}=(3, 4, 5)$, then divide all $y_i$ values by $T=1000000$ and see how the distribution changes.) Cooling it decreases the entropy, accentuating the common events. I’ll put that another way. It’s common to talk about the inverse temperature $\beta=1/T$. If $\beta = 0$, then you've attained a uniform distribution. As $\beta \to \infty$, you reach a trivial distribution with all mass concentrated on the highest-probability class. This is why softmax is considered a soft relaxation of argmax.
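A minimal R sketch of the experiment suggested above, using the logits $\mathbf{y}=(3,4,5)$; the softmax helper is written out explicitly here, it is not a built-in:

```r
softmax <- function(y, temp = 1) {
  z <- exp(y / temp)   # divide the logits by the temperature, then exponentiate
  z / sum(z)           # normalize to a probability distribution
}

y <- c(3, 4, 5)
round(softmax(y, temp = 1), 3)     # moderate concentration on the largest logit
round(softmax(y, temp = 1e6), 3)   # heated: essentially uniform
round(softmax(y, temp = 0.1), 3)   # cooled: nearly all mass on the max (soft argmax)
```

Note that all three outputs preserve the ordering of the events; only the entropy changes.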
16,184
What is the role of temperature in Softmax?
Temperature modifies the output distribution of the mapping. For example:

low temperature softmax probs: [0.01, 0.01, 0.98]
high temperature softmax probs: [0.2, 0.2, 0.6]

Temperature acts like a bias on the mapping, in effect adding noise to the output. The higher the temperature, the less the output resembles the input distribution. Think of it vaguely as "blurring" your output.
16,185
Is this possible that $Cor(X, Y)=0.99$, $Cor(Y, Z)=0.99$ but $Cor(X, Z)=0$?
The correlation matrix needs to be positive semi-definite with non-negative eigenvalues. The eigenvalues of the correlation matrix are the solutions of $$ \left| \begin{matrix} 1-\lambda & \rho & \rho \\ \rho & 1-\lambda & 0 \\ \rho & 0 & 1-\lambda \end{matrix} \right| =(1-\lambda)\big((1-\lambda)^2-2\rho^2)\big) =0 $$ so the eigenvalues are $1$ and $1\pm\sqrt{2}\rho$. These are all non-negative for $$ -\frac1{\sqrt{2}} \le \rho \le \frac1{\sqrt{2}}. $$
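This is easy to check numerically in R for the correlation from the question ($\rho=0.99$, well outside $\pm 1/\sqrt{2}$):

```r
rho <- 0.99
R <- matrix(c(1,   rho, rho,
              rho, 1,   0,
              rho, 0,   1), nrow = 3, byrow = TRUE)
eigen(R)$values  # 1 + sqrt(2)*rho, 1, and 1 - sqrt(2)*rho; the last is negative,
                 # so R cannot be a valid correlation matrix
```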
16,186
Is this possible that $Cor(X, Y)=0.99$, $Cor(Y, Z)=0.99$ but $Cor(X, Z)=0$?
A more intuitive perspective (an example) to complement @Jarle Tufto's +1 answer: What you are asking is whether something like this variance-covariance matrix is possible:

$\bf{\Sigma} = \begin{matrix} & X & Y & Z \\ X & 1 & 0.9 & 0 \\ Y & 0.9 & 1 & 0.9 \\ Z & 0 & 0.9 & 1\\ \end{matrix}$

This matrix is not positive semi-definite. In fact, it is indefinite, since its determinant is negative. For example, a multivariate normal vector with this var-cov matrix cannot exist, since its PDF requires the determinant of $\Sigma$:

$PDF_{Gauss}(x) =(2\pi)^{-0.5k}\det(\Sigma)^{-0.5}e^{-0.5(x-\mu)^T\Sigma^{-1}(x-\mu)}$

If $\det(\Sigma)$ is negative, the normalizing factor $\det(\Sigma)^{-0.5}$ is not even a real number, so no valid density exists. For this not to happen, the condition mentioned by @Jarle Tufto needs to be fulfilled.
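The indefiniteness of the matrix above is easy to confirm in R:

```r
rho <- 0.9
Sigma <- matrix(c(1,   rho, 0,
                  rho, 1,   rho,
                  0,   rho, 1), nrow = 3, byrow = TRUE)
det(Sigma)  # 1*(1 - 0.81) - 0.9*0.9 = -0.62, a negative determinant,
            # so Sigma cannot be a covariance matrix
```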
16,187
Is this possible that $Cor(X, Y)=0.99$, $Cor(Y, Z)=0.99$ but $Cor(X, Z)=0$?
If you performed linear regression on $Y$, you would get an $R^2$ value of at most 1. In your problem setting: Performing linear regression on $Y$ with $X$ gets $R = 0.99$. Performing linear regression on $Y$ with $Z$ gets $R = 0.99$. $X$ and $Z$ are not correlated, so they would both independently contribute to the $R^2$ value of a regression on $Y$. Combining these, when you perform linear regression on $Y$ with both $X$ and $Z$, you get $R^2 = (0.99)^2 + (0.99)^2 > 1$, which is impossible. This should also provide some idea of the bounds on these correlation values.
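A quick R simulation illustrates the additivity claim for uncorrelated predictors (the data here are made up for illustration); with both squared correlations near $0.99^2$, their sum would exceed 1, which is impossible:

```r
set.seed(1)
n <- 1e5
x <- rnorm(n); z <- rnorm(n)      # generated independently, so cor(x, z) ~ 0
y <- x + z + rnorm(n, sd = 0.5)

cor(y, x)^2 + cor(y, z)^2         # approximately equals...
summary(lm(y ~ x + z))$r.squared  # ...the R^2 of the joint regression
```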
16,188
Is this possible that $Cor(X, Y)=0.99$, $Cor(Y, Z)=0.99$ but $Cor(X, Z)=0$?
I posted this previously on Math StackExchange, but will reiterate here. If $\rho_{AB} = \text{Corr}(A, B)$, and similarly defined for $\rho_{BC}$ and $\rho_{AC}$, we have the inequality. \begin{align*} \rho_{AC} \ge \max\{2(\rho_{AB} + \rho_{BC}) - 3, 2\rho_{AB}\rho_{BC} - 1\} \end{align*} Proof. Some notation. I let $\sigma_{AB} = \text{Cov}(A,B)$ and $\sigma_A^2 = \text{Var}(A)$. Let's first prove $\rho_{AC} \ge 2(\rho_{AB} + \rho_{BC}) - 3$. Recall the identity \begin{align*} 2 E[X^2] + 2E[Y^2] = E[(X+Y)^2] + E[(X-Y)^2] \end{align*} hence $2E[Y^2] \le E[(X+Y)^2] + E[(X-Y)^2]$. Set \begin{align*} X = \widetilde{B} - (\widetilde{A} + \widetilde{C})/2 \quad \text{and} \quad Y = (\widetilde{A} - \widetilde{C})/2 \end{align*} where $\widetilde{C} = (C - E[C])/\sigma_C$, the normalized random variable, and similarly for $\widetilde{A}, \widetilde{B}$. Upon substitution and simplification, we get \begin{align*} \frac{1}{2}(2 - 2\rho_{AC}) \le (2 - 2\rho_{AB}) + (2 - 2\rho_{BC}) \iff \rho_{AC} \ge 2(\rho_{AB} + \rho_{BC}) - 3 \end{align*} To prove $\rho_{AC} \ge 2\rho_{AB}\rho_{BC} - 1$, consider the random variable \begin{align*} W = 2 \frac{\sigma_{AB}}{\sigma_B^2}B - A \end{align*} We can verify $\sigma_W^2 = \sigma_A^2$, and hence $\sigma_{WC} \le \sigma_{W}\sigma_{C} = \sigma_ A \sigma_C$ by the Cauchy-Schwarz inequality. On the other hand, you may compute \begin{align*} \sigma_{WC} = 2 \frac{\sigma_{AB}}{\sigma_B^2}\sigma_{BC} - \sigma_{AC} \end{align*} Reorganizing all this, we prove $\rho_{AC} \ge 2\rho_{AB}\rho_{BC} - 1$. In your specific example, with $\rho_{AB} = \rho_{BC} = 0.99$, then no matter the construction of $A, B, C$, we must have $\rho_{AC} \ge 0.9602$.
16,189
categorizing a variable turns it from insignificant to significant
One possible explanation would be nonlinearities in the relationship between your outcome and the predictor. Here is a little example. We use a predictor that is uniform on $[-1,1]$. The outcome, however, does not linearly depend on the predictor, but on the square of the predictor: TRUE is more likely for both $x\approx-1$ and $x\approx 1$, but less likely for $x\approx 0$. In this case, a linear model will come up insignificant, but cutting the predictor into intervals makes it significant.

> set.seed(1)
> nn <- 1e3
> xx <- runif(nn,-1,1)
> yy <- runif(nn)<1/(1+exp(-xx^2))
>
> library(lmtest)
>
> model_0 <- glm(yy~1,family="binomial")
> model_1 <- glm(yy~xx,family="binomial")
> lrtest(model_1,model_0)
Likelihood ratio test

Model 1: yy ~ xx
Model 2: yy ~ 1
  #Df  LogLik Df  Chisq Pr(>Chisq)
1   2 -676.72
2   1 -677.22 -1 0.9914     0.3194
>
> xx_cut <- cut(xx,c(-1,-0.3,0.3,1))
> model_2 <- glm(yy~xx_cut,family="binomial")
> lrtest(model_2,model_0)
Likelihood ratio test

Model 1: yy ~ xx_cut
Model 2: yy ~ 1
  #Df  LogLik Df  Chisq Pr(>Chisq)
1   3 -673.65
2   1 -677.22 -2 7.1362    0.02821 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

However, this does not mean that discretizing the predictor is the best approach. (It almost never is.) Much better to model the nonlinearity using splines or similar.
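For completeness, here is a sketch of the spline alternative mentioned in the last sentence, on the same simulated data; ns() comes from base R's splines package, and df = 3 is an arbitrary choice for illustration:

```r
library(splines)
library(lmtest)

set.seed(1)
nn <- 1e3
xx <- runif(nn, -1, 1)
yy <- runif(nn) < 1/(1 + exp(-xx^2))

model_0 <- glm(yy ~ 1, family = "binomial")
model_s <- glm(yy ~ ns(xx, df = 3), family = "binomial")  # natural cubic spline in xx
lrtest(model_s, model_0)  # tests the (possibly nonlinear) effect of xx
```

Unlike the cut() approach, the spline does not throw away the within-interval variation in the predictor.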
16,190
categorizing a variable turns it from insignificant to significant
One possible way is if the relationship is distinctly nonlinear. It's not possible to tell (given the lack of detail) whether this really explains what's going on. You can check for yourself, though. First, you could do an added-variable plot for the variable as itself, and you could also plot the fitted effects in the factor-version of the model. If the explanation is right, both plots should show a distinctly nonlinear pattern.
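As a hedged sketch of those two checks in R (the data are simulated with a deliberately nonlinear effect; avPlots() is from the car package):

```r
library(car)

set.seed(1)
x <- runif(500, -1, 1)
w <- rnorm(500)                           # a second, genuinely linear predictor
y <- x^2 + 0.5*w + rnorm(500, sd = 0.2)   # y depends on x only through x^2

fit <- lm(y ~ x + w)
avPlots(fit)                              # the panel for x shows a U-shape, not a line

fit_cut <- lm(y ~ cut(x, 4) + w)
termplot(fit_cut, se = TRUE)              # fitted effects of the factor version of x
```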
16,191
Why do we need to fit a k-nearest neighbors classifier?
On the conceptual level

Fitting a classifier means taking a data set as input, then outputting a classifier, which is chosen from a space of possible classifiers. In many cases, a classifier is identified--that is, distinguished from other possible classifiers--by a set of parameters. The parameters are typically chosen by solving an optimization problem or some other numerical procedure. But, in the case of knn, the classifier is identified by the training data itself. So, at an abstract level, fitting a knn classifier simply requires storing the training set.

On the implementation level

Evaluating a knn classifier on a new data point requires searching for its nearest neighbors in the training set, which can be an expensive operation when the training set is large. As RUser mentioned, there are various tricks to speed up this search, which typically work by creating various data structures based on the training set. The general idea is that some of the computational work needed to classify new points is actually common across points. So, this work can be done ahead of time and then re-used, rather than repeated for each new instance. A knn implementation using these tricks would do this work during the training phase. For example, scikit-learn can construct kd-trees or ball trees during the call to the fit() function.

Choosing $k$ and the distance metric

The number of neighbors $k$ and the distance metric are hyperparameters of knn classifiers. Performance can usually be improved by choosing them to suit the problem. But, the optimal settings aren't usually known ahead of time, and we must search for them during the training procedure. This search amounts to solving an optimization problem, and is similar to hyperparameter tuning for other methods.
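For contrast with scikit-learn, R's class::knn has no separate fit step at all: the training set is handed over at prediction time, which makes the "fitting = storing the training set" point concrete (toy data, made up for illustration):

```r
library(class)

set.seed(1)
train  <- matrix(rnorm(100 * 2), ncol = 2)
labels <- factor(rep(c("a", "b"), each = 50))
test   <- matrix(rnorm(10 * 2), ncol = 2)

knn(train, test, cl = labels, k = 5)  # the neighbor search happens in this one call
```

An implementation that precomputes a kd-tree simply moves part of this per-call work into a "fit" phase.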
Why do we need to fit a k-nearest neighbors classifier?
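The point that fitting a knn classifier amounts to storing the training set can be made concrete with a small sketch. This is a hand-rolled toy classifier (not scikit-learn's implementation), assuming Euclidean distance and majority voting:

```python
import math
from collections import Counter

class KNNClassifier:
    """Minimal k-nearest-neighbors classifier: fit() only stores the data."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # "Fitting" is just memorizing the training set.
        self.X_ = list(X)
        self.y_ = list(y)
        return self

    def predict_one(self, x):
        # All the distance work happens here, at prediction time:
        # sort training points by distance and take a majority vote over the k nearest.
        nearest = sorted((math.dist(x, xi), yi) for xi, yi in zip(self.X_, self.y_))
        votes = Counter(label for _, label in nearest[: self.k])
        return votes.most_common(1)[0][0]

X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
clf = KNNClassifier(k=3).fit(X, y)
print(clf.predict_one((0.5, 0.5)))  # "a"
print(clf.predict_one((5.5, 5.5)))  # "b"
```

A real implementation would replace the linear scan in predict_one with an index built during fit(), which is exactly the kd-tree/ball-tree point made above.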
16,192
Why do we need to fit a k-nearest neighbors classifier?
You can implement it in a lazy way, and it makes a decent exercise when discovering a language (see, for example, one of my blog posts). But you can also index the data to make the prediction much faster. If the feature space has dimension one, sorting the points by this feature lets you find the neighbours much faster (using, for example, binary search). In higher dimensions there is no natural generalization of sorting, but you can index the points using (for example) quadtrees. Looking at the source, you can see that various methods have been implemented in scikit-learn. And there is ongoing research that keeps improving these nearest-neighbour queries.
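The one-dimensional case the answer mentions can be sketched with the standard library: build a sorted index once at "fit" time, then answer each query with a binary search. The data here is made up for illustration.

```python
import bisect

# Training set: (feature value, label) pairs (made-up data).
train = [(9.1, "b"), (2.0, "a"), (4.7, "a"), (8.3, "b"), (1.2, "a")]
index = sorted(train)            # built once, at "fit" time
keys = [v for v, _ in index]

def nearest_label(x):
    # Binary search for the insertion point; the nearest neighbor must be
    # one of the two training points bracketing x.
    i = bisect.bisect_left(keys, x)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(keys)]
    j = min(candidates, key=lambda j: abs(keys[j] - x))
    return index[j][1]

print(nearest_label(2.1))  # "a" (closest point is 2.0)
print(nearest_label(8.9))  # "b" (closest point is 9.1)
```

Each query is O(log n) instead of the O(n) scan of a lazy implementation; quadtrees and kd-trees generalize this trade-off to higher dimensions.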
16,193
Why do we need to fit a k-nearest neighbors classifier?
While the points the other answerers made are certainly valid and interesting, I'd like to point out one more thing from a strictly software-engineering point of view: to be consistent with their API, sklearn's estimators should, among other things, have a fit method that takes one or two array-likes (depending on whether it's a supervised or unsupervised estimator) and a number of implementation-specific details (Source). So even if knn's fit method were to do absolutely nothing, it would likely still exist, because knn is an estimator and sklearn's developers, as well as the code they contribute, expect estimators to have a fit method.
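The API convention is easy to mimic: a hypothetical minimal estimator (not actual sklearn source) stores learned state in trailing-underscore attributes during fit and returns self so that calls chain.

```python
class MemorizingClassifier:
    """Hypothetical estimator following sklearn's fit/predict convention."""

    def fit(self, X, y):
        # Even a trivial fit() stores state (trailing-underscore attributes,
        # by sklearn convention) and returns self so calls can be chained.
        self.X_ = list(X)
        self.y_ = list(y)
        return self

    def predict(self, X):
        # Deliberately naive: always predict the most common training label.
        majority = max(set(self.y_), key=self.y_.count)
        return [majority for _ in X]

model = MemorizingClassifier().fit([[0], [1], [2]], ["a", "a", "b"])
print(model.predict([[9]]))  # ["a"]
```

Because every estimator exposes the same fit/predict surface, tools like pipelines and cross-validation can treat knn no differently from models whose fit really does optimize parameters.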
16,194
Why do we need to fit a k-nearest neighbors classifier?
The fit() function runs knn on the training set, but your question is best clarified by considering the predict() function, which is executed on the test-set data. predict() uses the training data to choose the nearest neighbors of each query case and classifies it into the class with the majority of votes. The accuracy of the classification is then assessed against the target variable in the test set.
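The last step described here, comparing predict() output against the test set's target column, is just a matching rate. A sketch with made-up labels:

```python
# Hypothetical targets and predictions (e.g. predictions from knn.predict(test_X)).
test_targets = ["a", "b", "a", "b", "b"]
predictions  = ["a", "b", "b", "b", "b"]

# Accuracy = fraction of test cases whose predicted class matches the target.
accuracy = sum(p == t for p, t in zip(predictions, test_targets)) / len(test_targets)
print(accuracy)  # 0.8
```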
16,195
Why is correlation not very useful when one of the variables is categorical?
Correlation is the standardized covariance, i.e. the covariance of $x$ and $y$ divided by the standard deviations of $x$ and $y$. Let me illustrate that.

Loosely speaking, statistics can be summarized as fitting models to data and assessing how well the model describes those data points (Outcome = Model + Error). One way to do that is to calculate the sums of deviances, or residuals (res), from the model:

$res= \sum(x_{i}-\bar{x})$

Many statistical calculations are based on this, incl. the correlation coefficient (see below). Here is an example dataset made in R (the residuals are indicated as red lines and their values added next to them):

X <- c(8,9,10,13,15)
Y <- c(5,4,4,6,8)

By looking at each data point individually and subtracting its value from the model (e.g. the mean; in this case X=11 and Y=5.4), one could assess the accuracy of a model. One could say the model over- or underestimated the actual value. However, when summing up all the deviances from the model, the total error tends to be zero: the values cancel each other out, because there are positive values (the model underestimates a particular data point) and negative values (the model overestimates a particular data point). To solve this problem, the sums of deviances are squared and now called sums of squares ($SS$):

$SS = \sum(x_i-\bar{x})(x_i-\bar{x}) = \sum(x_i-\bar{x})^2$

The sums of squares are a measure of deviation from the model (i.e. the mean or any other line fitted to a given dataset). They are not very helpful for interpreting the deviance from the model (and comparing it with other models), since they depend on the number of observations: the more observations, the higher the sums of squares. This can be taken care of by dividing the sums of squares by $n-1$. The resulting sample variance ($s^2$) becomes the "average error" between the mean and the observations, and is therefore a measure of how well the model fits (i.e. represents) the data:

$s^2 = \frac{SS}{n-1} = \frac{\sum(x_i-\bar{x})(x_i-\bar{x})}{n-1} = \frac{\sum(x_i-\bar{x})^2}{n-1}$

For convenience, the square root of the sample variance can be taken, which is known as the sample standard deviation:

$s=\sqrt{s^2}=\sqrt{\frac{SS}{n-1}}=\sqrt{\frac{\sum(x_i-\bar{x})^2}{n-1}}$

Now, the covariance assesses whether two variables are related to each other. A positive value indicates that as one variable deviates from the mean, the other variable deviates in the same direction.

$cov_{x,y}= \frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{n-1}$

By standardizing, we express the covariance per unit standard deviation, which is the Pearson correlation coefficient $r$. This allows comparing variables that were measured in different units. The correlation coefficient is a measure of the strength of a relationship, ranging from -1 (a perfect negative correlation) through 0 (no correlation) to +1 (a perfect positive correlation).

$r=\frac{cov_{x,y}}{s_x s_y} = \frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{(n-1) s_x s_y}$

In this case, the Pearson correlation coefficient is $r=0.87$, which can be considered a strong correlation (although this is also relative, depending on the field of study). To check this, here is another plot with X on the x-axis and Y on the y-axis:

So long story short, yes, your feeling is right, but I hope my answer can provide some context.
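The derivation above can be checked numerically for the example data. A stdlib-only sketch following the same steps (means, sums of squares, sample standard deviations, covariance, then $r$):

```python
import math

X = [8, 9, 10, 13, 15]
Y = [5, 4, 4, 6, 8]
n = len(X)
mx, my = sum(X) / n, sum(Y) / n          # 11 and 5.4, as in the text

ss_x = sum((x - mx) ** 2 for x in X)     # sums of squares
ss_y = sum((y - my) ** 2 for y in Y)
s_x = math.sqrt(ss_x / (n - 1))          # sample standard deviations
s_y = math.sqrt(ss_y / (n - 1))

# Covariance, then standardize to get Pearson's r.
cov = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / (n - 1)
r = cov / (s_x * s_y)
print(round(r, 2))  # 0.87
```

The result matches the $r = 0.87$ reported in the answer.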
16,196
Why is correlation not very useful when one of the variables is categorical?
You are (nearly) right. Covariance (and therefore correlation too) can be computed only between numerical variables. That includes continuous variables but also discrete numerical variables. Categorical variables could be used to compute correlation only given a useful numerical code for them, but this is not likely to give a practical advantage. It could perhaps be useful for some two-level categorical variables, but other tools are likely to be more suitable.
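The two-level case mentioned above is the one place a numerical code is natural: coding the categories as 0/1 and computing Pearson's $r$ gives the point-biserial correlation. A sketch with made-up data:

```python
import math

def pearson(x, y):
    # Plain Pearson correlation; the (n-1) factors cancel, so raw sums suffice.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

group = [0, 0, 0, 1, 1, 1]              # two-level categorical, coded 0/1 (made-up)
score = [2.0, 3.0, 2.5, 5.0, 5.5, 6.0]  # numerical response (made-up)
print(round(pearson(group, score), 3))
```

For a categorical variable with three or more unordered levels, no single numeric coding makes this meaningful, which is the answer's point.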
16,197
Why is correlation not very useful when one of the variables is categorical?
There is absolutely nothing wrong with computing correlations where one of the variables is categorical. A strong positive correlation would imply that turning your categorical variable on (or off, depending on your convention) is associated with an increase in the response. For example, this could happen when fitting a logistic regression where some variables are categorical: predicting the chance of a heart attack given patient comorbidities like diabetes and BMI. In this case, BMI would have a very strong correlation with heart attacks. Would you conclude that's not useful?
16,198
How to keep time invariant variables in a fixed effects model
There are a few potential ways for you to keep the gender dummy in a fixed effects regression.

Within Estimator
Suppose you have a model similar to your pooled OLS model, which is
$$y_{it} = \beta_1 + \sum^{10}_{t=2} \beta_t d_t + \gamma_1 (male_i) + \sum^{10}_{t=2} \gamma_t (d_t \cdot male_i) + X'_{it}\theta + c_i + \epsilon_{it}$$
where the variables are as before. Now note that $\beta_1$ and $\beta_1 + \gamma_1 (male_i)$ cannot be identified, because the within estimator cannot distinguish them from the fixed effect $c_i$. Given that $\beta_1$ is the intercept for the base year $t=1$, $\gamma_1$ is the gender effect on earnings in this period. What we can identify in this case are $\gamma_2, ..., \gamma_{10}$, because they are interacted with your time dummies and they measure the differences in the partial effects of your gender variable relative to the first time period. This means that if you observe an increase in your $\gamma_2,...,\gamma_{10}$ over time, this is an indication of a widening of the earnings gap between men and women.

First-Difference Estimator
If you want to know the overall effect of the difference between men and women over time, you can try the following model:
$$y_{it} = \beta_1 + \sum^{10}_{t=2} \beta_t d_t + \gamma (t\cdot male_i) + X'_{it}\theta + c_i + \epsilon_{it}$$
where the variable $t = 1, 2,...,10$ is interacted with the time-invariant gender dummy. Now if you take first differences, $\beta_1$ and $c_i$ drop out and you get
$$y_{it} - y_{i(t-1)} = \sum^{10}_{t=3} \beta_t (d_t - d_{(t-1)}) + \gamma (t\cdot male_i - [(t-1)male_i]) + (X'_{it}-X'_{i(t-1)})\theta + \epsilon_{it}-\epsilon_{i(t-1)}$$
Then $\gamma(t\cdot male_i - [(t-1)male_i]) = \gamma[(t - (t-1))\cdot male_i] = \gamma (male_i)$ and you can identify the gender difference in earnings $\gamma$. So the final regression equation will be:
$$\Delta y_{it} = \sum_{t=3}^{10}\beta_t \Delta d_t + \gamma(male_i) + \Delta X'_{it}\theta + \Delta \epsilon_{it}$$
and you get your effect of interest. The nice thing is that this is easily implemented in any statistical software, but you lose a time period.

Hausman-Taylor Estimator
This estimator distinguishes between regressors that can be assumed uncorrelated with the fixed effect $c_i$ and those that are potentially correlated with it. It further distinguishes between time-varying and time-invariant variables. Let $1$ denote variables that are uncorrelated with $c_i$ and $2$ those that are, and let's say your gender variable is the only time-invariant variable. The Hausman-Taylor estimator then applies the random effects transformation:
$$\tilde{y}_{it} = \tilde{X}'_{1it}\beta_1 + \tilde{X}'_{2it}\beta_2 + \gamma (\widetilde{male}_{i2}) + \tilde{c}_i + \tilde{\epsilon}_{it}$$
where tilde notation means $\tilde{X}_{1it} = X_{1it} - \hat{\theta}_i \overline{X}_{1i}$, where $\hat{\theta}_i$ is used for the random effects transformation and $\overline{X}_{1i}$ is the time-average over each individual. This isn't like the usual random effects estimator that you wanted to avoid, because group $2$ variables are instrumented in order to remove the correlation with $c_i$. For $\tilde{X}_{2it}$ the instrument is $X_{2it} - \overline{X}_{2i}$. The same is done for the time-invariant variables, so if you specify the gender variable to be potentially correlated with the fixed effect, it gets instrumented with $\overline{X}_{1i}$, so you must have more time-varying than time-invariant variables. All of this might sound a little complicated, but there are canned packages for this estimator. For instance, in Stata the corresponding command is xthtaylor. For further information on this method you could read Cameron and Trivedi (2009), "Microeconometrics Using Stata". Otherwise you can just stick with the two previous methods, which are a bit easier.

Inference
For your hypothesis tests there is not much that needs to be considered other than what you would need to do anyway in a fixed effects regression. You need to take care of the autocorrelation in the errors, for example by clustering on the individual ID variable. This allows for an arbitrary correlation structure among clusters (individuals), which deals with autocorrelation. For a reference see again Cameron and Trivedi (2009).
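The first-difference algebra above, that differencing the interaction $t \cdot male_i$ leaves the constant dummy $male_i$, can be verified with a tiny numeric check (illustrative panel, made-up layout):

```python
def delta_interaction(male, T=5):
    # Interaction t * male_i over periods t = 1..T, then first-differenced.
    z = [t * male for t in range(1, T + 1)]
    return [b - a for a, b in zip(z, z[1:])]

print(delta_interaction(0))  # [0, 0, 0, 0] -- every difference equals male_i
print(delta_interaction(1))  # [1, 1, 1, 1]
```

So after differencing, the time-invariant gender dummy reappears as an ordinary regressor whose coefficient $\gamma$ is identified.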
16,199
How to keep time invariant variables in a fixed effects model
Another potential way for you to keep the gender dummy is Mundlak's (1978) approach for a fixed effects model with time-invariant variables. Mundlak's approach posits that the gender effect can be projected upon the group means of the time-varying variables.

Mundlak, Y. 1978: On the pooling of time series and cross section data. Econometrica 46:69-85.
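In practice the Mundlak device amounts to a data-preparation step: compute each individual's time-average of every time-varying regressor and add those means to the model alongside the time-invariant dummy. A stdlib sketch with a made-up panel:

```python
from collections import defaultdict

# Rows of a long-format panel: (person_id, x_it) for one time-varying regressor.
rows = [(1, 2.0), (1, 4.0), (1, 6.0), (2, 1.0), (2, 3.0)]

# Within-person time averages x-bar_i: the extra regressors in Mundlak's device.
sums = defaultdict(lambda: [0.0, 0])
for pid, x in rows:
    sums[pid][0] += x
    sums[pid][1] += 1
xbar = {pid: s / n for pid, (s, n) in sums.items()}

# Each row of the augmented design carries (x_it, x-bar_i); a time-invariant
# variable like gender can then enter the (random effects) regression directly.
augmented = [(pid, x, xbar[pid]) for pid, x in rows]
print(xbar)  # {1: 4.0, 2: 2.0}
```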
16,200
How to keep time invariant variables in a fixed effects model
Another method is to estimate the time-invariant coefficients in a second-stage equation, using the mean error as the dependent variable.

First, estimate the model with FE. From here you get an estimate of $\beta$ and $\gamma_{t}$. For simplicity, let's forget about the year effects. Define the estimation error $\hat{u}_{it}$ as before:
$$ \hat{u}_{it} \equiv y_{it} - X_{it}\hat{\beta} $$
The linear predictor $\bar{u}_{i}$ is:
$$ \bar{u}_{i} \equiv \frac{\sum_{t=1}^{T}\hat{u}_{it}}{T} = \bar{y}_{i} - \bar{x}_{i}\hat{\beta} $$
Now, consider the following second-stage equation:
\begin{equation} \bar{u}_{i} = \delta male_{i} + c_{i} \end{equation}
Assuming that gender is uncorrelated with the unobserved factors $c_{i}$, the OLS estimator of $\delta$ is unbiased and time-consistent (that is, it is consistent as $T \rightarrow \infty$).

To prove the above, substitute the original model into the predictor $\bar{u}_{i}$:
$$ \bar{u}_{i} = \bar{x}_{i}\beta - \bar{x}_{i}\hat{\beta} + \delta male_{i} + c_{i} + \frac{\sum_{t=1}^{T}\epsilon_{it}}{T} $$
The expectation of this predictor is:
$$ E(\bar{u}_{i}) = \bar{x}_{i}\beta - \bar{x}_{i}E(\hat{\beta}) + \delta male_{i} + E(c_{i}) + \frac{\sum_{t=1}^{T}E(\epsilon_{it})}{T} $$
If the assumptions for FE consistency hold, $\hat{\beta}$ is an unbiased estimator of $\beta$, and $E(\epsilon_{it}) = 0$. Thus:
$$ E(\bar{u}_{i}) = \delta male_{i} + E(c_{i}) $$
That is, our predictor is an unbiased estimator of the time-invariant components of the model.

Regarding consistency, the probability limit of this predictor is:
$$ p \lim\limits_{T \rightarrow \infty} \bar{u}_{i} = p \lim\limits_{T \rightarrow \infty} \left( \bar{x}_{i}\beta\right) - p \lim\limits_{T \rightarrow \infty} \left(\bar{x}_{i}\hat{\beta}\right) + p \lim\limits_{T \rightarrow \infty} \delta male_{i} + p \lim\limits_{T \rightarrow \infty} c_{i} + p \lim\limits_{T \rightarrow \infty} \left( \frac{\sum_{t=1}^{T}\epsilon_{it}}{T}\right) $$
Again, given the FE assumptions, $\hat{\beta}$ is a consistent estimator of $\beta$, and the error term converges to its mean, which is zero. Therefore:
$$ p \lim\limits_{T \rightarrow \infty} \bar{u}_{i} = \delta male_{i} + c_{i} $$
Again, our predictor is a consistent estimator of the time-invariant components of the model.
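The second stage is a one-regressor OLS without an intercept, so the estimate of $\delta$ has a closed form: $\hat{\delta} = \sum_i male_i \bar{u}_i / \sum_i male_i^2$. A sketch with hypothetical first-stage output (the residual means $\bar{u}_i$ would come from the FE fit; numbers here are made up):

```python
# Hypothetical first-stage output: mean FE residual per individual, and gender.
u_bar = [0.9, 1.1, 1.0, -0.1, 0.1, 0.0]
male  = [1,   1,   1,    0,   0,   0]

# OLS without intercept for u_bar_i = delta * male_i + c_i.
delta_hat = sum(m * u for m, u in zip(male, u_bar)) / sum(m * m for m in male)
print(delta_hat)  # 1.0
```

With a 0/1 dummy and no intercept, this reduces to the average residual mean among men, which matches the interpretation of $\delta$ as the gender difference left unexplained by the time-varying regressors.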