In Regression Analysis, why do we call independent variables "independent"?
I agree with the other answers here that "independent" and "dependent" are poor terminology. As EdM explains, this terminology arose in the context of controlled experiments, where the researcher could set the regressors independently of each other. There are many preferable terms that do not carry this loaded causal connotation, and in my experience statisticians tend to prefer the more neutral alternatives, including the following:
$$\begin{matrix}
Y_i & & & x_{i,1},...,x_{i,m} \\
\hline
\text{Response} & & & \text{Predictors} \\
\text{Regressand} & & & \text{Regressors} \\
\text{Output variable} & & & \text{Input variables} \\
\text{Predicted variable} & & & \text{Explanatory variables} \\
\end{matrix}$$
Personally, I use the terms "explanatory variables" and "response variable", since those terms have no connotation of statistical independence, control, etc. (One might argue that "response" has a causal connotation, but it is a fairly weak one, so I have not found it problematic.)
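The answer's point about statistical independence can be made concrete with a small sketch (the variable names and coefficients below are illustrative, not from the answer): the "independent" variables can be strongly correlated with each other, and ordinary least squares still recovers their coefficients.

```python
import numpy as np

# Two "independent" variables that are NOT statistically independent:
# x2 is built from x1, so they are strongly correlated (corr ~ 0.8).
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
y = 2.0 * x1 - 1.0 * x2 + rng.normal(scale=0.5, size=n)

# Ordinary least squares handles the correlated regressors fine.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.corrcoef(x1, x2)[0, 1])  # strong correlation between regressors
print(beta[1:])                   # close to the true values (2.0, -1.0)
```

This is why "explanatory variables" is the safer term: nothing in the model requires the regressors to be independent of one another.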
To add to Frank Harrell's and Peter Flom's answers:
I agree that calling a variable "independent" or "dependent" is often misleading. But some people still do that. I once heard an answer why:
In regression analysis we have one "special" variable (usually denoted by $Y$) and many "not-so-special" variables ($X$'s) and we want to see how changes in $X$'s affect $Y$. In other words, we want to see how $Y$ depends on $X$'s.
That is why $Y$ is called "dependent". And if one variable is called "dependent", what would you call the others?
"Dependent" and "independent" can be confusing terms. One sense is pseudo-causal, or even causal, and this is the sense meant by "independent variable" and "dependent variable": the DV, in some sense, depends on the IV. So, for example, when modeling the relationship of height and weight in adult humans, we say weight is the DV and height is the IV.
This captures something that "predictor" does not - namely, the direction of the relationship. Height predicts weight, but weight also predicts height: if you were told to guess people's heights and were given their weights, that information would be useful.
But we wouldn't say that height depends on weight.
Based on the above answers, yes, I agree that "dependent" and "independent" variable are weak terminology. But I can explain the context in which many of us use these terms.
For a general regression problem, we have an output variable, say Y, whose value depends on other input variables, say x1, x2, x3. That is why Y is called the "dependent variable". In that same context, and simply to distinguish the input variables from the output variable, x1, x2, x3 are termed "independent variables": unlike Y, they do not depend on any other variable (setting aside any dependence among themselves).
Independent variables are called independent because they do not depend on other variables. For example, consider the house-price prediction problem. Assume we have data on house_size, location, and house_price. Here, house_price is determined by house_size and location, while location and house_size can each vary freely from house to house.
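As a sketch of that example (the numbers and coefficients below are synthetic and purely illustrative, not from the answer), the dependent variable house_price can be modeled from the independent variables by ordinary least squares:

```python
import numpy as np

# Synthetic data: the independent variables vary freely per house ...
rng = np.random.default_rng(0)
n = 100
house_size = rng.uniform(50, 200, size=n)   # square metres
location = rng.uniform(0, 10, size=n)       # desirability score
# ... while the dependent variable is determined by them (plus noise).
house_price = 1000 * house_size + 5000 * location + rng.normal(0, 1e4, size=n)

# Fit house_price ~ house_size + location by ordinary least squares.
X = np.column_stack([np.ones(n), house_size, location])
coef, *_ = np.linalg.lstsq(X, house_price, rcond=None)
print(coef)  # intercept, size effect (~1000), location effect (~5000)
```

Only house_price appears on the left-hand side of the model; that asymmetry is all the "dependent/independent" labels are really recording.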
Recommendations for non-technical yet deep articles in statistics
Shmueli, Galit. "To Explain or to Predict?" Statistical Science (2010): 289-310.
I believe it matches your three bullet points.
It talks about explanatory versus predictive modelling (the terms should be self-explanatory) and notes that differences between them are often not recognized.
It raises the point that depending on the goal of modelling (explanatory vs. predictive), different model building strategies could be used and different models may be selected as "the best" model.
It is a rather comprehensive paper and an enjoyable read. A discussion of it is summarized in Rob J. Hyndman's blog post. A related discussion on Cross Validated is in this thread (with lots of upvotes). Another (unanswered) question on the same topic is this.
Lehmann, Erich L. "The Fisher, Neyman-Pearson Theories of Testing Hypotheses: One Theory or Two?" Journal of the American Statistical Association 88.424 (1993): 1242-1249.
It is not widely known, but when the giants of the profession were still among us, they did not get on well with each other. The debate on the foundations of hypothesis testing, specifically whether it should be inductive or deductive, saw some pretty serious insults flying between Fisher on the one hand and Neyman and Pearson on the other. The issue was never settled during their lifetimes.
Long after they had all passed, Lehmann tries to bridge the gap and, in my opinion, does a good job: he shows that the approaches are complementary rather than mutually exclusive. This is, by the way, what students learn nowadays. You need to know a few basic things about hypothesis testing, but you can otherwise follow the paper without any problems.
Wilk, M. B. and Gnanadesikan, R. 1968. Probability plotting methods for the analysis of data. Biometrika 55: 1-17. (JSTOR link if you have access.)
This paper is, at the time of my writing, almost 50 years old but still feels fresh and innovative. Using a rich variety of interesting and substantial examples, the authors unify and extend a variety of ideas for plotting and comparing distributions using the framework of Q-Q (quantile-quantile) and P-P (probability-probability) plots. Distributions here mean broadly any sets of data or of numbers (residuals, contrasts, etc., etc.) arising in their analyses.
Particular versions of these plots go back several decades, most obviously normal probability or normal scores plots, which are in these terms quantile-quantile plots, namely plots of observed quantiles versus expected or theoretical quantiles from a sample of the same size from a normal (Gaussian) distribution. But the authors show, modestly yet confidently, that the same ideas can be extended easily -- and practically with modern computing -- for examining other kinds of quantiles and plotting the results automatically.
The authors, then both at Bell Telephone Laboratories, enjoyed state-of-the-art computing facilities, and even many universities and research institutions took a decade or so to catch up. Even now, the ideas in this paper deserve wider application than they get. It's a rare introductory text or course that includes any of these ideas other than the normal Q-Q plot. Histograms and box plots (each often highly useful, but nevertheless each awkward and limited in several ways) continue to be the main staples when plots of distributions are introduced.
On a personal level, even though the main ideas of this paper have been familiar for most of my career, I enjoy re-reading it every couple of years or so. One good reason is pleasure at the way the authors wield simple but powerful ideas to good effect with serious examples. Another good reason is the way that the paper, which is concisely written, without the slightest trace of bombast, hints at extensions of the main ideas. More than once, I've rediscovered twists on the main ideas covered explicitly in side hints and further comments.
This isn't just a paper for those especially interested in statistical graphics, although to my mind that should include everyone interested in statistics of any kind. It promotes ways of thinking about distributions that are practically helpful in developing anyone's statistical skills and insights.
Ioannidis, John P. A. "Why Most Published Research Findings Are False." PLoS Medicine (2005)
Ioannidis, John P. A. "How to Make More Published Research True." PLoS Medicine (2014)
Must-reads for every researcher, statistician, or analyst who wants to avoid the dangers of using and interpreting statistics incorrectly in research. The 2005 article has been the most-accessed article in the history of the Public Library of Science, and it stimulated a great deal of controversy and discussion.
Tukey, J. W. (1960). Conclusions vs decisions. Technometrics 2(4): 423-433.
This paper is based on an after-dinner talk by Tukey and there is a comment that 'considerable discussion ensued' so it matches at least the third of your dot points.
I first read this paper when I was completing a PhD in engineering and appreciated its exploration of the practicalities of data analysis.
Efron and Morris, 1977, Stein's Paradox in Statistics.
Efron and Morris wrote a series of technical papers on the James-Stein estimator in the 1970s, framing Stein's "paradox" in the empirical Bayes context. The 1977 paper is a popular account published in Scientific American.
It is a great read.
Well, although interest in the Roy model is greatest among economists (but I may be wrong), its original paper, "Some Thoughts on the Distribution of Earnings" (1951), is an insightful and nontechnical discussion of the self-selection problem. This paper served as inspiration for the selection models developed by Nobel laureate James Heckman. Although old, I think it matches your three bullet points.
Although it’s a full-length book and not just an article, Judea Pearl’s The Book of Why admirably meets all three of your criteria.
It addresses a foundational question of statistics—under what conditions can statistical analyses yield causal conclusions—in a way that successfully targets a general audience. Philosophers, AI researchers, trial lawyers, climate activists, and of course statisticians will all find much of interest in this remarkable, cross-disciplinary book.
Among statisticians with a serious interest in causality, Andrew Gelman’s review is less enthusiastic than most because Gelman finds Pearl’s causal graphs and do-calculus less useful than do most textbooks on causal inference. Gelman likewise takes issue with many of Pearl’s grander generalizations about the history of statistics. But stirring up controversy and directly influencing the discipline both count as stimulation:
https://statmodeling.stat.columbia.edu/2019/01/08/book-pearl-mackenzie/
No Interpretation of Probability (Schwarz, 2018) is a favorite of mine. It touches on a lot of deep and persisting interpretational issues in statistics, and offers a refreshingly deflationary resolution to many (but not all) of them.
The Reference-Class Problem is Your Problem Too (Hájek, 2007) is a pretty good summary of the fundamental impossibility of assigning statements of probability to uniquely interpretable empirical quantities, much in the vein of the first article. I think it is a little unduly harsh on frequentism, but not so much that it hurts the article.
Prior Probabilities and Transformation Groups (Jaynes) is a nice overview of how symmetries in our problem-framing can be used to inform our choice of priors. Jaynes is always a pleasure to read, if a bit dogmatic.
All of these require very little in the way of formal mathematical background, but touch on extremely important concepts that are relevant more or less everywhere in application.
A recent surge of interest in causality in machine learning has brought Pearl's framework into mainstream data science and statistics practice. In this direction, an article by Judea Pearl is a great starting point on the intricacies of causal inference in machine learning:
The Seven Tools of Causal Inference, with Reflections on Machine Learning .
Judea Pearl.
Communications of the ACM, March 2019, Vol. 62 No. 3, Pages 54-60.
|
8,116
|
How to get an "overall" p-value and effect size for a categorical factor in a mixed model (lme4)?
|
Both of the concepts you mention (p-values and effect sizes of linear mixed models) have inherent issues. With respect to effect size, quoting Doug Bates, the original author of lme4,
Assuming that one wants to define an $R^2$ measure, I think an argument
could be made for treating the penalized residual sum of squares from
a linear mixed model in the same way that we consider the residual sum
of squares from a linear model. Or one could use just the residual sum
of squares without the penalty or the minimum residual sum of squares
obtainable from a given set of terms, which corresponds to an infinite
precision matrix. I don't know, really. It depends on what you are
trying to characterize.
For more information, you can look at this thread, this thread, and this message. Basically, the issue is that there is not an agreed upon method for the inclusion and decomposition of the variance from the random effects in the model. However, there are a few standards that are used. If you have a look at the Wiki set up for/by the r-sig-mixed-models mailing list, there are a couple of approaches listed.
One of the suggested methods looks at the correlation between the fitted and the observed values. This can be implemented in R as suggested by Jarrett Byrnes in one of those threads:
r2.corr.mer <- function(m) {
lmfit <- lm(model.response(model.frame(m)) ~ fitted(m))
summary(lmfit)$r.squared
}
So for example, say we estimate the following linear mixed model:
set.seed(1)
d <- data.frame(y = rnorm(250), x = rnorm(250), z = rnorm(250),
g = sample(letters[1:4], 250, replace=T) )
library(lme4)
summary(fm1 <- lmer(y ~ x + (z | g), data=d))
# Linear mixed model fit by REML ['lmerMod']
# Formula: y ~ x + (z | g)
# Data: d
# REML criterion at convergence: 744.4
#
# Scaled residuals:
# Min 1Q Median 3Q Max
# -2.7808 -0.6123 -0.0244 0.6330 3.5374
#
# Random effects:
# Groups Name Variance Std.Dev. Corr
# g (Intercept) 0.006218 0.07885
# z 0.001318 0.03631 -1.00
# Residual 1.121439 1.05898
# Number of obs: 250, groups: g, 4
#
# Fixed effects:
# Estimate Std. Error t value
# (Intercept) 0.02180 0.07795 0.280
# x 0.04446 0.06980 0.637
#
# Correlation of Fixed Effects:
# (Intr)
# x -0.005
We can calculate the effect size using the function defined above:
r2.corr.mer(fm1)
# [1] 0.0160841
A similar alternative is recommended in a paper by Ronghui Xu, referred to as $\Omega^{2}_{0}$, and can be calculated in R simply:
1-var(residuals(fm1))/(var(model.response(model.frame(fm1))))
# [1] 0.01173721 # Usually, it would be even closer to the value above
With respect to the p-values, this is a much more contentious issue (at least in the R/lme4 community). See the discussions in the questions here, here, and here among many others. Referencing the Wiki page again, there are a few approaches to test hypotheses on effects in linear mixed models. Listed from "worst to best" (according to the authors of the Wiki page which I believe includes Doug Bates as well as Ben Bolker who contributes here a lot):
Wald Z-tests
For balanced, nested LMMs where df can be computed: Wald t-tests
Likelihood ratio test, either by setting up the model so that the parameter can be isolated/dropped (via anova or drop1), or via computing likelihood profiles
MCMC or parametric bootstrap confidence intervals
They recommend the Markov chain Monte Carlo sampling approach and also list a number of possibilities to implement this from pseudo and fully Bayesian approaches, listed below.
Pseudo-Bayesian:
Post-hoc sampling, typically (1) assuming flat priors and (2) starting from the MLE, possibly using the approximate variance-covariance estimate to choose a candidate distribution
Via mcmcsamp (if available for your problem: i.e. LMMs with simple random effects — not GLMMs or complex random effects)
Via pvals.fnc in the languageR package (a wrapper for mcmcsamp)
In AD Model Builder, possibly via the glmmADMB package (use the mcmc=TRUE option) or the R2admb package (write your own model definition in AD Model Builder), or outside of R
Via the sim function from the arm package (simulates the posterior only for the beta (fixed-effect) coefficients)
Fully Bayesian approaches:
Via the MCMCglmm package
Using glmmBUGS (a WinBUGS wrapper/R interface)
Using JAGS/WinBUGS/OpenBUGS etc., via the rjags/r2jags/R2WinBUGS/BRugs packages
For the sake of illustration, below is a model estimated with the MCMCglmm package, which you will see yields similar results to the model above and provides a kind of Bayesian p-value:
library(MCMCglmm)
summary(fm2 <- MCMCglmm(y ~ x, random=~us(z):g, data=d))
# Iterations = 3001:12991
# Thinning interval = 10
# Sample size = 1000
#
# DIC: 697.7438
#
# G-structure: ~us(z):g
#
# post.mean l-95% CI u-95% CI eff.samp
# z:z.g 0.0004363 1.586e-17 0.001268 397.6
#
# R-structure: ~units
#
# post.mean l-95% CI u-95% CI eff.samp
# units 0.9466 0.7926 1.123 1000
#
# Location effects: y ~ x
#
# post.mean l-95% CI u-95% CI eff.samp pMCMC
# (Intercept) -0.04936 -0.17176 0.07502 1000 0.424
# x -0.07955 -0.19648 0.05811 1000 0.214
I hope this helps somewhat. I think the best advice for somebody starting out with linear mixed models and trying to estimate them in R is to read the Wiki faqs from where most of this information was drawn. It is an excellent resource for all sorts of mixed effects themes from basic to advanced and from modelling to plotting.
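As a minimal sketch of the likelihood-ratio approach listed above (option 3 in the "worst to best" list), one can fit nested models with maximum likelihood (REML = FALSE, since REML likelihoods are not comparable across different fixed-effect specifications) and compare them with anova():

```r
library(lme4)

set.seed(1)
d <- data.frame(y = rnorm(250), x = rnorm(250), z = rnorm(250),
                g = sample(letters[1:4], 250, replace = TRUE))

# Fit with ML (REML = FALSE): REML likelihoods cannot be compared
# across models with different fixed effects
full    <- lmer(y ~ x + (z | g), data = d, REML = FALSE)
reduced <- lmer(y ~ 1 + (z | g), data = d, REML = FALSE)

# Likelihood ratio test for the fixed effect of x
anova(reduced, full)
```

The resulting chi-square test gives an overall p-value for the dropped term, with the usual caveat that LRT p-values can be anti-conservative in small samples.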
|
8,117
|
How to get an "overall" p-value and effect size for a categorical factor in a mixed model (lme4)?
|
In regard to calculating significance (p) values, Luke (2016) Evaluating significance in linear mixed-effects models in R reports that the optimal method is either the Kenward-Roger or Satterthwaite approximation for degrees of freedom (available in R with packages such as lmerTest or afex).
Abstract
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
(emphasis added)
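As an illustration (assuming the lmerTest package, and pbkrtest for the Kenward-Roger option, are installed), loading lmerTest before fitting makes summary() and anova() report Satterthwaite-based degrees of freedom and p-values by default, with Kenward-Roger available via the ddf argument:

```r
library(lmerTest)  # masks lme4::lmer with a version that adds df and p-values

set.seed(1)
d <- data.frame(y = rnorm(200), x = rnorm(200),
                g = sample(letters[1:5], 200, replace = TRUE))

m <- lmer(y ~ x + (1 | g), data = d)

summary(m)                        # Satterthwaite df and p-values (default)
anova(m, ddf = "Kenward-Roger")   # Kenward-Roger approximation (needs pbkrtest)
```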
|
8,118
|
How to get an "overall" p-value and effect size for a categorical factor in a mixed model (lme4)?
|
I use the lmerTest package. This conveniently includes an estimation of the p-value in the anova() output for my MLM analyses, but does not give an effect size for the reasons given in other posts here.
|
8,119
|
Clustering methods that do not require pre-specifying the number of clusters
|
Clustering algorithms that require you to pre-specify the number of clusters are a small minority. There are a huge number of algorithms that don't. They are hard to summarize; it's a bit like asking for a description of any organisms that aren't cats.
Clustering algorithms are often categorized into broad kingdoms:
Partitioning algorithms (like k-means and its progeny)
Hierarchical clustering (as @Tim describes)
Density based clustering (such as DBSCAN)
Model based clustering (e.g., finite Gaussian mixture models, or Latent Class Analysis)
There can be additional categories, and people can disagree with these categories and which algorithms go in which category, because this is heuristic. Nevertheless, something like this scheme is common. Working from this, it is primarily only the partitioning methods (1) that require pre-specification of the number of clusters to find. What other information needs to be pre-specified (e.g., the number of points per cluster), and whether it seems reasonable to call various algorithms 'nonparametric', is likewise highly variable and hard to summarize.
Hierarchical clustering does not require you to pre-specify the number of clusters, the way that k-means does, but you do select a number of clusters from your output. On the other hand, DBSCAN doesn't require either (but it does require specification of a minimum number of points for a 'neighborhood'—although there are defaults, so in some sense you could skip specifying that—which does put a floor on the number of points in a cluster). GMM doesn't even require any of those three, but does require parametric assumptions about the data generating process. As far as I know, there is no clustering algorithm that never requires you to specify a number of clusters, a minimum number of data per cluster, or any pattern / arrangement of data within clusters. I don't see how there could be.
It might help you to read an overview of different types of clustering algorithms. The following might be a place to start:
Berkhin, P. "Survey of Clustering Data Mining Techniques" (pdf)
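As a small illustration of a density-based method (using the dbscan package, which is not part of the original answer), DBSCAN takes a neighborhood radius eps and a minimum point count minPts, but never a number of clusters:

```r
library(dbscan)  # assumed installed; provides dbscan()

set.seed(1)
# Two 2-D blobs; we never tell the algorithm "2 clusters"
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 5), ncol = 2))

db <- dbscan(x, eps = 1, minPts = 5)
table(db$cluster)  # label 0 = noise; other labels are discovered clusters
```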
|
8,120
|
Clustering methods that do not require pre-specifying the number of clusters
|
The simplest example is hierarchical clustering, where you compare each point with every other point using some distance measure, and then join together the pair with the smallest distance to create a joined pseudo-point (e.g., b and c make bc, as in the image below). Next you repeat the procedure, joining points and pseudo-points based on their pairwise distances, until every point is joined into the graph.
(source: https://en.wikipedia.org/wiki/Hierarchical_clustering)
The procedure is non-parametric and the only thing that you need for it is the distance measure. In the end you need to decide how to prune the tree-graph created using this procedure, so a decision about expected number of clusters needs to be made.
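In base R this looks like the following minimal sketch (complete linkage is the hclust() default); note that the cluster count only enters at the final pruning step:

```r
set.seed(1)
x <- matrix(rnorm(40), ncol = 2)   # 20 points in 2-D

hc <- hclust(dist(x))   # no number of clusters specified up front
plot(hc)                # inspect the dendrogram

# The pruning decision happens only at the end:
cl <- cutree(hc, k = 3) # cut the tree into 3 clusters
table(cl)
```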
|
8,121
|
Clustering methods that do not require pre-specifying the number of clusters
|
Parameters are good!
A "parameter-free" method means that you only get a single shot (except for maybe randomness), with no customization possibilities.
Now clustering is an exploratory technique. You must not assume there is a single "true" clustering. You should rather be interested in exploring different clusterings of the same data to learn more about it. Treating clustering as a black box never works well.
For example, you want to be able to customize the distance function used depending on your data (this is also a parameter!) If the result is too coarse, you want to be able to get a finer result, or if it is too fine, get a coarser version of it.
The best methods often are those that let you navigate the result well, such as the dendrogram in hierarchical clustering. You can then explore substructures easily.
|
8,122
|
Clustering methods that do not require pre-specifying the number of clusters
|
Check out Dirichlet mixture models. They provide a good way of making sense of the data if you don't know the number of clusters beforehand. However, they do make assumptions about the shapes of clusters, which your data might violate.
|
8,123
|
Clustering methods that do not require pre-specifying the number of clusters
|
If you want to compute the number of clusters only from the input data, for numerical variables you may look at MCG, a hierarchical clustering method with an automatic stop criterion: see the free seminar paper at https://hal.archives-ouvertes.fr/hal-02124947/document (contains bibliographic references); the input data is either the array of coordinates of the N data points, or an N by N array of distances between the N items (the distances are not required to be Euclidean ones).
If you are working on categorical variables, you may look at POP (optimal partitioning); the method is presented in the same seminar paper; it operates either on categorical variables or on an N by N array of signed dissimilarities (the main cited paper was published in 2002: free copy at https://hal.archives-ouvertes.fr/hal-02123085/document).
There are free binaries for MCG and POP (at least for Linux). The two methods are explained (with examples) in English in the paper cited above.
|
8,124
|
How can I include random effects (or repeated measures) into a randomForest
|
Currently, this paper (doi:10.1177/0962280220946080) reviews previous algorithms, including those cited in previous answers. Further, that paper introduces the R library LongituRF, which allows computing all of those algorithms as well as the new ones.
|
8,125
|
How can I include random effects (or repeated measures) into a randomForest
|
Yeah it's possible. You should check out "RE-EM Trees: A Data Mining Approach for Longitudinal and Clustered Data," and the associated R package REEMtree.
It's been a while since I looked at the paper. I recall the authors had not yet tried forming ensembles of these trees, but that nothing suggested it wouldn't work.
|
8,126
|
How can I include random effects (or repeated measures) into a randomForest
|
Mixed Effects Random Forests (MERFs) are a thing. As the answer above states, there's some great research about them by Dr. Larocque's group at HEC Montreal. The paper is here: http://www.tandfonline.com/doi/abs/10.1080/00949655.2012.741599.
Essentially it is a theoretically sound way to combine the non-linear modeling of random forests with linear random effects.
We just released an open source package in Python implementing MERF using the above algorithm in the paper.
We wrote a detailed blog post about the package and how to use it for clustered data sets.
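The core of the algorithm is an EM-style alternation: fit the forest on the data minus the current random effects, then re-estimate the random effects from the forest's out-of-bag residuals. A toy sketch of that loop (simulated data, a fixed shrinkage constant in place of the paper's variance-component updates, and scikit-learn rather than the merf package's actual API):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_groups, per = 20, 15
group = np.repeat(np.arange(n_groups), per)
b_true = rng.normal(0, 2.0, n_groups)                # random intercept per group
X = rng.uniform(-3, 3, (n_groups * per, 1))
y = 3 * np.sin(X[:, 0]) + b_true[group] + rng.normal(0, 0.5, len(group))

b = np.zeros(n_groups)
rf = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=0)
for _ in range(5):                                   # EM-style alternation
    rf.fit(X, y - b[group])                          # fixed part on y minus random effects
    resid = y - rf.oob_prediction_                   # OOB residuals avoid in-sample overfit
    for g in range(n_groups):                        # shrunken group means of residuals
        r = resid[group == g]
        b[g] = r.sum() / (len(r) + 1.0)              # "+1" is a crude fixed shrinkage (assumption)
# b now tracks the true group-level effects b_true
```

Using out-of-bag predictions for the residuals matters here: in-sample forest predictions largely memorize the group effects, which would shrink the estimates toward zero.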
|
8,127
|
How can I include random effects (or repeated measures) into a randomForest
|
They are not commonly used together, and care should be taken before combining them.
Random forests are typically used as classifiers. The reason that you would use a random forest instead of another method (e.g. K-means clustering) is that you may have a large number of dimensions that you want to classify by. The issue with a large number of dimensions is that if you wanted to test every ordering of the dimensions, the number of possibilities would be enormous (it grows factorially with the number of dimensions).
Random effects are typically used in regression with repeated measures of the same thing. They are commonly used in mixed effects models where the term mixed refers to both fixed and random effects. The fixed effects are thought to represent the parameters that you will see again (e.g. a drug or a person's age). The random effects are thought to represent an instance of variability around a parameter that you will not see again (e.g. a specific person).
There are examples using them together when there is clustered data http://dx.doi.org/10.1080/00949655.2012.741599 and http://www2.ims.nus.edu.sg/Programs/014swclass/files/denis.pdf.
I'm unaware of any R packages that can do this analysis.
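For the regression side, a random-intercept model of repeated measures looks like this in Python's statsmodels (a simulated-data sketch; in R, lme4's lmer plays the same role):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
person = np.repeat(np.arange(30), 5)                 # 5 repeated measures per person
u = rng.normal(0, 1.5, 30)                           # random effect: person-specific shift
age = rng.uniform(20, 60, 150)
y = 0.1 * age + u[person] + rng.normal(0, 0.5, 150)  # true fixed effect of age is 0.1
df = pd.DataFrame({"y": y, "age": age, "person": person})

m = smf.mixedlm("y ~ age", df, groups=df["person"]).fit()
print(m.params["age"])  # estimate of the fixed effect, close to 0.1
```

Here age is the fixed effect (a parameter you would see again) and the person-specific intercept is the random effect (an instance of variability you would not see again), matching the distinction above.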
|
8,128
|
How can I include random effects (or repeated measures) into a randomForest
|
Instead of random forest, you can also use tree-boosting for the fixed effects part in a model with random effects. The GPBoost library with Python and R packages builds on LightGBM and allows for combining tree-boosting and mixed effects models. Simply speaking it is an extension of linear mixed effects models where the fixed-effects are learned using tree-boosting. See this blog post and Sigrist (2020) for further information.
Disclaimer: I am the author of the GPBoost library.
|
8,129
|
How can I include random effects (or repeated measures) into a randomForest
|
There is now an R package called SAEforest that provides the command MERFranger: https://cran.r-project.org/web/packages/SAEforest/index.html
The focus of the package is not precisely on the MERF. However, it employs the same syntax as lme4, which may be more intuitive for people who are used to that package. Also, the ranger object can easily be extracted and treated separately.
|
8,130
|
What is the distribution of $R^2$ in linear regression under the null hypothesis? Why is its mode not at zero when $k>3$?
|
For the specific hypothesis (that all regressor coefficients are zero, not including the constant term, which is not examined in this test) and under normality, we know (see eg Maddala 2001, p. 155, but note that there, $k$ counts the regressors without the constant term, so the expression looks a bit different) that the statistic
$$F = \frac {n-k}{k-1}\frac {R^2}{1-R^2}$$ is distributed as a central $F(k-1, n-k)$ random variable.
Note that although we do not test the constant term, $k$ counts it also.
Moving things around,
$$(k-1)F - (k-1)FR^2 = (n-k)R^2 \Rightarrow (k-1)F = R^2\big[(n-k) + (k-1)F\big]$$
$$\Rightarrow R^2 = \frac {(k-1)F}{(n-k) + (k-1)F}$$
But the right hand side is distributed as a Beta distribution, specifically
$$R^2 \sim Beta\left (\frac {k-1}{2}, \frac {n-k}{2}\right)$$
The mode of this distribution is
$$\text{mode}R^2 = \frac {\frac {k-1}{2}-1}{\frac {k-1}{2}+ \frac {n-k}{2}-2} =\frac {k-3}{n-5} $$
FINITE & UNIQUE MODE
From the above relation we can infer that for the distribution to have a unique and finite mode we must have
$$k\geq 3, n >5 $$
This is consistent with the general requirement for a Beta distribution, which is
$$\{\alpha >1 , \beta \geq 1\},\;\; \text {OR}\;\; \{\alpha \geq1 , \beta > 1\}$$
as one can infer from this CV thread or read here.
Note that if $\{\alpha =1 , \beta = 1\}$, we obtain the Uniform distribution, so all the density points are modes (finite but not unique). Which raises the question: why, if $k=3, n=5$, is $R^2$ distributed as a $U(0,1)$?
IMPLICATIONS
Assume that you have $k=5$ regressors (including the constant), and $n=99$ observations. Pretty nice regression, no overfitting. Then
$$R^2\Big|_{\beta=0} \sim Beta\left (2, 47\right), \text{mode}R^2 = \frac 1{47} \approx 0.021$$
and the corresponding density plot (figure not reproduced here).
Intuition please: this is the distribution of $R^2$ under the hypothesis that no regressor actually belongs to the regression. So a) the distribution is independent of the regressors, b) as the sample size increases its distribution is concentrated towards zero as the increased information swamps small-sample variability that may produce some "fit" but also c) as the number of irrelevant regressors increases for given sample size, the distribution concentrates towards $1$, and we have the "spurious fit" phenomenon.
But also, note how "easy" it is to reject the null hypothesis: in the particular example, for $R^2=0.13$ cumulative probability has already reached $0.99$, so an obtained $R^2>0.13$ will reject the null of "insignificant regression" at significance level $1$%.
ADDENDUM
To respond to the new issue regarding the mode of the $R^2$ distribution, I can offer the following line of thought (not geometrical), which links it to the "spurious fit" phenomenon: when we run least-squares on a data set, we essentially solve a system of $n$ linear equations with $k$ unknowns (the only difference from high-school math is that back then we called "known coefficients" what in linear regression we call "variables/regressors", "unknown x" what we now call "unknown coefficients", and "constant terms" what we now call "dependent variable"). As long as $k<n$ the system is over-identified and there is no exact solution, only an approximate one; the difference emerges as "unexplained variance of the dependent variable", which is captured by $1-R^2$. If $k=n$ the system has one exact solution (assuming linear independence). In between, as we increase $k$, we reduce the "degree of overidentification" of the system and we "move towards" the single exact solution. Under this view, it makes sense why $R^2$ increases spuriously with the addition of irrelevant regressors and, consequently, why its mode moves gradually towards $1$ as $k$ increases for given $n$.
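This Beta result is easy to check by simulation; the sketch below (arbitrary choices $n=20$, $k=4$, i.e. three regressors plus an intercept) also confirms the tail computation in the $Beta(2,47)$ example above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, n_sims = 20, 4, 5000           # k counts the constant term as well
r2 = np.empty(n_sims)
for i in range(n_sims):
    X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
    y = rng.standard_normal(n)       # null: y is unrelated to the regressors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2[i] = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

a, b = (k - 1) / 2, (n - k) / 2      # Beta(1.5, 8) here
print(r2.mean(), (k - 1) / (n - 1))  # simulated vs. theoretical mean, both about 0.158
print(stats.kstest(r2, "beta", args=(a, b)).pvalue)  # large p: consistent with Beta
print(stats.beta.cdf(0.13, 2, 47))   # the k=5, n=99 example: about 0.99
```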
|
8,131
|
What is the distribution of $R^2$ in linear regression under the null hypothesis? Why is its mode not at zero when $k>3$?
|
I won't rederive the $\mathrm{Beta}(\frac{k-1}{2}, \, \frac{n-k}{2})$ distribution in @Alecos's excellent answer (it's a standard result, see here for another nice discussion) but I want to fill in more details about the consequences! Firstly, what does the null distribution of $R^2$ look like for a range of values of $n$ and $k$? The graph in @Alecos's answer is quite representative of what occurs in practical multiple regressions, but sometimes insight is gleaned more easily from smaller cases. I've included the mean, mode (where it exists) and standard deviation. The graph/table deserves a good eyeball: best viewed at full-size. I could have included fewer facets but the pattern would have been less clear; I have appended R code so that readers can experiment with different subsets of $n$ and $k$.
Values of shape parameters
The graph's colour scheme indicates whether each shape parameter is less than one (red), equal to one (blue), or more than one (green). The left-hand side shows the value of $\alpha$ while $\beta$ is on the right. Since $\alpha = \frac{k-1}{2}$, its value increases in arithmetic progression by a common difference of $\frac{1}{2}$ as we move right from column to column (add a regressor to our model) whereas, for fixed $n$, $\beta = \frac{n-k}{2}$ decreases by $\frac{1}{2}$. The total $\alpha + \beta = \frac{n-1}{2}$ is fixed for each row (for a given sample size). If instead we fix $k$ and move down the column (increase sample size by 1), then $\alpha$ stays constant and $\beta$ increases by $\frac{1}{2}$. In regression terms, $\alpha$ is half the number of regressors included in the model, and $\beta$ is half the residual degrees of freedom. To determine the shape of the distribution we are particularly interested in where $\alpha$ or $\beta$ equal one.
The algebra is straightforward for $\alpha$: we have $\frac{k-1}{2}=1$ so $k=3$. This is indeed the only column of the facet plot that's filled blue on the left. Similarly $\alpha < 1$ for $k<3$ (the $k=2$ column is red on the left) and $\alpha > 1$ for $k>3$ (from the $k=4$ column onwards, the left side is green).
For $\beta=1$ we have $\frac{n-k}{2}=1$ hence $k=n-2$. Note how these cases (marked with a blue right-hand side) cut a diagonal line across the facet plot. For $\beta > 1$ we obtain $k < n - 2$ (the graphs with a green right-hand side lie to the left of the diagonal line). For $\beta < 1$ we need $k > n - 2$, which involves only the right-most cases on my graph: at $n=k$ we have $\beta=0$ and the distribution is degenerate, but $n=k-1$ where $\beta = \frac{1}{2}$ is plotted (right side in red).
Since the PDF is $f(x;\,\alpha,\,\beta) \propto x^{\alpha-1} (1-x)^{\beta-1}$, it is clear that if (and only if) $\alpha<1$ then $f(x) \to \infty$ as $x \to 0$. We can see this in the graph: when the left side is shaded red, observe the behaviour at 0. Similarly when $\beta<1$ then $f(x) \to \infty$ as $x \to 1$. Look where the right side is red!
Symmetries
One of the most eye-catching features of the graph is the level of symmetry, but when the Beta distribution is involved, this shouldn't be surprising!
The Beta distribution itself is symmetric if $\alpha = \beta$. For us this occurs if $n = 2k-1$ which correctly identifies the panels $(k=2, n=3)$, $(k=3, n=5)$, $(k=4, n=7)$ and $(k=5, n=9)$. The extent to which the distribution is symmetric across $R^2 = 0.5$ depends on how many regressor variables we include in the model for that sample size. If $k = \frac{n+1}{2}$ the distribution of $R^2$ is perfectly symmetric about 0.5; if we include fewer variables than that it becomes increasingly asymmetric and the bulk of the probability mass shifts closer to $R^2 = 0$; if we include more variables then it shifts closer to $R^2 = 1$. Remember that $k$ includes the intercept in its count, and that we are working under the null, so the regressor variables should have coefficient zero in the correctly specified model.
There is also an obviously symmetry between distributions for any given $n$, i.e. any row in the facet grid. For example, compare $(k=3, n=9)$ with $(k=7, n=9)$. What's causing this? Recall that the distribution of $\mathrm{Beta}(\alpha, \beta)$ is the mirror image of $\mathrm{Beta}(\beta, \alpha)$ across $x=0.5$. Now we had $\alpha_{k,n} = \frac{k-1}{2}$ and $\beta_{k,n} = \frac{n-k}{2}$. Consider $k'=n-k+1$ and we find:
$$\alpha_{k',n} = \frac{(n-k+1)-1}{2} = \frac{n-k}{2} = \beta_{k,n}$$
$$\beta_{k',n} = \frac{n-(n-k+1)}{2} = \frac{k-1}{2} = \alpha_{k,n}$$
So this explains the symmetry as we vary the number of regressors in the model for a fixed sample size. It also explains the distributions that are themselves symmetric as a special case: for them, $k' = k$ so they are obliged to be symmetric with themselves!
This tells us something we might not have guessed about multiple regression: for a given sample size $n$, and assuming no regressors have a genuine relationship with $Y$, the $R^2$ for a model using $k-1$ regressors plus an intercept has the same distribution as $1 - R^2$ does for a model with $k-1$ residual degrees of freedom remaining.
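This mirror-image relation is quick to confirm numerically (a scipy sketch over all model sizes for $n=9$):

```python
import numpy as np
from scipy import stats

n = 9
x = np.linspace(0.01, 0.99, 201)
worst = 0.0
for k in range(2, n):                # k' = n - k + 1 is the mirror model size
    kp = n - k + 1
    f  = stats.beta.pdf(x,     (k  - 1) / 2, (n - k ) / 2)
    fm = stats.beta.pdf(1 - x, (kp - 1) / 2, (n - kp) / 2)
    worst = max(worst, np.abs(f - fm).max())
print(worst)  # zero up to floating-point error
```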
Special distributions
When $k=n$ we have $\beta=0$, which isn't a valid parameter. However, as $\beta \to 0$ the distribution becomes degenerate with a spike such that $\mathsf{P}(R^2 = 1)=1$. This is consistent with what we know about a model with as many parameters as data points - it achieves perfect fit. I haven't drawn the degenerate distribution on my graph but did include the mean, mode and standard deviation.
When $k=2$ and $n=3$ we obtain $\mathrm{Beta}(\frac{1}{2}, \, \frac{1}{2})$ which is the arcsine distribution. This is symmetric (since $\alpha = \beta$) and bimodal (0 and 1). Since this is the only case where both $\alpha < 1$ and $\beta < 1$ (marked red on both sides), it is our only distribution which goes to infinity at both ends of the support.
The $\mathrm{Beta}(1, \, 1)$ distribution is the only Beta distribution to be rectangular (uniform). All values of $R^2$ from 0 to 1 are equally likely. The only combination of $k$ and $n$ for which $\alpha = \beta =1$ occurs is $k=3$ and $n=5$ (marked blue on both sides).
The previous special cases are of limited applicability but the case $\alpha > 1$ and $\beta=1$ (green on left, blue on right) is important. Now $f(x;\,\alpha,\,\beta) \propto x^{\alpha-1} (1-x)^{\beta-1} = x^{\alpha-1}$ so we have a power-law distribution on [0, 1]. Of course it's unlikely we'd perform a regression with $k=n-2$ and $k>3$, which is when this situation occurs. But by the previous symmetry argument, or some trivial algebra on the PDF, when $k=3$ and $n > 5$, which is the frequent procedure of multiple regression with two regressors and an intercept on a non-trivial sample size, $R^2$ will follow a reflected power law distribution on [0, 1] under $H_0$. This corresponds to $\alpha=1$ and $\beta>1$ so is marked blue on left, green on right.
You may also have noticed the triangular distributions at $(k=5,n=7)$ and its reflection $(k=3,n=7)$. We can recognise from their $\alpha$ and $\beta$ that these are just special cases of the power-law and reflected power-law distributions where the power is $2-1=1$.
Mode
If $\alpha>1$ and $\beta>1$, all green in the plot, $f(x; \, \alpha, \, \beta)$ is concave with $f(0)=f(1)=0$, and the Beta distribution has a unique mode $\frac{\alpha-1}{\alpha+\beta-2}$. Putting these in terms of $k$ and $n$, the condition becomes $k>3$ and $n>k+2$ while the mode is $\frac{k-3}{n-5}$.
All other cases have been dealt with above. If we relax the inequality to allow $\beta=1$, then we include the (green-blue) power-law distributions with $k=n-2$ and $k>3$ (equivalently, $n>5$). These cases clearly have mode 1, which actually agrees with the previous formula since $\frac{(n-2)-3}{n-5}=1$. If instead we allowed $\alpha=1$ but still demanded $\beta>1$, we'd find the (blue-green) reflected power-law distributions with $k=3$ and $n>5$. Their mode is 0, which agrees with $\frac{3-3}{n-5}=0$. However, if we relaxed both inequalities simultaneously to allow $\alpha=\beta=1$, we'd find the (all blue) uniform distribution with $k=3$ and $n=5$, which does not have a unique mode. Moreover the previous formula can't be applied in this case, since it would return the indeterminate form $\frac{3-3}{5-5}=\frac{0}{0}$.
When $n=k$ we get a degenerate distribution with mode 1. When $\beta < 1$ (in regression terms, $n=k-1$ so there is only one residual degree of freedom) then $f(x) \to \infty$ as $x \to 1$, and when $\alpha < 1$ (in regression terms, $k=2$ so a simple linear model with intercept and one regressor) then $f(x) \to \infty$ as $x \to 0$. These would be unique modes except in the unusual case where $k=2$ and $n=3$ (fitting a simple linear model to three points) which is bimodal at 0 and 1.
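A quick numerical check of the mode formula $\frac{k-3}{n-5}$ for one of the concave cases (arbitrary choice $n=20$, $k=7$):

```python
import numpy as np
from scipy import stats

n, k = 20, 7                            # k > 3 and n > k + 2, so a unique interior mode
a, b = (k - 1) / 2, (n - k) / 2
x = np.linspace(0.0, 1.0, 200001)
numeric_mode = x[np.argmax(stats.beta.pdf(x, a, b))]
print(numeric_mode, (k - 3) / (n - 5))  # both about 0.2667
```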
Mean
The question asked about the mode, but the mean of $R^2$ under the null is also interesting - it has the remarkably simple form $\frac{k-1}{n-1}$. For a fixed sample size it increases in arithmetic progression as more regressors are added to the model, until the mean value is 1 when $k=n$. The mean of a Beta distribution is $\frac{\alpha}{\alpha+\beta}$ so such an arithmetic progression was inevitable from our earlier observation that, for fixed $n$, the sum $\alpha+\beta$ is constant but $\alpha$ increases by 0.5 for each regressor added to the model.
$$\frac{\alpha}{\alpha+\beta} = \frac{(k-1)/2}{(k-1)/2 + (n-k)/2} = \frac{k-1}{n-1}$$
Code for plots
require(grid)     # unit()
require(dplyr)    # mutate()
require(ggplot2)  # ggplot() and friends
nlist <- 3:9 #change here which n to plot
klist <- 2:8 #change here which k to plot
totaln <- length(nlist)
totalk <- length(klist)
df <- data.frame(
x = rep(seq(0, 1, length.out = 100), times = totaln * totalk),
k = rep(klist, times = totaln, each = 100),
n = rep(nlist, each = totalk * 100)
)
df <- mutate(df,
kname = paste("k =", k),
nname = paste("n =", n),
a = (k-1)/2,
b = (n-k)/2,
density = dbeta(x, (k-1)/2, (n-k)/2),
groupcol = ifelse(x < 0.5,
ifelse(a < 1, "below 1", ifelse(a ==1, "equals 1", "more than 1")),
ifelse(b < 1, "below 1", ifelse(b ==1, "equals 1", "more than 1")))
)
g <- ggplot(df, aes(x, density)) +
geom_line(size=0.8) + geom_area(aes(group=groupcol, fill=groupcol)) +
scale_fill_brewer(palette="Set1") +
facet_grid(nname ~ kname) +
ylab("probability density") + theme_bw() +
labs(x = expression(R^{2}), fill = expression(alpha~(left)~beta~(right))) +
theme(panel.margin = unit(0.6, "lines"),
legend.title=element_text(size=20),
legend.text=element_text(size=20),
legend.background = element_rect(colour = "black"),
legend.position = c(1, 1), legend.justification = c(1, 1))
df2 <- data.frame(
k = rep(klist, times = totaln),
n = rep(nlist, each = totalk),
x = 0.5,
ymean = 7.5,
ymode = 5,
ysd = 2.5
)
df2 <- mutate(df2,
kname = paste("k =", k),
nname = paste("n =", n),
a = (k-1)/2,
b = (n-k)/2,
meanR2 = ifelse(k > n, NaN, a/(a+b)),
modeR2 = ifelse((a>1 & b>=1) | (a>=1 & b>1), (a-1)/(a+b-2),
ifelse(a<1 & b>=1 & n>=k, 0, ifelse(a>=1 & b<1 & n>=k, 1, NaN))),
sdR2 = ifelse(k > n, NaN, sqrt(a*b/((a+b)^2 * (a+b+1)))),
meantext = ifelse(is.nan(meanR2), "", paste("Mean =", round(meanR2,3))),
modetext = ifelse(is.nan(modeR2), "", paste("Mode =", round(modeR2,3))),
sdtext = ifelse(is.nan(sdR2), "", paste("SD =", round(sdR2,3)))
)
g <- g + geom_text(data=df2, aes(x, ymean, label=meantext)) +
geom_text(data=df2, aes(x, ymode, label=modetext)) +
geom_text(data=df2, aes(x, ysd, label=sdtext))
print(g)
|
What is the distribution of $R^2$ in linear regression under the null hypothesis? Why is its mode no
|
I won't rederive the $\mathrm{Beta}(\frac{k-1}{2}, \, \frac{n-k}{2})$ distribution in @Alecos's excellent answer (it's a standard result, see here for another nice discussion) but I want to fill in mo
|
What is the distribution of $R^2$ in linear regression under the null hypothesis? Why is its mode not at zero when $k>3$?
I won't rederive the $\mathrm{Beta}(\frac{k-1}{2}, \, \frac{n-k}{2})$ distribution in @Alecos's excellent answer (it's a standard result, see here for another nice discussion) but I want to fill in more details about the consequences! Firstly, what does the null distribution of $R^2$ look like for a range of values of $n$ and $k$? The graph in @Alecos's answer is quite representative of what occurs in practical multiple regressions, but sometimes insight is gleaned more easily from smaller cases. I've included the mean, mode (where it exists) and standard deviation. The graph/table deserves a good eyeball: best viewed at full-size. I could have included less facets but the pattern would have been less clear; I have appended R code so that readers can experiment with different subsets of $n$ and $k$.
Values of shape parameters
The graph's colour scheme indicates whether each shape parameter is less than one (red), equal to one (blue), or more than one (green). The left-hand side shows the value of $\alpha$ while $\beta$ is on the right. Since $\alpha = \frac{k-1}{2}$, its value increases in arithmetic progression by a common difference of $\frac{1}{2}$ as we move right from column to column (add a regressor to our model) whereas, for fixed $n$, $\beta = \frac{n-k}{2}$ decreases by $\frac{1}{2}$. The total $\alpha + \beta = \frac{n-1}{2}$ is fixed for each row (for a given sample size). If instead we fix $k$ and move down the column (increase sample size by 1), then $\alpha$ stays constant and $\beta$ increases by $\frac{1}{2}$. In regression terms, $\alpha$ is half the number of regressors included in the model, and $\beta$ is half the residual degrees of freedom. To determine the shape of the distribution we are particularly interested in where $\alpha$ or $\beta$ equal one.
The algebra is straightforward for $\alpha$: we have $\frac{k-1}{2}=1$ so $k=3$. This is indeed the only column of the facet plot that's filled blue on the left. Similarly $\alpha < 1$ for $k<3$ (the $k=2$ column is red on the left) and $\alpha > 1$ for $k>3$ (from the $k=4$ column onwards, the left side is green).
For $\beta=1$ we have $\frac{n-k}{2}=1$ hence $k=n-2$. Note how these cases (marked with a blue right-hand side) cut a diagonal line across the facet plot. For $\beta > 1$ we obtain $k < n - 2$ (the graphs with a green right-hand side lie to the left of the diagonal line). For $\beta < 1$ we need $k > n - 2$, which involves only the right-most cases on my graph: at $n=k$ we have $\beta=0$ and the distribution is degenerate, but $n=k+1$ where $\beta = \frac{1}{2}$ is plotted (right side in red).
Since the PDF is $f(x;\,\alpha,\,\beta) \propto x^{\alpha-1} (1-x)^{\beta-1}$, it is clear that if (and only if) $\alpha<1$ then $f(x) \to \infty$ as $x \to 0$. We can see this in the graph: when the left side is shaded red, observe the behaviour at 0. Similarly when $\beta<1$ then $f(x) \to \infty$ as $x \to 1$. Look where the right side is red!
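This endpoint behaviour is easy to check numerically; a hedged Python sketch (the particular $(k, n)$ pairs are just examples picked from the grid):

```python
from scipy import stats

# alpha < 1 (e.g. k = 2, n = 9): the density blows up as x -> 0
left = stats.beta((2 - 1) / 2, (9 - 2) / 2)        # Beta(0.5, 3.5)
assert left.pdf(1e-4) > left.pdf(1e-2) > left.pdf(1e-1)

# beta < 1 (e.g. k = 8, n = 9): the density blows up as x -> 1
right = stats.beta((8 - 1) / 2, (9 - 8) / 2)       # Beta(3.5, 0.5)
assert right.pdf(1 - 1e-4) > right.pdf(1 - 1e-2) > right.pdf(1 - 1e-1)
```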
Symmetries
One of the most eye-catching features of the graph is the level of symmetry, but when the Beta distribution is involved, this shouldn't be surprising!
The Beta distribution itself is symmetric if $\alpha = \beta$. For us this occurs if $n = 2k-1$ which correctly identifies the panels $(k=2, n=3)$, $(k=3, n=5)$, $(k=4, n=7)$ and $(k=5, n=9)$. The extent to which the distribution is symmetric across $R^2 = 0.5$ depends on how many regressor variables we include in the model for that sample size. If $k = \frac{n+1}{2}$ the distribution of $R^2$ is perfectly symmetric about 0.5; if we include fewer variables than that it becomes increasingly asymmetric and the bulk of the probability mass shifts closer to $R^2 = 0$; if we include more variables then it shifts closer to $R^2 = 1$. Remember that $k$ includes the intercept in its count, and that we are working under the null, so the regressor variables should have coefficient zero in the correctly specified model.
There is also an obvious symmetry between distributions for any given $n$, i.e. any row in the facet grid. For example, compare $(k=3, n=9)$ with $(k=7, n=9)$. What's causing this? Recall that the distribution of $\mathrm{Beta}(\alpha, \beta)$ is the mirror image of $\mathrm{Beta}(\beta, \alpha)$ across $x=0.5$. Now we had $\alpha_{k,n} = \frac{k-1}{2}$ and $\beta_{k,n} = \frac{n-k}{2}$. Consider $k'=n-k+1$ and we find:
$$\alpha_{k',n} = \frac{(n-k+1)-1}{2} = \frac{n-k}{2} = \beta_{k,n}$$
$$\beta_{k',n} = \frac{n-(n-k+1)}{2} = \frac{k-1}{2} = \alpha_{k,n}$$
So this explains the symmetry as we vary the number of regressors in the model for a fixed sample size. It also explains the distributions that are themselves symmetric as a special case: for them, $k' = k$ so they are obliged to be symmetric with themselves!
This tells us something we might not have guessed about multiple regression: for a given sample size $n$, and assuming no regressors have a genuine relationship with $Y$, the $R^2$ for a model using $k-1$ regressors plus an intercept has the same distribution as $1 - R^2$ does for a model with $k-1$ residual degrees of freedom remaining.
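This mirror identity can be confirmed numerically (a Python sketch; $n=9$ and the evaluation grid are arbitrary):

```python
import numpy as np
from scipy import stats

n = 9
x = np.linspace(0.01, 0.99, 25)
for k in range(2, n):
    k_mirror = n - k + 1                       # the "mirrored" model size k'
    f = stats.beta((k - 1) / 2, (n - k) / 2).pdf(x)
    g = stats.beta((k_mirror - 1) / 2, (n - k_mirror) / 2).pdf(1 - x)
    # density of R^2 at x for k equals density at 1-x for the mirror model
    assert np.allclose(f, g)
```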
Special distributions
When $k=n$ we have $\beta=0$, which isn't a valid parameter. However, as $\beta \to 0$ the distribution becomes degenerate with a spike such that $\mathsf{P}(R^2 = 1)=1$. This is consistent with what we know about a model with as many parameters as data points - it achieves perfect fit. I haven't drawn the degenerate distribution on my graph but did include the mean, mode and standard deviation.
When $k=2$ and $n=3$ we obtain $\mathrm{Beta}(\frac{1}{2}, \, \frac{1}{2})$ which is the arcsine distribution. This is symmetric (since $\alpha = \beta$) and bimodal (0 and 1). Since this is the only case where both $\alpha < 1$ and $\beta < 1$ (marked red on both sides), it is our only distribution which goes to infinity at both ends of the support.
The $\mathrm{Beta}(1, \, 1)$ distribution is the only Beta distribution to be rectangular (uniform). All values of $R^2$ from 0 to 1 are equally likely. The only combination of $k$ and $n$ for which $\alpha = \beta =1$ occurs is $k=3$ and $n=5$ (marked blue on both sides).
The previous special cases are of limited applicability but the case $\alpha > 1$ and $\beta=1$ (green on left, blue on right) is important. Now $f(x;\,\alpha,\,\beta) \propto x^{\alpha-1} (1-x)^{\beta-1} = x^{\alpha-1}$ so we have a power-law distribution on [0, 1]. Of course it's unlikely we'd perform a regression with $k=n-2$ and $k>3$, which is when this situation occurs. But by the previous symmetry argument, or some trivial algebra on the PDF, when $k=3$ and $n > 5$ (the common procedure of multiple regression with two regressors and an intercept on a non-trivial sample size), $R^2$ will follow a reflected power-law distribution on [0, 1] under $H_0$. This corresponds to $\alpha=1$ and $\beta>1$ so is marked blue on left, green on right.
You may also have noticed the triangular distributions at $(k=5,n=7)$ and its reflection $(k=3,n=7)$. We can recognise from their $\alpha$ and $\beta$ that these are just special cases of the power-law and reflected power-law distributions where the power is $2-1=1$.
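The power-law and reflected power-law forms can be verified directly against the Beta density (a Python sketch; $n=8$ is an arbitrary example):

```python
import numpy as np
from scipy import stats

x = np.linspace(0.05, 0.95, 10)

# k = n - 2 with k > 3 (say k = 6, n = 8): beta = 1, a pure power law
a = (6 - 1) / 2
assert np.allclose(stats.beta(a, 1).pdf(x), a * x ** (a - 1))

# the mirrored k = 3 case (n = 8): alpha = 1, a reflected power law
b = (8 - 3) / 2
assert np.allclose(stats.beta(1, b).pdf(x), b * (1 - x) ** (b - 1))
```

This works because $B(\alpha, 1) = 1/\alpha$, so $\mathrm{Beta}(\alpha, 1)$ has density exactly $\alpha x^{\alpha-1}$.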
Mode
If $\alpha>1$ and $\beta>1$, all green in the plot, $f(x; \, \alpha, \, \beta)$ is unimodal with $f(0)=f(1)=0$, and the Beta distribution has a unique mode $\frac{\alpha-1}{\alpha+\beta-2}$. Putting these in terms of $k$ and $n$, the condition becomes $k>3$ and $n>k+2$ while the mode is $\frac{k-3}{n-5}$.
All other cases have been dealt with above. If we relax the inequality to allow $\beta=1$, then we include the (green-blue) power-law distributions with $k=n-2$ and $k>3$ (equivalently, $n>5$). These cases clearly have mode 1, which actually agrees with the previous formula since $\frac{(n-2)-3}{n-5}=1$. If instead we allowed $\alpha=1$ but still demanded $\beta>1$, we'd find the (blue-green) reflected power-law distributions with $k=3$ and $n>5$. Their mode is 0, which agrees with $\frac{3-3}{n-5}=0$. However, if we relaxed both inequalities simultaneously to allow $\alpha=\beta=1$, we'd find the (all blue) uniform distribution with $k=3$ and $n=5$, which does not have a unique mode. Moreover the previous formula can't be applied in this case, since it would return the indeterminate form $\frac{3-3}{5-5}=\frac{0}{0}$.
When $n=k$ we get a degenerate distribution with mode 1. When $\beta < 1$ (in regression terms, $n=k+1$ so there is only one residual degree of freedom) then $f(x) \to \infty$ as $x \to 1$, and when $\alpha < 1$ (in regression terms, $k=2$ so a simple linear model with intercept and one regressor) then $f(x) \to \infty$ as $x \to 0$. These would be unique modes except in the unusual case where $k=2$ and $n=3$ (fitting a simple linear model to three points) which is bimodal at 0 and 1.
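The mode formula $\frac{k-3}{n-5}$ can be checked against a brute-force maximisation of the density (a Python sketch; the $(k, n)$ cases are arbitrary ones satisfying $k>3$ and $n>k+2$):

```python
import numpy as np
from scipy import stats

x = np.linspace(1e-3, 1 - 1e-3, 200001)
for k, n in [(4, 9), (5, 9), (6, 9), (5, 12)]:
    pdf = stats.beta((k - 1) / 2, (n - k) / 2).pdf(x)
    numeric_mode = x[np.argmax(pdf)]            # grid-based argmax of the density
    assert abs(numeric_mode - (k - 3) / (n - 5)) < 1e-3
```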
Mean
The question asked about the mode, but the mean of $R^2$ under the null is also interesting - it has the remarkably simple form $\frac{k-1}{n-1}$. For a fixed sample size it increases in arithmetic progression as more regressors are added to the model, until the mean value is 1 when $k=n$. The mean of a Beta distribution is $\frac{\alpha}{\alpha+\beta}$ so such an arithmetic progression was inevitable from our earlier observation that, for fixed $n$, the sum $\alpha+\beta$ is constant but $\alpha$ increases by 0.5 for each regressor added to the model.
$$\frac{\alpha}{\alpha+\beta} = \frac{(k-1)/2}{(k-1)/2 + (n-k)/2} = \frac{k-1}{n-1}$$
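The mean formula holds exactly for every cell of the facet grid, which is quick to confirm (a Python sketch over the same $n$ and $k$ ranges as the plot):

```python
from scipy import stats

for n in range(3, 10):
    for k in range(2, n):          # all non-degenerate models in the grid
        a, b = (k - 1) / 2, (n - k) / 2
        # Beta mean a/(a+b) should equal (k-1)/(n-1)
        assert abs(stats.beta(a, b).mean() - (k - 1) / (n - 1)) < 1e-12
```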
Code for plots
require(grid)
require(dplyr)
require(ggplot2) # needed below for ggplot(), facet_grid(), themes, etc.
nlist <- 3:9 #change here which n to plot
klist <- 2:8 #change here which k to plot
totaln <- length(nlist)
totalk <- length(klist)
df <- data.frame(
x = rep(seq(0, 1, length.out = 100), times = totaln * totalk),
k = rep(klist, times = totaln, each = 100),
n = rep(nlist, each = totalk * 100)
)
df <- mutate(df,
kname = paste("k =", k),
nname = paste("n =", n),
a = (k-1)/2,
b = (n-k)/2,
density = dbeta(x, (k-1)/2, (n-k)/2),
groupcol = ifelse(x < 0.5,
ifelse(a < 1, "below 1", ifelse(a ==1, "equals 1", "more than 1")),
ifelse(b < 1, "below 1", ifelse(b ==1, "equals 1", "more than 1")))
)
g <- ggplot(df, aes(x, density)) +
geom_line(size=0.8) + geom_area(aes(group=groupcol, fill=groupcol)) +
scale_fill_brewer(palette="Set1") +
facet_grid(nname ~ kname) +
ylab("probability density") + theme_bw() +
labs(x = expression(R^{2}), fill = expression(alpha~(left)~beta~(right))) +
theme(panel.spacing = unit(0.6, "lines"),
legend.title=element_text(size=20),
legend.text=element_text(size=20),
legend.background = element_rect(colour = "black"),
legend.position = c(1, 1), legend.justification = c(1, 1))
df2 <- data.frame(
k = rep(klist, times = totaln),
n = rep(nlist, each = totalk),
x = 0.5,
ymean = 7.5,
ymode = 5,
ysd = 2.5
)
df2 <- mutate(df2,
kname = paste("k =", k),
nname = paste("n =", n),
a = (k-1)/2,
b = (n-k)/2,
meanR2 = ifelse(k > n, NaN, a/(a+b)),
modeR2 = ifelse((a>1 & b>=1) | (a>=1 & b>1), (a-1)/(a+b-2),
ifelse(a<1 & b>=1 & n>=k, 0, ifelse(a>=1 & b<1 & n>=k, 1, NaN))),
sdR2 = ifelse(k > n, NaN, sqrt(a*b/((a+b)^2 * (a+b+1)))),
meantext = ifelse(is.nan(meanR2), "", paste("Mean =", round(meanR2,3))),
modetext = ifelse(is.nan(modeR2), "", paste("Mode =", round(modeR2,3))),
sdtext = ifelse(is.nan(sdR2), "", paste("SD =", round(sdR2,3)))
)
g <- g + geom_text(data=df2, aes(x, ymean, label=meantext)) +
geom_text(data=df2, aes(x, ymode, label=modetext)) +
geom_text(data=df2, aes(x, ysd, label=sdtext))
print(g)
|
8,132
|
When should I *not* use R's nlm function for MLE?
|
There are a number of general-purpose optimization routines in base R that I'm aware of: optim, nlminb, nlm and constrOptim (which handles linear inequality constraints, and calls optim under the hood). Here are some things that you might want to consider in choosing which one to use.
optim can use a number of different algorithms including conjugate gradient, Newton, quasi-Newton, Nelder-Mead and simulated annealing. The last two don't need gradient information and so can be useful if gradients aren't available or not feasible to calculate (but are likely to be slower and require more parameter fine-tuning, respectively). It also has an option to return the computed Hessian at the solution, which you would need if you want standard errors along with the solution itself.
nlminb uses a quasi-Newton algorithm that fills the same niche as the "L-BFGS-B" method in optim. In my experience it seems a bit more robust than optim in that it's more likely to return a solution in marginal cases where optim will fail to converge, although that's likely problem-dependent. It has the nice feature, if you provide an explicit gradient function, of doing a numerical check of its values at the solution. If these values don't match those obtained from numerical differencing, nlminb will give a warning; this helps to ensure you haven't made a mistake in specifying the gradient (easy to do with complicated likelihoods).
nlm only uses a Newton algorithm. This can be faster than other algorithms in the sense of needing fewer iterations to reach convergence, but has its own drawbacks. It's more sensitive to the shape of the likelihood, so if it's strongly non-quadratic, it may be slower or you may get convergence to a false solution. The Newton algorithm also uses the Hessian, and computing that can be slow enough in practice that it more than cancels out any theoretical speedup.
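The same trade-offs (gradient-free simplex vs quasi-Newton) show up in other ecosystems too. As a loose analogy only, not the R API, here is a Python sketch using `scipy.optimize.minimize` on a normal MLE, where every method should agree on the optimum but differ in cost and robustness:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=500)

def nll(params):
    """Negative log-likelihood of a normal model (sigma on the log scale)."""
    mu, log_sigma = params
    return -np.sum(stats.norm.logpdf(data, mu, np.exp(log_sigma)))

results = {}
for method in ["Nelder-Mead", "BFGS", "L-BFGS-B"]:
    res = optimize.minimize(nll, x0=np.array([0.0, 0.0]), method=method)
    results[method] = res.x
    # every routine should land on the same MLE: mu_hat = the sample mean
    assert abs(res.x[0] - data.mean()) < 1e-2
```

The log-sigma parametrisation plays the role of a bound constraint here: it keeps the scale positive without needing a box-constrained method.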
|
8,133
|
When should I *not* use R's nlm function for MLE?
|
When to use and not to use any particular method of maximization depends to a great extent on the type of data you have. nlm will work just fine if the likelihood surface isn't particularly "rough" and is everywhere differentiable. nlminb provides a way to constrain parameter values to particular bounding boxes. optim, which is probably the most-used optimizer, provides a few different optimization routines; for example, BFGS, L-BFGS-B, and simulated annealing (via the SANN option), the latter of which might be handy if you have a difficult optimization problem. There are also a number of optimizers available on CRAN. rgenoud, for instance, provides a genetic algorithm for optimization. DEoptim uses a different genetic optimization routine. Genetic algorithms can be slow to converge, but are usually guaranteed to converge (in time) even when there are discontinuities in the likelihood. I don't know about DEoptim, but rgenoud is set up to use snow for parallel processing, which helps somewhat.
So, a probably somewhat unsatisfactory answer is that you should use nlm or any other optimizer if it works for the data you have. If you have a well-behaved likelihood, any of the routines provided by optim or nlm will give you the same result. Some may be faster than others, which may or may not matter, depending on the size of the dataset, etc. As for the number of parameters these routines can handle, I don't know, though it's probably quite a few. Of course, the more parameters you have, the more likely you are to run into problems with convergence.
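For a concrete (if loose) analogue of the evolutionary optimizers mentioned above, scipy's `differential_evolution` plays a role similar to DEoptim; here is a Python sketch, for illustration only, on a deliberately multimodal objective where a plain Newton step from a bad start could stall in a local minimum:

```python
import numpy as np
from scipy import optimize

def rough(x):
    """A multimodal 'rough' objective (Rastrigin-like); global minimum 0 at x = 0."""
    return np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x))

bounds = [(-5.0, 5.0)] * 2
res = optimize.differential_evolution(rough, bounds, seed=0)
assert res.fun < 1.0 and np.all(np.abs(res.x) < 0.1)
```

As with genetic algorithms generally, this needs only function values (no gradients), at the cost of many more evaluations than a Newton-type method on a smooth likelihood.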
|
8,134
|
Why are Jeffreys priors considered noninformative?
|
It's considered noninformative because of the parameterization invariance. You seem to have the impression that a uniform (constant) prior is noninformative. Sometimes it is, sometimes it isn't.
What happens with Jeffreys' prior under a transformation is that the Jacobian from the transformation gets sucked into the original Fisher information, which ends up giving you the Fisher information under the new parameterization. No magic (in the mechanics at least), just a little calculus and linear algebra.
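To make the "Jacobian gets sucked into the Fisher information" point concrete, here is a small numerical check (a Python sketch; the Bernoulli model and the log-odds transformation are my choice of illustration):

```python
import numpy as np

# Bernoulli model: Fisher information I(theta) = 1 / (theta * (1 - theta)),
# so the Jeffreys prior is proportional to 1 / sqrt(theta * (1 - theta)).
theta = np.linspace(0.05, 0.95, 50)
jeffreys_theta = np.sqrt(1.0 / (theta * (1.0 - theta)))

# Reparametrize to log-odds phi = log(theta/(1-theta)); dtheta/dphi = theta(1-theta).
jacobian = theta * (1.0 - theta)

# Route 1: transform the theta-prior as a density (multiply by the Jacobian).
via_change_of_variables = jeffreys_theta * jacobian

# Route 2: compute Jeffreys directly in phi: I(phi) = I(theta) * (dtheta/dphi)^2.
via_fisher_in_phi = np.sqrt((1.0 / (theta * (1.0 - theta))) * jacobian ** 2)

assert np.allclose(via_change_of_variables, via_fisher_in_phi)
```

Both routes give $\sqrt{\theta(1-\theta)}$ as the prior in the log-odds parametrisation: exactly the invariance property.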
|
8,135
|
Why are Jeffreys priors considered noninformative?
|
The Jeffreys prior coincides with the Bernardo reference prior for one-dimensional parameter space (and "regular" models). Roughly speaking, this is the prior for which the expected Kullback-Leibler divergence between the prior and the posterior is maximal. This quantity represents the amount of information brought by the data. This is why the prior is considered to be uninformative: this is the one for which the data brings the maximal amount of information.
By the way, I don't know whether Jeffreys was aware of this characterization of his prior.
|
8,136
|
Why are Jeffreys priors considered noninformative?
|
I'd say it isn't absolutely non-informative, but minimally informative. It encodes the (rather weak) prior knowledge that you know your prior state of knowledge doesn't depend on its parameterisation (e.g. the units of measurement). If your prior state of knowledge was precisely zero, you wouldn't know that your prior was invariant to such transformations.
|
8,137
|
Why are Jeffreys priors considered noninformative?
|
This is an old but interesting topic. I recently thought about this and developed a take that I would like to share.
First off, the problem with flat priors as uninformative priors is that this idea is rooted in the way we would guess a number; not the way the data guess a number in likelihood-based inference.
We can understand this by comparing two binomial random variables:
\begin{eqnarray}
X &\sim& Bi(x|n=10,\theta=.5)\\
Y &\sim& Bi(y|n=10,\theta=.9)
\end{eqnarray}
Clearly, E[X]=5 and E[Y]=9.
The likelihood of finding E[Y]=9 under the distribution of X is $\approx 0.01$, while the likelihood of finding E[X]=5 under the distribution of Y is $\approx 0.0015$.
This fact is independent of parametrization (odds, log odds). For example, if $\phi$ are odds, sample odds of .9/.1=9 do not give as much evidence against $H_0: \phi=1$ as sample odds of 1 give against $H_0: \phi=9$.
Hence, finding x=5 is much better at excluding $H_0:\theta=.9$ than finding x=9 is at excluding $H_0:\theta=.5$ (just considering one random variable $X \sim Bi(x|n,\theta)$ from now on). More generally, intermediate $x$ exclude extreme $\theta$ very well, but extreme $x$ do not exclude intermediate $\theta$ so well. This notion is formalized in Fisher's information, the negative of the expected curvature of the log likelihood at a given $\theta$. The expected curvature of the log likelihood for binomial random variables equals
\begin{equation}
\frac{-n}{\theta(1-\theta)}.
\end{equation}
Referring back to the example above, it is readily verified that the curvature is equal to -4n at $\theta=.5$, but $\approx-11n$ at $\theta=.9$. More curvature means that fewer values of X are compatible with that value of $\theta$, so it is easier to find evidence against that value of $\theta$ (implying lower posterior density in the Bayesian setting).
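The quoted numbers are easy to reproduce (a Python sketch; `scipy.stats.binom` supplies the pmf):

```python
from scipy import stats

n = 10
# the two cross-likelihoods quoted above
assert abs(stats.binom.pmf(9, n, 0.5) - 0.01) < 1e-3     # Bi(9 | 10, .5)
assert abs(stats.binom.pmf(5, n, 0.9) - 0.0015) < 1e-4   # Bi(5 | 10, .9)

# the expected curvature -n/(theta*(1-theta)) at the two parameter values
assert -n / (0.5 * 0.5) == -4 * n                        # exactly -4n
assert abs(-n / (0.9 * 0.1) / n + 11.11) < 0.01          # approximately -11n
```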
Fisher's information is different for different parameterizations, but that's because it is on a different scale: the curvature may be different, but so is the distance between points. The net result is invariance under a transformation.
The key point of using Jeffreys' prior then seems to be that if we do not want to help the data making its decision, we should give less weight to points that are hard to find evidence against, and more weight to points that are easy to find evidence against (e.g., it would be unjust to give a lot of weight to $\theta=0.5$ because it is hard to exclude this point from the posterior anyway). We do so by taking the prior distribution proportional to the square root of the Fisher information, if we parametrize the prior over $\theta$ (the square root is what makes the construction invariant, since the Fisher information picks up the squared Jacobian under a reparametrization).
In the Binomial case, this gives a Beta distribution with parameters .5 and .5. This distribution gives less weight to intermediate values of $\theta$ (values close to 0.5, which are hard to throw out of the posterior anyway) and more weight to extreme values of $\theta$ (values close to 0 or 1, which are easy to throw out of the posterior).
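The claim that this prior is $\mathrm{Beta}(.5, .5)$ is quick to check numerically (a Python sketch; the grid of $\theta$ values is arbitrary):

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.05, 0.95, 50)
sqrt_fisher = np.sqrt(1.0 / (theta * (1.0 - theta)))   # sqrt of per-trial Fisher info
jeffreys = stats.beta(0.5, 0.5).pdf(theta)

ratio = sqrt_fisher / jeffreys
assert np.allclose(ratio, ratio[0])          # proportional: the ratio is constant
assert abs(ratio[0] - np.pi) < 1e-9          # the normalising constant is pi
```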
From here, I see two ways forward. The first is to reject the notion of uninformative priors altogether because the Bayesian posterior is still different from the frequentist likelihood. The second is to say that by using the Jeffreys prior we finally have a method under which all values of $\theta$ are equally likely before we have seen the data (under frequentist likelihood-based inference, they are not). If I read Jeffreys' 1946 paper, it seems to be all about invariance under transformations. I can see how that is a necessary condition for a prior to be uninformative, but I'm not sure about its sufficiency. I'm not aware of Jeffreys wishing to correct a deficiency of likelihood-based frequentist inference (granted, I haven't looked very much), but that does seem to be the corollary. Take your pick.
|
8,138
|
Wikipedia entry on likelihood seems ambiguous
|
I think this is largely unnecessary splitting hairs.
Conditional probability $P(x\mid y)\equiv P(X=x \mid Y=y)$ of $x$ given $y$ is defined for two random variables $X$ and $Y$ taking values $x$ and $y$. But we can also talk about probability $P(x\mid\theta)$ of $x$ given $\theta$ where $\theta$ is not a random variable but a parameter.
Note that in both cases the same term "given" and the same notation $P(\cdot\mid\cdot)$ can be used. There is no need to invent different notations. Moreover, what is called "parameter" and what is called "random variable" can depend on your philosophy, but the math does not change.
The first quote from Wikipedia states that $\mathcal{L}(\theta \mid x) = P(x \mid \theta)$ by definition. Here it is assumed that $\theta$ is a parameter. The second quote says that $\mathcal{L}(\theta \mid x)$ is not a conditional probability. This means that it is not a conditional probability of $\theta$ given $x$; and indeed it cannot be, because $\theta$ is assumed to be a parameter here.
In the context of Bayes theorem $$P(a\mid b)=\frac{P(b\mid a)P(a)}{P(b)},$$ both $a$ and $b$ are random variables. But we can still call $P(b\mid a)$ "likelihood" (of $a$), and now it is also a bona fide conditional probability (of $b$). This terminology is standard in Bayesian statistics. Nobody says it is something "similar" to the likelihood; people simply call it the likelihood.
Note 1: In the last paragraph, $P(b\mid a)$ is obviously a conditional probability of $b$. As a likelihood $\mathcal L(a\mid b)$ it is seen as a function of $a$; but it is not a probability distribution (or conditional probability) of $a$! Its integral over $a$ does not necessarily equal $1$. (Whereas its integral over $b$ does.)
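Note 1 is easy to verify numerically. Here is a small Python sketch (the binomial model and the numbers are made up purely for illustration):

```python
from math import comb

n, k = 10, 7  # hypothetical data: 7 successes in 10 trials

def likelihood(theta):
    # Binomial probability of the observed data, read as a function of theta
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Summing P(x | theta) over all possible data x gives 1...
theta0 = 0.3
total_over_x = sum(comb(n, j) * theta0**j * (1 - theta0)**(n - j)
                   for j in range(n + 1))

# ...but integrating L(theta | x) over theta does not: here it equals
# 1/(n+1), not 1 (midpoint Riemann sum)
m = 100_000
integral_over_theta = sum(likelihood((i + 0.5) / m) for i in range(m)) / m

print(total_over_x)         # 1.0
print(integral_over_theta)  # ~0.0909, i.e. 1/11
```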
Note 2: Sometimes likelihood is defined up to an arbitrary proportionality constant, as emphasized by @MichaelLew (because most of the time people are interested in likelihood ratios). This can be useful, but is not always done and is not essential.
See also What is the difference between "likelihood" and "probability"? and in particular @whuber's answer there.
I fully agree with @Tim's answer in this thread too (+1).
|
8,139
|
Wikipedia entry on likelihood seems ambiguous
|
You already got two nice answers, but since it still seems unclear to you, let me provide one more. Likelihood is defined as
$$ \mathcal{L}(\theta|X) = P(X|\theta) = \prod_i f_\theta(x_i) $$
so we have the likelihood of some parameter value $\theta$ given the data $X$. It is equal to the product of the probability mass (discrete case) or density (continuous case) functions $f$ of $X$ parametrized by $\theta$. Likelihood is a function of the parameter given the data. Notice that $\theta$ is a parameter that we are optimizing, not a random variable, so it does not have any probabilities assigned to it. This is why Wikipedia states that using conditional probability notation may be ambiguous, since we are not conditioning on any random variable. On the other hand, in the Bayesian setting $\theta$ is a random variable and does have a distribution, so we can work with it as with any other random variable and use Bayes theorem to calculate posterior probabilities. Bayesian likelihood is still likelihood, since it tells us about the likelihood of the data given the parameter; the only difference is that the parameter is considered a random variable.
If you know programming, you can think of the likelihood function as an overloaded function. Some programming languages allow a function to work differently when called with different parameter types. If you think of likelihood like this, then by default it takes as an argument some parameter value and returns the likelihood of the data given this parameter. On the other hand, you can use such a function in the Bayesian setting, where the parameter is a random variable; this leads to basically the same output, but it can be understood as a conditional probability, since we are conditioning on a random variable. In both cases the function works the same; you just use and understand it a little differently.
// likelihood "as" overloaded function
Default Likelihood(Numeric theta, Data X) {
return f(X, theta); // returns likelihood, not probability
}
Bayesian Likelihood(RandomVariable theta, Data X) {
return f(X, theta); // since theta is r.v., the output can be
// understood as conditional probability
}
Moreover, you will rarely find Bayesians who write Bayes theorem as
$$ P(\theta|X) \propto \mathcal{L}(\theta|X) P(\theta) $$
...this would be very confusing. First, you would have $\theta|X$ on both sides of the equation, and it wouldn't make much sense. Second, the posterior is the probability of $\theta$ given the data (i.e. the thing that you would like to know in the likelihoodist framework, but cannot have when $\theta$ is not a random variable). Third, since $\theta$ is a random variable, we have and can write it as a conditional probability. The $\mathcal{L}$-notation is generally reserved for the likelihoodist setting. The name likelihood is used by convention in both approaches to denote a similar thing: how the probability of observing such data changes given your model and the parameter.
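To make the "parameter that we are optimizing" point concrete, here is a minimal Python sketch (made-up data; a crude grid search stands in for a proper optimizer):

```python
from math import comb, log

n, k = 10, 7  # hypothetical data: 7 successes in 10 trials

def log_likelihood(theta):
    # log L(theta | data) for a binomial model
    return log(comb(n, k)) + k * log(theta) + (n - k) * log(1 - theta)

# theta is not a random variable here; we simply scan candidate values
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=log_likelihood)

print(theta_hat)  # 0.7, i.e. the maximum-likelihood estimate k/n
```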
|
8,140
|
Wikipedia entry on likelihood seems ambiguous
|
There are several aspects of the common descriptions of likelihood that are imprecise or omit detail in a way that engenders confusion. The Wikipedia entry is a good example.
First, likelihood cannot in general be equal to the probability of the data given the parameter value, as likelihood is only defined up to a proportionality constant. Fisher was explicit about that when he first formalised likelihood (Fisher, 1922). The reason seems to be that there is no constraint on the integral (or sum) of a likelihood function, and the probability of observing data $x$ within a statistical model given any value of the parameter(s) is strongly affected by the precision of the data values and the granularity of specification of the parameter values.
Second, it is more helpful to think about the likelihood function than individual likelihoods. The likelihood function is a function of the model parameter value(s), as is obvious from a graph of a likelihood function. Such a graph also makes it easy to see that the likelihoods allow a ranking of the various values of the parameter(s) according to how well the model predicts the data when set to those parameter values. Exploration of likelihood functions makes the roles of the data and the parameter values much more clear, in my opinion, than can cogitation of the various formulas given in the original question.
Using a ratio of pairs of likelihoods within a likelihood function as the relative degree of support offered by the observed data for the parameter values (within the model) gets around the problem of unknown proportionality constants, because those constants cancel in the ratio. It is important to note that the constants would not necessarily cancel in a ratio of likelihoods that come from separate likelihood functions (i.e. from different statistical models).
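The cancellation of the proportionality constant within a single likelihood function can be seen directly; a small Python sketch with made-up binomial data:

```python
n, k = 10, 7  # hypothetical data

def kernel(theta):
    # likelihood kernel: any constant factor (e.g. the binomial
    # coefficient) is deliberately omitted
    return theta**k * (1 - theta)**(n - k)

c = 123.456  # an arbitrary proportionality constant

# The ratio of likelihoods at two parameter values within the same
# likelihood function is unchanged by c:
r_kernel = kernel(0.7) / kernel(0.4)
r_scaled = (c * kernel(0.7)) / (c * kernel(0.4))

print(r_kernel, r_scaled)  # identical up to floating-point rounding
```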
Finally, it is useful to be explicit about the role of the statistical model because likelihoods are determined by the statistical model as well as the data. If you choose a different model you get a different likelihood function, and you can get a different unknown proportionality constant.
Thus, to answer the original question, likelihoods are not a probability of any sort. They do not obey Kolmogorov's axioms of probability, and they play a different role in statistical support of inference from the roles played by the various types of probability.
Fisher (1922) On the mathematical foundations of statistics http://rsta.royalsocietypublishing.org/content/222/594-604/309
|
8,141
|
Wikipedia entry on likelihood seems ambiguous
|
Wikipedia should have said that $L(\theta)$ is not a conditional probability of $\theta$ being in some specified set, nor a probability density of $\theta$. Indeed, if there are infinitely many values of $\theta$ in the parameter space, you can have
$$
\sum_\theta L(\theta) = \infty,
$$
for example by having $L(\theta)=1$ regardless of the value of $\theta$, and if there is some standard measure $d\theta$ on the parameter space $\Theta$, then in the same way one can have
$$
\int_\Theta L(\theta)\,d\theta =\infty.
$$
An essential point that the article should emphasize is that $L$ is the function
$$
\theta \mapsto P(x\mid\theta) \text{ and NOT } x\mapsto P(x\mid\theta).
$$
|
8,142
|
Wikipedia entry on likelihood seems ambiguous
|
"I read this as: "The likelihood of parameters equaling theta, given
data X = x, (the left-hand-side), is equal to the probability of the
data X being equal to x, given that the parameters are equal to
theta". (Bold is mine for emphasis)."
It's the probability of the set of observations given the parameter is theta. This is perhaps confusing because they write $P(x|\theta)$ but then $\mathcal{L}(\theta|x)$.
The explanation implies that $\theta$ is not a random variable. It could, for example, be a random variable with some prior distribution in a Bayesian setting. The point, however, is that we suppose $\theta$ takes some concrete value and then make statements about the likelihood of our observations. This is because there is only one true value of $\theta$ in whatever system we're interested in.
|
8,143
|
Prerequisites for AIC model comparison
|
You cannot compare the two models, as they do not model the same variable (as you correctly recognise yourself). Nevertheless, AIC should work when comparing both nested and non-nested models.
Just a reminder before we continue: a Gaussian log-likelihood is given by
$$ \log(L(\theta)) =-\frac{|D|}{2}\log(2\pi) -\frac{1}{2} \log(|K|) -\frac{1}{2}(x-\mu)^T K^{-1} (x-\mu), $$
$K$ being the covariance structure of your model, $|D|$ the number of points in your dataset, $\mu$ the mean response and $x$ your dependent variable.
More specifically AIC is calculated to be equal to $2k - 2 \log(L)$, where $k$ is the number of fixed effects in your model and $L$ your likelihood function [1]. It practically compares trade-off between variance ($2k$) and bias ($2\log(L)$) in your modelling assumptions. As such in your case it would compare two different log-likelihood structures when it came to the bias term. That is because when you calculate your log-likelihood practically you look at two terms: a fit term, denoted by $-\frac{1}{2}(x-\mu)^T K^{-1} (x-\mu)$, and a complexity penalization term, denoted by $-\frac{1}{2} \log(|K|)$. Therefore you see that your fit term is completely different between the two models; in the first case you compare the residuals from the raw data and in the other case the residuals of the logged data.
Aside from Wikipedia, AIC is also defined as $|D| \log\left(\frac{RSS}{|D|}\right) + 2k$ [3]; this form makes it even more obvious why different models with different dependent variables are not comparable: the RSS values in the two cases are simply incomparable.
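The two AIC forms can be checked against each other numerically. Here is a sketch with simulated Gaussian data (made-up numbers; $k$ counts only the mean parameters and $\sigma^2$ is profiled out as RSS/|D| — conventions for counting $\sigma^2$ differ, but any such constant cancels when comparing models fitted to the same response):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # made-up data

def gaussian_aics(X, y):
    """Return (2k - 2 logL, n log(RSS/n) + 2k) for an OLS fit with sigma^2 = RSS/n."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    loglik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
    return 2 * k - 2 * loglik, n * np.log(rss / n) + 2 * k

X1 = np.column_stack([np.ones(n)])      # intercept-only model
X2 = np.column_stack([np.ones(n), x])   # intercept + slope

a1_full, a1_short = gaussian_aics(X1, y)
a2_full, a2_short = gaussian_aics(X2, y)

# The two definitions differ by n*(log(2*pi) + 1), identical for both
# models, so the ranking of models is the same either way.
print(a1_full - a1_short, a2_full - a2_short)
```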
Akaike's original paper [4] is actually quite hard to grasp (I think). It is based on KL divergence (the difference between two distributions, roughly speaking) and works its way toward proving how you can approximate the unknown true distribution of your data and compare it to the distribution of the data your model assumes. That's why "smaller AIC score is better": you are closer to the approximate true distribution of your data.
So to bring it all together, there are three obvious things to remember when using AIC [2,5]:
You cannot use it to compare models of different data sets.
You should use the same response variables for all the candidate models.
You should have $|D| \gg k$, because otherwise you do not get good asymptotic consistency.
Sorry to break the bad news to you, but using AIC to show you are choosing one dependent variable over another is not a statistically sound thing to do. Check the distribution of your residuals in both models: if the logged-data case has normally distributed residuals and the raw-data case doesn't, you have all the justification you might ever need. You might also want to check whether your raw data follow a lognormal distribution; that might be enough of a justification too.
For strict mathematical assumptions the game is KL divergence and information theory...
Ah, and some references:
http://en.wikipedia.org/wiki/Akaike_information_criterion
Akaike Information Criterion, Shuhua Hu, (Presentation p.17-18)
Applied Multivariate Statistical Analysis, Johnson & Wichern, 6th Ed. (p. 386-387)
A new look at the statistical model identification, H. Akaike, IEEE Transactions on Automatic Control 19 (6): 716–723 (1974)
Model Selection Tutorial #1: Akaike’s Information Criterion, D. Schmidt and E. Makalic, (Presentation p.39)
|
8,144
|
Prerequisites for AIC model comparison
|
You should be able to compare using AIC in principle, just that the number called "AIC" is not the number you need. You are comparing normal vs log-normal distributions. Now the AIC from model uu0 is basically just missing the "jacobian" of the log transformation. For a log normal model, this is simply $\prod_i y_i^{-1} $. To convert this to AIC you need to take negative twice log of this term, which means that you need to add $2\sum_i\log (y_i)$ to the AIC number for uu0. So you should have
AIC(uu0) + 2*sum(log(usili)) being compared with AIC(uu1)
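This can be checked numerically. A sketch with simulated data (made-up values; `y` stands in for `usili`, with a normal model fitted to the raw data and the Jacobian correction applied to the AIC of the model fitted to the logged data):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # made-up positive data

def gaussian_aic(z):
    # AIC = 2k - 2 logL of an intercept-only Gaussian model, sigma^2 = RSS/n
    n, k = len(z), 2  # parameters: mean and variance
    s2 = np.var(z)
    loglik = -0.5 * n * (np.log(2 * np.pi * s2) + 1)
    return 2 * k - 2 * loglik

aic_raw = gaussian_aic(y)          # normal model for the raw data
aic_log = gaussian_aic(np.log(y))  # normal model for the logged data

# Adding the Jacobian term 2*sum(log y) puts the logged model on the
# density scale of the raw data, making the two AICs comparable:
aic_log_corrected = aic_log + 2 * np.sum(np.log(y))

print(aic_raw, aic_log_corrected)  # the lognormal model wins for this data
```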
|
8,145
|
Prerequisites for AIC model comparison
|
This excerpt from Akaike 1978 provides a citation in support of the solution by @probabilityislogic.
Akaike, H. 1978. On the Likelihood of a Time Series Model. Journal of the Royal Statistical Society. Series D (The Statistician) 27:217-235.
|
8,146
|
Visualizing a million, PCA edition
|
The biplot is a useful tool for visualizing the results of PCA. It allows you to visualize the principal component scores and directions simultaneously. With 10,000 observations you’ll probably run into a problem with over-plotting. Alpha blending could help there.
Here is a PC biplot of the wine data from the UCI ML repository:
The points correspond to the PC1 and PC2 scores of each observation.
The arrows represent the correlation of the variables with PC1 and PC2. The white circle indicates the theoretical maximum extent of the arrows. The ellipses are 68% data ellipses for each of the 3 wine varieties in the data.
I have made the code for generating this plot available here.
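As a rough guide, the quantities behind such a biplot (PC scores for the points and variable-PC correlations for the arrows) can be computed with a few lines of numpy; random placeholder data stands in for the wine set here:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # placeholder for the wine data
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the columns

# PCA via SVD of the standardized data
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]                     # PC1/PC2 scores: the points

# Correlation of each variable with PC1/PC2: the arrows
n = Z.shape[0]
corr = Vt[:2].T * s[:2] / np.sqrt(n)          # p x 2 correlations

# Arrow lengths can never exceed 1 -- the "white circle" of the biplot
print(np.sqrt((corr**2).sum(axis=1)).max())
```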
|
8,147
|
Visualizing a million, PCA edition
|
A Wachter plot can help you visualize the eigenvalues of your PCA. It is essentially a Q-Q plot of the eigenvalues against the Marchenko-Pastur distribution. I have an example here: There is one dominant eigenvalue which falls outside the Marchenko-Pastur distribution. The usefulness of this kind of plot depends on your application.
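For concreteness, here is a hedged numpy-only sketch of how such a plot could be assembled: Marchenko-Pastur quantiles obtained by numerically inverting the CDF on a grid, compared against the eigenvalues of a pure-noise sample covariance matrix. The function name and the grid-inversion approach are my own illustrative choices, not a standard API:

```python
import numpy as np

def mp_quantiles(gamma, probs, grid=20000):
    """Quantiles of the Marchenko-Pastur law (aspect ratio gamma = p/n < 1),
    found by numerically inverting the CDF on a grid."""
    lo, hi = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
    x = np.linspace(lo, hi, grid)
    dens = np.sqrt(np.maximum((hi - x) * (x - lo), 0.0)) / (2 * np.pi * gamma * x)
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]                      # crude numerical CDF, normalized
    return np.interp(probs, cdf, x)

# Pure-noise check: eigenvalues of a white-noise sample covariance matrix
# should hug the Marchenko-Pastur quantiles.  A Wachter plot is simply a
# scatter of theo vs eigs plus the y = x diagonal; "signal" eigenvalues
# peel off above the line.
rng = np.random.default_rng(0)
n, p = 4000, 400                        # observations x variables; gamma = 0.1
X = rng.standard_normal((n, p))
eigs = np.sort(np.linalg.eigvalsh(X.T @ X / n))
theo = mp_quantiles(p / n, (np.arange(1, p + 1) - 0.5) / p)
```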
|
8,148
|
Visualizing a million, PCA edition
|
You could also use the psych package.
This contains a plot.factor method, which will plot the different components against one another in the style of a scatterplot matrix.
|
8,149
|
Why do political polls have such large sample sizes?
|
Wayne has addressed the "30" issue well enough (my own rule of thumb: mention of the number 30 in relation to statistics is likely to be wrong).
Why numbers in the vicinity of 1000 are often used
Numbers of around 1000-2000 are often used in surveys, even in the case of a simple proportion ("Are you in favor of <whatever>?").
This is done so that reasonably accurate estimates of the proportion are obtained.
If binomial sampling is assumed, the standard error* of the sample proportion is largest when the proportion is $\frac{1}{2}$ - but that upper limit is still a pretty good approximation for proportions between about 25% and 75%.
* "standard error" = "standard deviation of the distribution of"
A common aim is to estimate percentages to within about $\pm 3\%$ of the true percentage, about $95\%$ of the time. That $3\%$ is called the 'margin of error'.
In that 'worst case' standard error under binomial sampling, this leads to:
$1.96 \times \sqrt{\frac{1}{2}\cdot(1-\frac{1}{2})/n} \leq 0.03$
$0.98 \times \sqrt{1/n} \leq 0.03$
$\sqrt{n} \geq 0.98/0.03$
$n \geq 1067.11$
... or 'a bit more than 1000'.
So if you survey 1000 people at random from the population you want to make inferences about, and 58% of the sample support the proposal, you can be reasonably sure the population proportion is between 55% and 61%.
(Sometimes other values for the margin of error, such as 2.5% might be used. If you halve the margin of error, the sample size goes up by a multiple of 4.)
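The calculation above can be wrapped in a short helper (a sketch; 1.96 is the usual 95% normal multiplier, and the function name is my own):

```python
import math

def required_n(margin, z=1.96, p=0.5):
    """Smallest n such that z * sqrt(p*(1-p)/n) <= margin.
    p = 0.5 is the worst case for a proportion."""
    return math.ceil((z * math.sqrt(p * (1 - p)) / margin) ** 2)
```

`required_n(0.03)` gives 1068 ('a bit more than 1000'), while halving the margin with `required_n(0.015)` gives 4269, roughly four times as many.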
In complex surveys where an accurate estimate of a proportion in some sub-population is needed (e.g. the proportion of black college graduates from Texas in favor of the proposal), the numbers may be large enough that that subgroup is several hundred in size, perhaps entailing tens of thousands of responses in total.
Since that can quickly become impractical, it's common to split up the population into subpopulations (strata) and sample each one separately. Even so, you can end up with some very large surveys.
"It was made to seem that a sample size over 30 is pointless due to diminishing returns."
It depends on the effect size, and relative variability. The $\sqrt n$ effect on the standard error means you might need some quite large samples in some situations.
I answered a question here (I think it was from an engineer) that was dealing with very large sample sizes (in the vicinity of a million if I remember right) but he was looking for very small effects.
Let's see what a random sample with a sample size of 30 leaves us with when estimating a sample proportion.
Imagine we ask 30 people whether overall they approved of the State of the Union address (strongly agree, agree, disagree, strongly disagree). Further imagine that interest lies in the proportion that either agree or strongly agree.
Say 11 of those interviewed agreed and 5 strongly agreed, for a total of 16.
16/30 is about 53%. What are our bounds for the proportion in the population (with say a 95% interval)?
We can pin the population proportion down to somewhere between 35% and 71% (roughly), if our assumptions hold.
Not all that useful.
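The 35%-71% range can be checked with the normal-approximation (Wald) interval; a Wilson interval would give slightly different but similarly wide limits:

```python
import math

# Normal-approximation (Wald) 95% interval for 16 "agree or strongly agree"
# out of n = 30 respondents.
n, successes = 30, 16
p_hat = successes / n                      # ~0.533
se = math.sqrt(p_hat * (1 - p_hat) / n)    # ~0.091
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
# lo ~ 0.355, hi ~ 0.712: the "roughly 35% to 71%" range quoted above
```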
|
8,150
|
Why do political polls have such large sample sizes?
|
That particular rule of thumb suggests that 30 points are enough to assume that the data is normally distributed (i.e., looks like a bell curve) but this is, at best, a rough guideline. If this matters, check your data! This does suggest that you'd want at least 30 respondents for your poll if your analysis depends on these assumptions, but there are other factors too.
One major factor is the "effect size." Most races tend to be fairly close, so fairly large samples are required to reliably detect these differences. (If you're interested in determining the "right" sample size, you should look into power analysis). If you've got a Bernoulli random variable (something with two outcomes) that's approximately 50:50, then you need about 1000 trials to get the standard error down to 1.5%. That is probably accurate enough to predict a race's outcome (the last 4 US Presidential elections had a mean margin of ~3.2 percent), which matches your observation nicely.
The poll data is often sliced and diced different ways: "Is the candidate leading with gun-owning men over 75?" or whatever. This requires even larger samples because each respondent fits into only a few of these categories.
Presidential polls are sometimes "bundled" with other survey questions (e.g., Congressional races) too. Since these vary from state to state, one ends up with some "extra" polling data.
Bernoulli distributions are discrete probability distributions with only two outcomes: Option 1 is chosen with probability $p$, while option 2 is chosen with probability $1-p$.
The variance of a Bernoulli distribution is $p(1-p)$, so the standard error of the mean is $\sqrt{\frac{p(1-p)}{n}}$. Plug in $p=0.5$ (the election is a tie), set the standard error to 1.5% (0.015), and solve. You'll need about 1,111 subjects to get to a 1.5% SE.
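As a quick check of that figure (a sketch, nothing poll-specific):

```python
# Solve sqrt(p*(1-p)/n) = se for n at the worst case p = 0.5.
p, se_target = 0.5, 0.015
n_exact = p * (1 - p) / se_target ** 2   # ~1111.1 respondents
```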
|
8,151
|
Why do political polls have such large sample sizes?
|
There are already some excellent answers to this question, but I want to answer why the standard error is what it is, why we use $p = 0.5$ as the worst case, and how the standard error varies with $n$.
Suppose we take a poll of just one voter, let's call him or her voter 1, and ask "will you vote for the Purple Party?" We can code the answer as 1 for "yes" and 0 for "no". Let's say that the probability of a "yes" is $p$. We now have a binary random variable $X_1$ which is 1 with probability $p$ and 0 with probability $1-p$. We say that $X_1$ is a Bernoulli variable with probability of success $p$, which we can write $X_1 \sim Bernoulli(p)$. The expected, or mean, value of $X_1$ is given by $\mathbb{E}(X_1)=\sum{xP(X_1=x)}$ where we sum over all possible outcomes $x$ of $X_1$. But there are only two outcomes, 0 with probability $1-p$ and 1 with probability $p$, so the sum is just $\mathbb{E}(X_1)=0(1-p)+1(p)=p$. Stop and think. This actually looks completely reasonable - if there is a 30% chance of voter 1 supporting the Purple Party, and we've coded the variable to be 1 if they say "yes" and 0 if they say "no", then we'd expect $X_1$ to be 0.3 on average.
Let's think about what happens when we square $X_1$. If $X_1 = 0$ then $X_1^2 = 0$ and if $X_1 = 1$ then $X_1^2 = 1$. So in fact $X_1^2 = X_1$ in either case. Since they are the same, then they must have the same expected value, so $\mathbb{E}(X_1^2)=p$. This gives me an easy way of calculating the variance of a Bernoulli variable: I use $Var(X_1)=\mathbb{E}(X_1^2)-\mathbb{E}(X_1)^2=p - p^2 = p(1-p)$ and so the standard deviation is $\sigma_{X_1}=\sqrt{p(1-p)}$.
Obviously I want to talk to other voters - let's call them voter 2, voter 3, through to voter $n$. Let's assume they all have the same probability $p$ of supporting the Purple Party. Now we have $n$ Bernoulli variables, $X_1$, $X_2$ through to $X_n$, with each $X_i \sim Bernoulli(p)$ for $i$ from 1 to $n$. They all have the same mean, $p$, and variance, $p(1-p)$.
I'd like to find how many people in my sample said "yes", and to do that I can just add up all the $X_i$. I'll write $X=\sum_{i=1}^{n}X_i$. I can calculate the mean or expected value of $X$ by using the rule that $\mathbb{E}(X+Y)=\mathbb{E}(X)+\mathbb{E}(Y)$ if those expectations exist, and extending that to $\mathbb{E}(X_1+X_2+\ldots+X_n)=\mathbb{E}(X_1)+\mathbb{E}(X_2)+\ldots+\mathbb{E}(X_n)$. But I am adding up $n$ of those expectations, and each is $p$, so I get in total that $\mathbb{E}(X)=np$. Stop and think. If I poll 200 people and each has a 30% chance of saying they support the Purple Party, of course I'd expect 0.3 x 200 = 60 people to say "yes". So the $np$ formula looks right. Less "obvious" is how to handle the variance.
There is a rule that says
$$Var(X_1+X_2+\ldots+X_n)=Var(X_1)+Var(X_2)+\ldots+Var(X_n)$$
but I can only use it if my random variables are independent of each other. So fine, let's make that assumption, and by a similar logic to before I can see that $Var(X)=np(1-p)$. If a variable $X$ is the sum of $n$ independent Bernoulli trials, with identical probability of success $p$, then we say that $X$ has a binomial distribution, $X \sim Binomial(n,p)$. We have just shown that the mean of such a binomial distribution is $np$ and the variance is $np(1-p)$.
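The derived mean $np$ and variance $np(1-p)$ are easy to confirm by simulation, here with the same $n=200$, $p=0.3$ example as above (a sketch using numpy's binomial sampler):

```python
import numpy as np

# Simulation check: X = sum of n Bernoulli(p) trials should have
# mean n*p and variance n*p*(1-p).
rng = np.random.default_rng(42)
n, p, reps = 200, 0.3, 200_000
X = rng.binomial(n, p, size=reps)   # 200,000 simulated polls of 200 voters
# theory: mean = 200 * 0.3 = 60, variance = 200 * 0.3 * 0.7 = 42
```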
Our original problem was how to estimate $p$ from the sample. The sensible way to define our estimator is $\hat{p}=X/n$. For instance, if 64 out of our sample of 200 people said "yes", we'd estimate that 64/200 = 0.32 = 32% of people say they support the Purple Party. You can see that $\hat{p}$ is a "scaled-down" version of our total number of yes-voters, $X$. That means it is still a random variable, but no longer follows the binomial distribution. We can find its mean and variance, because when we scale a random variable by a constant factor $k$ then it obeys the following rules: $\mathbb{E}(kX)=k\mathbb{E}(X)$ (so the mean scales by the same factor $k$) and $Var(kX)=k^2 Var(X)$. Note how variance scales by $k^2$. That makes sense when you know that in general, the variance is measured in the square of whatever units the variable is measured in: not so applicable here, but if our random variable had been a height in cm then the variance would be in $cm^2$ which scales differently - if you double lengths, you quadruple area.
Here our scale factor is $\frac{1}{n}$. This gives us $\mathbb{E}(\hat{p})=\frac{1}{n}\mathbb{E}(X)=\frac{np}{n}=p$. This is great! On average, our estimator $\hat{p}$ is exactly what it "should" be, the true (or population) probability that a random voter says that they will vote for the Purple Party. We say that our estimator is unbiased. But while it is correct on average, sometimes it will be too small, and sometimes too high. We can see just how wrong it is likely to be by looking at its variance. $Var(\hat{p})=\frac{1}{n^2}Var(X)=\frac{np(1-p)}{n^2}=\frac{p(1-p)}{n}$. The standard deviation is the square root, $\sqrt{\frac{p(1-p)}{n}}$, and because it gives us a grasp of how badly our estimator will be off (it is effectively a root mean square error, a way of calculating the average error that treats positive and negative errors as equally bad, by squaring them before averaging out), it is usually called the standard error. A good rule of thumb, which works well for large samples and which can be dealt with more rigorously using the famous Central Limit Theorem, is that most of the time (about 95%) the estimate will be wrong by less than two standard errors.
Since it appears in the denominator of the fraction, higher values of $n$ - bigger samples - make the standard error smaller. That is great news: if I want a small standard error, I just make the sample size big enough. The bad news is that $n$ is inside a square root, so if I quadruple the sample size, I will only halve the standard error. Very small standard errors are going to involve very very large, hence expensive, samples. There's another problem: if I want to target a particular standard error, say 1%, then I need to know what value of $p$ to use in my calculation. I might use historic values if I have past polling data, but I would like to prepare for the worst possible case. Which value of $p$ is most problematic? A graph is instructive.
The worst-case (highest) standard error will occur when $p=0.5$. To prove that I could use calculus, but some high school algebra will do the trick, so long as I know how to "complete the square".
$$\sqrt{p(1-p)}=\sqrt{p-p^2}=\sqrt{\frac{1}{4}-(p^2-p+\frac{1}{4})}=\sqrt{\frac{1}{4}-(p-\frac{1}{2})^2}$$
The expression in the brackets is squared, so will always return a zero or positive answer, which then gets taken away from a quarter. In the worst case (large standard error) as little as possible gets taken away. I know the least that can be subtracted is zero, and that will occur when $p-\frac{1}{2}=0$, so when $p=\frac{1}{2}$. The upshot of this is that I get bigger standard errors when trying to estimate support for e.g. political parties near 50% of the vote, and lower standard errors for estimating support for propositions which are substantially more or substantially less popular than that. In fact the symmetry of my graph and equation show me that I would get the same standard error for my estimates of support of the Purple Party, whether they had 30% popular support or 70%.
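A quick numerical check of the completed square: $\sqrt{p(1-p)}$ peaks at $p=\frac{1}{2}$ and is symmetric, e.g. identical at 30% and 70% support.

```python
import numpy as np

p = np.linspace(0.01, 0.99, 981)   # grid of proportions in steps of 0.001
sd = np.sqrt(p * (1 - p))          # Bernoulli standard deviation at each p
# sd is largest at p = 0.5, and sd at p = 0.3 equals sd at p = 0.7
```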
So how many people do I need to poll to keep the standard error below 1%? This would mean that, the vast majority of the time, my estimate will be within 2% of the correct proportion. I now know that the worst case standard error is $\sqrt{\frac{0.25}{n}}=\frac{0.5}{\sqrt{n}} < 0.01$ which gives me $\sqrt{n} > 50$ and so $n > 2500$. That would explain why you see polling figures in the thousands.
In reality low standard error is not a guarantee of a good estimate. Many problems in polling are of a practical rather than theoretical nature. For instance, I assumed that the sample was of random voters each with the same probability $p$, but taking a "random" sample in real life is fraught with difficulty. You might try telephone or online polling - but not only has not everybody got a phone or internet access, but those who don't may have very different demographics (and voting intentions) to those who do. To avoid introducing bias to their results, polling firms actually do all kinds of complicated weighting of their samples, not the simple average $\frac{\sum{X_i}}{n}$ that I took. Also, people lie to pollsters! The different ways that pollsters have compensated for this possibility is, obviously, controversial. You can see a variety of approaches in how polling firms have dealt with the so-called Shy Tory Factor in the UK. One method of correction involved looking at how people voted in the past to judge how plausible their claimed voting intention is, but it turns out that even when they're not lying, many voters simply fail to remember their electoral history. When you've got this stuff going on, there's frankly very little point getting the "standard error" down to 0.00001%.
To finish, here are some graphs showing how the required sample size - according to my simplistic analysis - is influenced by the desired standard error, and how bad the "worst case" value of $p=0.5$ is compared to the more amenable proportions. Remember that the curve for $p=0.7$ would be identical to the one for $p=0.3$ due to the symmetry of the earlier graph of $\sqrt{p(1-p)}$
|
Why do political polls have such large sample sizes?
|
There are already some excellent answers to this question, but I want to answer why the standard error is what it is, why we use $p = 0.5$ as the worst case, and how the standard error varies with $n$
|
Why do political polls have such large sample sizes?
There are already some excellent answers to this question, but I want to answer why the standard error is what it is, why we use $p = 0.5$ as the worst case, and how the standard error varies with $n$.
Suppose we take a poll of just one voter, let's call him or her voter 1, and ask "will you vote for the Purple Party?" We can code the answer as 1 for "yes" and 0 for "no". Let's say that probability of a "yes" is $p$. We now have a binary random variable $X_1$ which is 1 with probability $p$ and 0 with probability $1-p$. We say that $X_1$ is a Bernouilli variable with probability of success $p$, which we can write $X_1 \sim Bernouilli(p)$. The expected, or mean, value of $X_1$ is given by $\mathbb{E}(X_1)=\sum{xP(X_1=x)}$ where we sum over all possible outcomes $x$ of $X_1$. But there are only two outcomes, 0 with probability $1-p$ and 1 with probability $p$, so the sum is just $\mathbb{E}(X_1)=0(1-p)+1(p)=p$. Stop and think. This actually looks completely reasonable - if there is a 30% chance of voter 1 supporting the Purple Party, and we've coded the variable to be 1 if they say "yes" and 0 if they say "no", then we'd expect $X_1$ to be 0.3 on average.
Let's think what happens we square $X_1$. If $X_1 = 0$ then $X_1^2 = 0$ and if $X_1 = 1$ then $X_1^2 = 1$. So in fact $X_1^2 = X_1$ in either case. Since they are the same, then they must have the same expected value, so $\mathbb{E}(X_1^2)=p$. This gives me an easy way of calculating the variance of a Bernouilli variable: I use $Var(X_1)=\mathbb{E}(X_1^2)-\mathbb{E}(X_1)^2=p - p^2 = p(1-p)$ and so the standard deviation is $\sigma_{X_1}=\sqrt{p(1-p)}$.
Obviously I want to talk to other voters - lets call them voter 2, voter 3, through to voter $n$. Let's assume they all have the same probability $p$ of supporting the Purple Party. Now we have $n$ Bernouilli variables, $X_1$, $X_2$ through to $X_n$, with each $X_i \sim Bernoulli(p)$ for $i$ from 1 to $n$. They all have the same mean, $p$, and variance, $p(1-p)$.
I'd like to find how many people in my sample said "yes", and to do that I can just add up all the $X_i$. I'll write $X=\sum_{i=1}^{n}X_i$. I can calculate the mean or expected value of $X$ by using the rule that $\mathbb{E}(X+Y)=\mathbb{E}(X)+\mathbb{E}(Y)$ if those expectations exist, and extending that to $\mathbb{E}(X_1+X_2+\ldots+X_n)=\mathbb{E}(X_1)+\mathbb{E}(X_2)+\ldots+\mathbb{E}(X_n)$. But I am adding up $n$ of those expectations, and each is $p$, so I get in total that $\mathbb{E}(X)=np$. Stop and think. If I poll 200 people and each has a 30% chance of saying they support the Purple Party, of course I'd expect 0.3 x 200 = 60 people to say "yes". So the $np$ formula looks right. Less "obvious" is how to handle the variance.
There is a rule that says
$$Var(X_1+X_2+\ldots+X_n)=Var(X_1)+Var(X_2)+\ldots+Var(X_n)$$
but I can only use it if my random variables are independent of each other. So fine, let's make that assumption, and by a similar logic to before I can see that $Var(X)=np(1-p)$. If a variable $X$ is the sum of $n$ independent Bernoulli trials, with identical probability of success $p$, then we say that $X$ has a binomial distribution, $X \sim Binomial(n,p)$. We have just shown that the mean of such a binomial distribution is $np$ and the variance is $np(1-p)$.
Our original problem was how to estimate $p$ from the sample. The sensible way to define our estimator is $\hat{p}=X/n$. For instance of 64 out of our sample of 200 people said "yes", we'd estimate that 64/200 = 0.32 = 32% of people say they support the Purple Party. You can see that $\hat{p}$ is a "scaled-down" version of our total number of yes-voters, $X$. That means it is still a random variable, but no longer follows the binomial distribution. We can find its mean and variance, because when we scale a random variable by a constant factor $k$ then it obeys the following rules: $\mathbb{E}(kX)=k\mathbb{E}(X)$ (so the mean scales by the same factor $k$) and $Var(kX)=k^2 Var(X)$. Note how variance scales by $k^2$. That makes sense when you know that in general, the variance is measured in the square of whatever units the variable is measured in: not so applicable here, but if our random variable had been a height in cm then the variance would be in $cm^2$ which scale differently - if you double lengths, you quadruple area.
Here our scale factor is $\frac{1}{n}$. This gives us $\mathbb{E}(\hat{p})=\frac{1}{n}\mathbb{E}(X)=\frac{np}{n}=p$. This is great! On average, our estimator $\hat{p}$ is exactly what it "should" be, the true (or population) probability that a random voter says that they will vote for the Purple Party. We say that our estimator is unbiased. But while it is correct on average, sometimes it will be too small, and sometimes too high. We can see just how wrong it is likely to be by looking at its variance. $Var(\hat{p})=\frac{1}{n^2}Var(X)=\frac{np(1-p)}{n^2}=\frac{p(1-p)}{n}$. The standard deviation is the square root, $\sqrt{\frac{p(1-p)}{n}}$, and because it gives us a grasp of how badly our estimator will be off (it is effectively a root mean square error, a way of calculating the average error that treats positive and negative errors as equally bad, by squaring them before averaging out), it is usually called the standard error. A good rule of thumb, which works well for large samples and which can be dealt with more rigorously using the famous Central Limit Theorem, is that most of the time (about 95%) the estimate will be wrong by less than two standard errors.
Since it appears in the denominator of the fraction, higher values of $n$ - bigger samples - make the standard error smaller. That is great news, as if I want a small standard error I just make the sample size big enough. The bad news is that $n$ is inside a square root, so if I quadruple the sample size, I will only halve the standard error. Very small standard errors are going to involve very very large, hence expensive, samples. There's another problem: if I want to target a particular standard error, say 1%, then I need to know what value of $p$ to use in my calculation. I might use historic values if I have past polling data, but I would like to prepare for the worst possible case. Which value of $p$ is most problematic? A graph is instructive.
The worst-case (highest) standard error will occur when $p=0.5$. To prove that I could use calculus, but some high school algebra will do the trick, so long as I know how to "complete the square".
$$\sqrt{p(1-p)}=\sqrt{p-p^2}=\sqrt{\frac{1}{4}-(p^2-p+\frac{1}{4})}=\sqrt{\frac{1}{4}-(p-\frac{1}{2})^2}$$
The expression is the brackets is squared, so will always return a zero or positive answer, which then gets taken away from a quarter. In the worst case (large standard error) as little as possible gets taken away. I know the least that can be subtracted is zero, and that will occur when $p-\frac{1}{2}=0$, so when $p=\frac{1}{2}$. The upshot of this is that I get bigger standard errors when trying to estimate support for e.g. political parties near 50% of the vote, and lower standard errors for estimating support for propositions which are substantially more or substantially less popular than that. In fact the symmetry of my graph and equation show me that I would get the same standard error for my estimates of support of the Purple Party, whether they had 30% popular support or 70%.
So how many people do I need to poll to keep the standard error below 1%? This would mean that, the vast majority of the time, my estimate will be within 2% of the correct proportion. I now know that the worst case standard error is $\sqrt{\frac{0.25}{n}}=\frac{0.5}{\sqrt{n}} < 0.01$ which gives me $\sqrt{n} > 50$ and so $n > 2500$. That would explain why you see polling figures in the thousands.
In reality low standard error is not a guarantee of a good estimate. Many problems in polling are of a practical rather than theoretical nature. For instance, I assumed that the sample was of random voters each with same probability $p$, but taking a "random" sample in real life is fraught with difficulty. You might try telephone or online polling - but not only has not everybody got a phone or internet access, but those who don't may have very different demographics (and voting intentions) to those who do. To avoid introducing bias to their results, polling firms actually do all kinds of complicated weighting of their samples, not the simple average $\frac{\sum{X_i}}{n}$that I took. Also, people lie to pollsters! The different ways that pollsters have compensated for this possibility is, obviously, controversial. You can see a variety of approaches in how polling firms have dealt with the so-called Shy Tory Factor in the UK. One method of correction involved looking at how people voted in the past to judge how plausible their claimed voting intention is, but it turns out that even when they're not lying, many voters simply fail to remember their electoral history. When you've got this stuff going on, there's frankly very little point getting the "standard error" down to 0.00001%.
To finish, here are some graphs showing how the required sample size - according to my simplistic analysis - is influenced by the desired standard error, and how bad the "worst case" value of $p=0.5$ is compared to the more amenable proportions. Remember that the curve for $p=0.7$ would be identical to the one for $p=0.3$ due to the symmetry of the earlier graph of $\sqrt{p(1-p)}$
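For readers who want to check the arithmetic, here is a short Python sketch of the calculation above (the function names are mine; the formulas are exactly those in the text, and exact rational arithmetic is used to avoid floating-point edge cases at the boundary):

```python
import math
from fractions import Fraction

def standard_error(p, n):
    """Standard error of a sample proportion: sqrt(p(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

def required_n(target, p=Fraction(1, 2)):
    """Smallest n with SE below target: SE < target  <=>  n > p(1-p)/target^2."""
    bound = p * (1 - p) / Fraction(target) ** 2
    return math.floor(bound) + 1

# Symmetry of p(1-p): 30% and 70% support give identical standard errors.
assert abs(standard_error(0.3, 1000) - standard_error(0.7, 1000)) < 1e-12

print(required_n(Fraction(1, 100)))  # 2501, i.e. n > 2500 as derived above
```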
|
8,152
|
Why do political polls have such large sample sizes?
|
The "at least 30" rule is addressed in another posting on Cross Validated. It's a rule of thumb, at best.
When you think of a sample that's supposed to represent millions of people, you're going to have to have a much larger sample than just 30. Intuitively, 30 people can't even include one person from each state! Then think that you want to represent Republicans, Democrats, and Independents (at least), and for each of those you'll want to represent a couple of different age categories, and for each of those a couple of different income categories.
With only 30 people called, you're going to miss huge swaths of the demographics you need to sample.
EDIT2: [I've removed the paragraph that abaumann and StasK objected to. I'm still not 100% persuaded, but especially StasK's argument I can't disagree with.] If the 30 people are truly selected completely at random from among all eligible voters, the sample would be valid in some sense, but too small to let you distinguish whether the answer to your question was actually true or false (among all eligible voters). StasK explains how bad it would be in his third comment, below.
EDIT: In reply to samplesize999's comment, there is a formal method for determining how large is large enough, called "power analysis", which is also described here. abaumann's comment illustrates how there is a tradeoff between your ability to distinguish differences and the amount of data you need to make a certain amount of improvement. As he illustrates, there's a square root in the calculation, which means the benefit (in terms of increased power) grows more and more slowly, or the cost (in terms of how many more samples you need) grows increasingly rapidly, so you want enough samples, but not more.
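The square-root tradeoff mentioned above is easy to see numerically. A hedged sketch (the 1.96 multiplier for a 95% interval and the worst case $p=0.5$ are standard textbook choices; the loop is purely illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 120, 480, 1920):
    print(n, round(margin_of_error(n), 3))
# Each quadrupling of the sample only halves the margin of error:
# 30 -> ~0.179, 120 -> ~0.089, 480 -> ~0.045, 1920 -> ~0.022.
```

At n = 30 the poll can only pin the proportion down to within roughly 18 percentage points, which is why samples in the thousands are needed.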
|
8,153
|
Why do political polls have such large sample sizes?
|
A lot of great answers have already been posted. Let me suggest a different framing that yields the same response, but could further drive intuition.
Just like @Glen_b, let's assume we require at least 95% confidence that the true proportion who agree with a statement lies within a 3% margin of error. In a particular sample of the population, the true proportion $p$ is unknown. However, the uncertainty around this parameter of success $p$ can be characterized with a Beta distribution.
We don't have any prior information about how $p$ is distributed, so we will say that $p \sim Beta(\alpha=1, \beta=1)$ as an uninformative prior. This is a uniform distribution of $p$ from 0 to 1.
As we get information from respondents from the survey, we get to update our beliefs as to the distribution of $p$. The posterior distribution of $p$ when we get $\delta_y$ "yes" responses and $\delta_n$ "no" responses is $p \sim Beta(\alpha=1+\delta_y, \beta=1+\delta_n)$.
Assuming the worst-case scenario where the true proportion is 0.5, we want to find the number of respondents $n=\delta_y+\delta_n$ such that only 0.025 of the probability mass is below 0.47 and 0.025 of the probability mass is above 0.53 (to account for the 95% confidence in our 3% margin of error). Namely, in a programming language like R, we want to figure out the $n$ such that qbeta(0.025, n/2, n/2) yields a value of 0.47.
If you use $n=1067$, you get:
> qbeta(0.025, 1067/2, 1067/2)
[1] 0.470019
which is our desired result.
In summary, 1,067 respondents who evenly split between "yes" and "no" responses would give us 95% confidence that the true proportion of "yes" respondents is between 47% and 53%.
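The qbeta computation above can be sanity-checked without R, e.g. by Monte Carlo with Python's standard library. This is only an approximation sketch: it uses the same Beta(n/2, n/2) distribution as the qbeta call, and the sample size and seed are arbitrary choices of mine.

```python
import random

random.seed(42)

n = 1067
# Draw from Beta(n/2, n/2) and take the empirical 2.5% quantile.
draws = sorted(random.betavariate(n / 2, n / 2) for _ in range(200_000))
q025 = draws[int(0.025 * len(draws))]
print(round(q025, 3))  # ~0.470, matching qbeta(0.025, 1067/2, 1067/2) above
```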
|
8,154
|
When should I apply feature scaling for my data [duplicate]
|
You should normalize when the scale of a feature is irrelevant or misleading, and not normalize when the scale is meaningful.
K-means considers Euclidean distance to be meaningful. If a feature has a big scale compared to another, but the first feature truly represents greater diversity, then clustering in that dimension should be penalized.
In regression, as long as you have a bias it does not matter if you normalize or not since you are discovering an affine map, and the composition of a scaling transformation and an affine map is still affine.
When there are learning rates involved, e.g. when you're doing gradient descent, the input scale effectively scales the gradients, which might require some kind of second order method to stabilize per-parameter learning rates. It's probably easier to normalize the inputs if it doesn't matter otherwise.
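The affine-map argument for regression can be demonstrated directly: with a bias term, rescaling an input changes the fitted weights but not the fitted values. A minimal sketch with simple least squares (the data points are made up for illustration):

```python
def fit_simple_ols(xs, ys):
    """Least-squares fit of y = a + b*x, with an intercept (bias) term."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

a1, b1 = fit_simple_ols(xs, ys)
a2, b2 = fit_simple_ols([x / 10 for x in xs], ys)   # same feature, rescaled

# The slope absorbs the scaling (b2 = 10*b1); the fitted values are identical.
pred1 = [a1 + b1 * x for x in xs]
pred2 = [a2 + b2 * (x / 10) for x in xs]
print(max(abs(p - q) for p, q in zip(pred1, pred2)))  # ~0: identical fits
```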
|
8,155
|
When should I apply feature scaling for my data [duplicate]
|
In my view the question about scaling/not scaling the features in machine learning is a statement about the measurement units of your features. And it is related to the prior knowledge you have about the problem.
Some of the algorithms, like Linear Discriminant Analysis and Naive Bayes do feature scaling by design and you would have no effect in performing one manually. Others, like knn can be gravely affected by it.
So with a knn type of classifier you have to measure the distances between pairs of samples. The distances will of course be influenced by the measurement units one uses. Imagine you are classifying a population into males and females and you have a bunch of measurements including height. Now your classification result will be influenced by the units the height was reported in. If the height is measured in nanometers then it's likely that any k nearest neighbors will merely have similar measures of height. You have to scale.
However, as a contrasting example, imagine classifying something that has equal units of measurement recorded with noise, like a photograph, a microarray or some spectrum. In this case you already know a priori that your features have equal units. If you were to scale them all you would amplify the effect of features that are constant across all samples but were measured with noise (like the background of the photo). This again will have an influence on knn and might drastically reduce performance if your data had more noisy constant values compared to the ones that vary. Now any similarity between the k nearest neighbors will be influenced by noise.
So this is like with everything else in machine learning - use prior knowledge whenever possible and in the case of black-box features do both and cross-validate.
|
8,156
|
When should I apply feature scaling for my data [duplicate]
|
There are several methods of normalization.
In regards to regression, if you plan on normalizing the feature by a single factor then there is no need. The reason is that single-factor normalization, like dividing or multiplying by a constant, already gets adjusted in the weights (i.e. let's say the weight of a feature is 3, but if we normalize all the values of the feature by dividing by 2, then the new weight will be 6, so overall the effect is the same). In contrast, if you are planning to mean normalize, then it is a different story. Mean normalization is good when there is a huge variance in the feature values (e.g. 1, 70, 300, 4). Also, if a single feature can have both a positive and a negative effect, then it is good to mean normalize. This is because when you mean normalize a given set of positive values, the values below the mean become negative while those above the mean become positive.
In regards to k-nearest neighbours, normalization should always be performed. This is because in KNN the distances between points determine the outcome. So if you are applying KNN to a problem with 2 features, with the first feature ranging from 1-10 and the other ranging from 1-1000, then the nearest neighbours will be determined almost entirely by the second feature, since differences in the 1-10 range are negligible compared to those in the 1-1000 range.
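To make the 1-10 vs. 1-1000 example concrete, here is a small Python sketch (the specific points are invented for illustration):

```python
import math

def nearest(query, points):
    """Index of the Euclidean nearest neighbour of `query` in `points`."""
    return min(range(len(points)), key=lambda i: math.dist(points[i], query))

# Feature 1 ranges over 1-10, feature 2 over 1-1000, as in the text.
points = [(1.0, 1000.0), (10.0, 992.0)]
query = (1.0, 990.0)

print(nearest(query, points))  # 1: the large-scale feature dominates the distance

# After dividing each feature by its range, feature 1 matters again.
scaled = [(x / 10, y / 1000) for x, y in points]
scaled_query = (query[0] / 10, query[1] / 1000)
print(nearest(scaled_query, scaled))  # 0
```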
|
8,157
|
When should I apply feature scaling for my data [duplicate]
|
This issue actually seems to be overlooked in many machine learning courses / resources. I ended up writing an article about scaling on my blog.
In short, there are "monotonic transformation" invariant learning methods (decision trees and everything that derives from them), translation invariant learning methods (kNN, SVM with RBF kernel), and the others.
Obviously, the monotonic transformation invariant learning methods are translation invariant.
With the first class, you do not need to do any centering / scaling. With the translation invariant algorithms, centering is useless. Now, for the other methods, it really depends on the data. Usually, it may be worth trying with scaling (especially if variables have different orders of magnitude).
In the general case, I would recommend trying various preprocessings of the data: without scaling, scaling by dividing by the standard deviation, and scaling by dividing by the sum of absolute values of your data (which would make it lie on a simplex). One of them will perform better than the others, but I cannot say which one until I have tried.
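The preprocessing variants suggested above can be sketched as follows (the function names and the example feature are mine):

```python
import statistics

def by_std(xs):
    """Scale by the standard deviation, giving the feature unit variance."""
    s = statistics.pstdev(xs)
    return [x / s for x in xs]

def by_abs_sum(xs):
    """Divide by the sum of absolute values, placing the feature on a simplex."""
    t = sum(abs(x) for x in xs)
    return [x / t for x in xs]

feature = [2.0, 4.0, 6.0, 8.0]
print(by_std(feature))       # unit-variance version
print(by_abs_sum(feature))   # non-negative entries summing to 1
```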
|
8,158
|
When should I apply feature scaling for my data [duplicate]
|
Here's another chemometric application example where feature scaling would be disastrous:
There are lots of classification (qualitative analysis) tasks of the form "test whether some analyte (= substance of interest) content is below (or above) a given threshold (e.g. legal limit)". In this case, the sensors to produce the input data for the classifier would be chosen to have $$signal = f (analyte~concentration)$$, preferably with $f$ being a steep, or even linear, function.
In this situation, feature scaling would essentially erase all relevant information from the raw data.
In general, some questions that help to decide whether scaling is a good idea:
What does normalization do to your data w.r.t. solving the task at hand? Does the task become easier, or do you risk deleting important information?
Does your algorithm/classifier react sensitively to the (numeric) scale of the data? (convergence)
Is the algorithm/classifier heavily influenced by different scales of different features?
If so, do your features share the same (or comparable) scales or even physical units?
Does your classifier/algorithm/actual implementation perform its own normalisation?
|
8,159
|
Why are there no deep reinforcement learning engines for chess, similar to AlphaGo?
|
EDIT (after reading the paper):
I've read the paper thoroughly. Let's start off with what Google claimed in the paper:
They defeated Stockfish with Monte-Carlo-Tree-Search + Deep neural networks
The match was absolutely one-sided, many wins for AlphaZero but none for Stockfish
They were able to do it in just four hours
AlphaZero played like a human
Unfortunately, I don't think it's a good journal paper. I'm going to explain with links (so you know I'm not dreaming):
https://chess.stackexchange.com/questions/19360/how-is-alpha-zero-more-human has my answer on how AlphaZero played like a human
The match was unfair, strongly biased. I quote Tord Romstad, the original programmer for Stockfish.
https://www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-author
The match results by themselves are not particularly meaningful because of the rather strange choice of time controls and Stockfish parameter settings: The games were played at a fixed time of 1 minute/move, which means that Stockfish has no use of its time management heuristics (lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move; at a fixed time per move, the strength will suffer significantly).
Stockfish couldn't have played the best chess with only a minute per move. The program was not designed for that.
Stockfish was running on a regular commercial machine, while AlphaZero was on a 4 million+ TPU machine tuned for AlphaZero. This is like matching your high-end desktop against a cheap Android phone. Tord wrote:
One is a conventional chess program running on ordinary computers, the other uses fundamentally different techniques and is running on custom designed hardware that is not available for purchase (and would be way out of the budget of ordinary users if it were).
Google inadvertently gave 64 threads to a 32 core machine for Stockfish. I quote GM Larry Kaufman (world class computer chess expert):
http://talkchess.com/forum/viewtopic.php?p=741987&highlight=#741987
I agree that the test was far from fair; another issue that hurt SF was that it was apparently run on 64 threads on a 32 core machine, but it would play much better running just 32 threads on that machine, since there is almost no SMP benefit to offset the roughly 5 to 3 slowdown. Also the cost ratio was more than I said; I was thinking it was a 64 core machine, but a 32 core machine costs about half what I guessed. So maybe all in all 30 to 1 isn't so bad an estimate. On the other hand I think you underestimate how much it could be further improved.
Stockfish gave only 1GB hash table. This is a joke... I have a larger hash table for my Stockfish iOS app (Disclaimer: I'm the author) on my iPhone! Tord wrote:
... way too small hash tables for the number of threads ...
1GB hash table is absolutely unacceptable for a match like this. Stockfish would frequently encounter hash collisions. It takes CPU cycles to replace old hash entries.
Stockfish is not designed to run with that many threads. In my iOS chess app, only a few threads are used. Tord wrote:
... was playing with far more search threads than has ever received any significant amount of testing ...
Stockfish was running without an opening book or 6-piece Syzygy endgame tablebase. The sample size was insufficient. The Stockfish version was not the latest. Discussion here.
CONCLUSION
Google has not proven beyond doubt that their methods are superior to Stockfish. Their numbers are superficial and strongly biased towards AlphaZero. Their methods are not reproducible by an independent third party. It's still a bit too early to say deep learning is a superior method to traditional chess programming.
EDIT (Dec 2017):
There is a new paper from Google Deepmind (https://arxiv.org/pdf/1712.01815.pdf) for deep reinforcement learning in chess. From the abstract, the world number one Stockfish chess engine was "convincingly" defeated. I think this is the most significant achievement in computer chess since the 1997 Deep Blue match. I'll update my answer once I read the paper in detail.
Original (before Dec 2017)
Let's clarify your question:
No, chess engines don't use brute-force.
AlphaGo does use tree searching, it uses Monte Carlo Tree Search. Google "Monte Carlo Tree Search alphaGo" if you want to be convinced.
ANN can be used for chess engines:
Giraffe (the link posted by @Tim)
NeuroChess
Would this program perform better than the top chess-engines (and chess players) of today?
Giraffe plays at about International Master level, which is about FIDE 2400 rating. However, Stockfish, Houdini and Komodo all play at about FIDE 3000. This is a big gap. Why? Why not Monte-Carlo Tree Search?
Material heuristic in chess is simple. Most of the time, a chess position is winning/losing by just counting material on the board. Please recall that counting material doesn't work for Go. Material counting is orders of magnitude faster than running neural networks - it can be done with bitboards represented by 64-bit integers. On a 64-bit system, it can be done with only several machine instructions. Searching with the traditional algorithm is much faster than machine learning. Higher nodes per second translate to deeper search.
Similarly, there are very useful and cheap techniques such as null move pruning, late move reduction and killer moves etc. They are cheap to run, and much more efficient than the approach used in AlphaGo.
Static evaluation in chess is fast and useful
Machine learning is useful for optimizing parameters, but we also have SPSA and CLOP for chess.
There are lots of useful metrics for tree reduction in chess. Much less so for Go.
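To illustrate the material-counting point above: with one 64-bit occupancy mask per piece type, the evaluation is a popcount and a few multiply-adds. A toy sketch (the piece values are the conventional ones and the masks are the standard starting squares for White, with a1 as bit 0; this is not taken from any particular engine):

```python
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(bitboards):
    """bitboards maps a piece letter to a 64-bit occupancy mask for one side."""
    return sum(PIECE_VALUES[p] * bin(bb).count("1") for p, bb in bitboards.items())

# White's starting pieces: pawns on rank 2, N/B/R/Q on their home squares.
white = {"P": 0xFF00, "N": 0x42, "B": 0x24, "R": 0x81, "Q": 0x8}
print(material(white))  # 8*1 + 2*3 + 2*3 + 2*5 + 9 = 39
```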
There was research showing that Monte Carlo Tree Search doesn't scale well for chess. Go is a different game from chess. The chess algorithms don't work for Go because chess relies on sharp tactics, and tactics are arguably more important in chess.
Now, we've established that MCTS works well for AlphaGo but less so for chess. Deep learning would be more useful if:
The tuned NN evaluation is better than the traditional algorithms. However ... deep learning is not magic, you as the programmer would still need to do the programming. As mentioned, we have something like SPSA for self-playing for parameters tuning in chess.
Investment, money! There's not much money for machine learning in chess. Stockfish is free and open source, but strong enough to defeat all human players. Why would Google spend millions if anybody can just download Stockfish for free? Who's going to pay for the CPU clusters? Who's going to pay for talents? Nobody wants to do it, because chess is considered a "solved" game.
If deep learning can achieve the following, it'll beat the traditional algorithm:
Given a chess position, "feel" it like a human grandmaster. For example, a human grandmaster wouldn't go into lines that are bad - by experience. Neither the traditional algorithm nor deep learning can achieve that. Your NN model might give you a probability [0..1] for your position, but that's not good enough.
Let me point out:
No. Giraffe (the link posted by @Tim) doesn't use Monte Carlo Tree Search. It uses the regular nega-max algorithm. All it does is replace the regular evaluation function with NN, and it's very slow.
one more:
Although Kasparov was beaten by Deep Blue in the 1997 match. "Humanity" was really lost around 2003-2005, when Kramnik lost a match to Deep Fritz without a win and Michael Adams lost to a cluster machine in a one-sided match. Around that time, Rybka proved too strong for even the best players in the world.
Reference:
http://www.talkchess.com/forum/viewtopic.php?t=64096&postdays=0&postorder=asc&highlight=alphago+chess&topic_view=flat&start=0
I quote:
In chess we have the concept of materiality which already gives a reasonable estimate of how well an engine is doing and can be computed quickly. Furthermore, there are a lot of other aspects of the game that can be encoded in a static evaluation function which couldn't be done in Go. Due to the many heuristics and good evaluation, the EBF (Effective Branching Factor) is quite small. Using a Neural Network as a replacement for the static evaluation function would definitely slow down the engine by quite a lot.
|
8,160
|
Why are there no deep reinforcement learning engines for chess, similar to AlphaGo?
|
DeepBlue has already beaten Kasparov, so this problem was solved with a much simpler approach. This was possible because the number of possible moves in chess is much smaller than in Go, so it is a much simpler problem. Moreover, notice that both NNs and brute force need huge computing resources (here you can find a photo of the computer behind AlphaGo; notice that it uses not even GPUs, but TPUs, for computation). The whole fuss with Go was that when Deep Blue beat Kasparov, the Go community argued that this would not be possible for Go (for lots of different reasons, but to summarize the arguments I'd need to give a detailed introduction to the game of Go). Yes, you can teach a NN to play chess, or Mario, or try teaching it to play StarCraft...
I guess that the reason for it is that you simply don't often hear in mainstream media about cases when people solve problems that were already solved.
Moreover your premise is wrong, Deep Learning is used to play chess, e.g. as described in Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level. See also the corresponding paper, Giraffe: Using Deep Reinforcement Learning to Play Chess.
|
8,161
|
What is the difference between kernel, bias, and activity regularizers, and when to use which?
|
What is the difference between them?
You have the regression equation $y = Wx+b$, where $x$ is the input, $W$ the weights matrix and $b$ the bias.
Kernel Regularizer: Tries to reduce the weights $W$ (excluding bias).
Bias Regularizer: Tries to reduce the bias $b$.
Activity Regularizer: Tries to reduce the layer's output $y$; thus it will reduce the weights and adjust the bias so that $Wx+b$ is small.
When to use which?
Usually, if you have no prior on the distribution that you wish to model, you would only use the kernel regularizer, since a large enough network can still model your function even if the regularization on the weights is large.
If you want the output function to pass through (or have an intercept closer to) the origin, you can use the bias regularizer.
If you want the output to be smaller (or closer to 0), you can use the activity regularizer.
$L_1$ versus $L_2$ regularization
Now, for the $L_1$ versus $L_2$ loss for weight decay (not to be confused with the output loss function).
$L_2$ loss is defined as $w^2$
$L_1$ loss is defined as $|w|$.
where $w$ is a component of the matrix $W$.
The gradient of $L_2$ will be: $2w$
The gradient of $L_1$ will be: $sign(w)$
Thus, for each gradient update with a learning rate $a$, in $L_2$ loss the weights will be reduced by $2aW$, while in $L_1$ loss they will be reduced by $a \cdot sign(W)$.
The effect of $L_2$ loss on the weights is a reduction of large components in the matrix $W$, while $L_1$ loss will make the weights matrix sparse, with many zero values. The same applies to the bias and output respectively using the bias and activity regularizer.
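As a rough illustration, here is a minimal NumPy sketch of the three $L_2$ penalty terms described above for a toy layer $y = Wx + b$ (the layer sizes and the penalty strength $\lambda = 0.01$ are arbitrary choices, not anything from a specific framework):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense layer y = Wx + b with 3 inputs and 2 outputs.
W = rng.normal(size=(2, 3))
b = rng.normal(size=2)
x = rng.normal(size=3)
y = W @ x + b

lam = 0.01  # regularization strength (arbitrary)

kernel_penalty = lam * np.sum(W ** 2)    # L2 on the weights only
bias_penalty = lam * np.sum(b ** 2)      # L2 on the bias only
activity_penalty = lam * np.sum(y ** 2)  # L2 on the layer output

# Each term is simply added to the training loss.
total_extra_loss = kernel_penalty + bias_penalty + activity_penalty
```

The kernel penalty ignores $b$, the bias penalty ignores $W$, and the activity penalty depends on both through $y$.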
|
8,162
|
What is the difference between kernel, bias, and activity regularizers, and when to use which?
|
kernel_regularizer acts on the weights, bias_regularizer acts on the bias, and activity_regularizer acts on y (the layer output).
We apply kernel_regularizer to penalize weights that are very large, which cause the network to overfit; after applying kernel_regularizer the weights become smaller.
We apply bias_regularizer to penalize the bias so that it approaches zero.
activity_regularizer tries to make the output smaller so as to reduce overfitting.
|
8,163
|
What is the difference between kernel, bias, and activity regularizers, and when to use which?
|
I will expand upon @Bloc97's answer about the difference between $L1$ and $L2$ constraints, in order to show why $L1$ may drive some weights to zero.
In the case of $L2$ regularization, the gradient of a single weight is given by
$$ \delta w = u - 2pw$$
where $u$ is the input from the previous layer being multiplied by weight $w$, and $p$ is parameter weighting the $L2$ penalty.
Without loss of generality, assume that $u>0$ and $w>0$.
Then the sign of $\delta w$ is given by
$$ sign(\delta w) = sign(\frac{u}{2p} -w)$$
showing that $L2$ regularization will drive $w$ to grow bigger if $w$ drops below $\frac{u}{2p}$.
On the other hand, in the case of $L1$ regularization, the gradient of a single weight is given by
$$ \delta w = u - p$$
so the sign of $\delta w$ is given by
$$ sign(\delta w) = sign(u-p)$$
showing that $L1$ regularization will drive $w$ to grow smaller when the input $u$ is smaller than the $L1$ regularization parameter $p$.
Effectively, $p$ is functioning as a threshold such that, whenever $u$ is less than $p$, $L1$ regularization will push the weight to grow smaller, and whenever $u$ is greater than $p$, $L1$ regularization will push the weight to grow larger.
The above is a local linear approximation of a nonlinear system: $u$ is actually an average over, for example, all the samples in a batch, and $u$ also changes with each update. Nevertheless, it gives an intuitive understanding of how $L1$ regularization tries to drive some weights to zero (given large enough $p$).
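As a tiny numeric sketch of this difference (the learning rate and penalty strength below are arbitrary), descending the penalty term alone shrinks an $L2$-regularized weight geometrically but never to zero, while an $L1$-regularized weight loses a fixed amount per step and reaches exactly zero:

```python
# Pure-penalty gradient descent on a single weight w, ignoring the data
# term u from the text above.  Learning rate a and penalty strength p
# are arbitrary choices.
a, p = 0.1, 0.5
w_l2 = 1.0
w_l1 = 1.0

for _ in range(100):
    # L2: gradient of p*w^2 is 2*p*w, so the step shrinks w geometrically.
    w_l2 -= a * 2 * p * w_l2
    # L1: gradient of p*|w| is p*sign(w); clamp to zero once the step
    # would overshoot, as proximal methods do.
    if abs(w_l1) <= a * p:
        w_l1 = 0.0
    else:
        w_l1 -= a * p * (1 if w_l1 > 0 else -1)

print(w_l2 > 0)   # True: L2 never reaches zero
print(w_l1)       # 0.0: L1 hits exactly zero
```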
|
8,164
|
Why is the CDF of a sample uniformly distributed
|
Assume $F_X$ is continuous and increasing. Define $Z = F_X(X)$ and note that $Z$ takes values in $[0, 1]$. Then
$$F_Z(x) = P(F_X(X) \leq x) = P(X \leq F_X^{-1}(x)) = F_X(F_X^{-1}(x)) = x.$$
The derivative of $F_Z$ is constant so $Z$ is uniformly distributed.
A more specific way to see this is by observing that for a uniform random variable $U$ taking values in $[0, 1]$, $F_U(x) = \int_0^x f_U(u)\,du = \int_0^x \,du = x$, so $F_Z(x) = F_U(x)$ for every $x\in[0, 1]$. Since $Z$ and $U$ have the same distribution function, $Z$ must also be uniform on $[0, 1]$.
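A quick simulation illustrates the result (the Exponential(1) distribution and the sample size are arbitrary choices): draws pushed through their own CDF, $F(x) = 1 - e^{-x}$, behave like Uniform(0, 1) samples.

```python
import math
import random

random.seed(42)

# Draw from Exponential(1) and push each draw through its own CDF,
# F(x) = 1 - exp(-x).  The transformed values Z should be Uniform(0, 1).
n = 100_000
z = [1.0 - math.exp(-random.expovariate(1.0)) for _ in range(n)]

mean_z = sum(z) / n                          # Uniform(0,1) has mean 1/2
frac_below = sum(v < 0.25 for v in z) / n    # and P(Z < 0.25) = 0.25
print(round(mean_z, 2), round(frac_below, 2))
```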
|
8,165
|
Why is the CDF of a sample uniformly distributed
|
Intuitively, perhaps it makes sense to think of $F$ as a percentile function: $F(x)$ is the proportion of a randomly generated sample from the distribution $F$ expected to fall below $x$. Alternately, $F^{-1}$ (think inverse images, not a proper inverse function per se) is a "quantile" function: $x = F^{-1}(p)$ is the point $x$ below which falls a proportion $p$ of the sample. The functional composition is measurably commutative: $F \circ F^{-1} =_\lambda F^{-1} \circ F$.
The uniform distribution is the only distribution whose quantile function equals its percentile function: both are the identity function. So the image space is the same as the probability space: $F$ maps continuous random variables into a $(0, 1)$ space with equal measure, since for any two percentiles $a < b$ we have $P(F^{-1}(a) < X < F^{-1}(b)) = P(a < F(X) < b) = b-a$.
|
8,166
|
Why is the CDF of a sample uniformly distributed
|
Here's some intuition. Let's use a discrete example.
Say after an exam the students' scores are $X = [10, 50, 60, 90]$. But you want the scores to be more even or uniform. $h(X) = [25, 50, 75, 100]$ looks better.
One way to achieve this is to find the percentiles of each student's score. Score $10$ is $25\%$, score $50$ is $50\%$, and so on. Note that the percentile is just the CDF. So the CDF of a sample is "uniform".
When $X$ is a random variable, the percentile of $X$ is "uniform" (e.g. the number of $X$'s in the 0-25 percentile range should be the same as the number of $X$'s in the 25-50 range). Therefore the CDF of $X$ is uniformly distributed.
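The mapping above is just the empirical CDF; a one-line sketch with the same four scores:

```python
# Empirical CDF of the four exam scores: each score maps to the fraction
# of scores less than or equal to it, i.e. its percentile.
scores = [10, 50, 60, 90]
percentiles = [100 * sum(s <= x for s in scores) / len(scores) for x in scores]
print(percentiles)  # [25.0, 50.0, 75.0, 100.0]
```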
|
8,167
|
Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
|
This is discussed at some length at https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html (search for "singular models"); it's common, especially when there is a small number of groups (although 30 is not particularly small in this context).
One difference between lme4 and many other packages is that many packages, including lme4's predecessor nlme, handle the fact that variance estimates must be non-negative by fitting variance on the log scale: that means that variance estimates can't be exactly zero, just very very small. lme4, in contrast, uses constrained optimization, so it can return values that are exactly zero (see http://arxiv.org/abs/1406.5823 p. 24 for more discussion). http://rpubs.com/bbolker/6226 gives an example.
In particular, looking closely at your among-subject variance results from Stata, you have an estimate of 7.18e-07 (relative to an intercept of -3.4) with a Wald standard deviation of .3783434 (essentially useless in this case!) and a 95% CI listed as "0"; this is technically "non-zero", but it's as close to zero as the program will report ...
It's well known and theoretically provable (e.g. Stram and Lee Biometrics 1994) that the null distribution for variance components is a mixture of a point mass ('spike') at zero and a chi-squared distribution away from zero. Unsurprisingly (but I don't know if it's proven/well known), the sampling distribution of the variance component estimates often has a spike at zero even when the true value is not zero -- see e.g. http://rpubs.com/bbolker/4187 for an example, or the last example in the ?bootMer page:
library(lme4)
library(boot)
## Check stored values from a longer (1000-replicate) run:
load(system.file("testdata","boo01L.RData",package="lme4"))
plot(boo01L,index=3)
|
Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
|
This is discussed at some length at https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html (search for "singular models"); it's common, especially when there is a small number of groups (although 30
|
Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
This is discussed at some length at https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html (search for "singular models"); it's common, especially when there is a small number of groups (although 30 is not particularly small in this context).
One difference between lme4 and many other packages is that many packages, including lme4's predecessor nlme, handle the fact that variance estimates must be non-negative by fitting variance on the log scale: that means that variance estimates can't be exactly zero, just very very small. lme4, in contrast, uses constrained optimization, so it can return values that are exactly zero (see http://arxiv.org/abs/1406.5823 p. 24 for more discussion). http://rpubs.com/bbolker/6226 gives an example.
In particular, looking closely at your among-subject variance results from Stata, you have an estimate of 7.18e-07 (relative to an intercept of -3.4) with a Wald standard deviation of .3783434 (essentially useless in this case!) and a 95% CI listed as "0"; this is technically "non-zero", but it's as close to zero as the program will report ...
It's well known and theoretically provable (e.g. Stram and Lee Biometrics 1994) that the null distribution for variance components is a mixture of a point mass ('spike') at zero and a chi-squared distribution away from zero. Unsurprisingly (but I don't know if it's proven/well known), the sampling distribution of the variance component estimates often has a spike at zero even when the true value is not zero -- see e.g. http://rpubs.com/bbolker/4187 for an example, or the last example in the ?bootMer page:
library(lme4)
library(boot)
## Check stored values from a longer (1000-replicate) run:
load(system.file("testdata","boo01L.RData",package="lme4"))
plot(boo01L,index=3)
|
8,168
|
Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
|
I don't think there's a problem. The lesson from the model output is that although there is "obviously" variation in subject performance, this subject variation can be fully, or virtually fully, explained by the residual variance term alone. There is not enough additional subject-level variation to warrant adding a subject-level random effect to explain the observed variation.
Think of it this way. Imagine we are simulating experimental data under this same paradigm. We set up the parameters so that there is residual variation on a trial-by-trial basis, but 0 subject-level variation (i.e., all subjects have the same "true mean," plus error). Now each time we simulate data from this set of parameters, we will of course find that subjects do not have exactly equal performance. Some end up with low scores, some with high scores. But this is all just because of the residual trial-level variation. We "know" (by virtue of having determined the simulation parameters) that there is not really any subject-level variation.
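This thought experiment is easy to run. Below is a minimal sketch (plain Python rather than R, with made-up parameter values): every subject shares the same true mean and there is only trial-level noise, yet the observed per-subject means still spread out:

```python
import random
import statistics

random.seed(1)

TRUE_MEAN = 0.0    # identical for every subject: zero subject-level variance
RESID_SD = 1.0     # trial-level (residual) noise only
N_SUBJECTS, N_TRIALS = 30, 20

# Simulate each subject's trials with NO subject-level random effect
subject_means = [
    statistics.mean(random.gauss(TRUE_MEAN, RESID_SD) for _ in range(N_TRIALS))
    for _ in range(N_SUBJECTS)
]

# The observed subject means still vary -- but only because of residual noise;
# their spread is about RESID_SD / sqrt(N_TRIALS), not evidence of a subject effect
print(statistics.stdev(subject_means))
```

A mixed model fit to data like these would correctly estimate the subject-level variance as (near) zero, even though the raw subject means clearly differ.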
|
8,169
|
Utility of feature-engineering : Why create new features based on existing features?
|
The simplest example used to illustrate this is the XOR problem (see image below). Imagine that you are given data consisting of $x$ and $y$ coordinates and a binary class to predict. You could expect your machine learning algorithm to find the correct decision boundary by itself, but if you generate the additional feature $z=xy$, the problem becomes trivial, as $z>0$ gives you a nearly perfect decision criterion for classification, and you used just simple arithmetic!
So while in many cases you could expect the algorithm to find the solution, alternatively, by feature engineering you can simplify the problem. Simple problems are easier and faster to solve, and need less complicated algorithms. Simple algorithms are often more robust, their results are often more interpretable, and they are more scalable (less computational resources, less time to train, etc.) and portable. You can find more examples and explanations in the wonderful talk by Vincent D. Warmerdam, given at the PyData conference in London.
Moreover, don't believe everything the machine learning marketers tell you. In most cases, algorithms won't "learn by themselves". You usually have limited time, resources, and computational power, and the data is usually of limited size and noisy; none of these helps.
Taking this to the extreme, you could provide your data as photos of handwritten notes of the experiment results and pass them to a complicated neural network. It would first have to learn to recognize the data in the pictures, then learn to understand it, and then make predictions. To do so, you would need a powerful computer, lots of time for training and tuning the model, and huge amounts of data, because you are using a complicated neural network. Providing the data in a computer-readable format (as tables of numbers) simplifies the problem tremendously, since you don't need all the character recognition. You can think of feature engineering as the next step, where you transform the data in such a way as to create meaningful features, so that your algorithm has even less to figure out on its own. To give an analogy, it is like wanting to read a book in a foreign language: you would need to learn the language first, versus reading the book translated into a language that you understand.
In the Titanic data example, your algorithm would need to figure out that summing family members makes sense, to get the "family size" feature (yes, I'm personalizing it here). This is an obvious feature for a human, but it is not obvious if you see the data as just columns of numbers. If you don't know which columns are meaningful when considered together with other columns, the algorithm could figure it out by trying each possible combination of such columns. Sure, we have clever ways of doing this, but still, it is much easier if the information is given to the algorithm right away.
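The XOR point above can be sketched in a few lines (plain Python; the data and thresholds are made up for illustration): no threshold on a raw coordinate separates the classes, but thresholding the engineered feature $z = xy$ at zero is perfect:

```python
import random

random.seed(0)
# XOR-style data: class 1 exactly when x and y have the same sign
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [1 if x * y > 0 else 0 for x, y in points]

def accuracy(predict):
    return sum(predict(x, y) == c for (x, y), c in zip(points, labels)) / len(points)

# Thresholding a single raw coordinate: near chance level (~0.5)
acc_raw = max(accuracy(lambda x, y, t=t: 1 if x > t else 0)
              for t in (-0.5, 0.0, 0.5))

# Thresholding the engineered feature z = x*y: trivially perfect
acc_z = accuracy(lambda x, y: 1 if x * y > 0 else 0)
print(acc_raw, acc_z)  # acc_z is exactly 1.0
```

A linear classifier on the raw $(x, y)$ inputs faces the same limitation as the single-coordinate thresholds here; with $z$ added, one coefficient suffices.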
|
8,170
|
Utility of feature-engineering : Why create new features based on existing features?
|
Well, if you plan to use a simple, linear classifier, it makes perfect sense to generate new features which are a non-linear function of the existing ones, especially if your domain knowledge indicates that the resulting feature will be meaningful and informative. Note that a linear classifier cannot consider those complex features unless you explicitly provide them.
Ideally, if you use a sufficiently powerful nonlinear classification algorithm, it should be able to create a decision boundary which considers arbitrary non-linear transformations of the input features, if they are informative for classification. However, in practice most non-linear classifiers only consider certain types of transformations. For instance, a polynomial kernel SVM will consider polynomial interactions between features, but maybe a more informative feature can be created by applying other types of transformations...
In short, if domain knowledge indicates that a hand-crafted non-linear combination of features might be informative, it makes sense to add that into the existing set of features.
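As a small illustration of that point (plain Python, synthetic data): a target that depends only on the interaction $x_1 x_2$ shows essentially no linear relationship with either raw feature, but a perfect one with the hand-crafted product feature — so a linear model can only use it if you supply it:

```python
import random

random.seed(3)
n = 500
x1 = [random.uniform(-1, 1) for _ in range(n)]
x2 = [random.uniform(-1, 1) for _ in range(n)]
target = [a * b for a, b in zip(x1, x2)]  # depends only on the interaction

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

z = [a * b for a, b in zip(x1, x2)]  # hand-crafted interaction feature
print(corr(x1, target))  # close to 0: the raw feature looks uninformative
print(corr(z, target))   # essentially 1: the engineered feature IS the signal
```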
|
8,171
|
Utility of feature-engineering : Why create new features based on existing features?
|
It is true that some machine learning models can handle non-linearity and interactions between variables; however, depending on the situation, I see three reasons it becomes necessary.
Some models, like linear regression, don't handle non-linearity automatically; in that case, you need to create extra features to help. For example: suppose you have a dataset in which all the $Y = 1$ values of the target variable are clustered in a circle-like area at the center.
If you are given only the two features $x_1$ and $x_2$, a simple linear model $y = x_0 + c_1x_1 + c_2x_2$ will not find any way to classify the target variable. So instead you need new quadratic features to capture the non-linearity: $y = x_0 + c_1x_1^2 + c_2x_2^2$.
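A quick sketch of that circle example (plain Python, synthetic data; radius and sample size are made up): thresholding a raw coordinate is near chance, while the quadratic feature $x_1^2 + x_2^2$ separates the classes cleanly:

```python
import random

random.seed(2)
# Class 1 inside a circle of radius 0.5 centred at the origin, class 0 outside
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(300)]
labels = [1 if a * a + b * b < 0.25 else 0 for a, b in pts]

# Linear rule on a raw coordinate: roughly chance-level accuracy
acc_raw = sum((1 if a > 0 else 0) == c for (a, b), c in zip(pts, labels)) / len(pts)

# Rule on the engineered quadratic feature x1^2 + x2^2: perfect by construction
acc_quad = sum((1 if a * a + b * b < 0.25 else 0) == c
               for (a, b), c in zip(pts, labels)) / len(pts)
print(acc_raw, acc_quad)  # acc_quad is exactly 1.0
```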
If you know in advance that some features are important (from business knowledge or experience), it may help to create them up front to speed up the runtime of the model and make the problem easier for it. For example, take the Titanic data and suppose you are using a decision tree classification model. If you know that old ladies (age & gender) are more likely to survive, then by creating a single feature that captures that information, your tree can make one split on the new variable instead of two splits on the two variables. It may speed up computation if you know in advance that the feature is significant.
In the real world, you won't get a single dataset like Kaggle provides. Instead, you get information from all over the place. For example, if you want to predict customer attrition for an online retail company like Amazon, you have customer demographic info and purchase transaction info. You need to generate a lot of features from different sources; in this case, you will find that many useful features can be obtained or aggregated at the transaction level. As Andrew Ng puts it: often, the ability to do feature engineering defines the success or failure of a machine learning project.
|
8,172
|
How do I fit a multilevel model for over-dispersed poisson outcomes?
|
You can fit a multilevel GLMM with a Poisson distribution (with over-dispersion) using R in multiple ways. A few R packages are lme4, MCMCglmm, arm, etc. A good reference is Gelman and Hill (2007).
I will give an example of doing this using the rjags package in R. It is an interface between R and JAGS (like OpenBUGS or WinBUGS).
$$n_{ij} \sim \mathrm{Poisson}(\theta_{ij})$$
$$\log \theta_{ij} = \beta_0 + \beta_1 \mbox{ } \mathtt{Treatment}_{i} + \delta_{ij}$$
$$\delta_{ij} \sim N(0, \sigma^2_{\epsilon})$$
$$i=1 \ldots I, \quad j = 1\ldots J$$
$\mathtt{Treatment}_i = 0, 1, \ldots, J-1$ according to whether the $i^{th}$ observation belongs to treatment group $1, 2, \ldots, J$
The $\delta_{ij}$ term in the model above captures the overdispersion. But there is nothing stopping you from modeling correlation between individuals (if you don't believe that individuals are really independent) and within individuals (repeated measures). Also, the rate parameter may be scaled by some other constant, as in rate models. Please see Gelman and Hill (2007) for more. Here is the JAGS code for the simple model:
data{
for (i in 1:I){
    nCount[i,1] <- obsTrt1[i]
    nCount[i,2] <- obsTrt2[i]
## notice I have only 2 treatments and I individuals
}
}
model{
for (i in 1:I){
nCount[i, 1] ~ dpois( means[i, 1] )
nCount[i, 2] ~ dpois( means[i, 2] )
log( means[i, 1] ) <- mu + b * trt1[i] + disp[i, 1]
log( means[i, 2] ) <- mu + b * trt2[i] + disp[i, 2]
disp[i, 1] ~ dnorm( 0, tau)
disp[i, 2] ~ dnorm( 0, tau)
}
mu ~ dnorm( 0, 0.001)
b ~ dnorm(0, 0.001)
tau ~ dgamma( 0.001, 0.001)
}
Here is the R code to use it (assuming the model file is named overdisp.bug):
library(rjags)

dataFixedEffect <- list("I" = 10,
                        "obsTrt1" = obsTrt1, #vector of n_i1
                        "obsTrt2" = obsTrt2, #vector of n_i2
                        "trt1" = trt1,       #vector of 0
                        "trt2" = trt2)       #vector of 1
initFixedEffect <- list(mu = 0.0 , b = 0.0, tau = 0.01)
simFixedEffect <- jags.model(file = "overdisp.bug",
data = dataFixedEffect,
inits = initFixedEffect,
n.chains = 4,
n.adapt = 1000)
sampleFixedEffect <- coda.samples(model = simFixedEffect,
variable.names = c("mu", "b", "means"),
n.iter = 1000)
meansTrt1 <- as.matrix(sampleFixedEffect[ , 2:11])
meansTrt2 <- as.matrix(sampleFixedEffect[ , 12:21])
You can play around with your parameters' posteriors, and you can introduce more parameters to make your modeling more precise (we like to think this). Basically, you get the idea.
For more details on using rjags and JAGS, please see John Myles White's page
|
8,173
|
How do I fit a multilevel model for over-dispersed poisson outcomes?
|
No need to leave the lme4 package to account for overdispersion; just include a random effect for observation number. The BUGS/JAGS solutions mentioned are probably overkill for you, and if they aren't, you should have the easy-to-fit lme4 results for comparison.
data$obs_effect <- factor(1:nrow(data))  # one level per observation
overdisp.fit <- glmer(y ~ 1 + x + (1 | obs_effect) + (1 + x | subject_id),
                      data = data, family = poisson)
This is discussed informally at http://article.gmane.org/gmane.comp.lang.r.lme4.devel/4727 and academically by Elston et al. (2001).
|
8,174
|
How do I fit a multilevel model for over-dispersed poisson outcomes?
|
I think that the glmmADMB package is exactly what you are looking for.
install.packages("glmmADMB",
repos="http://r-forge.r-project.org")
From a Bayesian point of view, you can use the MCMCglmm package or the BUGS/JAGS software; they are very flexible and you can fit this kind of model (and the syntax is close to R's).
EDIT thanks to @randel
If you want to install the glmmADMB and R2admb packages it is better to do:
install.packages("glmmADMB", repos="http://glmmadmb.r-forge.r-project.org/repos")
install.packages("R2admb")
|
8,175
|
How do I fit a multilevel model for over-dispersed poisson outcomes?
|
Good suggestions so far. Here's one more. You can fit a hierarchical negative binomial regression model using the rhierNegbinRw function of the bayesm package.
|
8,176
|
FPR (false positive rate) vs FDR (false discovery rate)
|
I'm going to explain these in a few different ways because it helped me understand it.
Let's take a specific example. You are doing a test for a disease on a group of people. Now let's define some terms. For each of the following, I am referring to an individual who has been tested:
True positive (TP): Has the disease, identified as having the disease
False positive (FP): Does not have the disease, identified as having the disease
True negative (TN): Does not have the disease, identified as not having the disease
False negative (FN): Has the disease, identified as not having the disease
Visually, this is typically shown using the confusion matrix:
The false positive rate (FPR) is the number of people who do not have the disease but are identified as having the disease (all FPs), divided by the total number of people who do not have the disease (includes all FPs and TNs).
$$
FPR = \frac{FP}{FP + TN}
$$
The false discovery rate (FDR) is the number of people who do not have the disease but are identified as having the disease (all FPs), divided by the total number of people who are identified as having the disease (includes all FPs and TPs).
$$
FDR = \frac{FP}{FP + TP}
$$
So, the difference is in the denominator i.e. what are you comparing the number of false positives to?
The FPR is telling you the proportion of all the people who do not have the disease who will be identified as having the disease.
The FDR is telling you the proportion of all the people identified as having the disease who do not have the disease.
Both are therefore useful, distinct measures of failure. Depending on the situation and the proportions of TPs, FPs, TNs and FNs, you may care more about one than the other.
Let's now put some numbers to this. You have measured 100 people for the disease and you get the following:
True positives (TPs): 12
False positives (FPs): 4
True negatives (TNs): 76
False negatives (FNs): 8
To show this using the confusion matrix:
Then,
$$
FPR = \frac{FP}{FP + TN} = \frac{4}{4 + 76} = \frac{4}{80} = 0.05 = 5\%
$$
$$
FDR = \frac{FP}{FP + TP} = \frac{4}{4 + 12} = \frac{4}{16} = 0.25 = 25\%
$$
In other words,
The FPR tells you that 5% of the people who did not have the disease were identified as having the disease. The FDR tells you that 25% of the people who were identified as having the disease actually did not have it.
EDIT based on @amoeba's comment (also the numbers in the example above):
Why is the distinction so important? In the paper you link to, Storey & Tibshirani point out that there was a strong focus on the FPR (or type I error rate) in genome-wide studies, and that this was leading people to make flawed inferences. This is because once you find $n$ significant results by fixing the FPR, you really, really need to consider how many of your significant results are incorrect. In the above example, 25% of the 'significant results' would have been wrong!
[Side note: Wikipedia points out that though the FPR is mathematically equivalent to the type I error rate, it is considered conceptually distinct because one is typically set a priori while the other is typically used to measure the performance of a test afterwards. This is important but I will not discuss that here].
And for a bit more completeness:
Obviously, FPR and FDR are not the only relevant metrics you can calculate with the four quantities in the confusion matrix. Of the many possible metrics that may be useful in different contexts, two relatively common ones that you are likely to encounter are:
True Positive Rate (TPR), also known as sensitivity, is the proportion of people who have the disease who are identified as having the disease.
$$
TPR = \frac{TP}{TP + FN}
$$
True Negative Rate (TNR), also known as specificity, is the proportion of people who do not have the disease who are identified as not having the disease.
$$
TNR = \frac{TN}{TN + FP}
$$
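The arithmetic above is easy to check in code. Here is a minimal helper (plain Python; the counts are the ones from the worked example):

```python
def rates(tp, fp, tn, fn):
    """FPR, FDR, TPR (sensitivity) and TNR (specificity) from confusion-matrix counts."""
    return {
        "FPR": fp / (fp + tn),  # false positives among all disease-free people
        "FDR": fp / (fp + tp),  # false positives among all positive calls
        "TPR": tp / (tp + fn),  # sensitivity
        "TNR": tn / (tn + fp),  # specificity
    }

r = rates(tp=12, fp=4, tn=76, fn=8)
print(r)  # {'FPR': 0.05, 'FDR': 0.25, 'TPR': 0.6, 'TNR': 0.95}
```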
|
8,177
|
FPR (false positive rate) vs FDR (false discovery rate)
|
You should examine the table in https://en.wikipedia.org/wiki/Confusion_matrix. Please note the FPR is placed vertically (its numerator and denominator come from the "actual negative" column), while the FDR is horizontal (they come from the "predicted positive" row).
A false positive (FP) happens if your null hypothesis is true but you reject it.
A false discovery happens if you call a result significant when the null hypothesis was actually true.
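To make the vertical/horizontal point concrete, here is a small sketch of mine (not from the linked page): the confusion matrix as a 2×2 array with rows indexing the predicted condition and columns the actual condition, consistent with the "vertical FPR, horizontal FDR" description.

```python
import numpy as np

# Rows = predicted condition, columns = actual condition,
# so cm[i, j] counts cases with prediction i and true condition j.
#                 actual +   actual -
cm = np.array([[  12,         4],    # predicted +  (TP, FP)
               [   8,        76]])   # predicted -  (FN, TN)

FP = cm[0, 1]
FPR = FP / cm[:, 1].sum()  # down the "actual negative" column: FP / (FP + TN)
FDR = FP / cm[0, :].sum()  # across the "predicted positive" row: FP / (FP + TP)
print(FPR, FDR)  # 0.05 0.25
```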
|
8,178
|
Why should the frequency of heads in a coin toss converge to anything at all?
|
This is an excellent question, and it shows that you are thinking about important foundational matters in simple probability problems.
The convergence outcome follows from the condition of exchangeability. If the coin is tossed in a manner that is consistent from flip-to-flip, then one might reasonably assume that the resulting sequence of coin tosses is exchangeable, meaning that the probability of any particular sequence of outcomes does not depend on the order those outcomes occur in. For example, the condition of exchangeability would say that the outcome $H \cdot H \cdot T \cdot T \cdot H \cdot T$ has the same probability as the outcome $H \cdot H \cdot H \cdot T \cdot T \cdot T$, and exchangeability of the sequence would mean that this is true for strings of any length which are permutations of each other. The assumption of exchangeability is the operational assumption that reflects the idea of "repeated trials" of an experiment --- it captures the idea that nothing is changing from trial-to-trial, such that sets of outcomes which are permutations of one another should have the same probability.
Now, if this assumption holds then the sequence of outcomes will be IID (conditional on the underlying distribution) with fixed probability for heads/tails which applies to all the flips. (This is due to a famous mathematical result called de Finetti's representation theorem; see related questions here and here.) The strong law of large numbers then kicks in to give you the convergence result ---i.e., the sample proportion of heads/tails converges to the (fixed) probability of heads/tails with probability one.
What if exchangeability doesn't hold? Can there be a lack of convergence? Although there are also weaker assumptions that can allow similar convergence results, if the underlying assumption of exchangeability does not hold ---i.e., if the probability of a sequence of coin-toss outcomes depends on their order--- then it is possible to get a situation where there is no convergence.
As an example of the latter, suppose that you have a way of tossing a coin that can bias it to one side or the other ---e.g., you start with a certain face of the coin upwards and you flip it in a way that gives a small and consistent number of rotations before landing on a soft surface (where it doesn't bounce). Suppose that this method is sufficiently effective that you can bias the coin 70-30 in favour of one side. (For reasons why it is difficult to bias a coin-flip in practice, see e.g., Gelman and Nolan 2002 and Diaconis, Holmes and Montgomery 2007.) Now, suppose you were to execute a sequence of coin tosses in such a way that you start off biasing your tosses towards heads, but each time the sample proportion of heads exceeds 60% you change to bias towards tails, and each time the sample proportion of tails exceeds 60% you change to bias towards heads. If you were to execute this method then you would obtain a sample proportion that "oscillates" endlessly between about 40-60% heads without ever converging to a fixed value. In this instance you can see that the assumption of exchangeability does not hold, since the order of outcomes gives information on your present flipping-method (which therefore affects the probability of a subsequent outcome).
Illustrating non-convergence for the biased-flipping mechanism: We can implement a computational simulation of the above flipping mechanism using the R code below. Here we create a function oscillating.flips that can implement that method for a given biasing probability, switching probability and starting side.
oscillating.flips <- function(n, big.prob, switch.prob, start.head = TRUE) {
#Set vector of flip outcomes and sample proportions
FLIPS <- rep('', n)
PROPS <- rep(NA, n)
#Set starting values
VALS <- c('H', 'T')
HEAD <- start.head
#Execute the coin flips
for (k in 1:n) {
#Set probability and perform the coin flip
PROB <- c(big.prob, 1-big.prob)
if (!HEAD) { PROB <- rev(PROB) }
FLIPS[k] <- sample(VALS, size = 1, prob = PROB)
#Update sample proportion and execute switch (if triggered)
if (k == 1) {
PROPS[k] <- 1*(FLIPS[k] == 'H')
} else {
PROPS[k] <- ((k-1)*PROPS[k-1] + (FLIPS[k] == 'H'))/k }
if (PROPS[k] > switch.prob) { HEAD <- FALSE }
if (PROPS[k] < 1-switch.prob) { HEAD <- TRUE } }
#Return the flips
data.frame(flip = 1:n, outcome = FLIPS, head.props = PROPS) }
We implement this function using the mechanism described above (70% weighting towards biased side, switching probability of 60%, and starting biased to heads) and we get $n=10^6$ simulated coin-flips for the problem, with a running output of the sample proportion of heads. We plot these sample proportions against the number of flips, with the latter shown on a logarithmic scale. As you can see from the plot, the sample proportion does not converge to any fixed value --- instead it oscillates between the switching probabilities as expected.
#Generate coin-flips
set.seed(187826487)
FLIPS <- oscillating.flips(10^6, big.prob = 0.7, switch.prob = 0.6, start.head = TRUE)
#Plot the resulting sample proportion of heads
library(ggplot2)
THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'),
plot.subtitle = element_text(hjust = 0.5, face = 'bold'))
FIGURE <- ggplot(aes(x = flip, y = head.props), data = FLIPS) +
geom_point() +
geom_hline(yintercept = 0.5, linetype = 'dashed', colour = 'red') +
scale_x_log10(breaks = scales::trans_breaks("log10", function(x) 10^x),
labels = scales::trans_format("log10", scales::math_format(10^.x))) +
scale_y_continuous(limits = c(0, 1)) +
THEME + ggtitle('Example of Biased Coin-Flipping Mechanism') +
labs(subtitle = '(Proportion of heads does not converge!) \n') +
xlab('Number of Flips') + ylab('Sample Proportion of Heads')
FIGURE
|
8,179
|
Why should the frequency of heads in a coin toss converge to anything at all?
|
If you assume the coin tosses are independent of each other, and that you are equally likely to obtain heads on any one coin toss, then it isn't an axiom, and in fact follows from the strong law of large numbers.
EDIT: Just to answer some of the other things in your post, statistics is built upon probability theory, so if we empirically observe the frequency of heads converging to a specific value then that's fine, but we would like the empirical observations to agree with our probabilistic model.
And just to clear up a potential confusion before it arises, the strong law of large numbers isn't just related to coin tossing, and is an incredibly versatile and useful theorem (for example, under the same assumptions listed above, we could apply the same argument to dice throwing, and even continuous measurements!)
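As an illustration of that versatility, here is a quick Python sketch of mine (not part of the original answer) applying the same argument to dice throwing: under independence, the running mean of fair die rolls settles near the expected value $(1+2+3+4+5+6)/6 = 3.5$.

```python
import random

# Strong law of large numbers applied to dice throwing: the sample
# mean of independent fair die rolls approaches the expectation 3.5.
random.seed(42)
total, n = 0, 0
for _ in range(1_000_000):
    total += random.randint(1, 6)
    n += 1
    if n in (10, 1_000, 100_000, 1_000_000):
        print(f"n = {n:>9,}: mean = {total / n:.4f}")
```

The printed means drift toward 3.5 as $n$ grows, just as the theorem predicts.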
|
8,180
|
Why should the frequency of heads in a coin toss converge to anything at all?
|
I want to second the answer by jacobe, but also add a bit of detail.
Assume that there is some probability $p$ of a coin toss being a head. Assign the outcome head to a score of $1$ and tails to a score of $0$. Note that the average score from a large number of coin tosses is also the average frequency of getting heads.
The "expected" score from any given coin toss is
\begin{equation}
p\cdot 1+(1-p)\cdot 0
\end{equation}
Of course, this simplifies to
\begin{equation}
p
\end{equation}
due to our scoring system. The law of large numbers tells us that, as we flip the coin more times, we should eventually see the average approaching this expectation value (assuming all coin tosses are independent of each other).
Now, if your question is: why do we even believe there is a probability
$p$ of getting heads, then I'm not sure there is a justification. At least, I haven't heard one. This comes back to what jacobe said about probability vs.
statistics. The existence of $p$ and the independence of events are assumptions of our model. If the statistical results seem consistent, that's a good sign for the model.
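To see the statistics "check out" against the model, here is a small Python simulation of mine (with an arbitrary choice of $p = 0.3$): scoring heads as 1 and tails as 0, the average score approaches the expectation $p \cdot 1 + (1-p) \cdot 0 = p$.

```python
import random

# Score heads as 1 and tails as 0; the average score (which is the
# average frequency of heads) approaches the expectation p.
random.seed(0)
p = 0.3                 # assumed probability of heads (arbitrary choice)
n = 200_000
flips = [1 if random.random() < p else 0 for _ in range(n)]
print(sum(flips) / n)   # close to p = 0.3
```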
|
8,181
|
Why should the frequency of heads in a coin toss converge to anything at all?
|
As long as your coin is memoryless (each flip independent),* such a limiting probability exists because randomly chosen variation tends to cancel itself out. Mathematically, this fact is represented in the theorem that independent variances add: $$\mathrm{Var}(A+B)=\mathrm{Var}(A)+\mathrm{Var}(B)$$ Since the variation (standard deviation) of $A$ (or $B$) is the square root (a concave function) of the variance, the variation of a sum grows more slowly than the sum itself.
The average takes a sum of random variables and then divides by the number of random variables, which is proportional to the sum. So the random variation in that average will be small.
We can formalize that argument as follows:
Remember that if a random variable $Y$ has mean $\mu$ and variance $\sigma^2$, then Chebyshev's inequality tells us $$\mathbb{P}[{|Y-\mu|>\delta}]\leq\frac{\sigma^2}{\delta^2}\tag{1}$$ Now, given $N$ coin flips $\{X_n\}_{n=1}^N$ (heads is $1$, tails $0$), the average number of heads is the random variable $$Y_N=\frac{1}{N}\sum_{n=1}^N{X_n}$$
Now, we only ever measure $Y$ up to some accuracy. For example, have you ever really checked more than the 6th or 7th decimal place of a number? So take $\delta=10^{-8}$ in (1). The idea is to show that $\mathrm{Var}{(Y_N)}\to0$; then (1) will tell us that $$\mathbb{P}\left[{\left|Y_N-\frac{1}{2}\right|>10^{-8}}\right]\to0$$ (for a fair coin, where $\mu=\frac{1}{2}$) and so any variation in the average will be indistinguishable to us.
So, OK, let's compute that variance. Since each coin flip is independent, $$\mathrm{Var}{(Y_N)}=\frac{1}{N^2}\sum_{n=1}^N{\mathrm{Var}(X_n)}=\frac{1}{N^2}\cdot N\mathrm{Var}(X_1)=\frac{\mathrm{Var}(X_1)}{N}\to0$$ as $N\to\infty$.
(One can generalize this argument into a proof of Kolmogorov's Strong Law of Large Numbers.)
* The claim also holds for exchangeable random variables, as in Ben's answer. But the proof I know (page 185) uses a martingale convergence argument, which is complicated to explain if you haven't seen it.
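A short empirical check of the variance computation above (a sketch of mine, for a fair coin): estimating $\mathrm{Var}(Y_N)$ by Monte Carlo shows it shrinking roughly like $\mathrm{Var}(X_1)/N = 0.25/N$.

```python
import random

random.seed(7)

def var_of_mean(N, reps=2000):
    """Monte Carlo estimate of Var(Y_N), the variance of the
    sample proportion of heads in N fair coin flips."""
    means = [sum(random.random() < 0.5 for _ in range(N)) / N
             for _ in range(reps)]
    mbar = sum(means) / reps
    return sum((m - mbar) ** 2 for m in means) / reps

for N in (10, 100, 1000):
    print(N, var_of_mean(N))   # roughly 0.25 / N in each case
```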
|
8,182
|
Why should the frequency of heads in a coin toss converge to anything at all?
|
The set of principles that applies to coin tossing, from which we can derive that, is indeed a set of axioms. That means they’re open to question only inasmuch as the whole idea is.
I see a hugely important secondary question as whether this is about prediction, measurement or explanation.
Whichever matters most, how is this not a sophisticated version of the standard high-school query, has a single toss the same odds as one in a sequence and then, which sequence?
After User7344, Ben asks why there should be a single probability that applies to all coin tosses. Rather, how could there not be if “all coin tosses” are equal?
How is it not axiomatic, in- or outside chaos theory and the nature of randomness, that from a choice of two outcomes with all things equal, the likelihood of either is 1/2?
At the risk of being pedantic, the actual wording of the question negates much of its value. That looks like a linguistic niggle, but consider the detailed discussion that’s developed from it! I hope we all saw the same meaning, but how could I prove that? In reality, there can be neither frequency nor convergence in “a coin toss”; only in a series of tosses. Take that same ambiguity back from linguistics to probability, and what certainty remains?
In my view “what we empirically observe” is a starting point, often to be explained but rarely needful of justification.
One thing I suggest most people won’t accept until they’ve tried it, is that whether convergence is what we “empirically observe” depends on how patient we are.
I built a simple roulette simulator and until someone shows how they don’t, I suggest red/black spins follow the same probabilities as heads/tails tosses.
Will you take the time to guess how long an uninterrupted string of either outcome was not uncommon?
I thought that might be four or five, perhaps six; but in fact, after millions of runs, streaks of anything less than 14 turned out to be common. I’m told that both the result, and my amazement at it, are frequently seen in studies of advanced maths.
If any sequence is equally likely and all are combined, how could the results not converge? How would that not describe the idea of an “average”?
If any sequence is equally likely and they are not combined, how could 13-in-a-row not skew the empirical view of whichever observer saw it?
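The streak observation is easy to reproduce. Here is a quick Python sketch of mine, using coin flips rather than roulette spins (which this answer argues follow the same probabilities): the longest run of a single outcome in $n$ fair flips grows like $\log_2 n$, so runs of 14 or more are routine in a million flips.

```python
import random

# Track the longest run of identical outcomes in a million fair flips.
random.seed(123)
n = 1_000_000
longest = run = 0
prev = None
for _ in range(n):
    flip = random.random() < 0.5
    run = run + 1 if flip == prev else 1
    prev = flip
    longest = max(longest, run)
print(longest)   # typically around log2(n) ~ 20 for a million flips
```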
|
Why should the frequency of heads in a coin toss converge to anything at all?
|
The set of principles that applies to coin tossing from which we can derive that, are indeed axioms. That means they’re open to question only inasmuch as the whole idea.
I see a hugely important secon
|
Why should the frequency of heads in a coin toss converge to anything at all?
The set of principles that applies to coin tossing from which we can derive that, are indeed axioms. That means they’re open to question only inasmuch as the whole idea.
I see a hugely important secondary question as whether this is about prediction, measurement or explanation.
Whichever matters most, how is this not a sophisticated version of the standard high-school query, has a single toss the same odds as one in a sequence and then, which sequence?
After User7344, Ben asks why there should be a single probability that applies to all coin tosses. Rather, how could there not be if “all coin tosses” are equal?
How is it not axiomatic, in- or outside chaos theory and the nature of randomness that from a choice of two outcomes with all things equal, the likelihood of either is 1/2?
At risk of being pedantic, the actual wording of the Question negates much of its value. That looks like a linguistic niggle but consider the detailed discussion that’s developed from it! I hope we all saw the same meaning but how could I prove that. In reality, there can be neither frequency nor convergence in “a coin toss”; only in a series of tosses. Take that same ambiguity back from linguistics to probability, and what certainty remains?
In my view “what we empirically observe” is a starting point, often to be explained but rarely needful of justification.
One thing I suggest most people won’t accept until they’ve tried it, is that whether convergence is what we “empirically observe” depends on how patient we are.
I built a simple roulette simulator and until someone shows how they don’t, I suggest red/black spins follow the same probabilities as heads/tails tosses.
Will you take the time to guess how long an uninterrupted string of either outcome was not uncommon?
I thought that might be four or five; perhaps six but in fact after millions of runs, nothing less than 14 turned out to be uncommon. I’m told that both the result, and my amazement at it, are frequently seen in studies of advanced maths.
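For anyone curious, such a simulation is tiny to write. Here is a sketch with fair Bernoulli trials standing in for red/black spins (the function name and seed are mine, purely illustrative):

```python
import random

def longest_run(n_flips, seed=42):
    """Toss a fair coin n_flips times and return the longest
    uninterrupted run of either outcome."""
    rng = random.Random(seed)
    longest = current = 0
    prev = None
    for _ in range(n_flips):
        flip = rng.random() < 0.5
        # extend the current run if the outcome repeats, else restart it
        current = current + 1 if flip == prev else 1
        prev = flip
        longest = max(longest, current)
    return longest
```

Running this for a million flips routinely turns up runs well into the teens, matching the surprise described above.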
If any sequence is equally likely and all are combined, how could the results not converge? How would that not describe the idea of an “average”?
If any sequence is equally likely and they are not combined, how could 13-in-a-row not skew the empirical view of whichever observer saw it?
|
8,183
|
Is there any relationship among cosine similarity, pearson correlation, and z-score?
|
The cosine similarity between two vectors $a$ and $b$ is just the cosine of the angle between them
$$\cos\theta = \frac{a\cdot b}{\lVert{a}\rVert \, \lVert{b}\rVert}$$
In many applications that use cosine similarity, the vectors are non-negative (e.g. a term frequency vector for a document), and in this case the cosine similarity will also be non-negative.
For a vector $x$ the "$z$-score" vector would typically be defined as
$$z=\frac{x-\bar{x}}{s_x}$$
where $\bar{x}=\frac{1}{n}\sum_ix_i$ and $s_x^2=\overline{(x-\bar{x})^2}$ are the mean and variance of $x$. So $z$ has mean 0 and standard deviation 1, i.e. $z_x$ is the standardized version of $x$.
For two vectors $x$ and $y$, their correlation coefficient would be
$$\rho_{x,y}=\overline{(z_xz_y)}$$
Now if the vector $a$ has zero mean, then its variance will be $s_a^2=\frac{1}{n}\lVert{a}\rVert^2$, so its unit vector and z-score will be related by
$$\hat{a}=\frac{a}{\lVert{a}\rVert}=\frac{z_a}{\sqrt n}$$
So if the vectors $a$ and $b$ are centered (i.e. have zero means), then their cosine similarity will be the same as their correlation coefficient.
TL;DR Cosine similarity is a dot product of unit vectors. Pearson correlation is cosine similarity between centered vectors. The "Z-score transform" of a vector is the centered vector scaled to a norm of $\sqrt n$.
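A quick numeric check of these identities (variable names are mine; NumPy's default population standard deviation matches the $s_x$ defined above):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: dot product over the product of norms."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)
a = rng.normal(size=50)
b = 0.5 * a + rng.normal(size=50)

# Pearson correlation equals cosine similarity of the centered vectors
r = np.corrcoef(a, b)[0, 1]
assert np.isclose(cosine(a - a.mean(), b - b.mean()), r)

# the z-score vector is the centered vector scaled to norm sqrt(n)
z = (a - a.mean()) / a.std()
assert np.isclose(np.linalg.norm(z), np.sqrt(a.size))
```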
|
8,184
|
Is there any relationship among cosine similarity, pearson correlation, and z-score?
|
To convert a z-score to a cosine, use the cumulative distribution function for a Gaussian distribution. Find the value of the Gaussian cdf corresponding to the z-score value. Subtract 0.5 from that value, multiply by 2, and assume that value is the sine of an angle. Use the arcsine function to find that angle. Then take the cosine of that angle. Voila!
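A literal transcription of that recipe (the function name is mine), using the standard normal CDF written via the error function:

```python
from math import erf, sqrt, asin, cos

def z_to_cosine(z):
    """Follow the recipe above: Gaussian CDF -> shift/scale -> arcsine -> cosine."""
    p = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z
    s = 2 * (p - 0.5)                 # maps (0, 1) onto (-1, 1); treated as a sine
    return cos(asin(s))               # equivalently sqrt(1 - s*s)
```

Note that since cos(arcsin(s)) = sqrt(1 - s²), the result is symmetric in z and lies in [0, 1].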
|
8,185
|
What is one class SVM and how does it work?
|
The problem addressed by One Class SVM, as the documentation says, is novelty detection. The original paper describing how to use SVMs for this task is "Support Vector Method for Novelty Detection".
The idea of novelty detection is to detect rare events, i.e. events that happen rarely and of which you therefore have very few samples. The problem is that the usual way of training a classifier will then not work.
So how do you decide what a novel pattern is? Many approaches are based on estimating the probability density of the data. Novelty corresponds to those samples where the probability density is "very low". How low depends on the application.
Now, SVMs are max-margin methods, i.e. they do not model a probability distribution. Here the idea is to find a function that is positive for regions with high density of points, and negative for small densities.
The gritty details are given in the paper. ;)
If you really intend to go through the paper, make sure that you first understand the settings of the basic SVM algorithm for classification. It will make it much easier to understand the bounds and the motivation of the algorithm.
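For the practically minded, here is a minimal sketch using scikit-learn's OneClassSVM; the data and parameter choices are illustrative, not from the paper:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# train on "normal" samples only -- no labels, no second class
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# nu roughly fixes the fraction of training points allowed outside the boundary
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

X_test = np.array([[0.0, 0.0],   # deep inside the training cloud
                   [6.0, 6.0]])  # far outside it
pred = clf.predict(X_test)       # +1 = inlier, -1 = novelty
```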
|
8,186
|
What is one class SVM and how does it work?
|
I will assume you understand how a standard SVM works. To summarise, it separates two classes using a hyperplane with the largest possible margin.
One-Class SVM is similar, but instead of using a hyperplane to separate two classes of instances, it uses a hypersphere to encompass all of the instances. Now think of the "margin" as referring to the outside of the hypersphere -- so by "the largest possible margin", we mean "the smallest possible hypersphere".
That's about it. Note the following facts, true of SVM, still apply to One-Class SVM:
If we insist that there are no margin violations, by seeking the smallest hypersphere, the margin will end up touching a small number of instances. These are the "support vectors", and they fully determine the model. As long as they are within the hypersphere, all of the other instances can be changed without affecting the model.
We can allow for some margin violations if we don't want the model to be too sensitive to noise.
We can do this in the original space, or in an enlarged feature space (implicitly, using the kernel trick), which can result in a boundary with a complex shape in the original space.
Note: this is my account of the model as described here. I believe this is the version of One-Class SVM proposed by Tax and Duin. There are other approaches, such as that of Schölkopf et al, which is similar, but instead of using a small hypersphere, it uses a hyperplane which is far from the origin; this is the version implemented by LIBSVM and thus scikit-learn.
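To make the geometry concrete, here is a crude numeric sketch of the soft-margin smallest-enclosing-sphere objective, solved naively with a general-purpose optimizer. Real solvers work on the dual QP and support kernels, so treat this only as an illustration of the objective itself:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))          # unlabeled "normal" data

def svdd_objective(params, X, C=1.0):
    """Soft-margin smallest-enclosing-sphere objective:
    R^2 plus C times the total slack of points outside the sphere."""
    c, R = params[:2], params[2]
    sq_dist = ((X - c) ** 2).sum(axis=1)
    slack = np.maximum(0.0, sq_dist - R ** 2)
    return R ** 2 + C * slack.sum()

x0 = np.r_[X.mean(axis=0), 2.0]        # start at the data mean
res = minimize(svdd_objective, x0, args=(X,), method="Nelder-Mead")
center, radius = res.x[:2], abs(res.x[2])
outliers = ((X - center) ** 2).sum(axis=1) > radius ** 2
```

Shrinking the radius trades off against the slack penalty, so only the points the penalty cannot "afford" to cover end up outside the sphere.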
|
8,187
|
What is one class SVM and how does it work?
|
1. Traditional SVM
Project point to higher dimensional space to separate two classes (initially inseparable in lower dimensional space)
Find support vectors (on the edge of each class in feature space)
Allow some soft margin for some points to lie in the region between support vectors (this is to avoid over-fitting)
Final objective is to maximise the margin
2. One class SVM
a. According to Schölkopf et al.
Project point to a higher dimensional space
Separate all the data points from the origin in the feature space using hyper-plane
Unlike traditional SVM, where we use a soft margin for smoothness, we use a parameter that fixes the fraction of outliers in the data
maximise the distance between the hyper-plane and the origin
The points lying below the hyper-plane and closer to origin are outliers
b. According to Tax et al.
Project point to a higher dimensional space
Separate all the data points from the origin in the feature space using hyper-sphere
Like traditional svm where we use soft margin for smoothness
minimise the volume of the hyper-sphere
The points lying outside the hyper-sphere are outliers.
|
8,188
|
What is one class SVM and how does it work?
|
You can use One-Class SVM in a pipeline for Active Learning, in a semi-supervised way.
Ex: since SVM is a max-margin method, as described before, you can treat those margin regions as boundaries for some specific class and perform the relabeling.
|
8,189
|
Color and line thickness recommendations for line plots
|
I will try to be provocative here and wonder whether the absence of such guidelines arises because this is a nearly insoluble problem. People in quite different fields seem to agree in often talking about "spaghetti plots" and the problems they pose in distinguishing different series.
Concretely, a mass of lines for several individual time series can collectively convey general patterns and sometimes individual series that vary from any such pattern.
The question, however, I take to be about distinguishing all the individual time series when they have identities you care about.
If you have say 2 or 3 series, distinguishing series is usually not too difficult, and I would tend to use solid lines in two or three of red, blue or black. I've also played with orange and blue as used by Hastie and friends (see answer from @user31264).
Varying the line pattern (solid, dash, dotted, etc.) I have found of only limited value. Dotted lines tend to be washed out physically and mentally and the more subtle combinations of dots and dashes are just too subtle (meaning, slight) in contrast to be successful in practice.
I'd say the problem bites long before you have 10 series. Unless they are very different, 5 or so series can be too much like hard work to distinguish. Common psychology seems to be that people understand the principle that different series are indicated by different colours and or symbolism perfectly well, but lack the inclination to work hard at tracing the individual lines and trying to hold a story about their similarities and differences in their heads. Part of this often stems from the use of a legend (or key). It's controversial, but I'd try to label different series on the graph wherever possible. My motto here is "Lose the legend, or kill the key, if you can".
I've become fonder of a different approach to showing multiple time series, in which all the different time series are shown repeatedly in several panels, but a different one is highlighted in each one. That's a fusion of one old idea (a) small multiples (as Edward Tufte calls them) and another old idea (b) highlighting a series of particular interest. In turn it may just be yet another old idea rediscovered, but so far I can only find recent references. More in this thread on Statalist.
In terms of colours, I am positive about using greys for time series that are backdrop to whatever is being emphasised. That seems to be consistent with most journals worth publishing in.
Here is one experiment. The data are grain yields from 17 plots on the Broadbalk Fields at Rothamsted 1852-1925 and come from Andrews, D.F. and Herzberg, A.M. (Eds) 1985. Data: A Collection of Problems from Many Fields for the Student and Research Worker. New York: Springer, Table 5.1, and are downloadable from various places. (Detail: the data there come in blocks of 4 lines for each year; the third and fourth lines are for straw yield, not plotted here. The plot identifiers are not explicit in that table.)
I have no specific expertise on this kind of data; I just wanted a multiple time series that couldn't (easily) be dismissed as trivially small in terms of length of series or number of panels. (If you have hundreds, thousands, ... of panels, this approach can't really help much.) What I am imagining is that a data analyst, perhaps talking to a subject-matter expert, could identify a variety of common and uncommon behaviours here and get insights and information thereby.
Evidently this recipe could be used for many other kinds of plots (e.g. scatter plots or histograms with each subset highlighted in turn); together with ordering panels according to some interesting or useful measure or criterion (e.g. by median or 90th percentile or SD); and for model results as well as raw data.
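The repeated-panels-with-highlighting idea can be sketched in a few lines. This version uses matplotlib with fake random-walk series (all choices here are illustrative): each panel draws every series as a thin grey backdrop and one series in a strong colour on top.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                               # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
series = rng.normal(size=(6, 60)).cumsum(axis=1)    # 6 fake time series

fig, axes = plt.subplots(2, 3, figsize=(9, 5), sharex=True, sharey=True)
for k, ax in enumerate(axes.flat):
    for y in series:
        ax.plot(y, color="0.8", linewidth=0.8)      # grey backdrop, all series
    ax.plot(series[k], color="tab:blue", linewidth=1.8)  # the highlighted one
    ax.set_title(f"series {k + 1}", fontsize=9)
fig.tight_layout()
fig.savefig("highlight_multiples.png", dpi=100)
```

Drawing the highlighted line last keeps it on top of the backdrop, which matters as much as its colour.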
|
8,190
|
Color and line thickness recommendations for line plots
|
Questions 2 and 3 you answered yourself - the color brewer palettes are suitable. The hard question is 1, but like Nick I'm afraid it is based on a false hope. The color of the lines is not what makes one able to distinguish between the lines easily; that depends on continuity and how tortuous the lines are. Thus there are design-based choices, other than the color or dash pattern of the lines, that will aid in making the plot easier to interpret.
I will steal one of Frank's diagrams showing the flexibility of splines to approximate many different shaped functions over a limited domain as an example.
#code adapted from http://biostat.mc.vanderbilt.edu/wiki/pub/Main/RmS/rms.pdf page 40
library(Hmisc)
x <- rcspline.eval(seq(0,1,.01), knots=seq(.05,.95,length=5), inclx=T)
xm <- x
xm[xm > .0106] <- NA
x <- seq(0,1,length=300)
nk <- 6
set.seed(15)
knots<-seq(.05,.95,length=nk)
xx<-rcspline.eval(x,knots=knots,inclx=T)
# rescale each spline basis column to [0, 1]
for(i in 1:(nk-1)){
  xx[,i] <- (xx[,i] - min(xx[,i])) / (max(xx[,i]) - min(xx[,i]))
}
# draw 20 random spline curves, each rescaled to [0, 1]
for(i in 1:20){
  beta  <- 2*runif(nk-1) - 1
  xbeta <- xx %*% beta + 2*runif(1) - 1
  xbeta <- (xbeta - min(xbeta)) / (max(xbeta) - min(xbeta))
  id <- i
  if (i == 1){
    MyData <- data.frame(cbind(x, xbeta, id))
  } else {
    MyData <- rbind(MyData, cbind(x, xbeta, id))
  }
}
MyData$id <- as.factor(MyData$id)
Now this produces quite a tangled mess of 20 lines, a difficult challenge to visualize.
library(ggplot2)
p1 <- ggplot(data = MyData, aes(x = x, y = V2, group = id)) + geom_line()
p1
Here is the same plot in small multiples, at the same size, using wrapped panels. It is slightly more difficult to make comparisons across panels, but even in the shrunken space it is much easier to visualize the shape of the lines.
p2 <- p1 + facet_wrap(~id) + scale_x_continuous(breaks=c(0.2,0.5,0.8))
p2
One point that Stephen Kosslyn makes in his books is that it isn't how many different lines make the plot complicated, it is how many different types of shapes the lines can take. If 20 panels end up being too small, you can frequently group similar trajectories to place in the same panel. It is still hard to distinguish between the lines within a panel, since by definition they will be near each other and overlap frequently, but it reduces the complexity of making between-panel comparisons quite a bit. Here I arbitrarily reduced the 20 lines into 4 separate groupings. This has the added benefit that direct labelling of lines is simpler, as there is more space within the panels.
###############1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
newLevels <- c(1,1,2,2,2,2,2,1,1, 2, 3, 3, 3, 3, 2, 4, 1, 1, 2, 1)
MyData$idGroup <- factor(newLevels[MyData$id])
p3 <- ggplot(data = MyData, aes(x = x, y = V2, group = id)) + geom_line() +
facet_wrap(~idGroup)
p3
There is a general phrase that is applicable to the situation, if you focus on everything you focus on nothing. In the case with only ten lines, you have (10*9)/2=45 possible pairs of lines to compare. We probably are not interested in all 45 comparisons in most circumstances, we are either interested in comparing specific lines to each other or comparing one line to the distribution of the rest. Nick's answer shows the latter nicely. Drawing the background lines thin, light colored, and semi-transparent, and then drawing the foreground line in any bright color and thicker will be sufficient. (Also for the device make sure to draw the foreground line on top of the other lines!)
It is much more difficult to create a layering where each individual line can be easily distinguished in the tangle. One way to accomplish foreground-background differentiation in cartography is the use of shadows, (see this paper by Dan Carr for a good example). This will not scale up to 10 lines, but can help for 2 or 3 lines. Here is an example for the trajectories in Panel 1 using Excel!
There are other points to make, such as the light grey lines can be misleading if you have trajectories that are not smooth. E.g. you could have two trajectories in the shape of an X, or two in the shape of one right side up and upside down V. Drawing them the same color you wouldn't be able to trace the lines, and this is why some suggest drawing parallel coordinate plots using smooth lines or jittering/off-setting the points (Graham and Kennedy, 2003; Dang et al., 2010).
So the design advice can change depending on the end goal and the nature of the data. But when making bivariate comparisons between the trajectories is of interest, I think clustering similar trajectories and using small multiples makes the plots much easier to interpret in a wide variety of circumstances. This I feel is generally more productive than any combination of colors/line dashes will be in complicated plots. Single-panel plots in many articles are much larger than they need to be, and splitting into 4 panels is typically possible within page constraints without much loss.
|
Color and line thickness recommendations for line plots
|
Questions 2 and 3 you answered yourself - the color brewer palettes are suitable. The hard question is 1, but like Nick I'm afraid it is based on a false hope. The color of the lines are not what make
|
Color and line thickness recommendations for line plots
Questions 2 and 3 you answered yourself - the color brewer palettes are suitable. The hard question is 1, but like Nick I'm afraid it is based on a false hope. The color of the lines are not what makes one be able to distinguish between the lines easily, it is based on continuity and how tortuous the lines are. Thus there are design based choices, other than the color or dash pattern of the lines, that will aid in making the plot easier to interpret.
I will steal one of Frank's diagrams showing the flexibility of splines to approximate many different shaped functions over a limited domain as an example.
#code adapted from http://biostat.mc.vanderbilt.edu/wiki/pub/Main/RmS/rms.pdf page 40
library(Hmisc)
x <- rcspline.eval(seq(0,1,.01), knots=seq(.05,.95,length=5), inclx=T)
xm <- x
xm[xm > .0106] <- NA
x <- seq(0,1,length=300)
nk <- 6
set.seed(15)
knots<-seq(.05,.95,length=nk)
xx<-rcspline.eval(x,knots=knots,inclx=T)
for(i in 1:(nk−1)){
xx[,i]<-(xx[,i]−min(xx[,i]))/
(max(xx[,i])−min(xx[,i]))
for(i in 1:20){
beta<-2∗runif(nk−1)−1
xbeta<-xx%∗%beta+2∗runif(1)−1
xbeta<-(xbeta−min(xbeta))/
(max(xbeta)−min(xbeta))
if (i==1){
id <- i
MyData <- data.frame(cbind(x,xbeta,id))
}
else {
id <- i
MyData <- rbind(MyData,cbind(x,xbeta,id))
}
}
}
MyData$id <- as.factor(MyData$id)
Now this produces quite a tangled mess of 20 lines, a difficult challenge to visualize.
library(ggplot2)
p1 <- ggplot(data = MyData, aes(x = x, y = V2, group = id)) + geom_line()
p1
Here is the same plot in small multiples, at the same size, using wrapped panels. It is slightly more difficult to make comparisons across panels, but even in the shrunken space it is much easier to visualize the shape of the lines.
p2 <- p1 + facet_wrap(~id) + scale_x_continuous(breaks=c(0.2,0.5,0.8))
p2
One point that Stephen Kosslyn makes in his books is that it isn't how many different lines make the plot complicated, it is how many different types of shapes the lines can take. If 20 panels end up being too small, you can frequently reduce the set to similar trajectories to place in the same panel. It is still hard to distinguish between the lines within the panels, by definition they will be nearby each over and overlap frequently, but it reduces the complexity of making between panel comparisons quite a bit. Here I arbitrarily reduced the 20 lines into 4 separate groupings. This has the added benefit that direct labelling of lines is simpler, there is more space within the panels.
###############1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
newLevels <- c(1,1,2,2,2,2,2,1,1, 2, 3, 3, 3, 3, 2, 4, 1, 1, 2, 1)
MyData$idGroup <- factor(newLevels[MyData$id])
p3 <- ggplot(data = MyData, aes(x = x, y = V2, group = id)) + geom_line() +
facet_wrap(~idGroup)
p3
There is a general phrase that is applicable to the situation, if you focus on everything you focus on nothing. In the case with only ten lines, you have (10*9)/2=45 possible pairs of lines to compare. We probably are not interested in all 45 comparisons in most circumstances, we are either interested in comparing specific lines to each other or comparing one line to the distribution of the rest. Nick's answer shows the latter nicely. Drawing the background lines thin, light colored, and semi-transparent, and then drawing the foreground line in any bright color and thicker will be sufficient. (Also for the device make sure to draw the foreground line on top of the other lines!)
It is much more difficult to create a layering where each individual line can be easily distinguished in the tangle. One way to accomplish foreground-background differentiation in cartography is the use of shadows, (see this paper by Dan Carr for a good example). This will not scale up to 10 lines, but can help for 2 or 3 lines. Here is an example for the trajectories in Panel 1 using Excel!
There are other points to make, such as that light grey lines can be misleading if you have trajectories that are not smooth. E.g. you could have two trajectories in the shape of an X, or two in the shape of a V, one right side up and one upside down. Drawing them the same color you wouldn't be able to trace the lines, and this is why some suggest drawing parallel coordinate plots using smooth lines or jittering/off-setting the points (Graham and Kennedy, 2003; Dang et al., 2010).
So the design advice can change depending on the end goal and the nature of the data. But when making bivariate comparisons between the trajectories is of interest, I think the clustering of similar trajectories and using small multiples makes the plots much easier to interpret in a wide variety of circumstances. This I feel is generally more productive than any combination of colors/line dashes will be in complicated plots. Singled panel plots in many articles are much larger than they need to be, and splitting into 4 panels is typically possible within page constraints without much loss.
|
8,191
|
Color and line thickness recommendations for line plots
|
From "The Elements of Statistical Learning" by Trevor Hastie et al. :
"Our first edition was unfriendly to colorblind readers; in particular, we tended to favor red/green contrasts which are particularly troublesome. We have changed the color palette in this edition to a large extent, replacing the above with an orange/blue contrast."
You may want to look at their graphs.
You may also use dashed, dotted etc. lines.
|
8,192
|
Color and line thickness recommendations for line plots
|
I've seen very little attention given to "line thickness" in regards to proper data visualization. Perhaps the ability to discern different line thicknesses is not as variable as the ability to discern color.
Some resources:
Hadley Wickham (2009), ggplot2: Elegant Graphics for Data Analysis, Springer;
has a supporting web page
8 suggested book resources on data visualization:
http://www.tableausoftware.com/about/blog/2013/7/list-books-about-data-visualisation-24182
Some courses:
Graphics Lecture in Thomas Lumley's Introductory computing for biostatistics course
Ross Ihaka's graduate course on computational data analysis and graphics
Ross Ihaka's undergraduate course on information visualization
Deborah Nolan's undergraduate course Concepts in Computing with Data
Hadley Wickham's Data visualization course
|
8,193
|
Color and line thickness recommendations for line plots
|
While I agree that there's not a unique solution to the problem, I use the recommendation of this blog:
http://blogs.nature.com/methagora/2013/07/data-visualization-points-of-view.html
The posts on colour tackle the issues of colour-blindness and grayscale printing, and give an example of a colour scale that solves both of these issues.
The same articles also analyse continuous colour scales, which many use for heat maps and the like. They recommend against the rainbow scale because of some sharp transitions (like the yellow zone, which is much smaller than the red). Instead, it is possible to make transitions between other pairs of colours.
A good pair of colours for this purpose is blue and orange (a classic!). You can test this by applying colour-blind and grayscale filters and checking whether you can still see the difference.
For the thickness of lines, some of the posts on the blog mentioned above deal with this point. If you have many lines, they should all have the same thickness, that is, "thin". Use thick lines only if you want to call attention to that object.
|
8,194
|
Why not report the mean of a bootstrap distribution?
|
Because the bootstrapped statistic is one further abstraction away from your population parameter. You have your population parameter, your sample statistic, and only on the third layer you have the bootstrap. The bootstrapped mean value is not a better estimator for your population parameter. It's merely an estimate of an estimate.
As $n \rightarrow \infty$ the bootstrap distribution containing all possible bootstrapped combinations centers around the sample statistic, much like the sample statistic centers around the population parameter under the same conditions. This paper sums these things up quite nicely and is one of the easiest I could find. For more detailed proofs, follow the papers they reference; noteworthy examples are Efron (1979) and Singh (1981).
The bootstrapped distribution of $\theta_B - \hat\theta$ follows the distribution of $\hat \theta - \theta$ which makes it useful in the estimation of the standard error of a sample estimate, in the construction of confidence intervals, and in the estimation of a parameter's bias. It does not make it a better estimator for the population's parameter. It merely offers a sometimes better alternative to the usual parametric distribution for the statistic's distribution.
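To see the "estimate of an estimate" point numerically, here is a small sketch (in Python rather than R, with invented data; not from the papers cited): the mean of the bootstrap distribution of the sample mean lands essentially on top of the sample mean, so it inherits the sample mean's error with respect to the population parameter.

```python
import random
import statistics

# Invented data: population mean is 10; we only ever see one sample of 30.
random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(30)]
sample_mean = statistics.mean(sample)

# bootstrap distribution of the sample mean
boot_means = []
for _ in range(2000):
    resample = [random.choice(sample) for _ in sample]  # draw with replacement
    boot_means.append(statistics.mean(resample))

boot_mean = statistics.mean(boot_means)
gap = abs(boot_mean - sample_mean)  # tiny: the bootstrap centers on the sample statistic
```

Reporting `boot_mean` instead of `sample_mean` therefore buys you nothing as an estimator of the population mean; the two agree up to Monte Carlo error.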
|
8,195
|
Why not report the mean of a bootstrap distribution?
|
There is at least one case where people do use the mean of the bootstrap distribution: bagging (short for bootstrap aggregating).
The basic idea is that if your estimator is very sensitive to perturbations in the data (i.e., the estimator has high variance and low bias), then you can average over lots of bootstrap samples to reduce the amount of overfitting particular examples.
The page I linked to points out that this introduces some bias into your estimate, which is why the sample mean will often make more sense than averaging your bootstrap samples. But if you have something like a decision tree or a nearest neighbor classifier that can change radically in response to small changes in the data, then this bias might not be as big a concern as overfitting.
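A minimal sketch of bagging (in Python, with an invented dataset; the 1-nearest-neighbour rule here stands in for any unstable, high-variance predictor such as a decision tree):

```python
import random

# Invented data: y = x^2 plus noise; 1-NN is a low-bias, high-variance predictor.
random.seed(1)
xs = [i / 10 for i in range(-20, 21)]
data = [(x, x * x + random.gauss(0, 0.5)) for x in xs]

def one_nn_predict(train, x0):
    # predict with the y-value of the training point nearest to x0
    return min(train, key=lambda p: abs(p[0] - x0))[1]

def bagged_predict(train, x0, n_boot=200):
    # bagging: average the unstable predictor over bootstrap resamples
    preds = []
    for _ in range(n_boot):
        resample = [random.choice(train) for _ in train]
        preds.append(one_nn_predict(resample, x0))
    return sum(preds) / len(preds)

single = one_nn_predict(data, 0.0)   # rides entirely on one noisy point
bagged = bagged_predict(data, 0.0)   # smoothed over the nearby points
```

The bagged prediction effectively averages over several neighbours, trading a little bias for a substantial reduction in variance.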
|
8,196
|
Why not report the mean of a bootstrap distribution?
|
It is worth noting that the difference between the mean of bootstrapped samples $\theta_B$ and the sample estimate $\hat{\theta}$ can sometimes be used as an estimate of the bias of $\hat{\theta}$ in estimating the true parameter $\theta$.
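A short sketch of that bias estimate (Python rather than R; an invented example using the plug-in variance, whose bootstrap bias is approximately $-\hat\theta/n$):

```python
import random

# Hypothetical example: theta = population variance, estimated with the
# biased plug-in estimator (divide by n rather than n - 1).
random.seed(2)
n = 20
sample = [random.gauss(0, 1) for _ in range(n)]

def plugin_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)  # biased low

theta_hat = plugin_var(sample)

# bootstrap distribution of the statistic
boot_stats = []
for _ in range(5000):
    resample = [random.choice(sample) for _ in sample]
    boot_stats.append(plugin_var(resample))

# bias estimate: mean of bootstrapped statistics minus the sample estimate
bias_hat = sum(boot_stats) / len(boot_stats) - theta_hat
corrected = theta_hat - bias_hat  # bias-corrected estimate
```

Note that the correction subtracts the estimated bias from $\hat\theta$; it does not replace $\hat\theta$ with the bootstrap mean.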
|
8,197
|
Why not report the mean of a bootstrap distribution?
|
One simple answer: because it's biased.
A simple example: estimating the upper bound of a $\text{Uniform}(0, \theta)$ random variable. Here, I take 1,000 bootstrap samples of an $n=10$ random sample, calculate the MLE for each bootstrap subsample, and average them together. The relative bias is 5%!
set.seed(123)
out <- replicate(1000, {
  n <- 10
  u <- runif(n, 0, 3)
  mle <- max(u) * (n + 1) / n          # bias-corrected MLE from the sample
  bsm <- mean(replicate(1000, {        # average the estimator over bootstrap resamples
    max(sample(u, replace = TRUE)) * (n + 1) / n
  }))
  c(mle, bsm)
})
b <- hist(out[1, ], col = c1 <- rgb(0.5, 0.5, 0.5, 0.5),
          breaks = pretty(out, 20), xlab = 'Estimate')
hist(out[2, ], breaks = b$breaks, col = c2 <- rgb(0.5, 0.5, 0, 0.5), add = TRUE)
|
8,198
|
How do you find weights for weighted least squares regression?
|
Weighted least squares (WLS) regression is not a transformed model. Instead, you are simply treating each observation as more or less informative about the underlying relationship between $X$ and $Y$. Those points that are more informative are given more 'weight', and those that are less informative are given less weight. You are right that weighted least squares (WLS) regression is technically only valid if the weights are known a priori.
However, (OLS) linear regression is fairly robust against heteroscedasticity and thus so is WLS if your estimates are in the ballpark. A rule of thumb for OLS regression is that it isn't too impacted by heteroscedasticity as long as the maximum variance is not greater than 4 times the minimum variance. For example, if the variance of the residuals / errors increases with $X$, then you would be OK if the variance of the residuals at the high end were less than four times the variance of the residuals at the low end. The implication of this is that if your weights get you within that range, you are reasonably safe. It's kind of a horseshoes and hand grenades situation. As a result, you can try to estimate the function relating the variance of the residuals to the levels of your predictor variables.
There are several issues pertaining to how such estimation should be done:
1. Remember that the weights should be the reciprocal of the variance (or whatever you use).
2. If your data occur only at discrete levels of $X$, like in an experiment or an ANOVA, then you can estimate the variance directly at each level of $X$ and use that. If the estimates are at discrete levels of a continuous variable (e.g., 0 mg., 10 mg., 20 mg., etc.), you may want to smooth those, but it probably won't make much difference.
3. Estimates of variances, due to the squaring, are very susceptible to outliers and/or high leverage points, though. If your data are not evenly distributed across $X$, or you have relatively few data, estimating the variance directly is not recommended. It is better to estimate something that is expected to correlate with variance, but which is more robust. A common choice would be to use the square root of the absolute values of the deviations from the conditional mean. (For example, in R, plot(model, which=3) will display a scatterplot of the root absolute standardized residuals against the fitted values, called a "spread-level plot", to help you diagnose potential heteroscedasticity; see my answer here.) Even more robust might be to use the conditional interquartile range, or the conditional median absolute deviation from the median.
4. If $X$ is a continuous variable, the typical strategy is to use a simple OLS regression to get the residuals, and then regress one of the functions in [3] (most likely the root absolute deviation) onto $X$. The predicted value of this function is used for the weight associated with that point.
5. Getting your weights from the residuals of an OLS regression is reasonable because OLS is unbiased, even in the presence of heteroscedasticity. Nonetheless, those weights are contingent on the original model, and may change the fit of the subsequent WLS model. Thus, you should check your results by comparing the estimated betas from the two regressions. If they are very similar, you are OK. If the WLS coefficients diverge from the OLS ones, you should use the WLS estimates to compute residuals manually (the reported residuals from the WLS fit will take the weights into account). Having calculated a new set of residuals, determine the weights again and use the new weights in a second WLS regression. This process should be repeated until the two sets of estimated betas are sufficiently similar (needing to iterate even once is uncommon, though).
6. If this process makes you somewhat uncomfortable, because the weights are estimated, and because they are contingent on the earlier, incorrect model, another option is to use the Huber-White 'sandwich' estimator. This is consistent even in the presence of heteroscedasticity no matter how severe, and it isn't contingent on the model. It is also potentially less hassle.
I demonstrate a simple version of weighted least squares and the use of the sandwich SEs in my answer here: Alternatives to one-way ANOVA for heteroscedastic data.
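To make the recipe in [4] and [5] concrete, here is a minimal sketch of one pass of the estimate-weights-then-refit loop (in Python rather than R; the data and helper names are invented for illustration):

```python
import math
import random

# Invented data with variance growing in x: sd of the error is 0.5 * x.
random.seed(3)
n = 200
xs = [random.uniform(1, 10) for _ in range(n)]
ys = [2 + 3 * x + random.gauss(0, 0.5 * x) for x in xs]

def fit_wls(xs, ys, ws):
    # closed-form weighted least squares for y = a + b * x
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b

# step 1: plain OLS (all weights equal) to get residuals
a0, b0 = fit_wls(xs, ys, [1.0] * n)
resid = [y - (a0 + b0 * x) for x, y in zip(xs, ys)]

# step 2: regress root absolute residuals on x to model the spread
spread = [math.sqrt(abs(r)) for r in resid]
c0, c1 = fit_wls(xs, spread, [1.0] * n)

# step 3: weights = reciprocal of the implied variance (spread^4, since we
# modelled sqrt(|residual|)); the max() guards against nonpositive fitted spread
ws = [1.0 / max(c0 + c1 * x, 1e-6) ** 4 for x in xs]
a1, b1 = fit_wls(xs, ys, ws)
```

Both fits recover the true slope of 3 here; the point of the WLS pass is that its coefficient standard errors are trustworthy, whereas the OLS ones are not under this heteroscedasticity.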
|
8,199
|
How do you find weights for weighted least squares regression?
|
When performing WLS, you need to know the weights. There are some ways to find them as said on page 191 of Introduction to Linear Regression Analysis by Douglas C. Montgomery, Elizabeth A. Peck, G. Geoffrey Vining. For example:
Experience or prior information using some theoretical model.
Using residuals of the model, for example if ${\rm var}(\varepsilon_i)=\sigma^2x_i$ then we may decide to use $w_i=1/x_i$.
If the responses are the average of $n_i$ observations at each $x_i$, or something like ${\rm var}(y_i)={\rm var}(\varepsilon_i)=\sigma^2/n_i$, then we may decide to use $w_i=n_i$.
Sometimes we know that different observations have been measured by different instruments that have some (known or estimated) accuracy. In this case we may decide to use weights inversely proportional to the variance of the measurement errors.
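As a quick sketch of the third case (in Python rather than R; the data and names are invented for illustration), when each response is the mean of $n_i$ raw observations, weighting by $w_i=n_i$ fits the underlying line directly from the group means:

```python
import random

# Invented example: each y_i is the mean of n_i raw observations,
# so var(y_i) = sigma^2 / n_i and the natural weight is w_i = n_i.
random.seed(4)
sigma = 2.0
xs = [1, 2, 3, 4, 5, 6]
ns = [5, 50, 5, 50, 5, 50]          # deliberately unequal group sizes
ys = []
for x, ni in zip(xs, ns):
    raw = [1 + 2 * x + random.gauss(0, sigma) for _ in range(ni)]
    ys.append(sum(raw) / ni)        # only the group mean is kept

# weighted least squares with w_i = n_i (closed form for y = a + b * x)
ws = [float(ni) for ni in ns]
sw = sum(ws)
mx = sum(w * x for w, x in zip(ws, xs)) / sw
my = sum(w * y for w, y in zip(ws, ys)) / sw
b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
     / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
a = my - b * mx
```

The groups based on 50 observations get ten times the weight of the groups based on 5, matching the reciprocal of their variances.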
|
8,200
|
Mean Average Precision vs Mean Reciprocal Rank
|
Imagine you have some kind of query, and your retrieval system has returned you a ranked list of the top-20 items it thinks most relevant to your query. Now also imagine that there is a ground-truth to this, that in truth we can say for each of those 20 that "yes" it is a relevant answer or "no" it isn't.
Mean reciprocal rank (MRR) gives you a general measure of quality in these situations, but MRR only cares about the single highest-ranked relevant item. If your system returns a relevant item in the third-highest spot, that's what MRR cares about. It doesn't care if the other relevant items (assuming there are any) are ranked number 4 or number 20.
Therefore, MRR is appropriate to judge a system where either (a) there's only one relevant result, or (b) in your use-case you only really care about the highest-ranked one. This might be true in some web-search scenarios, for example, where the user just wants to find one thing to click on, they don't need any more. (Though is that typically true, or would you be more happy with a web search that returned ten pretty good answers, and you could make your own judgment about which of those to click on...?)
Mean average precision (MAP) considers whether all of the relevant items tend to get ranked highly. So in the top-20 example, it doesn't only care if there's a relevant answer up at number 3, it also cares whether all the "yes" items in that list are bunched up towards the top.
When there is only one relevant answer in your dataset, the MRR and the MAP are exactly equivalent under the standard definition of MAP.
To see why, consider the following toy examples, inspired by the examples in this blog post:
Example 1
Query: "Capital of California"
Ranked results: "Portland", "Sacramento", "Los Angeles"
Ranked results (binary relevance): [0, 1, 0]
Number of correct answers possible: 1
Reciprocal Rank: $\frac{1}{2}$
Precision at 1: $\frac{0}{1}$
Precision at 2: $\frac{1}{2}$
Precision at 3: $\frac{1}{3}$
Average precision $= \frac{1}{m} \cdot \frac{1}{2} = \frac{1}{1}\cdot\frac{1}{2} = 0.5$, where $m$ is the number of relevant documents.
As you can see, the average precision for a query with exactly one correct answer is equal to the reciprocal rank of the correct result. It follows that the MRR of a collection of such queries will be equal to its MAP. However, as illustrated by the following example, things diverge if there are more than one correct answer:
Example 2
Query: "Cities in California"
Ranked results: "Portland", "Sacramento", "Los Angeles"
Ranked results (binary relevance): [0, 1, 1]
Number of correct answers possible: 2
Reciprocal Rank: $\frac{1}{2}$
Precision at 1: $\frac{0}{1}$
Precision at 2: $\frac{1}{2}$
Precision at 3: $\frac{2}{3}$
Average precision $= \frac{1}{m} \big[ \frac{1}{2} + \frac{2}{3} \big] = \frac{1}{2} \big[ \frac{1}{2} + \frac{2}{3} \big] = \frac{7}{12} \approx 0.58$.
As such, the choice of MRR vs MAP in this case depends entirely on whether or not you want the rankings after the first correct hit to influence the score.
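The two toy examples can be checked with a few lines of code (a Python sketch; the helper names are my own):

```python
# Binary-relevance versions of the two metrics
def reciprocal_rank(rels):
    for rank, rel in enumerate(rels, start=1):
        if rel:
            return 1.0 / rank       # only the first relevant item matters
    return 0.0

def average_precision(rels, n_relevant):
    hits, total = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / rank    # precision at each relevant position
    return total / n_relevant

example1 = [0, 1, 0]  # "Capital of California": one relevant answer
example2 = [0, 1, 1]  # "Cities in California": two relevant answers

rr1, ap1 = reciprocal_rank(example1), average_precision(example1, 1)
rr2, ap2 = reciprocal_rank(example2), average_precision(example2, 2)
# rr1 == ap1 == 0.5; rr2 == 0.5 but ap2 == 7/12, since AP also rewards
# the second relevant item being ranked third
```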
|
Mean Average Precision vs Mean Reciprocal Rank
|
Imagine you have some kind of query, and your retrieval system has returned you a ranked list of the top-20 items it thinks most relevant to your query. Now also imagine that there is a ground-truth t
|
Mean Average Precision vs Mean Reciprocal Rank
Imagine you have some kind of query, and your retrieval system has returned you a ranked list of the top-20 items it thinks most relevant to your query. Now also imagine that there is a ground-truth to this, that in truth we can say for each of those 20 that "yes" it is a relevant answer or "no" it isn't.
Mean reciprocal rank (MRR) gives you a general measure of quality in these situations, but MRR only cares about the single highest-ranked relevant item. If your system returns a relevant item in the third-highest spot, that's what MRR cares about. It doesn't care if the other relevant items (assuming there are any) are ranked number 4 or number 20.
Therefore, MRR is appropriate to judge a system where either (a) there's only one relevant result, or (b) in your use-case you only really care about the highest-ranked one. This might be true in some web-search scenarios, for example, where the user just wants to find one thing to click on, they don't need any more. (Though is that typically true, or would you be more happy with a web search that returned ten pretty good answers, and you could make your own judgment about which of those to click on...?)
Mean average precision (MAP) considers whether all of the relevant items tend to get ranked highly. So in the top-20 example, it doesn't only care if there's a relevant answer up at number 3, it also cares whether all the "yes" items in that list are bunched up towards the top.
When there is only one relevant answer in your dataset, the MRR and the MAP are exactly equivalent under the standard definition of MAP.
To see why, consider the following toy examples, inspired by the examples in this blog post:
Example 1
Query: "Capital of California"
Ranked results: "Portland", "Sacramento", "Los Angeles"
Ranked results (binary relevance): [0, 1, 0]
Number of correct answers possible: 1
Reciprocal Rank: $\frac{1}{2}$
Precision at 1: $\frac{0}{1}$
Precision at 2: $\frac{1}{2}$
Precision at 3: $\frac{1}{3}$
Average precision = $\frac{1}{m} * \frac{1}{2} = \frac{1}{1}*\frac{1}{2} = 0.5 $.
As you can see, the average precision for a query with exactly one correct answer is equal to the reciprocal rank of the correct result. It follows that the MRR of a collection of such queries will be equal to its MAP. However, as the following example illustrates, the two metrics diverge when there is more than one correct answer:
Example 2
Query: "Cities in California"
Ranked results: "Portland", "Sacramento", "Los Angeles"
Ranked results (binary relevance): [0, 1, 1]
Number of correct answers possible: 2
Reciprocal Rank: $\frac{1}{2}$
Precision at 1: $\frac{0}{1}$
Precision at 2: $\frac{1}{2}$
Precision at 3: $\frac{2}{3}$
Average precision = $\frac{1}{m} * \big[ \frac{1}{2} + \frac{2}{3} \big] = \frac{1}{2} * \big[ \frac{1}{2} + \frac{2}{3} \big] = \frac{7}{12} \approx 0.58 $.
As such, the choice of MRR vs MAP in this case depends entirely on whether or not you want the rankings of the relevant results after the first correct hit to influence the score.
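The two toy examples can be checked with a few lines of code. This is a minimal sketch, not taken from any particular library; the function names are illustrative, and relevance is given as a binary list ordered by the system's ranking:

```python
# Minimal sketch: reciprocal rank and average precision for a single query.
# `relevance` is a binary list ordered by the system's ranking, e.g. [0, 1, 0].

def reciprocal_rank(relevance):
    """1 / rank of the first relevant item (0.0 if nothing is relevant)."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def average_precision(relevance):
    """Mean of precision@k over the ranks k where a relevant item appears."""
    hits = 0
    precisions = []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Example 1: one relevant answer -> RR and AP coincide
print(reciprocal_rank([0, 1, 0]))    # 0.5
print(average_precision([0, 1, 0]))  # 0.5

# Example 2: two relevant answers -> the metrics diverge
print(reciprocal_rank([0, 1, 1]))    # 0.5
print(average_precision([0, 1, 1]))  # (1/2 + 2/3) / 2 = 7/12 ≈ 0.583
```

MRR and MAP over a collection of queries are then just the means of these per-query values, so the equivalence in Example 1 carries over directly to the collection level.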