In Regression Analysis, why do we call independent variables "independent"?
I agree with the other answers here that "independent" and "dependent" is poor terminology. As EdM explains, this terminology arose in the context of controlled experiments where the researcher could set the regressors independently of each other. There are many preferable terms that do not have this loaded causal co...
To add to Frank Harrell's and Peter Flom's answers: I agree that calling a variable "independent" or "dependent" is often misleading. But some people still do that. I once heard an answer why: In regression analysis we have one "special" variable (usually denoted by $Y$) and many "not-so-special" variables ($X$'s) and ...
"Dependent" and "independent" can be confusing terms. One sense is pseudo-causal or even causal and this is the one that is meant when saying "independent variable" and "dependent variable". We mean that the DV, in some sense, depends on the IV. So, for example, when modeling the relationship of height and weight in ...
Based on the above answers, yes, I agree that "dependent" and "independent" variable are weak terminology. But I can explain the context in which many of us use these terms. For a general regression problem, we have an output variable, say Y, whose value depends on other input variables, say x1, x2, x3...
Independent variables are called independent because they do not depend on other variables. For example, consider the house price prediction problem. Assume we have data on house_size, location, and house_price. Here, house_price is determined based on the house_size and location but the location and house_size can var...
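The house-price setup in this answer can be sketched in a few lines of Python; the numbers, the linear form, and the integer coding of location are made up purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
house_size = rng.uniform(50, 250, size=200)      # square metres (made up)
location = rng.integers(0, 3, size=200)          # crude neighbourhood code
house_price = 1500 * house_size + 20000 * location + rng.normal(0, 5000, 200)

# The "independent" variables go in as inputs; the "dependent" one is the target.
X = np.column_stack([house_size, location])
model = LinearRegression().fit(X, house_price)
print(model.coef_)   # roughly recovers the coefficients 1500 and 20000
```

The point of the sketch is only the asymmetry: the regression treats size and location as given inputs and models price as the variable that depends on them.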
Recommendations for non-technical yet deep articles in statistics
Shmueli, Galit. "To explain or to predict?." Statistical science (2010): 289-310. I believe it matches your three bullet points. It talks about explanatory versus predictive modelling (the terms should be self-explanatory) and notes that differences between them are often not recognized. It raises the point that depen...
Lehmann, Erich L. "The Fisher, Neyman-Pearson theories of testing hypotheses: One theory or two?." Journal of the American Statistical Association 88.424 (1993): 1242-1249. It is not known to many but when the giants of the profession were still among us, they did not get on well with each other. The debate on the fou...
Wilk, M.B. and Gnanadesikan, R. 1968. Probability plotting methods for the analysis of data. Biometrika 55: 1-17 (on JSTOR if you have access). This paper is, at the time of my writing, almost 50 years old but still feels fresh and innovative. Using a rich variety of interesting and substantial examples, the authors ...
Ioannidis, John P. A. "Why Most Published Research Findings Are False." PLoS Medicine (2005) Ioannidis, John P. A. "How to Make More Published Research True." PLoS Medicine (2014) Must reads for every researcher/statistician/analyst who wants to avoid the dangers of using and interpreting statistics incorrectly in rese...
Tukey, J. W. (1960) Conclusions vs Decisions Technometrics 2(4): 423-433 This paper is based on an after-dinner talk by Tukey and there is a comment that 'considerable discussion ensued' so it matches at least the third of your dot points. I first read this paper when I was completing a PhD in engineering and appreciat...
Efron and Morris, 1977, Stein's Paradox in Statistics. Efron and Morris wrote a series of technical papers on the James-Stein estimator in the 1970s, framing Stein's "paradox" in the empirical Bayes context. The 1977 paper is a popular one published in Scientific American. It is a great read.
Well, although the Roy model is of greatest interest to economists (but I may be wrong), its original paper, "Some Thoughts on the Distribution of Earnings" (1951), is an insightful and nontechnical discussion of the self-selection problem. This paper served as inspiration for the selection models developed by the nob...
Although it’s a full-length book and not just an article, Judea Pearl’s The Book of Why admirably meets all three of your criteria. It addresses a foundational question of statistics—under what conditions can statistical analyses yield causal conclusions—in a way that successfully targets a general audience. Philosoph...
No Interpretation of Probability (Schwarz, 2018) is a favorite of mine. It touches on a lot of deep and persisting interpretational issues in statistics, and offers a refreshingly deflationary resolution to many (but not all) of them. The Reference-Class Problem is Your Problem Too (Hájek, 2007) is a pretty good summa...
A recent surge of interest in causality within machine learning has moved Pearl's framework into mainstream data science and statistics practice. In this direction, an article by Judea Pearl is a great starting point on the intricacies of causal inference in machine learning: The Seven Tools of Causal Inference, with Ref...
How to get an "overall" p-value and effect size for a categorical factor in a mixed model (lme4)?
Both of the concepts you mention (p-values and effect sizes of linear mixed models) have inherent issues. With respect to effect size, quoting Doug Bates, the original author of lme4, Assuming that one wants to define an $R^2$ measure, I think an argument could be made for treating the penalized residual sum of squ...
In regard to calculating significance (p) values, Luke (2016) Evaluating significance in linear mixed-effects models in R reports that the optimal method is either the Kenward-Roger or Satterthwaite approximation for degrees of freedom (available in R with packages such as lmerTest or afex). Abstract Mixed-effects mod...
I use the lmerTest package. This conveniently includes an estimation of the p-value in the anova() output for my MLM analyses, but does not give an effect size for the reasons given in other posts here.
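For readers working in Python rather than R, a rough analogue of an "overall" test for a categorical factor is a likelihood-ratio test between nested mixed models fit by maximum likelihood. This is only a sketch on simulated data, and it is a plain chi-squared LRT, not the Kenward-Roger or Satterthwaite corrections that lmerTest provides:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_groups, n_per = 20, 15
g = np.repeat(np.arange(n_groups), n_per)
f = rng.integers(0, 3, size=n_groups * n_per)           # 3-level factor
u = rng.normal(0, 1, n_groups)[g]                       # random intercepts
y = 0.8 * (f == 1) + 1.5 * (f == 2) + u + rng.normal(0, 1, len(g))
df = pd.DataFrame({"y": y, "f": f.astype(str), "g": g})

# Fit full and null models by ML (reml=False) so log-likelihoods are comparable.
full = smf.mixedlm("y ~ C(f)", df, groups=df["g"]).fit(reml=False)
null = smf.mixedlm("y ~ 1", df, groups=df["g"]).fit(reml=False)
lr = 2 * (full.llf - null.llf)
p = chi2.sf(lr, df=2)                                   # factor has 3 - 1 = 2 df
print(f"LR = {lr:.1f}, p = {p:.3g}")
```

If you need Satterthwaite or Kenward-Roger degrees of freedom specifically, staying in R with lmerTest or afex is the more direct route.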
Clustering methods that do not require pre-specifying the number of clusters
Clustering algorithms that require you to pre-specify the number of clusters are a small minority. There are a huge number of algorithms that don't. They are hard to summarize; it's a bit like asking for a description of any organisms that aren't cats. Clustering algorithms are often categorized into broad kingdoms...
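As one concrete illustration of the point, DBSCAN infers the number of clusters from a density parameter (eps) rather than taking k as input; the data and parameter values below are purely illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=[[0, 0], [5, 5], [0, 5]],
                  cluster_std=0.4, random_state=0)
# eps sets a density scale, not a cluster count; label -1 marks noise points.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
n_clusters = len(set(labels) - {-1})
print(n_clusters)
```

The number of clusters falls out of the density structure of the data; what you choose instead is the scale at which points count as neighbours.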
The simplest example is hierarchical clustering, where you compare each point with every other point using some distance measure, and then join together the pair that has the smallest distance to create a joined pseudo-point (e.g. b and c make bc, as in the image below). Next you repeat the procedure by joining the poi...
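The agglomerative procedure described here can be sketched with SciPy on a toy example (the distance threshold for cutting the tree is an illustrative choice):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Five toy points: two tight pairs plus one isolated point.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.0], [10.0, 0.0]])
Z = linkage(X, method="single")                     # each row of Z records one merge
labels = fcluster(Z, t=2.0, criterion="distance")   # cut the tree at distance 2
print(labels)
```

No number of clusters is specified up front; cutting the dendrogram at distance 2 leaves the two tight pairs and the lone point as three separate groups.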
Parameters are good! A "parameter-free" method means that you only get a single shot (except for maybe randomness), with no customization possibilities. Now clustering is an exploratory technique. You must not assume there is a single "true" clustering. You should rather be interested in exploring different clusterings ...
Check out Dirichlet mixture models. They provide a good way of making sense of the data if you don't know the number of clusters beforehand. However, they do make assumptions about the shapes of clusters, which your data might violate.
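A hedged sketch using scikit-learn's BayesianGaussianMixture, which fits a truncated Dirichlet-process mixture: you supply only an upper bound on the number of components and the model drives the weights of unneeded components toward zero (synthetic data, illustrative settings):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters of 100 points each.
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])

bgm = BayesianGaussianMixture(
    n_components=10,                       # an upper bound, not the answer
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
print(np.round(bgm.weights_, 2))           # weight should pile onto ~2 components
```

Note the Gaussian-shape assumption the answer warns about: each component is an ellipsoid, so strongly non-elliptical clusters may be split or merged.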
If you want to compute the number of clusters only from the input data, for numerical variables you may look at MCG, a hierarchical clustering method with an automatic stop criterion: see the free seminar paper at https://hal.archives-ouvertes.fr/hal-02124947/document (contains bibliographic references); the input data...
How can I include random effects (or repeated measures) into a randomForest
Currently, this paper (doi:10.1177/0962280220946080) reviews previous algorithms, including those cited in the other answers. Furthermore, it introduces the R package LongituRF, which implements all of those algorithms as well as new ones.
Yeah it's possible. You should check out "RE-EM Trees: A Data Mining Approach for Longitudinal and Clustered Data," and the associated R package REEMtree. It's been a while since I looked at the paper. I recall the authors had not yet tried forming ensembles of these trees, but that nothing suggested it wouldn't work.
Mixed Effects Random Forests (MERFs) are a thing. As the answer above states, there's some great research about them by Dr. Larocque's group at HEC Montreal. The paper is here: http://www.tandfonline.com/doi/abs/10.1080/00949655.2012.741599. Essentially it is a theoretically sound way to combine the non-linear modeli...
They are not commonly used together, and care should be taken before combining them. Random forests are typically used as classifiers. The reason that you would use a random forest instead of another method (e.g. K-means clustering) is that you may have a large number of dimensions that you want to classify by. The i...
Instead of random forest, you can also use tree-boosting for the fixed effects part in a model with random effects. The GPBoost library with Python and R packages builds on LightGBM and allows for combining tree-boosting and mixed effects models. Simply speaking it is an extension of linear mixed effects models where t...
There is now an R package called SAEforest that provides the command MERFranger: https://cran.r-project.org/web/packages/SAEforest/index.html The focus of the package is not precisely on the MERF. However, it employs the same syntax as lme4, which may be more intuitive for people who are used to that package. Also, th...
What is the distribution of $R^2$ in linear regression under the null hypothesis? Why is its mode not at zero when $k>3$?
For the specific hypothesis (that all regressor coefficients are zero, not including the constant term, which is not examined in this test) and under normality, we know (see e.g. Maddala 2001, p. 155, but note that there, $k$ counts the regressors without the constant term, so the expression looks a bit different) that t...
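As a sanity check on the quoted $\mathrm{Beta}(\frac{k-1}{2}, \frac{n-k}{2})$ result, here is a small pure-Python simulation (my own illustrative sketch, not from the answer) for the simplest case $k = 2$ (intercept plus one regressor), where $R^2$ is just the squared sample correlation and the Beta distribution has mean $(k-1)/(n-1)$:

```python
import random

random.seed(0)
n, reps = 20, 20000
r2s = []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]  # null: y unrelated to x
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2s.append(sxy * sxy / (sxx * syy))  # R^2 = squared correlation when k = 2
mean_r2 = sum(r2s) / reps
# Beta((k-1)/2, (n-k)/2) has mean (k-1)/(n-1) = 1/19 here
print(mean_r2)  # close to 0.0526
```

The simulated mean matches the theoretical $1/19 \approx 0.053$ closely, even though the null $R^2$ distribution for $k = 2$ is heavily right-skewed with mode at zero.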
I won't rederive the $\mathrm{Beta}(\frac{k-1}{2}, \, \frac{n-k}{2})$ distribution in @Alecos's excellent answer (it's a standard result, see here for another nice discussion) but I want to fill in more details about the consequences! Firstly, what does the null distribution of $R^2$ look like for a range of values of ...
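To make the mode claim concrete: a $\mathrm{Beta}(a, b)$ with $a, b > 1$ has its mode at $(a-1)/(a+b-2)$, which here simplifies to $(k-3)/(n-5)$ and is strictly positive exactly when $k > 3$. A small helper (my own sketch, not part of the answer):

```python
def r2_mode(k, n):
    # mode of Beta((k-1)/2, (n-k)/2); an interior mode needs both shapes > 1
    a, b = (k - 1) / 2, (n - k) / 2
    if a <= 1:
        return 0.0  # for k <= 3 the density is highest at (or piles up near) zero
    return (a - 1) / (a + b - 2)  # simplifies to (k - 3) / (n - 5)

n = 40
for k in (2, 3, 4, 6, 10):
    print(k, r2_mode(k, n))  # the mode moves away from 0 as k grows
```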
When should I *not* use R's nlm function for MLE?
There are a number of general-purpose optimization routines in base R that I'm aware of: optim, nlminb, nlm and constrOptim (which handles linear inequality constraints, and calls optim under the hood). Here are some things that you might want to consider in choosing which one to use. optim can use a number of differe...
When to use and not to use any particular method of maximization depends to a great extent on the type of data you have. nlm will work just fine if the likelihood surface isn't particularly "rough" and is everywhere differentiable. nlminb provides a way to constrain parameter values to particular bounding boxes. optim,...
Why are Jeffreys priors considered noninformative?
It's considered noninformative because of the parameterization invariance. You seem to have the impression that a uniform (constant) prior is noninformative. Sometimes it is, sometimes it isn't. What happens with Jeffreys' prior under a transformation is that the Jacobian from the transformation gets sucked into the or...
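The invariance argument can be sketched explicitly for a scalar parameter (my own summary of the standard derivation, not from the answer):

```latex
% Jeffreys prior for a scalar parameter
\pi_J(\theta) \propto \sqrt{I(\theta)},
\qquad
I(\theta) = \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X\mid\theta)\right)^{\!2}\right].
% Under a smooth reparameterisation \phi = h(\theta), Fisher information transforms as
I(\phi) = I(\theta)\left(\frac{d\theta}{d\phi}\right)^{2}
\quad\Longrightarrow\quad
\sqrt{I(\phi)} = \sqrt{I(\theta)}\left|\frac{d\theta}{d\phi}\right|
```

The right-hand side is exactly the change-of-variables rule for a density, so constructing the Jeffreys prior in either parameterisation gives the same distribution (for a Bernoulli probability, for instance, it comes out as $\mathrm{Beta}(1/2, 1/2)$).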
The Jeffreys prior coincides with the Bernardo reference prior for one-dimensional parameter space (and "regular" models). Roughly speaking, this is the prior for which the Kullback-Leibler divergence between the prior and the posterior is maximal. This quantity represents the amount of information brought by the data....
I'd say it isn't absolutely non-informative, but minimally informative. It encodes the (rather weak) prior knowledge that you know your prior state of knowledge doesn't depend on its parameterisation (e.g. the units of measurement). If your prior state of knowledge was precisely zero, you wouldn't know that your prio...
This is an old but interesting topic. I recently thought about this and developed a take that I would like to share. First off, the problem with flat priors as uninformative priors is that this idea is rooted in the way we would guess a number; not the way the data guess a number in likelihood-based inference. We can u...
Wikipedia entry on likelihood seems ambiguous
I think this is largely unnecessary hair-splitting. Conditional probability $P(x\mid y)\equiv P(X=x \mid Y=y)$ of $x$ given $y$ is defined for two random variables $X$ and $Y$ taking values $x$ and $y$. But we can also talk about probability $P(x\mid\theta)$ of $x$ given $\theta$ where $\theta$ is not a random variabl...
You already got two nice answers, but since it still seems unclear for you let me provide one. Likelihood is defined as $$ \mathcal{L}(\theta|X) = P(X|\theta) = \prod_i f_\theta(x_i) $$ so we have likelihood of some parameter value $\theta$ given the data $X$. It is equal to product of probability mass (discrete case)...
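The definition above is easy to make concrete. A minimal Python illustration (my own sketch) with Bernoulli data: the likelihood is a function of $\theta$ with the data held fixed, and different candidate values of $\theta$ can be compared directly:

```python
def likelihood(theta, xs):
    # L(theta | x) = product over i of the Bernoulli pmf f_theta(x_i)
    p = 1.0
    for x in xs:
        p *= theta**x * (1 - theta) ** (1 - x)
    return p

data = [1, 1, 0, 1]
# likelihood at two candidate parameter values, data held fixed
print(likelihood(0.5, data))   # 0.0625
print(likelihood(0.75, data))  # 0.10546875 -- the MLE is 3/4, the sample mean
```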
There are several aspects of the common descriptions of likelihood that are imprecise or omit detail in a way that engenders confusion. The Wikipedia entry is a good example. First, likelihood cannot in general be equal to the probability of the data given the parameter value, as likelihood is only defined up to a pr...
Wikipedia should have said that $L(\theta)$ is not a conditional probability of $\theta$ being in some specified set, nor a probability density of $\theta$. Indeed, if there are infinitely many values of $\theta$ in the parameter space, you can have $$ \sum_\theta L(\theta) = \infty, $$ for example by having $L(\theta...
"I read this as: "The likelihood of parameters equaling theta, given data X = x, (the left-hand-side), is equal to the probability of the data X being equal to x, given that the parameters are equal to theta". (Bold is mine for emphasis)." It's the probability of the set of observations given the parameter is th...
Prerequisites for AIC model comparison
You cannot compare the two models as they do not model the same variable (as you correctly recognise yourself). Nevertheless, AIC should work when comparing both nested and nonnested models. Just a reminder before we continue: a Gaussian log-likelihood is given by $$ \log(L(\theta)) =-\frac{|D|}{2}\log(2\pi) -\frac{1}...
You should be able to compare using AIC in principle, just that the number called "AIC" is not the number you need. You are comparing normal vs log-normal distributions. Now the AIC from model uu0 is basically just missing the "jacobian" of the log transformation. For a log normal model, this is simply $\prod_i y_i^...
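The Jacobian adjustment can be illustrated numerically. The sketch below is my own (using simulated data rather than the poster's models): it fits a Gaussian by maximum likelihood to $y$ and to $\log(y)$, then adds the term $2\sum_i \log y_i$ so that the log-scale AIC refers to the density of $y$ itself, i.e. to a log-normal model for $y$:

```python
import math
import random

random.seed(1)
# simulated log-normal response, standing in for the poster's data
y = [math.exp(random.gauss(0, 0.5)) for _ in range(200)]

def gaussian_aic(data, k=2):
    # AIC of a Gaussian fitted by maximum likelihood (k = 2 parameters: mu, sigma)
    n = len(data)
    mu = sum(data) / n
    s2 = sum((v - mu) ** 2 for v in data) / n  # MLE of the variance
    loglik = -0.5 * n * (math.log(2 * math.pi * s2) + 1)
    return 2 * k - 2 * loglik

aic_raw = gaussian_aic(y)                         # normal model for y
aic_log = gaussian_aic([math.log(v) for v in y])  # normal model for log(y)
# add the Jacobian term 2 * sum(log y) so the log-scale AIC refers to
# the density of y itself (a log-normal model for y)
aic_lognormal = aic_log + 2 * sum(math.log(v) for v in y)
print(aic_raw, aic_lognormal)  # the log-normal model should win on these data
```

Without the Jacobian term, the two AICs refer to densities of different variables and the comparison is meaningless.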
This excerpt from Akaike 1978 provides a citation in support of the solution by @probabilityislogic. Akaike, H. 1978. On the Likelihood of a Time Series Model. Journal of the Royal Statistical Society. Series D (The Statistician) 27:217-235.
Visualizing a million, PCA edition
The biplot is a useful tool for visualizing the results of PCA. It allows you to visualize the principal component scores and directions simultaneously. With 10,000 observations you’ll probably run into a problem with over-plotting. Alpha blending could help there. Here is a PC biplot of the wine data from the UCI ML...
A Wachter plot can help you visualize the eigenvalues of your PCA. It is essentially a Q-Q plot of the eigenvalues against the Marchenko-Pastur distribution. I have an example here: There is one dominant eigenvalue which falls outside the Marchenko-Pastur distribution. The usefulness of this kind of plot depends on you...
You could also use the psych package. This contains a plot.factor method, which will plot the different components against one another in the style of a scatterplot matrix.
Why do political polls have such large sample sizes?
Wayne has addressed the "30" issue well enough (my own rule of thumb: mention of the number 30 in relation to statistics is likely to be wrong). Why numbers in the vicinity of 1000 are often used Numbers of around 1000-2000 are often used in surveys, even in the case of a simple proportion ("Are you in favor of $<$wha...
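The margin-of-error calculation behind the "about a thousand" figure is $n = z^2\, p(1-p)/m^2$, with $z = 1.96$ for 95% confidence and the conservative $p = 0.5$. A quick illustrative helper (my own, not from the answer):

```python
import math

def sample_size(moe, p=0.5, z=1.96):
    # n = z^2 * p * (1 - p) / moe^2; p = 0.5 is the conservative worst case
    return math.ceil(z * z * p * (1 - p) / (moe * moe))

print(sample_size(0.03))  # 1068 respondents for a 3% margin of error
print(sample_size(0.05))  # 385 respondents for a 5% margin
```

Note that $n$ depends on the margin of error and confidence level, not on the population size, which is why national polls need no more respondents than city polls.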
That particular rule of thumb suggests that 30 points are enough to assume that the data is normally distributed (i.e., looks like a bell curve) but this is, at best, a rough guideline. If this matters, check your data! This does suggest that you'd want at least 30 respondents for your poll if your analysis depends on ...
There are already some excellent answers to this question, but I want to answer why the standard error is what it is, why we use $p = 0.5$ as the worst case, and how the standard error varies with $n$. Suppose we take a poll of just one voter, let's call him or her voter 1, and ask "will you vote for the Purple Party?...
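Two facts from this derivation are easy to verify numerically: $p(1-p)$ is maximised at $p = 0.5$, and since the standard error scales as $1/\sqrt{n}$, halving it requires quadrupling the sample size. A small sketch (my own):

```python
import math

def se(p, n):
    # standard error of a sample proportion
    return math.sqrt(p * (1 - p) / n)

n = 1000
# p(1 - p) peaks at p = 0.5, so 0.5 is the conservative choice
worst = max(se(p / 100, n) for p in range(101))
print(worst, se(0.5, n))  # both about 0.0158
# halving the standard error needs four times the respondents
print(se(0.5, 4 * n))     # about 0.0079
```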
The "at least 30" rule is addressed in another posting on Cross Validated. It's a rule of thumb, at best. When you think of a sample that's supposed to represent millions of people, you're going to have to have a much larger sample than just 30. Intuitively, 30 people can't even include one person from each state! Then...
A lot of great answers have already been posted. Let me suggest a different framing that yields the same response, but could further drive intuition. Just like @Glen_b, let's assume we require at least 95% confidence that the true proportion who agree with a statement lies within a 3% margin of error. In a particular ...
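Following the same framing, the required sample size can be solved for directly; this sketch (my own, using z = 1.96 for 95% confidence) reproduces the familiar ~1,000-respondent figure:

```python
import math

z = 1.96        # approximate 97.5% normal quantile (95% two-sided confidence)
margin = 0.03   # desired margin of error
p = 0.5         # worst-case proportion

# Invert margin = z * sqrt(p(1-p)/n) for n
n = (z / margin) ** 2 * p * (1 - p)
print(math.ceil(n))  # 1068 respondents
```

Tightening the margin to 2% roughly doubles this, which matches the sample sizes seen in practice.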
8,154
When should I apply feature scaling for my data [duplicate]
You should normalize when the scale of a feature is irrelevant or misleading, and not normalize when the scale is meaningful. K-means considers Euclidean distance to be meaningful. If a feature has a big scale compared to another, but the first feature truly represents greater diversity, then clustering in that dimens...
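A tiny numerical illustration (my own, with made-up income/age numbers) of how an unscaled large-magnitude feature dominates Euclidean distance:

```python
import math

# Two people: income (dollars) and age (years) are on very different scales
a = (50_000, 25)
b = (51_000, 60)

raw = math.dist(a, b)  # dominated almost entirely by the income axis
print(round(raw, 1))   # 1000.6 -- the 35-year age gap barely registers

# Standardize each feature (illustrative population means/sds, assumed here)
def z(x, mean, sd):
    return (x - mean) / sd

az = (z(a[0], 50_500, 5_000), z(a[1], 40, 12))
bz = (z(b[0], 50_500, 5_000), z(b[1], 40, 12))
print(round(math.dist(az, bz), 2))  # 2.92 -- age now contributes meaningfully
```

Whether this rescaling is desirable is exactly the judgment call the answer describes: if a dollar of income genuinely matters as much as the raw numbers suggest, don't scale.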
8,155
When should I apply feature scaling for my data [duplicate]
In my view the question about scaling/not scaling the features in machine learning is a statement about the measurement units of your features. And it is related to the prior knowledge you have about the problem. Some of the algorithms, like Linear Discriminant Analysis and Naive Bayes do feature scaling by design and ...
When should I apply feature scaling for my data [duplicate]
In my view the question about scaling/not scaling the features in machine learning is a statement about the measurement units of your features. And it is related to the prior knowledge you have about
When should I apply feature scaling for my data [duplicate] In my view the question about scaling/not scaling the features in machine learning is a statement about the measurement units of your features. And it is related to the prior knowledge you have about the problem. Some of the algorithms, like Linear Discriminan...
When should I apply feature scaling for my data [duplicate] In my view the question about scaling/not scaling the features in machine learning is a statement about the measurement units of your features. And it is related to the prior knowledge you have about
8,156
When should I apply feature scaling for my data [duplicate]
There are several methods of normalization. In regards to regression, if you plan on normalizing the feature by a single factor then there is no need. The reason is that single-factor normalization, like dividing or multiplying by a constant, already gets adjusted in the weights (i.e., let's say the weight of a feature i...
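The "gets adjusted in the weights" point can be demonstrated with a minimal least-squares fit (my own sketch, regression through the origin): multiplying a feature by a constant just divides its fitted weight by that constant, leaving predictions unchanged.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]

def ols_slope(x, y):
    # least-squares slope through the origin: sum(x*y) / sum(x*x)
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

w = ols_slope(xs, ys)
w_scaled = ols_slope([10 * a for a in xs], ys)  # feature multiplied by 10

assert abs(w_scaled - w / 10) < 1e-12           # weight absorbed the factor
pred = [w * a for a in xs]
pred_scaled = [w_scaled * 10 * a for a in xs]
assert all(abs(p - q) < 1e-12 for p, q in zip(pred, pred_scaled))
```

This is why single-factor scaling is a no-op for plain regression, while it does matter for distance-based or regularized methods.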
8,157
When should I apply feature scaling for my data [duplicate]
This issue actually seems to be overlooked in many machine learning courses/resources. I ended up writing an article about scaling on my blog. In short, there are "monotonic transformation" invariant learning methods (decision trees and everything that derives from them), translation-invariant learning methods (kNN, SVM wi...
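A quick way to see the "monotonic transformation" invariance for tree-style methods (my own sketch with a hand-rolled decision stump, since a full tree library isn't needed): a stump only compares values to a threshold, so any strictly increasing transform such as log leaves the chosen split, and hence the predictions, unchanged.

```python
import math

x = [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]
y = [0, 0, 0, 1, 1, 1]

def best_stump(xs, ys):
    """Index (in sorted feature order) of the split minimizing errors."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    return min(range(1, len(xs)),
               key=lambda k: sum(ys[i] != 0 for i in order[:k]) +
                             sum(ys[i] != 1 for i in order[k:]))

raw_split = best_stump(x, y)
log_split = best_stump([math.log(v) for v in x], y)
assert raw_split == log_split  # same partition of the data either way
```

A kNN or SVM, by contrast, would see genuinely different geometry after the log transform.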
8,158
When should I apply feature scaling for my data [duplicate]
Here's another chemometric application example where feature scaling would be disastrous: There are lots of classification (qualitative analysis) tasks of the form "test whether some analyte (= substance of interest) content is below (or above) a given threshold (e.g. legal limit)". In this case, the sensors to produce...
8,159
Why are there no deep reinforcement learning engines for chess, similar to AlphaGo?
EDIT (after reading the paper): I've read the paper thoroughly. Let's start off with what Google claimed in the paper: They defeated Stockfish with Monte-Carlo Tree Search + deep neural networks. The match was absolutely one-sided: many wins for AlphaZero but none for Stockfish. They were able to do it in just four ho...
8,160
Why are there no deep reinforcement learning engines for chess, similar to AlphaGo?
Deep Blue already beat Kasparov, so this problem was solved with a much simpler approach. This was possible because the number of possible moves in chess is much smaller than in Go, so it is a much simpler problem. Moreover, notice that both NNs and brute force need huge computing resources (here you can find a photo o...
8,161
What is the difference between kernel, bias, and activity regulizers, and when to use which?
What is the difference between them? You have the regression equation $y = Wx+b$, where $x$ is the input, $W$ the weights matrix and $b$ the bias. Kernel Regularizer: Tries to reduce the weights $W$ (excluding bias). Bias Regularizer: Tries to reduce the bias $b$. Activity Regularizer: Tries to reduce the layer's outp...
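As a rough numerical sketch (plain Python, not the Keras API itself) of which quantity each penalty touches in $y = Wx + b$:

```python
# Toy layer y = W x + b
W = [[0.2, -0.5], [0.7, 0.1]]    # weights ("kernel")
b = [0.3, -0.4]                  # bias
x = [1.0, 2.0]                   # input
y = [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]

l2 = 0.01  # regularization strength

kernel_penalty = l2 * sum(w * w for row in W for w in row)  # acts on W only
bias_penalty = l2 * sum(bi * bi for bi in b)                # acts on b only
activity_penalty = l2 * sum(yi * yi for yi in y)            # acts on output y
print(kernel_penalty, bias_penalty, activity_penalty)
```

Each term is added to the training loss; shrinking $W$ lowers the kernel penalty without touching the bias penalty, which is why the three can be tuned independently.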
8,162
What is the difference between kernel, bias, and activity regulizers, and when to use which?
kernel_regularizer acts on the weights, while bias_regularizer acts on the bias and activity_regularizer acts on y (the layer output). We apply kernel_regularizer to penalize weights that are very large and cause the network to overfit; after applying it, the weights become smaller. While we bias_...
8,163
What is the difference between kernel, bias, and activity regulizers, and when to use which?
I will expand upon @Bloc97's answer about the difference between $L1$ and $L2$ constraints, in order to show why $L1$ may drive some weights to zero. In the case of $L2$ regularization, the gradient of a single weight is given by $$ \delta w = u - 2pw$$ where $u$ is the input from the previous layer being multiplied b...
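The zeroing behavior is easy to see with plain gradient steps on the penalty terms alone (my own sketch): the $L2$ gradient $2\lambda w$ shrinks the weight proportionally and never reaches zero, while the constant-magnitude $L1$ gradient $\lambda\,\mathrm{sign}(w)$ clips small weights to exactly zero.

```python
w_l2 = 0.05
w_l1 = 0.05
lr, lam = 0.1, 0.1

for _ in range(100):
    w_l2 -= lr * (2 * lam * w_l2)   # gradient of lam * w^2: proportional shrink
    step = lr * lam                 # |gradient| of lam * |w|: constant shrink
    w_l1 = max(w_l1 - step, 0.0) if w_l1 > 0 else min(w_l1 + step, 0.0)

print(w_l2 > 0)   # True: still positive, just tiny
print(w_l1 == 0)  # True: hit exactly zero after a few steps
```

The `max(..., 0.0)` clamp mimics how proximal/subgradient treatments of $L1$ stop at zero rather than oscillating around it.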
8,164
Why is the CDF of a sample uniformly distributed
Assume $F_X$ is continuous and increasing. Define $Z = F_X(X)$ and note that $Z$ takes values in $[0, 1]$. Then $$F_Z(x) = P(F_X(X) \leq x) = P(X \leq F_X^{-1}(x)) = F_X(F_X^{-1}(x)) = x.$$ The derivative of $F_Z$ is constant so $Z$ is uniformly distributed. A more specific way to see this is by observing that for a un...
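This is the probability integral transform, and it is easy to check by simulation (my own sketch, using an Exponential(1), whose CDF is $F(x) = 1 - e^{-x}$):

```python
import math
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]
z = [1 - math.exp(-s) for s in samples]  # Z = F_X(X)

# A Uniform(0, 1) has mean 1/2 and variance 1/12
mean = sum(z) / len(z)
var = sum((v - mean) ** 2 for v in z) / len(z)
print(round(mean, 3), round(var, 3))  # close to 0.5 and 0.083
```

The same check works for any continuous distribution with a closed-form CDF; histogramming `z` would show the flat density directly.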
8,165
Why is the CDF of a sample uniformly distributed
Intuitively, perhaps it makes sense to think of $F(x)$ as a percentile function, e.g. $F(x)$ of a randomly generated sample from the DF $F$ is expected to fall below $x$. Alternately $F^{-1}$ (think inverse images, not a proper inverse function per se) is a "quantile" function. That is, $x = F^{-1}(p)$ is the point $x$...
8,166
Why is the CDF of a sample uniformly distributed
Here's some intuition. Let's use a discrete example. Say after an exam the students' scores are $X = [10, 50, 60, 90]$. But you want the scores to be more even or uniform. $h(X) = [25, 50, 75, 100]$ looks better. One way to achieve this is to find the percentiles of each student's score. Score $10$ is $25\%$, score $50...
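The percentile computation in the example is just the empirical CDF scaled to 100; a short sketch:

```python
scores = [10, 50, 60, 90]

def percentile(x, data):
    """Percent of observations less than or equal to x (empirical CDF x 100)."""
    return 100 * sum(d <= x for d in data) / len(data)

print([percentile(s, scores) for s in scores])  # [25.0, 50.0, 75.0, 100.0]
```

Applying the (empirical) CDF to its own data always yields evenly spread ranks, which is the discrete analogue of the uniformity result.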
8,167
Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
This is discussed at some length at https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html (search for "singular models"); it's common, especially when there is a small number of groups (although 30 is not particularly small in this context). One difference between lme4 and many other packages is that many packages, i...
8,168
Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
I don't think there's a problem. The lesson from the model output is that although there is "obviously" variation in subject performance, the extent of this subject variation can be fully or virtually-fully explained by just the residual variance term alone. There is not enough additional subject-level variation to war...
8,169
Utility of feature-engineering : Why create new features based on existing features?
The simplest example used to illustrate this is the XOR problem (see image below). Imagine that you are given data consisting of $x$ and $y$ coordinates and the binary class to predict. You could expect your machine learning algorithm to find out the correct decision boundary by itself, but if you generated addition...
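The XOR point can be verified in a few lines (my own sketch): no single linear threshold on $(x_1, x_2)$ separates the classes, but adding the product feature $x_1 x_2$ makes one suffice.

```python
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]  # XOR of the two coordinates

def separates(features, labels, w, threshold):
    """Does the linear rule w.f > threshold reproduce every label?"""
    return all((sum(wi * fi for wi, fi in zip(w, f)) > threshold) == bool(y)
               for f, y in zip(features, labels))

# With the engineered feature x1*x2, weights (1, 1, -2) and threshold 0.5 work
engineered = [(x1, x2, x1 * x2) for x1, x2 in points]
print(separates(engineered, labels, (1, 1, -2), 0.5))  # True
```

The engineered feature effectively lifts the data into a space where the classifier's simple hypothesis class is expressive enough.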
8,170
Utility of feature-engineering : Why create new features based on existing features?
Well, if you plan to use a simple, linear classifier, it makes perfect sense to generate new features which are a non-linear function of the existing ones, especially if your domain knowledge indicates that the resulting feature will be meaningful and informative. Note that a linear classifier cannot consider those compl...
8,171
Utility of feature-engineering : Why create new features based on existing features?
It is true that some machine learning models can handle non-linearity and interactions between variables; however, depending on the situation, I see three reasons it becomes necessary. Some models like linear regression don't handle non-linearity automatically; in that case, you need to create ...
8,172
How do I fit a multilevel model for over-dispersed poisson outcomes?
You can fit a multilevel GLMM with a Poisson distribution (with over-dispersion) using R in multiple ways. A few R packages are: lme4, MCMCglmm, arm, etc. A good reference is Gelman and Hill (2007). I will give an example of doing this using the rjags package in R. It is an interface between R and JAGS (like OpenBUGS or ...
8,173
How do I fit a multilevel model for over-dispersed poisson outcomes?
No need to leave the lme4 package to account for overdispersion; just include a random effect for observation number. The BUGS/JAGS solutions mentioned are probably overkill for you, and if they aren't, you should have the easy-to-fit lme4 results for comparison. data$obs_effect<-1:nrow(data) overdisp.fit<-lmer(y~1+obs...
8,174
How do I fit a multilevel model for over-dispersed poisson outcomes?
I think that the glmmADMB package is exactly what you are looking for. install.packages("glmmADMB", repos="http://r-forge.r-project.org") From a Bayesian point of view, you can use the MCMCglmm package or the BUGS/JAGS software; they are very flexible and you can fit this kind of model. (and the syntax is close t...
8,175
How do I fit a multilevel model for over-dispersed poisson outcomes?
Good suggestions so far. Here's one more. You can fit a hierarchical negative binomial regression model using the rhierNegbinRw function of the bayesm package.
8,176
FPR (false positive rate) vs FDR (false discovery rate)
I'm going to explain these in a few different ways because it helped me understand it. Let's take a specific example. You are doing a test for a disease on a group of people. Now let's define some terms. For each of the following, I am referring to an individual who has been tested: True positive (TP): Has the disease,...
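In code, with hypothetical counts from screening 1,000 people (numbers made up for illustration):

```python
# TP: sick & positive, FP: healthy & positive,
# TN: healthy & negative, FN: sick & negative
TP, FP, TN, FN = 45, 95, 855, 5

FPR = FP / (FP + TN)  # of the truly healthy, what fraction test positive?
FDR = FP / (FP + TP)  # of the positive tests, what fraction are wrong?

print(round(FPR, 3), round(FDR, 3))  # 0.1 and 0.679
```

Note how a modest FPR (10%) still yields a high FDR (~68%): because most of the screened population is healthy, the healthy false positives swamp the true positives.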
8,177
FPR (false positive rate) vs FDR (false discovery rate)
You should examine the table in https://en.wikipedia.org/wiki/Confusion_matrix. Please note that FPR is placed vertically while FDR is horizontal. A false positive (FP) happens if your null hypothesis is true but you reject it; a false discovery (FD) happens if you predict something significant when you shouldn't.
8,178
Why should the frequency of heads in a coin toss converge to anything at all?
This is an excellent question, and it shows that you are thinking about important foundational matters in simple probability problems. The convergence outcome follows from the condition of exchangeability. If the coin is tossed in a manner that is consistent from flip-to-flip, then one might reasonably assume that the...
8,179
Why should the frequency of heads in a coin toss converge to anything at all?
If you assume the coin tosses are independent of each other, and that you are equally likely to obtain heads on any one coin toss, then it isn't an axiom, and in fact follows from the strong law of large numbers. EDIT: Just to answer some of the other things in your post, statistics is built upon probability theory, so...
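A quick simulation (a sketch, with an arbitrary seed for reproducibility) shows the strong-law behaviour this answer appeals to: the running fraction of heads settles toward the single-toss probability.

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility
p = 0.5          # assumed probability of heads on any one toss
n = 100_000

heads = 0
for _ in range(n):
    heads += random.random() < p  # True/False counts as 1/0

freq = heads / n
print(f"frequency of heads after {n} tosses: {freq:.4f}")
```

With independent, identically distributed tosses the deviation |freq − p| shrinks like 1/√n; nothing in the physics forces convergence — it is the i.i.d. assumption doing the work.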
8,180
Why should the frequency of heads in a coin toss converge to anything at all?
I want to second the answer by jacobe, but also add a bit of detail. Assume that there is some probability $p$ of a coin toss being a head. Assign the outcome head to a score of $1$ and tails to a score of $0$. Note that the average score from a large number of coin tosses is also the average frequency of getting heads...
8,181
Why should the frequency of heads in a coin toss converge to anything at all?
As long as your coin is memoryless (each flip independent),* such a limiting probability exists because randomly chosen variation tends to cancel itself out. Mathematically, this fact is represented in the theorem that independent variances add: $$\mathrm{Var}(A+B)=\mathrm{Var}(A)+\mathrm{Var}(B)$$ Since the variation...
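The additivity of independent variances quoted here is easy to check numerically (a sketch with simulated standard-normal draws and an arbitrary seed):

```python
import random

random.seed(0)
n = 100_000

def sample_var(xs):
    """Unbiased sample variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

A = [random.gauss(0, 1) for _ in range(n)]
B = [random.gauss(0, 1) for _ in range(n)]  # drawn independently of A
S = [a + b for a, b in zip(A, B)]

var_A, var_B, var_S = sample_var(A), sample_var(B), sample_var(S)
print(var_A + var_B, var_S)  # both close to 2.0
```

Because variances add while the running mean divides by n, the standard deviation of the frequency of heads shrinks like 1/√n — the cancellation the answer describes.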
8,182
Why should the frequency of heads in a coin toss converge to anything at all?
The set of principles that applies to coin tossing, and from which we can derive this, is indeed a set of axioms. That means they’re open to question only inasmuch as the whole idea is. I see a hugely important secondary question in whether this is about prediction, measurement or explanation. Whichever matters most, how is this not a...
8,183
Is there any relationship among cosine similarity, pearson correlation, and z-score?
The cosine similarity between two vectors $a$ and $b$ is just the cosine of the angle between them $$\cos\theta = \frac{a\cdot b}{\lVert{a}\rVert \, \lVert{b}\rVert}$$ In many applications that use cosine similarity, the vectors are non-negative (e.g. a term frequency vector for a document), and in this case the cosine similarity wi...
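The identity behind this answer — Pearson correlation is the cosine similarity of the mean-centered vectors — can be checked directly (a numpy sketch with arbitrary made-up data):

```python
import numpy as np

a = np.array([2.0, 4.0, 6.0, 9.0, 1.0])
b = np.array([1.0, 3.0, 7.0, 8.0, 2.0])

def cosine(u, v):
    """Cosine of the angle between vectors u and v."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Center each vector at its mean, then take the cosine...
ac, bc = a - a.mean(), b - b.mean()
r_via_cosine = cosine(ac, bc)

# ...and compare with Pearson correlation computed directly.
r_pearson = np.corrcoef(a, b)[0, 1]

print(r_via_cosine, r_pearson)  # identical up to floating-point error
```

Standardizing to z-scores (also dividing by the standard deviation) does not change this cosine, since it only rescales each centered vector's length.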
8,184
Is there any relationship among cosine similarity, pearson correlation, and z-score?
To convert a z-score to a cosine, use the cumulative distribution function for a Gaussian distribution. Find the value of the Gaussian cdf corresponding to the z-score value. Subtract 0.5 from that value, multiply by 2, and assume that value is the sine of an angle. Use the arcsine function to find that angle. Then tak...
8,185
What is one class SVM and how does it work?
The problem addressed by One Class SVM, as the documentation says, is novelty detection. The original paper describing how to use SVMs for this task is "Support Vector Method for Novelty Detection". The idea of novelty detection is to detect rare events, i.e. events that happen rarely, and hence, of which you have very...
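As a sketch of the workflow with scikit-learn's `OneClassSVM` (the data here are made up): train only on "normal" points, then ask the model to flag novelties.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training data: only "normal" observations, clustered near the origin.
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# nu bounds the fraction of training points allowed outside the boundary.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(X_train)

# New data: one typical point and one obvious novelty.
X_new = np.array([[0.1, -0.2], [8.0, 8.0]])
preds = clf.predict(X_new)
print(preds)  # +1 = looks normal, -1 = flagged as novel
```

The far-away point falls outside the learned support of the training distribution, so it is labeled −1; no examples of the rare class were ever needed.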
8,186
What is one class SVM and how does it work?
I will assume you understand how a standard SVM works. To summarise, it separates two classes using a hyperplane with the largest possible margin. One-Class SVM is similar, but instead of using a hyperplane to separate two classes of instances, it uses a hypersphere to encompass all of the instances. Now think of the "...
8,187
What is one class SVM and how does it work?
1. Traditional SVM Project points to a higher-dimensional space to separate two classes (initially inseparable in the lower-dimensional space) Find support vectors (on the edge of each class in feature space) Allow some soft margin for some points to lie in the region between support vectors (this is to avoid over-fitting) ...
8,188
What is one class SVM and how does it work?
You can use One-Class SVM in an Active Learning pipeline in a semi-supervised way. For example: since SVM is a max-margin method, as described before, you can treat those margin regions as boundaries for a specific class and perform the relabeling.
8,189
Color and line thickness recommendations for line plots
I will try to be provocative here and wonder whether the absence of such guidelines arises because this is a nearly insoluble problem. People in quite different fields seem to agree in often talking about "spaghetti plots" and the problems they pose in distinguishing different series. Concretely, a mass of lines for s...
8,190
Color and line thickness recommendations for line plots
Questions 2 and 3 you answered yourself - the color brewer palettes are suitable. The hard question is 1, but like Nick I'm afraid it is based on a false hope. The color of the lines are not what makes one be able to distinguish between the lines easily, it is based on continuity and how tortuous the lines are. Thus th...
8,191
Color and line thickness recommendations for line plots
From "The Elements of Statistical Learning" by Trevor Hastie et al. : "Our first edition was unfriendly to colorblind readers; in particular, we tended to favor red/green contrasts which are particularly troublesome. We have changed the color palette in this edition to a large extent, replacing the above with an orange...
8,192
Color and line thickness recommendations for line plots
I've seen very little attention given to "line thickness" with regard to proper data visualization. Perhaps the ability to discern different line thicknesses is not as variable as the ability to discern color. Some resources: Hadley Wickham (2009), ggplot: Elegant Graphics for Data Analysis, Springer; has a supportin...
8,193
Color and line thickness recommendations for line plots
While I agree that there's not a unique solution to the problem, I use the recommendation of this blog: http://blogs.nature.com/methagora/2013/07/data-visualization-points-of-view.html The posts on colour tackle the issues of colour-blindness and Gray-scale printing and gives an example of colour scale that solves this...
8,194
Why not report the mean of a bootstrap distribution?
Because the bootstrapped statistic is one further abstraction away from your population parameter. You have your population parameter, your sample statistic, and only on the third layer you have the bootstrap. The bootstrapped mean value is not a better estimator for your population parameter. It's merely an estimate o...
8,195
Why not report the mean of a bootstrap distribution?
There is at least one case where people do use the mean of the bootstrap distribution: bagging (short for bootstrap aggregating). The basic idea is that if your estimator is very sensitive to perturbations in the data (i.e., the estimator has high variance and low bias), then you can average over lots of bootstrap samp...
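A sketch of bagging with scikit-learn (synthetic data, arbitrary settings): a fully grown regression tree is a high-variance, low-bias estimator, and averaging it over bootstrap resamples typically reduces test error.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)
X_test = rng.uniform(0, 6, size=(300, 1))
y_test = np.sin(X_test[:, 0]) + rng.normal(scale=0.3, size=300)

# One fully grown tree (overfits the noise) vs 100 bagged trees.
tree = DecisionTreeRegressor(random_state=0).fit(X, y)
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                       random_state=0).fit(X, y)

def mse(model):
    """Test-set mean squared error."""
    return float(np.mean((model.predict(X_test) - y_test) ** 2))

print(f"single tree MSE:  {mse(tree):.3f}")
print(f"bagged trees MSE: {mse(bag):.3f}")
```

This is exactly "the mean of a bootstrap distribution" used as a prediction — it works here because the goal is variance reduction of an unstable estimator, not inference about a population parameter.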
8,196
Why not report the mean of a bootstrap distribution?
It is worth noting that the difference between the mean of bootstrapped samples $\theta_B$ and the sample estimate $\hat{\theta}$ can sometimes be used as an estimate of the bias of $\hat{\theta}$ in estimating the true parameter $\theta$.
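A sketch of that bias estimate (numpy, arbitrary seed) for the classic downward-biased estimator $\hat{\theta} = \max(X)$ of the upper bound of a $\text{Uniform}(0,\theta)$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0
x = rng.uniform(0, theta, size=30)

theta_hat = x.max()  # the (downward-biased) plug-in estimate

# Bootstrap: recompute the statistic on resamples of the data.
boots = np.array([rng.choice(x, size=x.size, replace=True).max()
                  for _ in range(2000)])

bias_est = boots.mean() - theta_hat      # estimates E[theta_hat] - theta
theta_corrected = theta_hat - bias_est   # simple bias-corrected estimate
print(theta_hat, theta_corrected)
```

Note this uses the bootstrap distribution to *correct* $\hat{\theta}$, which is different from reporting the bootstrap mean itself: the bootstrap mean is shifted in the same direction as the bias of $\hat{\theta}$ and would make the estimate worse.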
8,197
Why not report the mean of a bootstrap distribution?
One simple answer: because it's biased. A simple example: estimate the upper bound of a $\text{Uniform}(0, \theta)$ random variable. Here, I take 1,000 bootstrap samples of an $n=10$ random sample, calculate the MLE for each bootstrap subsample, and average them together. The relative bias is 5%! set.seed(123) out <- replic...
8,198
How do you find weights for weighted least squares regression?
Weighted least squares (WLS) regression is not a transformed model. Instead, you are simply treating each observation as more or less informative about the underlying relationship between $X$ and $Y$. Those points that are more informative are given more 'weight', and those that are less informative are given less we...
8,199
How do you find weights for weighted least squares regression?
When performing WLS, you need to know the weights. There are several ways to find them, as described on page 191 of Introduction to Linear Regression Analysis by Douglas C. Montgomery, Elizabeth A. Peck, G. Geoffrey Vining. For example: Experience or prior information using some theoretical model. Using residuals of the model,...
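A sketch of the residual-based approach (numpy, simulated heteroscedastic data; this two-stage recipe is one common variant, not the only one): fit OLS, model the error scale from the residuals, then weight each point by the inverse of its estimated variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 3, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 * x)  # noise sd grows with x

X = np.column_stack([np.ones(n), x])

def wls(X, y, w):
    """Closed form: beta = (X' W X)^{-1} X' W y with W = diag(w)."""
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

# Stage 1: OLS (all weights equal) to get residuals.
beta_ols = wls(X, y, np.ones(n))
resid = y - X @ beta_ols

# Stage 2: regress |residuals| on x to model the error scale,
# then weight each observation by 1 / (estimated variance).
gamma = wls(X, np.abs(resid), np.ones(n))
sigma_hat = X @ gamma
w = 1.0 / sigma_hat**2

beta_wls = wls(X, y, w)
print(beta_wls)  # close to the true (intercept, slope) = (1, 2)
```

OLS would still be unbiased here, but WLS gives noisier observations less say, which tightens the standard errors of the coefficients.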
8,200
Mean Average Precision vs Mean Reciprocal Rank
Imagine you have some kind of query, and your retrieval system has returned you a ranked list of the top-20 items it thinks most relevant to your query. Now also imagine that there is a ground-truth to this, that in truth we can say for each of those 20 that "yes" it is a relevant answer or "no" it isn't. Mean reciproc...
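The two metrics described above take only a few lines to write down (a sketch; the binary relevance lists here are hypothetical ground truth, in ranked order):

```python
def reciprocal_rank(rels):
    """1/rank of the first relevant item, or 0.0 if none is relevant."""
    for i, rel in enumerate(rels, start=1):
        if rel:
            return 1.0 / i
    return 0.0

def average_precision(rels):
    """Mean of precision@k over the ranks k that hold a relevant item."""
    hits, precisions = 0, []
    for i, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0

# One ranked list per query; 1 = relevant, 0 = not relevant.
queries = [[0, 0, 1, 0, 1], [1, 0, 0, 0, 0]]

mrr = sum(reciprocal_rank(q) for q in queries) / len(queries)
mean_ap = sum(average_precision(q) for q in queries) / len(queries)
print(mrr, mean_ap)
```

MRR looks only at the first relevant result, so the second hit in the first query changes MAP but leaves MRR untouched.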