Justifications for a fixed-effects vs random-effects model in meta-analysis
Note: If you want a quick answer to your question regarding using the heterogeneity test to make this decision, scroll down to "Which Justifications Are Reasonable?".

There are a few justifications (some more or less reasonable than others) that researchers offer for their selection of a fixed-effects vs. random-effects meta-analytic synthesis. These are discussed in introductory meta-analysis textbooks, like Borenstein et al. (2009), Card (2011), and Cooper (2017). Without condemning or condoning any of these justifications (yet), they include:

Justifications for Selection of a Fixed-Effects Model

1. Analytic Simplicity: Some folks feel the calculation/interpretation of a random-effects model is beyond their statistical understanding, and therefore stick to a simpler model. With the fixed-effects model the researcher only needs to estimate variability in effect sizes driven by sampling error. For better or worse, this is a pragmatic practice encouraged explicitly in Card (2011).

2. Prior Belief in No Study-Level Variability/Moderators: If a researcher believes that all effect sizes in their sample vary only because of sampling error -- and that there is no systematic study-level variability (and therefore no moderators) -- there would be little imperative to fit a random-effects model. I think this justification and the former sometimes walk hand in hand, when a researcher feels fitting a random-effects model is beyond their capacity and then rationalizes this decision by claiming, after the fact, that they don't anticipate any true study-level heterogeneity.

3. Systematic Moderators Have Been Exhaustively Considered: Some researchers may use a fixed-effects analysis after they have investigated and taken into account every moderator that they can think of. The underlying rationale is that once a researcher has accounted for every conceivable/meaningful source of study-level variability, all that can be left over is sampling error, and therefore a random-effects model would be unnecessary.

4. Non-Significant Heterogeneity Test (i.e., $Q$ Statistic): A researcher might feel more comfortable adopting a fixed-effects model if they fail to reject the null of a homogeneous sample of effect sizes.

5. Intention to Make Limited/Specific Inferences: Fixed-effects models are appropriate for speaking to patterns of effects strictly within the sample of effects. A researcher might therefore justify fitting a fixed-effects model if they are comfortable speaking only to what is going on in their sample, and not speculating about what might happen in studies missed by their review, or in studies that come after their review.

Justifications for Selection of a Random-Effects Model

6. Prior Belief in Study-Level Variability/Moderators: In contrast to Justification 2 (in favour of fixed-effects models), if the researcher anticipates that there will be some meaningful amount of study-level variability (and therefore moderation), they would default to specifying a random-effects model. If you come from a psychology background (I do), this is becoming an increasingly routine/encouraged default way of thinking about effect sizes (e.g., see Cumming, 2014).

7. Significant Heterogeneity Test (i.e., $Q$ Statistic): Just as a researcher might use a non-significant $Q$ test to justify their selection of a fixed-effects model, so too might they use a significant $Q$ test (rejecting the null of homogeneous effect sizes) to justify their use of a random-effects model.

8. Analytic Pragmatism: It turns out that if you fit a random-effects model and there is no significant heterogeneity (i.e., $Q$ is not significant), you will arrive at fixed-effects estimates; only in the presence of significant heterogeneity will these estimates change. Some researchers might therefore default to a random-effects model, figuring that their analyses will "work out" the way they ought to, depending on the qualities of the underlying data.

9. Intention to Make Broad/Generalizable Inferences: Unlike with fixed-effects models, random-effects models license a researcher to speak (to some degree) beyond their sample, in terms of patterns of effects/moderation that would play out in a broader literature. If this level of inference is desirable to a researcher, they might therefore prefer a random-effects model.

Consequences of Specifying the Wrong Model

Though not an explicit part of your question, I think it's important to point out why it matters for the researcher to "get it right" when selecting between fixed-effects and random-effects meta-analysis models: it largely comes down to estimation precision and statistical power. Fixed-effects models are more statistically powerful, at the risk of yielding artificially precise estimates; random-effects models are less statistically powerful, but potentially more reasonable if there is true heterogeneity. In the context of tests of moderators, fixed-effects models can underestimate the extent of error variance, while random-effects models can overestimate it (depending on whether their modelling assumptions are met or violated; see Overton, 1998). Again, within the psychology literature, there is an increasing sense that the field has relied too heavily on fixed-effects meta-analyses, and that we have therefore deluded ourselves into a greater sense of certainty/precision in our effects (see Schmidt et al., 2009).

Which Justifications Are Reasonable?

To answer your particular inquiry directly: some (e.g., Borenstein et al., 2009; Card, 2011) caution against using the heterogeneity test statistic $Q$ to decide whether to specify a fixed-effects or random-effects model (Justifications 4 and 7). These authors argue instead that you ought to make this decision primarily on conceptual grounds (i.e., Justification 2 or Justification 6). The fallibility of the $Q$ statistic for this purpose also makes a certain amount of intuitive sense in the context of especially small (or especially large) syntheses, where $Q$ is likely to be under-powered to detect meaningful heterogeneity (or over-powered to detect trivial amounts of heterogeneity). Analytic simplicity (Justification 1) seems like another justification for fixed-effects models that is unlikely to be successful (for reasons that I think are more obvious). Arguing that all possible moderators have been exhausted (Justification 3), on the other hand, could be more compelling in some cases, if the researcher can demonstrate that they have considered/modelled a wide range of moderator variables; if they've only coded a few moderators, this justification will likely be seen as pretty specious/flimsy. Letting the data make the decision via a default random-effects model (Justification 8) is one that I feel uncertain about. It's certainly not an active/principled decision, but coupled with the psychology field's shift towards preferring random-effects models as a default, it may prove to be an acceptable (though not a particularly thoughtful) justification.

That leaves justifications related to prior beliefs regarding the distribution(s) of effects (Justifications 2 and 6), and those related to the kinds of inferences the researcher wishes to be licensed to make (Justifications 5 and 9). The plausibility of prior beliefs about distributions of effects will largely come down to features of the research you are synthesizing; as Cooper (2017) notes, if you are synthesizing effects of mechanistic/universal processes, collected from largely similar contexts/samples and in tightly controlled environments, a fixed-effects analysis could be entirely reasonable. Synthesizing results from replications of the same experiment would be a good example of when this analytic strategy could be desirable (see Goh et al., 2016). If, however, you're synthesizing a field where designs, manipulations, measures, contexts, and sample characteristics differ quite a bit, it becomes increasingly difficult to argue that one is studying exactly the same effect in each instance. Lastly, the kinds of inferences one wishes to make seems a matter of personal preference/taste, so I'm not sure how one would begin to argue for/against this justification as long as it seemed conceptually defensible.

References

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex, UK: Wiley.
Card, N. A. (2011). Applied meta-analysis for social science research. New York, NY: Guilford Press.
Cooper, H. (2017). Research synthesis and meta-analysis: A step-by-step approach. Thousand Oaks, CA: Sage.
Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7-29.
Goh, J. X., Hall, J. A., & Rosenthal, R. (2016). Mini meta-analysis of your own studies: Some arguments on why and a primer on how. Social and Personality Psychology Compass, 10(10), 535-549.
Overton, R. C. (1998). A comparison of fixed-effects and mixed (random-effects) models for meta-analysis tests of moderator variable effects. Psychological Methods, 3(3), 354-379.
Schmidt, F. L., Oh, I. S., & Hayes, T. L. (2009). Fixed- versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results. British Journal of Mathematical and Statistical Psychology, 62(1), 97-128.
Justifications for a fixed-effects vs random-effects model in meta-analysis
You use a fixed-effects model if you want to make a conditional inference about the average outcome of the $k$ studies included in your analysis. So, any statements you make about the average outcome only pertain to those $k$ studies, and you cannot automatically generalize to other studies.

You use a random-effects model if you want to make an unconditional inference about the average outcome in a (typically hypothetical) population of studies from which the $k$ studies included in your analysis are assumed to have come. So, any statements you make about the average outcome in principle pertain to that entire population of studies (assuming that the $k$ studies included in your meta-analysis are a random sample of the studies in the population or can in some sense be considered representative of all of those studies).

A very common misconception is that the fixed-effects model is only appropriate when the true outcomes are homogeneous and that the random-effects model should be used when they are heterogeneous. However, both models are perfectly fine even under heterogeneity -- the crucial distinction is the type of inference you can make (conditional versus unconditional). In fact, it is also perfectly fine to fit both models: once to make a statement about the average outcome of those $k$ studies, and once to try the more difficult task of making a statement about the average effect 'in general'.
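As an illustration of fitting both models to the same estimates, here is a minimal sketch (not part of the answer above) assuming the metafor package and a data frame dat with effect sizes yi and sampling variances vi (placeholder names, as produced e.g. by escalc()):

    library(metafor)
    # Fixed-effects (equal-effects) fit: conditional inference about these k studies
    fe <- rma(yi, vi, data = dat, method = "FE")
    # Random-effects fit: unconditional inference about the population of studies
    re <- rma(yi, vi, data = dat, method = "REML")
    summary(fe)
    summary(re)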
Justifications for a fixed-effects vs random-effects model in meta-analysis
You ask in particular for references. The classical reference for this is probably the article by Hedges and Vevea entitled "Fixed- and random-effects models in meta-analysis". If you work in health, the relevant chapter in the Cochrane Handbook is probably essential reading and contains much good sense. In particular, it suggests when meta-analysis should not be considered at all and also distinguishes clearly what to do about heterogeneity other than simply fitting random-effects models.
Test whether cross-sectional dependence in panel data follows known (network/spatial) structure
Maybe you are looking for the local variant of Pesaran's CD (cross-sectional dependence) test? Have a look at the literature cited in the R package plm for the function pcdtest (pcdtest has an argument W to define a spatial/proximity structure), e.g., on the web here: http://rdocumentation.org/packages/plm/versions/2.4-1/topics/pcdtest:

Baltagi BH, Feng Q, Kao C (2012). “A Lagrange Multiplier test for cross-sectional dependence in a fixed effects panel data model.” Journal of Econometrics, 170(1), 164-177. ISSN 0304-4076, https://www.sciencedirect.com/science/article/pii/S030440761200098X.
Breusch TS, Pagan AR (1980). “The Lagrange Multiplier Test and Its Applications to Model Specification in Econometrics.” Review of Economic Studies, 47, 239-253.
Pesaran MH (2004). “General Diagnostic Tests for Cross Section Dependence in Panels.” CESifo Working Paper Series, 1229.
Pesaran MH (2015). “Testing Weak Cross-Sectional Dependence in Large Panels.” Econometric Reviews, 34(6-10), 1089-1117. doi: 10.1080/07474938.2014.956623, https://doi.org/10.1080/07474938.2014.956623.
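A minimal sketch of how the local CD test might be called (an illustration, not from the answer above; the Grunfeld example data and the exact spelling of the proximity-matrix argument, w in recent plm versions, should be checked against ?pcdtest; the neighbourhood matrix here is a toy assumption):

    library(plm)
    data("Grunfeld", package = "plm")
    mod <- plm(inv ~ value + capital, data = Grunfeld, model = "within")

    # Global CD test over all pairs of cross-sectional units
    pcdtest(mod, test = "cd")

    # Local CD test: supply a binary proximity matrix marking "neighbouring" units
    n <- length(unique(Grunfeld$firm))
    W <- matrix(0, n, n)
    W[abs(row(W) - col(W)) == 1] <- 1   # toy neighbourhood structure; replace with your own
    pcdtest(mod, test = "cd", w = W)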
minimizer weighted linear regression
It's early and I'm on the bus... maybe I'm sleeping, but I tried to solve it. For $W = S := \Sigma^{-1}$, the variance expression becomes $$a^\top (X^\top S X)^{-1} X^\top S \, S^{-1} \, S X (X^\top S X)^{-1} a,$$ where $S^{-1} S = I$, so $$a^\top (X^\top S X)^{-1} X^\top S X (X^\top S X)^{-1} a = a^\top (X^\top S X)^{-1} a,$$ since $(X^\top S X)^{-1} X^\top S X = I$. Here $(X^\top S X)^{-1}$ is the covariance matrix of the theta vector under the (transformed, homoscedastic) model, and it is the minimal covariance matrix by the Gauss-Markov theorem.
minimizer weighted linear regression
$\require{cancel}$ $$\operatorname{Cov}(\theta^\ast)=E[{\theta^\ast}{\theta^\ast}']-E[{\theta^\ast}]E[{\theta^\ast}]'$$ $$E[{\theta^\ast}]E[{\theta^\ast}]'=\theta \theta'$$ $$y = X\theta+u$$ $$E[{\theta^\ast}{\theta^\ast}']=E[((X'WX)^{-1}X'Wy)((X'WX)^{-1}X'Wy)']=\\ E[(X'WX)^{-1}X'Wyy'WX(X'WX)^{-1}]=E[(X'WX)^{-1}X'W(X\theta+u)(\theta'X'+u')WX(X'WX)^{-1}]\\ =E[(X'WX)^{-1}X'W(X\theta\theta'X'+u\theta'X'+X\theta u'+uu')WX(X'WX)^{-1}]= E[\cancel{(X'WX)^{-1}(X'WX)}\theta\theta'\cancel{(X'WX)(X'WX)^{-1}}]+\\ \underbrace{E[(X'WX)^{-1}X'W(u\theta'X')WX(X'WX)^{-1}]}_{=0}+\\ \underbrace{E[(X'WX)^{-1}X'W(X\theta u')WX(X'WX)^{-1}]}_{=0}+\\ E[(X'WX)^{-1}X'W(uu')WX(X'WX)^{-1}]\\ =\theta \theta' +(X'WX)^{-1}X'W\overbrace{\Sigma}^{E[uu']} WX(X'WX)^{-1} $$ Hence $$\operatorname{Cov}(\theta^\ast)=\underbrace{\cancel{\theta \theta'} +(X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1}}_{E[{\theta^\ast}{\theta^\ast}']}- \underbrace{\cancel{\theta \theta'}}_{E[{\theta^\ast}]E[{\theta^\ast}]'}\\ =(X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1}$$ Then $$\operatorname{Var}(a'\theta^\ast)=E[a'\theta^\ast{\theta^\ast}'a]-E[a'\theta^\ast]E[{\theta^\ast}'a]\\ =a'E[\theta^\ast{\theta^\ast}']a - a'E[\theta^\ast]E[\theta^\ast]'a\\ = a'\left(E[\theta^\ast{\theta^\ast}'] - E[\theta^\ast]E[\theta^\ast]'\right)a\\ =a'\operatorname{Cov}(\theta^\ast)a$$ If you can use the results of the unweighted version, the result can be obtained directly by recognizing a linear transformation of both $X$ and $y$ that generalizes the weighted linear regression estimator.
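A quick numerical check of the sandwich expression above (a sketch with made-up dimensions, not part of the derivation): simulate the model repeatedly with an arbitrary weight matrix $W \neq \Sigma^{-1}$ and compare the empirical covariance of $\theta^\ast$ to the closed form.

    set.seed(42)
    n <- 40; reps <- 5000
    X <- cbind(1, rnorm(n))
    theta <- c(1, 2)
    Sigma <- diag(runif(n, 0.5, 2))          # heteroscedastic error covariance E[uu']
    W <- diag(runif(n, 0.1, 1))              # arbitrary weights, not Sigma^{-1}
    A <- solve(t(X) %*% W %*% X) %*% t(X) %*% W   # theta* = A y

    est <- replicate(reps, drop(A %*% (X %*% theta + rnorm(n, sd = sqrt(diag(Sigma))))))
    cov(t(est))                              # empirical covariance of theta*
    A %*% Sigma %*% t(A)                     # (X'WX)^{-1} X'W Sigma W X (X'WX)^{-1}; should agree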
minimizer weighted linear regression
There are two methods of showing $\operatorname{Var}(a'\theta^*) \geq \operatorname{Var}(a'\hat{\theta})$, where $\hat{\theta} = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$ is the weighted least-squares estimate of $\theta$ with the "theoretical" weights $\Sigma$, and $\theta^* = (X'WX)^{-1}X'Wy$ is an arbitrary weighted least-squares estimate. The first method links the problem to an OLS problem and then applies the Gauss-Markov theorem (as @Danilo attempted but did not clearly finish the argument). The second method is a brute-force evaluation of the difference $\operatorname{Var}(a'\theta^*) - \operatorname{Var}(a'\hat{\theta})$. Method 1 Rewrite the linear model $y = X\theta + \epsilon$ as $y_0 = X_0\theta + u$, where $y_0 = \Sigma^{-1/2}y$, $X_0 = \Sigma^{-1/2}X$, $u = \Sigma^{-1/2}\epsilon$. The latter representation then corresponds to an OLS problem as the error $u$ is homoscedastic in view of $\operatorname{Var}(u) = \Sigma^{-1/2}\Sigma\Sigma^{-1/2} = I_{(n)}$. The Gauss-Markov theorem then applies: since $a'\theta^* = a'(X'WX)^{-1}X'Wy = a'(X'WX)^{-1}X'W\Sigma^{1/2}y_0$ is an unbiased linear estimate of $a'\theta$ (i.e., $E[a'\theta^*] = a'\theta$), it follows that \begin{align} \operatorname{Var}(a'\theta^*) \geq \operatorname{Var}(a'(X_0'X_0)^{-1}X_0'y_0) = \operatorname{Var}(a'\hat{\theta}). \end{align} This completes the proof. Method 2 Since $\operatorname{Var}(a'\theta^*) - \operatorname{Var}(a'\hat{\theta}) = a'((X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} - (X'\Sigma^{-1}X)^{-1})a$, if we can show that the matrix $(X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} - (X'\Sigma^{-1}X)^{-1} \geq 0$ (i.e., the difference is a positive semi-definite matrix), the result then follows. To this end, note that \begin{align} & (X'WX)^{-1}X'W\Sigma WX(X'WX)^{-1} - (X'\Sigma^{-1}X)^{-1} \\ =& (X'WX)^{-1}[X'W\Sigma WX - (X'WX)(X'\Sigma^{-1}X)^{-1}(X'WX)](X'WX)^{-1} \\ =& (X'WX)^{-1}X'W[\Sigma - X(X'\Sigma^{-1}X)^{-1}X']WX(X'WX)^{-1} \\ =& (X'WX)^{-1}X'W\Sigma^{1/2}[I_{(n)} - \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}(\Sigma^{-1/2}X)']\Sigma^{1/2}WX(X'WX)^{-1}, \end{align} hence it suffices to prove $I_{(n)} - \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}(\Sigma^{-1/2}X)' \geq 0$, which follows because the ("hat") matrix $H := \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}(\Sigma^{-1/2}X)'$ is symmetric and idempotent. This completes the proof.
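To complement the two proofs, here is a small numerical illustration (a sketch with hypothetical dimensions and weights) that evaluates $\operatorname{Var}(a'\theta^\ast)$ for an arbitrary $W$ and for $W = \Sigma^{-1}$ via the closed-form covariance; the GLS choice should never be larger.

    set.seed(1)
    n <- 50
    X <- cbind(1, rnorm(n), rnorm(n))
    Sigma <- diag(runif(n, 0.5, 3))          # error covariance
    a <- c(0, 1, 0)                          # linear combination of interest

    cov_wls <- function(W) {                 # (X'WX)^{-1} X'W Sigma W X (X'WX)^{-1}
      A <- solve(t(X) %*% W %*% X) %*% t(X) %*% W
      A %*% Sigma %*% t(A)
    }

    W_arb <- diag(runif(n, 0.1, 2))          # arbitrary positive weights
    c(arbitrary = drop(t(a) %*% cov_wls(W_arb) %*% a),
      gls       = drop(t(a) %*% cov_wls(solve(Sigma)) %*% a))   # gls <= arbitrary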
Entropy of Inverse-Wishart distribution
Wishart and Inverse Wishart Distributions: the Wishart and inverse Wishart distributions are arguably the most popular distributions for modeling random positive definite matrices. Moreover, if a random variable has a Gaussian distribution, then its sample covariance is drawn from a Wishart distribution. The relative entropy between Wishart distributions may be a useful way to measure the dissimilarity between collections of covariance matrices or Gram (inner product) matrices. A new Bayesian entropy estimator has been proposed using an inverted Wishart distribution and a data-dependent prior that handles the small-sample case. The linked note, "Inverse Wishart Differential Entropy and Relative Entropy", explains the derivation of the entropy of the Inverse-Wishart distribution.
Markov Switching Forecast. How can I derive this? [closed]
My attempt is the following. From the system I derived $\begin{array}{lll} y^{\ast}_{t + n} & = & a_{12} \sum_{j = 0}^{\infty} a_{11}^j x_{t + n - j - 1}^{\ast} + \sum_{j = 0}^{\infty} a_{11}^j \varepsilon_{t + n - j}\\ & = & a_{11}^n y^{\ast}_t + a_{12} \sum_{j = 0}^{n - 1} a_{11}^j x_{t + n - j - 1}^{\ast} + \sum_{j = 0}^{n - 1} a_{11}^j \varepsilon_{t + n - j}\\ y_{t + n} - \alpha_1 - \alpha_2 S_{t + n} & = & a_{11}^n ( y_t - \alpha_1 - \alpha_2 S_t) + a_{12} \sum_{j = 0}^{n - 1} a_{11}^j ( x_{t + n - j - 1} - \alpha_3 - \alpha_4 S_{t + n - j - 1}) + \sum_{j = 0}^{n - 1} a_{11}^j \varepsilon_{t + n - j} \end{array}$ Then, $y_{t + n} = \alpha_1 ( 1 - a_{11}^n) + \alpha_2 ( S_{t + n} - a_{11}^n S_t) + a_{11}^n y_t + a_{12} \sum_{j = 0}^{n - 1} a_{11}^j ( x_{t + n - j - 1} - \alpha_3 - \alpha_4 S_{t + n - j - 1}) + \sum_{j = 0}^{n - 1} a_{11}^j \varepsilon_{t + n - j}$ Taking expectations conditional on information at time $t$ (the future errors $\varepsilon_{t + n - j}$, $j = 0, \ldots, n-1$, have zero conditional mean and drop out): \begin{equation} E ( y_{t + n} | I_t) = \alpha_1 ( 1 - a_{11}^n) + \alpha_2 ( E ( S_{t + n} | I_t) - a_{11}^n S_t) + a_{11}^n y_t + a_{12} \sum_{j = 0}^{n - 1} a_{11}^j ( E ( x_{t + n - j - 1} | I_t) - \alpha_3 - \alpha_4 E ( S_{t + n - j - 1} | I_t )) \end{equation}
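One piece still needed to evaluate the last expression is the conditional expectation of the Markov state. Assuming a two-state chain $S_t \in \{0, 1\}$ with transition matrix $P$ (an assumption about the setup, which the post does not spell out), it follows from the $m$-step transition probabilities, for each horizon $m$ appearing in the sum:
$$E ( S_{t + m} | I_t) = \Pr ( S_{t + m} = 1 | I_t) = \begin{pmatrix} 0 & 1 \end{pmatrix} (P^\top)^m \, \xi_{t|t},$$
where $\xi_{t|t} = (\Pr(S_t = 0 | I_t), \Pr(S_t = 1 | I_t))^\top$ is the filtered state-probability vector and $P$ has entries $p_{ij} = \Pr(S_{t+1} = j | S_t = i)$.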
What does an infinite AIC mean and what can be done about it? [duplicate]
Why don't you use forward feature selection?

    min.model <- lm(y ~ 1, data = dat)
    fwd.model <- step(min.model, direction = "forward",
                      scope = ~ x1 + x2 + ... + xn)

This way the model only adds predictors for as long as doing so improves the AIC.
Comparing two F-test statistics
Your model could be: response variable = dummy variable (Category A = 1, B = 0) + Sample (1, 2, 3, 4, 5). Run a regression with these fixed factors as independent variables and look at the effect of the dummy, the effect of the samples, and their combined (interaction) effect too, as your requirements dictate.
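A sketch of what that model might look like in R (the data frame dat and the column names response, category, and sample are hypothetical placeholders for your variables):

    dat$category <- factor(dat$category)          # A/B dummy
    dat$sample   <- factor(dat$sample)            # samples 1-5 as a fixed factor

    fit_main <- lm(response ~ category + sample, data = dat)   # separate effects
    fit_int  <- lm(response ~ category * sample, data = dat)   # adds the combined effect
    anova(fit_main, fit_int)                       # F-test for the interaction
    summary(fit_int)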
Bootstrapping clusters in R [closed]
If I understand you correctly, you want to estimate a statistic per state and then average that statistic to get a bootstrapped estimate of the overall statistic. Stratified sampling does something different: it ensures that the label is sampled representatively in each sample. I do not think that is what you want to do.

You could do this manually without being hacky. Using the dplyr, tidyr and purrr packages from the tidyverse this becomes transparent and clean code:

    library(tidyr)
    library(dplyr)
    library(purrr)

    dat <- data.frame(cluster = rep(letters[1:5], each = 10),
                      x = runif(5 * 10),
                      stringsAsFactors = TRUE)

    boot.stat2 <- function(df) {
      mean(df$x)
    }

    dat %>%
      nest(x) %>%                       # in tidyr >= 1.0, write nest(data = x)
      mutate(stat = map_dbl(data, boot.stat2))

More information: the pipe %>% (dplyr), nesting and unnesting (tidyr), and mapping values (purrr).
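If you then want an actual bootstrap distribution rather than a single per-cluster summary, one hypothetical extension (not part of the original answer; slice_sample() requires dplyr >= 1.0) is to resample rows within each cluster before summarising, and repeat:

    boot_once <- function(df) {
      df %>%
        group_by(cluster) %>%
        slice_sample(prop = 1, replace = TRUE) %>%   # resample within each cluster
        summarise(stat = mean(x), .groups = "drop") %>%
        summarise(overall = mean(stat)) %>%
        pull(overall)
    }
    boot_dist <- replicate(200, boot_once(dat))      # bootstrap distribution of the average
    quantile(boot_dist, c(0.025, 0.975))             # e.g. a 95% percentile interval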
2x3 ANOVA interaction no longer significant after including a covariate? [duplicate]
Ordinarily I would say that each term you add diminishes the residual degrees of freedom, and that in turn means your F-statistic has to be larger in order to attain a given level of significance. That does not appear to be the case here, because your F-values themselves are very different when you include the covariate. Some statistical packages get higher-order F-value calculations wrong in the presence of covariate interactions (specifically, what goes into the denominator). One way to sanity-test this is to obtain the coefficient estimates for the model with and without the covariate and see if any of the estimates change a lot (several-fold) between the two. That would be an indicator that the covariate really does make a difference. But I also am mystified why centering the covariate would fix this. That's weird enough that if I were you I wouldn't rely on these calculations until speaking to a statistician.
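A sketch of that sanity check in R (the outcome y, the two factors A and B, the covariate z, and the data frame dat are hypothetical names standing in for your variables):

    m0 <- lm(y ~ A * B, data = dat)                 # 2x3 ANOVA without the covariate
    m1 <- lm(y ~ A * B + z, data = dat)             # same model plus the covariate
    cbind(no_cov   = coef(m0),
          with_cov = coef(m1)[names(coef(m0))])     # do shared coefficients change several-fold?
    anova(m0, m1)                                   # does the covariate explain much variance?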
Name of phenomenon on estimated CDF plots of censored data
I'm not an expert, but I believe what you're seeing is analogous to soft clipping: Soft Clipping (Gain Compression). It's a little different, because your clipping is caused by a non-deterministic process, in that your signal is clipped when it plus a random noise exceeds a threshold, instead of a device that deterministically reduces an analog signal. I have a guitar pedal that does this; it softens the "punch" of playing an electric guitar: Keeley Compressor Demo. Seems like a decent analogy. I'm not sure if there is a name for it in the statistical community.
Name of phenomenon on estimated CDF plots of censored data
I suspect you have run into the family of stable non-symmetric distributions. First, plot your ecdf in a log-log plot. Adopt a parametric approach and assume a Pareto distribution. The cdf in your case translates to $F_t(t)=1-\left(\frac{t_{min}}{t}\right)^\alpha \textrm{ for } t>t_{min}$, where $t_{min}$ is the minimum completion time of your algorithm, hence the threshold appearing on the left side of the ecdf graph. If you see a line in the log-log plot, then you are on the right path; run a linear regression on the log-transformed data to find $\hat{\alpha}$, the so-called Pareto index. The Pareto index must be greater than 1; it gives an indication of how heavy-tailed the distribution is, i.e., how much of the data lies out in the tails. The closer to 1, the more pathological the situation. In other words, $\alpha$ expresses the balance between nodes that spent negligible time and nodes that spent excessive time before their completion. A previous answer pinpointed the fact that you terminate your experiment abruptly; this introduces a complication described as $\hat{\alpha}=\hat{\alpha}(T)$, so I suggest you vary $T$ to explore this dependence. The heavy-tails phenomenon is common in computer science, particularly when nodes compete for shared resources in a random fashion, e.g. in computer networks.
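A sketch of the log-log check and the regression-based estimate of $\alpha$ in R (t_obs below is a made-up Pareto sample standing in for your completion times; a tail-specific estimator such as the Hill estimator would usually be preferable to a plain regression):

    set.seed(1)
    t_obs <- 2 * runif(500)^(-1 / 1.5)        # toy Pareto sample: t_min = 2, alpha = 1.5

    surv <- 1 - ecdf(t_obs)(t_obs)            # empirical survival function 1 - F(t)
    keep <- surv > 0                          # drop the largest point (log(0) = -Inf)
    plot(log(t_obs[keep]), log(surv[keep]),
         xlab = "log t", ylab = "log(1 - F(t))")   # roughly linear under a Pareto tail

    fit <- lm(log(surv[keep]) ~ log(t_obs[keep]))
    alpha_hat <- -coef(fit)[2]                # slope of the fitted line is -alpha
    alpha_hat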
Name of phenomenon on estimated CDF plots of censored data
Say that your distribution is truncated, like a truncated normal.
Simulate forecast sample paths from tbats model
No, that method is not valid in general. Here's a simple, illustrative counterexample. Assume that you have a random walk without drift: $$Y_t = Y_{t-1} + \varepsilon_t$$ $$\varepsilon_t \sim \mathcal{N}(0,1)$$ This process falls in the TBATS class (it is just an "ANN"-type ETS model with $\alpha=1$, without any complex seasonality, Box-Cox transform, or ARMA errors). Here's what it looks like if you use your method on simulated data: the "simulated path" is flat and has a small variance, whereas the original data will stray from its mean level quite a bit. It does not "look" like the original data at all. If you repeat the procedure many times and compute the empirical quantiles for the middle 95% of the distribution at each horizon, you will see that they are wrong compared to the prediction intervals reported by forecast.tbats (if the method worked, they would match the outer, grey intervals). Many time series models can be viewed as a transformation of a sequence of uncorrelated random variables; the exact transformation depends on the model. Given a specific transformation, you can generally take the residuals (call them $\hat{\varepsilon_t}$), resample them, and then apply this transformation to simulate from the same process. For example, the random walk transforms a sequence of uncorrelated variables $\varepsilon_t$ by the recursion stated above (the cumulative sum). If your original series ends at $T$, you can sample $\varepsilon^*_{T+1}$ from $\{ \hat{\varepsilon_1}, \ldots, \hat{\varepsilon_T} \}$, and apply the same recursion to obtain a simulated value for $Y_{T+1}$, like this: $$Y_{T+1}^* = Y_T + \varepsilon^*_{T+1}$$ If you compute the quantiles as before, you should come close to the grey area. In general, therefore, this kind of model-based bootstrap requires slightly different code for different models, to perform different transformations on the resampled $\varepsilon^*_t$. The function simulate.ets handles this for you for the ETS class, but there still does not seem to be an equivalent for TBATS in the package, as far as I can tell.
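To make the random-walk recursion concrete, here is a sketch of the model-based resampling step in R (illustrative only, and specific to this random-walk example rather than a general TBATS simulator):

    set.seed(1)
    y <- cumsum(rnorm(200))                  # simulated random walk, observed up to time T
    eps_hat <- diff(y)                       # residuals of the fitted model (alpha = 1)

    h <- 48
    eps_star <- sample(eps_hat, h, replace = TRUE)
    y_star <- tail(y, 1) + cumsum(eps_star)  # one simulated future path via the recursion

    # Repeating this many times and taking per-horizon quantiles of y_star
    # approximates the prediction intervals, unlike resampling the observed series itself.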
33,318
Simulate forecast sample paths from tbats model
This is probably a very late answer, but I do not see why not. Your approach seems correct. But there is an easier way to do it. You have already assigned a variable name to your forecast, so you simply need to plot(prediction). By adding the argument h (the number of periods to forecast) to the forecast() call, you can control the forecast length. For example, you could say prediction <- forecast(fit, h = 48). Note that the value of h depends on how far into the future you want to see.
33,319
Post-hoc test after 2-factor repeated measures ANOVA in R?
Would
df1$x1x2 <- interaction(df1$x1, df1$x2)
library(nlme)       # lme() comes from nlme, not lmerTest
library(multcomp)   # for glht()
Lme.mod <- lme(dv ~ x1x2, random = ~1 | subject,
               correlation = corCompSymm(form = ~1 | subject), data = df1)
anova(Lme.mod)
summary(glht(Lme.mod, linfct = mcp(x1x2 = "Tukey")))
be what you are after, i.e. post-hoc tests among all combinations of the levels of both factors x1 and x2? (I've also imposed compound symmetry, to make the lme result match that of the repeated-measures aov call.)
33,320
Post-hoc test after 2-factor repeated measures ANOVA in R?
Tukey multiple-comparison test:
Install the multcomp package: install.packages("multcomp")
Make multcomp available for use: library("multcomp")
Check that it is attached (search() lists the packages currently loaded in R): search()
Then use the function glht().
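A small, self-contained example of that last step, using R's built-in PlantGrowth data (which has nothing to do with the question) just to show the call:
library(multcomp)
fit <- aov(weight ~ group, data = PlantGrowth)        # any model object glht() understands
summary(glht(fit, linfct = mcp(group = "Tukey")))     # Tukey all-pairwise comparisons of the factor levels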
33,321
Polychoric PCA and component loadings in Stata
Although you can store all scores in variables, you cannot display the weights for all of them. But since the weights are important for a meaningful interpretation of the components, you could use the generated variables containing the scores to recover the weights.
33,322
Comparing coefficients in multilevel models
I think you could start by fitting two separate models: first a model with the predictor at the first level and then another model with the predictor at the second level. Then you can check which model leads to a better fit by looking at the value of the deviance (-2LL). The smaller the value of -2LL, the better the fit. In this way you can see which predictor fits better.
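A sketch of that comparison with lme4; the data frame, the level-1 predictor x1, the level-2 predictor x2 and the grouping factor are all simulated for illustration, and the models are fit by ML (REML = FALSE) so that the deviances are comparable:
library(lme4)
set.seed(1)
df <- data.frame(group = rep(1:30, each = 10))
df$x1 <- rnorm(300)                          # level-1 (within-group) predictor
df$x2 <- rnorm(30)[df$group]                 # level-2 (group-level) predictor
df$y  <- 0.5 * df$x1 + 0.3 * df$x2 + rnorm(30)[df$group] + rnorm(300)
m1 <- lmer(y ~ x1 + (1 | group), data = df, REML = FALSE)   # level-1 predictor only
m2 <- lmer(y ~ x2 + (1 | group), data = df, REML = FALSE)   # level-2 predictor only
deviance(m1); deviance(m2)   # smaller -2LL indicates the better-fitting model
AIC(m1, m2)                  # an information criterion is safer for non-nested comparisons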
33,323
Showing a correlation and its p-value as a color
Mixing together the correlation coefficients with the p-values using one dimension (hue) doesn't make sense. If the sample size is the same, then the same correlation coefficient will lead to the same p-value. If the sample size is not the same, then collapsing them both into a single variable is unhelpful and will rely on arbitrary and confusing choices. There's a simple solution: as Ian_Fin suggests, just show both the correlation coefficient and the p-value using separate design elements. Any two of size, shape, colour, and text can be used to convey them separately. The corrplot package in R for visualising correlation matrices offers a range of plotting options and can be used for this. Or, since a correlation matrix is symmetric, you could also show the correlation coefficient elements above the diagonal and the p-values below it (or vice versa, as in this example).
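For instance, here is a minimal sketch with corrplot that shows the coefficients in the upper triangle and blanks out cells whose p-value exceeds 0.05 (argument names may vary slightly across package versions):
library(corrplot)
M <- cor(mtcars)
p <- cor.mtest(mtcars)$p          # matrix of pairwise p-values
corrplot(M, type = "upper", p.mat = p, sig.level = 0.05, insig = "blank")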
33,324
Showing a correlation and its p-value as a color
I use the following transformation: I multiply the correlation measure by 1 minus the p-value: cor.value * (1 - p.value). The resulting heatmap looks like this: the numbers in the heatmap are the correlation coefficients, with the p-values in brackets. The numbers in parentheses on the axes give the number of variables in each dataset.
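A rough sketch of that transformation in R; the correlation and p-value matrices below come from R's built-in mtcars data purely for illustration, and corrplot's cor.mtest is just one way to obtain a p-value matrix:
library(corrplot)                 # used here only for cor.mtest()
r_mat <- cor(mtcars)
p_mat <- cor.mtest(mtcars)$p
combined <- r_mat * (1 - p_mat)   # the transformation described above
heatmap(combined, symm = TRUE)    # base-R heatmap of the combined quantity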
33,325
How probabilities are calculated for SVM model?
Your SVM implementation is very likely based on the LibSVM library. Please refer to LibSVM's documentation (eg. this FAQ item) for an explanation how probabilities are calculated. In brief, probability calculation is based on a separate procedure, which has nothing in common with the decision function. It is even possible that the decision function and calculated probabilities predict different class as winner. The PMML specification does not support LibSVM's probability calculation procedure. Hence, you can't use the probability output feature with SVM models.
33,326
Hypergeometric: how do I construct a credibility interval around K (population successes) in R?
You can solve your problem using the maximum-likelihood method rather than the Bayesian method. Just calculate the hypergeometric probability of finding k for a large number of candidate values of K, using your known values of k, N and n. The value of K that maximises this probability is the maximum-likelihood estimate of K.
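A minimal R sketch of that grid search; the values of k, N and n are invented, and in R's parameterisation dhyper(k, K, N - K, n) gives the probability of observing k successes in n draws from a population of size N containing K successes:
k <- 5; N <- 100; n <- 20                  # hypothetical observed successes, population size, sample size
K_grid <- k:(N - (n - k))                  # K must allow both the observed successes and failures
lik <- dhyper(k, K_grid, N - K_grid, n)    # likelihood of each candidate K
K_grid[which.max(lik)]                     # value of K that maximises the likelihood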
33,327
Cross validation with nonparametric smoothing regressions
It seems to me there are two confusions in your question.
First, linear (least-squares) regression does not require a linear relationship in the independent variables, but in the parameters. Thus $y=a + b \cdot x e^{-x} + c \cdot \frac{z}{1 + x^2}$ can be estimated by ordinary least squares ($y$ is a linear function of the parameters $a$, $b$, $c$), while $y = a + b \cdot x + b^2 \cdot z$ cannot ($y$ is not linear in the parameter $b$).
Second, how do you determine a "correct" functional model from a smoother, i.e. how do you go from step 1 to step 2? As far as I know, there is no way to infer "which functions of regressors to use" from smoothing techniques such as splines, neural nets, etc. Except maybe by plotting the smoothed outputs and determining relationships by intuition, but that doesn't sound very robust to me, and it seems one doesn't need smoothing for this, just scatterplots.
If your final goal is a linear regression model, and your problem is that you don't know exactly what functional form of the regressors should be used, you would be better off directly fitting a regularized linear regression model (such as the LASSO) with a large basis expansion of the original regressors (such as polynomials of the regressors, exponentials, logs, ...). The regularization procedure should then eliminate the unneeded regressors, leaving you with a (hopefully good) parametric model. And you can use cross-validation to determine the optimal penalization parameter (which determines the actual degrees of freedom of the model). You can always use nonparametric regressions as a benchmark for generalization error, as a way to check that your regularized linear model predicts outside data just as well as a nonparametric smoother.
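As a rough sketch of that suggestion with glmnet (the data-generating process and the particular basis expansion below are invented for illustration):
library(glmnet)
set.seed(1)
n <- 500
x <- runif(n, 0.5, 5); z <- runif(n, 0.5, 5)
y <- 2 + x * exp(-x) + z / (1 + x^2) + rnorm(n, sd = 0.1)
X <- cbind(poly(x, 4, raw = TRUE), poly(z, 4, raw = TRUE), log(x), log(z), x * z)  # basis expansion
cvfit <- cv.glmnet(X, y, alpha = 1)        # alpha = 1 gives the LASSO penalty, lambda chosen by CV
coef(cvfit, s = "lambda.min")              # terms the penalty judged unnecessary are shrunk to zero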
33,328
Is there a name for this type of bootstrapping?
My book Bootstrap Methods 2nd Edition has a massive bibliography up to 2007. So even if I don't cover the subject in the book the reference might be in the bibliography. Of course a Google search with the right key words might be better. Freedman, Peters and Navidi did bootstrapping for prediction in linear regression and econometric models but I am not sure what has been done on the mixed model case. Stine's 1985 JASA paper Bootstrap prediction intervals for regression is something you will find very interesting if you haven't already seen it.
33,329
Using Fieller's theorem to calculate the confidence interval of a ratio (paired measurements)
The problem of calculating confidence/likelihood intervals for the ratio of two means is addressed in Chapter 7 of the book Statistical Inference in Science, and in Chapter 10 of Empirical Bayes and Likelihood Inference. Note also that (i) the ratio of the means is different from the mean of the ratio (http://www.hindawi.com/journals/ads/2006/078375/abs/), and (ii) the distribution of the ratio of two normal variables is not normal (http://link.springer.com/article/10.1007%2Fs00362-012-0429-2#page-1).
33,330
Test to distinguish periodic from almost periodic data
As I said, I had an idea how to do this, which I realised, refined and wrote a paper about, which is now published: Chaos 25, 113106 (2015) – preprint on ArXiv. The investigated criterion is almost the same as sketched in the question: Given data $x_1, \ldots, x_n$ sampled at time points $t_0, t_0 + \Delta t, \ldots, t_0 + n\Delta t$, the test decides whether there is a function $f: [t_0, t_0 + n\Delta t] \to \mathbb{R}$ and a $\tau \in [2\Delta t,(n-1)\Delta t]$ such that:
$f(t_0 + i\Delta t)=x_i \quad \forall i\in\{1,\ldots,n\}$,
$f(t+\tau)=f(t) \quad \forall t\in[t_0, t_0 + n\Delta t-\tau]$,
$f$ has no more local extrema than the sequence $x$, with the possible exception of at most one extremum close to the beginning and end of $f$ each.
The test can be modified to account for small errors, such as numerical errors of the simulation method. I hope that my paper also answers why I was interested in such a test.
33,331
Test to distinguish periodic from almost periodic data
Transform the data into frequency domain using the discrete Fourier transform (DFT). If the data is perfectly periodic, there will be exactly one frequency bin with a high value, and other bins will be zero (or near zero, see spectral leakage). Note that the frequency resolution is given by $\frac{\text{sampling frequency}}{\text{Number of samples}}$. So this sets the limit for the detection precision.
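A small R sketch of that check; the signal here is an invented, perfectly periodic 3 Hz sine, so the spectrum should concentrate in a single bin:
fs <- 100                                         # sampling frequency
t  <- seq(0, 10 - 1 / fs, by = 1 / fs)            # 1000 samples
x  <- sin(2 * pi * 3 * t)                         # 3 Hz periodic test signal
spec  <- Mod(fft(x))[1:(length(x) %/% 2)]         # magnitude spectrum, positive frequencies only
freqs <- (seq_along(spec) - 1) * fs / length(x)   # frequency resolution is fs / N
freqs[which.max(spec)]                            # dominant frequency, here 3
sum(spec > 0.01 * max(spec))                      # roughly one large bin if the signal is perfectly periodic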
33,332
Test to distinguish periodic from almost periodic data
If you know the actual periodic signal, calculate $\text{difference} = \left|\text{theoretical data} - \text{measured data}\right|$. Then sum the elements of $\text{difference}$. If it is above a threshold (consider error from floating-point arithmetic), the data is not periodic.
33,333
Sum of two independent Student t variables with same dof is t distributed? [duplicate]
I think you are wrong, because the Student-t includes the Gaussian and the Cauchy distribution. It is well known that sum of Cauchy is Cauchy and sum of normal is normal - so there are at least two contradictions!
33,334
Is Wikipedia's page on the sigmoid function incorrect?
The unsatisfying answer is "It depends who you ask." "Sigmoid", if you break it into parts, just means "S-shaped". The logistic sigmoid function is so prevalent that people tend to gloss over the word "logistic". For machine learning folks, it's become the exemplar of the class, and most call it the sigmoid function. (Is it myopia to call it the sigmoid function?) Still, there are other communities that use S-shaped functions.
33,335
Is Wikipedia's page on the sigmoid function incorrect?
As Arya said, it depends who you ask, but this is not specific to Machine Learning, and even in Machine Learning the situation is not consistent (or not consistently bad). Bishop, for example, uses the term "logistic sigmoid function" and Jordan used "logistic function" already in 1995. In Statistical Mechanics, on the other hand, people are likely to call it the "Fermi-Dirac distribution/function". In some fields of biochemistry, including toxicology, you'll meet the same thing under the name "Hill equation". Etc. It is IMHO important to remember that these are only names (words) used for describing a mathematical concept. Words are what people use to communicate, for example ideas and methods. As long as all participants of the communication understand what concept they are talking about, it doesn't really matter what words they use for it. Communities develop largely independently from each other (otherwise they would form a single community) and develop field-specific "dialects". As a related example, the words "weight" and "bias", in the context of neural networks (and, through historical development, support vector machines) have completely different meanings from those used in statistics, but there is historical/field-specific justification for using them. Update: Actually, neural network pioneers commonly use "logistic function" or "logistic neuron": Hinton, Rumelhart and McClelland (also here), Sejnowski etc. Update 2: Also, one might as well ask: "Is RBF just the Gaussian function?". For some reason, equating the two on CV doesn't seem to cause nearly as much commotion as your question.
33,336
Is Wikipedia's page on the sigmoid function incorrect?
I believe one more answer, specifically addressing your points as they currently stand (Revision 11) and your comments, is warranted.
Is Wikipedia's page on the sigmoid function incorrect?
No. In some communities, specifically Machine Learning, some (maybe even most?) people use the term "sigmoid function" in a different, more limited sense, as a synonym for the logistic function. But not the whole community does so, Machine Learning is not the only community using the term, and Wikipedia is not an encyclopaedia of Machine Learning. It addresses a broader audience, which uses a different terminology and has probably been using it since before Machine Learning was invented.
I have never seen or heard the phrasing that the logistic function is a type of sigmoid function.
Wikipedia also doesn't use this exact wording, so you seem to be misquoting it. But, semantically, considering the logistic function just a member of the sigmoid family is not at all uncommon, not even in the ML community. See for example: "A Sigmoid function is a mathematical function which has a characteristic S-shaped curve. There are a number of common sigmoid functions, such as the logistic function, the hyperbolic tangent, and the arctangent." Examples of such usage in other communities have been given in other answers and comments.
These functions are considered to be peers, usually in a context like: We can use various non-linear functions in this neural network, such as the sigmoid, tanh, and ReLU activation functions.
Again, this is just ML-specific lingo, and even there the situation seems not to be so clear-cut. For example, in Python's Scikit-learn (an ML library!), the neurons in a multi-layer perceptron can have identity, logistic, tanh, or relu activation functions, but not "sigmoid".
From the comments: I work in applied ML, and I would probably be knocked down by my peers if I said that I used a sigmoid function in my neural network instead of being more specific and saying that I really used tanh.
When in Rome, do as the Romans do. But that goes in both directions: Machine Learners, when addressing other audiences, should be specific and use "logistic function" instead of "sigmoid".
I posted this question on a ML site, purposefully to limit the scope of the audience to people in my field.
Cross Validated's scope is broader than just Machine Learning: "Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization."
33,337
Is Wikipedia's page on the sigmoid function incorrect?
It should be clear that the mentioned Wikipedia page has some terminology issues. Wikipedia's statement A common example of a sigmoid function is the logistic function and assertions that these functions are examples of sigmoid functions are confusing at best. The logistic function is not a type of sigmoid function. The sigmoid function is the logistic function. Likewise, the tanh function is not a type of sigmoid function. Stanford's Andrew Ng states the terminology concisely in this video on neural network activation functions. This is the correct terminology to use if you are working in this field. https://www.youtube.com/watch?v=P7_jFxTtJEo
33,338
Where in the ROC curve does it tell you what the threshold is?
Each (FPR, TPR) point on a ROC curve is associated with a threshold. However, the thresholds are not typically drawn on the curve itself. It is possible to reveal them, either by adding extra annotation to the curve or by coloring the curves. Here are some examples generated in R with pROC and ROCR, respectively. Here is the R code to generate these plots:
set.seed(42)
truth <- rbinom(30, 1, 0.5)
predictor <- rnorm(30) + truth + 1
library(pROC)
plot(roc(truth, predictor), print.thres = "local")
library(ROCR)
pred <- prediction(predictor, truth)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, colorize = TRUE)
33,339
Where in the ROC curve does it tell you what the threshold is?
It doesn't. If you care about TPR and FPR equally, the threshold is where the elbow is. If you care more about TPR than FPR, or the other way around, it's something else. If you care about optimizing some other metric, it may be something else again.
33,340
Where in the ROC curve does it tell you what the threshold is?
There is no universal truth to that question. There is always a tradeoff made with any threshold, and the ROC visualizes all possible thresholds for you to pick the best one. Is it better to err on one side or better to err on the other? That will heavily depend on the topic at hand. A screening test for a disease whose implication is not sending your kid to school for a week is less critical with a false positive than a diagnostic test whose implication is removing a limb or an organ. If that is not good enough, you may want to look into Youden's J (the Youden index) and the threshold that maximizes it.
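For example, a small sketch with pROC's coords(), which can return the threshold that maximises Youden's J (sensitivity + specificity - 1), on simulated data:
library(pROC)
set.seed(42)
truth <- rbinom(200, 1, 0.5)
score <- rnorm(200) + truth
r <- roc(truth, score)
coords(r, "best", best.method = "youden")   # threshold, specificity and sensitivity at maximum J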
33,341
Where in the ROC curve does it tell you what the threshold is?
Since you mentioned python, here is an example. It's pretty trivial to plot the threshold along with the ROC using sklearn.metrics.roc_curve.
from sklearn import metrics
import numpy as np
import matplotlib.pyplot as plt

fpr, tpr, th = metrics.roc_curve(true, score)
fig, ax = plt.subplots()
fig.set_size_inches(10, 10)
ax.plot(fpr, tpr, label='ROC')
# Plotting the threshold. The value of the first index can be above one,
# so I exclude that point when plotting.
ax.plot(fpr[1:], th[1:], label='Threshold')
ax.plot(fpr, fpr, c='k')
ax.xaxis.set_ticks(np.arange(0, 1.05, .1))
ax.yaxis.set_ticks(np.arange(0, 1.05, .1))
plt.grid(True)
auc = np.round(metrics.roc_auc_score(true, score), 3)
plt.title(f' AUC: {auc}', fontsize=25)
plt.xlabel('FPR', fontsize=20)
plt.ylabel('TPR', fontsize=20)
plt.legend(fontsize=20)

With the threshold plotted along with the ROC you can see that as the threshold approaches 1 (on the left side of the figure) TPR and FPR both approach 0, and as the threshold approaches 0 the TPR and FPR both approach 1.
33,342
Where in the ROC curve does it tell you what the threshold is?
The ROC curve does not "tell" you what threshold to use by itself. Setting the threshold is a business decision made by the users of the model based on the specifics of their application. The business people's job is to know how many false positives and false negatives they can take. The ROC curve tells them what the business implications of picking a given threshold are. If there is a threshold value that suits their needs, they will use the model. Determining the threshold involves quantifying several items that depend on the application. Usually false positives carry a cost while false negatives carry a risk. Their impact changes according to what the whole process is like, its goal, and where in the process the model is used (is it after a screening? is this the first screening? Is the purpose medical? Security? Financial?). All this information needs to be considered in conjunction with the ROC curve to reach a decision.
33,343
Where in the ROC curve does it tell you what the threshold is?
As others have pointed out, the ROC curve does not display threshold values unless it is additionally annotated. For the purpose of determining the best decision threshold, a classification plot may be more convenient. To determine the threshold, draw a horizontal line from the desired sensitivity (90% in the figure) to the sensitivity curve (blue), then draw a vertical line to the x-axis. The ordinate of the point at which the vertical line intersects the specificity curve (red) is the matching specificity, and the abscissa is the corresponding decision threshold. This is similar to choosing the best combination of (sensitivity, specificity) on the ROC graph, but in addition it directly provides the decision threshold.
33,344
How Well Does the Mean Describe a Multimodal Probability Distribution?
The mean means what it means
Whenever you compute a single real value that describes some aspect of a distribution, whether this is the mean, mode, standard deviation, kurtosis, a particular quantile, or whatever, that quantity measures what it measures and not what it doesn't measure. So the mean always measures the mean, irrespective of whether the distribution is unimodal, bimodal, trimodal, etc. Now, you ask whether the mean is good to "infer properties of these distributions". This raises the natural question: which properties? If the property of interest to you is the "centre" of the distribution, then obviously the mean will represent that property extremely well. On the other hand, if the property of interest to you is something else (e.g., the mode) then the mean might represent that very poorly. All of this is just another way of saying that real quantities computed from distributions generally represent only one aspect of the distribution, and there is a loss of information when transitioning from the distribution to a descriptive quantity. So if you want to use descriptive quantities to represent properties of the distribution, you need to be specific about what properties are of interest to you. There is no single quantity (other than the distribution itself) that will give you "the properties" of the distribution.
33,345
How Well Does the Mean Describe a Multimodal Probability Distribution?
The mean as a useful descriptor of the process creating the distribution. Often the mean is of interest because it relates to parameters of the underlying process that the distribution describes. This can also be true for skewed distributions, like the Poisson distribution, where the mean equals the rate parameter. On the other hand, in the case of a bimodal or multimodal distribution you are often dealing with a mixture of distributions, each with its own mean. In that case the mean of the mixture is not a very useful descriptor for understanding the distribution. The mean as useful in the application of the distribution. A case where the mean might still be useful, even when it has little to do with the mechanics of the process creating the distribution, is when the mean plays a role in the application. For instance, if your application involves a sum of variables, then the distribution of the sum is of interest (and this will be approximately normal, with a single mode centred around the mean). Example: say the distribution describes how much the individuals on a cruise ship eat, and you need to decide how much food to buy for the buffet; then it is the distribution of the sum that matters, not the bimodal distribution of individual appetites. An example highlighting the difference between the two cases in this answer is the pair of cost functions involved in optimization (one cost function for the fitting procedure, and one cost function as the actual optimization target). For instance, the mean might be desired for the application (e.g. it minimizes the squared-error loss function) but the median of a sample from the distribution can be a better estimator of the distribution shape: http://stats.stackexchange.com/a/492143 An analogy for the usefulness of the mean when it is the application that matters is the centre of mass in physics. If you want to describe the motion of an asteroid in the solar system, the exact shape of the asteroid is not very important and we make computations with the centre of mass (there are some effects that make the shape slightly important, e.g. tidal forces and radiation pressure). In the same way, in statistics the centre of probability mass (the mean) may not describe the shape of some probability distribution well, but it could be the only thing that matters in the application.
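A short simulation sketch of the buffet example (all quantities below are invented purely for illustration): individual consumption is bimodal, but the ship-wide total over many passengers is approximately normal and centred at n times the mixture mean.

set.seed(123)
n_passengers <- 500

# Hypothetical bimodal consumption per passenger: light eaters vs heavy eaters
eat <- function(n) ifelse(runif(n) < 0.5, rnorm(n, 400, 50), rnorm(n, 900, 80))

# Distribution of the total over many simulated cruises
totals <- replicate(2000, sum(eat(n_passengers)))

hist(totals, breaks = 40)               # unimodal and roughly normal
mean(totals)                            # close to n_passengers * 650
n_passengers * (0.5 * 400 + 0.5 * 900)  # 650 is the mean of the mixture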
33,346
How Well Does the Mean Describe a Multimodal Probability Distribution?
Your first plot shows a bimodal distribution that is close to symmetric, so it is quite likely that the mean would be close to or equal to the median. A mean or median is just a single number that summarizes some kind of information about the distribution. A single number will never tell you everything about the distribution, so it is hard to answer "how well" it works because the answer depends on what is important for you. It does not tell you anything about the multimodality, but neither would a median. With your second plot, it is hard to say if the distribution is "irregular", or you just used the wrong parameters for the kernel density estimator. In kernel density estimation, using a smaller bandwidth will always lead to wigglier shapes, while a larger bandwidth will smooth such shapes out. The same applies to the histogram: with large bins it would be smoother, while with small bins it would be a collection of peaks. The third plot shows a skewed distribution. Again, choosing between mean and median would depend on what kind of information you want to summarize. The "If mean is so sensitive, why use it in the first place?" thread discusses in detail why we use means and what the ideas behind that are. TL;DR you actually may want the mean to be influenced by the extreme values, so it summarizes the "whole distribution" better.
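To see the bandwidth effect concretely, here is a small base-R sketch on simulated data (the sample and the bandwidth values are arbitrary): the same genuinely unimodal sample can look "irregular" or smooth depending only on the smoothing parameters.

set.seed(7)
x <- rnorm(200)   # a hypothetical, genuinely unimodal sample

# Kernel density estimates with different bandwidths
plot(density(x, bw = 0.05), main = "Small bandwidth: many spurious bumps")
plot(density(x, bw = 0.50), main = "Larger bandwidth: a single smooth mode")

# The histogram analogue: bin width plays the role of the bandwidth
hist(x, breaks = 60)   # many narrow bins -> jagged
hist(x, breaks = 8)    # fewer, wider bins -> smoother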
33,347
How Well Does the Mean Describe a Multimodal Probability Distribution?
There are many good answers here. I’ll just add a general point: you can summarise a distribution of values with a single number, the mean (or even two, say with the standard deviation), but you always have to remember you are losing information in doing so. That’s why you look at histograms before summarising. So after looking at your histograms above, I would summarise the distributions in three different ways. In the first case: the distribution is multimodal, hence the modes are a better metric than the mean or median. Take both modes, assume a "cut" in the middle, and report the interquartile range for the "left" and "right" distributions (see the sketch after this answer). In the second case: this may still be "normal-like" if the number of observations is small. You can report the mean or median, and the interquartile range around one or the other. In the third case, the distribution is highly skewed. You could report the median and interquartile range. Even better, look at the histogram using a log scale on the y axis first to see if there’s a second maximum - there seems to be one. If so, report the median and interquartile range around the first and second peaks. You will need to choose an arbitrary value to say where the first distribution ends and the second starts. PS a better way for multimodal distributions would be to assume a parametric form for each component, and use maximum likelihood to estimate the mixture components. But that’s likely overkill in your case.
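Here is a base-R sketch of the "cut in the middle" summary for the first case (the sample and the cut point are made up; in practice you would choose the cut by eye from the histogram):

set.seed(11)
# Hypothetical bimodal sample
x <- c(rnorm(300, mean = 2, sd = 0.5), rnorm(300, mean = 6, sd = 0.7))

# Cut point between the two peaks (here simply the midpoint of the range)
cut_point <- mean(range(x))
left  <- x[x <  cut_point]
right <- x[x >= cut_point]

# Report a centre and an interquartile range for each component separately
c(left_median  = median(left),  left_IQR  = IQR(left),
  right_median = median(right), right_IQR = IQR(right))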
33,348
How Well Does the Mean Describe a Multimodal Probability Distribution?
A multimodal distribution can best be explained by a localised distribution, but I don’t think there is any explicit localised distribution. Due to this localised nature, spline regression or radial basis function regression may help to model your problem.
33,349
When sampling a population for surveys we can often limit our sample size to hundreds, but when doing a Monte Carlo simulation we need way more. Why?
I know that when doing surveys or polls, a sample size of mere hundreds or thousands is often sufficient, even for very large populations. The calculator you linked to only considers estimating a single proportion. The relationship between sample size, confidence level, and desired margin of error is simple for estimating a single proportion from iid binomial data. Political polls, at least, tend to be focused on just a few proportions (what % of voters favor candidate A over candidate B). If they assume random sampling and ignorable nonresponse, they can use such a calculator to find that, say, ~1000 respondents will get you a 95% MOE of $\pm$ 3 percentage points. Even if you wanted to do multiple comparisons corrections (though they often don't) for reporting a handful of proportions at once, ~1000 respondents is typically still good enough for reasonably narrow MOEs. So now what happens if we extend this notion to Monte Carlo simulations of a multi-parameter model? If you're studying a higher-dimensional space, you need more data if you want to honestly account for the uncertainty in studying many estimates at once. If your estimators have some complicated intractable distribution, sample-size calculations may be sketchy and you want to err on the side of more data. If you want to characterize the space in more detail than just a single (multi-dimensional) point estimate, you need more data. In fact... this is true of surveys as well. There are plenty of much larger surveys, such as the ones run by national statistical offices (such as the US Census Bureau). Some ask simple binary questions where the above sample-size calculator works, but they want bigger samples in order to account for asking many such questions. Other questions are quantitative measurements and might need a different approach to estimate the sample size. They also often need bigger sample sizes in order to get precise sub-group estimates (think small geographic regions or small demographic groups). So just a few hundred responses is not always enough. For instance, the American Community Survey collects roughly 3 million responses each year.
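For reference, the single-proportion margin-of-error calculation behind such calculators is essentially a one-liner (a sketch assuming simple random sampling, ignorable nonresponse, the usual normal approximation, and p = 0.5 as the worst case; the function name is just illustrative):

# 95% margin of error for a single proportion
moe <- function(n, p = 0.5, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  z * sqrt(p * (1 - p) / n)
}

moe(1000)    # ~0.031, i.e. roughly +/- 3 percentage points
moe(100)     # ~0.098, roughly +/- 10 points
moe(10000)   # ~0.010, roughly +/- 1 point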
33,350
When sampling a population for surveys we can often limit our sample size to hundreds, but when doing a Monte Carlo simulation we need way more. Why?
Surveys are relatively expensive, which leads to a different balance between cost and value. One run of a Monte Carlo simulation is extremely cheap, so you can keep adding runs until the marginal cost of further simulation exceeds its marginal value.
33,351
When sampling a population for surveys we can often limit our sample size to hundreds, but when doing a Monte Carlo simulation we need way more. Why?
Population size is misleading. Consider the following two experiments: Experiment 1: A coin lands on heads with unknown probability $p$. We flip the coin $100$ times and try to estimate $p$. Experiment 2: We sample uniformly from a bag containing a trillion balls, either red or blue. The probability of sampling a red ball is $p$. We sample $100$ balls and try to estimate $p$. These experiments might seem very different, but probabilistically they are extremely similar. Abstracting away the population size, we are trying to compute a single parameter value, $p$, and the value of $p$ has such a big effect on our observations that even with only $100$ samples the value of $p$ can be well-estimated. With more complicated models, there may be many parameters whose impact (individually and collectively) on our observations is very subtle, and so many samples are needed to tease out these subtleties. This line of thinking leads one into the intersection of probability theory and information theory. Essentially, a few observations of flipping a coin carry a lot of information about the parameter $p$, but for more complicated models the amount of information in each individual observation may be very small. It is this that governs how many samples you need, and not the total size of the available data.
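A quick simulation sketch of why the population size is irrelevant here (one million balls stand in for the trillion, and all numbers are arbitrary): the spread of the estimate of p is essentially the same in both experiments and matches sqrt(p(1-p)/n).

set.seed(99)
p <- 0.3
n <- 100

# Experiment 1: n coin flips with P(heads) = p
phat_coin <- replicate(5000, mean(rbinom(n, 1, p)))

# Experiment 2: n draws (without replacement) from a large red/blue population
population <- c(rep(1, 3e5), rep(0, 7e5))   # one million balls, 30% "red"
phat_balls <- replicate(5000, mean(sample(population, n)))

# Both estimators have essentially the same spread
sd(phat_coin); sd(phat_balls); sqrt(p * (1 - p) / n)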
33,352
When sampling a population for surveys we can often limit our sample size to hundreds, but when doing a Monte Carlo simulation we need way more. Why?
One important side note: This seems to suggest that any application of the Monte Carlo method can be concluded within several hundred/thousand of simulations as well. But I feel this cannot be true, as it would appear to defeat the purpose of the plethora of "more efficient alternatives" to Monte Carlo (such as Markov Chain Monte Carlo). Markov chain Monte Carlo is almost always not more efficient than a Monte Carlo approach with independent samples, but rather much, much less efficient. With most MCMC algorithms, each sample is positively correlated with the previous sample. The impact of this correlation can be quite extreme: it is not uncommon for one thousand MCMC samples to be worth only one independent sample (i.e., it takes about one thousand iterations for the chain to forget its previous state). In such a scenario, if you were content with the inference learned from 100 independent samples, you would need 100,000 samples from an MCMC algorithm with the heavy correlation mentioned above. MCMC is generally used when you cannot take independent draws from the target distribution.
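To make the "one thousand samples worth one" point concrete, here is a small sketch using a strongly autocorrelated AR(1) series as a stand-in for MCMC output (the correlation of 0.999 is invented for effect, and the effective sample size calculation assumes the coda package is available):

set.seed(1)
n   <- 1e5
rho <- 0.999

# A highly autocorrelated chain standing in for MCMC output
x <- numeric(n)
for (i in 2:n) x[i] <- rho * x[i - 1] + rnorm(1, sd = sqrt(1 - rho^2))

# Nominal sample size vs effective sample size (roughly n * (1 - rho) / (1 + rho))
length(x)                # 100,000 nominal draws
coda::effectiveSize(x)   # only a few dozen "independent-equivalent" draws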
33,353
When sampling a population for surveys we can often limit our sample size to hundreds, but when doing a Monte Carlo simulation we need way more. Why?
I have doubts that the statement "a surprisingly small number of surveys suffice to get a decent idea of the likely outcome of an election" actually holds well in practice. Well, maybe it depends on your definition of "decent"... Caveat: What I know about polling is mostly from reading FiveThirtyEight. If you want to learn more about the accuracy of polls, in the US at least, take a look at some FiveThirtyEight posts tagged "polling accuracy". To start with, you are considering a very simple problem: estimating a proportion by sampling a homogeneous population. Voting intent, on the other hand, depends on a number of factors (personal characteristics, the state of the economy, other current questions/concerns relevant to voters). So polling companies don't survey respondents (completely) at random: they want samples that are representative of the population (in a county, a state or a country). So surveys are designed by using census data to determine how to construct a representative sample efficiently. The sample itself can be biased (it may not be efficient to over-sample the most common sub-group of voters) but the bias can be corrected to calculate an unbiased estimate of voting preferences. Another point to consider is that when an election is close, it's probably necessary to have a smaller margin of error and therefore collect a larger sample of respondents. The second part of your question about Markov chain Monte Carlo methods has a similar issue of making a simplistic generalization. No one wants to waste their time and computational resources on running their MCMC sampling for longer than necessary to get convergence. That's why it's so important to have tools to diagnose convergence issues. Some chains may need 1,000 samples; others 1,000,000.
33,354
Check if a character string is not random
the chance of finding this random seems low to me whereas finding BABDCABCDACDBACD seems less random. Why would that be? If the overall proportion of letters A...D is equal to 0.25 for each letter, and each letter is independent of the others, then both words are exactly equally probable. If the distributions of letters differ, then of course the probabilities of generating the two words might differ. You can try to find "low complexity" words, for example words with an especially high proportion of one letter (you could use the Shannon information as suggested in the other response, and in biological sequence analysis there are many other approaches), but there is no test for "randomness": without further assumptions or knowledge about what you are actually analyzing, the term "randomness" makes no sense.
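A one-line calculation makes the first point explicit (assuming independent letters, each of A-D with probability 0.25): every specific 16-letter word, however "patterned" it looks, has exactly the same probability.

# Probability of any one specific 16-letter word under iid uniform letters A-D
(1 / 4)^16   # about 2.3e-10, the same for every specific word

# Letter frequencies in the example word: each letter appears exactly 1/4 of the time
word <- strsplit("BABDCABCDACDBACD", "")[[1]]
table(word) / length(word)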
33,355
Check if a character string is not random
You could try the Shannon information (entropy): $$ H = -\sum_{i} {P_{i}\log_{2}(P_{i})} $$ where the sum runs over the distinct letters in the word, $P_{i} = \frac{c_{i}}{n}$, $c_{i}$ is the count of letter $i$ in the word, and $n = |{\rm word}|$. For the first word you have $H \approx 0.35$. For the second word you have $H = 2$. If the entropy is high, you could think of the word as more random relative to another word with lower entropy.
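A small base-R sketch of this calculation (the all-A word at the end is just an extra illustration of a minimum-entropy composition, not the word from the question):

# Shannon entropy (in bits) of the letter frequencies in a word
entropy <- function(word) {
  counts <- table(strsplit(word, "")[[1]])
  p <- counts / sum(counts)
  -sum(p * log2(p))
}

entropy("BABDCABCDACDBACD")   # 2 bits: four letters, each with frequency 1/4
entropy("AAAAAAAAAAAAAAAA")   # 0 bits: the least "random" possible composition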
33,356
Check if a character string is not random
Other answers here have focused on the overall occurrence of different letters in the sequence, which may be one aspect of the "randomness" expected. However, another aspect of interest is the apparent randomness in the order of the letters in the sequence. At minimum, I would think that "randomness" entails the exchangeability of the vector of letters, which can be tested using a "runs test". The runs test counts the number of "runs" in the sequence and compares the total number of runs to its null distribution under the null hypothesis of exchangeability, for a vector with the same letters. The exact definition of what constitutes a "run" depends on the particular test (see e.g., a similar answer here), but in this case, with nominal categories, the natural definition is to count any consecutive sequence consisting of only one letter as a single "run". For example, your sequence BABD-CABC-DACD-BACD looks prima facie non-random to me (no letter appears with itself, which is probably unusual for a sequence this long).$^\dagger$ To test this formally, we can perform a runs test for exchangeability. In this sequence we have $n = 16$ letters (four of each letter) and there are $r = 16$ runs, each consisting of one single instance of a letter. The observed number of runs can be compared to its null distribution under the hypothesis of exchangeability. We can do this via simulation, which yields a simulated null distribution and a p-value for the test. The result for this sequence of characters is shown in the graph below. For this sequence, the p-value for the runs test (under the null hypothesis of exchangeability) is $p=0.0537$. This is significant at the 10% significance level, but not at the 5% significance level. There is some evidence to suggest a non-exchangeable series (i.e., non-random order), but the evidence is not particularly strong. With a longer observed string, the runs test would have greater power to distinguish an exchangeable string from a non-exchangeable string. (As you can see, my initial prima facie judgment that this string is non-random may be wrong - the p-value is not actually as low as I expected it to be.) Finally, it is important to note that this test only looks at the randomness of the order of the letters in the string - it takes the number of letters of each type as a fixed input. This test will detect non-randomness in the sense of non-exchangeability of the letters in the string, but it will not test "randomness" in the sense of overall probabilities of different letters. If the latter is also part of the specified meaning of "randomness" then this runs test could be augmented with another test that looks at the overall counts of the letters, and compares this to a hypothesised null distribution. 
R code: The above plot and p-value were generated using the following R code:

#Define the character string vector (as factors)
x <- factor(c(2,1,2,4, 3,1,2,3, 4,1,3,4, 2,1,3,4), labels = c('A', 'B', 'C', 'D'))

#Define a function to calculate the runs for an input vector
RUNS <- function(x) {
  n <- length(x)
  R <- 1
  for (i in 2:n) { R <- R + (x[i] != x[i-1]) }
  R }

#Simulate the runs statistic for k permutations
k <- 10^5
set.seed(12345)
RR <- rep(0, k)
for (i in 1:k) {
  x_perm <- sample(x, length(x), replace = FALSE)
  RR[i]  <- RUNS(x_perm) }

#Generate the frequency table for the simulated runs
FREQS <- as.data.frame(table(RR))

#Calculate the p-value of the runs test
R      <- RUNS(x)
R_FREQ <- FREQS$Freq[match(R, FREQS$RR)]
p      <- sum(FREQS$Freq*(FREQS$Freq <= R_FREQ))/k

#Plot estimated distribution of runs with test
library(ggplot2)
ggplot(data = FREQS, aes(x = RR, y = Freq/k, fill = (Freq <= R_FREQ))) +
  geom_bar(stat = 'identity') +
  geom_vline(xintercept = match(R, FREQS$RR)) +
  scale_fill_manual(values = c('Grey', 'Red')) +
  theme(legend.position = 'none',
        plot.title = element_text(hjust = 0.5, face = 'bold'),
        plot.subtitle = element_text(hjust = 0.5),
        axis.title.y = element_text(margin = margin(t = 0, r = 10, b = 0, l = 0))) +
  labs(title = 'Runs Test - Plot of Distribution of Runs under Exchangeability',
       subtitle = paste0('(Observed runs is black line, p-value = ', p, ')'),
       x = 'Runs', y = 'Estimated Probability')

$^\dagger$ I have broken the sequence up with dashes solely to make it easier to read; the dashes have no significance to the analysis.
33,357
Check if a character string is not random
Assuming the string of letters is long enough, you can apply Randomness tests on the data. One set of such tests is called the diehard tests: The diehard tests are a battery of statistical tests for measuring the quality of a random number generator. They were developed by George Marsaglia over several years and first published in 1995 on a CD-ROM of random numbers. They involve a, perhaps arbitrary, set of tests such as:

Birthday spacings
Overlapping permutations
Ranks of matrices
Monkey tests
Count the 1s
Parking lot test
Minimum distance test
Random spheres test
The squeeze test
Overlapping sums test
Runs test
The craps test

A good sequence of random data should pass these tests. However, passing these tests isn't sufficient to prove the numbers don't actually encode a real signal. They could be the output from a high-quality encryption routine.
33,358
Is cosine similarity a classification or a clustering technique?
No. Cosine similarity can be computed between arbitrary vectors. It is a similarity measure (which can be converted to a distance measure, and then used in any distance-based classifier, such as nearest-neighbor classification): $$\cos \varphi = \frac{a\cdot b}{\|a\| \, \|b\|} $$ where $a$ and $b$ are whatever vectors you want to compare. If you want to do NN classification, you would use $a$ as your new document and $b$ as your known sample documents, then classify the new document based on the most similar sample(s). Alternatively, you could compute a centroid for a whole class, but that would assume that the class is very consistent in itself, and that the centroid is a reasonable estimator for the cosine distances (I'm not sure about this!). NN classification is much easier for you, and less dependent on your corpus being very consistent in itself. Say you have the topic "sports". Some documents will talk about soccer, others about basketball, others about American football. The centroid will probably be quite meaningless. Keeping a number of good sample documents for NN classification will likely work much better. This happens commonly when one class consists of multiple clusters. It's an often misunderstood thing: classes do not necessarily equal clusters. Multiple classes may form one big cluster when they are hard to discern in the data, and on the other hand a class may well have multiple clusters if it is not very uniform. Clustering can work well for finding good sample documents from your training data, but there are other, more appropriate methods. In a supervised context, supervised methods will usually perform better than unsupervised ones.
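A minimal sketch of cosine-based nearest-neighbour classification on made-up term-count vectors (the document names, words and counts are all invented for illustration):

# Cosine similarity between two numeric vectors
cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

# Hypothetical term counts (columns: ball, goal, hoop, recipe) for labelled sample documents
samples <- rbind(soccer     = c(4, 3, 0, 0),
                 basketball = c(3, 0, 4, 0),
                 cooking    = c(0, 0, 0, 5))

# A new, unlabelled document
new_doc <- c(2, 2, 0, 0)

# 1-nearest-neighbour classification: assign the label of the most similar sample
sims <- apply(samples, 1, cosine_sim, b = new_doc)
sims
names(which.max(sims))   # "soccer"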
33,359
Is cosine similarity a classification or a clustering technique?
I think you have not yet understood the difference between clustering and classification. Document classification (or supervised learning) requires a set of documents and class information for each document (example: the topic of the document). The goal of classification is to build a model which predicts the class for documents where the class (in this example, the topic) is not known. When models are applied to documents where the class is known, they can be evaluated by comparing the predicted with the true class (hence "supervised"). The data used for training but not for evaluating the model is called training data. Document clustering (or unsupervised learning) requires a set of documents but no class information. The goal is to find groups / clusters in the data, so that documents which are similar according to a specified distance function are in one cluster (example: documents which contain roughly the same keywords), and documents which are not similar according to that distance function are in different clusters. The resulting clusters cannot be evaluated like a classification model, because the true clusters are not known (hence "unsupervised"). Hence there is no such thing as training data; you simply use all the data to build the clusters. See also: Classification vs clustering. Now the connection between both techniques, and imho the source of your confusion: by defining the clusters generated by document clustering as classes, one can train a classification model on the data. Example: if you cluster documents by words, you may find that the resulting clusters are indeed describing topics; now you can build a classification model for that automatically derived class. Finally, as put by Anony-Mousse et al., the cosine similarity can be used both for clustering (by defining 1 - cosine as the distance function, which may not be a metric; maybe you want to use the loosely related Jaccard distance instead) and for classification (by using it in e.g. k-nearest-neighbor).
33,360
Is cosine similarity a classification or a clustering technique?
A cosine similarity function returns the cosine of the angle between vectors. A cosine is a cosine, and should not depend upon the data. However, how we decide to represent an object, like a document, as a vector may well depend upon the data. Often we represent a document as a vector where each dimension corresponds to a word. If the word does not appear, we assign a value of 0 to that dimension. If the word does appear, the value corresponds to the number of times that word appears in the document, normalized by how often that word appears across all the documents in our data. This is the general idea behind TF/IDF. Since different sets of documents will have different distributions of words, the TF/IDF vector representations of documents depend upon the particular document set you are working with. Many classification and clustering methods depend upon some measure of similarity or distance between objects. If they do, then they can use cosine similarity.
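A small sketch of one common TF-IDF variant on a made-up three-document corpus (the words, documents and exact weighting formula are illustrative; real implementations differ in the details):

docs <- list(d1 = c("cat", "sat", "mat", "cat"),
             d2 = c("dog", "sat", "log"),
             d3 = c("cat", "dog", "park"))

vocab <- sort(unique(unlist(docs)))

# Term frequency: word counts per document, normalised by document length
tf <- t(sapply(docs, function(d) table(factor(d, levels = vocab)) / length(d)))

# Inverse document frequency: log(number of documents / documents containing the term)
idf <- log(length(docs) / colSums(tf > 0))

# TF-IDF matrix: rows are documents, columns are terms
tfidf <- sweep(tf, 2, idf, `*`)
round(tfidf, 3)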
33,361
Is cosine similarity a classification or a clustering technique?
Similarity measures are not machine learning algorithms per se, but they play an integral part. After features are extracted from the raw data, the classes are selected or the clusters are defined implicitly by the properties of the similarity measure. It might help to consider the Euclidean distance instead of cosine similarity. Is the Euclidean distance a learning algorithm? No, but you can use it to define one.
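For instance, here is a hedged R sketch (data and labels invented) of one way to "use the Euclidean distance to define a learning algorithm": a nearest-centroid classifier.

    set.seed(1)
    # Invented 2-D training data with two classes
    x <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
               matrix(rnorm(20, mean = 3), ncol = 2))
    y <- rep(c("A", "B"), each = 10)

    # "Training" = storing one centroid per class
    centroids <- rbind(A = colMeans(x[y == "A", ]),
                       B = colMeans(x[y == "B", ]))

    # "Prediction" = assigning a new point to the class whose centroid is closest
    predict_nc <- function(new_point) {
      d <- apply(centroids, 1, function(m) sqrt(sum((m - new_point)^2)))
      names(which.min(d))
    }
    predict_nc(c(2.5, 2.5))   # almost certainly "B"

Swap the Euclidean distance for 1 - cosine and the same template yields a different classifier.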
33,362
Why not validate on the entire training set?
@jpl has provided a good explanation of the ideas here. If what you want is just a reference, I would use a solid, basic textbook. Some well-regarded books that cover the idea of cross-validation and why it's important might be: Harrell, F. (2010). Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. Springer. Hastie, T., Tibshirani, R., & Friedman, J. (2011). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer. and/or James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to Statistical Learning: with Applications in R. Springer.
33,363
Why not validate on the entire training set?
The argument is simple: when you build a model, you want this model to be efficient on NEW, UNSEEN data, right? Otherwise you don't need a model. Then, your evaluation metrics, let's say precision and recall, must give an idea of how well your model will behave on unseen data. Now, if you evaluate on the same data that you used for training, your precision and recall will be biased (almost certainly higher than they should be), because your model has already seen the data. Suppose that you're a teacher writing an exam for some students. If you want to evaluate their skills, will you give them exercises that they have already seen, and that they still have on their desks, or new exercises, inspired by what they learned, but different from them? That's why you always need to keep a totally unseen test set for evaluation. (You can also use cross-validation, but that's a different story.)
33,364
Why not validate on the entire training set?
If you validate on the entire training set, your ideal model is the one that just memorizes the data. Nothing can beat it. You say that "realistically this is not a model that just memorizes the data". But why do you prefer other models? This is the point of my reduction to absurdity of validating on all the data: the main reason you don't like the model that memorizes everything it has seen is that it doesn't generalize at all. What should it do given an input that it hasn't seen? So you want a model that works in general rather than one that just works on what it has seen. The way that you encode that desire for working well on unseen data is to set the validation data to be exactly that unseen data. However, if you know that your training examples completely represent the true distribution, then go ahead and validate using them! Also, contrary to the claims in your final paragraph, the quotation you cited is not "plainly wrong" and that "particular evaluation strategy" does have to do "with overfitting models". Overfitting means fitting (the noise of) the provided training examples rather than the statistical relationships of general data. By validating using seen data, you will prefer models that fit noise rather than those that work well using unseen data.
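To make the reduction to absurdity numerical, here is a hedged R sketch with simulated data: a 1-nearest-neighbour "model" effectively memorizes the training set, so it looks perfect when validated on the data it has seen, even though the labels are pure noise and it is useless on new data.

    library(class)    # ships with standard R installations; provides knn()
    set.seed(42)
    x_train <- matrix(rnorm(200 * 5), ncol = 5)
    y_train <- factor(sample(c("yes", "no"), 200, replace = TRUE))  # labels unrelated to x
    x_test  <- matrix(rnorm(200 * 5), ncol = 5)
    y_test  <- factor(sample(c("yes", "no"), 200, replace = TRUE))

    # "Validating" on the training set: the memorizer scores 100%
    mean(knn(x_train, x_train, y_train, k = 1) == y_train)   # exactly 1
    # Validating on unseen data: back to coin-flipping
    mean(knn(x_train, x_test,  y_train, k = 1) == y_test)    # about 0.5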
33,365
Why not validate on the entire training set?
Here's my simple explanation. When we model reality we want our models to be able not only to explain existing facts but also to predict new facts. So, the out-of-sample testing is there to emulate this objective. We estimate (train) the model on some data (the training set), then try to predict outside the training set and compare the predictions with the holdout sample. Obviously, this is only an exercise in prediction, not real prediction, because the holdout sample was in fact already observed. The real test in prediction happens only when you use the model on data which was not yet observed. For instance, say you developed a machine learning program for advertising. Only when you start using it in practice and observe its performance will you know for sure whether it works or not. However, despite the limitations of the training/holdout approach, it's still informative. If your model only works in-sample, it's probably not a good model at all. So, this kind of testing helps weed out bad models. Another thing to remember: let's say you conducted training/holdout sample validation of the model. However, when you want to use the model you will probably estimate it on the entire dataset. In that case, how applicable are the results of the out-of-sample validation of the model which was estimated on the training sample?
33,366
Why not validate on the entire training set?
Others have answered your earlier paragraphs, so let me address your last one. Your point's validity depends on the interpretation of "evaluation". If it's used in the sense of a final run on unseen data to give a sense of how well your chosen model might be expected to work in the future, your point is correct. If "evaluation" is used more in the sense of what I'd call a "test" set -- that is, to evaluate the results of training multiple models in order to choose one -- then evaluating on the training data will lead to overfitting.
33,367
Why not validate on the entire training set?
All the other answers (especially those related to over-fitting) are very good, but I would just add one thing. The very nature of learning algorithms is that training them ensures they learn "something" common about the data they are exposed to. However, what we cannot be directly sure of is exactly which features of the training data they end up actually learning. As an example, with image recognition it's very hard to be sure whether a trained neural network has learned what a face looks like, or something else that's inherent in the images. An ANN could have just memorized what the shirts or shoulders or hair look like, for example. That said, using a separate set of testing data (unseen during training) is one way to increase the confidence that you have a model that can be counted on to perform as expected with real-world/unseen data. Increasing the number of samples and the feature variability also helps. What is meant by feature variability is that you want to train with data in which each sample appears with as many variations as possible while still counting as that sample. For example, with face data again, you want to show each particular face on as many different backgrounds as possible, and with as many variations in clothing, lighting, hair color, camera angles etc. as possible. This will help ensure that when the ANN says "face" it's really a face, and not a blank wall in the background that triggered the response.
33,368
Why not validate on the entire training set?
Hastie et al. have a good example in the context of cross-validation that I think also applies here. Consider prediction with an extremely high number of predictors on data where the predictors and outcomes are all independently distributed. For the sake of argument suppose that everything is Bernoulli with p = 0.5. If you have enough variables then you'll have a few predictors that let you predict the outcomes perfectly. But, on new data, there's no way that you're going to get perfect accuracy. This isn't exactly the same as your case but it does show an example where your method can really lead you astray.
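Here is a hedged R sketch of that example (the sample size and the number of predictors are arbitrary choices): with 5000 independent coin-flip predictors and only 10 observations, some predictor will almost surely match the (equally random) outcomes exactly in-sample, yet do no better than chance on fresh data.

    set.seed(123)
    n <- 10; p <- 5000
    x <- matrix(rbinom(n * p, 1, 0.5), nrow = n)   # predictors: independent coin flips
    y <- rbinom(n, 1, 0.5)                         # outcome: independent of everything

    train_acc <- apply(x, 2, function(col) mean(col == y))
    best <- which.max(train_acc)
    train_acc[best]                                # almost certainly 1: "perfect" in-sample

    # The same "perfect" predictor evaluated on new data from the same process
    x_new <- matrix(rbinom(n * p, 1, 0.5), nrow = n)
    y_new <- rbinom(n, 1, 0.5)
    mean(x_new[, best] == y_new)                   # back to roughly 0.5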
33,369
Is the estimated value in an OLS regression "better" than the original value
You wouldn't normally call the observed value an 'estimated value'. However, in spite of that, the observed value is nevertheless technically an estimate of the mean at its particular $x$, and treating it as an estimate will actually tell us the sense in which OLS is better at estimating the mean there. Generally speaking, regression is used in the situation where if you were to take another sample with the same $x$'s, you would not get the same values for the $y$'s. In ordinary regression, we treat the $x_i$ as fixed/known quantities and the responses, the $Y_i$, as random variables (with observed values denoted by $y_i$). Using a more common notation, we write $$Y_i = \alpha + \beta x_i + \varepsilon_i$$ The noise term, $\varepsilon_i$, is important because the observations don't lie right on the population line (if they did there'd be no need for regression; any two points would give you the population line); the model for $Y$ must account for the values it takes, and in this case, the distribution of the random error accounts for the deviations from the ('true') line. The estimate of the mean at point $x_i$ for ordinary linear regression has variance $$\Big(\frac{1}{n} + \frac{(x_i-\bar{x})^2}{\sum(x_i-\bar{x})^2}\Big)\,\sigma^2$$ while the estimate based on the observed value has variance $\sigma^2$. It's possible to show that for $n$ at least 3, $\,\frac{1}{n} + \frac{(x_i-\bar{x})^2}{\sum(x_i-\bar{x})^2}$ is no more than 1 (but it may be - and in practice usually is - much smaller). [Further, when you estimate the fit at $x_i$ by $y_i$ you're also left with the issue of how to estimate $\sigma$.] But rather than pursue the formal demonstration, ponder an example, which I hope might be more motivating. Let $v_f = \frac{1}{n} + \frac{(x_i-\bar{x})^2}{\sum(x_i-\bar{x})^2}$, the factor by which the observation variance is multiplied to get the variance of the fit at $x_i$. However, let's work on the scale of relative standard error rather than relative variance (that is, let's look at the square root of this quantity); confidence intervals for the mean at a particular $x_i$ will be a multiple of $\sqrt{v_f}$. So to the example. Let's take the cars data in R; this is 50 observations collected in the 1920s on the speed of cars and the distances taken to stop. So how do the values of $\sqrt{v_f}$ compare with 1? In the corresponding plot (not reproduced here), the blue circles show the multiples of $\sigma$ for your estimate, while the black ones show it for the usual least squares estimate. As you see, using the information from all the data makes our uncertainty about where the population mean lies substantially smaller - at least in this case, and of course given that the linear model is correct. As a result, if we plot (say) a 95% confidence interval for the mean at each value of $x$ (including at places other than an observation), the limits of the interval at the various $x$'s are typically small compared to the variation in the data. This is the benefit of 'borrowing' information from data values other than the present one. Indeed, we can use the information from other values - via the linear relationship - to get good estimates of the value at places where we don't even have data. Consider that there's no data in our example at x=5, 6 or 21.
With the suggested estimator, we have no information there - but with the regression line we can not only estimate the mean at those points (and at 5.5 and 12.8 and so on), we can give an interval for it -- though, again, one that relies on the suitability of the assumptions of linearity (and constant variance of the $Y$s, and independence).
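For readers who want to recreate the computations behind the plots described above, here is a short R sketch; it only assumes R's built-in cars data set and the standard lm/predict machinery.

    data(cars)
    fit <- lm(dist ~ speed, data = cars)

    # v_f: the factor multiplying sigma^2 in the variance of the fitted mean at each x;
    # the "use the observation itself" estimator corresponds to a factor of 1
    x   <- cars$speed
    v_f <- 1 / length(x) + (x - mean(x))^2 / sum((x - mean(x))^2)
    range(sqrt(v_f))                     # well below 1 for every observation here

    # 95% confidence intervals for the mean stopping distance, including at speeds
    # where there is no data at all (5, 6 and 21 mph)
    predict(fit, newdata = data.frame(speed = c(5, 6, 21)), interval = "confidence")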
33,370
Is the estimated value in an OLS regression "better" than the original value
First, the regression equation is: \begin{equation} Y_i = \alpha + \beta X_i + \epsilon_i \end{equation} There is an error term, $\epsilon$. As it turns out, this error term is critical to answering your question. What, exactly, is the error term in your application? One common interpretation of it is "the influence of everything, other than $X$, which affects $Y$." If that is your interpretation of your error term, then $Y_i$ is the best measure of what $Y_i$ really is. On the other hand, in some rare cases we interpret the error term as being exclusively measurement error---the error induced by operator error in using a scientific instrument, or the error coming from the naturally limited precision of an instrument. In that case, the "real" value of $Y_i$ is $\alpha+\beta X_i$, and you should use the OLS prediction of $Y_i$ instead of the actual value of $Y_i$ if $V(\epsilon_i)>V(\hat{\alpha}_{OLS}+\hat{\beta}_{OLS} X_i)$---that is, if the variance of the error which comes from replacing $\alpha$ and $\beta$ with their OLS estimators is smaller than the variance of the measurement error.
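As a hedged illustration of that variance comparison (using R's built-in cars data purely for convenience, and estimated rather than known variances), the variance of the OLS fitted value at each $X_i$ can be read off from predict() and compared with the residual variance:

    fit <- lm(dist ~ speed, data = cars)
    var_fit   <- predict(fit, se.fit = TRUE)$se.fit^2   # Var(alpha_hat + beta_hat * X_i), estimated
    var_error <- summary(fit)$sigma^2                   # estimate of V(epsilon)
    mean(var_fit < var_error)                           # fraction of points where the fitted value is less variable

In the measurement-error interpretation, var_error would instead be the measurement-error variance, known or estimated separately from the instrument.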
33,371
Is the estimated value in an OLS regression "better" than the original value
The original value is not an estimate (except for the fact that it may have measurement error): It is the value of Y for a specific subject (e.g. person or whatever). The predicted value from the equation is an estimate: It is an estimate of the expected value of Y at a given value of X. Let's make this concrete: Let's say Y is weight and X is height. Let's say you measure and weigh a bunch of people. Let's say Jill is 5'0 and 105 pounds. That is her height and weight. The equation will give you a different predicted value of weight for a person who is 5'0". That is not the predicted value for Jill - you don't need to predict or estimate her weight, you know it to the precision of the scale. It is the predicted value of some "typical 5'0" person".
33,372
Is the estimated value in an OLS regression "better" than the original value
The equation should be $\operatorname{E}(Y)=\alpha+\beta x$; that is, the expected value of $Y$ at the given value of $x$. So, if your model's right & you make enough observations of $Y$ at that value of $x$, it tells you what the average value of $Y$ will be. In the long run you'll do better making predictions using that average than the value you observed.
33,373
Is the estimated value in an OLS regression "better" than the original value
OLS is typically not motivated by comparing the estimated response, $\hat{Y_i}$, to the observed response $Y_i$. Instead, given a new set of values for the predictor, $X_{new}$, the OLS model predicts what the dependent variable would be, $\hat{Y}_{new}$, in a typical case. The point is that $\hat{Y}_i$ is typically not considered "better" than $Y_i$, but rather a more accurate reflection of what you expect $Y$ to be at a particular value of $X$. However, there are situations when you may think $\hat{Y}_i$ more accurately reflects the truth than $Y_i$ (perhaps for an outlier arising from a malfunction in your data collection). This would be highly dependent on the details of your data.
33,374
Is the estimated value in an OLS regression "better" than the original value
Does this help? (It was what first came to my mind on reading the question.) In statistics, the Gauss–Markov theorem, named after Carl Friedrich Gauss and Andrey Markov, states that in a linear regression model in which the errors have expectation zero and are uncorrelated and have equal variances, the best linear unbiased estimator (BLUE) of the coefficients is given by the ordinary least squares (OLS) estimator. Here "best" means giving the lowest variance of the estimate, as compared to other unbiased, linear estimates. The errors don't need to be normal, nor independent and identically distributed (only uncorrelated and homoscedastic). The hypothesis that the estimator be unbiased cannot be dropped, since otherwise estimators better than OLS exist. http://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem
33,375
What kind of distribution is this? $\text{Cov}(X, Y)=0$ but $\text{Corr}(X, Y)=1$
Following a clarification by the OP, it appears that a) we assume that the two variables follow jointly a bivariate normal and b) our interest is in the conditional distribution, which is then $$Y_n\mid X_n=x \ \sim\ \mathcal{N}\left(\mu_y+\frac{\sigma_y}{\sigma_x}\rho_n( x - \mu_x),\, (1-\rho_n^2)\sigma_y^2\right)$$ Then we see that as $n \to \infty$, we have $\rho_n \to 1$, and the variance of the conditional distribution goes to zero. Intuitively, if correlation goes to unity, "knowing $x$" is enough to "know $y$" also. But nowhere in the above do we get that $\text{Cov}(Y_n, X_n)$ is zero. Even in the limit the covariance does not vanish: $\text{Cov}(Y_n, X_n) \to \sigma_y \sigma_x$. Note that the conditional covariance (and then also the conditional correlation) is always zero, because $$\text{Cov}(Y_n, X_n \mid X_n =x) = E(Y_nX_n\mid X_n =x) - E(Y_n\mid X_n =x) E(X_n\mid X_n =x)$$ $$=xE(Y_n\mid X_n =x) - xE(Y_n\mid X_n =x) =0$$ This happens because, by conditioning on $X_n = x$, we have turned one of the random variables into a constant, and constants do not co-vary with anything.
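A quick simulation in R (a hedged sketch, with arbitrary parameter values) shows both facts at once: with $\rho$ very close to 1 the conditional variance of $Y$ given $X$ is nearly zero, while the unconditional covariance stays near $\sigma_x\sigma_y$.

    library(MASS)
    set.seed(1)
    sigma_x <- 2; sigma_y <- 3; rho <- 0.999
    Sigma <- matrix(c(sigma_x^2,               rho * sigma_x * sigma_y,
                      rho * sigma_x * sigma_y, sigma_y^2), nrow = 2)
    xy <- mvrnorm(1e5, mu = c(0, 0), Sigma = Sigma)

    cov(xy[, 1], xy[, 2])                   # close to sigma_x * sigma_y = 6
    (1 - rho^2) * sigma_y^2                 # theoretical conditional variance, about 0.018
    summary(lm(xy[, 2] ~ xy[, 1]))$sigma^2  # empirical residual variance, also about 0.018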
33,376
What kind of distribution is this? $\text{Cov}(X, Y)=0$ but $\text{Corr}(X, Y)=1$
Since the covariance depends on the scale of $X$ and $Y$ and the correlation does not (it is rescaled to $[-1, 1]$), it is possible. For example, if the variance decreases towards zero: if $X=Y$ and $\sigma_x^2$ is the variance of $X$, then $\lim_{\sigma_x^2 \to 0} \operatorname{cov}(X, Y) = 0$ and $\lim_{\sigma_x^2 \to 0} \operatorname{cor}(X, Y) = 1$. Note: when $\sigma_x^2 = 0$ the correlation is strictly undefined because its denominator would be equal to 0.
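A tiny R sketch of this limit (with invented scales): shrinking the spread of $X = Y$ drives the covariance toward 0 while the correlation stays exactly 1 for every positive variance.

    set.seed(1)
    z <- rnorm(1000)
    for (s in c(1, 0.1, 0.01, 0.001)) {
      x <- s * z
      y <- x
      cat("sd =", s, " cov =", cov(x, y), " cor =", cor(x, y), "\n")
    }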
33,377
What kind of distribution is this? $\text{Cov}(X, Y)=0$ but $\text{Corr}(X, Y)=1$
As far as I can see (perhaps outside of some special circumstances, but you don't mention any), it's not possible. The correlation is the covariance divided by the product of the two standard deviations, so if the covariance is zero, the correlation is either zero (when both standard deviations are non-zero) or undefined (when at least one standard deviation is 0). It should not be 1 when the covariance is 0. I expect you have either made some error in your analysis or your description is insufficiently clear to discern the situation correctly.
33,378
What kind of distribution is this? $\text{Cov}(X, Y)=0$ but $\text{Corr}(X, Y)=1$
You are probably having difficulty because you are visualizing the data as being Gaussian. It's possible that all data points represent the same point (though that would be redundant) and that you have two variables with different names (aliases of each other) comprising the data. This would lead to zero covariance and a correlation of 1 because, fundamentally, covariance represents how spread out the data is across the feature space, while correlation represents how much one variable depends on another, or the degree of influence they have on each other. If the data is not spread out at all, then the covariance must be zero. Note, however, that the best thing you can do with such a dataset is simply to predict all points as having the same output, which is most likely going to give a high bias.
33,379
if covariance is -150, what is the type of relationship between two variables?
To add to Łukasz Deryło's answer: as he writes, a covariance of -150 implies a negative relationship. Whether this is a strong relationship or a weak one depends on the variables' variances. Below I plot examples for a strong relationship (each separate variable has a variance of 200, so the covariance is large, in absolute terms, compared to the variance), and for a weak relationship (each variance is 2000, so the covariance is small, in absolute terms, compared to the variance). (The two sets of scatterplots - strong relationship with variance 200, weak relationship with variance 2000 - are not reproduced here.) R code:

    library(MASS)
    nn <- 100
    epsilon <- 0.1
    variance <- 2000   # weak relationship; use 200 for the strong one
    opar <- par(mfrow = c(2, 2))
    for (ii in 1:4) {
      while (TRUE) {
        dataset <- mvrnorm(n = nn, mu = c(0, 0),
                           Sigma = rbind(c(variance, -150), c(-150, variance)))
        if (abs(cov(dataset)[1, 2] - (-150)) < epsilon) break
      }
      plot(dataset, pch = 19, xlab = "", ylab = "",
           main = paste("Covariance:", cov(dataset)[1, 2]))
    }
    par(opar)

EDIT: Anscombe's quartet. As whuber notes, the covariance in itself doesn't really tell us a lot about a dataset. To illustrate, I'll take Anscombe's quartet and modify it slightly. Note how very different scatterplots can all have the same (rounded) covariance of -150:

    anscombe.mod <- anscombe
    anscombe.mod[, c("x1","x2","x3","x4")] <-  sqrt(150/5.5) * anscombe[, c("x1","x2","x3","x4")]
    anscombe.mod[, c("y1","y2","y3","y4")] <- -sqrt(150/5.5) * anscombe[, c("y1","y2","y3","y4")]
    opar <- par(mfrow = c(2, 2))
    with(anscombe.mod, plot(x1, y1, pch = 19, main = paste("Covariance:", round(cov(x1, y1), 0))))
    with(anscombe.mod, plot(x2, y2, pch = 19, main = paste("Covariance:", round(cov(x2, y2), 0))))
    with(anscombe.mod, plot(x3, y3, pch = 19, main = paste("Covariance:", round(cov(x3, y3), 0))))
    with(anscombe.mod, plot(x4, y4, pch = 19, main = paste("Covariance:", round(cov(x4, y4), 0))))
    par(opar)

FINAL EDIT (I promise!): Finally, here is a covariance of -150 with perhaps the most tenuous "negative relationship" between $x$ and $y$ imaginable:

    xx <- yy <- seq(0, 100, by = 10)
    yy[9] <- -336.7
    plot(xx, yy, pch = 19, main = paste("Covariance:", cov(xx, yy)))
33,380
if covariance is -150, what is the type of relationship between two variables?
It tells you only that the relationship is negative. This means that low values of one variable tend to occur together with high values of the other. It is hard to tell whether this covariance is big or small (whether your relationship is strong or weak) because $cov(X,Y)$ ranges from $-sd(X)\cdot sd(Y)$ to $sd(X)\cdot sd(Y)$. So it depends on the scale of your variables. To judge whether this relationship is strong or not, you need to convert the covariance to a correlation (divide it by $sd(X)\cdot sd(Y)$). This ranges from $-1$ to $1$, and many different guidelines for interpretation can be found on the Web and in textbooks. You can run a test for significance of the correlation too.
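For instance (with made-up standard deviations, just to show the arithmetic), the same covariance of -150 can correspond to a weak or a strong correlation:

    cov_xy <- -150
    cov_xy / (20 * 40)   # sd(X) = 20, sd(Y) = 40: correlation -0.1875, fairly weak
    cov_xy / (15 * 11)   # sd(X) = 15, sd(Y) = 11: correlation about -0.91, very strong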
33,381
Zero Inflated Logistic Regression - Does This Exist?
Logistic regression will not "state that all future patients do not have the disease". Logistic regression yields probabilistic predictions, i.e., probabilities that a patient has the disease. In the case of a rare disease, this probability may be extremely low (for a patient that is essentially healthy - no need for action), or very low (better to run another non-invasive test), or "merely" low. If the disease in question is rare but dangerous, it may make sense to run an invasive test, e.g., taking biopsies, even if the predicted probability your logistic regression yields is only $\hat{p}=0.2$. You need to adapt your decision thresholds (possibly multiple ones, as here!) to the costs of decisions. Thus, while a "zero-inflated logistic regression" could in principle make sense (e.g., in the case where we suspect two data generating processes to be at work, one of which always yields a zero), that does not seem to be the case here. Logistic regression can deal quite well with rare instances of the target variable. If all goes well, it will simply output low probabilities. If these are well-calibrated, this is precisely what should happen. And no, oversampling (or weighting, which is essentially the same as oversampling) won't address a non-problem.
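A hedged simulated sketch of this point in R (the variable name, the coefficients and the 0.2 cut-off are all invented for illustration): with a rare outcome, the fitted logistic regression still produces a full range of low-but-graded probabilities that can be thresholded according to the costs of the decisions.

    set.seed(7)
    n <- 10000
    biomarker <- rnorm(n)
    disease   <- rbinom(n, 1, plogis(-6 + 2 * biomarker))   # prevalence roughly 1%

    fit   <- glm(disease ~ biomarker, family = binomial)
    p_hat <- fitted(fit)

    summary(p_hat)      # graded probabilities: most are tiny, some are appreciably larger
    mean(p_hat > 0.2)   # share of patients whose risk crosses an illustrative 0.2 action threshold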
33,382
Zero Inflated Logistic Regression - Does This Exist?
While the answer by Stephan gives a good overview of the bigger picture, I think the answer in the narrow sense is IMHO that no, zero-inflated logistic regression does not make much sense. Why? Assume the true data-generating process is indeed a mixture of a Bernoulli distribution and a constant zero, specifically: $$ \begin{align} y_i &= \begin{cases} 0 & z_i = 0 \\ \bar{y}_i & z_i = 1 \end{cases}\\ \bar{y}_i &\sim \text{Bernoulli}(p_i)\\ z_i &\sim \text{Bernoulli}(\theta_i) \end{align} $$ where both $p_i$ and $\theta_i$ are some function of some predictors (usually a logit-transformed linear predictor term). We can quickly see that the outcome is 1 if and only if both $z_i$ and $\bar{y}_i$ are 1, so $P(y_i = 1) = \theta_i p_i$ and thus simply $y_i \sim \text{Bernoulli}(\theta_i p_i)$. This means $y_i$ can only give you information about the product $\theta_i p_i$ and cannot disentangle the individual contributions of the "logistic regression" and "zero inflation" components, unless you make strong restricting assumptions about the possible forms of the predictors for $\theta_i$ and $p_i$. (In theory, there is a very tiny difference, as this "zero-inflated" formulation implies a slightly different link function and thus different behavior of continuous predictor terms than the logistic regression, but I think this is highly unlikely to be relevant to any practical analysis task.) A similar line of reasoning applies to hurdle logistic regression. So a standard logistic regression model is likely sensible, but it is known that maximum likelihood estimators can be biased when there is little information in the data (small sample size and/or rare events), and bias-corrected methods such as Firth's correction (e.g., via logistf) are thus likely to be preferred to R's glm or similar. The case would be different if you had zero inflation of a binomial response with more than one trial - then you could in fact, at least in some cases, learn something about the zero-inflation/hurdle component separately from the success probability.
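A quick simulation illustrates the identifiability problem: the explicit zero-inflated process and a plain Bernoulli with the product probability generate indistinguishable data (the probabilities below are arbitrary; any pair with the same product $\theta p$ would do):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
theta, p = 0.6, 0.5          # arbitrary "not-always-zero" and "success" probabilities

# Process A: the explicit zero-inflated Bernoulli mixture
z = rng.binomial(1, theta, n)
y_bar = rng.binomial(1, p, n)
y_a = z * y_bar

# Process B: a plain Bernoulli with the product probability
y_b = rng.binomial(1, theta * p, n)

print(y_a.mean(), y_b.mean())      # both ~0.30; the observed 0/1 data look identical
# Any other pair with the same product, e.g. theta=0.75 and p=0.4, would generate
# the same distribution, so the two components are not identifiable from y alone.
```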
33,383
Zero Inflated Logistic Regression - Does This Exist?
Zero-inflated models use a distribution (like the Poisson or something else) that is mixed with a point mass at zero. Logistic regression relates to binary data, which are modelled with a Bernoulli distribution. When you mix a Bernoulli distribution with a point mass at zero, you get another Bernoulli distribution, so zero inflation does not make much sense. An exception could be when your response is a Binomial distribution whose parameter $p$ is modelled with a logistic function. E.g., instead of whether a patient has a disease or not, the response could be the number of years $x$ out of $n$ during which a patient has had some rare symptom.
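To illustrate the binomial exception (all numbers below are arbitrary): with more than one trial per unit, a zero-inflated process produces visibly more zeros than any plain Binomial with the same mean, so the extra component is detectable; with a single trial (the logistic-regression case) no such comparison is possible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k, theta, p = 100_000, 5, 0.7, 0.6      # arbitrary illustration values

z = rng.binomial(1, theta, n)               # 0 = structural zero, 1 = "at risk"
y = np.where(z == 1, rng.binomial(k, p, n), 0)

# Moment-match a plain Binomial(k, p_hat) to the observed mean
p_hat = y.mean() / k
expected_zeros = n * stats.binom.pmf(0, k, p_hat)
print((y == 0).sum(), round(expected_zeros))   # roughly 30,700 observed vs ~6,600 expected
# With k > 1 trials, the excess zeros make the inflation component detectable;
# with k = 1 (the Bernoulli / logistic-regression case) this check is impossible.
```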
33,384
Zero Inflated Logistic Regression - Does This Exist?
Zero-inflated logistic regressions do exist. The most "famous" example is that of a species occupancy model where the probability of presence of a species follows a logistic model dependent on various predictors AND the probability of detection on a single visit is less than 1. If only a single visit is observed, then, as @MartinModrak has mentioned, only the product of the occupancy probability and the detection probability can be estimated. However, if multiple independent observations are taken on the same site under the same conditions (with the true but unknown presence status constant), then the two probabilities can be separated. The R package unmarked deals with such models (logistic regressions for both the probability of occupancy and the probability of detection). A reference for this is MacKenzie, D. I., J. D. Nichols, G. B. Lachman, S. Droege, J. Andrew Royle, and C. A. Langtimm. 2002. Estimating Site Occupancy Rates When Detection Probabilities Are Less Than One. Ecology 83: 2248-2255. For your problem with a rare disease, such models might be appropriate if the test for the rare disease does not always detect the disease when the patient does have it. You would need to have patients tested multiple times (with an appropriate model describing the probability of detection). These methods are certainly data hungry, and having a "rare" disease (i.e., a small probability of having the disease) raises the need for lots of patients. But my main point is that there are models that deal with excess zeros in a logistic regression; however, a secondary model (and subject-matter rationale) that describes the generation of the excess zeros is required.
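A hand-rolled sketch of the idea (not the unmarked package itself; the true occupancy and detection probabilities below are made up) shows how repeated visits let maximum likelihood separate the two probabilities:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n_sites, n_visits = 500, 4
psi_true, p_true = 0.4, 0.3            # made-up occupancy and per-visit detection

occupied = rng.binomial(1, psi_true, n_sites)
detections = rng.binomial(1, p_true, (n_sites, n_visits)) * occupied[:, None]

def negloglik(params):
    psi, p = expit(params)             # keep both probabilities inside (0, 1)
    d = detections.sum(axis=1)
    # Sites with at least one detection are certainly occupied; all-zero
    # histories are either occupied-but-missed or truly unoccupied.
    # (The binomial coefficient is dropped; it does not depend on psi or p.)
    ll_pos = np.log(psi) + d * np.log(p) + (n_visits - d) * np.log(1 - p)
    ll_zero = np.log(psi * (1 - p) ** n_visits + (1 - psi))
    return -np.where(d > 0, ll_pos, ll_zero).sum()

fit = minimize(negloglik, x0=np.zeros(2))
print(expit(fit.x))                    # roughly (0.4, 0.3): the two are separated
```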
33,385
Incorrectly Using the Word "Causal" to Describe a Regression Model?
There is a very careful formulation in Gelman, Hill, and Vehtari Regression and Other Stories: From the data alone, a regression only tells us about comparisons between units, not about changes within units. Thus the most careful interpretation of regression coefficients is in terms of comparisons, for example [...] "Comparing two items $i$ and $j$ that differ by an amount $x$ on predictor $k$ but are identical on all other predictors, the predicted difference $y_i - y_j$ is $\beta_k x$, on average." This is of course a bit of a mouthful.
33,386
Incorrectly Using the Word "Causal" to Describe a Regression Model?
On average, a one unit increase in $x_i$ is associated with (not "causes") an increase in $y_i$ of $\beta_1$ units.
33,387
Why is the p-value defined the way it is (as opposed to a more intuitive measure)?
In case you're asking for intuition, rather than for mathematical detail... For your second question: I find it helpful to interpret the p-value as a percentile, not a probability. A p-value of 0.02 means "If $H_0$ were true, the test statistic value $T$ that I observed would have been among the top 2% of possible values of $T$ that are least like $H_0$ and most like $H_A$." For your first question: It depends on your alternative hypothesis. If you're testing whether a new drug does better than control, then perhaps you've got a one-sided $H_A$, such as "the difference in means between the treatment group and the control group is positive." If so, then you'd only reject $H_0$ if your test statistic actually points in that direction. You wouldn't want to take the absolute value of $T$ because it would not match your scientific question -- you're not interested in finding drugs that do worse than control. Again, you want to know: "Out of all the possible values of the test statistic when $H_0$ is true, is this among the top X% of values that look most like $H_A$ and least like $H_0$?"
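Here is a tiny simulation of that percentile reading for a one-sided test (the observed statistic and sample size are made up, and the null distribution is approximated by simulation rather than derived analytically):

```python
import numpy as np

rng = np.random.default_rng(4)

# One-sided test of H0: mean = 0 against HA: mean > 0, with the sample mean as T.
observed_T = 0.41                      # made-up observed statistic from n = 25 values
n = 25

# Approximate the distribution of T when H0 is true (standard normal data).
T_null = rng.normal(0, 1, (100_000, n)).mean(axis=1)

# p-value = fraction of null draws at least as favourable to HA as what we saw;
# equivalently, the observed T sits near the (1 - p) percentile of the null draws.
p = (T_null >= observed_T).mean()
print(p)                               # about 0.02: observed T is in the top ~2%
```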
33,388
Why is the p-value defined the way it is (as opposed to a more intuitive measure)?
First question: should "at least as extreme" always be interpreted as "the absolute value of T' is at least as large as T"? If not, when shouldn't it be interpreted that way? The definition of the $p$-value depends on the rejection region of the test statistic. Indeed, given a sample $X_1,\ldots,X_n$, a test statistic $T(X_1,\ldots,X_n)$ and $R_\alpha$, a rejection region of size $\alpha$, then $$ p\text{-value} = \inf\{\alpha: T(X_1,\ldots,X_n)\in R_\alpha\}. $$ Thus the $p$-value can be interpreted as the smallest size at which we can reject $H_0$. The $p$-value therefore tells us how surprising the observed value of the statistic is when $H_0$ is true: the lower the $p$-value, the more surprising it is to observe such a value under the model with $H_0$ being true. Indeed, some authors refer to the $p$-value by the name observed significance level. Now about the computation of the $p$-value: if $t_n$ is the observed test statistic, then: if the rejection region is of the form $$\{X_1,\ldots,X_n: T(X_1,\ldots,X_n)\geq c\},$$ then the $p$-value is defined by $$\sup_{\theta\in\Theta_0} P_\theta(T(X_1,\ldots,X_n)\geq t_n);$$ if the rejection region is of the form $$\{X_1,\ldots,X_n: T(X_1,\ldots,X_n)\leq c\},$$ then the $p$-value is defined by $$\sup_{\theta\in\Theta_0} P_\theta(T(X_1,\ldots,X_n)\leq t_n);$$ if the rejection region is of the form $$\{X_1,\ldots,X_n: T(X_1,\ldots,X_n)\leq c_1\}\cup \{X_1,\ldots,X_n: T(X_1,\ldots,X_n)\geq c_2\},$$ then the $p$-value is defined by $$2\min\left(\sup_{\theta\in\Theta_0} P_\theta(T(X_1,\ldots,X_n)\leq t_n),\ \sup_{\theta\in\Theta_0} P_\theta(T(X_1,\ldots,X_n)\geq t_n)\right).$$
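For a concrete test statistic, these three cases are just different tail areas; e.g., for a $t$ statistic (the observed value and degrees of freedom below are arbitrary), scipy gives:

```python
from scipy import stats

t_n = 2.1          # observed test statistic (illustrative)
df = 30            # e.g. a t statistic with 30 degrees of freedom

# Rejection region {T >= c}:  p = P(T >= t_n)
p_upper = stats.t.sf(t_n, df)
# Rejection region {T <= c}:  p = P(T <= t_n)
p_lower = stats.t.cdf(t_n, df)
# Two-sided region {T <= c1} U {T >= c2}:  p = 2 * min(lower, upper)
p_two_sided = 2 * min(p_lower, p_upper)

print(p_upper, p_lower, p_two_sided)
```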
33,389
Why is the p-value defined the way it is (as opposed to a more intuitive measure)?
How is $p$-value actually defined? Definition $1.$ (cf. $\rm[I]$) A $p$-value is a test statistic $p:\mathcal X\to [0,1]$ such that $$\mathbb P_\theta (p(\mathbf X) \leq \alpha)\leq\alpha,~~\forall\theta\in\Omega_\mathcal H, ~\forall\alpha\in(0,1).\tag{1.a}\label 1$$ Consider a series of nested tests $\langle \varphi_\alpha\rangle$ in the sense that $\varphi_\alpha(x) \leq\varphi_{\alpha^\prime}(x) $ for $\alpha<\alpha^\prime.$ Define $$\hat p:=\inf\{\alpha:\varphi_\alpha=1\}.\tag{1.b}\label b$$ Observation $1.1.$ $\hat p$ is a valid $p$-value. Formally, if for a set of nested test functions $$\sup_{\theta\in\Omega_\mathcal H}\mathbb P_\theta(\varphi_\alpha(\mathbf X) = 1) \leq \alpha ~~\forall\alpha\in(0, 1), \tag 2\label 2$$ then for all $u\in(0,1),$ $$\mathbb P_\theta\left(\hat p\leq u\right) \leq u. \tag 3\label 3$$ $\eqref 3$ is easy to see (cf. $\rm [II]$ ) for $\left\{\hat p\leq u\right\}$ means $\{\varphi_v(\mathbf X) =1\}$ for all $u<v.$ Then, let $v\to u. $ $\blacksquare$ Now consider a test statistic $W(\mathbf X) $ whose large values indicate the rejection of $\mathcal H. $ Observation $1.2.$ (cf.$\rm [I]$) Define $$p(\mathbf x) := \sup_{\theta\in\Omega_\mathcal H} \mathbb P_\theta(W(\mathbf X) \geq W(\mathbf x)).\tag{1.c}\label c$$ $p(\mathbf x) $ is also a valid $p$-value. Notice that \begin{align}p_\theta(\mathbf x) &= \mathbb P_\theta (W(\mathbf X) \geq W(\mathbf x))\\&= \mathrm F_\theta(-W(\mathbf x)),\tag 4\end{align} which implies $p_\theta(\mathbf x) $ is stochastically greater than or equal to $\mathcal U(0, 1).$ Then as $p(\mathbf x) \geq p_\theta(\mathbf x), $ $\eqref 1$ follows. $\blacksquare$ When one talks about the $p$-value, they basically mean $\eqref 1$ or $\eqref c$, which as outlined above are genuine $p$-values. The phrase as extreme as is essential to define $p$-value. Gloss over the definition and its equivalence as outlined above. However, what does it imply intuitively? What does lower $p$-value mean? Why is the phrase necessary? Over the years, there have been many CV posts dealing with the specifics. Please have a look at some of those: $\bullet$ Why is smaller the p-value, larger is the significance? $\bullet$ Does p-value ever depend on the alternative? (courtesy Richard Hardy) $\bullet$ What is the meaning of p values and t values in statistical tests? and links therein. References: $\rm [I]$ Statistical Inference, George Casella, Roger L. Berger, Wadsworth, $2002, $ sec. $8.3, $ pp. $397-398.$ $\rm [II]$ Testing Statistical Hypotheses, E. L. Lehmann, Joseph P. Romano, Springer Science$+$Business Media, $2005, $ sec. $3.3, $ pp. $63-64.$
33,390
Why is the p-value defined the way it is (as opposed to a more intuitive measure)?
Regarding the first question, you have got some good answers already. (You may want to check out the following threads, too, as I think they align with your thinking quite well: $p$-value: Fisherian vs. contemporary frequentist definitions, Does $p$-value ever depend on the alternative? and Defining extremeness of test statistic and defining $p$-value for a two-sided test). Regarding the second question, you start with a Fisherian-esque interpretation by only looking at the null hypothesis and ignoring the alternative. This is pretty intuitive to me, but it does not seem to be fashionable anymore. However, you differ by suggesting to only look at the density at or around the test statistic – not the integral of all densities that are at most as high. The latter would be the Fisherian $p$-value*. It has the advantage of using a unified (and I think reasonably intuitive) scale between 0 and 1 which is based on ranking all possible test statistics from the most extreme (in the Fisherian sense, i.e. having the lowest likelihood*) to the least extreme (having the highest likelihood). Meanwhile, your approach would use a different scale for different tests (Student-$t$, $\chi^2$, $F$, ...), so we would have to develop intuition for each of them. While we can now quite easily judge a value between 0 and 1, there we would be dealing with all kinds of values on quite different scales – not quite as easy. *Not everyone may agree on that; I had some discussion about it somewhere else on this site. Too bad Fisher is not here with us anymore to elaborate on his position. Some further references: Keuzenkamp & Magnus "On Tests and Significance in Econometrics" (1995) Lehmann "The Fisher, Neyman-Pearson Theories of Testing Hypotheses: One Theory or Two?" (1993) Christensen "Testing Fisher, Neyman, Pearson, and Bayes" (2005) Spanos "Probability Theory and Statistical Inference: Econometric Modeling with Observational Data" (1999) Section 14.5; there is a newer edition, too.
33,391
Why is the p-value defined the way it is (as opposed to a more intuitive measure)?
Likelihood is a pain to work with. For starters, likelihood can only go down as you collect more data. Indeed, if your alternate hypothesis is "My coin is biased in favour of tails" and you flip it 0 times, the likelihood of getting zero heads is 100%! Actually doing the experiment can only ever hurt your likelihood value from there. Second, likelihood is non-exclusive. The likelihood of zero heads from zero throws is also 100% under the null, as well as under the hypothesis "It is a magic coin that always lands how I want." Even if we actually flip our coin and get tails, we're scoring a 75% likelihood. But the null hypothesis still scores a 50% likelihood. It isn't a straightforward battle between null and alternate. Lastly, likelihood calculations are incredibly sensitive to the formulation of the hypothesis. In practice you often don't know ahead of time (and stating your hypothesis ahead of time is important!) what exact numbers to put in. If you suspect someone is cheating with a coin biased for tails, it's a pain to have to say whether it's a 70% tails or a 75% tails coin. Now suppose you take that coin and flip it 1000 times, getting exactly 748 tails. Under the 75% hypothesis your likelihood of that result is an astounding 2.9%, but under a 70% hypothesis it would be 0.01%. Under the hypothesis we really want, "It's biased to somewhere in the 65-80 zone" it's just messy to define and calculate. The intuitive measure you're after is "Probability of our hypothesis." Unfortunately, it generally is not a value we can actually calculate from our experiment. It depends too much on things like priors (Fair coins are more common than magic coins, but how much?), and the huge space of possibility of other hypotheses (e.g. what if the coin is fair but the flipper is trained?). So, we make do with p values as our first filter. To be clear, a tiny p value does not mean that the alternate hypothesis is true. All it means is that the researcher has done enough work to beat the trivial "It's a fluke" standard that they've earned the right to have their work examined.
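The numbers above are easy to verify (assuming a Binomial model for the flips):

```python
from scipy import stats

# Likelihood of exactly 748 tails in 1000 flips under two point hypotheses:
print(stats.binom.pmf(748, 1000, 0.75))   # ~0.029  (about 2.9%)
print(stats.binom.pmf(748, 1000, 0.70))   # ~0.0001 (about 0.01%)

# And the degenerate "zero flips" case from the start of the answer:
print(stats.binom.pmf(0, 0, 0.5))         # 1.0 -- zero tails in zero flips is certain
```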
33,392
Why is it Bad to Discretize a Continuous Variable? [duplicate]
There are two main issues here that militate against discretisation into bins for statistical analysis. The first is that this generally involves some loss of information, since different values are put into the same bin, and so there is a corresponding loss of statistical power in the analysis. The second is that such an approach fails to produce inferences that change in a continuous manner with the continuous variable of interest; instead the inferences change over the bins and so they appear as "jumps" when looked at on the original continuous scale. Neither of these problems are necessarily fatal, particularly if you have a reasonably large number of bins to reduce the loss of information. Ideally, we would be able to model continuous data with models that treat them as continuous. However, continuous models generally involve some specification of a parametric family of distributions that limits the generality of analysis, and this means that there are distributional assumptions that might or might not fit the data well. (By contrast, in the discrete case the multinomial distribution fits all possible probability distributions for the number of outcomes occurring over a finite, or even countably infinite, set of finite bins.) It is usually possible to fit a continuous model that works well on the data if you have an expansive set of tools and the ability to generalise continuous models when they don't fit well. The difficulty that some analysts encounter is that they may find that standard continuous models they use for inference (e.g., linear regression) aren't fitting their continuous variables well and they may reach the limit of their ability to generalise these models effectively to improve things. In such cases, analysts sometimes fall back on the discretisation into bins in order to allow them to apply highly general discrete models (e.g., the multinomial) that do not assume any particular distributional forms. This is a trade-off --- you lose some information and the ability to make smooth inferences about a continuous variable, but you have a high level of generality of the remaining discretised variable and it falls within well-known model forms that are easy to apply. I note your proposal that discretisation might be okay "...if the bins are made carefully and rigorously tested...". It is easy to say that, but how do you propose to test them (against what), if not by comparing them to an initial analysis of the continuous variable? If you "rigorously test" the discretised method and inference by comparing it to a continuous model that is taken to be the correct analysis, then you already presumably have a well-fitted continuous model, so what is the point of the discretisation? If you can't get that comparator, then what exactly is your proposed "rigorous test" doing? It is not necessarily impossible to answer these questions sensibly, but you would need to think more about what exactly you propose to do and how it helps you. The above gives you a practical idea of the issue, but I would be remiss if I did not expand on this with a little excursion into the philosophy and foundations of mathematics and computing. In particular, it is worth noting that all continuous data is discretised to some extent in statistical analysis, due to the finite precision of computational representation of numbers. 
At best we represent continuous variables up to some finite level of precision (e.g., standard precision under floating-point arithmetic) and so we always implicitly use a discretised scale where the "bins" are tiny intervals that are too small to be differentiated under the computational representation. (Usually these bins are precise enough to avoid duplicate values of the "continuous" variable.) Since this is a necessity of analysis, there cannot be any in principle objection to some discretisation of continuous variables in analysis, and so the question becomes one of degree. To take a deeper excursion into the philosophy of mathematics, finitists like Doron Zeilberger would go further with this representation argument and object even to the assertion that continuous random variables, continuous functions, or infinite sets exist; they would say that all purportedly continuous variables are actually discrete, up to the finite level of accuracy of the computational representation, and so the real question is only whether we want to aggregate smaller bins into larger bins.
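A small simulation of the power loss from binning (the effect size, sample size, and median-split rule are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
reps, n = 2000, 60
hits_cont = hits_binned = 0

for _ in range(reps):
    x = rng.normal(size=n)
    y = 0.3 * x + rng.normal(size=n)            # modest true linear effect

    # Continuous analysis: test the correlation directly
    p_cont = stats.pearsonr(x, y)[1]

    # Discretised analysis: median split, then a two-sample t test
    hi = x > np.median(x)
    p_bin = stats.ttest_ind(y[hi], y[~hi])[1]

    hits_cont += p_cont < 0.05
    hits_binned += p_bin < 0.05

print(hits_cont / reps, hits_binned / reps)     # binning gives visibly lower power
```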
33,393
Why is it Bad to Discretize a Continuous Variable? [duplicate]
Other answers have discussed how discretization throws away information, which can hide effects which would be discovered if the continuous data were used. But sometimes the loss of information actually creates illusory effects in the data! For example, suppose I'm trying to determine if a particular Northern species of bird migrates south in the winter. In the first week of November, I tag each bird caught and released from a research site at, say, $47.5^\circ$ North latitude. To get a larger sample size, I also tag the birds from a more northern site, at $51^\circ$ North latitude. Then I use the tags to locate the birds in December, seeing if they trend south as it gets colder. Suppose that in reality, the birds don't migrate at all, but just mill about, some heading south, some north. The continuous data would reveal that there is no migratory trend whatsoever, only random displacements between November and December. But I decide (foolishly) to discretize my data into bins $5^\circ$ across. None of the birds I tagged made it below $40^\circ$ or above $60^\circ$, so I break my sample range into four bins, $40^\circ$-$45^\circ$, $45^\circ$-$50^\circ$, $50^\circ$-$55^\circ$, and $55^\circ$-$60^\circ$. This feels like a natural discretization to me, as the ranges have nice, round-number endpoints. The first site is in the center of a range, so it doesn't cause me any problems. Most of the birds tagged at the $47.5^\circ$ site stay in the $45^\circ$-$50^\circ$ bin, but some of the outliers end up in both the $40^\circ$-$45^\circ$ and $50^\circ$-$55^\circ$ bins. However, the $51^\circ$ site is close to the bottom of its bin, so birds tagged there are much more likely to randomly move to the bin below than the bin above. I notice this pattern in my data, and wrongly conclude that the northern population of birds tends to travel south during November. When I publish my results in an esteemed ornithology journal, I don't include the latitudes at which the birds were tagged, only the bin in which they were tagged. I do include statistical tests, including a $p$-value below 0.05, which supports the claim that the birds trend into southern bins during November. My readers have no way of knowing that the reason for this trend is that my discretization biased my methodology, and are convinced that I have discovered a real phenomenon.
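The bias is easy to reproduce in a simulation (the displacement scale and sample size are made up; the point is only the asymmetry of the bin edges around the two sites):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
bin_edges = [40, 45, 50, 55, 60]

for start_lat in (47.5, 51.0):
    november = np.full(n, start_lat)
    december = november + rng.normal(0, 2, n)   # pure random milling, no migration

    bin_nov = np.digitize(november, bin_edges)
    bin_dec = np.digitize(december, bin_edges)

    print(start_lat,
          "moved to a more southern bin:", (bin_dec < bin_nov).mean(),
          "more northern bin:", (bin_dec > bin_nov).mean())
# The 47.5 site sits mid-bin, so bin changes are symmetric; the 51.0 site sits
# near its bin's lower edge, so random noise shows up mostly as "southward" moves.
```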
33,394
Why is it Bad to Discretize a Continuous Variable? [duplicate]
A different issue is that the categorical model may be unnecessarily hard to fit, caused by the philosophical mismatch of the model to the problem. Suppose I have a large spring. The farther I pull it, $x$, the more force it exerts, $y$. This is an extremely straightforward continuous relationship -- pull a bit farther, feel a bit more force. I could fit a model $y = ax$ with just one parameter, $a$. If Hooke's law applies, then just a couple data points will give me a very accurate estimate of $a$. Now consider the categorical approach. I could break distances up into one hundred buckets, $x \in [0,1]$, $x \in [1,2]$, ..., $x \in [99,100]$. Now I have a model with at least one hundred parameters. It is capable of modeling many relationships that I know are impossible, such as the force going up and back down and back up etc. as the spring is stretched farther and farther. I may need hundreds or thousands of data points to fit the model. And it still won't predict very accurately at resolutions below $1$. To fix this, I could just use a couple buckets, $x \in [0,50]$ and $x \in [50,100]$. Now I don't need as much data and my model is simple, but very inaccurate.
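A rough sketch of the contrast, with made-up noise and only ten buckets rather than one hundred:

```python
import numpy as np

rng = np.random.default_rng(7)
a_true = 2.0
x = rng.uniform(0, 100, 30)                    # only 30 pulls of the spring
y = a_true * x + rng.normal(0, 5, size=30)

# One-parameter continuous model y = a*x (least squares through the origin)
a_hat = (x * y).sum() / (x * x).sum()

# "Categorical" model: a separate mean force for each 10-unit bucket
edges = np.arange(0, 110, 10)
which = np.digitize(x, edges) - 1
bucket_means = [y[which == b].mean() if np.any(which == b) else np.nan
                for b in range(10)]

print("a_hat:", a_hat)                             # close to 2.0 from just 30 points
print("bucket means:", np.round(bucket_means, 1))  # noisy, and some buckets may be empty
```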
33,395
Visualize a continuous variable against a binary variable
Boxplots lose an enormous amount of information, since they condense all data into just five summary statistics (the median, the box and the whiskers) plus what is unhappily called "outliers". I would always go with beanplots, also known as violin plots. If you want, you can always overlay boxplots, or the original data. If you do add the original data, jitter them horizontally to avoid overplotting. (And if you plot both boxplots and the original data, suppress the "outliers" plotted by the boxplots, because then you would have the same data plotted twice.) This answer of mine gives an example (the second plot) and R code to create it. In Python, it seems like you could use statsmodels.graphics.boxplots.beanplot or seaborn.violinplot. Visually checking relationships is a very good idea. However, it is of course subjective. If you want an objective result, consider running a logistic regression of heart_disease on BMI. Consider using splines to capture any nonlinearities.
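A minimal seaborn sketch of that suggestion (the data are simulated and the column names merely echo the question; the R version is in the linked answer):

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
df = pd.DataFrame({
    "heart_disease": np.repeat(["no", "yes"], 200),
    "BMI": np.concatenate([rng.normal(27, 5, 200), rng.normal(29, 5, 200)]),
})

ax = sns.violinplot(data=df, x="heart_disease", y="BMI", inner="quartile",
                    color="lightgrey")
sns.stripplot(data=df, x="heart_disease", y="BMI", jitter=True, alpha=0.3,
              size=2, ax=ax)        # raw data, jittered to reduce overplotting
plt.show()
```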
33,396
Visualize a continuous variable against a binary variable
I usually do overlaid histograms. E.g. https://stackoverflow.com/questions/49533978/multiple-histograms-in-python
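For example, a minimal matplotlib version (values simulated; a shared set of bins and density scaling make the two groups comparable):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(9)
bmi_no = rng.normal(27, 5, 400)     # simulated values, purely illustrative
bmi_yes = rng.normal(29, 5, 150)

bins = np.linspace(15, 45, 31)      # shared bins so the two histograms align
plt.hist(bmi_no, bins=bins, alpha=0.5, density=True, label="no heart disease")
plt.hist(bmi_yes, bins=bins, alpha=0.5, density=True, label="heart disease")
plt.xlabel("BMI")
plt.legend()
plt.show()
```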
33,397
Visualize a continuous variable against a binary variable
Depending on the sample size, you might do a strip plot (where every observation is a dot in the graph), maybe with jittering or transparent points. I second Stephen's recommendation of using splines -- the box plot you've shown suggests that the median and quartiles are roughly similar, but that the outliers are quite different. Oddly, the high outliers all show no heart disease. Was N much larger for that group?
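A minimal seaborn sketch, again assuming the df, BMI and heart_disease names used above; the jitter width, point size and transparency are arbitrary:

import seaborn as sns
import matplotlib.pyplot as plt

sns.stripplot(data=df, x="heart_disease", y="BMI", jitter=0.25, alpha=0.3, size=3)
plt.show()

# Check whether one group is simply much larger than the other.
print(df["heart_disease"].value_counts())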
33,398
Visualize a continuous variable against a binary variable
Variants on this question arise frequently: see for example Histogram or box plot, to compare two distributions of means?. I want to add to the fine answers to date, first by emphasising some small tensions here:

Show the data, but show summaries and expose detail too
You should want the data to be seen clearly, but also allow summaries and important detail to jump out at readers. Important detail could be outliers, gaps and spikes as well as skewness and long tails and additive or multiplicative shift between groups. Substantively (meaning, scientifically or practically), people often want to focus on differences in distribution level (means, medians, whatever) between two groups, but differences in distribution spread and shape could complicate or even undermine any comparison based on levels alone.

Familiar method or novel method?
There is some advantage to simple methods that are familiar to people in your field, yet there are many small new ideas even on this basic problem that may deserve experiment. Histograms are traditional but often work fine. Box plots for some fields have become traditional too, but their limitations are shown by many small variants aimed at enhancing them. Just reference and explain any novel method you use.

Show the data directly?
Showing the data directly sounds an obvious goal, but some methods don't do exactly that, in pursuit of clarity or simplicity. Jittering to shake identical points apart and smoothing the density function are contradictory moves, but united by a motive of making overall patterns easier to grasp.

Use a transformed scale?
People in different fields can range from those using a transformed scale immediately as a matter of standard measurement convention (pH, Richter scale, decibels) to those suspicious of or unfamiliar with any kind of transformation. Most statistically-minded people are aware of the possibility of showing data on a transformed scale, yet they vary in their enthusiasm for using one in practice.

Here I use a hybrid display with elements of a quantile plot (i.e. a plot of all values against a cumulative probability scale), a box plot (starting with the idea of median and quartiles in a box), and extra annotation. The goal -- easy to explain, but harder to achieve well -- is to allow readers to see summaries and detail at the same time.

Back in 1979, Emanuel Parzen suggested quantile-box plots (reference below). The display below is similar in spirit. Parzen's own examples weren't especially impressive and his readers were perhaps distracted by an attempt in that paper to represent exploratory data analysis in fine mathematical clothes, a project resisted robustly (!) by John W. Tukey in discussion. The name quantile-box plots has also been used since for other related but not identical plots.

The reaction time means (units not stated) from the CV thread cited at the beginning of this answer will serve fine to show some technique. Here is one of many plots that could be shown. These are small samples, 20 values in each, and so it's certainly possible to show each value distinctly without distortion. The design copes reasonably with many more points (200, 2000, 20000, ...) in so far as major details (e.g. marked outliers if they exist) will still jump out. Offsetting point markers according to the associated cumulative probability (rank, equivalently) avoids or at least reduces any call for jittering. If the underlying distribution is fairly smooth, so too will be the quantile trace. The cumulative probability scale here is linear, but other scales could make as much or more sense (e.g. transformed to unit normal deviates).

Parzen superimposed quantile and box plots (that was much of his point, that quantile and box plots share links to cumulative probabilities) and I do that too sometimes. Here they are juxtaposed. Box plots are presented but here have only a summary role (and so can be made quite thin without loss). As the quantile plot shows all the data, we needn't concern ourselves with any rule or convention about which data points are shown individually. The most common convention used seems to remain the one that Tukey settled on after some experiment, namely to show individually all points that lie more than 1.5 IQR from the nearer quartile. I find in my reading and discussion that (a) teachers and researchers are often poor at explaining the convention that they used, and readers are often less familiar with such conventions than is assumed; (b) there is even a tendency to take that rule of thumb for which data points are to be shown individually as a hard-and-fast criterion for outliers, which is unfortunate.

It's partly a matter of taste, but I often like to revert to the practice of just taking the whiskers out to paired percentiles in the tails, which is quite often done in various literatures. Which percentiles to use is not, and need not be, standardized, and that should not matter much so long as the choice is explained.

I add longer lines showing the means. Naturally, that could be done too with distinct point symbols. Show geometric means, midmeans or anything else instead if that fits the problem or the data better. I note a bizarre but common habit of accompanying discussions of t-tests or analysis of variance with box plots that don't show means too; that is much better than no graph at all, but is like Romeo and Juliet without Romeo and Juliet.

As above, this display is compatible with transformations, the main detail being to be clear whether log(mean) or mean(log) is being shown. On a logarithmic scale, showing geometric means to compare with medians is a really good idea.

As a matter of record, here is the Stata code I used. If your favourite software doesn't make something like this fairly easy, you need a new favourite.

clear
input float(A B)
.8792397 .9964306
.5845183 .7269523
.8092829 .9343457
.6869933 .9014275
.7223416 .7856004
.6551149 .8224649
.6549308 .8670868
.6560364 .7797318
.602458 .8236209
.6110293 .8315545
.6373121 .774942
.6295298 .8020451
.6144096 .7622244
.6776401 .8087511
.6165493 .7977815
.6055175 .7647625
.6318304 .7652289
.6315798 .7064912
.6329535 .7116355
.5817685 .7861449
end

stripplot A B, cumul vertical box(barw(0.1)) refline pctile(5) boffset(-0.1) ///
height(0.4) yla(, ang(h)) ytitle("") ///
note("box and whiskers show median, quartiles, 5% and 95% percentiles" "longer lines show means") ///
xla(, tlc(bg)) ytitle("more explanation goes here")

Parzen, E. 1979. Nonparametric statistical data modeling. Journal of the American Statistical Association 74(365): 105–121. https://doi.org/10.2307/2286734
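For readers without Stata, a rough Python approximation of the same idea is sketched below. This is not the author's code: it plots each group's sorted values against their cumulative probabilities, offset around the group position, and marks the median, quartiles, 5% and 95% percentiles and the mean with horizontal lines. The data are the twenty values of A and B from the Stata listing above.

import numpy as np
import matplotlib.pyplot as plt

A = np.array([.8792397, .5845183, .8092829, .6869933, .7223416, .6551149, .6549308,
              .6560364, .602458, .6110293, .6373121, .6295298, .6144096, .6776401,
              .6165493, .6055175, .6318304, .6315798, .6329535, .5817685])
B = np.array([.9964306, .7269523, .9343457, .9014275, .7856004, .8224649, .8670868,
              .7797318, .8236209, .8315545, .774942, .8020451, .7622244, .8087511,
              .7977815, .7647625, .7652289, .7064912, .7116355, .7861449])

fig, ax = plt.subplots()
for pos, y in ((0, A), (1, B)):
    y = np.sort(y)
    p = (np.arange(1, len(y) + 1) - 0.5) / len(y)      # cumulative probabilities
    ax.plot(pos + (p - 0.5) * 0.4, y, "o", alpha=0.7)   # points offset by probability
    lo, q1, med, q3, hi = np.percentile(y, [5, 25, 50, 75, 95])
    ax.hlines([q1, med, q3], pos - 0.05, pos + 0.05)    # quartiles and median (box summary)
    ax.hlines([lo, hi], pos - 0.03, pos + 0.03)         # 5% and 95% percentiles
    ax.hlines(y.mean(), pos - 0.15, pos + 0.15)         # longer line for the mean
ax.set_xticks([0, 1])
ax.set_xticklabels(["A", "B"])
ax.set_ylabel("reaction time mean")
plt.show()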
33,399
Should the word "component" be singular or plural in the name for PCA?
Ian Jolliffe discusses this on p. viii of the 2002 second edition of his Principal Component Analysis (New York: Springer) -- which, as you can see immediately, jumps one way. He expresses a definite preference for the form principal component analysis, as similar to, say, factor analysis or cluster analysis, and cites evidence that it is more common anyway. Fortuitously, but fortunately for this question, this material is visible on my local Amazon site, and perhaps on yours too.

I add that the form independent component analysis seems overwhelmingly preponderant for that approach, although whether this is, as it were, independent evidence might be in doubt. It's not evident from the title, but J.E. Jackson's A User's Guide to Principal Components (New York: John Wiley, 1991) makes the same choice. A grab sample of multivariate books from my shelves suggests a majority for the singular form.

An argument I would respect might be that in most cases the point is to calculate several principal components, but a similar point could be made for several factors or several clusters. I suggest that the variants factors analysis and clusters analysis, which I can't recall ever seeing in print, would typically be regarded as non-standard or as typos by reviewers, copy-editors or editors.

I can't see that principal components analysis is wrong in any sense, statistically or linguistically, and it is certainly often seen, but I would suggest following leading authorities and using principal component analysis unless you have arguments to the contrary or consider your own taste paramount. I write as a native (British) English speaker and have no idea whether there are arguments the other way in any other language -- perhaps through grammatical rules, as the mathematics and statistics of PCA are universal. I hope for comments in that direction.

If in doubt, define PCA once and refer to that thereafter, and hope that anyone passionate about the form you don't use doesn't notice. Or write about empirical orthogonal functions.
33,400
Should the word "component" be singular or plural in the name for PCA?
Google Books Ngrams suggest that the two forms were similar in frequency of use from 1960 to about 1982, after which the without-s form started to become more popular. This suggests to me that neither form is wrong, but the without-s form may be slightly more comfortable to say.