Understanding regressions - the role of the model
The other side of the answer, complementary to mpiktas's answer but not mentioned so far, is:
"They don't, but as soon as they assume some model structure, they can check it against the data".
The two basic things that could go wrong are the form of the function (e.g. it's not even linear in logs), in which case you'd start by plotting an appropriate residual against the expected values, and the choice of conditional distribution (e.g. the observed counts are overdispersed relative to Poisson), in which case you'd test against a Negative Binomial version of the same model, or see if extra covariates account for the extra variation.
You'd also want to check for outliers, influential observations, and a host of other things. A reasonable place to read about checking these kinds of model problems is ch.5 of Cameron and Trivedi 1998. (There is surely a better place for epidemiologically oriented researchers to start - perhaps other folk can suggest it.)
If these diagnostics indicated the model failed to fit the data, you'd change the relevant aspect of the model and start the whole process again.
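The overdispersion check mentioned above can be sketched in a few lines. This is a hypothetical illustration with simulated data (all numbers are made up): counts generated from a Negative Binomial, when treated as Poisson, give a Pearson dispersion statistic well above 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated counts, overdispersed relative to Poisson:
# negative binomial with mean mu and variance mu + mu^2/size
mu, size, n = 5.0, 2.0, 2000
counts = rng.negative_binomial(size, size / (size + mu), n)

# Fit the simplest Poisson model (intercept only): fitted mean = sample mean
mu_hat = counts.mean()

# Pearson dispersion statistic: ~1 under Poisson; substantially >1 signals
# overdispersion, suggesting a Negative Binomial model instead
dispersion = np.sum((counts - mu_hat) ** 2 / mu_hat) / (n - 1)
print(round(dispersion, 2))
```

In a real analysis you would compute the dispersion from the fitted model's expected values rather than a grand mean, but the logic is the same.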
Understanding regressions - the role of the model
An excellent first question! I agree with mpiktas's answer, i.e. the short answer is "they don't, but they hope to have an approximation to the right model that gives approximately the right answer".
In the jargon of epidemiology, this model uncertainty is one source of what's known as 'residual confounding'. See Steve Simon's page 'What is residual confounding?' for a good short description, or Heiko Becher's 1992 paper in Statistics in Medicine (subscription req'd) for a longer, more mathematical treatment, or Fewell, Davey Smith & Sterne's more recent paper in the American Journal of Epidemiology (subscription req'd).
This is one reason that epidemiology of small effects is difficult and the findings often controversial - if the measured effect size is small, it's hard to rule out residual confounding or other sources of bias as the explanation.
Understanding regressions - the role of the model
There is George Box's famous quote: "Essentially, all models are wrong, but some are useful". When fitting models like this, we try to (or should) think about the data generation process and the physical, real-world relationships between the response and covariates. We try to express these relationships in a model that fits the data, or, to put it another way, one that is consistent with the data. The result is an empirical model.
Whether it is useful or not is determined later - does it give good, reliable predictions, for example, for women whose data were not used to fit the model? Are the model coefficients interpretable and of scientific use? Are the effect sizes meaningful?
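The "does it predict for women not used to fit the model" check amounts to held-out validation. A minimal hypothetical sketch (simulated data; all numbers invented): fit on one half, then see whether the prediction error on the other half is close to the irreducible noise.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400

# Hypothetical data: a linear relationship with noise sd 0.5
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), x])
fit, new = slice(0, n // 2), slice(n // 2, n)   # fitting vs. held-out halves
beta = np.linalg.lstsq(X[fit], y[fit], rcond=None)[0]

# RMSE on held-out observations: close to the noise sd when the model is useful
rmse_new = np.sqrt(np.mean((y[new] - X[new] @ beta) ** 2))
print(round(rmse_new, 1))
```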
Understanding regressions - the role of the model
The answers you have already gotten are excellent ones, but I'm going to give a (hopefully) complementary answer from the perspective of an Epidemiologist. I really have three thoughts on this:
First, they don't. See also: All models are wrong, some models are useful. The goal is not to produce a single, definitive number that is taken as the "truth" of an underlying function. The goal is to produce an estimate of that function, with a quantification of the uncertainty around it, that is a reasonable and useful approximation of the underlying function.
This is especially true for large effect measures. The "take away" message from a study that finds a relative risk of 3.0 isn't really different if the "true" relationship is 2.5 or 3.2. As @onestop mentioned, this does get harder with small effect measure estimates, because the difference between 0.9, 1.0 and 1.1 can be huge from a health and policy standpoint.
Second, there's a process hidden in most Epidemiology papers. That's the actual model selection process. We tend to report the model we ended up with, not all the models we considered (because that would be tiresome, if nothing else). There are a slew of model building steps, conceptual diagrams, diagnostics, fit statistics, sensitivity analysis, swearing at computers and scribbling on white boards involved in the analysis of even small observational studies.
Because while you are making assumptions, many of them are also assumptions you can check.
Third, sometimes we don't. And then we go to conferences and argue with each other about it ;)
If you're interested in the nuts and bolts of Epidemiology as a field, and how we perform our research, the best place to start is probably Modern Epidemiology, 3rd Edition, by Rothman, Greenland and Lash. It's a moderately technical and very good overview of how Epi research is conducted.
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
To answer your literal question, "Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?", the answer is no. The answer is no, because by construction the baseline score is correlated with the error term when the change score is used as the dependent variable, hence the estimated effect of the baseline on the change score is uninterpretable.
Using
$Y_1$ as the initial weight
$Y_2$ as the end weight
$\Delta{Y}$ as the change in weight (i.e. $\Delta{Y} = Y_2 - Y_1$)
$T$ as a randomly assigned treatment, and
$X$ as other exogenous factors that affect weight (e.g. other control variables that are related to the outcome but should be uncorrelated with treatment due to random assignment)
One then has a model regressing $\Delta{Y}$ on $T$ and $X$;
$$\Delta{Y} = \beta_1T + \beta_2X + e$$
Which by definition is equivalent to;
$$Y_2 - Y_1 = \beta_1T + \beta_2X + e$$
Now, if you include the baseline as a covariate, one should see a problem, in that you have the $Y_1$ term on both sides of the equation. This shows that $\beta_3Y_1$ is uninterpretable, because it is inherently correlated with the error term.
$$\begin{align*}Y_2 - Y_1 &= \beta_1T + \beta_2X + \beta_3Y_1 + e \\
Y_2 &= \beta_1T + \beta_2X + \beta_3Y_1 + (e + Y_1) \end{align*}$$
Now, part of the confusion in the various answers seems to stem from the fact that different models will yield identical results for the treatment effect, $\beta_1T$ in my above formulation. So, if one were to compare the treatment effect for the model using change scores as the dependent variable to the model using the "levels" (with each model including the baseline $Y_1$ as a covariate), one's interpretation of the treatment effect would be the same. In the two models that follow, $\beta_1T$ will be the same, and so will the inferences based on them (Bruce Weaver has some SPSS code posted demonstrating the equivalence as well).
$$\begin{align*} Change\ Score\ Model&: Y_2 - Y_1 = \beta_1T + \beta_2X + \beta_3Y_1 + e \\
Levels\ Model&: Y_2 = \beta_1T + \beta_2X + \beta_3Y_1 + e \end{align*}$$
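This equivalence is easy to verify numerically. A sketch with hypothetical simulated data (numpy only; all numbers invented): subtracting $Y_1$ from the response only shifts the $Y_1$ coefficient by exactly 1, leaving every other coefficient untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical data: baseline weight Y1, randomized treatment T, covariate X
y1 = rng.normal(180, 20, n)
t = rng.integers(0, 2, n).astype(float)
x = rng.normal(0, 1, n)
y2 = y1 - 8 * t + 3 * x + rng.normal(0, 5, n)   # "true" treatment effect: -8

def ols(y, cols):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_change = ols(y2 - y1, [t, x, y1])   # change score model with baseline
b_levels = ols(y2, [t, x, y1])        # levels model

# Identical T and X coefficients; the Y1 coefficient shifts by exactly 1
print(np.allclose(b_change[1:3], b_levels[1:3]))   # True
print(np.allclose(b_change[3] + 1, b_levels[3]))   # True
```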
So some will argue (as Felix has in this thread, and as Bruce Weaver has done in some discussions on the SPSS Google group) that since the models result in the same estimated treatment effect, it does not matter which one you choose. I disagree: because the baseline covariate in the change score model cannot be interpreted, you should never include the baseline as a covariate (regardless of whether the estimated treatment effect is the same or not). So this brings up another question: what is the point of using change scores as dependent variables? As Felix also noted, the model using the change score as the dependent variable while excluding the baseline as a covariate is different from the model using the levels. To clarify, the following models will give different treatment effects (especially in the case that the treatment is correlated with baseline):
$$\begin{align*} Change\ Score\ Model\ Without\ Baseline&: Y_2 - Y_1 = \beta_1T + \beta_2X + e \\
Levels\ Model&: Y_2 = \beta_1T + \beta_2X + \beta_3Y_1 + e \end{align*}$$
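The divergence between these two models when treatment is correlated with baseline is also easy to demonstrate by simulation. A hypothetical sketch (all numbers invented; the "true" treatment effect is -8, with regression to the mean so the baseline's true coefficient on the level is not 1):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical non-randomized data: heavier people select into treatment,
# and the baseline enters the outcome with coefficient 0.5, not 1
y1 = rng.normal(180, 20, n)
t = (y1 + rng.normal(0, 10, n) > 185).astype(float)
y2 = 100 + 0.5 * y1 - 8 * t + rng.normal(0, 5, n)   # "true" treatment effect: -8

def ols(y, cols):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_change = ols(y2 - y1, [t])     # change score model without baseline
b_levels = ols(y2, [t, y1])      # levels model with baseline

# The levels model recovers roughly -8; the unadjusted change score model
# is badly biased because treatment is correlated with baseline
print(round(b_levels[1], 1), round(b_change[1], 1))
```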
This has been noted in prior literature as "Lord's Paradox". So which model is right? Well, in the case of randomized experiments, I would say the Levels model is preferable (although if you did a good job randomizing, the average treatment effect should be very close between the models). Others have noted reasons why the levels model is preferable: Charlie's answer makes a good point in that you can estimate interaction effects with the baseline in the levels model (but you can't in the change score model). Whuber, in a response to a very similar question, demonstrates how change scores induce correlations between different treatments.
In situations in which the treatment is not randomly assigned, the model using change scores as the dependent variable should be given more consideration. The main benefit of the change score model is that any time-invariant predictors of the outcome are controlled for. So say in the above formulation $X$ is constant over time (for example, a genetic predisposition to be at a certain weight), $X$ is correlated with whether an individual chooses to exercise, and $X$ is unobserved. In that instance, the change score model is preferable. Also, in instances in which selection into treatment is correlated with the baseline value, the change score model may be preferable. Paul Allison, in his paper Change Scores as Dependent Variables in Regression Analysis, gives these same examples (and largely influenced my perspective on the topic, so I highly suggest reading it).
This isn't to say that change scores are always preferable in non-randomized settings. In the case that you expect the baseline to have an actual causal effect on the post weight, you should use the levels model. In the case that you expect the baseline to have a causal effect, and the selection into treatment is correlated with the baseline, the treatment effect is confounded with the baseline effect.
I've ignored the note by Charlie that the logarithm of the weight could be used as the dependent variable. While I don't doubt that could be a possibility, it is somewhat of a non sequitur to the initial question. Another question has discussed when it is appropriate to use the logarithm of a variable (and those considerations still apply in this case). There is probably prior literature on the subject that would help guide you as to whether using the logged weight is appropriate as well.
Citation
Allison, Paul D. 1990. Change scores as dependent variables in regression analysis. Sociological Methodology 20: 93-114. Public PDF version.
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
Andy's answer seems to be the economist's view of things. It is accepted practice in clinical trials to almost always adjust for the baseline version of the response variable, to greatly increase power. Since we condition on the baseline variables there is no 'error term' for them to be confused with the overall error term. The only problem would be if measurement errors in the baseline covariate are confounded with another X, distorting that other X's effect. The overall preferred method is to adjust for baseline and to model the response variable, not computing the change. One reason for this is that change is heavily dependent on getting the transformation of Y correct, and that change does not apply to regression models in general. E.g. if Y is ordinal, the difference between two ordinal variables is no longer ordinal. Concerning logging or not logging, that just depends on model and overall residual distribution assumptions.
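The power gain from baseline adjustment is easy to see by simulation. A hypothetical sketch (all numbers invented): in a randomized trial with a strongly prognostic baseline, the baseline-adjusted estimator of the treatment effect has a much smaller sampling spread than the unadjusted one.

```python
import numpy as np

rng = np.random.default_rng(4)

def trial(adjust, reps=500, n=200):
    """Hypothetical simulated RCT: spread of treatment estimates across reps."""
    est = []
    for _ in range(reps):
        y0 = rng.normal(0, 1, n)                              # baseline response
        t = rng.permutation(np.repeat([0.0, 1.0], n // 2))    # randomized treatment
        y1 = 0.8 * y0 + 0.5 * t + rng.normal(0, 0.6, n)       # prognostic baseline
        cols = [np.ones(n), t] + ([y0] if adjust else [])
        X = np.column_stack(cols)
        est.append(np.linalg.lstsq(X, y1, rcond=None)[0][1])
    return float(np.std(est))

# Conditioning on baseline shrinks the sampling spread of the treatment estimate
print(trial(adjust=True) < trial(adjust=False))   # True
```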
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
EDIT: Andy W's argument convinced me to drop Model C. I added another possibility: analyzing change with Random Coefficient Models (aka Multilevel Models or Mixed Effect Models).
There has been a lot of scientific debate about the use of difference scores. My favorite texts are Rogosa (1982, [1]) and Fitzmaurice, Laird, & Ware (2004, [2]).
In general, you have the following possibilities for analyzing your data:
A) Only take the interindividual difference score (the change score)
B) Treat the post measurement as DV and control it for the baseline
C) Take the difference score as DV and control it for the baseline (that's the model you suggested). Due to Andy W's arguments, I dropped this alternative.
D) Use a multilevel / mixed-effect-model approach, where a regression line is modeled for each participant and participants are treated as Level-2 units.
Models A and B can produce very different results if the baseline is correlated with the change score (e.g., heavier people have more weight loss), and/or treatment assignment is correlated with the baseline.
If you want to know more about these issues, see the cited papers, or here and here.
There has also been a recent simulation study [3] which empirically compares the conditions under which A or B are preferable.
For completely balanced designs with no missing values, Model D should be equivalent to Model A. However, it gives you more information about between-person variability, it is easily extended to more measurement points, and it has nice properties in the presence of unbalanced data and/or missing values.
As a bottom line: In your case, I would analyze post-measures controlled for baseline (Model B).
[1] Rogosa, D., Brandt, D., & Zimowski, M. (1982). A growth curve approach to the measurement of change. Psychological Bulletin, 92, 726-748.
[2] Fitzmaurice, G. M., Laird, N. M., & Ware, J. H. (2004). Applied longitudinal analysis. Hoboken, NJ: Wiley.
[3] Petscher, Y., & Schatschneider, C., 2011. A Simulation Study on the Performance of the Simple Difference and Covariance‐Adjusted Scores in Randomized Experimental Designs. Journal of Educational Measurement, 48, 31-43.
|
5,508
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
|
We can alter @ocram's reasoning slightly to have
$$\begin{align*} \text{E}[w_1 - w_0 \mid X, w_0] &= \beta_0 + x \beta + w_0 \gamma \\
\text{E}[w_1 \mid X, w_0] &= \beta_0 + x \beta + w_0 (\gamma + 1) \end{align*} $$
So, if this is the right model, saying that the difference depends upon the weight implies that the end value depends upon the initial value with a coefficient that could be anything. Running a regression of the difference on $x$ and $w_0$ or the end weight on the same variables should give you the same coefficients on everything but $w_0$. But, if this model isn't exactly correct, these regressions will give different results on the other coefficients as well.
Note that this setup implies that the starting weight predicts the difference in weights, not that it modifies the impact of treatment. Modeling the latter would require an interaction term, perhaps
$$\begin{align*} \text{E}[w_1 - w_0 \mid X, w_0] &= \beta_0 + (x * w_0) \beta + w_0 \gamma. \end{align*} $$
Another approach would be to calculate
$$\begin{align*} \log (w_1) - \log (w_0) \approx r; \end{align*}$$
here, $r$ is the growth rate of weight. This could be your outcome. Your coefficients on $x$ would be telling you how these predictors are related to proportion changes in weight. This "controls for" initial weight by saying that, for example, an exercise regime that reduces weight by 10% (a coefficient of 0.1 multiplied by 100%) for someone that weighs 130 pounds reduces weight by 13 pounds, while the program reduces the weight of a 200 pound participant by 20 pounds. In this case, you might not need to include the initial weight (or its log) on the right hand side.
An interaction term may still be necessary if you believe that the impact of the program depends upon the starting weight. If you use $w_0$ in the interaction term, then the program would be associated with a $w_0 \beta_1$ change in the growth rate of weight. Every pound heavier that a person was at the start of the program leads to a $\beta_1$ increase in the change in the growth rate (this is the cross-partial derivative of the expected value with respect to both treatment and starting weight).
If you use $\log (w_0)$ in the interaction term, the impact of the program increases by $\beta_1/w_0$ for each additional pound heavier the participant was at the start of the program.
As you can see, the cross-partials on interaction terms can become a bit tricky to interpret, but they may capture an impact that you are interested in.
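The growth-rate approach can be sketched in a few lines. This is an illustrative simulation (names and numbers invented), where the program reduces weight by a fixed proportion regardless of starting weight:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, n)
w0 = rng.lognormal(np.log(170), 0.15, n)                 # starting weights
# program reduces weight by roughly 10% regardless of starting weight
w1 = w0 * np.exp(-0.10 * treat + rng.normal(0, 0.02, n))

r = np.log(w1) - np.log(w0)                              # approximate growth rate
X = np.column_stack([np.ones(n), treat])
coef = np.linalg.lstsq(X, r, rcond=None)[0]
print(coef[1])   # close to -0.10, i.e. about a 10% reduction for the treated
```

Note that initial weight does not appear on the right-hand side here, because the outcome is already a proportional change.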
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
|
We can alter @ocram's reasoning slightly to have
$$\begin{align*} \text{E}[w_1 - w_0 \mid X, w_0] &= \beta_0 + x \beta + w_0 \gamma \\
\text{E}[w_1 \mid X, w_0] &= \beta_0 + x \beta + w_0 (\gamma + 1
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
We can alter @ocram's reasoning slightly to have
$$\begin{align*} \text{E}[w_1 - w_0 \mid X, w_0] &= \beta_0 + x \beta + w_0 \gamma \\
\text{E}[w_1 \mid X, w_0] &= \beta_0 + x \beta + w_0 (\gamma + 1) \end{align*} $$
So, if this is the right model, saying that the difference depends upon the weight implies that the end value depends upon the initial value with a coefficient that could be anything. Running a regression of the difference on $x$ and $w_0$ or the end weight on the same variables should give you the same coefficients on everything but $w_0$. But, if this model isn't exactly correct, these regressions will give different results on the other coefficients as well.
Note that this setup implies that the starting weight predicts the difference in weights, not that it modifies the impact of treatment. Modeling the latter would require an interaction term, perhaps
$$\begin{align*} \text{E}[w_1 - w_0 \mid X, w_0] &= \beta_0 + (x * w_0) \beta + w_0 \gamma. \end{align*} $$
Another approach would be to calculate
$$\begin{align*} \log (w_1) - \log (w_0) \approx r; \end{align*}$$
here, $r$ is the growth rate of weight. This could be your outcome. Your coefficients on $x$ would be telling you how these predictors are related to proportion changes in weight. This "controls for" initial weight by saying that, for example, an exercise regime that reduces weight by 10% (a coefficient of 0.1 multiplied by 100%) for someone that weighs 130 pounds reduces weight by 13 pounds, while the program reduces the weight of a 200 pound participant by 20 pounds. In this case, you might not need to include the initial weight (or its log) on the right hand side.
An interaction term may still be necessary if you believe that the impact of the program depends upon the starting weight. If you use $w_0$ in the interaction term, then the program would be associated with a $w_0 \beta_1$ change in the growth rate of weight. Every pound heavier that a person was at the start of the program leads to a $\beta_1$ increase in the change in the growth rate (this is the cross-partial derivative of the expected value with respect to both treatment and starting weight).
If you use $\log (w_0)$ in the interaction term, the impact of the program increases by $\beta_1/w_0$ for each additional pound heavier the participant was at the start of the program.
As you can see, the cross-partials on interaction terms can become a bit tricky to interpret, but they may capture an impact that you are interested in.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
We can alter @ocram's reasoning slightly to have
$$\begin{align*} \text{E}[w_1 - w_0 \mid X, w_0] &= \beta_0 + x \beta + w_0 \gamma \\
\text{E}[w_1 \mid X, w_0] &= \beta_0 + x \beta + w_0 (\gamma + 1
|
5,509
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
|
Glymour et al. (2005) addressed using baseline adjustment when analyzing a change score. If change in health status preceded baseline assessment or there is large measurement error in the dependent variable, they find that a bias can arise if the regression model using change score as the dependent variable includes a baseline covariate. Frank Harrell's answer "The only problem would be if measurement errors in the baseline covariate are confounded with another X, distorting that other X's effect." may be reflecting the same bias as Glymour addresses.
Glymour, M. M., et al. (2005). "When is Baseline Adjustment Useful in the Analysis of Change? An Example with Education and Cognitive Change." American Journal of Epidemiology, 162:267-278.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
|
Glymour et al. (2005) addressed using baseline adjustment when analyzing a change score. If change in health status preceded baseline assessment or there is large measurement error in the dependent va
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
Glymour et al. (2005) addressed using baseline adjustment when analyzing a change score. If change in health status preceded baseline assessment or there is large measurement error in the dependent variable, they find that a bias can arise if the regression model using change score as the dependent variable includes a baseline covariate. Frank Harrell's answer "The only problem would be if measurement errors in the baseline covariate are confounded with another X, distorting that other X's effect." may be reflecting the same bias as Glymour addresses.
Glymour, M. M., et al. (2005). "When is Baseline Adjustment Useful in the Analysis of Change? An Example with Education and Cognitive Change." American Journal of Epidemiology, 162:267-278.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
Glymour et al. (2005) addressed using baseline adjustment when analyzing a change score. If change in health status preceded baseline assessment or there is large measurement error in the dependent va
|
5,510
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
|
See Josh Angrist on exactly this question: http://www.mostlyharmlesseconometrics.com/2009/10/adding-lagged-dependent-vars-to-differenced-models/. He comes down largely against including the lagged DV in your model. There is nothing in his response that is not in the responses above, but a further succinct answer to your question may help.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
|
See Josh Angrist on exactly this question: http://www.mostlyharmlesseconometrics.com/2009/10/adding-lagged-dependent-vars-to-differenced-models/. He comes down largely against including the lagged DV
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
See Josh Angrist on exactly this question: http://www.mostlyharmlesseconometrics.com/2009/10/adding-lagged-dependent-vars-to-differenced-models/. He comes down largely against including the lagged DV in your model. There is nothing in his response that is not in the responses above, but a further succinct answer to your question may help.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
See Josh Angrist on exactly this question: http://www.mostlyharmlesseconometrics.com/2009/10/adding-lagged-dependent-vars-to-differenced-models/. He comes down largely against including the lagged DV
|
5,511
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
|
Ocram is not correct. The difference in weights does not take the initial weight into account. Specifically, the initial weight is kind of taken out by subtracting it from the end weight.
Therefore, I would argue that it does not violate any assumptions if you control for the initial weight.
(The same logic applies if you take the difference of the BMI and the initial BMI.)
Update
After Andy W's critique, let me be more formal on why I am right and Ocram wrong (at least from my point of view).
There is some absolute level of weight each person has (e.g., around 100 pounds as opposed to 200 pounds). Let $a_w$ be this absolute weight.
Then, the initial weight can be formalized as $i_w = a_w$ and the end weight as $e_w = a_w + \Delta_w$
The DV the OP wants to use is thus $\Delta_w = e_w - i_w = (a_w + \Delta_w) - a_w = \Delta_w$
In other words, the absolute level of weight (formalized as $a_w$) drops out from the equation representing the DV and, hence, does not contaminate it (which disagrees with Andy W's claim).
If you want to take it into account you need to incorporate it into your model separately (as an ordinary parameter and/or as an interaction term).
Obviously, this same logic applies to $\Delta_{BMI}$ and can easily be accommodated to proportions, where one would say, e.g.: $e_w = a_w * prop_{\Delta w}$
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
|
Ocram is not correct. The difference in weights does not take the initial weight into account. Specifically, the initial weight is kind of taken out by subtracting it from the end weight.
Therefore, I
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
Ocram is not correct. The difference in weights does not take the initial weight into account. Specifically, the initial weight is kind of taken out by subtracting it from the end weight.
Therefore, I would argue that it does not violate any assumptions if you control for the initial weight.
(The same logic applies if you take the difference of the BMI and the initial BMI.)
Update
After Andy W's critique, let me be more formal on why I am right and Ocram wrong (at least from my point of view).
There is some absolute level of weight each person has (e.g., around 100 pounds as opposed to 200 pounds). Let $a_w$ be this absolute weight.
Then, the initial weight can be formalized as $i_w = a_w$ and the end weight as $e_w = a_w + \Delta_w$
The DV the OP wants to use is thus $\Delta_w = e_w - i_w = (a_w + \Delta_w) - a_w = \Delta_w$
In other words, the absolute level of weight (formalized as $a_w$) drops out from the equation representing the DV and, hence, does not contaminate it (which disagrees with Andy W's claim).
If you want to take it into account you need to incorporate it into your model separately (as an ordinary parameter and/or as an interaction term).
Obviously, this same logic applies to $\Delta_{BMI}$ and can easily be accommodated to proportions, where one would say, e.g.: $e_w = a_w * prop_{\Delta w}$
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
Ocram is not correct. The difference in weights does not take the initial weight into account. Specifically, the initial weight is kind of taken out by subtracting it from the end weight.
Therefore, I
|
5,512
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
|
Just an addition to the entire discussion (I did not know where to put a comment, so I put it as an answer): if you plan to do a longitudinal analysis in the pharmaceutical industry, then the adjustment for baseline is mentioned by the major guidelines (FDA and EMA) also in case of the analysis of change (change score). This topic gives more details and references to the industry guidelines: Is there any formal guideline that would indicate the necessity for adjustment for baseline when analysing change from baseline? and also this blog post: http://onbiostatistics.blogspot.com/2019/05/fda-and-ema-guidance-on-adjusting-for.html
Also:
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
|
Just an addition to the entire discussion (I did not know where to put a comment, so I put it as an answer): if you plan to do a longitudinal analysis in the pharmaceutical industry, then the adjustme
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
Just an addition to the entire discussion (I did not know where to put a comment, so I put it as an answer): if you plan to do a longitudinal analysis in the pharmaceutical industry, then the adjustment for baseline is mentioned by the major guidelines (FDA and EMA) also in case of the analysis of change (change score). This topic gives more details and references to the industry guidelines: Is there any formal guideline that would indicate the necessity for adjustment for baseline when analysing change from baseline? and also this blog post: http://onbiostatistics.blogspot.com/2019/05/fda-and-ema-guidance-on-adjusting-for.html
Also:
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
Just an addition to the entire discussion (I did not know where to put a comment, so I put it as an answer): if you plan to do a longitudinal analysis in the pharmaceutical industry, then the adjustme
|
5,513
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
|
Observe that
$\underbrace{\textrm{end weight} - \textrm{initial weight}}_{Y} = \beta_{0} + \beta^{T}x$
is equivalent to
$\textrm{end weight} = \textrm{initial weight} + \beta_{0} + \beta^{T}x$
In words, using the change in weight (instead of the end weight itself) as DV already accounts for the initial weight.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
|
Observe that
$\underbrace{\textrm{end weight} - \textrm{initial weight}}_{Y} = \beta_{0} + \beta^{T}x$
is equivalent to
$\textrm{end weight} = \textrm{initial weight} + \beta_{0} + \beta^{T}x$
In wor
|
Is it valid to include a baseline measure as control variable when testing the effect of an independent variable on change scores?
Observe that
$\underbrace{\textrm{end weight} - \textrm{initial weight}}_{Y} = \beta_{0} + \beta^{T}x$
is equivalent to
$\textrm{end weight} = \textrm{initial weight} + \beta_{0} + \beta^{T}x$
In words, using the change in weight (instead of the end weight itself) as DV already accounts for the initial weight.
|
Is it valid to include a baseline measure as control variable when testing the effect of an independ
Observe that
$\underbrace{\textrm{end weight} - \textrm{initial weight}}_{Y} = \beta_{0} + \beta^{T}x$
is equivalent to
$\textrm{end weight} = \textrm{initial weight} + \beta_{0} + \beta^{T}x$
In wor
|
5,514
|
How to interpret mean of Silhouette plot?
|
Sergey's answer contains the critical point, which is that the silhouette coefficient quantifies the quality of clustering achieved -- so you should select the number of clusters that maximizes the silhouette coefficient.
The long answer is that the best way to evaluate the results of your clustering efforts is to start by actually examining -- human inspection -- the clusters formed and making a determination based on an understanding of what the data represents, what a cluster represents, and what the clustering is intended to achieve.
There are numerous quantitative methods of evaluating clustering results which should be used as tools, with full understanding of the limitations. They tend to be fairly intuitive in nature, and thus have a natural appeal (like clustering problems in general).
Examples: cluster mass / radius / density, cohesion or separation between clusters, etc. These concepts are often combined, for example, the ratio of separation to cohesion should be large if clustering was successful.
The way clustering is measured is informed by the type of clustering algorithms used. For example, measuring quality of a complete clustering algorithm (in which all points are put into clusters) can be very different from measuring quality of a threshold-based fuzzy clustering algorithm (in which some point might be left un-clustered as 'noise').
The silhouette coefficient is one such measure. It works as follows:
For each point p, first find the average distance between p and all other points in the same cluster (this is a measure of cohesion, call it A). Then find the average distance between p and all points in the nearest cluster (this is a measure of separation from the closest other cluster, call it B). The silhouette coefficient for p is defined as the difference between B and A divided by the greater of the two (max(A,B)).
We evaluate the silhouette coefficient of each point and from this we can obtain the 'overall' average silhouette coefficient.
Intuitively, we are trying to measure the space between clusters. If cluster cohesion is good (A is small) and cluster separation is good (B is large), the numerator will be large, etc.
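The procedure described above can be written out directly. Here is a minimal sketch in plain NumPy (using Euclidean distances, and assuming every cluster has at least two points):

```python
import numpy as np

def silhouette_values(X, labels):
    """Per-point silhouette coefficients, (B - A) / max(A, B)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise distances
    s = np.empty(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        A = D[i, same].mean()                          # cohesion: mean intra-cluster distance
        B = min(D[i, labels == c].mean()               # separation: nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        s[i] = (B - A) / max(A, B)
    return s

# two well-separated clusters give a mean silhouette close to 1
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
print(silhouette_values(X, np.array([0, 0, 1, 1])).mean())
```

In practice you would compute this for each candidate number of clusters and compare the averages, which is what the plots below summarize.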
I've constructed an example here to demonstrate this graphically.
In these plots the same data is plotted five times; the colors indicate the clusters created by k-means clustering, with k = 1,2,3,4,5. That is, I've forced a clustering algorithm to divide the data into 2 clusters, then 3, and so on, and colored the graph accordingly.
The silhouette plot shows that the silhouette coefficient was highest when k = 3, suggesting that's the optimal number of clusters. In this example we are lucky to be able to visualize the data and we might agree that indeed, three clusters best captures the segmentation of this data set.
If we were unable to visualize the data, perhaps because of higher dimensionality, a silhouette plot would still give us a suggestion. However, I hope my somewhat long-winded answer here also makes the point that this "suggestion" could be very insufficient or just plain wrong in certain scenarios.
|
How to interpret mean of Silhouette plot?
|
Sergey's answer contains the critical point, which is that the silhouette coefficient quantifies the quality of clustering achieved -- so you should select the number of clusters that maximizes the si
|
How to interpret mean of Silhouette plot?
Sergey's answer contains the critical point, which is that the silhouette coefficient quantifies the quality of clustering achieved -- so you should select the number of clusters that maximizes the silhouette coefficient.
The long answer is that the best way to evaluate the results of your clustering efforts is to start by actually examining -- human inspection -- the clusters formed and making a determination based on an understanding of what the data represents, what a cluster represents, and what the clustering is intended to achieve.
There are numerous quantitative methods of evaluating clustering results which should be used as tools, with full understanding of the limitations. They tend to be fairly intuitive in nature, and thus have a natural appeal (like clustering problems in general).
Examples: cluster mass / radius / density, cohesion or separation between clusters, etc. These concepts are often combined, for example, the ratio of separation to cohesion should be large if clustering was successful.
The way clustering is measured is informed by the type of clustering algorithms used. For example, measuring quality of a complete clustering algorithm (in which all points are put into clusters) can be very different from measuring quality of a threshold-based fuzzy clustering algorithm (in which some point might be left un-clustered as 'noise').
The silhouette coefficient is one such measure. It works as follows:
For each point p, first find the average distance between p and all other points in the same cluster (this is a measure of cohesion, call it A). Then find the average distance between p and all points in the nearest cluster (this is a measure of separation from the closest other cluster, call it B). The silhouette coefficient for p is defined as the difference between B and A divided by the greater of the two (max(A,B)).
We evaluate the silhouette coefficient of each point and from this we can obtain the 'overall' average silhouette coefficient.
Intuitively, we are trying to measure the space between clusters. If cluster cohesion is good (A is small) and cluster separation is good (B is large), the numerator will be large, etc.
I've constructed an example here to demonstrate this graphically.
In these plots the same data is plotted five times; the colors indicate the clusters created by k-means clustering, with k = 1,2,3,4,5. That is, I've forced a clustering algorithm to divide the data into 2 clusters, then 3, and so on, and colored the graph accordingly.
The silhouette plot shows that the silhouette coefficient was highest when k = 3, suggesting that's the optimal number of clusters. In this example we are lucky to be able to visualize the data and we might agree that indeed, three clusters best captures the segmentation of this data set.
If we were unable to visualize the data, perhaps because of higher dimensionality, a silhouette plot would still give us a suggestion. However, I hope my somewhat long-winded answer here also makes the point that this "suggestion" could be very insufficient or just plain wrong in certain scenarios.
|
How to interpret mean of Silhouette plot?
Sergey's answer contains the critical point, which is that the silhouette coefficient quantifies the quality of clustering achieved -- so you should select the number of clusters that maximizes the si
|
5,515
|
How to interpret mean of Silhouette plot?
|
I have been looking into the same thing today and found an interpretation here. It makes logical sense but I am not sure if we can blindly apply the interpretation for our datasets. In summary, what that article says is the following:
0.71-1.0: A strong structure has been found
0.51-0.70: A reasonable structure has been found
0.26-0.50: The structure is weak and could be artificial; try additional methods of data analysis.
< 0.25: No substantial structure has been found
However, it seems like we can use the silhouette width to catch outliers. In a document clustering task that I am currently handling, the ones with negative silhouette width are definite outliers (when cross checked with their semantic meaning). I am not sure if this width will improve after removing outliers (again, this makes logical sense but I have not done this myself).
|
How to interpret mean of Silhouette plot?
|
I have been looking into the same thing today and found an interpretation here. It makes logical sense but I am not sure if we can blindly apply the interpretation for our datasets. In summary, what t
|
How to interpret mean of Silhouette plot?
I have been looking into the same thing today and found an interpretation here. It makes logical sense but I am not sure if we can blindly apply the interpretation for our datasets. In summary, what that article says is the following:
0.71-1.0: A strong structure has been found
0.51-0.70: A reasonable structure has been found
0.26-0.50: The structure is weak and could be artificial; try additional methods of data analysis.
< 0.25: No substantial structure has been found
However, it seems like we can use the silhouette width to catch outliers. In a document clustering task that I am currently handling, the ones with negative silhouette width are definite outliers (when cross checked with their semantic meaning). I am not sure if this width will improve after removing outliers (again, this makes logical sense but I have not done this myself).
|
How to interpret mean of Silhouette plot?
I have been looking into the same thing today and found an interpretation here. It makes logical sense but I am not sure if we can blindly apply the interpretation for our datasets. In summary, what t
|
5,516
|
How to interpret mean of Silhouette plot?
|
Take a look at the
Cluster Validity Analysis Platform (CVAP) ToolBox
And some of the materials (links) from CVAP:
Silhouette index (overall average silhouette): a larger silhouette value indicates a better quality of a clustering result [Chen et al. 2002]
N. Bolshakova, F. Azuaje. 2003. Cluster validation techniques for genome expression data, Signal Processing. V.83. N4, P.825-833.
E. Dimitriadou, S. Dolnicar, A. Weingessel. An examination of indexes for determining the Number of Cluster in binary data sets. Psychometrika, 67(1):137-160, 2002.
You can also check this (simple) Tool for estimating the number of clusters
Just take a look at the examples of both toolkits (You can also use other cluster validation techniques)
|
How to interpret mean of Silhouette plot?
|
Take a look at the
Cluster Validity Analysis Platform (CVAP) ToolBox
And some of the materials (links) from CVAP:
Silhouette index (overall average
silhouette) a larger Silhouette value
indicate
|
How to interpret mean of Silhouette plot?
Take a look at the
Cluster Validity Analysis Platform (CVAP) ToolBox
And some of the materials (links) from CVAP:
Silhouette index (overall average silhouette): a larger silhouette value indicates a better quality of a clustering result [Chen et al. 2002]
N. Bolshakova, F. Azuaje. 2003. Cluster validation techniques for genome expression data, Signal Processing. V.83. N4, P.825-833.
E. Dimitriadou, S. Dolnicar, A. Weingessel. An examination of indexes for determining the Number of Cluster in binary data sets. Psychometrika, 67(1):137-160, 2002.
You can also check this (simple) Tool for estimating the number of clusters
Just take a look at the examples of both toolkits (You can also use other cluster validation techniques)
|
How to interpret mean of Silhouette plot?
Take a look at the
Cluster Validity Analysis Platform (CVAP) ToolBox
And some of the materials (links) from CVAP:
Silhouette index (overall average
silhouette) a larger Silhouette value
indicate
|
5,517
|
How to interpret mean of Silhouette plot?
|
If you are trying to select the number of clusters for unsupervised learning then maybe you could try doing something like:
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
They use more than just the silhouette score mean (they use the distribution), but it makes sense. It seems to prefer smaller clusters, but maybe you could try this with some generated data and see if it works?
Alternatively, you can check this paper:
http://www.sciencedirect.com/science/article/pii/0377042787901257
|
How to interpret mean of Silhouette plot?
|
If you are trying to select the number of clusters for unsupervised learning then maybe you could try doing something like-
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_
|
How to interpret mean of Silhouette plot?
If you are trying to select the number of clusters for unsupervised learning then maybe you could try doing something like:
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
They use more than just the silhouette score mean (they use the distribution), but it makes sense. It seems to prefer smaller clusters, but maybe you could try this with some generated data and see if it works?
Alternatively, you can check this paper:
http://www.sciencedirect.com/science/article/pii/0377042787901257
|
How to interpret mean of Silhouette plot?
If you are trying to select the number of clusters for unsupervised learning then maybe you could try doing something like-
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_
|
5,518
|
Does the beta distribution have a conjugate prior?
|
It seems that you already gave up on conjugacy. Just for the record, one thing that I've seen people doing (but don't remember exactly where, sorry) is a reparameterization like this. If $X_1,\dots,X_n$ are conditionally iid, given $\alpha,\beta$, such that $X_i\mid\alpha,\beta\sim\mathrm{Beta}(\alpha,\beta)$, remember that
$$
\mathbb{E}[X_i\mid\alpha,\beta]=\frac{\alpha}{\alpha+\beta} =: \mu
$$
and
$$
\mathrm{Var}[X_i\mid\alpha,\beta] = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} =: \sigma^2 \, .
$$
Hence, you may reparameterize the likelihood in terms of $\mu$ and $\sigma^2$ and use as a prior
$$
\sigma^2\mid\mu \sim \mathrm{U}[0,\mu(1-\mu)] \qquad \qquad \mu\sim\mathrm{U}[0,1] \, .
$$
Now you're ready to compute the posterior and explore it by your favorite computational method.
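As an illustrative sketch of that last step (toy data and a crude grid, not from any particular reference), the posterior under this reparameterization can be explored by brute force. One convenient detail: the uniform prior density $1/\{\mu(1-\mu)\}$ on $\sigma^2$ cancels against the $\mu$-dependent grid spacing for $\sigma^2$, so plain sums over the grid suffice:

```python
import math
import numpy as np

def beta_loglik(x, mu, s2):
    # map (mean, variance) back to (alpha, beta); valid for 0 < s2 < mu(1 - mu)
    nu = mu * (1 - mu) / s2 - 1
    a, b = mu * nu, (1 - mu) * nu
    return sum(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
               + (a - 1) * math.log(xi) + (b - 1) * math.log(1 - xi)
               for xi in x)

x = [0.2, 0.3, 0.25, 0.4, 0.35, 0.3]          # toy observations, sample mean 0.30
mus = np.linspace(0.05, 0.95, 60)
logpost = np.full((60, 60), -np.inf)
for i, mu in enumerate(mus):
    # the sigma^2 grid spans the support (0, mu(1-mu)) of its uniform prior
    for j, s2 in enumerate(np.linspace(1e-4, 0.999 * mu * (1 - mu), 60)):
        logpost[i, j] = beta_loglik(x, mu, s2)
w = np.exp(logpost - logpost.max())
mu_hat = (w.sum(axis=1) * mus).sum() / w.sum()
print(mu_hat)   # posterior mean of mu, near the sample mean of about 0.30
```

For real problems you would of course replace the grid with MCMC or another sampler, but the reparameterization itself is the point here.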
|
Does the beta distribution have a conjugate prior?
|
It seems that you already gave up on conjugacy. Just for the record, one thing that I've seen people doing (but don't remember exactly where, sorry) is a reparameterization like this. If $X_1,\dots,X_
|
Does the beta distribution have a conjugate prior?
It seems that you already gave up on conjugacy. Just for the record, one thing that I've seen people doing (but don't remember exactly where, sorry) is a reparameterization like this. If $X_1,\dots,X_n$ are conditionally iid, given $\alpha,\beta$, such that $X_i\mid\alpha,\beta\sim\mathrm{Beta}(\alpha,\beta)$, remember that
$$
\mathbb{E}[X_i\mid\alpha,\beta]=\frac{\alpha}{\alpha+\beta} =: \mu
$$
and
$$
\mathrm{Var}[X_i\mid\alpha,\beta] = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} =: \sigma^2 \, .
$$
Hence, you may reparameterize the likelihood in terms of $\mu$ and $\sigma^2$ and use as a prior
$$
\sigma^2\mid\mu \sim \mathrm{U}[0,\mu(1-\mu)] \qquad \qquad \mu\sim\mathrm{U}[0,1] \, .
$$
Now you're ready to compute the posterior and explore it by your favorite computational method.
|
Does the beta distribution have a conjugate prior?
It seems that you already gave up on conjugacy. Just for the record, one thing that I've seen people doing (but don't remember exactly where, sorry) is a reparameterization like this. If $X_1,\dots,X_
|
5,519
|
Does the beta distribution have a conjugate prior?
|
Yes, it has a conjugate prior in the exponential family. Consider the three parameter family
$$
\pi(\alpha, \beta \mid a, b, p)
\propto \left\{\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\right\}^p
\exp\left(a\alpha + b\beta \right).
$$
For some values of $(a, b, p)$, this is integrable, although I haven't quite figured out which ones (I believe $p \ge 0$ and $a < 0, b < 0$ should work — $p = 0$ corresponds to independent exponential distributions so that definitely works; moreover, the conjugate update involves incrementing $p$ so this suggests that $p > 0$ works as well).
The problem, and at least part of the reason no one uses it, is that
$$
\int_0^\infty \int_0^\infty \left\{\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\right\}^p
\exp\left(a\alpha + b\beta \right) = ?
$$
i.e. the normalizing constant doesn't have a closed form.
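For what it's worth, one can check numerically that the normalizing constant appears finite for particular hyperparameters; the sketch below uses a crude midpoint rule with the (assumed, illustrative) choice $a=b=-1$, $p=1$:

```python
import math

def kernel(alpha, beta, a, b, p):
    # un-normalized conjugate-prior density from the family above
    log_k = p * (math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta))
    return math.exp(log_k + a * alpha + b * beta)

def riemann_2d(a, b, p, upper=40.0, n=400):
    # midpoint rule on (0, upper]^2; with a, b < 0 the exp(a*alpha + b*beta)
    # factor makes the truncated tail contribution negligible
    h = upper / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += kernel((i + 0.5) * h, (j + 0.5) * h, a, b, p) * h * h
    return total

Z = riemann_2d(a=-1.0, b=-1.0, p=1.0)
print(Z)  # finite and positive: no closed form, but numerically tractable
```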
|
Does the beta distribution have a conjugate prior?
|
Yes, it has a conjugate prior in the exponential family. Consider the three parameter family
$$
\pi(\alpha, \beta \mid a, b, p)
\propto \left\{\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta
|
Does the beta distribution have a conjugate prior?
Yes, it has a conjugate prior in the exponential family. Consider the three parameter family
$$
\pi(\alpha, \beta \mid a, b, p)
\propto \left\{\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\right\}^p
\exp\left(a\alpha + b\beta \right).
$$
For some values of $(a, b, p)$, this is integrable, although I haven't quite figured out which ones (I believe $p \ge 0$ and $a < 0, b < 0$ should work — $p = 0$ corresponds to independent exponential distributions so that definitely works; moreover, the conjugate update involves incrementing $p$ so this suggests that $p > 0$ works as well).
The problem, and at least part of the reason no one uses it, is that
$$
\int_0^\infty \int_0^\infty \left\{\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\right\}^p
\exp\left(a\alpha + b\beta \right) = ?
$$
i.e. the normalizing constant doesn't have a closed form.
|
Does the beta distribution have a conjugate prior?
Yes, it has a conjugate prior in the exponential family. Consider the three parameter family
$$
\pi(\alpha, \beta \mid a, b, p)
\propto \left\{\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta
|
5,520
|
Does the beta distribution have a conjugate prior?
|
In theory there should be a conjugate prior for the beta distribution. This is because
the beta distribution is one of the exponential family distributions, and
in theory it should be possible to derive a prior. See, e.g., wikipedia, D Blei's lecture on exponential families.
However the derivation looks difficult, and to quote A Bouchard-Cote's Exponential Families and Conjugate Priors
An important observation to make is that this recipe does not always yield
a conjugate prior that is computationally tractable.
Consistent with this, there is no prior for the Beta distribution in D Fink's A Compendium of Conjugate Priors.
|
Does the beta distribution have a conjugate prior?
|
In theory there should be a conjugate prior for the beta distribution. This is because
the beta distribution is one of the exponential family distributions, and
in theory it should be possible to der
|
Does the beta distribution have a conjugate prior?
In theory there should be a conjugate prior for the beta distribution. This is because
the beta distribution is one of the exponential family distributions, and
in theory it should be possible to derive a prior. See, e.g., wikipedia, D Blei's lecture on exponential families.
However the derivation looks difficult, and to quote A Bouchard-Cote's Exponential Families and Conjugate Priors
An important observation to make is that this recipe does not always yield
a conjugate prior that is computationally tractable.
Consistent with this, there is no prior for the Beta distribution in D Fink's A Compendium of Conjugate Priors.
|
Does the beta distribution have a conjugate prior?
In theory there should be a conjugate prior for the beta distribution. This is because
the beta distribution is one of the exponential family distributions, and
in theory it should be possible to der
|
5,521
|
Does the beta distribution have a conjugate prior?
|
Robert and Casella (RC) happen to describe the family of conjugate priors of the beta distribution in Example 3.6 (p 71 - 75) of their book, Introducing Monte Carlo Methods in R, Springer, 2010. However, they quote the result without citing a source.
Added in response to gung's request for details. RC state that for distribution $B(\alpha, \beta)$, the conjugate prior is "... of the form
$$ \pi(\alpha,\beta) \propto \Big\{ \frac{\Gamma(\alpha+\beta)} {\Gamma(\alpha)\Gamma(\beta)} \Big\} ^{\lambda} x_0^{\alpha} y_0^{\beta} $$
where $\{\lambda, x_0, y_0\}$ are hyperparameters, since the posterior is then equal to
$$ \pi(\alpha,\beta \vert x) \propto \Big\{ \frac{\Gamma(\alpha+\beta)} {\Gamma(\alpha)\Gamma(\beta)} \Big\} ^{\lambda} (xx_0)^{\alpha} ((1-x)y_0)^{\beta}." $$
The remainder of the example concerns importance sampling from $\pi(\alpha,\beta \vert x)$ in order to compute the marginal likelihood of $x$.
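As a rough illustration of that last step (not RC's actual choice of instrumental distribution), here is a sketch that importance-samples the un-normalized $\pi(\alpha,\beta \vert x)$ with independent exponential proposals; $x$, $\lambda$, $x_0$, $y_0$ are arbitrary illustrative values:

```python
import math, random

random.seed(0)

def log_post_kernel(alpha, beta, x, lam=1.0, x0=0.5, y0=0.5):
    # un-normalized pi(alpha, beta | x) from the formula quoted above;
    # lam, x0, y0 are made-up hyperparameter values for illustration
    return (lam * (math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta))
            + alpha * math.log(x * x0) + beta * math.log((1 - x) * y0))

def marginal_estimate(x, n=20_000, rate=1.0):
    # importance sampling with independent Exp(rate) proposals for alpha, beta
    total = 0.0
    for _ in range(n):
        a = random.expovariate(rate)
        b = random.expovariate(rate)
        log_q = 2 * math.log(rate) - rate * (a + b)   # proposal log density
        total += math.exp(log_post_kernel(a, b, x) - log_q)
    return total / n  # estimate of the kernel's normalizing constant

est = marginal_estimate(0.3)
print(est)
```

A better-matched instrumental distribution (as RC discuss) would reduce the variance of the weights considerably.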
|
Does the beta distribution have a conjugate prior?
|
Robert and Casella (RC) happen to describe the family of conjugate priors of the beta distribution in Example 3.6 (p 71 - 75) of their book, Introducing Monte Carlo Methods in R, Springer, 2010. Howev
|
Does the beta distribution have a conjugate prior?
Robert and Casella (RC) happen to describe the family of conjugate priors of the beta distribution in Example 3.6 (p 71 - 75) of their book, Introducing Monte Carlo Methods in R, Springer, 2010. However, they quote the result without citing a source.
Added in response to gung's request for details. RC state that for distribution $B(\alpha, \beta)$, the conjugate prior is "... of the form
$$ \pi(\alpha,\beta) \propto \Big\{ \frac{\Gamma(\alpha+\beta)} {\Gamma(\alpha)\Gamma(\beta)} \Big\} ^{\lambda} x_0^{\alpha} y_0^{\beta} $$
where $\{\lambda, x_0, y_0\}$ are hyperparameters, since the posterior is then equal to
$$ \pi(\alpha,\beta \vert x) \propto \Big\{ \frac{\Gamma(\alpha+\beta)} {\Gamma(\alpha)\Gamma(\beta)} \Big\} ^{\lambda} (xx_0)^{\alpha} ((1-x)y_0)^{\beta}." $$
The remainder of the example concerns importance sampling from $\pi(\alpha,\beta \vert x)$ in order to compute the marginal likelihood of $x$.
|
Does the beta distribution have a conjugate prior?
Robert and Casella (RC) happen to describe the family of conjugate priors of the beta distribution in Example 3.6 (p 71 - 75) of their book, Introducing Monte Carlo Methods in R, Springer, 2010. Howev
|
5,522
|
Does the beta distribution have a conjugate prior?
|
I do not believe there is a "standard" (i.e., exponential family) distribution that is the conjugate prior for the beta distribution. However, if one does exist it would have to be a bivariate distribution.
|
Does the beta distribution have a conjugate prior?
|
I do not believe there is a "standard" (i.e., exponential family) distribution that is the conjugate prior for the beta distribution. However, if one does exist it would have to be a bivariate distri
|
Does the beta distribution have a conjugate prior?
I do not believe there is a "standard" (i.e., exponential family) distribution that is the conjugate prior for the beta distribution. However, if one does exist it would have to be a bivariate distribution.
|
Does the beta distribution have a conjugate prior?
I do not believe there is a "standard" (i.e., exponential family) distribution that is the conjugate prior for the beta distribution. However, if one does exist it would have to be a bivariate distri
|
5,523
|
Lift measure in data mining
|
I'll give an example of how "lift" is useful...
Imagine you are running a direct mail campaign where you mail customers an offer in the hopes they respond. Historical data shows that when you mail your customer base completely at random about 8% of them respond to the mailing (i.e. they come in and shop with the offer). So, if you mail 1,000 customers you can expect 80 responders.
Now, you decide to fit a logistic regression model to your historical data to find patterns that are predictive of whether a customer is likely to respond to a mailing. Using the logistic regression model each customer is assigned a probability of responding and you can assess the accuracy because you know whether they actually responded. Once each customer is assigned their probability, you rank them from highest to lowest scoring customer. Then you could generate some "lift" graphics like these:
Ignore the top chart for now. The bottom chart is saying that after we sort the customers based on their probability of responding (high to low), and then break them up into ten equal bins, the response rate in bin #1 (the top 10% of customers) is 29% vs 8% for random customers, for a lift of 29/8 = 3.63. By the time we get to the scored customers in the 4th bin, we have captured so many responders in the previous three bins that the response rate is lower than what we would expect mailing people at random.
Looking at the top chart now, what this says is that if we use the probability scores on customers we can get 60% of the total responders we'd get mailing randomly by only mailing the top 30% of scored customers. That is, using the model we can get 60% of the expected profit for 30% of the mail cost by only mailing the top 30% of scored customers, and this is what lift really refers to.
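A small simulated sketch of the decile computation behind such charts (the scores and response rates here are made up, so the lift values won't match the figures described above):

```python
import random

random.seed(1)

# hypothetical scored customer base: higher score -> higher response chance
customers = []
for _ in range(1000):
    score = random.random()
    responded = random.random() < 0.02 + 0.12 * score   # ~8% overall rate
    customers.append((score, responded))

customers.sort(key=lambda c: c[0], reverse=True)        # best prospects first
overall_rate = sum(r for _, r in customers) / len(customers)

for d in range(10):                                     # ten equal bins
    bin_ = customers[d * 100:(d + 1) * 100]
    rate = sum(r for _, r in bin_) / len(bin_)
    print(f"decile {d + 1}: response {rate:.1%}, lift {rate / overall_rate:.2f}")
```

With real data you would use the model's predicted probabilities in place of the simulated scores.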
|
Lift measure in data mining
|
I'll give an example of how "lift" is useful...
Imagine you are running a direct mail campaign where you mail customers an offer in the hopes they respond. Historical data shows that when you mail you
|
Lift measure in data mining
I'll give an example of how "lift" is useful...
Imagine you are running a direct mail campaign where you mail customers an offer in the hopes they respond. Historical data shows that when you mail your customer base completely at random about 8% of them respond to the mailing (i.e. they come in and shop with the offer). So, if you mail 1,000 customers you can expect 80 responders.
Now, you decide to fit a logistic regression model to your historical data to find patterns that are predictive of whether a customer is likely to respond to a mailing. Using the logistic regression model each customer is assigned a probability of responding and you can assess the accuracy because you know whether they actually responded. Once each customer is assigned their probability, you rank them from highest to lowest scoring customer. Then you could generate some "lift" graphics like these:
Ignore the top chart for now. The bottom chart is saying that after we sort the customers based on their probability of responding (high to low), and then break them up into ten equal bins, the response rate in bin #1 (the top 10% of customers) is 29% vs 8% for random customers, for a lift of 29/8 = 3.63. By the time we get to the scored customers in the 4th bin, we have captured so many responders in the previous three bins that the response rate is lower than what we would expect mailing people at random.
Looking at the top chart now, what this says is that if we use the probability scores on customers we can get 60% of the total responders we'd get mailing randomly by only mailing the top 30% of scored customers. That is, using the model we can get 60% of the expected profit for 30% of the mail cost by only mailing the top 30% of scored customers, and this is what lift really refers to.
|
Lift measure in data mining
I'll give an example of how "lift" is useful...
Imagine you are running a direct mail campaign where you mail customers an offer in the hopes they respond. Historical data shows that when you mail you
|
5,524
|
Lift measure in data mining
|
Lift charts represent the ratio between the response of a model and the response in the absence of that model. Typically, the X axis shows the percentage of cases and the Y axis shows the number of times the response is better. For example, a model with lift = 2 at the 10% point means:
Without any model, taking 10% of the population (in no particular order, since there is no model) would capture 10% of the total
population with y=1.
With the model we capture 2 times this proportion, i.e., we expect to capture 20% of the total population with y=1. In the chart, the X axis represents data ordered by the prediction; the first 10% is the top 10% of predictions.
|
Lift measure in data mining
|
Lift charts represent the ratio between the response of a model vs the absence of that model. Typically, it's represented by the percentage of cases in the X and the number of times the response is be
|
Lift measure in data mining
Lift charts represent the ratio between the response of a model and the response in the absence of that model. Typically, the X axis shows the percentage of cases and the Y axis shows the number of times the response is better. For example, a model with lift = 2 at the 10% point means:
Without any model, taking 10% of the population (in no particular order, since there is no model) would capture 10% of the total
population with y=1.
With the model we capture 2 times this proportion, i.e., we expect to capture 20% of the total population with y=1. In the chart, the X axis represents data ordered by the prediction; the first 10% is the top 10% of predictions.
|
Lift measure in data mining
Lift charts represent the ratio between the response of a model vs the absence of that model. Typically, it's represented by the percentage of cases in the X and the number of times the response is be
|
5,525
|
Lift measure in data mining
|
Lift is nothing but the ratio of Confidence to Expected Confidence. In the area of association rules - "A lift ratio larger than 1.0 implies that the relationship between the antecedent and the consequent is more significant than would be expected if the two sets were independent. The larger the lift ratio, the more significant the association."
For Example-
if a supermarket database has 100,000 point-of-sale transactions, out of which 2,000 include both items A and B, and 800 of these include item C, the association rule "If A and B are purchased, then C is purchased on the same trip," has a support of 800 transactions (alternatively 0.8% = 800/100,000), and a confidence of 40% (=800/2,000). One way to think of support is that it is the probability that a randomly selected transaction from the database will contain all items in the antecedent and the consequent, whereas the confidence is the conditional probability that a randomly selected transaction will include all the items in the consequent, given that the transaction includes all the items in the antecedent.
Using the above example, expected Confidence, in this case, means, "confidence, if buying A and B does not enhance the probability of buying C." It is the number of transactions that include the consequent divided by the total number of transactions. Suppose the total number of transactions for C is 5,000. Thus Expected Confidence is 5,000/100,000 = 5%. For the supermarket example the Lift = Confidence/Expected Confidence = 40%/5% = 8. Hence, Lift is a value that gives us information about the increase in the probability of the then (consequent) given the if (antecedent) part.
here's the link to the source article
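The arithmetic above in a few lines of Python:

```python
# reproducing the supermarket numbers from the example above
n_transactions = 100_000
n_antecedent   = 2_000    # contain both A and B
n_rule         = 800      # contain A, B and C
n_consequent   = 5_000    # contain C

support             = n_rule / n_transactions           # 0.8%
confidence          = n_rule / n_antecedent             # 40%
expected_confidence = n_consequent / n_transactions     # 5%
lift                = confidence / expected_confidence  # 8

print(f"support={support:.3%} confidence={confidence:.0%} "
      f"expected={expected_confidence:.0%} lift={lift:g}")
```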
|
Lift measure in data mining
|
Lift is nothing but the ratio of Confidence to Expected Confidence. In the area of association rules - "A lift ratio larger than 1.0 implies that the relationship between the antecedent and the conse
|
Lift measure in data mining
Lift is nothing but the ratio of Confidence to Expected Confidence. In the area of association rules - "A lift ratio larger than 1.0 implies that the relationship between the antecedent and the consequent is more significant than would be expected if the two sets were independent. The larger the lift ratio, the more significant the association."
For Example-
if a supermarket database has 100,000 point-of-sale transactions, out of which 2,000 include both items A and B, and 800 of these include item C, the association rule "If A and B are purchased, then C is purchased on the same trip," has a support of 800 transactions (alternatively 0.8% = 800/100,000), and a confidence of 40% (=800/2,000). One way to think of support is that it is the probability that a randomly selected transaction from the database will contain all items in the antecedent and the consequent, whereas the confidence is the conditional probability that a randomly selected transaction will include all the items in the consequent, given that the transaction includes all the items in the antecedent.
Using the above example, expected Confidence, in this case, means, "confidence, if buying A and B does not enhance the probability of buying C." It is the number of transactions that include the consequent divided by the total number of transactions. Suppose the total number of transactions for C is 5,000. Thus Expected Confidence is 5,000/100,000 = 5%. For the supermarket example the Lift = Confidence/Expected Confidence = 40%/5% = 8. Hence, Lift is a value that gives us information about the increase in the probability of the then (consequent) given the if (antecedent) part.
here's the link to the source article
|
Lift measure in data mining
Lift is nothing but the ratio of Confidence to Expected Confidence. In the area of association rules - "A lift ratio larger than 1.0 implies that the relationship between the antecedent and the conse
|
5,526
|
Lift measure in data mining
|
Lift is simply a measure of the importance of a rule:
it checks whether the rule appears in the list by random chance or because of a real association.
Lift = Confidence / Expected Confidence
|
Lift measure in data mining
|
Lift is simply a measure of the importance of a rule:
it checks whether the rule appears in the list by random chance or because of a real association.
Lift = Confidence / Expected Confidence
|
Lift measure in data mining
Lift is simply a measure of the importance of a rule:
it checks whether the rule appears in the list by random chance or because of a real association.
Lift = Confidence / Expected Confidence
|
Lift measure in data mining
Lift is simply a measure of the importance of a rule:
it checks whether the rule appears in the list by random chance or because of a real association.
Lift = Confidence / Expected Confidence
|
5,527
|
Lift measure in data mining
|
Say we are using the example of a grocery store that is testing the validity of an association rule that has an antecedent and a consequent (for example: "If a customer buys bread, they will also buy butter").
If you look at all transactions, and examine one at random, the probability that that transaction contains the consequent is "Expected Confidence". If you look at all transactions that contain the antecedent, and select a random transaction from these, the probability that that transaction will contain the consequent is "Confidence". "Lift" is essentially the ratio of these two. With lift, we can examine the relationship between two items that have high confidence (if confidence is low then lift is essentially irrelevant).
If they have high confidence and low lift, then we still know the items are frequently bought together but we do not know if the consequent is happening because of the antecedent or if it is just a coincidence (perhaps they are both purchased together often because they're both very popular products but don't have any kind of relationship to one another).
However, if the confidence and lift are both high, then we can reasonably assume that the consequent is happening due to the antecedent. The higher the lift gets, the lower the probability is that the relationship between the two items is just a coincidence. In mathematical terms:
Lift = Confidence / Expected Confidence
In our example, if the confidence of our rule was high and the lift was low, that would mean that a lot of customers are buying bread and butter, but we do not know if it's due to some special relationship between bread and butter or if bread and butter are just popular items individually and the fact that they often show up in grocery carts together is just a coincidence. If the confidence in our rule is high and the lift is high, this indicates a pretty strong correlation between the antecedent and the consequent, meaning that we can reasonably assume that customers are buying butter because of the fact that they are buying bread. The higher the lift is, the more confident we can be in this association.
|
Lift measure in data mining
|
Say we are using the example of a grocery store that is testing the validity of an association rule that has an antecedent and a consequent (for example: "If a customer buys bread, they will also buy
|
Lift measure in data mining
Say we are using the example of a grocery store that is testing the validity of an association rule that has an antecedent and a consequent (for example: "If a customer buys bread, they will also buy butter").
If you look at all transactions, and examine one at random, the probability that that transaction contains the consequent is "Expected Confidence". If you look at all transactions that contain the antecedent, and select a random transaction from these, the probability that that transaction will contain the consequent is "Confidence". "Lift" is essentially the ratio of these two. With lift, we can examine the relationship between two items that have high confidence (if confidence is low then lift is essentially irrelevant).
If they have high confidence and low lift, then we still know the items are frequently bought together but we do not know if the consequent is happening because of the antecedent or if it is just a coincidence (perhaps they are both purchased together often because they're both very popular products but don't have any kind of relationship to one another).
However, if the confidence and lift are both high, then we can reasonably assume that the consequent is happening due to the antecedent. The higher the lift gets, the lower the probability is that the relationship between the two items is just a coincidence. In mathematical terms:
Lift = Confidence / Expected Confidence
In our example, if the confidence of our rule was high and the lift was low, that would mean that a lot of customers are buying bread and butter, but we do not know if it's due to some special relationship between bread and butter or if bread and butter are just popular items individually and the fact that they often show up in grocery carts together is just a coincidence. If the confidence in our rule is high and the lift is high, this indicates a pretty strong correlation between the antecedent and the consequent, meaning that we can reasonably assume that customers are buying butter because of the fact that they are buying bread. The higher the lift is, the more confident we can be in this association.
|
Lift measure in data mining
Say we are using the example of a grocery store that is testing the validity of an association rule that has an antecedent and a consequent (for example: "If a customer buys bread, they will also buy
|
5,528
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
|
Let me start by denying the premise. Robert Geary probably didn't overstate the case when he said (in 1947) "...normality is a myth; there never was, and never will be, a normal distribution." --
the normal distribution is a model*, an approximation that is sometimes more-or-less useful.
$\:$*(about which, see George Box, though I prefer his "...how wrong does it have to be to not be useful" version).
That some phenomena are approximately normal may be no vast surprise, since sums of independent [or even not-too-strongly-correlated] effects should, if there are a lot of them and none has a variance that is substantial compared to the variance of the sum of the rest, tend to look more normal.
The central limit theorem (which is about the convergence to a normal distribution of a standardized sample mean as $n$ goes to infinity under some mild conditions) at least suggests that we might see a tendency toward that
normality with sufficiently large but finite sample sizes.
Of course if standardized means are approximately normal, standardized sums will be; this is the reason for the "sum of many effects" reasoning. So if there are a lot of little contributions to the variation, and they're not highly correlated, you might tend to see it.
The Berry-Esseen theorem gives us a statement about it (convergence toward normal distributions) actually happening with standardized sample means for iid data (under slightly more stringent conditions than for the CLT, since it requires that the third absolute moment be finite), as well as telling us about how rapidly it happens. Subsequent versions of the theorem deal with non-identically distributed components in the sum, though the upper bounds on the deviation from normality are less tight.
Less formally, the behavior of convolutions with reasonably nice distributions gives us additional (though closely related) reasons to suspect it might tend to be a fair approximation in finite samples in many cases. Convolution acts as a kind of "smearing" operator that people who use kernel density estimation across a variety of kernels will be familiar with; once you standardize the result (so the variance remains constant each time you do such an operation), there's clearly a progression toward increasingly symmetric hill shapes as you repeatedly smooth (and it doesn't much matter if you change the kernel each time).
Terry Tao gives some nice discussion of versions of the Central limit theorem and the Berry-Esseen theorem here, and along the way mentions an approach to a non-independent version of Berry-Esseen.
So there's at least one class of situations where we might expect to see it, and formal reasons to think it really will tend to happen in those situations.
However, at best any sense that the result of "sums of many effects" will be normal is an approximation. In many cases it's quite a reasonable approximation (and in additional cases even though the approximation of the distribution isn't close, some procedures that assume normality aren't especially sensitive to the distribution of the individual values, at least in large samples).
There are many other circumstances where effects don't "add" and there we may expect other things to happen; for example, in a lot of financial data effects tend to be multiplicative (effects will move amounts in percentage terms, like interest and inflation and exchange rates for example). There we don't expect normality, but we might sometimes observe a rough approximation to normality on the log scale. In other situations neither can be appropriate, even in a rough sense. For example, inter-event times are generally not going to be well approximated by either normality or normality of logs; there's no "sums" nor "products" of effects to argue for here. There are numerous other phenomena that we can make some argument for a particular kind of "law" in particular circumstances, such as the limiting distributions in extreme values (Fisher-Tippett-Gnedenko or Pickands-Balkema-de Haan theorems, for example).
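A quick empirical illustration of the "sums of many effects" intuition: standardized sums of iid uniforms already look close to $N(0,1)$ for quite modest $n$ (the choice of 30 summands and the uniform components is arbitrary):

```python
import random, statistics

random.seed(0)

def standardized_sum(n):
    # sum of n iid U(0,1) effects, standardized: mean n/2, variance n/12
    s = sum(random.random() for _ in range(n))
    return (s - n / 2) / (n / 12) ** 0.5

draws = [standardized_sum(30) for _ in range(20_000)]
m, s = statistics.mean(draws), statistics.pstdev(draws)
print(m, s)                     # close to 0 and 1
tail = sum(d > 1.96 for d in draws) / len(draws)
print(tail)                     # near the normal tail P(Z > 1.96) ~ 2.5%
```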
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
|
Let me start by denying the premise. Robert Geary probably didn't overstate the case when he said (in 1947) "...normality is a myth; there never was, and never will be, a normal distribution." --
the
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
Let me start by denying the premise. Robert Geary probably didn't overstate the case when he said (in 1947) "...normality is a myth; there never was, and never will be, a normal distribution." --
the normal distribution is a model*, an approximation that is sometimes more-or-less useful.
$\:$*(about which, see George Box, though I prefer his "...how wrong does it have to be to not be useful" version).
That some phenomena are approximately normal may be no vast surprise, since sums of independent [or even not-too-strongly-correlated] effects should, if there are a lot of them and none has a variance that is substantial compared to the variance of the sum of the rest, tend to look more normal.
The central limit theorem (which is about the convergence to a normal distribution of a standardized sample mean as $n$ goes to infinity under some mild conditions) at least suggests that we might see a tendency toward that
normality with sufficiently large but finite sample sizes.
Of course if standardized means are approximately normal, standardized sums will be; this is the reason for the "sum of many effects" reasoning. So if there are a lot of little contributions to the variation, and they're not highly correlated, you might tend to see it.
The Berry-Esseen theorem gives us a statement about it (convergence toward normal distributions) actually happening with standardized sample means for iid data (under slightly more stringent conditions than for the CLT, since it requires that the third absolute moment be finite), as well as telling us about how rapidly it happens. Subsequent versions of the theorem deal with non-identically distributed components in the sum, though the upper bounds on the deviation from normality are less tight.
Less formally, the behavior of convolutions with reasonably nice distributions gives us additional (though closely related) reasons to suspect it might tend to be a fair approximation in finite samples in many cases. Convolution acts as a kind of "smearing" operator that people who use kernel density estimation across a variety of kernels will be familiar with; once you standardize the result (so the variance remains constant each time you do such an operation), there's clearly a progression toward increasingly symmetric hill shapes as you repeatedly smooth (and it doesn't much matter if you change the kernel each time).
Terry Tao gives some nice discussion of versions of the Central limit theorem and the Berry-Esseen theorem here, and along the way mentions an approach to a non-independent version of Berry-Esseen.
So there's at least one class of situations where we might expect to see it, and formal reasons to think it really will tend to happen in those situations.
However, at best any sense that the result of "sums of many effects" will be normal is an approximation. In many cases it's quite a reasonable approximation (and in additional cases even though the approximation of the distribution isn't close, some procedures that assume normality aren't especially sensitive to the distribution of the individual values, at least in large samples).
There are many other circumstances where effects don't "add" and there we may expect other things to happen; for example, in a lot of financial data effects tend to be multiplicative (effects will move amounts in percentage terms, like interest and inflation and exchange rates for example). There we don't expect normality, but we might sometimes observe a rough approximation to normality on the log scale. In other situations neither can be appropriate, even in a rough sense. For example, inter-event times are generally not going to be well approximated by either normality or normality of logs; there's no "sums" nor "products" of effects to argue for here. There are numerous other phenomena that we can make some argument for a particular kind of "law" in particular circumstances, such as the limiting distributions in extreme values (Fisher-Tippett-Gnedenko or Pickands-Balkema-de Haan theorems, for example).
|
5,529
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
|
There is a famous saying by Gabriel Lippmann (physicist, Nobel laureate), as told by Poincaré:
[The normal distribution] cannot be obtained by rigorous deductions. Several of its putative proofs are awful [...]. Nonetheless,
everyone believes it, as M. Lippmann told me one day, because experimenters imagine it to be a mathematical theorem, while mathematicians
imagine it to be an experimental fact.
-- Henri Poincaré, Le calcul des Probabilités. 1896
[Cette loi] ne s’obtient pas par des déductions rigoureuses; plus d’une démonstration qu’on a voulu en donner
est grossière [...]. Tout le monde y croit cependant, me disait un jour M. Lippmann, car les expérimentateurs s’imaginent que
c’est un théorème de mathématiques, et les mathématiciens que c’est un fait expérimental.
It seems that we don't have this quote in our List of Statistical Quotes thread, that's why I thought it would be good to post it here.
|
5,530
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
|
What law of physics makes so that so many natural phenomena have
normal distribution? It would seem more intuitive that they would have
uniform distribution.
The normal distribution is commonplace in the natural sciences. The usual explanation for why it shows up in measurement errors is some form of law-of-large-numbers or central limit theorem (CLT) reasoning, which usually goes like this: "since the experiment outcomes are impacted by an infinitely large number of disturbances coming from unrelated sources, the CLT suggests that the errors would be normally distributed". For instance, here's an excerpt from Statistical Methods in Data Analysis by W. J. Metzger:
Most of what we measure is in fact the sum of many r.v.’s. For
example, you measure the length of a table with a ruler. The length
you measure depends on a lot of small effects: optical parallax,
calibration of the ruler, temperature, your shaking hand, etc. A
digital meter has electronic noise at various places in its circuitry.
Thus, what you measure is not only what you want to measure, but added
to it a large number of (hopefully) small contributions. If this
number of small contributions is large the C.L.T. tells us that their
total sum is Gaussian distributed. This is often the case and is the
reason resolution functions are usually Gaussian.
However, as you must know, this doesn't mean that every distribution will be normal, of course. For instance, the Poisson distribution is just as common in physics when dealing with counting processes. In spectroscopy the Cauchy (aka Breit-Wigner) distribution is used to describe the shape of radiation spectra, and so on.
I realized this after writing: all three distributions mentioned so far (Gaussian, Poisson, Cauchy) are stable distributions, with Poisson being discrete stable. Now that I think about it, this seems to be an important quality of a distribution, one that lets it survive aggregation: if you add a bunch of independent Poisson draws, the sum is again Poisson. This may "explain" (in some sense) why these distributions are so ubiquitous.
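The Poisson part of that stability claim is easy to check by simulation (a quick sketch with arbitrary rates): if $X \sim \text{Poisson}(2)$ and $Y \sim \text{Poisson}(3)$ independently, then $X + Y \sim \text{Poisson}(5)$, so the sum's mean and variance should both be close to 5.

```python
import numpy as np

# A sum of independent Poissons is Poisson with the rates added:
# Poisson(2) + Poisson(3) behaves like Poisson(5), whose mean and
# variance are both equal to 5.
rng = np.random.default_rng(1)
total = rng.poisson(2.0, 200_000) + rng.poisson(3.0, 200_000)
mean, var = float(total.mean()), float(total.var())
```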
In unnatural sciences you have to be very careful with applying normal (or any other) distribution for a variety of reasons. Particularly the correlations and dependencies are an issue, because they may break the assumptions of CLT. For instance, in finance it's well known that many series look like normal but have much heavier tails, which is a big issue in risk management.
Finally, there are more solid reasons in the natural sciences for having a normal distribution than the sort of "hand waving" reasoning that I cited earlier. Consider Brownian motion. If the shocks are truly independent and infinitesimal, then inevitably the distribution of an observable path will be normal due to the CLT; see e.g. Eq. (10) in Einstein's famous work "Investigations on the Theory of the Brownian Movement". He didn't even bother to call it by its today's name "Gaussian" or "normal".
Another example is quantum mechanics. It so happens that if the uncertainties of a coordinate $\Delta x$ and momentum $\Delta p$ are from normal distributions, then the total uncertainty $\Delta x \Delta p$ reaches its minimum, Heisenberg's uncertainty threshold; see Eqs. 235-237 here.
Hence, don't be surprised to get very different reactions to the use of the Gaussian distribution from researchers in different fields. In some fields, such as physics, certain phenomena are expected to be linked naturally to the Gaussian distribution, based on very solid theory backed by an enormous amount of observations. In other fields, the normal distribution is used for its technical convenience, handy mathematical properties, or other questionable reasons.
|
5,531
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
|
There are an awful lot of overly complicated explanations here...
A good way it was related to me is the following:
Roll a single die, and you have an equal likelihood of rolling each number (1-6); hence, the probability distribution is flat.
Roll two dice and sum the results, and the distribution is no longer flat. This is because there are 36 combinations, and the range of sums is 2 to 12. The likelihood of a 2 comes from the unique combination 1 + 1. The likelihood of a 12 is also unique, in that it can only occur as the single combination 6 + 6. Now, looking at 7, there are multiple combinations, i.e. 3 + 4, 5 + 2, and 6 + 1 (and their reverse permutations). As you work away from the mid-value (i.e. 7), there are fewer combinations for 6 & 8, etc., until you arrive at the singular combinations of 2 and 12. This example does not yet give a clear normal distribution, but the more dice you add and the more samples you take, the more the result will tend towards a normal distribution.
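A quick simulation of the dice story (illustrative numbers only): with two dice, the total 7 - which can be made in 6 of the 36 combinations - is clearly the most frequent outcome, while 2 (one combination) is rare.

```python
import random

# The sum of two dice piles up in the middle: 7 can be made in 6 of
# the 36 equally likely combinations, while 2 can be made in only 1.
random.seed(0)

def dice_sums(n_dice, n_rolls=50_000):
    return [sum(random.randint(1, 6) for _ in range(n_dice))
            for _ in range(n_rolls)]

two = dice_sums(2)
share_of_7 = two.count(7) / len(two)   # expected 6/36, about 0.167
share_of_2 = two.count(2) / len(two)   # expected 1/36, about 0.028
```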
Therefore, as you sum a range of independent variables subject to random variation (each of which can have its own distribution), the more the resulting output will tend to normality. In Six Sigma terms this gives us what we call the 'Voice of the Process'. This is what we call the result of 'common-cause variation' of a system, and hence, if the output is tending towards normality, then we call this system 'in statistical process control'. Where the output is non-normal (skewed or shifted), we say the system is subject to 'special-cause variation', in which some 'signal' has biased the outcome.
Hope that helps.
|
5,532
|
Is there an explanation for why there are so many natural phenomena that follow normal distribution?
|
What law of physics makes so that so many natural phenomena have normal distribution?
No idea. On the other hand I've also no idea whether it's true, or indeed what 'so many' means.
However, rearranging the problem a little, there is good reason to assume (that is, to model) a continuous quantity that you believe to have a fixed mean and variance with a Normal distribution. That's because the Normal distribution is the result of maximizing entropy subject to those moment constraints. Since, roughly speaking, entropy is a measure of uncertainty, that makes the Normal the most non-committal or maximally uncertain choice of distributional form.
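For concreteness, the variational problem behind that claim (a standard textbook statement, sketched here) is:

```latex
\max_{f}\; -\int f(x)\,\ln f(x)\,\mathrm{d}x
\quad\text{subject to}\quad
\int f(x)\,\mathrm{d}x = 1,\qquad
\int x\,f(x)\,\mathrm{d}x = \mu,\qquad
\int (x-\mu)^2 f(x)\,\mathrm{d}x = \sigma^2 .
```

Introducing Lagrange multipliers gives $f(x) \propto \exp\{\lambda_1 x + \lambda_2 (x-\mu)^2\}$, and matching the constraints forces $\lambda_2 < 0$ and recovers exactly the $N(\mu, \sigma^2)$ density.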
Now, the idea that one should choose a distribution by maximizing its entropy subject to known constraints really does have some physics backing in terms of the number of possible ways to fulfill them. Jaynes on statistical mechanics is the standard reference here.
Note that while maximum entropy motivates Normal distributions in this case, different sorts of constraints can be shown to lead to different distributional families, e.g. the familiar exponential, Poisson, binomial, etc.
Sivia and Skilling 2005 ch.5 has an intuitive discussion.
|
5,533
|
Dealing with singular fit in mixed models
|
When you obtain a singular fit, this is often indicating that the model is overfitted – that is, the random effects structure is too complex to be supported by the data, which naturally leads to the advice to remove the most complex part of the random effects structure (usually random slopes). The benefit of this approach is that it leads to a more parsimonious model that is not over-fitted.
However, before doing anything, do you have a good reason for wanting X, Condition and their interaction, all to vary by subject in the first place ? Does the theory of how the data are generated suggest this ?
If you desire to fit the model with the maximal random effects structure, and lme4 obtains a singular fit, then fitting the same model in a Bayesian framework might very well inform you why lme4 had problems, by inspecting trace plots and how well the various parameter estimates converge. The advantage in taking the Bayesian approach is that by doing so you may uncover a problem with the original model (i.e. the reason why the maximal random effects structure isn’t supported by the data), or it might uncover why lme4 is unable to fit the model. I have encountered situations where a Bayesian model does not converge well unless informative priors are used – which may or may not be OK.
In short, both approaches have merit.
However, I would always start from a place where the initial model is parsimonious and informed by expert domain knowledge to determine the most appropriate random effects structure. Specifying grouping variables is relatively easy, but random slopes usually don’t have to be included. Only include them if they make sound theoretical sense AND they are supported by the data.
Edit:
It is mentioned in the comments that there are sound theoretical reasons to fit the maximal random effects structure. So, a relatively easy way to proceed with an equivalent Bayesian model is to swap the call to glmer with stan_glmer from the rstanarm package – it is designed to be plug and play. It has default priors, so you can quickly get a model fitted. The package also has many tools for assessing convergence. If you find that all the parameters have converged to plausible values, then you are all good. However there can be a number of issues – for example a variance being estimated at or below zero, or an estimate that continues to drift. The mc-stan.org site has a wealth of information and a user forum.
|
5,534
|
Dealing with singular fit in mixed models
|
This is a very interesting thread, with interesting answers and comments! Since this hasn't been brought up yet, I wanted to point out that we have very little data for each subject (as I understand it). Indeed, each subject has only two values for each of the response variable Y, categorical variable Condition and continuous variable X. In particular, we know that the two values of Condition are A and B.
If we were to pursue two-stage regression modelling instead of mixed effects modelling, we couldn't even fit a linear regression model to the data from a specific subject, as illustrated in the toy example below for one of the subjects:
y <- c(4, 7)
condition <- c("A", "B")
condition <- factor(condition)
x <- c(0.2, 0.4)
m <- lm(y ~ condition*x)
summary(m)
The output of this subject-specific model would be:
Call:
lm(formula = y ~ condition * x)
Residuals:
ALL 2 residuals are 0: no residual degrees of freedom!
Coefficients: (2 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4 NA NA NA
conditionB 3 NA NA NA
x NA NA NA NA
conditionB:x NA NA NA NA
Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: NaN
F-statistic: NaN on 1 and 0 DF, p-value: NA
Notice that the model fit suffers from singularities, as we're trying to estimate 4 regression coefficients plus the error standard deviation using just 2 observations.
The singularities would persist even if we observed this subject twice - rather than once - under each condition. However, if we observed the subject 3 times under each condition, we would get rid of singularities:
y <- c(4, 7, 3, 5, 1, 2)
condition <- c("A", "B", "A","B","A","B")
condition <- factor(condition)
x <- c(0.2, 0.4, 0.1, 0.3, 0.3, 0.5)
m2 <- lm(y ~ condition*x)
summary(m2)
Here is the corresponding R output for this second example, from which the singularities have disappeared:
> summary(m2)
Call:
lm(formula = y ~ condition * x)
Residuals:
1 2 3 4 5 6
1.3333 2.3333 -0.6667 -1.1667 -0.6667 -1.1667
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.667 3.555 1.313 0.320
conditionB 6.000 7.601 0.789 0.513
x -10.000 16.457 -0.608 0.605
conditionB:x -5.000 23.274 -0.215 0.850
Residual standard error: 2.327 on 2 degrees of freedom
Multiple R-squared: 0.5357, Adjusted R-squared: -0.1607
F-statistic: 0.7692 on 3 and 2 DF, p-value: 0.6079
Of course, the mixed effects model does not fit unrelated, separate linear regression models for each subject - it fits "related" models whose intercepts and/or slopes deviate randomly about a typical intercept and/or slope, such that the random deviations from the typical intercept and/or typical slope follow a Normal distribution with mean zero and some unknown standard deviation.
Even so, my intuition suggests that the mixed effects model is struggling with the small number of observations (just 2) available for each subject. The more the model is loaded with random slopes, the more it probably struggles. I suspect that, if each subject contributed 6 observations instead of 2 (that is, 3 per condition), it would no longer struggle to accommodate all of the random slopes.
It seems to me that this could be (?) a case where the current study design does not support the complex modelling ambitions - to support those ambitions, more observations would be needed under each condition for each subject (or at least for some of the subjects?). This is just my intuition so I hope others can add their insights to my observations above. Thank you in advance!
|
5,535
|
What is the difference between Metropolis-Hastings, Gibbs, Importance, and Rejection sampling?
|
As detailed in our book with George Casella, Monte Carlo statistical methods, these methods are used to produce samples from a given distribution, with density $f$ say, either to get an idea about this distribution, or to solve an integration or optimisation problem related with $f$. For instance, to find the value of $$\int_{\mathcal{X}} h(x) f(x)\text{d}x\qquad h(\mathcal{X})\subset \mathbb{R}$$ or the mode of the distribution of $h(X)$ when $X\sim f(x)$ or a quantile of this distribution.
To compare the Monte Carlo and Markov chain Monte Carlo methods you mention on relevant criteria requires one to set the background of the problem and the goals of the simulation experiment, since the pros and cons of each will vary from case to case.
Here are a few generic remarks that most certainly do not cover the complexity of the issue:
Accept-reject methods are intended to provide an i.i.d. sample from $f$. To achieve this, one designs an algorithm that takes as input a random number of uniform variates $u_1,u_2,\ldots$, and returns a value $x$ that is a realisation from $f$. The pros are that there is no approximation in the method: the outcome is truly an i.i.d. sample from $f$. The cons are many: (i) designing the algorithm by finding an envelope of $f$ that can be generated may be very costly in human time; (ii) the algorithm may be inefficient in computing time, i.e., requires many uniforms to produce a single $x$; (iii) those performances are decreasing with the dimension of $X$. In short, such methods cannot be used for simulating one or a few simulations from $f$ unless they are already available in a computer language like R.
Markov chain Monte Carlo (MCMC) methods are extensions of i.i.d. simulation methods, used when i.i.d. simulation is too costly. They produce a sequence of simulations $(x_t)_t$ whose limiting distribution is the distribution $f$. The pros are that (i) less information about $f$ is needed to implement the method; (ii) $f$ may be only known up to a normalising constant or even as an integral$$f(x)\propto\int_{\mathcal{Z}} \tilde{f}(x,z)\text{d}z$$ and still be associated with an MCMC method; (iii) there exist generic MCMC algorithms to produce simulations $(x_t)_t$ that require very little calibration; (iv) dimension is less of an issue as large-dimensional targets can be broken into conditionals of smaller dimension (as in Gibbs sampling). The cons are that (i) the simulations $(x_t)_t$ are correlated, hence less informative than i.i.d. simulations; (ii) the validation of the method is only asymptotic, hence there is an approximation in considering $x_t$ for a fixed $t$ as a realisation of $f$; (iii) convergence to $f$ (in $t$) may be so slow that for all practical purposes the algorithm does not converge; (iv) the universal validation of the method means there is an infinite number of potential implementations, with an equally infinite range of efficiencies.
Importance sampling methods are originally designed for integral approximations, namely generating from the wrong target $g(x)$ and compensating by an importance weight $$f(x)/g(x)\,.$$ The resulting sample is thus weighted, which makes the comparison with the above awkward. However, importance sampling can be turned into importance sampling resampling by using an additional resampling step based on the weights. The pros of importance sampling resampling are that (i) generation from an importance target $g$ can be cheap and recycled for different targets $f$; (ii) the "right" choice of $g$ can lead to huge improvements compared with regular or MCMC sampling; (iii) importance sampling is more amenable to numerical integration improvement, like for instance quasi-Monte Carlo integration; (iv) it can be turned into adaptive versions like population Monte Carlo and sequential Monte Carlo. The cons are that (i) resampling induces inefficiency (which can be partly corrected by reducing the noise as in systematic resampling or qMC); (ii) the "wrong" choice of $g$ can lead to huge losses in efficiency and even to infinite variance; (iii) importance sampling has trouble facing large dimensions and its efficiency diminishes quickly with the dimension; (iv) the method may be as myopic as local MCMC methods in missing important regions of the support of $f$; (v) resampling induces a bias due to the division by the sum of the weights.
In conclusion, a warning that there is no such thing as an optimal simulation method. Even in a specific setting like approximating an integral $$\mathcal{I}=\int_{\mathcal{X}} h(x) f(x)\text{d}x\,,$$ costs of designing and running different methods intrude so as to make a global comparison very delicate, if at all possible, while, from a formal point of view, they can never beat the zero variance answer of returning the constant "estimate" $$\hat{\mathcal{I}}=\int_{\mathcal{X}} h(x) f(x)\text{d}x$$ For instance, simulating from $f$ is very rarely if ever the best option. This does not mean that methods cannot be compared, but that there always is a possibility for an improvement, which comes with additional costs.
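To make the comparison concrete, here is a small sketch (the target, test function, seed, and tuning constants are all illustrative choices, not part of the answer above) that approximates $\mathrm{E}[h(X)] = \mathrm{E}[X^2] = 1$ for a standard normal target $f$ by (a) i.i.d. sampling, (b) importance sampling with a heavier-tailed Cauchy proposal $g$, and (c) a random-walk Metropolis chain:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000
h = lambda x: x ** 2                     # E[h(X)] = 1 under X ~ N(0, 1)

# (a) i.i.d. sampling directly from f
est_iid = h(rng.standard_normal(n)).mean()

# (b) importance sampling: draw from a Cauchy g, weight by f(y)/g(y)
y = rng.standard_cauchy(n)
w = stats.norm.pdf(y) / stats.cauchy.pdf(y)
est_is = (w * h(y)).mean()               # unbiased since g covers the support of f

# (c) random-walk Metropolis targeting f, with log f(x) = -x^2/2 + const
logf = lambda x: -0.5 * x * x
chain = np.empty(n)
x = 0.0
for t in range(n):
    prop = x + 2.0 * rng.standard_normal()
    if np.log(rng.uniform()) < logf(prop) - logf(x):
        x = prop
    chain[t] = x                         # correlated draws, valid only asymptotically

print(est_iid, est_is, h(chain).mean())  # all three close to 1
```

All three estimates agree, but their costs and variances differ: the chain's correlated draws carry less information per sample than the i.i.d. ones, exactly as described above.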
|
5,536
|
How to sample from a normal distribution with known mean and variance using a conventional programming language?
|
If you can sample from a given distribution with mean 0 and variance 1, then you can easily sample from a scale-location transformation of that distribution, which has mean $\mu$ and variance $\sigma^2$. If $x$ is a sample from a mean 0 and variance 1 distribution then
$$\sigma x + \mu$$
is a sample with mean $\mu$ and variance $\sigma^2$. So, all you have to do is to scale the variable by the standard deviation $\sigma$ (square root of the variance) before adding the mean $\mu$.
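A minimal sketch of this transformation (using NumPy's generator purely to supply the mean-0, variance-1 draws; the particular $\mu$ and $\sigma^2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2 = 3.0, 4.0                # desired mean and variance
z = rng.standard_normal(100_000)     # mean-0, variance-1 draws
x = np.sqrt(sigma2) * z + mu         # scale by sigma = sqrt(variance), then shift by mu

print(x.mean(), x.var())             # approximately 3.0 and 4.0
```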
How you actually get a simulation from a normal distribution with mean 0 and variance 1 is a different story. It's fun and interesting to know how to implement such things, but whether you use a statistical package or programming language or not, I will recommend that you obtain and use a suitable function or library for the random number generation. If you want advice on what library to use you might want to add specific information on which programming language(s) you are using.
Edit: In the light of the comments, some other answers and the fact that Fixee accepted this answer, I will give some more details on how one can use transformations of uniform variables to produce normal variables.
One method, already mentioned in a comment by VitalStatistix, is the Box-Muller method that takes two independent uniform random variables and produces two independent normal random variables. A similar method that avoids the computation of two transcendental functions sin and cos at the expense of a few more simulations was posted as an answer by francogrex.
A completely general method is the transformation of a uniform random variable by the inverse distribution function. If $U$ is uniformly distributed on $[0,1]$ then
$$\Phi^{-1}(U)$$
has a standard normal distribution. Though there is no explicit analytic formula for $\Phi^{-1}$, it can be computed by accurate numerical approximations. The current implementation in R (last I checked) uses this idea. The method is conceptually very simple, but requires an accurate implementation of $\Phi^{-1}$, which is probably not as widespread as the (other) transcendental functions log, sin and cos.
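For illustration, SciPy exposes such a numerical approximation of $\Phi^{-1}$ as `scipy.special.ndtri`, so the inverse transform is a one-liner:

```python
import numpy as np
from scipy.special import ndtri   # accurate numerical approximation of Phi^{-1}

rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)     # U uniformly distributed on [0, 1)
z = ndtri(u)                      # Phi^{-1}(U) has a standard normal distribution

print(z.mean(), z.var())          # close to 0 and 1
```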
Several answers mention the possibility of using the central limit theorem to approximate the normal distribution as an average of uniform random variables. This is not generally recommended. Arguments presented, such as matching the mean 0 and variance 1, and considerations of support of the distribution are not convincing. In Exercise 2.3 in "Introducing Monte Carlo Methods with R" by Christian P. Robert and George Casella this generator is called antiquated and the approximation is called very poor.
There is a bewildering number of other ideas. Chapter 3 and, in particular, Section 3.4, in "The Art of Computer Programming" Vol. 2 by Donald E. Knuth is a classical reference on random number generation. Brian Ripley wrote Computer Generation of Random Variables: A Tutorial, which may be useful. The book mentioned by Robert and Casella, or perhaps Chapter 2 in their other book, "Monte Carlo statistical methods", is also recommended.
At the end of the day, a correctly implemented method is not better than the uniform pseudo random number generator used. Personally, I prefer to rely on special purpose libraries that I believe are trustworthy. I almost always rely on the methods implemented in R either directly in R or via the API in C/C++. Obviously, this is not a solution for everybody, but I am not familiar enough with other libraries to recommend alternatives.
|
5,537
|
How to sample from a normal distribution with known mean and variance using a conventional programming language?
|
This is really a comment on Michael Lew's answer and Fixee's comment, but is posted as an answer because I don't have the reputation on this site to comment.
The sum of twelve independent random variables uniformly distributed on $[0, 1]$ has mean $6$ and variance $1$. In other words,
$$E\left [\sum_{i=1}^{12} X_i\right ] = \sum_{i=1}^{12} E[X_i]
= 12\times \frac{1}{2} = 6$$
and
$$\text{var} \left [\sum_{i=1}^{12} X_i\right ]
= \sum_{i=1}^{12} \text{var}[X_i] = 12\times \frac{1}{12} = 1.$$
The CLT can then be used to assert that the distribution of
$\sum_{i=1}^{12} X_i - 6$
is approximately a standard normal distribution.
Compared to the ten variables considered by Michael Lew and Fixee,
two additional calls to the random number generator are required, but
we avoid division by $\sqrt{10/12}$ to get the desired unit variance. It is
also worth remembering that $\sum_{i=1}^{12} X_i - 6$ can take on values
only in the range $[-6, 6]$ and thus extreme (very low-probability) values
differing from the mean by more than $6$ standard deviations will never
occur. This is often a problem in simulations of computer and communication systems where such very low probability events are of much interest.
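A quick numerical illustration of these properties, the bounded range in particular (array shape and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
z = rng.uniform(size=(100_000, 12)).sum(axis=1) - 6   # sum of 12 U(0,1), minus 6

print(z.mean(), z.var())   # close to 0 and 1, as derived above
print(np.abs(z).max())     # never exceeds 6: values beyond 6 sd cannot occur
```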
|
5,538
|
How to sample from a normal distribution with known mean and variance using a conventional programming language?
|
In addition to the answer by NRH, if you still have no means to generate random samples from a "standard normal distribution" N(0,1), below is a good and simple way (since you mention you don't have a statistical package, the functions below should be available in most standard programming languages).
1. Generate u and v as two uniformly distributed random numbers in the range from -1 to 1,
via u = 2 r1 - 1 and v = 2 r2 - 1, where r1 and r2 are uniform on [0, 1]
2. Calculate w = u^2 + v^2; if w >= 1 (or w = 0), go back to step 1
3. Return x = u*z and y = v*z, with z = sqrt(-2 ln(w)/w)
A sample code would look like this:
do {
    u = 2 * random() - 1;        /* uniform on (-1, 1) */
    v = 2 * random() - 1;
    w = u * u + v * v;
} while (w >= 1 || w == 0);      /* repeat until (u, v) lands inside the unit circle */
z = sqrt(-2 * log(w) / w);
x = u * z;                       /* x and y are two independent N(0, 1) deviates */
y = v * z;
then use what NRH has suggested above to obtain the random deviates from N(mu, sigma^2).
|
5,539
|
How to sample from a normal distribution with known mean and variance using a conventional programming language?
|
The normal distribution emerges when one adds together a lot of random values of similar distribution (similar to each other, I mean). If you add together ten or more uniformly distributed random values then the sum is very nearly normally distributed. (Add more than ten if you want it to be even more normal, but ten is enough for almost all purposes.)
Say that your uniform random values are uniformly distributed between 0 and 1. The sum will then be between 0 and 10. Subtract 5 from the sum and the mean of the resulting distribution will be 0. Now you divide the result by the standard deviation of the (near) normal distribution and multiply the result by the desired standard deviation. Unfortunately I'm not sure what the standard deviation of the sum of ten uniform random deviates is, but if we are lucky someone will tell us in a comment!
I prefer to talk to students about the normal distribution in these terms because the utility of the assumption of a normal distribution in many systems stems entirely from the property that the sums of many random influences leads to a normal distribution.
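As a quick numerical check (a single U(0,1) variate has variance 1/12, so the sum of ten has standard deviation $\sqrt{10/12} \approx 0.913$; seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.uniform(size=(100_000, 10)).sum(axis=1)   # sum of ten U(0,1) deviates

print(s.mean(), s.std())            # about 5 and sqrt(10/12) ~ 0.913
z = (s - 5) / np.sqrt(10 / 12)      # centre, then rescale to unit variance
print(z.mean(), z.var())            # approximately 0 and 1
```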
|
5,540
|
What is the rationale of the Matérn covariance function?
|
In addition to @Dahn's nice answer, I thought I would try to say a little bit more about where the Bessel and Gamma functions come from. One starting point for arriving at the covariance function is Bochner's theorem.
Theorem (Bochner) A continuous stationary function $k(x, y) = \widetilde{k}(|x − y|)$ is positive definite if and only if
$\widetilde{k}$ is the Fourier transform of a finite positive measure:
$$\widetilde{k}(t) = \int_{\mathbb{R}} e^{−iωt}\mathrm{d}µ(ω) .$$
From this you can deduce that the Matérn covariance function arises as the Fourier transform of $\frac{1}{(1+\omega^2)^p}$ (Source: Durrande). That's all good, but it doesn't really tell us how you arrive at this finite positive measure given by $\frac{1}{(1+\omega^2)^p}$. Well,
it's the (power) spectral density of a stochastic process $f(x)$.
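As a quick sanity check of this correspondence (a numerical sketch; the frequency grid and truncation point are arbitrary choices): for $p=1$ the Fourier transform of $1/(1+\omega^2)$ is proportional to $e^{-|t|}$, which is the $\nu = 1/2$ member of the Matérn family.

```python
import numpy as np

w = np.linspace(-200.0, 200.0, 2_000_001)    # truncated frequency grid
dw = w[1] - w[0]
t = np.linspace(0.0, 3.0, 7)
spec = 1.0 / (1.0 + w ** 2)                  # spectral density with p = 1
# Riemann-sum approximation of the (cosine) Fourier transform at each lag t
ft = np.array([(np.cos(ti * w) * spec).sum() * dw for ti in t])

print(ft / ft[0])                            # close to exp(-t)
```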
Which stochastic process? It's known that a random process on $\mathbb{R}^d$ with a Matérn covariance function is a solution to the stochastic partial differential equation (SPDE)
$$
(κ^2 − ∆)^{α/2} X(s) = φW(s),
$$
where $W(s)$ is Gaussian white noise with unit variance, $$\Delta = \sum_{i=1}^d \frac{\partial^2}{\partial x^2_i}$$ is the Laplace operator, and $α =ν + d/2$ (I think this is in Cressie and Wikle).
Why pick this particular SPDE/stochastic process? The origin is in spatial statistics where it's argued that this is the simplest and natural covariance that works well in $\mathbb{R}^2$:
The exponential correlation function is a natural correlation in one
dimension, since it corresponds to a Markov process. In two dimensions
this is no longer so, although the exponential is a common correlation
function in geostatistical work. Whittle (1954) determined the
correlation corresponding to a stochastic differential equation of
Laplace type:
$$ \left[ \left(\frac{\partial}{\partial t_1}\right)^2 + \left(\frac{\partial}{\partial t_2}\right)^2 - \kappa^2 \right] X(t_1, t_2) = \epsilon(t_1 , t_2) $$
where $\epsilon$ is white noise. The corresponding discrete lattice process is a second order
autoregression. (Source: Guttorp&Gneiting)
The family of processes associated with the Matérn SPDE includes the $AR(1)$ Ornstein–Uhlenbeck model of the velocity of a particle undergoing Brownian motion. More generally, you can define a power spectrum for a family of $AR(p)$ processes for every integer $p$, which also have a Matérn-family covariance. This is in the appendix of Rasmussen and Williams.
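A discrete-time sketch of this connection (the coefficient, seed, and series length are illustrative): the autocorrelation of an $AR(1)$ process with coefficient $a$ decays as $a^{|h|} = e^{-|h|/\rho}$ with $\rho = -1/\log a$, i.e. the $\nu = 1/2$ Matérn/exponential form.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
a, n = 0.9, 200_000
e = rng.standard_normal(n + 1_000)
x = lfilter([1.0], [1.0, -a], e)[1_000:]   # x_t = a x_{t-1} + e_t, burn-in dropped

lags = np.arange(6)
acf = np.array([np.corrcoef(x[: n - h], x[h:])[0, 1] for h in lags])
print(acf)                                  # close to a ** lags, an exponential decay
```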
This covariance function is not related to the Matérn cluster process.
References
Cressie, Noel, and Christopher K. Wikle. Statistics for spatio-temporal data. John Wiley & Sons, 2015.
Guttorp, Peter, and Tilmann Gneiting. "Studies in the history of probability and statistics XLIX On the Matern correlation family." Biometrika 93.4 (2006): 989-995.
Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. the MIT Press, 2006.
|
What is the rationale of the Matérn covariance function?
|
In addition to @Dahn's nice answer, I thought I would try to say a little bit more about where the Bessel and Gamma functions come from. One starting point for arriving at the covariance function is B
|
What is the rationale of the Matérn covariance function?
In addition to @Dahn's nice answer, I thought I would try to say a little bit more about where the Bessel and Gamma functions come from. One starting point for arriving at the covariance function is Bochner's theorem.
Theorem (Bochner) A continuous stationary function $k(x, y) = \widetilde{k}(|x − y|)$ is positive definite if and only if
$\widetilde{k}$ is the Fourier transform of a finite positive measure:
$$\widetilde{k}(t) = \int_{\mathbb{R}} e^{−iωt}\mathrm{d}µ(ω) .$$
From this you can deduce that the Matérn covariance matrix is derived as the Fourier transform of $\frac{1}{(1+\omega^2)^p}$ (Source: Durrande). That's all good but it doesn't really tell us how you arrive at this finite positive measure given by $\frac{1}{(1+\omega^2)^p}$. Well,
it's the (power) spectral density of a stochastic process $f(x)$.
Which stochastic process? It's known that a random process on $\mathbb{R}^d$ with a Matérn covariance function is a solution to the stochastic partial differential equation (SPDE)
$$
(κ^2 − ∆)^{α/2} X(s) = φW(s),
$$
where $W(s)$ is Gaussian white noise with unit variance, $$\Delta = \sum_{i=1}^d \frac{\partial^2}{\partial x^2_i}$$ is the Laplace operator, and $α =ν + d/2$ (I think this is in Cressie and Wikle).
Why pick this particular SPDE/stochastic process? The origin is in spatial statistics where it's argued that this is the simplest and natural covariance that works well in $\mathbb{R}^2$:
The exponential correlation function is a natural correlation in one
dimension, since it corresponds to a Markov process. In two dimensions
this is no longer so, although the exponential is a common correlation
function in geostatistical work. Whittle (1954) determined the
correlation corresponding to a stochastic differential equation of
Laplace type:
$$ \left[ \left(\frac{\partial}{\partial t_1}\right)^2 + \left(\frac{\partial}{\partial t_2}\right)^2 - \kappa^2 \right] X(t_1, t_2) = \epsilon(t_1 , t_2) $$
where $\epsilon$ is white noise. The corresponding discrete lattice process is a second order
autoregression. (Source: Guttorp&Gneiting)
The family of processes associated with the Matérn SDE includes the Ornstein–Uhlenbeck model (the continuous-time analogue of $AR(1)$) of the velocity of
a particle undergoing Brownian motion. More generally, you can define a power spectrum for a family of $AR(p)$ processes for every integer $p$, and these also have covariances in the Matérn family. This is in the appendix of Rasmussen and Williams.
This covariance function is not related to the Matérn cluster process.
References
Cressie, Noel, and Christopher K. Wikle. Statistics for spatio-temporal data. John Wiley & Sons, 2015.
Guttorp, Peter, and Tilmann Gneiting. "Studies in the history of probability and statistics XLIX On the Matern correlation family." Biometrika 93.4 (2006): 989-995.
Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. the MIT Press, 2006.
|
5,541
|
What is the rationale of the Matérn covariance function?
|
I do not know, but I found this question very interesting and here's what I got after a bit of reading on it.
For certain values of $\nu$, the Matérn covariance function can be expressed as a product of an exponential and a polynomial. E.g. for $\nu = 5/2$:
$$C_{5/2}(d) = \sigma^2\left(1 + \frac{\sqrt 5 d}{\rho} + \frac{5d^2}{3\rho^2} \right) \exp \left(- \frac{\sqrt 5 d}{\rho}\right)$$
It is then not too surprising that, as $\nu \to \infty$, $C_\nu$ actually converges to the Gaussian RBF:
$$\lim_{\nu \to \infty} C_\nu(d) = \sigma^2 \exp \left( -\frac{d^2}{2\rho^2}\right)$$
For $\nu = 1/2$, the Matérn covariance function gives the absolute exponential kernel
$$C_{1/2}(d) = \sigma^2 \exp\left( -\frac{d}{\rho} \right)$$
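These closed forms can be checked against the general Matérn expression $C_\nu(d) = \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)}\left(\frac{\sqrt{2\nu}\,d}{\rho}\right)^{\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,d}{\rho}\right)$ (a sketch; the parameter values below are arbitrary):

```python
import numpy as np
from scipy.special import kv, gamma  # kv: modified Bessel function K_nu

def matern(d, nu, rho=1.0, sigma2=1.0):
    """General Matern covariance, valid for d > 0."""
    x = np.sqrt(2 * nu) * d / rho
    return sigma2 * (2 ** (1 - nu) / gamma(nu)) * x**nu * kv(nu, x)

d, rho = 0.7, 1.2
# nu = 1/2 reduces to the absolute exponential kernel
print(matern(d, 0.5, rho), np.exp(-d / rho))
# nu = 5/2 reduces to the polynomial-times-exponential form above
s5 = np.sqrt(5) * d / rho
print(matern(d, 2.5, rho), (1 + s5 + 5 * d**2 / (3 * rho**2)) * np.exp(-s5))
```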
Furthermore, a Gaussian process with the Matérn covariance function with parameter $\nu$ is $\lceil \nu \rceil - 1$ times differentiable.
This is quite nicely demonstrated in a figure taken from Rasmussen & Williams (2006).
In Interpolation of Spatial Data, Stein (who actually proposed the name of the Matérn covariance function), argues (pg. 30) that the infinite differentiability of the Gaussian covariance function yields unrealistic results for physical processes, since observing only a small continuous fraction of space/time should, in theory, yield the whole function.
He thus proposed the Matérn version as a generalization that is able to match physical processes more realistically.
Summary
The Matérn covariance function can be seen as a generalization of the Gaussian radial basis function. It contains even the absolute exponential kernel, which gives radically different results, and is better able to capture physical processes due to its finite differentiability (for finite $\nu$).
As for the mysteriousness of the appearance of the Bessel function, I'd love to see further intuition behind that, but I would guess that it is precisely its (asymptotic) behaviour in $\nu$ that made it useful in this context and led Stein to define the Matérn covariance function. That of course does not rule out the possibility that there's a beautiful argument as to why all of that is true.
|
5,542
|
What is the rationale of the Matérn covariance function?
|
There is one aspect of Matérn covariance functions that makes them very useful for physical systems:
It describes an electrical signal with white Gaussian noise passing through an RC low-pass filter. The output signal is time-correlated according to the Matérn covariance function $\nu= 1/2$. When this output signal passes a second low-pass filter, the new output is the Matérn covariance function $\nu=3/2$.
In general, a series of $n$ low-pass filters on white Gaussian noise has the effect of correlating it according to the Matérn function $\nu=(2n-1)/2$.
In physical systems, one often finds influences according to an exponential decay due to one or more independent physical mechanisms, leading to the Matérn covariance functions.
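This can be sketched numerically (assuming unit time constants and unit noise scale; the function name is my own): the power spectrum after $n$ first-order filters is $1/(1+\omega^2)^n$, and its inverse Fourier transform for $n = 2$ has exactly the Matérn $\nu = 3/2$ shape $(1+|t|)e^{-|t|}$:

```python
import numpy as np
from scipy.integrate import quad

def cov_after_n_filters(t, n):
    """Autocovariance (up to scale) of white noise passed through n
    identical first-order low-pass filters with unit time constant:
    the inverse Fourier transform of the power spectrum 1/(1+w^2)^n."""
    val, _ = quad(lambda w: 1.0 / (1.0 + w**2) ** n,
                  0, np.inf, weight='cos', wvar=abs(t))
    return 2.0 * val

t = 1.0
# n = 1 filter -> exponential (Matern nu = 1/2): pi * exp(-|t|)
print(cov_after_n_filters(t, 1), np.pi * np.exp(-t))
# n = 2 filters -> Matern nu = 3/2 shape: (pi/2) * (1 + |t|) * exp(-|t|)
print(cov_after_n_filters(t, 2), (np.pi / 2) * (1 + t) * np.exp(-t))
```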
|
5,543
|
What is the rationale of the Matérn covariance function?
|
Adding to the excellent comments already made on this great question, I'd like to highlight the fact that the power spectrum of the 1-dimensional Matérn covariance function is a non-standardized Student's t-distribution, offering another potential viewpoint for interpretation.
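A numerical sketch of this fact (the length-scale $\rho = 1.4$ and the helper names are my own choices): Fourier-transforming the closed-form Matérn $\nu = 3/2$ covariance $(1+\sqrt{3}\,d/\rho)e^{-\sqrt{3}\,d/\rho}$ gives a spectrum proportional to a Student-t density with $2\nu = 3$ degrees of freedom evaluated at $\rho\omega$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import t as student_t

rho = 1.4               # arbitrary length-scale (illustrative choice)
a = np.sqrt(3) / rho

def spectrum(w):
    """Power spectrum of the Matern nu = 3/2 covariance (1 + a|t|)exp(-a|t|),
    obtained by numerical cosine (Fourier) transform."""
    val, _ = quad(lambda s: (1 + a * s) * np.exp(-a * s),
                  0, np.inf, weight='cos', wvar=w)
    return 2.0 * val

# The ratio to the t-density with df = 3 at rho*w should be constant in w
for w in (0.3, 1.0, 2.5):
    print(w, spectrum(w) / student_t.pdf(rho * w, df=3))
```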
|
5,544
|
Gamma vs. lognormal distributions
|
As for qualitative differences, the lognormal and gamma are, as you say, quite similar.
Indeed, in practice they're often used to model the same phenomena (some people will use a gamma where others use a lognormal). They are both, for example, constant-coefficient-of-variation models (the CV for the lognormal is $\sqrt{e^{\sigma^2} -1}$, for the gamma it's $1/\sqrt \alpha$).
[How can it be constant if it depends on a parameter, you ask? It applies when you model the scale (location for the log scale); for the lognormal, the $\mu$ parameter acts as the log of a scale parameter, while for the gamma, the scale is the parameter that isn't the shape parameter (or its reciprocal if you use the shape-rate parameterization). I'll call the scale parameter for the gamma distribution $\beta$. Gamma GLMs model the mean ($\mu=\alpha\beta$) while holding $\alpha$ constant; in that case $\mu$ is also a scale parameter. A model with varying $\mu$ and constant $\alpha$ or $\sigma$ respectively will have constant CV.]
You might find it instructive to look at the density of their logs, which often shows a very clear difference.
The log of a lognormal random variable is ... normal. It's symmetric.
The log of a gamma random variable is left-skew. Depending on the value of the shape parameter, it may be quite skew or nearly symmetric.
Here's an example, with both lognormal and gamma having mean 1 and variance 1/4. The top plot shows the densities (gamma in green, lognormal in blue), and the lower one shows the densities of the logs:
(Plotting the log of the density of the logs is also useful. That is, taking a log-scale on the y-axis above)
This difference implies that the gamma has more of a tail on the left, and less of a tail on the right; the far right tail of the lognormal is heavier and its left tail lighter. And indeed, if you look at the skewness of the lognormal and gamma for a given coefficient of variation, the lognormal is more right skew ($\text{CV}^3+3\text{CV}$) than the gamma ($2\text{CV}$).
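A small sketch of this skewness comparison using scipy (the matched mean 1 and variance 1/4 follow the example above, so CV = 0.5):

```python
import numpy as np
from scipy import stats

mean, var = 1.0, 0.25               # matched mean and variance
cv = np.sqrt(var) / mean            # coefficient of variation = 0.5

# Gamma with this mean/variance: shape alpha = 1/CV^2, scale = mean/alpha
alpha = 1.0 / cv**2
gam = stats.gamma(alpha, scale=mean / alpha)

# Lognormal with this mean/variance: sigma^2 = log(1 + CV^2),
# mu = log(mean) - sigma^2 / 2 (scipy parameterizes scale = exp(mu))
sigma = np.sqrt(np.log(1 + cv**2))
mu = np.log(mean) - sigma**2 / 2
lnorm = stats.lognorm(sigma, scale=np.exp(mu))

print(gam.stats(moments='s'))       # gamma skewness:     2*CV       = 1.0
print(lnorm.stats(moments='s'))     # lognormal skewness: CV^3+3*CV  = 1.625
```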
|
5,545
|
Gamma vs. lognormal distributions
|
Yes, the gamma distribution is the maximum entropy distribution for which the mean $E(X)$ and mean-log $E(\log X)$ are fixed. As with all exponential family distributions, it is the unique maximum entropy distribution for a fixed expected sufficient statistic.
To answer your question about physical processes that generate these distributions: The lognormal distribution arises when the logarithm of $X$ is normally distributed, for example, if $X$ is the product of very many small factors. If $X$ is gamma distributed, it is the sum of many exponentially-distributed variates. For example, the waiting time for many events of a Poisson process.
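The Poisson waiting-time connection can be checked directly: the time of the $k$-th event of a rate-$\lambda$ Poisson process is Gamma$(k, 1/\lambda)$, so $P(T_k \le t) = P(N(t) \ge k)$ (the parameter values below are arbitrary):

```python
from scipy import stats

lam, k, t = 2.0, 3, 1.7   # rate, event count, time (illustrative values)

# P(waiting time for the k-th event <= t) ...
p_gamma = stats.gamma.cdf(t, a=k, scale=1 / lam)
# ... equals P(at least k Poisson events by time t)
p_poisson = 1 - stats.poisson.cdf(k - 1, lam * t)
print(p_gamma, p_poisson)
```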
|
5,546
|
Rigorous definition of an outlier?
|
As long as your data comes from a known distribution with known properties, you can rigorously define an outlier as an event that is too unlikely to have been generated by the observed process (if you consider "too unlikely" to be non-rigorous, then all hypothesis testing is).
However, this approach is problematic on two levels: It assumes that the data comes from a known distribution with known properties, and it brings the risk that outliers are looked at as data points that were smuggled into your data set by some magical faeries.
In the absence of magical data faeries, all data comes from your experiment, and thus it is actually not possible to have outliers, just weird results. These can come from recording errors (e.g. a 400000-bedroom house for 4 dollars), systematic measurement issues (the image analysis algorithm reports huge areas if the object is too close to the border), experimental problems (sometimes, crystals precipitate out of the solution, which gives a very high signal), or features of your system (a cell can sometimes divide in three instead of two), but they can also be the result of a mechanism that no one's ever considered because it's rare and you're doing research, which means that some of the stuff you do is simply not known yet.
Ideally, you take the time to investigate every outlier, and only remove it from your data set once you understand why it doesn't fit your model. This is time-consuming and subjective in that the reasons are highly dependent on the experiment, but the alternative is worse: If you don't understand where the outliers came from, you have the choice between letting outliers "mess up" your results, or defining some "mathematically rigorous" approach to hide your lack of understanding. In other words, by pursuing "mathematical rigorousness" you choose between not getting a significant effect and not getting into heaven.
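A minimal sketch of the "too unlikely under a known model" idea from the first paragraph (the Bonferroni-style threshold and the function name are my own illustration, not a standard rule):

```python
from scipy import stats

def flag_outliers(data, mu, sigma, alpha=0.05):
    """Flag points too unlikely under an assumed N(mu, sigma) model,
    using a two-sided tail test with a Bonferroni correction for the
    number of points examined."""
    n = len(data)
    flagged = []
    for x in data:
        p = 2 * stats.norm.sf(abs(x - mu) / sigma)  # two-sided tail prob.
        if p < alpha / n:                           # correct for n looks
            flagged.append(x)
    return flagged

# Only the planted extreme value is flagged under the assumed model
print(flag_outliers([0.2, -1.1, 0.8, -0.4, 1.9, 8.0], mu=0.0, sigma=1.0))
```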
EDIT
If all you have is a list of numbers without knowing where they come from, you have no way of telling whether some data point is an outlier, because you can always assume a distribution where all data are inliers.
|
5,547
|
Rigorous definition of an outlier?
|
You are correct that removing outliers can look like a subjective exercise but that doesn't mean that it's wrong. The compulsive need to always have a rigorous mathematical reason for every decision regarding your data analysis is often just a thin veil of artificial rigour over what turns out to be a subjective exercise anyway. This is especially true if you want to apply the same mathematical justification to every situation you come across. (If there were bulletproof clear mathematical rules for everything then you wouldn't need a statistician.)
For example, in your long tail distribution situation, there's no guaranteed method to just decide from the numbers whether you've got one underlying distribution of interest with outliers or two underlying distributions of interest with outliers being part of only one of them. Or, heaven forbid, just the actual distribution of data.
The more data you collect, the further you get into the low-probability regions of a distribution. If you collect 20 samples it's very unlikely you'll get a value with a z-score of 3.5. If you collect 10,000 samples it's very likely you'll get one, and it's a natural part of the distribution. Given that, how do you decide to exclude a point just because it is extreme?
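This point is easy to quantify (a sketch; the z-score of 3.5 and the sample sizes follow the text):

```python
from scipy import stats

z = 3.5
p_single = 2 * stats.norm.sf(z)   # P(|Z| >= 3.5) for a single observation

for n in (20, 10_000):
    # Probability of seeing at least one |z| >= 3.5 among n independent draws
    p_any = 1 - (1 - p_single) ** n
    print(n, p_any)
```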
Selecting the best methods in general for analysis is often subjective. Whether it's unreasonably subjective depends on the explanation for the decision and on the outlier.
|
5,548
|
Rigorous definition of an outlier?
|
I don't think it is possible to define an outlier without assuming a model of the underlying process giving rise to the data. Without such a model we have no frame of reference to decide whether the data are anomalous or "wrong". The definition of an outlier that I have found useful is that an outlier is an observation (or observations) that cannot be reconciled to a model that otherwise performs well.
|
5,549
|
Rigorous definition of an outlier?
|
There are many excellent answers here. However, I want to point out that two questions are being confused. The first is, 'what is an outlier?', and more specifically to give a "rigorous definition" of such. This is simple:
An outlier is a data point that comes from a different population /
distribution / data generating process than the one you intended to
study / the rest of your data.
The second question is 'how do I know / detect that a data point is an outlier?' Unfortunately, this is very difficult. However, the answers given here (which really are very good, and which I can't improve upon) will be quite helpful with that task.
|
5,550
|
Rigorous definition of an outlier?
|
Definition 1: As already mentioned, an outlier in a group of data reflecting the same process (say process A) is an observation (or a set of observations) that is unlikely to be a result of process A.
This definition certainly involves an estimation of the likelihood function of the process A (hence a model) and setting what unlikely means (i.e. deciding where to stop...). This definition is at the root of the answer I gave here. It is more related to ideas of hypothesis testing of significance or goodness of fit.
Definition 2: An outlier is an observation $x$ in a group of observations $G$ such that, when modeling the group with a given model, the accuracy is higher if $x$ is removed and treated separately (with a mixture, in the spirit of what I mention here).
This definition involves a "given model" and a measure of accuracy. I think this definition is more practical and closer to the origin of the concept: originally, outlier detection was a tool for robust statistics.
Obviously these definitions can be made very similar if you understand that calculating likelihood in the first definition involves modeling and calculation of a score :)
|
5,551
|
Rigorous definition of an outlier?
|
An outlier is a data point that is inconvenient to me, given my current understanding of the process that generates this data.
I believe this definition is as rigorous as can be made.
|
5,552
|
Rigorous definition of an outlier?
|
Define an outlier as a member of the minimal set of elements that must be removed from a dataset of size $n$ in order to assure 100% compliance with RUM tests conducted at the 95% confidence level on all $(2^n - 1)$ unique subsets of the data. See the Karian and Dudewicz text on fitting data to pdfs using R (Sept 2010) for the definition of the RUM test.
|
5,553
|
Rigorous definition of an outlier?
|
Outliers are important only in the frequentist realm. If a single datapoint adds bias to your model, which is defined by an underlying distribution predetermined by your theory, then it is an outlier for that model. The subjectivity lies in the fact that if your theory posits a different model, then you can have a different set of points as outliers.
|
5,554
|
Statistical models cheat sheet
|
I have previously found UCLA's "Choosing the Correct Statistical Test" to be helpful:
https://stats.idre.ucla.edu/other/mult-pkg/whatstat/
It also gives examples of how to do the analysis in SAS, Stata, SPSS and R.
|
5,555
|
Statistical models cheat sheet
|
Do you mean a statistical analysis decision tree? (google search), like this (only with extensions):
(source: processma.com)
?
BTW, notice that the chart is wrong in that the tests it offers for the median are not for the median but for rank... (they would be for the median if the distribution is symmetrical)
|
5,556
|
Statistical models cheat sheet
|
Reading "Using Multivariate Statistics (4th Edition) Barbara G. Tabachnick"
I found these decision trees based on major research question. I think they are quite useful. Following this link you'll find an extract of the book
http://www.psychwiki.com/images/d/d8/TF2.pdf
see pages 29 to 31
|
5,557
|
Statistical models cheat sheet
|
Here is a collection page:
http://sasdataguru.blogspot.com/2011/05/online-statistics-cheat-sheet.html
|
5,558
|
Statistical models cheat sheet
|
Since when is regression a hypothesis test of anything? If by "regression" what is meant is curve fitting or correlations (pair-wise or multiple), the only "test" is between some relation vs. no relation. Figures like this owe their origin to Siegel's 1956 book.
|
5,559
|
Optimized implementations of the Random Forest algorithm
|
(Updated 6 IX 2015 with suggestions from comments, also made CW)
There are two new, nice packages available for R which are pretty well optimised for certain conditions:
ranger -- C++, R package, optimised for $p>>n$ problems, parallel, special treatment of GWAS data.
Arborist -- C++, R and Python bindings, optimised for large-$n$ problems, apparently plans for GPGPU.
Other RF implementations:
The Original One -- standalone Fortran code, not parallel, pretty hard to use.
randomForest -- C, R package, probably the most popular, not parallel, actually quite fast when compared on a single-core speed basis, especially for small data.
randomForestSRC -- C, R package, clone of randomForest supporting parallel processing and survival problems.
party -- C, R package, quite slow, but designed as a platform for experimenting with RF.
bigrf -- C++/R, R package, built to work on big data within the bigmemory framework; quite far from being complete.
scikit learn Ensemble forest -- Python, part of scikit-learn framework, parallel, implements many variants of RF.
milk's RF -- Python, part of milk framework.
so-called WEKA rf -- Java/WEKA, parallel.
ALGLIB
rt-rank -- abandoned?
Ranger paper has some speed/memory comparisons, but there is no thorough benchmark.
|
5,560
|
Optimized implementations of the Random Forest algorithm
|
As far as I know, the R version of randomForest calls the same Fortran code as the original version. Furthermore, it's trivial to parallelize the randomForest function. It's actually one of the examples provided in the foreach documentation.
library(foreach)
library(doParallel)  # backend so %dopar% actually runs in parallel
library(randomForest)
registerDoParallel(cores = 4)
rf <- foreach(ntree = rep(250, 4), .combine = combine, .packages = "randomForest") %dopar%
    randomForest(x, y, ntree = ntree)
Given that random forests are embarrassingly parallel, the biggest optimization you can make is running them in parallel. After that, I don't think there's any other low-hanging fruit in the algorithm, but I could be wrong.
The only issue is that you lose the out-of-bag error estimate in the combined forest, but there's probably a simple way to calculate it (I'd actually love to find out how to do this).
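As a point of comparison, scikit-learn's random forest (listed among the implementations above) parallelizes tree growing with `n_jobs` and still reports an out-of-bag estimate via `oob_score`, so the OOB estimate is not lost when training in parallel. A minimal sketch (the dataset is synthetic, for illustration only):

```python
# Parallel random forest with an out-of-bag score in scikit-learn.
# Unlike combining separately grown forests, scikit-learn tracks which
# rows were out-of-bag for each tree, so the OOB estimate survives.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

rf = RandomForestClassifier(
    n_estimators=1000,   # 4 x 250, as in the R example above
    oob_score=True,      # estimate generalization error from out-of-bag rows
    n_jobs=-1,           # grow trees on all available cores
    random_state=0,
)
rf.fit(X, y)
# rf.oob_score_ now holds the out-of-bag accuracy estimate
```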
|
5,561
|
Optimized implementations of the Random Forest algorithm
|
The ELSII used randomForest (see e.g., footnote 3 p.591), which is an R implementation of Breiman and Cutler's Fortran code from Salford. Andy Liaw's code is in C.
There's another implementation of RFs proposed in the party package (in C), which relies on R/Lapack, which has some dependencies on BLAS (see/include/R_ext/Lapack.h in your base R directory).
As far as bagging is concerned, it should not be too hard to parallelize it, but I'll let more specialized users answer on this aspect.
|
5,562
|
Optimized implementations of the Random Forest algorithm
|
The team behind randomJungle claims that it is an order of magnitude faster than the R randomForest implementation and uses an order of magnitude less memory.
A package for randomJungle is being developed for R, but I can't get it to build yet.
https://r-forge.r-project.org/projects/rjungler/
|
5,563
|
Optimized implementations of the Random Forest algorithm
|
For the Javascript Implementation go through this demo.
If you are like a child who is hungry for chocolate, here is your chocolate of random forest:
http://cs.stanford.edu/people/karpathy/svmjs/demo/demoforest.html
|
5,564
|
whether to rescale indicator / binary / dummy predictors for LASSO
|
According to Tibshirani (The Lasso Method for Variable Selection in the Cox Model, Statistics in Medicine, Vol. 16, 385-395 (1997)), who literally wrote the book on regularization methods, you should standardize the dummies. However, you then lose the straightforward interpretability of your coefficients. If you don't, your variables are not on an even playing field. You are essentially tipping the scales in favor of your continuous variables (most likely). So, if your primary goal is model selection then this is an egregious error. However, if you are more interested in interpretation then perhaps this isn't the best idea.
The recommendation is on page 394:
The lasso method requires initial standardization of the regressors, so that the penalization scheme is fair to all regressors. For categorical regressors, one codes the regressor with dummy variables and then standardizes the dummy variables. As pointed out by a referee, however, the relative scaling between continuous and categorical variables in this scheme can be somewhat arbitrary.
|
5,565
|
whether to rescale indicator / binary / dummy predictors for LASSO
|
Andrew Gelman's blog post, When to standardize regression inputs and when to leave them alone, is also worth a look. This part in particular is relevant:
For comparing coefficients for different predictors within a model, standardizing gets the nod. (Although I don’t standardize binary inputs. I code them as 0/1, and then I standardize all other numeric inputs by dividing by two standard deviations, thus putting them on approximately the same scale as 0/1 variables.)
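Gelman's scheme is straightforward to implement by hand: leave binary inputs as 0/1 and center-and-scale the rest by two standard deviations. A sketch (the column names and values are made up for illustration):

```python
import pandas as pd

# Hypothetical design matrix: one binary input and two continuous inputs.
df = pd.DataFrame({
    "treated": [0, 1, 0, 1, 1, 0],          # binary: left as 0/1
    "age":     [23.0, 45.0, 31.0, 52.0, 38.0, 27.0],
    "income":  [30.0, 80.0, 45.0, 90.0, 60.0, 35.0],
})

def gelman_standardize(df, binary_cols):
    """Leave binary columns as 0/1; center others and divide by two SDs."""
    out = df.copy()
    for col in df.columns:
        if col not in binary_cols:
            out[col] = (df[col] - df[col].mean()) / (2 * df[col].std())
    return out

z = gelman_standardize(df, binary_cols=["treated"])
```

After this transformation each continuous column has standard deviation 0.5, so its regression coefficient is comparable in magnitude to the coefficient on a 0/1 input.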
|
5,566
|
whether to rescale indicator / binary / dummy predictors for LASSO
|
This is more of a comment, but too long. One of the most widely used software packages for the lasso (and friends) is R's glmnet. From the help page, printed by ?glmnet:
standardize: Logical flag for x variable standardization, prior to
fitting the model sequence. The coefficients are always
returned on the original scale. Default is
‘standardize=TRUE’. If variables are in the same units
already, you might not wish to standardize. See details below
for y standardization with ‘family="gaussian"’.
standardize is one of the arguments and defaults to TRUE. So the $X$ variables are usually standardized, and this includes dummies (since there is no mention of an exception for them). But the coefficients are reported on the original scale.
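The same convention is easy to mimic outside glmnet: standardize every column (dummies included) before fitting, then map the coefficients back to the original scale by dividing each one by its column's standard deviation. A sketch using scikit-learn's Lasso on simulated data (not glmnet itself; the data-generating coefficients are made up):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)                  # continuous predictor
d = rng.integers(0, 2, size=n).astype(float)   # 0/1 dummy
y = 2.0 * x1 + 1.5 * d + rng.normal(scale=0.1, size=n)

X = np.column_stack([x1, d])
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd                       # standardize every column, dummy included

fit = Lasso(alpha=0.01).fit(Xs, y)
beta_original_scale = fit.coef_ / sd     # report on the original scale, as glmnet does
```

With this small penalty the back-transformed coefficients land close to the true values (2.0 and 1.5), while the penalty itself was applied on the standardized scale.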
|
5,567
|
How to do community detection in a weighted social network/graph?
|
The igraph implementation of Newman's modularity clustering (the fastgreedy function) can be used with weighted edges as well. Just add a weight attribute to the edges and analyse as usual. In my experience, it runs even faster with weights as there are fewer ties.
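NetworkX offers an analogous greedy-modularity routine that also honors edge weights; a minimal sketch using NetworkX rather than igraph (the toy graph is made up for illustration):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Two tightly connected triangles joined by a single weak tie.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 5.0), ("b", "c", 5.0), ("a", "c", 5.0),  # cluster 1
    ("d", "e", 5.0), ("e", "f", 5.0), ("d", "f", 5.0),  # cluster 2
    ("c", "d", 0.1),                                    # weak bridge
])

# Passing weight="weight" makes the modularity calculation use edge weights.
communities = greedy_modularity_communities(G, weight="weight")
```

Here the weak bridge carries little modularity, so the two triangles come out as separate communities.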
|
5,568
|
How to do community detection in a weighted social network/graph?
|
I know that Gephi can process undirected weighted graphs, but I seem to remember it has to be stored in GDF, which is pretty close to CSV, or Ucinet DL. Be aware that it's still an alpha release.
Now, about clustering your graph, Gephi seems to lack clustering pipelines, except for the MCL algorithm that is now available in the latest version. There was a Google Code Project in 2009, Gephi Network Statistics (featuring e.g. Newman’s modularity metric), but I don't know if something has been released in this direction. Anyway, it seems to allow some kind of modularity/clustering computations, but see also Social Network Analysis using R and Gephi and Data preparation for Social Network Analysis using R and Gephi (Many thanks to @Tal).
If you are used to Python, it is worth trying NetworkX (Here is an example of a weighted graph with the corresponding code). Then you have many ways to carry out your analysis.
You should also look at INSNA - Social Network Analysis Software or Tim Evans's webpage about Complex Networks and Complexity.
|
5,569
|
How to do community detection in a weighted social network/graph?
|
Gephi implements the Louvain Modularity method: http://wiki.gephi.org/index.php/Modularity
cheers
|
5,570
|
How to do community detection in a weighted social network/graph?
|
The Louvain modularity algorithm is available in C++:
https://sites.google.com/site/findcommunities/
It deals with weighted networks of millions of nodes and edges, and has been demonstrated to be much faster than Newman's algorithm.
|
5,571
|
How to do community detection in a weighted social network/graph?
|
If you are using python, and have created a weighted graph using NetworkX, then you can use python-louvain for clustering. Where G is a weighted graph:
import community
partition = community.best_partition(G, weight='weight')
|
5,572
|
How to do community detection in a weighted social network/graph?
|
I just came across the tnet package for R. The creator seems to be researching on community discovery in weighted and bipartite (two-mode) graphs.
http://opsahl.co.uk/tnet/content/view/15/27/
I have not yet used it.
|
5,573
|
How to do community detection in a weighted social network/graph?
|
SLPA (now called GANXiS) is a fast algorithm capable of detecting both disjoint and overlapping communities in social networks (undirected/directed and unweighted/weighted). The algorithm has been shown to produce meaningful results on real-world social and gene networks. It is one of the state-of-the-art methods. It is available at
https://sites.google.com/site/communitydetectionslpa/
See a nice review arxiv.org/abs/1110.5813 for more info
|
5,574
|
How to do community detection in a weighted social network/graph?
|
I have a Java implementation for non-overlapping, weighted/unweighted networks that could probably handle 3 million nodes (I've tested it on a million-node dataset). However, it works like k-means and needs the number of partitions to be detected as an input (k in k-means). You can find more info here, and here is the code, on GitHub.
Cheers,
|
How to do community detection in a weighted social network/graph?
|
I have a Java implementation for non-overlapping, weighted/unweighted networks that can probably handle 3 million nodes (I've tested it on a million-node dataset). However, it works like k-means, and
|
How to do community detection in a weighted social network/graph?
I have a Java implementation for non-overlapping, weighted/unweighted networks that can probably handle 3 million nodes (I've tested it on a million-node dataset). However, it works like k-means and needs the number of partitions to detect as an input (k in k-means). You can find more info here, and here is the code on GitHub
Cheers,
|
How to do community detection in a weighted social network/graph?
I have a Java implementation for non-overlapping, weighted/unweighted networks that can probably handle 3 million nodes (I've tested it on a million-node dataset). However, it works like k-means, and
|
5,575
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
amoeba already gave a good answer in the comments, but if you want a formal argument, here it goes.
The singular value decomposition of a matrix $A$ is $A=U\Sigma V^T$, where the columns of $V$ are eigenvectors of $A^TA$ and the diagonal entries of $\Sigma$ are the square roots of its eigenvalues, i.e. $\sigma_{ii}=\sqrt{\lambda_i(A^TA)}$.
As you know, the principal components are the orthogonal projections of your variables onto the space of the eigenvectors of the empirical covariance matrix $\frac{1}{n-1}A^TA$. The variance of the components is given by its eigenvalues, $\lambda_i(\frac{1}{n-1}A^TA)$.
Consider any square matrix $B$, $\alpha \in \mathbb R$ and a vector $v$ such that $Bv=\lambda v$. Then
$B^kv=\lambda^kv$
$\lambda(\alpha B) = \alpha\lambda( B)$
Let us define $S=\frac{1}{n-1}A^TA$. The SVD of $S$ will compute the eigendecomposition of $S^TS=\frac{1}{(n-1)^2}A^TAA^TA$ to yield
the eigenvectors of $(A^TA)^TA^TA=A^TAA^TA$, which by property 1 are those of $A^TA$
the square roots of the eigenvalues of $\frac{1}{(n-1)^2}A^TAA^TA$, which by property 2, then 1, then 2 again, are $\sqrt{\frac{1}{(n-1)^2} \lambda_i(A^TAA^TA)} = \sqrt{\frac{1}{(n-1)^2} \lambda_i^2(A^TA)} = \frac{1}{n-1}\lambda_i(A^TA) = \lambda_i(\frac{1}{n-1}A^TA)$.
Voilà!
Regarding the numerical stability, one would need to figure out which algorithms are employed. If you're up to it, I believe these are the LAPACK routines used by numpy:
numpy.linalg.eig
numpy.linalg.svd
Update: On the stability, the SVD implementation seems to be using a divide-and-conquer approach, while the eigendecomposition uses a plain QR algorithm. I cannot access some relevant SIAM papers from my institution (blame research cutbacks) but I found something that might support the assessment that the SVD routine is more stable.
In
Nakatsukasa, Yuji, and Nicholas J. Higham. "Stable and efficient spectral divide and conquer algorithms for the symmetric eigenvalue decomposition and the SVD." SIAM Journal on Scientific Computing 35.3 (2013): A1325-A1349.
they compare the stability of various eigenvalue algorithms, and it seems that the divide-and-conquer approach (they use the same one as numpy in one of the experiments!) is more stable than the QR algorithm. This, together with claims elsewhere that D&C methods are indeed more stable, supports Ng's choice.
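As a quick numerical sanity check of the derivation above, here is a short NumPy sketch (variable names are mine) verifying that for a symmetric positive semi-definite matrix such as the empirical covariance, the singular values returned by SVD coincide with the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 4
A = rng.standard_normal((n, p))
A -= A.mean(axis=0)              # center the data

S = A.T @ A / (n - 1)            # empirical covariance matrix
sing_vals = np.linalg.svd(S, compute_uv=False)        # descending order
eig_vals = np.sort(np.linalg.eigvalsh(S))[::-1]       # descending order

# For a symmetric PSD matrix, sqrt(lambda_i(S^T S)) = lambda_i(S),
# so SVD applied to S recovers the eigenvalues of S directly.
assert np.allclose(sing_vals, eig_vals)
```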
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
amoeba already gave a good answer in the comments, but if you want a formal argument, here it goes.
The singular value decomposition of a matrix $A$ is $A=U\Sigma V^T$, where the columns of $V$ are ei
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
amoeba already gave a good answer in the comments, but if you want a formal argument, here it goes.
The singular value decomposition of a matrix $A$ is $A=U\Sigma V^T$, where the columns of $V$ are eigenvectors of $A^TA$ and the diagonal entries of $\Sigma$ are the square roots of its eigenvalues, i.e. $\sigma_{ii}=\sqrt{\lambda_i(A^TA)}$.
As you know, the principal components are the orthogonal projections of your variables onto the space of the eigenvectors of the empirical covariance matrix $\frac{1}{n-1}A^TA$. The variance of the components is given by its eigenvalues, $\lambda_i(\frac{1}{n-1}A^TA)$.
Consider any square matrix $B$, $\alpha \in \mathbb R$ and a vector $v$ such that $Bv=\lambda v$. Then
$B^kv=\lambda^kv$
$\lambda(\alpha B) = \alpha\lambda( B)$
Let us define $S=\frac{1}{n-1}A^TA$. The SVD of $S$ will compute the eigendecomposition of $S^TS=\frac{1}{(n-1)^2}A^TAA^TA$ to yield
the eigenvectors of $(A^TA)^TA^TA=A^TAA^TA$, which by property 1 are those of $A^TA$
the square roots of the eigenvalues of $\frac{1}{(n-1)^2}A^TAA^TA$, which by property 2, then 1, then 2 again, are $\sqrt{\frac{1}{(n-1)^2} \lambda_i(A^TAA^TA)} = \sqrt{\frac{1}{(n-1)^2} \lambda_i^2(A^TA)} = \frac{1}{n-1}\lambda_i(A^TA) = \lambda_i(\frac{1}{n-1}A^TA)$.
Voilà!
Regarding the numerical stability, one would need to figure out which algorithms are employed. If you're up to it, I believe these are the LAPACK routines used by numpy:
numpy.linalg.eig
numpy.linalg.svd
Update: On the stability, the SVD implementation seems to be using a divide-and-conquer approach, while the eigendecomposition uses a plain QR algorithm. I cannot access some relevant SIAM papers from my institution (blame research cutbacks) but I found something that might support the assessment that the SVD routine is more stable.
In
Nakatsukasa, Yuji, and Nicholas J. Higham. "Stable and efficient spectral divide and conquer algorithms for the symmetric eigenvalue decomposition and the SVD." SIAM Journal on Scientific Computing 35.3 (2013): A1325-A1349.
they compare the stability of various eigenvalue algorithms, and it seems that the divide-and-conquer approach (they use the same one as numpy in one of the experiments!) is more stable than the QR algorithm. This, together with claims elsewhere that D&C methods are indeed more stable, supports Ng's choice.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
amoeba already gave a good answer in the comments, but if you want a formal argument, here it goes.
The singular value decomposition of a matrix $A$ is $A=U\Sigma V^T$, where the columns of $V$ are ei
|
5,576
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
@amoeba had excellent answers to PCA questions, including this one on the relation of SVD to PCA. Answering your exact question, I'll make three points:
mathematically there is no difference whether you calculate PCA on the data matrix directly or on its covariance matrix
the difference is purely due to numerical precision and complexity. Applying SVD directly to the data matrix is numerically more stable than to the covariance matrix
SVD can be applied to the covariance matrix to perform PCA or obtain eigen values, in fact, it's my favorite method of solving eigen problems
It turns out that SVD is more stable than typical eigenvalue decomposition procedures, especially for machine learning. In machine learning it is easy to end up with highly collinear regressors. SVD works better in these cases.
Here's Python code to demo the point. I created a highly collinear data matrix, got its covariance matrix and tried to obtain the eigenvalues of the latter. SVD is still working, while ordinary eigen decomposition fails in this case.
import numpy as np
import math
from numpy import linalg as LA
np.random.seed(1)
# create the highly collinear series
T = 1000
X = np.random.rand(T,2)
eps = 1e-11
X[:,1] = X[:,0] + eps*X[:,1]
C = np.cov(np.transpose(X))
print('Cov: ',C)
U, s, V = LA.svd(C)
print('SVDs: ',s)
w, v = LA.eig(C)
print('eigen vals: ',w)
Output:
Cov: [[ 0.08311516 0.08311516]
[ 0.08311516 0.08311516]]
SVDs: [ 1.66230312e-01 5.66687522e-18]
eigen vals: [ 0. 0.16623031]
Update
In answer to Federico Poloni's comment, here is the code with stability testing of SVD vs Eig on 1000 random samples of the same matrix above. In many cases Eig reports a small eigenvalue of exactly 0, which would imply singularity of the matrix, while SVD does not do this here. SVD is about twice as precise in determining the small eigenvalue, which may or may not be important depending on your problem.
import numpy as np
import math
from scipy.linalg import toeplitz
from numpy import linalg as LA
np.random.seed(1)
# create the highly collinear series
T = 100
p = 2
eps = 1e-8
m = 1000 # simulations
err = np.ones((m,2)) # accuracy of small eig value
for j in range(m):
    u = np.random.rand(T,p)
    X = np.ones(u.shape)
    X[:,0] = u[:,0]
    for i in range(1,p):
        X[:,i] = eps*u[:,i]+u[:,0]
    C = np.cov(np.transpose(X))
    U, s, V = LA.svd(C)
    w, v = LA.eig(C)
    # true eigen values
    te = eps**2/2 * np.var(u[:,1])*(1-np.corrcoef(u,rowvar=False)[0,1]**2)
    err[j,0] = s[p-1] - te
    err[j,1] = np.amin(w) - te
print('Cov: ',C)
print('SVDs: ',s)
print('eigen vals: ',w)
print('true small eigenvals: ',te)
acc = np.mean(np.abs(err),axis=0)
print("small eigenval, accuracy SVD, Eig: ",acc[0]/te,acc[1]/te)
Output:
Cov: [[ 0.09189421 0.09189421]
[ 0.09189421 0.09189421]]
SVDs: [ 0.18378843 0. ]
eigen vals: [ 1.38777878e-17 1.83788428e-01]
true small eigenvals: 4.02633695086e-18
small eigenval, accuracy SVD, Eig: 2.43114702041 3.31970128319
Here is how the code works. Instead of generating a random covariance matrix to test the routines, I'm generating a random data matrix with two variables:
$$x_1=u\\
x_2=u+\varepsilon v$$
where $u,v$ - independent uniform random variables. So, the covariance matrix is
$$\begin{pmatrix}
\sigma_1^2 & \sigma_1^2 + \varepsilon \rho \sigma_1 \sigma_2\\
\sigma_1^2 + \varepsilon \rho \sigma_1 \sigma_2 & \sigma_1^2 + 2 \varepsilon \rho \sigma_1 \sigma_2 + \varepsilon^2 \sigma_2^2\end{pmatrix}$$
where $\sigma_1^2,\sigma_2^2,\rho$ are the variances of the uniforms and the correlation coefficient between them.
Its smallest eigenvalue:
$$\lambda= \frac 1 2 \left(\sigma_2^2 \varepsilon^2 - \sqrt{\sigma_2^4 \varepsilon^4 + 4 \sigma_2^3 \rho \sigma_1 \varepsilon^3 + 8 \sigma_2^2 \rho^2 \sigma_1^2 \varepsilon^2 + 8 \sigma_2 \rho \sigma_1^3 \varepsilon + 4 \sigma_1^4} + 2 \sigma_2 \rho \sigma_1 \varepsilon + 2 \sigma_1^2\right)$$
The small eigenvalue can't be calculated by simply plugging the $\varepsilon$ into formula due to limited precision, so you need to Taylor expand it:
$$\lambda\approx \sigma_2^2 \varepsilon^2 (1-\rho^2)/2$$
I run $j=1,\dots,m$ simulations of the realizations of the data matrix, calculate the eigenvalues of the simulated covariance matrix $\hat\lambda_j$, and obtain the errors $e_j=\lambda-\hat\lambda_j$.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
@amoeba had excellent answers to PCA questions, including this one on the relation of SVD to PCA. Answering your exact question, I'll make three points:
mathematically there is no difference whether yo
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
@amoeba had excellent answers to PCA questions, including this one on the relation of SVD to PCA. Answering your exact question, I'll make three points:
mathematically there is no difference whether you calculate PCA on the data matrix directly or on its covariance matrix
the difference is purely due to numerical precision and complexity. Applying SVD directly to the data matrix is numerically more stable than to the covariance matrix
SVD can be applied to the covariance matrix to perform PCA or obtain eigen values, in fact, it's my favorite method of solving eigen problems
It turns out that SVD is more stable than typical eigenvalue decomposition procedures, especially for machine learning. In machine learning it is easy to end up with highly collinear regressors. SVD works better in these cases.
Here's Python code to demo the point. I created a highly collinear data matrix, got its covariance matrix and tried to obtain the eigenvalues of the latter. SVD is still working, while ordinary eigen decomposition fails in this case.
import numpy as np
import math
from numpy import linalg as LA
np.random.seed(1)
# create the highly collinear series
T = 1000
X = np.random.rand(T,2)
eps = 1e-11
X[:,1] = X[:,0] + eps*X[:,1]
C = np.cov(np.transpose(X))
print('Cov: ',C)
U, s, V = LA.svd(C)
print('SVDs: ',s)
w, v = LA.eig(C)
print('eigen vals: ',w)
Output:
Cov: [[ 0.08311516 0.08311516]
[ 0.08311516 0.08311516]]
SVDs: [ 1.66230312e-01 5.66687522e-18]
eigen vals: [ 0. 0.16623031]
Update
In answer to Federico Poloni's comment, here is the code with stability testing of SVD vs Eig on 1000 random samples of the same matrix above. In many cases Eig reports a small eigenvalue of exactly 0, which would imply singularity of the matrix, while SVD does not do this here. SVD is about twice as precise in determining the small eigenvalue, which may or may not be important depending on your problem.
import numpy as np
import math
from scipy.linalg import toeplitz
from numpy import linalg as LA
np.random.seed(1)
# create the highly collinear series
T = 100
p = 2
eps = 1e-8
m = 1000 # simulations
err = np.ones((m,2)) # accuracy of small eig value
for j in range(m):
    u = np.random.rand(T,p)
    X = np.ones(u.shape)
    X[:,0] = u[:,0]
    for i in range(1,p):
        X[:,i] = eps*u[:,i]+u[:,0]
    C = np.cov(np.transpose(X))
    U, s, V = LA.svd(C)
    w, v = LA.eig(C)
    # true eigen values
    te = eps**2/2 * np.var(u[:,1])*(1-np.corrcoef(u,rowvar=False)[0,1]**2)
    err[j,0] = s[p-1] - te
    err[j,1] = np.amin(w) - te
print('Cov: ',C)
print('SVDs: ',s)
print('eigen vals: ',w)
print('true small eigenvals: ',te)
acc = np.mean(np.abs(err),axis=0)
print("small eigenval, accuracy SVD, Eig: ",acc[0]/te,acc[1]/te)
Output:
Cov: [[ 0.09189421 0.09189421]
[ 0.09189421 0.09189421]]
SVDs: [ 0.18378843 0. ]
eigen vals: [ 1.38777878e-17 1.83788428e-01]
true small eigenvals: 4.02633695086e-18
small eigenval, accuracy SVD, Eig: 2.43114702041 3.31970128319
Here is how the code works. Instead of generating a random covariance matrix to test the routines, I'm generating a random data matrix with two variables:
$$x_1=u\\
x_2=u+\varepsilon v$$
where $u,v$ - independent uniform random variables. So, the covariance matrix is
$$\begin{pmatrix}
\sigma_1^2 & \sigma_1^2 + \varepsilon \rho \sigma_1 \sigma_2\\
\sigma_1^2 + \varepsilon \rho \sigma_1 \sigma_2 & \sigma_1^2 + 2 \varepsilon \rho \sigma_1 \sigma_2 + \varepsilon^2 \sigma_2^2\end{pmatrix}$$
where $\sigma_1^2,\sigma_2^2,\rho$ are the variances of the uniforms and the correlation coefficient between them.
Its smallest eigenvalue:
$$\lambda= \frac 1 2 \left(\sigma_2^2 \varepsilon^2 - \sqrt{\sigma_2^4 \varepsilon^4 + 4 \sigma_2^3 \rho \sigma_1 \varepsilon^3 + 8 \sigma_2^2 \rho^2 \sigma_1^2 \varepsilon^2 + 8 \sigma_2 \rho \sigma_1^3 \varepsilon + 4 \sigma_1^4} + 2 \sigma_2 \rho \sigma_1 \varepsilon + 2 \sigma_1^2\right)$$
The small eigenvalue can't be calculated by simply plugging the $\varepsilon$ into formula due to limited precision, so you need to Taylor expand it:
$$\lambda\approx \sigma_2^2 \varepsilon^2 (1-\rho^2)/2$$
I run $j=1,\dots,m$ simulations of the realizations of the data matrix, calculate the eigenvalues of the simulated covariance matrix $\hat\lambda_j$, and obtain the errors $e_j=\lambda-\hat\lambda_j$.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
@amoeba had excellent answers to PCA questions, including this one on the relation of SVD to PCA. Answering your exact question, I'll make three points:
mathematically there is no difference whether yo
|
5,577
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
For Python users, I'd like to point out that for symmetric matrices (like the covariance matrix), it is better to use numpy.linalg.eigh function instead of a general numpy.linalg.eig function.
eigh is 9-10 times faster than eig on my computer (regardless of matrix size) and has better accuracy (based on @Aksakal's accuracy test).
I am not convinced by the demonstration of the accuracy benefit of SVD with small eigenvalues. @Aksakal's test is 1-2 orders of magnitude more sensitive to the random state than to the algorithm (try plotting all errors instead of reducing them to one absolute maximum). This means that small errors in the covariance matrix will have a greater effect on accuracy than the choice of eigendecomposition algorithm. Also, this is not related to the main question, which is about PCA. The smallest components are ignored in PCA.
A similar argument can be made about numerical stability. If I have to use the covariance matrix method for PCA, I would decompose it with eigh instead of svd. If it fails (which has not been demonstrated here yet), then it is probably worth rethinking the problem that you are trying to solve before starting to look for a better algorithm.
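To illustrate the point, here is a minimal comparison (my own sketch): eigh returns eigenvalues in ascending order, so they need reversing for PCA, and for a symmetric PSD covariance matrix they match the singular values exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
C = np.cov(X, rowvar=False)      # symmetric PSD covariance matrix

# eigh exploits symmetry; eigenvalues come back in ascending order
w, V = np.linalg.eigh(C)
w, V = w[::-1], V[:, ::-1]       # reorder descending, as PCA expects

s = np.linalg.svd(C, compute_uv=False)
assert np.allclose(w, s)                   # same spectrum as SVD
assert np.allclose(V.T @ V, np.eye(5))     # orthonormal eigenvectors
```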
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
For Python users, I'd like to point out that for symmetric matrices (like the covariance matrix), it is better to use numpy.linalg.eigh function instead of a general numpy.linalg.eig function.
eigh is
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
For Python users, I'd like to point out that for symmetric matrices (like the covariance matrix), it is better to use numpy.linalg.eigh function instead of a general numpy.linalg.eig function.
eigh is 9-10 times faster than eig on my computer (regardless of matrix size) and has better accuracy (based on @Aksakal's accuracy test).
I am not convinced by the demonstration of the accuracy benefit of SVD with small eigenvalues. @Aksakal's test is 1-2 orders of magnitude more sensitive to the random state than to the algorithm (try plotting all errors instead of reducing them to one absolute maximum). This means that small errors in the covariance matrix will have a greater effect on accuracy than the choice of eigendecomposition algorithm. Also, this is not related to the main question, which is about PCA. The smallest components are ignored in PCA.
A similar argument can be made about numerical stability. If I have to use the covariance matrix method for PCA, I would decompose it with eigh instead of svd. If it fails (which has not been demonstrated here yet), then it is probably worth rethinking the problem that you are trying to solve before starting to look for a better algorithm.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
For Python users, I'd like to point out that for symmetric matrices (like the covariance matrix), it is better to use numpy.linalg.eigh function instead of a general numpy.linalg.eig function.
eigh is
|
5,578
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
Some great answers have already been given to your questions, so I won't add a lot of new stuff. But I have tried (i) to base my answer on the knowledge you seem to have and (ii) to be as concise as possible. So you - or others in a similar situation - may find this answer helpful.
(Simple) Mathematical Explanation
SVD and the eigendecomposition are closely related. Let $X \in \mathbb{R}^{n \times p}$ be a real data matrix, so you may define its covariance matrix $C \in \mathbb{R}^{p\times p}$ as
\begin{equation}
C = \frac{1}{n} X^T X.
\end{equation}
1 | SVD of X
As you correctly stated, applying SVD on $X$ decomposes your original data in
\begin{equation*}
X = U S V^T
\end{equation*}
with $U \in \mathbb{R}^{n \times n}$, $V \in \mathbb{R}^{p \times p}$ being unitary, containing the (orthonormal) principal components and eigenvectors, respectively. The diagonal matrix $S \in \mathbb{R}^{n \times p}$ holds the singular values $s$.
2 | Eigendecomposition of C
Since $C$ is hermitian its eigendecomposition yields eigenvectors given by the unitary matrix $V$ with corresponding real eigenvalues $\lambda$ as entries of a diagonal matrix $\Lambda \in \mathbb{R}^{p \times p}$:
\begin{equation}
CV = V\Lambda
\end{equation}
In this case we may calculate the principal components by projecting the original data onto the eigenvectors, PCs $= X V$. Note that the variance of each principal component equals its corresponding eigenvalue.
3 | SVD of C
In order to answer your questions, recall that we can factorize $C$ - since it is symmetric - via
\begin{equation}
C = V\Lambda V^T
\end{equation}
using its eigenvectors and eigenvalues.
Note that this also follows by simply rearranging the equation from section (2). We can therefore obtain the eigenvectors of $C$ by applying the SVD to $C$.
With just a bit more effort we can now establish the relation between the singular values and the eigenvalues. Using the definition of the covariance we may as well write:
\begin{align}
C &= \frac{1}{n} ( U S V^T )^T ( U S V^T ) \\
&= \frac{1}{n} ( V S U^T U S V^T ) \\
&= \frac{1}{n} ( V S^2 V^T )
\end{align}
The last equation holds since $U$ is unitary, that is $U^T U = \mathbb{1}$. Now by simply comparing this result with that from above we find:
\begin{equation}
\frac{1}{n} ( V S^2 V^T ) = V \Lambda V^T \quad \Rightarrow \quad \lambda = \frac{s^2}{n}
\end{equation}
Python, NumPy and Algorithms
Just a basic example to explore the different behaviour of numpy's
linalg.svd(X)
linalg.svd(C)
linalg.eig(C)
linalg.eigh(C)
shows that linalg.eig() exhibits some (at least for me) unexpected behaviour. Calculating the matrix $V^TV=\mathbb{1}$ for all four cases, we can get a visual idea of the respective precision. It seems from the figure below that linalg.eig() only provides a stable solution up to dimension $d = \text{rank}(C) = \text{min}(n,p)$.
import numpy as np
import matplotlib.pyplot as plt

# Create random data
n,p = [100,300]
X = np.random.randn(n,p)
# Covariance matrix
C = X.T @ X /n
# Create figure environment
fig = plt.figure(figsize=(14,5))
ax1 = fig.add_subplot(141)
ax2 = fig.add_subplot(142)
ax3 = fig.add_subplot(143)
ax4 = fig.add_subplot(144)
# 1. SVD on X
# ---------------------
U,s,VT = np.linalg.svd(X)
V = VT.T
ax1.imshow(V.T@V,cmap='Reds',vmin=0,vmax=1)
# 2. SVD on C
# ---------------------
V,eigenvalues,VT = np.linalg.svd(C)
ax2.imshow(V.T@V,cmap='Reds',vmin=0,vmax=1)
# 3. Eigendecomposition on C
# -> linalg.eig()
# ---------------------
eigenvalues,V = np.linalg.eig(C)
sortIdx = np.argsort(eigenvalues)[::-1]
V = V[:,sortIdx]
ax3.imshow((V.T@V).real,cmap='Reds',vmin=0,vmax=1)
# 4. Eigendecomposition on C
# -> linalg.eigh()
# ---------------------
eigenvalues,V = np.linalg.eigh(C)
sortIdx = np.argsort(eigenvalues)[::-1]
V = V[:,sortIdx]
ax4.imshow((V.T@V).real,cmap='Reds',vmin=0,vmax=1)
for a in [ax1,ax2,ax3,ax4]:
    a.set_xticks([])
    a.set_yticks([])
ax1.set_title('svd(X)')
ax2.set_title('svd(C)')
ax3.set_title('eig(C)')
ax4.set_title('eigh(C)')
fig.subplots_adjust(wspace=0,top=0.95)
fig.suptitle('Eigendecomposition in NumPy')
plt.show()
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
Some great answers already have been given to your questions, so I won't add a lot of new stuff. But I tried (i) to base my answer on the knowledge you seem to have and (ii) to be as concise as possib
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
Some great answers have already been given to your questions, so I won't add a lot of new stuff. But I have tried (i) to base my answer on the knowledge you seem to have and (ii) to be as concise as possible. So you - or others in a similar situation - may find this answer helpful.
(Simple) Mathematical Explanation
SVD and the eigendecomposition are closely related. Let $X \in \mathbb{R}^{n \times p}$ be a real data matrix, so you may define its covariance matrix $C \in \mathbb{R}^{p\times p}$ as
\begin{equation}
C = \frac{1}{n} X^T X.
\end{equation}
1 | SVD of X
As you correctly stated, applying SVD on $X$ decomposes your original data in
\begin{equation*}
X = U S V^T
\end{equation*}
with $U \in \mathbb{R}^{n \times n}$, $V \in \mathbb{R}^{p \times p}$ being unitary, containing the (orthonormal) principal components and eigenvectors, respectively. The diagonal matrix $S \in \mathbb{R}^{n \times p}$ holds the singular values $s$.
2 | Eigendecomposition of C
Since $C$ is hermitian its eigendecomposition yields eigenvectors given by the unitary matrix $V$ with corresponding real eigenvalues $\lambda$ as entries of a diagonal matrix $\Lambda \in \mathbb{R}^{p \times p}$:
\begin{equation}
CV = V\Lambda
\end{equation}
In this case we may calculate the principal components by projecting the original data onto the eigenvectors, PCs $= X V$. Note that the variance of each principal component equals its corresponding eigenvalue.
3 | SVD of C
In order to answer your questions, recall that we can factorize $C$ - since it is symmetric - via
\begin{equation}
C = V\Lambda V^T
\end{equation}
using its eigenvectors and eigenvalues.
Note that this also follows by simply rearranging the equation from section (2). We can therefore obtain the eigenvectors of $C$ by applying the SVD to $C$.
With just a bit more effort we can now establish the relation between the singular values and the eigenvalues. Using the definition of the covariance we may as well write:
\begin{align}
C &= \frac{1}{n} ( U S V^T )^T ( U S V^T ) \\
&= \frac{1}{n} ( V S U^T U S V^T ) \\
&= \frac{1}{n} ( V S^2 V^T )
\end{align}
The last equation holds since $U$ is unitary, that is $U^T U = \mathbb{1}$. Now by simply comparing this result with that from above we find:
\begin{equation}
\frac{1}{n} ( V S^2 V^T ) = V \Lambda V^T \quad \Rightarrow \quad \lambda = \frac{s^2}{n}
\end{equation}
Python, NumPy and Algorithms
Just a basic example to explore the different behaviour of numpy's
linalg.svd(X)
linalg.svd(C)
linalg.eig(C)
linalg.eigh(C)
shows that linalg.eig() exhibits some (at least for me) unexpected behaviour. Calculating the matrix $V^TV=\mathbb{1}$ for all four cases, we can get a visual idea of the respective precision. It seems from the figure below that linalg.eig() only provides a stable solution up to dimension $d = \text{rank}(C) = \text{min}(n,p)$.
import numpy as np
import matplotlib.pyplot as plt

# Create random data
n,p = [100,300]
X = np.random.randn(n,p)
# Covariance matrix
C = X.T @ X /n
# Create figure environment
fig = plt.figure(figsize=(14,5))
ax1 = fig.add_subplot(141)
ax2 = fig.add_subplot(142)
ax3 = fig.add_subplot(143)
ax4 = fig.add_subplot(144)
# 1. SVD on X
# ---------------------
U,s,VT = np.linalg.svd(X)
V = VT.T
ax1.imshow(V.T@V,cmap='Reds',vmin=0,vmax=1)
# 2. SVD on C
# ---------------------
V,eigenvalues,VT = np.linalg.svd(C)
ax2.imshow(V.T@V,cmap='Reds',vmin=0,vmax=1)
# 3. Eigendecomposition on C
# -> linalg.eig()
# ---------------------
eigenvalues,V = np.linalg.eig(C)
sortIdx = np.argsort(eigenvalues)[::-1]
V = V[:,sortIdx]
ax3.imshow((V.T@V).real,cmap='Reds',vmin=0,vmax=1)
# 4. Eigendecomposition on C
# -> linalg.eigh()
# ---------------------
eigenvalues,V = np.linalg.eigh(C)
sortIdx = np.argsort(eigenvalues)[::-1]
V = V[:,sortIdx]
ax4.imshow((V.T@V).real,cmap='Reds',vmin=0,vmax=1)
for a in [ax1,ax2,ax3,ax4]:
    a.set_xticks([])
    a.set_yticks([])
ax1.set_title('svd(X)')
ax2.set_title('svd(C)')
ax3.set_title('eig(C)')
ax4.set_title('eigh(C)')
fig.subplots_adjust(wspace=0,top=0.95)
fig.suptitle('Eigendecomposition in NumPy')
plt.show()
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
Some great answers already have been given to your questions, so I won't add a lot of new stuff. But I tried (i) to base my answer on the knowledge you seem to have and (ii) to be as concise as possib
|
5,579
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
To answer the last part of your question, "Why do they do SVD of covariance matrix, not data matrix?" I believe it is for performance and storage reasons. Typically, $m$ will be a very large number and even if $n$ is large, we would expect $m \gg n$.
Calculating the covariance matrix and then performing SVD on that is vastly quicker than calculating SVD on the full data matrix under these conditions, for the same result.
Even for fairly small values the performance gains are
factors of thousands (milliseconds vs seconds). I ran a few tests on my machine to compare using Matlab:
That's just CPU time, but storage needs are just as, if not more, important. If you attempt SVD on a million by a thousand matrix in Matlab it will error by default, because it needs a working array size of 7.4TB.
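A rough NumPy analogue of this comparison (my own sketch; timings are machine-dependent, so only the equivalence is asserted): for a tall matrix with $m \gg n$, taking the SVD of the small $n \times n$ matrix $X^TX$ yields the same principal directions as the SVD of $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5000, 50                  # m >> n, as in the answer
X = rng.standard_normal((m, n))

# SVD of the full m x n data matrix
_, s_full, Vt_full = np.linalg.svd(X, full_matrices=False)

# SVD of the much smaller n x n matrix X^T X
_, s_cov, Vt_cov = np.linalg.svd(X.T @ X)

# Same right singular vectors (up to sign), and s_cov = s_full**2
assert np.allclose(s_cov, s_full**2)
assert np.allclose(np.abs(Vt_cov @ Vt_full.T), np.eye(n), atol=1e-6)
```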
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
To answer the last part of your question, "Why do they do SVD of covariance matrix, not data matrix?" I believe it is for performance and storage reasons. Typically, $m$ will be a very large number an
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
To answer the last part of your question, "Why do they do SVD of covariance matrix, not data matrix?" I believe it is for performance and storage reasons. Typically, $m$ will be a very large number and even if $n$ is large, we would expect $m \gg n$.
Calculating the covariance matrix and then performing SVD on that is vastly quicker than calculating SVD on the full data matrix under these conditions, for the same result.
Even for fairly small values the performance gains are
factors of thousands (milliseconds vs seconds). I ran a few tests on my machine to compare using Matlab:
That's just CPU time, but storage needs are just as, if not more, important. If you attempt SVD on a million by a thousand matrix in Matlab it will error by default, because it needs a working array size of 7.4TB.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
To answer the last part of your question, "Why do they do SVD of covariance matrix, not data matrix?" I believe it is for performance and storage reasons. Typically, $m$ will be a very large number an
|
5,580
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
If you apply SVD on the covariance matrix, your principal vectors are the same as applying SVD on the data matrix. So, mathematically they are equivalent in this case.
However, in terms of complexity it does not make much sense to apply SVD to the covariance matrix: you have already constructed the covariance matrix, and then you pay for SVD, which is more expensive than computing eigenvectors.
The best practice is to apply SVD directly on the data matrix, to save some flops (compared to Andrew Ng's way) and to achieve numerical stability of SVD routines (compared to eigendecomposition).
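The stability point can be made concrete: forming the covariance matrix squares the condition number of the problem, which is where precision is lost. A small sketch (my own example, with nearly collinear columns):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n)
# two nearly collinear columns
X = np.column_stack([x, x + 1e-4 * rng.standard_normal(n)])
X -= X.mean(axis=0)

# cond(X^T X) = cond(X)^2, so small singular values that are still
# representable for X can fall below machine precision for X^T X.
cond_X = np.linalg.cond(X)
cond_C = np.linalg.cond(X.T @ X)
assert np.isclose(cond_C, cond_X**2, rtol=1e-3)
```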
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
If you apply SVD on the covariance matrix, your principal vectors are the same as applying SVD on the data matrix. So, mathematically they are equivalent in this case.
However, in terms of complexity,
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
If you apply SVD on the covariance matrix, your principal vectors are the same as applying SVD on the data matrix. So, mathematically they are equivalent in this case.
However, in terms of complexity it does not make much sense to apply SVD to the covariance matrix: you have already constructed the covariance matrix, and then you pay for SVD, which is more expensive than computing eigenvectors.
The best practice is to apply SVD directly on the data matrix, to save some flops (compared to Andrew Ng's way) and to achieve numerical stability of SVD routines (compared to eigendecomposition).
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
If you apply SVD on the covariance matrix, your principal vectors are the same as applying SVD on the data matrix. So, mathematically they are equivalent in this case.
However, in terms of complexity,
|
5,581
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
If anyone cares about performance difference in a specific case, I compared
SVD and eigen-analysis on a 632x632 real symmetric matrix L using C++/Eigen.
I used BDCSVD
// compute full V, U *not* needed
Eigen::BDCSVD<Eigen::MatrixXf> svd(L,Eigen::ComputeFullV);
and SelfAdjointEigenSolver
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> eigenSolver(N);
eigenSolver.compute(L);
and ran each block of code 100 times (clang++ -O3 -DNDEBUG) on a
3.2 GHz Intel Core i5 Mac. These are the results:
BDCSVD ..................... 10036 milliseconds
SelfAdjointEigenSolver ..... 10338 milliseconds
So each takes about 100 msec for a single call; it's a push speed-wise.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
|
If anyone cares about performance difference in a specific case, I compared
SVD and eigen-analysis on a 632x632 real symmetric matrix L using C++/Eigen.
I used BDCSVD
// compute full V, U *not* needed
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
If anyone cares about performance difference in a specific case, I compared
SVD and eigen-analysis on a 632x632 real symmetric matrix L using C++/Eigen.
I used BDCSVD
// compute full V, U *not* needed
Eigen::BDCSVD<Eigen::MatrixXf> svd(L,Eigen::ComputeFullV);
and SelfAdjointEigenSolver
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXf> eigenSolver(N);
eigenSolver.compute(L);
and ran each block of code 100 times (clang++ -O3 -DNDEBUG) on a
3.2 GHz Intel Core i5 Mac. These are the results:
BDCSVD ..................... 10036 milliseconds
SelfAdjointEigenSolver ..... 10338 milliseconds
So each takes about 100 msec for a single call; it's a push speed-wise.
|
Why does Andrew Ng prefer to use SVD and not EIG of covariance matrix to do PCA?
If anyone cares about performance difference in a specific case, I compared
SVD and eigen-analysis on a 632x632 real symmetric matrix L using C++/Eigen.
I used BDCSVD
// compute full V, U *not* needed
|
5,582
|
Neural Networks: weight change momentum and weight decay
|
Yes, it's very common to use both tricks. They solve different problems and can work well together.
One way to think about it is that weight decay changes the function that's being optimized, while momentum changes the path you take to the optimum.
Weight decay, by shrinking your coefficients toward zero, ensures that you find a local optimum with small-magnitude parameters. This is usually crucial for avoiding overfitting (although other kinds of constraints on the weights can work too). As a side benefit, it can also make the model easier to optimize, by making the objective function more convex.
Once you have an objective function, you have to decide how to move around on it. Steepest descent on the gradient is the simplest approach, but you're right that fluctuations can be a big problem. Adding momentum helps solve that problem. If you're working with batch updates (which is usually a bad idea with neural networks) Newton-type steps are another option. The new "hot" approaches are based on Nesterov's accelerated gradient and so-called "Hessian-Free" optimization.
But regardless of which of these update rules you use (momentum, Newton, etc.), you're still working with the same objective function, which is determined by your error function (e.g. squared error) and other constraints (e.g. weight decay). The main question when deciding which of these to use is how quickly you'll get to a good set of weights.
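To make the distinction concrete, here is a minimal sketch (mine, in plain Python, on a made-up one-dimensional quadratic loss) showing where each trick enters the update loop: weight decay adds a term to the gradient, which moves the optimum, while momentum only changes how you travel toward it:

```python
# Loss: (w - 5)^2, plus an optional weight-decay penalty wd * w^2.
def train(lr=0.1, momentum=0.0, weight_decay=0.0, steps=300):
    w, v = 0.0, 0.0
    for _ in range(steps):
        grad = 2 * (w - 5)              # gradient of the error term
        grad += 2 * weight_decay * w    # weight decay changes the objective
        v = momentum * v - lr * grad    # momentum changes the path taken
        w += v
    return w

w_plain = train()                               # converges to 5
w_decay = train(weight_decay=0.5)               # shrunk optimum: 10/3
w_both = train(momentum=0.9, weight_decay=0.5)  # same optimum, different path
```

With decay the minimizer shifts from 5 to 10/3; adding momentum on top leaves that minimizer unchanged and only alters the trajectory.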
|
Neural Networks: weight change momentum and weight decay
|
Yes, it's very common to use both tricks. They solve different problems and can work well together.
One way to think about it is that weight decay changes the function that's being optimized, while m
|
Neural Networks: weight change momentum and weight decay
Yes, it's very common to use both tricks. They solve different problems and can work well together.
One way to think about it is that weight decay changes the function that's being optimized, while momentum changes the path you take to the optimum.
Weight decay, by shrinking your coefficients toward zero, ensures that you find a local optimum with small-magnitude parameters. This is usually crucial for avoiding overfitting (although other kinds of constraints on the weights can work too). As a side benefit, it can also make the model easier to optimize, by making the objective function more convex.
Once you have an objective function, you have to decide how to move around on it. Steepest descent on the gradient is the simplest approach, but you're right that fluctuations can be a big problem. Adding momentum helps solve that problem. If you're working with batch updates (which is usually a bad idea with neural networks) Newton-type steps are another option. The new "hot" approaches are based on Nesterov's accelerated gradient and so-called "Hessian-Free" optimization.
But regardless of which of these update rules you use (momentum, Newton, etc.), you're still working with the same objective function, which is determined by your error function (e.g. squared error) and other constraints (e.g. weight decay). The main question when deciding which of these to use is how quickly you'll get to a good set of weights.
|
Neural Networks: weight change momentum and weight decay
Yes, it's very common to use both tricks. They solve different problems and can work well together.
One way to think about it is that weight decay changes the function that's being optimized, while m
|
5,583
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
|
1. What formula does lm in R use for adjusted r-square?
As already mentioned, typing summary.lm will give you the code that R uses to calculate adjusted R square. Extracting the most relevant line you get:
ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf)
which corresponds in mathematical notation to:
$$R^2_{adj} = 1 - (1 - R^2) \frac{n-1}{n-p-1}$$
assuming that there is an intercept (i.e., df.int=1), $n$ is your sample size, and $p$ is your number of predictors. Thus, your error degrees of freedom (i.e., rdf) equals n-p-1.
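Written out as code (a sketch in Python rather than R, with made-up example values), the adjustment is a one-liner:

```python
# Same formula summary.lm uses, assuming a model with an intercept.
def adj_r_squared(r2, n, p, intercept=True):
    df_int = 1 if intercept else 0    # R's df.int
    rdf = n - p - df_int              # residual degrees of freedom (rdf)
    return 1 - (1 - r2) * (n - df_int) / rdf

print(adj_r_squared(r2=0.80, n=50, p=3))  # ~0.787: mild penalty with decent n
print(adj_r_squared(r2=0.80, n=10, p=3))  # ~0.700: same R^2, harsher penalty
```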
The formula corresponds to what Yin and Fan (2001) label Wherry Formula-1 (there is apparently another, less common Wherry formula that uses $n-p$ in the denominator instead of $n-p-1$). They suggest its most common names, in order of occurrence, are "Wherry formula", "Ezekiel formula", "Wherry/McNemar formula", and "Cohen/Cohen formula".
2. Why are there so many adjusted r-square formulas?
$R^2_{adj}$ aims to estimate $\rho^2$, the proportion of variance explained in the population by the population regression equation. While this is clearly related to sample size and the number of predictors, what is the best estimator is less clear. Thus, you have simulation studies such as Yin and Fan (2001) that have evaluated different adjusted r-square formulas in terms of how well they estimate $\rho^2$ (see this question for further discussion).
You will see that with all the formulas, the difference between $R^2$ and $R^2_{adj}$ gets smaller as the sample size increases. The difference approaches zero as sample size tends to infinity. The difference also gets smaller with fewer predictors.
3. How to interpret $R^2_{adj}$?
$R^2_{adj}$ is an estimate of the proportion of variance explained by the true regression equation in the population, $\rho^2$. You would typically be interested in $\rho^2$ when your interest is in the theoretical linear prediction of a variable. In contrast, if you are more interested in prediction using the sample regression equation, as is often the case in applied settings, then some form of cross-validated $R^2$ would be more relevant.
References
Yin, P., & Fan, X. (2001). Estimating $R^2$ shrinkage in multiple regression: A comparison of different analytical methods. The Journal of Experimental Education, 69(2), 203-224. PDF
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
|
1. What formula does lm in R use for adjusted r-square?
As already mentioned, typing summary.lm will give you the code that R uses to calculate adjusted R square. Extracting the most relevant line you
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
1. What formula does lm in R use for adjusted r-square?
As already mentioned, typing summary.lm will give you the code that R uses to calculate adjusted R square. Extracting the most relevant line you get:
ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf)
which corresponds in mathematical notation to:
$$R^2_{adj} = 1 - (1 - R^2) \frac{n-1}{n-p-1}$$
assuming that there is an intercept (i.e., df.int=1), $n$ is your sample size, and $p$ is your number of predictors. Thus, your error degrees of freedom (i.e., rdf) equals n-p-1.
The formula corresponds to what Yin and Fan (2001) label Wherry Formula-1 (there is apparently another, less common Wherry formula that uses $n-p$ in the denominator instead of $n-p-1$). They suggest its most common names, in order of occurrence, are "Wherry formula", "Ezekiel formula", "Wherry/McNemar formula", and "Cohen/Cohen formula".
2. Why are there so many adjusted r-square formulas?
$R^2_{adj}$ aims to estimate $\rho^2$, the proportion of variance explained in the population by the population regression equation. While this is clearly related to sample size and the number of predictors, what is the best estimator is less clear. Thus, you have simulation studies such as Yin and Fan (2001) that have evaluated different adjusted r-square formulas in terms of how well they estimate $\rho^2$ (see this question for further discussion).
You will see that with all the formulas, the difference between $R^2$ and $R^2_{adj}$ gets smaller as the sample size increases. The difference approaches zero as sample size tends to infinity. The difference also gets smaller with fewer predictors.
3. How to interpret $R^2_{adj}$?
$R^2_{adj}$ is an estimate of the proportion of variance explained by the true regression equation in the population, $\rho^2$. You would typically be interested in $\rho^2$ when your interest is in the theoretical linear prediction of a variable. In contrast, if you are more interested in prediction using the sample regression equation, as is often the case in applied settings, then some form of cross-validated $R^2$ would be more relevant.
References
Yin, P., & Fan, X. (2001). Estimating $R^2$ shrinkage in multiple regression: A comparison of different analytical methods. The Journal of Experimental Education, 69(2), 203-224. PDF
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
1. What formula does lm in R use for adjusted r-square?
As already mentioned, typing summary.lm will give you the code that R uses to calculate adjusted R square. Extracting the most relevant line you
|
5,584
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
|
Regarding your first question: If you don't know how it is calculated, look at the code! If you type summary.lm in your console, you get the code for this function. If you skim through the code you'll find a line: ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf). If you look some lines above this line you will notice that:
ans$r.squared: is your $R^2$
n is the number of the residuals = number of observations
df.int is 0 or 1 (depending on whether you have an intercept)
rdf are your residual df
Question 2: From Wikipedia: 'Adjusted $R^2$ is a modification of $R^2$ that adjusts for the number of explanatory terms in a model. '
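A small simulated illustration of that adjustment (a Python/NumPy sketch of my own, not from the original answer): plain $R^2$ can only go up when you add a predictor, even a pure-noise one, which is exactly the behavior the adjusted version penalizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x1 = rng.normal(size=n)
y = 2 * x1 + rng.normal(size=n)    # x1 is a real predictor
x2 = rng.normal(size=n)            # x2 is pure noise

def r2_and_adj(X, y):
    X = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    p = X.shape[1] - 1                          # number of predictors
    return r2, 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)

r2_1, adj_1 = r2_and_adj(x1[:, None], y)
r2_2, adj_2 = r2_and_adj(np.column_stack([x1, x2]), y)
assert r2_2 >= r2_1   # R^2 never drops when a predictor is added
assert adj_1 < r2_1   # the adjustment always penalizes (when R^2 < 1)
```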
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
|
Regarding your first question: If you don't know how it is calculated, look at the code! If you type summary.lm in your console, you get the code for this function. If you skim through the code you'll
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
Regarding your first question: If you don't know how it is calculated, look at the code! If you type summary.lm in your console, you get the code for this function. If you skim through the code you'll find a line: ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf). If you look some lines above this line you will notice that:
ans$r.squared: is your $R^2$
n is the number of the residuals = number of observations
df.int is 0 or 1 (depending on whether you have an intercept)
rdf are your residual df
Question 2: From Wikipedia: 'Adjusted $R^2$ is a modification of $R^2$ that adjusts for the number of explanatory terms in a model. '
|
What is the adjusted R-squared formula in lm in R and how should it be interpreted?
Regarding your first question: If you don't know how it is calculated, look at the code! If you type summary.lm in your console, you get the code for this function. If you skim through the code you'll
|
5,585
|
Poisson regression to estimate relative risk for binary outcomes
|
An answer to all four of your questions, preceded by a note:
It's not actually all that common for modern epidemiology studies to report an odds ratio from a logistic regression for a cohort study. It remains the regression technique of choice for case-control studies, but more sophisticated techniques are now the de facto standard for analysis in major epidemiology journals like Epidemiology, AJE or IJE. There will be a greater tendency for them to show up in clinical journals reporting the results of observational studies. There's also going to be some problems because Poisson regression can be used in two contexts: What you're referring to, wherein it's a substitute for a binomial regression model, and in a time-to-event context, which is extremely common for cohort studies. More details in the particular question answers:
For a cohort study, not really, no. There are some extremely specific cases where, say, a piecewise logistic model may have been used, but these are outliers. The whole point of a cohort study is that you can directly measure the relative risk, or many related measures, and don't have to rely on an odds ratio. I will however make two notes: A Poisson regression often estimates a rate, not a risk, and thus the effect estimate from it will often be noted as a rate ratio (mainly, in my mind, so you can still abbreviate it RR) or an incidence density ratio (IRR or IDR). So make sure in your search you're actually looking for the right terms: there are many cohort studies using survival analysis methods. For these studies, Poisson regression makes some assumptions that are problematic, notably that the hazard is constant. As such it is much more common to analyze a cohort study using Cox proportional hazards models, rather than Poisson models, and report the ensuing hazard ratio (HR). If pressed to name a "default" method with which to analyze a cohort, I'd say epidemiology is actually dominated by the Cox model. This has its own problems, and some very good epidemiologists would like to change it, but there it is.
There are two things I might attribute the infrequency to - an infrequency I don't necessarily think exists to the extent you suggest. One is that yes - "epidemiology" as a field isn't exactly closed, and you get huge numbers of papers from clinicians, social scientists, etc. as well as epidemiologists of varying statistical backgrounds. The logistic model is commonly taught, and in my experience many researchers will turn to the familiar tool over the better tool.
The second is actually a question of what you mean by "cohort" study. Something like the Cox model, or a Poisson model, needs an actual estimate of person-time. It's possible to get a cohort study that follows a somewhat closed population for a particular period - especially in early "Intro to Epi" examples, where survival methods like Poisson or Cox models aren't so useful. The logistic model can be used to estimate an odds ratio that, with sufficiently low disease prevalence, approximates a relative risk. Other regression techniques that directly estimate it, like binomial regression, have convergence issues that can easily derail a new student. Keep in mind the Zou papers you cite are both using a Poisson regression technique to get around the convergence issues of binomial regression. But binomial-appropriate cohort studies are actually a small slice of the "cohort study pie".
Yes. Frankly, survival analysis methods should come up earlier than they often do. My pet theory is that the reason this isn't so is that methods like logistic regression are easier to code. Techniques that are easier to code, but come with much larger caveats about the validity of their effect estimates, are taught as the "basic" standard, which is a problem.
You should be encouraging students and colleagues to use the appropriate tool. Generally for the field, I think you'd probably be better off suggesting a consideration of the Cox model over a Poisson regression, as most reviewers would (and should) swiftly bring up concerns about the assumption of a constant hazard. But yes, the sooner you can get them away from "How do I shoehorn my question into a logistic regression model?" the better off we'll all be. And if you're looking at a study without time, students should be introduced to both binomial regression and alternative approaches, like Poisson regression, which can be used in case of convergence problems.
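The "sufficiently low disease prevalence" approximation mentioned in point 2 is easy to see with a toy 2x2 table (a Python sketch of my own; all counts are invented):

```python
def rr_and_or(a, b, c, d):
    """2x2 table: exposed = a cases / b non-cases; unexposed = c / d."""
    risk_ratio = (a / (a + b)) / (c / (c + d))
    odds_ratio = (a / b) / (c / d)
    return risk_ratio, odds_ratio

# Rare outcome (1-2% risk): the odds ratio tracks the risk ratio closely
print(rr_and_or(20, 980, 10, 990))    # RR = 2.00, OR ~ 2.02
# Common outcome (~40% risk): the odds ratio overstates the risk ratio
print(rr_and_or(400, 600, 200, 800))  # RR = 2.00, OR ~ 2.67
```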
|
Poisson regression to estimate relative risk for binary outcomes
|
An answer to all four of your questions, preceded by a note:
It's not actually all that common for modern epidemiology studies to report an odds ratio from a logistic regression for a cohort study. I
|
Poisson regression to estimate relative risk for binary outcomes
An answer to all four of your questions, preceded by a note:
It's not actually all that common for modern epidemiology studies to report an odds ratio from a logistic regression for a cohort study. It remains the regression technique of choice for case-control studies, but more sophisticated techniques are now the de facto standard for analysis in major epidemiology journals like Epidemiology, AJE or IJE. There will be a greater tendency for them to show up in clinical journals reporting the results of observational studies. There's also going to be some problems because Poisson regression can be used in two contexts: What you're referring to, wherein it's a substitute for a binomial regression model, and in a time-to-event context, which is extremely common for cohort studies. More details in the particular question answers:
For a cohort study, not really, no. There are some extremely specific cases where, say, a piecewise logistic model may have been used, but these are outliers. The whole point of a cohort study is that you can directly measure the relative risk, or many related measures, and don't have to rely on an odds ratio. I will however make two notes: A Poisson regression often estimates a rate, not a risk, and thus the effect estimate from it will often be noted as a rate ratio (mainly, in my mind, so you can still abbreviate it RR) or an incidence density ratio (IRR or IDR). So make sure in your search you're actually looking for the right terms: there are many cohort studies using survival analysis methods. For these studies, Poisson regression makes some assumptions that are problematic, notably that the hazard is constant. As such it is much more common to analyze a cohort study using Cox proportional hazards models, rather than Poisson models, and report the ensuing hazard ratio (HR). If pressed to name a "default" method with which to analyze a cohort, I'd say epidemiology is actually dominated by the Cox model. This has its own problems, and some very good epidemiologists would like to change it, but there it is.
There are two things I might attribute the infrequency to - an infrequency I don't necessarily think exists to the extent you suggest. One is that yes - "epidemiology" as a field isn't exactly closed, and you get huge numbers of papers from clinicians, social scientists, etc. as well as epidemiologists of varying statistical backgrounds. The logistic model is commonly taught, and in my experience many researchers will turn to the familiar tool over the better tool.
The second is actually a question of what you mean by "cohort" study. Something like the Cox model, or a Poisson model, needs an actual estimate of person-time. It's possible to get a cohort study that follows a somewhat closed population for a particular period - especially in early "Intro to Epi" examples, where survival methods like Poisson or Cox models aren't so useful. The logistic model can be used to estimate an odds ratio that, with sufficiently low disease prevalence, approximates a relative risk. Other regression techniques that directly estimate it, like binomial regression, have convergence issues that can easily derail a new student. Keep in mind the Zou papers you cite are both using a Poisson regression technique to get around the convergence issues of binomial regression. But binomial-appropriate cohort studies are actually a small slice of the "cohort study pie".
Yes. Frankly, survival analysis methods should come up earlier than they often do. My pet theory is that the reason this isn't so is that methods like logistic regression are easier to code. Techniques that are easier to code, but come with much larger caveats about the validity of their effect estimates, are taught as the "basic" standard, which is a problem.
You should be encouraging students and colleagues to use the appropriate tool. Generally for the field, I think you'd probably be better off suggesting a consideration of the Cox model over a Poisson regression, as most reviewers would (and should) swiftly bring up concerns about the assumption of a constant hazard. But yes, the sooner you can get them away from "How do I shoehorn my question into a logistic regression model?" the better off we'll all be. And if you're looking at a study without time, students should be introduced to both binomial regression and alternative approaches, like Poisson regression, which can be used in case of convergence problems.
|
Poisson regression to estimate relative risk for binary outcomes
An answer to all four of your questions, preceded by a note:
It's not actually all that common for modern epidemiology studies to report an odds ratio from a logistic regression for a cohort study. I
|
5,586
|
Poisson regression to estimate relative risk for binary outcomes
|
I too speculate at the prevalence of logistic models in the literature when a relative risk model would be more appropriate. We as statisticians are all too familiar with adherence to convention or sticking to "drop-down-menu" analyses. These create far more problems than they solve. Logistic regression is taught as a "standard off the shelf tool" for analyzing binary outcomes, where an individual has a yes/no type of outcome like death or disability.
Poisson regression is frequently taught as a method for analyzing counts. It is somewhat underemphasized that such a probability model works exceptionally well for modeling 0/1 outcomes, especially when they are rare. However, a logistic model is also well applied with rare outcomes: the odds ratio is approximately a risk ratio, even with outcome-dependent sampling as with case-control studies. The same cannot be said of relative risk or Poisson models.
A Poisson model is useful too when individuals may have an "outcome" more than once, and you might be interested in cumulative incidence, such as outbreaks of herpes, hospitalizations, or breast cancers. For this reason, exponentiated coefficients can be interpreted as relative rates. To belabor the difference between rates and risks: If there are 100 cases per 1,000 person-years, but all 100 cases happened in one individual, the incidence (rate) is still 1 case per 10 person-years. In a health care delivery setting, you still need to treat 100 cases, and vaccinating 80% of the people has an 80% incidence rate reduction (a priori). However the risk of at least one outcome is 1/1000. The nature of the outcome and the question, together, determine which model is appropriate.
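The rate-versus-risk arithmetic in that example, spelled out (Python, same made-up numbers as the paragraph):

```python
# 1,000 people followed for a total of 1,000 person-years;
# 100 cases, all occurring in one unlucky individual.
people, person_years, cases, affected_people = 1000, 1000, 100, 1

rate = cases / person_years      # 0.1 = 1 case per 10 person-years
risk = affected_people / people  # 0.001 = 1 in 1,000 people at risk

assert rate == 0.1
assert risk == 0.001
```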
I would be concerned with saying "we fit a Poisson regression model for incidence to estimate relative rates" because this may introduce some confusion as to the nature of the outcome and whether one person may experience it more than once. If you are interested in relative risks, you must say so, and be prepared to discuss the sensitivity to the Poisson variance assumption (variance proportional to the mean), which is inappropriate for binary events, whose mean-variance relationship is $\mbox{var}(y) = E(y)(1-E(y))$.
My understanding is that if the scientific interest lies in estimating relative rates, there is a hybrid model: relative risk regression, which is a GLM using the logistic variance structure and the Poisson mean structure. That is to say: $\log (E[Y|X])= \beta_0 + \beta_1 X$ and $\mbox{var}(Y) = E[Y](1-E[Y])$.
By the way, the Zhang article provides a biased estimate of inference based on the relative risk estimate which doesn't account for variability in the intercept term. You can correct the estimator by bootstrapping.
To answer the specific questions:
If the outcome is rare they are approximately the same. If the outcome is common, the variance of the relative rate estimator from the Poisson might be over-inflated, and we might prefer the odds ratio as a biased but efficient estimate of association between a binary outcome and several exposures. I also think that case-control studies justify use of the odds ratio as a measure which does not vary with outcome-dependent sampling. Scott and Wild (1997) discuss methods around this. Of course, other journals might not have dedicated statistical reviewers.
2 & 3. I think you are blaming and assuming overmuch about what happens in medical review and academics.
You should always be encouraging your students to use appropriate models whenever possible.
http://biostats.bepress.com/cgi/viewcontent.cgi?article=1128&context=uwbiostat
|
Poisson regression to estimate relative risk for binary outcomes
|
I too speculate at the prevalence of logistic models in the literature when a relative risk model would be more appropriate. We as statisticians are all too familiar with adherence to convention or st
|
Poisson regression to estimate relative risk for binary outcomes
I too speculate at the prevalence of logistic models in the literature when a relative risk model would be more appropriate. We as statisticians are all too familiar with adherence to convention or sticking to "drop-down-menu" analyses. These create far more problems than they solve. Logistic regression is taught as a "standard off the shelf tool" for analyzing binary outcomes, where an individual has a yes/no type of outcome like death or disability.
Poisson regression is frequently taught as a method for analyzing counts. It is somewhat underemphasized that such a probability model works exceptionally well for modeling 0/1 outcomes, especially when they are rare. However, a logistic model is also well applied with rare outcomes: the odds ratio is approximately a risk ratio, even with outcome-dependent sampling as with case-control studies. The same cannot be said of relative risk or Poisson models.
A Poisson model is useful too when individuals may have an "outcome" more than once, and you might be interested in cumulative incidence, such as outbreaks of herpes, hospitalizations, or breast cancers. For this reason, exponentiated coefficients can be interpreted as relative rates. To belabor the difference between rates and risks: If there are 100 cases per 1,000 person-years, but all 100 cases happened in one individual, the incidence (rate) is still 1 case per 10 person-years. In a health care delivery setting, you still need to treat 100 cases, and vaccinating 80% of the people has an 80% incidence rate reduction (a priori). However the risk of at least one outcome is 1/1000. The nature of the outcome and the question, together, determine which model is appropriate.
I would be concerned with saying "we fit a Poisson regression model for incidence to estimate relative rates" because this may introduce some confusion as to the nature of the outcome and whether one person may experience it more than once. If you are interested in relative risks, you must say so, and be prepared to discuss the sensitivity to the Poisson variance assumption (variance proportional to the mean), which is inappropriate for binary events, whose mean-variance relationship is $\mbox{var}(y) = E(y)(1-E(y))$.
My understanding is that if the scientific interest lies in estimating relative rates, there is a hybrid model: relative risk regression, which is a GLM using the logistic variance structure and the Poisson mean structure. That is to say: $\log (E[Y|X])= \beta_0 + \beta_1 X$ and $\mbox{var}(Y) = E[Y](1-E[Y])$.
By the way, the Zhang article provides a biased estimate of inference based on the relative risk estimate which doesn't account for variability in the intercept term. You can correct the estimator by bootstrapping.
To answer the specific questions:
If the outcome is rare they are approximately the same. If the outcome is common, the variance of the relative rate estimator from the Poisson might be over-inflated, and we might prefer the odds ratio as a biased but efficient estimate of association between a binary outcome and several exposures. I also think that case-control studies justify use of the odds ratio as a measure which does not vary with outcome-dependent sampling. Scott and Wild (1997) discuss methods around this. Of course, other journals might not have dedicated statistical reviewers.
2 & 3. I think you are blaming and assuming overmuch about what happens in medical review and academics.
You should always be encouraging your students to use appropriate models whenever possible.
http://biostats.bepress.com/cgi/viewcontent.cgi?article=1128&context=uwbiostat
|
Poisson regression to estimate relative risk for binary outcomes
I too speculate at the prevalence of logistic models in the literature when a relative risk model would be more appropriate. We as statisticians are all too familiar with adherence to convention or st
|
5,587
|
How can I test whether a random effect is significant?
|
The estimate ID's variance = 0 indicates that the level of between-group variability is not sufficient to warrant incorporating random effects in the model; i.e., your model is degenerate.
As you correctly identify yourself: most probably, yes; ID as a random effect is unnecessary. A few things spring to mind to test this assumption:
You could compare (always fitting with REML = FALSE) the AIC (or your favourite IC in general) between a model with and without random effects and see how this goes.
You could look at the anova() output of the two models.
You could do a parametric bootstrap using the posterior distribution defined by your original model.
Mind you, choices 1 & 2 have an issue: you are checking for something that is on the boundaries of the parameter space so actually they are not technically sound. Having said that, I don't think you'll get wrong insights from them and a lot of people use them (e.g., Douglas Bates, one of lme4's developers, uses them in his book but clearly states this caveat about parameter values being tested on the boundary of the set of possible values).
Choice 3 is the most tedious but actually gives you the best idea about what is really going on. Some people are tempted to use non-parametric bootstrap also but I think that given the fact you are making parametric assumptions to start with you might as well use them.
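For choices 1 & 2, a common partial remedy for the boundary problem (my addition, not mentioned in the answer above) is to halve the naive $\chi^2_1$ p-value of the likelihood-ratio test, i.e. use a 50:50 mixture of $\chi^2_0$ and $\chi^2_1$ as the reference. A sketch in plain Python, with a made-up LRT statistic:

```python
import math

def chi2_1_sf(x):
    """P(chi-square with 1 df > x), via the standard-normal tail."""
    return math.erfc(math.sqrt(x / 2))

lrt = 2.3                       # made-up 2 * (logLik_full - logLik_reduced)
p_naive = chi2_1_sf(lrt)        # treats the null as interior: ~0.13
p_mixed = 0.5 * chi2_1_sf(lrt)  # boundary-corrected mixture: ~0.065
```

The halving reflects that, under the null, the variance estimate is pinned at zero about half the time, so the LRT statistic is exactly 0 with probability one half.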
|
How can I test whether a random effect is significant?
|
The estimate ID's variance = 0 indicates that the level of between-group variability is not sufficient to warrant incorporating random effects in the model; i.e., your model is degenerate.
As you cor
|
How can I test whether a random effect is significant?
The estimate ID's variance = 0, indicates that the level of
between-group variability is not sufficient to warrant incorporating random effects in the model; i.e., your model is degenerate.
As you correctly identify yourself: most probably, yes; ID as a random effect is unnecessary. A few things spring to mind to test this assumption:
You could compare (using REML = F always) the AIC (or your favourite IC in general) between a model with and without random effects and see how this goes.
You would look at the anova() output of the two models.
You could do a parametric bootstrap using the posterior distribution defined by your original model.
Mind you, choices 1 & 2 have an issue: you are checking for something that is on the boundary of the parameter space, so strictly speaking they are not technically sound. Having said that, I don't think you'll get wrong insights from them, and a lot of people use them (e.g., Douglas Bates, one of lme4's developers, uses them in his book but clearly states this caveat about parameter values being tested on the boundary of the set of possible values).
Choice 3 is the most tedious but actually gives you the best idea about what is really going on. Some people are tempted to use non-parametric bootstrap also but I think that given the fact you are making parametric assumptions to start with you might as well use them.
|
How can I test whether a random effect is significant?
The estimate ID's variance = 0, indicates that the level of
between-group variability is not sufficient to warrant incorporating random effects in the model; i.e., your model is degenerate.
As you cor
|
5,588
|
How can I test whether a random effect is significant?
|
I'm not sure that the approach I'm going to suggest is reasonable, so those who know more about this topic correct me if I'm wrong.
My proposal is to create an additional column in your data that has a constant value of 1:
IDconst <- factor(rep(1, each=length(tv$Velocity)))
Then, you can create a model that uses this column as your random effect:
fm1 <- lmer(Velocity ~ D.CPC.min + FD.CPC + (1|IDconst),
REML=FALSE, family=gaussian, data=tv)
At this point, you could compare (AIC) your original model with the random effect ID (let's call it fm0) with the new model that doesn't take into account ID (fm1) since IDconst is the same for all your data.
anova(fm0, fm1)
Update
@user11852 was asking for an example, because in his/her opinion the above approach won't even execute. On the contrary, I can show that the approach works (at least with lme4_0.999999-0).
set.seed(101)
dataset <- expand.grid(id=factor(seq_len(10)), fac1=factor(c("A", "B"),
levels=c("A", "B")), trial=seq_len(10))
dataset$value <- rnorm(nrow(dataset), sd=0.5) +
with(dataset, rnorm(length(levels(id)), sd=0.5)[id] +
ifelse(fac1 == "B", 1.0, 0)) + rnorm(1, .5)
dataset$idconst <- factor(rep(1, each=length(dataset$value)))
library(lme4)
fm0 <- lmer(value ~ fac1 + (1|id), data=dataset)
fm1 <- lmer(value ~ fac1 + (1|idconst), data=dataset)
anova(fm1, fm0)
Output:
Data: dataset
Models:
fm1: value ~ fac1 + (1 | idconst)
fm0: value ~ fac1 + (1 | id)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm1 4 370.72 383.92 -181.36
fm0 4 309.79 322.98 -150.89 60.936 0 < 2.2e-16 ***
According to this last test, we should keep the random effect since the fm0 model has the lowest AIC as well as BIC.
Update 2
By the way, this same approach is proposed by N. W. Galwey in Introduction to Mixed Modelling: Beyond Regression and Analysis of Variance on pages 213-214.
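The AIC and BIC columns in the anova() output follow directly from the reported log-likelihoods and parameter counts, which makes them easy to sanity-check by hand. A small sketch (the n = 200 comes from the simulated dataset above: 10 ids × 2 factor levels × 10 trials; BIC matches the printed value only up to the rounding of logLik):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*logLik."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*log(n) - 2*logLik."""
    return k * math.log(n) - 2 * loglik

# Numbers from the anova() output above for fm1 (Df = 4, logLik = -181.36)
print(round(aic(-181.36, 4), 2))  # 370.72, matching fm1's AIC column
# bic(-181.36, 4, 200) is ~383.91, close to the printed 383.92
```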
|
How can I test whether a random effect is significant?
|
I'm not sure that the approach I'm going to suggest is reasonable, so those who know more about this topic correct me if I'm wrong.
My proposal is to create an additional column in your data that has
|
How can I test whether a random effect is significant?
I'm not sure that the approach I'm going to suggest is reasonable, so those who know more about this topic correct me if I'm wrong.
My proposal is to create an additional column in your data that has a constant value of 1:
IDconst <- factor(rep(1, each=length(tv$Velocity)))
Then, you can create a model that uses this column as your random effect:
fm1 <- lmer(Velocity ~ D.CPC.min + FD.CPC + (1|IDconst),
REML=FALSE, family=gaussian, data=tv)
At this point, you could compare (AIC) your original model with the random effect ID (let's call it fm0) with the new model that doesn't take into account ID (fm1) since IDconst is the same for all your data.
anova(fm0, fm1)
Update
@user11852 was asking for an example, because in his/her opinion the above approach won't even execute. On the contrary, I can show that the approach works (at least with lme4_0.999999-0).
set.seed(101)
dataset <- expand.grid(id=factor(seq_len(10)), fac1=factor(c("A", "B"),
levels=c("A", "B")), trial=seq_len(10))
dataset$value <- rnorm(nrow(dataset), sd=0.5) +
with(dataset, rnorm(length(levels(id)), sd=0.5)[id] +
ifelse(fac1 == "B", 1.0, 0)) + rnorm(1, .5)
dataset$idconst <- factor(rep(1, each=length(dataset$value)))
library(lme4)
fm0 <- lmer(value ~ fac1 + (1|id), data=dataset)
fm1 <- lmer(value ~ fac1 + (1|idconst), data=dataset)
anova(fm1, fm0)
Output:
Data: dataset
Models:
fm1: value ~ fac1 + (1 | idconst)
fm0: value ~ fac1 + (1 | id)
Df AIC BIC logLik Chisq Chi Df Pr(>Chisq)
fm1 4 370.72 383.92 -181.36
fm0 4 309.79 322.98 -150.89 60.936 0 < 2.2e-16 ***
According to this last test, we should keep the random effect since the fm0 model has the lowest AIC as well as BIC.
Update 2
By the way, this same approach is proposed by N. W. Galwey in Introduction to Mixed Modelling: Beyond Regression and Analysis of Variance on pages 213-214.
|
How can I test whether a random effect is significant?
I'm not sure that the approach I'm going to suggest is reasonable, so those who know more about this topic correct me if I'm wrong.
My proposal is to create an additional column in your data that has
|
5,589
|
Computing Cohen's Kappa variance (and standard errors)
|
I don't know which of the two ways to calculate the variance is to prefer but I can give you a third, practical and useful way to calculate confidence/credible intervals by using Bayesian estimation of Cohen's Kappa.
The R and JAGS code below generates MCMC samples from the posterior distribution of the credible values of Kappa given the data.
library(rjags)
library(coda)
library(psych)
# Creating some mock data
rater1 <- c(1, 2, 3, 1, 1, 2, 1, 1, 3, 1, 2, 3, 3, 2, 3)
rater2 <- c(1, 2, 2, 1, 2, 2, 3, 1, 3, 1, 2, 3, 2, 1, 1)
agreement <- rater1 == rater2
n_categories <- 3
n_ratings <- 15
# The JAGS model definition, should work in WinBugs with minimal modification
kohen_model_string <- "model {
kappa <- (p_agreement - chance_agreement) / (1 - chance_agreement)
chance_agreement <- sum(p1 * p2)
for(i in 1:n_ratings) {
rater1[i] ~ dcat(p1)
rater2[i] ~ dcat(p2)
agreement[i] ~ dbern(p_agreement)
}
# Uniform priors on all parameters
p1 ~ ddirch(alpha)
p2 ~ ddirch(alpha)
p_agreement ~ dbeta(1, 1)
for(cat_i in 1:n_categories) {
alpha[cat_i] <- 1
}
}"
# Running the model
kohen_model <- jags.model(file = textConnection(kohen_model_string),
data = list(rater1 = rater1, rater2 = rater2,
agreement = agreement, n_categories = n_categories,
n_ratings = n_ratings),
n.chains= 1, n.adapt= 1000)
update(kohen_model, 10000)
mcmc_samples <- coda.samples(kohen_model, variable.names="kappa", n.iter=20000)
The plot below shows a density plot of the MCMC samples from the posterior distribution of Kappa.
Using the MCMC samples we can now use the median value as an estimate of Kappa and use the 2.5% and 97.5% quantiles as a 95 % confidence/credible interval.
summary(mcmc_samples)$quantiles
## 2.5% 25% 50% 75% 97.5%
## 0.01688361 0.26103573 0.38753814 0.50757431 0.70288890
Compare this with the "classical" estimates calculated according to Fleiss, Cohen and Everitt:
cohen.kappa(cbind(rater1, rater2), alpha=0.05)
## lower estimate upper
## unweighted kappa 0.041 0.40 0.76
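As a sanity check, the classical point estimate of 0.40 can be reproduced directly from the definition used inside the JAGS model, kappa = (p_agreement - chance_agreement) / (1 - chance_agreement). A plain-Python sketch on the same mock data (no R needed):

```python
from collections import Counter

rater1 = [1, 2, 3, 1, 1, 2, 1, 1, 3, 1, 2, 3, 3, 2, 3]
rater2 = [1, 2, 2, 1, 2, 2, 3, 1, 3, 1, 2, 3, 2, 1, 1]
n = len(rater1)

# Observed agreement: fraction of items where the two raters match
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement: sum over categories of the product of marginal proportions
c1, c2 = Counter(rater1), Counter(rater2)
p_e = sum(c1[k] / n * c2[k] / n for k in set(rater1) | set(rater2))

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 2))  # 0.4
```

Here p_o = 9/15 = 0.6 and p_e = 1/3, giving kappa = 0.40, in agreement with the cohen.kappa() estimate above.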
Personally I would prefer the Bayesian credible interval over the classical confidence interval, especially since I believe the Bayesian interval has better small-sample properties. A common concern people tend to have with Bayesian analyses is that you have to specify prior beliefs regarding the distributions of the parameters. Fortunately, in this case, it is easy to construct "objective" priors by simply putting uniform distributions over all the parameters. This should make the outcome of the Bayesian model very similar to a "classical" calculation of the Kappa coefficient.
References
Sanjib Basu, Mousumi Banerjee and Ananda Sen (2000). Bayesian Inference for Kappa from Single and Multiple Studies. Biometrics, Vol. 56, No. 2 (Jun., 2000), pp. 577-582
|
Computing Cohen's Kappa variance (and standard errors)
|
I don't know which of the two ways to calculate the variance is to prefer but I can give you a third, practical and useful way to calculate confidence/credible intervals by using Bayesian estimation o
|
Computing Cohen's Kappa variance (and standard errors)
I don't know which of the two ways to calculate the variance is to prefer but I can give you a third, practical and useful way to calculate confidence/credible intervals by using Bayesian estimation of Cohen's Kappa.
The R and JAGS code below generates MCMC samples from the posterior distribution of the credible values of Kappa given the data.
library(rjags)
library(coda)
library(psych)
# Creating some mock data
rater1 <- c(1, 2, 3, 1, 1, 2, 1, 1, 3, 1, 2, 3, 3, 2, 3)
rater2 <- c(1, 2, 2, 1, 2, 2, 3, 1, 3, 1, 2, 3, 2, 1, 1)
agreement <- rater1 == rater2
n_categories <- 3
n_ratings <- 15
# The JAGS model definition, should work in WinBugs with minimal modification
kohen_model_string <- "model {
kappa <- (p_agreement - chance_agreement) / (1 - chance_agreement)
chance_agreement <- sum(p1 * p2)
for(i in 1:n_ratings) {
rater1[i] ~ dcat(p1)
rater2[i] ~ dcat(p2)
agreement[i] ~ dbern(p_agreement)
}
# Uniform priors on all parameters
p1 ~ ddirch(alpha)
p2 ~ ddirch(alpha)
p_agreement ~ dbeta(1, 1)
for(cat_i in 1:n_categories) {
alpha[cat_i] <- 1
}
}"
# Running the model
kohen_model <- jags.model(file = textConnection(kohen_model_string),
data = list(rater1 = rater1, rater2 = rater2,
agreement = agreement, n_categories = n_categories,
n_ratings = n_ratings),
n.chains= 1, n.adapt= 1000)
update(kohen_model, 10000)
mcmc_samples <- coda.samples(kohen_model, variable.names="kappa", n.iter=20000)
The plot below shows a density plot of the MCMC samples from the posterior distribution of Kappa.
Using the MCMC samples we can now use the median value as an estimate of Kappa and use the 2.5% and 97.5% quantiles as a 95 % confidence/credible interval.
summary(mcmc_samples)$quantiles
## 2.5% 25% 50% 75% 97.5%
## 0.01688361 0.26103573 0.38753814 0.50757431 0.70288890
Compare this with the "classical" estimates calculated according to Fleiss, Cohen and Everitt:
cohen.kappa(cbind(rater1, rater2), alpha=0.05)
## lower estimate upper
## unweighted kappa 0.041 0.40 0.76
Personally I would prefer the Bayesian credible interval over the classical confidence interval, especially since I believe the Bayesian interval has better small-sample properties. A common concern people tend to have with Bayesian analyses is that you have to specify prior beliefs regarding the distributions of the parameters. Fortunately, in this case, it is easy to construct "objective" priors by simply putting uniform distributions over all the parameters. This should make the outcome of the Bayesian model very similar to a "classical" calculation of the Kappa coefficient.
References
Sanjib Basu, Mousumi Banerjee and Ananda Sen (2000). Bayesian Inference for Kappa from Single and Multiple Studies. Biometrics, Vol. 56, No. 2 (Jun., 2000), pp. 577-582
|
Computing Cohen's Kappa variance (and standard errors)
I don't know which of the two ways to calculate the variance is to prefer but I can give you a third, practical and useful way to calculate confidence/credible intervals by using Bayesian estimation o
|
5,590
|
How well can multiple regression really "control for" covariates?
|
There is an increasingly widely accepted (perhaps non-statistical) answer to the question of what assumptions one needs to make to claim one has really controlled for the covariates.
That can be done with Judea Pearl's causal graphs and do calculus.
See http://ftp.cs.ucla.edu/pub/stat_ser/r402.pdf as well as other material on his website.
Now, as statisticians, we know that all models are false; the real statistical question is whether the identified assumptions are likely to be not too wrong, so that our answer is approximately OK. Pearl is aware of this and does discuss it in his work, but perhaps not explicitly and often enough to avoid frustrating many statisticians with his claim to have an answer (which I believe he does, for the question of what assumptions one needs to make).
(Currently the ASA is offering a prize for teaching material to include these methods in statistical courses see here)
|
How well can multiple regression really "control for" covariates?
|
There is an increasingly widely accepted (perhaps non-statistical) answer to the question of what assumptions one needs to make to claim one has really controlled for the covariates.
That can be done with Judea Pea
|
How well can multiple regression really "control for" covariates?
There is an increasingly widely accepted (perhaps non-statistical) answer to the question of what assumptions one needs to make to claim one has really controlled for the covariates.
That can be done with Judea Pearl's causal graphs and do calculus.
See http://ftp.cs.ucla.edu/pub/stat_ser/r402.pdf as well as other material on his website.
Now, as statisticians, we know that all models are false; the real statistical question is whether the identified assumptions are likely to be not too wrong, so that our answer is approximately OK. Pearl is aware of this and does discuss it in his work, but perhaps not explicitly and often enough to avoid frustrating many statisticians with his claim to have an answer (which I believe he does, for the question of what assumptions one needs to make).
(Currently the ASA is offering a prize for teaching material to include these methods in statistical courses see here)
|
How well can multiple regression really "control for" covariates?
There is an increasingly widely accepted (perhaps non-statistical) answer to the question of what assumptions one needs to make to claim one has really controlled for the covariates.
That can be done with Judea Pea
|
5,591
|
How well can multiple regression really "control for" covariates?
|
Answer to question 1:
The magnitude of seriousness is best assessed in a contextual way
(i.e., should consider all factors contributing to validity).
The magnitude of seriousness should not be assessed in a categorical way. An example is the notion of a hierarchy of inference for study designs (e.g., case reports are lowest and RCTs are categorically highest). This type of scheme is frequently taught in medical schools as an easy heuristic to quickly identify high-quality evidence. The problem with this type of thinking is that it is algorithmic and overly deterministic; in reality, the answer is itself overdetermined. When this happens, you can miss the ways in which poorly designed RCTs can yield worse results than a well-designed observational study.
See this easy to read review for a full discussion of the above points from the perspective of an epidemiologist (Rothman, 2014).
Answer to question 2:
Be very afraid. To simply reiterate what others have already said and to quote (roughly) from Richard McElreath's elegant introductory text on critical thinking in statistical modeling:
"...all models are false, but some are useful..."
|
How well can multiple regression really "control for" covariates?
|
Answer to question 1:
The magnitude of seriousness is best assessed in a contextual way
(i.e., should consider all factors contributing to validity).
The magnitude of seriousness should not be asses
|
How well can multiple regression really "control for" covariates?
Answer to question 1:
The magnitude of seriousness is best assessed in a contextual way
(i.e., should consider all factors contributing to validity).
The magnitude of seriousness should not be assessed in a categorical way. An example is the notion of a hierarchy of inference for study designs (e.g., case reports are lowest and RCTs are categorically highest). This type of scheme is frequently taught in medical schools as an easy heuristic to quickly identify high-quality evidence. The problem with this type of thinking is that it is algorithmic and overly deterministic; in reality, the answer is itself overdetermined. When this happens, you can miss the ways in which poorly designed RCTs can yield worse results than a well-designed observational study.
See this easy to read review for a full discussion of the above points from the perspective of an epidemiologist (Rothman, 2014).
Answer to question 2:
Be very afraid. To simply reiterate what others have already said and to quote (roughly) from Richard McElreath's elegant introductory text on critical thinking in statistical modeling:
"...all models are false, but some are useful..."
|
How well can multiple regression really "control for" covariates?
Answer to question 1:
The magnitude of seriousness is best assessed in a contextual way
(i.e., should consider all factors contributing to validity).
The magnitude of seriousness should not be asses
|
5,592
|
How to plot trends properly
|
Sometimes less is more. With less detail about the year-to-year variations and the country distinctions you can provide more information about the trends. Since the other countries are moving mostly together you can get by without separate colors.
In using a smoother you're requiring the reader to trust that you haven't smoothed over any interesting variation.
Update after getting a couple requests for code:
I made this in JMP's interactive Graph Builder. The JMP script is:
Graph Builder(
Size( 528, 456 ), Show Control Panel( 0 ), Show Legend( 0 ),
// variable role assignments:
Variables( X( :year ), Y( :Deaths ), Overlay( :Country ) ),
// spline smoother:
Elements( Smoother( X, Y, Legend( 3 ) ) ),
// customizations:
SendToReport(
// x scale, leaving room for annotations
Dispatch( {},"year",ScaleBox,
{Min( 1926.5 ), Max( 1937.9 ), Inc( 2 ), Minor Ticks( 1 )}
),
// customize colors and DE line width
Dispatch( {}, "400", ScaleBox, {Legend Model( 3,
Properties( 0, {Line Color( "gray" )}, Item ID( "aut", 1 ) ),
Properties( 1, {Line Color( "gray" )}, Item ID( "be", 1 ) ),
Properties( 2, {Line Color( "gray" )}, Item ID( "ch", 1 ) ),
Properties( 3, {Line Color( "gray" )}, Item ID( "cz", 1 ) ),
Properties( 4, {Line Color( "gray" )}, Item ID( "den", 1 ) ),
Properties( 5, {Line Color( "gray" )}, Item ID( "fr", 1 ) ),
Properties( 6, {Line Color( "gray" )}, Item ID( "nl", 1 ) ),
Properties( 7, {Line Color( "gray" )}, Item ID( "pl", 1 ) ),
Properties( 8, {Line Color("dark red"), Line Width( 3 )}, Item ID( "de", 1 ))
)}),
// add line annotations (omitted)
));
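The JMP spline smoother has no exact plain-Python analogue, but the underlying idea — replace each point with a local summary so year-to-year noise does not dominate the trend — can be illustrated with a centered moving average (window width 3, chosen here purely for illustration):

```python
def moving_average(xs, w=3):
    """Centered moving average; endpoints use the shorter available window."""
    half = w // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# Germany's death rates, 1927-1937, from the data plotted above
de = [10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5]
smoothed = moving_average(de)
```

Whatever smoother you pick, the caveat above stands: the reader has to trust that the window (or spline stiffness) has not smoothed over interesting variation.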
|
How to plot trends properly
|
Sometimes less is more. With less detail about the year-to-year variations and the country distinctions you can provide more information about the trends. Since the other countries are moving mostly t
|
How to plot trends properly
Sometimes less is more. With less detail about the year-to-year variations and the country distinctions you can provide more information about the trends. Since the other countries are moving mostly together you can get by without separate colors.
In using a smoother you're requiring the reader to trust that you haven't smoothed over any interesting variation.
Update after getting a couple requests for code:
I made this in JMP's interactive Graph Builder. The JMP script is:
Graph Builder(
Size( 528, 456 ), Show Control Panel( 0 ), Show Legend( 0 ),
// variable role assignments:
Variables( X( :year ), Y( :Deaths ), Overlay( :Country ) ),
// spline smoother:
Elements( Smoother( X, Y, Legend( 3 ) ) ),
// customizations:
SendToReport(
// x scale, leaving room for annotations
Dispatch( {},"year",ScaleBox,
{Min( 1926.5 ), Max( 1937.9 ), Inc( 2 ), Minor Ticks( 1 )}
),
// customize colors and DE line width
Dispatch( {}, "400", ScaleBox, {Legend Model( 3,
Properties( 0, {Line Color( "gray" )}, Item ID( "aut", 1 ) ),
Properties( 1, {Line Color( "gray" )}, Item ID( "be", 1 ) ),
Properties( 2, {Line Color( "gray" )}, Item ID( "ch", 1 ) ),
Properties( 3, {Line Color( "gray" )}, Item ID( "cz", 1 ) ),
Properties( 4, {Line Color( "gray" )}, Item ID( "den", 1 ) ),
Properties( 5, {Line Color( "gray" )}, Item ID( "fr", 1 ) ),
Properties( 6, {Line Color( "gray" )}, Item ID( "nl", 1 ) ),
Properties( 7, {Line Color( "gray" )}, Item ID( "pl", 1 ) ),
Properties( 8, {Line Color("dark red"), Line Width( 3 )}, Item ID( "de", 1 ))
)}),
// add line annotations (omitted)
));
|
How to plot trends properly
Sometimes less is more. With less detail about the year-to-year variations and the country distinctions you can provide more information about the trends. Since the other countries are moving mostly t
|
5,593
|
How to plot trends properly
|
There are good answers here. Let me take you at your word that you want to show that the trend for Germany differs from the rest. Levels vs. changes is a common distinction in economics. Your data are in levels, but your question is stated as seeking changes. The way to do that is to set the reference level (here 1932) as $1$. From there, each successive year is a fraction of the previous. (It is common to take logs to make changes more stable and symmetrical. This does change the meaning of the exact numbers somewhat, if you really want someone to get that from the plot, but usually for this kind of thing, people want to be able to see the pattern.) You then get a running sum for each series and multiply it by $100$ by convention. That's what you plot. Your case is slightly less common in that your reference point is in the middle of your series, so I ran this in both directions from 1932. Below is a simple example, coded in R (there will be lots of ways to make the code and plot nicer, but this should show the idea straightforwardly). I made the line for Germany thicker to distinguish it in the legend, and I added a reference line at $100$. It's easy to see that Germany stands out from the rest. You can also see that all the other countries end up with lower rates at 1937 than 1932, and that their year by year changes fluctuate much less in the years after 1932 than in the years leading up to it.
d = read.table(text="
year de fr be nl den ch aut cz pl
1927 10.9 16.5 13 10.2 11.6 12.4 15 16 17.3
...
1937 11.5 15 12.5 8.8 10.8 11.3 13.3 13.3 14",
header=T)
d2 = d # we'll end up needing both
d2[6,2:10] = 1 # set 1932 as 1
for(j in 2:10){
for(i in 7:11){
# changes moving forward from 1932:
d2[i,j] = log( d[i,j]/d[i-1,j] )
# running sum moving forward from 1932:
d2[i,j] = d2[i,j]+d2[i-1,j]
}
for(i in 5:1){
# changes moving backward from 1932:
d2[i,j] = log( d[i,j]/d[i+1,j] )
# running sum moving forward from 1932:
d2[i,j] = d2[i+1,j]+d2[i,j]
}
}
d2[,2:10] = d2[,2:10]*100 # multiply all values by 100
windows() # plot of changes
plot(1,1, xlim=c(1927,1937), ylim=c(82,118), xlab="Year",
ylab="Change from 1932", main="European death rates")
abline(h=100, col="lightgray")
for(j in 2:10){
lines(1927:1937, d2[,j], col=rainbow(9)[j-1], lwd=ifelse(j==2,2,1))
}
legend("bottomleft", legend=colnames(d2)[2:10], lwd=c(2,rep(1,8)), lty=1,
col=rainbow(9), ncol=2)
windows() # plot of levels
plot(1,1, xlim=c(1927,1937), ylim=c(8,18.4), xlab="Year",
ylab="Deaths per thousand", main="European death rates")
abline(h=d[6,2:10], col="gray90")
points(rep(1932,9), d[6,2:10], col=rainbow(9), pch=16)
for(j in 2:10){
lines(1927:1937, d[,j], col=rainbow(9)[j-1], lwd=ifelse(j==2,2,1))
}
legend("topright", legend=colnames(d)[2:10], lwd=c(2,rep(1,8)), lty=1,
col=rainbow(9), ncol=2)
By contrast, below is a corresponding plot of the data in levels. I nonetheless tried to make it possible to see that Germany alone goes up after 1932 in two ways: I put a prominent point on each series at 1932, and drew a faint gray line across the plot in the background at those levels.
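The same transformation — cumulated log changes anchored at 1932 and scaled by 100 — telescopes to the closed form 100·(1 + log(x_t/x_1932)), an approximate percentage index. A plain-Python sketch mirroring the R loops above (only two of the nine series are shown, as illustration):

```python
import math

years = list(range(1927, 1938))
rates = {
    "de": [10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5],
    "fr": [16.5, 16.4, 17.9, 15.6, 16.2, 15.8, 15.8, 15.1, 15.7, 15.3, 15.0],
}
ref = years.index(1932)

def index_to_ref(series, ref):
    """100 * (1 + cumulative log change relative to the reference year),
    matching the running-sum construction in the R code above."""
    return [100.0 * (1.0 + math.log(x / series[ref])) for x in series]

idx = {c: index_to_ref(v, ref) for c, v in rates.items()}
# Germany ends above its 1932 level; France ends below it
```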
|
How to plot trends properly
|
There are good answers here. Let me take you at your word that you want to show that the trend for Germany differs from the rest. Levels vs. changes is a common distinction in economics. Your data
|
How to plot trends properly
There are good answers here. Let me take you at your word that you want to show that the trend for Germany differs from the rest. Levels vs. changes is a common distinction in economics. Your data are in levels, but your question is stated as seeking changes. The way to do that is to set the reference level (here 1932) as $1$. From there, each successive year is a fraction of the previous. (It is common to take logs to make changes more stable and symmetrical. This does change the meaning of the exact numbers somewhat, if you really want someone to get that from the plot, but usually for this kind of thing, people want to be able to see the pattern.) You then get a running sum for each series and multiply it by $100$ by convention. That's what you plot. Your case is slightly less common in that your reference point is in the middle of your series, so I ran this in both directions from 1932. Below is a simple example, coded in R (there will be lots of ways to make the code and plot nicer, but this should show the idea straightforwardly). I made the line for Germany thicker to distinguish it in the legend, and I added a reference line at $100$. It's easy to see that Germany stands out from the rest. You can also see that all the other countries end up with lower rates at 1937 than 1932, and that their year by year changes fluctuate much less in the years after 1932 than in the years leading up to it.
d = read.table(text="
year de fr be nl den ch aut cz pl
1927 10.9 16.5 13 10.2 11.6 12.4 15 16 17.3
...
1937 11.5 15 12.5 8.8 10.8 11.3 13.3 13.3 14",
header=T)
d2 = d # we'll end up needing both
d2[6,2:10] = 1 # set 1932 as 1
for(j in 2:10){
for(i in 7:11){
# changes moving forward from 1932:
d2[i,j] = log( d[i,j]/d[i-1,j] )
# running sum moving forward from 1932:
d2[i,j] = d2[i,j]+d2[i-1,j]
}
for(i in 5:1){
# changes moving backward from 1932:
d2[i,j] = log( d[i,j]/d[i+1,j] )
# running sum moving forward from 1932:
d2[i,j] = d2[i+1,j]+d2[i,j]
}
}
d2[,2:10] = d2[,2:10]*100 # multiply all values by 100
windows() # plot of changes
plot(1,1, xlim=c(1927,1937), ylim=c(82,118), xlab="Year",
ylab="Change from 1932", main="European death rates")
abline(h=100, col="lightgray")
for(j in 2:10){
lines(1927:1937, d2[,j], col=rainbow(9)[j-1], lwd=ifelse(j==2,2,1))
}
legend("bottomleft", legend=colnames(d2)[2:10], lwd=c(2,rep(1,8)), lty=1,
col=rainbow(9), ncol=2)
windows() # plot of levels
plot(1,1, xlim=c(1927,1937), ylim=c(8,18.4), xlab="Year",
ylab="Deaths per thousand", main="European death rates")
abline(h=d[6,2:10], col="gray90")
points(rep(1932,9), d[6,2:10], col=rainbow(9), pch=16)
for(j in 2:10){
lines(1927:1937, d[,j], col=rainbow(9)[j-1], lwd=ifelse(j==2,2,1))
}
legend("topright", legend=colnames(d)[2:10], lwd=c(2,rep(1,8)), lty=1,
col=rainbow(9), ncol=2)
By contrast, below is a corresponding plot of the data in levels. I nonetheless tried to make it possible to see that Germany alone goes up after 1932 in two ways: I put a prominent point on each series at 1932, and drew a faint gray line across the plot in the background at those levels.
|
How to plot trends properly
There are good answers here. Let me take you at your word that you want to show that the trend for Germany differs from the rest. Levels vs. changes is a common distinction in economics. Your data
|
5,594
|
How to plot trends properly
|
There are many good ideas here in other answers, but they don't exhaust the good solutions that are possible. The first graph in this answer takes it that different levels of death rate can be discussed and explained separately. In allowing each series to fill much of the space available, it focuses readers' attention on patterns of relative change.
Alphabetical order by country is usually a dopey default, and is not insisted on here. Fortuitously, and fortunately, Germany as de is in the centre of this 3 x 3 display. A simple narrative -- Look! Germany's pattern is exceptional with an upturn from 1932 -- is made possible and plausible.
Fortuitously, but fortunately, 9 countries are enough to justify trying separate panels, but not too many to make that design impracticable (with say 30 and certainly 300 panels, there could (would) be too many panels to scan, with each too small to scrutinize).
Evidently, there is plenty of space here for fuller country names. (In some other answers, legends take up a large fraction of the available space, while remaining a little cryptic. In practice, people interested in such data would find the country abbreviations easy to decode, but how far the legend is needed is often a vexing issue in graphical design.)
Stata code for the record:
clear
input int year double(de fr be nl den ch aut cz pl)
1927 10.9 16.5 13 10.2 11.6 12.4 15 16 17.3
1928 11.2 16.4 12.8 9.6 11 12 14.5 15.1 16.4
1929 11.4 17.9 14.4 10.7 11.2 12.5 14.6 15.5 16.7
1930 10.4 15.6 12.8 9.1 10.8 11.6 13.5 14.2 15.6
1931 10.4 16.2 12.7 9.6 11.4 12.1 14 14.4 15.5
1932 10.2 15.8 12.7 9 11 12.2 13.9 14.1 15
1933 10.8 15.8 12.7 8.8 10.6 11.4 13.2 13.7 14.2
1934 10.6 15.1 11.7 8.4 10.4 11.3 12.7 13.2 14.4
1935 11.4 15.7 12.3 8.7 11.1 12.1 13.7 13.5 14
1936 11.7 15.3 12.2 8.7 11 11.4 13.2 13.3 14.2
1937 11.5 15 12.5 8.8 10.8 11.3 13.3 13.3 14
end
rename (de-pl) (death=)
reshape long death, i(year) j(country) string
set scheme s1color
line death year, by(country, yrescale note("")) xtitle("") xla(1927(5)1937)
EDIT:
One simple enhancement of this graph suggested by Tim Morris is to highlight the year in which the maximum occurred:
egen max = max(death) , by(country)
replace max = max == death
twoway line death year || scatter death year if max, ms(O) ///
by(country, yrescale note("") legend(off)) xtitle("") xla(1927(5)1937)
EDIT 2 (revised to show simpler code):
Alternatively, this next design shows each series separately, but each time with the other series as backdrop. The general idea is discussed within this related thread.
There is loss as well as gain here. While each series can more easily be seen in the context of others, space is lost by repetition.
Stata code for the record:
(Code to input, reshape, rename as above in this answer)
* type "ssc inst fabplot" to install
fabplot line death year, by(country, compact note("countries highlighted in turn")) ///
ytitle("death rate, yearly deaths per 1000") yla(8(2)18, ang(h)) ///
xla(1927(5)1937, format(%tyY)) xtitle("") front(connected)
fabplot is to be understood as front or foreground and backdrop or background plot, not as some echo of 1960s slang for "fabulous".
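The "highlight the year of the maximum" idea translates directly to other tools: compute, per country, the year at which each series peaks, then mark that point in whatever plotting system you use. A plain-Python sketch on a subset of the data from the Stata block above:

```python
years = list(range(1927, 1938))
death = {
    "de": [10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5],
    "fr": [16.5, 16.4, 17.9, 15.6, 16.2, 15.8, 15.8, 15.1, 15.7, 15.3, 15.0],
    "nl": [10.2, 9.6, 10.7, 9.1, 9.6, 9.0, 8.8, 8.4, 8.7, 8.7, 8.8],
}

# Year of the (first) maximum for each series -- the point the Stata code circles
peak_year = {c: years[v.index(max(v))] for c, v in death.items()}
print(peak_year)  # de peaks in 1936; fr and nl peak in 1929
```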
|
5,595
|
How to plot trends properly
|
Your graph is reasonable, but it would require some refinement, including a title, axis labels, and complete country labels. If your goal is to stress the fact that Germany was the only country with a rise in death rate over the observation period, then a simple way to do this would be to highlight its line in the plot, either by using a thicker line, a different line-type, or alpha transparency. You could also augment your time-series plot with a bar-plot showing the change in death rate over time, so that the complexity of the time-series lines is reduced to a single measure of change.
Here is how you could produce these plots using ggplot in R:
library(tidyr);
library(dplyr);
library(ggplot2);
#Create data frame in wide format
DATA_WIDE <- data.frame(Year = 1927L:1937L,
DE = c(10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5),
FR = c(16.5, 16.4, 17.9, 15.6, 16.2, 15.8, 15.8, 15.1, 15.7, 15.3, 15.0),
BE = c(13.0, 12.8, 14.4, 12.8, 12.7, 12.7, 12.7, 11.7, 12.3, 12.2, 12.5),
NL = c(10.2, 9.6, 10.7, 9.1, 9.6, 9.0, 8.8, 8.4, 8.7, 8.7, 8.8),
DEN = c(11.6, 11.0, 11.2, 10.8, 11.4, 11.0, 10.6, 10.4, 11.1, 11.0, 10.8),
CH = c(12.4, 12.0, 12.5, 11.6, 12.1, 12.2, 11.4, 11.3, 12.1, 11.4, 11.3),
AUT = c(15.0, 14.5, 14.6, 13.5, 14.0, 13.9, 13.2, 12.7, 13.7, 13.2, 13.3),
CZ = c(16.0, 15.1, 15.5, 14.2, 14.4, 14.1, 13.7, 13.3, 13.5, 13.3, 13.3),
PL = c(17.3, 16.4, 16.7, 15.6, 15.5, 15.0, 14.2, 14.4, 14.0, 14.2, 14.0));
#Convert data to long format
DATA_LONG <- DATA_WIDE %>% gather(Country, Measurement, DE:PL);
#Set line-types and sizes for plot
#Germany (DE) is the fifth country in the plot
LINETYPE <- c("dashed", "dashed", "dashed", "dashed", "solid", "dashed", "dashed", "dashed", "dashed");
SIZE <- c(1, 1, 1, 1, 2, 1, 1, 1, 1);
#Create time-series plot
theme_set(theme_bw());
PLOT1 <- ggplot(DATA_LONG, aes(x = Year, y = Measurement, colour = Country)) +
geom_line(aes(size = Country, linetype = Country)) +
scale_size_manual(values = SIZE) +
scale_linetype_manual(values = LINETYPE) +
scale_x_continuous(breaks = 1927:1937) +
scale_y_continuous(limits = c(0, 20)) +
labs(title = "Annual Time Series Plot: Death Rates over Time",
subtitle = "Only Germany (DE) trends upward from 1927-37") +
xlab("Year") + ylab("Crude Death Rate\n(per 1,000 population)");
#Create new data frame for differences
DATA_DIFF <- data.frame(Country = c("DE", "FR", "BE", "NL", "DEN", "CH", "AUT", "CZ", "PL"),
Change = as.numeric(DATA_WIDE[11, 2:10] - DATA_WIDE[1, 2:10]));
#Create bar plot
PLOT2 <- ggplot(DATA_DIFF, aes(x = reorder(Country, - Change), y = Change, colour = Country, fill = Country)) +
geom_bar(stat = "identity") +
labs(title = "Bar Plot: Change in Death Rates from 1927-37",
subtitle = "Only Germany (DE) shows an increase in death rate") +
xlab(NULL) + ylab("Change in crude Death Rate\n(per 1,000 population)");
This leads to the following plots:
Note: I am aware that the OP intended to highlight the change in death rate since 1932, when the trend in Germany started going up. This seems to me a bit like cherry-picking, and I find it dubious when time intervals are chosen to obtain a particular trend. For this reason I have looked at the interval over the whole data range, which is a different comparison to the OP.
|
5,596
|
How to plot trends properly
|
Although the stated objective is to display changes, apparently you wish to show the annual time series by country, too. That suggests not completely redoing the graphic, but just modifying it.
Since a change concerns what happens from one year to the next, you might consider representing the changes by graphical symbols that span successive years: that is, the line segments connecting the data points in the plot.
Since color is so useful for distinguishing countries, and otherwise is not so good at indicating quantitative variables, that leaves us with essentially just two other characteristics that can be varied to indicate change: the style and thickness of the segments. Because your thesis concerns positive change, you will want to make line segments for increases more prominent: their styles should be more continuous and they should be thicker.
Finally, your thesis concerns data after 1932. We will want to emphasize those elements of the graphic relative to the others. That can be done by saturating the color.
This solution immediately provides insights that were not apparent in the original:
No country experienced annual increases in death rates for all years after 1932. Any such country would appear as a continuous solid line, but there is no such line present.
Much of the change ought to be attributed to factors common to all countries. This is apparent in the similarities of line style and thickness within vertical columns. For instance, during the period 1934-35 the death rates increased in almost all countries, whereas in 1933-34 they decreased in nearly all of them.
Germany was unusual in experiencing a large increase in death rates in 1932-33 and also a slight increase in 1935-36.
These suggest performing a robust two-way exploration of change in death rate versus country, perhaps by median polish, in order to penetrate more deeply into the relative performance of European countries during this period.
If you wish to emphasize only the difference between 1937 and 1932, a similar technique can be used to symbolize the portions of the paths between those dates. Germany would stand out:
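The data preparation behind this kind of display can be sketched without committing to a plotting package: compute the year-over-year changes and map each segment's sign, magnitude, and period to a style, width, and saturation. The Python sketch below is my own illustration of that mapping (the particular widths and alpha values are arbitrary choices, not the settings used for the figures above):

```python
# Map each year-over-year change in the death rate to line-segment
# attributes: increases become solid, thicker segments; decreases become
# dashed, thinner ones; segments starting in 1932 or later get full
# colour saturation (alpha = 1.0) to emphasise the period of interest.

def segment_styles(years, rates):
    """Return one (style, width, alpha) triple per consecutive pair of years."""
    triples = []
    for i in range(len(years) - 1):
        change = rates[i + 1] - rates[i]
        style = "solid" if change > 0 else "dashed"
        width = round(1.0 + 2.0 * abs(change), 2)  # thicker for larger changes
        alpha = 1.0 if years[i] >= 1932 else 0.4   # saturate the post-1932 era
        triples.append((style, width, alpha))
    return triples

# Germany's crude death rates, 1927-1937 (from the question's data)
de_years = list(range(1927, 1938))
de_rates = [10.9, 11.2, 11.4, 10.4, 10.4, 10.2, 10.8, 10.6, 11.4, 11.7, 11.5]

print(segment_styles(de_years, de_rates)[:3])
# → [('solid', 1.6, 0.4), ('solid', 1.4, 0.4), ('dashed', 3.0, 0.4)]
```

Drawing each segment with its own triple then reproduces the design: increases read as solid, thick strokes, and the post-1932 segments are the most saturated.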
|
5,597
|
How to plot trends properly
|
Slopegraphs
One way that you could present your data is using a slopegraph, which is particularly good for comparing changes or gradients (some links: 1 2 )
Below is
On the left an example of a slopegraph that shows how this looks for your case.
In the center a more complex slopegraph which also shows the year 1932
On the right a variation of the slopegraph, more a sort of sparklines, where all data is shown (meaning no straight lines).
I am not sure which one is best. The third/right option gives a stronger impression of the variation from year to year (for instance, it becomes more visible that Denmark and Germany do not look so different, and that rates go up and down a lot from year to year), but it can also be distracting (especially the 1929 peak). So which one is better depends on what you want to convey with the graph and how much detail your story requires (e.g. the turn around 1932 under the new government, which is clearer in the second/middle option).
The variation of the slopegraph on the right looks much like the graph by Xan. However, besides stylistic differences there is one more important difference. The width and height of the figure are chosen such that the angle of the curves are close to 45 degrees. In this way the differences are more salient (I believe that the best example is the sunspot example by Edward Tufte)
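The data preparation for the simple slopegraph on the left amounts to extracting each country's two endpoints and ordering them for label placement. A minimal Python sketch (my own illustration, using the 1927 and 1937 values from the question's table):

```python
# Two-point slopegraph data: one (start, end) pair per country, ordered
# by the 1927 value so that left-hand labels stack top to bottom.

rates = {  # crude death rate per 1000 population: (1927, 1937)
    "DE": (10.9, 11.5), "FR": (16.5, 15.0), "BE": (13.0, 12.5),
    "NL": (10.2, 8.8), "DEN": (11.6, 10.8), "CH": (12.4, 11.3),
    "AUT": (15.0, 13.3), "CZ": (16.0, 13.3), "PL": (17.3, 14.0),
}

slopes = sorted(
    ((country, start, end, end - start) for country, (start, end) in rates.items()),
    key=lambda row: -row[1],  # highest 1927 value first
)

for country, start, end, change in slopes:
    marker = "  <-- only riser" if change > 0 else ""
    print(f"{country:>3}  {start:5.1f} -> {end:5.1f}  ({change:+.1f}){marker}")
```

With the rows ordered by the 1927 value, the left-hand labels can be stacked without overlap, and Germany is immediately identifiable as the only positive slope.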
More context
If you want to add more complexity than the simple slopegraph, then I believe it is actually better to show more data outside the range 1927-1937 than inside the range. (Again an example by Tufte, from pages 74-75 in The Visual Display of Quantitative Information; you can get to it via this page on the bulletin board on his website.)
The example below shows data for the years 1900-2000 (excluding Poland whose data is a bit difficult) extracted from wikipedia (e.g. this page for Czech Republic) and for Switzerland and the Netherlands their national bureaus of statistics (bfs and Statline).
(The data is a bit different from yours but the same as for instance the article "Autarchy, market disintegration, and health: The mortality and nutritional crisis in Nazi Germany, 1933-1937" by Jörg Baten and Andrea Wagner. This article is interesting to read since they provide many more data than just crude death rates, although they also limit themselves to a small period. Especially interesting is that the rise in death rate, from 1932 to 1937, mainly existed among cities in a strip from Frankfurt to Bremen and Hamburg)
I believe that this graph is important because it shows that Germany saw a very sharp drop before the rise after 1932, sharper than in other countries. So you can have negative and positive interpretations. Germany's death rate was rising more than other countries' between 1932-1937, but was this (1) a rise away from a low trough, or (2) a rise towards a high peak? An interesting aspect in this regard is that the 1932 level of 10.8 is a very low level for Germany (at this point only the Netherlands had a lower death rate). This is not only the lowest level for the years up to 1937; it also takes until 1995 before this level of 10.8 is reached again.
Another point: if health is your context, it might be better to compare life expectancy, since the demographic composition of the population influences the crude death rate independently of changes in the health situation.
A bit less additional context
The above graph shows the totality but may be overkill for most purposes (except in this post, where I wanted to show the entire history, more for an exploratory purpose). The graph below is an alternative which, I believe, is still decent.
|
5,598
|
How to plot trends properly
|
If you are wanting to highlight change, then perhaps calculate this and display that. Using a heatmap to display the changes can be useful as it allows comparisons to be made without overplotting issues and avoids interpolation issues that can come from line graphs.
Using your data as d in R:
library(tidyverse)
d2 <- data.frame(apply(d[-1],2,diff))
d2$year <- d$year[-1]
d2 %>% gather(key="country",value=deathrate,-year) %>%
ggplot(aes(x=factor(year),y=country,fill=deathrate)) +
geom_tile() +
scale_fill_gradient2("\u0394 deathrate")
Note that the data is now the change from the previous year. You can see that Germany has a cluster of blues (increases in death rates) after 1932 that other countries do not have. You can also see that between 1934 and 1935 all countries except Poland saw increases in death rates, but Germany's trend-bucking years appear to be 1932-1933 and 1935-1936 (as well as 1927-1928).
One interesting feature is that the colours are more intense on the left compared to the right. This means that the magnitude of the changes was higher at the start of the period, and more muted towards the end.
I would recommend pairing this with a line graph showing the levels too.
|
5,599
|
How to plot trends properly
|
Depends on the audience, but I would simplify things:
Then spell it out in the caption e.g.
From 1932-37, the annual death rate increased in Germany, whereas it fell overall throughout central Europe (France, Belgium, Netherlands, Denmark, Austria, Czech Republic, Poland).
(BTW what is ch vs. cz i.e. which country am I missing above?)
To be thorough, you will of course need to weight the death rate by an estimate of population when 'pooling' this for the 'Others', but I'm sure this information is readily available to you.
Update 6/9/18:
This is of course a 'toy' sketch and was not derived from the data; the idea is to provide a rough draft of the form a graph should take.
To address whuber's comment: the values for the 'Others' could be generated as a mean weighted by population, e.g. with $O_y$ denoting the pooled value in year $y$ and $i = 1, \dots, 8$ indexing the eight countries in 'Others':
$$
O_y = \sum_{i=1}^{8} \frac{ADR_{yi} \cdot population_i}{totalPopulation}
$$
or better, if you have population information for each year:
$$
O_y = \sum_{i=1}^{8} \frac{ADR_{yi} \cdot population_{yi}}{totalPopulation_y}
$$
Depending on the readership (e.g. epidemiologists vs. historians) a standard deviation or standard error could be added to the latter, although I think this would rather spoil the simple look of the plot.
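As a numerical sketch of the weighted mean above (Python; the population figures below are hypothetical placeholders, used only to show the arithmetic):

```python
# Population-weighted mean death rate for an 'Others' group in one year.
# NOTE: the population figures below are hypothetical placeholders; only
# the arithmetic of the weighting is the point here.

def pooled_rate(rates, populations):
    """Weighted mean of death rates; weights are shares of total population."""
    total = sum(populations.values())
    return sum(rates[c] * populations[c] for c in rates) / total

rates_1937 = {"FR": 15.0, "BE": 12.5, "NL": 8.8}   # subset, for brevity
populations = {"FR": 41.0, "BE": 8.0, "NL": 8.5}   # millions (made up)

print(round(pooled_rate(rates_1937, populations), 2))  # → 13.74
```

For comparison, the unweighted mean of these three rates is 12.1; the weighting pulls the pooled value towards populous France, which is the point of the formula.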
|
5,600
|
How to plot trends properly
|
Here I show the difference of the logarithm of the death rate per 1000 inhabitants with respect to the previous year (therefore 1927 is not shown). Germany is shown in red, while the average of the other countries is shown in the thick black line.
Germany had increases in the rate in 5 out of 10 years. After 1932 it stayed above the average of the other countries (and mostly positive) until 1937.
Though why the logarithm? The reason is simple: the change from 2 to 1 is more drastic than the change from 1000 to 999 :)
Code:
x = read.table("clipboard", header = TRUE, dec = ".")
xl = log(x[-1])
xd = apply(xl, 2L, diff)
png("CVquestion.png")
plot(0,0, xlim = range(x[-1,1]), ylim = range(xd), type = "n", ylab = "", main = "Difference of the log(death rate per 1000 inhab.)", xlab = "year")
grid()
for (i in rev(seq(ncol(xl)))) lines(x[-1,1], xd[,i], type = "o", col = adjustcolor(ifelse(i == 1, 2, 1), 0.7), lwd = ifelse(i == 1, 2, 1), lty = ifelse(i == 1, 1, 2), pch = ifelse(i == 1,16,NA))
lines(x[-1,1], rowMeans(xd[,-1]), type = "o", col = adjustcolor(1, 0.7), lwd = 2, lty = 1, pch = 16)
text(x = 1937, y = rev(xd[10,]), label = rev(colnames(xd)), col = rev(c(2, rep(1,8))))
dev.off()
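The "why the logarithm" point above can be verified numerically: both drops below are one unit in levels, but in logs the drop from 2 to 1 dwarfs the drop from 1000 to 999. A quick Python check (separate from the R code above):

```python
import math

# Both drops are one unit in absolute terms, but the log difference shows
# that halving (2 -> 1) is a far larger relative change than losing
# 0.1% (1000 -> 999).

drop_small = math.log(1) - math.log(2)        # 2 -> 1
drop_large = math.log(999) - math.log(1000)   # 1000 -> 999

print(round(drop_small, 4), round(drop_large, 4))  # → -0.6931 -0.001
```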
|