Difference between regression analysis and analysis of variance?
The analysis of variance (ANOVA) is a body of statistical methods for analyzing observations assumed to have the structure
$y_i=\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_px_{ip}+e_i,\quad i=1,\dots,n$,
i.e. linear combinations of $p$ unknown quantities $\beta_1,\beta_2,\dots,\beta_p$ plus errors $e_1,e_2,\dots,e_n$. The $\{x_{ij}\}$ are known constant coefficients, and the random variables $\{e_i\}$ are uncorrelated with the same mean $0$ and the same (unknown) variance $\sigma^2$. In matrix form, $E(y^{n \times 1})=X\beta$ and $D(y)=\sigma^2I_n$, where $D$ is the dispersion (variance-covariance) matrix.
In the analysis of variance, the coefficients $\{x_{ij}\}$ are the values of counter (indicator) variables that record the presence or absence of the effects $\{\beta_j\}$ under the conditions in which the observations are taken: $x_{ij}$ is the number of times $\beta_j$ occurs in the $i$-th observation, usually $0$ or $1$. In general, in the analysis of variance all factors are treated qualitatively.
If the $\{x_{ij}\}$ are instead values taken by continuous variables such as $t$ = time, $T$ = temperature, $t^2$, $e^{-T}$, etc., then we have a case of regression analysis. In general, in regression analysis all factors are quantitative and treated quantitatively.
In short, these are two kinds of analysis of the same underlying linear model.
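The correspondence can be made concrete with a small sketch (illustrative, made-up data, pure standard library): coding membership in two groups as a 0/1 counter variable and fitting ordinary least squares reproduces the group means, so a two-group ANOVA comparison is exactly a regression on an indicator variable.

```python
# One-way ANOVA as regression on a 0/1 indicator ("counter") variable.
# Illustrative made-up data; x_i is 0 for group A and 1 for group B.

def ols_simple(x, y):
    """Least-squares intercept and slope for y = b0 + b1*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

group_a = [1.0, 2.0, 3.0]   # responses with x = 0
group_b = [5.0, 6.0, 7.0]   # responses with x = 1

x = [0] * len(group_a) + [1] * len(group_b)
y = group_a + group_b

b0, b1 = ols_simple(x, y)
# The regression coefficients are exactly the ANOVA quantities:
# b0 == mean(group_a) == 2.0, and b1 == 4.0, the difference of group means.
```

With $k$ groups the same idea uses $k-1$ indicator columns (dummy coding), and the regression F-test for those columns is the one-way ANOVA F-test.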
Difference between regression analysis and analysis of variance?
In regression analysis you treat one variable as the response and ask how it varies with the other variables.
In analysis of variance you ask, for example, whether a specific animal food influences the weight of the animals: one fixed factor, and its influence on the response.
Is there an equivalent to Kruskal Wallis one-way test for a two-way model?
You can use a permutation test.
Formulate your hypothesis as a full versus reduced model comparison, and compute the F-statistic (or another statistic of interest) for that comparison on the original data.
Next, compute the fitted values and residuals for the reduced model. Randomly permute the residuals and add them back to the fitted values, then carry out the full-versus-reduced test on the permuted dataset and save its F-statistic (or other statistic). Repeat this many times (e.g. 1999 permutations).
The p-value is then the proportion of the statistics that are greater than or equal to the original statistic.
This approach can be used to test interactions or groups of terms including interactions.
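As one possible illustration of this recipe (a sketch, not code from the answer itself), the following tests the interaction in a balanced two-way layout, where the reduced additive model has the closed-form least-squares fit row mean + column mean − grand mean; the data and permutation count are made up:

```python
import random

def _means(cells):
    """Grand, row and column means of a balanced two-way layout."""
    a_lv = sorted({a for a, _ in cells})
    b_lv = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))          # replicates per cell
    N = n * len(a_lv) * len(b_lv)
    grand = sum(sum(ys) for ys in cells.values()) / N
    row = {a: sum(sum(cells[(a, b)]) for b in b_lv) / (n * len(b_lv)) for a in a_lv}
    col = {b: sum(sum(cells[(a, b)]) for a in a_lv) / (n * len(a_lv)) for b in b_lv}
    return a_lv, b_lv, n, N, grand, row, col

def f_interaction(cells):
    """F-statistic comparing the full (cell-means) model against the
    reduced (additive) model; valid for a BALANCED layout only."""
    a_lv, b_lv, n, N, grand, row, col = _means(cells)
    rss_full = rss_red = 0.0
    for (a, b), ys in cells.items():
        cell_mean = sum(ys) / n
        add_fit = row[a] + col[b] - grand        # additive LS fit (balanced case)
        rss_full += sum((y - cell_mean) ** 2 for y in ys)
        rss_red += sum((y - add_fit) ** 2 for y in ys)
    df_int = (len(a_lv) - 1) * (len(b_lv) - 1)
    df_err = N - len(a_lv) * len(b_lv)
    return ((rss_red - rss_full) / df_int) / (rss_full / df_err)

def perm_p_interaction(cells, n_perm=199, seed=1):
    """Permute the reduced-model residuals, add them back to the reduced
    fits, and recompute the full-vs-reduced F-statistic each time."""
    a_lv, b_lv, n, N, grand, row, col = _means(cells)
    keys = sorted(cells)
    fits, resid = [], []
    for (a, b) in keys:
        add_fit = row[a] + col[b] - grand
        for y in cells[(a, b)]:
            fits.append(add_fit)
            resid.append(y - add_fit)
    f_obs = f_interaction(cells)
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(resid)
        ystar = [f + e for f, e in zip(fits, resid)]
        perm_cells = {k: ystar[i * n:(i + 1) * n] for i, k in enumerate(keys)}
        if f_interaction(perm_cells) >= f_obs:
            count += 1
    return (count + 1) / (n_perm + 1)            # count the observed statistic

# Made-up 2x2 example with a strong interaction (diagonal cells are high)
data_rng = random.Random(0)
cells = {(a, b): [(5.0 if a == b else 0.0) + data_rng.gauss(0, 0.5)
                  for _ in range(4)]
         for a in (0, 1) for b in (0, 1)}
p_value = perm_p_interaction(cells)
```

Counting the observed statistic in both numerator and denominator keeps the p-value away from exactly zero; with 1999 permutations as in the answer, the smallest attainable p-value is 1/2000.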
Is there an equivalent to Kruskal Wallis one-way test for a two-way model?
The Kruskal-Wallis test is a special case of the proportional odds model. You can use the proportional odds model to model multiple factors, adjust for covariates, etc.
Is there an equivalent to Kruskal Wallis one-way test for a two-way model?
Friedman's test provides a non-parametric equivalent to a one-way ANOVA with a blocking factor, but can't do anything more complex than this.
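For reference, the Friedman statistic itself is simple to compute: rank the $k$ treatments within each of the $n$ blocks, sum the ranks per treatment, and form $Q = \frac{12}{nk(k+1)} \sum_j R_j^2 - 3n(k+1)$, which is referred to a $\chi^2$ distribution with $k-1$ degrees of freedom. A minimal sketch (made-up data; mid-ranks for ties but no tie-correction factor applied):

```python
def friedman_statistic(blocks):
    """Friedman chi-squared statistic.
    blocks: list of n blocks, each a list of the k treatment responses.
    Uses mid-ranks for ties but applies no tie-correction factor."""
    n = len(blocks)
    k = len(blocks[0])
    rank_sums = [0.0] * k
    for block in blocks:
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                     # assign mid-ranks to tied values
            j = i
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1
            mid = (i + j) / 2 + 1        # average of ranks i+1 .. j+1
            for t in range(i, j + 1):
                ranks[order[t]] = mid
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    q = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
    return q                             # refer to chi-squared with k-1 df

# Three blocks, three treatments, perfectly consistent ordering:
# perfect agreement gives the maximum Q = n*(k-1)
q = friedman_statistic([[1, 2, 3], [2, 4, 6], [3, 5, 7]])
```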
Is there an equivalent to Kruskal Wallis one-way test for a two-way model?
One nonparametric test for a two-way factorial design is the Scheirer–Ray–Hare test. It is described by Sokal and Rohlf (1995) and can be found on a variety of websites, though it appears not to be particularly well known or widely discussed.
Another approach is aligned rank transform ANOVA (ART ANOVA). With current software implementations, this approach is easy to use, and some implementations can handle relatively complex designs, including random effects.
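As a sketch of how the Scheirer–Ray–Hare test works (my summary, not taken from Sokal and Rohlf directly, and assuming a balanced layout): rank the pooled observations, run an ordinary two-way ANOVA on the ranks, and divide each effect's sum of squares by the total mean square of the ranks; each resulting H statistic is referred to a chi-squared distribution with the usual degrees of freedom.

```python
def scheirer_ray_hare(cells):
    """Scheirer-Ray-Hare H statistics for a balanced two-way layout.
    cells: dict {(a_level, b_level): list of responses}, equal cell sizes.
    Returns (H_A, H_B, H_AB); refer each to a chi-squared distribution with
    a-1, b-1 and (a-1)*(b-1) degrees of freedom respectively.
    Uses mid-ranks for ties but applies no tie-correction factor."""
    # pool and rank all observations
    flat = sorted((y, key, i) for key, ys in cells.items() for i, y in enumerate(ys))
    N = len(flat)
    ranks = {}
    i = 0
    while i < N:                                  # mid-ranks for tied values
        j = i
        while j + 1 < N and flat[j + 1][0] == flat[i][0]:
            j += 1
        mid = (i + j) / 2 + 1                     # average of ranks i+1 .. j+1
        for t in range(i, j + 1):
            ranks[(flat[t][1], flat[t][2])] = mid
        i = j + 1
    rcells = {key: [ranks[(key, i)] for i in range(len(ys))]
              for key, ys in cells.items()}
    a_lv = sorted({a for a, _ in cells})
    b_lv = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))
    grand = (N + 1) / 2                           # mean of the ranks
    row = {a: sum(sum(rcells[(a, b)]) for b in b_lv) / (n * len(b_lv)) for a in a_lv}
    col = {b: sum(sum(rcells[(a, b)]) for a in a_lv) / (n * len(a_lv)) for b in b_lv}
    ss_a = n * len(b_lv) * sum((row[a] - grand) ** 2 for a in a_lv)
    ss_b = n * len(a_lv) * sum((col[b] - grand) ** 2 for b in b_lv)
    ss_ab = n * sum((sum(rcells[(a, b)]) / n - row[a] - col[b] + grand) ** 2
                    for a in a_lv for b in b_lv)
    # MS_total of the ranks; equals N*(N+1)/12 when there are no ties
    ms_total = sum((r - grand) ** 2 for r in ranks.values()) / (N - 1)
    return ss_a / ms_total, ss_b / ms_total, ss_ab / ms_total

# Made-up balanced 2x2 example, two observations per cell, purely additive
h_a, h_b, h_ab = scheirer_ray_hare(
    {(0, 0): [1, 2], (0, 1): [3, 4], (1, 0): [5, 6], (1, 1): [7, 8]})
```

Because this example is exactly additive in the ranks, the interaction statistic H_AB comes out zero while both main-effect statistics are positive.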
References
Sokal, R.R. and F.J. Rohlf. 1995. Biometry, 3rd ed. W.H. Freeman. New York.
Pitfalls in experimental design: Avoiding dead experiments
I believe what Fisher meant in his famous quote goes beyond saying "We will do a full factorial design for our study" or another design approach. Consulting a statistician when planning the experiment means thinking about every aspect of the problem in an intelligent way, including the research objective, what variables are relevant, how to collect them, data management, pitfalls, intermediate assessment of how the experiment is going, and much more. Often, I find it is important to see every aspect of the proposed experiment hands-on to really understand where the difficulties lie.
My experience is mainly from medical applications. Some of the issues I have encountered that could have been prevented by consulting a statistician beforehand:
Insufficient sample size is, of course, number one on this list. Often, data from previous studies would have been available and it would have been easy to give a reasonable estimate of the sample size needed. In these cases, the only recourse is often to do a purely descriptive analysis of the data and promise further research in the paper (not publishing is usually not an option after doctors invested valuable time).
Execution of the experiments is left to convenience and chance instead of design. An example I am currently working on has measurements collected over time. The measurement times, measurement frequency and end of monitoring period all vary wildly between individuals. Increasing the number of measurements per individual and fixing the measurement dates and end of monitoring period would have been fairly little extra work (in this case) and would have been very beneficial to the study.
Poor control of nuisance factors that could have easily been controlled. E.g. measurements were sometimes performed on the day of sample collection and sometimes later, leaving the possibility that the sample has degraded.
Poor data management, including my personal favourite "I rounded the data before putting it into the computer, because the machine is inaccurate in its measurements". Often, relevant data is just not collected and it is impossible to get it after the fact.
Often, problems with a study go even further back, to the initial conception of the research:
Data is sometimes collected without a clear objective and just the assumption that it will be useful somehow. Producing hypotheses and "significant results" is left to the statistician.
And the opposite: data is scraped together with the aim of proving a specific point that the PI has in his head, irrespective of the data and what can actually be proved with it. This time, the statistician is just supposed to put his stamp of significance on pre-written conclusions without the conclusions getting adjusted in the face of the data.
So far, this mainly sounds like the statistician suffers and maybe scientific integrity suffers when the PI tries to push conclusions not supported by the data (always a fun discussion). But the experimental team suffers as well, because they do unnecessary extra work (while not doing necessary work) during the experimental phase and need to spend much more time in discussion with their statistician after the fact, because they did not get their advice before. And of course, the final paper will be worse, will have fewer conclusions (and more "conjectures") and will likely not make it into that high-impact journal the PI wanted.
Pitfalls in experimental design: Avoiding dead experiments
Two words: sample size. A power analysis is a must. By including a competent statistician on your team from the get-go, you will likely save yourself a great deal of frustration when you are writing the results and discussion sections of your manuscript or report.
It is all too common for a principal investigator to collect data prior to consulting with a statistician with the expectation of a "predictive model" or a "causal relationship" from a sample of less than 30 subjects. Had the PI consulted with a statistician prior to collecting data, the statistician would have been able to inform the PI, after appropriate analyses, to collect more data/subjects or to restructure the goals of their analysis plan/project.
Pitfalls in experimental design: Avoiding dead experiments
I suppose it depends on how strictly you interpret the word "design". It is sometimes taken to mean completely randomized vs. randomized blocks, etc. I don't think I've seen a study that died from that. Also, as others have mentioned, I suspect "died" is too strong, but it depends on how you interpret the term. Certainly I've seen studies that were 'non-significant' (and that researchers subsequently did not try to publish as a result); if we assume these studies might have been 'significant', and hence published, had they been conducted differently (according to obvious advice I would have given), they might qualify as "died". In light of this conception, the power issue raised by both @RobHall and @MattReichenbach is pretty straightforward, but there is more to power than sample size, and other issues could fall under a looser conception of "design". Here are a couple of examples:
Not gathering / recording / or throwing away information
I worked on a study where the researchers were interested in whether a particular trait was related to a cancer. They got mice from two lines (i.e., genetic lines, the mice were bred for certain properties) where one line was expected to have more of the trait than the other. However, the trait in question was not actually measured, even though it could have been. This situation is analogous to dichotomizing or binning a continuous variable, which reduces power. However, even if the results were 'significant', they would be less informative than if we knew the magnitude of the trait for each mouse.
Another case within this same heading is not thinking about and gathering obvious covariates.
Poor questionnaire design
I recently worked on a study where a patient satisfaction survey was administered under two conditions. However, none of the items were reverse-scored. It appeared that most patients just went down the list and marked all 5s (strongly agree), possibly without even reading the items. There were some other issues, but this is pretty obvious. Oddly, the fellow in charge of conducting the study told me her attending had explicitly encouraged her not to vet the study with a statistician first, even though we are free and conveniently available for such consulting.
Pitfalls in experimental design: Avoiding dead experiments
I've seen this kind of problem in survey-like and psychological experiments.
In one case, the entire experiment had to be chalked up to a learning experience. There were problems at multiple levels that resulted in a jumble of results, but results that seemed to give some support for the hypothesis. In the end, I was able to help plan a more rigorous experiment, which essentially had enough power to reject the hypothesis.
In the other case, I was handed a survey that had already been designed and executed, and there were multiple problems that resulted in several areas of interest being affected. In one key area, for example, they asked how many times the customers were turned away from an event due to it being full when they arrived. The problem is that there's no time range on the question so you couldn't tell the difference between someone who had tried to attend 4 times and been turned away 4 times and someone who had tried to attend 40 times and only been turned away 4 times.
I'm not a trained, capital-s Statistician, but if they'd come to me beforehand, I would have been able to help them fix these issues and get better results. In the first case, it still would have been a disappointing, "Sorry, your hypothesis seems extremely unlikely", but it could have saved them a second experiment. In the second case, it would have given them answers to some important questions and would have made the results sharper. (Another problem they had is that they surveyed multiple locations over time and at least some people were thus surveyed multiple times, with no question like "Have you taken this survey elsewhere?")
Perhaps not statistical issues per se, but in both of these cases smart, well-educated domain experts created instruments that were flawed, and the results were one dead experiment and one experiment with limbs amputated.
How to interpret the output of predict.coxph?
Edit: the following description applies to survival versions 3.2-8 and below. Starting with version 3.2-9, the default behavior of predict.coxph() changes with respect to treating 0/1 (dummy indicator) variables. See NEWS.
predict.coxph() computes the hazard ratio relative to the sample average of all $p$ predictor variables. Factors are converted to dummy predictors as usual, so their averages can be calculated. Recall that the Cox PH model is a linear model for the log-hazard $\ln h(t)$:
$$
\ln h(t) = \ln h_{0}(t) + \beta_{1} X_{1} + \dots + \beta_{p} X_{p} = \ln h_{0}(t) + \bf{X} \bf{\beta}
$$
where $h_{0}(t)$ is the unspecified baseline hazard. Equivalently, the hazard $h(t)$ is modeled as $h(t) = h_{0}(t) \cdot e^{\beta_{1} X_{1} + \dots + \beta_{p} X_{p}} = h_{0}(t) \cdot e^{\bf{X} \bf{\beta}}$. The hazard ratio between two persons $i$ and $i'$ with predictor values $\bf{X}_{i}$ and $\bf{X}_{i'}$ is thus independent of the baseline hazard and independent of time $t$:
$$
\frac{h_{i}(t)}{h_{i'}(t)} = \frac{h_{0}(t) \cdot e^{\bf{X}_{i} \bf{\beta}}}{h_{0}(t) \cdot e^{\bf{X}_{i'} \bf{\beta}}} = \frac{e^{\bf{X}_{i} \bf{\beta}}}{e^{\bf{X}_{i'} \bf{\beta}}}
$$
For the estimated hazard ratio between persons $i$ and $i'$, we just plug in the coefficient estimates $b_{1}, \ldots, b_{p}$ for the $\beta_{1}, \ldots, \beta_{p}$, giving $e^{\bf{X}_{i} \bf{b}}$ and $e^{\bf{X}_{i'} \bf{b}}$.
As an example in R, I use the data from John Fox's appendix on the Cox PH model, which provides a very nice introductory text. First, we fetch the data and build a simple Cox PH model for the time to arrest of released prisoners (fin: factor - received financial aid, with dummy coding "no" -> 0, "yes" -> 1; age: age at the time of release; prio: number of prior convictions):
> URL <- "https://socialsciences.mcmaster.ca/jfox/Books/Companion/data/Rossi.txt"
> Rossi <- read.table(URL, header=TRUE) # our data
> Rossi[1:3, c("week", "arrest", "fin", "age", "prio")] # looks like this
week arrest fin age prio
1 20 1 no 27 3
2 17 1 no 18 8
3 25 1 no 19 13
> library(survival) # for coxph()
> fitCPH <- coxph(Surv(week, arrest) ~ fin + age + prio, data=Rossi) # Cox-PH model
> (coefCPH <- coef(fitCPH)) # estimated coefficients
finyes age prio
-0.34695446 -0.06710533 0.09689320
Now we plug in the sample averages for our predictors into the $e^{\bf{X} \bf{b}}$ formula:
meanFin <- mean(as.numeric(Rossi$fin) - 1) # average of financial aid dummy
meanAge <- mean(Rossi$age) # average age
meanPrio <- mean(Rossi$prio) # average number of prior convictions
rMean <- exp(coefCPH["finyes"]*meanFin # e^Xb
+ coefCPH["age"] *meanAge
+ coefCPH["prio"] *meanPrio)
Now we plug in the predictor values of the first 4 persons into the $e^{\bf{X} \bf{b}}$ formula.
r1234 <- exp(coefCPH["finyes"]*(as.numeric(Rossi[1:4, "fin"])-1)
+ coefCPH["age"] *Rossi[1:4, "age"]
+ coefCPH["prio"] *Rossi[1:4, "prio"])
Now calculate the relative risk for the first 4 persons against the sample average and compare to the output from predict.coxph().
> r1234 / rMean
[1] 1.0139038 3.0108488 4.5703176 0.7722002
> relRisk <- predict(fitCPH, Rossi, type="risk") # relative risk
> relRisk[1:4]
1 2 3 4
1.0139038 3.0108488 4.5703176 0.7722002
If you have a stratified model, the comparison in predict.coxph() is against the strata averages; this can be controlled via the reference option that is explained in the help page.
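As a minimal, self-contained sketch of that stratified case (using the lung data that ships with survival rather than the Rossi data above; the model here is purely illustrative): by default, type="risk" is relative to the average within each stratum, and the reference option switches the baseline.

```r
## Hedged sketch: stratified Cox model on survival's built-in 'lung' data.
library(survival)
fitStrat <- coxph(Surv(time, status) ~ age + strata(sex), data = lung)
head(predict(fitStrat, type = "risk"))                       # vs. strata average (default)
head(predict(fitStrat, type = "risk", reference = "sample")) # vs. sample average
```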
|
How to interpret the output of predict.coxph?
Edit: the following description applies to survival versions 3.2-8 and below. Starting with version 3.2-9, the default behavior of predict.coxph() changes with respect to treating 0/1 (dummy indicator) variables. See NEWS.
|
9,912
|
Getting seRious about time series with R
|
There is a Time Series Task View that aims to summarize all the time series packages for R. It highlights some core packages that provide some essential functionality.
I would also recommend the book by Shumway and Stoffer and the associated website, although it is not so good for forecasting.
My blog post on "Econometrics and R" provides a few other references that are useful.
Then there is my own book on forecasting using R: Forecasting principles and practice.
|
9,913
|
Getting seRious about time series with R
|
I've found the UseR! series book Introductory Time Series with R by Cowpertwait and Metcalfe very useful in translating my time series statistics textbooks into R-speak.
|
9,914
|
Getting seRious about time series with R
|
For ecologists, Tree diversity analysis can be a first healthy step into the right direction. The book is free, it comes with an R package (BiodiversityR) and gives you a taste of other eco-packages (like vegan).
|
9,915
|
Best way to deal with heteroscedasticity?
|
It's a good question, but I think it's the wrong question. Your figure makes it clear that you have a more fundamental problem than heteroscedasticity, i.e. your model has a nonlinearity that you haven't accounted for. Many of the potential problems that a model can have (nonlinearity, interactions, outliers, heteroscedasticity, non-Normality) can masquerade as each other. I don't think there's a hard and fast rule, but in general I would suggest dealing with problems in the order
outliers > nonlinearity > heteroscedasticity > non-normality
(e.g., don't worry about nonlinearity before checking whether there are weird observations that are skewing the fit; don't worry about normality before you worry about heteroscedasticity).
In this particular case, I would fit a quadratic model y ~ poly(x,2) (or poly(x,2,raw=TRUE), or y ~ x + I(x^2)) and see if it makes the problem go away.
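As a hedged illustration with simulated data (not the poster's data): a missed quadratic term shows up as a structured, fan-like residual pattern under a straight-line fit, and disappears once the quadratic term is included.

```r
## Illustrative sketch (assumptions: quadratic truth, constant error variance).
set.seed(1)
x <- runif(200, 0, 10)
y <- 1 + 0.5*x + 0.3*x^2 + rnorm(200)
fitLin  <- lm(y ~ x)                   # misspecified straight-line fit
fitQuad <- lm(y ~ poly(x, 2))          # correctly specified quadratic fit
plot(fitted(fitLin),  resid(fitLin))   # curved, structured residuals
plot(fitted(fitQuad), resid(fitQuad))  # roughly constant spread
```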
|
9,916
|
Best way to deal with heteroscedasticity?
|
I list a number of methods of dealing with heteroscedasticity (with R examples) here: Alternatives to one-way ANOVA for heteroskedastic data. Many of those recommendations would be less ideal because you have a single continuous variable, rather than a multi-level categorical variable, but it might be nice to read through as an overview anyway.
For your situation, weighted least squares (perhaps combined with robust regression if you suspect there may be some outliers) would be a reasonable choice. Using the Huber-White sandwich errors would also be good.
Here are some answers to your specific questions:
Robust regression is a viable option, but would be better if paired with weights in my opinion. If you aren't worried that the heteroscedasticity is due to outliers, you could just use regular linear regression with weights. Be aware that the variance can be very sensitive to outliers, and your results can be sensitive to inappropriate weights, so what might be more important than using robust regression for the final model would be using a robust measure of dispersion to estimate the weights. In the linked thread, I use 1/IQR, for example.
The standard errors are wrong because of the heteroscedasticity. You can adjust the standard errors with the Huber-White sandwich estimator. That is what @GavinSimpson is doing in the linked SO thread.
The heteroscedasticity does not make your linear model totally invalid. It primarily affects the standard errors. If you don't have outliers, least squares methods should remain unbiased. Therefore the predictive accuracy of point predictions should be unaffected. The coverage of interval predictions would be affected if you didn't model the variance as a function of $X$ and use that to adjust the width of your prediction intervals conditional on $X$.
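A hedged sketch of the two suggestions above, on simulated data where the error spread grows with $X$. The weights here use the variance structure known by construction, standing in for a robust estimate such as the 1/IQR weights mentioned above; the sandwich/lmtest calls show the Huber-White correction for the unweighted fit.

```r
## Simulated heteroscedastic data: error SD grows with x.
set.seed(2)
x <- runif(300, 1, 10)
y <- 2 + 3*x + rnorm(300, sd = 0.5*x)
fitOLS <- lm(y ~ x)                         # unweighted fit
fitWLS <- lm(y ~ x, weights = 1/(0.5*x)^2)  # weights = 1/variance (known here)
## Huber-White sandwich standard errors for the unweighted fit:
library(sandwich)
library(lmtest)
coeftest(fitOLS, vcov. = vcovHC(fitOLS, type = "HC3"))
```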
|
9,917
|
Best way to deal with heteroscedasticity?
|
Load the sandwich package and compute the variance-covariance matrix of your regression with vcovHC() (read the sandwich manual), then test the coefficients with coeftest() from the lmtest package:
library(sandwich)
library(lmtest)
var_cov <- vcovHC(regression_result, type = "HC4")
coeftest(regression_result, df = Inf, vcov. = var_cov)
|
9,918
|
Best way to deal with heteroscedasticity?
|
What does the distribution of your data look like? Does it look like a bell curve at all? From the subject matter, can it be normally distributed at all? The duration of a phone call cannot be negative, for example. So in that specific case of calls, a gamma distribution describes it well, and with a gamma distribution you can use a generalized linear model (glm in R).
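A hedged sketch with simulated call durations (positive and right-skewed; the coefficients and shape are made up for illustration): a Gamma GLM with a log link lets the variance grow with the mean, which often matches duration-like data.

```r
## Simulated positive, right-skewed "durations" and a Gamma GLM fit.
set.seed(3)
x   <- runif(200, 0, 2)
mu  <- exp(1 + 0.8*x)                       # true mean duration
dur <- rgamma(200, shape = 2, rate = 2/mu)  # mean = shape/rate = mu
fitGamma <- glm(dur ~ x, family = Gamma(link = "log"))
coef(fitGamma)  # intercept near 1, slope near 0.8
```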
|
9,919
|
RMSE vs. Coefficient of Determination
|
I have used them both, and have a few points to make.
RMSE is useful because it is simple to explain. Everybody knows what it is.
RMSE does not show relative values. If $\text{RMSE}=0.2$, you must know the range $\alpha < y_x < \beta$. If $\alpha=1, \beta=1000$, then 0.2 is a good value. If $\alpha=0, \beta=1$, it does not seem so good anymore.
In line with the previous point, RMSE is a good way to hide the fact that the people you surveyed, or the measurements you took, are mostly uniform (everybody rated the product with 3 stars), and your results look good because the data helped you. If the data were a bit random, you would find your model orbiting Jupiter.
Use adjusted coefficient of determination, rather than the ordinary $R^2$
The coefficient of determination is difficult to explain. Even people from the field need a footnote tip like \footnote{The adjusted coefficient of determination is the proportion of variability in a data set that can be explained by the statistical model. This value shows how well future outcomes can be predicted by the model. $R^2$ can take 0 as its minimum and 1 as its maximum.}
The coefficient of determination is, however, very precise in telling how well your model explains a phenomenon. If $R^2=0.2$, regardless of the $y_x$ values, your model is bad. I believe the cut-off point for a good model starts from 0.6, and if you have something around 0.7-0.8, your model is a very good one.
To recap, $R^2=0.7$ says that, with your model, you can explain 70% of what is going on in the real data. The rest, 30%, is something you do not know and you cannot explain. It is probably because there are confounding factors, or you made some mistakes in constructing the model.
In computer science, almost everybody uses RMSE. Social sciences use $R^2$ more often.
If you do not need to justify the parameters in your model, just use RMSE. However, if you need to add, remove, or change parameters while building your model, you need $R^2$ to show that these parameters explain the data best.
If you will use $R^2$, code in the R language. It has libraries, and you just give it the data to have all results.
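A minimal sketch of that suggestion (simulated data): computing $R^2$ and the adjusted $R^2$ by hand in R and checking them against what summary.lm() reports.

```r
## R^2 and adjusted R^2 by hand vs. summary.lm().
set.seed(4)
x <- rnorm(100)
y <- 1 + 2*x + rnorm(100)
fit <- lm(y ~ x)
rss <- sum(resid(fit)^2)        # residual sum of squares
tss <- sum((y - mean(y))^2)     # total sum of squares
r2  <- 1 - rss/tss
n <- 100; p <- 1                # observations, predictors
r2adj <- 1 - (1 - r2)*(n - 1)/(n - p - 1)
all.equal(r2,    summary(fit)$r.squared)      # TRUE
all.equal(r2adj, summary(fit)$adj.r.squared)  # TRUE
```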
For an aspiring computer scientist, it was thrilling to write about statistics. Yours truly.
|
9,920
|
RMSE vs. Coefficient of Determination
|
No matter what error measurement you give, consider giving your complete result vector in an appendix. People who like to compare against your method but prefer another error measurement can derive such values from your table.
$R^2$:
Does not reflect systematic errors. Imagine you measure diameters instead of radii of circular objects. You have an expected overestimation of 100 %, but can still reach an $R^2$ close to 1.
Disagree with previous comments that $R^2$ is difficult to understand. The higher the value, the more precise your model, but it can include systematic errors.
Can be expressed by an easy-to-understand formula: one minus the ratio of the residual sum of squares to the total sum of squares (TSS):
$R^2 = 1 - \frac{SS_E}{TSS} = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \overline{y})^2}$
The ratio on this formula can also be interpreted as the variance explained by your model over the total variance in your data.
Should be expressed in its more advanced version, $R^2_{adj}$, where additional predictors penalize the model; this is expected to be more robust against overfitting.
$RMSE$:
You can reach a low $RMSE$ only by having both high precision (single large outliers are punished heavily) and no systematic error. So in a way a low $RMSE$ guarantees better quality than a high $R^2$ does.
This number has a unit and is not easy to interpret for people unfamiliar with your data. It can, for example, be divided by the mean of the data to produce a $rel. RMSE$. Be careful: this is not the only definition of $rel. RMSE$; some people prefer to divide by the range of the data instead of the mean.
As other people mentioned, the choice may depend on your field and the state of the art. Is there a widely accepted method to compare to? Use the same measurement as they do and you can directly link your method's benefits in the discussion.
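A minimal sketch of RMSE and the two relative-RMSE conventions just mentioned (the observed/predicted values are made up for illustration):

```r
## RMSE and two common "relative RMSE" variants.
obs  <- c(3.1, 4.0, 5.2, 6.1, 7.3)
pred <- c(3.0, 4.4, 5.0, 6.5, 7.0)
rmse <- sqrt(mean((obs - pred)^2))
c(RMSE    = rmse,
  byMean  = rmse / mean(obs),         # divide by the mean of the data
  byRange = rmse / diff(range(obs)))  # divide by the range instead
```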
|
9,921
|
RMSE vs. Coefficient of Determination
|
Both the Root-Mean-Square-Error (RMSE) and coefficient of determination ($R^2$) offer different, yet complementary, information that should be assessed when evaluating your physical model. Neither is "better", but some reports might focus more on one metric depending on the particular application.
I would use the following as a very general guide to understanding the difference between both metrics:
The RMSE gives you a sense of how close (or far) your predicted values are from the actual data you are attempting to model. This is useful in a variety of applications where you wish to understand the accuracy and precision of your model's predictions (e.g., modelling tree height).
Pros
It is relatively easy to understand and communicate since reported values are in the same units as the dependent variable being modelled.
Cons
It is sensitive to large errors (penalizes large prediction errors more than smaller prediction errors).
The coefficient of determination ($R^2$) is useful when you are attempting to understand how well your selected independent variable(s) explain the variability in your dependent variable(s). This is useful when you are attempting to explain what factors might be driving the underlying process of interest (e.g., climatic variables and soil conditions related to tree height).
Pros
Gives an overall sense of how well your selected variables fit the data.
Cons
As more independent variables are added to your model, $R^2$ increases (see adj. $R^2$ or Akaike's Information Criterion as potential alternatives).
Of course, the above will be subject to sample size and sampling design, and a general understanding that correlation does not imply causation.
|
9,922
|
RMSE vs. Coefficient of Determination
|
There is also MAE, Mean Absolute Error. Unlike RMSE, it isn't overly sensitive to large errors. From what I've read, some fields prefer RMSE, others MAE. I like to use both.
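A minimal numeric sketch of that sensitivity difference: turning one of four unit errors into a single large error moves RMSE much more than MAE.

```r
## One large error inflates RMSE more than MAE.
mae  <- function(e) mean(abs(e))
rmse <- function(e) sqrt(mean(e^2))
errs    <- c(1, 1, 1, 1)   # four unit errors
errsOut <- c(1, 1, 1, 10)  # same, but with one large error
c(MAE_ratio  = mae(errsOut)  / mae(errs),   # 3.25
  RMSE_ratio = rmse(errsOut) / rmse(errs))  # about 5.07
```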
|
9,923
|
RMSE vs. Coefficient of Determination
|
If some number is added to each element of one of the vectors, RMSE changes but Pearson's correlation does not. The same happens to RMSE if all elements in one or both vectors are multiplied by a number.
R code follows:
# RMSE vs Pearson's correlation
one <- rnorm(100)
two <- one + rnorm(100)
rumis <- (two - one)^2
(RMSE <- sqrt(mean(rumis)))
cor(one, two)
# Shifting one vector by a constant changes RMSE but not the correlation
oneA <- one + 100
rumis <- (two - oneA)^2
(RMSE <- sqrt(mean(rumis)))
cor(oneA, two)
# Scaling both vectors changes RMSE but not the correlation (or its square)
oneB <- one * 10
twoB <- two * 10
rumis <- (twoB - oneB)^2
(RMSE <- sqrt(mean(rumis)))
cor(oneB, twoB)
cor(oneB, twoB)^2
|
9,924
|
RMSE vs. Coefficient of Determination
|
Ultimately the difference is just standardization, as both lead to the choice of the same model: the squared RMSE times the number of observations equals the residual sum of squares, which appears in the numerator of $1 - R^2$, and the denominator of the latter (the total sum of squares) is constant across all models (just plot one measure against the other for 10 different models).
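A Python sketch of this equivalence, assuming a fixed dataset and three hypothetical models: because $SS_{tot}$ does not depend on the model, ranking by RMSE (ascending) and by $R^2$ (descending) gives the same order.

```python
import math

y = [2.0, 4.0, 6.0, 8.0, 10.0]
y_bar = sum(y) / len(y)
ss_tot = sum((yi - y_bar) ** 2 for yi in y)   # constant across models

# Predictions from three hypothetical models of varying quality.
models = {
    "good":   [2.1, 3.9, 6.0, 8.2, 9.8],
    "medium": [3.0, 3.5, 6.5, 7.0, 11.0],
    "poor":   [5.0, 5.0, 5.0, 5.0, 5.0],
}

scores = {}
for name, y_hat in models.items():
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    rmse = math.sqrt(ss_res / len(y))
    r2 = 1 - ss_res / ss_tot      # equivalently 1 - len(y) * rmse**2 / ss_tot
    scores[name] = (rmse, r2)

by_rmse = sorted(scores, key=lambda m: scores[m][0])            # ascending
by_r2   = sorted(scores, key=lambda m: scores[m][1], reverse=True)
print(by_rmse == by_r2)   # True: both criteria produce the same ordering
```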
|
9,925
|
RMSE vs. Coefficient of Determination
|
For statisticians concerned with how well a model fits, RMSE is the important measure: if RMSE is very close to zero, the model fits the data well.
The coefficient of determination is useful for scientists in other fields, such as agriculture. It is a value between 0 and 1: if it is 1, the model matches the observed data perfectly; if it is 0, the model explains none of the variability in the data.
|
9,926
|
Subsetting R time series vectors
|
Use the window function:
> window(qs, 2010, c(2010, 4))
Qtr1 Qtr2 Qtr3 Qtr4
2010 104 105 106 107
|
9,927
|
Subsetting R time series vectors
|
Also useful, if you are combining multiple time series and don't want to have to window every one to get them to match: ts.union and ts.intersect.
|
9,928
|
Is there any difference between Frequentist and Bayesian on the definition of Likelihood?
|
There is no difference in the definition - in both cases, the likelihood function is any function of the parameter that is proportional to the sampling density. Strictly speaking we do not require that the likelihood be equal to the sampling density; it needs only be proportional, which allows removal of multiplicative parts that do not depend on the parameters.
Whereas the sampling density is interpreted as a function of the data, conditional on a specified value of the parameter, the likelihood function is interpreted as a function of the parameter for a fixed data vector. So in the standard case of IID data you have:
$$L_\mathbf{x}(\theta) \propto \prod_{i=1}^n p(x_i|\theta).$$
In Bayesian statistics, we usually express Bayes' theorem in its simplest form as:
$$\pi (\theta|\mathbf{x}) \propto \pi(\theta) \cdot L_\mathbf{x}(\theta).$$
This expression for Bayes' theorem stresses that both of its multiplicative elements are functions of the parameter, which is the object of interest in the posterior density. (This proportionality result fully defines the rule, since the posterior is a density, and so there is a unique multiplying constant that makes it integrate to one.) As you point out in your update, Bayesian and frequentist philosophy have different interpretive structures. Within the frequentist paradigm the parameter is generally treated as a "fixed constant" and so it is not ascribed a probability measure. Frequentists therefore reject the ascription of a prior or posterior distribution to the parameter (for more discussion on these philosophic and interpretive differences, see e.g., O'Neill 2009).
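The proportionality statements above can be sketched numerically. The following Python snippet (an assumed Bernoulli coin-flip example, not part of the original answer) evaluates prior times likelihood on a grid of parameter values and normalizes it with the unique multiplying constant:

```python
# posterior ∝ prior × likelihood, pinned down by normalization.
data = [1, 1, 0, 1, 0, 1, 1]              # hypothetical coin flips
grid = [i / 100 for i in range(1, 100)]   # theta values in (0, 1)

def likelihood(theta, xs):
    # IID Bernoulli: product of p(x_i | theta)
    l = 1.0
    for x in xs:
        l *= theta if x == 1 else (1 - theta)
    return l

prior = [1.0 for _ in grid]               # flat prior, up to a constant
unnorm = [p * likelihood(t, data) for p, t in zip(prior, grid)]
z = sum(unnorm)                           # the unique normalizing constant
posterior = [u / z for u in unnorm]

# With a flat prior, the posterior mode coincides with the MLE, 5/7 ≈ 0.714.
mode = grid[max(range(len(grid)), key=lambda i: posterior[i])]
print(mode)
```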
|
9,929
|
Is there any difference between Frequentist and Bayesian on the definition of Likelihood?
|
The likelihood function is defined independently from $-$or prior to$-$ the statistical paradigm that is used for inference, as a function, $L(\theta;x)$ (or $L(\theta|x)$), of the parameter $\theta$ that depends on $-$or is indexed by$-$ the observation(s) $x$ available for this inference, and also implicitly on the family of probability models chosen to represent the variability or randomness in the data. For a given value of the pair $(\theta,x)$, the value of this function is exactly identical to the value of the density of the model at $x$ when indexed with the parameter $\theta$, which is often crudely translated as the "probability of the data".
To quote more authoritative and historical sources than an earlier answer on this forum,
"We may discuss the probability of occurrence of quantities which can be observed . . . in relation to any hypotheses which may be suggested to explain these observations. We can know nothing of the probability of hypotheses . . . [We] may ascertain the likelihood of hypotheses . . . by calculation from observations: . . . to speak of the likelihood . . . of an observable quantity has no meaning." R.A. Fisher, On the ``probable error’’ of a coefficient of correlation deduced from a small sample. Metron 1, 1921, p.25
and
"What we can find from a sample is the likelihood of any particular value of r, if we define the likelihood as a quantity proportional to the probability that, from a population having the particular value of r, a sample having the observed value of r, should be obtained." R.A. Fisher, On the ``probable error’’ of a coefficient of correlation deduced from a small sample. Metron 1, 1921, p.24
which mentions a proportionality that Jeffreys (and I) find superfluous:
"..likelihood, a convenient term introduced by Professor R.A. Fisher, though in his usage it is sometimes multiplied by a constant factor. This is the probability of the observations given the original information and the hypothesis under discussion." H. Jeffreys, Theory of Probability, 1939, p.28
To quote but one sentence from the excellent historical entry to the topic by John Aldrich (Statistical Science, 1997):
"Fisher (1921, p. 24) redrafted what he had written in 1912 about inverse probability, distinguishing between the mathematical operations that can be performed on probability densities and likelihoods: likelihood is not a ‘‘differential element,’’ it cannot be integrated." J. Aldrich, R. A. Fisher and the Making of Maximum Likelihood 1912 – 1922, 1997, p.9
When adopting a Bayesian approach, the likelihood function does not change in shape or in nature. It keeps being the density at $x$ indexed by $\theta$. The additional feature is that, since $\theta$ is also endowed with a probabilistic model, the prior distribution, the density at $x$ indexed by $\theta$ can also be interpreted as a conditional density, conditional on a realisation of $\theta$: in a Bayesian modelling, one realisation of $\theta$ is produced from the prior, with density $\pi(\cdot)$, then a realisation of $X$, $x$, is produced from the distribution with density $L(\theta|\cdot)$, indexed by $\theta$. In other words, and with respect to the proper dominating measure, the pair $(\theta,x)$ has joint density
$$\pi(\theta) \times L(\theta|x)$$
from which one derives the posterior density of $\theta$, that is, the conditional density of $\theta$, conditional on a realisation of $x$ as
$$\pi(\theta|x) \propto \pi(\theta) \times L(\theta|x)$$
also expressed as
$$\text{posterior} \propto \text{prior} \times \text{likelihood}$$
found since Jeffreys (1939).
Note: I find the distinction made in the introduction of the Wikipedia page about likelihood functions between frequentist and Bayesian likelihoods confusing and unnecessary, or just plain wrong as the large majority of current Bayesian statisticians does not use likelihood as a substitute for posterior probability. Similarly, the "difference" pointed out in the Wikipedia page about Bayes Theorem sounds more confusing than anything else, as this theorem is a probability statement about a change of conditioning, independent from the paradigm or from the meaning of a probability statement. (In my opinion, it is more a definition than a theorem!)
|
9,930
|
Is there any difference between Frequentist and Bayesian on the definition of Likelihood?
|
As a small addendum:
The name "Likelihood" is entirely misleading, because there are very many different possible meanings. Not only the "normal language" one, but also in statistics. I can think of at least three different, yet related, expressions that are all called likelihood, even in textbooks.
That said, when taking the multiplicative definition of likelihood, there is nothing in it that will turn it into any kind of probability in the sense of its (e.g. axiomatic) definition. It is a real-valued number. You can do lots of things to compute or relate it to a probability (taking ratios, calculating priors and posteriors, etc.) -- but on its own it has no meaning in terms of probability.
The answer has been more or less obsoleted by the much more informative and comprehensive answer by Xi'an. But by request, some text book definitions of Likelihood:
the function $L (\vec{x}; \theta)$
the method of finding a 'best' value of the parameter $\theta$ under the condition of some observed data (Maximum L., Minimum L., log-L., etc.)
the ratio of Likelihood values for different priors (e.g. in a classification task)
... and moreover the different meanings one can try to attribute to the (ab)use of the aforementioned elements.
|
9,931
|
AUC and class imbalance in training/test dataset
|
It depends how you mean the word sensitive. The ROC AUC is sensitive to class imbalance in the sense that when there is a minority class, you typically define this as the positive class and it will have a strong impact on the AUC value. This is very much desirable behaviour. Accuracy is for example not sensitive in that way. It can be very high even if the minority class is not well predicted at all.
In most experimental setups (bootstrap or cross validation for example) the class distribution of training and test sets should be similar. But this is a result of how you sample those sets, not of using or not using ROC. Basically you are right to say that the ROC makes abstraction of class imbalance in the test set by giving equal importance to sensitivity and specificity. When the training set doesn't contain enough examples to learn the class, this will still affect ROC though, as it should.
What you do in terms of oversampling and parameter tuning is a separate issue. The ROC can only ever tell you how well a specific configuration works. You can then try multiple configurations and select the best.
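A pure-Python sketch of this contrast (hypothetical 9:1 imbalance; AUC computed via the rank/pairwise definition, i.e. the probability that a random positive is scored above a random negative): a model that scores every example identically gets high accuracy but only 0.5 AUC.

```python
def auc(labels, scores):
    # Mann-Whitney formulation: fraction of (pos, neg) pairs ranked correctly,
    # counting ties as half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1] * 10 + [0] * 90        # minority positive class
constant = [0.0] * 100              # model scores everyone as "negative"
accuracy = sum(1 for y, s in zip(labels, constant)
               if (s >= 0.5) == (y == 1)) / len(labels)

print(accuracy)               # 0.9 -- looks great, but nothing was learned
print(auc(labels, constant))  # 0.5 -- AUC exposes the useless model
```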
|
9,932
|
AUC and class imbalance in training/test dataset
|
I think it is not safe to say that the AUC is insensitive to class imbalance, as it introduces some confusion to the reader. In case you mean that the score itself doesn't detect class imbalance, that's wrong, that's why the AUC is there. In case you mean insensitive such that changes in the class distribution don't have influence on calculating the AUC, that's true.
I happened to be prompted about this by my supervisor. In fact, that's literally the advantage of using the AUC as classification measure in comparison to others (e.g. accuracy). AUC tells you your model's performance pretty much, while addressing the issue of class imbalance. To be scientifically safe, I'd rather say it is insensitive to changes in class distribution.
For example, and to make this as simple as possible, let's take a look at a binary classification problem where the positive class is dominant. Say we have a trivial model that constantly predicts the positive class without even looking at the data, achieving a default accuracy of 0.8. You can see that this model returns a high accuracy score, although its precision $$Precision = \frac{TP}{TP+FP}$$ suffers, because the number of false positives grows and therefore the denominator is larger ...
What the AUC on the other hand does, is that it notifies you that you have several wrongly classified positives $FP$ despite the fact that you have a high accuracy because of the dominant class, and therefore it would return a low score in this case. I hope I made this clear!
If you are interested in AUC changes with different class distributions or AUC analysis for other classification tasks, I would definitely recommend you Fawcett's paper on ROC curve analysis. One of the best out there and easily put.
|
9,933
|
AUC and class imbalance in training/test dataset
|
(a 3-year-late answer, but maybe still useful!)
ROC is sensitive to the class-imbalance issue, meaning that it favors the class with the larger population solely because of its higher population. In other words, it is biased toward the larger population when it comes to classification/prediction.
This is indeed problematic. Imagine different trials in which the data undergo rounds of sampling (e.g., in cross-validation): the populations of the subclasses may vary in each iteration. In such a case, the trained models are no longer comparable using a sensitive metric (like accuracy or ROC). To remedy this, either the number of each subclass should be kept fixed, or an insensitive metric must be used. The True Skill Statistic (also known as Youden's J index) is a metric that is indeed insensitive to this issue. These metrics are very popular in domains that deal with extremely imbalanced data, such as weather forecasting, fraud detection, and of course bioinformatics.
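A minimal sketch of Youden's J on a hypothetical confusion matrix (numbers are illustrative): because J = sensitivity + specificity - 1 uses one column of the confusion matrix per term, duplicating the negative class leaves it unchanged, while accuracy shifts with the class ratio.

```python
def youden_j(tp, fn, tn, fp):
    # sensitivity + specificity - 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Original confusion matrix vs. one with the negative class doubled.
j1 = youden_j(tp=80, fn=20, tn=60, fp=40)     # 0.4
j2 = youden_j(tp=80, fn=20, tn=120, fp=80)    # still 0.4

acc1 = (80 + 60) / 200      # 0.70
acc2 = (80 + 120) / 300     # ~0.667: accuracy moves with the class ratio
print(j1, j2, acc1, acc2)
```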
Also, people modified ROC and introduced Precision-Recall curve for this very reason. PR curve seems to be less sensitive to this issue.
For Youden J Index, see Youden 1950, for True Skill Statistic see Bloomfield et al. 2018.
For a thorough example, read this blog post on Machine Learning Master.
For an applied analysis on extreme-imbalance data, see Ahmadzadeh et al. 2019.
|
9,934
|
AUC and class imbalance in training/test dataset
|
I choose to disagree with the answer given by @Azim. Empirical research has shown ROC is insensitive to class imbalance. This has been extensively discussed by Tom Fawcett; see Section 4.2 of his paper An introduction to ROC analysis:
4.2. Class skew
ROC curves have an attractive property: they are insensitive to changes in class distribution. If the proportion of positive to negative instances changes in a test set, the ROC curves will not change. To see why this is so, consider the confusion matrix in Fig. 1. Note that the class distribution—the proportion of positive to negative instances—is the relationship of the left (+) column to the right (-) column. Any performance metric that uses values from both columns will be inherently sensitive to class skews. Metrics such as accuracy, precision, lift and F score use values from both columns of the confusion matrix. As a class distribution changes these measures will change as well, even if the fundamental classifier performance does not. ROC graphs are based upon tp rate and fp rate, in which each dimension is a strict columnar ratio, so do not depend on class distributions.
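The columnar-ratio argument is easy to check numerically. Here is a minimal sketch (the classifier scores and class counts are made up for illustration) that computes AUC via its Mann-Whitney rank formulation; replicating the negative class to change the skew leaves the AUC untouched:

```python
def auc(pos_scores, neg_scores):
    """AUC as P(a random positive outscores a random negative);
    ties count one half (the Mann-Whitney / rank formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores, 4 positives to 6 negatives.
pos = [0.9, 0.8, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.3, 0.2, 0.1]

# Triple the negatives so the class ratio becomes 4:18. Since tp rate
# and fp rate are each within-column ratios, nothing changes.
print(auc(pos, neg))      # original skew
print(auc(pos, neg * 3))  # same value despite the new class distribution
```

Accuracy at a fixed threshold, by contrast, would change under the same replication, since it mixes counts from both columns of the confusion matrix.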
|
9,935
|
Mean of the bootstrap sample vs statistic of the sample
|
Let's generalize, so as to focus on the crux of the matter. I will spell out the tiniest details so as to leave no doubts. The analysis requires only the following:
The arithmetic mean of a set of numbers $z_1, \ldots, z_m$ is defined to be
$$\frac{1}{m}\left(z_1 + \cdots + z_m\right).$$
Expectation is a linear operator. That is, when $Z_i, i=1,\ldots,m$ are random variables and $\alpha_i$ are numbers, then the expectation of a linear combination is the linear combination of the expectations,
$$\mathbb{E}\left(\alpha_1 Z_1 + \cdots + \alpha_m Z_m\right) = \alpha_1 \mathbb{E}(Z_1) + \cdots + \alpha_m\mathbb{E}(Z_m).$$
Let $B$ be a sample $(B_1, \ldots, B_k)$ obtained from a dataset $x = (x_1, \ldots, x_n)$ by taking $k$ elements uniformly from $x$ with replacement. Let $m(B)$ be the arithmetic mean of $B$. This is a random variable. Then
$$\mathbb{E}(m(B)) = \mathbb{E}\left(\frac{1}{k}\left(B_1+\cdots+B_k\right)\right) = \frac{1}{k}\left(\mathbb{E}(B_1) + \cdots + \mathbb{E}(B_k)\right)$$
follows by linearity of expectation. Since the elements of $B$ are all obtained in the same fashion, they all have the same expectation, $b$ say:
$$\mathbb{E}(B_1) = \cdots = \mathbb{E}(B_k) = b.$$
This simplifies the foregoing to
$$\mathbb{E}(m(B)) = \frac{1}{k}\left(b + b + \cdots + b\right) = \frac{1}{k}\left(k b\right) = b.$$
By definition, the expectation is the probability-weighted sum of values. Since each value of $x$ is assumed to have an equal chance of $1/n$ of being selected,
$$\mathbb{E}(m(B)) = b = \mathbb{E}(B_1) = \frac{1}{n}x_1 + \cdots + \frac{1}{n}x_n = \frac{1}{n}\left(x_1 + \cdots + x_n\right) = \bar x,$$
the arithmetic mean of the data.
To answer the question, if one uses the data mean $\bar x$ to estimate the population mean, then the bootstrap mean (which is the case $k=n$) also equals $\bar x$, and therefore is identical as an estimator of the population mean.
For statistics that are not linear functions of the data, the same result does not necessarily hold. However, it would be wrong simply to substitute the bootstrap mean for the statistic's value on the data: that is not how bootstrapping works. Instead, by comparing the bootstrap mean to the data statistic we obtain information about the bias of the statistic. This can be used to adjust the original statistic to remove the bias. As such, the bias-corrected estimate thereby becomes an algebraic combination of the original statistic and the bootstrap mean. For more information, look up "BCa" (bias-corrected and accelerated bootstrap) and "ABC". Wikipedia provides some references.
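The linearity argument is straightforward to verify by simulation. Below is a minimal sketch with made-up data: averaging the means of many bootstrap resamples (each of size $k=n$) recovers $\bar x$ up to Monte Carlo noise.

```python
import random
import statistics

random.seed(0)

# Made-up data; any numbers work, since the argument is pure linearity.
x = [2.0, 3.5, 5.0, 7.5, 11.0]
xbar = statistics.mean(x)  # 5.8

# Draw many bootstrap samples of size k = n, with replacement,
# and record the mean of each.
B = 100_000
boot_means = [statistics.mean(random.choices(x, k=len(x))) for _ in range(B)]

print(xbar)                         # 5.8
print(statistics.mean(boot_means))  # close to 5.8 (Monte Carlo noise only)
```

An individual bootstrap mean varies from resample to resample, but its expectation is exactly $\bar x$, which is what the derivation above proves without any simulation.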
|
9,936
|
Mean of the bootstrap sample vs statistic of the sample
|
Since the bootstrap distribution associated with an iid sample $X_1,\ldots,X_n$ is defined as$$\hat{F}_n(x) = \frac{1}{n}\sum_{i=1}^n\mathbb{I}_{X_i\le x}\qquad X_i\stackrel{\text{iid}}{\sim}F(x)\,,$$
the mean of the bootstrap distribution $\hat{F}_n$ (conditional on the iid sample $X_1,\ldots,X_n$) is $$\mathbb{E}_{\hat{F}_n}[X]=\frac{1}{n}\sum_{i=1}^n X_i=\bar{X}_n$$
When you (if you have to) implement a simulation version of this expectation, i.e., compute an average of $B$ random draws from $\hat{F}_n$, $$\hat{\mathbb{E}}_{\hat{F}_n}[X]=\frac{1}{B} \sum_{b=1}^B X^*_b \qquad X^*_b\stackrel{\text{iid}}{\sim}\hat F_n(x)\,,$$ there is some Monte Carlo variability in this approximation of $\mathbb{E}_{\hat{F}_n}[X]$, but its mean (the expectation of the empirical average, conditional on the original sample $X_1,\ldots,X_n$) and its limit when the number $B$ of bootstrap simulations grows to infinity are both exactly $\bar{X}_n$.
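A quick sketch of that Monte Carlo variability, using an arbitrary toy sample: the simulated approximation $\hat{\mathbb{E}}_{\hat{F}_n}[X]$ fluctuates around $\bar{X}_n$, and the fluctuation shrinks roughly like $1/\sqrt{B}$.

```python
import random
import statistics

random.seed(1)

x = [1.0, 2.0, 4.0, 8.0]   # arbitrary toy sample
xbar = statistics.mean(x)  # 3.75

def mc_bootstrap_mean(B):
    """Average of B iid draws from the empirical distribution F_hat_n,
    i.e. the Monte Carlo approximation of E_{F_hat_n}[X]."""
    return statistics.mean(random.choices(x, k=B))

# Each estimate fluctuates around x-bar = 3.75; larger B, smaller fluctuation.
for B in (100, 10_000, 1_000_000):
    print(B, mc_bootstrap_mean(B))
```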
|
9,937
|
Strategies for introducing advanced statistics to various audiences
|
This is a tricky question!
First, some thoughts on why this happens. I work in an area which does (or at least should) make extensive use of statistics, but where most practitioners are not statistical experts. Consequently one sees a lot of "I put a vector into excel's t-test function and this number fell out. Therefore my paper is supported by statistics."
The main reason I see for this happening is that lack of statistics knowledge starts at the top. If your reviewers and thesis committee don't keep up to date on statistical techniques, then you need to justify use of anything that is "unconventional". For example, in a thesis, I opted to use violin plots instead of box plots to show the shape of a distribution. The use of this technique required extensive documentation in the thesis, as well as a prolonged discussion in my defense where all of the committee members wanted to know what this strange plot meant, despite both the descriptions in the text and the references to the source material. Had I just used a box plot (which shows strictly less information in this case, and can easily deceive the viewer about the shape of a distribution if it is multi-modal) no one would have said anything, and my defense would have been easier.
The point is, in non-stats fields practitioners face a difficult choice: We can read about and then use the correct methods, which entails a bunch of work that none of our higher ups are interested in; or we can just go with the flow, get the rubber stamp on our papers and theses, and keep using incorrect but conventional methods.
Now, to answer your question:
I think a good approach is to emphasize the consequences of failing to use correct techniques. This might entail:
Giving a real world example of how someone in their field experienced the consequences of poor inference. This is easier in some fields than others. Examples where careers were damaged are especially good.
Explaining that doing incorrect analysis can leave you in a situation where your results are very unlikely to transfer to the real world, which could cause harms (e.g., in my field, if your A.I. system prototype appears statistically better than the competition, but in fact is the same, then spending the next 6 months building a full implementation is a really bad idea).
Pick techniques which will save the users lots of time. Enough time so that they can spend what they save explaining the techniques to the higher ups.
|
9,938
|
Strategies for introducing advanced statistics to various audiences
|
Speaking from the perspective of a psychologist with only slight statistical sophistication: When you introduce the method, also introduce the tools. If you tell most researchers in my field a long story about a great new method, they're going to spend the whole time worried that the punchline is "and all you have to do is brush up on your differential calculus and then take a two week training course!" (or "and buy a $2000 stats package!" or "and adapt 5000 lines of Python and R code!"). Whereas if there's an implementation of the method available in the stats package they already use, or in a piece of free software with a comprehensible GUI, and they can get up to speed on it in a day or two, they might be willing to give it a try.
I'm aware that this approach can seem venal and unscientific, but it's easy for people to fall into when they're worried about grants and publications, and don't see learning huge amounts of math as likely to help them keep their jobs.
|
9,939
|
Strategies for introducing advanced statistics to various audiences
|
Thanks for this nice question Peter. I work at a medical research institution and deal with physicians who do research and publish in the medical journals. Often they are more interested in getting their paper published than "doing the statistics completely right". So when I propose an unfamiliar technique they will point to a similar paper and say "look they did it this way and got their results published."
There is a problem I think when the published paper is really bad and has mistakes. It is difficult to argue even though I have a great reputation. Some docs have big egos and think they can learn almost anything. So they think they understand the statistics when they don't and can be insistent. It can get frustrating. When it is a t test and Wilcoxon is more appropriate I get them to do a Shapiro-Wilk test and if normality is rejected we include both methods and explain why Wilcoxon is better. I sometimes can convince them and often they depend on me for statistics, so I have a little more clout than a general consultant might have.
I also ran into a situation where I did Kaplan-Meier curves for them and we used the log rank test but Wilcoxon gave a different result. It was hard for me to decide and in such situations I think it is best to present both methods and explain why they differ. The same goes for using Peto vs Greenwood confidence intervals for the survival curve. Explaining the Cox proportional hazards assumption can be difficult and they often misinterpret odds ratios and relative risk.
There is no simple answer. I had a boss here who was a top medical researcher in cardiology and he sometimes referees for journals. He was looking at a paper that dealt with diagnosis and used AUC as a measure. He had never seen an AUC curve before and came to me to see if I thought it was valid. He had doubts. It turned out to be appropriate and I explained it to him as best I could.
I have tried to lecture on biostatistics to physicians and have taught biostatistics in public health schools. I try to do it better than others have and produced a book for an introductory health science course in 2002 with an epidemiologist as coauthor. Wiley wants me to do a second edition now. In 2011 I published a more concise book that I tried to cover just the essentials so that busy MDs might take the time to read it and reference it. That is how I deal with it. Maybe you can share your stories with us.
|
9,940
|
Strategies for introducing advanced statistics to various audiences
|
There are some nice comments already made here, but I'll throw in my 2 cents. I'll preface this all by saying that I'm assuming we're talking about a situation where using the traditional "canned" techniques will damage the substantive conclusions reached by the analysis. If that's not the case, then I think that sometimes doing an overly simplistic analysis is excusable both for brevity and for ease of understanding when the target audience are laymen. Is it really such a crime to assume independence when the intraclass correlation is .02 or to assume linearity when the truth is $\log(x); \ x \in (1,2)? \ $ I'd say no.
In my career I do a lot of interdisciplinary research, which has led me to work closely with substance abuse researchers, epidemiologists, biologists, criminologists and physicians at various times. This typically involved analysis of data where the usual "canned" approaches would fail for various reasons (e.g. some combination of biased sampling and clustered, longitudinally and/or spatially indexed data). I also spent a couple years consulting part time in graduate school, where I worked with people from a large variety of fields. So, I've had to think about this a lot.
My experience is that the most important thing is to explain why the usual canned approaches are inappropriate and appeal to the person's desire to do "good science". No respectable researcher wants to publish something that is blatantly misleading in its conclusions because of inappropriate statistical analysis. I've never encountered someone who says something along the lines of "I don't care whether the analysis is correct or not, I just want to get this published" although I'm sure such people exist - my response there would be to end the professional relationship if at all possible. As the statistician, it's my reputation that could be damaged if someone who actually knows what they're talking about happens to read the paper.
I admit that it can be challenging to convince someone that a particular analysis is inappropriate, but I think that as statisticians we should (a) have the knowledge necessary to know exactly what can go wrong with the "canned" approach and (b) have the ability to explain it in a reasonably comprehensible way. Unless you're working as a statistics or math professor, a part of your job is going to be to work with non-statisticians (and even sometimes if you are a stat/math prof).
Regarding (a), if the statistician doesn't have this knowledge, why would they be discouraging the canned approach? If the statistician is saying "use a random effects model" but can't explain why assuming independence is a problem, then aren't they guilty of giving in to dogma in the same way the client is? Any reviewer, statistician or not, can make pedantic critiques of a statistical modeling approach because, let's face it - all models are wrong. But, it requires expertise to know exactly what could go wrong.
Regarding (b), I've found that graphical depictions of what could go wrong typically "hit home" the most. Examples:
In the example given by Peter about categorizing continuous data, the best way to show why this is a bad idea is to graph the data in its continuous form and compare it with its categorical form. For example, if you're making your response variable binary then plot the continuous variable vs. $x$, and, if it doesn't look an awful lot like a step function, then you know the discretization lost valuable information. If this difference isn't drastic and doesn't result in any changes to the substantive conclusions, you can also see that from the plot.
When the proposed "form" of the model (e.g. linear) is inappropriate. For example, if the regression function "plateaus" like $y = x$ for $x \in (0,1)$ but $y = 1$ for $x > 1$ then a linear fit's slope will be too shallow and, depending on the data, this could push the $p$-value below significance despite there being an obvious relationship between $x$ and $y$.
Another common situation (also mentioned by Peter) is explaining why assuming independence is a bad idea. For example, you can show with a plot that positive autocorrelation will typically produce data that is more "clustered" and the variance will be underestimated for that reason, giving some intuition of why the naive standard errors tend to be too small. Or, you could also plot the data with the fitted curve that assumes independence and one can visually see how the clusters influence the fit (effectively lowering the sample size) in a way that is not present in independent data.
There are a million other examples but I'm working with space/time constraints here :) When pictures simply won't do for whatever reason (e.g. showing why one approach is underpowered) then simulation examples are also an option that I've employed from time to time.
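The autocorrelation point lends itself to exactly the kind of simulation example mentioned above. Here is a small sketch with made-up AR(1) parameters: the naive $s/\sqrt{n}$ standard error of the mean, which assumes independence, badly understates the true sampling variability of the mean.

```python
import random
import statistics

random.seed(2)

def ar1_series(n, rho):
    """AR(1) series with standard normal innovations (toy generator)."""
    x = [random.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + random.gauss(0.0, 1.0))
    return x

n, rho, reps = 50, 0.8, 4000   # made-up simulation settings
naive_ses, sample_means = [], []
for _ in range(reps):
    series = ar1_series(n, rho)
    sample_means.append(statistics.mean(series))
    # The standard error a naive analysis would report under independence:
    naive_ses.append(statistics.stdev(series) / n ** 0.5)

print(statistics.mean(naive_ses))      # what the naive analysis claims
print(statistics.stdev(sample_means))  # actual sampling variability: larger
```

With strong positive autocorrelation the effective sample size is much smaller than $n$, which is precisely the "clusters effectively lower the sample size" effect described above.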
|
Strategies for introducing advanced statistics to various audiences
|
There are some nice comments already made here, but I'll throw in my 2 cents. I'll preface this all by saying that I'm assuming we're talking about a situation where using the traditional "canned" tec
|
Strategies for introducing advanced statistics to various audiences
There are some nice comments already made here, but I'll throw in my 2 cents. I'll preface this all by saying that I'm assuming we're talking about a situation where using the traditional "canned" techniques will damage the substantive conclusions reached by the analysis. If that's not the case, then I think that sometimes doing an overly simplistic analysis is excusable both for brevity and for ease of understanding when the target audience are laymen. Is it really such a crime to assume independence when the intraclass correlation is .02 or to assume linearity when the truth is $\log(x); \ x \in (1,2)? \ $ I'd say no.
In my career I do a lot of interdisciplinary research and has lead me to work with closely with substance abuse researchers, epidemiologists, biologists, criminologists and physicians at various times. This typically involved analysis of data where the usual "canned" approaches would fail for various reason (e.g. some combination of biased sampling and clustered, longitudinally and/or spatially indexed data). I also spent a couple years consulting part time in graduate school, where I worked with people from a large variety of fields. So, I've had to think about this a lot.
My experience is that the most important thing is to explain why the usual canned approaches are inappropriate and appeal to the person's desire to do "good science". No respectable researcher wants to publish something that is blatantly misleading in its conclusions because of inappropriate statistical analysis. I've never encountered someone who says something along the lines of "I don't care whether the analysis is correct or not, I just want to get this published" although I'm sure such people exist - my response there would be to end the professional relationship if at all possible. As the statistician, it's my reputation that could be damaged if someone who actually knows what they're talking about happens to read the paper.
I admit that it can be challenging to convince someone that a particular analysis is inappropriate, but I think that as statisticians we should (a) have the knowledge necessary to know exactly what can go wrong with the "canned" approach and (b) have the ability to explain it is a reasonably comprehensible way. Unless you're working as a statistics or math professor, a part of your job is going to be to work with non-statisticians (and even sometimes if you are a stat/math prof).
Regarding (a), if the statistician doesn't have this knowledge, why would they be discouraging the canned approach? If the statistician is saying "use a random effects models" but can't explain why assuming independence is a problem, then aren't they guilty of giving in to dogma in the same way the client is? Any reviewer, statistician or not, can make pedantic critiques of a statistical modeling approach because, let's face it - all models are wrong. But, it requires expertise to know exactly what could go wrong.
Regarding (b), I've found that graphical depictions of what could go wrong typically "hit home" the most. Examples:
In the example given by Peter about categorizing continuous data, the best way to show why this is a bad idea is to graph the data in its continuous form and compare it with its categorical form. For example, if you're making your response variable binary, then plot the continuous response vs. $x$; if that plot doesn't look an awful lot like a step function, then you know the discretization lost valuable information. If the difference isn't drastic or doesn't change the substantive conclusions, you can also see that from the plot.
When the proposed "form" of the model (e.g. linear) is inappropriate. For example, if the regression function "plateaus" like $y = x$ for $x \in (0,1)$ but $y = 1$ for $x > 1$ then a linear fit's slope will be too shallow and, depending on the data, this could push the $p$-value above the significance threshold despite there being an obvious relationship between $x$ and $y$.
Another common situation (also mentioned by Peter) is explaining why assuming independence is a bad idea. For example, you can show with a plot that positive autocorrelation will typically produce data that is more "clustered", and the variance will be underestimated for that reason, giving some intuition for why the naive standard errors tend to be too small. Or, you could also plot the data with the fitted curve that assumes independence, and one can visually see how the clusters influence the fit (effectively lowering the sample size) in a way that is not present in independent data.
There are a million other examples but I'm working with space/time constraints here :) When pictures simply won't do for whatever reason (e.g. showing why one approach is underpowered) then simulation examples are also an option that I've employed from time to time.
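For the autocorrelation example in particular, a small pure-Python simulation makes the point without any plotting at all. (This is a sketch with made-up numbers, not from any real project: it compares the naive standard error of the mean, which assumes independence, against the empirical standard error under an AR(1) process.)

```python
import math
import random

random.seed(1)

def ar1_sample(n, rho):
    """n draws from a stationary, mean-zero AR(1) process with unit variance."""
    x = random.gauss(0, 1)
    out = []
    for _ in range(n):
        x = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        out.append(x)
    return out

n, rho, reps = 200, 0.7, 2000
means, naive_ses = [], []
for _ in range(reps):
    y = ar1_sample(n, rho)
    m = sum(y) / n
    s2 = sum((v - m) ** 2 for v in y) / (n - 1)
    means.append(m)
    naive_ses.append(math.sqrt(s2 / n))  # standard error assuming independence

grand = sum(means) / reps
true_se = math.sqrt(sum((m - grand) ** 2 for m in means) / (reps - 1))
avg_naive = sum(naive_ses) / reps
print(f"empirical SE of the mean: {true_se:.3f}")   # roughly 0.17 here
print(f"average naive SE:         {avg_naive:.3f}") # roughly 0.07 -- far too small
```

The ratio of the two is close to $\sqrt{(1+\rho)/(1-\rho)} \approx 2.4$, so naive confidence intervals here are less than half as wide as they should be -- the kind of concrete number that tends to convince collaborators.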
|
Strategies for introducing advanced statistics to various audiences
There are some nice comments already made here, but I'll throw in my 2 cents. I'll preface this all by saying that I'm assuming we're talking about a situation where using the traditional "canned" tec
|
9,941
|
Strategies for introducing advanced statistics to various audiences
|
Some random thoughts because this is a complex issue...
I feel that a big problem is the lack of math education in a variety of professional disciplines and graduate programs. Without a mathematical understanding of statistics, it becomes a bunch of formulas to be applied according to the case. Also, to convey a real understanding of the matter, professors should talk about the original problems the original authors were facing at the time they published their approaches. One can learn more from that than from reading thousands of books on the subject.
Statistics is a toolbox for solving problems, but it is also an art and faces the same issues as any other art. One can learn how to make sounds with an instrument, but being able to "play" an instrument does not make one a musician. However, it is not uncommon to find people who see themselves as musicians without having studied a single concept of rhythm, melody or harmony.
Along the same lines, to get papers published, most people don't need to know or understand the concepts behind a formula... nowadays scientists just need to know what key has to be pressed and when, period.
So this has nothing to do with the "ego" of MDs. This is a subcultural problem, a problem more related with education, customs and values of the scientific community.
What can one expect in an era in which thousands and thousands of useless papers and books are being published to fulfill academic requisites/policies? In an era in which the number of papers one publishes is more important than their quality?
Mainstream scientists are not worried about good science anymore. They are slaves of numbers. They are affected (or infected) by the administrative bug of our era...
So, from my perspective, a good course in statistics should include the mathematical, historical and philosophical basis of the approach being studied, always highlighting the several paths one can take for solving a single problem.
Finally, if I were a professor in statistics/probability my first lecture(s) would be dedicated to problems like shuffling cards or tossing a coin. That will put the audience in the right position for listening... probably.
|
Strategies for introducing advanced statistics to various audiences
|
Some random thoughts because this is a complex issue...
I feel that a big problem is the lack of math education in a variety of professional disciplines and graduated programs. Without a mathematical
|
Strategies for introducing advanced statistics to various audiences
Some random thoughts because this is a complex issue...
I feel that a big problem is the lack of math education in a variety of professional disciplines and graduate programs. Without a mathematical understanding of statistics, it becomes a bunch of formulas to be applied according to the case. Also, to convey a real understanding of the matter, professors should talk about the original problems the original authors were facing at the time they published their approaches. One can learn more from that than from reading thousands of books on the subject.
Statistics is a toolbox for solving problems, but it is also an art and faces the same issues as any other art. One can learn how to make sounds with an instrument, but being able to "play" an instrument does not make one a musician. However, it is not uncommon to find people who see themselves as musicians without having studied a single concept of rhythm, melody or harmony.
Along the same lines, to get papers published, most people don't need to know or understand the concepts behind a formula... nowadays scientists just need to know what key has to be pressed and when, period.
So this has nothing to do with the "ego" of MDs. This is a subcultural problem, a problem more related with education, customs and values of the scientific community.
What can one expect in an era in which thousands and thousands of useless papers and books are being published to fulfill academic requisites/policies? In an era in which the number of papers one publishes is more important than their quality?
Mainstream scientists are not worried about good science anymore. They are slaves of numbers. They are affected (or infected) by the administrative bug of our era...
So, from my perspective, a good course in statistics should include the mathematical, historical and philosophical basis of the approach being studied, always highlighting the several paths one can take for solving a single problem.
Finally, if I were a professor in statistics/probability my first lecture(s) would be dedicated to problems like shuffling cards or tossing a coin. That will put the audience in the right position for listening... probably.
|
Strategies for introducing advanced statistics to various audiences
Some random thoughts because this is a complex issue...
I feel that a big problem is the lack of math education in a variety of professional disciplines and graduated programs. Without a mathematical
|
9,942
|
Logistic regression or T test?
|
Both tests implicitly model the age-response relationship, but they do so in different ways. Which one to select depends on how you choose to model that relationship. Your choice ought to depend on an underlying theory, if there is one; on what kind of information you want to extract from the results; and on how the sample is selected. This answer discusses these three aspects in order.
I will describe the t-test and logistic regression using language that supposes you are studying a well-defined population of people and wish to make inferences from the sample to this population.
In order to support any kind of statistical inference we must assume the sample is random.
A t-test assumes the people in the sample responding "no" are a simple random sample of all no-respondents in the population and that the people in the sample responding "yes" are a simple random sample of all yes-respondents in the population.
A t-test makes additional technical assumptions about the distributions of the ages within each of the two groups in the population. Various versions of the t-test exist to handle the likely possibilities.
Logistic regression assumes all people of any given age are a simple random sample of the people of that age in the population. The separate age groups may exhibit different rates of "yes" responses. These rates, when expressed as log odds (rather than as straight proportions), are assumed to be linearly related with age (or with some determined functions of age).
Logistic regression is easily extended to accommodate non-linear relationships between age and response. Such an extension can be used to evaluate the plausibility of the initial linear assumption. It is practicable with large datasets, which afford enough detail to display non-linearities, but is unlikely to be of much use with small datasets. A common rule of thumb--that regression models should have ten times as many observations as parameters--suggests that substantially more than 20 observations are needed to detect nonlinearity (which needs a third parameter in addition to the intercept and slope of a linear function).
A t-test detects whether the average ages differ between no- and yes-respondents in the population. A logistic regression estimates how the response rate varies by age. As such it is more flexible and capable of supplying more detailed information than the t-test is. On the other hand, it tends to be less powerful than the t-test for the basic purpose of detecting a difference between the average ages in the groups.
It is possible for the pair of tests to exhibit all four combinations of significance and non-significance. Two of these are problematic:
The t-test is not significant but the logistic regression is. When the assumptions of both tests are plausible, such a result is practically impossible, because the t-test is not trying to detect such a specific relationship as posited by logistic regression. However, when that relationship is sufficiently nonlinear to cause the oldest and youngest subjects to share one opinion and the middle-aged subjects another, then the extension of logistic regression to nonlinear relationships can detect and quantify that situation, which no t-test could detect.
The t-test is significant but the logistic regression is not, as in the question. This often happens, especially when there is a group of younger respondents, a group of older respondents, and few people in between. This may create a great separation between the response rates of no- and yes-responders. It is readily detected by the t-test. However, logistic regression would either have relatively little detailed information about how the response rate actually changes with age or else it would have inconclusive information: the case of "complete separation" where all older people respond one way and all younger people another way--but in that case both tests would usually have very low p-values.
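The complete-separation case can be illustrated with a tiny hypothetical dataset (the numbers below are invented for illustration): ten young respondents all answering "no" and ten old respondents all answering "yes". The $t$ statistic is enormous, while the logistic log-likelihood keeps increasing as the slope grows, so the maximum likelihood estimate does not exist and Wald inference from logistic regression breaks down.

```python
import math

no_ages  = list(range(20, 30))   # all ten respond "no"
yes_ages = list(range(60, 70))   # all ten respond "yes"

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Pooled two-sample t statistic for the difference in mean age.
n0, n1 = len(no_ages), len(yes_ages)
sp2 = ((n0 - 1) * var(no_ages) + (n1 - 1) * var(yes_ages)) / (n0 + n1 - 2)
t = (mean(yes_ages) - mean(no_ages)) / math.sqrt(sp2 * (1 / n0 + 1 / n1))
print(f"t statistic: {t:.1f}")  # about 29.5 -- overwhelming evidence

def loglik(slope, cut=45.0):
    """Logistic log-likelihood with the intercept pinned at the separating cut."""
    ll = 0.0
    for x in yes_ages:
        ll -= math.log1p(math.exp(-slope * (x - cut)))
    for x in no_ages:
        ll -= math.log1p(math.exp(slope * (x - cut)))
    return ll

lls = [loglik(b) for b in (0.1, 1.0, 5.0)]
print("log-likelihoods:", [round(v, 4) for v in lls])  # strictly increasing toward 0
```

In practice this shows up as wildly inflated coefficient and standard-error estimates (or a non-convergence warning) from logistic regression software, while the t-test reports a tiny p-value.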
Note that the experimental design can invalidate some of the test assumptions. For instance, if you selected people according to their age in a stratified design, then the t-test's assumption (that each group reflects a simple random sample of ages) becomes questionable. This design would suggest relying on logistic regression. If instead you had two pools, one of no-responders and one of yes-responders, and selected randomly from those to ascertain their age, then the sampling assumptions of logistic regression are doubtful while those of the t-test will hold. That design would suggest using some form of a t-test.
(The second design might seem silly here, but in circumstances where "age" is replaced by some characteristic that is difficult, costly, or time-consuming to measure it can be appealing.)
|
Logistic regression or T test?
|
Both tests implicitly model the age-response relationship, but they do so in different ways. Which one to select depends on how you choose to model that relationship. Your choice ought to depend on
|
Logistic regression or T test?
Both tests implicitly model the age-response relationship, but they do so in different ways. Which one to select depends on how you choose to model that relationship. Your choice ought to depend on an underlying theory, if there is one; on what kind of information you want to extract from the results; and on how the sample is selected. This answer discusses these three aspects in order.
I will describe the t-test and logistic regression using language that supposes you are studying a well-defined population of people and wish to make inferences from the sample to this population.
In order to support any kind of statistical inference we must assume the sample is random.
A t-test assumes the people in the sample responding "no" are a simple random sample of all no-respondents in the population and that the people in the sample responding "yes" are a simple random sample of all yes-respondents in the population.
A t-test makes additional technical assumptions about the distributions of the ages within each of the two groups in the population. Various versions of the t-test exist to handle the likely possibilities.
Logistic regression assumes all people of any given age are a simple random sample of the people of that age in the population. The separate age groups may exhibit different rates of "yes" responses. These rates, when expressed as log odds (rather than as straight proportions), are assumed to be linearly related with age (or with some determined functions of age).
Logistic regression is easily extended to accommodate non-linear relationships between age and response. Such an extension can be used to evaluate the plausibility of the initial linear assumption. It is practicable with large datasets, which afford enough detail to display non-linearities, but is unlikely to be of much use with small datasets. A common rule of thumb--that regression models should have ten times as many observations as parameters--suggests that substantially more than 20 observations are needed to detect nonlinearity (which needs a third parameter in addition to the intercept and slope of a linear function).
A t-test detects whether the average ages differ between no- and yes-respondents in the population. A logistic regression estimates how the response rate varies by age. As such it is more flexible and capable of supplying more detailed information than the t-test is. On the other hand, it tends to be less powerful than the t-test for the basic purpose of detecting a difference between the average ages in the groups.
It is possible for the pair of tests to exhibit all four combinations of significance and non-significance. Two of these are problematic:
The t-test is not significant but the logistic regression is. When the assumptions of both tests are plausible, such a result is practically impossible, because the t-test is not trying to detect such a specific relationship as posited by logistic regression. However, when that relationship is sufficiently nonlinear to cause the oldest and youngest subjects to share one opinion and the middle-aged subjects another, then the extension of logistic regression to nonlinear relationships can detect and quantify that situation, which no t-test could detect.
The t-test is significant but the logistic regression is not, as in the question. This often happens, especially when there is a group of younger respondents, a group of older respondents, and few people in between. This may create a great separation between the response rates of no- and yes-responders. It is readily detected by the t-test. However, logistic regression would either have relatively little detailed information about how the response rate actually changes with age or else it would have inconclusive information: the case of "complete separation" where all older people respond one way and all younger people another way--but in that case both tests would usually have very low p-values.
Note that the experimental design can invalidate some of the test assumptions. For instance, if you selected people according to their age in a stratified design, then the t-test's assumption (that each group reflects a simple random sample of ages) becomes questionable. This design would suggest relying on logistic regression. If instead you had two pools, one of no-responders and one of yes-responders, and selected randomly from those to ascertain their age, then the sampling assumptions of logistic regression are doubtful while those of the t-test will hold. That design would suggest using some form of a t-test.
(The second design might seem silly here, but in circumstances where "age" is replaced by some characteristic that is difficult, costly, or time-consuming to measure it can be appealing.)
|
Logistic regression or T test?
Both tests implicitly model the age-response relationship, but they do so in different ways. Which one to select depends on how you choose to model that relationship. Your choice ought to depend on
|
9,943
|
Logistic regression or T test?
|
This doesn't really answer the question but may still be of some interest. The standard assumption of a two-sample $t$-test is that the conditional distribution of $X$ given a binary variable $Y$ is normal,
$$
X|Y=i \sim N(\mu_i,\sigma^2).
$$
This together with the assumption that $Y \sim \operatorname{bernoulli}(p)$ marginally, implies that the conditional distribution of the binary variable $Y$ given $X=x$ is
\begin{align}
P(Y=1|X=x)
&=\frac{f_{X|Y=1}(x)P(Y=1)}{\sum_{i=0}^1 f_{X|Y=i}(x)P(Y=i)}
\\&=\frac{pe^{-\frac1{2\sigma^2}(x-\mu_1)^2}}{pe^{-\frac1{2\sigma^2}(x-\mu_1)^2} + (1-p)e^{-\frac1{2\sigma^2}(x-\mu_0)^2}}
\\&=\frac1{1+\frac{1-p}pe^{-\frac1{2\sigma^2}(x-\mu_0)^2+\frac1{2\sigma^2}(x-\mu_1)^2}}
\\&=\operatorname{logit}^{-1}(\beta_0 + \beta_1 x)
\end{align}
that is, a logistic regression model with intercept and slope
\begin{align}\beta_0 &= \ln\frac p{1-p} -\frac1{2\sigma^2}(\mu_1^2-\mu_0^2) \\
\beta_1&=\frac1{\sigma^2}(\mu_1-\mu_0).
\end{align}
So in this sense the two conditional models are compatible. Such compatibility is desirable for example in multiple imputation by chained equations (MICE) methods.
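This compatibility is easy to check numerically with a short simulation (a sketch with made-up parameter values; the Newton-Raphson fitter below is hand-rolled just to keep the example self-contained): simulate $X \mid Y=i \sim N(\mu_i, \sigma^2)$ with $Y \sim \operatorname{bernoulli}(p)$, fit the logistic regression of $Y$ on $X$, and compare the fitted coefficients with the formulas above.

```python
import math
import random

random.seed(0)
p, mu0, mu1, sigma = 0.4, 0.0, 1.0, 1.0
n = 20_000

ys = [1 if random.random() < p else 0 for _ in range(n)]
xs = [random.gauss(mu1 if y else mu0, sigma) for y in ys]

def fit_logistic(xs, ys, iters=12):
    """Newton-Raphson for the model logit P(Y=1|x) = b0 + b1*x."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = mu * (1.0 - mu)
            g0 += y - mu          # score for the intercept
            g1 += (y - mu) * x    # score for the slope
            h00 += w              # observed information (2x2)
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0_hat, b1_hat = fit_logistic(xs, ys)
b1_theory = (mu1 - mu0) / sigma ** 2
b0_theory = math.log(p / (1 - p)) - (mu1 ** 2 - mu0 ** 2) / (2 * sigma ** 2)
print(f"slope:     fitted {b1_hat:.3f}, theory {b1_theory:.3f}")
print(f"intercept: fitted {b0_hat:.3f}, theory {b0_theory:.3f}")
```

With $n$ this large the fitted values should land close to the theoretical $\beta_0 \approx -0.905$ and $\beta_1 = 1$.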
Also see exercise 5.1 b and c in Agresti 2015.
|
Logistic regression or T test?
|
This doesn't really answer the question but may still be of some interest. The standard assumption of a two sample $t$-test is that the conditional normal distribution of $X$ given a binary variable
|
Logistic regression or T test?
This doesn't really answer the question but may still be of some interest. The standard assumption of a two-sample $t$-test is that the conditional distribution of $X$ given a binary variable $Y$ is normal,
$$
X|Y=i \sim N(\mu_i,\sigma^2).
$$
This together with the assumption that $Y \sim \operatorname{bernoulli}(p)$ marginally, implies that the conditional distribution of the binary variable $Y$ given $X=x$ is
\begin{align}
P(Y=1|X=x)
&=\frac{f_{X|Y=1}(x)P(Y=1)}{\sum_{i=0}^1 f_{X|Y=i}(x)P(Y=i)}
\\&=\frac{pe^{-\frac1{2\sigma^2}(x-\mu_1)^2}}{pe^{-\frac1{2\sigma^2}(x-\mu_1)^2} + (1-p)e^{-\frac1{2\sigma^2}(x-\mu_0)^2}}
\\&=\frac1{1+\frac{1-p}pe^{-\frac1{2\sigma^2}(x-\mu_0)^2+\frac1{2\sigma^2}(x-\mu_1)^2}}
\\&=\operatorname{logit}^{-1}(\beta_0 + \beta_1 x)
\end{align}
that is, a logistic regression model with intercept and slope
\begin{align}\beta_0 &= \ln\frac p{1-p} -\frac1{2\sigma^2}(\mu_1^2-\mu_0^2) \\
\beta_1&=\frac1{\sigma^2}(\mu_1-\mu_0).
\end{align}
So in this sense the two conditional models are compatible. Such compatibility is desirable for example in multiple imputation by chained equations (MICE) methods.
Also see exercise 5.1 b and c in Agresti 2015.
|
Logistic regression or T test?
This doesn't really answer the question but may still be of some interest. The standard assumption of a two sample $t$-test is that the conditional normal distribution of $X$ given a binary variable
|
9,944
|
Logistic regression or T test?
|
The better test is the one that better addresses your question. Neither is just better on its face. The differences here are equivalent to those found when regressing y on x versus x on y, and the reasons for the different results are similar. The variance being assessed depends on which variable is treated as the response variable in the model.
Your research question is terribly vague. Perhaps if you considered direction of causality you'd be able to come to a conclusion about which analysis you want to use. Is age causing people to respond "yes" or is responding "yes" causing people to get older? It's more likely the former, in which case the variance in the probability of a "yes" is what you wish to model and therefore the logistic regression is the best choice.
That said, you should examine the assumptions of the tests. Those can be found on Wikipedia or in your textbooks. It may well be that you have good reasons not to perform the logistic regression, and when that happens you may need to ask a different question.
|
Logistic regression or T test?
|
The better test is the the one that better addresses your question. Neither is just better on it's face. The differences here are equivalent to those found when regressing y on x and x on y and the re
|
Logistic regression or T test?
The better test is the one that better addresses your question. Neither is just better on its face. The differences here are equivalent to those found when regressing y on x versus x on y, and the reasons for the different results are similar. The variance being assessed depends on which variable is treated as the response variable in the model.
Your research question is terribly vague. Perhaps if you considered direction of causality you'd be able to come to a conclusion about which analysis you want to use. Is age causing people to respond "yes" or is responding "yes" causing people to get older? It's more likely the former, in which case the variance in the probability of a "yes" is what you wish to model and therefore the logistic regression is the best choice.
That said, you should examine the assumptions of the tests. Those can be found on Wikipedia or in your textbooks. It may well be that you have good reasons not to perform the logistic regression, and when that happens you may need to ask a different question.
|
Logistic regression or T test?
The better test is the the one that better addresses your question. Neither is just better on it's face. The differences here are equivalent to those found when regressing y on x and x on y and the re
|
9,945
|
What is the daily job routine of the machine learning scientist?
|
Alex, I can't comment specifically on Germany or Switzerland, but I do work for an international company with a staff of over 100,000 people from all different countries. Most of these people have at least graduate-level degrees, many have Masters and PhDs and, except for the HR and Admin staff, most of us are experts in one or more scientific domains. I have more than 30 years' experience, have worked as a skilled scientific/technical specialist, a manager, a project manager, and eventually returned to a purely scientific role that I enjoy. I have also been involved with hiring staff, and perhaps some of my observations that follow may be of value to you.
Most new graduates really don't know exactly what they want and it usually takes a few years to find out. In most cases their workplace experience turns out to be quite different compared to what they had expected for a range of reasons. Some workplaces are exciting while some are dull, boring and "workplace politics", bad bosses, etc can sometimes be big problems. A higher degree may or may not help at all with any of these issues.
Most employers want people who can "do the job" and be productive as soon as possible. Higher degrees may or may not matter, depending on the employer. In some situations the door is closed UNLESS you have a PhD. In other situations, the door may be closed BECAUSE you have a PhD and the employer wants someone "less theoretical and with more practical experience".
A PhD does not necessarily mean faster promotions or even much difference in salary and may or may not make any difference to the sort of position that you can obtain. Generally when I have been interviewing candidates, I have been most interested in finding people with relevant work-related experience. A PhD might be a final deciding factor in securing a position, IF the candidate's thesis topic is specifically relevant.
People tend to change jobs more often now than they used to in the past. Your age divided by 2*pi is not a bad rule of thumb for a good number of years to stay in a job before you start going around in circles. Some people work for a while and then return to higher studies. Some people (like me) start on a PhD and then get an "offer too good to refuse" and leave the PhD to go and work. Am I sorry I did that? NO, not at all, and if I were starting over again I would do a PhD in a completely different topic anyway.
The best suggestion that I can give you is to do what you most enjoy doing and see how it unfolds. No-one else can tell you what will be best for you. Sometimes you just have to try something and, if it doesn't work out, then learn as much as you can from it and move on to something else. As Rodin said: Nothing is ever a waste of time if you use the experience wisely.
|
What is the daily job routine of the machine learning scientist?
|
Alex, I can't comment specifically on Germany or Switzerland, but I do work for an international company with a staff of over 100,000 people from all different countries. Most of these people have at
|
What is the daily job routine of the machine learning scientist?
Alex, I can't comment specifically on Germany or Switzerland, but I do work for an international company with a staff of over 100,000 people from all different countries. Most of these people have at least graduate-level degrees, many have Masters and PhDs and, except for the HR and Admin staff, most of us are experts in one or more scientific domains. I have more than 30 years' experience, have worked as a skilled scientific/technical specialist, a manager, a project manager, and eventually returned to a purely scientific role that I enjoy. I have also been involved with hiring staff, and perhaps some of my observations that follow may be of value to you.
Most new graduates really don't know exactly what they want and it usually takes a few years to find out. In most cases their workplace experience turns out to be quite different compared to what they had expected for a range of reasons. Some workplaces are exciting while some are dull, boring and "workplace politics", bad bosses, etc can sometimes be big problems. A higher degree may or may not help at all with any of these issues.
Most employers want people who can "do the job" and be productive as soon as possible. Higher degrees may or may not matter, depending on the employer. In some situations the door is closed UNLESS you have a PhD. In other situations, the door may be closed BECAUSE you have a PhD and the employer wants someone "less theoretical and with more practical experience".
A PhD does not necessarily mean faster promotions or even much difference in salary and may or may not make any difference to the sort of position that you can obtain. Generally when I have been interviewing candidates, I have been most interested in finding people with relevant work-related experience. A PhD might be a final deciding factor in securing a position, IF the candidate's thesis topic is specifically relevant.
People tend to change jobs more often now than they used to in the past. Your age divided by 2*pi is not a bad rule of thumb for a good number of years to stay in a job before you start going around in circles. Some people work for a while and then return to higher studies. Some people (like me) start on a PhD and then get an "offer too good to refuse" and leave the PhD to go and work. Am I sorry I did that? NO, not at all, and if I were starting over again I would do a PhD in a completely different topic anyway.
The best suggestion that I can give you is to do what you most enjoy doing and see how it unfolds. No-one else can tell you what will be best for you. Sometimes you just have to try something and, if it doesn't work out, then learn as much as you can from it and move on to something else. As Rodin said: Nothing is ever a waste of time if you use the experience wisely.
|
What is the daily job routine of the machine learning scientist?
Alex, I can't comment specifically on Germany or Switzerland, but I do work for an international company with a staff of over 100,000 people from all different countries. Most of these people have at
|
9,946
|
What is the daily job routine of the machine learning scientist?
|
Before I describe my opinion of job routine, I will pick a few pieces of your post that I think are relevant (emphasis mine):
I'm a very curious person
Will work with intellectually challenging stuff
I need to be honest and say that also I hate to see someone else with a higher degree than me (vanity)
I can start a career and make a lot of money in 1 or 2 years
start my own company
Based on 1 and 2, you appear to have a very romantic view of data science and research in general. Yes, you will get to work on interesting problems, but certainly not 24/7 (this applies to both industry and research).
Based on 2 and 3, you seem to consider research the pinnacle of human intellect and consider a PhD as a certification of your smarts. I do not agree, because:
there are intellectually challenging problems in both academic research and industry. I think it's a strange assumption that academics face the hardest ones.
having a PhD doesn't mean you are smart, it means you have what it takes to do good research in your field. Research is not about being smarter than someone else (though it helps). Creativity and approaching problems from a different angle are also very important qualities. If you want some kind of proof that you are smarter than the next person, take Mensa tests, not a PhD.
In my personal opinion the smartest people are the ones that end up living a happy life with the choices they made, whether that means becoming a nuclear physicist or a carpenter. Don't make your decisions based on whether or not they grant you something to show off with.
Based on 4 and 5, it looks like you envision starting your own company at some point. Be aware that when doing startups, even technology-oriented ones, you are likely not going to spend the majority of your time with the actual technology. Marketing, business plans, management etc. etc. are all equally (if not more) important to successful startups. How do you expect a PhD to help?
Now that these preliminaries are out of the way: my personal opinion on the job routine of a machine learning scientist. First of all: you get to work with state-of-the-art methods on big/complicated/interesting data sets with an emphasis of your choice. It is most certainly very interesting work.
... BUT
Real machine learning involves a lot of grunt work
You will not spend every working hour in a utopian world full of mathematical elegance while an army of computers does your bidding. A large portion of your time will be spent doing grunt work: database management, preparing data sets, normalizing stuff, dealing with inconsistencies, etc. etc. I spend the majority of my time doing tasks like these. They do not grow more exciting over time. If you are not passionate about your topic, you will eventually lose motivation to do these things.
If you have taken machine learning classes you typically get nicely labeled data sets without inconsistencies, no missing data, where everything is as it should be. This is not real life machine learning. You will spend most of your time on trying to get to the point where you are ready to run your favorite algorithm.
Expectation management in collaborations
If you want to do interdisciplinary projects, you will have to learn how to work with people that know little to nothing about what you do (this is true for any specialization). In machine learning that often implies one of two scenarios:
Your collaborators have seen too much TV and think that you can solve everything, with a fancy algorithm and lots of cool visualizations.
Your collaborators don't understand the techniques you use and as such don't see the benefits or potential applications.
What is the daily job routine of the machine learning scientist?
•What is it like to work as a data scientist/machine learner with a
master degree in the industry? What kind of work you do? Especially
when I read those ads on Amazon as a machine learning scientist, I
always wonder what they do.
The business problems do not really change depending on your degree, so you would look at the same or similar things. If you work in a big organisation, you work on the company's large datasets. This can usually be product/client data or operational data (chemical process data, financial markets data, website traffic data, etc.). The generic end goal is to leverage the data to save money or make money for the company.
•The same question as before, but with a PhD. Do you do something
different or the same thing as with masters?
The answer is as above: you would do pretty much the same things. However, in the research / quantitative analysis / or a similar technical department of a large international corporation, if you have a PhD, you have an edge over someone with an MSc in terms of career progression. A PhD teaches (or is supposed to teach) you to be an independent researcher, so with a doctorate, the company usually 'values' your labour (inquisitive skills and diligence) a bit more. BUT I would strongly advise against doing a PhD just for the sake of (potentially) faster career progression. Doing a PhD is a hard and -especially towards the end- painful process; you would have to like (ideally love) your subject and also, in my opinion, have a potential interest in remaining in academia (which is a proxy for your affinity towards research and the particular topic) in order to make it bearable.
Also bear in mind that, going back to industry with a PhD, you will be lagged on the career ladder and may end up being channeled into a technically oriented support role (which pays less compared to the roles where people earn real money for the company) - which may not be your primary objective. Finally, if you are working in a small-scale company, or your own company, the edge of having a PhD virtually disappears in terms of career progression or salary.
•Am I going to deal with challenging interesting problems? Or some
boring stuff?
I guess there is no generic answer to this. ML is cross-disciplinary. If you work as an analyst, you would usually look at data and try to build models; if you are on the development side, you end up dealing with the nitty-gritty of implementation. If you are client-facing, you may have to do a lot of hand-holding and training of clients (but will likely earn more money). Usually, the answer to your question depends on personal preference and also on how much flexibility your employer provides.
What is the daily job routine of the machine learning scientist?
Or you can try to join a research group where statisticians and machine learners are not an everyday appearance - for example, infestation and disease spreading, botany or ecology, social insects, or maybe the social sciences?
I can't give you exact examples, but if you are a good statistician/ML person in a place where there are only a few of them, then people and different research proposals will find you. The point is that you will be really in demand without too much effort on your side.
If you like that idea, then try to search for machine learning problems outside your current topics (industry), and maybe you will find your "challenging interesting problems" and get to "work with intellectually challenging stuff".
What is the daily job routine of the machine learning scientist?
I agree with the other answers. I would just emphasize that one common way (at least in the US) for people like you, who hesitate between continuing with a PhD or going into industry after their undergrad degrees, is to apply for a PhD, then take a leave (one year or more) if things aren't as great as they expected, or if they simply want to explore industry. It is generally easier to apply for a PhD right after undergrad: you haven't yet forgotten the habit of cramming for exams (GRE), the professors who are going to write recommendation letters for you still remember you well, etc.
Also, in your comparison between PhD and industry, amongst the opportunities you have, you might want to compare the access to interesting datasets, computer cluster availability, software engineering skills of the place and how many people are assigned for each project.
Lastly, you can find a lot of intellectually challenging stuff in industry as well, e.g. check out the research departments of IBM/Google/Microsoft/Nuance/Facebook/etc. (just as you can find a lot of intellectually unchallenging stuff in academia). E.g. the folks behind SVMs were working at AT&T, IBM Watson is at IBM, Google Translate is one of the best machine translation systems, and Nuance and Google have the top voice recognition systems; those are very far from isolated examples. In fact, I've always wondered who, between industry and academia, contributes the most to machine learning research (I asked the same question regarding database research on Quora: Has database research been mostly driven by the industry over the last decade?).
What is the daily job routine of the machine learning scientist?
To get a PhD, you have to advance the state of human knowledge. You don't just have to learn more stuff. You have to produce something original. This is a long, slow, and painful process, and not everyone succeeds at it. So you should do a PhD only if you think you have a new, creative, contribution to the field in you.
If you just want to learn the field and apply the field, take your Masters at most, and then spend the rest of your life learning while you apply. Read things. Take the occasional workshop. If at some point you are infected with the urge to do something truly original, take a (long) break from career and try to get that PhD then.
What is the daily job routine of the machine learning scientist?
When you choose the /famous little company/ route, you have the freedom to establish a research department in your company.
Here, you can get annoyingly creative, as in, unrestrained... explore all your childhood fantasies, intellectually challenging stuff... you set the pace... you will be /the man/.
You don't have to sit at University Labs to write a /Killer/ research paper.
That notwithstanding, while at it, you can always coordinate with relevant research departments back at the university. See...? Two birds with one stone :-)
...someone else with a higher degree...
Well, vanity, in moderation, motivates us to seek the best that there can be.
Good luck.
yb
Spatial statistics models: CAR vs SAR
Non-spatial model
My House Value is a function of my home Gardening Investment.
SAR model
My House Value is a function of the House Values of my neighbours.
CAR model
My House Value is a function of the Gardening Investment of my neighbours.
Spatial statistics models: CAR vs SAR
As the Encyclopedia of GIS states, the conditional autoregressive model (CAR) is appropriate for situations with first-order dependency or relatively local spatial autocorrelation, while the simultaneous autoregressive model (SAR) is more suitable where there is second-order dependency or more global spatial autocorrelation.
This is made clear by the fact that CAR obeys the spatial version of the Markov property, namely it assumes that the state of a particular area is influenced by its neighbors and not by neighbors of neighbors, etc. (i.e. it is spatially “memoryless”, rather than temporally), whereas SAR makes no such assumption. This is due to the different ways in which they specify their variance-covariance matrices. So, when the spatial Markov property obtains, CAR provides a simpler way to model autocorrelated geo-referenced areal data.
See Gis And Spatial Data Analysis: Converging Perspectives for more details.
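In symbols, the contrast can be sketched as follows (a standard textbook parameterization, not taken from the references above; here $W$ is a spatial weights matrix, $\rho$ a spatial dependence parameter, and the CAR covariance form assumes a symmetric $W$):

```latex
% SAR (error form): dependence enters simultaneously, so it propagates globally
y = X\beta + u, \qquad u = \rho W u + \varepsilon
\quad\Rightarrow\quad \operatorname{Var}(y) = \sigma^2\left[(I - \rho W)^{\top}(I - \rho W)\right]^{-1}

% CAR: dependence is specified conditionally on the neighbors (spatial Markov property)
y_i \mid y_{-i} \sim \mathcal{N}\!\left(x_i^{\top}\beta + \rho \sum_{j \neq i} w_{ij}\,(y_j - x_j^{\top}\beta),\; \tau^2\right)
\quad\Rightarrow\quad \operatorname{Var}(y) = \tau^2\,(I - \rho W)^{-1}
```

The factor $(I - \rho W)$ enters the CAR covariance once but effectively twice in the SAR covariance, which is one way to see why CAR captures first-order (local) dependence while SAR induces the more global, second-order dependence described above.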
How to test if my distribution is multimodal?
@NickCox has presented an interesting strategy (+1). I might consider it more exploratory in nature however, due to the concern that @whuber points out.
Let me suggest another strategy: You could fit a Gaussian finite mixture model. Note that this makes the very strong assumption that your data are drawn from one or more true normals. As both @whuber and @NickCox point out in the comments, without a substantive interpretation of these data—supported by well-established theory—to support this assumption, this strategy should be considered exploratory as well.
First, let's follow @Glen_b's suggestion and look at your data using twice as many bins:
We still see two modes; if anything, they come through more clearly here. (Note also that the kernel density line should be identical, but appears more spread out due to the larger number of bins.)
Now let's fit a Gaussian finite mixture model. In R, you can use the Mclust package to do this:
library(mclust)
x.gmm = Mclust(x)
summary(x.gmm)
# ----------------------------------------------------
# Gaussian finite mixture model fitted by EM algorithm
# ----------------------------------------------------
#
# Mclust V (univariate, unequal variance) model with 2 components:
#
# log.likelihood n df BIC ICL
# -1200.874 120 5 -2425.686 -2442.719
#
# Clustering table:
# 1 2
# 68 52
Two normal components optimizes the BIC. For comparison, we can force a one component fit and perform a likelihood ratio test:
x.gmm.1 = Mclust(x, G=1)
logLik(x.gmm.1)
# 'log Lik.' -1226.241 (df=2)
logLik(x.gmm)-logLik(x.gmm.1)
# 'log Lik.' 25.36657 (df=5)
1-pchisq(25.36657, df=3) # [1] 1.294187e-05
This suggests it is extremely unlikely you would find data as far from unimodal as yours if they came from a single true normal distribution.
Some people don't feel comfortable using a parametric test here (although if the assumptions hold, I don't know of any problem). One very broadly applicable technique is to use the Parametric Bootstrap Cross-fitting Method (I describe the algorithm here). We can try applying it to these data:
x.gmm$parameters
# $mean
# 12346.98 23322.06
# $variance$sigmasq
# [1] 4514863 24582180
x.gmm.1$parameters
# $mean
# [1] 17520.91
# $variance$sigmasq
# [1] 43989870
set.seed(7809)
B = 10000; x2.d = vector(length=B); x1.d = vector(length=B)
for(i in 1:B){
x2 = c(rnorm(68, mean=12346.98, sd=sqrt( 4514863)),
rnorm(52, mean=23322.06, sd=sqrt(24582180)) )
x1 = rnorm( 120, mean=17520.91, sd=sqrt(43989870))
x2.d[i] = Mclust(x2, G=2)$loglik - Mclust(x2, G=1)$loglik
x1.d[i] = Mclust(x1, G=2)$loglik - Mclust(x1, G=1)$loglik
}
x2.d = sort(x2.d); x1.d = sort(x1.d)
summary(x1.d)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# -0.29070 -0.02124 0.41460 0.88760 1.36700 14.01000
summary(x2.d)
# Min. 1st Qu. Median Mean 3rd Qu. Max.
# 9.006 23.770 27.500 27.760 31.350 53.500
The summary statistics, and the kernel density plots for the sampling distributions show several interesting features. The log likelihood for the single component model is rarely greater than that of the two component fit, even when the true data generating process has only a single component, and when it is greater, the amount is trivial. The idea of comparing models that differ in their ability to fit data is one of the motivations behind the PBCM. The two sampling distributions barely overlap at all; only .35% of x2.d are less than the maximum x1.d value. If you selected a two component model if the difference in log likelihood were >9.7, you would incorrectly select the one component model .01% and the two component model .02% of the time. These are highly discriminable. If, on the other hand, you chose to use the one component model as a null hypothesis, your observed result is sufficiently small as not to show up in the empirical sampling distribution in 10,000 iterations. We can use the rule of 3 (see here) to place an upper bound on the p-value, namely, we estimate your p-value is less than .0003. That is, this is highly significant.
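The rule-of-3 bound used here has a simple closed form: if an event occurs $0$ times in $B$ independent trials, an approximate one-sided 95% upper confidence bound for its probability is $3/B$ (because $(1 - 3/B)^B \approx e^{-3} \approx 0.05$):

```latex
p \;\le\; \frac{3}{B} \;=\; \frac{3}{10\,000} \;=\; 0.0003
```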
This raises the question of why these results diverge so much from your dip test. (To answer your explicit question, your dip test provides no evidence that there are two real modes.) I honestly don't know the dip test, so it's hard to say; it may be underpowered. However, I think the likely answer is that this approach assumes your data are generated by true normal[s]. A Shapiro-Wilk test for your data is highly significant ($p < .000001$), and it is also highly significant for the optimal Box-Cox transformation of your data (the inverse square root; $p < .001$). However, data are never really normal (cf., this famous quote), and the underlying components, should they exist, aren't guaranteed to be perfectly normal either. If you find it reasonable that your data could come from a positively skewed distribution, rather than a normal, this level of bimodality may well be within the typical range of variation, which is what I suspect the dip test is saying.
The summary statistics, and the kernel density plots for the sampling distributions show several interesting features. The log likelihood for the single component model is rarely greater than that of the two component fit, even when the true data generating process has only a single component, and when it is greater, the amount is trivial. The idea of comparing models that differ in their ability to fit data is one of the motivations behind the PBCM. The two sampling distributions barely overlap at all; only .35% of x2.d are less than the maximum x1.d value. If you selected a two component model if the difference in log likelihood were >9.7, you would incorrectly select the one component model .01% and the two component model .02% of the time. These are highly discriminable. If, on the other hand, you chose to use the one component model as a null hypothesis, your observed result is sufficiently small as not to show up in the empirical sampling distribution in 10,000 iterations. We can use the rule of 3 (see here) to place an upper bound on the p-value, namely, we estimate your p-value is less than .0003. That is, this is highly significant.
This raises the question of why these results diverge so much from your dip test. (To answer your explicit question, your dip test provides no evidence that there are two real modes.) I honestly don't know the dip test, so it's hard to say; it may be underpowered. However, I think the likely answer is that this approach assumes your data are generated by true normal[s]. A Shapiro-Wilk test for your data is highly significant ($p < .000001$), and it is also highly significant for the optimal Box-Cox transformation of your data (the inverse square root; $p < .001$). However, data are never really normal (cf., this famous quote), and the underlying components, should they exist, aren't guaranteed to be perfectly normal either. If you find it reasonable that your data could come from a positively skewed distribution, rather than a normal, this level of bimodality may well be within the typical range of variation, which is what I suspect the dip test is saying.
|
How to test if my distribution is multimodal?
@NickCox has presented an interesting strategy (+1). I might consider it more exploratory in nature however, due to the concern that @whuber points out.
Let me suggest another strategy: You could
|
9,955
|
How to test if my distribution is multimodal?
|
Following up on the ideas in @Nick's answer and comments, you can see how wide the bandwidth needs to be to just flatten out the secondary mode:
Take this kernel density estimate as the proximal null—the distribution closest to the data yet still consistent with the null hypothesis that it's a sample from a unimodal population—and simulate from it. In the simulated samples the secondary mode doesn't often look so distinct, and you needn't widen the bandwidth as much to flatten it out.
Formalizing this approach leads to the test given in Silverman (1981), "Using kernel density estimates to investigate modality", JRSS B, 43, 1. Schwaiger & Holzmann's silvermantest package implements this test, and also the calibration procedure described by Hall & York (2001), "On the calibration of Silverman's test for multimodality", Statistica Sinica, 11, p 515, which adjusts for asymptotic conservatism. Performing the test on your data with a null hypothesis of unimodality results in p-values of 0.08 without calibration and 0.02 with calibration. I'm not familiar enough with the dip test to guess at why it might differ.
R code:
# kernel density estimate for x using Sheather-Jones
# method to estimate b/w:
density(x, kernel="gaussian", bw="SJ") -> dens.SJ
# tweak b/w until mode just disappears:
density(x, kernel="gaussian", bw=3160) -> prox.null
# fill matrix with simulated samples from the proximal
# null:
x.sim <- matrix(NA, nrow=length(x), ncol=10)
for (i in 1:10){
x.sim[ ,i] <- rnorm(length(x), sample(x, size=length(x),
replace=TRUE), prox.null$bw)
}
# perform Silverman test without Hall-York calibration:
require(silvermantest)
silverman.test(x, k=1, M=10000, adjust=F)
# perform Silverman test with Hall-York calibration:
silverman.test(x, k=1, M=10000, adjust=T)
|
9,956
|
How to test if my distribution is multimodal?
|
The things to worry about include:
The size of the dataset. It is not tiny, not large.
The dependence of what you see on histogram origin and bin width. With only one choice evident, you (and we) have no idea of sensitivity.
The dependence of what you see on kernel type and width and whatever other choices are made for you in density estimation. With only one choice evident, you (and we) have no idea of sensitivity.
Elsewhere I have suggested tentatively that credibility of modes is supported (but not established) by a substantive interpretation and by the ability to discern the same modality in other datasets of the same size. (Bigger is better too....)
We can't comment on either of those here. One small handle on repeatability is to compare what you get with bootstrap samples of the same size. Here are the results of a token experiment using Stata, but what you see is arbitrarily limited to Stata's defaults, which themselves are documented as plucked out of the air. I got density estimates for the original data and for 24 bootstrap samples from the same.
The indication (no more, no less) is what I think experienced analysts would just guess anyway from your graph. The left-hand mode is highly repeatable and the right-hand is distinctly more fragile.
Note that there is an inevitability about this: as there are fewer data nearer the right-hand mode, it won't always reappear in a bootstrap sample. But this is also the key point.
Note that point 3. above remains untouched. But the results are somewhere between unimodal and bimodal.
For those interested, this is the code:
clear
set scheme s1color
set seed 2803
mat data = (10346, 13698, 13894, 19854, 28066, 26620, 27066, 16658, 9221, 13578, 11483, 10390, 11126, 13487, 15851, 16116, 24102, 30892, 25081, 14067, 10433, 15591, 8639, 10345, 10639, 15796, 14507, 21289, 25444, 26149, 23612, 19671, 12447, 13535, 10667, 11255, 8442, 11546, 15958, 21058, 28088, 23827, 30707, 19653, 12791, 13463, 11465, 12326, 12277, 12769, 18341, 19140, 24590, 28277, 22694, 15489, 11070, 11002, 11579, 9834, 9364, 15128, 15147, 18499, 25134, 32116, 24475, 21952, 10272, 15404, 13079, 10633, 10761, 13714, 16073, 23335, 29822, 26800, 31489, 19780, 12238, 15318, 9646, 11786, 10906, 13056, 17599, 22524, 25057, 28809, 27880, 19912, 12319, 18240, 11934, 10290, 11304, 16092, 15911, 24671, 31081, 27716, 25388, 22665, 10603, 14409, 10736, 9651, 12533, 17546, 16863, 23598, 25867, 31774, 24216, 20448, 12548, 15129, 11687, 11581)
set obs `=colsof(data)'
gen data = data[1,_n]
gen index = .
quietly forval j = 1/24 {
replace index = ceil(120 * runiform())
gen data`j' = data[index]
kdensity data`j' , nograph at(data) gen(xx`j' d`j')
}
kdensity data, nograph at(data) gen(xx d)
local xstuff xtitle(data/1000) xla(10000 "10" 20000 "20" 30000 "30") sort
local ystuff ysc(r(0 .0001)) yla(none) `ystuff'
local i = 1
local colour "orange"
foreach v of var d d? d?? {
line `v' data, lc(`colour') `xstuff' `ystuff' name(g`i', replace)
local colour "gs8"
local G `G' g`i'
local ++i
}
graph combine `G'
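The bootstrap-and-look idea can also be sketched outside Stata. Below is a hypothetical Python translation using scipy's `gaussian_kde` with its default bandwidth and synthetic two-component data (the real values are in the Stata matrix above); it counts KDE modes in the original sample and in 24 bootstrap resamples:

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_modes(sample, grid):
    """Count local maxima of a Gaussian KDE evaluated on a grid."""
    dens = gaussian_kde(sample)(grid)
    # interior grid points higher than both neighbours are modes
    return int(np.sum((dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])))

rng = np.random.default_rng(2803)
# stand-in bimodal data; the component means/sds are illustrative only
data = np.concatenate([rng.normal(12500, 2000, 70),
                       rng.normal(24000, 4000, 50)])
grid = np.linspace(data.min(), data.max(), 512)

# modes in the original sample and in 24 bootstrap resamples
orig_modes = count_modes(data, grid)
boot_modes = [count_modes(rng.choice(data, size=data.size, replace=True), grid)
              for _ in range(24)]
print(orig_modes, boot_modes)
```

As in the Stata experiment, the interesting output is how often the second mode survives resampling, not any single count.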
|
9,957
|
How to test if my distribution is multimodal?
|
LP Nonparametric Mode Identification (the algorithm is called LPMode; the paper is referenced below)
MaxEnt Modes [Red color triangles in the plot]: 12783.36 and 24654.28.
L2 Modes [Green color triangles in the plot]: 13054.70 and 24111.61.
It is interesting to note the modal shapes, especially the second one, which shows considerable skewness (a traditional Gaussian mixture model is likely to fail here).
Mukhopadhyay, S. (2016) Large-Scale Mode Identification and Data-Driven Sciences. https://arxiv.org/abs/1509.06428
|
9,958
|
How to test if my distribution is multimodal?
|
I'll add that you can replace the progress bars, which don't illuminate all that much, with an iteration count instead with this:
x2.d[i] = Mclust(x2, G=2, verbose=FALSE)$loglik - Mclust(x2,
G=1, verbose=FALSE)$loglik
x1.d[i] = Mclust(x1, G=2, verbose=FALSE)$loglik - Mclust(x1,
G=1, verbose=FALSE)$loglik
cat(sprintf("\rIteration %s of %s (%.1f%% complete)", i, B,
i/B*100))
|
9,959
|
A statistics book that explains using more images than equations
|
The Cartoon Guide to Statistics covers the basics, including random variables, hypothesis testing and confidence intervals.
|
9,960
|
A statistics book that explains using more images than equations
|
I really like A Guide to Econometrics by Peter Kennedy. Some material in it will probably be irrelevant, but the conceptual info is excellent and useful for non-economists. For example, here's Kennedy on graphical intuition for omitted variable bias and multicolinearity in multiple regression using Ballentine/Venn diagrams. Each topic starts with a simple explanation, usually with diagrams, followed by technical notes with some math and references.
|
9,961
|
A statistics book that explains using more images than equations
|
While reading the reviews for The Cartoon Guide to Statistics, I noticed one saying The Manga Guide To Statistics was better: http://www.amazon.com/gp/product/1593271891
The Manga Guide has fewer reviews, but gets better ones on average. (I.e. the mean number of stars is better; hopefully after reading either book you'd be able to calculate if that is a significant "better" or not ;-)
|
9,962
|
A statistics book that explains using more images than equations
|
Ram Gnanadesikan's book "Methods for Statistical Data Analysis of Multivariate Observations" has some equations but also a lot of graphics. Duda, Hart and Stork's "Pattern Classification, Second Edition" has a lot of nice graphics, including some color. Hastie, Tibshirani and Friedman's "The Elements of Statistical Learning", although filled with equations, is loaded with beautiful graphics and heavy use of color (true for both editions).
|
9,963
|
A statistics book that explains using more images than equations
|
One book that I really like is "The Statistical Sleuth" by Ramsey and Schafer. It does still have the formulas, but the more complicated formulas have arrows pointing to the different parts with explanations of what that part of the formula means, there are lots of good graphics to help explain the concepts. It also covers a lot more than the cartoon guide (which I also like, but someone else already suggested it).
One of the best parts of it is that every chapter starts with one or more case studies that describe a dataset in general terms, pose a question of interest related to the data, and give an answer to the question in general terms; the chapter then goes on to show the methods that lead to the answer in more detail. It is nice to see how the technique applies to the real world as you learn the details.
|
9,964
|
How to get the value of Mean squared error in a linear regression in R
|
The multiple R-squared that R reports is the coefficient of determination, which is given by the formula
$$ R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}}.$$
The sum of squared errors is given (thanks to a previous answer) by sum(sm$residuals^2).
The mean squared error is given by mean(sm$residuals^2). You could write a function to calculate this, e.g.:
mse <- function(sm)
mean(sm$residuals^2)
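For readers outside R, the same computation is a one-liner with numpy. A minimal sketch on hypothetical toy data:

```python
import numpy as np

# toy data: fit a straight line by ordinary least squares
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 2.0, 4.0])

slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)

# analogue of mean(sm$residuals^2) in the R answer above
mse = np.mean(residuals ** 2)
print(mse)  # ≈ 0.075
```

Note this divides by $n$, matching mean(sm$residuals^2); dividing by the residual degrees of freedom instead gives the unbiased estimate that anova reports.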
|
9,965
|
How to get the value of Mean squared error in a linear regression in R
|
Another simple method is to use the anova function.
You can get the MSE with anova(model)['Residuals', 'Mean Sq']
> print(sprintf("MSE=%0.2f", sum(lmfit$residuals^2)/lmfit$df.residual))
[1] "MSE=0.27"
> print(sprintf("MSE=%0.2f", anova(lmfit)['Residuals', 'Mean Sq']))
[1] "MSE=0.27"
|
9,966
|
Hessian of logistic function
|
Here I derive all the necessary properties and identities for the solution to be self-contained, but apart from that this derivation is clean and easy. Let us formalize our notation and write the loss function a little more compactly. Consider $m$ samples $\{x_i,y_i\}$ such that $x_i\in\mathbb{R}^d$ and $y_i\in\mathbb{R}$. Recall that in binary logistic regression we typically have the hypothesis function $h_\theta$ be the logistic function. Formally
$$h_\theta(x_i)=\sigma(\omega^Tx_i)=\sigma(z_i)=\frac{1}{1+e^{-z_i}},$$
where $\omega\in\mathbb{R}^d$ and $z_i=\omega^Tx_i$. The loss function (which I believe OP's is missing a negative sign) is then defined as:
$$l(\omega)=\sum_{i=1}^m -\Big( y_i\log\sigma(z_i)+(1-y_i)\log(1-\sigma(z_i))\Big)$$
There are two important properties of the logistic function which I derive here for future reference. First, note that $1-\sigma(z)=1-1/(1+e^{-z})=e^{-z}/(1+e^{-z})=1/(1+e^z)=\sigma(-z)$.
Also note that
\begin{equation}
\begin{aligned}
\frac{\partial}{\partial z}\sigma(z)=\frac{\partial}{\partial z}(1+e^{-z})^{-1}=e^{-z}(1+e^{-z})^{-2}&=\frac{1}{1+e^{-z}}\frac{e^{-z}}{1+e^{-z}}
=\sigma(z)(1-\sigma(z))
\end{aligned}
\end{equation}
Instead of taking derivatives with respect to components, here we will work directly with vectors (you can review derivatives with vectors here). The Hessian of the loss function $l(\omega)$ is given by $\vec{\nabla}^2l(\omega)$, but first recall that $\frac{\partial z}{\partial \omega} = \frac{\partial x^T\omega}{\partial \omega}=x^T$ and $\frac{\partial z}{\partial \omega^T}=\frac{\partial \omega^Tx}{\partial \omega ^T} = x$.
Let $l_i(\omega)=-y_i\log\sigma(z_i)-(1-y_i)\log(1-\sigma(z_i))$. Using the properties we derived above and the chain rule
\begin{equation}
\begin{aligned}
\frac{\partial \log\sigma(z_i)}{\partial \omega^T} &=
\frac{1}{\sigma(z_i)}\frac{\partial\sigma(z_i)}{\partial \omega^T} =
\frac{1}{\sigma(z_i)}\frac{\partial\sigma(z_i)}{\partial z_i}\frac{\partial z_i}{\partial \omega^T}=(1-\sigma(z_i))x_i\\
\frac{\partial \log(1-\sigma(z_i))}{\partial \omega^T}&=
\frac{1}{1-\sigma(z_i)}\frac{\partial(1-\sigma(z_i))}{\partial \omega^T}
=-\sigma(z_i)x_i
\end{aligned}
\end{equation}
It's now trivial to show that
$$\vec{\nabla}l_i(\omega)=\frac{\partial l_i(\omega)}{\partial \omega^T}
=-y_ix_i(1-\sigma(z_i))+(1-y_i)x_i\sigma(z_i)=x_i(\sigma(z_i)-y_i)$$
whew!
Our last step is to compute the Hessian
$$\vec{\nabla}^2l_i(\omega)=\frac{\partial l_i(\omega)}{\partial \omega\partial \omega^T}=x_ix_i^T\sigma(z_i)(1-\sigma(z_i))$$
For $m$ samples we have $\vec{\nabla}^2l(\omega)=\sum_{i=1}^m x_ix_i^T\sigma(z_i)(1-\sigma(z_i))$. This is equivalent to concatenating column vectors $x_i\in\mathbb{R}^d$ into a matrix $X$ of size $d\times m$ such that $\sum_{i=1}^m x_ix_i^T=XX^T$. The scalar terms are combined in a diagonal matrix $D$ such that $D_{ii}=\sigma(z_i)(1-\sigma(z_i))$. Finally, we conclude that
$$ \vec{H}(\omega)=\vec{\nabla}^2l(\omega)=XDX^T$$
A faster approach can be derived by considering all samples at once from the beginning and working with matrix derivatives instead. As an extra note, with this formulation it's trivial to show that $l(\omega)$ is convex. Let $\delta$ be any vector in $\mathbb{R}^d$. Then
$$\delta^T\vec{H}(\omega)\delta = \delta^T\vec{\nabla}^2l(\omega)\delta = \delta^TXDX^T\delta = (X^T\delta)^TD(X^T\delta) = \|D^{1/2}X^T\delta\|^2\geq 0$$
since $D_{ii}=\sigma(z_i)(1-\sigma(z_i))>0$ for all $i$. This implies $H$ is positive semidefinite and therefore $l$ is convex (but not strongly convex).
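As a sanity check on the closed form $\vec{H}(\omega)=XDX^T$, one can compare it against a central-difference Hessian of the loss. A hypothetical numpy sketch on a small random problem (all names and sizes here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y):
    # X is d x m, columns are the samples x_i, matching the derivation
    s = sigmoid(X.T @ w)
    return -np.sum(y * np.log(s) + (1 - y) * np.log(1 - s))

def analytic_hessian(w, X):
    # H = X D X^T with D_ii = sigma(z_i)(1 - sigma(z_i))
    s = sigmoid(X.T @ w)
    return X @ np.diag(s * (1 - s)) @ X.T

def numeric_hessian(w, X, y, eps=1e-4):
    # central differences for each second partial derivative
    d = w.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            wpp = w.copy(); wpp[i] += eps; wpp[j] += eps
            wpm = w.copy(); wpm[i] += eps; wpm[j] -= eps
            wmp = w.copy(); wmp[i] -= eps; wmp[j] += eps
            wmm = w.copy(); wmm[i] -= eps; wmm[j] -= eps
            H[i, j] = (loss(wpp, X, y) - loss(wpm, X, y)
                       - loss(wmp, X, y) + loss(wmm, X, y)) / (4 * eps ** 2)
    return H

rng = np.random.default_rng(0)
d, m = 3, 20
X = rng.normal(size=(d, m))
y = rng.integers(0, 2, size=m).astype(float)
w = rng.normal(size=d)

err = np.max(np.abs(analytic_hessian(w, X) - numeric_hessian(w, X, y)))
print(err)
```

Note that, exactly as the closed form shows, the Hessian at a fixed $\omega$ does not depend on $y$ at all.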
|
Hessian of logistic function
|
Here I derive all the necessary properties and identities for the solution to be self-contained, but apart from that this derivation is clean and easy. Let us formalize our notation and write the loss
|
Hessian of logistic function
Here I derive all the necessary properties and identities for the solution to be self-contained, but apart from that this derivation is clean and easy. Let us formalize our notation and write the loss function a little more compactly. Consider $m$ samples $\{x_i,y_i\}$ such that $x_i\in\mathbb{R}^d$ and $y_i\in\mathbb{R}$. Recall that in binary logistic regression we typically have the hypothesis function $h_\theta$ be the logistic function. Formally
$$h_\theta(x_i)=\sigma(\omega^Tx_i)=\sigma(z_i)=\frac{1}{1+e^{-z_i}},$$
where $\omega\in\mathbb{R}^d$ and $z_i=\omega^Tx_i$. The loss function (from which, I believe, the OP's version is missing a negative sign) is then defined as:
$$l(\omega)=\sum_{i=1}^m -\Big( y_i\log\sigma(z_i)+(1-y_i)\log(1-\sigma(z_i))\Big)$$
There are two important properties of the logistic function which I derive here for future reference. First, note that $1-\sigma(z)=1-1/(1+e^{-z})=e^{-z}/(1+e^{-z})=1/(1+e^z)=\sigma(-z)$.
Also note that
\begin{equation}
\begin{aligned}
\frac{\partial}{\partial z}\sigma(z)=\frac{\partial}{\partial z}(1+e^{-z})^{-1}=e^{-z}(1+e^{-z})^{-2}&=\frac{1}{1+e^{-z}}\frac{e^{-z}}{1+e^{-z}}
=\sigma(z)(1-\sigma(z))
\end{aligned}
\end{equation}
Instead of taking derivatives with respect to components, here we will work directly with vectors (you can review derivatives with vectors here). The Hessian of the loss function $l(\omega)$ is given by $\vec{\nabla}^2l(\omega)$, but first recall that $\frac{\partial z}{\partial \omega} = \frac{\partial x^T\omega}{\partial \omega}=x^T$ and $\frac{\partial z}{\partial \omega^T}=\frac{\partial \omega^Tx}{\partial \omega^T} = x$.
Let $l_i(\omega)=-y_i\log\sigma(z_i)-(1-y_i)\log(1-\sigma(z_i))$. Using the properties we derived above and the chain rule
\begin{equation}
\begin{aligned}
\frac{\partial \log\sigma(z_i)}{\partial \omega^T} &=
\frac{1}{\sigma(z_i)}\frac{\partial\sigma(z_i)}{\partial \omega^T} =
\frac{1}{\sigma(z_i)}\frac{\partial\sigma(z_i)}{\partial z_i}\frac{\partial z_i}{\partial \omega^T}=(1-\sigma(z_i))x_i\\
\frac{\partial \log(1-\sigma(z_i))}{\partial \omega^T}&=
\frac{1}{1-\sigma(z_i)}\frac{\partial(1-\sigma(z_i))}{\partial \omega^T}
=-\sigma(z_i)x_i
\end{aligned}
\end{equation}
It's now trivial to show that
$$\vec{\nabla}l_i(\omega)=\frac{\partial l_i(\omega)}{\partial \omega^T}
=-y_ix_i(1-\sigma(z_i))+(1-y_i)x_i\sigma(z_i)=x_i(\sigma(z_i)-y_i)$$
whew!
Our last step is to compute the Hessian
$$\vec{\nabla}^2l_i(\omega)=\frac{\partial^2 l_i(\omega)}{\partial \omega\,\partial \omega^T}=x_ix_i^T\sigma(z_i)(1-\sigma(z_i))$$
For $m$ samples we have $\vec{\nabla}^2l(\omega)=\sum_{i=1}^m x_ix_i^T\sigma(z_i)(1-\sigma(z_i))$. This is equivalent to concatenating column vectors $x_i\in\mathbb{R}^d$ into a matrix $X$ of size $d\times m$ such that $\sum_{i=1}^m x_ix_i^T=XX^T$. The scalar terms are combined in a diagonal matrix $D$ such that $D_{ii}=\sigma(z_i)(1-\sigma(z_i))$. Finally, we conclude that
$$ \vec{H}(\omega)=\vec{\nabla}^2l(\omega)=XDX^T$$
A faster approach can be derived by considering all samples at once from the beginning and working with matrix derivatives instead. As an extra note, with this formulation it's trivial to show that $l(\omega)$ is convex. Let $\delta$ be any vector such that $\delta\in\mathbb{R}^d$. Then
$$\delta^T\vec{H}(\omega)\delta = \delta^T\vec{\nabla}^2l(\omega)\delta = \delta^TXDX^T\delta = (X^T\delta)^TD(X^T\delta) = \|D^{1/2}X^T\delta\|^2\geq 0$$
since every diagonal entry $D_{ii}=\sigma(z_i)(1-\sigma(z_i))>0$. This implies $H$ is positive semidefinite and therefore $l$ is convex (but not strongly convex).
|
Hessian of logistic function
Here I derive all the necessary properties and identities for the solution to be self-contained, but apart from that this derivation is clean and easy. Let us formalize our notation and write the loss
|
9,967
|
Calculate mean of ordinal variable
|
A short answer is that this is contentious. Contrary to the advice you mention, people in many fields do take means of ordinal scales and are often happy that means do what they want. Grade-point averages or the equivalent in many educational systems are one example.
However, ordinal data not being normally distributed is not a valid reason, because the mean is
widely used for non-normal distributions
well-defined mathematically for very many non-normal distributions, except in some pathological cases.
It may not be a good idea to use the mean in practice if data are definitely not normally distributed, but that's different.
A stronger reason for not using the mean with ordinal data is that its value depends on conventions on coding. Numerical codes such as 1, 2, 3, 4 are usually just chosen for simplicity or convenience, but in principle they could equally well be 1, 23, 456, 7890 as far as corresponding to a defined order is concerned. Taking the mean in either case would involve taking those conventions literally (namely, as if the numbers were not arbitrary, but justifiable), and there are no rigorous grounds for doing that. You need an interval scale in which equal differences between values can be taken literally to justify taking means. That I take to be the main argument, but as already indicated people often ignore it, and deliberately, because they find means useful, whatever measurement theorists say.
Here is an extra example. Often people are asked to choose one of "strongly disagree" ... "strongly agree" and (depending partly on what the software wants) researchers code that as 1 .. 5 or 0 .. 4 or whatever they want, or declare it as an ordered factor (or whatever term the software uses). Here the coding is arbitrary and hidden from the people who answer the question.
But often also people are asked (say) on a scale of 1 to 5, how do you rate something?
Examples abound: websites, sports, other kinds of competitions and indeed education. Here people are being shown a scale and being asked to use it.
It is widely understood that non-integers make sense, but you are just being allowed to use integers as a convention. Is this an ordinal scale? Some say yes, some say no. Otherwise put, part of the problem is that what counts as an ordinal scale is itself a fuzzy or debated area.
Consider again grades for academic work, say E to A. Often such grades are also treated numerically, say as 1 to 5, and routinely people calculate averages for students, courses, schools, etc. and do further analyses of such data. While it remains true that any mapping to numeric scores is arbitrary but acceptable so long as it preserves order, nevertheless in practice people assigning and receiving the grades know that scores have numeric equivalents and know that grades will be averaged.
One pragmatic reason for using means is that medians and modes are often poor summaries of the information in the data. Suppose you have a scale running from strongly disagree to strongly agree and for convenience code those points 1 to 5. Now imagine one sample coded 1, 1, 2, 2, 2 and another 1, 2, 2, 4, 5. Now raise your hands if you think that median and mode are the only justifiable summaries because it's an ordinal scale. Now raise your hands if you find the mean useful too, regardless of whether sums are well defined, etc.
Naturally, the mean would be a hypersensitive summary if the codes were the squares or cubes of 1 to 5, say, and that might not be what you want. (If your aim is to identify high-fliers quickly it might be exactly what you want!) But that's precisely why conventional coding with successive integer codes is a practical choice, because it often works quite well in practice. That is not an argument which carries any weight with measurement theorists, nor should it, but data analysts should be interested in producing information-rich summaries.
I agree with anyone who says: use the entire distribution of grade frequencies, but that is not the point at issue.
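The coding-dependence argument is easy to demonstrate concretely; a quick sketch (Python, made-up responses) recodes the same ordered answers with two order-preserving schemes — successive integers versus the arbitrary 1, 23, 456, ... codes mentioned above — and compares summaries:

```python
import statistics

# the same ordered responses under two order-preserving codings
responses = ["SD", "D", "D", "A", "SA"]   # strongly disagree .. strongly agree
coding_a = {"SD": 1, "D": 2, "N": 3, "A": 4, "SA": 5}
coding_b = {"SD": 1, "D": 23, "N": 456, "A": 7890, "SA": 123456}

a = [coding_a[r] for r in responses]
b = [coding_b[r] for r in responses]

# the mean depends heavily on the (arbitrary) coding ...
print(statistics.mean(a), statistics.mean(b))
# ... while the median picks out the same response category either way
print(statistics.median(a), statistics.median(b))
```

Both medians correspond to "disagree"; the two means are not even on the same scale.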
|
Calculate mean of ordinal variable
|
A short answer is that this is contentious. Contrary to the advice you mention, people in many fields do take means of ordinal scales and are often happy that means do what they want. Grade-point aver
|
Calculate mean of ordinal variable
A short answer is that this is contentious. Contrary to the advice you mention, people in many fields do take means of ordinal scales and are often happy that means do what they want. Grade-point averages or the equivalent in many educational systems are one example.
However, ordinal data not being normally distributed is not a valid reason, because the mean is
widely used for non-normal distributions
well-defined mathematically for very many non-normal distributions, except in some pathological cases.
It may not be a good idea to use the mean in practice if data are definitely not normally distributed, but that's different.
A stronger reason for not using the mean with ordinal data is that its value depends on conventions on coding. Numerical codes such as 1, 2, 3, 4 are usually just chosen for simplicity or convenience, but in principle they could equally well be 1, 23, 456, 7890 as far as corresponding to a defined order is concerned. Taking the mean in either case would involve taking those conventions literally (namely, as if the numbers were not arbitrary, but justifiable), and there are no rigorous grounds for doing that. You need an interval scale in which equal differences between values can be taken literally to justify taking means. That I take to be the main argument, but as already indicated people often ignore it, and deliberately, because they find means useful, whatever measurement theorists say.
Here is an extra example. Often people are asked to choose one of "strongly disagree" ... "strongly agree" and (depending partly on what the software wants) researchers code that as 1 .. 5 or 0 .. 4 or whatever they want, or declare it as an ordered factor (or whatever term the software uses). Here the coding is arbitrary and hidden from the people who answer the question.
But often also people are asked (say) on a scale of 1 to 5, how do you rate something?
Examples abound: websites, sports, other kinds of competitions and indeed education. Here people are being shown a scale and being asked to use it.
It is widely understood that non-integers make sense, but you are just being allowed to use integers as a convention. Is this an ordinal scale? Some say yes, some say no. Otherwise put, part of the problem is that what counts as an ordinal scale is itself a fuzzy or debated area.
Consider again grades for academic work, say E to A. Often such grades are also treated numerically, say as 1 to 5, and routinely people calculate averages for students, courses, schools, etc. and do further analyses of such data. While it remains true that any mapping to numeric scores is arbitrary but acceptable so long as it preserves order, nevertheless in practice people assigning and receiving the grades know that scores have numeric equivalents and know that grades will be averaged.
One pragmatic reason for using means is that medians and modes are often poor summaries of the information in the data. Suppose you have a scale running from strongly disagree to strongly agree and for convenience code those points 1 to 5. Now imagine one sample coded 1, 1, 2, 2, 2 and another 1, 2, 2, 4, 5. Now raise your hands if you think that median and mode are the only justifiable summaries because it's an ordinal scale. Now raise your hands if you find the mean useful too, regardless of whether sums are well defined, etc.
Naturally, the mean would be a hypersensitive summary if the codes were the squares or cubes of 1 to 5, say, and that might not be what you want. (If your aim is to identify high-fliers quickly it might be exactly what you want!) But that's precisely why conventional coding with successive integer codes is a practical choice, because it often works quite well in practice. That is not an argument which carries any weight with measurement theorists, nor should it, but data analysts should be interested in producing information-rich summaries.
I agree with anyone who says: use the entire distribution of grade frequencies, but that is not the point at issue.
|
Calculate mean of ordinal variable
A short answer is that this is contentious. Contrary to the advice you mention, people in many fields do take means of ordinal scales and are often happy that means do what they want. Grade-point aver
|
9,968
|
Calculate mean of ordinal variable
|
Suppose we take ordinal values, e.g. 1 for strongly disagree, 2 for disagree, 3 for agree, and 4 for strongly agree. If four people give the responses 1, 2, 3 and 4, then what would be the mean? It is (1+2+3+4)/4 = 2.50.
How should that be interpreted? The four-person average response of 2.50 falls between "disagree" and "agree", a point that is not itself a category on the scale. That's why we should not use the mean for ordinal data.
|
Calculate mean of ordinal variable
|
Suppose we take ordinal values, e.g. 1 for strongly disagree, 2 for disagree, 3 for agree, and 4 for strongly agree. If four people give the responses 1,2,3 and 4, then what would be the mean? It is (
|
Calculate mean of ordinal variable
Suppose we take ordinal values, e.g. 1 for strongly disagree, 2 for disagree, 3 for agree, and 4 for strongly agree. If four people give the responses 1, 2, 3 and 4, then what would be the mean? It is (1+2+3+4)/4 = 2.50.
How should that be interpreted? The four-person average response of 2.50 falls between "disagree" and "agree", a point that is not itself a category on the scale. That's why we should not use the mean for ordinal data.
|
Calculate mean of ordinal variable
Suppose we take ordinal values, e.g. 1 for strongly disagree, 2 for disagree, 3 for agree, and 4 for strongly agree. If four people give the responses 1,2,3 and 4, then what would be the mean? It is (
|
9,969
|
Calculate mean of ordinal variable
|
I totally agree with @Azeem. But just to drive this point home let me elaborate a bit further.
Let's say you have ordinal data like in the example from @Azeem, where your scale ranges from 1 through 4. And let's also say you have a couple of people rating something (like Ice Cream) on this scale. Imagine that you get the following results:
Person A said 4
Person B said 3
Person C said 1
Person D said 2
When you want to interpret the results, you can conclude something to the extent of:
Person A liked Ice Cream more than Person B
Person D liked Ice Cream more than Person C
However, you don't know anything about the intervals between the ratings. Is the difference between 1 and 2 the same as that between 3 and 4? Does a rating of 4 really mean that the person likes Ice Cream 4 times more than someone who rates it as 1? And so on... When you compute the arithmetic mean, you treat the numbers as if the differences between them were equal. But that's a pretty strong assumption with ordinal data and you would have to justify it.
|
Calculate mean of ordinal variable
|
I totally agree with @Azeem. But just to drive this point home let me elaborate a bit further.
Let's say you have ordinal data like in the example from @Azeem, where your scale ranges from 1 through 4
|
Calculate mean of ordinal variable
I totally agree with @Azeem. But just to drive this point home let me elaborate a bit further.
Let's say you have ordinal data like in the example from @Azeem, where your scale ranges from 1 through 4. And let's also say you have a couple of people rating something (like Ice Cream) on this scale. Imagine that you get the following results:
Person A said 4
Person B said 3
Person C said 1
Person D said 2
When you want to interpret the results, you can conclude something to the extent of:
Person A liked Ice Cream more than Person B
Person D liked Ice Cream more than Person C
However, you don't know anything about the intervals between the ratings. Is the difference between 1 and 2 the same as that between 3 and 4? Does a rating of 4 really mean that the person likes Ice Cream 4 times more than someone who rates it as 1? And so on... When you compute the arithmetic mean, you treat the numbers as if the differences between them were equal. But that's a pretty strong assumption with ordinal data and you would have to justify it.
|
Calculate mean of ordinal variable
I totally agree with @Azeem. But just to drive this point home let me elaborate a bit further.
Let's say you have ordinal data like in the example from @Azeem, where your scale ranges from 1 through 4
|
9,970
|
Calculate mean of ordinal variable
|
I agree with the idea that the arithmetic mean cannot be truly justified for ordinal-scale data. Instead of calculating the mean, we can use the mode or the median in such situations, which can give us a more meaningful interpretation of our results.
|
Calculate mean of ordinal variable
|
I agree with the concept that arithmetic mean cannot be truly justified in ordinal scale data. Instead of calculating mean we can use mode or median in such situations which can give us more meaningfu
|
Calculate mean of ordinal variable
I agree with the idea that the arithmetic mean cannot be truly justified for ordinal-scale data. Instead of calculating the mean, we can use the mode or the median in such situations, which can give us a more meaningful interpretation of our results.
|
Calculate mean of ordinal variable
I agree with the concept that arithmetic mean cannot be truly justified in ordinal scale data. Instead of calculating mean we can use mode or median in such situations which can give us more meaningfu
|
9,971
|
How large a training set is needed?
|
The search term you are looking for is "learning curve", which gives the (average) model performance as function of the training sample size.
Learning curves depend on a lot of things, e.g.
classification method
complexity of the classifier
how well the classes are separated.
(I think for two-class LDA you may be able to derive some theoretical power calculations, but the crucial point is always whether your data actually meet the "equal COV multivariate normal" assumption. I'd go for some simulation, both under the LDA assumptions and by resampling your already existing data.)
There are two aspects of the performance of a classifier trained on a finite sample size $n$ (as usual),
bias, i.e. on average a classifier trained on $n$ training samples is worse than the classifier trained on $n = \infty$ training cases (this is usually what is meant by the learning curve), and
variance: a given training set of $n$ cases may lead to quite different model performance.
Even with few cases, you may be lucky and get good results. Or you have bad luck and get a really bad classifier.
As usual, this variance decreases with increasing training sample size $n$.
Another aspect that you may need to take into account is that it is usually not enough to train a good classifier; you also need to prove that the classifier is good (or good enough). So you also need to plan the sample size required for validation with a given precision. If you need to give these results as a fraction of successes among so many test cases (e.g. producer's or consumer's accuracy / precision / sensitivity /
positive predictive value), and the underlying classification task is rather easy, this can require more independent cases than the training of a good model.
As a rule of thumb, for training, the sample size is usually discussed in relation to model complexity (number of cases : number of variates), whereas absolute bounds on the test sample size can be given for a required precision of the performance measurement.
Here's a paper where we explain these things in more detail and also discuss how to construct learning curves:
Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33.
DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
This is the "teaser", showing an easy classification problem (we actually have one easy distinction like this in our classification problem, but other classes are far more difficult to distinguish):
We did not try to extrapolate to larger training sample sizes to determine how much more training cases are needed, because the test sample sizes are our bottleneck, and larger training sample sizes would let us construct more complex models, so extrapolation is questionable. For the kind of data sets I have, I'd approach this iteratively, measuring a bunch of new cases, showing how much things improved, measure more cases, and so on.
This may be different for you, but the paper contains literature references to papers using extrapolation to higher sample sizes in order to estimate the required number of samples.
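To illustrate what such a learning curve looks like, here is a small self-contained simulation sketch (Python; a toy nearest-centroid classifier on two Gaussian classes — my own setup, not the models or data from the paper). It shows both aspects discussed above: mean test accuracy rises with the training sample size, and its variability across training sets shrinks:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_accuracy(n_train, n_test=1000, d=5, delta=1.0, reps=100):
    """Mean and SD of test accuracy for a nearest-centroid rule,
    trained on n_train cases per class, over `reps` training sets."""
    accs = []
    mu0, mu1 = np.zeros(d), np.full(d, delta)
    for _ in range(reps):
        # draw a fresh training set and estimate the class centroids
        c0 = rng.normal(mu0, 1.0, size=(n_train, d)).mean(axis=0)
        c1 = rng.normal(mu1, 1.0, size=(n_train, d)).mean(axis=0)
        # large common-style test set
        te = np.vstack([rng.normal(mu0, 1.0, size=(n_test, d)),
                        rng.normal(mu1, 1.0, size=(n_test, d))])
        lab = np.repeat([0, 1], n_test)
        pred = (np.linalg.norm(te - c1, axis=1)
                < np.linalg.norm(te - c0, axis=1)).astype(int)
        accs.append(np.mean(pred == lab))
    return np.mean(accs), np.std(accs)

sizes = [2, 5, 10, 25, 50]
curve = {n: simulate_accuracy(n) for n in sizes}
for n in sizes:
    print(n, "mean acc = %.3f, sd = %.3f" % curve[n])
```

The mean column traces the learning curve (bias), while the sd column shows the training-set-to-training-set variance shrinking with $n$.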
|
How large a training set is needed?
|
The search term you are looking for is "learning curve", which gives the (average) model performance as function of the training sample size.
Learning curves depend on a lot of things, e.g.
classifi
|
How large a training set is needed?
The search term you are looking for is "learning curve", which gives the (average) model performance as function of the training sample size.
Learning curves depend on a lot of things, e.g.
classification method
complexity of the classifier
how well the classes are separated.
(I think for two-class LDA you may be able to derive some theoretical power calculations, but the crucial point is always whether your data actually meet the "equal COV multivariate normal" assumption. I'd go for some simulation, both under the LDA assumptions and by resampling your already existing data.)
There are two aspects of the performance of a classifier trained on a finite sample size $n$ (as usual),
bias, i.e. on average a classifier trained on $n$ training samples is worse than the classifier trained on $n = \infty$ training cases (this is usually what is meant by the learning curve), and
variance: a given training set of $n$ cases may lead to quite different model performance.
Even with few cases, you may be lucky and get good results. Or you have bad luck and get a really bad classifier.
As usual, this variance decreases with increasing training sample size $n$.
Another aspect that you may need to take into account is that it is usually not enough to train a good classifier; you also need to prove that the classifier is good (or good enough). So you also need to plan the sample size required for validation with a given precision. If you need to give these results as a fraction of successes among so many test cases (e.g. producer's or consumer's accuracy / precision / sensitivity /
positive predictive value), and the underlying classification task is rather easy, this can require more independent cases than the training of a good model.
As a rule of thumb, for training, the sample size is usually discussed in relation to model complexity (number of cases : number of variates), whereas absolute bounds on the test sample size can be given for a required precision of the performance measurement.
Here's a paper where we explain these things in more detail and also discuss how to construct learning curves:
Beleites, C. and Neugebauer, U. and Bocklitz, T. and Krafft, C. and Popp, J.: Sample size planning for classification models. Anal Chim Acta, 2013, 760, 25-33.
DOI: 10.1016/j.aca.2012.11.007
accepted manuscript on arXiv: 1211.1323
This is the "teaser", showing an easy classification problem (we actually have one easy distinction like this in our classification problem, but other classes are far more difficult to distinguish):
We did not try to extrapolate to larger training sample sizes to determine how much more training cases are needed, because the test sample sizes are our bottleneck, and larger training sample sizes would let us construct more complex models, so extrapolation is questionable. For the kind of data sets I have, I'd approach this iteratively, measuring a bunch of new cases, showing how much things improved, measure more cases, and so on.
This may be different for you, but the paper contains literature references to papers using extrapolation to higher sample sizes in order to estimate the required number of samples.
|
How large a training set is needed?
The search term you are looking for is "learning curve", which gives the (average) model performance as function of the training sample size.
Learning curves depend on a lot of things, e.g.
classifi
|
9,972
|
How large a training set is needed?
|
Asking about training sample size implies you are going to hold back data for model validation. This is an unstable process requiring a huge sample size. Strong internal validation with the bootstrap is often preferred. If you choose that path, you only need to compute the one sample size. As @cbeleites so nicely stated, this is often an "events per candidate variable" assessment, but you need a minimum of 96 observations to accurately predict the probability of a binary outcome even if there are no features to be examined [this is to achieve a 0.95 confidence-level margin of error of 0.1 in estimating the actual marginal probability that Y=1].
It is important to consider proper scoring rules for accuracy assessment (e.g., Brier score and log likelihood/deviance). Also make sure you really want to classify observations as opposed to estimating membership probability. The latter is almost always more useful as it allows a gray zone.
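The 96-observation figure can be checked with the usual normal-approximation margin of error for a proportion, taking the worst case p = 0.5; a quick sketch:

```python
from math import sqrt, ceil

z = 1.96     # two-sided 0.95 normal quantile
moe = 0.10   # target margin of error
p = 0.5      # worst case: p(1 - p) = 0.25 maximizes the variance

# solve z * sqrt(p(1 - p)/n) <= moe for n
n = (z / moe) ** 2 * p * (1 - p)
print(n, ceil(n))                    # just above 96
print(z * sqrt(p * (1 - p) / 96))    # margin at n = 96 is ~0.1000
```

So n = 96 delivers a margin of error of essentially 0.1, which is where the rule of thumb comes from.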
|
How large a training set is needed?
|
Asking about training sample size implies you are going to hold back data for model validation. This is an unstable process requiring a huge sample size. Strong internal validation with the bootstra
|
How large a training set is needed?
Asking about training sample size implies you are going to hold back data for model validation. This is an unstable process requiring a huge sample size. Strong internal validation with the bootstrap is often preferred. If you choose that path you need to only compute the one sample size. As @cbeleites so nicely stated this is often an "events per candidate variable" assessment, but you need a minimum of 96 observations to accurately predict the probability of a binary outcome even if there are no features to be examined [this is to achieve of 0.95 confidence margin of error of 0.1 in estimating the actual marginal probability that Y=1].
It is important to consider proper scoring rules for accuracy assessment (e.g., Brier score and log likelihood/deviance). Also make sure you really want to classify observations as opposed to estimating membership probability. The latter is almost always more useful as it allows a gray zone.
|
How large a training set is needed?
Asking about training sample size implies you are going to hold back data for model validation. This is an unstable process requiring a huge sample size. Strong internal validation with the bootstra
|
9,973
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
Questions about robustness are very hard to answer well - because the assumptions may be violated in so many ways, and in each way to different degrees. Simulation work can only sample a very small portion of the possible violations.
Given the state of computing, I think it is often worth the time to run both a parametric and a non-parametric test, if both are available. You can then compare results.
If you are really ambitious, you could even do a permutation test.
What if Alan Turing had done his work before Ronald Fisher did his? :-).
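For the "really ambitious" option, a permutation test for a difference in group means takes only a few lines; a minimal sketch (Python, made-up data):

```python
import numpy as np

def perm_test_mean_diff(x, y, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(np.mean(x) - np.mean(y))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(np.mean(perm[:len(x)]) - np.mean(perm[len(x):]))
        count += diff >= observed
    # add 1 to numerator and denominator so the p-value is never exactly 0
    return (count + 1) / (n_perm + 1)

x = np.array([4.1, 5.0, 6.2, 5.5, 4.8])
y = np.array([5.9, 6.8, 7.1, 6.4, 7.6])
pval = perm_test_mean_diff(x, y)
print(pval)
```

No distributional assumption is needed beyond exchangeability under the null, which is exactly why it pairs well with the parametric and rank-based alternatives for comparison.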
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
Questions about robustness are very hard to answer well - because the assumptions may be violated in so many ways, and in each way to different degrees. Simulation work can only sample a very small po
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
Questions about robustness are very hard to answer well - because the assumptions may be violated in so many ways, and in each way to different degrees. Simulation work can only sample a very small portion of the possible violations.
Given the state of computing, I think it is often worth the time to run both a parametric and a non-parametric test, if both are available. You can then compare results.
If you are really ambitious, you could even do a permutation test.
What if Alan Turing had done his work before Ronald Fisher did his? :-).
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
Questions about robustness are very hard to answer well - because the assumptions may be violated in so many ways, and in each way to different degrees. Simulation work can only sample a very small po
|
9,974
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
@PeterFlom hit the nail dead on with his first sentence.
I'll try to give a rough summary of the studies I have seen (if you want links, it could be a while):
Overall, the two sample t-test is reasonably power-robust to symmetric non-normality (the true type-I-error-rate is affected somewhat by kurtosis, the power is impacted more by that).
When the two samples are mildly skew in the same direction, the one-tailed t-test is no longer unbiased. The t-statistic is skewed oppositely to the distribution, and has much more power if the test is in one direction than if it's in the other. If they're skew in opposite directions, the type I error rate can be heavily affected.
Heavy skewness can have bigger impacts, but generally speaking, moderate skewness with a two-tailed test isn't too bad if you don't mind your test in essence allocating more of its power to one direction than the other.
In short - the two-tailed, two-sample t-test is reasonably robust to those kinds of things if you can tolerate some impact on the significance level and some mild bias.
There are many, many, ways for distributions to be non-normal, though, which aren't covered by those comments.
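These effects are easy to see in a small simulation; here is a sketch (Python, with my own settings: heavily skewed standardized chi-square(1) samples, pooled two-sided t-test, n = 20 per group) comparing same-direction with opposite-direction skew under a true null of equal means:

```python
import numpy as np

rng = np.random.default_rng(2)
n, iters = 20, 5000
t_crit = 2.024  # approx. two-sided 5% critical value of t with 38 df

def pooled_t(x, y):
    # classical equal-variance two-sample t statistic
    sp2 = ((len(x) - 1) * np.var(x, ddof=1)
           + (len(y) - 1) * np.var(y, ddof=1)) / (len(x) + len(y) - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1/len(x) + 1/len(y)))

def skewed(size, flip=False):
    # standardized chi-square(1): mean 0, sd 1, heavily right-skewed
    z = (rng.chisquare(1, size) - 1) / np.sqrt(2)
    return -z if flip else z

rej_same = rej_opp = 0
for _ in range(iters):
    rej_same += abs(pooled_t(skewed(n), skewed(n))) > t_crit
    rej_opp += abs(pooled_t(skewed(n), skewed(n, flip=True))) > t_crit

print("same-direction skew: %.3f" % (rej_same / iters))
print("opposite skew:       %.3f" % (rej_opp / iters))
```

With skew in the same direction the two-tailed level stays near nominal, while opposite skews inflate the type I error rate, matching the points above.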
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
@PeterFlom hit the nail dead on with his first sentence.
I'll try to give a rough summary of what studies I have seen (if you want links it could be a while):
Overall, the two sample t-test is reasona
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
@PeterFlom hit the nail dead on with his first sentence.
I'll try to give a rough summary of the studies I have seen (if you want links, it could be a while):
Overall, the two sample t-test is reasonably power-robust to symmetric non-normality (the true type-I-error-rate is affected somewhat by kurtosis, the power is impacted more by that).
When the two samples are mildly skew in the same direction, the one-tailed t-test is no longer unbiased. The t-statistic is skewed oppositely to the distribution, and has much more power if the test is in one direction than if it's in the other. If they're skew in opposite directions, the type I error rate can be heavily affected.
Heavy skewness can have bigger impacts, but generally speaking, moderate skewness with a two-tailed test isn't too bad if you don't mind your test in essence allocating more of its power to one direction than the other.
In short - the two-tailed, two-sample t-test is reasonably robust to those kinds of things if you can tolerate some impact on the significance level and some mild bias.
There are many, many, ways for distributions to be non-normal, though, which aren't covered by those comments.
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
@PeterFlom hit the nail dead on with his first sentence.
I'll try to give a rough summary of what studies I have seen (if you want links it could be a while):
Overall, the two sample t-test is reasona
|
9,975
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
@PeterFlom has already mentioned that simulation studies can never cover all scenarios and possibilities and therefore cannot lead to a definite answer. However, I still find it useful to actually explore an issue like this by conducting some simulations (this also happens to be exactly the type of exercise that I like to use when introducing the idea of Monte Carlo simulation studies to students). So, let's actually try this out. I'll use R for this.
The Code
n1 <- 33
n2 <- 45
mu1 <- 0
mu2 <- 0
sd1 <- 1
sd2 <- 1
iters <- 100000
p1 <- p2 <- p3 <- p4 <- p5 <- rep(NA, iters)
for (i in 1:iters) {
### normal distributions
x1 <- rnorm(n1, mu1, sd1)
x2 <- rnorm(n2, mu2, sd2)
p1[i] <- t.test(x1, x2)$p.value
### both variables skewed to the right
x1 <- (rchisq(n1, df=1) - 1)/sqrt(2) * sd1 + mu1
x2 <- (rchisq(n2, df=1) - 1)/sqrt(2) * sd2 + mu2
p2[i] <- t.test(x1, x2)$p.value
### both variables skewed to the left
x1 <- -1 * (rchisq(n1, df=1) - 1)/sqrt(2) * sd1 + mu1
x2 <- -1 * (rchisq(n2, df=1) - 1)/sqrt(2) * sd2 + mu2
p3[i] <- t.test(x1, x2)$p.value
### first skewed to the left, second skewed to the right
x1 <- -1 * (rchisq(n1, df=1) - 1)/sqrt(2) * sd1 + mu1
x2 <- (rchisq(n2, df=1) - 1)/sqrt(2) * sd2 + mu2
p4[i] <- t.test(x1, x2)$p.value
### first skewed to the right, second skewed to the left
x1 <- (rchisq(n1, df=1) - 1)/sqrt(2) * sd1 + mu1
x2 <- -1 * (rchisq(n2, df=1) - 1)/sqrt(2) * sd2 + mu2
p5[i] <- t.test(x1, x2)$p.value
}
print(round((apply(cbind(p1, p2, p3, p4, p5), 2, function(p) mean(p <= .05))), 3))
Explanation
First we set the group size (n1 and n2), the true group means (mu1 and mu2), and the true standard deviations (sd1 and sd2).
Then we define the number of iterations to run and set up vectors to store the p-values in.
Then I simulate data under 5 scenarios:
Both distributions are normal.
Both distributions are skewed to the right.
Both distributions are skewed to the left.
The first distribution is skewed to the left, the second to the right.
The first distribution is skewed to the right, the second to the left.
Note that I am using chi-squared distributions for generating the skewed distributions. With one degree of freedom, those are heavily skewed distributions. Since the true mean and variance of a chi-squared distribution with one degree of freedom is equal to 1 and 2, respectively (see wikipedia), I rescale those distributions to first have mean 0 and standard deviation 1 and then rescale them to have the desired true mean and standard deviation (this could be done in one step, but doing it this way may be clearer).
In each case, I apply the t-test (Welch's version -- one could of course also consider Student's version that does assume equal variances in the two groups) and save the p-value to the vectors set up earlier.
Finally, once all iterations are complete, I compute for each vector how often the p-value is equal to or below .05 (i.e., the test is "significant"). This is the empirical rejection rate.
Some Results
Simulating exactly as described above yields:
p1 p2 p3 p4 p5
0.049 0.048 0.047 0.070 0.070
So, when the skewness is in the same direction in both groups, the Type I error rate appears to be quite close to being well controlled (i.e., it is quite close to the nominal $\alpha = .05$). When the skewness is in opposite directions, there is some slight inflation in the Type I error rate.
If we change the code to mu1 <- .5, then we get:
p1 p2 p3 p4 p5
0.574 0.610 0.606 0.592 0.602
So, compared to the case where both distributions are normal (as assumed by the test), power actually appears to be slightly higher when the skewness is in the same direction! If you are surprised by this, you may want to rerun this a few times (of course, each time getting slightly different results), but the pattern will remain.
Note that we have to be careful with interpreting the empirical power values under the two scenarios where the skewness is in opposite directions, since the Type I error rate is not quite nominal (as an extreme case, suppose I always reject regardless of what the data show; then I will always have a test with maximal power, but of course the test also has a rather inflated Type I error rate).
One could start exploring a range of values for mu1 (and mu2 -- but what really matters is the difference between the two) and, more importantly, start changing the true standard deviations of the two groups (i.e., sd1 and sd2) and especially making them unequal. I also stuck to the sample sizes mentioned by the OP, but of course that could be adjusted as well. And skewness could of course take many other forms than what we see in a chi-squared distribution with one degree of freedom. I still think approaching things this way is useful, despite the fact that it cannot yield a definite answer.
|
9,976
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
In your situation, the t-test will likely be robust in terms of Type I error rate, but not Type II error rate. You would probably achieve more power through either a) a Kruskal-Wallis test, or b) a normalizing transformation prior to a t-test.
I'm basing this conclusion on two Monte Carlo studies. In the first (Khan & Rayner, 2003), skew and kurtosis were indirectly manipulated via the parameters of the g-and-k distribution family, and the resulting power was examined. Importantly, the Kruskal-Wallis test's power was less damaged by non-normality, particularly for n>=15.
A few caveats/qualifications about this study: Power was often hurt by high kurtosis, but it was less affected by skew. At first glance, this pattern might seem less relevant to your situation given that you noted a problem with skew, not kurtosis. However, I'm betting that excess kurtosis is also extreme in your case. Keep in mind that excess kurtosis will be at least as high as skew^2 - 2. (Let excess kurtosis equal the 4th standardized moment minus 3, so that excess kurtosis=0 for a normal distribution.) Note also that Khan and Rayner (2003) examined ANOVAs with 3 groups, but their results are likely to generalize to a two-sample t-test.
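As a quick numerical check of that bound (my own illustration, not from the studies cited), consider the chi-squared distribution with 1 degree of freedom — the same heavily skewed distribution used in the simulation answer above:

```python
from scipy import stats

# Chi-squared(1) is heavily skewed; scipy reports *excess* kurtosis here.
# Closed forms for chi-squared(k): skew = sqrt(8/k), excess kurtosis = 12/k.
_, _, skew, ex_kurt = stats.chi2.stats(df=1, moments='mvsk')
print(skew, ex_kurt)  # about 2.83 and 12.0

# The lower bound mentioned above: excess kurtosis >= skew^2 - 2
assert ex_kurt >= skew**2 - 2  # 12 >= 8 - 2 = 6
```

So a distribution skewed enough to worry about practically guarantees substantial excess kurtosis as well.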
A second relevant study (Beasley, Erickson, & Allison, 2009) examined both Type I and Type II errors with various non-normal distributions, such as a Chi-squared(1) and Weibull(1,.5). For sample sizes of at least 25, the t-test adequately controlled the Type I error rate at or below the nominal alpha level. However, power was highest with either a Kruskal-Wallis test or with a Rank-based Inverse Normal transformation (Blom scores) applied prior to the t-test. Beasley and colleagues generally argued against the normalizing approach, but it should be noted that the normalizing approach controlled the Type I error rate for n>=25, and its power sometimes slightly exceeded that of the Kruskal-Wallis test. That is, the normalizing approach seems promising for your situation. See tables 1 and 4 in their article for details.
References:
Khan, A., & Rayner, G. D. (2003). Robustness to non-normality of common tests for the many-sample location problem. Journal of Applied Mathematics and Decision Sciences, 7, 187-206.
Beasley, T. M., Erickson, S., & Allison, D. B. (2009). Rank-based inverse normal transformations are increasingly used, but are they merited? Behavioral Genetics, 39, 580-595.
|
9,977
|
How robust is the independent samples t-test when the distributions of the samples are non-normal?
|
First of all, if you assume that the distribution of the two samples is different, make sure you are using Welch's version of the t-test which assumes unequal variances between the groups. This will at least attempt to account for some of the differences that occur because of the distribution.
If we look at the formula for the Welch's t-test:
$$
t = {\overline{X}_1 - \overline{X}_2 \over s_{\overline{X}_1 - \overline{X}_2}}
$$
where $s_{\overline{X}_1 - \overline{X}_2}$ is
$$
s_{\overline{X}_1 - \overline{X}_2} = \sqrt{{s_1^2 \over n_1} + {s_2^2 \over n_2}}
$$
we can see that every time there is an $s$ the variance is being taken into account. Let's imagine that the two variances are in fact the same, but one distribution is skewed, leading to a different variance estimate. If this estimate of the variance is not actually representative of your data because of the skew, then the actual biasing effect will essentially be the square root of that bias divided by the number of data points used to calculate it. Thus the effect of bad variance estimators is muffled a bit by the square root and a higher $n$, and that is probably why the consensus is that the t-test remains robust.
The other issue of skewed distributions is that mean calculation will also be affected, and this is probably where the real problems of test assumption violations are since the means are relatively sensitive to skew. And the robustness of the test can be determined roughly by calculating the difference in means, compared to the difference in medians (as an idea). Perhaps you could even try replacing the difference in means by the difference in medians in the t-test as a more robust measure (I'm sure someone has discussed this but I couldn't find something on google quickly enough to link to).
I would also suggest running a permutation test if all you are doing is a t-test. The permutation test is an exact test, independent of distribution assumptions. Most importantly, the permutation test and the t-test will lead to essentially identical results if the assumptions of the parametric test are met. Therefore, the robustness measure you seek can be 1 minus the absolute difference between the permutation and t-test p-values, where a score of 1 implies perfect robustness and 0 implies no robustness at all.
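A minimal sketch of that comparison (my own toy example in Python; the sample sizes match the OP's, but the effect size and seed are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def permutation_pvalue(x1, x2, n_perm=10_000, rng=rng):
    """Two-sided permutation test on the difference in sample means."""
    observed = abs(x1.mean() - x2.mean())
    pooled = np.concatenate([x1, x2])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(perm[:len(x1)].mean() - perm[len(x1):].mean())
        hits += diff >= observed
    return (hits + 1) / (n_perm + 1)  # small-sample correction

# Illustrative data: two groups with a genuine mean difference
x1 = rng.normal(0.0, 1.0, 33)
x2 = rng.normal(1.5, 1.0, 45)

p_welch = stats.ttest_ind(x1, x2, equal_var=False).pvalue  # Welch's t-test
p_perm = permutation_pvalue(x1, x2)

# The robustness measure suggested above
robustness = 1 - abs(p_perm - p_welch)
```

With normal data the two p-values agree closely, so the robustness score sits near 1; rerunning with heavily skewed samples lets you see how far they drift apart.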
|
9,978
|
Kullback-Leibler divergence WITHOUT information theory
|
There is a purely statistical approach to Kullback-Leibler divergence: take a sample $X_1,\ldots,X_n$ iid from an unknown distribution $p^\star$ and consider the potential fit by a family of distributions, $$\mathfrak{F}=\{p_\theta\,,\ \theta\in\Theta\}$$The corresponding likelihood is defined as
$$L(\theta|x_1,\ldots,x_n)=\prod_{i=1}^n p_\theta(x_i)$$
and its logarithm is
$$\ell(\theta|x_1,\ldots,x_n)=\sum_{i=1}^n \log p_\theta(x_i)$$
Therefore, by the law of large numbers, as $n\to\infty$, $$\frac{1}{n} \ell(\theta|x_1,\ldots,x_n) \longrightarrow
\mathbb{E}_{p^\star}[\log p_\theta(X)]=\int \log p_\theta(x)\,p^\star(x)\text{d}x$$
which is the interesting part of the Kullback-Leibler divergence between $p_\theta$ and $p^\star$ $$\mathfrak{H}(p_\theta|p^\star)\stackrel{\text{def}}{=}\int \log \{p^\star(x)/p_\theta(x)\}\,p^\star(x)\text{d}x$$the other part$$\int \log \{p^\star(x)\}\,p^\star(x)\text{d}x$$being there to have the minimum [in $\theta$] of $\mathfrak{H}(p_\theta|p^\star)$ equal to zero.
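This connection is easy to check numerically. A hedged sketch (my own example with two normal models, not from the answer): sampling from $p^\star = N(0,1)$, the gap in average log-likelihood between the true model and a misspecified $N(1,1)$ estimates $\mathfrak{H}(p_\theta|p^\star)$, whose closed form here is $1/2$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)  # iid sample from p* = N(0, 1)

# Average log-likelihood (1/n) * l(theta | x) under two candidate models
avg_ll_true = stats.norm.logpdf(x, loc=0.0, scale=1.0).mean()   # p_theta = p*
avg_ll_wrong = stats.norm.logpdf(x, loc=1.0, scale=1.0).mean()  # p_theta != p*

# Their difference is a Monte Carlo estimate of KL(p* || p_theta);
# for N(0,1) vs N(1,1) the closed form is (mu1 - mu2)^2 / 2 = 0.5
kl_estimate = avg_ll_true - avg_ll_wrong
```

Maximizing the likelihood over $\theta$ is thus (asymptotically) the same thing as minimizing the Kullback-Leibler divergence to $p^\star$.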
A book that connects divergence, information theory and statistical inference is Rissanen's Optimal estimation of parameters, which I reviewed here.
|
9,979
|
Kullback-Leibler divergence WITHOUT information theory
|
Here is a statistical interpretation of the Kullback-Leibler divergence, loosely taken from I.J. Good (Weight of evidence: A brief survey, Bayesian Statistics 2, 1985).
The weight of evidence.
Suppose you observe data points $x_1, x_2, \dots, x_n$ which you have reason to believe are independent samples from some unknown distribution having a density $f_0$. In the simplest case, you have two hypotheses $H_1$ and $H_2$ about what is $f_0$, say $H_1 = \{f_1\}$ and $H_2 = \{f_2\}$. Thus you have modelled the unknown $f_0$ as being one of $f_1$ or $f_2$.
The weight of evidence of the sample $x = (x_1, \dots, x_n)$ for $H_1$ against $H_2$ is defined as
$$
W(x) = \log \frac{f_1(x)}{f_2(x)} .
$$
It is an easy quantity to interpret, especially given a prior $P$ on the hypotheses $H_1$ and $H_2$. Indeed, in that case the posterior log-odds are $W$ plus the prior log-odds:
$$
\log \frac{P(H_1 | x)}{P(H_2 | x)} = W(x) + \log\frac{P(H_1)}{P(H_2)}.
$$
This quantity also has a number of convenient properties, such as additivity for independent samples:
$$
W(x_1, \dots, x_n) = W(x_1) + \dots +W(x_n) .
$$
Good provides further justification for the use of the weight of evidence, and $W(x)$ is also referred to by Kullback and Leibler (in the paper that introduced the K-L divergence) as "the information in $x$ for discrimination between $H_1$ and $H_2$".
In summary, given a sample $x$, the weight of evidence $W(x)$ is a concrete number meant to help you understand how much evidence you have at hand. Some people even use rules of thumb such as "$W(x) > 2$ is strong evidence" (I don't encourage the blind use of such tables, mind you).
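The definitions above can be computed directly. A small Python sketch (my own toy example with two normal hypotheses, not from Good's paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 50)  # data actually generated under H1's model

# W(x) = log f1(x)/f2(x), with H1: f1 = N(0,1) and H2: f2 = N(1,1)
w_per_obs = stats.norm.logpdf(x, 0.0, 1.0) - stats.norm.logpdf(x, 1.0, 1.0)
W = w_per_obs.sum()  # additivity over independent observations

# Posterior log-odds = W(x) + prior log-odds (equal priors here, so 0)
posterior_log_odds = W + np.log(0.5 / 0.5)
```

Since the data really come from $f_1$, $W$ is positive with high probability and grows roughly linearly in $n$; its per-observation expectation is exactly the K-L divergence discussed next.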
The Kullback-Leibler divergence
Now, the Kullback-Leibler divergence between $f_1$ and $f_2$ is the expected weight of evidence in a sample $x \sim f_1$. That is,
$$
KL(f_1, f_2) = \mathbb{E}_{x \sim f_1} W(x) = \int f_1 \log\frac{f_1}{f_2}.
$$
We should intuitively expect that a sample $x \sim f_1$ provides positive evidence in favor of $H_1 = \{f_1\}$ against $H_2$, and this is indeed reflected through the inequality
$$
\mathbb{E}_{x \sim f_1} W(x) \geq 0.
$$
|
9,980
|
Kullback-Leibler divergence WITHOUT information theory
|
I have yet to see a single explanation of how these two concepts are even related.
I don't know much about information theory, but this is how I think about it: when I hear an information theory person say "length of the message," my brain says "surprise." Surprise is 1.) random and 2.) subjective.
By 1.) I mean that "surprise" is just a transformation of your random variable $X$, using some distribution $q(X)$. Surprise is defined as $- \log q(X)$, and this is the definition whether or not you have a discrete random variable.
Surprise is a random variable, so eventually we want to take an expectation to make it a single number. By 2), when I say "subjective," I mean you can use whatever distribution you want ($q$), to transform $X$. The expectation, however, will always be taken with respect to the "true" distribution, $p$. These may or may not be equal. If you transform with the true $p$, you have $E_p[-\log p(X)]$, that's entropy. If some other distribution $q$ that's not equal to $p$, you get $E_p[-\log q(X)]$, and that's cross entropy. Notice how if you use the wrong distribution, you always have a higher expected surprise.
Instead of thinking about "how different they are" I think about the "increase in expected surprise from using the wrong distribution." This is all from properties of the logarithm.
$$
E_p[\log \left( \frac{p(X)}{q(X)} \right)] = E_p[-\log q(X)] - E_p[- \log p(X)] \ge 0.
$$
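For a discrete toy example (my own numbers), the three quantities line up exactly as in this identity:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])  # "true" distribution
q = np.array([0.8, 0.10, 0.10])  # subjective (wrong) distribution

entropy = -(p * np.log(p)).sum()        # E_p[-log p(X)]
cross_entropy = -(p * np.log(q)).sum()  # E_p[-log q(X)]
kl = cross_entropy - entropy            # E_p[log(p(X)/q(X))] >= 0
```

Using the wrong distribution can only raise the expected surprise: `kl` is nonnegative, and it is zero exactly when $q = p$.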
Edit
Response to: "Can you elaborate on how $−\log(q(x))$ is a measure of "surprise"? This quantity alone seems meaningless, as it is not even invariant under linear transforms of the sample space (I assume $q$ is a pdf)"
For one, think about what it maps values of $X$ to. If you have a $q$ that maps a certain value $x$ to $0$, then $-\log(0) = \infty$. For discrete random variables, realizations with probability $1$ have "surprise" $0$.
Second, $-\log$ is strictly decreasing, so there is no way rarer values get less surprise than more common ones.
For continuous random variables, a $q(x) > 1$ will coincide with a negative surprise. I guess this is a downside.
Olivier seems to be hinting at a property his "weight of evidence" quantity has that mine does not, which he calls an invariance under linear transformations (I'll admit I don't totally understand what he means by sample space). Presumably he is talking about the fact that if $X \sim q_X(x)$, then $Y=aX+b$ has density $q_Y(y) = q_X((y-b)/a)\,|1/a|$ as long as $X$ is continuous. Clearly $-\log q_X(X) \neq -\log q_Y(Y)$ due to the Jacobian.
I don't see how this renders the quantity "meaningless," though. In fact I have a hard time understanding why invariance is a desirable property in this case. Scale is probably important. Earlier, in a comment, I mentioned the example of variance, wherein the random variable we are taking the expectation of is $(X-EX)^2$. We could interpret this as "extremeness." This quantity suffers from a lack of invariance as well, but that doesn't render people's intuition about what variance is meaningless.
Edit 2: looks like I'm not the only one who thinks of this as "surprise." From here:
The residual information in data $y$ conditional on $\theta$ may be
defined (up to a multiplicative constant) as $-2 \log\{ p(y \mid
\theta)\}$ (Kullback and Leibler, 1951; Burnham and Anderson, 1998) and
can be interpreted as a measure of 'surprise' (Good, 1956),
logarithmic penalty (Bernardo, 1979) or uncertainty.
|
Kullback-Leibler divergence WITHOUT information theory
|
I have yet to see a single explanation of how these two concepts are even related.
I don't know much about information theory, but this is how I think about it: when I hear an information theory pers
|
Kullback-Leibler divergence WITHOUT information theory
I have yet to see a single explanation of how these two concepts are even related.
I don't know much about information theory, but this is how I think about it: when I hear an information theory person say "length of the message," my brain says "surprise." Surprise is 1.) random and 2.) subjective.
By 1.) I mean that "surprise" is just a transformation of your random variable $X$, using some distribution $q(X)$. Surprise is defined as $- \log q(X)$, and this is the definition whether or not you have a discrete random variable.
Surprise is a random variable, so eventually we want to take an expectation to make it a single number. By 2), when I say "subjective," I mean you can use whatever distribution you want ($q$) to transform $X$. The expectation, however, will always be taken with respect to the "true" distribution, $p$. These may or may not be equal. If you transform with the true $p$, you have $E_p[-\log p(X)]$; that's entropy. If you use some other distribution $q$ that's not equal to $p$, you get $E_p[-\log q(X)]$, and that's cross entropy. Notice how if you use the wrong distribution, you always have a higher expected surprise.
Instead of thinking about "how different they are" I think about the "increase in expected surprise from using the wrong distribution." This is all from properties of the logarithm.
$$
E_p[\log \left( \frac{p(X)}{q(X)} \right)] = E_p[-\log q(X)] - E_p[- \log p(X)] \ge 0.
$$
Edit
Response to: "Can you elaborate on how $−\log(q(x))$ is a measure of "surprise"? This quantity alone seems meaningless, as it is not even invariant under linear transforms of the sample space (I assume $q$ is a pdf)"
For one, think about what it maps values of $X$ to. If you have a $q$ that maps a certain value $x$ to $0$, then $-\log(0) = \infty$. For discrete random variables, realizations with probability $1$ have "surprise" $0$.
Second, $-\log$ is injective, so there is no way rarer values get less surprise than less rare ones.
For continuous random variables, a $q(x) > 1$ will coincide with a negative surprise. I guess this is a downside.
Olivier seems to be hinting at a property his "weight of evidence" quantity has that mine does not, which he calls an invariance under linear transformations (I'll admit I don't totally understand what he means by sample space). Presumably he is talking about if $X \sim q_X(x)$, then $Y=aX+b \sim q_X((y-b)/a)|1/a|$ as long as $X$ is continuous. Clearly $-\log q_X(X) \neq -\log q_Y(Y)$ due to the Jacobian.
I don't see how this renders the quantity "meaningless," though. In fact I have a hard time understanding why invariance is a desirable property in this case. Scale is probably important. Earlier, in a comment, I mentioned the example of variance, wherein the random variable we are taking the expectation of is $(X-EX)^2$. We could interpret this as "extremeness." This quantity suffers from lack of invariance as well, but it doesn't render meaningless people's intuition about what variance is.
Edit 2: looks like I'm not the only one who thinks of this as "surprise." From here:
The residual information in data $y$ conditional on $\theta$ may be
defined (up to a multiplicative constant) as $-2 \log\{ p(y \mid
\theta)\}$ (Kullback and Leibler, 1951; Burnham and Anderson, 1998) and
can be interpreted as a measure of 'surprise' (Good, 1956),
logarithmic penalty (Bernardo, 1979) or uncertainty.
|
Kullback-Leibler divergence WITHOUT information theory
I have yet to see a single explanation of how these two concepts are even related.
I don't know much about information theory, but this is how I think about it: when I hear an information theory pers
|
9,981
|
Kullback-Leibler divergence WITHOUT information theory
|
There is (also) a purely convex analytical viewpoint on the KL divergence, which personally I find very easy to understand.
Given any convex function $F: R^k \to R$, its Bregman divergence between $p$ and $q$ ($p, q \in R^k$) is the quantity
$$
F(p) - F(q) - \langle \nabla F(q), p - q \rangle
$$
This has a very simple interpretation: it is the difference between the actual value of the function at $p$ ($F(p)$) and the value extrapolated from a linear approximation of the function computed at $q$ (that is, $F(q) + \langle \nabla F(q), p - q \rangle$). Since the function is assumed convex, this value is always nonnegative and it is zero exactly when the linear approximation yields no error.
In the case of the KL divergence, $F$ is simply the negative entropy $F(p) = \sum_i p_i \log p_i$, which is indeed convex.
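This connection can be checked numerically. Here is a small sketch of my own (the probability vectors are arbitrary choices): with $F$ the negative entropy, the Bregman divergence between two probability vectors coincides with the KL divergence.

```python
# Bregman divergence of F(p) = sum_i p_i log p_i (negative entropy)
# recovers the KL divergence for probability vectors p and q.
import math

def bregman(F, gradF, p, q):
    # F(p) - F(q) - <grad F(q), p - q>
    return F(p) - F(q) - sum(g * (pi - qi) for g, pi, qi in zip(gradF(q), p, q))

F     = lambda v: sum(vi * math.log(vi) for vi in v)   # negative entropy
gradF = lambda v: [math.log(vi) + 1.0 for vi in v]     # its gradient

p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

assert abs(bregman(F, gradF, p, q) - kl) < 1e-12
```

Note that the agreement relies on $\sum_i p_i = \sum_i q_i = 1$, since the linear term $\sum_i (p_i - q_i)$ then vanishes.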
|
Kullback-Leibler divergence WITHOUT information theory
|
There is (also) a purely convex analytical viewpoint on the KL divergence, which personally I find very easy to understand.
Given any convex function $F: R^k \to R$, its Bregman divergence between $p
|
Kullback-Leibler divergence WITHOUT information theory
There is (also) a purely convex analytical viewpoint on the KL divergence, which personally I find very easy to understand.
Given any convex function $F: R^k \to R$, its Bregman divergence between $p$ and $q$ ($p, q \in R^k$) is the quantity
$$
F(p) - F(q) - \langle \nabla F(q), p - q \rangle
$$
This has a very simple interpretation: it is the difference between the actual value of the function at $p$ ($F(p)$) and the value extrapolated from a linear approximation of the function computed at $q$ (that is, $F(q) + \langle \nabla F(q), p - q \rangle$). Since the function is assumed convex, this value is always nonnegative and it is zero exactly when the linear approximation yields no error.
In the case of the KL divergence, $F$ is simply the negative entropy $F(p) = \sum_i p_i \log p_i$, which is indeed convex.
|
Kullback-Leibler divergence WITHOUT information theory
There is (also) a purely convex analytical viewpoint on the KL divergence, which personally I find very easy to understand.
Given any convex function $F: R^k \to R$, its Bregman divergence between $p
|
9,982
|
Can gradient descent be applied to non-convex functions?
|
The function you have graphed is indeed not convex. However, it is quasiconvex.
Gradient descent is a generic method for continuous optimization, so it can be, and is very commonly, applied to nonconvex functions. With a smooth function and a reasonably selected step size, it will generate a sequence of points $x_1, x_2, \ldots$ with strictly decreasing values $f(x_1) > f(x_2) > \ldots$.
Gradient descent will eventually converge to a stationary point of the function, regardless of convexity. If the function is convex, this will be a global minimum, but if not, it could be a local minimum or even a saddle point.
Quasiconvex functions are an interesting case. Any local minimum of a quasiconvex function is also a global minimum, but quasiconvex functions can also have stationary points that are not local minima (take $f(x) = x^3$ for example). So it's theoretically possible for gradient descent to get stuck on such a stationary point and not progress to a global min. In your example, if the shoulder on the left side of the graph were to perfectly level out, gradient descent could get stuck there. However, variants such as the heavy-ball method might be able to "roll through" and reach the global min.
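As a rough sketch of the smooth quasiconvex case (the function, step size, and starting point are my own choices, not from the question's graph), plain gradient descent on $f(x) = -e^{-x^2}$, which is nonconvex but unimodal with flat shoulders:

```python
# Gradient descent on f(x) = -exp(-x^2): smooth, nonconvex, quasiconvex,
# with a unique global minimum at x = 0.
import math

def f(x):  return -math.exp(-x * x)
def df(x): return 2 * x * math.exp(-x * x)

x, step = 1.5, 0.5
values = [f(x)]
for _ in range(50):
    x -= step * df(x)
    values.append(f(x))

# Objective values never increase and the iterates reach the global min;
# in exact arithmetic they would be strictly decreasing until convergence.
assert all(b <= a for a, b in zip(values, values[1:]))
assert abs(x) < 1e-6
```

If the shoulder were exactly flat (zero gradient), the iterates would stall there, which is the failure mode described above.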
|
Can gradient descent be applied to non-convex functions?
|
The function you have graphed is indeed not convex. However, it is quasiconvex.
Gradient descent is a generic method for continuous optimization, so it can be, and is very commonly, applied to noncon
|
Can gradient descent be applied to non-convex functions?
The function you have graphed is indeed not convex. However, it is quasiconvex.
Gradient descent is a generic method for continuous optimization, so it can be, and is very commonly, applied to nonconvex functions. With a smooth function and a reasonably selected step size, it will generate a sequence of points $x_1, x_2, \ldots$ with strictly decreasing values $f(x_1) > f(x_2) > \ldots$.
Gradient descent will eventually converge to a stationary point of the function, regardless of convexity. If the function is convex, this will be a global minimum, but if not, it could be a local minimum or even a saddle point.
Quasiconvex functions are an interesting case. Any local minimum of a quasiconvex function is also a global minimum, but quasiconvex functions can also have stationary points that are not local minima (take $f(x) = x^3$ for example). So it's theoretically possible for gradient descent to get stuck on such a stationary point and not progress to a global min. In your example, if the shoulder on the left side of the graph were to perfectly level out, gradient descent could get stuck there. However, variants such as the heavy-ball method might be able to "roll through" and reach the global min.
|
Can gradient descent be applied to non-convex functions?
The function you have graphed is indeed not convex. However, it is quasiconvex.
Gradient descent is a generic method for continuous optimization, so it can be, and is very commonly, applied to noncon
|
9,983
|
Can gradient descent be applied to non-convex functions?
|
Paul already mentioned one important point:
if f is convex there are no saddle points and all local minima are also global. Thus GD (with a suitable stepsize) is guaranteed to find a global minimizer.
What makes non-convex optimization hard is the presence of saddle points and local minima, where the gradient is $(0,\dots,0)$ and which can have arbitrarily bad objective values.
Finding the global minimizer in such a setting is generally NP-hard, and one instead settles for the goal of finding a local minimizer.
However, note that:
The probability of GD getting stuck at a saddle is actually 0 (see here).
However, the presence of saddle points might severely slow GD's progress because directions of low curvature are exploited too slowly (see here)
Depending on the dimensionality of your problem it might thus be advisable to go for a second-order optimization routine.
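The slowdown near saddle points can be seen in a minimal toy example of my own (not from the answer): $f(x, y) = x^2 - y^2$ has a saddle at the origin with gradient $(2x, -2y)$, and an iterate starting almost exactly on the ridge lingers near the saddle for many steps before the tiny $y$-component grows enough to escape.

```python
# Gradient descent hovering near the saddle of f(x, y) = x^2 - y^2.
step = 0.1
x, y = 1.0, 1e-8          # start almost exactly on the ridge
history = []
for _ in range(100):
    # descent step on the gradient (2x, -2y)
    x, y = x - step * 2 * x, y + step * 2 * y
    history.append((x, y))

assert abs(history[19][1]) < 1e-6   # after 20 steps: still stuck at the saddle
assert abs(history[-1][1]) > 0.1    # after 100 steps: finally escaping along y
```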
|
Can gradient descent be applied to non-convex functions?
|
Paul already mentioned one important point:
if f is convex there are no saddle points and all local minima are also global. Thus GD (with a suitable stepsize) is guaranteed to find a global minimizer
|
Can gradient descent be applied to non-convex functions?
Paul already mentioned one important point:
if f is convex there are no saddle points and all local minima are also global. Thus GD (with a suitable stepsize) is guaranteed to find a global minimizer.
What makes non-convex optimization hard is the presence of saddle points and local minima, where the gradient is (0,...,0) and that have an arbitrarily bad objective value.
Finding the global minmizer in such a setting is generally NP-hard and one instead settles with the goal of finding a local minimizer.
However, note that:
The probabiliy of GD to get stuck at a saddle is actually 0 (see here).
However, the presence of saddle points might severly slow GDs progress down because directions of low curvature are exploited too slowly (see here)
Depending on the dimensionality of your problem it might thus be advisable to go for a second-order optimization routine.
|
Can gradient descent be applied to non-convex functions?
Paul already mentioned one important point:
if f is convex there are no saddle points and all local minima are also global. Thus GD (with a suitable stepsize) is guaranteed to find a global minimizer
|
9,984
|
Is AdaBoost less or more prone to overfitting?
|
As you say a lot has been discussed about this matter, and there's some quite heavy theory that has gone along with it that I have to admit I never fully understood. In my practical experience AdaBoost is quite robust to overfitting, and LPBoost (Linear Programming Boosting) even more so (because the objective function requires a sparse combination of weak learners, which is a form of capacity control). The main factors that influence it are:
The "strength" of the "weak" learners: If you use very simple weak learners, such as decision stumps (1-level decision trees), then the algorithms are much less prone to overfitting. Whenever I've tried using more complicated weak learners (such as decision trees or even hyperplanes) I've found that overfitting occurs much more rapidly
The noise level in the data: AdaBoost is particularly prone to overfitting on noisy datasets. In this setting the regularised forms (RegBoost, AdaBoostReg, LPBoost, QPBoost) are preferable
The dimensionality of the data: We know that in general, we experience overfitting more in high dimensional spaces ("the curse of dimensionality"), and AdaBoost can also suffer in that respect, as it is simply a linear combination of classifiers which themselves suffer from the problem. Whether it is as prone as other classifiers is hard to determine.
Of course you can use heuristic methods such as validation sets or $k$-fold cross-validation to set the stopping parameter (or other parameters in the different variants) as you would for any other classifier.
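To make the "simple weak learners" point concrete, here is a toy from-scratch sketch of my own (not from the answer): AdaBoost with depth-1 decision stumps on a small 1-D dataset with labels in $\{-1, +1\}$. Each round reweights the data toward the current mistakes, so stumps that cannot separate the data individually still drive the training error to zero in combination.

```python
# Minimal AdaBoost with decision stumps h(x) = s if x > t else -s.
import math

X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5]
y = [ +1,  +1,  -1,  -1,  +1,  -1,  -1,  -1]

def best_stump(w):
    """Return (err, t, s, preds) for the weighted-error-minimizing stump."""
    best = None
    for t in [xi + 0.5 for xi in X]:
        for s in (+1, -1):
            preds = [s if xi > t else -s for xi in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, t, s, preds)
    return best

w = [1.0 / len(X)] * len(X)
ensemble = []                            # (alpha, threshold, polarity) triples
for _ in range(10):
    err, t, s, preds = best_stump(w)
    alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
    ensemble.append((alpha, t, s))
    # upweight mistakes, downweight correct points, renormalize
    w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
    total = sum(w)
    w = [wi / total for wi in w]

def predict(x):
    score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

train_err = sum(predict(xi) != yi for xi, yi in zip(X, y)) / len(X)
assert train_err == 0.0   # no single stump can separate this labeling
```

Swapping the stumps for deeper trees makes each round fit (and overfit) much more aggressively, which is the behaviour described above.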
|
Is AdaBoost less or more prone to overfitting?
|
As you say a lot has been discussed about this matter, and there's some quite heavy theory that has gone along with it that I have to admit I never fully understood. In my practical experience AdaBoos
|
Is AdaBoost less or more prone to overfitting?
As you say a lot has been discussed about this matter, and there's some quite heavy theory that has gone along with it that I have to admit I never fully understood. In my practical experience AdaBoost is quite robust to overfitting, and LPBoost (Linear Programming Boosting) even more so (because the objective function requires a sparse combination of weak learners, which is a form of capacity control). The main factors that influence it are:
The "strength" of the "weak" learners: If you use very simple weak learners, such as decision stumps (1-level decision trees), then the algorithms are much less prone to overfitting. Whenever I've tried using more complicated weak learners (such as decision trees or even hyperplanes) I've found that overfitting occurs much more rapidly
The noise level in the data: AdaBoost is particularly prone to overfitting on noisy datasets. In this setting the regularised forms (RegBoost, AdaBoostReg, LPBoost, QPBoost) are preferable
The dimensionality of the data: We know that in general, we experience overfitting more in high dimensional spaces ("the curse of dimensionality"), and AdaBoost can also suffer in that respect, as it is simply a linear combination of classifiers which themselves suffer from the problem. Whether it is as prone as other classifiers is hard to determine.
Of course you can use heuristic methods such as validation sets or $k$-fold cross-validation to set the stopping parameter (or other parameters in the different variants) as you would for any other classifier.
|
Is AdaBoost less or more prone to overfitting?
As you say a lot has been discussed about this matter, and there's some quite heavy theory that has gone along with it that I have to admit I never fully understood. In my practical experience AdaBoos
|
9,985
|
Is AdaBoost less or more prone to overfitting?
|
I agree with most of the points mentioned in tdc's comment. However, I have to add and correct a few things.
As shown in L2Boost by Peter Bühlmann, as the number of weak learners (rounds of boosting) increases, the bias converges exponentially fast while the variance increases by geometrically diminishing magnitudes which means: It overfits much slower than most of the other methods.
It was wrongly mentioned in Zach's comment that it is better than random forest in terms of overfitting. It is completely wrong. In fact, according to theory (look at the original random forest paper by Breiman), Random Forest is absolutely immune to overfitting as long as its weak classifiers don't overfit to the data.
Unlike what is mentioned in tdc's comment, most boosting methods are highly sensitive to labeling noise and may easily overfit in its presence.
In datasets where the Bayes error rate is far from 0 (i.e., the features are not discriminative enough), boosting methods can easily overfit as well, because they try to reduce the training error to zero while in reality even the optimal classifier, i.e., the Bayes classifier, may have an error rate of, say, 40%.
Finally, and this has not been published anywhere (to the best of my knowledge), there is a kind of overfitting in which the generalization error does not increase as the boosting rounds increase, but it does not decrease either. It means the algorithm is stuck in a local optimum. In this situation, the training error constantly decreases while the test error remains almost constant. So far, we have never considered this phenomenon an indication of overfitting, but I believe it is a sign of overfitting, and by using more complex weak learners we may (strangely!) in fact counteract it (this last point should be considered with caution :D)
|
Is AdaBoost less or more prone to overfitting?
|
I agree with most of the points mentioned in tdc comment. however, I have to add and correct few things.
As shown in L2Boost by Peter Bühlmann, as the number of weak learners (rounds of boosting) inc
|
Is AdaBoost less or more prone to overfitting?
I agree with most of the points mentioned in tdc's comment. However, I have to add and correct a few things.
As shown in L2Boost by Peter Bühlmann, as the number of weak learners (rounds of boosting) increases, the bias converges exponentially fast while the variance increases by geometrically diminishing magnitudes which means: It overfits much slower than most of the other methods.
It was wrongly mentioned in Zach's comment that it is better than random forest in terms of overfitting. It is completely wrong. In fact, according to theory (look at the original random forest paper by Breiman), Random Forest is absolutely immune to overfitting as long as its weak classifiers don't overfit to the data.
Unlike what is mentioned in tdc's comment, most boosting methods are highly sensitive to labeling noise and may easily overfit in its presence.
In datasets where the Bayes error rate is far from 0 (i.e., the features are not discriminative enough), boosting methods can easily overfit as well, because they try to reduce the training error to zero while in reality even the optimal classifier, i.e., the Bayes classifier, may have an error rate of, say, 40%.
Finally, and this has not been published anywhere (to the best of my knowledge), there is a kind of overfitting in which the generalization error does not increase as the boosting rounds increase, but it does not decrease either. It means the algorithm is stuck in a local optimum. In this situation, the training error constantly decreases while the test error remains almost constant. So far, we have never considered this phenomenon an indication of overfitting, but I believe it is a sign of overfitting, and by using more complex weak learners we may (strangely!) in fact counteract it (this last point should be considered with caution :D)
|
Is AdaBoost less or more prone to overfitting?
I agree with most of the points mentioned in tdc comment. however, I have to add and correct few things.
As shown in L2Boost by Peter Bühlmann, as the number of weak learners (rounds of boosting) inc
|
9,986
|
When would one use Gibbs sampling instead of Metropolis-Hastings?
|
Firstly, let me note [somewhat pedantically] that
There are several different kinds of MCMC algorithms:
Metropolis-Hastings, Gibbs, importance/rejection sampling (related).
importance and rejection sampling methods are not MCMC algorithms because they are not based on Markov chains. Actually, importance sampling does not produce a sample from the target distribution, $f$ say, but only importance weights $\omega$ say, to be used in Monte Carlo approximations of integrals related with $f$. Using those weights as probabilities to produce a sample does not lead to a proper sample from $f$, even though unbiased estimators of expectations under $f$ can be produced.
Secondly, the question
Why would someone go with Gibbs sampling instead of
Metropolis-Hastings? I suspect there are cases when inference is more
tractable with Gibbs sampling than with Metropolis-Hastings
does not have an answer in that a Metropolis-Hastings sampler can be almost anything, including a Gibbs sampler. I replied in rather detailed terms to an earlier and similar question. But let me add a few if redundant points here:
The primary reason why Gibbs sampling was introduced was to break the curse of dimensionality (which impacts both rejection and importance sampling) by producing a sequence of low dimension simulations that still converge to the right target, even though the dimension of the target impacts the speed of convergence. Metropolis-Hastings samplers are designed to create a Markov chain (like Gibbs sampling) based on a proposal (like importance and rejection sampling) by correcting for the wrong density through an acceptance-rejection step. But an important point is that they are not opposed: namely, Gibbs sampling may require Metropolis-Hastings steps when facing complex if low-dimension conditional targets, while Metropolis-Hastings proposals may be built on approximations to (Gibbs) full conditionals. In a formal definition, Gibbs sampling is a special case of the Metropolis-Hastings algorithm with a probability of acceptance of one. (By the way, I object to the use of inference in that quote, as I would reserve it for statistical purposes, while those samplers are numerical devices.)
Usually, Gibbs sampling [understood as running a sequence of low-dimensional conditional simulations] is favoured in settings where the decomposition into such conditionals is easy to implement and fast to run. In settings where such decompositions induce multimodality and hence a difficulty to move between modes (latent variable models like mixture models come to mind), using a more global proposal in a Metropolis-Hastings algorithm may produce a higher efficiency. But the drawback lies in choosing the proposal distribution in the Metropolis-Hastings algorithm.
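The "easy to implement and fast to run" case can be sketched with the textbook example of a bivariate normal with unit variances and correlation $\rho$ (my own illustration, not from the answer): both full conditionals are one-dimensional Gaussians, $x \mid y \sim N(\rho y, 1-\rho^2)$ and $y \mid x \sim N(\rho x, 1-\rho^2)$, so a Gibbs sweep needs no accept/reject step at all.

```python
# Gibbs sampler for a bivariate normal with correlation rho.
import math
import random

random.seed(0)
rho = 0.8
sd = math.sqrt(1 - rho ** 2)

x = y = 0.0
xs, ys = [], []
for i in range(20000):
    x = random.gauss(rho * y, sd)    # draw from p(x | y)
    y = random.gauss(rho * x, sd)    # draw from p(y | x)
    if i >= 1000:                    # discard burn-in
        xs.append(x)
        ys.append(y)

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
var_x = sum((a - mx) ** 2 for a in xs) / n
var_y = sum((b - my) ** 2 for b in ys) / n
corr = cov / math.sqrt(var_x * var_y)
assert abs(corr - rho) < 0.05        # empirical correlation close to rho
```

As $\rho \to 1$ the conditionals become nearly degenerate and the chain mixes slowly, which is exactly the regime where a more global Metropolis-Hastings proposal can pay off.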
|
When would one use Gibbs sampling instead of Metropolis-Hastings?
|
Firstly, let me note [somewhat pedantically] that
There are several different kinds of MCMC algorithms:
Metropolis-Hastings, Gibbs, importance/rejection sampling (related).
importance and rejectio
|
When would one use Gibbs sampling instead of Metropolis-Hastings?
Firstly, let me note [somewhat pedantically] that
There are several different kinds of MCMC algorithms:
Metropolis-Hastings, Gibbs, importance/rejection sampling (related).
importance and rejection sampling methods are not MCMC algorithms because they are not based on Markov chains. Actually, importance sampling does not produce a sample from the target distribution, $f$ say, but only importance weights $\omega$ say, to be used in Monte Carlo approximations of integrals related with $f$. Using those weights as probabilities to produce a sample does not lead to a proper sample from $f$, even though unbiased estimators of expectations under $f$ can be produced.
Secondly, the question
Why would someone go with Gibbs sampling instead of
Metropolis-Hastings? I suspect there are cases when inference is more
tractable with Gibbs sampling than with Metropolis-Hastings
does not have an answer in that a Metropolis-Hastings sampler can be almost anything, including a Gibbs sampler. I replied in rather detailed terms to an earlier and similar question. But let me add a few if redundant points here:
The primary reason why Gibbs sampling was introduced was to break the curse of dimensionality (which impacts both rejection and importance sampling) by producing a sequence of low dimension simulations that still converge to the right target, even though the dimension of the target impacts the speed of convergence. Metropolis-Hastings samplers are designed to create a Markov chain (like Gibbs sampling) based on a proposal (like importance and rejection sampling) by correcting for the wrong density through an acceptance-rejection step. But an important point is that they are not opposed: namely, Gibbs sampling may require Metropolis-Hastings steps when facing complex if low-dimension conditional targets, while Metropolis-Hastings proposals may be built on approximations to (Gibbs) full conditionals. In a formal definition, Gibbs sampling is a special case of the Metropolis-Hastings algorithm with a probability of acceptance of one. (By the way, I object to the use of inference in that quote, as I would reserve it for statistical purposes, while those samplers are numerical devices.)
Usually, Gibbs sampling [understood as running a sequence of low-dimensional conditional simulations] is favoured in settings where the decomposition into such conditionals is easy to implement and fast to run. In settings where such decompositions induce multimodality and hence a difficulty to move between modes (latent variable models like mixture models come to mind), using a more global proposal in a Metropolis-Hastings algorithm may produce a higher efficiency. But the drawback lies in choosing the proposal distribution in the Metropolis-Hastings algorithm.
|
When would one use Gibbs sampling instead of Metropolis-Hastings?
Firstly, let me note [somewhat pedantically] that
There are several different kinds of MCMC algorithms:
Metropolis-Hastings, Gibbs, importance/rejection sampling (related).
importance and rejectio
|
9,987
|
What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained with PCA?
|
Single word answer: Both.
Let's start with defining the norms. For a matrix $X$, operator $2$-norm is defined as $$\|X\|_2 = \mathrm{sup}\frac{\|Xv\|_2}{\|v\|_2} = \mathrm{max}(s_i)$$ and Frobenius norm as $$\|X\|_F = \sqrt {\sum_{ij} X_{ij}^2} = \sqrt{\mathrm{tr}(X^\top X)} = \sqrt{\sum s_i^2},$$
where $s_i$ are singular values of $X$, i.e. diagonal elements of $S$ in the singular value decomposition $X = USV^\top$.
PCA is given by the same singular value decomposition when the data are centered. $US$ are principal components, $V$ are principal axes, i.e. eigenvectors of the covariance matrix, and the reconstruction of $X$ with only the $k$ principal components corresponding to the $k$ largest singular values is given by $X_k = U_k S_k V_k^\top$.
The Eckart-Young theorem says that $X_k$ is the matrix minimizing the norm of the reconstruction error $\|X-A\|$ among all matrices $A$ of rank $k$. This is true for both the Frobenius norm and the operator $2$-norm. As pointed out by @cardinal in the comments, it was first proved by Schmidt (of Gram-Schmidt fame) in 1907 for the Frobenius case. It was later rediscovered by Eckart and Young in 1936 and is now mostly associated with their names. Mirsky generalized the theorem in 1958 to all norms that are invariant under unitary transformations, and this includes the operator 2-norm.
This theorem is sometimes called Eckart-Young-Mirsky theorem. Stewart (1993) calls it Schmidt approximation theorem. I have even seen it called Schmidt-Eckart-Young-Mirsky theorem.
Eckart and Young, 1936, The approximation of one matrix by another of lower rank
Mirsky, 1958, Symmetric gauge functions and unitarily invariant norms
Stewart, 1993, On the early history of the singular value decomposition
Proof for the operator $2$-norm
Let $X$ be of full rank $n$. As $A$ is of rank $k$, its null space has $n-k$ dimensions. The space spanned by the $k+1$ right singular vectors of $X$ corresponding to the largest singular values has $k+1$ dimensions. So these two spaces must intersect. Let $w$ be a unit vector from the intersection. Then we get:
$$\|X-A\|^2_2 \ge \|(X-A)w\|^2_2 = \|Xw\|^2_2 = \sum_{i=1}^{k+1}s_i^2(v_i^\top w)^2 \ge s_{k+1}^2 = \|X-X_k\|_2^2,$$ QED.
Proof for the Frobenius norm
We want to find matrix $A$ of rank $k$ that minimizes $\|X-A\|^2_F$. We can factorize $A=BW^\top$, where $W$ has $k$ orthonormal columns. Minimizing $\|X-BW^\top\|^2$ for fixed $W$ is a regression problem with solution $B=XW$. Plugging it in, we see that we now need to minimize $$\|X-XWW^\top\|^2=\|X\|^2-\|XWW^\top\|^2=\mathrm{const}-\mathrm{tr}(WW^\top X^\top XWW^\top)\\=\mathrm{const}-\mathrm{const}\cdot\mathrm{tr}(W^\top\Sigma W),$$ where $\Sigma$ is the covariance matrix of $X$, i.e. $\Sigma=X^\top X/(n-1)$. This means that reconstruction error is minimized by taking as columns of $W$ some $k$ orthonormal vectors maximizing the total variance of the projection.
It is well-known that these are the first $k$ eigenvectors of the covariance matrix. Indeed, if $X=USV^\top$, then $\Sigma=VS^2V^\top/(n-1)=V\Lambda V^\top$. Writing $R=V^\top W$, which also has orthonormal columns, we get $$\mathrm{tr}(W^\top\Sigma W)=\mathrm{tr}(R^\top\Lambda R)=\sum_i \lambda_i \sum_j R_{ij}^2 \le \sum_{i=1}^k \lambda_i,$$ with the maximum achieved when $W=V_k$. The theorem then follows immediately.
See the following three related threads:
What is the objective function of PCA?
Why does PCA maximize total variance of the projection?
PCA objective function: what is the connection between maximizing variance and minimizing error?
Earlier attempt of a proof for Frobenius norm
This proof I found somewhere online but it is wrong (contains a gap), as explained by @cardinal in the comments.
The Frobenius norm is invariant under unitary transformations, because they do not change the singular values. So we get: $$\|X-A\|_F=\|USV^\top - A\| = \|S - U^\top A V\| = \|S-B\|,$$ where $B=U^\top A V$. Continuing: $$\|X-A\|_F^2 = \sum_{ij}(S_{ij}-B_{ij})^2 = \sum_i (s_i-B_{ii})^2 + \sum_{i\ne j}B_{ij}^2.$$ This is minimized when all off-diagonal elements of $B$ are zero and all $k$ diagonal terms cancel out the $k$ largest singular values $s_i$ [gap here: this is not obvious], i.e. $B_\mathrm{optimal}=S_k$ and hence $A_\mathrm{optimal} = U_k S_k V_k^\top$.
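The theorem itself is easy to sanity-check numerically. The following sketch (my own, using NumPy; the matrix sizes and seed are arbitrary) verifies that the rank-$k$ truncated SVD beats random rank-$k$ matrices in both norms, that the operator $2$-norm error equals $s_{k+1}$, and that the Frobenius error equals $\sqrt{\sum_{i>k} s_i^2}$.

```python
# Numeric check of the Eckart-Young(-Mirsky) theorem via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]          # rank-k truncation

# operator 2-norm error is exactly the (k+1)-th singular value
assert np.isclose(np.linalg.norm(X - Xk, 2), s[k])
# Frobenius error is sqrt(s_{k+1}^2 + ... + s_n^2)
assert np.isclose(np.linalg.norm(X - Xk, 'fro'), np.sqrt(np.sum(s[k:] ** 2)))

# no random rank-k matrix does better, in either norm
for _ in range(100):
    A = rng.standard_normal((8, k)) @ rng.standard_normal((k, 6))
    assert np.linalg.norm(X - A, 2) >= np.linalg.norm(X - Xk, 2) - 1e-9
    assert np.linalg.norm(X - A, 'fro') >= np.linalg.norm(X - Xk, 'fro') - 1e-9
```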
|
What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained wit
|
Single word answer: Both.
Let's start with defining the norms. For a matrix $X$, operator $2$-norm is defined as $$\|X\|_2 = \mathrm{sup}\frac{\|Xv\|_2}{\|v\|_2} = \mathrm{max}(s_i)$$ and Frobenius n
|
What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained with PCA?
Single word answer: Both.
Let's start with defining the norms. For a matrix $X$, operator $2$-norm is defined as $$\|X\|_2 = \mathrm{sup}\frac{\|Xv\|_2}{\|v\|_2} = \mathrm{max}(s_i)$$ and Frobenius norm as $$\|X\|_F = \sqrt {\sum_{ij} X_{ij}^2} = \sqrt{\mathrm{tr}(X^\top X)} = \sqrt{\sum s_i^2},$$
where $s_i$ are singular values of $X$, i.e. diagonal elements of $S$ in the singular value decomposition $X = USV^\top$.
PCA is given by the same singular value decomposition when the data are centered. $US$ are principal components, $V$ are principal axes, i.e. eigenvectors of the covariance matrix, and the reconstruction of $X$ with only the $k$ principal components corresponding to the $k$ largest singular values is given by $X_k = U_k S_k V_k^\top$.
The Eckart-Young theorem says that $X_k$ is the matrix minimizing the norm of the reconstruction error $\|X-A\|$ among all matrices $A$ of rank $k$. This is true for both the Frobenius norm and the operator $2$-norm. As pointed out by @cardinal in the comments, it was first proved by Schmidt (of Gram-Schmidt fame) in 1907 for the Frobenius case. It was later rediscovered by Eckart and Young in 1936 and is now mostly associated with their names. Mirsky generalized the theorem in 1958 to all norms that are invariant under unitary transformations, and this includes the operator 2-norm.
This theorem is sometimes called Eckart-Young-Mirsky theorem. Stewart (1993) calls it Schmidt approximation theorem. I have even seen it called Schmidt-Eckart-Young-Mirsky theorem.
Eckart and Young, 1936, The approximation of one matrix by another of lower rank
Mirsky, 1958, Symmetric gauge functions and unitarily invariant norms
Stewart, 1993, On the early history of the singular value decomposition
Proof for the operator $2$-norm
Let $X$ be of full rank $n$. As $A$ is of rank $k$, its null space has $n-k$ dimensions. The space spanned by the $k+1$ right singular vectors of $X$ corresponding to the largest singular values has $k+1$ dimensions. So these two spaces must intersect. Let $w$ be a unit vector from the intersection. Then we get:
$$\|X-A\|^2_2 \ge \|(X-A)w\|^2_2 = \|Xw\|^2_2 = \sum_{i=1}^{k+1}s_i^2(v_i^\top w)^2 \ge s_{k+1}^2 = \|X-X_k\|_2^2,$$ QED.
Proof for the Frobenius norm
We want to find matrix $A$ of rank $k$ that minimizes $\|X-A\|^2_F$. We can factorize $A=BW^\top$, where $W$ has $k$ orthonormal columns. Minimizing $\|X-BW^\top\|^2$ for fixed $W$ is a regression problem with solution $B=XW$. Plugging it in, we see that we now need to minimize $$\|X-XWW^\top\|^2=\|X\|^2-\|XWW^\top\|^2=\mathrm{const}-\mathrm{tr}(WW^\top X^\top XWW^\top)\\=\mathrm{const}-\mathrm{const}\cdot\mathrm{tr}(W^\top\Sigma W),$$ where $\Sigma$ is the covariance matrix of $X$, i.e. $\Sigma=X^\top X/(n-1)$. This means that reconstruction error is minimized by taking as columns of $W$ some $k$ orthonormal vectors maximizing the total variance of the projection.
It is well-known that these are first $k$ eigenvectors of the covariance matrix. Indeed, if $X=USV^\top$, then $\Sigma=VS^2V^\top/(n-1)=V\Lambda V^\top$. Writing $R=V^\top W$ which also has orthonormal columns, we get $$\mathrm{tr}(W^\top\Sigma W)=\mathrm{tr}(R^\top\Lambda R)=\sum_i \lambda_i \sum_j R_{ij}^2 \le \sum_{i=1}^k \lambda_k,$$ with maximum achieved when $W=V_k$. The theorem then follows immediately.
See the following three related threads:
What is the objective function of PCA?
Why does PCA maximize total variance of the projection?
PCA objective function: what is the connection between maximizing variance and minimizing error?
Earlier attempt of a proof for Frobenius norm
This proof I found somewhere online but it is wrong (contains a gap), as explained by @cardinal in the comments.
Frobenius norm is invariant under unitary transformations, because they do not change the singular values. So we get: $$\|X-A\|_F=\|USV^\top - A\| = \|S - U^\top A V\| = \|S-B\|,$$ where $B=U^\top A V$. Continuing: $$\|X-A\|_F = \sum_{ij}(S_{ij}-B_{ij})^2 = \sum_i (s_i-B_{ii})^2 + \sum_{i\ne j}B_{ij}^2.$$ This is minimized when all off-diagonal elements of $B$ are zero and all $k$ diagonal terms cancel out the $k$ largest singular values $s_i$ [gap here: this is not obvious], i.e. $B_\mathrm{optimal}=S_k$ and hence $A_\mathrm{optimal} = U_k S_k V_k^\top$.
|
What norm of the reconstruction error is minimized by the low-rank approximation matrix obtained wit
Single word answer: Both.
Let's start with defining the norms. For a matrix $X$, operator $2$-norm is defined as $$\|X\|_2 = \mathrm{sup}\frac{\|Xv\|_2}{\|v\|_2} = \mathrm{max}(s_i)$$ and Frobenius n
|
9,988
|
Do Bayesians accept Kolmogorov's axioms?
In my opinion, Cox-Jaynes interpretation of probability provides a rigorous foundation for Bayesian probability:
Cox, Richard T. "Probability, frequency and reasonable expectation." American Journal of Physics 14.1 (1946): 1–13.
Jaynes, Edwin T. Probability theory: the logic of science. Cambridge University Press, 2003.
Beck, James L. "Bayesian system identification based on probability logic." Structural Control and Health Monitoring 17.7 (2010): 825–847.
The axioms of probability logic derived by Cox are:
(P1): $\Pr[b|a]\ge0$ (by convention)
(P2): $\Pr[\overline{b}|a]=1-\Pr[b|a]$ (negation function)
(P3): $\Pr[b\cap c|a]=\Pr[c|b\cap a]\Pr[b|a]$ (conjunction function)
Axioms P1-P3 imply the following [Beck, 2010]:
(P4): a) $\Pr[b|b\cap c] = 1$; b) $\Pr[\overline{b}|b\cap c] = 0$; c) $\Pr[b|c]\in[0,1]$
(P5): a) $\Pr[a|c \cap (a \Rightarrow b)]\le\Pr[b|c\cap(a \Rightarrow b)]$, b) $\Pr[a|c\cap(a \Leftrightarrow b)] = \Pr[b|c\cap(a \Leftrightarrow b)]$, where $a \Rightarrow b$ means that $a$ implies $b$, and $a \Leftrightarrow b$ means that $a$ is equivalent to $b$.
(P6): $\Pr[a \cup b|c] = \Pr[a|c]+\Pr[b|c]-\Pr[a\cap b|c]$
(P7): Assuming that proposition $c$ states that one and only one of propositions $b_1,\ldots,b_N$ is true, then:
a) Marginalization Theorem: $\Pr[a|c]=\sum_{n=1}^N \Pr[a \cap b_n|c]$
b) Total Probability Theorem: $\Pr[a|c] = \sum_{n=1}^N \Pr[a|b_n\cap c]\Pr[b_n|c]$
c) Bayes' Theorem: For $k=1,\ldots,N$: $\Pr[b_k|a\cap c] = \frac{\Pr[a|b_k\cap c]\Pr[b_k|c]}{\sum_{n=1}^N \Pr[a|b_n\cap c]\Pr[b_n|c]}$
They imply Kolmogorov's axioms, which can be viewed as a special case.
In my interpretation of the Bayesian viewpoint, everything is always (implicitly) conditioned on our beliefs and our knowledge.
The following comparison is taken from Beck [2010]:
The Bayesian point of view
Probability is a measure of plausibility of a statement based on specified information.
Probability distributions represent states of plausible knowledge about systems and phenomena, not inherent properties of them.
Probability of a model is a measure of its plausibility relative to other models in a set.
Pragmatically quantifies uncertainty due to missing information without any claim that this is due to nature's inherent randomness.
The Frequentist point of view
Probability is the relative frequency of occurrence of an inherently random event in the long run.
Probability distributions are inherent properties of random phenomena.
Limited scope, e.g., no meaning for the probability of a model.
Inherent randomness is assumed, but cannot be proven.
How to derive Kolmogorov's axioms from the axioms above
In the following, section 2.2 of [Beck, 2010] is summarized:
In the following, we use a probability measure $\Pr(A)$ on subsets $A$ of a finite set $X$:
[K1]: $\Pr(A)\ge 0, \forall A \subset X$
[K2]: $\Pr(X) = 1$
[K3]: $\Pr(A\cup B)=\Pr(A)+\Pr(B), \forall A,B \subset X$ if $A$ and $B$ are disjoint.
In order to derive (K1–K3) from the axioms of probability theory, [Beck, 2010] introduced a proposition $\pi$ that states $x\in X$ and specifies the probability model for $x$. [Beck, 2010] furthermore introduces $\Pr(A) = \Pr[x\in A|\pi]$.
P1 implies K1 with $b=\{x\in A\}$ and $c=\pi$
K2 follows from P4(a): since $\pi$ states that $x\in X$, we have $\Pr[x\in X|\pi]=1$.
K3 can be derived from P6: $A$ and $B$ are disjoint means that $x\in A$ and $x\in B$ are mutually exclusive. Therefore, K3: $\Pr(x\in A\cup B|\pi)=\Pr(x\in A|\pi)+\Pr(x\in B|\pi)$
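As a concrete illustration, here is a minimal pure-Python sketch checking K1–K3 (and P6) on a hypothetical uniform model over a finite set; the model is an arbitrary example, not taken from Beck [2010]:

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical finite model: X = {1,...,6} with uniform weights, so Pr(A) = |A|/|X|.
X = frozenset(range(1, 7))

def Pr(A):
    return Fraction(len(A & X), len(X))

# K1: non-negativity for every subset A of X
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
assert all(Pr(A) >= 0 for A in subsets)                      # K1
assert Pr(X) == 1                                            # K2

# K3: finite additivity for disjoint A, B
A, B = frozenset({1, 2}), frozenset({5, 6})
assert A & B == frozenset() and Pr(A | B) == Pr(A) + Pr(B)   # K3

# P6 (inclusion-exclusion), which holds even when the sets overlap
C = frozenset({2, 3, 4})
assert Pr(A | C) == Pr(A) + Pr(C) - Pr(A & C)                # P6
```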
Do Bayesians accept Kolmogorov's axioms?
After the development of Probability Theory it was necessary to show that looser concepts answering to the name of "probability" measured up to the rigorously defined concept they had inspired. "Subjective" Bayesian probabilities were considered by Ramsey and de Finetti, who independently showed that a quantification of degree of belief subject to the constraints of comparability & coherence (your beliefs are coherent if no-one can make a Dutch book against you) has to be a probability.
Differences between axiomatizations are largely a matter of taste concerning what should be defined & what derived. But countable additivity is one of Kolmogorov's axioms that isn't derivable from Cox's or de Finetti's, & it has been controversial. Some Bayesians (e.g. de Finetti & Savage) stop at finite additivity & so don't accept all of Kolmogorov's axioms. They can put uniform probability distributions over infinite intervals without impropriety. Others follow Villegas in also assuming monotone continuity, & get countable additivity from that.
Ramsey (1926), "Truth and probability", in Ramsey (1931), The Foundations of Mathematics and other Logical Essays
de Finetti (1931), "Sul significato soggettivo della probabilità", Fundamenta Mathematicæ, 17, pp 298 – 329
Villegas (1964), "On qualitative probability $\sigma$-algebras", Ann. Math. Statist., 35, 4.
Have I correctly specified my model in lmer?
Tow nested within station when tow is random and station is fixed
station+(1|station:tow) is correct. As @John said in his answer, (1|station/tow) would expand to (1|station)+(1|station:tow) (main effect of station plus interaction between tow and station), which you don't want because you have already specified station as a fixed effect.
Interaction between station and day when station is fixed and day is random
The interaction between a fixed and a random effect is always random. Again as @John said, station*day expands to station+day+station:day, which you (again) don't want because you've already specified day in your model. I don't think there is a way to do what you want and collapse the crossed effects of day (random) and station (fixed), but you could, if you wanted, write station+(1|day/station), which as specified in the previous answer would expand to station + (1|day) + (1|day:station).
Interaction between tow and day when tow is nested in station
Because you do not have unique values of the tow variable (i.e., because, as you say below, tows are specified as 1, 2, 3 at every station), you do need to specify the nesting, as (1|station:tow:day). If you did have the tows specified uniquely, you could use either (1|tow:day) or (1|station:tow:day) (they should give equivalent answers). If you do not specify the nesting in this case, lme4 will try to estimate a random effect that is shared by tow #1 at all stations ...
One way to diagnose whether you've specified the random effects correctly is to look at the number of observations reported for each grouping variable and see whether it agrees with what you expect (for example, the station:tow:day group should have a number of observations corresponding to the total number of station $\times$ tow $\times$ day combinations: if you forgot the nesting with station, you should see that you get fewer observations than you ought).
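This diagnostic can be illustrated with a toy design (a pure-Python sketch with hypothetical numbers: 3 stations, tows labelled 1, 2, 3 at each station, 2 days):

```python
from itertools import product

# Hypothetical balanced design: tows are labelled 1, 2, 3 at EVERY station,
# so the tow labels are not unique across stations.
rows = list(product(["st1", "st2", "st3"], [1, 2, 3], ["day1", "day2"]))

groups_nested = {(s, t, d) for s, t, d in rows}  # station:tow:day
groups_flat = {(t, d) for s, t, d in rows}       # tow:day -- nesting forgotten

# 18 distinct station:tow:day groups, as expected (3 x 3 x 2) ...
assert len(groups_nested) == 18
# ... but only 6 tow:day groups: tow #1 would be pooled across all stations
assert len(groups_flat) == 6
```

Fewer groups than the number of station $\times$ tow $\times$ day combinations is the signature of a forgotten nesting term.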
Are http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#model-specification and http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#nested-or-crossed useful to you?
Have I correctly specified my model in lmer?
Some of the things in R's formula syntax are a bit confusing. The : is for interactions between two terms, while the * is for main effects and interactions. The / is another one for interactions, but what it does is generate the numerator's main effect plus its interactions with all of the terms in the denominator (e.g. A/(B+C) = A + A:B + A:C). The | is for something like "grouped by". So, 1|station would be intercept grouped by station, and in parentheses it's random: (1|station). That's how you would do nesting.
Hopefully that helps. It's a bit odd to have a random effect nested within a fixed effect and I'm not sure how you'd represent that. I can't even imagine the situation. You might get a better response if you explain just what your variables are and what you want to accomplish. Lots of times people ask questions and are using the terminology wrong and it's hard to communicate. Explain what the variables represent and what you want to know about them.
Focusing on your description in your last paragraph, it sounds like your tow is simply an indicator of the samples you gathered and not something you need estimates of in the sense that you expect tow 1 to be consistently different from tow 2 in some way. Tow is just indicating a sample. Unless you really believe the order of tows mattered you don't even bother with that variable. And if they mattered then it's a fixed effect (and maybe random, but not a solely random effect). You say that you want to know if tows change in variability from day to day. How about the answer is yes? It's not in the realm of realistic probability that they don't vary from day to day. It's just the variance of your measures. You're not allowed to try to account for every speck of variance because then you wind up not having any variance left over for error. You'd have an over-specified model. You'd be at the point of just reporting each measure.
You make a similar statement about wondering if station varies by day; of course it does. But maybe you mean specific days? Were the days grouped in some way by season, lunar cycle, etc? Unless you have something other than just this is day 1, this is day 2, etc how does knowing that stations vary day to day tell you anything other than stations vary? So the answer to that question is, of course stations vary day to day. And of course tows vary day to day and station to station. You end up left with a simple model:
aov(y ~ station, data = dat)
The one fixed effect you have here, station, is just sampled over multiple tows and multiple days. I'm not sure you really need multi-level modelling here at all. It sounds like you're over-specifying your model.
If you really do want random day and tow effects and there's information that you haven't specified here then you might expand it out to a multi-level model. That would be:
lmer(y ~ station + (tow*day|station), data = dat)
You need multiple tows at each station and day to use that model though.
Interpreting interaction terms in logit regression with categorical variables
I assume that PreferA = 1 when one preferred A and 0 otherwise, and that ControlFALSE = 1 when treated and 0 when control.
The odds of preferring A when a person did not do so previously and did not receive a treatment (ControlFALSE=0 and PreferA=0) is $\exp(3.135)= 23$, i.e. there are 23 such persons who prefer A for every such person that prefers B. So A is very popular.
The effect of treatment refers to a person who did not prefer A previously (PreferA=0). In that case the baseline odds decreases by a factor $\exp(-2.309) = .099$, i.e. by $(1-.099) \times 100\%=90.1\%$, when she or he is subjected to the treatment. So the odds of choosing A for those who were treated and did not prefer A previously is $.099 \times 23=2.3$, so there are 2.3 such persons who prefer A for every such person who prefers B. So among this group A is still more popular than B, but less so than in the untreated/baseline group.
The effect of preferring A previously refers to a person who is a control (ControlFALSE = 0). In that case the baseline odds decreases by a factor $.006$, or by $99.4\%$, when someone preferred A previously. (So those that preferred A previously are a lot less likely to do so now. Does that make sense?)
The interaction effect compares the effect of treatment for those persons that preferred A previously and those that did not. If a person preferred A previously (PreferA = 1), then the odds ratio of treatment increases by a factor $\exp(2.850) = 17.3$. So the odds ratio of treatment for those that preferred A previously is $17.3 \times .099 = 1.71$. Alternatively, this odds ratio of treatment for those that preferred A previously could be computed as $\exp(2.850 - 2.309)$.
So the exponentiated constant gives you the baseline odds, the exponentiated coefficients of the main effects give you the odds ratios when the other variable equals 0, and the exponentiated coefficient of the interaction term tells you the ratio by which the odds ratio changes.
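The arithmetic above can be reproduced directly from the quoted coefficients (a quick pure-Python sketch):

```python
from math import exp, isclose

b0, b_treat, b_inter = 3.135, -2.309, 2.850  # intercept, ControlFALSE, interaction

baseline_odds = exp(b0)                   # ~23: control, no previous preference for A
or_treat = exp(b_treat)                   # ~.099: odds ratio of treatment when PreferA = 0
treated_odds = baseline_odds * or_treat   # ~2.3: treated, no previous preference
or_treat_prior = exp(b_inter) * or_treat  # odds ratio of treatment when PreferA = 1

assert isclose(baseline_odds, 23.0, rel_tol=0.01)
assert isclose(treated_odds, 2.28, rel_tol=0.01)
# Both routes to the PreferA = 1 odds ratio of treatment agree:
assert isclose(or_treat_prior, exp(b_inter + b_treat))
```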
Interpreting interaction terms in logit regression with categorical variables
I also found this paper to be helpful in interpreting interaction in logistic regression:
Chen, J. J. (2003). Communicating complex information: the interpretation of statistical interaction in multiple logistic regression analysis. American journal of public health, 93(9), 1376-1377.
Interpreting interaction terms in logit regression with categorical variables
My own preference, when trying to interpret interactions in logistic regression, is to look at the predicted probabilities for each combination of categorical variables. In your case, this would be just 4 probabilities:
Prefer A, control true
Prefer A, control false
Prefer B, control true
Prefer B, control false
When I have continuous variables, I usually look at the predicted value at the median, 1st and 3rd quartiles.
Although this doesn't directly get at the interpretation of each coefficient, I find that it often lets me (and my clients) see what is going on in a clear way.
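Here is a sketch of this approach in pure Python, using the coefficients quoted elsewhere in this thread purely as an illustration (the PreferA coefficient is backed out of the reported factor $.006$, i.e. $\log(.006)\approx-5.12$, so treat all four numbers as approximate):

```python
from math import exp, log

def inv_logit(x):
    """Inverse logit: convert a linear predictor to a probability."""
    return 1.0 / (1.0 + exp(-x))

# Illustrative coefficients (PreferA backed out of the reported factor .006)
b0, b_prefA, b_treat, b_inter = 3.135, log(0.006), -2.309, 2.850

probs = {}
for prefA in (0, 1):
    for treated in (0, 1):
        lp = b0 + b_prefA * prefA + b_treat * treated + b_inter * prefA * treated
        probs[(prefA, treated)] = inv_logit(lp)
        print(f"PreferA={prefA}, treated={treated}: p = {probs[(prefA, treated)]:.3f}")
```

One table of four predicted probabilities is usually easier to communicate than four exponentiated coefficients.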
How does the formula for generating correlated random variables work?
Suppose you want to find a linear combination of $X_1$ and $X_2$ such that
$$
\text{corr}(\alpha X_1 + \beta X_2, X_1) = \rho
$$
Notice that if you multiply both $\alpha$ and $\beta$ by the same (non-zero) constant, the correlation will not change. Thus, we're going to add a condition to preserve variance: $\text{var}(\alpha X_1 + \beta X_2) = \text{var}(X_1)$
This is equivalent to
$$
\rho
= \frac{\text{cov}(\alpha X_1 + \beta X_2, X_1)}{\sqrt{\text{var}(\alpha X_1 + \beta X_2) \text{var}(X_1)}}
= \frac{\alpha \overbrace{\text{cov}(X_1, X_1)}^{=\text{var}(X_1)} + \overbrace{\beta \text{cov}(X_2, X_1)}^{=0}}{\sqrt{\text{var}(\alpha X_1 + \beta X_2) \text{var}(X_1)}} = \alpha \sqrt{\frac{\text{var}(X_1)}{\alpha^2 \text{var}(X_1) + \beta^2 \text{var}(X_2)}}
$$
Assuming both random variables have the same variance (this is a crucial assumption!) ($\text{var}(X_1) = \text{var}(X_2)$), we get
$$
\rho \sqrt{\alpha^2 + \beta^2} = \alpha
$$
There are many solutions to this equation, so it is time to recall the variance-preserving condition:
$$
\text{var}(X_1)
= \text{var}(\alpha X_1 + \beta X_2)
= \alpha^2 \text{var}(X_1) + \beta^2 \text{var}(X_2)
\Rightarrow \alpha^2 + \beta^2 = 1
$$
And this leads us to
$$
\alpha = \rho \\
\beta = \pm \sqrt{1-\rho^2}
$$
UPD. Regarding the second question: yes, this is known as whitening.
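As a quick sanity check of this result, the sketch below (plain Python, standard library only) draws two independent standard normals and verifies that $Y = \rho X_1 + \sqrt{1-\rho^2}\,X_2$ has sample correlation close to the target $\rho$:

```python
import math
import random

def correlated_pair(rho, n, seed=0):
    """Draw n samples of (X1, Y) where Y = rho*X1 + sqrt(1-rho^2)*X2
    and X1, X2 are independent standard normals, so corr(Y, X1) ~ rho."""
    rng = random.Random(seed)
    x1 = [rng.gauss(0, 1) for _ in range(n)]
    x2 = [rng.gauss(0, 1) for _ in range(n)]
    y = [rho * a + math.sqrt(1 - rho**2) * b for a, b in zip(x1, x2)]
    return x1, y

def corr(u, v):
    """Sample Pearson correlation between two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

x1, y = correlated_pair(0.7, 100_000)
print(corr(x1, y))  # close to 0.7
```

Note that both inputs are drawn with the same variance, matching the crucial assumption in the derivation above.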
How does the formula for generating correlated random variables work?
The equation is a simplified bivariate form of Cholesky decomposition. This simplified equation is sometimes called the Kaiser-Dickman algorithm (Kaiser & Dickman, 1962).
Note that $X_1$ and $X_2$ must have the same variance for this algorithm to work properly. Also, the algorithm is typically used with normal variables. If $X_1$ or $X_2$ are not normal, $Y$ might not have the same distributional form as $X_2$.
References:
Kaiser, H. F., & Dickman, K. (1962). Sample and population score matrices and sample correlation matrices from an arbitrary population correlation matrix. Psychometrika, 27(2), 179-182.
How does the formula for generating correlated random variables work?
The correlation coefficient is the $\cos$ of the angle between two series if they are treated as vectors (with the $n^{th}$ data point being the $n^{th}$ dimension of a vector). The above formula simply decomposes a vector into its $\cos\theta$ and $\sin\theta$ components (with respect to $X_1, X_2$):
if $\rho = \cos\theta$,
then $\sqrt{1-\rho^2} = \pm\sin\theta$.
This works because if $X_1, X_2$ are uncorrelated, the angle between them is a right angle (i.e., they can be considered as orthogonal, albeit non-normalized, basis vectors).
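One detail worth making explicit: the cosine identity holds exactly only after each series is mean-centered. A small self-contained check:

```python
import math

def center(v):
    """Subtract the mean from each element."""
    m = sum(v) / len(v)
    return [x - m for x in v]

def cosine(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pearson(u, v):
    """Pearson correlation = cosine of the angle between centered vectors."""
    return cosine(center(u), center(v))

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [2.0, 1.0, 4.0, 3.0, 6.0]
print(pearson(a, b))
```

Without the centering step, `cosine(a, b)` would give a different (and generally larger) number, since the raw vectors both point into the positive orthant.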
What is covariate?
From Wikipedia:
Depending on the context, an independent variable is sometimes called a "predictor variable", regressor, covariate, "controlled variable", "manipulated variable", "explanatory variable", exposure variable (see reliability theory), "risk factor" (see medical statistics), "feature" (in machine learning and pattern recognition) or "input variable." In econometrics, the term "control variable" is usually used instead of "covariate".
Answering (some of) your questions:
Assume that you are solving linear regression, where you are trying to find a relation $\textbf{y} = f(\textbf{X})$. In this case, $\textbf{X}$ are independent variables and $\textbf{y}$ is the dependent variable.
Typically, $\textbf{X}$ consists of multiple variables which may have some relations between them, i.e. they "co-vary" -- hence the term "covariate".
Let's take a concrete example. Suppose you wish to predict the price of a house in a neighborhood, $\textbf{y}$ using the following "co-variates", $\textbf{X}$:
Width, $x_1$
Breadth, $x_2$
Number of floors, $x_3$
Area of the house, $x_4$
Distance to downtown, $x_5$
Distance to hospital, $x_6$
For a linear regression problem, $\textbf{y} = f(\textbf{X})$ the price of the house is dependent on all co-variates, i.e. $\textbf{y}$ is dependent on $\textbf{X}$. Do any of the co-variates depend on the price of the house? In other words, is $\textbf{X}$ dependent on $\textbf{y}$? The answer is NO. Hence, $\textbf{X}$ is the independent variable and $\textbf{y}$ is the dependent variable. This encapsulates a cause and effect relationship. If the independent variable changes, its effect is seen on the dependent variable.
Now, are all the co-variates independent of each other? The answer is NO! A better answer is, well it depends!
The area of the house ($x_4$) is dependent on the width ($x_1$), breadth ($x_2$) and the number of floors ($x_3$), whereas, distances to downtown ($x_5$) and hospital ($x_6$) are independent of the area of the house ($x_4$).
Hope that helps!
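To illustrate the dependence structure described above, here is a small simulation (the specific ranges are arbitrary): the area covariate is determined by width, breadth, and number of floors, so it correlates strongly with them, while distance to downtown, drawn independently, does not:

```python
import math
import random

def corr(u, v):
    """Sample Pearson correlation between two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

rng = random.Random(42)
n = 2000
width = [rng.uniform(5, 15) for _ in range(n)]      # x1
breadth = [rng.uniform(5, 15) for _ in range(n)]    # x2
floors = [rng.randint(1, 3) for _ in range(n)]      # x3
area = [w * b * f for w, b, f in zip(width, breadth, floors)]  # x4
dist_downtown = [rng.uniform(1, 20) for _ in range(n)]         # x5

print(corr(area, width))          # substantial: area depends on width
print(corr(area, dist_downtown))  # near zero: drawn independently
```

This is exactly the sense in which predictors "co-vary": some of them carry overlapping information about each other, even though all of them sit on the right-hand side of the regression.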
What is covariate?
The way linear regression is generally run (there are ways to request different slope calculations), you get the unique impact of one predictor on the dependent variable. Its shared impact with other predictors on the DV (or its indirect impact, as with structural equation models, I believe) is not part of the slope. It is sometimes stated that the slope is the impact of a specific predictor with all other X set to zero (although that obviously breaks down when some X cannot take the value 0, or when there is an interaction).
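An illustrative simulation of this "unique impact" point (all numbers invented): when $x_2$ is correlated with $x_1$, the simple-regression slope of $y$ on $x_1$ absorbs $x_2$'s shared contribution, while the multiple-regression coefficient isolates $x_1$'s own effect:

```python
import random

def simple_slope(x, y):
    """OLS slope of y on a single predictor x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_predictor_slopes(x1, x2, y):
    """OLS coefficients of y on x1 and x2 (with intercept), solved via
    the centered 2x2 normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    c1 = [a - m1 for a in x1]
    c2 = [a - m2 for a in x2]
    cy = [a - my for a in y]
    s11 = sum(a * a for a in c1)
    s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return b1, b2

rng = random.Random(1)
n = 5000
x1 = [rng.gauss(0, 1) for _ in range(n)]
# x2 is correlated with x1 (corr = 0.8 by construction)
x2 = [0.8 * a + 0.6 * rng.gauss(0, 1) for a in x1]
# True model: y = 1*x1 + 1*x2 + noise
y = [a + b + rng.gauss(0, 1) for a, b in zip(x1, x2)]

print(simple_slope(x1, y))            # ~ 1.8: includes x2's shared part
b1, b2 = two_predictor_slopes(x1, x2, y)
print(b1, b2)                         # ~ 1.0, 1.0: unique impacts
```

The simple slope equals $1 + \mathrm{cov}(x_1, x_2) = 1.8$ in expectation here, while the multiple regression recovers the unique coefficients of 1.0 each.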
What is covariate?
In general terms, covariates are characteristics of the participants in an experiment. If you collect data on characteristics before you run an experiment, you could use that data to see how your treatment affects different groups or populations. Or, you could use that data to control for the influence of any covariate.

Covariates may affect the outcome in a study. For example, you are running an experiment to see how corn plants tolerate drought. Level of drought is the actual “treatment,” but it isn’t the only factor that affects how plants perform: size is a known factor that affects tolerance levels, so you would run plant size as a covariate.

Another example (from Penn State): Let’s say you are comparing the salaries of men and women to see who earns more. One factor that you need to control for is that people tend to earn more the longer they are out of college. Years out of college, in this case, is a covariate.

A covariate can be an independent variable (i.e., of direct interest) or it can be an unwanted, confounding variable. Adding a covariate to a model can increase the accuracy of your results.

Source: https://www.statisticshowto.datasciencecentral.com/covariate/
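The salary example can be sketched as a simulation (all numbers invented): salary depends only on years out of college, but the groups differ in average years, so the raw comparison is misleading until the covariate is controlled for:

```python
import random

def mean(v):
    return sum(v) / len(v)

rng = random.Random(7)
n = 4000
# Hypothetical data: salary depends only on years out of college,
# but one group happens to have more years on average (a confound).
group = [rng.randint(0, 1) for _ in range(n)]           # two groups, 0 and 1
years = [rng.uniform(0, 10) + 4 * g for g in group]     # group 1: +4 years on avg
salary = [30 + 2 * t + rng.gauss(0, 3) for t in years]  # no direct group effect

# Naive comparison: raw mean salary difference between groups.
sal_1 = [s for s, g in zip(salary, group) if g == 1]
sal_0 = [s for s, g in zip(salary, group) if g == 0]
print(mean(sal_1) - mean(sal_0))   # ~ 2 * 4 = 8: entirely due to years

# Controlling for the covariate: regress salary on years within each
# group and compare the intercepts (salary at the same years value).
def fit_line(x, y):
    """OLS intercept and slope of y on x."""
    mx, my = mean(x), mean(y)
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return my - b * mx, b

yrs_1 = [t for t, g in zip(years, group) if g == 1]
yrs_0 = [t for t, g in zip(years, group) if g == 0]
a_1, _ = fit_line(yrs_1, sal_1)
a_0, _ = fit_line(yrs_0, sal_0)
print(a_1 - a_0)                   # near zero once years is controlled for
```

The naive gap is real in the data but is produced entirely by the covariate; once years out of college is held fixed, the apparent group effect vanishes.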