14,501
Is there a measure of 'evenness' of spread?
It sounds like you are interested in the pairwise differences of randomly observed values in a particular sequence, as in the case of modeling growth or trend. There are a number of ways to do so in time series analyses. A very basic approach is just a simple linear model regressing the sequence values upon their index values. In the first case, your linear model would give you a regression coefficient of 1 (predictive $R^2 = 1$). In the latter case, this would be a coefficient of 1.51 and an $R^2$ of 0.78.
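That basic approach is easy to sketch outside of R as well. Here is a minimal NumPy version of the index-regression idea (the function name and the example sequence are illustrative, not from the original question):

```python
import numpy as np

def trend_fit(seq):
    """Regress sequence values on their 1-based index; return (slope, R^2)."""
    idx = np.arange(1, len(seq) + 1)
    slope, intercept = np.polyfit(idx, seq, 1)
    fitted = slope * idx + intercept
    ss_res = np.sum((seq - fitted) ** 2)
    ss_tot = np.sum((seq - np.mean(seq)) ** 2)
    return slope, 1 - ss_res / ss_tot

# A perfectly evenly spread sequence gives slope 1 and R^2 = 1
slope, r2 = trend_fit(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))
```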
14,502
Calculating AIC “by hand” in R
Note that the help on the function logLik in R says that for lm models it includes 'all constants' ... so there will be a log(2*pi) in there somewhere, as well as another constant term for the exponent in the likelihood. Also, don't forget that $\sigma^2$ counts as a parameter.
$\cal L(\hat\mu,\hat\sigma)=(\frac{1}{\sqrt{2\pi s_n^2}})^n\exp({-\frac{1}{2}\sum_i (e_i^2/s_n^2)})$
$-2\log \cal{L} = n\log(2\pi)+n\log{s_n^2}+\sum_i (e_i^2/s_n^2)$
$= n[\log(2\pi)+\log{s_n^2}+1]$
$\text{AIC} = 2p -2\log \cal{L}$
but note that for a model with 1 independent variable, p=3 (the x-coefficient, the constant and $\sigma^2$)
Which means this is how you get their answer:
nrow(mtcars) * (log(2*pi) + 1 + log(sum(lm_mtcars$residuals^2) / nrow(mtcars))) +
  (length(lm_mtcars$coefficients) + 1) * 2
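The derivation above can also be checked numerically. Here is a sketch in Python/NumPy (the answer's code is R, but the identity is language-agnostic; the simulated data are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# OLS fit via least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# -2 log L at the MLE sigma^2 = RSS/n simplifies to n[log(2*pi) + log(s_n^2) + 1]
s2 = np.sum(resid**2) / n
m2ll = n * (np.log(2 * np.pi) + np.log(s2) + 1)

# p = 3 estimated parameters: intercept, slope, sigma^2
aic = 2 * 3 + m2ll

# Cross-check against the log-likelihood evaluated term by term
loglik = -0.5 * n * np.log(2 * np.pi * s2) - 0.5 * np.sum(resid**2) / s2
assert np.isclose(aic, 2 * 3 - 2 * loglik)
```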
14,503
Calculating AIC “by hand” in R
The AIC function gives $2k -2 \log L$, where $L$ is the likelihood & $k$ is the number of estimated parameters (including the intercept, & the variance). You're using $n \log \frac{S_{\mathrm{r}}}{n} + 2(k-1)$, where $S_{\mathrm{r}}$ is the residual sum of squares, & $n$ is the sample size. These formulæ differ by an additive constant; so long as you're using the same formula & looking at differences in AIC between different models where the constants cancel, it doesn't matter.
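That the two formulae differ only by an additive constant (for a fixed $n$) can be seen with a small numerical sketch (in Python for illustration; the function names are mine):

```python
import numpy as np

def aic_full(rss, n, k):
    """AIC with all constants: 2k - 2 log L = n[log(2*pi) + log(rss/n) + 1] + 2k."""
    return n * (np.log(2 * np.pi) + np.log(rss / n) + 1) + 2 * k

def aic_short(rss, n, k):
    """Shorthand n*log(rss/n) + 2(k-1), dropping the additive constants."""
    return n * np.log(rss / n) + 2 * (k - 1)

n = 50
# The difference is n*(log(2*pi) + 1) + 2, regardless of each model's RSS or k:
d1 = aic_full(12.3, n, 3) - aic_short(12.3, n, 3)
d2 = aic_full(45.6, n, 5) - aic_short(45.6, n, 5)
assert np.isclose(d1, d2)  # same constant for both models, so AIC differences agree
```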
14,504
Interpreting three forms of a "mixed model"
This may become clearer by writing out the model formula for each of these three models. Let $Y_{ij}$ be the observation for person $i$ in site $j$ in each model and define $A_{ij}, T_{ij}$ analogously to refer to the variables in your model.
glmer(counts ~ A + T, data=data, family="Poisson") is the model
$$ \log \big( E(Y_{ij}) \big) = \beta_0 + \beta_1 A_{ij} + \beta_2 T_{ij} $$
which is just an ordinary Poisson regression model.
glmer(counts ~ (A + T|Site), data=data, family="Poisson") is the model
$$ \log \big( E(Y_{ij}) \big) = \alpha_0 + \eta_{j0} + \eta_{j1} A_{ij} + \eta_{j2} T_{ij} $$
where $\eta_{j} = (\eta_{j0}, \eta_{j1}, \eta_{j2}) \sim N(0, \Sigma)$ are random effects that are shared by each observation made by individuals from site $j$. These random effects are allowed to be freely correlated (i.e. no restrictions are made on $\Sigma$) in the model you specified. To impose independence, you have to place them inside different brackets, e.g. (A-1|Site) + (T-1|Site) + (1|Site) would do it. This model assumes that $\log \big( E(Y_{ij}) \big)$ is $\alpha_0$ for all sites but each site has a random offset ($\eta_{j0}$) and has a random linear relationship with both $A_{ij}, T_{ij}$.
glmer(counts ~ A + T + (T|Site), data=data, family="Poisson") is the model
$$ \log \big( E(Y_{ij}) \big) = (\theta_0 + \gamma_{j0}) + \theta_1 A_{ij} + (\theta_2 + \gamma_{j1}) T_{ij} $$
So now $\log \big( E(Y_{ij}) \big)$ has some "average" relationship with $A_{ij}, T_{ij}$, given by the fixed effects $\theta_0, \theta_1, \theta_2$, but that relationship is different for each site, and those differences are captured by the random effects $\gamma_{j0}, \gamma_{j1}$. That is, the baseline is randomly shifted, the slope of $T_{ij}$ is randomly shifted, and everyone from the same site shares the same random shifts.
what is T? Is it a random effect? A fixed effect? What's actually being accomplished by putting T in both places?
$T$ is one of your covariates. It is not a random effect - Site is a random effect. There is a fixed effect of $T$ that is different depending on the random effect conferred by Site - $\gamma_{j1}$ in the model above. What is accomplished by including this random effect is to allow for heterogeneity between sites in the relationship between $T$ and $\log \big( E(Y_{ij}) \big)$.
When should something only appear in the random effects section of the model formula?
This is a matter of what makes sense in the context of the application.
Regarding the intercept - you should keep the fixed intercept in there for a lot of reasons (see, e.g., here); re: the random intercept, $\gamma_{j0}$, this primarily acts to induce correlation between observations made at the same site. If it doesn't make sense for such correlation to exist, then the random effect should be excluded.
Regarding the random slopes, a model with only random slopes and no fixed slopes reflects a belief that, for each site, there is some relationship between $\log \big( E(Y_{ij}) \big)$ and your covariates for each site, but if you average those effects over all sites, then there is no relationship. For example, if you had a random slope in $T$ but no fixed slope, this would be like saying that time, on average, has no effect (e.g. no secular trends in the data) but each Site is heading in a random direction over time, which could make sense. Again, it depends on the application.
Note that you can fit the model with and without random effects to see if this is happening - you should see no effect in the fixed model but significant random effects in the subsequent model. I must caution you that decisions like this are often better made based on an understanding of the application rather than through model selection.
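One way to internalize the third model is to simulate from its data-generating process. The following Python/NumPy sketch (all parameter values are made up; in practice you would fit such data with glmer in R) draws site-level random intercepts and T-slopes and then Poisson counts, exactly as the formula above describes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_per = 10, 50

# Fixed effects theta_0, theta_1, theta_2 (hypothetical values)
theta0, theta1, theta2 = 0.5, 0.3, -0.2

# Site-level random effects (gamma_j0, gamma_j1): correlated intercept and T-slope shifts
cov = [[0.25, 0.05], [0.05, 0.10]]
gamma = rng.multivariate_normal([0.0, 0.0], cov, size=n_sites)

site = np.repeat(np.arange(n_sites), n_per)   # site index j for each observation
A = rng.normal(size=n_sites * n_per)
T = rng.normal(size=n_sites * n_per)

# log E(Y_ij) = (theta0 + gamma_j0) + theta1*A + (theta2 + gamma_j1)*T
log_mu = (theta0 + gamma[site, 0]) + theta1 * A + (theta2 + gamma[site, 1]) * T
counts = rng.poisson(np.exp(log_mu))
```

Everyone at the same site shares the same `gamma[j]`, which is what induces the within-site correlation discussed above.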
14,505
Interpreting three forms of a "mixed model"
Note that T is not one of your model's random-effects terms but a fixed effect. Random effects are only those effects that appear after the | in an lmer formula!
You can find a more thorough discussion of what this specification does in this lmer FAQ question.
Following that question, your model should give the following (for your fixed effect T):
A global slope
A random slopes term specifying the deviation from the overall slope for each level of Site
The correlation between the random intercepts and random slopes.
And as said by @mark999 this indeed is a common specification. In repeated measures designs, you generally want to have random slopes and correlations for all repeated measures (within-subjects) factors.
See the following paper for some examples (which I tend to always cite here):
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. doi:10.1037/a0028347
14,506
Interpreting three forms of a "mixed model"
Something should appear only in the random part when you are not particularly interested in its parameter per se, but need to include it to account for dependence in the data. E.g., if children are nested in classes, you usually want children only as a random effect.
14,507
Is there a multiple-sample version or alternative to the Kolmogorov-Smirnov Test?
There actually are some multiple-sample KS tests, e.g. an $r$-sample Kolmogorov-Smirnov test with $r\geq 2$ which, I believe, has good power. A preprint of that beautiful paper is available here. I also know of K-sample analogues of the Kolmogorov-Smirnov and Cramér-von Mises tests (but they have less power as far as I know).
14,508
Is there a multiple-sample version or alternative to the Kolmogorov-Smirnov Test?
There is an R package kSamples that gives you, among other things, a non-parametric k-sample Anderson-Darling test. The null hypothesis is that all k samples came from the same distribution which does not need to be specified. Maybe you can use this.
A little example comparing Normal and Gamma-distributed samples, scaled so that they have the same mean and variance:
library("kSamples")
set.seed(142)
samp.num <- 100
alpha <- 2.0; theta <- 3.0 # Gamma parameters shape and scale, using Wikipedia notation
gam.mean <- alpha * theta # mean of the Gamma
gam.sd <- sqrt(alpha) * theta # S.D. of the Gamma
norm.data <- rnorm(samp.num, mean=gam.mean, sd=gam.sd) # Normal with the same mean and SD as the Gamma
gamma.data <- rgamma(samp.num, shape=alpha, scale=theta)
norm.data2 <- rnorm(samp.num, mean=gam.mean, sd=gam.sd)
norm.data3 <- rnorm(samp.num, mean=gam.mean, sd=gam.sd)
ad.same <- ad.test(norm.data,norm.data2,norm.data3) # "not significant, p ~ 0.459"
ad.diff <- ad.test(gamma.data,norm.data2,norm.data3) # "significant, p ~ 0.00066"
14,509
Is there a multiple-sample version or alternative to the Kolmogorov-Smirnov Test?
A couple of approaches:
Use the pairwise p-values but adjust them for multiple comparisons using something like the Bonferroni or False Discovery Rate adjustments (the first will probably be a bit overconservative). Then you can be confident that any that are still significantly different are probably not due to the multiple testing.
You could create an overall test in the flavor of KS by finding the greatest distance between any of the distributions, i.e. plot all the empirical cdfs and find the largest distance from the bottommost line to the topmost line (or maybe the average distance, or some other meaningful measure). Then you can find out whether that is significant by doing a permutation test: pool all the data into one big bin, then randomly split it into groups with the same sample sizes as your original groups, recompute the statistic on the permuted data, and repeat the process many times (999 or so). Then see how your original statistic compares to the permuted data sets. If the original statistic falls in the middle of the permuted ones, then no significant difference has been found; but if it is at the edge, or beyond any of the permuted ones, then something significant is going on (though this does not tell you which groups differ). You should probably try this out with simulated data, where you know there is a difference big enough to be interesting, just to check the power of this test to find the interesting differences.
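The second approach is straightforward to sketch in code. The following Python implementation (function names and the simulated groups are mine, for illustration) uses the largest vertical gap between the topmost and bottommost empirical CDFs as the statistic and permutes group membership to calibrate it:

```python
import numpy as np

def ecdf_spread(groups, grid):
    """Largest vertical gap between the topmost and bottommost empirical CDFs."""
    cdfs = np.array([np.searchsorted(np.sort(g), grid, side="right") / len(g)
                     for g in groups])
    return np.max(cdfs.max(axis=0) - cdfs.min(axis=0))

def perm_test(groups, n_perm=999, seed=0):
    """Permutation p-value for the max ECDF spread across k groups."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate(groups)
    sizes = [len(g) for g in groups]
    grid = np.sort(pooled)                      # evaluate CDFs at all data points
    observed = ecdf_spread(groups, grid)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # random regrouping with same sizes
        splits = np.split(pooled, np.cumsum(sizes)[:-1])
        if ecdf_spread(splits, grid) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
same = [rng.normal(size=60) for _ in range(3)]
diff = [rng.normal(size=60), rng.normal(size=60), rng.normal(2.0, 1.0, size=60)]

obs_d, p_diff = perm_test(diff, n_perm=199, seed=2)  # third group shifted: p should be small
obs_s, p_same = perm_test(same, n_perm=199, seed=2)
```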
14,510
What is the role of MDS in modern statistics?
In case you will accept a concise answer...
What questions does it answer? Visual mapping of pairwise dissimilarities in (mostly) Euclidean space of low dimensionality.
Which researchers are often interested in using it? Everyone who aims either to display clusters of points or to get some insight into possible latent dimensions along which points differentiate. Or who just wants to turn a proximity matrix into points × variables data.
Are there other statistical techniques which perform similar functions? PCA (linear, nonlinear), Correspondence analysis, Multidimensional unfolding (a version of MDS for rectangular matrices). They are related in different ways to MDS but are rarely seen as substitutes of it. (Linear PCA and CA are closely related linear algebra space-reducing operations on square and rectangular matrices, respectively. MDS and MDU are similar iterative generally nonlinear space-fitting algorithms on square and rectangular matrices, respectively.)
What theory is developed around it? The matrix of observed dissimilarities $S$ is transformed into disparities $T$ in such a way as to minimize the error $E$ of mapping the disparities by means of Euclidean distances $D$ in $m$-dimensional space: $S \rightarrow T \overset{m}{=} D + E$. The transformation can be required to be linear (metric MDS) or monotonic (non-metric MDS). $E$ could be absolute error or squared error or another stress function. You can obtain a map for a single matrix $S$ (classic or simple MDS) or a map for many matrices at once with an additional map of weights (individual-differences or weighted MDS). There are other forms as well, such as repeated MDS and generalized MDS. So, MDS is a diverse technique.
How does "MDS" relate to "SSA"? A note on this can be found on the Wikipedia page for MDS.
Update on the last point. This technote from SPSS leaves the impression that SSA is a case of Multidimensional unfolding (the PREFSCAL procedure in SPSS). The latter, as I've noted above, is the MDS algorithm applied to rectangular (rather than square symmetric) matrices.
14,511
What is the role of MDS in modern statistics?
@ttnphns has provided a good overview. I just want to add a couple of small things. Greenacre has done a good deal of work on Correspondence Analysis and how it is related to other statistical techniques (such as MDS, but also PCA and others), so you might want to take a look at his work (for example, this presentation may be helpful). In addition, MDS is typically used to make a plot (although it is possible to just extract some numerical information), and he has written a book on this general type of plot and put it on the web for free here (albeit only one chapter is about MDS plots per se). Lastly, in terms of typical use, it is very common in market research and product positioning, where researchers use it descriptively to understand how consumers think about the similarities between competing products; you don't want your product to be poorly differentiated from the rest.
14,512
|
What is the role of MDS in modern statistics?
|
One additional strength is that you can use MDS to analyze data for which you don't know the important variables or dimensions. The standard procedure for this would be: 1) have participants rank, sort, or directly identify similarity between objects; 2) convert the responses into a dissimilarity matrix; 3) apply MDS and, ideally, find a 2 or 3D model; 4) develop hypotheses about the dimensions structuring the map.
My personal opinion is that there are other dimension reduction tools that are usually better suited for that goal, but that what MDS provides is the opportunity to develop theories about the dimensions that are being used to organize judgments. It's important to also keep in mind the degree of stress (distortion that results from the dimension reduction) and incorporate that into your thinking.
I think one of the best books out on MDS is "Applied Multidimensional Scaling" by Borg, Groenen, & Mair (2013).
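Step 3 of that procedure can be sketched with classical (Torgerson) MDS. The sketch below is Python/numpy (not tied to any particular package), and the dissimilarity matrix is made up for illustration: it holds Euclidean distances among "planted" 2-D points, so a 2-D solution should recover them exactly up to rotation/reflection.

```python
import numpy as np

def classical_mds(D, k=2):
    # Torgerson's classical scaling: double-center the squared dissimilarities,
    # then embed using the k largest eigenpairs.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered squared dissimilarities
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # take the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Planted 2-D points stand in for objects; their pairwise distances play the
# role of judged dissimilarities.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, k=2)
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D_hat))  # True: the 2-D map reproduces the dissimilarities
```

With real judgment data the dissimilarities are not exactly Euclidean, so the reproduction is only approximate and the residual stress quantifies the distortion mentioned above.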
|
14,513
|
If gauge charts are bad, why do cars have gauges?
|
A (real) dashboard gauge needs to be: 1) physical, and 2) read quickly under circumstances that disturb concentration. In that sense, you want a low data-to-area ratio. Not to mention that when physical gauges were invented, digital (numeric) displays didn't exist so there was no real choice.
A software dashboard is not physical, and is not generally looked at in a pitching, moving vehicle with other vehicles whirring around it. So the effect of imitating a physical device doesn't buy you much.
EDIT: I'd also add that a physical dashboard only has a couple of key attributes to get across to you at (literally) a glance. A corporate dashboard needs to make a lot more detail visible, though of course things should be drawn/coded/organized in a way to also give a quick status.
That's part of the Tufte philosophy of dense detail in presentations that allow a broad view but also allow you to drill down. Your car's dashboard doesn't let you drill down, basically because there's no need to.
|
14,514
|
If gauge charts are bad, why do cars have gauges?
|
In supplement to Wayne's fine answer, Robert Kosara has a recent post on his Eager Eyes blog about the very topic, Data Display vs. Data Visualization. In addition to the point Wayne made, that the goals of real-time visualization vs. more static displays might call for different designs, he also mentions that gauges aren't very good for displaying multiple values. This is summed up nicely in his comment,
What you want to know is, how fast am I going right now? How much gas do I have left? What your speed was five minutes ago, or how much gas you had in your tank three hours ago, matters little.
So here is an obvious contrast between the goals of data visualization versus car gauges: we pretty much always want to see multiple data values! And circular car gauges are certainly a poor tool to do that. Sometimes we don't want to see multiple values though (a few circumstances are given in this question on the GIS site, What is the point of standard symbology?). And so we might expect to apply other rules for data visualization in such circumstances. The GIS post I mention uses very flashy symbols/icons for point patterns that attempt to encapsulate the nature of the event (and sometimes visualization techniques like blinking dots to focus attention).
What I find interesting is that the work of Cleveland on comparing angles is still pertinent to car gauges though, and hence we still might expect a linear scale for a car gauge to work better than the circular display. So I suspect there might be more historical context as to why circular gauges were chosen (they are compact?), and it certainly may be this historical inertia as to why they are popular.
This must be a popular topic lately, as the Visual.ly blog just came out with a post on the topic as well, Speedometer Design: Why It Works. In there they give credence to some of the things gung mentions in his post that I am somewhat critical about in the comments, in particular how we develop a gestalt for identifying locations around the circular display.
I think I'm partially coming around to this notion. A circular display provides more visual distinction between general areas than does a linear one. For a general example, it is easier to quickly tell the difference between a needle pointing to 3 o'clock and a needle pointing to 12 o'clock than it is to tell the difference between 15 and 12 on a linear scale.
I'm still not totally convinced though, and I say rubbish to the visual.ly blog post's notion that acceleration is easier to distinguish on a circular scale (or even that it is information we need the dashboard to inform us about anyway). Just my opinion though; I'm not sure any of us have been citing directly pertinent experimental results on human perception. Cleveland's is a start, but not likely to give an entirely satisfactory answer to these particular circumstances.
That being said, the multiple data values are still the main crux of the argument: circular displays aren't good for multiple data values.
|
14,515
|
If gauge charts are bad, why do cars have gauges?
|
There are great answers here. I also like @whuber's comment, especially "[o]ne big problem with angles is that the comparison may depend on how the angles are oriented". Let me throw out one quick note here: it's worth remembering that all car speedometers are oriented in the same way. (What I mean is they all run clockwise, and the physical location of the endpoints are in approximately the same position at the bottom.) In line with @Wayne's point about having to quickly glance at the gauges and then back to the busy road and still have extracted the relevant information, note that to encode magnitude via relative distance (a la Cleveland's dotplots, which I do like a lot), you have to encode the position of the dot and also the positions of both endpoints. With a gauge, you need only notice the angle of the needle, which you can still 'see' in your mind even a few seconds later while looking at the road again. Realize that you get very used to looking at your car's speedometer. Thus, interpreting this angle can become effortless. Moreover, because all car gauges are oriented the same way, it is easy to adapt to an otherwise unfamiliar car, although because the top speed listed can vary (as @cardinal notes) some period of adaptation can be required. On the other hand, although the endpoints would always be in the same place, it would be more difficult to become automatic at reading a horizontal position because your head will always be in a different position and thus the endpoints will be in a different position relative to your head. It is possible to overcome this by making the gauge larger so that the relative position of your head has less influence. In fact, 'linear' gauges were somewhat common in the 70's & early 80's (they were actually horizontal windows over a round gauge), and they generally took up half the dashboard. 
This will not be a problem for a gauge, though, unless you tilt your head to the side and try to read the speedometer, in which case, it would be harder to read!
|
14,516
|
If gauge charts are bad, why do cars have gauges?
|
Gauges are good if you need low resolution at a glance. Speedo, tach', oil temperature/pressure don't need single-digit resolution, and in a vehicle, you want to know if they are approximately right. An analog watch can be glanced at, and you know that it is about 10 minutes to 9. You don't (usually) need to know that it is 10 minutes 16 seconds to 9!
Virtual dashboards can very effectively indicate approximates, and add the option of switching modes to give higher resolution indicators in numeric form. This is particularly useful for pre-empting faults, such as logging trends in oil pressure in (light) aircraft.
|
14,517
|
Why do we use Gaussian distributions in Variational Autoencoder?
|
Normal distribution is not the only distribution used for latent variables in VAEs. There are also works using the von Mises-Fisher distribution (Hyperspherical VAEs [1]), and there are VAEs using Gaussian mixtures, which is useful for unsupervised [2] and semi-supervised [3] tasks.
Normal distribution has many nice properties, such as analytical evaluation of the KL divergence in the variational loss, and also we can use the reparametrization trick for efficient gradient computation (however, the original VAE paper [4] names many other distributions for which that works). Moreover, one of the apparent advantages of VAEs is that they allow generation of new samples by sampling in the latent space—which is quite easy if it follows a Gaussian distribution. Finally, as @shimao remarked, it does not matter so much what distribution latent variables follow, since using the non-linear decoder it can mimic an arbitrarily complicated distribution of observations. It is simply convenient.
As for the second question, I agree with @shimao's answer.
[1]: Davidson, T.R., Falorsi, L., De Cao, N., Kipf, T. and Tomczak, J.M., 2018. Hyperspherical variational auto-encoders. arXiv preprint arXiv:1804.00891.
[2]: Dilokthanakul, N., Mediano, P.A., Garnelo, M., Lee, M.C., Salimbeni, H., Arulkumaran, K. and Shanahan, M., 2016. Deep unsupervised clustering with gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648.
[3]: Kingma, D.P., Mohamed, S., Rezende, D.J. and Welling, M., 2014. Semi-supervised learning with deep generative models. In Advances in neural information processing systems (pp. 3581-3589).
[4]: Kingma, D.P. and Welling, M., 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
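The "analytical evaluation of the KL divergence" mentioned above has a well-known closed form for a diagonal-Gaussian posterior against a standard normal prior, $\mathrm{KL} = \tfrac12\sum_j(\sigma_j^2 + \mu_j^2 - 1 - \log\sigma_j^2)$ (derived in Appendix B of [4]). A quick numpy check of that formula:

```python
import numpy as np

def kl_to_std_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dims.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

print(kl_to_std_normal(np.zeros(3), np.zeros(3)))          # 0.0: posterior equals prior
print(kl_to_std_normal(np.array([1.0]), np.array([0.0])))  # 0.5
```

Because this term is exact, no Monte Carlo estimate of the KL part of the variational loss is needed — one of the conveniences of the Gaussian choice.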
|
14,518
|
Why do we use Gaussian distributions in Variational Autoencoder?
|
We use normal distribution because it is easily reparameterized. Also a sufficiently powerful decoder can map the normal distribution to any other distribution, so from a theoretical viewpoint, the exact choice is not important.
As for your second question, I would question your premise -- I am pretty sure weights are NOT normally distributed -- I recall seeing that ResNet weights follow a more Laplacian distribution. Anyway, that is a pretty unrelated matter to the choice of prior in a VAE.
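The reparameterization mentioned in the first paragraph can be sketched in a few lines of numpy: $z \sim \mathcal{N}(\mu, \sigma^2)$ is rewritten as a deterministic function of $(\mu, \log\sigma^2)$ plus parameter-free noise $\epsilon \sim \mathcal{N}(0, I)$, which is what lets gradients flow through the sampling step (the toy values below are illustrative only).

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps, eps ~ N(0, I): all the randomness lives in eps,
    # so z is a differentiable function of the parameters (mu, log_var).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu, log_var = np.array([2.0]), np.array([np.log(4.0)])
z = np.array([reparameterize(mu, log_var, rng) for _ in range(20000)])
print(z.mean(), z.var())  # approximately 2 and 4, as requested
```

In an actual VAE the same transformation is written with autograd tensors instead of numpy arrays, but the trick is identical.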
|
14,519
|
Why are there large coefficents for higher-order polynomial
|
This is a well known issue with high-order polynomials, known as Runge's phenomenon. Numerically it is associated with ill-conditioning of the Vandermonde matrix, which makes the coefficients very sensitive to small variations in the data and/or roundoff in the computations (i.e. the model is not stably identifiable). See also this answer on the SciComp SE.
There are many solutions to this problem, for example Chebyshev approximation, smoothing splines, and Tikhonov regularization.
Tikhonov regularization is a generalization of ridge regression, penalizing a norm $\|\Lambda \theta\|$ of the coefficient vector $\theta$, where for smoothing the weight matrix $\Lambda$ is some derivative operator. To penalize oscillations, you might use $\Lambda \theta=p^{\prime\prime}[x]$, where $p[x]$ is the polynomial evaluated at the data.
EDIT: The answer by user hxd1011 notes that some of the numerical ill-conditioning problems can be addressed using orthogonal polynomials, which is a good point. I would note however that the identifiability issues with high-order polynomials still remain. That is, numerical ill-conditioning is associated with sensitivity to "infinitesimal" perturbations (e.g. roundoff), while "statistical" ill-conditioning concerns sensitivity to "finite" perturbations (e.g. outliers; the inverse problem is ill-posed).
The methods mentioned in my second paragraph are concerned with this outlier sensitivity. You can think of this sensitivity as violation of the standard linear regression model, which by using an $L_2$ misfit implicitly assumes the data is Gaussian. Splines and Tikhonov regularization deal with this outlier sensitivity by imposing a smoothness prior on the fit. Chebyshev approximation deals with this by using an $L_{\infty}$ misfit applied over the continuous domain, i.e. not just at the data points. Though Chebyshev polynomials are orthogonal (w.r.t. a certain weighted inner product), I believe that if used with an $L_2$ misfit over the data they would still have outlier sensitivity.
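To make the Tikhonov idea concrete, here is a minimal Python/numpy sketch of the ridge special case ($\Lambda = I$) applied to a raw high-order polynomial fit; the synthetic data and the penalty weight are made up for illustration, and a derivative-operator $\Lambda$ would follow the same normal-equations pattern.

```python
import numpy as np

def poly_tikhonov(x, y, degree, lam):
    # Minimize ||X theta - y||^2 + lam * ||theta||^2 over the raw
    # (Vandermonde) polynomial basis; lam = 0 recovers ordinary least squares.
    X = np.vander(x, degree + 1, increasing=True)
    return np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

theta_ls = poly_tikhonov(x, y, 15, lam=0.0)      # ill-conditioned least squares
theta_ridge = poly_tikhonov(x, y, 15, lam=1e-3)  # Tikhonov / ridge
print(np.linalg.norm(theta_ls), np.linalg.norm(theta_ridge))  # the penalty shrinks the coefficients
```

Solving the normal equations directly is itself numerically fragile for large degrees; it is used here only to keep the sketch short.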
|
14,520
|
Why are there large coefficents for higher-order polynomial
|
The first thing you want to check is whether the author is talking about raw polynomials vs. orthogonal polynomials.
For orthogonal polynomials, the coefficients are not getting "larger".
Here are two examples of 2nd and 15th order polynomial expansion. First we show the coefficient for 2nd order expansion.
summary(lm(mpg ~ poly(wt, 2), mtcars))
Call:
lm(formula = mpg ~ poly(wt, 2), data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.483 -1.998 -0.773 1.462 6.238
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 20.0906 0.4686 42.877 < 2e-16 ***
poly(wt, 2)1 -29.1157 2.6506 -10.985 7.52e-12 ***
poly(wt, 2)2 8.6358 2.6506 3.258 0.00286 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.651 on 29 degrees of freedom
Multiple R-squared: 0.8191, Adjusted R-squared: 0.8066
F-statistic: 65.64 on 2 and 29 DF, p-value: 1.715e-11
Then we show 15th order.
summary(lm(mpg~poly(wt,15),mtcars))
Call:
lm(formula = mpg ~ poly(wt, 15), data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-5.3233 -0.4641 0.0072 0.6401 4.0394
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 20.0906 0.4551 44.147 < 2e-16 ***
poly(wt, 15)1 -29.1157 2.5743 -11.310 4.83e-09 ***
poly(wt, 15)2 8.6358 2.5743 3.355 0.00403 **
poly(wt, 15)3 0.2749 2.5743 0.107 0.91629
poly(wt, 15)4 -1.7891 2.5743 -0.695 0.49705
poly(wt, 15)5 1.8797 2.5743 0.730 0.47584
poly(wt, 15)6 -2.8354 2.5743 -1.101 0.28702
poly(wt, 15)7 2.5613 2.5743 0.995 0.33459
poly(wt, 15)8 1.5772 2.5743 0.613 0.54872
poly(wt, 15)9 -5.2412 2.5743 -2.036 0.05866 .
poly(wt, 15)10 -2.4959 2.5743 -0.970 0.34672
poly(wt, 15)11 2.5007 2.5743 0.971 0.34580
poly(wt, 15)12 2.4263 2.5743 0.942 0.35996
poly(wt, 15)13 -2.0134 2.5743 -0.782 0.44559
poly(wt, 15)14 3.3994 2.5743 1.320 0.20525
poly(wt, 15)15 -3.5161 2.5743 -1.366 0.19089
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.574 on 16 degrees of freedom
Multiple R-squared: 0.9058, Adjusted R-squared: 0.8176
F-statistic: 10.26 on 15 and 16 DF, p-value: 1.558e-05
Note that we are using orthogonal polynomials, so the lower-order coefficients are exactly the same as the corresponding terms in the higher-order results. For example, the intercept and the coefficient for first order are 20.09 and -29.11 for both models.
On the other hand, if we use raw expansion, such a thing will not happen. And we will have large and sensitive coefficients! In the following example, we can see the coefficients are around the $10^6$ level.
summary(lm(mpg ~ poly(wt, 15, raw=T), mtcars))
Call:
lm(formula = mpg ~ poly(wt, 15, raw = T), data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-5.6217 -0.7544 0.0306 1.1678 5.4308
Coefficients: (3 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.287e+05 9.991e+05 0.629 0.537
poly(wt, 15, raw = T)1 -2.713e+06 4.195e+06 -0.647 0.526
poly(wt, 15, raw = T)2 5.246e+06 7.893e+06 0.665 0.514
poly(wt, 15, raw = T)3 -6.001e+06 8.784e+06 -0.683 0.503
poly(wt, 15, raw = T)4 4.512e+06 6.427e+06 0.702 0.491
poly(wt, 15, raw = T)5 -2.340e+06 3.246e+06 -0.721 0.480
poly(wt, 15, raw = T)6 8.537e+05 1.154e+06 0.740 0.468
poly(wt, 15, raw = T)7 -2.184e+05 2.880e+05 -0.758 0.458
poly(wt, 15, raw = T)8 3.809e+04 4.910e+04 0.776 0.447
poly(wt, 15, raw = T)9 -4.212e+03 5.314e+03 -0.793 0.438
poly(wt, 15, raw = T)10 2.382e+02 2.947e+02 0.809 0.429
poly(wt, 15, raw = T)11 NA NA NA NA
poly(wt, 15, raw = T)12 -5.642e-01 6.742e-01 -0.837 0.413
poly(wt, 15, raw = T)13 NA NA NA NA
poly(wt, 15, raw = T)14 NA NA NA NA
poly(wt, 15, raw = T)15 1.259e-04 1.447e-04 0.870 0.395
Residual standard error: 2.659 on 19 degrees of freedom
Multiple R-squared: 0.8807, Adjusted R-squared: 0.8053
F-statistic: 11.68 on 12 and 19 DF, p-value: 2.362e-06
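One way to see where the singularities and the huge, sensitive coefficients come from is that high powers of wt are nearly collinear. A minimal sketch (in Python, pure standard library; the grid of x values roughly mimics the range of mtcars$wt) shows the near-perfect correlation between adjacent powers:

```python
import math

# Evaluate wt^7 and wt^8 on a grid roughly spanning the range of mtcars$wt.
xs = [1.5 + 2.5 * i / 99 for i in range(100)]   # x in [1.5, 4.0]
a = [x ** 7 for x in xs]
b = [x ** 8 for x in xs]

def pearson(u, v):
    # Plain Pearson correlation, implemented from the definition.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    su = math.sqrt(sum((ui - mu) ** 2 for ui in u))
    sv = math.sqrt(sum((vi - mv) ** 2 for vi in v))
    return cov / (su * sv)

r = pearson(a, b)
print(round(r, 4))  # extremely close to 1
```

With columns this close to collinear, the least-squares solution is nearly unidentified, which is exactly why the raw fit produces enormous coefficients and drops some terms as NA.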
|
14,521
|
Why are there large coefficients for higher-order polynomials
|
Abhishek,
you are right that improving the precision of the coefficients will improve accuracy.
We see that, as M increases, the magnitude of the coefficients typically gets larger. In particular for the M = 9 polynomial, the coefficients have become finely tuned to the data by developing large positive and negative values so that the corresponding polynomial function matches each of the data points exactly, but between data points (particularly near the ends of the range) the function exhibits large oscillations.
I think the magnitude issue is rather irrelevant to Bishop's overall point - that using a complicated model on limited data leads to 'overfitting'.
In his example, 10 data points are used to estimate a degree-9 polynomial (i.e. 10 equations in 10 unknowns, so the fit interpolates the data exactly).
If we fit a sine wave (no noise), then the fit works perfectly, since sine waves [over a fixed interval] can be approximated with arbitrary accuracy by polynomials. However, in Bishop's example we have a certain amount of 'noise' that we should not fit. The way we avoid this is by keeping the ratio of the number of data points to the number of model parameters (polynomial coefficients) large, or by using regularisation.
Regularisation imposes 'soft' constraints on the model. In ridge regression, for example, the cost function you try to minimise is a combination of the fitting error and the model complexity, where the complexity is measured by the sum of squared coefficients. In effect this imposes a cost on reducing the error: increasing the coefficients is only allowed if it yields a large enough reduction in the fitting error (how large is "large enough" is specified by a multiplier on the model-complexity term). The hope is therefore that, by choosing the multiplier appropriately, we will not fit the additional small noise term, since the improvement in fit does not justify the increase in coefficients.
You asked why large coefficients improve the quality of the fit. Essentially, the reason is that the function estimated (sin + noise) is not a polynomial, and the large changes in curvature required to approximate the noise effect with polynomials require large coefficients.
Note that using orthogonal polynomials has no effect (I have added an offset of 0.1 just so that the orthogonal and raw polynomial curves are not on top of each other).
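In symbols, the ridge cost function just described can be written as
$$\hat{\boldsymbol{\beta}}_{\text{ridge}} = \operatorname*{arg\,min}_{\boldsymbol{\beta}} \; \sum_{i=1}^n \big(y_i - \mathbf{x}_i^\intercal \boldsymbol{\beta}\big)^2 + \lambda \sum_{j=1}^M \beta_j^2,$$
where the sum in the penalty runs over the (non-intercept) polynomial coefficients and the multiplier $\lambda \ge 0$ sets the price of model complexity: $\lambda = 0$ recovers ordinary least squares, while larger $\lambda$ shrinks the coefficients toward zero.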
library(penalized)  # provides penalized() for ridge-penalized regression
set.seed(1)         # fix the simulated noise so the plot is reproducible
poly_order <- 9
x_long <- seq(0, 1, length.out = 100)
nx <- 10
x <- seq(0, 1, length.out = nx)
noise <- rnorm(nx, 0, 1)
noise_scale <- 0.2
y <- sin(2*pi*x) + noise_scale*noise
training_data <- data.frame(x=x, y=y)
y_long <- sin(2*pi*x_long)
plot(x, y, col ='blue', ylim=c(-1.5,1.5))
lines(x_long, y_long, col='green')
polyfit_raw <- lm(y ~ poly(x, poly_order, raw=TRUE),
data=training_data)
summary(polyfit_raw)
polyfit_raw_ridge1 <- penalized(y, ~poly(x, poly_order, raw=TRUE),
model='linear', data=training_data, lambda2=0.0001,
maxiter=10000, standardize=TRUE)
polyfit_orthog <- lm(y ~ poly(x, poly_order), data=training_data)
summary(polyfit_orthog)
pred_raw <- predict(polyfit_raw, data.frame(x=x_long))
pred_ortho <- predict(polyfit_orthog, data.frame(x=x_long))
pred_raw_ridge <- predict(polyfit_raw_ridge1,
data=data.frame(x=x_long))[,'mu']
lines(x_long, pred_raw, col='red')
# add 0.1 offset to make visible
lines(x_long, pred_ortho+0.1, col='black')
lines(x_long, pred_raw_ridge, col='purple')
legend("bottomleft", legend=c('data sin(2 pi x) +
noise','sin(2 pi x)', 'raw poly', 'orthog poly + 0.1
offset', 'raw poly + ridge regression'),
fill=c('blue', 'green', 'red', 'black', 'purple'))
|
14,522
|
In a GLM, is the log likelihood of the saturated model always zero?
|
If you really meant log-likelihood, then the answer is: it's not always zero.
For example, consider Poisson data: $y_i \sim \text{Poisson}(\mu_i), i = 1, \ldots, n$. The log-likelihood for $Y = (y_1, \ldots, y_n)$ is given by:
$$\ell(\mu; Y) = -\sum_{i = 1}^n \mu_i + \sum_{i = 1}^n y_i \log \mu_i - \sum_{i = 1}^n \log(y_i!). \tag{$*$}$$
Differentiate $\ell(\mu; Y)$ in $(*)$ with respect to $\mu_i$ and set it to $0$ (this is how we obtain the MLE for saturated model):
$$-1 + \frac{y_i}{\mu_i} = 0.$$
Solve this for $\mu_i$ to get $\hat{\mu}_i = y_i$, substituting $\hat{\mu}_i$ back into $(*)$ for $\mu_i$ gives that the log-likelihood of the saturated model is:
$$\ell(\hat{\mu}; Y) = \sum_{i = 1}^n y_i(\log y_i - 1) -\sum_{i = 1}^n \log(y_i!) \neq 0$$
unless $y_i$ take very special values.
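A quick numerical check (a sketch in Python; the counts below are arbitrary) that this saturated log-likelihood is generally non-zero:

```python
import math

# Illustrative Poisson observations (any non-negative counts work).
y = [2, 0, 5, 1, 3]

# Saturated-model log-likelihood: sum_i [ y_i (log y_i - 1) - log(y_i!) ],
# with the convention y log y = 0 at y = 0; log(y!) = lgamma(y + 1).
ll_sat = sum((yi * (math.log(yi) - 1) if yi > 0 else 0.0)
             - math.lgamma(yi + 1) for yi in y)
print(ll_sat)  # nonzero for generic counts
```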
In the help page of the R function glm, under the item deviance, the document explains this issue as follows:
deviance
up to a constant, minus twice the maximized log-likelihood. Where sensible, the constant is chosen so that a saturated model has deviance zero.
Notice that it mentioned that the deviance, instead of the log-likelihood of the saturated model is chosen to be zero.
Probably, what you really wanted to confirm is that "the deviance of the saturated model is always given as zero", which is true, since the deviance, by definition (see Section 4.5.1 of Categorical Data Analysis (2nd Edition) by Alan Agresti) is the likelihood ratio statistic of a specified GLM to the saturated model. The constant aforementioned in the R documentation is actually twice the maximized log-likelihood of the saturated model.
Regarding your statement "Yet, the way the formula for deviance is given suggests that sometimes this quantity is non zero.", it is probably due to the abuse of usage of the term deviance. For instance, in R, the likelihood ratio statistic of comparing two arbitrary (nested) models $M_1$ and $M_2$ is also referred to as deviance, which would be more precisely termed as the difference between the deviance of $M_1$ and the deviance of $M_2$, if we closely followed the definition as given in Agresti's book.
Conclusion
The log-likelihood of the saturated model is in general non-zero.
The deviance (in its original definition) of the saturated model is zero.
The deviance output from software (such as R) is in general non-zero, as it actually means something else (a difference between deviances).
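To make the distinction concrete, here is a sketch (in Python, with made-up counts) of the deviance of an intercept-only Poisson model against the saturated model, using the standard Poisson deviance $D = 2\sum_i [y_i \log(y_i/\hat{\mu}_i) - (y_i - \hat{\mu}_i)]$:

```python
import math

y = [2, 0, 5, 1, 3]              # illustrative counts
mu_hat = sum(y) / len(y)         # intercept-only Poisson MLE: every mu_i = y-bar

# Poisson deviance against the saturated model (y log(y/mu) = 0 at y = 0).
deviance = 2 * sum((yi * math.log(yi / mu_hat) if yi > 0 else 0.0)
                   - (yi - mu_hat) for yi in y)
print(round(deviance, 4))
```

Plugging the saturated fit $\hat{\mu}_i = y_i$ into the same formula gives exactly zero, while any smaller model yields a positive deviance, as here.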
The following is the derivation for the general exponential-family case, together with another concrete example. Suppose the data come from an exponential family (see Modern Applied Statistics with S, Chapter $7$):
$$f(y_i; \theta_i, \varphi) = \exp[A_i(y_i\theta_i - \gamma(\theta_i))/\varphi + \tau(y_i, \varphi/A_i)]. \tag{1}$$
where $A_i$ are known prior weights and $\varphi$ is the dispersion/scale parameter (for many cases such as binomial and Poisson, this parameter is known, while for other cases such as normal and Gamma, this parameter is unknown). Then the log-likelihood is given by:
$$\ell(\theta, \varphi; Y) = \sum_{i = 1}^n A_i(y_i \theta_i - \gamma(\theta_i))/\varphi + \sum_{i = 1}^n \tau(y_i, \varphi/A_i). $$
As in the Poisson example, the saturated model's parameters can be estimated by solving the following score function:
$$0 = U(\theta_i) = \frac{\partial \ell(\theta, \varphi; Y)}{\partial \theta_i} = \frac{A_i(y_i - \gamma'(\theta_i))}{\varphi}$$
Denote the solution of the above equation by $\hat{\theta}_i$, then the general form of the log-likelihood of the saturated model (treat the scale parameter as constant) is:
$$\ell(\hat{\theta}, \varphi; Y) = \sum_{i = 1}^n A_i(y_i \hat{\theta}_i - \gamma(\hat{\theta}_i))/\varphi + \sum_{i = 1}^n \tau(y_i, \varphi/A_i). \tag{$**$}$$
In my previous answer, I incorrectly stated that the first term on the right side of $(**)$ is always zero; the above Poisson data example shows this is wrong. For a more complicated example, consider the Gamma distribution $\Gamma(\alpha, \beta)$ worked out below.
Proof that the first term in the log-likelihood of the saturated Gamma model is non-zero: Given
$$f(y; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}e^{-\beta y}y^{\alpha - 1}, \quad y > 0, \alpha > 0, \beta > 0,$$
we must do reparameterization first so that $f$ has the exponential family form $(1)$. It can be verified if letting
$$\varphi = \frac{1}{\alpha},\, \theta = -\frac{\beta}{\alpha},$$
then $f$ has the representation:
$$f(y; \theta, \varphi) = \exp\left[\frac{\theta y - (-\log(-\theta))}{\varphi}+ \tau(y, \varphi)\right],$$
where
$$\tau(y, \varphi) = -\frac{\log \varphi}{\varphi} + \left(\frac{1}{\varphi} - 1\right)\log y - \log\Gamma(\varphi^{-1}).$$
Therefore, the MLEs of the saturated model are $\hat{\theta}_i = -\frac{1}{y_i}$.
Hence
$$\sum_{i = 1}^n \frac{1}{\varphi}[\hat{\theta}_iy_i - (-\log(-\hat{\theta}_i))] = \sum_{i = 1}^n \frac{1}{\varphi}[-1 - \log(y_i)] \neq 0, $$
unless $y_i$ take very special values.
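As a sanity check (a Python sketch with an arbitrary observation), grid-searching the per-observation term $\theta y - (-\log(-\theta)) = \theta y + \log(-\theta)$ confirms the maximizer $\hat{\theta} = -1/y$:

```python
import math

y = 2.5  # illustrative Gamma observation

def g(theta):
    # Per-observation saturated log-likelihood term (phi treated as constant),
    # defined for theta < 0.
    return theta * y + math.log(-theta)

# Grid-search the maximizer; theory says theta_hat = -1/y = -0.4.
thetas = [-5 + 4.99 * i / 9999 for i in range(10000)]  # theta in (-5, -0.01]
theta_hat = max(thetas, key=g)
print(round(theta_hat, 2))  # close to -0.4
```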
|
14,523
|
In a GLM, is the log likelihood of the saturated model always zero?
|
Zhanxiong's answer is already great (+1), but here's a quick demonstration that the log-likelihood of the saturated model is $0$ for a logistic regression. I figured I would post because I haven't seen this TeX'd up on this site, and because I just wrote these up for a lecture.
The likelihood is
$$
L(\mathbf{y} ; \mathbf{X}, \boldsymbol{\beta}) = \prod_{i=1}^n f(y_i ; \mathbf{x}_i, \boldsymbol{\beta}) = \prod_{i=1}^n \pi_i^{y_i}(1-\pi_i)^{1-y_i} = \prod_{i=1}^n\left( \frac{\pi_i}{1-\pi_i}\right)^{y_i} (1 - \pi_i) \tag{1}
$$
where $\pi_i = \text{invlogit}(\mathbf{x}_i^\intercal \boldsymbol{\beta} )$.
The log-likelihood is
\begin{align*}
\log L(\mathbf{y} ; \mathbf{X}, \boldsymbol{\beta}) &= \sum_{i=1}^n y_i \log \left( \frac{\pi_i}{1-\pi_i}\right) + \log(1-\pi_i) \\
&= \sum_{i=1}^n y_i \text{logit} \left( \pi_i \right) + \log(1-\pi_i) \\
&= \sum_{i=1}^n y_i \mathbf{x}_i^\intercal \boldsymbol{\beta} + \log( 1 - \text{invlogit}(\mathbf{x}_i^\intercal \boldsymbol{\beta} )) \\
&= \sum_{i=1}^n y_i \mathbf{x}_i^\intercal \boldsymbol{\beta} + \log( \text{invlogit}( - \mathbf{x}_i^\intercal \boldsymbol{\beta} )) \\
&= \sum_{i=1}^n y_i \mathbf{x}_i^\intercal \boldsymbol{\beta} - \log( 1 + \exp[ \mathbf{x}_i^\intercal \boldsymbol{\beta}] )
\end{align*}
If you take the derivatives with respect to all of the coefficients you get
$$
\nabla \ell(\boldsymbol{\beta}) = \sum_{i=1}^n y_i \mathbf{x}_i - \frac{\exp[ \mathbf{x}_i^\intercal \boldsymbol{\beta}]}{( 1 + \exp[ \mathbf{x}_i^\intercal \boldsymbol{\beta}] ) }\mathbf{x}_i \tag{2}.
$$
Setting this expression equal to $\mathbf{0}$ and solving for $\boldsymbol{\beta}$ will give you your answer. Usually this can't be done analytically, which explains the popularity/necessity of using iterative algorithms to fit this model, but in the case of a saturated model, it is possible.
To find the saturated model, we give each row its own coefficient. So $\boldsymbol{\beta} \in \mathbb{R}^n$ and the design matrix times the coefficient vector is
$$
\mathbf{X}\boldsymbol{\beta} =
\begin{bmatrix}
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1\\
\end{bmatrix}
\begin{bmatrix}
\beta_1 \\
\beta_2 \\
\vdots \\
\beta_n
\end{bmatrix}.
$$
Note that in particular, $\mathbf{x}_i^\intercal \boldsymbol{\beta} = \beta_i$.
So taking the $j$th row of equation (2) gives us
$$
\sum_{i=1}^n y_i x_{i,j} = \sum_{i=1}^n\frac{\exp[ \mathbf{x}_i^\intercal \boldsymbol{\beta}]}{( 1 + \exp[ \mathbf{x}_i^\intercal \boldsymbol{\beta}] ) }x_{i,j}
$$
which can only be true if for each observation $i$:
$$
y_i = \text{invlogit}(\beta_i )
$$
or in other words each $\beta_i$ is plus or minus infinity (if $y_i$ is $1$ or $0$, respectively). We can plug these parameters back into (1) to get the maximized likelihood:
$$
\prod_{i=1}^n \hat{\pi}_i^{y_i}(1-\hat{\pi}_i)^{1-y_i} = 1^n = 1.
$$
Clearly the log of this is $0$.
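A quick numerical illustration of that limit (a Python sketch; the binary responses are made up): as the per-row coefficients grow in magnitude, the saturated log-likelihood climbs toward $0$.

```python
import math

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

y = [1, 0, 0, 1, 1]  # illustrative binary responses

# Saturated model: one coefficient per row, pushed toward +/- infinity
# according to whether y_i is 1 or 0.
for B in (5.0, 15.0, 30.0):
    beta = [B if yi == 1 else -B for yi in y]
    pi = [invlogit(b) for b in beta]
    ll = sum(yi * math.log(p) + (1 - yi) * math.log(1 - p)
             for yi, p in zip(y, pi))
    print(B, ll)  # log-likelihood approaches 0 from below
```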
|
14,524
|
In a GLM, is the log likelihood of the saturated model always zero?
|
@Alex: yes, that's right, at least for discrete distributions. For continuous distributions, it would come down to letting the density equal 1, which is not necessarily meaningful and therefore not a sensible thing to try to achieve. Slightly more generally, the log-likelihood of the saturated model gives you an upper bound on the performance of any model that follows your assumption about the underlying distribution family. In other words, the log-likelihood of a saturated binomial model is "as good as it gets" for the given data set (X, Y), assuming Y is binomial. It makes sense to compare your GLM to this upper bound rather than to, say, 100% (or similar), since your model is inherently constrained by your assumption on the response distribution. The deviance as defined by @Zhanxiong therefore gives you a good idea of how well your model performs with respect to the inherent limitations that come from assuming a certain response type.
|
14,525
|
Antonym of variance
|
$1/\sigma^2$ is called the precision. You can find it often mentioned in Bayesian software manuals for BUGS and JAGS, where it is used as a parameter of the normal distribution instead of the variance. It became popular because the gamma distribution can be used as a conjugate prior for the precision of a normal distribution, as noted by Kruschke (2014) and @Scortchi.
Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. Academic Press, p. 454.
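As a quick numerical illustration (plain Python, nothing BUGS/JAGS-specific), one can parameterize a normal by its precision $\tau = 1/\sigma^2$ and recover it from simulated draws:

```python
import random
import statistics

random.seed(0)
tau = 4.0                       # precision
sigma = (1.0 / tau) ** 0.5      # implied standard deviation, here 0.5
draws = [random.gauss(0.0, sigma) for _ in range(100_000)]
tau_hat = 1.0 / statistics.variance(draws)  # should be close to 4
```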
|
14,526
|
Generate data samples from Poisson regression
|
The Poisson regression model assumes a Poisson distribution for $Y$ and uses the $\log$ link function. So, for a single explanatory variable $x$, it is assumed that $Y \sim P(\mu)$ (so that $E(Y) = V(Y) = \mu$) and that $\log(\mu) = \beta_0 + \beta_1 x$. Generating data according to that model follows easily. Here is an example which you can adapt to your own scenario.
> #sample size
> n <- 10
> #regression coefficients
> beta0 <- 1
> beta1 <- 0.2
> #generate covariate values
> x <- runif(n=n, min=0, max=1.5)
> #compute mu's
> mu <- exp(beta0 + beta1 * x)
> #generate Y-values
> y <- rpois(n=n, lambda=mu)
> #data set
> data <- data.frame(y=y, x=x)
> data
y x
1 4 1.2575652
2 3 0.9213477
3 3 0.8093336
4 4 0.6234518
5 4 0.8801471
6 8 1.2961688
7 2 0.1676094
8 2 1.1278965
9 1 1.1642033
10 4 0.2830910
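A rough Python analogue of the R simulation above, assuming only the standard library is available (the small rpois helper uses Knuth's method, which is fine for small means; all names are illustrative):

```python
import math
import random

random.seed(42)
n, beta0, beta1 = 10, 1.0, 0.2

def rpois(lam):
    # Knuth's algorithm for Poisson draws; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

x = [random.uniform(0.0, 1.5) for _ in range(n)]
mu = [math.exp(beta0 + beta1 * xi) for xi in x]  # log link
y = [rpois(m) for m in mu]
```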
|
14,527
|
Generate data samples from Poisson regression
|
If you wanted to generate a data set that fit the model perfectly you could do something like this in R:
# y <- exp(B0 + B1 * x1 + B2 * x2)
set.seed(1234)
B0 <- 1.2 # intercept
B1 <- 1.5 # slope for x1
B2 <- -0.5 # slope for x2
y <- rpois(100, 6.5)
x2 <- seq(-0.5, 0.5,,length(y))
x1 <- (log(y) - B0 - B2 * x2) / B1  # note: log(y) needs y > 0; rpois() can return 0
my.model <- glm(y ~ x1 + x2, family = poisson(link = log))
summary(my.model)
Which returns:
Call:
glm(formula = y ~ x1 + x2, family = poisson(link = log))
Deviance Residuals:
Min 1Q Median 3Q Max
-2.581e-08 -1.490e-08 0.000e+00 0.000e+00 4.215e-08
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 1.20000 0.08386 14.309 < 2e-16 ***
x1 1.50000 0.16839 8.908 < 2e-16 ***
x2 -0.50000 0.14957 -3.343 0.000829 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 8.8619e+01 on 99 degrees of freedom
Residual deviance: 1.1102e-14 on 97 degrees of freedom
AIC: 362.47
Number of Fisher Scoring iterations: 3
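The trick above, back-solving x1 from y so that the log link holds exactly, can be checked in a few lines of plain Python (counts and coefficients here are arbitrary; note that log(y) requires every y to be strictly positive, which is why a rpois() draw of 0 would break it):

```python
import math

B0, B1, B2 = 1.2, 1.5, -0.5
y = [4, 3, 6, 7, 5, 8, 2, 6]                 # any strictly positive counts work
n = len(y)
x2 = [-0.5 + i / (n - 1) for i in range(n)]  # like seq(-0.5, 0.5, length.out = n)
x1 = [(math.log(yi) - B0 - B2 * b) / B1 for yi, b in zip(y, x2)]

# Reconstruct the mean: should reproduce y exactly (up to float error)
check = [math.exp(B0 + B1 * a + B2 * b) for a, b in zip(x1, x2)]
max_err = max(abs(c - yi) for c, yi in zip(check, y))
```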
|
14,528
|
What are chunk tests?
|
@mark999 provided an excellent answer. In addition to jointly testing polynomial terms, you can jointly test ("chunk test") any set of variables. Suppose you had a model with competing collinear variables tricep circumference, waist, hip circumference, all measurements of body size. To get an overall body size chunk test, you could do
require(rms)
f <- ols(y ~ age + tricep + waist + pol(hip, 2))
anova(f, tricep, waist, hip) # 4 d.f. test
You can get the same test by fitting a model containing only age (if there are no NAs in tricep, waist, hip) and doing the "difference in $R^2$ test". These equivalent tests do not suffer from even extreme collinearity among the three variables.
|
14,529
|
What are chunk tests?
|
Macro's comment is correct, as is Andy's. Here's an example.
> library(rms)
>
> set.seed(1)
> d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
> d <- within(d, y <- 1 + 2*x1 + 0.3*x2 + 0.2*x2^2 + rnorm(50))
>
> ols1 <- ols(y ~ x1 + pol(x2, 2), data=d) # pol(x2, 2) means include x2 and x2^2 terms
> ols1
Linear Regression Model
ols(formula = y ~ x1 + pol(x2, 2), data = d)
Model Likelihood Discrimination
Ratio Test Indexes
Obs 50 LR chi2 79.86 R2 0.798
sigma 0.9278 d.f. 3 R2 adj 0.784
d.f. 46 Pr(> chi2) 0.0000 g 1.962
Residuals
Min 1Q Median 3Q Max
-1.7463 -0.4789 -0.1221 0.4465 2.2054
Coef S.E. t Pr(>|t|)
Intercept 0.8238 0.1654 4.98 <0.0001
x1 2.0214 0.1633 12.38 <0.0001
x2 0.2915 0.1500 1.94 0.0581
x2^2 0.2242 0.1163 1.93 0.0602
> anova(ols1)
Analysis of Variance Response: y
Factor d.f. Partial SS MS F P
x1 1 131.894215 131.8942148 153.20 <.0001
x2 2 10.900163 5.4500816 6.33 0.0037
Nonlinear 1 3.196552 3.1965524 3.71 0.0602
REGRESSION 3 156.011447 52.0038157 60.41 <.0001
ERROR 46 39.601647 0.8609054
Instead of considering the x2 and x2^2 terms separately, the "chunk test" is the 2-df test which tests the null hypothesis that the coefficients of those terms are both zero (I believe it's more commonly called something like a "general linear F-test"). The p-value for that test is the 0.0037 given by anova(ols1).
Note that in the rms package, you have to specify the x2 terms as pol(x2, 2) for anova.rms() to know that they are to be tested together.
anova.rms() will do similar tests for predictor variables which are represented as restricted cubic splines using, for example, rcs(x2, 3), and for categorical predictor variables. It will also include interaction terms in the "chunks".
If you wanted to do a chunk test for general "competing" predictor variables, as mentioned in the quote, I believe you would have to do it manually by fitting the two models separately and then using anova(model1, model2). [Edit: this is incorrect - see Frank Harrell's answer.]
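For readers without rms at hand, here is a plain-Python sketch of the same 2-df chunk test done by hand: fit the full and reduced models by least squares and form the general linear F statistic from their residual sums of squares (the data are simulated here, mirroring the example above; everything below is illustrative, not a replacement for anova.rms()):

```python
import random

random.seed(1)

def ols_rss(X, y):
    # Solve the normal equations X'X b = X'y by Gauss-Jordan elimination,
    # then return the residual sum of squares. Fine for tiny, well-posed fits.
    n, p = len(y), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         + [sum(X[i][j] * y[i] for i in range(n))] for j in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    b = [A[j][p] / A[j][j] for j in range(p)]
    return sum((yi - sum(bj * xj for bj, xj in zip(b, row))) ** 2
               for row, yi in zip(X, y))

n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [1 + 2 * a + 0.3 * b + 0.2 * b * b + random.gauss(0, 1)
     for a, b in zip(x1, x2)]

full = [[1.0, a, b, b * b] for a, b in zip(x1, x2)]  # intercept, x1, x2, x2^2
reduced = [[1.0, a] for a in x1]                     # drop the x2 "chunk"
rss_f, rss_r = ols_rss(full, y), ols_rss(reduced, y)
q, df_res = 2, n - 4                                 # 2 restrictions tested
F = ((rss_r - rss_f) / q) / (rss_f / df_res)         # compare to F(2, n - 4)
```

A large F rejects the null that the x2 and x2^2 coefficients are both zero, which is exactly what the 2-df line of anova(ols1) reports.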
|
14,530
|
Interpreting discrepancies between R and SPSS with exploratory factor analysis
|
First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of each of these steps should help you to pinpoint the problem.
Specifically you can run
FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT CORRELATION
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE.
to see the correlation matrix SPSS is using to carry out the factor analysis. Then, in R, prepare the correlation matrix yourself by running
r <- cor(data)
Any discrepancy in the way missing values are handled should be apparent at this stage. Once you have checked that the correlation matrix is the same, you can feed it to the fa function and run your analysis again:
fa.results <- fa(r, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)
If you still get different results in SPSS and R, the problem is not missing values-related.
Next, you can compare the results of the factor analysis/extraction method itself.
FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT EXTRACTION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE.
and
fa.results <- fa(r, nfactors=6, rotate="none",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25)
Again, compare the factor matrices/communalities/sum of squared loadings. Here you can expect some tiny differences but certainly not of the magnitude you describe. All this would give you a clearer idea of what's going on.
Now, to answer your three questions directly:
In my experience, it's possible to obtain very similar results, sometimes after spending some time figuring out the different terminologies and fiddling with the parameters. I have had several occasions to run factor analyses in both SPSS and R (typically working in R and then reproducing the analysis in SPSS to share it with colleagues) and always obtained essentially the same results. I would therefore generally not expect large differences, which leads me to suspect the problem might be specific to your data set. I did however quickly try the commands you provided on a data set I had lying around (it's a Likert scale) and the differences were in fact bigger than I am used to but not as big as those you describe. (I might update my answer if I get more time to play with this.)
Most of the time, people interpret the sum of squared loadings after rotation as the “proportion of variance explained” by each factor but this is not meaningful following an oblique rotation (which is why it is not reported at all in psych and SPSS only reports the eigenvalues in this case – there is even a little footnote about it in the output). The initial eigenvalues are computed before any factor extraction. Obviously, they don't tell you anything about the proportion of variance explained by your factors and are not really “sum of squared loadings” either (they are often used to decide on the number of factors to retain). SPSS “Extraction Sums of Squared Loadings” should however match the “SS loadings” provided by psych.
This is a wild guess at this stage but have you checked if the factor extraction procedure converged in 25 iterations? If the rotation fails to converge, SPSS does not output any pattern/structure matrix and you can't miss it but if the extraction fails to converge, the last factor matrix is displayed nonetheless and SPSS blissfully continues with the rotation. You would however see a note “a. Attempted to extract 6 factors. More than 25 iterations required. (Convergence=XXX). Extraction was terminated.” If the convergence value is small (something like .005, the default stopping condition being “less than .0001”), it would still not account for the discrepancies you report but if it is really large there is something pathological about your data.
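One common source of the correlation-matrix discrepancies mentioned in the first step is the missing-value deletion policy. A small self-contained Python illustration (made-up data, with None marking a missing value) shows how pairwise versus listwise deletion alone changes the correlations that get fed into the factor analysis:

```python
import math

def corr(xs, ys):
    # Pearson correlation over complete (x, y) pairs only
    pairs = [(x, yv) for x, yv in zip(xs, ys)
             if x is not None and yv is not None]
    k = len(pairs)
    mx = sum(x for x, _ in pairs) / k
    my = sum(yv for _, yv in pairs) / k
    sxy = sum((x - mx) * (yv - my) for x, yv in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((yv - my) ** 2 for _, yv in pairs)
    return sxy / math.sqrt(sxx * syy)

a = [1.0, 2.0, 3.0, 4.0, None]
b = [2.0, 1.0, 4.0, 3.0, 5.0]
c = [1.0, 2.0, 3.0, None, 5.0]

# Pairwise deletion: corr(a, b) uses every row where a and b are present
r_pairwise = corr(a, b)
# Listwise deletion: first drop any row missing in ANY variable
keep = [i for i in range(5) if None not in (a[i], b[i], c[i])]
r_listwise = corr([a[i] for i in keep], [b[i] for i in keep])
# The two values differ, so downstream factor solutions will differ too
```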
|
14,531
|
Interpreting discrepancies between R and SPSS with exploratory factor analysis
|
Recently I have found that most factor analysis discrepancies between SPSS and R (with the psych package) clear up when data are treated missing-listwise in each program, the correlation matrix shows up exactly the same in each, and no oblique rotation is used.
One remaining discrepancy is in the series of values that show up in the scree plot indicating eigenvalues after extraction. In R's "scree(cor(mydata))" these "factors" don't match those listed in SPSS's Variance Explained table under "Extraction Sums of Squared Loadings." Note that the R scree plot's "components" do match SPSS's scree plot, which also matches its Variance Explained table's "Initial Eigenvalues."
I've also found that the "Proportion Var" explained by each factor is, in R, sometimes reported as (the proportion for a given factor)/(the amount explained by all factors), while at other times it is (the proportion for a given factor)/(the number of items in the analysis). So if you get the former, it is, while not a match, at least proportional to and derivable from what SPSS reports under "Extraction Sums of Squared Loadings...% of Variance."
Introducing oblimin rotation in each program, however, creates sizeable discrepancies in item loadings or factors' variance explained that I haven't been able to resolve.
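Both variance conventions can be reproduced directly from a loading matrix. Here is a toy example (loadings invented, orthogonal case) showing how the same SS loadings yield either a share of total variance (SPSS-style "% of Variance") or a share of the explained variance:

```python
# Toy orthogonal loading matrix: 3 items x 2 factors (values invented)
L = [[0.8, 0.1],
     [0.7, 0.2],
     [0.1, 0.9]]
n_items = len(L)
n_factors = len(L[0])

ss = [sum(row[j] ** 2 for row in L) for j in range(n_factors)]  # SS loadings
prop_of_total = [s / n_items for s in ss]       # SPSS "% of Variance" / 100
prop_of_explained = [s / sum(ss) for s in ss]   # share of explained variance
```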
|
14,532
|
Interpreting discrepancies between R and SPSS with exploratory factor analysis
|
The default rotation method in R is oblimin, so this will likely cause the difference. As a test run a PAF/oblimin in SPSS and R and you will find nearly identical results.
|
14,533
|
Interpreting discrepancies between R and SPSS with exploratory factor analysis
|
This answer is supplementary to the ones above. As suggested by Gala in his answer, one should first determine if the solutions provided by R (e.g. fa in psych) and SPSS are different prior to rotation. If they're the same, then look at the rotation settings in each program. (For SPSS, you can find all the settings in the reference manual entry for FACTOR).
One important setting to look for is the Kaiser normalization. By default, SPSS does Kaiser normalization during rotation, whereas some R functions like 'fa' do not. You can control that setting in SPSS by specifying /CRITERIA = NOKAISER/KAISER, to verify if it eliminates any discrepancies between the results with each program.
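For reference, Kaiser normalization simply rescales each row of the loading matrix to unit length before rotation and scales back afterwards. A minimal sketch (toy loadings, rotation step omitted):

```python
import math

# Toy loading matrix: 3 items x 2 factors (values invented)
L = [[0.8, 0.1],
     [0.7, 0.2],
     [0.1, 0.9]]

# Row norms (square roots of the row sums of squared loadings)
h = [math.sqrt(sum(v * v for v in row)) for row in L]
L_norm = [[v / hi for v in row] for row, hi in zip(L, h)]
# ... rotate L_norm here ...
# then undo the normalization on the rotated matrix:
L_back = [[v * hi for v in row] for row, hi in zip(L_norm, h)]
```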
|
14,534
|
Interpreting discrepancies between R and SPSS with exploratory factor analysis
|
I know this is an old post but I ran into the same issue.
It seems this is a known issue where SPSS and R implement Promax differently.
https://link.springer.com/content/pdf/10.3758/s13428-021-01581-x.pdf
Algorithmic jingle jungle: A comparison of implementations
of principal axis factoring and promax rotation in R and SPSS
Silvia Grieder · Markus D. Steiner
Also here, a vignette in the EFAtools package in R
Replicate SPSS and R psych results with EFAtools
https://cran.r-project.org/web/packages/EFAtools/vignettes/Replicate_SPSS_psych.html
|
14,535
|
Interpreting discrepancies between R and SPSS with exploratory factor analysis
|
I do not know what causes the differences in pattern loadings, but I assume that the difference in % of explained variance is due to:
- are you perhaps interpreting the first part (of 2 or 3) of the SPSS explained variance table, which actually shows the results of a principal component analysis? The second part shows the results for the unrotated factor analysis and the third the results after rotation (if one was used).
- the fact that the fa function (or more precisely its print method) wrongly computes the SSL for oblique factors. To get the % of total variance explained by a factor, you should compute the sum of squared structural loadings for that factor and divide it by the number of variables. However, you cannot sum these up (in the case of oblique rotations) to get the % of variance explained by all factors.
To get this, either compute the mean communality or the total % of variance explained by orthogonal factors (e.g., using no rotation or a varimax rotation), which can be summed.
|
14,536
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more general a higher dimensional plane for multiple regressors and a curved surface for non-linear regression)
In this case, the vector between observation and solution is perpendicular to the plane (a space spanned by the regressors), and perpendicular to the regressors.
Regularized regression finds a solution in a restricted set inside the plane that has the closest distance to the observation.
In this case, the vector between observation and solution is no longer perpendicular to the plane, and no longer perpendicular to the regressors.
But there is still some sort of perpendicular relation: the vector of residuals is, in some sense, perpendicular to the edge of the circle (or whatever other surface is defined by the regularization).
The model of $\hat{y}$
Our model gives estimates $\hat{y}$ of the observations as a function of the parameters $\beta_i$.
$$\hat{y} = f(\beta)$$
In our image this is a linear function with two parameters $\beta_0$ and $\beta_1$
(you can of course generalize this to a large size of coefficients and observations, for simplicity we regard three observations and two coefficients such that we can plot it)
$$\begin{bmatrix} \hat{y}_{1} \\ \hat{y}_{2} \\ \hat{y}_{3} \end{bmatrix} = \beta_0 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}+
\beta_1 \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}$$
The possible solutions of the model, defined by this linear sum, is represented by the red plane in the image.
Note that this plane in the image relates to the possible solutions of $y_i = \beta_0 + \beta_1 x_i$, when $[x_1,x_2,x_3] = [0,1,2]$. So we plotted the space of all possible $y_i$ (which is a 3D-space, and more generally a n-dimensional space) and the possible solutions that the model allows is a plane inside this space.
Finding the best model with least squares
The model allows any solution in the plane spanned by the model (in the image this is the 2D red plane, in general this can be a higher dimensional plane, and also it does not need to be linear).
The least-squares method will select the 'solution' $\hat{y} = \hat\beta_0 + \hat\beta_1 x_1 $ that has the lowest difference in terms of the squares of the residuals.
In geometric terms, this is equal to finding the point in the plane that has the smallest Euclidean distance to the observed value. This smallest difference is achieved when the vector of residuals is orthogonal to the plane.
Finding the best model with ridge regression (or other regularization)
When we apply a penalty then this is similar to applying some constraint like 'the sum of vectors can not be above some value'. In the image this is represented by the purple drawing.
The solution is still inside the plane, but also inside the circle. The estimated solution still represents the shortest distance between the space of solutions and the observation, but the optimal solution is no longer the orthogonal projection onto the red plane; it is instead the closest point within the purple circle.
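To make the geometry concrete, here is a small numeric sketch (Python/NumPy, with made-up observations; not part of the original figure) showing that the OLS residual is orthogonal to the regressors while a ridge residual is not:

```python
import numpy as np

# Design matrix for y_i = b0 + b1 * x_i with x = [0, 1, 2], as in the answer.
X = np.column_stack([np.ones(3), np.array([0.0, 1.0, 2.0])])
y = np.array([1.0, 3.0, 2.0])  # arbitrary illustrative observations

# OLS: residuals are orthogonal to every column of X.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
r_ols = y - X @ beta_ols
print(X.T @ r_ols)        # ~ [0, 0]

# Ridge (penalty lam): the fit is pulled inside the constraint region,
# and the residual is no longer orthogonal to the regressors.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
r_ridge = y - X @ beta_ridge
print(X.T @ r_ridge)      # nonzero (here [1, 2/3])
```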
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more general a higher dimensi
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more general a higher dimensional plane for multiple regressors and a curved surface for non-linear regression)
In this case, the vector between observation and solution is perpendicular to the plane (a space spanned by the regressors), and perpendicular to the regressors.
Regularized regression finds a solution in a restricted set inside the plane that has the closest distance to the observation.
In this case, the vector between observation and solution is no longer perpendicular to the plane, and no longer perpendicular to the regressors.
But there is still some sort of perpendicular relation: the vector of residuals is, in some sense, perpendicular to the edge of the circle (or whatever other surface is defined by the regularization).
The model of $\hat{y}$
Our model gives estimates $\hat{y}$ of the observations as a function of the parameters $\beta_i$.
$$\hat{y} = f(\beta)$$
In our image this is a linear function with two parameters $\beta_0$ and $\beta_1$
(you can of course generalize this to a large size of coefficients and observations, for simplicity we regard three observations and two coefficients such that we can plot it)
$$\begin{bmatrix} \hat{y}_{1} \\ \hat{y}_{2} \\ \hat{y}_{3} \end{bmatrix} = \beta_0 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}+
\beta_1 \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}$$
The possible solutions of the model, defined by this linear sum, is represented by the red plane in the image.
Note that this plane in the image relates to the possible solutions of $y_i = \beta_0 + \beta_1 x_i$, when $[x_1,x_2,x_3] = [0,1,2]$. So we plotted the space of all possible $y_i$ (which is a 3D-space, and more generally a n-dimensional space) and the possible solutions that the model allows is a plane inside this space.
Finding the best model with least squares
The model allows any solution in the plane spanned by the model (in the image this is the 2D red plane, in general this can be a higher dimensional plane, and also it does not need to be linear).
The least-squares method will select the 'solution' $\hat{y} = \hat\beta_0 + \hat\beta_1 x_1 $ that has the lowest difference in terms of the squares of the residuals.
In geometric terms, this is equal to finding the point in the plane that has the smallest Euclidean distance to the observed value. This smallest difference is achieved when the vector of residuals is orthogonal to the plane.
Finding the best model with ridge regression (or other regularization)
When we apply a penalty then this is similar to applying some constraint like 'the sum of vectors can not be above some value'. In the image this is represented by the purple drawing.
The solution is still inside the plane, but also inside the circle. The estimated solution still represents the shortest distance between the space of solutions and the observation, but the optimal solution is no longer the orthogonal projection onto the red plane; it is instead the closest point within the purple circle.
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more general a higher dimensi
|
14,537
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
I wrote a comprehensive explanation on this question in my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations used to derive the OLS solution (see this answer).
The coefficients in ridge regression (with penalty weighting $\lambda$) are simply:
$$\beta = (X^TX+\lambda\mathbb I)^{-1}X^Ty$$
The solution to the OLS can be obtained just as well by setting $\lambda = 0$.
The ridge solution can also be recovered from the ordinary normal equations via an augmentation of $X$:
concatenate new virtual samples formed from a scaled identity matrix:
$$
\matrix{
X_\text{new}=\left[\matrix{
X_\text{old} \\ \sqrt{\lambda}\mathbb I_{p\times p}
}\right]
\qquad
Y_\text{new}=\left[\matrix{
Y_\text{old} \\ \mathbf 0_{p\times1}
}\right]
}$$
If we do that, it can be quite straightforwardly shown that:
$$\beta = (X_\text{old}^TX_\text{old}+\lambda\mathbb I)^{-1}X^T_\text{old} y_\text{old} = (X_\text{new}^TX_\text{new})^{-1}X_\text{new}^T y_\text{new}$$
Thus, since we are using the normal equations to derive the solution to ridge regression, the property of orthogonal residuals and predictions is kept intact.
But notice that, now, predictions involve these virtual samples.
That's why, when looking only at the real samples, this orthogonality is not guaranteed: you are missing part of the puzzle by not taking into account these "virtual" samples.
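A quick numeric check of this equivalence (a sketch with arbitrary random data; the sizes and $\lambda$ below are illustrative assumptions, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 20, 3, 2.0                      # illustrative sizes and penalty
X_old = rng.normal(size=(n, p))
y_old = rng.normal(size=n)

# Direct ridge solution.
beta_ridge = np.linalg.solve(X_old.T @ X_old + lam * np.eye(p), X_old.T @ y_old)

# Augmented data: p virtual samples sqrt(lam) * I with zero responses.
X_new = np.vstack([X_old, np.sqrt(lam) * np.eye(p)])
y_new = np.concatenate([y_old, np.zeros(p)])
beta_aug = np.linalg.solve(X_new.T @ X_new, X_new.T @ y_new)
print(np.allclose(beta_ridge, beta_aug))    # True: same coefficients

# The augmented residuals are orthogonal to the augmented design...
r_new = y_new - X_new @ beta_aug
print(np.allclose(X_new.T @ r_new, 0))      # True
# ...but on the real samples alone, X^T r equals lam * beta, not zero.
r_old = y_old - X_old @ beta_ridge
print(np.allclose(X_old.T @ r_old, lam * beta_ridge))  # True
```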
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
I wrote a comprehensive explanation on this question in my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
I wrote a comprehensive explanation on this question in my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations used to derive the OLS solution (see this answer).
The coefficients in ridge regression (with penalty weighting $\lambda$) are simply:
$$\beta = (X^TX+\lambda\mathbb I)^{-1}X^Ty$$
The solution to the OLS can be obtained just as well by setting $\lambda = 0$.
The ridge solution can also be recovered from the ordinary normal equations via an augmentation of $X$:
concatenate new virtual samples formed from a scaled identity matrix:
$$
\matrix{
X_\text{new}=\left[\matrix{
X_\text{old} \\ \sqrt{\lambda}\mathbb I_{p\times p}
}\right]
\qquad
Y_\text{new}=\left[\matrix{
Y_\text{old} \\ \mathbf 0_{p\times1}
}\right]
}$$
If we do that, it can be quite straightforwardly shown that:
$$\beta = (X_\text{old}^TX_\text{old}+\lambda\mathbb I)^{-1}X^T_\text{old} y_\text{old} = (X_\text{new}^TX_\text{new})^{-1}X_\text{new}^T y_\text{new}$$
Thus, since we are using the normal equations to derive the solution to ridge regression, the property of orthogonal residuals and predictions is kept intact.
But notice that, now, predictions involve these virtual samples.
That's why, when looking only at the real samples, this orthogonality is not guaranteed: you are missing part of the puzzle by not taking into account these "virtual" samples.
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
I wrote a comprehensive explanation on this question in my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations
|
14,538
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge regression or otherwise, you will in general move your fit away from the projection and destroy orthogonality.
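In those geometric terms, the OLS fit is literally a projection matrix applied to $Y$; a tiny check with illustrative random data (a sketch, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
Y = rng.normal(size=10)

# Hat matrix: orthogonal projection onto the column space of X.
H = X @ np.linalg.solve(X.T @ X, X.T)
print(np.allclose(H @ H, H))              # idempotent: projecting twice changes nothing
print(np.allclose(H, H.T))                # symmetric: it is an *orthogonal* projection
print(np.allclose(X.T @ (Y - H @ Y), 0))  # residual orthogonal to col(X)
```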
A book which I learned most of this from and is (in my opinion) difficult to surpass is Seber, G.A.F. Linear Regression Analysis, Wiley. I used the ca. 1980 edition, but there is a newer version. Grab a copy if you can.
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge regression or otherwise, you will in general move your fit away from the projection and destroy orthogonality.
A book which I learned most of this from and is (in my opinion) difficult to surpass is Seber, G.A.F. Linear Regression Analysis, Wiley. I used the ca. 1980 edition, but there is a newer version. Grab a copy if you can.
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge
|
14,539
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every predictor, and hence orthogonal to linear combinations of the predictors, which is what the predicted values $\hat{y}$ are. See, for example, section 2 of these course notes.
From that perspective, we can see that any estimate of $\beta$ other than the least squares estimate (*) -- for example, any estimate that has been regularized toward 0 to some extent -- will not have this orthogonality property.
(*) Aside from $\hat{\beta} = 0$, which leads to $\hat{y}=0$, which is always orthogonal to any other vector
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
|
One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every predictor, and hence orthogonal to linear combinations of the predictors, which is what the predicted values $\hat{y}$ are. See, for example, section 2 of these course notes.
From that perspective, we can see that any estimate of $\beta$ other than the least squares estimate (*) -- for example, any estimate that has been regularized toward 0 to some extent -- will not have this orthogonality property.
(*) Aside from $\hat{\beta} = 0$, which leads to $\hat{y}=0$, which is always orthogonal to any other vector
|
Why does regularization wreck orthogonality of predictions and residuals in linear regression?
One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every
|
14,540
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves are uncorrelated with their own past $r_{t-1}$ if stock markets are efficient (else, you could easily and profitably predict where prices are going), but their squares $r_t^2$ and $r_{t-1}^2$ are not: there is time dependence in the variances, which cluster in time, with periods of high variance in volatile times.
Here is an artificial example (yet again, I know, but "real" stock return series may well look similar):
You see the high-volatility cluster around $t\approx400$ in particular.
Generated using R code:
library(TSA)  # provides garch.sim()
garch01.sim <- garch.sim(alpha = c(.01, .55), beta = 0.4, n = 500)  # simulate a GARCH(1,1) return series
plot(garch01.sim, type = 'l', ylab = expression(r[t]), xlab = 't')  # plot r_t against t
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves ar
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves are uncorrelated with their own past $r_{t-1}$ if stock markets are efficient (else, you could easily and profitably predict where prices are going), but their squares $r_t^2$ and $r_{t-1}^2$ are not: there is time dependence in the variances, which cluster in time, with periods of high variance in volatile times.
Here is an artificial example (yet again, I know, but "real" stock return series may well look similar):
You see the high-volatility cluster around $t\approx400$ in particular.
Generated using R code:
library(TSA)  # provides garch.sim()
garch01.sim <- garch.sim(alpha = c(.01, .55), beta = 0.4, n = 500)  # simulate a GARCH(1,1) return series
plot(garch01.sim, type = 'l', ylab = expression(r[t]), xlab = 't')  # plot r_t against t
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves ar
|
14,541
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean, then the other must be distant from its mean.
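A quick simulation sketch of the doughnut (the annulus radii 1 and 2 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# Uniform on an annulus with inner radius 1, outer radius 2.
# Drawing radius as the sqrt of a uniform on [1^2, 2^2] keeps the density uniform in area.
theta = rng.uniform(0.0, 2.0 * np.pi, n)
radius = np.sqrt(rng.uniform(1.0, 4.0, n))
x, y = radius * np.cos(theta), radius * np.sin(theta)

print(np.corrcoef(x, y)[0, 1])           # ~ 0: uncorrelated by symmetry
# Dependent: whenever x is near its mean (0), |y| must be at least
# sqrt(1 - x^2), i.e. far from y's mean (also 0).
print(np.abs(y[np.abs(x) < 0.5]).min())  # > 0.86
```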
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean,
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean, then the other must be distant from its mean.
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean,
|
14,542
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
I found the following figure from Wikipedia very useful for intuition. In particular, the bottom row shows examples of uncorrelated but dependent distributions.
Caption of the plot on Wikipedia:
Several sets of (x, y) points, with the Pearson correlation coefficient of x and y for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). N.B.: the figure in the center has a slope of 0 but in that case the correlation coefficient is undefined because the variance of Y is zero.
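A minimal concrete instance of the bottom row is $y = x^2$ with $x$ symmetric about 0 (a sketch; the uniform distribution and sample size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)
y = x ** 2                  # y is a deterministic function of x: fully dependent

# The Pearson correlation is (near) zero because cov(x, x^2) = E[x^3] = 0
# for a symmetric x -- not because x and y are unrelated.
print(np.corrcoef(x, y)[0, 1])
```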
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
I found the following figure from wiki is very useful for intuition. In particular, the bottom row show examples of uncorrelated but dependent distributions.
Caption of the above plot in wiki:
Severa
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
I found the following figure from Wikipedia very useful for intuition. In particular, the bottom row shows examples of uncorrelated but dependent distributions.
Caption of the plot on Wikipedia:
Several sets of (x, y) points, with the Pearson correlation coefficient of x and y for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). N.B.: the figure in the center has a slope of 0 but in that case the correlation coefficient is undefined because the variance of Y is zero.
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
I found the following figure from wiki is very useful for intuition. In particular, the bottom row show examples of uncorrelated but dependent distributions.
Caption of the above plot in wiki:
Severa
|
14,543
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of correlation to Pearson correlation, which in my opinion is indeed the appropriate meaning to correlation, when no other detail is provided. However, I believe that what you really want to ask goes beyond linear correlation, towards statistical dependence, that is: When are variables dependent, but independent when measured?
I mean, it's straightforward that a measure of linear association won't catch an association between variables that are associated but not in a linear way. Examples of that are all around us, though an r value of exactly 0 can be hard to find.
However, going back to the broader question that I elaborated, there could be spurious independence. That is, the variables are dependent, but your sampling will suggest that they are independent. I wrote an article about this, and there are scientific papers mentioning this problem too, such as this one.
Controlling for variables can be equivalent to slicing your data. By slicing too much (adjusting for many other variables), it's expected for your two random variables to appear independent. One may say: But I am not adjusting for anything! And the answer is: You don't need to. The collected data may be biased (selection bias) and you're not aware of it.
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
|
There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of co
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of correlation to Pearson correlation, which in my opinion is indeed the appropriate meaning to correlation, when no other detail is provided. However, I believe that what you really want to ask goes beyond linear correlation, towards statistical dependence, that is: When are variables dependent, but independent when measured?
I mean, it's straightforward that a measure of linear association won't catch an association between variables that are associated but not in a linear way. Examples of that are all around us, though an r value of exactly 0 can be hard to find.
However, going back to the broader question that I elaborated, there could be spurious independence. That is, the variables are dependent, but your sampling will suggest that they are independent. I wrote an article about this, and there are scientific papers mentioning this problem too, such as this one.
Controlling for variables can be equivalent to slicing your data. By slicing too much (adjusting for many other variables), it's expected for your two random variables to appear independent. One may say: But I am not adjusting for anything! And the answer is: You don't need to. The collected data may be biased (selection bias) and you're not aware of it.
|
For intuition, what are some real life examples of uncorrelated but dependent random variables?
There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of co
|
14,544
|
Why splitting the data into the training and testing set is not enough
|
Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optimistic, because you are essentially reporting best-case results. As some on this site have already mentioned, optimization is the root of all evil in statistics.
Performance estimates should always be done on completely independent data. If you are optimizing some aspect based on test data, then your test data is no longer independent and you would need a validation set.
Another way to deal with this is via nested cross-validation, which consists of two cross-validation procedures wrapped around each other. The inner cross-validation is used in tuning (to estimate the performance of a given set of hyperparameters, which is optimized) and the outer cross-validation estimates the generalization performance of the entire machine learning pipeline (i.e., optimizing hyperparameters + training the final model).
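A minimal sketch of nested cross-validation (pure NumPy, with closed-form ridge regression standing in for the SVM and an assumed $\lambda$ grid; everything here is illustrative, not from the original answer):

```python
import numpy as np

def kfold_indices(n, k, rng):
    """Split indices 0..n-1 into k random folds."""
    return np.array_split(rng.permutation(n), k)

def ridge_fit(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def mse(X, y, beta):
    return float(np.mean((y - X @ beta) ** 2))

rng = np.random.default_rng(0)
n, p = 120, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=n)
lambdas = [0.01, 0.1, 1.0, 10.0]          # hyperparameter grid (an assumption)

outer_scores = []
for test_idx in kfold_indices(n, 5, rng):                 # outer CV: estimates pipeline performance
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    X_tr, y_tr = X[train_idx], y[train_idx]
    inner_folds = kfold_indices(len(train_idx), 5, rng)   # inner CV: tunes lambda

    def inner_cv_error(lam):
        errs = []
        for val_idx in inner_folds:
            fit_idx = np.setdiff1d(np.arange(len(train_idx)), val_idx)
            beta = ridge_fit(X_tr[fit_idx], y_tr[fit_idx], lam)
            errs.append(mse(X_tr[val_idx], y_tr[val_idx], beta))
        return float(np.mean(errs))

    best_lam = min(lambdas, key=inner_cv_error)           # tuned on outer-training data only
    beta = ridge_fit(X_tr, y_tr, best_lam)                # refit on all outer-training data
    outer_scores.append(mse(X[test_idx], y[test_idx], beta))

print(np.mean(outer_scores))   # generalization estimate for the whole tuning + training pipeline
```

The key point the sketch illustrates: the outer test folds never touch the hyperparameter search, so the reported score is not a best-case result.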
|
Why splitting the data into the training and testing set is not enough
|
Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optim
|
Why splitting the data into the training and testing set is not enough
Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optimistic, because you are essentially reporting best-case results. As some on this site have already mentioned, optimization is the root of all evil in statistics.
Performance estimates should always be done on completely independent data. If you are optimizing some aspect based on test data, then your test data is no longer independent and you would need a validation set.
Another way to deal with this is via nested cross-validation, which consists of two cross-validation procedures wrapped around each other. The inner cross-validation is used in tuning (to estimate the performance of a given set of hyperparameters, which is optimized) and the outer cross-validation estimates the generalization performance of the entire machine learning pipeline (i.e., optimizing hyperparameters + training the final model).
|
Why splitting the data into the training and testing set is not enough
Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optim
|
14,545
|
Why splitting the data into the training and testing set is not enough
|
I think it's easiest to think of things this way. There are two things that cross validation is used for: tuning the hyperparameters of a model/algorithm, and evaluating the performance of a model/algorithm.
Consider the first use as part of the actual training of the algorithm. For instance, cross validating to determine regularization strength for a GLM is part of establishing the final result of the GLM. This use is typically called internal cross validation. Because (hyper)parameters are still being set, the tuning-set loss is not a great measure of the actual algorithm's performance.
The second use of cross validation is using data that was held out of the entire process which produced the model, to test its predictive power. This process is called external cross validation.
Note that internal validation may have been part of the process which produced the model so in many cases both internal and external cross validation are necessary.
|
Why splitting the data into the training and testing set is not enough
|
I think it's easiest to think of things this way. There are two things that cross validation is used for, tuning the hyper parameters of a model/algorithm, and evaluating the performance of a model/a
|
Why splitting the data into the training and testing set is not enough
I think it's easiest to think of things this way. There are two things that cross validation is used for: tuning the hyperparameters of a model/algorithm, and evaluating the performance of a model/algorithm.
Consider the first use as part of the actual training of the algorithm. For instance, cross validating to determine regularization strength for a GLM is part of establishing the final result of the GLM. This use is typically called internal cross validation. Because (hyper)parameters are still being set, the tuning-set loss is not a great measure of the actual algorithm's performance.
The second use of cross validation is using data that was held out of the entire process which produced the model, to test its predictive power. This process is called external cross validation.
Note that internal validation may have been part of the process which produced the model so in many cases both internal and external cross validation are necessary.
|
Why splitting the data into the training and testing set is not enough
I think it's easiest to think of things this way. There are two things that cross validation is used for, tuning the hyper parameters of a model/algorithm, and evaluating the performance of a model/a
|
14,546
|
Why splitting the data into the training and testing set is not enough
|
During model building you train your models on a training sample. Note that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ..., or the same technique with different values of the tuning parameters, or a mixture).
Among all the different models that you trained, you have to choose one, and therefore you use the validation sample to find the one with the smallest error on that validation sample.
For this 'final' model we still have to estimate the error and therefore we use the test sample.
|
Why splitting the data into the training and testing set is not enough
|
During model building you train your models on a training sample. Note that that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ... or the same technique with
|
Why splitting the data into the training and testing set is not enough
During model building you train your models on a training sample. Note that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ..., or the same technique with different values of the tuning parameters, or a mixture).
Among all the different models that you trained, you have to choose one, and therefore you use the validation sample to find the one with the smallest error on that validation sample.
For this 'final' model we still have to estimate the error and therefore we use the test sample.
|
Why splitting the data into the training and testing set is not enough
During model building you train your models on a training sample. Note that that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ... or the same technique with
|
14,547
|
Why splitting the data into the training and testing set is not enough
|
Cross-validation does not completely overcome the over-fitting problem in model selection; it just reduces it. The cross-validation error depends on the data set you use: the smaller the data set, the higher the cross-validation error.
Additionally, if you have many degrees of freedom in model selection, then there is a danger of the model performing poorly, as the cross-validation criterion itself gets overfitted.
When the data is divided into just two sets, the training and testing sets, the splitting is done statically, so there is a chance of overfitting to that single training set. Cross-validation sets, however, are created through methods like k-fold cross-validation or leave-one-out cross-validation (LOOCV), which evaluate the model on several different held-out portions of the data and thus reduce the chance of overfitting to any one split.
So, cross-validation helps more when you have a bigger data set than when you have a smaller one.
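To make the contrast with a single static split concrete, here is a minimal sketch (plain Python) of how k-fold cross-validation partitions the indices, so every point is tested exactly once; LOOCV is just the special case k = n:

```python
def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    indices = list(range(n))
    # distribute any remainder across the first n % k folds
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]            # each point tested exactly once
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

for train_idx, test_idx in k_fold_indices(10, 5):
    print(test_idx)  # prints [0, 1], then [2, 3], ..., then [8, 9]
```

The k error estimates are then averaged, instead of relying on the one error a static split happens to produce.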
|
14,548
|
Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
|
None of those proposed methods have been shown by simulation studies to work. Spend your efforts formulating a complete model and then fit it. Univariate screening is a terrible approach to model formulation, and the other components of stepwise variable selection you hope to use should likewise be avoided. This has been discussed at length on this site. What gave you the idea in the first place that variables should sometimes be removed from models because they are not "significant"? Don't use $P$-values or changes in $\beta$ to guide any of the model specification.
|
14,549
|
Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
|
Methods for variable selection using statistics such as p-values, like the stepwise regression in the classic text by Hosmer et al., should at all costs be avoided.
Recently I stumbled upon an article published in the International Journal of Forecasting entitled "Illusions of predictability", and a commentary on this article by Keith Ord. I would highly recommend both of these articles, as they clearly show that using regression statistics is often misleading. Keith Ord's article shows by simulation why stepwise regression (which uses the p statistic) for variable selection is bad.
Another wonderful article, by Scott Armstrong, in the same issue of the journal, shows with case studies why one should be very cautious about using regression analysis on non-experimental data. Ever since I read these articles I avoid using regression analysis to draw causal inferences from non-experimental data. As a practitioner, I wish I had read articles like these many years ago; it would have saved me from making bad decisions and costly mistakes.
On your specific problem, I don't think randomized experiments are possible in your case, so I would recommend that you use cross validation to select variables. A nicely worked-out example of how to use predictive accuracy to select variables is available in this free online book. It also covers many other variable selection methods, but I would restrict myself to cross validation.
I personally like this quote from Armstrong: "Somewhere I encountered the idea that statistics was supposed to aid communication. Complex regression methods and a flock of diagnostic statistics have taken us in the other direction."
Below is my own opinion. I'm not a statistician.
As a biologist, I think you would appreciate this point. Nature is very complex; assuming a logistic function and no interaction among variables does not reflect what occurs in nature. In addition, logistic regression has the following assumptions:
The true conditional probabilities are a logistic function of the independent variables.
No important variables are omitted and no extraneous variables are included.
The independent variables are measured without error.
The observations are independent.
The independent variables are not linear combinations of each other.
I would recommend classification and regression trees (CART(r)) as an alternative to logistic regression for this type of analysis, because they are assumption-free:
Non-parametric / data driven / no assumption that your output probabilities follow a logistic function.
Non-linear.
Allows complex variable interactions.
Provides highly interpretable visual trees that a non-statistician, like a forest manager, would appreciate.
Easily handles missing values.
You don't need to be a statistician to use CART!
Automatically selects variables using cross validation.
CART is a trademark of Salford Systems. See this video for an introduction and the history of CART. There are also other videos, such as one on CART/logistic regression hybrids, on the same website; I would check them out. An open source implementation in R is called tree, and there are many other packages, such as rattle, available in R. If I find time, I will post the first example in Hosmer's text using CART. If you insist on using logistic regression, then I would at least use methods like CART to select variables and then apply logistic regression.
I personally prefer CART over logistic regression because of the aforementioned advantages. Still, I would try both logistic regression and CART (or a CART/logistic regression hybrid), see which gives better predictive accuracy and, more importantly, better interpretability, and choose the one that you feel would "communicate" the data more clearly.
Also, FYI: CART was rejected by major statistical journals, and finally the inventors of CART came out with a monograph. CART paved the way for modern and highly successful machine learning algorithms: Random Forest(r), gradient boosting machines (GBM), and multivariate adaptive regression splines were all born from it. Random Forest and GBM are more accurate than CART, but less interpretable (more black-box-like).
Hopefully this is helpful. Let me know if you find this post useful.
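To make the "data driven, no assumed functional form" point concrete, here is a toy sketch (plain Python, with made-up presence/absence data) of the single-split search at the heart of CART: choose the threshold that minimizes the weighted Gini impurity of the two resulting groups, with no link function assumed anywhere.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(x, y):
    """Find the threshold on x that minimizes the weighted Gini impurity of y."""
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

# made-up data: species present (1) only above a covariate value of 4
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(best_split(x, y))  # (4, 0.0) -- a perfect split, found from the data alone
```

CART applies this search recursively to each resulting group, which is how it captures interactions and non-linearities without any distributional assumptions.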
|
14,550
|
Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
|
I think you're trying to predict the presence of the species with a presence/background approach, which is well documented in journals such as Methods in Ecology and Evolution, Ecography, etc. Maybe the R package dismo is useful for your problem; it includes a nice vignette. Using dismo or a similar package implies changing your approach to the problem, but I believe it's worth having a look at.
|
14,551
|
Visualizing a spline basis
|
Try this, as an example for B-splines:
library(splines)
x <- seq(0, 1, by=0.001)
spl <- bs(x, df=6)
plot(spl[,1]~x, ylim=c(0,max(spl)), type='l', lwd=2, col=1,
     xlab="Cubic B-spline basis", ylab="")
for (j in 2:ncol(spl)) lines(spl[,j]~x, lwd=2, col=j)
Giving this:
|
14,552
|
Visualizing a spline basis
|
Here's an autoplot method for the "basis" class (which both bs and ns inherit from):
library(ggplot2)
library(magrittr)
library(reshape2)
library(stringr)
autoplot.basis <- function(basis, n=1000) {
all.knots <- sort(c(attr(basis,"Boundary.knots") ,attr(basis, "knots"))) %>%
unname
bounds <- range(all.knots)
knot.values <- predict(basis, all.knots) %>%
set_colnames(str_c("S", seq_len(ncol(.))))
newx <- seq(bounds[1], bounds[2], length.out = n+1)
interp.values <- predict(basis, newx) %>%
set_colnames(str_c("S", seq_len(ncol(.))))
knot.df <- data.frame(x=all.knots, knot.values) %>%
melt(id.vars="x", variable.name="Spline", value.name="y")
interp.df <- data.frame(x=newx, interp.values) %>%
melt(id.vars="x", variable.name="Spline", value.name="y")
ggplot(interp.df) +
aes(x=x, y=y, color=Spline, group=Spline) +
geom_line() +
geom_point(data=knot.df) +
scale_color_discrete(guide=FALSE)
}
This lets you just call autoplot on an ns or bs object. Taking jbowman's example:
library(splines)
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
autoplot(spl)
which produces:
Edit: This will be included in the next version of the ggfortify package: https://github.com/sinhrks/ggfortify/pull/129. After that, I believe all you should need is:
library(splines)
library(ggfortify)
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
autoplot(spl)
|
14,553
|
Interpreting Granger causality test's results
|
Caveat: I'm not particularly well-versed in Granger causality, but I am generally statistically competent and I have read and mostly understood Judea Pearl's Causality, which I recommend for more info.
Is my interpretation directionally correct?
Yes. The fact that first hypothesis was rejected and second was not means that you can use $X$ to forecast $Y$.
What key insights have I overlooked
The really important thing to know in terms of key insights is that Granger-causation is only equivalent to causation (in the more common use of the term) under a fairly restrictive assumption, viz, that there are no other potential causes. If this assumption is not satisfied then Granger-causality is actually Granger-usefulness-for-forecasting. For example, if there is a variable $Z$ that causally influences both $X$ and $Y$, then the conclusion that $Y$ Granger-causes $X$ can be explained as the influence of $Z$ being felt in $Y$ before it's felt in $X$.
The p-value of .76 allows me to accept the null for X = f(Y)
Warning: esoteric bullshtatistical blathering follows. Technically, in the test of $X = f(Y)$ you can't "accept the null". You can "fail to reject the null" -- that is, you didn't find evidence that would warrant rejecting the null. This is the Fisherian view. Alternatively, you can take the Neymanian view: you don't assert the truth of the null; you just choose to act as if the null were true. (Personally I'm a Jaynesian, but let's not get into that.)
I'm a little rusty on my F-test
The point of the F-test is that it checks that the lagged values of $X$ jointly improve the forecast of $Y$ (or vice versa). One can imagine predicting $Y$ with two predictors $X_1$ and $X_2$ where $X_2$ is just $X_1$ with a bit of added noise. The F-test would compare a model with just $X_1$ (or just $X_2$) with the model containing both and find no evidence of improved prediction in the larger model.
I'm also not sure how to interpret the CCF graph
The plots of the auto-correlation and cross-correlation functions provide a rough graphical equivalent to the t-tests used in the testing procedure. In order to understand what is being plotted, it's first necessary to understand correlation as a measure of the linear relationship between two random variables. The cross-correlation function is just the correlation of one time series versus a lagged version of the other, and the auto-correlation is just the cross-correlation of a function and itself. Thus these plots show the time structure of the strength of the linear relationships both internally (auto) and from one to the other (cross). I can see from the autocorrelation plots, for example, that $Y$ is reasonably smooth but has no other particularly strong internal structure, whereas $X$ has an oscillation with a peak-to-peak period of about 120 time steps (because it is negatively correlated with itself at about 60 time steps).
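The nested-model comparison behind that F-test can be written down directly from the residual sums of squares of the two models. A sketch in plain Python (the RSS values, number of lags, and degrees of freedom below are made up purely to illustrate the formula):

```python
def f_statistic(rss_restricted, rss_full, n_extra_params, df_full):
    """F statistic comparing a restricted model to a full (nested) model.

    rss_restricted: residual sum of squares without the lagged-X terms
    rss_full:       residual sum of squares with the lagged-X terms
    n_extra_params: number of lagged-X coefficients added by the full model
    df_full:        residual degrees of freedom of the full model
    """
    return ((rss_restricted - rss_full) / n_extra_params) / (rss_full / df_full)

# Hypothetical numbers: adding 4 lags of X drops the RSS from 120 to 80,
# leaving 95 residual degrees of freedom in the full model.
print(round(f_statistic(120.0, 80.0, 4, 95), 3))  # 11.875
```

A large value means the lagged $X$ terms jointly reduce the unexplained variation by more than chance would; near-duplicate predictors like $X_1$ and $X_2$ above would leave `rss_full` essentially unchanged and the statistic small.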
|
14,554
|
Pitfalls of linear mixed models
|
This is a good question.
Here are some common pitfalls:
Testing random effects: Using standard likelihood theory, we may derive a test to compare two nested hypotheses, $H_0$ and $H_1$, by computing the likelihood ratio test statistic. The null distribution of this test statistic is approximately chi-squared, with degrees of freedom equal to the difference in the dimensions of the two parameter spaces. Unfortunately, this test is only approximate and requires several assumptions. One crucial assumption is that the parameters under the null are not on the boundary of the parameter space. Since we are often interested in testing hypotheses about the random effects that take the form
$$H_0: \sigma^2=0$$
this is a real concern. The way to get around this problem is using REML. But still, the p-values will tend to be larger than they should be. This means that if you observe a significant effect using the $\chi^2$ approximation, you can be fairly confident that it is actually significant. Small, but not significant, p-values might spur one to use more accurate, but time-consuming, bootstrap methods.
Comparing fixed effects: If you plan to use the likelihood ratio test to compare two nested models that differ only in their fixed effects, you cannot use the REML estimation method. The reason is that REML estimates the random effects by considering linear combinations of the data that remove the fixed effects. If these fixed effects are changed, the likelihoods of the two models will not be directly comparable.
P-values: The p-values generated by the likelihood ratio test for fixed effects are approximate and unfortunately tend to be too small, thereby sometimes overstating the importance of some effects. We may use nonparametric bootstrap methods to find more accurate p-values for the likelihood ratio test.
There are other concerns about p-values for the fixed-effects tests, which are highlighted by Dr. Doug Bates here.
I am sure other members of the forum will have better answers.
Source: Extending the Linear Model with R -- Dr. Julian Faraway.
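As an aside on the boundary problem above (this goes beyond Faraway's list): for a single variance component on the boundary, the asymptotic null distribution of the LRT statistic is often taken to be a 50:50 mixture of $\chi^2_0$ and $\chi^2_1$ rather than $\chi^2_1$, which halves the naive p-value. A sketch in plain Python, with a hypothetical LRT value:

```python
import math

def chi2_1_survival(x):
    """P(X > x) for a chi-squared variable with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

def boundary_p_value(lrt):
    """p-value for H0: sigma^2 = 0 under the 0.5*chi2_0 + 0.5*chi2_1 mixture."""
    return 0.5 * chi2_1_survival(lrt)

lrt = 3.2  # hypothetical likelihood ratio statistic
naive = chi2_1_survival(lrt)       # treats sigma^2 = 0 as an interior point
corrected = boundary_p_value(lrt)  # accounts for the boundary
print(round(naive, 4), round(corrected, 4))
```

Note the direction of the adjustment: the naive $\chi^2_1$ p-value is conservative (too large), consistent with the point that a significant result under the $\chi^2$ approximation can be trusted.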
|
14,555
|
Pitfalls of linear mixed models
|
The common pitfall which I see is ignoring the variance of the random effects. If it is large compared to the residual variance or the variance of the dependent variable, the fit usually looks nice, but only because the random effects account for all the variance. And since the graph of actual vs. predicted looks nice, you are inclined to think that your model is good.
Everything falls apart when such a model is used for predicting new data: usually you can then use only the fixed effects, and the fit can be very poor.
|
14,556
|
Pitfalls of linear mixed models
|
Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care must be taken to build an appropriate covariance structure otherwise tests of hypotheses, confidence intervals, and estimates of treatment means may not be valid. Often one needs knowledge of the experiment to specify the correct random effects.
SAS for Mixed Models is my go-to resource, even if I want to do the analysis in R.
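As one small illustration of what "building a covariance structure" means, here is a sketch (hypothetical variance components, not taken from the book) of the compound-symmetry covariance block implied by a random intercept for a subject with repeated measures:

```python
import numpy as np

# Variance components (hypothetical): between-subject and residual variance,
# and m repeated measures per subject.
sigma2_b, sigma2_e, m = 2.0, 1.0, 4

# Random-intercept model implies a compound-symmetry covariance block:
# constant variance on the diagonal, constant covariance off it.
V = sigma2_b * np.ones((m, m)) + sigma2_e * np.eye(m)

# The implied within-subject (intra-class) correlation:
rho = sigma2_b / (sigma2_b + sigma2_e)
corr = V / np.sqrt(np.outer(np.diag(V), np.diag(V)))
```

Misspecifying this structure (e.g. assuming independence when rho is large) is exactly what invalidates the standard errors and tests mentioned above.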
|
Pitfalls of linear mixed models
|
Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care mus
|
Pitfalls of linear mixed models
Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care must be taken to build an appropriate covariance structure otherwise tests of hypotheses, confidence intervals, and estimates of treatment means may not be valid. Often one needs knowledge of the experiment to specify the correct random effects.
SAS for Mixed Models is my go-to resource, even if I want to do the analysis in R.
|
Pitfalls of linear mixed models
Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care mus
|
14,557
|
Is random forest for regression a 'true' regression?
|
This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient splits, a step function with many small steps can approximate a smooth function. So this need not be a problem. If you really want to capture a smooth response by a single predictor, you calculate the partial effect of any particular variable and fit a smooth function to it (this does not affect the model itself, which will retain this stepwise character).
Random forests offer quite a few advantages over standard regression techniques for some applications. To mention just three:
They allow the use of arbitrarily many predictors (more predictors than data points is possible)
They can approximate complex nonlinear shapes without a priori specification
They can capture complex interactions between predictors without a priori specification.
As for whether it is a 'true' regression, this is somewhat semantic. After all, piecewise regression is regression too, but is also not smooth. As is any regression with a categorical predictor, as pointed out in the comments below.
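The step-function point can be checked directly. This sketch mimics tree leaves with equal-width bins rather than fitted splits (a simplification): a piecewise-constant fit approximates a smooth curve better and better as the number of "leaves" grows.

```python
import numpy as np

# Smooth target function on a fine grid:
x = np.linspace(0, 1, 2001)
f = np.sin(2 * np.pi * x)

def step_approx_error(n_bins):
    """Max error of a piecewise-constant fit with n_bins equal-width leaves,
    each leaf predicting the mean of its samples (as a regression tree does)."""
    bins = np.minimum((x * n_bins).astype(int), n_bins - 1)
    leaf_means = np.array([f[bins == b].mean() for b in range(n_bins)])
    return np.max(np.abs(f - leaf_means[bins]))

err_4 = step_approx_error(4)     # coarse tree: large steps, large error
err_64 = step_approx_error(64)   # deep tree: many small steps, small error
```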
|
Is random forest for regression a 'true' regression?
|
This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient s
|
Is random forest for regression a 'true' regression?
This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient splits, a step function with many small steps can approximate a smooth function. So this need not be a problem. If you really want to capture a smooth response by a single predictor, you calculate the partial effect of any particular variable and fit a smooth function to it (this does not affect the model itself, which will retain this stepwise character).
Random forests offer quite a few advantages over standard regression techniques for some applications. To mention just three:
They allow the use of arbitrarily many predictors (more predictors than data points is possible)
They can approximate complex nonlinear shapes without a priori specification
They can capture complex interactions between predictors without a priori specification.
As for whether it is a 'true' regression, this is somewhat semantic. After all, piecewise regression is regression too, but is also not smooth. As is any regression with a categorical predictor, as pointed out in the comments below.
|
Is random forest for regression a 'true' regression?
This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient s
|
14,558
|
Is random forest for regression a 'true' regression?
|
It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100 different trees with 100 leaves each, then your random forest can theoretically have 100^100 different values, which can give 200 (decimal) digits of precision, or ~600 bits. Of course, there is going to be some overlap, so you're not actually going to see 100^100 different values. The distribution tends to get more discrete the more you get to the extremes; each tree is going to have some minimum leaf (a leaf that gives an output that's less than or equal to all the other leaves), and once you get the minimum leaf from each tree, you can't get any lower. So there's going to be some minimum overall value for the forest, and as you deviate from that value, you're going to start out with all but a few trees being at their minimum leaf, making small deviations from the minimum value increase in discrete jumps. But decreased reliability at the extremes is a property of regressions in general, not just random forests.
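A toy numeric version of this argument, representing each tree only by its vector of leaf values (hypothetical numbers): the forest prediction averages one leaf per tree, so the smallest reachable prediction is exactly the mean of the per-tree minimum leaves, and the set of reachable values, while large, is finite.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# 5 "trees", each with 4 leaf values (hypothetical):
n_trees, n_leaves = 5, 4
leaves = rng.normal(size=(n_trees, n_leaves))

# Enumerate every combination of one leaf per tree and average:
all_preds = np.array([np.mean(combo) for combo in product(*leaves)])

forest_min = all_preds.min()
mean_of_tree_mins = leaves.min(axis=1).mean()     # mean of the minimum leaves
n_distinct = len(np.unique(np.round(all_preds, 12)))
```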
|
Is random forest for regression a 'true' regression?
|
It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100
|
Is random forest for regression a 'true' regression?
It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100 different trees with 100 leaves each, then your random forest can theoretically have 100^100 different values, which can give 200 (decimal) digits of precision, or ~600 bits. Of course, there is going to be some overlap, so you're not actually going to see 100^100 different values. The distribution tends to get more discrete the more you get to the extremes; each tree is going to have some minimum leaf (a leaf that gives an output that's less than or equal to all the other leaves), and once you get the minimum leaf from each tree, you can't get any lower. So there's going to be some minimum overall value for the forest, and as you deviate from that value, you're going to start out with all but a few trees being at their minimum leaf, making small deviations from the minimum value increase in discrete jumps. But decreased reliability at the extremes is a property of regressions in general, not just random forests.
|
Is random forest for regression a 'true' regression?
It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100
|
14,559
|
Is random forest for regression a 'true' regression?
|
The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models conditional expectation. And a regression tree can indeed be seen as an estimator of conditional expectation.
In the leaf nodes you predict the average of the sample observations reaching that leaf, and an arithmetical mean is an estimator of an expectation. The branching pattern in the tree represents the conditioning.
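A minimal numeric check of this (simulated data): with a single split, each leaf mean recovers the conditional expectation of $Y$ on its side of the split.

```python
import numpy as np

rng = np.random.default_rng(3)

# E[Y | x <= 0.5] = 0 and E[Y | x > 0.5] = 2 by construction:
x = rng.uniform(0, 1, 20000)
y = 2.0 * (x > 0.5) + rng.normal(scale=0.1, size=x.size)

# Leaf predictions are sample means within each leaf:
left_pred = y[x <= 0.5].mean()
right_pred = y[x > 0.5].mean()
```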
|
Is random forest for regression a 'true' regression?
|
The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models condit
|
Is random forest for regression a 'true' regression?
The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models conditional expectation. And a regression tree can indeed be seen as an estimator of conditional expectation.
In the leaf nodes you predict the average of the sample observations reaching that leaf, and an arithmetical mean is an estimator of an expectation. The branching pattern in the tree represents the conditioning.
|
Is random forest for regression a 'true' regression?
The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models condit
|
14,560
|
Is random forest for regression a 'true' regression?
|
It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of the training data; there is a nice graphical example here.
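A hand-built one-split "tree" makes the point concrete (simulated data; the split location is arbitrary): even far outside the training range, the prediction is still an average of some training responses, so it can never exceed the largest training value.

```python
import numpy as np

# Perfectly linear target y = x over [0, 10]:
x_train = np.linspace(0, 10, 101)
y_train = x_train.copy()

split = 5.05                                  # arbitrary split location
leaf_lo = y_train[x_train <= split].mean()    # each leaf averages its subset
leaf_hi = y_train[x_train > split].mean()

def tree_predict(x):
    return np.where(x <= split, leaf_lo, leaf_hi)

# Far outside the training range the model just returns the upper leaf mean:
pred_outside = tree_predict(np.array([20.0, 100.0]))
```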
|
Is random forest for regression a 'true' regression?
|
It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of th
|
Is random forest for regression a 'true' regression?
It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of the training data; there is a nice graphical example here.
|
Is random forest for regression a 'true' regression?
It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of th
|
14,561
|
Why the default matrix norm is spectral norm and not Frobenius norm?
|
In general, I am unsure that the spectral norm is the most widely used. For example, the Frobenius norm is used to approximate solutions in non-negative matrix factorisation or for correlation/covariance matrix regularisation.
I think that part of this question stems from the terminology misdemeanour some people do (myself included) when referring to the Frobenius norm as the Euclidean matrix norm. We should not because actually the $L_2$ matrix norm (ie. the spectral norm) is the one that is induced to matrices when using the $L_2$ vector norm.
The Frobenius norm is element-wise: $||A||_F = \sqrt{\sum_{i,j}a_{i,j}^2}$, while the $L_2$ matrix norm ($||A||_2 = \sqrt{\lambda_{max}(A^T A)}$) is based on singular values, so it is therefore more "universal" (for lack of a better term?).
The $L_2$ matrix norm is a Euclidean-type norm since it is induced by the Euclidean vector norm, where $||A||_2 = \max\limits_{||x||_2 =1} || Ax||_2$. It is therefore an induced norm for matrices because it is induced by a vector norm, the $L_2$ vector norm in this case.
Probably MATLAB aims to provide the $L_2$ norm by default when using the command norm; as a consequence it provides the Euclidean vector norm but also the $L_2$ matrix norm, ie. the spectral matrix norm (rather than the wrongly quoted "Frobenius/Euclidean matrix norm").
Finally, let me note that what counts as the default norm is a matter of opinion to some extent: for example, J.E. Gentle's "Matrix Algebra - Theory, Computations, and Applications in Statistics" literally has a chapter (3.9.2) named "The Frobenius Norm - The “Usual” Norm"; so clearly the spectral norm is not the default norm for all parties considered! :) As commented by @amoeba, different communities might have different terminology conventions. It goes without saying that I think Gentle's book is an invaluable resource on the matter of Linear Algebra applications in Statistics and I would encourage you to look into it further!
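These relationships are easy to verify numerically (random test matrix; the unit-vector maximum in the induced-norm definition is only approximated here by sampling):

```python
import numpy as np

rng = np.random.default_rng(4)

A = rng.normal(size=(5, 3))
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending

fro = np.linalg.norm(A, 'fro')           # element-wise sqrt(sum a_ij^2)
spec = np.linalg.norm(A, 2)              # spectral norm = largest singular value

# Induced-norm definition: max ||Ax||_2 over unit vectors x, approximated
# by many random unit vectors on the sphere.
xs = rng.normal(size=(3, 2000))
xs /= np.linalg.norm(xs, axis=0)
max_ax = np.linalg.norm(A @ xs, axis=0).max()
```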
|
Why the default matrix norm is spectral norm and not Frobenius norm?
|
In general, I am unsure that the spectral norm is the most widely used. For example the Frobenius norm is used for to approximate solution on non-negative matrix factorisation or correlation/covarian
|
Why the default matrix norm is spectral norm and not Frobenius norm?
In general, I am unsure that the spectral norm is the most widely used. For example, the Frobenius norm is used to approximate solutions in non-negative matrix factorisation or for correlation/covariance matrix regularisation.
I think that part of this question stems from the terminology misdemeanour some people do (myself included) when referring to the Frobenius norm as the Euclidean matrix norm. We should not because actually the $L_2$ matrix norm (ie. the spectral norm) is the one that is induced to matrices when using the $L_2$ vector norm.
The Frobenius norm is element-wise: $||A||_F = \sqrt{\sum_{i,j}a_{i,j}^2}$, while the $L_2$ matrix norm ($||A||_2 = \sqrt{\lambda_{max}(A^T A)}$) is based on singular values, so it is therefore more "universal" (for lack of a better term?).
The $L_2$ matrix norm is a Euclidean-type norm since it is induced by the Euclidean vector norm, where $||A||_2 = \max\limits_{||x||_2 =1} || Ax||_2$. It is therefore an induced norm for matrices because it is induced by a vector norm, the $L_2$ vector norm in this case.
Probably MATLAB aims to provide the $L_2$ norm by default when using the command norm; as a consequence it provides the Euclidean vector norm but also the $L_2$ matrix norm, ie. the spectral matrix norm (rather than the wrongly quoted "Frobenius/Euclidean matrix norm").
Finally, let me note that what counts as the default norm is a matter of opinion to some extent: for example, J.E. Gentle's "Matrix Algebra - Theory, Computations, and Applications in Statistics" literally has a chapter (3.9.2) named "The Frobenius Norm - The “Usual” Norm"; so clearly the spectral norm is not the default norm for all parties considered! :) As commented by @amoeba, different communities might have different terminology conventions. It goes without saying that I think Gentle's book is an invaluable resource on the matter of Linear Algebra applications in Statistics and I would encourage you to look into it further!
|
Why the default matrix norm is spectral norm and not Frobenius norm?
In general, I am unsure that the spectral norm is the most widely used. For example the Frobenius norm is used for to approximate solution on non-negative matrix factorisation or correlation/covarian
|
14,562
|
Why the default matrix norm is spectral norm and not Frobenius norm?
|
A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$ due to the constraints of finite arithmetic, so that $A\tilde x \approx b$, in some suitable sense. What is it that your solution represents, then? Well, it may well be an exact solution to some other system like
$$
\tilde A \tilde x = \tilde b
$$
So for $\tilde x$ to have utility, the tilde-system must be close to the original system:
$$
\tilde A \approx A, \quad \tilde b \approx b
$$
If your algorithm of solving the original system satisfies that property, then it is referred to as backward stable. Now, the accurate analysis of how big the discrepancies $\tilde A-A$, $\tilde b-b$ are eventually leads to error bounds which are expressed as $\| \tilde A-A \|$, $\| \tilde b-b\|$. For some analyses, the $l_1$ norm (max column sum) is the easiest one to push through, for others, the $l_\infty$ norm (max row sum) is the easiest to push through (for components of the solution in the linear system case, for instance), and for yet others, the $l_2$ spectral norm is the most appropriate one (induced by the traditional $l_2$ vector norm, as pointed out in another answer). For the workhorse of statistical computing in symmetric p.s.d. matrix inversion, the Cholesky decomposition (trivia: the first sound is a [x] as in the Greek letter "chi", not [tʃ] as in "chase"), the most convenient norm for keeping track of the error bounds is the $l_2$ norm... although the Frobenius norm also pops up in some results, e.g. on partitioned matrix inversion.
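A small numeric sketch of these ideas: the normwise relative backward error of a solve (using the standard residual-based formula $\eta = \|r\| / (\|A\|\,\|\tilde x\| + \|b\|)$, which is tiny for a backward-stable solver) together with the three norms mentioned above.

```python
import numpy as np

rng = np.random.default_rng(5)

A = rng.normal(size=(6, 6))
b = rng.normal(size=6)
x = np.linalg.solve(A, b)                 # computed solution x~

# Normwise relative backward error of the computed solution:
r = b - A @ x
eta = np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x)
                           + np.linalg.norm(b))

# The three norms used in the error analyses above:
l1 = np.linalg.norm(A, 1)                 # max column sum
linf = np.linalg.norm(A, np.inf)          # max row sum
l2 = np.linalg.norm(A, 2)                 # spectral norm
```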
|
Why the default matrix norm is spectral norm and not Frobenius norm?
|
A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$
|
Why the default matrix norm is spectral norm and not Frobenius norm?
A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$ due to the constraints of finite arithmetic, so that $A\tilde x \approx b$, in some suitable sense. What is it that your solution represents, then? Well, it may well be an exact solution to some other system like
$$
\tilde A \tilde x = \tilde b
$$
So for $\tilde x$ to have utility, the tilde-system must be close to the original system:
$$
\tilde A \approx A, \quad \tilde b \approx b
$$
If your algorithm of solving the original system satisfies that property, then it is referred to as backward stable. Now, the accurate analysis of how big the discrepancies $\tilde A-A$, $\tilde b-b$ are eventually leads to error bounds which are expressed as $\| \tilde A-A \|$, $\| \tilde b-b\|$. For some analyses, the $l_1$ norm (max column sum) is the easiest one to push through, for others, the $l_\infty$ norm (max row sum) is the easiest to push through (for components of the solution in the linear system case, for instance), and for yet others, the $l_2$ spectral norm is the most appropriate one (induced by the traditional $l_2$ vector norm, as pointed out in another answer). For the workhorse of statistical computing in symmetric p.s.d. matrix inversion, the Cholesky decomposition (trivia: the first sound is a [x] as in the Greek letter "chi", not [tʃ] as in "chase"), the most convenient norm for keeping track of the error bounds is the $l_2$ norm... although the Frobenius norm also pops up in some results, e.g. on partitioned matrix inversion.
|
Why the default matrix norm is spectral norm and not Frobenius norm?
A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$
|
14,563
|
Why the default matrix norm is spectral norm and not Frobenius norm?
|
The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constants $C_1,C_2$, which depend only on dimension (and a,b) such that:
$$C_1\|x\|_b\leq \|x\|_a\leq C_2\|x\|_b.$$
This implies that norms in finite dimensions are quite boring and there is essentially no difference between them except in how they scale. This usually means that you can choose the most convenient norm for the problem you're trying to solve. Usually you want to answer questions like "is this operator or procedure bounded" or "does this numerical process converge." With boundedness, you only usually care that something is finite. With convergence, by sacrificing the rate at which you have convergence, you can opt to use a more convenient norm.
For example, in numerical linear algebra, the Frobenius norm is sometimes preferred because it's a lot easier to calculate than the Euclidean norm, and also because it naturally connects with a wider class of Hilbert–Schmidt operators. Also, like the Euclidean norm, it's submultiplicative: $\|AB\|_F\leq \|A\|_F\|B\|_F$, unlike, say, the max norm, so it allows you to easily talk about operator multiplication in whatever space you're working in. People tend to really like both the $p=2$ norm and the Frobenius norm because they have natural relations to both the eigenvalues and singular values of matrices, along with being submultiplicative.
For practical purposes, the differences between norms become more pronounced because we live in a world of dimensions and it usually matters how big a certain quantity is, and how it's measured. Those constants $C_1,C_2$ above are not exactly tight, so it becomes important just how much more or less a certain norm $\|x\|_a$ is compared to $\|x\|_b$.
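Numerically, the equivalence constants between the spectral and Frobenius norms are explicit ($C_1 = 1$, $C_2 = \sqrt{n}$ for an $n\times n$ matrix), submultiplicativity of the Frobenius norm can be checked directly, and the max norm provides a counterexample (random test matrices):

```python
import numpy as np

rng = np.random.default_rng(6)

A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

spec = np.linalg.norm(A, 2)
fro = np.linalg.norm(A, 'fro')
n = A.shape[0]

# Frobenius norm is submultiplicative: ||AB||_F <= ||A||_F ||B||_F
fro_sub = np.linalg.norm(A @ B, 'fro') <= np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')

# The max norm is not: for the all-ones 2x2 matrix J, max|JJ| = 2 > 1 = max|J|^2
J = np.ones((2, 2))
max_sub_fails = np.abs(J @ J).max() > np.abs(J).max() ** 2
```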
|
Why the default matrix norm is spectral norm and not Frobenius norm?
|
The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constan
|
Why the default matrix norm is spectral norm and not Frobenius norm?
The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constants $C_1,C_2$, which depend only on dimension (and a,b) such that:
$$C_1\|x\|_b\leq \|x\|_a\leq C_2\|x\|_b.$$
This implies that norms in finite dimensions are quite boring and there is essentially no difference between them except in how they scale. This usually means that you can choose the most convenient norm for the problem you're trying to solve. Usually you want to answer questions like "is this operator or procedure bounded" or "does this numerical process converge." With boundedness, you only usually care that something is finite. With convergence, by sacrificing the rate at which you have convergence, you can opt to use a more convenient norm.
For example, in numerical linear algebra, the Frobenius norm is sometimes preferred because it's a lot easier to calculate than the Euclidean norm, and also because it naturally connects with a wider class of Hilbert–Schmidt operators. Also, like the Euclidean norm, it's submultiplicative: $\|AB\|_F\leq \|A\|_F\|B\|_F$, unlike, say, the max norm, so it allows you to easily talk about operator multiplication in whatever space you're working in. People tend to really like both the $p=2$ norm and the Frobenius norm because they have natural relations to both the eigenvalues and singular values of matrices, along with being submultiplicative.
For practical purposes, the differences between norms become more pronounced because we live in a world of dimensions and it usually matters how big a certain quantity is, and how it's measured. Those constants $C_1,C_2$ above are not exactly tight, so it becomes important just how much more or less a certain norm $\|x\|_a$ is compared to $\|x\|_b$.
|
Why the default matrix norm is spectral norm and not Frobenius norm?
The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constan
|
14,564
|
Why does propensity score matching work for causal inference?
|
I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs when a variable affects not only the treatment assigned but also the outcomes. When a randomized experiment is performed, subjects are randomized to treatments so that, on average, the subjects assigned to each treatment should be similar with respect to the covariates (age, race, gender, etc.). As a result of this randomization, it's unlikely (especially in large samples) that differences in the outcome are due to any covariates, but due to the treatment applied, since, on average, the covariates in the treatment groups are similar.
On the other hand, with observational data there is no random mechanism that assigns subjects to treatments. Take for example a study to examine the survival rates of patients following a new heart surgery compared to a standard surgical procedure. Typically one cannot randomize patients to each procedure for ethical reasons. As a result patients and doctors self-select into one of the treatments, often due to a number of reasons related to their covariates. For example the new procedure might be somewhat riskier if you are older, and as a result doctors might recommend the new treatment more often to younger patients. If this happens and you look at survival rates, the new treatment might appear to be more effective, but this would be misleading since younger patients were assigned to this treatment and younger patients tend to live longer, all else being equal. This is where propensity scores come in handy.
Propensity scores help with the fundamental problem of causal inference -- that you may have confounding due to the non-randomization of subjects to treatments, and this may be the cause of the "effects" you are seeing rather than the intervention or treatment alone. If you were able to somehow modify your analysis so that the covariates (say age, sex, gender, health status) were “balanced” between the treatment groups, you would have strong evidence that the difference in outcomes is due to the intervention/treatment rather than these covariates. Propensity scores determine each subject’s probability of being assigned to the treatment that they received given the set of observed covariates. If you then match on these probabilities (propensity scores), then what you have done is taken subjects who were equally likely to be assigned to each treatment and compared them with one another, effectively comparing apples to apples.
You may ask why not exactly match on the covariates (e.g. make sure you match 40 year old men in good health in treatment 1 with 40 year old men in good health in treatment 2)? This works fine for large samples and a few covariates, but it becomes nearly impossible to do when the sample size is small and the number of covariates is even moderately sized (see the curse of dimensionality on Cross-Validated for why this is the case).
Now, all this being said, the Achilles heel of propensity score is the assumption of no unobserved confounders. This assumption states that you have not failed to include any covariates in your adjustment that are potential confounders. Intuitively, the reason behind this is that if you haven’t included a confounder when creating your propensity score, how can you adjust for it? There are also additional assumptions such as the stable unit treatment value assumption, which states that the treatment assigned to one subject does not affect the potential outcome of the other subjects.
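Here is a self-contained sketch of the whole pipeline on simulated data (all numbers hypothetical): a single observed confounder biases the naive comparison, a hand-rolled logistic regression estimates the propensity score, and 1:1 nearest-neighbour matching with replacement recovers something close to the true effect.

```python
import numpy as np

rng = np.random.default_rng(7)

# One observed confounder z raises both the probability of treatment and
# the outcome; the true treatment effect is 1.
n = 4000
z = rng.normal(size=n)
t = rng.uniform(size=n) < 1 / (1 + np.exp(-z))          # treatment assignment
y = 1.0 * t + 2.0 * z + rng.normal(scale=0.5, size=n)   # outcome

naive = y[t].mean() - y[~t].mean()                      # confounded estimate

# Propensity score P(T=1 | z) via logistic regression (gradient ascent):
X = np.column_stack([np.ones(n), z])
beta = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (t - p) / n
ps = 1 / (1 + np.exp(-X @ beta))

# 1:1 nearest-neighbour matching (with replacement) of treated to controls:
treated, controls = np.where(t)[0], np.where(~t)[0]
dist = np.abs(ps[treated][:, None] - ps[controls][None, :])
matches = controls[np.argmin(dist, axis=1)]
matched_est = np.mean(y[treated] - y[matches])
```

Matching on the score balances the confounder between the matched groups, which is exactly the "apples to apples" comparison described above.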
|
Why does propensity score matching work for causal inference?
|
I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs
|
Why does propensity score matching work for causal inference?
I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs when a variable affects not only the treatment assigned but also the outcomes. When a randomized experiment is performed, subjects are randomized to treatments so that, on average, the subjects assigned to each treatment should be similar with respect to the covariates (age, race, gender, etc.). As a result of this randomization, it's unlikely (especially in large samples) that differences in the outcome are due to any covariates, but due to the treatment applied, since, on average, the covariates in the treatment groups are similar.
On the other hand, with observational data there is no random mechanism that assigns subjects to treatments. Take for example a study to examine the survival rates of patients following a new heart surgery compared to a standard surgical procedure. Typically one cannot randomize patients to each procedure for ethical reasons. As a result patients and doctors self-select into one of the treatments, often due to a number of reasons related to their covariates. For example the new procedure might be somewhat riskier if you are older, and as a result doctors might recommend the new treatment more often to younger patients. If this happens and you look at survival rates, the new treatment might appear to be more effective, but this would be misleading since younger patients were assigned to this treatment and younger patients tend to live longer, all else being equal. This is where propensity scores come in handy.
Propensity scores help with the fundamental problem of causal inference -- that you may have confounding due to the non-randomization of subjects to treatments, and this may be the cause of the "effects" you are seeing rather than the intervention or treatment alone. If you were able to somehow modify your analysis so that the covariates (say age, sex, gender, health status) were “balanced” between the treatment groups, you would have strong evidence that the difference in outcomes is due to the intervention/treatment rather than these covariates. Propensity scores determine each subject’s probability of being assigned to the treatment that they received given the set of observed covariates. If you then match on these probabilities (propensity scores), then what you have done is taken subjects who were equally likely to be assigned to each treatment and compared them with one another, effectively comparing apples to apples.
You may ask why not exactly match on the covariates (e.g. make sure you match 40 year old men in good health in treatment 1 with 40 year old men in good health in treatment 2)? This works fine for large samples and a few covariates, but it becomes nearly impossible to do when the sample size is small and the number of covariates is even moderately sized (see the curse of dimensionality on Cross-Validated for why this is the case).
Now, all this being said, the Achilles heel of propensity score is the assumption of no unobserved confounders. This assumption states that you have not failed to include any covariates in your adjustment that are potential confounders. Intuitively, the reason behind this is that if you haven’t included a confounder when creating your propensity score, how can you adjust for it? There are also additional assumptions such as the stable unit treatment value assumption, which states that the treatment assigned to one subject does not affect the potential outcome of the other subjects.
|
Why does propensity score matching work for causal inference?
I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs
|
14,565
|
Why does propensity score matching work for causal inference?
|
In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to adjust for more observed potential confounders than the sample size may allow regression models to incorporate. Propensity score adjustment (best done through covariate adjustment in the majority of cases, using a spline in the logit PS) can be thought of as a data reduction technique where the reduction is along an important axis - confounding. It does not however handle outcome heterogeneity (susceptibility bias), so you also have to adjust for key important covariates even when using propensities (see also issues related to non-collapsibility of odds and hazard ratios).
Propensity score matching can exclude many observations and thus be terribly inefficient. I view any method that excludes relevant observations as problematic. The real problem with matching is that it excludes easily matched observations due to some perceived need for having 1:1 matching, and most matching algorithms are observation order-dependent.
Note that it is very easy when doing standard regression adjustment for confounding to check for and exclude non-overlap regions. Propensity score users are taught to do this and the only reason regression modelers don't is that they are not taught to.
Propensity score analysis hides any interactions with exposure, and propensity score matching hides in addition a possible relationship between PS and treatment effect.
Sensitivity (to unmeasured confounders) analysis has been worked out for PS but is even easier to do with standard regression modeling.
If you use flexible regression methods to estimate the PS (e.g., don't assume any continuous variables act linearly) you don't even need to check for balance - there must be balance or the PS regression model was not correctly specified in the beginning. You only need to check for non-overlap. This assumes there are no important interactions that were omitted from the propensity model. Matching makes the same assumption.
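The covariate-adjustment approach described above (rather than matching) can be sketched concretely: regress the outcome on the treatment indicator plus a spline in the logit propensity score. The data and effect sizes below are simulated for illustration; the `splines` package ships with R:

```r
# Sketch of covariate adjustment on the logit propensity score
# (simulated data; all names and effect sizes hypothetical).
library(splines)
set.seed(3)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
treat <- rbinom(n, 1, plogis(0.8 * x1 - 0.5 * x2))  # confounded assignment
y  <- 1.5 * treat + x1 + x2 + rnorm(n)              # true treatment effect = 1.5

ps <- fitted(glm(treat ~ x1 + x2, family = binomial))

# Outcome model: treatment indicator plus a natural spline in logit(PS)
fit <- lm(y ~ treat + ns(qlogis(ps), df = 4))
coef(fit)["treat"]   # approximately 1.5
```

Unlike matching, no observations are discarded; the spline lets the adjustment be flexible without assuming the outcome is linear in the propensity score.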
|
Why does propensity score matching work for causal inference?
|
In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to ad
|
Why does propensity score matching work for causal inference?
In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to adjust for more observed potential confounders than the sample size may allow regression models to incorporate. Propensity score adjustment (best done through covariate adjustment in the majority of cases, using a spline in the logit PS) can be thought of as a data reduction technique where the reduction is along an important axis - confounding. It does not, however, handle outcome heterogeneity (susceptibility bias) so you also have to adjust for key important covariates even when using propensities (see also issues related to non-collapsibility of odds and hazard ratios).
Propensity score matching can exclude many observations and thus be terribly inefficient. I view any method that excludes relevant observations as problematic. The real problem with matching is that it excludes easily matched observations due to some perceived need for having 1:1 matching, and most matching algorithms are observation order-dependent.
Note that it is very easy when doing standard regression adjustment for confounding to check for and exclude non-overlap regions. Propensity score users are taught to do this and the only reason regression modelers don't is that they are not taught to.
Propensity score analysis hides any interactions with exposure, and propensity score matching hides in addition a possible relationship between PS and treatment effect.
Sensitivity (to unmeasured confounders) analysis has been worked out for PS but is even easier to do with standard regression modeling.
If you use flexible regression methods to estimate the PS (e.g., don't assume any continuous variables act linearly) you don't even need to check for balance - there must be balance or the PS regression model was not correctly specified in the beginning. You only need to check for non-overlap. This assumes there are no important interactions that were omitted from the propensity model. Matching makes the same assumption.
|
Why does propensity score matching work for causal inference?
In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to ad
|
14,566
|
Why does propensity score matching work for causal inference?
|
I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correlated with the potential outcomes $y_{0i},y_{1i}$ and with the likelihood of receiving treatment, then if you find that the expected outcome of the treated is better than the expected outcome of the untreated, this may be a spurious finding since the treated tend to have higher $x$ and therefore have higher $y_{0i},y_{1i}$. The problem arises because $x$ makes $y_{0i},y_{1i}$ correlated with the treatment.
This problem can be solved by controlling for $x$. If we think that the relationship between the potential outcomes and the variables $x$ is linear, we just do this by including $x$ in a regression with a dummy variable for treatment, and the dummy variable interacted with $x$. Of course, linear regression is flexible since we can include functions of $x$ as well. But what if we do not want to impose a functional form? Then we need to use a non-parametric approach: matching.
With matching, we compare treated and untreated observations with similar $x$. We come away from this with an estimate of the effect of treatment for all $x$ values (or small ranges of values or "buckets") for which we have both treated and untreated observations. If we do not have many such $x$ values or buckets, in particular if $x$ is a high-dimensional vector so it is difficult to find observations close to one another, then it is helpful to project this space onto one dimension.
This is what propensity score matching does. If $y_{0i},y_{1i}$ are uncorrelated with treatment given $x_i$, then it turns out that they are also uncorrelated with treatment given $p(x_i)$ where $p(x)$ is the probability of treatment given $x$, i.e. the propensity score of $x$.
Here's your intuition: if we find a sub sample of observations with a very similar propensity score $p(x)$, then for that sub-sample, the treated and untreated groups are uncorrelated with $x$. Each observation is equally likely to be treated or untreated; this implies that any treated observation is equally likely to come from any of the $x$ values in the sub-sample. Since $x$ is what determines the potential outcomes in our model, this implies that, for that sub-sample, the potential outcomes $y_{0i},y_{1i}$ are uncorrelated with the treatment. This condition ensures that the sub-sample average difference of outcome between treated and untreated is a consistent estimate of the average treatment effect on this sub-sample, i.e.
$$
E[y_i|\text{Treated},p(x)] - E[y_i|\text{Untreated},p(x)]
$$
is a consistent estimate of the local average treatment effect.
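A minimal sketch of this estimand in R, using quintile buckets of the estimated propensity score to stand in for conditioning on $p(x)$ (simulated data, hypothetical effect sizes):

```r
# Sketch (simulated data): within-bucket treated-minus-untreated differences,
# averaged across propensity score quintiles.
set.seed(2)
n <- 1000
x <- rnorm(n)
treat <- rbinom(n, 1, plogis(x))   # x drives selection into treatment
y <- 1 + x + 1 * treat + rnorm(n)  # true treatment effect = 1

ps <- fitted(glm(treat ~ x, family = binomial))
bucket <- cut(ps, quantile(ps, 0:5 / 5), include.lowest = TRUE)

# E[y | Treated, p(x)] - E[y | Untreated, p(x)] within each bucket, then averaged
diffs <- tapply(seq_len(n), bucket, function(i)
  mean(y[i][treat[i] == 1]) - mean(y[i][treat[i] == 0]))
mean(diffs)                                # near the true effect of 1
mean(y[treat == 1]) - mean(y[treat == 0])  # naive difference: biased upward by x
```

With only five buckets some residual within-bucket confounding remains, but the stratified estimate is already far closer to the truth than the naive comparison.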
Further reading:
Should we really use propensity score matching in practice?
Related question comparing matching and regression
|
Why does propensity score matching work for causal inference?
|
I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correla
|
Why does propensity score matching work for causal inference?
I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correlated with the potential outcomes $y_{0i},y_{1i}$ and with the likelihood of receiving treatment, then if you find that the expected outcome of the treated is better than the expected outcome of the untreated, this may be a spurious finding since the treated tend to have higher $x$ and therefore have higher $y_{0i},y_{1i}$. The problem arises because $x$ makes $y_{0i},y_{1i}$ correlated with the treatment.
This problem can be solved by controlling for $x$. If we think that the relationship between the potential outcomes and the variables $x$ is linear, we just do this by including $x$ in a regression with a dummy variable for treatment, and the dummy variable interacted with $x$. Of course, linear regression is flexible since we can include functions of $x$ as well. But what if we do not want to impose a functional form? Then we need to use a non-parametric approach: matching.
With matching, we compare treated and untreated observations with similar $x$. We come away from this with an estimate of the effect of treatment for all $x$ values (or small ranges of values or "buckets") for which we have both treated and untreated observations. If we do not have many such $x$ values or buckets, in particular if $x$ is a high-dimensional vector so it is difficult to find observations close to one another, then it is helpful to project this space onto one dimension.
This is what propensity score matching does. If $y_{0i},y_{1i}$ are uncorrelated with treatment given $x_i$, then it turns out that they are also uncorrelated with treatment given $p(x_i)$ where $p(x)$ is the probability of treatment given $x$, i.e. the propensity score of $x$.
Here's your intuition: if we find a sub sample of observations with a very similar propensity score $p(x)$, then for that sub-sample, the treated and untreated groups are uncorrelated with $x$. Each observation is equally likely to be treated or untreated; this implies that any treated observation is equally likely to come from any of the $x$ values in the sub-sample. Since $x$ is what determines the potential outcomes in our model, this implies that, for that sub-sample, the potential outcomes $y_{0i},y_{1i}$ are uncorrelated with the treatment. This condition ensures that the sub-sample average difference of outcome between treated and untreated is a consistent estimate of the average treatment effect on this sub-sample, i.e.
$$
E[y_i|\text{Treated},p(x)] - E[y_i|\text{Untreated},p(x)]
$$
is a consistent estimate of the local average treatment effect.
Further reading:
Should we really use propensity score matching in practice?
Related question comparing matching and regression
|
Why does propensity score matching work for causal inference?
I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correla
|
14,567
|
Why does propensity score matching work for causal inference?
|
It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps many confounding variables, or a regression model with only one variable - the propensity score (that may or may not be an equally complicated model consisting of those same confounders). You could stick with this regression on the propensity score, or you could compare the response within similar groups, where similarity is defined by the propensity score. In spirit you're doing the same thing, but some people feel that the latter method better highlights the causal task at hand.
Update following feedback
My thought for explaining the intuition behind why propensity score matching works was to explain the Propensity Score Theorem, i.e.,
$$Y(0), Y(1) \perp T \, | \, X \Rightarrow Y(0), Y(1) \perp T \, | \, p(X),$$ something I thought I could do using regression. But as @StatsStudent argues, regression makes it easy to extrapolate comparisons between treatment and control that never occur in the data. If this is part of why propensity score matching "works," then my answer was incomplete. I consulted Counterfactuals and Causal Inference and read about one version of nearest-neighbor matching, called "caliper matching" (p. 108) where propensity scores of treatment and nearest control case must be within some maximum distance, resulting in some treatment cases without matches. In this case, the method would still work by adjusting for the propensity score using a nonparametric analogue to regression, but it also makes clear what can't be known from the data alone (without a model to extrapolate from), and it allows a redefinition of the causal quantity given the available data.
|
Why does propensity score matching work for causal inference?
|
It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps m
|
Why does propensity score matching work for causal inference?
It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps many confounding variables, or a regression model with only one variable - the propensity score (that may or may not be an equally complicated model consisting of those same confounders). You could stick with this regression on the propensity score, or you could compare the response within similar groups, where similarity is defined by the propensity score. In spirit you're doing the same thing, but some people feel that the latter method better highlights the causal task at hand.
Update following feedback
My thought for explaining the intuition behind why propensity score matching works was to explain the Propensity Score Theorem, i.e.,
$$Y(0), Y(1) \perp T \, | \, X \Rightarrow Y(0), Y(1) \perp T \, | \, p(X),$$ something I thought I could do using regression. But as @StatsStudent argues, regression makes it easy to extrapolate comparisons between treatment and control that never occur in the data. If this is part of why propensity score matching "works," then my answer was incomplete. I consulted Counterfactuals and Causal Inference and read about one version of nearest-neighbor matching, called "caliper matching" (p. 108) where propensity scores of treatment and nearest control case must be within some maximum distance, resulting in some treatment cases without matches. In this case, the method would still work by adjusting for the propensity score using a nonparametric analogue to regression, but it also makes clear what can't be known from the data alone (without a model to extrapolate from), and it allows a redefinition of the causal quantity given the available data.
|
Why does propensity score matching work for causal inference?
It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps m
|
14,568
|
Does $r$-squared have a $p$-value?
|
In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value associated with $r^2$ "directly" using the fact that $r^2$ under the null hypothesis is distributed as $\textrm{Beta}(\frac{v_n}{2},\frac{v_d}{2})$, where $v_n$ and $v_d$ are the numerator and denominator degrees of freedom, respectively, for the associated $F$-statistic.
The 3rd bullet point in the Derived from other distributions subsection of the Wikipedia entry on the beta distribution tells us that:
If $X \sim \chi^2(\alpha)$ and $Y \sim \chi^2(\beta)$ are independent, then $\frac{X}{X+Y} \sim \textrm{Beta}(\frac{\alpha}{2}, \frac{\beta}{2})$.
Well, we can write $r^2$ in that $\frac{X}{X+Y}$ form.
Let $SS_Y$ be the total sum of squares for a variable $Y$, $SS_E$ be the sum of squared errors for a regression of $Y$ on some other variables, and $SS_R$ be the "sum of squares reduced," that is, $SS_R=SS_Y-SS_E$. Then
$$
r^2=1-\frac{SS_E}{SS_Y}=\frac{SS_Y-SS_E}{SS_Y}=\frac{SS_R}{SS_R+SS_E}
$$
And of course, under the null hypothesis (and after dividing by the error variance $\sigma^2$, which cancels in the ratio), $SS_R$ and $SS_E$ are distributed as $\chi^2$ with $v_n$ and $v_d$ degrees of freedom, respectively. Therefore,
$$
r^2 \sim \textrm{Beta}(\frac{v_n}{2},\frac{v_d}{2})
$$
(Of course, I didn't show that the two chi-squares are independent. Maybe a commentator can say something about that.)
Demonstration in R (borrowing code from @gung):
set.seed(111)
x = runif(20)
y = 5 + rnorm(20)
cor.test(x,y)
# Pearson's product-moment correlation
#
# data: x and y
# t = 1.151, df = 18, p-value = 0.2648
# alternative hypothesis: true correlation is not equal to 0
# 95 percent confidence interval:
# -0.2043606 0.6312210
# sample estimates:
# cor
# 0.2618393
summary(lm(y~x))
# Call:
# lm(formula = y ~ x)
#
# Residuals:
# Min 1Q Median 3Q Max
# -1.6399 -0.6246 0.1968 0.5168 2.0355
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 4.6077 0.4534 10.163 6.96e-09 ***
# x 1.1121 0.9662 1.151 0.265
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 1.061 on 18 degrees of freedom
# Multiple R-squared: 0.06856, Adjusted R-squared: 0.01681
# F-statistic: 1.325 on 1 and 18 DF, p-value: 0.2648
1 - pbeta(0.06856, 1/2, 18/2)
# [1] 0.2647731
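Since $F = (r^2/v_n)\,/\,((1-r^2)/v_d)$, the beta-based p-value and the F-test p-value are algebraically identical, which can be checked directly from the values above:

```r
# The beta and F formulations give identical p-values.
r2 <- 0.06856
f  <- (r2 / 1) / ((1 - r2) / 18)          # F = (r^2/v_n) / ((1-r^2)/v_d)
pf(f, 1, 18, lower.tail = FALSE)          # 0.2647731, matching the F-test above
pbeta(r2, 1/2, 18/2, lower.tail = FALSE)  # 0.2647731, the beta-based p-value
```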
|
Does $r$-squared have a $p$-value?
|
In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value
|
Does $r$-squared have a $p$-value?
In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value associated with $r^2$ "directly" using the fact that $r^2$ under the null hypothesis is distributed as $\textrm{Beta}(\frac{v_n}{2},\frac{v_d}{2})$, where $v_n$ and $v_d$ are the numerator and denominator degrees of freedom, respectively, for the associated $F$-statistic.
The 3rd bullet point in the Derived from other distributions subsection of the Wikipedia entry on the beta distribution tells us that:
If $X \sim \chi^2(\alpha)$ and $Y \sim \chi^2(\beta)$ are independent, then $\frac{X}{X+Y} \sim \textrm{Beta}(\frac{\alpha}{2}, \frac{\beta}{2})$.
Well, we can write $r^2$ in that $\frac{X}{X+Y}$ form.
Let $SS_Y$ be the total sum of squares for a variable $Y$, $SS_E$ be the sum of squared errors for a regression of $Y$ on some other variables, and $SS_R$ be the "sum of squares reduced," that is, $SS_R=SS_Y-SS_E$. Then
$$
r^2=1-\frac{SS_E}{SS_Y}=\frac{SS_Y-SS_E}{SS_Y}=\frac{SS_R}{SS_R+SS_E}
$$
And of course, under the null hypothesis (and after dividing by the error variance $\sigma^2$, which cancels in the ratio), $SS_R$ and $SS_E$ are distributed as $\chi^2$ with $v_n$ and $v_d$ degrees of freedom, respectively. Therefore,
$$
r^2 \sim \textrm{Beta}(\frac{v_n}{2},\frac{v_d}{2})
$$
(Of course, I didn't show that the two chi-squares are independent. Maybe a commentator can say something about that.)
Demonstration in R (borrowing code from @gung):
set.seed(111)
x = runif(20)
y = 5 + rnorm(20)
cor.test(x,y)
# Pearson's product-moment correlation
#
# data: x and y
# t = 1.151, df = 18, p-value = 0.2648
# alternative hypothesis: true correlation is not equal to 0
# 95 percent confidence interval:
# -0.2043606 0.6312210
# sample estimates:
# cor
# 0.2618393
summary(lm(y~x))
# Call:
# lm(formula = y ~ x)
#
# Residuals:
# Min 1Q Median 3Q Max
# -1.6399 -0.6246 0.1968 0.5168 2.0355
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 4.6077 0.4534 10.163 6.96e-09 ***
# x 1.1121 0.9662 1.151 0.265
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 1.061 on 18 degrees of freedom
# Multiple R-squared: 0.06856, Adjusted R-squared: 0.01681
# F-statistic: 1.325 on 1 and 18 DF, p-value: 0.2648
1 - pbeta(0.06856, 1/2, 18/2)
# [1] 0.2647731
|
Does $r$-squared have a $p$-value?
In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value
|
14,569
|
Does $r$-squared have a $p$-value?
|
I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population correlation between $X$ and response $Y$
F-test for zero population R-squared, i.e. none of the variability of $Y$ can be explained by differing $X$.
All three tests check for a linear association between $X$ and $Y$ and, fortunately(!), they all lead to the same result. Their test statistics are equivalent. (Tests 1 & 2 are based on the Student t-distribution with $n-2$ df; squaring their test statistic gives the F-distribution with 1 and $n-2$ df used in test 3.)
A quick example in R:
# Input
set.seed(3)
n <- 100
X <- runif(n)
Y <- rnorm(n) + X
cor.test(~ X + Y) # For test 2 (correlation)
# Output (part)
# t = 3.1472, df = 98, p-value = 0.002184
# alternative hypothesis: true correlation is not equal to 0
# Input (for the other two tests)
fit <- lm(Y ~ X)
summary(fit)
# Output (partial)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.03173 0.18214 -0.174 0.86204
X 1.02051 0.32426 3.147 0.00218 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9239 on 98 degrees of freedom
Multiple R-squared: 0.09179, Adjusted R-squared: 0.08253
F-statistic: 9.905 on 1 and 98 DF, p-value: 0.002184
As you can see, the three tests yield the same p value of 0.00218. Note that test 3 is the one in the last line of the output.
So the F-test for the R-squared is actually very common, although not many statisticians interpret it as a test for the R-squared.
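The equivalence can also be verified directly from the distribution functions: the squared t statistic is the F statistic, so the two-sided t p-value equals the upper-tail F p-value:

```r
# Using the values from the output above (t = 3.1472, df = 98):
t_stat <- 3.1472
2 * pt(-abs(t_stat), df = 98)                        # two-sided t p-value, ~0.002184
pf(t_stat^2, df1 = 1, df2 = 98, lower.tail = FALSE)  # identical F p-value
```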
|
Does $r$-squared have a $p$-value?
|
I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population co
|
Does $r$-squared have a $p$-value?
I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population correlation between $X$ and response $Y$
F-test for zero population R-squared, i.e. none of the variability of $Y$ can be explained by differing $X$.
All three tests check for a linear association between $X$ and $Y$ and, fortunately(!), they all lead to the same result. Their test statistics are equivalent. (Tests 1 & 2 are based on the Student t-distribution with $n-2$ df; squaring their test statistic gives the F-distribution with 1 and $n-2$ df used in test 3.)
A quick example in R:
# Input
set.seed(3)
n <- 100
X <- runif(n)
Y <- rnorm(n) + X
cor.test(~ X + Y) # For test 2 (correlation)
# Output (part)
# t = 3.1472, df = 98, p-value = 0.002184
# alternative hypothesis: true correlation is not equal to 0
# Input (for the other two tests)
fit <- lm(Y ~ X)
summary(fit)
# Output (partial)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.03173 0.18214 -0.174 0.86204
X 1.02051 0.32426 3.147 0.00218 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9239 on 98 degrees of freedom
Multiple R-squared: 0.09179, Adjusted R-squared: 0.08253
F-statistic: 9.905 on 1 and 98 DF, p-value: 0.002184
As you can see, the three tests yield the same p value of 0.00218. Note that test 3 is the one in the last line of the output.
So the F-test for the R-squared is actually very common, although not many statisticians interpret it as a test for the R-squared.
|
Does $r$-squared have a $p$-value?
I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population co
|
14,570
|
Does $r$-squared have a $p$-value?
|
You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical.
|
Does $r$-squared have a $p$-value?
|
You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical.
|
Does $r$-squared have a $p$-value?
You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical.
|
Does $r$-squared have a $p$-value?
You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical.
|
14,571
|
Does $r$-squared have a $p$-value?
|
There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution of a test statistic under the null hypothesis. Your title and question seem to reflect some confusion between the Pearson correlation and the "variance explained" $r^2$. I will consider the correlation coefficient first.
There is no "best" way to test the Pearson correlation which I'm aware of. Fisher's Z transformation is one such way, based on hyperbolic transformations, so that the inference is a little bit more efficient. This is certainly a "good" approach, but the sad bit is that inference for this parameter is consistent with inference about the slope parameter $\beta$ for association: they tell the same story in the long run.
The reason why statisticians have (classically) wholly preferred tests of $\beta$ is that we do have a "best" test: linear regression, whose OLS estimator is BLUE (best linear unbiased). In the days of modern statistics, we don't really care if a test is "best" any more, but linear regression has plenty of other fantastic properties that justify its continued usage for determining the association between two variables. In general, your intuition is right: they're essentially the same thing, and we focus our attention upon $\beta$ as a more practical measure of association.
The $r^2$ is a function of both the slope and the intercept. If either of these values is nonzero, the $r^2$ should have a discernible sampling distribution relative to that which would be expected if the linear parameters were zero. However, deriving distributions of $r^2$ under the null and comparing to $r^2$ under some alternative hypothesis doesn't give me much confidence that this test has much power to detect what we want it to. Just a gut feeling. Again turning to "best" estimators, OLS gives us "best" estimates of both the slope and the intercept, so we have confidence that our test is at least good for determining the same (if any) association by directly testing the model parameters. To me, jointly testing the $\alpha$ and $\beta$ with OLS is superior to any test about $r^2$ except in a rare case of (perhaps) a non-nested predictive modeling calibration application... but BIC would probably be a better measure in that scenario anyway.
|
Does $r$-squared have a $p$-value?
|
There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution
|
Does $r$-squared have a $p$-value?
There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution of a test statistic under the null hypothesis. Your title and question seem to reflect some confusion between the Pearson correlation and the "variance explained" $r^2$. I will consider the correlation coefficient first.
There is no "best" way to test the Pearson correlation which I'm aware of. Fisher's Z transformation is one such way, based on hyperbolic transformations, so that the inference is a little bit more efficient. This is certainly a "good" approach, but the sad bit is that inference for this parameter is consistent with inference about the slope parameter $\beta$ for association: they tell the same story in the long run.
The reason why statisticians have (classically) wholly preferred tests of $\beta$ is that we do have a "best" test: linear regression, whose OLS estimator is BLUE (best linear unbiased). In the days of modern statistics, we don't really care if a test is "best" any more, but linear regression has plenty of other fantastic properties that justify its continued usage for determining the association between two variables. In general, your intuition is right: they're essentially the same thing, and we focus our attention upon $\beta$ as a more practical measure of association.
The $r^2$ is a function of both the slope and the intercept. If either of these values is nonzero, the $r^2$ should have a discernible sampling distribution relative to that which would be expected if the linear parameters were zero. However, deriving distributions of $r^2$ under the null and comparing to $r^2$ under some alternative hypothesis doesn't give me much confidence that this test has much power to detect what we want it to. Just a gut feeling. Again turning to "best" estimators, OLS gives us "best" estimates of both the slope and the intercept, so we have confidence that our test is at least good for determining the same (if any) association by directly testing the model parameters. To me, jointly testing the $\alpha$ and $\beta$ with OLS is superior to any test about $r^2$ except in a rare case of (perhaps) a non-nested predictive modeling calibration application... but BIC would probably be a better measure in that scenario anyway.
|
Does $r$-squared have a $p$-value?
There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution
|
14,572
|
Does $r$-squared have a $p$-value?
|
This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a distribution, so a $p$-value doesn't really make sense.
Getting a $p$-value for $b$ makes a lot of sense - that's what tells you whether the model has a linear relationship or not. If $b$ is statistically significantly different from $0$ then you conclude that there is a linear relationship between the variables. The $r$ or $r^2$ then tells you how well the model explains the variation in the data. If $r^2$ is low, then your independent variable isn't helping to explain very much about the dependent variable.
A $p$-value for $a$ tells us if the intercept is statistically significantly different from $0$ or not. This is of varying usefulness, depending on the data. My favorite example: if you do a linear regression between gestation time and birth weight you might find an intercept of, say, 8 ounces that is statistically different from $0$. However, since the intercept represents a gestation age of $0$ weeks, it doesn't really mean anything.
If anyone does regularly calculate $p$-values for an $r^2$ I'd be interested in hearing about them.
|
Does $r$-squared have a $p$-value?
|
This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a
|
Does $r$-squared have a $p$-value?
This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a distribution, so a $p$-value doesn't really make sense.
Getting a $p$-value for $b$ makes a lot of sense - that's what tells you whether the model has a linear relationship or not. If $b$ is statistically significantly different from $0$ then you conclude that there is a linear relationship between the variables. The $r$ or $r^2$ then tells you how well the model explains the variation in the data. If $r^2$ is low, then your independent variable isn't helping to explain very much about the dependent variable.
A $p$-value for $a$ tells us if the intercept is statistically significantly different from $0$ or not. This is of varying usefulness, depending on the data. My favorite example: if you do a linear regression between gestation time and birth weight you might find an intercept of, say, 8 ounces that is statistically different from $0$. However, since the intercept represents a gestation age of $0$ weeks, it doesn't really mean anything.
If anyone does regularly calculate $p$-values for an $r^2$ I'd be interested in hearing about them.
|
Does $r$-squared have a $p$-value?
This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a
|
14,573
|
Variance-covariance structure for random-effects in lme4
|
The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that it is positive definite. Separate random effects terms are considered independent, however, so if you want to fit (e.g.) a model with random intercept and slope where the intercept and slope are uncorrelated (not necessarily a good idea), you can use the formula (1|g) + (0+x|g), where g is the grouping factor; the 0 in the second term suppresses the intercept. If you want to fit independent parameters of a categorical variable (again, possibly questionable), you probably need to construct numeric dummy variables by hand. You can, sort of, construct a compound-symmetric variance-covariance structure (although with non-negative covariances only) by treating the factor as a nested grouping variable. For example, if f is a factor, then (1|g/f) will assume equal correlations among the levels of f.
For other/more complex variance-covariance structures, your choices (in R) are to (1) use nlme (which has the pdMatrix constructors to allow more flexibility); (2) use MCMCglmm (which offers a variety of structures including unstructured, compound symmetric, identity with different variances, or identity with homogeneous variances); (3) use a special-purpose package such as pedigreemm that constructs a special structured matrix. There is a flexLambda branch on github that eventually hopes to provide more capabilities in this direction.
|
Variance-covariance structure for random-effects in lme4
|
The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that is positive defini
|
Variance-covariance structure for random-effects in lme4
The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that it is positive definite. Separate random effects terms are considered independent, however, so if you want to fit (e.g.) a model with random intercept and slope where the intercept and slope are uncorrelated (not necessarily a good idea), you can use the formula (1|g) + (0+x|g), where g is the grouping factor; the 0 in the second term suppresses the intercept. If you want to fit independent parameters of a categorical variable (again, possibly questionable), you probably need to construct numeric dummy variables by hand. You can, sort of, construct a compound-symmetric variance-covariance structure (although with non-negative covariances only) by treating the factor as a nested grouping variable. For example, if f is a factor, then (1|g/f) will assume equal correlations among the levels of f.
For other/more complex variance-covariance structures, your choices (in R) are to (1) use nlme (which has the pdMatrix constructors to allow more flexibility); (2) use MCMCglmm (which offers a variety of structures including unstructured, compound symmetric, identity with different variances, or identity with homogeneous variances); (3) use a special-purpose package such as pedigreemm that constructs a special structured matrix. There is a flexLambda branch on github that eventually hopes to provide more capabilities in this direction.
|
Variance-covariance structure for random-effects in lme4
The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that is positive defini
|
14,574
|
Variance-covariance structure for random-effects in lme4
|
I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects; which covariance terms are estimated is determined by the way the formula is written.
For example:
glmer(y ~ 1 + x1 + (1|g) + (0+x1|g), data=data, family="binomial")
Here there are two fixed effects that are allowed to vary randomly, and one grouping factor g. Because the two random effects are separated into their own terms, no covariance term is included between them. In other words, only the diagonal of the variance-covariance matrix is estimated. The zero in the second term explicitly suppresses the intercept in that term; without it, a random intercept would be implied and would be allowed to covary with x1.
A second example:
glmer(y ~ 1 + x1 + (1+x1|g), data=data, family="binomial")
Here a covariance between the intercept and x1 random effects is specified because 1+x1|g is all contained in the same term. In other words, all 3 possible parameters in the variance-covariance structure are estimated.
A slightly more complicated example:
glmer(y ~ 1 + x1 + x2 + (1+x1|g) + (0+x2|g), data=data, family="binomial")
Here the intercept and x1 random effects are allowed to vary together while a zero correlation is imposed between the x2 random effect and each of the other two. Again a 0 is included in the x2 random effect term only to explicitly avoid including a random intercept that covaries with the x2 random effect.
|
Variance-covariance structure for random-effects in lme4
|
I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects. Covariance terms are specified by the way the formula is written.
For example:
glmer(y ~
|
Variance-covariance structure for random-effects in lme4
I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects; which covariance terms are estimated is determined by the way the formula is written.
For example:
glmer(y ~ 1 + x1 + (1|g) + (0+x1|g), data=data, family="binomial")
Here there are two fixed effects that are allowed to vary randomly, and one grouping factor g. Because the two random effects are separated into their own terms, no covariance term is included between them. In other words, only the diagonal of the variance-covariance matrix is estimated. The zero in the second term explicitly suppresses the intercept in that term; without it, a random intercept would be implied and would be allowed to covary with x1.
A second example:
glmer(y ~ 1 + x1 + (1+x1|g), data=data, family="binomial")
Here a covariance between the intercept and x1 random effects is specified because 1+x1|g is all contained in the same term. In other words, all 3 possible parameters in the variance-covariance structure are estimated.
A slightly more complicated example:
glmer(y ~ 1 + x1 + x2 + (1+x1|g) + (0+x2|g), data=data, family="binomial")
Here the intercept and x1 random effects are allowed to vary together while a zero correlation is imposed between the x2 random effect and each of the other two. Again a 0 is included in the x2 random effect term only to explicitly avoid including a random intercept that covaries with the x2 random effect.
|
Variance-covariance structure for random-effects in lme4
I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects. Covariance terms are specified by the way the formula is written.
For example:
glmer(y ~
|
14,575
|
Does MLE require i.i.d. data? Or just independent parameters?
|
The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event }E;\theta)= {\mathbb P}(\text{observing } {\bf x};\theta).$$
Therefore, there is no assumption of independence of the observations. In the classical approach there is no definition for independence of parameters since they are not random variables; some related concepts could be identifiability, parameter orthogonality, and independence of the Maximum Likelihood Estimators (which are random variables).
Some examples,
(1). Discrete case. ${\bf x}=(x_1,...,x_n)$ is a sample of (independent) discrete observations with ${\mathbb P}(\text{observing } x_j ; \theta)>0$, then
$${\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n{\mathbb P}(\text{observing } x_j ; \theta).$$
Particularly, if $x_j\sim \text{Binomial}(N,\theta)$, with $N$ known, we have that
$${\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n \theta^{x_j}(1-\theta)^{N-x_j}.$$
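As a quick numerical check of this binomial case, here is a Python sketch with simulated data; the analytic MLE is $\hat\theta = \sum_j x_j / (nN)$, and a grid search over the log-likelihood lands on the same value:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10                                # known number of trials per observation
x = rng.binomial(N, 0.3, size=50)     # simulated sample (true theta = 0.3)

def loglik(theta):
    # log-likelihood up to an additive constant (the binomial coefficients)
    return np.sum(x * np.log(theta) + (N - x) * np.log(1.0 - theta))

grid = np.linspace(0.01, 0.99, 981)   # grid spacing 0.001
theta_hat = grid[np.argmax([loglik(t) for t in grid])]
```

The grid maximizer agrees with $\sum_j x_j / (nN)$ up to the grid resolution.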
(2). Continuous approximation. Let ${\bf x}=(x_1,...,x_n)$ be a sample from a continuous random variable $X$, with distribution $F$ and density $f$, observed with measurement error $\epsilon$; that is, you observe the intervals $(x_j-\epsilon,x_j+\epsilon)$. Then
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n {\mathbb P}[\text{observing } (x_j-\epsilon,x_j+\epsilon);\theta] = \prod_{j=1}^n[F(x_j+\epsilon;\theta)-F(x_j-\epsilon;\theta)]
\end{eqnarray*}
When $\epsilon$ is small, this can be approximated (using the Mean Value Theorem) by
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n f(x_j;\theta)
\end{eqnarray*}
For an example with the normal case, take a look at this.
(3). Dependent and Markov model. Suppose that ${\bf x}=(x_1,...,x_n)$ is a set of observations possibly dependent and let $f$ be the joint density of ${\bf x}$, then
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto f({\bf x}; \theta).
\end{eqnarray*}
If additionally the Markov property is satisfied, then
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto f({\bf x}; \theta) = f(x_1;\theta)\prod_{j=1}^{n-1} f(x_{j+1} \vert x_j ;\theta).
\end{eqnarray*}
Take also a look at this.
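To make the Markov factorization concrete, here is a sketch in Python for a Gaussian AR(1) process (which satisfies the Markov property); the log-likelihood is exactly $\log f(x_1;\theta) + \sum_j \log f(x_{j+1}\mid x_j;\theta)$, not a product of marginal densities:

```python
import numpy as np

# simulate a stationary Gaussian AR(1) process: x_t = phi * x_{t-1} + eps_t
rng = np.random.default_rng(2)
phi_true, sigma_true = 0.6, 1.0
n = 200
x = np.empty(n)
x[0] = rng.normal(0.0, sigma_true / np.sqrt(1.0 - phi_true**2))  # stationary start
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(0.0, sigma_true)

def norm_logpdf(z, mu, sd):
    return -0.5 * np.log(2.0 * np.pi * sd**2) - (z - mu)**2 / (2.0 * sd**2)

def loglik(phi, sigma):
    # log f(x_1) + sum_t log f(x_{t+1} | x_t): the Markov factorization above
    ll = norm_logpdf(x[0], 0.0, sigma / np.sqrt(1.0 - phi**2))
    ll += np.sum(norm_logpdf(x[1:], phi * x[:-1], sigma))
    return ll
```

Maximizing this function over $(\phi, \sigma)$ gives the MLE for the dependent sample; the true parameters score a higher likelihood than the (wrong) independence model $\phi = 0$.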
|
Does MLE require i.i.d. data? Or just independent parameters?
|
The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event
|
Does MLE require i.i.d. data? Or just independent parameters?
The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event }E;\theta)= {\mathbb P}(\text{observing } {\bf x};\theta).$$
Therefore, there is no assumption of independence of the observations. In the classical approach there is no definition for independence of parameters since they are not random variables; some related concepts could be identifiability, parameter orthogonality, and independence of the Maximum Likelihood Estimators (which are random variables).
Some examples,
(1). Discrete case. ${\bf x}=(x_1,...,x_n)$ is a sample of (independent) discrete observations with ${\mathbb P}(\text{observing } x_j ; \theta)>0$, then
$${\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n{\mathbb P}(\text{observing } x_j ; \theta).$$
Particularly, if $x_j\sim \text{Binomial}(N,\theta)$, with $N$ known, we have that
$${\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n \theta^{x_j}(1-\theta)^{N-x_j}.$$
(2). Continuous approximation. Let ${\bf x}=(x_1,...,x_n)$ be a sample from a continuous random variable $X$, with distribution $F$ and density $f$, observed with measurement error $\epsilon$; that is, you observe the intervals $(x_j-\epsilon,x_j+\epsilon)$. Then
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n {\mathbb P}[\text{observing } (x_j-\epsilon,x_j+\epsilon);\theta] = \prod_{j=1}^n[F(x_j+\epsilon;\theta)-F(x_j-\epsilon;\theta)]
\end{eqnarray*}
When $\epsilon$ is small, this can be approximated (using the Mean Value Theorem) by
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto \prod_{j=1}^n f(x_j;\theta)
\end{eqnarray*}
For an example with the normal case, take a look at this.
(3). Dependent and Markov model. Suppose that ${\bf x}=(x_1,...,x_n)$ is a set of observations possibly dependent and let $f$ be the joint density of ${\bf x}$, then
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto f({\bf x}; \theta).
\end{eqnarray*}
If additionally the Markov property is satisfied, then
\begin{eqnarray*}
{\mathcal L}(\theta;{\bf x})\propto f({\bf x}; \theta) = f(x_1;\theta)\prod_{j=1}^{n-1} f(x_{j+1} \vert x_j ;\theta).
\end{eqnarray*}
Take also a look at this.
|
Does MLE require i.i.d. data? Or just independent parameters?
The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event
|
14,576
|
Does MLE require i.i.d. data? Or just independent parameters?
|
(+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be produced by IID sampling.
If the dependence of the sampling can be written in the statistical model, you just write the likelihood accordingly and maximize it as usual.
The one case worth mentioning when you do not assume independence is that of multivariate Gaussian sampling (in time series analysis, for example). The dependence between two Gaussian variables can be modelled by their covariance term, which you incorporate in the likelihood.
To give a simplistic example, assume that you draw a sample of size $2$ from correlated Gaussian variables with same mean and variance. You would write the likelihood as
$$\frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}}\exp\left(-\frac{z}{2\sigma^2(1-\rho^2)}\right),$$
where $z$ is
$$z = (x_1-\mu)^2-2\rho(x_1-\mu)(x_2-\mu)+(x_2-\mu)^2.$$
This is not the product of the individual likelihoods. Still, you would maximize this with parameters $(\mu, \sigma, \rho)$ to get their MLE.
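One way to sanity-check this expression (a Python sketch; the parameter and data values are arbitrary) is to compare it to the generic bivariate normal density computed from the covariance matrix $\sigma^2\,[[1, \rho], [\rho, 1]]$:

```python
import numpy as np

mu, sigma, rho = 1.0, 2.0, 0.5        # arbitrary parameter values
x1, x2 = 2.3, 0.7                     # an arbitrary sample of size 2

# likelihood from the formula above
z = (x1 - mu)**2 - 2.0*rho*(x1 - mu)*(x2 - mu) + (x2 - mu)**2
L_formula = np.exp(-z / (2.0 * sigma**2 * (1.0 - rho**2))) \
            / (2.0 * np.pi * sigma**2 * np.sqrt(1.0 - rho**2))

# the same density via the generic multivariate normal with covariance Sigma
Sigma = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
d = np.array([x1 - mu, x2 - mu])
L_generic = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) \
            / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))
```

The two agree to machine precision, confirming that the quoted formula is just the bivariate normal density written out for equal means and variances.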
|
Does MLE require i.i.d. data? Or just independent parameters?
|
(+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be
|
Does MLE require i.i.d. data? Or just independent parameters?
(+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be produced by IID sampling.
If the dependence of the sampling can be written in the statistical model, you just write the likelihood accordingly and maximize it as usual.
The one case worth mentioning when you do not assume independence is that of multivariate Gaussian sampling (in time series analysis, for example). The dependence between two Gaussian variables can be modelled by their covariance term, which you incorporate in the likelihood.
To give a simplistic example, assume that you draw a sample of size $2$ from correlated Gaussian variables with same mean and variance. You would write the likelihood as
$$\frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}}\exp\left(-\frac{z}{2\sigma^2(1-\rho^2)}\right),$$
where $z$ is
$$z = (x_1-\mu)^2-2\rho(x_1-\mu)(x_2-\mu)+(x_2-\mu)^2.$$
This is not the product of the individual likelihoods. Still, you would maximize this with parameters $(\mu, \sigma, \rho)$ to get their MLE.
|
Does MLE require i.i.d. data? Or just independent parameters?
(+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be
|
14,577
|
Does MLE require i.i.d. data? Or just independent parameters?
|
Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal googling produces papers like this one where the likelihood is given in the general form.
Another, to an extent more intriguing, class of examples is given by multilevel random effect models. If you have data of the form
$$y_{ij} = x_{ij}'\beta + u_i + \epsilon_{ij},$$
where indices $j$ are nested in $i$ (think of students $j$ in classrooms $i$, say, for a classic application of multilevel models), then, assuming $\epsilon_{ij} \perp u_i$, the likelihood is
$$
\ln L \sim \sum_i \ln \int \prod_j f(y_{ij}|\beta,u_i) {\rm d}F(u_i)
$$
and is a sum over the likelihood contributions defined at the level of clusters, not individual observations. (Of course, in the Gaussian case, you can push the integrals around to produce an analytic ANOVA-like solution. However, if you have, say, a logit model for your response $y_{ij}$, then there is no way out of numerical integration.)
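The cluster-level integral above is exactly what the numerical integration handles. Here is a sketch in Python of one cluster's contribution for a random-intercept logit model, using Gauss-Hermite quadrature; the parameter values beta and tau are made up for illustration:

```python
import numpy as np

beta, tau = 0.5, 1.0   # fixed-effect slope and random-intercept sd (made up)

# Gauss-Hermite nodes/weights for integrals against the N(0,1) density
nodes, weights = np.polynomial.hermite_e.hermegauss(20)

def cluster_loglik(y, x):
    # ln of the integral over u of prod_j f(y_ij | beta, u),  u ~ N(0, tau^2)
    eta = x[:, None] * beta + tau * nodes[None, :]
    p = 1.0 / (1.0 + np.exp(-eta))
    lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(np.sum(weights * lik) / np.sqrt(2.0 * np.pi))

# one cluster with three binary observations and their covariate values
ll = cluster_loglik(np.array([1, 0, 1]), np.array([0.2, -1.0, 0.5]))
```

Summing such terms over clusters $i$ gives the marginal log-likelihood in the displayed formula; this is essentially what adaptive quadrature routines in mixed-model software do internally.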
|
Does MLE require i.i.d. data? Or just independent parameters?
|
Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal g
|
Does MLE require i.i.d. data? Or just independent parameters?
Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal googling produces papers like this one where the likelihood is given in the general form.
Another, to an extent more intriguing, class of examples is given by multilevel random effect models. If you have data of the form
$$y_{ij} = x_{ij}'\beta + u_i + \epsilon_{ij},$$
where indices $j$ are nested in $i$ (think of students $j$ in classrooms $i$, say, for a classic application of multilevel models), then, assuming $\epsilon_{ij} \perp u_i$, the likelihood is
$$
\ln L \sim \sum_i \ln \int \prod_j f(y_{ij}|\beta,u_i) {\rm d}F(u_i)
$$
and is a sum over the likelihood contributions defined at the level of clusters, not individual observations. (Of course, in the Gaussian case, you can push the integrals around to produce an analytic ANOVA-like solution. However, if you have, say, a logit model for your response $y_{ij}$, then there is no way out of numerical integration.)
|
Does MLE require i.i.d. data? Or just independent parameters?
Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal g
|
14,578
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators?
|
Basic OLS regression is a very good technique for fitting a function to a set of data. However, simple regression only fits a single straight line over the entire possible range of $X$. This may not be appropriate for a given situation. For instance, data sometimes show a curvilinear relationship. This can be dealt with by means of regressing $Y$ onto a transformation of $X$, $f(X)$. Different transformations are possible. In situations where the relationship between $X$ and $Y$ is monotonic, but continually tapers off, a log transform can be used. Another popular choice is to use a polynomial where new terms are formed by raising $X$ to a series of powers (e.g., $X^2$, $X^3$, etc.). This strategy is easy to implement, and you can interpret the fit as telling you how many 'bends' exist in your data (where the number of bends is equal to the highest power needed minus 1).
However, regressions based on the logarithm or an exponent of the covariate will fit optimally only when that is the exact nature of the true relationship. It is quite reasonable to imagine that there is a curvilinear relationship between $X$ and $Y$ that is different from the possibilities those transformations afford. Thus, we come to two other strategies. The first approach is loess, a series of weighted linear regressions computed over a moving window. This approach is older, and better suited to exploratory data analysis.
The other approach is to use splines. At its simplest, a spline is a new term that applies to only a portion of the range of $X$. For example, $X$ might range from 0 to 1, and the spline term might only range from .7 to 1. In this instance, .7 is the knot. A simple, linear spline term would be computed like this:
$$
X_{\rm spline} = \begin{cases} 0\quad &\text{if } X\le{.7} \\
X-.7\quad &\text{if } X>.7 \end{cases}
$$
and would be added to your model, in addition to the original $X$ term. The fitted model will show a sharp break at .7 with a straight line from 0 to .7, and the line continuing on with a different slope from .7 to 1. However, a spline term need not be linear. Specifically, it has been determined that cubic splines are especially useful (i.e., $X_{\rm spline}^3$). The sharp break needn't be there, either. Algorithms have been developed that constrain the fitted parameters such that the first and second derivatives match at the knots, which makes the knots impossible to detect in the output. The end result of all this is that just a few knots (usually 3-5) in well-chosen locations (which software can determine for you) can reproduce pretty much any curve. Moreover, the degrees of freedom are calculated correctly, so you can trust the results, which is not true when you look at your data first and then decide to fit a squared term because you saw a bend. In addition, all of this is just another (albeit more complicated) version of the basic linear model. Thus, everything that we get with linear models comes with this (e.g., predictions, residuals, confidence bands, tests, etc.). These are substantial advantages.
The simplest introduction to these topics that I know of is:
Fox, J. (2000). Nonparametric Simple Regression: Smoothing Scatterplots, Sage.
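The knot construction above translates directly into a design matrix. A sketch in Python (NumPy only, using the .7 knot from the example and a made-up "broken stick" target function):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
knot = 0.7

# linear spline term: 0 at or below the knot, (x - knot) above it
x_spline = np.where(x <= knot, 0.0, x - knot)

# design matrix for y ~ a + b*x + c*x_spline (a broken-stick model)
X = np.column_stack([np.ones_like(x), x, x_spline])

# fit to an exact broken-stick function and recover all three coefficients
y = 1.0 + 2.0 * x + 3.0 * x_spline
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the spline term is just another column in an ordinary linear model, everything mentioned above (predictions, residuals, confidence bands, tests) carries over unchanged.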
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emu
|
Basic OLS regression is a very good technique for fitting a function to a set of data. However, simple regression only fits a straight line that is constant for the entire possible range of $X$. Thi
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators?
Basic OLS regression is a very good technique for fitting a function to a set of data. However, simple regression only fits a single straight line over the entire possible range of $X$. This may not be appropriate for a given situation. For instance, data sometimes show a curvilinear relationship. This can be dealt with by means of regressing $Y$ onto a transformation of $X$, $f(X)$. Different transformations are possible. In situations where the relationship between $X$ and $Y$ is monotonic, but continually tapers off, a log transform can be used. Another popular choice is to use a polynomial where new terms are formed by raising $X$ to a series of powers (e.g., $X^2$, $X^3$, etc.). This strategy is easy to implement, and you can interpret the fit as telling you how many 'bends' exist in your data (where the number of bends is equal to the highest power needed minus 1).
However, regressions based on the logarithm or an exponent of the covariate will fit optimally only when that is the exact nature of the true relationship. It is quite reasonable to imagine that there is a curvilinear relationship between $X$ and $Y$ that is different from the possibilities those transformations afford. Thus, we come to two other strategies. The first approach is loess, a series of weighted linear regressions computed over a moving window. This approach is older, and better suited to exploratory data analysis.
The other approach is to use splines. At its simplest, a spline is a new term that applies to only a portion of the range of $X$. For example, $X$ might range from 0 to 1, and the spline term might only range from .7 to 1. In this instance, .7 is the knot. A simple, linear spline term would be computed like this:
$$
X_{\rm spline} = \begin{cases} 0\quad &\text{if } X\le{.7} \\
X-.7\quad &\text{if } X>.7 \end{cases}
$$
and would be added to your model, in addition to the original $X$ term. The fitted model will show a sharp break at .7 with a straight line from 0 to .7, and the line continuing on with a different slope from .7 to 1. However, a spline term need not be linear. Specifically, it has been determined that cubic splines are especially useful (i.e., $X_{\rm spline}^3$). The sharp break needn't be there, either. Algorithms have been developed that constrain the fitted parameters such that the first and second derivatives match at the knots, which makes the knots impossible to detect in the output. The end result of all this is that just a few knots (usually 3-5) in well-chosen locations (which software can determine for you) can reproduce pretty much any curve. Moreover, the degrees of freedom are calculated correctly, so you can trust the results, which is not true when you look at your data first and then decide to fit a squared term because you saw a bend. In addition, all of this is just another (albeit more complicated) version of the basic linear model. Thus, everything that we get with linear models comes with this (e.g., predictions, residuals, confidence bands, tests, etc.). These are substantial advantages.
The simplest introduction to these topics that I know of is:
Fox, J. (2000). Nonparametric Simple Regression: Smoothing Scatterplots, Sage.
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emu
Basic OLS regression is a very good technique for fitting a function to a set of data. However, simple regression only fits a straight line that is constant for the entire possible range of $X$. Thi
|
14,579
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators?
|
Cosma Shalizi's online notes on his lecture course Advanced Data Analysis from an Elementary Point of View are quite good on this subject, looking at things from a perspective where interpolation and regression are two approaches to the same problem. I'd particularly draw your attention to the chapters on smoothing methods and splines.
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emu
|
Cosma Shalizi's online notes on his lecture course Advanced Data Analysis from an Elementary Point of View are quite good on this subject, looking at things from a perspective where interpolation and
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators?
Cosma Shalizi's online notes on his lecture course Advanced Data Analysis from an Elementary Point of View are quite good on this subject, looking at things from a perspective where interpolation and regression are two approaches to the same problem. I'd particularly draw your attention to the chapters on smoothing methods and splines.
|
What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emu
Cosma Shalizi's online notes on his lecture course Advanced Data Analysis from an Elementary Point of View are quite good on this subject, looking at things from a perspective where interpolation and
|
14,580
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
|
No, this is impossible whenever you have three or more coins.
The case of two coins
Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of more coins.
Let $X$ and $Y$ denote the Bernoulli distributed variables corresponding to the two cases, $X \sim \mathrm{Ber}(p)$, $Y \sim \mathrm{Ber}(q)$. First, recall that the correlation of $X$ and $Y$ is
$$\mathrm{corr}(X, Y) = \frac{E[XY] - E[X]E[Y]}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}},$$
and since you know the marginals, you know $E[X]$, $E[Y]$, $\mathrm{Var}(X)$, and $\mathrm{Var}(Y)$, so by knowing the correlation, you also know $E[XY]$. Now, $XY = 1$ if and only if both $X = 1$ and $Y = 1$, so
$$E[XY] = P(X = 1, Y = 1).$$
By knowing the marginals, you know $p = P(X = 1, Y = 0) + P(X = 1, Y = 1)$, and $q = P(X = 0, Y = 1) + P(X = 1, Y = 1)$. Since we just found that you know $P(X = 1, Y = 1)$, this means that you also know $P(X = 1, Y = 0)$ and $P(X = 0, Y = 0)$, but now you're done, as the probability you are looking for is
$$P(X = 1, Y = 0) + P(X = 0, Y = 1) + P(X = 1, Y = 1).$$
Now, I personally find all of this easier to see with a picture. Let $P_{ij} = P(X = i, Y = j)$. Then we may picture the various probabilities as forming a square:
Here, we saw that knowing the correlations meant that you could deduce $P_{11}$, marked red, and that knowing the marginals, you knew the sum for each edge (one of which is indicated with a blue rectangle).
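The two-coin argument can be turned into a short computation. Here is a sketch in Python (with arbitrary example values for $p$, $q$, and the correlation) that recovers $P(\text{at least one head})$ from the marginals and the correlation alone:

```python
import numpy as np

p, q, rho = 0.1, 0.1, 0.5   # hypothetical marginals and correlation

# corr(X, Y) = (E[XY] - p*q) / sqrt(p(1-p)q(1-q))  =>  solve for P(X=1, Y=1)
p11 = rho * np.sqrt(p * (1 - p) * q * (1 - q)) + p * q
p_at_least_one = p + q - p11          # inclusion-exclusion

# sanity check: rebuild the full joint table from the marginals and p11
p10, p01 = p - p11, q - p11
p00 = 1.0 - p10 - p01 - p11
```

As the derivation shows, the answer is fully determined in the two-coin case; the rest of the argument is about why this determination fails for three or more coins.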
The case of three coins
This will not go as easily for three coins; intuitively it is not hard to see why: knowing the marginals and the correlations gives you a total of $6 = 3 + 3$ parameters, while the joint distribution has $2^3 = 8$ outcomes, of which $7$ are free (knowing the probabilities of $7$ of them determines the last one). Since $7 > 6$, it seems reasonable that one could cook up two different joint distributions whose marginals and correlations are the same, and permute the probabilities until the ones you are looking for differ.
Let $X$, $Y$, and $Z$ be the three variables, and let
$$P_{ijk} = P(X = i, Y = j, Z = k).$$
In this case, the picture from above becomes the following:
The dimensions have been bumped by one: The red vertex has become several coloured edges, and the edge covered by a blue rectangle have become an entire face. Here, the blue plane indicates that by knowing the marginal, you know the sum of the probabilities within; for the one in the picture,
$$P(X = 0) = P_{000} + P_{010} + P_{001} + P_{011},$$
and similarly for all other faces in the cube. The coloured edges indicate that by knowing the correlations, you know the sum of the two probabilities connected by the edge. For example, by knowing $\mathrm{corr}(X, Y)$, you know $E[XY]$ (exactly as above), and
$$E[XY] = P(X = 1, Y = 1) = P_{110} + P_{111}.$$
So, this puts some limitations on possible joint distributions, but now we've reduced the exercise to the combinatorial exercise of putting numbers on the vertices of a cube. Without further ado, let us provide two joint distributions whose marginals and correlations are the same:
Here, divide all numbers by $100$ to obtain a probability distribution. To see that these work and have the same marginals/correlations, simply note that the sum of probabilities on each face is $1/2$ (meaning that the variables are $\mathrm{Ber}(1/2)$), and that the sums for the vertices on the coloured edges agree in both cases (in this particular case, all correlations are in fact the same, but that doesn't have to be the case in general).
Finally, the probabilities of getting at least one head, $1 - P_{000}$ and $1 - P_{000}'$, are different in the two cases, which is what we wanted to prove.
For me, coming up with these examples came down to putting numbers on the cube to produce one example, and then simply modifying $P_{111}$ and letting the changes propagate.
Edit: This is the point where I realized that you were actually working with fixed marginals, and that you know that each variable was $\mathrm{Ber}(1/10)$, but if the picture above makes sense, it is possible to tweak it until you have the desired marginals.
Four or more coins
Finally, when we have more than three coins it should not be surprising that we can cook up examples that fail, as we now have an even bigger discrepancy between the number of parameters required to describe the joint distribution and those provided to us by marginals and correlations.
Concretely, for any number of coins greater than three, you could simply consider the examples whose first three coins behave as in the two examples above and for which the outcomes of the remaining coins are independent of all the other coins.
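The three-coin construction above can also be checked numerically. Since the actual numbers live in the cube pictures (not reproduced here), the following Python sketch uses an independently constructed pair of distributions: the uniform distribution on $\{0,1\}^3$, and a perturbation of it that adds $\pm 1/8$ to each cell according to the parity of the outcome, which preserves all marginals and pairwise moments.

```python
from itertools import product

# Two joint distributions on {0,1}^3: the uniform one, and a perturbed one
# obtained by adding t * (-1)**(number_of_ones + 1) to each cell (t = 1/8).
t = 1 / 8
P = {w: 1 / 8 for w in product((0, 1), repeat=3)}
Q = {w: 1 / 8 + t * (-1) ** (sum(w) + 1) for w in P}

def marginal(d, i):
    # P(X_i = 1)
    return sum(p for w, p in d.items() if w[i] == 1)

def pair_moment(d, i, j):
    # E[X_i X_j] = P(X_i = 1, X_j = 1), which fixes corr(X_i, X_j)
    return sum(p for w, p in d.items() if w[i] == 1 and w[j] == 1)

# Same marginals and same pairwise correlations:
for i in range(3):
    assert abs(marginal(P, i) - marginal(Q, i)) < 1e-12
    for j in range(i + 1, 3):
        assert abs(pair_moment(P, i, j) - pair_moment(Q, i, j)) < 1e-12

# ...yet the probability of at least one head differs:
print(1 - P[(0, 0, 0)], 1 - Q[(0, 0, 0)])  # 0.875 vs 1.0
```

This pair is not the one drawn on the cubes in the answer, but it proves the same point: marginals plus correlations do not pin down $P(\text{at least one head})$.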
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
|
No, this is impossible whenever you have three or more coins.
The case of two coins
Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of mo
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
No, this is impossible whenever you have three or more coins.
The case of two coins
Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of more coins.
Let $X$ and $Y$ denote the Bernoulli distributed variables corresponding to the two cases, $X \sim \mathrm{Ber}(p)$, $Y \sim \mathrm{Ber}(q)$. First, recall that the correlation of $X$ and $Y$ is
$$\mathrm{corr}(X, Y) = \frac{E[XY] - E[X]E[Y]}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}},$$
and since you know the marginals, you know $E[X]$, $E[Y]$, $\mathrm{Var}(X)$, and $\mathrm{Var}(Y)$, so by knowing the correlation, you also know $E[XY]$. Now, $XY = 1$ if and only if both $X = 1$ and $Y = 1$, so
$$E[XY] = P(X = 1, Y = 1).$$
By knowing the marginals, you know $p = P(X = 1, Y = 0) + P(X = 1, Y = 1)$, and $q = P(X = 0, Y = 1) + P(X = 1, Y = 1)$. Since we just found that you know $P(X = 1, Y = 1)$, this means that you also know $P(X = 1, Y = 0)$ and $P(X = 0, Y = 0)$, but now you're done, as the probability you are looking for is
$$P(X = 1, Y = 0) + P(X = 0, Y = 1) + P(X = 1, Y = 1).$$
Now, I personally find all of this easier to see with a picture. Let $P_{ij} = P(X = i, Y = j)$. Then we may picture the various probabilities as forming a square:
Here, we saw that knowing the correlations meant that you could deduce $P_{11}$, marked red, and that knowing the marginals, you knew the sum for each edge (one of which is indicated with a blue rectangle).
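The two-coin recovery just described can be written as a short Python sketch (the inputs below, $p = q = 1/10$ with zero correlation, are just illustrative values):

```python
from math import sqrt

def at_least_one_head(p, q, corr):
    """Recover the joint law of two Bernoulli coins from their marginals
    and correlation, then return P(X = 1 or Y = 1)."""
    p11 = corr * sqrt(p * (1 - p) * q * (1 - q)) + p * q  # E[XY] = P(X=1, Y=1)
    p10 = p - p11          # since P(X = 1) = P10 + P11
    p01 = q - p11          # since P(Y = 1) = P01 + P11
    return p10 + p01 + p11  # equivalently 1 - P00

print(at_least_one_head(0.1, 0.1, 0.0))  # independent coins: 1 - 0.9**2 = 0.19
```

With nonzero correlation the same recovery works, which is exactly why the two-coin case poses no problem.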
The case of three coins
This will not go as easily for three coins; intuitively it is not hard to see why: By knowing the marginals and the correlations, you know a total of $6 = 3 + 3$ parameters, while the joint distribution has $2^3 = 8$ outcomes (by knowing the probabilities for $7$ of those, you can figure out the last one); now, $7 > 6$, so it seems reasonable that one could cook up two different joint distributions whose marginals and correlations are the same, and that one could perturb the probabilities until the ones you are looking for differ.
Let $X$, $Y$, and $Z$ be the three variables, and let
$$P_{ijk} = P(X = i, Y = j, Z = k).$$
In this case, the picture from above becomes the following:
The dimensions have been bumped by one: The red vertex has become several coloured edges, and the edge covered by a blue rectangle has become an entire face. Here, the blue plane indicates that by knowing the marginal, you know the sum of the probabilities within; for the one in the picture,
$$P(X = 0) = P_{000} + P_{010} + P_{001} + P_{011},$$
and similarly for all other faces in the cube. The coloured edges indicate that by knowing the correlations, you know the sum of the two probabilities connected by the edge. For example, by knowing $\mathrm{corr}(X, Y)$, you know $E[XY]$ (exactly as above), and
$$E[XY] = P(X = 1, Y = 1) = P_{110} + P_{111}.$$
So, this puts some limitations on possible joint distributions, but now we've reduced the exercise to the combinatorial exercise of putting numbers on the vertices of a cube. Without further ado, let us provide two joint distributions whose marginals and correlations are the same:
Here, divide all numbers by $100$ to obtain a probability distribution. To see that these work and have the same marginals/correlations, simply note that the sum of probabilities on each face is $1/2$ (meaning that the variables are $\mathrm{Ber}(1/2)$), and that the sums for the vertices on the coloured edges agree in both cases (in this particular case, all correlations are in fact the same, but that doesn't have to be the case in general).
Finally, the probabilities of getting at least one head, $1 - P_{000}$ and $1 - P_{000}'$, are different in the two cases, which is what we wanted to prove.
For me, coming up with these examples came down to putting numbers on the cube to produce one example, and then simply modifying $P_{111}$ and letting the changes propagate.
Edit: This is the point where I realized that you were actually working with fixed marginals, and that you know that each variable was $\mathrm{Ber}(1/10)$, but if the picture above makes sense, it is possible to tweak it until you have the desired marginals.
Four or more coins
Finally, when we have more than three coins it should not be surprising that we can cook up examples that fail, as we now have an even bigger discrepancy between the number of parameters required to describe the joint distribution and those provided to us by marginals and correlations.
Concretely, for any number of coins greater than three, you could simply consider the examples whose first three coins behave as in the two examples above and for which the outcomes of the remaining coins are independent of all the other coins.
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
No, this is impossible whenever you have three or more coins.
The case of two coins
Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of mo
|
14,581
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
|
The beta-binomial distribution is one solution for the count outcome of exchangeable correlated Bernoulli values (see, e.g., Hisakado et al. 2006). It should be possible to parameterise this distribution to give a specified correlation value, and then calculate the probability you want.
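One way to carry out that parameterisation (a sketch, not taken from the cited paper): for a $\mathrm{Beta}(a, b)$ mixing distribution, the marginal success probability is $a/(a+b)$ and the pairwise (intraclass) correlation is $1/(a+b+1)$, so $a$ and $b$ can be solved from $p$ and $\rho$; then $P(K = 0) = B(a, b+n)/B(a, b)$ for the beta-binomial count $K$. The inputs $p = 0.1$, $\rho = 0.2$, $n = 5$ below are illustrative.

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def p_at_least_one(p, rho, n):
    """P(at least one success in n exchangeable Bernoulli(p) trials with
    pairwise correlation rho, under a beta-binomial model."""
    s = (1 - rho) / rho          # a + b, since rho = 1/(a + b + 1)
    a, b = p * s, (1 - p) * s
    p_zero = exp(log_beta(a, b + n) - log_beta(a, b))  # P(K = 0)
    return 1 - p_zero

print(p_at_least_one(0.1, 0.2, 5))
```

As a sanity check, letting rho tend to 0 recovers the independent answer $1 - (1-p)^n$, and positive correlation pulls the probability below that value.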
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
|
The beta-binomial distribution is one solution for the count outcome of exchangeable correlated Bernoulli values (see e.g, Hisakado et al 2006). It should be possible to parameterise this distributio
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
The beta-binomial distribution is one solution for the count outcome of exchangeable correlated Bernoulli values (see, e.g., Hisakado et al. 2006). It should be possible to parameterise this distribution to give a specified correlation value, and then calculate the probability you want.
|
Correlated Bernoulli trials, multivariate Bernoulli distribution?
The beta-binomial distribution is one solution for the count outcome of exchangeable correlated Bernoulli values (see e.g, Hisakado et al 2006). It should be possible to parameterise this distributio
|
14,582
|
Fitting a binomial GLMM (glmer) to a response variable that is a proportion or fraction
|
The binomial GLMM is probably the right answer.
Especially with a small to moderate number of samples (9 and 10 in your example), the distribution of the response variable will probably be heteroscedastic (the variance will not be constant, and in particular will depend on the mean in systematic ways) and far from Normality, in a way that will be hard to transform away - especially if the proportions are close to 0 or 1 for some values of the predictor variable. That makes the GLMM a good idea.
You should be careful to check for/account for overdispersion. If you have a single observation (i.e. a single binomial sample/row in your data frame) per location, then your (1|Location) random effect will automatically handle this (although see Harrison 2015 for a cautionary note)
if the previous assumption is right (you only have a single binomial sample per location), then you can also fit this as a regular binomial model (glm(..., family=binomial)) -- in that case you can also use a quasibinomial model (family=quasibinomial) as a simpler, alternative way to account for overdispersion
if you like you can also fit your GLMM with the proportion as the response, if you set the weights argument to equal the number of samples:
glmer(insectCount/NumberOfInsectSamples~ProportionalPlantGroupPresence+
(1|Location),
weights=NumberofInsectSamples,
data=Data,family="binomial")
(this should give identical results to the glmer() fit you have in your question).
Harrison, Xavier A. “A Comparison of Observation-Level Random Effect and Beta-Binomial Models for Modelling Overdispersion in Binomial Data in Ecology and Evolution.” PeerJ 3 (July 21, 2015): e1114. doi:10.7717/peerj.1114.
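The heteroscedasticity point can be made concrete without any modelling: for an observed proportion $\hat{p} = k/n$, the variance is $p(1-p)/n$, so the spread of the response depends directly on its mean. A quick Python sketch (n = 10 samples per site, as in the question; the proportions are hypothetical):

```python
# Variance of an observed proportion p_hat = k/n is p*(1 - p)/n:
# it shrinks toward zero as the true proportion approaches 0 or 1,
# so raw proportions cannot have constant variance across sites.
n = 10
for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(p, p * (1 - p) / n)
```

The variance at p = 0.5 is more than five times the variance at p = 0.05, which is why a Gaussian model on the raw (or transformed) proportions struggles and the binomial GLMM is preferred.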
|
Fitting a binomial GLMM (glmer) to a response variable that is a proportion or fraction
|
The binomial GLMM is probably the right answer.
Especially with a small to moderate number of samples (9 and 10 in your example), the distribution of the response variable will probably be heterosc
|
Fitting a binomial GLMM (glmer) to a response variable that is a proportion or fraction
The binomial GLMM is probably the right answer.
Especially with a small to moderate number of samples (9 and 10 in your example), the distribution of the response variable will probably be heteroscedastic (the variance will not be constant, and in particular will depend on the mean in systematic ways) and far from Normality, in a way that will be hard to transform away - especially if the proportions are close to 0 or 1 for some values of the predictor variable. That makes the GLMM a good idea.
You should be careful to check for/account for overdispersion. If you have a single observation (i.e. a single binomial sample/row in your data frame) per location, then your (1|Location) random effect will automatically handle this (although see Harrison 2015 for a cautionary note)
if the previous assumption is right (you only have a single binomial sample per location), then you can also fit this as a regular binomial model (glm(..., family=binomial)) -- in that case you can also use a quasibinomial model (family=quasibinomial) as a simpler, alternative way to account for overdispersion
if you like you can also fit your GLMM with the proportion as the response, if you set the weights argument to equal the number of samples:
glmer(insectCount/NumberOfInsectSamples~ProportionalPlantGroupPresence+
(1|Location),
weights=NumberofInsectSamples,
data=Data,family="binomial")
(this should give identical results to the glmer() fit you have in your question).
Harrison, Xavier A. “A Comparison of Observation-Level Random Effect and Beta-Binomial Models for Modelling Overdispersion in Binomial Data in Ecology and Evolution.” PeerJ 3 (July 21, 2015): e1114. doi:10.7717/peerj.1114.
|
Fitting a binomial GLMM (glmer) to a response variable that is a proportion or fraction
The binomial GLMM is probably the right answer.
Especially with a small to moderate number of samples (9 and 10 in your example), the distribution of the response variable will probably be heterosc
|
14,583
|
What does log-uniformly distribution mean?
|
I believe it means that the log is uniformly distributed, and the variable takes values in the range $[128, 4000]$.
From a footnote of the paper:
We will use the phrase drawn geometrically from A to B for 0 < A < B to mean drawing uniformly in the log domain between log(A) and log(B), exponentiating to get a number between A and B, and then rounding to the nearest integer. The phrase drawn exponentially means the same thing but without rounding.
Like this:
x <- exp(runif(100000, log(128), log(4000)))
hist(x, breaks=100, xlim=c(128, 4000))
hist(log(x), breaks=100, xlim=c(log(128), log(4000)))
|
What does log-uniformly distribution mean?
|
I believe it means that the log is uniformly distributed, and the variable takes values in the range $[128, 4000]$.
From a footnote of the paper:
We will use the phrase drawn geometrically from A to
|
What does log-uniformly distribution mean?
I believe it means that the log is uniformly distributed, and the variable takes values in the range $[128, 4000]$.
From a footnote of the paper:
We will use the phrase drawn geometrically from A to B for 0 < A < B to mean drawing uniformly in the log domain between log(A) and log(B), exponentiating to get a number between A and B, and then rounding to the nearest integer. The phrase drawn exponentially means the same thing but without rounding.
Like this:
x <- exp(runif(100000, log(128), log(4000)))
hist(x, breaks=100, xlim=c(128, 4000))
hist(log(x), breaks=100, xlim=c(log(128), log(4000)))
|
What does log-uniformly distribution mean?
I believe it means that the log is uniformly distributed, and the variable takes values in the range $[128, 4000]$.
From a footnote of the paper:
We will use the phrase drawn geometrically from A to
|
14,584
|
Is "test statistic" a value or a random variable?
|
The short answer is "yes".
The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific computed or observed value of that random variable.
T is a random variable because it represents the results of calculating from a sample chosen randomly. Once you take the sample (and the randomness is over) then you can calculate t, the specific value, and make conclusions based on how t compares to the distribution of T.
So the test statistic is a random variable when we think about all the values it could take on based on all the different samples we could collect. But once we collect a single sample, we calculate a specific value of the test statistic.
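The distinction can be illustrated by simulation (a sketch with hypothetical numbers): approximate the distribution of the random variable T by computing the t-statistic over many samples, then compare one realized value t against it.

```python
import random
import statistics

random.seed(1)
mu0, n = 0.0, 30

def t_stat(sample, mu0):
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    return (xbar - mu0) / (s / len(sample) ** 0.5)

# The random variable T: the t-statistic across many hypothetical samples.
T = [t_stat([random.gauss(mu0, 1) for _ in range(n)], mu0) for _ in range(2000)]

# One observed sample gives one realized value t:
t = t_stat([random.gauss(mu0, 1) for _ in range(n)], mu0)

# Compare t against the simulated distribution of T (a two-sided tail fraction):
p = sum(abs(v) >= abs(t) for v in T) / len(T)
print(t, p)
```

Here T is the collection of values the statistic could have taken, and t is the single number we actually computed, which is exactly the upper/lower-case convention in the answer.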
|
Is "test statistic" a value or a random variable?
|
The short answer is "yes".
The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific value computed
|
Is "test statistic" a value or a random variable?
The short answer is "yes".
The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific computed or observed value of that random variable.
T is a random variable because it represents the results of calculating from a sample chosen randomly. Once you take the sample (and the randomness is over) then you can calculate t, the specific value, and make conclusions based on how t compares to the distribution of T.
So the test statistic is a random variable when we think about all the values it could take on based on all the different samples we could collect. But once we collect a single sample, we calculate a specific value of the test statistic.
|
Is "test statistic" a value or a random variable?
The short answer is "yes".
The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific value computed
|
14,585
|
Is "test statistic" a value or a random variable?
|
A test statistic is a statistic used in making a decision about the null hypothesis.
A statistic is a realized value (e.g. t): A statistic is a numerical value that states something about a sample. As statistics are used to estimate the value of a population parameter, they are themselves values. Because (long enough) samples are different all the time, the statistics (the numerical statements about the samples) will differ. A probability distribution of a statistic obtained through a large number of samples drawn from a specific population is called its sampling distribution --- a distribution of that statistic, considered as a random variable.
A statistic is a random variable (e.g. T): A statistic is any function of the data (unchanged from sample to sample). The data are described by random variables (of some suitable dimension). As any function of a random variable is itself a random variable, a statistic is a random variable.
It is almost always clear from context which meaning is intended, especially when the upper/lower-case convention is observed.
|
Is "test statistic" a value or a random variable?
|
A test statistic is a statistic used in making a decision about the null hypothesis.
A statistic is a realized value (e.g. t): A statistic is a numerical value that states something about a sample. A
|
Is "test statistic" a value or a random variable?
A test statistic is a statistic used in making a decision about the null hypothesis.
A statistic is a realized value (e.g. t): A statistic is a numerical value that states something about a sample. As statistics are used to estimate the value of a population parameter, they are themselves values. Because (long enough) samples are different all the time, the statistics (the numerical statements about the samples) will differ. A probability distribution of a statistic obtained through a large number of samples drawn from a specific population is called its sampling distribution --- a distribution of that statistic, considered as a random variable.
A statistic is a random variable (e.g. T): A statistic is any function of the data (unchanged from sample to sample). The data are described by random variables (of some suitable dimension). As any function of a random variable is itself a random variable, a statistic is a random variable.
It is almost always clear from context which meaning is intended, especially when the upper/lower-case convention is observed.
|
Is "test statistic" a value or a random variable?
A test statistic is a statistic used in making a decision about the null hypothesis.
A statistic is a realized value (e.g. t): A statistic is a numerical value that states something about a sample. A
|
14,586
|
Is "test statistic" a value or a random variable?
|
A test statistic is an observation specific to your observed data that follows a probability distribution under a given assumption. This assumption is usually called the $H_0$.
For instance, in your sample the test statistic (called t-statistic) depends on the observed data ($\bar{x}$ and $s$ are both derived from the data).
Under the assumption that your mean is $\mu_0$, the statistic you computed will follow a certain distribution. The probability of this value of the statistic occurring is then determined under the assumption. If that value is deemed to be low, the assumption ($H_0$) is rejected.
If we reject the $H_0$ assumption, this does not mean that the assumption we made was guaranteed to be untrue. If it was true and we rejected it because of the low probability of the test statistic under $H_0$, we call it a type I error.
On the other hand, if we accept the assumption this does not mean that our assumption for sure was true. If the assumption was untrue and we accepted it because it had high enough probability under our wrong assumption, this is called a type II error.
The statistic is a specific value and it is only if we accept certain assumptions as given that we can assume it follows a specific probability distribution.
This principle holds for all test statistics, not just for the t-statistic you mention here.
|
Is "test statistic" a value or a random variable?
|
A test statistic is an observation specific to your observed data that follows a probability distribution under a given assumption. This assumption is usually called the $H_0$.
For instance, in your
|
Is "test statistic" a value or a random variable?
A test statistic is an observation specific to your observed data that follows a probability distribution under a given assumption. This assumption is usually called the $H_0$.
For instance, in your sample the test statistic (called t-statistic) depends on the observed data ($\bar{x}$ and $s$ are both derived from the data).
Under the assumption that your mean is $\mu_0$, the statistic you computed will follow a certain distribution. The probability of this value of the statistic occurring is then determined under the assumption. If that value is deemed to be low, the assumption ($H_0$) is rejected.
If we reject the $H_0$ assumption, this does not mean that the assumption we made was guaranteed to be untrue. If it was true and we rejected it because of the low probability of the test statistic under $H_0$, we call it a type I error.
On the other hand, if we accept the assumption this does not mean that our assumption for sure was true. If the assumption was untrue and we accepted it because it had high enough probability under our wrong assumption, this is called a type II error.
The statistic is a specific value and it is only if we accept certain assumptions as given that we can assume it follows a specific probability distribution.
This principle holds for all test statistics, not just for the t-statistic you mention here.
|
Is "test statistic" a value or a random variable?
A test statistic is an observation specific to your observed data that follows a probability distribution under a given assumption. This assumption is usually called the $H_0$.
For instance, in your
|
14,587
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
|
As a matter of principle, there is not necessarily any tension between hypothesis testing and machine learning. As an example, if you train 2 models, it's perfectly reasonable to ask whether the models have the same or different accuracy (or another statistic of interest), and perform a hypothesis test.
But as a matter of practice, researchers do not always do this. I can only speculate about the reasons, but I imagine that there are several, non-exclusive reasons:
The scale of data collection is so large that the variance of the statistic is very small. Two models with near-identical scores would be detected as "statistically different," even though the magnitude of that difference is unimportant for its practical operation. In a slightly different scenario, knowing with statistical certainty that Model A is 0.001% more accurate than Model B is simply trivia if the cost to deploy Model A is larger than the marginal return implied by the improved accuracy.
The models are expensive to train. Depending on what quantity is to be statistically tested and how, this might require retraining a model, so this test could be prohibitive. For instance, cross-validation involves retraining the same model, typically 3 to 10 times. Doing this for a model that costs millions of dollars to train once may make cross-validation infeasible.
The more relevant questions about the generalization of machine learning models are not really about the results of repeating the modeling process in the controlled settings of a laboratory, where data collection and model interpretation are carried out by experts. Many of the more concerning failures of ML arise from deployment of machine learning models in uncontrolled environments, where the data might be collected in a different manner, the model is applied outside of its intended scope, or users are able to craft malicious inputs to obtain specific results.
The researchers simply don't know how to do statistical hypothesis testing for their models or statistics of interest.
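One standard way to run such a test (not named in the answer, so take it as one option among several) is McNemar's exact test, which compares two classifiers evaluated on the same test set using only the examples they disagree on. The counts below are hypothetical.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test. b = cases model A got right and model B
    got wrong, c = the reverse. Under H0 (equal accuracy), b out of the
    b + c disagreements is Binomial(b + c, 1/2)."""
    n, k = b + c, max(b, c)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical: of 20 disagreements, model A wins 15 and model B wins 5.
print(mcnemar_exact(15, 5))  # ~0.041
```

Because the test conditions on the discordant pairs, it needs no model retraining, which sidesteps the "expensive to train" objection for this particular comparison.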
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, b
|
As a matter of principle, there is not necessarily any tension between hypothesis testing and machine learning. As an example, if you train 2 models, it's perfectly reasonable to ask whether the model
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
As a matter of principle, there is not necessarily any tension between hypothesis testing and machine learning. As an example, if you train 2 models, it's perfectly reasonable to ask whether the models have the same or different accuracy (or another statistic of interest), and perform a hypothesis test.
But as a matter of practice, researchers do not always do this. I can only speculate about the reasons, but I imagine that there are several, non-exclusive reasons:
The scale of data collection is so large that the variance of the statistic is very small. Two models with near-identical scores would be detected as "statistically different," even though the magnitude of that difference is unimportant for its practical operation. In a slightly different scenario, knowing with statistical certainty that Model A is 0.001% more accurate than Model B is simply trivia if the cost to deploy Model A is larger than the marginal return implied by the improved accuracy.
The models are expensive to train. Depending on what quantity is to be statistically tested and how, this might require retraining a model, so this test could be prohibitive. For instance, cross-validation involves retraining the same model, typically 3 to 10 times. Doing this for a model that costs millions of dollars to train once may make cross-validation infeasible.
The more relevant questions about the generalization of machine learning models are not really about the results of repeating the modeling process in the controlled settings of a laboratory, where data collection and model interpretation are carried out by experts. Many of the more concerning failures of ML arise from deployment of machine learning models in uncontrolled environments, where the data might be collected in a different manner, the model is applied outside of its intended scope, or users are able to craft malicious inputs to obtain specific results.
The researchers simply don't know how to do statistical hypothesis testing for their models or statistics of interest.
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, b
As a matter of principle, there is not necessarily any tension between hypothesis testing and machine learning. As an example, if you train 2 models, it's perfectly reasonable to ask whether the model
|
14,588
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
|
This is generally because the use-case, at least historically, for hypothesis testing in statistics is often about simply making a generalization. The use-case in machine learning is to build a useful model, usually under the assumption of the corresponding generalization.
Take, for example, Fisher's Iris Flower Dataset. One question someone might ask is "Do setosa, virginica and versicolor have, on average, different sepal lengths?" We can tackle this question with the scientific method:
Hypothesize that they have the same sepal length (because this is falsifiable).
Attempt to gather evidence to the contrary.
Is evidence strong? If so, discard same-sepal-length hypothesis.
The $p$-value in hypothesis testing tries to help answer the "Is evidence strong?" question in this procedure.
In an ML setting, it is generally assumed that these species differ by, for example, sepal length. A question to ask might be "Can we predict {setosa, virginica, versicolor} from the sepal length?" A model is built and its ability to predict the species from the sepal length is measured in precision, recall, accuracy, and so on. Note that since there are many possible models, the precision, recall and accuracy may or may not give you information about whether or not there is a relationship there in the first place. So, for example, we build a decision tree to distinguish these species based on sepal length, and it reports a 33% accuracy (i.e. no better than guessing) -- Does this mean that there is not a relationship, or just that you chose the wrong model?
Of course, in a sense, the hypothesis testing procedure also involves building and evaluating a model. However, the model usually isn't even used explicitly: it merely informs the particular equations that we use to get at "Do the species differ by sepal length?"
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, b
|
This is generally because the use-case, at least historically, for hypothesis testing in statistics is often about simply making a generalization. The use-case in machine learning is to build a usefu
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
This is generally because the use-case, at least historically, for hypothesis testing in statistics is often about simply making a generalization. The use-case in machine learning is to build a useful model, usually under the assumption of the corresponding generalization.
Take, for example, Fisher's Iris Flower Dataset. One question someone might ask is "Do setosa, virginica and versicolor have, on average, different sepal lengths?" We can tackle this question with the scientific method:
Hypothesize that they have the same sepal length (because this is falsifiable).
Attempt to gather evidence to the contrary.
Is evidence strong? If so, discard same-sepal-length hypothesis.
The $p$-value in hypothesis testing tries to help answer the "Is evidence strong?" question in this procedure.
In an ML setting, it is generally assumed that these species differ by, for example, sepal length. A question to ask might be "Can we predict {setosa, virginica, versicolor} from the sepal length?" A model is built and its ability to predict the species from the sepal length is measured in precision, recall, accuracy, and so on. Note that since there are many possible models, the precision, recall and accuracy may or may not give you information about whether or not there is a relationship there in the first place. So, for example, we build a decision tree to distinguish these species based on sepal length, and it reports a 33% accuracy (i.e. no better than guessing) -- Does this mean that there is not a relationship, or just that you chose the wrong model?
Of course, in a sense, the hypothesis testing procedure also involves building and evaluating a model. However, the model usually isn't even used explicitly: it merely informs the particular equations that we use to get at "Do the species differ by sepal length?"
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, b
This is generally because the use-case, at least historically, for hypothesis testing in statistics is often about simply making a generalization. The use-case in machine learning is to build a usefu
|
14,589
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
|
Anything can be seen as a "metric", and both groups, statisticians and machine-learners, use plenty of those: accuracy, mean value, estimated parameter of a model, etc. Hypothesis testing is done on top of these "metrics" in order to measure their uncertainty.
For example, if you have 5 male and 5 female students you can measure their heights and get a "metric" for the average height difference between males and females. But the number you get will not reflect the real average difference between all males and all females. Hypothesis testing tries to check if a hypothesis about a population is consistent with the observed "metric".
The same holds for accuracy measurements in machine learning. You build a model and, using a test set of, say, 100 samples, you get an accuracy of 88%. But this is just a measure of accuracy on 100 samples, and not the true accuracy. If you used another 100 samples you would get a slightly different number. So given this accuracy on a set of 100 samples - what can we say about the true accuracy of this classifier? This is where hypothesis testing comes in. And it allows us to answer questions like "how surprising would it be to get an accuracy of 88% on my 100 samples, if the true accuracy of a model is 75%".
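That closing question has a direct computation behind it (a sketch; just the exact binomial tail probability, not a full testing procedure):

```python
from math import comb

def binom_tail(k, n, p):
    """P(K >= k) for K ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# P(88 or more correct out of 100 | true accuracy 0.75):
print(binom_tail(88, 100, 0.75))  # well under 1% -- 88% would be very surprising
```

So observing 88% on 100 samples is strong evidence against a true accuracy of 75%, which is precisely the kind of statement hypothesis testing adds on top of the raw metric.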
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, b
|
Anything can be seen as a "metric", and both groups, statisticians and machine-learners, use plenty of those: accuracy, mean value, estimated parameter of a model, etc. Hypothesis testing is done on t
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
Anything can be seen as a "metric", and both groups, statisticians and machine-learners, use plenty of those: accuracy, mean value, estimated parameter of a model, etc. Hypothesis testing is done on top of these "metrics" in order to measure their uncertainty.
For example, if you have 5 male and 5 female students you can measure their heights and get a "metric" for the average height difference between males and females. But the number you get will not reflect the real average difference between all males and all females. Hypothesis testing tries to check if a hypothesis about a population is consistent with the observed "metric".
Same holds for accuracy measurements in machine learning. You build a model and, using a test set of, say, 100 samples, you get an accuracy of 88%. But this is just a measure of accuracy on 100 samples, and not the true accuracy. If you used another 100 samples you would get a slightly different number. So given this accuracy on a set of 100 samples - what can we say about the true accuracy of this classifier? This is where hypothesis testing comes in. And it allows us to answer questions like "how surprising would it be to get an accuracy of 88% on my 100 samples, if the true accuracy of a model is 75%".
|
Why does machine learning have metrics such as accuracy, precision or recall to prove best models, b
Anything can be seen as a "metric", and both groups, statisticians and machine-learners, use plenty of those: accuracy, mean value, estimated parameter of a model, etc. Hypothesis testing is done on t
|
14,590
|
What are some good datasets to learn basic machine learning algorithms and why?
|
The data sets on the following sites are available for free. They have been used to teach ML algorithms to students because most of them come with descriptions, and it is also noted which kinds of algorithms are applicable.
UCI- Machine Learning repository
ML Comp
Mammo Image
Mulan
|
What are some good datasets to learn basic machine learning algorithms and why?
|
The data sets in the following sites are available for free. These data sets have been used to teach ML algorithms to students because for most there are descriptions with the data sets. Also, it's be
|
What are some good datasets to learn basic machine learning algorithms and why?
The data sets on the following sites are available for free. They have been used to teach ML algorithms to students because most of them come with descriptions, and it is also noted which kinds of algorithms are applicable.
UCI- Machine Learning repository
ML Comp
Mammo Image
Mulan
|
What are some good datasets to learn basic machine learning algorithms and why?
The data sets in the following sites are available for free. These data sets have been used to teach ML algorithms to students because for most there are descriptions with the data sets. Also, it's be
|
14,591
|
What are some good datasets to learn basic machine learning algorithms and why?
|
Kaggle has a whole host of datasets you can use to practice with.
(I'm surprised it wasn't mentioned so far!)
It's got two things (among many others) that make it an invaluable resource:
Lots of clean datasets. While noise-free datasets aren't really representative of real-world datasets, they're especially suited for your purpose - deploying ML algorithms.
You can also view others' ML models for the same dataset, which could be a fun way to pick up some hacks along the way. It goes without saying that the kind of exposure you get from learning from the best practitioners is, like for anything else, super helpful.
|
What are some good datasets to learn basic machine learning algorithms and why?
|
Kaggle has a whole host of datasets you can use to practice with.
(I'm surprised it wasn't mentioned so far!)
It's got two things (among many others) that make it a highly invaluable resource:
Lots
|
What are some good datasets to learn basic machine learning algorithms and why?
Kaggle has a whole host of datasets you can use to practice with.
(I'm surprised it wasn't mentioned so far!)
It's got two things (among many others) that make it an invaluable resource:
Lots of clean datasets. While noise-free datasets aren't really representative of real-world datasets, they're especially suited for your purpose - deploying ML algorithms.
You can also view others' ML models for the same dataset, which could be a fun way to pick up some hacks along the way. It goes without saying that the kind of exposure you get from learning from the best practitioners is, like for anything else, super helpful.
|
What are some good datasets to learn basic machine learning algorithms and why?
Kaggle has a whole host of datasets you can use to practice with.
(I'm surprised it wasn't mentioned so far!)
It's got two things (among many others) that make it a highly invaluable resource:
Lots
|
14,592
|
What are some good datasets to learn basic machine learning algorithms and why?
|
First, I'd recommend starting with the sample data that is provided with the software. Most software distributions include example data that you can use to get familiar with the algorithm without dealing with data types and wrestling the data into the right format for the algorithm. Even if you are building an algorithm from scratch, you can start with the sample from a similar implementation and compare the performance.
Second, I'd recommend experimenting with synthetic data sets to get a feel for how the algorithm performs when you know how the data was generated and the signal to noise ratio.
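A minimal Python sketch of that second point, under an arbitrary assumed generating process (y = 2x + Gaussian noise, chosen for illustration and not taken from the answer): generate data where the truth is known, then check how well a fitted estimate recovers it.

```python
import random

random.seed(0)

# Synthetic data with a known generating process: y = 2*x + Gaussian noise.
n = 200
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [2.0 * x + random.gauss(0, 1.0) for x in xs]

# Ordinary least-squares slope, computed by hand so no libraries are needed.
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

print(f"recovered slope: {slope:.3f}  (true value: 2.0)")
```

Raising the noise standard deviation or shrinking n shows directly how the signal-to-noise ratio degrades the estimate.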
In R, you can list all datasets in the currently installed packages with this command:
data(package = installed.packages()[, 1])
The R package mlbench has real datasets and can generate synthetic datasets that are useful for studying algorithm performance.
Python's scikit-learn has sample data and generates synthetic/toy dataset too.
SAS has training datasets available for download, and the SPSS sample data is installed with the software at C:\Program Files\IBM\SPSS\Statistics\22\Samples
Lastly, I'd look at data in the wild. I'd compare the performance of different algorithms and tuning parameters on real data sets. This usually requires a lot more work because you will rarely find datasets with data types and structures that you can drop right into your algorithms.
For data in the wild, I'd recommend:
reddit's Dataset Archive
KDnugget's list
|
What are some good datasets to learn basic machine learning algorithms and why?
|
First, I'd recommend starting with the sample data that is provided with the software. Most software distributions include example data that you can use to get familiar with the algorithm without dea
|
What are some good datasets to learn basic machine learning algorithms and why?
First, I'd recommend starting with the sample data that is provided with the software. Most software distributions include example data that you can use to get familiar with the algorithm without dealing with data types and wrestling the data into the right format for the algorithm. Even if you are building an algorithm from scratch, you can start with the sample from a similar implementation and compare the performance.
Second, I'd recommend experimenting with synthetic data sets to get a feel for how the algorithm performs when you know how the data was generated and the signal to noise ratio.
In R, you can list all datasets in the currently installed packages with this command:
data(package = installed.packages()[, 1])
The R package mlbench has real datasets and can generate synthetic datasets that are useful for studying algorithm performance.
Python's scikit-learn has sample data and generates synthetic/toy dataset too.
SAS has training datasets available for download, and the SPSS sample data is installed with the software at C:\Program Files\IBM\SPSS\Statistics\22\Samples
Lastly, I'd look at data in the wild. I'd compare the performance of different algorithms and tuning parameters on real data sets. This usually requires a lot more work because you will rarely find datasets with data types and structures that you can drop right into your algorithms.
For data in the wild, I'd recommend:
reddit's Dataset Archive
KDnugget's list
|
What are some good datasets to learn basic machine learning algorithms and why?
First, I'd recommend starting with the sample data that is provided with the software. Most software distributions include example data that you can use to get familiar with the algorithm without dea
|
14,593
|
What are some good datasets to learn basic machine learning algorithms and why?
|
The Iris data set hands down. It's in base R as well.
|
What are some good datasets to learn basic machine learning algorithms and why?
|
The Iris data set hands down. It's in base R as well.
|
What are some good datasets to learn basic machine learning algorithms and why?
The Iris data set hands down. It's in base R as well.
|
What are some good datasets to learn basic machine learning algorithms and why?
The Iris data set hands down. It's in base R as well.
|
14,594
|
What are some good datasets to learn basic machine learning algorithms and why?
|
In my opinion, you should start with small datasets which do not have too many features.
One example would be the Iris dataset (for classification). It has 3 classes, 50 samples for each class totaling 150 data points. One excellent resource to help you explore this dataset is this video series by Data School.
Another dataset to check out is the Wine Quality data set from the UCI ML repository. It has 4898 data points with 12 attributes.
|
What are some good datasets to learn basic machine learning algorithms and why?
|
In my opinion, you should start with small datasets which do not have too many features.
One example would be the Iris dataset (for classification). It has 3 classes, 50 samples for each class to
|
What are some good datasets to learn basic machine learning algorithms and why?
In my opinion, you should start with small datasets which do not have too many features.
One example would be the Iris dataset (for classification). It has 3 classes, 50 samples for each class totaling 150 data points. One excellent resource to help you explore this dataset is this video series by Data School.
Another dataset to check out is the Wine Quality data set from the UCI ML repository. It has 4898 data points with 12 attributes.
|
What are some good datasets to learn basic machine learning algorithms and why?
In my opinion, you should start with small datasets which do not have too many features.
One example would be the Iris dataset (for classification). It has 3 classes, 50 samples for each class to
|
14,595
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
|
Likelihood must be at least 0, and can be greater than 1.
Consider, for example, likelihood for three observations from a uniform on (0,0.1); when non-zero, the density is 10, so the product of the densities would be 1000.
Consequently log-likelihood may be negative, but it may also be positive.
[Indeed, according to some definitions the likelihood is only defined up to a multiplicative constant (e.g. see here), so even if the density were bounded by 1, the likelihood still wouldn't be.]
Clarifications as a result of comments/chat: For a continuous distribution, likelihood is defined in terms of density. Density must be at least $0$ and can exceed $1$; and as a result, likelihood can exceed $1$.
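The uniform example above can be checked numerically; the three particular observation values below are illustrative, and any points inside (0, 0.1) give the same result:

```python
# Density of Uniform(0, 0.1) is 1/(b - a) = 10 on its support, 0 elsewhere.
def unif_density(x, a=0.0, b=0.1):
    return 1.0 / (b - a) if a <= x <= b else 0.0

obs = [0.02, 0.05, 0.09]          # three observations inside (0, 0.1)
likelihood = 1.0
for x in obs:
    likelihood *= unif_density(x)

print(likelihood)   # 10 * 10 * 10 = 1000.0
```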
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
|
Likelihood must be at least 0, and can be greater than 1.
Consider, for example, likelihood for three observations from a uniform on (0,0.1); when non-zero, the density is 10, so the product of the de
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
Likelihood must be at least 0, and can be greater than 1.
Consider, for example, likelihood for three observations from a uniform on (0,0.1); when non-zero, the density is 10, so the product of the densities would be 1000.
Consequently log-likelihood may be negative, but it may also be positive.
[Indeed, according to some definitions the likelihood is only defined up to a multiplicative constant (e.g. see here), so even if the density were bounded by 1, the likelihood still wouldn't be.]
Clarifications as a result of comments/chat: For a continuous distribution, likelihood is defined in terms of density. Density must be at least $0$ and can exceed $1$; and as a result, likelihood can exceed $1$.
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
Likelihood must be at least 0, and can be greater than 1.
Consider, for example, likelihood for three observations from a uniform on (0,0.1); when non-zero, the density is 10, so the product of the de
|
14,596
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
|
The likelihood function is a product of density functions for independent samples. A density function takes non-negative values (and may exceed 1). The log-likelihood is the logarithm of a likelihood function. If your likelihood function $L\left(x\right)$ has values in $\left(0,1\right)$ for some $x$, then the log-likelihood function $\log L\left(x\right)$ will have values in $\left(-\infty,0\right)$. For $L\left(x\right)\in\left[1,\infty\right)$ the $\log L\left(x\right)\in\left[0,\infty\right)$. So $-34.82$ is a typical value for a log-likelihood function.
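To make the negative case concrete, here is a Gaussian log-likelihood in plain Python (the sample values are made up for illustration):

```python
import math

def normal_logpdf(x, mu=0.0, sigma=1.0):
    """Log of the normal density at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

data = [0.3, -1.2, 0.8, 2.1, -0.5]   # illustrative sample
log_lik = sum(normal_logpdf(x) for x in data)
print(f"log-likelihood: {log_lik:.2f}")
```

Each log-density term is at most $-\tfrac{1}{2}\log(2\pi) \approx -0.92$ here, so the sum is negative, much like the $-34.82$ in the question.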
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
|
The likelihood function is a product of density functions for independent samples. A density function can have non-negative values. The log-likelihood is the logarithm of a likelihood function. If you
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
The likelihood function is a product of density functions for independent samples. A density function takes non-negative values (and may exceed 1). The log-likelihood is the logarithm of a likelihood function. If your likelihood function $L\left(x\right)$ has values in $\left(0,1\right)$ for some $x$, then the log-likelihood function $\log L\left(x\right)$ will have values in $\left(-\infty,0\right)$. For $L\left(x\right)\in\left[1,\infty\right)$ the $\log L\left(x\right)\in\left[0,\infty\right)$. So $-34.82$ is a typical value for a log-likelihood function.
|
Can the likelihood take values outside of the range [0, 1]? [duplicate]
The likelihood function is a product of density functions for independent samples. A density function can have non-negative values. The log-likelihood is the logarithm of a likelihood function. If you
|
14,597
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
|
The original version of this answer was missing the point (that's when the answer got a couple of downvotes). The answer was fixed in October 2015.
This is a somewhat controversial topic.
It is often claimed that LOOCV has higher variance than $k$-fold CV, and that it is so because the training sets in LOOCV have more overlap. This makes the estimates from different folds more dependent than in the $k$-fold CV, the reasoning goes, and hence increases the overall variance. See for example a quote from The Elements of Statistical Learning by Hastie et al. (Section 7.10.1):
What value should we choose for $K$? With $K = N$, the cross-validation
estimator is approximately unbiased for the true (expected) prediction error, but can have high variance because the $N$ "training sets" are so similar to one another.
See also a similar quote in the answer by @BrashEquilibrium (+1). The accepted and the most upvoted answers in Variance and bias in cross-validation: why does leave-one-out CV have higher variance? give the same reasoning.
HOWEVER, note that Hastie et al. do not give any citations, and while this reasoning does sound plausible, I would like to see some direct evidence that this is indeed the case. One reference that is sometimes cited is Kohavi 1995 but I don't find it very convincing in this particular claim.
MOREOVER, here are two simulations that show that LOOCV either has the same or even a bit lower variance than 10-fold CV:
https://stats.stackexchange.com/a/357572.
Does $K$-fold CV with $K=N$ (LOO) provide the MOST or LEAST variable estimates, and what is the role of "stability"?.
See also the paper linked in https://stats.stackexchange.com/a/252031. It says that it is a "misconception" that LOOCV has high variance.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
|
The original version of this answer was missing the point (that's when the answer got a couple of downvotes). The answer was fixed in October 2015.
This is a somewhat controversial topic.
It is often
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
The original version of this answer was missing the point (that's when the answer got a couple of downvotes). The answer was fixed in October 2015.
This is a somewhat controversial topic.
It is often claimed that LOOCV has higher variance than $k$-fold CV, and that it is so because the training sets in LOOCV have more overlap. This makes the estimates from different folds more dependent than in the $k$-fold CV, the reasoning goes, and hence increases the overall variance. See for example a quote from The Elements of Statistical Learning by Hastie et al. (Section 7.10.1):
What value should we choose for $K$? With $K = N$, the cross-validation
estimator is approximately unbiased for the true (expected) prediction error, but can have high variance because the $N$ "training sets" are so similar to one another.
See also a similar quote in the answer by @BrashEquilibrium (+1). The accepted and the most upvoted answers in Variance and bias in cross-validation: why does leave-one-out CV have higher variance? give the same reasoning.
HOWEVER, note that Hastie et al. do not give any citations, and while this reasoning does sound plausible, I would like to see some direct evidence that this is indeed the case. One reference that is sometimes cited is Kohavi 1995 but I don't find it very convincing in this particular claim.
MOREOVER, here are two simulations that show that LOOCV either has the same or even a bit lower variance than 10-fold CV:
https://stats.stackexchange.com/a/357572.
Does $K$-fold CV with $K=N$ (LOO) provide the MOST or LEAST variable estimates, and what is the role of "stability"?.
See also the paper linked in https://stats.stackexchange.com/a/252031. It says that it is a "misconception" that LOOCV has high variance.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
The original version of this answer was missing the point (that's when the answer got a couple of downvotes). The answer was fixed in October 2015.
This is a somewhat controversial topic.
It is often
|
14,598
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
|
From An Introduction to Statistical Learning
When we perform LOOCV, we are in effect averaging the outputs of $n$ fitted models, each of which is trained on an almost identical set of observations; therefore, these outputs are highly (positively) correlated with each other. In contrast, when we perform $k$-fold CV with $k<n$, we are averaging the outputs of $k$ fitted models that are somewhat less correlated with each other, since the overlap between the training sets in each model is smaller. Since the mean of many highly correlated quantities has higher variance than does the mean of many quantities that are not as highly correlated, the test error estimate resulting from LOOCV tends to have higher variance than does the test error estimate resulting from $k$-fold CV.
To summarize, there is a bias-variance trade-off associated with the choice of $k$ in $k$-fold cross-validation. Typically, given these considerations, one performs $k$-fold cross-validation with $k=5$ or $k=10$, as these values have been shown empirically to yield test error rate estimates that suffer neither from excessively high bias nor from very high variance.
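The sentence about correlated quantities can be made concrete: for $k$ estimates with common variance $\sigma^2$ and common pairwise correlation $\rho$, the variance of their mean is $\frac{\sigma^2}{k}\left(1+(k-1)\rho\right)$. A small sketch (the numbers are illustrative):

```python
def var_of_mean(sigma2, k, rho):
    """Variance of the mean of k equally correlated estimates,
    each with variance sigma2 and pairwise correlation rho."""
    return sigma2 / k * (1 + (k - 1) * rho)

# Higher correlation between the folds' estimates -> higher variance of the mean.
low  = var_of_mean(sigma2=1.0, k=10, rho=0.1)
high = var_of_mean(sigma2=1.0, k=10, rho=0.9)
print(f"rho=0.1: {low:.2f}   rho=0.9: {high:.2f}")   # 0.19 vs 0.91
```

As $\rho \to 1$ the averaging buys nothing and the variance approaches $\sigma^2$, which is the intuition behind the quoted passage.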
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
|
From An Introduction to Statistical Learning
When we perform LOOCV, we are in effect averaging the outputs of $n$ fitted models, each of which is trained on an almost identical set of observations; t
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
From An Introduction to Statistical Learning
When we perform LOOCV, we are in effect averaging the outputs of $n$ fitted models, each of which is trained on an almost identical set of observations; therefore, these outputs are highly (positively) correlated with each other. In contrast, when we perform $k$-fold CV with $k<n$, we are averaging the outputs of $k$ fitted models that are somewhat less correlated with each other, since the overlap between the training sets in each model is smaller. Since the mean of many highly correlated quantities has higher variance than does the mean of many quantities that are not as highly correlated, the test error estimate resulting from LOOCV tends to have higher variance than does the test error estimate resulting from $k$-fold CV.
To summarize, there is a bias-variance trade-off associated with the choice of $k$ in $k$-fold cross-validation. Typically, given these considerations, one performs $k$-fold cross-validation with $k=5$ or $k=10$, as these values have been shown empirically to yield test error rate estimates that suffer neither from excessively high bias nor from very high variance.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
From An Introduction to Statistical Learning
When we perform LOOCV, we are in effect averaging the outputs of $n$ fitted models, each of which is trained on an almost identical set of observations; t
|
14,599
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
|
In simple cases I think the answer is: the grand mean (over all test cases and all folds) has the same variance for $k$-fold and LOO validation.
Simple means here: models are stable, so each of the $k$ or $n$ surrogate models yields the same prediction for the same sample (thought experiment: test the surrogate models with a large independent test set).
If the models are not stable, the situation gets more complex: each of the surrogate models has its own performance, so you have additional variance. In that case, all bets are open whether LOO or $k$-fold has more additional variance*. But you can iterate the $k$-fold CV: taking the grand mean over all test cases and all $i \times k$ surrogate models can mitigate that additional variance. There is no such possibility for LOO: the $n$ surrogate models are all possible surrogate models.
The large variance is usually due to two factors:
small sample size (if you weren't in a small sample size situation, you'd not be worried about variance ;-) ).
High-variance type of error measure. All proportion-of-test-cases-type of classification errors are subject to high variance. This is a basic property of estimating fractions by counting cases. Regression-type errors like MSE have a much more benign behaviour in this respect.
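The second point can be made concrete: an accuracy (or error rate) estimated by counting correct cases is a binomial proportion, with standard error $\sqrt{p(1-p)/n}$. A small sketch (the accuracy value 0.8 is illustrative):

```python
import math

def se_of_proportion(p, n):
    """Standard error of an estimated proportion (e.g. test-set accuracy)."""
    return math.sqrt(p * (1 - p) / n)

# The same true accuracy, measured on small vs. large test sets.
for n in (30, 100, 1000):
    print(f"n={n:4d}: SE = {se_of_proportion(0.8, n):.3f}")
```

With only 30 test cases the standard error is around 7 percentage points, which is exactly the small-sample, high-variance situation the answer describes.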
For classification errors, there are a number of papers that look at the properties of different resampling validation schemes, in which you can also see the variances, e.g.:
Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, in Mellish, C. S. (ed.), Proceedings of the 14th International Joint Conference on Artificial Intelligence, 20-25 August 1995, Montréal, Québec, Canada, Morgan Kaufmann, USA, 1137-1145 (1995).
We observed very similar behaviour for vibrational spectroscopic data:
Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G.: Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
(I guess similar papers may exist for regression errors as well, but I'm not aware of them)
* one may expect LOO to have less variance because the surrogate models are trained with more cases, but at least for certain types of classification models, LOO doesn't behave very well.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
|
In simple cases I think the answer is: the grand mean (over all test cases and all folds) has the same variance for $k$-fold and LOO validation.
Simple means here: models are stable, so each of the $
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
In simple cases I think the answer is: the grand mean (over all test cases and all folds) has the same variance for $k$-fold and LOO validation.
Simple means here: models are stable, so each of the $k$ or $n$ surrogate models yields the same prediction for the same sample (thought experiment: test the surrogate models with a large independent test set).
If the models are not stable, the situation gets more complex: each of the surrogate models has its own performance, so you have additional variance. In that case, all bets are open whether LOO or $k$-fold has more additional variance*. But you can iterate the $k$-fold CV: taking the grand mean over all test cases and all $i \times k$ surrogate models can mitigate that additional variance. There is no such possibility for LOO: the $n$ surrogate models are all possible surrogate models.
The large variance is usually due to two factors:
small sample size (if you weren't in a small sample size situation, you'd not be worried about variance ;-) ).
High-variance type of error measure. All proportion-of-test-cases-type of classification errors are subject to high variance. This is a basic property of estimating fractions by counting cases. Regression-type errors like MSE have a much more benign behaviour in this respect.
For classification errors, there are a number of papers that look at the properties of different resampling validation schemes, in which you can also see the variances, e.g.:
Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, in Mellish, C. S. (ed.), Proceedings of the 14th International Joint Conference on Artificial Intelligence, 20-25 August 1995, Montréal, Québec, Canada, Morgan Kaufmann, USA, 1137-1145 (1995).
We observed very similar behaviour for vibrational spectroscopic data:
Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G.: Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
(I guess similar papers may exist for regression errors as well, but I'm not aware of them)
* one may expect LOO to have less variance because the surrogate models are trained with more cases, but at least for certain types of classification models, LOO doesn't behave very well.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
In simple cases I think the answer is: the grand mean (over all test cases and all folds) has the same variance for $k$-fold and LOO validation.
Simple means here: models are stable, so each of the $
|
14,600
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
|
There are no folds in LOOCV in the sense of k-fold cross-validation (they could be called folds, but the term is not very meaningful here). In LOOCV, one instance of the whole dataset is left out as test data and all other instances are used for training. So in each iteration, exactly one instance is held out for testing and the rest form the training data; that is why the training sets look almost identical every time.
In k-fold cross-validation, by using stratification (a method of balancing the folds so that each class is represented in approximately equal proportion in every fold) we can reduce the variance of the estimates.
As LOOCV uses only one instance for testing, stratification cannot be applied, so LOOCV has a higher variance in its error estimates than k-fold cross-validation.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
|
There are no folds in LOOCV like k-fold cross-validation (actually they can be named folds, but it is meaningless). In LOOCV what it does is leave one instance from the whole dataset for test data and use a
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
There are no folds in LOOCV in the sense of k-fold cross-validation (they could be called folds, but the term is not very meaningful here). In LOOCV, one instance of the whole dataset is left out as test data and all other instances are used for training. So in each iteration, exactly one instance is held out for testing and the rest form the training data; that is why the training sets look almost identical every time.
In k-fold cross-validation, by using stratification (a method of balancing the folds so that each class is represented in approximately equal proportion in every fold) we can reduce the variance of the estimates.
As LOOCV uses only one instance for testing, stratification cannot be applied, so LOOCV has a higher variance in its error estimates than k-fold cross-validation.
|
Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [dupl
There are no folds in LOOCV like k-fold cross-validation (actually they can be named folds, but it is meaningless). In LOOCV what it does is leave one instance from the whole dataset for test data and use a
|