36,401 | Regression with rank order as dependent variable

I've heard of using an $L$ statistic calculated from $(N-1)r^2$, then compared to the chi-square table. (Can anyone back me up on this?) All you'd have to do is convert all data into ranks, run it through a regular old multiple regression, then use the $L$ statistic to find your $p$-values.
However, I feel like inference will not be too useful in your case. Not quite sure of the data's context, but simply using Spearman correlation or scatterplots might be more telling.
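A minimal sketch of that recipe, with made-up data. Treating $r^2$ as the multiple $R^2$ of the rank regression and using the number of predictors as the chi-square degrees of freedom are my assumptions, not something the answer above pins down:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = x1 + 0.5 * x2 + rng.normal(size=n)  # made-up data

# convert everything into ranks (average ranks for ties)
X = np.column_stack([np.ones(n), stats.rankdata(x1), stats.rankdata(x2)])
ry = stats.rankdata(y)

# regular old multiple regression on the ranks
beta, *_ = np.linalg.lstsq(X, ry, rcond=None)
r2 = 1 - np.sum((ry - X @ beta) ** 2) / np.sum((ry - ry.mean()) ** 2)

L = (n - 1) * r2                         # the L statistic
p = stats.chi2.sf(L, df=X.shape[1] - 1)  # refer it to the chi-square table
```

As said above, a plain Spearman correlation or a scatterplot of the ranks may be just as informative as the formal test.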
36,402 | Variance-covariance matrix of the parameter estimates wrongly calculated?

optim in its default setting performs minimization; see the manual:
By default optim performs minimization
So the Hessian in the output is already the negative Hessian (that of the negative log-likelihood); no extra sign flip is needed.
It should be further noted that:
Because the parameters in the call to the optimiser are pi, log(zeta),
log(delta), and mu, the delta method is used to obtain the standard
errors for zeta and delta.
Source here.
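To illustrate the delta-method step mentioned in the quote, with made-up numbers: if the optimiser estimates log(zeta) with standard error se_log, the SE on the original scale is obtained by multiplying by the derivative of the back-transformation, and since d/dx exp(x) = exp(x) this is just zeta times se_log:

```python
import numpy as np

log_zeta_hat = np.log(2.0)  # hypothetical estimate on the log scale
se_log = 0.1                # hypothetical SE of log(zeta)

zeta_hat = np.exp(log_zeta_hat)   # back-transform the point estimate
se_zeta = zeta_hat * se_log       # first-order delta method
```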
36,403 | Variance-covariance matrix of the parameter estimates wrongly calculated?

Seconding @Jen's answer. In fact, the fifth line in the result of summary(hyperbfitalv) contains the SEs. They are indeed the square roots of the diagonal elements of the inverse Hessian solve(hyperbfitalv$hessian).
>>> sqrt(1.365591e-6)  # for pi
0.0011685850418347824
>>> sqrt(5.113433e-3)  # for mu
0.071508272248740568
>>> sqrt(1.5261031428)*0.002035  # for delta
0.0025139483860139073
>>> sqrt(1.6617499980)*0.204827  # for zeta
0.26404019669949413
Note that lZeta and lDelta are in fact log(Zeta) and log(Delta). Cheers!
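For illustration, the square-root-of-the-diagonal-of-the-inverse-Hessian computation can be sketched with a made-up 2x2 Hessian (NumPy; the numbers are hypothetical, not taken from hyperbfitalv):

```python
import numpy as np

# Hypothetical Hessian of the negative log-likelihood at the optimum
H = np.array([[400.0, 10.0],
              [10.0, 250.0]])

cov = np.linalg.inv(H)       # approximate variance-covariance matrix
se = np.sqrt(np.diag(cov))   # standard errors of the parameter estimates
```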
36,404 | Confusion related to predictive distribution of gaussian processes

$u^* \mid x^*, u \sim N(u(x^*), \sigma^2)$, directly from the definition of $u^*$.
Notice that the mixture of the two Gaussian pdfs is itself normalized. It can be shown from the fact that
$$
\int_{-\infty}^{\infty}\int_{u}P(u^*|x^*, u)P(u|s)\,du\,du^*
=\int_{u}P(u|s)\int_{-\infty}^{\infty}P(u^*|x^*, u)\,du^*\,du \\
=\int_{u}P(u|s)\int_{-\infty}^{\infty}N(u^*-u(x^*);\, 0, \sigma^2)\,du^*\,du
=\int_{u}P(u|s)\,du\int_{-\infty}^{\infty}N(u^*;\, 0, \sigma^2)\,du^*
=1.
$$
With normalization out of the way, $\int_{u}P(u^*|x^*, u)P(u|s)\,du$ is integrated using the following tips:
Substitute the two normal pdfs into the equation and drop the factors independent of $u$; since normalization has already been shown, the dropped constants can be recovered at the end.
Use the completing-the-square trick for integrating a multivariate exponential, i.e., construct a multivariate normal pdf from the remaining exponential terms. Refer to this YouTube video.
Eventually you are left with an exponential in terms of $u^*$; it can be observed that this is again a constant factor away from a normal pdf. Again, the proof of normalization gives us confidence that the final form is indeed a normal pdf. The pdf is the same as the one given in the original post.
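As a quick numerical sanity check of the normalization argument, here is a toy one-dimensional version on a grid (all numbers are arbitrary choices for the sketch):

```python
import numpy as np

sigma = 0.5  # arbitrary noise scale

def normal_pdf(t, mean, sd):
    return np.exp(-0.5 * ((t - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

du = 0.01
u = np.arange(-10.0, 10.0, du)       # grid over u
ustar = np.arange(-12.0, 12.0, du)   # grid over u*
p_u = normal_pdf(u, 1.0, 1.5)        # some normalized density for u

# p(u*) = integral of N(u*; u, sigma^2) p(u) du, approximated on the grid
p_ustar = (normal_pdf(ustar[:, None], u[None, :], sigma) * p_u[None, :]).sum(axis=1) * du
total = p_ustar.sum() * du           # should be very close to 1
```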
36,405 | Confusion related to predictive distribution of gaussian processes

The detailed derivations of the equations for the conditional distribution of a Gaussian process can be found in chapter 2 and appendix A of the book [Rasmussen2005].
Take a look at (Eq. 2.23, 2.24) and above, which are based on the Gaussian identities (A.6) and the matrix property (A.11).
[Rasmussen2005] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2005.
36,406 | Moment generating function of multinomial distribution

I will give the example with $k=2$ because it is more didactic, but you can generalize the solution. Before we start, let's remember that
$$ \sum_{x = 0}^n \frac{n!}{x!(n-x)!}a^xb^{n-x} = (a+b)^n. $$
By definition of the multinomial distribution we have
$$ P(X_1 = x_1, X_2 = x_2) = \frac{n!}{x_1!x_2!(n-x_1-x_2)!}p_1^{x_1}p_2^{x_2}(1-p_1-p_2)^{n-x_1-x_2}. $$
For now we fix $X_2 = x_2$ and sum over $x_1$, which gives
$$ \sum_{x_1=0}^{n-x_2} P(X_1 = x_1, X_2 = x_2)\,e^{\theta_1 x_1 + \theta_2 x_2} = \\
\sum_{x_1=0}^{n-x_2}\frac{(n-x_2)!}{x_1!(n-x_1-x_2)!}\frac{n!}{x_2!(n-x_2)!}(p_1e^{\theta_1})^{x_1}(p_2e^{\theta_2})^{x_2}(1-p_1-p_2)^{n-x_2-x_1} =\\
\frac{n!}{x_2!(n-x_2)!}(p_1e^{\theta_1}+1-p_1-p_2)^{n-x_2}(p_2e^{\theta_2})^{x_2}. $$
We can now sum for all the values of $x_2$ between 0 and $n$ to obtain
$$ (p_1e^{\theta_1}+p_2e^{\theta_2}+1-p_1-p_2)^n, $$
which is the answer for $k=2$. The general result can be easily seen to be
$$ (p_1e^{\theta_1} + \ldots + p_ke^{\theta_k} + 1-p_1-\ldots-p_k)^n. $$
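A quick Monte Carlo check of the $k=2$ formula (the parameter and evaluation-point values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p1, p2 = 10, 0.3, 0.2
t1, t2 = 0.1, -0.2

# empirical MGF from simulated multinomial counts (X1, X2, rest)
draws = rng.multinomial(n, [p1, p2, 1.0 - p1 - p2], size=200_000)
empirical = np.exp(t1 * draws[:, 0] + t2 * draws[:, 1]).mean()

# closed-form MGF derived above
formula = (p1 * np.exp(t1) + p2 * np.exp(t2) + 1.0 - p1 - p2) ** n
```

The two values agree up to Monte Carlo error.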
36,407 | Median + MAD for skewed data

If the uncontaminated data in your sample is drawn from an asymmetric distribution and the measure of scale you use to determine the width of the rejection region assumes that the good part of your data is symmetric, then these rejection regions will be larger than they need to be. For illustration, suppose the distribution of the data is really right-skewed. This would lead you to:
Reject genuine observations from the right tail as outliers.
Fail to detect outliers from the left tail for what they are.
Overall, the combined effect would be that your (inappropriately) cleaned dataset will look more symmetric than it really is.
The alternative here is to use an outlier detection rule that treats the left and right tails of your sample separately. Of course, compared to the MAD and median, this will also halve the breakdown point of your procedure (this is inevitable, because the contamination rate of a half sample can potentially be twice as high as the contamination rate of the full sample).
In my opinion, the best procedure for this problem is to use the rejection regions from the adjusted boxplots. In my experience (drawn from numerical simulation), they can be expected to reliably detect asymmetric contaminations even when the data contains as much as 10-15% outliers concentrated in one tail. Adjusted boxplots are widely implemented and their connection with the classical boxplots makes them easy to understand and use. This answer explains and illustrates the use of adjusted boxplots in a context quite like yours.
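A rough sketch of the adjusted-boxplot rejection region: a naive $O(n^2)$ medcouple and the Hubert-Vandervieren fence constants, written from memory, so treat it as illustrative rather than a reference implementation:

```python
import numpy as np

def medcouple(v):
    # naive O(n^2) medcouple; assumes no ties at the median (continuous data)
    v = np.sort(v)
    med = np.median(v)
    lower = v[v <= med]
    upper = v[v >= med]
    h = [((xj - med) - (med - xi)) / (xj - xi)
         for xi in lower for xj in upper if xj != xi]
    return float(np.median(h))

def adjusted_fences(v):
    # adjusted-boxplot fences; skew-dependent exponents as I recall them
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    mc = medcouple(v)
    if mc >= 0:
        return q1 - 1.5 * np.exp(-4 * mc) * iqr, q3 + 1.5 * np.exp(3 * mc) * iqr
    return q1 - 1.5 * np.exp(-3 * mc) * iqr, q3 + 1.5 * np.exp(4 * mc) * iqr

rng = np.random.default_rng(2)
x = rng.lognormal(size=500)  # right-skewed, uncontaminated sample
lo, hi = adjusted_fences(x)  # points outside (lo, hi) would be flagged
```

For right-skewed data the medcouple is positive, so the upper fence is pushed out relative to the classical boxplot, exactly the asymmetric treatment argued for above.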
36,408 | Median + MAD for skewed data

It seems to me that these rejection rules make most sense if you have grounds to believe that your data are drawn from some majority distribution PLUS a contaminating heavier-tailed distribution. That picture of a contaminated situation should ideally draw upon subject-matter knowledge of the real generating process (physical, biological, economic, whatever).
Conversely, if you don't have independent grounds to believe that there are contaminants, how can you expect that choosing any rejection rule is the right thing to do?
But there is at least one alternative world-view, which is that outliers may be just what you expect from a heavy-tailed (and in this question asymmetric) distribution, which may or may not resemble some textbook distribution, say a lognormal.
With marked asymmetry, I would expect first to try a transformation and then see whether outliers are apparent on a more nearly symmetric scale. Alternatively, and increasingly commonly, the answer is not to reject outliers but to use a model that is based on a heavy-tailed distribution.
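That first suggestion can be illustrated with simulated, uncontaminated lognormal data: the usual 1.5 IQR rule flags many genuine points on the raw scale but far fewer after a log transformation (the data and the rule here are my own toy choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # skewed, no contamination

def tukey_flags(v):
    # classical 1.5 * IQR boxplot rule
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    return (v < q1 - 1.5 * iqr) | (v > q3 + 1.5 * iqr)

n_raw = int(tukey_flags(x).sum())          # many points flagged on the raw scale
n_log = int(tukey_flags(np.log(x)).sum())  # far fewer on the log scale
```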
What I want to do here is underline one view, which is that outlier rejection rules may cause quite as many problems as they solve, and that they need not be part of routine data analysis.
I realise that some people have large datasets of dubious quality arriving in real time and they may judge that they have no alternative but to filter them with some outlier rejection rule, but I suspect I am not alone among statistical people in being deeply suspicious of such rules.
It's elementary but worth mentioning that very often the outliers are genuine and important, even though I routinely encounter students determined to omit them as awkward to analyse.
It's lose-lose: you could devise an outlier rejection rule if you had a good understanding of the precise generating process, but you don't, and so who knows what the real properties of any rule you use are.
36,409 | Path analysis or a full SEM?

1) There are surely many questions for which a path analysis is sufficient.
2) Increasing complexity or increasing sample size required (etc.) are not by themselves valid reasons to avoid inclusion of latent variables.
3) Latent variables are not "treated as observed" in a path analysis - it's more accurate to say that a path analysis is an SEM without any latent variables included (somewhat semantic, but I think it's an important distinction)
4) Many models do in fact assume no measurement error, linear regression being the most abused culprit.
5) Many people probably do say they are using SEM when they are in fact using path analysis. This isn't that egregious, because path analysis can certainly be viewed as a type of SEM "sub-model" (for lack of a better word).
Your choice to include latent variables in the model should really be driven by the theory you are testing and the data you have available. If you choose to use latent variables, you should certainly do some analyses of your respective measurement models before moving on to the full SEM where these are related to other variables. Without more detail on your specific problem, it's hard to give more detailed advice.
36,410 | Fat-shattering dimension

I was also looking for an explanation, and this is the best one I got (Section 4.1.2). Apparently fat-shattering is a restrictive form of P-shattering which says that for some fixed $r_x$ there is some $f$ that has a margin of at least $\gamma$.
Check out the figure in the reference for a clearer explanation.
36,411 | How to perform an exponential regression with multiple variables in R

As a start:
# model: y = a * b1^x1 * b2^x2
f <- function(x1,x2,a,b1,b2) {a * (b1^x1) * (b2^x2) }
# generate some data
x1 <- 1:10
x2 <- c(2,3,5,4,6,7,8,10,9,11)
set.seed(44)
y <- 2*exp(x1/4) + rnorm(10)*2
dat <- data.frame(x1,x2, y)
# fit a nonlinear model
fm <- nls(y ~ f(x1,x2,a,b1,b2), data = dat, start = c(a=1, b1=1,b2=1))
# get estimates of a, b
co <- coef(fm)
36,412 | How to perform an exponential regression with multiple variables in R

Huub Hoofs' approach above worked! Thank you. Here is the technique I utilized to plot a visualization of the model:
# x1 is the variable we want to show on the x-axis
plot(x1, y)
# generate a range of values for x1 in small increments to create a smooth line
xRange <- seq(min(x1), max(x1), length.out = 1000)
# generate the predicted y values (for a test value of x2 = 1)
yValues <- predict(fm, newdata=list(x1=xRange, x2=1))
#draw the curve
lines(xRange, yValues, col="blue")
# generate the predicted y values (for a test value of x2 = 0)
yValues <- predict(fm, newdata=list(x1=xRange, x2=0))
#draw the curve
lines(xRange, yValues, col="red")
36,413 | How to perform an exponential regression with multiple variables in R

If you really want to look at R2, it's best to linearize your model.
Observe that R2 doesn't make sense for non linear general models, as discussed in another topic: https://stackoverflow.com/questions/14530770/calculating-r2-for-a-nonlinear-least-squares-fit
Note that the r squared is not defined for non-linear models, or at least very tricky, quote from R-help:
There is a good reason that an nls model fit in R does not provide
r-squared - r-squared doesn't make sense for a general nls model.
One way of thinking of r-squared is as a comparison of the residual sum of squares for the fitted model to the residual sum of
squares for a trivial model that consists of a constant only. You
cannot guarantee that this is a comparison of nested models when
dealing with an nls model. If the models aren't nested this comparison
is not terribly meaningful.
So the answer is that you probably don't want to do this in the first place.
If you want peer-reviewed evidence, see this article for example; it's not that you can't compute the R^2 value, it's just that it may not mean the same thing/have the same desirable properties as in the linear-model case.
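To illustrate the linearization suggestion with the model from this thread, $y = a\, b_1^{x_1} b_2^{x_2}$: taking logs gives an ordinary linear model, for which $R^2$ is well defined. A sketch in Python with made-up data (NumPy rather than R, and the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
x1 = np.arange(1.0, 11.0)
x2 = np.array([2, 3, 5, 4, 6, 7, 8, 10, 9, 11], dtype=float)
# simulate y = a * b1^x1 * b2^x2 with multiplicative noise
y = 2.0 * 1.3 ** x1 * 1.1 ** x2 * np.exp(rng.normal(scale=0.05, size=10))

# linearized model: log y = log a + x1 * log b1 + x2 * log b2
X = np.column_stack([np.ones(10), x1, x2])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
fitted = X @ beta
r2 = 1.0 - np.var(np.log(y) - fitted) / np.var(np.log(y))  # ordinary R^2 on the log scale
```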
36,414 | Why does the Phi coefficient approximates the Pearson's correlation?

By default, chisq.test() applies a continuity correction when computing the test statistic for 2x2 tables. If you switch off this behavior, then:
x = c(1, 1, 0, 0, 1, 0, 1, 1, 1)
y = c(1, 1, 0, 0, 0, 0, 1, 1, 1)
cor(x,y)
sqrt(chisq.test(table(x,y), correct=FALSE)$statistic/length(x)) # phi
will give you exactly the same answer. And this essentially also answers why $\sqrt{\chi^2/n}$ with the continuity correction approximates cor(x,y) -- as $n$ increases, the continuity correction has less and less influence on the result.
The continuity correction is described here: Yates's correction for continuity
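For completeness, a small sketch computing $\phi$ by hand (no continuity correction) and comparing it with the Pearson correlation of the same 0/1 vectors; note that $\phi$ recovers $|r|$, and here $r$ is positive:

```python
import numpy as np

x = np.array([1, 1, 0, 0, 1, 0, 1, 1, 1])
y = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1])

# 2x2 contingency table (rows: x = 0/1, columns: y = 0/1)
table = np.array([[np.sum((x == i) & (y == j)) for j in (0, 1)] for i in (0, 1)])

# Pearson chi-square statistic without continuity correction
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
chi2 = ((table - expected) ** 2 / expected).sum()

phi = np.sqrt(chi2 / len(x))   # phi coefficient
r = np.corrcoef(x, y)[0, 1]    # Pearson correlation of the 0/1 vectors
```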
36,415 | has anyone implemented an autoencoder with random forests | Maybe a little late but...
Ji Feng and Zhi-Hua Zhou (2017) have recently proposed an autoencoder model based on tree ensembles.
They learn a random forest, or build a "completely-random forest", for the encoder part. To decode, they follow tree branches backward from the leaves to the root, which gives a series of rules from which they extract the Maximal-Compatible Rule. Then, using this rule, they are able to more or less precisely reconstruct the input.
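As a toy sketch of the encode/decode round trip (a simplification invented here for illustration, not the paper's actual algorithm; real eForest trees split on many features): each "tree" below partitions $[0,1]$ with thresholds, encoding records the cell per tree, and decoding intersects the cells, a stand-in for the Maximal-Compatible Rule, before returning the interval midpoint:

```python
import bisect

# Made-up "trees": each one partitions [0, 1] by sorted thresholds
trees = [
    [0.3, 0.7],
    [0.5],
    [0.2, 0.4, 0.9],
]

def encode(x):
    # leaf index per tree = which cell of the partition x falls into
    return [bisect.bisect_right(t, x) for t in trees]

def decode(code):
    # intersect the per-tree cells: the maximal interval compatible
    # with every tree's leaf (the "rule")
    lo, hi = 0.0, 1.0
    for t, leaf in zip(trees, code):
        cuts = [0.0] + t + [1.0]
        lo, hi = max(lo, cuts[leaf]), min(hi, cuts[leaf + 1])
    return (lo + hi) / 2  # reconstruct as the interval midpoint

code = encode(0.62)
print(code, decode(code))  # -> [1, 1, 2] 0.6
```

The reconstruction (0.6) is close to the input (0.62); more trees give finer intersections and hence more precise reconstructions, which is the "more or less precisely" above.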
PS: It could be noted that Biau et al. (2016) showed that tree-ensembles could be seen as a two-layer perceptron. It may be interesting to see the Forest Autoencoder with this scope.
36,416 | has anyone implemented an autoencoder with random forests | You can use the 1-hot encoding - for a single tree, each example is represented by a vector containing 1 with the selected leaf, and combine these vectors for a forest (either concatenated or OR'ed). This gives you an intermediate representation.
Another option is to use the proximity measure [1] to compute an unsupervised sparse feature representation- a matrix M where M_ij = #times examples i,j terminated in the same leaf (over the entire forest). This matrix is sparse and large but you can reduce its size.
Do either of those give a useful intermediate representation? I don't know of any attempts at deep learning with random forests.
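A minimal sketch of the proximity computation (pure Python; the leaf indices are invented, but in practice they would come from something like a fitted forest's apply method):

```python
# Made-up leaf assignments: leaves[i][t] = leaf that example i reaches in
# tree t (in practice, the output of a fitted forest's apply method)
leaves = [
    [0, 1, 2],
    [0, 1, 0],
    [1, 0, 2],
    [0, 1, 2],
]
n_trees = len(leaves[0])

def proximity(i, j):
    # fraction of trees in which examples i and j land in the same leaf
    return sum(a == b for a, b in zip(leaves[i], leaves[j])) / n_trees

M = [[proximity(i, j) for j in range(len(leaves))]
     for i in range(len(leaves))]
print(M[0][3], M[1][2])  # -> 1.0 0.0
```

Examples 0 and 3 terminate together in every tree (proximity 1), while 1 and 2 never do; rows of M can then serve as a similarity-based feature representation.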
[1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#prox
36,417 | Nested/SplitModel - RepeatedMeasures/MixedModel ANOVA: levels of nesting & scripting in R | Tricky problem! Is location fixed or random? Is position fixed or random? I assume that sample is random.
Since treatment is assigned to location, location is the sampling unit. Basically, the comparison between treatments is done at that level. $n=8$.
The measurement unit is the observation you take on your "samples" at a given time.
Location is not nested in treatment. The treatment is applied to the location.
Position is nested inside location.
Sample is nested inside position.
Time is nested inside Sample.
Time is crossed with treatment.
You have 3 levels of nesting (time within sample, sample within position, position within location).
If location, position and sample are random, I think the lme4 formula will look like this:
Y ~ Treatment * Time + (1 | location/position/sample)
You have 1 row in your data frame for each sample observation at each time - with appropriate codings for all of your design characteristics.
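To make the one-row-per-observation layout concrete, here is a hedged sketch in Python (the sizes, 3 positions per location, 2 samples per position and 4 time points, are invented; tuple keys make the nesting explicit in the codings):

```python
from itertools import product

# Hypothetical sizes: 8 locations (treatment assigned per location),
# 3 positions per location, 2 samples per position, 4 time points
locations, positions, samples, times = range(8), range(3), range(2), range(4)

rows = [
    {"treatment": "A" if loc < 4 else "B",  # assigned at the location level
     "location": loc,
     "position": (loc, pos),                # tuple keys encode the nesting
     "sample": (loc, pos, samp),
     "time": t}
    for loc, pos, samp, t in product(locations, positions, samples, times)
]
print(len(rows))  # -> 192, one row per sample observation at each time
```

The tuple codings make a position in location 1 distinct from position 0 of location 2, which is exactly what nested (as opposed to crossed) factors require.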
Would it work to combine the repeated measures into a score such as their average or their difference? That could make the model easier to interpret.
36,418 | How should I handle a left censored predictor variable in multiple regression? | One option is to include a variable that is 1 if symptom severity was not measured and 0 otherwise, then code all the symptom severities that were not measured as 0. The coefficient on the 0/1 variable will represent the average test score for those that did not have the severity measured and the slope for the severity will be computed based on those that had severity measures.
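A sketch of this coding on a tiny made-up dataset (pure-Python least squares via the normal equations; the numbers are chosen so the result is easy to verify): the severity slope is driven only by the measured rows, while the intercept plus the indicator's coefficient reproduces the mean score of the unmeasured group.

```python
def lstsq3(X, y):
    # Solve the 3x3 normal equations (X'X) b = X'y by Gaussian elimination
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))  # partial pivot
        A[c], A[p], v[c], v[p] = A[p], A[c], v[p], v[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
            v[r] -= f * v[c]
    b = [0.0, 0.0, 0.0]
    for c in (2, 1, 0):  # back substitution
        b[c] = (v[c] - sum(A[c][j] * b[j] for j in range(c + 1, 3))) / A[c][c]
    return b

# Made-up records: (test score, symptom severity or None if not measured)
data = [(80, 1.0), (75, 2.0), (70, 3.0), (60, None), (64, None)]
X = [[1.0, 0.0 if s is None else s, 1.0 if s is None else 0.0]
     for _, s in data]
y = [float(score) for score, _ in data]

b0, b_sev, b_miss = lstsq3(X, y)
# Severity slope comes from the measured rows alone, and
# b0 + b_miss equals the mean score (62) of the unmeasured group
print(round(b0, 6), round(b_sev, 6), round(b0 + b_miss, 6))  # -> 85.0 -5.0 62.0
```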
36,419 | In calculating the F-measure with precision and recall, why is the harmonic mean used? | The F-measure is often used in the natural language recognition field as a means of evaluation. In particular, the F-measure was employed by the Message Understanding Conference (MUC) in order to evaluate named entity recognition (NER) tasks.
Directly quoted from A survey of named entity recognition and classification written by D. Nadeau:
The harmonic mean of two numbers is never higher than the geometrical mean. It also tends towards the least number, minimizing the impact of large outliers and maximizing the impact of small ones. The F-measure therefore tends to privilege balanced systems.
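The quoted property is easy to illustrate numerically (the precision and recall values here are invented):

```python
import math

def f_measure(p, r):
    # harmonic mean of precision and recall
    return 2 * p * r / (p + r)

p, r = 0.9, 0.1  # a deliberately unbalanced system (made-up values)
hm = f_measure(p, r)
gm = math.sqrt(p * r)
am = (p + r) / 2
print(round(hm, 2), round(gm, 2), round(am, 2))  # -> 0.18 0.3 0.5
```

The harmonic mean (0.18) sits below the geometric (0.3) and arithmetic (0.5) means and is pulled toward the small recall, so a system cannot buy a high F-measure with precision alone.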
36,420 | Test whether (x,y) of one set of data points is significantly greater than the (x,y) of another set of data points | A typical way to test if two one-dimensional distribution functions are different is with the Kolmogorov-Smirnov test which is based on the statistic:
$$\sup_{x}\,|F_1(x) - F_2(x)|$$
The problem is that in higher dimensions there are $2^d-1$ ways to define a distribution function. There are a number of papers on higher-dimensional KS tests. Below is a link to one that discusses some efficient methods for carrying out such a test.
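In one dimension the statistic is straightforward to compute from the two empirical CDFs (a small Python sketch; the difference of two ECDFs is a step function that changes only at data points, so checking the pooled data points attains the supremum):

```python
def ks_statistic(a, b):
    # sup_x |F1(x) - F2(x)| for the two empirical CDFs; the difference is
    # a step function changing only at data points, so those suffice
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

print(ks_statistic([1, 2, 3, 4], [3, 4, 5, 6]))  # -> 0.5
```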
Two-Dimensional KS Test
36,421 | Test whether (x,y) of one set of data points is significantly greater than the (x,y) of another set of data points | A good test provides insight as well as a quantification of the apparent difference. A permutation test will do that, because you can plot the permutation distribution and it will show you just how and to what extent there is a difference in your data.
A natural test statistic would be the mean difference between the points in one group relative to those in the other -- but with little change you can apply this approach to any statistic you choose. This test views group membership arising from the random selection of (say) the red points among the collection of all blue or red points. Each possible sample yields a value of the test statistic (a vector in this case). The permutation distribution is the distribution of all these possible test statistics, each with equal probability.
For small datasets, like that of the question ($N=12$ points with subgroups of $n=5$ and $7$ points), the number of samples is small enough you can generate them all. For larger datasets, where $\binom{N}{n}$ is impracticably large, you can sample randomly. A few thousand samples will more than suffice. Either way, these distributions of vectors can be plotted in Cartesian coordinates, shown below using one circular shape per outcome for the full permutation distribution (792 points). This is the null, or reference, distribution for assessing the location of the mean difference in the dataset, shown with a red point and red vector directed towards it.
When this point cloud looks approximately Normal, the Mahalanobis distance of the data from the origin will approximately have a chi-squared distribution with $2$ degrees of freedom (one for each coordinate). This yields a p-value for the test, shown in the title of the figure. That's a useful calculation because it (a) quantifies how extreme the arrow appears and (b) can prevent our visual impressions from deceiving us. Here, although the data look extreme--most of the red points are displaced down and to the left of most of the blue points--the p-value of $0.156$ indicates that such an extreme-looking displacement occurs frequently among random groupings of these twelve points, advising us not to conclude there is a significant difference in their locations.
This R code gives the details of the calculations and construction of the figure.
#
# The data, eyeballed.
#
X <- data.frame(x = c(1,2,5,6,8,9,11,13,14,15,18,19),
y = c(0,1.5,1,1.25, 10, 9, 3, 7.5, 8, 4, 10,11),
group = factor(c(0,0,0,1,0,1,1,1,1,0,1,1),
levels = c(0, 1), labels = c("Red", "Blue")))
#
# This approach, although inefficient for testing mean differences in location,
# readily generalizes: by precomputing all possible
# vector differences among all the points, any statistic based on differences
# observed in a sample can be easily computed.
#
dX <- with(X, outer(x, x, `-`))
dY <- with(X, outer(y, y, `-`))
#
# Given a vector `i` of indexes of the "red" group, compute the test
# statistic (in this case, a vector of mean differences).
#
stat <- function(i) rowMeans(rbind(c(dX[i, -i]), c(dY[i, -i])))
#
# Conduct the test.
#
N <- nrow(X)
n <- with(X, sum(group == "Red"))
p.max <- 2e3 # Use sampling if the number of permutations exceeds this
# set.seed(17)
if (lchoose(N, n) <= log(p.max)) {
P <- combn(seq_len(N), n)
stitle <- "P-value"
} else {
P <- sapply(seq_len(p.max), function(i) sample.int(N, n))
stitle <- "Approximate P-value"
}
S <- t(matrix(apply(P, 2, stat), 2)) # The permutation distribution
s <- stat(which(X$group == "Red")) # The statistic for the data
#
# Compute the Mahalanobis distance and its p-value.
# This works because the center of `S` is at (0,0).
#
delta <- s %*% solve(crossprod(S) / (nrow(S) - 1), s)
p <- pchisq(delta, 2, lower.tail = FALSE)
#
# Plot the reference distribution as a point cloud, then overplot the
# data statistic.
#
plot(S, asp = 1, col = "#00000020", xlab = "dx", ylab = "dy",
main = bquote(.(stitle)==.(signif(p, 3))))
abline(h = 0, v = 0, lty = 3)
arrows(0, 0, s[1], s[2], length = 0.15, angle = 18,
lwd = 2, col = "Red")
points(s[1], s[2], pch = 24, bg = "Red", cex = 1.25)
36,422 | Test whether (x,y) of one set of data points is significantly greater than the (x,y) of another set of data points | Given the small sample size in your graph, the Wilcoxon rank-sum test seems appropriate to compare the y values in the red and blue groups.
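For groups this small, an exact version of the test is easy to sketch in Python (assumes no ties; illustrative only):

```python
from itertools import combinations

def rank_sum_p(x, y):
    # Exact two-sided p-value: compare the observed rank sum of x with the
    # rank sums of all possible assignments of the pooled ranks (no ties)
    pooled = sorted(x + y)
    ranks = {v: i + 1 for i, v in enumerate(pooled)}
    w = sum(ranks[v] for v in x)
    n, m = len(pooled), len(x)
    mean_w = m * (n + 1) / 2
    sums = [sum(c) for c in combinations(range(1, n + 1), m)]
    extreme = sum(abs(s - mean_w) >= abs(w - mean_w) for s in sums)
    return extreme / len(sums)

# Complete separation, but only 6 possible orderings exist for n = m = 2
print(rank_sum_p([1.2, 2.3], [8.1, 9.4]))  # -> 0.3333333333333333
```

As the toy output shows, with very small groups even complete separation cannot produce a small p-value, which is worth keeping in mind before relying on this test.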
36,423 | Test whether (x,y) of one set of data points is significantly greater than the (x,y) of another set of data points | I just read this and if you're trying to determine whether these points are from different clusters in 2D space I'd recommend simply taking a multivariate approach on this.
Use discriminant analysis (DA) on the x,y coordinates as Y (covariates) and color as the X (categories). The area under the resulting ROC curve would be a good indication of whether or not these points were from different clusters.
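The AUC itself has a simple rank interpretation, namely the probability that a randomly chosen point of one color receives a higher discriminant score than a randomly chosen point of the other, which can be sketched directly (the scores below are invented):

```python
def auc(pos, neg):
    # P(score of a random positive > score of a random negative),
    # counting ties as one half: the area under the ROC curve
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # -> 0.8888888888888888
```

An AUC near 0.5 means the discriminant scores do not separate the colors; values near 1 suggest distinct clusters.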
36,424 | Which distributions on [0,1] other than the beta distribution form nice compounds with the binomial distribution? | If you just want to be able to write down the probability mass function, you have a lot of flexibility, basically because you can repeatedly use integration by parts. As long as you can integrate the distribution of $X$ repeatedly to get closed form expressions, you get at worst a double sum of closed form expressions for the pmf of the compound distribution. I think you often get single sums, so maybe there is an even simpler way of expressing these.
For example, let $\text{pdf}_X(x) = -\log x$ on $[0,1]$.
$$ \int_0^1 {N\choose k}x^k (1-x)^{N-k} (-\log x) ~dx = \frac{1}{N+1}\sum_{i=k}^{N} \frac1{i+1}.$$
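The identity can be spot-checked numerically with a midpoint rule (fine here because the $x^k$ factor tames the logarithmic singularity at $0$ for $k \ge 1$):

```python
import math

def lhs(N, k, m=200_000):
    # midpoint-rule approximation of the compounding integral
    c = math.comb(N, k)
    total = 0.0
    for i in range(m):
        x = (i + 0.5) / m
        total += c * x**k * (1 - x)**(N - k) * (-math.log(x))
    return total / m

def rhs(N, k):
    # the claimed closed form for the compound pmf
    return sum(1 / (i + 1) for i in range(k, N + 1)) / (N + 1)

print(round(lhs(5, 2), 4), round(rhs(5, 2), 4))  # -> 0.1583 0.1583
```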
36,425 | $\chi^2$ test on user preferences | A psychologically meaningful model can guide us.
Derivation of a useful test
Any variation in the observations can be attributed to variations among the subjects. We might imagine that each subject, at some level, comes up with a numeric value for the result of method 1 and a numeric value for the result of method 2. They then compare these results. If the two are sufficiently different, the subject makes a definite choice, but otherwise the subject declares a tie. (This relates ties to the existence of a threshold of discrimination.)
The variation among the subjects causes variation in the experimental observations. There will be a certain chance $\pi_1$ of favoring method 1, a certain chance $\pi_2$ of favoring method 2, and a certain chance $\pi_0$ of a tie.
It is fair to assume the subjects respond independently of one another. Accordingly, the likelihood of observing $n_1$ subjects favoring method 1, $n_2$ subjects favoring method 2, and $n_0$ subjects giving ties, is multinomial. Apart from an (irrelevant) normalizing constant, the logarithm of the likelihood equals
$$n_1 \log(\pi_1) + n_2 \log(\pi_2) + n_0 \log(\pi_0).$$
Given that $\pi_0 + \pi_1 + \pi_2=1$, this is maximized when $\pi_i = n_i/n$ where $n = n_0+n_1+n_2$ is the number of subjects.
To test the null hypothesis that the two methods are considered equally good, we maximize the likelihood subject to the restriction implied by this hypothesis. Bearing in mind the psychological model and its invocation of a hypothetical threshold, we will have to live with the possibility that $\pi_0$ (the chance of ties) is nonzero. The only way to detect a tendency to favor one model over the other lies in how $\pi_1$ and $\pi_2$ are affected: if model 1 is favored, then $\pi_1$ should increase and $\pi_2$ decrease, and vice versa. Assuming the variation is symmetric, the no-preference situation occurs when $\pi_1=\pi_2$. (The size of $\pi_0$ will tell us something about the threshold--about discriminatory ability--but otherwise gives no information about preferences.)
When there is no favored method, the maximum likelihood occurs when $\pi_1=\pi_2 = \frac{n_1+n_2}{2}/n$ and, once again, $\pi_0 = n_0/n$. Plugging in the two previous solutions, we compute the change in maximum likelihoods, $G$:
$$\eqalign{
G &=\left(n_1\log\frac{n_1}{n} + n_2\log\frac{n_2}{n} + n_0\log\frac{n_0}{n}\right) \\
&-\left(n_1\log\frac{(n_1+n_2)/2}{n} + n_2\log\frac{(n_1+n_2)/2}{n} + n_0\log\frac{n_0}{n}\right) \\
&=n_1 \log\frac{2n_1}{n_1+n_2} + n_2 \log\frac{2n_2}{n_1+n_2}.
}$$
The size of this value--which cannot be negative--tells us how credible the null hypothesis is: when $G$ is small, the data are "explained" almost as well with the (restrictive) null hypothesis as they are in general; when the value is large, the null hypothesis is less credible.
The (asymptotic) maximum likelihood estimation theory says that a reasonable threshold for this change is one-half the $1-\alpha$ quantile of a chi-square distribution with one degree of freedom (due to the single restriction $\pi_1=\pi_2$ imposed by the null hypothesis). As usual, $\alpha$ is the size of this test, often taken to be 5% ($0.05$) or 1% ($0.01$). The corresponding quantiles are $3.841459$ and $6.634897$.
Example
Suppose that out of $n=20$ subjects, $n_1=3$ favor method 1 and $n_2=9$ favor method 2. That implies there are $n_0 = 20 - 3 - 9 = 8$ ties. The likelihood is maximized, then, for $\pi_1 = 3/20 = 0.15$ and $\pi_2 = 9/20 = 0.45$, where it has a value of $-20.208\ldots$. Under the null hypothesis the likelihood is instead maximized for $\pi_1 = \pi_2 = 6/20 = 0.30$, where its value is only $-21.778$. The difference of $G = -20.208 - (-21.778) = 1.57$ is less than one-half the $\alpha = $5% threshold of $3.84$. We therefore do not reject the null hypothesis.
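A quick numerical check of this arithmetic (here $3.841459$ is the 95% quantile of $\chi^2_1$ quoted above, and $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$):

```python
import math

def G(n1, n2):
    # change in maximized log likelihood; note the tie count drops out
    return (n1 * math.log(2 * n1 / (n1 + n2))
            + n2 * math.log(2 * n2 / (n1 + n2)))

g = G(3, 9)
print(round(g, 2), g < 3.841459 / 2)  # -> 1.57 True  (do not reject at 5%)

# p-value: 2G is asymptotically chi-squared with 1 df,
# and P(chi2_1 > x) = erfc(sqrt(x / 2)), so with x = 2G this is erfc(sqrt(G))
p = math.erfc(math.sqrt(g))
print(round(p, 3))  # about 0.076, comfortably above 0.05
```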
About ties and alternative tests
Looking back at the formula for $G$, notice that the number of ties ($n_0$) does not appear. In the example, if we had instead observed $n=100$ subjects and among them $3$ favored method 1, $9$ favored method 2, and the remaining $100 - 3 - 9 = 88$ were tied, the result would be the same.
Splitting the ties and assigning half to method 1 and half to method 2 is intuitively reasonable, but it results in a less powerful test. For instance, let $n_1=5$ and $n_2=15$. Consider two cases:
$n=20$ subjects, so there were $n_0=0$ ties. The maximum likelihood test would reject the null for any value of $\alpha$ greater than $0.02217$. Another test frequently used in this situation (because there are no ties) is a binomial test; it would reject the null for any value of $\alpha$ greater than $0.02660$. The two tests therefore would typically give the same results, because these critical values are fairly close.
$n=100$ subjects, so there were $n_0=80$ ties. The maximum likelihood test would still reject the null for any value of $\alpha$ greater than $0.02217$. The binomial test would reject the null only for any value of $\alpha$ greater than $0.3197$. The two tests give entirely different results. In particular, the $80$ ties have weakened the ability of the binomial test to distinguish a difference that the maximum likelihood theory suggests is real.
Finally, let's consider the $3 \times 1$ contingency table approach suggested in another answer. Consider $n=20$ subjects with $n_1=3$ favoring method 1, $n_2=10$ favoring method 2, and $n_0=7$ with ties. The "table" is just the vector $(n_0,n_1,n_2)=(7,3,10)$. Its chi-squared statistic is $3.7$ with two degrees of freedom. The p-value is $0.1572$, which would cause most people to conclude there is no difference between the methods. The maximum likelihood result instead gives a p-value of $0.04614$, which would reject this conclusion at the $\alpha=$5% level.
With $n=100$ subjects suppose that only $1$ favored method 1, only $2$ favored method 2, and there were $97$ ties. Intuitively there is very little evidence that one of these methods tends to be favored. But this time the chi-squared statistic of $182.42$ clearly, incontrovertibly, (but quite wrongly) shows there is a difference (the p value is less than $10^{-15}$).
In both situations the chi-squared approach gets the answer entirely wrong: in the first case it lacks power to detect a substantial difference while in the second case (with lots of ties) it is extremely overconfident about an inconsequential difference. The problem is not that the chi-squared test is bad; the problem is that it tests a different hypothesis: namely, whether $\pi_1=\pi_2=\pi_0$. According to our conceptual model, this hypothesis is psychological nonsense, because it confuses information about preferences (namely, $\pi_1$ and $\pi_2$) with information about thresholds of discrimination (namely, $\pi_0$). This is a nice demonstration of the need to use a research context and subject matter knowledge (however simplified) in selecting a statistical test. | $\chi^2$ test on user preferences | A psychologically meaningful model can guide us.
Derivation of a useful test
Any variation in the observations can be attributed to variations among the subjects. We might imagine that each subject, a | $\chi^2$ test on user preferences
A psychologically meaningful model can guide us.
Derivation of a useful test
Any variation in the observations can be attributed to variations among the subjects. We might imagine that each subject, at some level, comes up with a numeric value for the result of method 1 and a numeric value for the result of method 2. They then compare these results. If the two are sufficiently different, the subject makes a definite choice, but otherwise the subject declares a tie. (This relates ties to the existence of a threshold of discrimination.)
The variation among the subjects causes variation in the experimental observations. There will be a certain chance $\pi_1$ of favoring method 1, a certain chance $\pi_2$ of favoring method 2, and a certain chance $\pi_0$ of a tie.
It is fair to assume the subjects respond independently of one another. Accordingly, the likelihood of observing $n_1$ subjects favoring method 1, $n_2$ subjects favoring method 2, and $n_0$ subjects giving ties, is multinomial. Apart from an (irrelevant) normalizing constant, the logarithm of the likelihood equals
$$n_1 \log(\pi_1) + n_2 \log(\pi_2) + n_0 \log(\pi_0).$$
Given that $\pi_0 + \pi_1 + \pi_2=1$, this is maximized when $\pi_i = n_i/n$ where $n = n_0+n_1+n_2$ is the number of subjects.
To test the null hypothesis that the two methods are considered equally good, we maximize the likelihood subject to the restriction implied by this hypothesis. Bearing in mind the psychological model and its invocation of a hypothetical threshold, we will have to live with the possibility that $\pi_0$ (the chance of ties) is nonzero. The only way to detect a tendency to favor one model over the other lies in how $\pi_1$ and $\pi_2$ are affected: if model 1 is favored, then $\pi_1$ should increase and $\pi_2$ decrease, and vice versa. Assuming the variation is symmetric, the no-preference situation occurs when $\pi_1=\pi_2$. (The size of $\pi_0$ will tell us something about the threshold--about discriminatory ability--but otherwise gives no information about preferences.)
When there is no favored model, the maximum likelihood occurs when $\pi_1=\pi_2 = \frac{n_1+n_2}{2}/n$ and, once again, $\pi_0 = n_0/n$. Plugging in the two previous solutions, we compute the change in maximum likelihoods, $G$:
$$\begin{aligned}
G &=\left(n_1\log\frac{n_1}{n} + n_2\log\frac{n_2}{n} + n_0\log\frac{n_0}{n}\right) \\
&-\left(n_1\log\frac{(n_1+n_2)/2}{n} + n_2\log\frac{(n_1+n_2)/2}{n} + n_0\log\frac{n_0}{n}\right) \\
&=n_1 \log\frac{2n_1}{n_1+n_2} + n_2 \log\frac{2n_2}{n_1+n_2}.
\end{aligned}$$
The size of this value--which cannot be negative--tells us how credible the null hypothesis is: when $G$ is small, the data are "explained" almost as well with the (restrictive) null hypothesis as they are in general; when the value is large, the null hypothesis is less credible.
The (asymptotic) maximum likelihood estimation theory says that a reasonable threshold for this change is one-half the $1-\alpha$ quantile of a chi-square distribution with one degree of freedom (due to the single restriction $\pi_1=\pi_2$ imposed by the null hypothesis). As usual, $\alpha$ is the size of this test, often taken to be 5% ($0.05$) or 1% ($0.01$). The corresponding quantiles are $3.841459$ and $6.634897$.
Example
Suppose that out of $n=20$ subjects, $n_1=3$ favor method 1 and $n_2=9$ favor method 2. That implies there are $n_0 = 20 - 3 - 9 = 8$ ties. The likelihood is maximized, then, for $\pi_1 = 3/20 = 0.15$ and $\pi_2 = 9/20 = 0.45$, where it has a value of $-20.208\ldots$. Under the null hypothesis the likelihood is instead maximized for $\pi_1 = \pi_2 = 6/20 = 0.30$, where its value is only $-21.778$. The difference of $G = -20.208 - (-21.778) = 1.57$ is less than one-half the $\alpha = $5% threshold of $3.84$. We therefore do not reject the null hypothesis.
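The arithmetic in this example is easy to check numerically. The following short Python sketch (illustrative, not part of the original answer) reproduces $G$ both from the two maximized log-likelihoods and from the closed-form expression, and converts $2G$ into a p-value using the exact $\chi^2_1$ survival function $\operatorname{erfc}(\sqrt{x/2})$:

```python
from math import erfc, log, sqrt

n1, n2, n0 = 3, 9, 8          # counts from the example
n = n1 + n2 + n0

# unrestricted maximum of the log-likelihood: pi_i = n_i / n
l_full = n1 * log(n1 / n) + n2 * log(n2 / n) + n0 * log(n0 / n)   # about -20.208
# maximum under the null pi_1 = pi_2 = (n1 + n2) / (2n)
p_null = (n1 + n2) / (2 * n)
l_null = (n1 + n2) * log(p_null) + n0 * log(n0 / n)               # about -21.778

G = l_full - l_null                                               # about 1.57
# same value via the closed form that drops out of the algebra above
G_direct = n1 * log(2 * n1 / (n1 + n2)) + n2 * log(2 * n2 / (n1 + n2))

# compare 2G to chi-squared with 1 df; its survival function is erfc(sqrt(x/2))
p_value = erfc(sqrt(G))       # sqrt((2G)/2) = sqrt(G); about 0.076 > 0.05
```

With these counts the test fails to reject at the 5% level, matching the conclusion above.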
About ties and alternative tests
Looking back at the formula for $G$, notice that the number of ties ($n_0$) does not appear. In the example, if we had instead observed $n=100$ subjects and among them $3$ favored method 1, $9$ favored method 2, and the remaining $100 - 3 - 9 = 88$ were tied, the result would be the same.
Splitting the ties and assigning half to method 1 and half to method 2 is intuitively reasonable, but it results in a less powerful test. For instance, let $n_1=5$ and $n_2=15$. Consider two cases:
$n=20$ subjects, so there were $n_0=0$ ties. The maximum likelihood test would reject the null for any value of $\alpha$ greater than $0.02217$. Another test frequently used in this situation (because there are no ties) is a binomial test; it would reject the null for any value of $\alpha$ greater than $0.02660$. The two tests therefore would typically give the same results, because these critical values are fairly close.
$n=100$ subjects, so there were $n_0=80$ ties. The maximum likelihood test would still reject the null for any value of $\alpha$ greater than $0.02217$. The binomial test would reject the null only for any value of $\alpha$ greater than $0.3197$. The two tests give entirely different results. In particular, the $80$ ties have weakened the ability of the binomial test to distinguish a difference that the maximum likelihood theory suggests is real.
Finally, let's consider the $3 \times 1$ contingency table approach suggested in another answer. Consider $n=20$ subjects with $n_1=3$ favoring method 1, $n_2=10$ favoring method 2, and $n_0=7$ with ties. The "table" is just the vector $(n_0,n_1,n_2)=(7,3,10)$. Its chi-squared statistic is $3.7$ with two degrees of freedom. The p-value is $0.1572$, which would cause most people to conclude there is no difference between the methods. The maximum likelihood result instead gives a p-value of $0.04614$, which would reject this conclusion at the $\alpha=$5% level.
With $n=100$ subjects suppose that only $1$ favored method 1, only $2$ favored method 2, and there were $97$ ties. Intuitively there is very little evidence that one of these methods tends to be favored. But this time the chi-squared statistic of $182.42$ clearly, incontrovertibly, (but quite wrongly) shows there is a difference (the p value is less than $10^{-15}$).
In both situations the chi-squared approach gets the answer entirely wrong: in the first case it lacks power to detect a substantial difference while in the second case (with lots of ties) it is extremely overconfident about an inconsequential difference. The problem is not that the chi-squared test is bad; the problem is that it tests a different hypothesis: namely, whether $\pi_1=\pi_2=\pi_0$. According to our conceptual model, this hypothesis is psychological nonsense, because it confuses information about preferences (namely, $\pi_1$ and $\pi_2$) with information about thresholds of discrimination (namely, $\pi_0$). This is a nice demonstration of the need to use a research context and subject matter knowledge (however simplified) in selecting a statistical test.
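The numbers in the $(7,3,10)$ comparison can be cross-checked with a short Python sketch (illustrative, not part of the original answer), using the closed-form chi-squared survival functions $e^{-x/2}$ for 2 df and $\operatorname{erfc}(\sqrt{x/2})$ for 1 df:

```python
from math import erfc, exp, log, sqrt

n0, n1, n2 = 7, 3, 10          # ties, favor method 1, favor method 2
n = n0 + n1 + n2

# 3x1 chi-squared test of pi_0 = pi_1 = pi_2 against equal thirds, df = 2
e = n / 3
chi2 = sum((o - e) ** 2 / e for o in (n0, n1, n2))   # 3.7
p_chi2 = exp(-chi2 / 2)        # chi-squared(2 df) survival function; about 0.157

# likelihood-ratio test of pi_1 = pi_2 only; the ties drop out entirely
G = n1 * log(2 * n1 / (n1 + n2)) + n2 * log(2 * n2 / (n1 + n2))
p_G = erfc(sqrt(G))            # chi-squared(1 df) survival function at 2G; about 0.046
```

The two p-values straddle the 5% level, which is exactly the disagreement described above.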
36,426 | $\chi^2$ test on user preferences | I suspect whuber's answer is (as usual) more replete than what I am about to type. I admit, I may not fully understand whuber's answer... so what I am saying may not be unique or useful. However, I did not notice where in whuber's answer the nesting of preferences under individuals as well as the nesting of preferences within test-cases was considered. I think given the question asker's clarification that:
The cases are indeed a random sample of all possible cases. I think an
analogy is the following: the election is determined by what happens
at the polls, but I do have for each voter their party affiliation. So
it would be almost expected that a candidate from one party appeals to
the voters affiliated with that party, but this is not necessarily a
given, a great candidate can win in his party and win over people from
the other party.
... these are important considerations. Therefore, perhaps what is most appropriate is not $\chi^2$ but a multi-level logistic model. Specifically in R I might cast something like:
glmer(PreferenceForM1 ~ 1 + (1 | RaterID) + (1 | TestCaseID), family = binomial)
PreferenceForM1 would be coded as 1 (yes) and 0 (no). Here an intercept over 0 would indicate an average rater's preference for method 1 on an average test case. With samples near the lower bounds of usefulness for these techniques, I'd probably also use pvals.fnc and influence.ME to investigate my assumptions and the effects of outliers.
The basic question about ties here seems well answered by whuber. However, I'll (re-)state that it seems that ties reduce your ability to observe a statistically significant difference between the methods. In addition, I'll claim that eliminating them may cause you to over-estimate the preference individuals have for one method versus the other. For the latter reason, I'd leave them in.
36,427 | Using kriging with very sparse data | From ten points you are going to have 45 (10*(10-1)/2) points in your variogram cloud from the distances between each pair of points. Once the system has binned that, or even without binning, it's going to be dominated by noise, I reckon. Get a plot of the variogram cloud to see what I mean.
If autokrige can't fit a nice smooth variogram then it will do what it did, and just go 'heck, I can't work out the correlation with distance with just 10 points, my best guess is just the mean'. It really can't do better.
If you want something to look 'realistic', then you could feed it variogram parameters with a bigger range, that would over-smooth the output. But then you may as well just do inverse-distance weighting if all you want is a pretty picture. The advantage of kriging is that it is realistic. But it rejects your reality and replaces it with its own...
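Inverse-distance weighting, mentioned above, takes only a few lines. Here is an illustrative Python toy (coordinates and values are made up, and it's Python rather than the R/gstat stack under discussion): a weighted average whose weights decay with distance, with no variogram involved at all.

```python
import math

# made-up sample data: (x, y, value) at four scattered points
samples = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0), (0.0, 10.0, 30.0), (10.0, 10.0, 40.0)]

def idw(qx, qy, samples, power=2.0):
    """Inverse-distance-weighted estimate at the query point (qx, qy)."""
    num = den = 0.0
    for x, y, v in samples:
        d = math.hypot(qx - x, qy - y)
        if d == 0.0:
            return v               # query coincides with a sample point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

est = idw(5.0, 5.0, samples)       # equidistant from all four points -> their mean
```

It always produces a smooth-looking surface, which is precisely why it makes a "pretty picture" without saying anything about uncertainty.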
Suggestions:
Get a plot of the variogram cloud
Get more data :)
Look into bivariate kriging for your case with the two different data sets. I think the theory exists, there may even be code for it...
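To make the variogram cloud concrete: with $n$ points it contains one (distance, semivariance) pair for every pair of points, i.e. $n(n-1)/2$ of them. A hypothetical Python sketch (the coordinates and values are random made-up data, not the asker's):

```python
import itertools
import math
import random

random.seed(1)
# ten made-up sample locations with a measured value at each
pts = [(random.uniform(0, 100), random.uniform(0, 100), random.gauss(50, 10))
       for _ in range(10)]

# variogram cloud: (separation distance, semivariance) for every pair of points
cloud = [(math.hypot(xa - xb, ya - yb), 0.5 * (va - vb) ** 2)
         for (xa, ya, va), (xb, yb, vb) in itertools.combinations(pts, 2)]

print(len(cloud))   # 10 * 9 / 2 = 45 pairs
```

Plotting those 45 points against distance is the quickest way to see how noisy the empirical variogram will be.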
36,428 | Is the overlap between two gene expression samples significant? | The table looks like this
                37 deg C
42 deg C      yes      no
yes            38      97
no             10    4855
yes and no refer to cases overexpressed or not
I ran Fisher's exact test in SAS
The output is pasted below:
Laura Gene expression data
The FREQ Procedure
Statistics for Table of Group by expressed
Fisher's Exact Test
Cell (1,1) Frequency (F) 4855
Left-sided Pr <= F 1.0000
Right-sided Pr >= F 4.776E-53
Table Probability (P) 8.132E-51
Two-sided Pr <= P 4.776E-53
Sample Size = 5000
You see here that the p value for Fisher's Exact test is very small, far less than 0.0001.
This shows exactly what you stated: the observed 38 overexpressed at both temperatures is far greater than what you would expect under independence, which as you stated would be 1.296.
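As an independent cross-check of the SAS output, here is an illustrative pure-Python computation (not part of the original answer): the expected overlap under independence, and the right-sided Fisher p-value as a hypergeometric tail probability.

```python
from math import comb

# 2x2 table: a = overexpressed at both, b = 42C only, c = 37C only, d = neither
a, b, c, d = 38, 97, 10, 4855
N = a + b + c + d              # 5000 cases in total
K = a + c                      # 48 overexpressed at 37 deg C
m = a + b                      # 135 overexpressed at 42 deg C

expected = K * m / N           # expected overlap under independence: 1.296

# right-sided Fisher p-value: P(overlap >= a) for Hypergeometric(N, K, m)
p_right = sum(comb(K, k) * comb(N - K, m - k)
              for k in range(a, min(K, m) + 1)) / comb(N, m)
# astronomically small, in line with the SAS right-sided value
```

Python's exact integer arithmetic makes this feasible even though the binomial coefficients are enormous.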
36,429 | Is the overlap between two gene expression samples significant? | The exact test referred to by Michael is probably the way I would recommend using to solve the problem (fewest assumptions). For reference, the corresponding common statistical test would be a $\chi^2$ test of independence.
36,430 | Some questions about two-sample comparisons | I will answer your bullets with bullets of my own in the same order:
I think the sentence is referring to the large sample (asymptotic) distribution of the test statistic, not the data. As you can see here, the Mann-Whitney U test statistic has an approximate normal distribution when the sample size is large.
In order to assume equal variance, you may consider doing some sort of diagnostic check of whether or not the variances are equal. It is common practice to operate under the equal variance assumption unless a hypothesis test rejects that hypothesis - Levene's Test, which tests the null hypothesis that the variances are equal, is commonly used for this and has the nice property that it is robust to non-normality of the data. When the variances truly are equal you will sacrifice statistical power by not assuming equal variance, so it's good to do this whenever you can. However, you should note that if you have a small sample size, you may have little power to detect inhomogeneity of variance, so if the sample variances are very different from each other you should consider not assuming equal variance, even if you fail to reject the null in Levene's Test.
If by "I want to test whether the distribution of one data set is significantly larger" you mean that one mean is larger than the other, then this would be a one-sided test. If you're testing an alternative hypothesis of the form $\mu_1 > \mu_2$, then you will look at the area to the right of your observed test statistic rather than to the left, which is what distinguishes it from a "less than" one-sided test. Of course, if you interchange the roles of the two samples and switch the hypothesis to a "less than" hypothesis, you will get the same results, since everything is less reversed. If you're doing a two-sided test, interchanging the roles of the two samples should give you the exact same $p$-value. | Some questions about two-sample comparisons | I will answer your bullets with bullets of my own in the same order:
36,431 | Is it acceptable to use Cronbach's alpha to assess reliability of questionnaire composed of categorical and conditional items? | Some quick rules
If you have unordered categorical data (i.e., three or more unordered categories; which you do), then you don't use Cronbach's alpha.
If you have binary data (e.g., incorrect/correct data), then many people do use Cronbach's alpha, but see the Sijtsma reference given by @Momo.
If you have conditional data, then that would at the very least complicate the application of Cronbach's alpha. Skip patterns often imply the existence of an implicit additional category (e.g., "Do you play soccer?" if yes, "what day of the week do you play most often?"; you could say that for the second question there is an implicit category of "not applicable"). However, in your example, skipping item 2 means that the person does not have a degree in painting, so you could fill in that information. In all these examples there are more than 2 unordered categories, so you would not apply Cronbach's alpha.
Other thoughts
Cronbach's alpha relies on internal consistency to evaluate reliability. However, if your scale is formative, then internal consistency measures don't make much sense. In your case, I think your scale could be conceptualised as formative rather than reflective. I.e., the items in their totality represent something like "painting experience".
You might want to look at something like test-retest correlation or categorical PCA if you need to calculate some form of reliability.
36,432 | Is it acceptable to use Cronbach's alpha to assess reliability of questionnaire composed of categorical and conditional items? | Generally Cronbach's coefficient $\alpha$ should not be used if you want a measure of reliability or internal consistency (which is what you need it for, I presume). See the OA Psychometrika article by Sijtsma.
An easily available alternative is the GLB statistic (e.g. in R psych::glb).
Edit
Based on the comments by @chl I think the following caveat is in order: The "conditional questionnaire" structure will likely introduce blocks of missing values. I suppose the skip pattern and the missing value patterns induced will affect the (co-)variance estimation usually used in reliability coefficients if the missing value mechanism is not missing completely at random. Unfortunately, I don't know what this effect will look like, though.
36,433 | Is it acceptable to use Cronbach's alpha to assess reliability of questionnaire composed of categorical and conditional items? | If you are looking at Yes/No items or items coded as 0's and 1's, I have used Guttman's split Lambda 4 coefficient, which can be done in SPSS easily.
36,434 | OLS vs. logistic regression for exploratory analysis with a binary outcome | If the explanatory variables have values over the entire real line it makes little sense to express an expectation that is a proportion in $[0,1]$ as a linear function of a variable defined over the entire real line. If the sigmoid shape of the logit transformation doesn't describe the shape then perhaps it is best to search for a different transformation that maps $[0,1]$ into $(-\infty, \infty)$.
36,435 | logistic regression. How to get dual function? | Tom Minka gives the derivation in this excellent paper, "A comparison of numerical optimizers for logistic regression" (PDF), section 9.
Tom Minka gives the derivation in this excellent paper "A comparison of numerical optimizers for logistic regression" pdf, section 9 | logistic regression. How to get dual function?
Tom Minka gives the derivation in this excellent paper "A comparison of numerical optimizers for logistic regression" pdf, section 9 |
36,436 | logistic regression. How to get dual function? | LIBLINEAR supports $\ell_2$-regularized logistic regression. According to the authors, the package implements the "trust region Newton method". Here, you can find the slides to learn more, but note that it is not based on the dual formulation.
@whuber I am explaining here, because there wasn't space in the comments...
As you know, in logistic regression, the response data are chosen to be realizations of a Bernoulli random variable $Y$. In this GLM, the conditional expectation is,
\begin{equation}
\mathbb{E}(Y|X) = \sigma\big(\mathbf{w}^\mathsf{T}\mathbf{x}\big)
\end{equation}
where $\sigma(z)$ is the logistic function
\begin{equation}
\sigma(z) = \frac{1}{1+ \exp(-z)}.
\end{equation}
Here's the likelihood
\begin{equation}
\begin{aligned}
\mathcal{L}(\mathbf{w}) = \operatorname{p}(\mathbf{y}|\mathbf{X};\mathbf{w}) &= \prod_{i=1}^n \operatorname{p}(y_i|\mathbf{x}_i;\mathbf{w})\\
&= \prod_{i=1}^n \sigma\big(\mathbf{w}^\mathsf{T}\mathbf{x}_i\big)^{y_i}\big(1-\sigma(\mathbf{w}^\mathsf{T}\mathbf{x}_i)\big)^{1-y_i}.
\end{aligned}
\end{equation}
and the negative log-likelihood becomes
\begin{align}
-\ell(\mathbf{w}) = -\log \mathcal{L}(\mathbf{w}) &= -\sum_{i=1}^{n} \log \operatorname{p}(y_i| \mathbf{x}_i;\mathbf{w})\\
&= -\sum_{i=1}^{n} \log\sigma\big(y_i \mathbf{w}^\mathsf{T}\mathbf{x}_i\big)\\
&= \sum_{i=1}^{n} \log\big(1+ \exp\big(-y_i\mathbf{w}^\mathsf{T}\mathbf{x}_i\big)\big)
\end{align}
where the second equality uses the relabeling $y_i \in \{-1,1\}$ (rather than the $\{0,1\}$ coding above), under which $\operatorname{p}(y_i|\mathbf{x}_i;\mathbf{w}) = \sigma\big(y_i \mathbf{w}^\mathsf{T}\mathbf{x}_i\big)$, and the last equality follows from the definition of $\sigma$.
The $\ell_2$-regularization term is the result of MAP estimation of the parameters with a Gaussian prior.
36,437 | logistic regression. How to get dual function? | Why not just take the partial derivative of the minimization function with respect to the unknown parameters? You can find a lot of material on the web.
36,438 | logistic regression. How to get dual function? | Use the tangent-line lower bound of the convex function $\log(1+e^{x})$:
$$
\log (1+e^{x}) \ge \log(1+e^{x_1}) + (x-x_1)\,\sigma(x_1)
$$
where $\sigma(x_1) = e^{x_1}/(1+e^{x_1})$ is the derivative of $\log(1+e^{x})$ at $x_1$.
Note that the factor multiplying $x$ on the right is your dual variable, which lies in $[0,1]$; now the problem is quadratic in $w$ and can be solved.
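Reading $x$ as the exponent of the logistic loss (so the loss term is $\log(1+e^{x})$ with $x=-y\,\mathbf{w}^\mathsf{T}\mathbf{x}$), the tangent bound above can be checked numerically. A small sketch (mine, purely illustrative):

```python
import math

def softplus(x):
    """log(1 + exp(x)); convex in x."""
    return math.log1p(math.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tangent_lower_bound(x, x1):
    """Softplus is convex, so it lies above its tangent at any point x1.
    The slope sigmoid(x1) is in (0, 1) -- this is the dual variable."""
    return softplus(x1) + (x - x1) * sigmoid(x1)

x1 = 0.7
ok = all(softplus(x) >= tangent_lower_bound(x, x1) - 1e-12
         for x in [-3.0, -1.0, 0.0, 0.7, 2.5])
```

The bound is tight at the expansion point, which is what lets the maximization over the dual variable recover the original loss.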
36,439 | Binomial regression asymptotes | Interesting question. A possibility that comes to my mind is including an additional parameter $p\in[0,1]$ in order to control the upper bound of the 'link' function.
Let $\{{\bf x}_j,y_j,n_j\}$, $j=1,\dotsc,n$ be independent observations, where $y_j\sim \text{Binomial}\{n_j,pF({\bf x}_j^T\beta)\}$, $p\in[0,1]$, ${\bf x}_j=(1,x_{j1}, \dotsc ,x_{jk})^T$ is a vector of explanatory variables, $\beta=(\beta_0,\dotsc,\beta_k)$ is a vector of regression coefficients and $F^{-1}$ is the link function. Then the likelihood function is given by
$${\mathcal L}(\beta,p) \propto \prod_{j=1}^n p^{y_j}F({\bf x}_j^T\beta)^{y_j}[1-pF({\bf x}_j^T\beta)]^{n_j-y_j}$$
The next step is to choose a link, say the logistic distribution and find the corresponding MLE of $(\beta,p)$.
Consider the following simulated toy example using a dose-response model with $(\beta_0,\beta_1,p)=(0.5,0.5,0.25)$ and $n=31$
dose = seq(-15, 15, 1)
a = 0.5
b = 0.5
n=length(dose)
sim = rep(0, n)
for(i in 1:n) sim[i] = rbinom(1, 100, 0.25*plogis(a+b*dose[i]))
plot(dose, sim/100)
lp = function(par){
  if(par[3] > 0 & par[3] < 1) return(-(n*mean(sim)*log(par[3]) +
        sum(sim*log(plogis(par[1] + par[2]*dose))) +
        sum((100 - sim)*log(1 - par[3]*plogis(par[1] + par[2]*dose))) ))
  else return(Inf)  # out-of-range p gets +Inf, since optim minimises lp
}
optim(c(0.5, 0.5, 0.25), lp)
One of the outcomes I got is $(\hat\beta_0,\hat\beta_1,\hat p)=( 0.4526650, 0.4589112, 0.2395564)$. Therefore it seems to be accurate. Of course, a more detailed exploration of this model would be necessary because including parameters in a binary regression model can be tricky and problems of identifiability or existence of the MLE may jump on the stage 1 2.
Edit
Given the edit (which changes the problem significantly), the method I proposed previously can be modified for fitting the data you have provided. Consider the model
$$\mbox{accuracy} = pF(x;\mu,\sigma),$$
where $F$ is the logistic CDF, $\mu$ is a location parameter, $\sigma$ is a scale parameter, and the parameter $p$ controls the height of the curve, similarly to the former model. This model can be fitted using nonlinear least squares. The following R code shows how to do this for your data.
rm(list=ls())
y = c(0, 0, 0, 0, 0, 1, 3, 5, 9, 13, 14, 15, 14, 15, 16, 15, 14,
14, 15)/100
x = 1:length(y)
N = length(y)
plot(y ~ x)
Data = data.frame(x,y)
nls_fit = nls(y ~ p*plogis(x,m,s), Data, start =
list(m = 10, s = 1, p = 0.2) )
lines(Data$x, predict(nls_fit), col = "red")
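For what it's worth, the same scaled-logistic least-squares fit can be reproduced outside R; here is an illustrative SciPy sketch (function and variable names are mine) using the same data and starting values as the R code above:

```python
import numpy as np
from scipy.optimize import curve_fit

y = np.array([0, 0, 0, 0, 0, 1, 3, 5, 9, 13, 14, 15,
              14, 15, 16, 15, 14, 14, 15]) / 100.0
x = np.arange(1, len(y) + 1, dtype=float)

def scaled_logistic(x, m, s, p):
    """p times a logistic CDF with location m and scale s; p is the upper asymptote."""
    return p / (1.0 + np.exp(-(x - m) / s))

# Starting values mirror the nls() call: m = 10, s = 1, p = 0.2
(m, s, p), _ = curve_fit(scaled_logistic, x, y, p0=(10.0, 1.0, 0.2))
```

The fitted `p` should come out near the observed plateau of roughly 0.15, with the midpoint `m` a bit below 9.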
36,440 | Binomial regression asymptotes | I would use the maximum of the accuracy vector as the total possible number of successes. (This is a biased estimate of the true maximum number of successes, but it should work fairly well if you have enough data).
accuracy <- c(0, 0, 0, 0, 0, 1, 3, 5, 9, 13, 14, 15, 14, 15, 16,
15, 14, 14, 15)
x <- 1:length(accuracy)
glmx <- glm(cbind(accuracy, max(accuracy)-accuracy) ~ x,
family=binomial)
ndf <- data.frame(x=x)
ndf$fit <- predict(glmx, newdata=ndf, type="response")
plot(accuracy/max(accuracy) ~ x)
with(ndf, lines(fit ~ x))
This creates a plot that looks like:
36,441 | Binomial regression asymptotes | Note that binomial regression is based on having a binary response for each individual case. Each individual response has to be able to take one of two values. If there is some limit to the proportion, then there must also have been some cases which could only take one value.
It sounds like you are not dealing with binary data but with data over a finite range. If this is the case, then beta regression sounds more appropriate. We can write the beta distribution as:
$$p(d_i|L,U,\mu_i,\phi)=\frac{(d_i-L)^{\mu_i\phi-1}(U-d_i)^{(1-\mu_i)\phi-1}}{B(\mu_i\phi,(1-\mu_i)\phi)(U-L)^{\phi-1}}$$
You then set $g(\mu_i)=x_i^T\beta$, the same as any link function which maps the interval $[L,U]$ into the reals. There is an R package which can be used to fit these models, though I think you need to know the bounds. If you do, then redefine the new variable $y_i=\frac{d_i-L}{U-L}$.
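After the rescaling $y_i=(d_i-L)/(U-L)$, the density above is an ordinary Beta with shapes $a=\mu_i\phi$ and $b=(1-\mu_i)\phi$. A small sketch (mine; the bounds and values are made up) checking this mean/precision parameterization against SciPy:

```python
from scipy.stats import beta

mu, phi = 0.3, 8.0                   # mean and precision
a, b = mu * phi, (1.0 - mu) * phi    # usual Beta(a, b) shape parameters

L, U = 2.0, 10.0                     # assumed known bounds
d = 4.0                              # an observation in [L, U]
y = (d - L) / (U - L)                # rescaled into (0, 1)

# Density of d = density of y divided by the Jacobian (U - L).
dens_d = beta.pdf(y, a, b) / (U - L)
mean_y = beta.mean(a, b)             # equals mu under this parameterization
```

Under this parameterization $E(y_i)=\mu_i$, which is exactly the quantity the link $g$ regresses on the covariates.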
36,442 | What are the primary differences between Taxometric analyses (e.g., MAXCOV, MAXEIG) and Latent Class analyses? | See Tueller (2010), Tueller and Lubke (2010), and [Ruscio et al.'s book][3] for complete detail on what is summarized below. Taxometric procedures generally work by computing simple statistics on subsets of sorted data. MAMBAC uses the mean, MAXCOV uses the covariance, and MAXEIG uses the eigenvalue. Latent class analysis is a special case of the general latent variable mixture model (LVMM). The LVMM specifies a model for the data which may include latent classes, latent factors, or both. Parameters of the model are obtained using maximum likelihood or Bayesian estimation. Refer to the literature above for complete detail.
More important than the mathematical underpinnings (which are beyond the scope of this forum) are the hypotheses that can be tested under each approach. Taxometric procedures test the hypothesis
H1: Two classes explain all (or most) of the observed correlation among a set of indicators
H0: One (or more) continuous underlying dimension(s) explain all of the observed correlation among a set of indicators
Usually the CCFI is used to ascertain which hypothesis to reject/retain. See [John Ruscio's book on the topic][4]. Taxometric procedures can test only these two hypotheses and no others.
Used alone, latent class analysis cannot test the dimensional hypothesis, H0 above.
However, latent class analysis can test the following alternative hypotheses:
H1a: Two classes explain all of the observed correlation among a set of indicators
H1b: Three classes explain all of the observed correlation among a set of indicators
...
H1k: k classes explain all of the observed correlation among a set of indicators
To test H0 from above in a latent variable framework, fit a single factor confirmatory factor analysis (CFA) model to the data (call this H0cfa which is different from H0 - H0 only tests a hypothesis of fit under the taxometric framework, but doesn't produce parameter estimates as you would get by fitting a CFA model). To compare H0cfa to H1a, H1b, ..., H1k, use the Bayesian Information Criterion (BIC) ala [Nylund et al. (2007)][5].
To summarize thus far, taxometric procedures can look at two vs. one class solutions, while latent class + CFA can test one vs. two or more class solutions. We see that taxometric procedures test a subset of the hypotheses tested by latent class + CFA model comparisons.
All of the hypotheses presented thus far are extremes at two ends of a spectrum. The more general hypothesis is that some number of latent classes and some number of latent dimensions (or latent factors) best explain the data. The approaches described above reject this outright, which is a very strong assumption. Put differently, a latent class model and a taxometric procedure that lead to a conclusion of taxonic structure (rather than dimensional) assume no within-class individual differences besides random error. In your context, this is equivalent to saying that within the chronic pain class, there is no systematic variation in the tendency to develop chronic pain, only random chance.
The weakness of this assumption is better illustrated with an example from psychopathology.
Say you have a set of indicators for depression, and your taxometric and/or latent class models lead you to conclude there is a depressed class and a non-depressed class. These models implicitly assume no variance in severity of depression within class (beyond random error or noise). In other words, you are depressed, or you are not, and among the depressed everyone is equally depressed (beyond variation in error prone observed variables). So we only need one treatment for depression at one dose level! It is easily seen that this assumption is absurd for depression, and is often just as limited for most other research contexts.
To avoid making this assumption, use a factor mixture modeling approach following the papers of [Lubke and Muthen and Lubke and Neale][6].
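The BIC comparison mentioned above (à la Nylund et al., 2007) is just $\mathrm{BIC} = k\ln n - 2\ln L$ computed for each candidate model, preferring the smallest value. A toy sketch (the log-likelihoods and parameter counts below are hypothetical, only to show the bookkeeping):

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian Information Criterion; smaller is better."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

# Hypothetical fits: (model, maximized log-likelihood, free parameters)
fits = [("1-factor CFA", -1520.0, 12),
        ("2-class LCA",  -1500.0, 17),
        ("3-class LCA",  -1498.0, 26)]
n = 400
scores = {name: bic(ll, k, n) for name, ll, k in fits}
best = min(scores, key=scores.get)
```

Here the 3-class model improves the likelihood only slightly over the 2-class model, so its extra parameters are penalized and the 2-class solution wins.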
36,443 | Does it make sense to consider non-binary logit? | There are situations where this sort of thing makes sense. For instance, say you are trying to determine whether someone is likely to like some particular ice cream as a function of the ingredients. You could get a sample of, say, 100 people and get them to taste each ice cream and say whether they like it or not. If you assume the sample is from some population, then whether any particular individual likes the ice cream is a Bernoulli trial, with probability that depends on the ingredients. You could either build your model with a dataset with one pattern per individual for each flavour of ice cream, or you could just have one pattern for each ice cream where the target was the proportion of the panel that liked it. The log-loss is the same (up to a multiplicative constant) either way. I have done this before (in a protein binding problem, which is much more difficult to explain than ice cream) and it worked reasonably well.
This suggests that the logit model may be appropriate for modelling some probabilities and some proportions, as long as they can be interpreted as arising from some form of Bernoulli experiment.
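The claim that the log-likelihood is the same whether you use one 0/1 row per panelist or a single aggregated count per flavour can be verified directly; a small sketch (mine, purely illustrative):

```python
import math

def loglik_bernoulli(p, ys):
    """Sum of Bernoulli log-likelihoods for 0/1 responses ys at probability p."""
    return sum(y * math.log(p) + (1 - y) * math.log(1 - p) for y in ys)

def loglik_aggregated(p, n, k):
    """Same likelihood written with the count k of successes out of n
    (dropping the binomial coefficient, which does not depend on p)."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

ys = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]   # 6 of 10 panelists liked the flavour
p = 0.55
same = abs(loglik_bernoulli(p, ys) - loglik_aggregated(p, len(ys), sum(ys))) < 1e-12
```

Because the two objectives agree for every `p`, fitting the logistic model to per-subject rows or to aggregated proportions gives the same parameter estimates.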
36,444 | Does it make sense to consider non-binary logit? | It definitely makes sense in the case of values between 0 and 1. Consider if you have training data with identical X but different Y. If you average the Ys for those X (and keep the proportion of those samples in the original data set unchanged), you'll arrive at the same optimal solution.
Another way of thinking about it is that your labels are inherently probabilistic. For example, you are trying to summarize or speed up an existing complicated function with a log linear one. Say you have an expensive Monte Carlo simulation solution to a problem, and you want to make a fast approximation to it. You could use the simulation to generate data to train a logistic regressor, and here your labels are not going to be exactly 0 or 1.
On the other hand, trying to predict outcomes outside of the [0, 1] interval seems wrong, since they are outside the range of the logistic function.
36,445 | When is half normal distribution useful? | In Bayesian statistics the half-normal, with a sufficiently large scale parameter, can be used as a noninformative prior distribution on the SD of a standard distribution. This is suggested in, for example:
Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1, 515-534.
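For reference, the half-normal with scale $\sigma$ is the distribution of $|Z|$ for $Z\sim N(0,\sigma^2)$, with density $2\varphi(x/\sigma)/\sigma$ on $x\ge 0$. A quick sketch (mine) checking this against SciPy's parameterization:

```python
import math
from scipy.stats import halfnorm

def halfnormal_pdf(x, sigma=1.0):
    """Density of |Z| for Z ~ N(0, sigma^2): the normal density folded at zero."""
    if x < 0:
        return 0.0
    return math.sqrt(2.0 / math.pi) / sigma * math.exp(-x * x / (2.0 * sigma * sigma))

vals_match = all(abs(halfnormal_pdf(x, 2.0) - halfnorm.pdf(x, scale=2.0)) < 1e-12
                 for x in [0.0, 0.5, 1.0, 3.0])
```

Making the scale large flattens this density over the plausible range of the SD, which is what makes it usable as a weakly informative prior.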
36,446 | When is half normal distribution useful? | In quality control, there is something called a moving-range statistic, which is the absolute value of successive differences. The half-normal serves as the basis of the chart, as discussed in the following article:
http://www.tandfonline.com/doi/abs/10.1080/08982119508904612#preview
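Concretely, the moving range is $MR_i=|x_i-x_{i-1}|$; for i.i.d. $N(\mu,\sigma^2)$ data the successive difference is $N(0,2\sigma^2)$, so $MR$ is half-normal with scale $\sigma\sqrt{2}$ and mean $2\sigma/\sqrt{\pi}\approx 1.128\,\sigma$ (the familiar control-chart constant $d_2$ for $n=2$). A small simulation sketch (mine, purely illustrative):

```python
import random
import statistics

def moving_ranges(xs):
    """Absolute values of successive differences."""
    return [abs(b - a) for a, b in zip(xs, xs[1:])]

random.seed(0)
sigma = 1.0
xs = [random.gauss(10.0, sigma) for _ in range(20000)]
mr = moving_ranges(xs)

# For normal data E[MR] = 2 * sigma / sqrt(pi), about 1.128 * sigma.
mean_mr = statistics.fmean(mr)
```

The sample mean of the moving ranges should sit close to the half-normal mean of about 1.128 here.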
36,447 | Multi-dimensional goodness of fit | You can always bin/discretize your data (even in higher dimensions) and use a chi-square test.
A review of several adaptations of the KS test for two dimensions can be found here.
36,448 | Show series of exponential decay distributions in one chart | One simple adjustment to your current graphic, instead of producing all of the elements of the box-plot (even the minimalist Tufte-style one), would be to produce a line chart connecting the summary statistics that the box plot is displaying (median, quartiles, mean, outer hinges, whatever). Below is an example displaying the 90th and 99th percentiles of a simulated distribution of 50 observations over 100 weeks.
Connecting the lines makes the temporal connections between the summary statistics from week to week much easier to see, and improves the data-ink ratio of the plot. Even Tufte in The Visual Display of Quantitative Information has an example where connecting the lines in a display allows one to discern periodicity in a temporal series that would be very difficult to see in a scatterplot display (and I assume the same problem would extend to the box-plot display).
What exactly you should display in the lines would take more insight into the nature of your data and what you are interested in (and could change as the nature of your data changes over time as well). To get a broad sense of the distribution, I believe different quantiles (including the median) can be informative, although it may take some experimentation to see which quantiles are informative from week to week (in this display the 99th percentile is quite noisy).
Line charts like this could also be extended to different statistical summaries (such as the skew of the distribution), although I believe the quantiles as a first run are the most informative. Also if you are interested in identifying outliers you may want to consider including the dots of outliers (defined in whatever way suits your fancy) in these same line plots. There was an interesting discussion of outliers for skewed data in this question on the site, Is there a boxplot variant for Poisson distributed data?, and the questions tagged with control-chart I believe would be applicable.
Even though from just this you can probably imagine generating a multitude of different lines on one plot, it is fairly easy to plot too much information in one graphic. A rule-of-thumb I try to abide by is that a plot should not have any more than 4-5 data elements (where here a data element would be a line). Even that is frequently too many. To attempt to get around this problem, try to make a consistent template, in which the axes of the plots are consistent, so you can make accurate comparisons between plots. Or if your software allows it, make a series of small-multiple plots (again using the same axes for all plots). Then even if you have, say, the outer hinges and outliers on one plot and the median and quartiles on another, you may be able to discern patterns between those two plots. And then you can combine the outliers and the quartiles into one plot for more scrutiny if you think you see a pattern between them.
EDIT: As an example of the smoothing that @whuber is talking about, here is a similar plot to the one above (generated by a simulated process in exactly the same manner), except that a loess smoother is applied to the lines.
I couldn't bring myself to not plot the original data, but I just made the smoothed lines thicker (and gave them color) to bring them to the foreground of the image, and left the original lines thinner and a light grey color so they are merely in the background of the image (and hence are not as distracting). The smoother allows one to assess general trends that can be obfuscated by the variance of the series.
Tukey has other suggestions I did not display here, such as plotting the alpha hull of all the observations (and labeling those observations that make up the vertices). Some more food for thought. | Show series of exponential decay distributions in one chart | One simple adjustment to your current graphic production would be, instead of producing all of the elements of the box-plot (even the minimalist Tufte style one), would be to produce a line chart conn | Show series of exponential decay distributions in one chart
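The per-week summary lines described above can be computed with nothing more than a quantile function and, as a crude stand-in for the loess smoother, a moving average. This sketch is my own (using simulated exponential data rather than the answer's exact process) and just produces the numbers; any charting tool can then draw the lines.

```python
import random
import statistics

random.seed(7)
weeks = 100
obs_per_week = 50
data = [[random.expovariate(1.0) for _ in range(obs_per_week)] for _ in range(weeks)]

def pct(values, p):
    # statistics.quantiles with n=100 returns the 99 percentile cut points.
    cuts = statistics.quantiles(values, n=100, method="inclusive")
    return cuts[p - 1]

# One summary line per statistic: here the 90th and 99th percentiles of each week.
p90 = [pct(week, 90) for week in data]
p99 = [pct(week, 99) for week in data]

def moving_average(series, window=9):
    # Crude smoother standing in for loess, to bring out the trend.
    half = window // 2
    return [sum(series[max(0, i - half): i + half + 1]) /
            len(series[max(0, i - half): i + half + 1])
            for i in range(len(series))]

p99_smooth = moving_average(p99)
```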
36,449 | Show series of exponential decay distributions in one chart | Have you considered perhaps focusing on having a 2D plot of number of tickets on the x-axis vs number of people that have submitted that many tickets on the y-axis, and showing each cohort on a plot by itself? Then you could flip from cohort to cohort (like a Rolodex) to get a sense of change over time using animation, or let time be the z-axis in a 3D plot.
(edit: fixed y-axis from "number in cohort" to "number who have submitted that many tickets") | Show series of exponential decay distributions in one chart | Have you considered perhaps focusing on having a 2D plot of number of tickets on the x-axis vs number of people that have submitted that many tickets on the y-axis, and showing each cohort on a plot b | Show series of exponential decay distributions in one chart
36,450 | What does "case-control" and "cross-sectional" mean in the context of logistic modeling? | First, the definitions, then a slight twist on the statement you posted, then hopefully an illuminating answer.
Cross-Sectional Study: A study where you take a "snapshot" of a population at a single point in time. You're not following anyone, it's simply a "At this point, do you have or not have a disease" - along with covariates of course. A cross-section - hence the name.
Case-Control Study: A study usually used when a cohort study or RCT is going to be difficult, if not impossible. You sample cases from some source, and then a number of controls, usually in some ratio to the number of cases (1:1, 2:1, etc.). Again, you're not following anyone, you're backtracking. Rather than asking "what exposures lead to disease?" you're asking "what exposures are more common in the group that got disease?".
What the statement means is that in either case, you're limited in what you can estimate. In order to calculate a risk (and thus a risk ratio) you need to know, for a population of n people initially without disease, how many would develop disease during your follow-up period (incidence). In a cross-sectional study, you technically only have prevalence, not incidence. This is the twist - the statement you posted is technically wrong. You can also - and often should - estimate a Prevalence Ratio from a cross-sectional study, as well as an Odds Ratio.
In a case-control study, you don't have the population - you just have the cases, and a basket of non-cases - you have no idea what happened in population n. So while you can calculate odds, it's literally impossible to calculate the risk; it requires information you do not have.
However, in cases where disease is rare (~<10% prevalence), the Odds Ratio should approximate the risk ratio for a similarly conducted cohort study.
What this all means statistically is that these relatively simplistic (and thus fairly flexible) study designs are somewhat restrictive in what you can do - you're largely confined to logistic regression and the calculation of an odds ratio. | What does "case-control" and "cross-sectional" mean in the context of logistic modeling? | First, the definitions, then a slight twist on the statement you posted, then hopefully an illuminating answer.
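A quick numeric illustration of that rare-disease approximation (the 2x2 table is invented for the example): with 2% risk in the exposed group and 1% in the unexposed, the odds ratio is already very close to the risk ratio.

```python
# Hypothetical 2x2 table: 1000 exposed (20 cases), 1000 unexposed (10 cases)
a, b = 20, 980    # exposed:   cases, non-cases
c, d = 10, 990    # unexposed: cases, non-cases

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
risk_ratio = risk_exposed / risk_unexposed   # needs the full cohort denominators

odds_ratio = (a / b) / (c / d)               # computable from cases and controls alone

print(risk_ratio, round(odds_ratio, 3))      # 2.0 vs roughly 2.02
```

In a case-control design only the odds ratio is available, but because disease here is rare (1-2% prevalence) it approximates the risk ratio well.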
Cross-Sectional Study: A study where you take a "snapshot" of a population at a single p | What does "case-control" and "cross-sectional" mean in the context of logistic modeling?
First, the definitions, then a slight twist on the statement you posted, then hopefully an illuminating answer.
Cross-Sectional Study: A study where you take a "snapshot" of a population at a single point in time. You're not following anyone, it's simply a "At this point, do you have or not have a disease" - along with covariates of course. A cross-section - hence the name.
Case-Control Study: A study usually used when a cohort study or RCT is going to be difficult, if not impossible. You sample cases from some source, and then a number of controls, usually in some ratio to the number of cases (1:1, 2:1, etc.). Again, you're not following anyone, you're back tracking. Rather than saying "what exposures lead to disease" you're asking "what exposures are more common in the group that got disease?".
What the statement means is that in either case, you're limited to what you can estimate. In order to calculate a risk (and thus a risk ratio) you need to know of a population n with no diseased people, how many people would get disease in your follow-up period (incidence). In a cross-sectional study, you technically only have prevalence, not incidence. This is the twist - the statement you posted is technically wrong. You can also - and often should - estimate a Prevalence Ratio from a cross-section study, as well as an Odds Ratio.
In a case-control study, you don't have the population - you just have the cases, and a basket of non-cases - you have no idea what happened in population n. So while you can calculate odds, its literally impossible to calculate the risk, it requires information you do not have.
However, in cases where disease is rare (~<10% prevalence), the Odds Ratio should approximate the risk ratio for a similarly conducted cohort study.
What this all means statistically is that these relatively simplistic (and thus fairly flexible) study designs are somewhat restrictive in what you can do - you're largely confined to logistic regression and the calculation of an odds ratio. | What does "case-control" and "cross-sectional" mean in the context of logistic modeling?
First, the definitions, then a slight twist on the statement you posted, then hopefully an illuminating answer.
Cross-Sectional Study: A study where you take a "snapshot" of a population at a single p |
36,451 | Matching loss function for tanh units in a neural net | I think I've derived something that'll work:
$$-\frac{1}{2}\left((1-x_0)\log|1-\tanh(x)| + (1+x_0)\log|1+\tanh(x)|\right)$$
The derivative of this quantity with respect to $x$ is $\tanh(x) - x_0$, which is precisely what I need.
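A quick finite-difference check of that claim (my own verification sketch, using natural log): the centered difference of the loss should match $\tanh(x) - x_0$ at arbitrary points.

```python
import math

def loss(x, x0):
    # -(1/2) * ((1 - x0) * log|1 - tanh(x)| + (1 + x0) * log|1 + tanh(x)|)
    t = math.tanh(x)
    return -0.5 * ((1 - x0) * math.log(abs(1 - t)) + (1 + x0) * math.log(abs(1 + t)))

def numeric_grad(x, x0, h=1e-6):
    # Centered finite difference approximation of d(loss)/dx
    return (loss(x + h, x0) - loss(x - h, x0)) / (2 * h)

checks = [(0.3, 0.9), (-1.2, -0.5), (2.0, 0.0)]
errors = [abs(numeric_grad(x, x0) - (math.tanh(x) - x0)) for x, x0 in checks]
print(errors)  # all tiny
```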
$$-\frac{1}{2}((1-x_0)log|1-tanh(x)| + (1+x_0)log|1+tanh(x)|)$$
The derivative of this quantity w/respect to $x$ is $tanh(x) - x_0$, which is precisely wha | Matching loss function for tanh units in a neural net
I think I've derived something that'll work:
$$-\frac{1}{2}((1-x_0)log|1-tanh(x)| + (1+x_0)log|1+tanh(x)|)$$
The derivative of this quantity w/respect to $x$ is $tanh(x) - x_0$, which is precisely what I need. | Matching loss function for tanh units in a neural net
I think I've derived something that'll work:
$$-\frac{1}{2}((1-x_0)log|1-tanh(x)| + (1+x_0)log|1+tanh(x)|)$$
The derivative of this quantity w/respect to $x$ is $tanh(x) - x_0$, which is precisely wha |
36,452 | Matching loss function for tanh units in a neural net | The loss function is chosen according to the noise process assumed to contaminate the data, not the output layer activation function. The purpose of the output layer activation function is to apply whatever constraints ought to apply on the output of the model. There is a correspondance between loss function and activation function that can simplify the implementation of the model, but that is pretty much the only real benefit (c.f. link functions in Generalised Linear Models) as neural net people generally don't go in much for analysis of parameters etc. Note the tanh function is a scaled and translated version of the logistic sigmoidal function, so a modified logistic loss with recoded targets might be a good match from that perspective. | Matching loss function for tanh units in a neural net | The loss function is chosen according to the noise process assumed to contaminate the data, not the output layer activation function. The purpose of the output layer activation function is to apply w | Matching loss function for tanh units in a neural net
The loss function is chosen according to the noise process assumed to contaminate the data, not the output layer activation function. The purpose of the output layer activation function is to apply whatever constraints ought to apply on the output of the model. There is a correspondence between loss function and activation function that can simplify the implementation of the model, but that is pretty much the only real benefit (c.f. link functions in Generalised Linear Models) as neural net people generally don't go in much for analysis of parameters etc. Note the tanh function is a scaled and translated version of the logistic sigmoidal function, so a modified logistic loss with recoded targets might be a good match from that perspective.
36,453 | Significance and credibility intervals for interaction term in logistic regression | No, your calculation isn't correct, because:
a) $b_1$ and $b_3$ are probably correlated in the posterior distribution, and
b) even if they weren't, that isn't how you would calculate it (think of the law of large numbers).
But never fear, there is a really easy way to do this in WinBUGS. Just define a new variable:
b1b3 <- b1 + b3
and monitor its values.
EDIT:
For a better explanation of my first point, suppose the posterior has a joint multivariate normal distribution (it won't in this case, but it serves as a useful illustration). Then the parameter $b_i$ has distribution $N(\mu_i,\sigma_i^2)$, and so the 95% credible interval is $(\mu_i - 1.96 \sigma_i,\mu_i + 1.96 \sigma_i)$ - note that this only depends on the mean and variance.
Now $b_1+b_3$ will have distribution $N(\mu_1 + \mu_3,\sigma_1^2 + 2 \rho_{13}\sigma_1\sigma_3 + \sigma_3^2)$. Note that the variance term (and hence the 95% credible interval) involves the correlation term $\rho_{13}$ which cannot be found from the intervals for $b_1$ or $b_3$.
(My point about the law of large numbers was just that the standard deviation of the sum of 2 independent random variables is less than the sum of the standard deviations.)
As for how to implement it in WinBUGS, something like this is what I had in mind:
model {
  a ~ dXXXX
  b1 ~ dXXXX
  b2 ~ dXXXX
  b3 ~ dXXXX
  b1b3 <- b1 + b3
  for (i in 1:N) {
    logit(p[i]) <- a + b1*x[i] + b2*w[i] + b3*x[i]*w[i]
    y[i] ~ dbern(p[i])
  }
}
At each step of the sampler, the node b1b3 will be updated from b1 and b3. It doesn't need a prior as it is just a deterministic function of two other nodes. | Significance and credibility intervals for interaction term in logistic regression | No, your calculation isn't correct, because:
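The same summary can be done outside BUGS: given posterior draws of $b_1$ and $b_3$, add them draw-by-draw and take quantiles. The sketch below is a toy illustration of my own, faking correlated-normal "draws" (assumed correlation 0.8) rather than using real MCMC output; it also shows point a) in action, since the spread of the sum sits well above the naive $\sqrt{sd_1^2 + sd_3^2}$.

```python
import math
import random

random.seed(3)
rho = 0.8  # assumed posterior correlation between b1 and b3, for the demo
draws = []
for _ in range(50_000):
    z1 = random.gauss(0, 1)
    z2 = random.gauss(0, 1)
    b1 = z1
    b3 = rho * z1 + math.sqrt(1 - rho ** 2) * z2  # correlated with b1
    draws.append((b1, b3))

sums = sorted(b1 + b3 for b1, b3 in draws)

def sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

sd_sum = sd(sums)
naive = math.sqrt(sd([b1 for b1, _ in draws]) ** 2 + sd([b3 for _, b3 in draws]) ** 2)
ci95 = (sums[int(0.025 * len(sums))], sums[int(0.975 * len(sums))])
print(sd_sum, naive, ci95)  # sd_sum near sqrt(3.6), clearly above the naive sqrt(2)
```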
a) $b_1$ and $b_3$ are probably correlated in the posterior distribution, and
b) even if they weren't, that isn't how you would calculate it (think of the | Significance and credibility intervals for interaction term in logistic regression
No, your calculation isn't correct, because:
a) $b_1$ and $b_3$ are probably correlated in the posterior distribution, and
b) even if they weren't, that isn't how you would calculate it (think of the law of large numbers).
But never fear, there is a really easy way to do this in WinBUGS. Just define a new variable:
b1b3 <- b1 + b3
and monitor its values.
EDIT:
For a better explanation of my first point, suppose the posterior has a joint multivariate normal distribution (it won't in this case, but it serves as a useful illustration). Then the parameter $b_i$ has distribution $N(\mu_i,\sigma_i^2)$, and so the 95% credible interval is $(\mu_i - 1.96 \sigma_i,\mu_i + 1.96 \sigma_i)$ - note that this only depends on the mean and variance.
Now $b_1+b_3$ will have distribution $N(\mu_1 + \mu_3,\sigma_1^2 + 2 \rho_{13}\sigma_1\sigma_3 + \sigma_3^2)$. Note that the variance term (and hence the 95% credible interval) involves the correlation term $\rho_{13}$ which cannot be found from the intervals for $b_1$ or $b_3$.
(My point about the law of large numbers was just that the standard deviations of the sum of 2 independent random variables is less than the sum of the standard deviations.)
As for how to implement it in WinBUGS, something like this is what I had in mind:
model {
a ~ dXXXX
b1 ~ dXXXX
b2 ~ dXXXX
b3 ~ dXXXX
b1b3 <- b1 + b3
for (i in 1:N) {
logit(p[i]) <- a + b1*x[i] + b2*w[i] + b3*x[i]*w[i]
y[i] ~ dbern(p[i])
}
}
At each step of the sampler, the node b1b3 will be updated from b1 and b3. It doesn't need a prior as it is just a deterministic function of two other nodes. | Significance and credibility intervals for interaction term in logistic regression
No, your calculation isn't correct, because:
a) $b_1$ and $b_3$ are probably correlated in the posterior distribution, and
b) even if they weren't, that isn't how you would calculate it (think of the |
36,454 | Significance and credibility intervals for interaction term in logistic regression | A few thoughts:
1) I'm not sure whether the fact that this is Bayesian matters.
2) I think your approach is correct
3) Interactions in logistic regression are tricky. I wrote about this in a paper that is about SAS PROC LOGISTIC, but the general idea holds. That paper is on my blog and is available here | Significance and credibility intervals for interaction term in logistic regression | A few thoughts:
1) I'm not sure whether the fact that this is Bayesian matters.
2) I think your approach is correct
3) Interactions in logistic regression are tricky. I wrote about this in a paper th | Significance and credibility intervals for interaction term in logistic regression
A few thoughts:
1) I'm not sure whether the fact that this is Bayesian matters.
2) I think your approach is correct
3) Interactions in logistic regression are tricky. I wrote about this in a paper that is about SAS PROC LOGISTIC, but the general idea holds. That paper is on my blog and is available here | Significance and credibility intervals for interaction term in logistic regression
A few thoughts:
1) I'm not sure whether the fact that this is Bayesian matters.
2) I think your approach is correct
3) Interactions in logistic regression are tricky. I wrote about this in a paper th |
36,455 | Significance and credibility intervals for interaction term in logistic regression | I'm currently having a similar problem. I also believe that the approach to calculate the total effect of w is correct. I believe this can be tested via
h0: b2 + b3 * mean(x) = 0;
ha: b2 + b3 * mean(x) != 0
However, I stumbled upon a paper by Ai/Norton, who claim that "the magnitude of the interaction effect in nonlinear models does not equal the marginal effect of the interaction term, can be of opposite sign, and its statistical significance is not calculated by standard software." (2003, p. 123)
So perhaps you should try to apply their formulas. (And if you understand how to do that, please tell me.)
PS. This seems to resemble the Chow test for logistic regressions. Alfred DeMaris (2004, p. 283) describes a test for this.
References:
Ai, Chunrong / Norton, Edward (2003): Interaction terms in logit and probit models, Economics Letters 80, p. 123–129
DeMaris, Alfred (2004): Regression with social data: modeling continuous and limited response variables. John Wiley & Sons, Inc., Hoboken NJ | Significance and credibility intervals for interaction term in logistic regression | I'm currently having a similar problem. I also believe that the approach to calculate the total effect of w is correct. I believe this can be tested via
h0: b2 + b3 * mean(x) = 0;
ha: b2 + b3 * mean(x | Significance and credibility intervals for interaction term in logistic regression
I'm currently having a similar problem. I also believe that the approach to calculate the total effect of w is correct. I believe this can be tested via
h0: b2 + b3 * mean(x) = 0;
ha: b2 + b3 * mean(x) != 0
However, I stumbled upon a paper by Ai/Norton, who claim that "the magnitude of the interaction effet in nonlinear models does not equal the marginal effect of the interaction term, can be of opposite sign, and its statistical significance is not calculated by standard software." (2003, p. 123)
So perhaps you should try to apply their formulas. (And if you understand how to do that, please tell me.)
PS. This seems to resemble the chow-test for logistic regressions. Alfred DeMaris (2004, p. 283) describes a test for this.
References:
Ai, Chunrong / Norton, Edward (2003): Interaction terms in logit and probit models, Economic Letters 80, p. 123–129
DeMaris, Alfred (2004): Regression with social data: modeling continuous and limited response variables. John Wiley & Sons, Inc., Hoboken NJ | Significance and credibility intervals for interaction term in logistic regression
I'm currently having a similar problem. I also believe that the approach to calculate the total effect of w is correct. I believe this can be tested via
h0: b2 + b3 * mean(x) = 0;
ha: b2 + b3 * mean(x |
36,456 | Fitting a generalized least squares model with correlated data; use ML or REML? | Your intuition is correct, the same principles apply. I looked in Pinheiro/Bates section 5.4, where gls is introduced, but it doesn't say so explicitly, so you'll just have to trust me, I guess. :)
In Chapter 2 they go through the theory of REML and ML and you'll notice that none of the theory depends on there being any random effects, and that actually, you could write any random effect model using just correlation structure instead and fit with gls, though for complex random effects it would be quite complex. The simplest example is that a random intercept model is equivalent to a compound symmetry model. | Fitting a generalized least squares model with correlated data; use ML or REML? | Your intuition is correct, the same principles apply. I looked in Pinheiro/Bates section 5.4, where gls is introduced, but it doesn't say so explicitly, so you'll just have to trust me, I guess. :) | Fitting a generalized least squares model with correlated data; use ML or REML?
Your intuition is correct, the same principles apply. I looked in Pinheiro/Bates section 5.4, where gls is introduced, but it doesn't say so explicitly, so you'll just have to trust me, I guess. :)
In Chapter 2 they go through the theory of REML and ML and you'll notice that none of the theory depends on there being any random effects, and that actually, you could write any random effect model using just correlation structure instead and fit with gls, though for complex random effects it would be quite involved. The simplest example is that a random intercept model is equivalent to a compound symmetry model.
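The n versus n - p divisor is the simplest place to see the REML/ML distinction. In the degenerate "model" with only an intercept, ML estimates the variance with a divisor of n while REML uses n - 1; the toy sketch below (my own illustration, nothing to do with gls specifically) just checks that.

```python
import random

random.seed(5)
n = 30
y = [random.gauss(10, 2) for _ in range(n)]

ybar = sum(y) / n
rss = sum((yi - ybar) ** 2 for yi in y)

ml_var = rss / n          # ML: biased downward, divisor n
reml_var = rss / (n - 1)  # REML: accounts for the estimated mean, divisor n - 1

print(ml_var, reml_var)
```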
36,457 | Linear regression, heteroscedasticity, White's test interpretation? | The original White paper where the test statistic was proposed is an enlightening read. This excerpt I think is of interest here:
...the null hypothesis maintains not
only that the errors are
homoskedastic, but also that they are
independent of the regressors, and
that the model is correctly
specified... Failure of any of these
conditions cal lead to a statistically
significant test statistic.
Assuming that the model is correctly specified, your results indicate that for the non-transformed case there is a clear presence of heteroskedasticity, and in the log case there is no heteroskedasticity at the 5% significance level, but there is at 10%. This means that in the log case further tests should be made, since the test "barely" accepts the null hypothesis of no heteroskedasticity. For me personally this would be an indication that maybe the model specification is not correct and other heteroskedasticity tests should be made. Incidentally, White gives an overview of alternative tests in his article: Godfrey, Goldfeld-Quandt, etc.
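For concreteness, White's statistic in a single-regressor model can be computed by hand: regress the squared OLS residuals on $(1, x, x^2)$ and take $LM = nR^2$, referred to a chi-square with 2 degrees of freedom. This is an illustrative pure-Python sketch with a simulated heteroskedastic data set of my own; any real analysis would use a statistics package.

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols_fit(X, y):
    """Least-squares fitted values via the normal equations."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return [sum(bj * v for bj, v in zip(beta, r)) for r in X]

random.seed(1)
n = 200
x = [random.uniform(1, 10) for _ in range(n)]
y = [2 + 0.5 * xi + random.gauss(0, 0.3 * xi) for xi in x]  # error sd grows with x

# Main regression of y on (1, x); keep the squared residuals.
fit = ols_fit([[1.0, xi] for xi in x], y)
e2 = [(yi - fi) ** 2 for yi, fi in zip(y, fit)]

# White auxiliary regression: e^2 on (1, x, x^2); LM = n * R^2 ~ chi2(2) under H0.
aux_fit = ols_fit([[1.0, xi, xi * xi] for xi in x], e2)
ebar = sum(e2) / n
r2 = 1 - sum((a - f) ** 2 for a, f in zip(e2, aux_fit)) / sum((a - ebar) ** 2 for a in e2)
lm = n * r2
print(lm)  # compare with the chi2(2) 5% critical value, 5.99
```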
...the null hypothesis maintains not
only that the errors are
hom | Linear regression, heteroscedasticity, White's test interpretation?
The original White paper where the test statistic was proposed is an enlightening read. This excerpt I think is of interest here:
...the null hypothesis maintains not
only that the errors are
homoskedastic, but also that they are
independent of the regressors, and
that the model is correctly
specified... Failure of any of these
conditions cal lead to a statistically
significant test statistic.
Assuming that the model is correctly specified your results indicate that for non-transformed case there is a clear presence of heteroskedasticity, and in the log case there is no heteroskedasticity at 5% significance level, but there is at 10%. This means that in the log case further tests should be made, since the test "barely" accepts the null hypothesis of no heteroskedasticity. For me personally this would be an indication that maybe model specification is not correct and other heteroskedasticity tests should be made. Incidentally White gives an overview of alternative tests in its article: Godfrey, Goldfeld-Quandt, etc. | Linear regression, heteroscedasticity, White's test interpretation?
The original White paper where the test statistic was proposed is an enlightening read. This excerpt I think is of interest here:
...the null hypothesis maintains not
only that the errors are
hom |
36,458 | Linear regression, heteroscedasticity, White's test interpretation? | This does not answer the question of how to use the test. However, you should know that most economists generally never run those tests -- especially, applied microeconomists. Instead, you just use the Huber-White adjusted standard errors which corrects for various misspecifications in the distribution of your error terms.
That's not a sharp "statistics" answer, but it's how most practitioners in economics handle it. Godfrey Goldfeld-Quant or White's tests are barely ever used or discussed. | Linear regression, heteroscedasticity, White's test interpretation? | This does not answer the question of how to use the test. However, you should know that most economists generally never run those tests -- especially, applied microeconomists. Instead, you just use th | Linear regression, heteroscedasticity, White's test interpretation?
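For the simple-regression slope, the HC0 (Huber-White) sandwich variance has a closed form, $\sum (x_i-\bar{x})^2 \hat{e}_i^2 / S_{xx}^2$, versus the classical $s^2/S_{xx}$. A toy sketch of my own, with deterministic and strongly heteroskedastic "errors", shows the robust estimate exceeding the classical one:

```python
n = 100
x = list(range(1, n + 1))
# Deterministic "errors" alternating in sign, with magnitude growing like x^2.
y = [1.0 + 2.0 * xi + ((-1) ** i) * 0.05 * xi ** 2 for i, xi in enumerate(x, start=1)]

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]

s2 = sum(e ** 2 for e in resid) / (n - 2)
classical_var = s2 / sxx                                   # assumes constant error variance
robust_var = sum((xi - xbar) ** 2 * e ** 2
                 for xi, e in zip(x, resid)) / sxx ** 2    # HC0 sandwich estimate
print(classical_var, robust_var)
```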
36,459 | What method is used in Google's correlate? | As chl points out, the Google Correlate tutorial states that Google Correlate uses Pearson's product-moment correlation coefficient.
They don't mention which language this is implemented in, although Google does use R for some applications, so I'd be guessing that.
36,460 | Central limit theorem for sum from varied distributions | Theorem 3.1 in this book answers your first question. The key restriction in the central limit theorem is not identical distributions but independence. The result is a very nice one, since it says that for interesting sums of independent random variables the limiting distribution has to have a certain property, namely infinite divisibility. The classical central limit theorem (with iid variables with finite variances) is then only a very special case of this theorem.
Note that this is a very general answer to a very general question. Given the nature of your distributions a more precise answer can be given. For example, if the distributions satisfy Lindeberg's condition then the limiting distribution is necessarily normal (if we exclude, let us say, non-interesting cases).
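To make this concrete, here is a small hedged simulation sketch in R (the three distributions are chosen arbitrarily for illustration): sums of independent draws from different distributions, standardized by the exact mean and variance of the sum, look approximately standard normal.

```r
set.seed(1)
n.sims <- 10000
sums <- replicate(n.sims,
  sum(runif(20, -1, 1)) + sum(rexp(20, rate = 2)) + sum(rbinom(20, 1, 0.3)))

# Standardize with the theoretical mean and variance of the sum:
mu <- 20 * 0 + 20 * (1 / 2) + 20 * 0.3                # sum of individual means
v  <- 20 * (4 / 12) + 20 * (1 / 4) + 20 * 0.3 * 0.7   # sum of individual variances
z  <- (sums - mu) / sqrt(v)
c(mean(z), sd(z))    # close to 0 and 1, as a standard normal would be
```

A histogram or qqnorm(z) would show the approximate normality directly, even though none of the three component distributions is normal.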
36,461 | Central limit theorem for sum from varied distributions | mpiktas gave a very good technical answer.
There's a nice simulation of this here:
http://onlinestatbook.com/stat_sim/sampling_dist/index.html
You can manipulate the distribution at the top of this demo to show a combination of different distributions (such as bimodal), and the distribution of sample means will still be normal.
36,462 | How are zero values handled in lm()? | The problem you described here is known as the limited dependent variable problem, usually represented by truncated or censored data (the former could be seen as a special case of the latter). In this case application of the lm() function would not be the best choice, since in general it will produce biased and inconsistent estimates of the true regression line. However, truncation (dropping zeroes from the sample, as you suggested in the comment) will make this bias even larger.
Luckily, the problem is well known and there are two common options to solve it: either a Tobit model or Heckman's two-step approach. It would be useful to study any common econometric textbook on the topic (this Cross Validated link will be useful). The difference between the two models is that Heckman's method allows for either the explanatory variables or the parameter estimates to differ across the estimated parts that influence the zeros and the magnitude of the observed non-zero values.
To implement the Tobit and Heckman models in R you will need the sampleSelection or censReg packages. There are also nice vignettes corresponding to these packages, so read them first.
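As a hedged illustration of what the Tobit likelihood does (for real analyses use censReg() or sampleSelection as recommended; the data, coefficients, and starting values below are all made up), the model can be fit by direct maximum likelihood in base R: the density for uncensored observations, the censoring probability P(y* <= 0) for the zeros.

```r
set.seed(42)
x <- rnorm(200)
ystar <- 1 + 2 * x + rnorm(200)   # latent outcome
y <- pmax(ystar, 0)               # observed outcome, left-censored at zero

# Tobit negative log-likelihood
negll <- function(par, y, x) {
  mu <- par[1] + par[2] * x
  s  <- exp(par[3])               # log-scale keeps sigma positive
  -sum(ifelse(y > 0,
              dnorm(y, mu, s, log = TRUE),     # uncensored part
              pnorm(0, mu, s, log.p = TRUE)))  # censored-at-zero part
}
fit <- optim(c(0, 0, 0), negll, y = y, x = x, method = "BFGS")
fit$par[1:2]   # near the true (1, 2); compare the biased coef(lm(y ~ x))
```

Comparing fit$par with coef(lm(y ~ x)) on the same data shows the attenuation bias that the answer warns about when the censoring is ignored.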
36,463 | How are zero values handled in lm()? | What % of the predictor is 0, and what other values does it take on?
The concern is whether a predictor with such little variation (vast majority being the value of 0) would be useful in a regression model.
To approach this, you can first stratify and do one analysis with the subset of the data where the predictor is 0, and another analysis where the predictor is != 0. Once you get a sense of the structure of the data, you can decide whether to proceed with analysis using the entire dataset, and whether the predictor variable should stay in the model.
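A minimal sketch of that stratified first look (the variable names and the simulated data are purely illustrative):

```r
set.seed(5)
x <- ifelse(runif(200) < 0.8, 0, rexp(200))   # predictor dominated by zeros
y <- 0.5 * x + rnorm(200)
d <- data.frame(x, y)

table(d$x == 0)                               # how dominant are the zeros?
by(d$y, d$x == 0, summary)                    # compare y across the two strata
fit.nz <- lm(y ~ x, data = subset(d, x != 0)) # analysis in the non-zero stratum
coef(fit.nz)
```

If the zero and non-zero strata behave very differently, that is a sign the predictor's zeros carry separate information and a single lm() fit on the pooled data may mislead.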
36,464 | How to model zero inflated, over dispersed poisson time series? | You may want to check out hurdle() from the pscl package in R. It specifies two-component models, one that handles the zero counts and one that handles the positive counts. Check out the hurdle help page here.
EDIT: I just found this post in R help that describes the zeroinfl() function in R (also from the pscl package), as well as gamlss and VGAM options. However, I don't believe that the VGAM options will allow you to take into account non-independent correlation structures.
Another option is the zinb command in Stata. Fitting a model using the negative binomial family will account for the overdispersion.
I am not sure if they allow for seasonality adjustments, however.
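To see why a plain Poisson model fails on such data, here is a hedged base-R sketch that simulates a zero-inflated, overdispersed count variable (all parameter values are invented for illustration); the actual fit would then be something like pscl's hurdle(y ~ x, dist = "negbin").

```r
set.seed(7)
n <- 5000
zero  <- rbinom(n, 1, 0.3)               # 30% structural (excess) zeros
count <- rnbinom(n, size = 1.5, mu = 4)  # overdispersed negative binomial counts
y <- ifelse(zero == 1, 0L, count)

mean(y == 0)               # observed share of zeros
dpois(0, lambda = mean(y)) # share a Poisson with the same mean would predict
var(y) / mean(y)           # well above 1, i.e. overdispersed relative to Poisson
```

The observed zero fraction far exceeds what a Poisson with the same mean implies, which is exactly the gap the zero/hurdle component of these two-part models is designed to absorb.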
36,465 | How to model zero inflated, over dispersed poisson time series? | Another option for negative binomial regression in R is the excellent MASS package's glm.nb() function. UCLA's statistical consulting group has a pretty clear vignette, which unfortunately does not seem to provide any obvious insights into your autocorrelation issues, but maybe searching these various nb-regression options on R-seek or elsewhere would help?
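A hedged sketch of glm.nb() on simulated overdispersed counts (the coefficients are invented; MASS ships with standard R installations, so this runs out of the box):

```r
library(MASS)
set.seed(123)
x <- runif(500)
y <- rnbinom(500, size = 2, mu = exp(0.5 + 1.5 * x))  # overdispersed counts

fit <- glm.nb(y ~ x)
coef(fit)      # near the true (0.5, 1.5)
fit$theta      # estimated NB dispersion, near the true size of 2
```

The estimated theta is what distinguishes this from a Poisson GLM: a small theta signals strong overdispersion, while theta growing very large means glm(family = poisson) would have done about as well.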
36,466 | How to model zero inflated, over dispersed poisson time series? | If you have access to SAS 9.2 you could use PROC COUNTREG. It's a fairly new procedure and if you poke around the SAS site you can find out about it in the SAS/ETS(R) 9.2 User's Guide. COUNTREG does count modeling with or without zero inflation, has a "by" clause to split analyses, and allows both categorical and continuous variables.
36,467 | When do you consider a variable is a latent variable? | The relevant section of the classical typology distinguishes between (observed) variables, latent variables, and parameters.
Regular variables are observed and have a distribution. Latent variables are not observed and have a distribution. Parameters are not observed and do not have a distribution.
Parameters vs latent variables is indeed a modelling decision. Consider a set of survey questions that tap an underlying scale. If you expect that learning about one subject's position on the scale is potentially informative about another subject's position and you wish to be able to generalise to new subjects then you should treat position as a latent variable. If not, you may as well treat it like a parameter.
Bringing up FA and IRT is a bit confusing because some measurement models aim to estimate subject parameters e.g. Rasch models, and some aim to estimate subject latent variables e.g. FA and IRT models. All types of model have parameters in addition, associated with the items.
For a survey context there are also indexes, constructed by combining several indicators (which are observed variables). You should probably think of these as non-parametric estimators of latent variables, for when you don't feel happy with measurement model parametric assumptions. (Although personally I've never been particularly sure about their status)
36,468 | When do you consider a variable is a latent variable? | That is a modeling decision. One way to look at it can be illustrated by the following example.
A couple of hundred electrodes are attached to the head to measure brain activity: electricity, blood flow, whatever, and you get lots of signals. These measurements that you get are the observables. They are mixed in a probably very non-linear way and are not useful on their own.
Latent (also called hidden) variables model the individual variables that are responsible for generating them. They are supposed to be purer, more interpretable: how to extract the signal that is causing the eye to blink, or the mouth to open, or emotions, and many more complicated signals. Hope it helps to understand the intuition.
36,469 | Gaussian state space forecasting with regression effects | Here's the solution I came up with: The trick is to add NAs to the end of the observation data. When seeing NA as a response variable the Kalman filter algorithm will simply predict the next value and not update the state vector. This is exactly what we want to make our forecast.
library(dlm)  # provides dlmModSeas(), dlmModReg() and dlmFilter()
nAhead <- 12
# the regressors must also cover the nAhead future periods:
mod <- dlmModSeas(4) + dlmModReg(cbind(rnorm(100 + nAhead), rnorm(100 + nAhead)))
# NA responses at the end are predicted but do not update the state:
fi <- dlmFilter(c(rnorm(100), rep(NA, nAhead)), mod)
Is this correct?
36,470 | Alternative to the Wilcoxon test when the distribution isn't continuous? | I have found that the Wilcoxon statistic is still fine for this purpose and that small simulations do a good job of estimating the size and the power of the test. I suspect this is more powerful than just comparing the two medians. The main concern is lack of power due to extensive numbers of ties, but that concern attaches to any solution you can conceive of: there's no way around it (except to design instruments that offer a wider range of responses!).
To perform the simulation, concatenate the two data arrays (of lengths $n$ and $m$) into a single array (of length $n+m$). In each iteration randomly permute the elements of the array and break the result into the first $n$ and last $m$ elements.
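A hedged base-R sketch of that permutation scheme, on made-up ordinal data with many ties; the statistic is the rank sum of the first sample (ties are handled automatically by midranks):

```r
set.seed(1)
g1 <- c(1, 2, 2, 3, 3, 3, 4, 5, 5, 5)   # heavily tied scores, group 1
g2 <- c(1, 1, 2, 2, 2, 3, 3, 3, 4, 4)   # group 2
pooled <- c(g1, g2)
n <- length(g1)

obs <- sum(rank(pooled)[seq_len(n)])     # observed rank sum for group 1
perm <- replicate(10000, {
  shuffled <- sample(pooled)             # random relabelling of the pooled data
  sum(rank(shuffled)[seq_len(n)])        # rank sum of the first n elements
})
# two-sided permutation p-value:
p <- mean(abs(perm - mean(perm)) >= abs(obs - mean(perm)))
p
```

Because the null distribution is built from the data's own tie pattern, no continuity assumption is needed; this is the "small simulation" route the answer describes.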
36,471 | Hypothesis testing that one time-series of a measure of entropy doesn't belong to a population | I have a relatively simple solution to propose, Hugo. Because you're forthright about not being a statistician (often a plus ;-) but obviously can handle technical language, I'll take some pains to be technically clear but avoid statistical jargon.
Let's start by checking my understanding: you have six series of data (t[j,i], h[j,i]), 1 <= j <= 6, 1 <= i <= n[j], where t[j,i] is the time you measured the entropy h[j,i] for artifact j and n[j] is the number of observations made of artifact j.
We may as well assume t[j,i] <= t[j,i+1] is always the case, but it sounds like you cannot necessarily assume that t[1,i] = ... = t[6,i] for all i (synchronous measurements) or even that t[j,i+1] - t[j,i] is a constant for any given j (equal time increments). We might as well also suppose j=1 designates your special artifact.
We do need a model for the data. "Exponential" versus "sublinear" covers a lot of ground, suggesting we should adopt a very broad (non-parametric) model for the behavior of the curves. One thing that simply distinguishes these two forms of evolution is that the increments h[j,i+1] - h[j,i] in the exponential case will be increasing whereas for concave sublinear growth the increments will be decreasing. Specifically, the increments of the increments,
d2[j,i] = h[j,i+2] - 2*h[j,i+1] + h[j,i], 1 <= i <= n[j]-2,
will either tend to be positive (for artifact 1) or negative (for the others).
A big question concerns the nature of variation: the observed entropies might not exactly fit along any nice curve; they might oscillate, seemingly at random, around some ideal curve. Because you don't want to do any statistical modeling, we aren't going to learn much about the nature of this variation, but let's hope that the amount of variation for any given artifact j is typically about the same size for all times t[j,i]. This lets us write each entropy in the form
h[j,i] = y[j,i] + e[j,i]
where y[j,i] is the "true" entropy for artifact j at time t[j,i] and e[j,i] is the difference between the observed entropy h[j,i] and the true entropy. It might be reasonable, as a first cut at this problem, to hope that the e[j,i] act randomly and appear to be statistically independent of each other and of the y[j,i] and t[j,i].
This setup and these assumptions imply that the set of second increments for artifact j, {d2[j,i] | 1 <= i <= n[j]-2}, will not necessarily be entirely positive or entirely negative, but that each such set should look like a bunch of (potentially different) positive or negative numbers plus some fluctuation:
d2[j,i] = (y[j,i+2] - 2*y[j,i+1] + y[j,i]) + (e[j,i+2] - 2*e[j,i+1] + e[j,i]).
We're still not in a classic probability context, but we're close if we (incorrectly, but perhaps not fatally) treat the correct second increments (y[j,i+2] - 2*y[j,i+1] + y[j,i]) as if they were numbers drawn randomly from some box. In the case of artifact 1 your hope is that this is a box of all positive numbers; for the other artifacts, your hope is that it is a box of all negative numbers.
At this point we can apply some standard machinery for hypothesis testing. The null hypothesis is that the true second increments are all (or most of them) negative; the alternative hypothesis covers all the other 2^6-1 possibilities concerning the signs of the six batches of second increments. This suggests running a t-test separately for each collection of actual second increments to compare them against zero. (A non-parametric equivalent, such as a sign test, would be fine, too.) Use a Bonferroni correction with these planned multiple comparisons; that is, if you want to test at a level of alpha (e.g., 5%) to attain a desired "probability value," use the alpha/6 critical value for the test. This can readily be done even in a spreadsheet if you like. It's fast and straightforward.
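A hedged R sketch of that recipe on fake data — one noisy exponential "artifact" and five noisy concave ones, with shapes and noise levels invented: take second differences with diff(), t-test each batch against zero, and Bonferroni-correct.

```r
set.seed(2)
tgrid <- seq(0, 4, length.out = 40)
artifacts <- c(list(exp(tgrid) + rnorm(40, sd = 0.01)),     # artifact 1: convex
               replicate(5, log(1:40) + rnorm(40, sd = 0.01),
                         simplify = FALSE))                 # artifacts 2-6: concave

d2 <- lapply(artifacts, diff, differences = 2)   # h[i+2] - 2*h[i+1] + h[i]
means <- sapply(d2, mean)                        # sign: convex (+) vs concave (-)
pvals <- sapply(d2, function(v) t.test(v)$p.value)
alpha <- 0.05 / length(artifacts)                # Bonferroni for 6 planned tests

data.frame(mean.d2 = means, p.value = pvals, below.alpha = pvals < alpha)
```

Note that with this little curvature the concave artifacts may not individually clear the Bonferroni threshold even though their mean second differences are negative — a concrete instance of the low power the answer warns about below.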
This approach is not going to be the best one among all those that could be conceived: it's one of the less powerful and it still makes some assumptions (such as independence of the errors); but if it works--that is, if you find the second increments for j=1 to be significantly above 0 and all the others to be significantly below 0--then it will have done its job. If this is not the case, your expectations might still be correct, but it would take a greater statistical modeling effort to analyze the data. (The next phase, if needed, might be to look at the runs of increments for each artifact to see whether there's evidence that eventually each curve becomes exponential or sublinear. It should also involve a deeper analysis of the nature of variation in the data.)
I have a relatively simple solution to propose, Hugo. Because you're forthright about not being a statistician (often a plus ;-) but obviously can handle technical language, I'll take some pains to be technically clear but avoid statistical jargon.
Let's start by checking my understanding: you have six series of data (t[j,i], h[j,i]), 1 <= j <= 6, 1 <= i <= n[j], where t[j,i] is the time you measured the entropy h[j,i] for artifact j and n[j] is the number of observations made of artifact j.
We may as well assume t[j,i] <= t[j,i+1] is always the case, but it sounds like you cannot necessarily assume that t[1,i] = ... = t[6,i] for all i (synchronous measurements) or even that t[j,i+1] - t[j,i] is a constant for any given j (equal time increments). We might as well also suppose j=1 designates your special artifact.
We do need a model for the data. "Exponential" versus "sublinear" covers a lot of ground, suggesting we should adopt a very broad (non-parametric) model for the behavior of the curves. One thing that simply distinguishes these two forms of evolution is that the increments h[j,i+1] - h[j,i] in the exponential case will be increasing whereas for concave sublinear growth the increments will decreasing. Specifically, the increments of the increments,
d2[j,i] = h[j,i+1] - 2*h[j,i+1] + h[j,i], 1 <= i <= n[j]-2,
will either tend to be positive (for artifact 1) or negative (for the others).
A big question concerns the nature of variation: the observed entropies might not exactly fit along any nice curve; they might oscillate, seemingly at random, around some ideal curve. Because you don't want to do any statistical modeling, we aren't going to learn much about the nature of this variation, but let's hope that the amount of variation for any given artifact j is typically about the same size for all times t[j,i]. This lets us write each entropy in the form
h[j,i] = y[j,i] + e[j,i]
where y[j,i] is the "true" entropy for artifact j at time t[j,i] and e[j,i] is the difference between the observed entropy h[j,i] and the true entropy. It might be reasonable, as a first cut at this problem, to hope that the e[j,i] act randomly and appear to be statistically independent of each other and of the y[j,i] and t[j,i].
This setup and these assumptions imply that the set of second increments for artifact j, {d2[j,i] | 1 <= i <= n[j]-2}, will not necessarily be entirely positive or entirely negative, but that each such set should look like a bunch of (potentially different) positive or negative numbers plus some fluctuation:
d2[j,i] = (y[j,i+2] - 2*y[j,i+1] + y[j,i]) + (e[j,i+2] - 2*e[j,i+1] + e[j,i]).
We're still not in a classic probability context, but we're close if we (incorrectly, but perhaps not fatally) treat the correct second increments (y[j,i+2] - 2*y[j,i+1] + y[j,i]) as if they were numbers drawn randomly from some box. In the case of artifact 1 your hope is that this is a box of all positive numbers; for the other artifacts, your hope is that it is a box of all negative numbers.
At this point we can apply some standard machinery for hypothesis testing. The null hypothesis is that the true second increments are all (or most of them) negative; the alternative hypothesis covers all the other 2^6-1 possibilities concerning the signs of the six batches of second increments. This suggests running a t-test separately for each collection of actual second increments to compare them against zero. (A non-parametric equivalent, such as a sign test, would be fine, too.) Use a Bonferroni correction with these planned multiple comparisons; that is, if you want to test at a level of alpha (e.g., 5%) to attain a desired "probability value," use the alpha/6 critical value for the test. This can readily be done even in a spreadsheet if you like. It's fast and straightforward.
This approach is not going to be the best one among all those that could be conceived: it's one of the less powerful and it still makes some assumptions (such as independence of the errors); but if it works--that is, if you find the second increments for j=1 to be significantly above 0 and all the others to be significantly below 0--then it will have done its job. If this is not the case, your expectations might still be correct, but it would take a greater statistical modeling effort to analyze the data. (The next phase, if needed, might be to look at the runs of increments for each artifact to see whether there's evidence that eventually each curve becomes exponential or sublinear. It should also involve a deeper analysis of the nature of variation in the data.)
36,472 | Hypothesis testing that one time-series of a measure of entropy doesn't belong to a population | You can do it all on Excel.
Plotting the six time series should give you a hint of the shapes of the curves. Let's say that, as you mentioned, five of the curves look like they're exponential and the sixth looks like it grows sub-linearly.
Insert a trendline for each curve. If you are right, five of them will provide the best fit (as measured by r squared) with an exponential trendline, while the sixth will be best fitted to a logarithmic trendline.
This may sound non-deterministic, but if all six values of r squared are close to 1 you can be pretty confident of your result.
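If you'd rather script the comparison than click through Excel, here is a rough Python sketch with made-up series: an exponential trendline amounts to regressing ln h on t, a logarithmic one to regressing h on ln t, and you compare the resulting r squared values.

```python
from math import exp, log

def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

def r2_exponential(ts, hs):
    """Exponential trendline: regress ln(h) on t."""
    return linfit(ts, [log(h) for h in hs])[2]

def r2_logarithmic(ts, hs):
    """Logarithmic trendline: regress h on ln(t)."""
    return linfit([log(t) for t in ts], hs)[2]

# Made-up series standing in for the observed curves.
ts = [1, 2, 3, 4, 5, 6]
exp_like = [exp(0.5 * t) for t in ts]    # grows exponentially
log_like = [2 + 3 * log(t) for t in ts]  # grows sub-linearly
```

Whichever trendline family gives the higher r squared for a given series is the better fit, exactly as with Excel's trendline tool.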
36,473 | How to start an analysis of keywords from a bibliography and detect correlations? | I'm also outside my area of expertise, but assuming that you want to use R, here are a few thoughts.
There is a bibtex package in R for importing bibtex files.
Various character functions could be used to extract the key words.
The data sounds a little like a two-mode network, which might mean packages like sna and igraph are useful.
Plots of 2d multidimensional scaling can also be useful in visualising similarities (e.g., based on co-occurrence or some other measure) between words (here's a tutorial).
36,474 | How to start an analysis of keywords from a bibliography and detect correlations? | so you have a document x keyword matrix which basically represents a bipartite graph (or two-mode network depending on your cultural background) with edges between documents and tags. If you're not interested in individual documents - as I understand you -, you can create a network of keywords by counting the number of cooccurrences between each keyword. Simply plotting this graph might already give you a neat idea of what this data looks like. You can further tweak the visualization if you, e.g., scale the size of the keywords by the number of total occurrences, or (in case you have a lot of keywords) introduce a minimum number of total occurrences for a keyword to appear in the first place.
As a tool, I can only recommend GraphViz which allows you to specify graphs like
keyword1 -- keyword2
keyword1 -- keyword3
keyword1[label="statistics", fontsize=...]
and "compile" them into pngs, pdfs, whatever, yielding very nice results (particularly if you play a bit with the font settings).
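Putting the two steps together, counting co-occurrences and writing them out as a GraphViz file, might look like this (a Python sketch with invented keyword sets; scaling penwidth by the count is just one way to encode the weights):

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets, one per document.
docs = [
    {"statistics", "bayesian", "mcmc"},
    {"statistics", "bayesian"},
    {"statistics", "regression"},
]

# Count co-occurrences for each unordered keyword pair.
cooc = Counter()
for keywords in docs:
    for pair in combinations(sorted(keywords), 2):
        cooc[pair] += 1

# Emit a GraphViz "dot" description, scaling edge thickness by the count.
lines = ["graph keywords {"]
for (a, b), n in sorted(cooc.items()):
    lines.append('  "%s" -- "%s" [penwidth=%d];' % (a, b, n))
lines.append("}")
dot = "\n".join(lines)
```

The resulting text can be fed straight to the `dot` or `neato` layout engines.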
36,475 | How to start an analysis of keywords from a bibliography and detect correlations? | I would recommend using Association Rule Learning for this. It allows you to find words that often co-occur.
If you have a lot of data, it will be much faster than calculating a correlation matrix.
See my video series on text mining here. Includes a tutorial on Association Rules for text.
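For reference, the two core association-rule measures are cheap to compute by hand; a toy Python sketch with invented keyword "baskets":

```python
# Hypothetical keyword "baskets", one per document.
baskets = [
    {"statistics", "bayesian"},
    {"statistics", "bayesian"},
    {"statistics", "regression"},
    {"bayesian", "mcmc"},
]

def support(itemset):
    """Fraction of baskets containing every item in itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """Among baskets with the antecedent, the fraction also having the consequent."""
    return support(antecedent | consequent) / support(antecedent)
```

Real implementations (Apriori and friends) prune the search over itemsets, which is what makes this faster than a full correlation matrix on large data.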
36,476 | How to start an analysis of keywords from a bibliography and detect correlations? | You may want to take a look at the phi coefficient which is a measure of association for nominal variables.
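For a 2x2 table of keyword presence/absence counts, the phi coefficient is one line of arithmetic; a small illustrative sketch in Python:

```python
from math import sqrt

def phi(a, b, c, d):
    """Phi coefficient for the 2x2 table [[a, b], [c, d]] of joint counts."""
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
```

It ranges from -1 (perfect negative association) through 0 (independence) to 1 (perfect positive association).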
36,477 | How to start an analysis of keywords from a bibliography and detect correlations? | You could try to employ the theory and praxis of association analysis or market basket analysis to your problem (just read "items" as "keywords" / "cited reference" and "market basket" as "journal article").
Disclaimer - this is just an idea, I did not do anything like that myself. Just my 2Cents.
36,478 | Constructing smoothing splines with cross-validation | Nonparametric Regression and Spline Smoothing by Eubank is a good book. You probably want to start with Chapters 2 and 5 which cover goodness of fit and the theory and construction of smoothing splines. I've heard good things about Generalized Additive Models: An Introduction with R, which might be better if you're looking for examples in R. For a quick introduction, a google search turns up a course on Nonparametric function estimation where you can peruse the slides and see examples in R.
The general problem with splines is overfitting your data, but this is where cross validation comes in.
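As a toy illustration of choosing the smoothing level by cross-validation, here is a leave-one-out bandwidth search for a simple Gaussian-kernel smoother (Python; a stand-in for an actual smoothing spline, whose penalty parameter would be chosen the same way):

```python
from math import exp

def nw_predict(x0, xs, ys, h, exclude=None):
    """Gaussian-kernel (Nadaraya-Watson) smoother evaluated at x0."""
    num = den = 0.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == exclude:
            continue  # leave this observation out
        w = exp(-0.5 * ((x0 - x) / h) ** 2)
        num += w * y
        den += w
    return num / den

def loocv_score(xs, ys, h):
    """Mean squared leave-one-out prediction error for bandwidth h."""
    n = len(xs)
    return sum(
        (ys[i] - nw_predict(xs[i], xs, ys, h, exclude=i)) ** 2 for i in range(n)
    ) / n

# Toy data: a smooth noise-free curve, so the smallest bandwidth wins here;
# with noisy data the cross-validation curve has an interior minimum.
xs = [i / 10 for i in range(20)]
ys = [x * x for x in xs]
best_h = min([0.05, 0.1, 0.3, 1.0, 3.0], key=lambda h: loocv_score(xs, ys, h))
```

Too little smoothing chases noise, too much flattens the signal; cross-validation balances the two.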
36,479 | How can I obtain some of all possible combinations in R? | If you wish to trade processing speed for memory (which I think you do), I would suggest the following algorithm:
Set up a loop from 1 to N Choose K, indexed by i
Each i can be considered an index to a combinadic, decode as such
Use the combination to perform your test statistic, store the result, discard the combination
Repeat
This will give you all N Choose K possible combinations without having to create them explicitly. I have code to do this in R if you'd like it (you can email me at mark dot m period fredrickson at-symbol gmail dot com).
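The decode step, mapping an index i back to its combination, can be sketched like this (shown in Python for brevity; the same arithmetic ports directly to R using choose()):

```python
from math import comb

def decode_combinadic(index, n, k):
    """Return the index-th k-combination of {0, ..., n-1} (combinadic order)."""
    combo = []
    remaining = index
    for j in range(k, 0, -1):
        # Find the largest c with comb(c, j) <= remaining
        c = j - 1
        while comb(c + 1, j) <= remaining:
            c += 1
        combo.append(c)
        remaining -= comb(c, j)
    return combo
```

Looping index from 0 to comb(n, k) - 1 visits every combination exactly once while only one combination lives in memory at a time.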
36,480 | How can I obtain some of all possible combinations in R? | Generating combinations is pretty easy, see for instance this; write this code in R and then process each combination as it appears.
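The point about processing each combination as it appears, without ever materialising the full set, looks like this with a lazy generator (Python's itertools shown for illustration; in R, combn(x, m, FUN) similarly applies a function to each combination):

```python
from itertools import combinations

# Iterate lazily: each 3-subset of {0, ..., 19} is produced, used, discarded.
total = 0
count = 0
for combo in combinations(range(20), 3):
    total += sum(combo)  # stand-in for "your test statistic"
    count += 1
```

Memory use stays constant no matter how large comb(n, k) gets.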
36,481 | How to manage categorical variable with MANY categories | Until Breiman and the 21st century, the historic barrier to working with massively categorical features was computational; for example, in ANOVA, inverting a cross-products matrix with too many categories was infeasible.
That said, it's useful to distinguish massively categorical features from massively categorical targets. It's true that random forests (RFs) have trouble modeling targets with more than a few dozen levels. This is not true for massively categorical features.
Breiman's intent with RFs was to redress criticisms of his original 'single iteration' approach to classification and regression trees as being unstable and inaccurate.
What Breiman didn't realize was that any multivariate modeling engine could be plugged into his RF framework, e.g., ANOVA, multiple regression, logistic regression, k-means, and so on, to arrive at an approximating, iterative solution.
Breiman did his work in the late 90s on a single CPU when massive data meant a few dozen gigs and a couple of thousand features processed over a couple of thousand iterations of bootstrapped resampling of observations and features. Each iteration built a mini-RF model, the predictions from which were aggregated into an ensemble prediction of the target.
Today there are dozens of workarounds to modeling massively categorical features which extend Breiman's approach to breaking a large model down to many bite-sized, smaller models, sometimes known as divide and conquer algorithms.
A paper by Chen and Xie discusses D&Cs, A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data https://www.jstor.org/stable/24310963
Another good review is McGinnis' Beyond One-Hot: an exploration of categorical variables https://www.kdnuggets.com/2015/12/beyond-one-hot-exploration-categorical-variables.html
Related to this is the suggestion of impact coding, e.g., Zumel's Modeling Trick: Impact Coding of Categorical Variables with Many Levels https://win-vector.com/2012/07/23/modeling-trick-impact-coding-of-categorical-variables-with-many-levels/
A completely different, non-frequentist approach was made in marketing science wrt hierarchical Bayesian modeling: Ainslie and Steenburgh's Massively Categorical Variables: Revealing the Information in Zip Codes. Their model is easily programmed in software such as STAN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=961571
Hope this helps address your query.
Afterthought FWIW...having poked around with a few of the approaches to massive categorical information including one-hot encoding, hierarchical bayes and impact coding, I came to the opinion that impact coding offered the best results in several nonsignificantly better ways: strongest holdout metrics wrt dependence, minimized metrics of dispersion and easiest to code.
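For concreteness, a minimal sketch of impact coding (a.k.a. target encoding); the names and the simple additive smoothing scheme here are illustrative, not the win-vector implementation:

```python
from collections import defaultdict

def impact_code(categories, targets, smoothing=1.0):
    """Map each level to a smoothed mean of the target for that level."""
    overall = sum(targets) / len(targets)
    sums = defaultdict(float)
    counts = defaultdict(int)
    for c, t in zip(categories, targets):
        sums[c] += t
        counts[c] += 1
    # smoothing > 0 shrinks rare levels toward the overall mean
    return {
        c: (sums[c] + smoothing * overall) / (counts[c] + smoothing)
        for c in counts
    }

codes = impact_code(["a", "a", "b"], [1.0, 0.0, 1.0], smoothing=0.0)
```

In practice the per-level means should be estimated on held-out folds to avoid target leakage.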
36,482 | Generating uniformly distributed random solutions of a linear equation | The intersection of the $n+1$ simplex
$$\{\mathbf{p}\in\mathbb R_+^{n+1};\ \mathbf{p}^\top\mathbf 1_{n+1}=1\}$$
when $\mathbf 1_{n+1}=(1,\ldots,1)^\top$ and of the constrained hyperplane
$$\{\mathbf{p}\in\mathbb R_+^{n+1};\ \mathbf{p}^\top\iota_{n+1}=x\}$$
when $\iota_{n+1}=(0,1,\ldots,n)^\top$ is within an $(n-1)$-dimensional affine space
$$\{\mathbf{p}\in\mathbb R_+^{n+1};\ A\mathbf p=b\}\tag{1}$$
where $A$ is a $(2,n+1)$ matrix whose rows are orthonormal (wlog).
If $\mathbf p^0$ is a particular solution, i.e., a particular element of (1), the other members will be of the form $\mathbf p^0+\eta$ with $A\eta=0$, which can be expressed via an orthonormal basis of vectors satisfying $A\eta=0$. It is then sufficient to find a hypercube containing (1) by finding upper and lower bounds on the components of $\eta$ in (1), to generate points uniformly in that hypercube, and to accept simulations such that $\mathbf p^0+\eta\in\mathbb R_+^{n+1}$.
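A rough sketch of that recipe (particular solution, orthonormal null-space basis, uniform draws in a box, accept if nonnegative) in Python; here half_width is an ad hoc box size rather than the tight per-component bounds described above:

```python
import random
from math import sqrt

def gram_schmidt(vectors, eps=1e-10):
    """Orthonormalise vectors, silently dropping (near-)dependent ones."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = sqrt(sum(wi * wi for wi in w))
        if norm > eps:
            basis.append([wi / norm for wi in w])
    return basis

def sample_solutions(n, x, n_samples, half_width=1.0, seed=0):
    """Rejection-sample p >= 0 with sum(p) = 1 and sum(i * p[i]) = x."""
    rng = random.Random(seed)
    rows = gram_schmidt([[1.0] * (n + 1), [float(i) for i in range(n + 1)]])
    # Orthonormal basis of the null space: orthonormalise the standard
    # basis against the two constraint rows and keep what survives.
    std = [[1.0 if j == i else 0.0 for j in range(n + 1)] for i in range(n + 1)]
    null_basis = gram_schmidt(rows + std)[2:]
    # A particular solution p0: split mass between floor(x) and floor(x) + 1.
    k = int(x)
    p0 = [0.0] * (n + 1)
    p0[k], p0[k + 1] = 1.0 - (x - k), x - k
    samples = []
    while len(samples) < n_samples:
        coeffs = [rng.uniform(-half_width, half_width) for _ in null_basis]
        p = [p0[j] + sum(c * b[j] for c, b in zip(coeffs, null_basis))
             for j in range(n + 1)]
        if min(p) >= 0:
            samples.append(p)
    return samples
```

Every accepted draw satisfies both equality constraints exactly (up to rounding), and rejection against $\mathbb R_+^{n+1}$ keeps the accepted points uniform on the feasible polytope, provided the box covers it.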
36,484 | Generating uniformly distributed random solutions of a linear equation | Here's one possible approach, in R. It takes random samples from a simplex as starting points, and then uses optimisation to find nearby points that meet your criteria. I'm not sure how to verify that the resulting values are still "uniformly" distributed in the admissible parameter space.
library(magrittr)      # for the %>% pipe used below
sq = function(v) v^2   # squared-error helper

find_solution = function(n, x){
  # Set up our function
  basis_w = t(as.matrix(0:n))
  calculate_x = function(ps){
    xhat = basis_w %*% as.matrix(ps)
    xhat[1,1]
  }
  # Create a cost function to minimize.
  # It should penalise values of p out of the range 0-1,
  # ps that don't sum to 1, and values of x that don't match the target
  cost_func = function(ps){
    # Out of bounds loss
    too_high = ifelse(ps > 1, ps - 1, 0) %>% sq() %>% sum()
    too_low = ifelse(ps < 0, -ps, 0) %>% sq() %>% sum()
    # Doesn't-sum-to-1 loss
    simplex_loss = sq(sum(ps) - 1)
    # Wrong x value loss
    xhat = calculate_x(ps)
    err_loss = sq(xhat - x)
    # Manually choosing tuning parameters
    100*too_high + 100*too_low + simplex_loss + err_loss
  }
  # Randomly sample from a simplex for starting values
  # (as a plain vector rather than a matrix for compatibility with optim())
  starting_values = hitandrun::simplex.sample(n + 1, 1)$samples %>% c()
  # Optimise: optim() takes the starting values first, then the function
  result = optim(par = starting_values, fn = cost_func)
  if(abs(result$value) > .001) warning('Solution not found')
  return(result$par)
}
find_solution(7, 1.5)
# [1] 0.3474378316 0.3221905230 0.0008495489 0.2713962006 0.0002233641 0.0003346271 0.0436698092 0.0138942988
find_solution(7, 1.5)
# [1] 3.889560e-01 2.736482e-01 1.178040e-01 7.962354e-02 3.027082e-05 8.923485e-02 4.931859e-02 1.381217e-03
find_solution(7, 1.5)
# [1] 2.184026e-01 3.564897e-01 2.410103e-01 1.512842e-01 9.407019e-04 -1.540388e-05 7.533937e-06 2.939822e-02
find_solution(7, 1.5)
# [1] 1.931718e-01 4.007192e-01 1.561489e-01 2.386094e-01 2.693653e-03 1.691173e-05 6.235019e-05 8.569269e-03
find_solution(7, 1.5)
# [1] 0.4523700298 0.0139939226 0.3043223032 0.1165318696 0.0502990295 0.0478577440 0.0138911475 0.0005681597
find_solution(7, 1.5)
# [1] 0.527091876 0.035665984 0.006260825 0.372083535 0.011015178 0.014363521 0.007900288 0.024702057
36,484 | Formal Definition of Identification | Yes, it needs to hold for all $z$ (subject to some technical caveats)
The idea of non-identifiability is that two different parameter values give the same sampling distribution, making them impossible to distinguish based on the data. Subject to some technical caveats which you can usually ignore,$^\dagger$ this means that non-identifiability occurs when the sampling density is the same for all observable data values and identifiability occurs when the sampling density is different for at least one observable data value.
Consequently, in the definition you cite in your question, I think they mean to say that identifiability occurs when $f(z|\theta) \neq f(z|\theta_0)$ for some observable value $z$. Your attempted counter-example gives one case where $f(z|\theta) = f(z|\theta_0)$ but it does not show that this holds for all $z$, so it does not establish non-identifiability.
Incidentally, a reasonable way to view identifiability is in terms of the concept of minimal sufficient parameters (see e.g., O'Neill 2005). Just as you can derive a minimal sufficient statistic from the likelihood function, you can similarly derive a "minimal sufficient parameter" by the same essential method. The minimal sufficient parameter is what can be "identified" from data from that sampling distribution, so any parameter vector that is not a function of the minimal sufficient parameter is not fully identifiable.
$^\dagger$ A slight complication to identifiability occurs because density functions are not generally unique representations of probability distributions. For instance, for continuous random variables it is possible to alter the values of a density function on an arbitrary countable set of points and it still represents the same distribution. This means that when you are assessing identifiability based on a parameterised class of sampling density functions, strictly speaking, identifiability occurs when $f(z|\theta) \neq f(z|\theta_0)$ over a set of values of $z$ that has positive probability under at least one of those densities. If you form the sampling densities so that they are all continuous then this is enough to allow you to simplify things to say that identifiability occurs when $f(z|\theta) \neq f(z|\theta_0)$ for any $z$. For the reasons discussed here, identifiability is generally not defined in terms of density functions.
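A toy numerical illustration (invented example) of non-identifiability: if the density depends on $\theta$ only through $\theta^2$, then $\theta$ and $-\theta$ give the same $f(z|\theta)$ at every $z$, so no data can distinguish them; only $\theta^2$ is a minimal sufficient parameter here.

```python
from math import exp, pi, sqrt

def density(z, theta):
    """Normal(theta**2, 1) density: theta enters only through its square."""
    mu = theta ** 2
    return exp(-0.5 * (z - mu) ** 2) / sqrt(2 * pi)

# theta = 2 and theta = -2 induce identical densities at every checked z.
zs = [-3 + 0.5 * i for i in range(13)]
indistinguishable = all(
    abs(density(z, 2.0) - density(z, -2.0)) < 1e-12 for z in zs
)
```

By contrast, parameter values with different $\theta^2$ do change the density at some $z$, which is exactly the "for some observable value" condition above.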
36,485 | Formal Definition of Identification | I see no problem with how identification is defined. The bracketed part is saying that $\theta_0$ is identified if whenever $\theta \neq \theta_0$ and $\theta \in \Theta$, then it must be that $f(z|\theta) \neq f(z|\theta_0)$.
Here, the $\neq$ is for two functions. Two functions are not equal if there is some $z$ for which they are not equal. So I see no ambiguity in the definition.
If you wanted, you could certainly add "if there exists some $z$ such that...", but I presume the author was trying to keep things concise.
Relating all this to your example, identification deals with population distributions, not finite draws of data as in your example (I write more about this here: What is the difference between Consistency and Identification?). So if you are going to "fix" the data, then identification will be about the corresponding finite sample distribution (which is a bit strange, but can be done). It could certainly be the case that for certain finite sample DGPs, something like the coefficients for a probit are actually not identified.
36,486 | Formal Definition of Identification | Below I quote the parameter identifiability definition from Section 4.6, Statistical Models, by A. C. Davison:
There must be a 1-1 mapping between models and elements of the parameter space, otherwise there may be no unique value of $\theta$ for $\hat{\theta}$ to converge to. A model in which each $\theta$ generates a different distribution is called identifiable.
This model identifiability definition is just a more heuristic way of paraphrasing the definition you cited in the question. The first sentence in the above quotation requires that the mapping from the parameter space $\Theta$ to the model space $\mathscr{P}_\Theta$: $\theta \mapsto f_\theta$ is a bijection (as the mapping is inherently surjective, it is only necessary to verify the mapping is injective). Therefore, the identifiability definition should not rely on the specific observations of data and can be discussed from a pure probabilistic perspective, as other answers have already pointed out.
A few concrete examples will elucidate this concept. Consider the probit model in your question: for parameter $\theta = (\theta_1, \theta_2)' \in \Theta$, the distribution of the response variable $y$ given the explanatory variable $x = (x_1, x_2)'$ is
\begin{align}
f_\theta(y|x) = \Phi(\theta'x)^y(1 - \Phi(\theta'x))^{1 - y}. \tag{1}
\end{align}
Hence $f_{\theta_1}(y|x) = f_{\theta_2}(y|x)$ requires, in particular, $f_{\theta_1}(1|x) = f_{\theta_2}(1|x)$ for all $x$, i.e., $\Phi(\theta_1'x) = \Phi(\theta_2'x)$ holds for all $x$. Since $\Phi$ is strictly increasing, this implies $\theta_1'x = \theta_2'x$ for all $x$, which can only hold when $\theta_1 = \theta_2$. This shows that the mapping is injective, hence the probit model $(1)$ is identifiable.
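This injectivity argument can be illustrated numerically. The sketch below (Python, standard library only; the parameter and covariate values are arbitrary choices for illustration) also echoes the earlier point about the definition: two distinct parameter vectors may agree at some $x$, yet the model is still identifiable because they disagree at another.

```python
from math import erf, sqrt

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probit_density(theta, x, y):
    # f_theta(y | x) = Phi(theta'x)^y * (1 - Phi(theta'x))^(1 - y) for y in {0, 1}
    p = Phi(theta[0] * x[0] + theta[1] * x[1])
    return p if y == 1 else 1.0 - p

theta1, theta2 = (1.0, 0.0), (0.0, 1.0)   # two distinct parameter vectors

x_agree = (1.0, 1.0)    # both linear predictors equal 1, so the densities agree here...
x_differ = (1.0, 0.0)   # ...but Phi(1) != Phi(0) here, so theta1, theta2 are distinguishable

print(probit_density(theta1, x_agree, 1) == probit_density(theta2, x_agree, 1))   # True
print(probit_density(theta1, x_differ, 1) == probit_density(theta2, x_differ, 1)) # False
```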
Now consider a model that is non-identifiable. General mixture models are usually non-identifiable according to the very original definition quoted above, which is known as the label switching problem. The two-component mixture model below (Exercise 4.6.1 from the same reference) is a very simple example:
Data arise from a mixture of two exponential populations, one with probability $\pi$ and parameter $\lambda_1$, and the other with probability $1 - \pi$ and parameter $\lambda_2$. The exponential parameters are both positive real numbers and $\pi$ lies in the range $[0, 1]$, so $\Theta = [0, 1] \times \mathbb{R}_+^2$, and
\begin{align}
f(y; \pi, \lambda_1, \lambda_2) = \pi\lambda_1e^{-\lambda_1y} + (1 - \pi)\lambda_2e^{-\lambda_2y}, \quad y > 0, 0 \leq \pi \leq 1, \lambda_1, \lambda_2 > 0. \tag{2}
\end{align}
There are many ways to show $(2)$ is non-identifiable. One way is by noting that as long as $\lambda_1 = \lambda_2$, then no matter what the value of $\pi$ is, the model degenerates to the single exponential distribution: for example, $\theta_1 = (0.5, 1, 1) \neq \theta_2 = (0.2, 1, 1)$ give the same density $e^{-y}$. The other way corresponds to the label switching, which means all the permutations of one parameter give the same density: for example, $\theta_1 = (0.2, 1, 2) \neq \theta_2 = (0.8, 2, 1)$ yield the same density $0.2e^{-y} + 0.8 \times 2e^{-2y}$.
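Both failure modes are easy to verify numerically. A small sketch (Python, standard library; the grid of evaluation points is an arbitrary choice) checks that each pair of distinct parameter vectors above yields the same density everywhere on the grid:

```python
from math import exp

def mixture_density(y, pi, lam1, lam2):
    # f(y; pi, lam1, lam2) = pi*lam1*exp(-lam1*y) + (1-pi)*lam2*exp(-lam2*y)
    return pi * lam1 * exp(-lam1 * y) + (1 - pi) * lam2 * exp(-lam2 * y)

grid = [0.1 * k for k in range(1, 100)]

# Degenerate case: lam1 == lam2 makes pi unidentifiable
same1 = all(abs(mixture_density(y, 0.5, 1, 1) - mixture_density(y, 0.2, 1, 1)) < 1e-12
            for y in grid)

# Label switching: (0.2, 1, 2) and (0.8, 2, 1) describe the same distribution
same2 = all(abs(mixture_density(y, 0.2, 1, 2) - mixture_density(y, 0.8, 2, 1)) < 1e-12
            for y in grid)

print(same1, same2)  # True True
```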
For more examples and discussions on this topic, you can look into the referenced section.
36,487 | Are mixed models necessary if random effects estimates are close to zero? | To summarize the useful information provided in comments: you don't necessarily have to use a mixed model, but you do have to take the lack of independence among the measurements into account in some way. Robust standard errors and generalized estimating equations (GEE), as discussed by Mc Neish et al. (linked from a comment by @mkt), and generalized least squares are alternatives in many situations. Your situation, with trees nested within transects nested within shelterbelts, would seem best represented by a multi-level model.
The inability to distinguish any particular random-effect estimate from 0 in a mixed model isn't important. More important is the variance among those random-effect values, which are modeled as coming from a Gaussian distribution. For example, consider this adaptation of a simple example from the lmer help page:
library(lme4)
(mod <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy))
# Linear mixed model fit by REML ['lmerMod']
# Formula: Reaction ~ Days + (1 | Subject)
# Data: sleepstudy
# REML criterion at convergence: 1786.465
# Random effects:
# Groups Name Std.Dev.
# Subject (Intercept) 37.12
# Residual 30.99
# Number of obs: 180, groups: Subject, 18
# Fixed Effects:
# (Intercept) Days
# 251.41 10.47
Notice that the random effects are reported first, with a standard deviation for the random intercept. In this example, that standard deviation is about the same magnitude as the residual standard deviation. The variance (square of the standard deviation) among those random-effect intercept values is what the model primarily estimates. That's what you should primarily pay attention to, in terms of "random effects" being "different from zero" (Question 1). Yes, you can extract the individual random-intercept estimate from the model, but the fact that you can't distinguish any individual values from 0 doesn't mean that the random effect as a whole is unimportant (Question 2).
It's possible to have such low dispersion among the individual random-effect values that your model can't reliably estimate their variance and you end up with a singular model fit. That does not, however, appear to be the case with your data. As described on this page, linked from a comment by @dipetkov, that's typically seen when you try to fit more combinations of random effects than your data allow. How to proceed in such a case requires evaluation of what in particular is leading to the problem. The lack of independence among correlated observations still needs to be taken into account. Otherwise you do suffer from "pseudoreplication" (Question 3).
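To make the pseudoreplication point concrete, here is a hedged simulation sketch (Python/NumPy; the variance components and group sizes are invented for illustration, not estimated from any real data). With a shared random intercept per group, the naive standard error that treats all observations as independent is markedly smaller than the true sampling variability of the grand mean:

```python
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per_group, n_sims = 20, 10, 2000
sd_group, sd_resid = 1.0, 1.0  # illustrative variance components

# True sampling variability of the grand mean, by repeated simulation
means = np.empty(n_sims)
for s in range(n_sims):
    b = rng.normal(0.0, sd_group, size=n_groups)  # group random intercepts
    y = b[:, None] + rng.normal(0.0, sd_resid, size=(n_groups, n_per_group))
    means[s] = y.mean()
true_sd = means.std(ddof=1)

# Naive SE from one dataset, pretending the 200 observations are independent
y = rng.normal(0.0, sd_group, n_groups)[:, None] + \
    rng.normal(0.0, sd_resid, (n_groups, n_per_group))
naive_se = y.std(ddof=1) / np.sqrt(y.size)

print(round(naive_se, 3), round(true_sd, 3))  # the naive SE is far too small
```

Under these assumed components the theoretical values are $\sqrt{2/200} \approx 0.10$ for the naive SE and $\sqrt{1/20 + 1/200} \approx 0.23$ for the truth, so ignoring the grouping would make inference badly overconfident.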
36,488 | Sample uniformly from unit square conditioned on sum and product | Let's solve a generalization, so that we can obtain both solutions at once.
Let $h:[0,1]^2\to\mathbb{R}$ be differentiable with derivative $\nabla h=(D_1h, D_2h).$ To avoid technical complications in the analysis, further suppose the level curve of $h$ of height $a$ determines a differentiable partial function of either variable. This means that for each $0\le x \le 1$ there is at most one $y\in[0,1]$ for which $h(x,y)=a$ and the graph of the level curve of $h$ passing through $(x,y)$ has a well-defined slope; and it means the same thing with the roles of $x$ and $y$ reversed. In particular, the partial derivatives $D_1h(x,y)$ and $D_2h(x,y)$ are never zero at any such point. (After reading through the solution it should be clear to what extent these assumptions can be relaxed. Differentiability of $h$ almost everywhere is the crucial assumption.)
The question concerns the cases $h(x,y)=xy$ and $h(x,y)=x+y.$ In the first case the level curves are portions of hyperbolas spanning $a \le x \le 1$ and in the second case the level curves are line segments lying above $0\le x \le a$ when $a\le 1$ or above $a-1\le x \le 1$ when $1 \le a \le 2.$ (These are illustrated in the question.) Clearly they satisfy all the assumptions made above.
We can approximate the conditional distribution by thickening the level curve slightly. That is, let's examine the thin strip $\mathcal{R}_a$ of values $(x,y)$ where $a \le h(x,y) \le a+\epsilon$ for some tiny increment $\epsilon.$ For sufficiently small $\epsilon,$ the differentiability of $h$ means its level curves are arbitrarily well approximated by line segments passing through $(x,y)$ orthogonal to the direction $\nabla h(x,y).$ The set of possible values of $X$ lying beneath $\mathcal R_a$ is therefore the interval between $x$ and $x+\mathrm{d}x=x+\epsilon/D_1h(x,y),$ defining a parallelogram of height $\epsilon/D_2h(x,y).$ The conditional distribution of $X$ is the ratio of this parallelogram's probability (proportional to its width times its height) to the chance $X$ lies beneath it. Thus,
$$\Pr(X \in [x, x + \mathrm dx] \mid h(X,Y)\in[a, a+\epsilon]) \ \propto\ \frac{1}{D_2h(x,y)}.$$
A comparable expression holds for the conditional probability of $Y.$ As always, the constant of proportionality is found by setting the total probability to unity.
This approximation becomes exact in the limit as $\epsilon \to 0.$ Indeed, this approach of thickening the level curves and taking the limit uniquely defines what it might mean to condition on an event like $h(X,Y)=a$ that has zero probability.
To generate one value of $(X,Y),$ generate a value of $X$ from the conditional distribution and then solve the equation $h(X,Y)=a$ for $Y.$ (Alternatively, switch the roles of $X$ and $Y:$ if one of the solutions is easier to find than the other, that will determine your approach.)
For example, with $h(x,y)=xy,$ $\nabla h(x,y) = (y, x).$ For a given $a=xy=h(x,y),$ both $X$ and $Y$ can range from $a$ up through $1.$ Thus, with $$f_{X\mid h(X,Y)=a}(x) \propto \frac{1}{D_2h(x,y)} = \frac{1}{x},$$
compute
$$\int_a^1 f_{X\mid h(X,Y)=a}(x)\mathrm{d}x = \int_a^1 \frac{\mathrm{d}x}{x} = -\log(a),$$ finally giving
$$f_{X\mid h(X,Y)=a}(x) = -\frac{1}{x \log(a)},\ a \le x \le 1.$$
The same integration gives us the conditional distribution function
$$F_{X\mid h(X,Y)=a}(x) = 1 - \frac{\log x}{\log a},\ a \le x \le 1.$$
Random values from this distribution can be obtained through the probability integral transform because when $U$ has a uniform distribution on $[0,1],$ so does $1-U,$ whence
$$F^{-1}_{X\mid h(X,Y)=a}(1-U) = a^{U}$$
has the same distribution as $X.$ Given a realization of $X,$ find the corresponding $Y$ by solving the equation $h(X,Y)=a,$ giving $Y = a/X.$ In this fashion you obtain one realization of $(X,Y)$ by means of a single uniform variate $U.$
Here is working R code to illustrate.
a <- 0.05 # Specify `a` between 0 and 1
set.seed(17) # Use this to reproduce the results exactly
u <- runif(1e5) # Specify how many (X,Y) values to generate
x <- a^u
y <- a/x
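For readers working in Python rather than R, the same sampler ports line-for-line (NumPy; a direct translation of the snippet above, with sanity checks against the derived distribution):

```python
import numpy as np

a = 0.05                          # specify `a` between 0 and 1
rng = np.random.default_rng(17)   # seed chosen only for reproducibility
u = rng.uniform(size=100_000)     # how many (X, Y) values to generate
x = a ** u                        # inverse-CDF draw of X given XY = a
y = a / x                         # solve h(X, Y) = XY = a for Y

assert np.allclose(x * y, a)            # every pair lies on the level curve
assert np.all((x >= a) & (x <= 1.0))    # the support of X is [a, 1]
print(np.log(x).mean(), np.log(a) / 2)  # E[log X] = log(a)/2; these nearly agree
```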
The first two panels in this figure are histograms of this simulation of one hundred thousand realizations of $(X,Y)$ conditional on $XY = h(X,Y)= a = 0.05,$ on which graphs of $f_{X\mid h(X,Y)=a}$ and $f_{Y\mid h(X,Y)=a}$ are plotted.
As a check, the third panel shows a histogram obtained in a completely different way. I generated 20 million uniform $(X,Y)$ values and threw away all except those for which the value of $h(X,Y)$ was very close to $a.$ This histogram shows all the resulting $Y$ values (about a quarter million of them). (That skinny little bin to the left of $0.05$ reflects the errors made by using a finite thickening of the level curve. The histogram really is showing the true conditional distribution plus the "noise" this thickening creates.)
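A scaled-down Python analogue of that brute-force check (the sample size and strip half-width here are arbitrary, smaller choices than in the figure):

```python
import numpy as np

rng = np.random.default_rng(1)
a, eps = 0.05, 5e-4
x = rng.uniform(size=2_000_000)
y = rng.uniform(size=2_000_000)

keep = np.abs(x * y - a) < eps   # retain only points in a thin strip around the level curve
x_cond = x[keep]

# The retained X values should follow the derived conditional law,
# whose mean of log(X) is log(a)/2.
print(x_cond.size, np.log(x_cond).mean(), np.log(a) / 2)
```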
Evidently everything works out: the analysis, this brute-force check, and the efficiently simulated values all agree.
I leave the (much easier) analysis of $h(x,y)=x+y$ as an exercise. It leads to the solution suggested in the question.
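For completeness, here is a sketch of where that exercise lands (Python/NumPy; shown for the case $0 < a \le 1$, and the specific value of $a$ is an arbitrary choice). Because $D_2h(x,y) = 1$ is constant for $h(x,y) = x + y$, the conditional density of $X$ is constant, i.e., $X$ is uniform on its feasible interval $[0, a]$:

```python
import numpy as np

rng = np.random.default_rng(3)
a = 0.8                                # a sum value with 0 < a <= 1
x = rng.uniform(0.0, a, size=100_000)  # X | X + Y = a is uniform on [0, a]
y = a - x                              # solve h(X, Y) = X + Y = a for Y

assert np.allclose(x + y, a)           # every pair lies on the line segment
print(x.mean())                        # close to a/2, as a uniform draw should be
```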
Comments
In response to issues raised in comments, I wish to point out some subtleties.
The distributions of $X$ and $Y$ conditional on $h(x,y)=a$ alone are not defined.
The distributions of $X$ and $Y$ conditional on $h(x,y)$ are defined.
What might these seemingly contradictory statements mean? Let me illustrate with the example $h(x,y)=xy.$ On the left of this figure are contours of $h.$
Emphasized (in white) is the level set given by $h(x,y)=1/20.$ It is one member of this family, or foliation, of hyperbolas indicated by the other contours. Notice that these hyperbolas are not congruent to each other: as you move to the left and down, their curvatures increase.
The very same (white) hyperbola can be embedded in other foliations. One is shown at the right: it is a level set of $\tilde h(x,y) = x+y-\sqrt{4/20 - (x-y)^2}$ given by $\tilde h(x,y) = 0.$ All the contours of $\tilde h$ are congruent hyperbolas.
A histogram of simulated values of $X$ conditional on $\tilde h(x,y)=0$ is shown at the right of the first figure (far above). This conditional distribution clearly differs, albeit subtly, from the distribution conditional on $h(x,y).$ The reason is that the gradients $\nabla \tilde h$ in the second family vary in a different fashion along each level curve compared to the gradients $\nabla h$ in the first family.
What this analysis (which is fairly general) shows is that you cannot condition on just a single event of zero probability, but you can condition on such an event provided (1) it defines a sufficiently smooth level set and (2) there is an outward-pointing vector field defined along it almost everywhere. That vector field plays the role of $\nabla h$ or $\nabla \tilde h$ insofar as it embeds the event infinitesimally within a foliation.
36,489 | Why does log-linear analysis seem to ignore the Poisson regression equidispersion assumption? | Quoting from Section 4.3.3 of the second edition of Agresti's "Categorical Data Analysis":
Overdispersion is common in the modeling of counts. When the model for the mean is correct but the true distribution is not Poisson, the ML estimates of model parameters are still consistent but standard errors are incorrect.
He continues by describing both negative-binomial and quasi-likelihood approaches to dealing with overdispersion. So yes, for these models it is implied (or should be) that one proceeds in a way that takes into account the relationship between fitted values and variance.
The omission of this issue in introductory explanations of count modeling isn't really different from starting with the assumption of homoscedasticity and a normal error distribution in linear regression. You start from the simple, then build from there.
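The overdispersion Agresti describes can be made concrete with a small simulation sketch (my illustration, not his): negative-binomial counts arise as a Poisson-gamma mixture, which pushes the variance above the mean ($\mathrm{Var} = \mu + \mu^2/\mathrm{size}$), violating the Poisson equidispersion assumption.

```python
import math
import random

rng = random.Random(42)

def poisson(lam):
    # Knuth's multiplication method; fine for moderate rates.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def neg_binomial(mean, size):
    # Poisson-gamma mixture: the rate itself is random, inflating variance.
    return poisson(rng.gammavariate(size, mean / size))

n = 50_000
pois = [poisson(5.0) for _ in range(n)]
nb = [neg_binomial(5.0, 2.0) for _ in range(n)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

m_p, v_p = mean_var(pois)   # variance/mean ratio close to 1 (equidispersion)
m_nb, v_nb = mean_var(nb)   # variance/mean ratio well above 1 (overdispersion)
```

Fitting a plain Poisson model to counts like `nb` gives roughly the right mean structure but standard errors that are too small, which is exactly the point of the quoted passage.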
36,490 | Who first coined the phrase "correlation does not imply causation"? | tl;dr: A book reviewer with the initials F.A.D in a 1900 issue of Nature appears to be the first to publish the phrase "correlation does not imply causation."
Long form answer
Depending on whether you are looking for the exact words "correlation does not imply causation" (note also "correlation is not causation"), or just want the primary dive into the relationship between correlation and causation, a good answer is 18th Century Scottish philosopher David Hume in A Treatise of Human Nature, where he muses on the relation between correlation and causation. For example,
It is certain, that not only in philosophy, but even in
common life, we may attain the knowledge of a particular cause merely by one experiment, provided it be made with judgment, and after a careful removal of all foreign and superfluous circumstances. Now as after one experiment of this kind, the mind, upon the appearance either of the cause or the effect, can draw an inference concerning the existence of its correlative; and as a habit can never be acquired merely by one instance; it may be thought, that belief cannot in this case be esteemed the effect of custom.
While Hume was nowhere near formalizing a causal calculus (e.g., Pearl's do operator), we can see in the above few sentences (abstracted from a whole section on correlation and causation), that he asserts that causal beliefs result from correlation between putative cause and effect of it, but only within a judicious model accounting for "superfluous circumstances" (backdoor confounding, selection bias, and differential measurement error, anyone?). Causal evidence demands correlation per Hume (both in the everyday sense, but also within the sciences), but neither naked correlation nor the absence of naked correlation is enough to establish a cause and effect relationship: in today's language we would say you need appropriate study design, and a causal calculus to deductively account for the structure of causal beliefs. Hume also made the notable contribution that causation can only be inferred, that what our senses give us is correlation.
As you indicate in your question, Sewall Wright is another worthy of consideration in answer to your question. One could make a claim that Wright originated the causal path analysis which led to both Pearl's work in formal counterfactual causal inference, but to other formal models of causal inference, such as Levins' loop analysis of complex causal systems in which every variable directly or indirectly causes every variable in the system at some future time. In his paper "Correlation and Causation," Wright noted:
One should not attempt to apply in general a causal interpretation to solutions by the direct methods.
Where "direct methods" are those measuring what I called "naked correlations" above set in the context of Wright's methodological contribution of path analysis (prefiguring the use of both directed acyclic graphs and signed directed graphs in causal inference). In other words: naked correlation is not causation.
If you are looking only for the exact phrase, Google ngram book search records the first appearance of that phrase in its corpus in a May 17th, 1900 review of racist eugenicist Karl Pearson's The Grammar of Science titled "BIOLOGY AS AN “EXACT” SCIENCE" in Nature by one F.A.D., who writes:
As the author [Pearson] himself elsewhere points out, correlation does not imply causation, though the converse is no doubt true enough.
Pearson himself does not use the phrase "correlation does not imply causation" but is grappling with the relation between the two in The Grammar of Science, for example:
All causation as we have defined it is correlation, but the converse is not necessarily true, i.e. where we find correlation we cannot always predict causation.
Coda: I fully agree with @BruceET's direct comment to your question. Without wanting to reify the WEIRD, I suspect that causal reasoning, and perceptions of correlation are pretty inherent to human cognition across societies and times (my cat informs me non-human cognition also).
Selected References
D., F. A. (1900). Biology as an “Exact” Science. Nature, 62(1594), 49–50. [Collected in a May-October fascicle]
Levins, R. (1974). The Qualitative Analysis of Partially Specified Systems. Annals of the New York Academy of Sciences, 231, 123–138.
Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.
Pearl, J. (2018). The Book of Why. Basic Books.
Pearson, K. (1900). The Grammar of Science (Second edition). Adam and Charles Black.
Wright, S. (1921). Correlation and Causation. Journal of Agricultural Research, 20(7), 557–585.
36,491 | If $X$ and $Y$ are uncorrelated random variables, then under what condition is $E[X \mid Y] \approx E[X]?$ | One situation where this is interestingly almost true is when $X$ and $Y$ are both projections from the same high-dimensional distribution, i.e., $X=a^TZ$, $Y=b^TZ$ for high-dimensional $Z$. Hall & Li (and earlier work by various people) showed that for 'most' distributions for $Z$ on a high-dimensional sphere and most $a,b$, $E[X|Y]$ is approximately linear in $Y$, and so if they are uncorrelated they are close to independent.
The result makes sense, because $X$ and $Y$ will be approximately bivariate Gaussian by the CLT, but actually pinning down the error bounds takes work.
This question was motivated by the sliced inverse regression method of Duan and Li, where you regress $X$ on $Y$ to learn about $Y|X$.
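A small simulation sketch of this phenomenon (my own illustration, with coordinate projections standing in for generic $a,b$): projections of a point uniform on a high-dimensional sphere behave like nearly independent standard normals, so binned conditional means of $X$ given $Y$ are roughly constant.

```python
import math
import random

rng = random.Random(3)
d, n = 100, 10_000

def sphere_point(d):
    # Uniform on the sphere of radius sqrt(d): normalize a Gaussian vector.
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in g))
    return [math.sqrt(d) * v / norm for v in g]

xs, ys = [], []
for _ in range(n):
    z = sphere_point(d)
    xs.append(z[0])  # X = a^T Z with a = e_1
    ys.append(z[1])  # Y = b^T Z with b = e_2

mean_x = sum(xs) / n
corr = sum(x * y for x, y in zip(xs, ys)) / n  # both ~mean 0, variance 1

# Conditional means of X over coarse Y-bins should all be near zero.
bin_means = []
for lo, hi in [(-10, -1), (-1, 1), (1, 10)]:
    sel = [x for x, y in zip(xs, ys) if lo <= y < hi]
    bin_means.append(sum(sel) / len(sel))
```

With `d = 100` the sample correlation and every binned conditional mean sit near zero, which is the "approximately linear (in fact flat) in $Y$" behavior the answer refers to.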
36,492 | If $X$ and $Y$ are uncorrelated random variables, then under what condition is $E[X \mid Y] \approx E[X]?$ | No, they cannot be said to be approximately equal in general unless they are exactly equal. To see this, consider:
$$\mathbb{E}[X|Y] - \mathbb{E}[X] = \delta$$
for any $\delta$ that is near the boundary of what you consider to be "approximately equal". Now, multiply $X$ by $10^9$:
$$\mathbb{E}[10^9X|Y] - \mathbb{E}[10^9X] = 10^9\mathbb{E}[X|Y] - 10^9\mathbb{E}[X] = 10^9\delta$$
and now $\delta$ is no longer in the range of "approximately equal" values.
You should be able to see this also prevents the expansion of $E[X|Y]$ from giving you any useful information in this regard; basically, one way or another, you'd probably need to actually compute the conditional and unconditional expectations and compare them to determine whether the difference is ignorable in your application, or perhaps compute bounds on the difference and use those as a decision tool instead.
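The scaling argument can be made concrete with a tiny exact example (my toy construction, not from the answer): take $Y$ uniform on $\{-1,0,1\}$ and $X=Y^2$, so $\mathrm{Cov}(X,Y)=E[Y^3]=0$, yet $E[X\mid Y=0]=0$ differs from $E[X]=2/3$, and rescaling $X$ inflates that gap without changing the (zero) correlation.

```python
# (y, x) pairs, each with probability 1/3: Y uniform on {-1, 0, 1}, X = Y^2.
support = [(-1, 1), (0, 0), (1, 1)]
p = 1.0 / 3.0

def cov_and_gap(scale):
    ex = sum(p * scale * x for _, x in support)       # E[scale * X]
    ey = sum(p * y for y, _ in support)               # E[Y] = 0
    exy = sum(p * scale * x * y for y, x in support)  # E[scale * X * Y]
    cov = exy - ex * ey                               # stays exactly 0
    gap = abs(0.0 - ex)  # |E[scale*X | Y=0] - E[scale*X]|; X = 0 when Y = 0
    return cov, gap

cov1, gap1 = cov_and_gap(1)
cov9, gap9 = cov_and_gap(10**9)
```

`gap1` is $2/3$ while `gap9` is $10^9$ times larger, with the covariance zero in both cases, which is the $\delta \mapsto 10^9\delta$ point above.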
36,493 | If $X$ and $Y$ are uncorrelated random variables, then under what condition is $E[X \mid Y] \approx E[X]?$ | Now, uncorrelated does not imply independence, so $E[X \mid Y] \ne E[X]$.
I find the conclusion 'so $E[X \mid Y] \ne E[X]$' in this sentence a bit confusing.
If variables are uncorrelated then it does not follow $E[X \mid Y] \ne E[X]$.
In addition, dependence does not imply $E[X \mid Y] \ne E[X]$. You can have dependence while $E[X \mid Y] = E[X]$.
If you have zero correlation then you can still have dependence, and also while $E[X \vert Y] = E[X]$
Example: Let $X \sim N(0,1)$ and $Y \sim N(0,\sigma^2 = X^2)$
If you have zero correlation, then this means that you have a slope of zero for a line that fits $E[Y|X]$ as a function of $X$, or $E[X|Y]$ as a function of $Y$.
But $E[Y|X]$ can have all sorts of deviations from the straight line.
Example: let $X \sim N(0,1)$ and $Z \sim N(X^2,1)$
However, can they be said to be approximately equal?
In many situations, you have zero correlation, but still dependence due to heterogeneity as in the first example. There can be dependence but still $E[Y|X] = E[Y]$.
But it is difficult to give general conditions for this. The condition for $E[Y|X] = E[Y]$ is that $E[Y|X] = E[Y]$.
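The first example above can be checked numerically (a sketch, not part of the original answer): with $X \sim N(0,1)$ and $Y \mid X \sim N(0, X^2)$ the covariance is zero, yet the dependence shows up in the conditional spread of $Y$.

```python
import random

# X ~ N(0,1) and, given X, Y ~ N(0, sigma^2 = X^2).
# Cov(X, Y) = E[X * E[Y|X]] = 0, so uncorrelated, but Var(Y | X) = X^2.
rng = random.Random(11)
n = 200_000
pairs = [(x, rng.gauss(0.0, abs(x)))
         for x in (rng.gauss(0.0, 1.0) for _ in range(n))]

mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n

# Dependence: the spread of Y differs sharply between small and large |X|.
small = [y * y for x, y in pairs if abs(x) < 1]
large = [y * y for x, y in pairs if abs(x) >= 1]
spread_small = sum(small) / len(small)   # roughly E[X^2 | |X| < 1]
spread_large = sum(large) / len(large)   # roughly E[X^2 | |X| >= 1]
```

The sample covariance is near zero while `spread_large` is several times `spread_small`: zero correlation, clear dependence, and still $E[Y \mid X] = E[Y]$.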
36,494 | Finding unbiased estimator for Truncated Poisson Distribution | Your answer looks correct to me, but it can be further simplified. You should have:
$$\begin{align}
\mathbb{E}(T)
&= 2 \Bigg[ p_X(1) + p_X(3) + p_X(5) + \cdots \Bigg] \\[6pt]
&= 2 \Bigg[ \frac{e^{-\theta} \theta}{1! (1-e^{-\theta})} + \frac{e^{-\theta} \theta^3}{3! (1-e^{-\theta})} + \frac{e^{-\theta} \theta^5}{5! (1-e^{-\theta})} + \cdots \Bigg] \\[6pt]
&= \frac{e^{-\theta}}{1-e^{-\theta}} \Bigg[ \frac{2 \theta}{1!} + \frac{2 \theta^3}{3!} + \frac{2 \theta^5}{5!} + \cdots \Bigg] \\[8pt]
&= \frac{e^{-\theta} (e^{\theta} - e^{-\theta})}{1-e^{-\theta}} \\[8pt]
&= \frac{1 - e^{-2\theta}}{1-e^{-\theta}} \\[8pt]
&= \frac{(1 + e^{-\theta})(1 - e^{-\theta})}{1-e^{-\theta}} \\[12pt]
&= 1 + e^{-\theta}. \\[6pt]
\end{align}$$
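The algebra can also be verified numerically. As a sanity check (assuming $T = 2\cdot\mathbf{1}\{X \text{ odd}\}$, which is what the first line of the expectation implies), summing the zero-truncated Poisson pmf over odd values reproduces $1 + e^{-\theta}$:

```python
import math

# E[T] = 2 * P(X odd) under the zero-truncated Poisson:
# P(X = k) = e^-theta * theta^k / (k! * (1 - e^-theta)), k >= 1.
def expected_T(theta, kmax=100):
    norm = 1.0 - math.exp(-theta)
    total, term = 0.0, math.exp(-theta)  # term = e^-theta * theta^k / k! at k = 0
    for k in range(1, kmax + 1):
        term *= theta / k               # advance to e^-theta * theta^k / k!
        if k % 2 == 1:                  # odd k only
            total += term / norm
    return 2.0 * total

for theta in (0.5, 1.0, 3.0):
    print(theta, expected_T(theta), 1 + math.exp(-theta))
```

For each $\theta$ the two printed values agree to machine precision, matching the closed form $1+e^{-\theta}$ derived above.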
Your original question got cut off, so it's not clear what unbiasedness property the question is asserting. In any case, you can see that the statistic $T$ is an unbiased estimator for $1+e^{-\theta}$ and biased for other functions of $\theta$.
36,495 | How to tell which variable is more meaningful when modeling the relationship between several predictors and outcome variable? | I believe you're looking for a metric of (relative) variable importance (see also this thread). Many available methods rely on the decomposition of the $R^2$ to assign ranks or relative importance to each predictor in a multiple linear regression model. A certain approach in this family is better known under the term "Dominance analysis" (see Azen et al. 2003). Azen et al. (2003) also discuss other measures of importance such as importance based on regression coefficients, based on correlations of importance based on a combination of coefficients and correlations. A general good overview of techniques based on variance decomposition can be found in the paper of Grömping (2012). These techniques are implemented in the R packages relaimpo, domir and yhat.
Here, I'm going to illustrate a method that is model-agnostic (i.e. it can be applied to a variety of model types) and has intuitive appeal: Variable importance based on permutation. The idea is very simple:
Decide on a performance metric that is important to you. Examples include root mean square error (RMSE), mean absolute error (MAE), $R^2$, etc. This is also somewhat dependent on the model type.
Calculate the metric on your dataset, $M_{orig}$. This serves as the baseline performance metric.
For $i = 1, 2, \ldots, j$:
(a) Permute the values of the predictor $X_i$ in the data set.
(b) Recompute the metric on the permuted data and call it $M_{perm}$.
(c) Record the difference from baseline using $imp(X_i)=M_{perm} - M_{orig}$.
Do this repeatedly, say 1000 times, and take the average of the importance values. Intuitively, the permutations break the relationship between the predictor $X_i$ and the outcome. The larger the change in the performance metric, the higher the predictors' importance. More information can be found in this chapter of an online book by Christoph Molnar.
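The recipe above can be sketched from scratch (a minimal, model-agnostic illustration in plain Python rather than R; the model, data, and metric are all made up for the example):

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=100, seed=0):
    # Importance of column j = average (M_perm - M_orig) over n_repeats
    # permutations of that column; metric is "lower is better".
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        deltas = []
        for _ in range(n_repeats):
            shuffled = col[:]
            rng.shuffle(shuffled)  # break the X_j -> y relationship
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, shuffled)]
            deltas.append(metric(y, [model(row) for row in X_perm]) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

data_rng = random.Random(1)
X = [[data_rng.random(), data_rng.random()] for _ in range(300)]
y = [3 * row[0] for row in X]       # outcome uses feature 0 only
model = lambda row: 3 * row[0]      # a "fitted" model ignoring feature 1
imp = permutation_importance(model, X, y, mae)
```

Permuting the relevant feature degrades the MAE substantially, while permuting the irrelevant one changes nothing, which is exactly how the importance ranking is read off.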
The R package vip implements this procedure (see the documentation (PDF) for more information). The following code applies the idea to your dataset. I chose the $R^2$ and the mean absolute error (MAE) as performance metrics and permute each predictor 1000 times:
library(vip)
# The model
mod <- lm(satisfied ~ clean*location + edu*location + transportation*location, data = my_df)
# Calculate permutation-based importance with r-squared as metric
set.seed(142857) # For reproducibility
p_r2 <- vip::vi(mod, method = "permute", target = "satisfied", metric = "rsquared", pred_wrapper = predict, nsim = 1000)
p_r2
Variable Importance StDev
<chr> <dbl> <dbl>
1 transportation 0.198 0.0492
2 clean 0.177 0.0465
3 edu 0.0462 0.0237
4 location 0.0449 0.0250
# Calculate permutation-based importance with mae as metric
p_mae <- vip::vi(mod, method = "permute", target = "satisfied", metric = "mae", pred_wrapper = predict, nsim = 1000)
p_mae
Variable Importance StDev
<chr> <dbl> <dbl>
1 transportation 0.166 0.0413
2 clean 0.144 0.0400
3 location 0.0396 0.0214
4 edu 0.0368 0.0219
According to the $R^2$, permuting transportation leads to the largest change in $R^2$, followed by clean. Using the mean absolute error shows a similar ordering with transportation and clean being most important while location and edu being least important.
References
Azen R, Budescu DV (2003): The Dominance Analysis Approach for Comparing Predictors in Multiple Regression. Psychological Methods 8:2, 129-148. (link)
Grömping U (2012): Estimators of relative importance in linear regression based on variance decomposition. Am Stat 61:2, 139-147. (link)
36,496 | How to tell which variable is more meaningful when modeling the relationship between several predictors and outcome variable? | The following answer is some sort of an ignorant attempt to use the {rms} package, following @JTH's suggestion. I have to say that this is the first time I'm using this package, and I have a very minimal understanding of what I'm doing. Hence, I ask that anybody who can -- please provide feedback!
I've followed the procedure described in this chapter.
my_df <- structure(list(location = c("sf", "nyc", "nyc", "sf", "nyc",
"nyc", "nyc", "nyc", "nyc", "sf", "nyc", "sf", "sf", "sf", "nyc",
"sf", "sf", "nyc", "nyc", "nyc", "sf", "sf", "sf", "sf", "sf",
"nyc", "sf", "sf", "nyc", "sf", "nyc", "nyc", "nyc", "sf", "nyc",
"nyc", "nyc", "sf", "nyc", "sf", "sf", "nyc", "nyc", "nyc", "nyc",
"nyc", "nyc", "nyc", "nyc", "sf", "nyc", "nyc", "sf", "sf", "nyc",
"nyc", "nyc", "nyc", "sf", "sf", "nyc", "sf", "nyc", "nyc", "sf",
"nyc", "sf", "sf", "nyc", "nyc", "nyc", "nyc", "sf", "nyc", "nyc",
"nyc", "sf", "sf", "nyc", "nyc", "nyc", "nyc", "sf", "sf", "nyc",
"nyc", "nyc", "sf", "sf", "sf", "nyc", "sf", "sf", "sf", "nyc",
"nyc", "nyc", "nyc", "nyc", "nyc"),
satisfied = c(5, 1, 7, 5,
7, 1, 5, 5, 5, 7, 7, 4, 1, 3, 5, 6, 7, 7, 6, 4, 4, 5, 6, 5, 5,
7, 5, 6, 5, 4, 7, 7, 5, 5, 4, 7, 7, 5, 6, 6, 3, 6, 5, 7, 5, 7,
6, 5, 4, 3, 6, 5, 7, 3, 5, 5, 7, 5, 6, 7, 7, 7, 7, 5, 4, 7, 6,
7, 7, 6, 6, 6, 5, 7, 5, 4, 6, 4, 7, 5, 6, 6, 5, 5, 6, 7, 6, 5,
1, 5, 2, 7, 7, 7, 7, 7, 1, 3, 7, 7),
clean = c(4, 1,
7, 3, 4, 1, 6, 6, 7, 4, 5, 1, 1, 1, 4, 6, 6, 1, 6, 1, 4, 2, 2,
7, 3, 5, 2, 4, 1, 1, 4, 6, 3, 5, 1, 4, 5, 2, 5, 5, 4, 5, 4, 7,
3, 6, 5, 4, 5, 4, 5, 4, 5, 1, 5, 2, 6, 5, 7, 6, 3, 7, 5, 6, 4,
6, 6, 5, 5, 5, 4, 1, 4, 4, 5, 1, 3, 1, 2, 2, 6, 4, 3, 6, 7, 7,
5, 2, 4, 1, 3, 1, 5, 3, 5, 5, 1, 1, 5, 6),
edu = c(5,
1, 7, 4, 4, 1, 6, 6, 6, 4, 5, 4, 4, 3, 3, 5, 4, 1, 1, 3, 5, 6,
5, 5, 3, 6, 2, 4, 4, 4, 6, 3, 4, 7, 1, 4, 7, 5, 6, 5, 5, 5, 4,
7, 3, 7, 6, 5, 5, 4, 5, 3, 4, 4, 4, 7, 5, 4, 6, 6, 4, 7, 4, 2,
5, 6, 6, 7, 6, 7, 3, 3, 2, 6, 6, 2, 5, 3, 6, 5, 6, 4, 4, 5, 6,
7, 3, 3, 4, 5, 4, 1, 3, 4, 4, 6, 5, 1, 4, 6),
transportation = c(1,
1, 7, 5, 7, 1, 4, 6, 6, 6, 6, 1, 1, 1, 5, 5, 4, 7, 6, 6, 7, 5,
2, 7, 3, 6, 1, 4, 7, 5, 6, 7, 4, 3, 2, 6, 4, 2, 6, 5, 4, 7, 6,
7, 3, 7, 4, 4, 5, 4, 6, 3, 5, 2, 7, 3, 7, 7, 7, 6, 7, 7, 7, 5,
3, 5, 4, 7, 6, 6, 4, 2, 4, 4, 5, 6, 5, 2, 6, 2, 6, 6, 3, 4, 7,
7, 7, 4, 5, 4, 5, 3, 7, 5, 7, 7, 7, 1, 6, 6)),
row.names = c(NA, -100L),
class = c("tbl_df", "tbl", "data.frame"))
library(rms, warn.conflicts = FALSE)
#> Loading required package: Hmisc
#> Loading required package: lattice
#> Loading required package: survival
#> Loading required package: Formula
#> Loading required package: ggplot2
#>
#> Attaching package: 'Hmisc'
#> The following objects are masked from 'package:base':
#>
#> format.pval, units
#> Loading required package: SparseM
#>
#> Attaching package: 'SparseM'
#> The following object is masked from 'package:base':
#>
#> backsolve
model_fit <- rms::ols(satisfied ~ clean*location + edu*location + transportation*location, data = my_df)
my_datadist <- rms::datadist(my_df) ## apparently, we need these two lines
options(datadist = "my_datadist") ## otherwise we get an error with `summary(model_fit)`
## I learned it from here: https://stackoverflow.com/a/41378930/6105259
plot(summary(model_fit))
Created on 2021-08-16 by the reprex package (v2.0.0)
As far as I understand this output, we see the effect each predictor carries on the outcome variable, with 95% CI. Thus, for example, we can conclude that clean has a greater effect over satisfied than edu has.
Does this make sense?
Originally, I was interested in the interaction between each edu/clean/transportation and location, as I want to learn how the relationships between predictors and outcome change between cities. But here, as far as I can see from this output, the interaction isn't reflected in terms of effect on satisfied.
UPDATE
Following @JTH's comment, I'm adding another plot:
plot(anova(model_fit))
The info underlying this plot is here:
library(dplyr, warn.conflicts = FALSE)
library(tibble)
anova(model_fit) %>%
as_tibble(rownames = "factor") %>%
mutate(across(3:6, round, 4))
#> # A tibble: 14 x 6
#> factor d.f. `Partial SS` MS F P
#> <chr> <anov.r> <anov.rms> <anov.> <anov.> <anov.>
#> 1 "clean (Factor+Higher Order F~ 2 11.8209 5.9105 3.4587 0.0356
#> 2 " All Interactions" 1 0.0000 0.0000 0.0000 0.9969
#> 3 "location (Factor+Higher Orde~ 4 4.8955 1.2239 0.7162 0.5830
#> 4 " All Interactions" 3 4.8821 1.6274 0.9523 0.4188
#> 5 "edu (Factor+Higher Order Fac~ 2 4.0844 2.0422 1.1951 0.3073
#> 6 " All Interactions" 1 3.1038 3.1038 1.8163 0.1811
#> 7 "transportation (Factor+Highe~ 2 15.3207 7.6604 4.4828 0.0139
#> 8 " All Interactions" 1 0.0476 0.0476 0.0279 0.8678
#> 9 "clean * location (Factor+Hig~ 1 0.0000 0.0000 0.0000 0.9969
#> 10 "location * edu (Factor+Highe~ 1 3.1038 3.1038 1.8163 0.1811
#> 11 "location * transportation (F~ 1 0.0476 0.0476 0.0279 0.8678
#> 12 "TOTAL INTERACTION" 3 4.8821 1.6274 0.9523 0.4188
#> 13 "TOTAL" 7 90.9761 12.9966 7.6055 0.0000
#> 14 "ERROR" 92 157.2139 1.7088 NA NA
UPDATE 2
Addressing @EdM's comment, here is a print of anova(model_fit) without converting it to a tibble.
anova(model_fit) %>%
round(., 4)
#> Analysis of Variance Response: satisfied
#>
#> Factor d.f. Partial SS
#> clean (Factor+Higher Order Factors) 2 11.8209
#> All Interactions 1 0.0000
#> location (Factor+Higher Order Factors) 4 4.8955
#> All Interactions 3 4.8821
#> edu (Factor+Higher Order Factors) 2 4.0844
#> All Interactions 1 3.1038
#> transportation (Factor+Higher Order Factors) 2 15.3207
#> All Interactions 1 0.0476
#> clean * location (Factor+Higher Order Factors) 1 0.0000
#> location * edu (Factor+Higher Order Factors) 1 3.1038
#> location * transportation (Factor+Higher Order Factors) 1 0.0476
#> TOTAL INTERACTION 3 4.8821
#> REGRESSION 7 90.9761
#> ERROR 92 157.2139
#> MS F P
#> 5.9105 3.46 0.0356
#> 0.0000 0.00 0.9969
#> 1.2239 0.72 0.5830
#> 1.6274 0.95 0.4188
#> 2.0422 1.20 0.3073
#> 3.1038 1.82 0.1811
#> 7.6604 4.48 0.0139
#> 0.0476 0.03 0.8678
#> 0.0000 0.00 0.9969
#> 3.1038 1.82 0.1811
#> 0.0476 0.03 0.8678
#> 1.6274 0.95 0.4188
#> 12.9966 7.61 <.0001
#> 1.7088
36,497 | How to tell which variable is more meaningful when modeling the relationship between several predictors and outcome variable? | As all your variables are numeric and limited to the same range of values, directly comparing the absolute values of the coefficients (as you did in your answer) is one possible way to estimate effect size.
It might be, however, that for one variable the answers only span the range 1-3, while for another variable they span the range 1-5. If both variables are equally important, the second will have a smaller coefficient due to its wider range. To overcome this problem, you could compute standardized coefficients, aka "beta coefficients". There are convenience functions for their computation in add-on packages, but with base R you can achieve the same by standardizing your data before doing the model fit:
my_df.scaled <- my_df  # 'location' is a character column, so scale only the numeric ones
my_df.scaled[sapply(my_df, is.numeric)] <- scale(my_df[sapply(my_df, is.numeric)])
In addition, you should also check whether all coefficients are significantly different from zero.
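A minimal sketch of the comparison, using the my_df data from the question (here scale() is applied inline in the formula instead of pre-scaling the data frame):

```r
raw_fit  <- lm(satisfied ~ clean + edu + transportation, data = my_df)
beta_fit <- lm(scale(satisfied) ~ scale(clean) + scale(edu) + scale(transportation),
               data = my_df)

# Standardized ("beta") coefficients are directly comparable across predictors
cbind(raw = coef(raw_fit), standardized = coef(beta_fit))
```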
36,498 | How to tell which variable is more meaningful when modeling the relationship between several predictors and outcome variable? | I'd look at the residuals to determine the effectiveness of a model. Plots of residuals versus other quantities are used to find failures of assumptions. The most common plot, especially useful in simple regression, is the plot of residuals versus the fitted values. A null plot would indicate no failure of assumptions. Curvature might indicate that the fitted mean function is incorrect. Residuals that seem to increase or decrease in average magnitude with the fitted values might indicate nonconstant residual variance. A few relatively large residuals may be indicative of outliers, cases for which the model is somehow inappropriate.
Assumptions of a linear model are:
Linear relationship: There exists a linear relationship between the independent variable, x, and the dependent variable, y.
Independence: The residuals are independent. In particular, there is no correlation between consecutive residuals.
Homoscedasticity: The residuals have constant variance at every level of x.
Normality: The residuals of the model are normally distributed.
If one or more of these assumptions are violated, then the results of our linear regression may be unreliable or even misleading.
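As a quick first check of these assumptions, base R's built-in diagnostic plots can be used (a sketch, assuming a model like the one fitted below):

```r
fit <- lm(satisfied ~ clean + edu + transportation + location, data = my_df)
par(mfrow = c(2, 2))
plot(fit)  # residuals vs fitted, normal Q-Q, scale-location, residuals vs leverage
```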
Using the given data, I build a simple linear regression model given below.
# required libraries
library(caret)
library(ggplot2)
library(magrittr)
# split the dataset into train and test sets
set.seed(2021)
index <- createDataPartition(my_df$satisfied, p = 0.7, list = FALSE)
df_train <- my_df[index, ]
df_test <- my_df[-index, ]
# Model building (fit on the training split)
lm_model <- lm(satisfied ~ ., data = df_train)
# Make predictions and compute the R2, RMSE and MAE
predictions <- lm_model %>% predict(df_test)
data.frame( R2 = R2(predictions, df_test$satisfied),
RMSE = RMSE(predictions, df_test$satisfied),
MAE = MAE(predictions, df_test$satisfied))
R2 RMSE MAE
1 0.4513587 0.9956456 0.8224509
# Residuals = Observed - Predicted
# compute residuals
residualVals <- df_test$satisfied - predictions
df.1 <- data.frame(df_test$satisfied, predictions,
residualVals)
colnames(df.1)<- c("observed","predicted","residuals")
head(df.1)
observed predicted residuals
1 5 4.443428 0.55657221
2 7 6.930498 0.06950225
3 4 3.626674 0.37332568
4 3 3.538958 -0.53895795
5 5 5.083349 -0.08334872
6 7 6.097199 0.90280068
ggplot(data = df.1, aes(x=predicted, y=residuals))+
geom_point()+
xlab("Predicted values for satisfied")+
ylab("Residuals")+
ggtitle("Residual plot")+
theme_bw()
Discussion
To understand the strengths and weaknesses of a model, relying on a single metric is problematic. Visualizations of model fit, particularly residual plots in the context of a linear regression model, are critical to understanding whether the model is fit for purpose.
When the outcome is a number, the most common method for characterizing a model's predictive capabilities is the root mean squared error (RMSE). This metric is a function of the model residuals, which are the observed values minus the model predictions. The mean squared error (MSE) is calculated by squaring the residuals and averaging them. The RMSE is then calculated by taking the square root of the MSE so that it is in the same units as the original data. The value is usually interpreted as either how far (on average) the residuals are from zero or as the average distance between the observed values and the model predictions. Another common metric is the coefficient of determination, commonly written as R2. This value can be interpreted as the proportion of the information in the data that is explained by the model. Thus, an R2 value of 0.45 in the above model implies that the model can explain less than half of the variation in the outcome/dependent/response variable satisfied. Simply put, the above model is not good. It should be noted that R2 is a measure of correlation and not accuracy.
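These definitions can be checked by hand against the df.1 data frame built above (note that caret's R2() uses the squared correlation by default, so it can differ slightly from the 1 - SSres/SStot version sketched here):

```r
res  <- df.1$observed - df.1$predicted
mse  <- mean(res^2)       # average of the squared residuals
rmse <- sqrt(mse)         # back on the scale of the outcome
r2   <- 1 - sum(res^2) / sum((df.1$observed - mean(df.1$observed))^2)
c(MSE = mse, RMSE = rmse, R2 = r2)
```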
36,499 | Which link function could be used for a glm where the response is per cent (0 - 100%)? | Nice question!
You can start your modelling efforts by converting your cover variable so that it is expressed as a proportion rather than a percentage (i.e., simply divide the values of your original cover variable by 100).
Once you get the converted variable, look to see how many 0 and 1 values you have, if any, in addition to values falling in the (0,1) interval.
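For instance (a sketch with hypothetical names dat and cover):

```r
dat$cover_prop <- dat$cover / 100               # percentage -> proportion
sum(dat$cover_prop == 0)                        # exact zeros
sum(dat$cover_prop == 1)                        # exact ones
sum(dat$cover_prop > 0 & dat$cover_prop < 1)    # values strictly inside (0, 1)
```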
If you don't have any 0 or 1 values, you can use beta regression modelling. A beta regression model is not a glm model but it has similarities to a glm model.
The beta regression model has a logit link function; this means that you are modelling the logit-transformed mean cover as a function of various predictor variables. (Recall that cover is now expressed as a proportion.)
If you have 0 values but no 1 values, you will need to use a zero-inflated beta regression model for your modelling (the beta distribution itself excludes the exact boundary values 0 and 1).
If you have 1 values but no 0 values, you will need to use a one-inflated beta regression model.
If you have both 0 values and 1 values, you will need to use a zero-and-one-inflated beta regression model.
Do you use R? If yes, the gamlss package is your best bet for implementing these kinds of models - see, for instance, page 107 of this document:
https://www.gamlss.com/wp-content/uploads/2013/01/book-2010-Athens1.pdf.
The inflated versions of the beta regression models have multiple links because they simultaneously model multiple parameters. Some of these links can be logit links, some can be log links, etc. In gamlss, the family distribution for these models can be one of:
BE for beta regression;
BEINF0 for zero-inflated beta regression;
BEINF1 for one-inflated beta regression;
BEINF for zero-and-one inflated beta regression.
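A hedged sketch of how those family choices translate into gamlss() calls (plot_data, cover, and elevation are hypothetical names; cover must already be a proportion):

```r
library(gamlss)

# no exact 0s or 1s:
fit_be  <- gamlss(cover ~ elevation, family = BE,     data = plot_data)

# exact 0s but no 1s:
fit_bi0 <- gamlss(cover ~ elevation, family = BEINF0, data = plot_data)

# exact 1s but no 0s:
fit_bi1 <- gamlss(cover ~ elevation, family = BEINF1, data = plot_data)

# both 0s and 1s:
fit_bi  <- gamlss(cover ~ elevation, family = BEINF,  data = plot_data)
```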
Addendum
Both betareg() and gamlss() afford the ability to fit a beta regression model to your data and then use the model for prediction purposes. The R code below provides an example for how you can do this. Note that there are some differences in output between the model summaries produced by the two functions from the same dataset.
#===========================================================
# Load Gasoline data into R
#===========================================================
library(betareg)
data("GasolineYield", package = "betareg")
#===========================================================
# Fit beta regression model to Gasoline data using betareg()
#===========================================================
model_betareg <- betareg(yield ~ temp,
link = "logit",
link.phi = "log",
data = GasolineYield)
summary(model_betareg)
#===========================================================
# Fit beta regression model to Gasoline data using gamlss()
#===========================================================
library(gamlss)
model_gamlss <- gamlss(yield ~ temp,
family = BE(mu.link = "logit",
sigma.link = "log"),
data = GasolineYield)
summary(model_gamlss)
#===========================================================
# Predict from model_betareg for a new temp value
#===========================================================
predict_betareg <- predict(model_betareg,
newdata = data.frame(temp = 349),
type="response")
predict_betareg
#===========================================================
# Predict from model_gamlss for a new temp value
#===========================================================
predict_gamlss <- predict(model_gamlss,
newdata = data.frame(temp = 349),
type="response")
predict_gamlss
The yield value predicted by model_betareg is 0.2046289 and the yield value produced by model_gamlss is 0.204634. These are very close, as expected.
Note that the prediction is performed on the scale of the response variable yield, so that the predicted values are expressed as proportions in the interval (0,1).
The newdata must be a dataframe which lists the values of all of the predictor variables included in the right-hand side of the beta regression model. The type of these values must match the type of the predictor variables as seen by R via the str() command:
str(Gasoline)
Here's the output for this str() command:
str(Gasoline)
Classes ‘nfnGroupedData’, ‘nfGroupedData’, ‘groupedData’ and 'data.frame':
32 obs. of 6 variables:
$ yield : num 6.9 14.4 7.4 8.5 8 2.8 5 12.2 10 15.2 ...
$ endpoint: num 235 307 212 365 218 235 285 205 267 300 ...
$ Sample : Ord.factor w/ 10 levels "1"<"2"<"7"<"9"<..: 8 9 5 1 3 4 7 10 6 8
...
$ API : num 38.4 40.3 40 31.8 40.8 41.3 38.1 50.8 32.2 38.4 ...
$ vapor : num 6.1 4.8 6.1 0.2 3.5 1.8 1.2 8.6 5.2 6.1 ...
$ ASTM : num 220 231 217 316 210 267 274 190 236 220 ...
If temp were listed as a factor in your dataset (e.g., "High", "Medium", "Low"), you would specify it as such in your newdata:
predict_gamlss <- predict(model_gamlss,
newdata = data.frame(temp = c("High","Low")),
type="response")
predict_gamlss
You can start your modelling efforts by converting your cover variable so that it is expressed as a proportion rather than a percentage (i.e., simply divide the values of your original | Which link function could be used for a glm where the response is per cent (0 - 100%)?
Nice question!
You can start your modelling efforts by converting your cover variable so that it is expressed as a proportion rather than a percentage (i.e., simply divide the values of your original cover variable by 100).
Once you get the converted variable, look to see how many 0 and 1 values you have, if any, in addition to values falling in the (0,1) interval.
If you don't have any 0 or 1 values, you can use beta regression modelling. A beta regression model is not a glm model but it has similarities to a glm model.
The beta regression model has a logit link function; this means that you are modelling the logit-transformed mean cover as a function of various predictor variables. (Recall that cover is now expressed as a proportion.)
If you have a substantial number of 0 values but no 1 values, you will need to use a zero-inflated beta regression model for your modelling.
If you have a substantial number of 1 values but no 0 values, you will need to use a one-inflated beta regression model.
If you have a substantial number of 0 values and a substantial number of 1 values, you will need to use a zero-and-one-inflated beta regression model.
Do you use R? If yes, the gamlss package is your best bet for implementing these kinds of models - see, for instance, page 107 of this document:
https://www.gamlss.com/wp-content/uploads/2013/01/book-2010-Athens1.pdf.
The inflated versions of the beta regression models have multiple links because they simultaneously model multiple parameters. Some of these links can be logit links, some can be log links, etc. In gamlss, the family distribution for these models can be one of:
BE for beta regression;
BEINF0 for zero-inflated beta regression;
BEINF1 for one-inflated beta regression;
BEINF for zero-and-one inflated beta regression.
Addendum
Both betareg() and gamlss() afford the ability to fit a beta regression model to your data and then use the model for prediction purposes. The R code below provides an example for how you can do this. Note that there are some differences in output between the model summaries produced by the two functions from the same dataset.
#===========================================================
# Load Gasoline data into R
#===========================================================
library(betareg)
data("GasolineYield", package = "betareg")
#===========================================================
# Fit beta regression model to Gasoline data using betareg()
#===========================================================
model_betareg <- betareg(yield ~ temp,
link = "logit",
link.phi = "log",
data = GasolineYield)
summary(model_betareg)
#===========================================================
# Fit beta regression model to Gasoline data using gamlss()
#===========================================================
library(gamlss)
model_gamlss <- gamlss(yield ~ temp,
family = BE(mu.link = "logit",
sigma.link = "log"),
data = GasolineYield)
summary(model_gamlss)
#===========================================================
# Predict from model_betareg for a new temp value
#===========================================================
predict_betareg <- predict(model_betareg,
newdata = data.frame(temp = 349),
type="response")
predict_betareg
#===========================================================
# Predict from model_gamlss for a new temp value
#===========================================================
predict_gamlss <- predict(model_gamlss,
newdata = data.frame(temp = 349),
type="response")
predict_gamlss
The yield value predicted by model_betareg is 0.2046289 and the yield value produced by model_gamlss is 0.204634. These are very close, as expected.
Note that the prediction is performed on the scale of the response variable yield, so that the predicted values are expressed as proportions in the interval (0,1).
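Because both fits use a logit link for the mean, the response-scale prediction is just the inverse logit of the linear predictor. A base-R sketch, using made-up coefficients purely for illustration (the real values come from coef(model_betareg)):

```r
# Hypothetical coefficients on the logit scale -- illustration only,
# NOT the estimates from the fitted Gasoline model
b0 <- -4.5       # intercept (made up)
b1 <-  0.009     # temp slope (made up)
eta <- b0 + b1 * 349   # linear predictor at temp = 349
mu  <- plogis(eta)     # inverse logit: exp(eta) / (1 + exp(eta))
mu                     # a proportion strictly inside (0, 1)
```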
The newdata must be a dataframe which lists the values of all of the predictor variables included in the right-hand side of the beta regression model. The type of these values must match the type of the predictor variables as seen by R via the str() command:
str(Gasoline)
Here's the output for this str() command (note: this is the Gasoline dataframe from the nlme package, which holds the same underlying Prater gasoline data; for the model above you would inspect str(GasolineYield)):
Classes ‘nfnGroupedData’, ‘nfGroupedData’, ‘groupedData’ and 'data.frame':
32 obs. of 6 variables:
$ yield : num 6.9 14.4 7.4 8.5 8 2.8 5 12.2 10 15.2 ...
$ endpoint: num 235 307 212 365 218 235 285 205 267 300 ...
$ Sample : Ord.factor w/ 10 levels "1"<"2"<"7"<"9"<..: 8 9 5 1 3 4 7 10 6 8 ...
$ API : num 38.4 40.3 40 31.8 40.8 41.3 38.1 50.8 32.2 38.4 ...
$ vapor : num 6.1 4.8 6.1 0.2 3.5 1.8 1.2 8.6 5.2 6.1 ...
$ ASTM : num 220 231 217 316 210 267 274 190 236 220 ...
If temp were listed as a factor in your dataset (e.g., "High", "Medium", "Low"), you would specify it as such in your newdata:
predict_gamlss <- predict(model_gamlss,
newdata = data.frame(temp = c("High","Low")),
type="response")
predict_gamlss
Which link function could be used for a glm where the response is per cent (0 - 100%)?
Nice question!
You can start your modelling efforts by converting your cover variable so that it is expressed as a proportion rather than a percentage (i.e., simply divide the values of your original variable by 100).
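The conversion itself is trivial in base R (the cover values here are hypothetical):

```r
cover_pct  <- c(12, 55, 98)       # per cent cover, hypothetical values
cover_prop <- cover_pct / 100     # proportions in [0, 1], suitable for a beta-type model
cover_prop
```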
36,500 | How to systematically choose which interactions to include in a multiple regression model?
I think a lot depends on what the purpose of the model is: inference or prediction?
If it is inference then you really need to incorporate some domain knowledge into the process, otherwise you risk identifying completely spurious associations, where an interaction may appear to be meaningful but in reality is either an artifact of the sample, or is masking some other issue such as non-linearity in one or more of the variables.
However, if the purpose is prediction then there are various approaches you can take. One approach would be to fit all possible models and use a train / validate / test approach to find the model that gives the best predictions.
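A minimal base-R sketch of that idea — fit a handful of candidate formulas on a training split and score each on a validation split (the simulated data, the candidate set, and the split sizes are all illustrative):

```r
set.seed(1)
n  <- 300
d  <- data.frame(x1 = runif(n), x2 = runif(n))
d$y <- 1 + d$x1 + 2 * d$x1 * d$x2 + rnorm(n, sd = 0.2)

idx   <- sample(n, 200)
train <- d[idx, ]
valid <- d[-idx, ]

# Candidate models, from main effects only up to the full interaction
forms <- list(y ~ x1, y ~ x2, y ~ x1 + x2, y ~ x1 * x2)
val_rmse <- sapply(forms, function(f) {
  fit <- lm(f, data = train)
  sqrt(mean((valid$y - predict(fit, newdata = valid))^2))
})
forms[[which.min(val_rmse)]]   # candidate with the lowest validation RMSE
```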
Edit : A simple simulation can show what can go wrong with inference without domain knowledge:
set.seed(50)
N <- 50
X1 <- runif(N, 1, 15)
X2 <- rnorm(N)
Y <- X1 + X2^2 + rnorm(N)
So, here we posit an actual data generation process of $Y = X_1 + {X_2}^2 + \varepsilon$
If we had some domain / expert knowledge that suggested some nonlinearities could be involved, we might fit the model:
> lm(Y ~ X1 + I(X1^2) + X2 + I(X2^2) ) %>% summary()
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.89041 0.65047 -1.369 0.178
X1 1.21915 0.19631 6.210 1.52e-07 ***
I(X1^2) -0.01462 0.01304 -1.122 0.268
X2 -0.19150 0.15530 -1.233 0.224
I(X2^2) 1.07849 0.08945 12.058 1.08e-15 ***
which provides inferences consistent with the "true" data generating process.
On the other hand, if we had no such knowledge and instead fitted a model with just first-order terms and the interaction, we would obtain:
> lm(Y ~ X1*X2) %>% summary()
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.01396 0.58267 -0.024 0.981
X1 1.09098 0.07064 15.443 < 2e-16 ***
X2 -3.39998 0.54363 -6.254 1.20e-07 ***
X1:X2 0.35850 0.06726 5.330 2.88e-06 ***
which is clearly spurious.
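One way to see that the "significant" interaction is standing in for the unmodelled curvature is to check the residuals of the interaction model against ${X_2}^2$ (the simulation is repeated so that the snippet is self-contained):

```r
set.seed(50)
N  <- 50
X1 <- runif(N, 1, 15)
X2 <- rnorm(N)
Y  <- X1 + X2^2 + rnorm(N)

fit <- lm(Y ~ X1 * X2)
# Because X2^2 is missing from the model, it shows up in the residuals:
cor(resid(fit), X2^2)   # clearly positive when curvature remains
plot(X2, resid(fit))    # a U-shape points to a missing quadratic term
```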
Further edit : However, when we look at an (in-sample) predictive comparison, the interaction model appears to perform slightly better. (Caveat: the snippet below sums squared predictions rather than squared residuals, so it is not a true root mean squared error; a proper accuracy measure would use resid() in place of predict(), ideally on held-out data.)
> lm(Y ~ X1*X2) %>% predict() %>% `^`(2) %>% sum() %>% sqrt()
[1] 64.23458
> lm(Y ~ X1 + I(X1^2) + X2 + I(X2^2) ) %>% predict() %>% `^`(2) %>% sum() %>% sqrt()
[1] 64.87996
which underlines my central point that a lot depends on the purpose of the model.
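To redo that comparison properly — residual-based RMSE on held-out data rather than a sum of squared fitted values — here is a self-contained base-R sketch (the larger simulated sample and the 150/50 split are my own choices, not from the original answer):

```r
set.seed(50)
N  <- 200
X1 <- runif(N, 1, 15)
X2 <- rnorm(N)
Y  <- X1 + X2^2 + rnorm(N)
d  <- data.frame(Y, X1, X2)
train <- d[1:150, ]
test  <- d[151:200, ]

rmse <- function(fit, newdata) {
  # root mean squared *residual* on new data, not a sum of fitted values
  sqrt(mean((newdata$Y - predict(fit, newdata = newdata))^2))
}

m_int  <- lm(Y ~ X1 * X2,                      data = train)
m_quad <- lm(Y ~ X1 + I(X1^2) + X2 + I(X2^2), data = train)
c(interaction = rmse(m_int, test), quadratic = rmse(m_quad, test))
```

On data generated from the quadratic process, the quadratic model should now come out ahead out-of-sample, in line with the inference results above.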